# On the robustness of spin polarization for magnetic vortex accelerated
proton bunches in density down-ramps
L Reichwein1, A Hützen2,3, M Büscher2,3, A Pukhov1 1Institut für Theoretische
Physik I, Heinrich-Heine-Universität Düsseldorf, 40225 Düsseldorf, Germany
2Peter Grünberg Institut (PGI-6), Forschungszentrum Jülich, 52425 Jülich,
Germany 3Institut für Laser- und Plasmaphysik, Heinrich-Heine-Universität
Düsseldorf, 40225 Düsseldorf, Germany<EMAIL_ADDRESS>
###### Abstract
We investigate the effect of density down-ramps on the acceleration of ions
via Magnetic Vortex Acceleration (MVA) in a near-critical density gas target
by means of particle-in-cell simulations. The spin polarization of the
accelerated protons remains robust at around 80% for a variety of ramp
lengths. A significant increase of the ramp length is, however, accompanied by
the collimation of low-polarization protons into the final beam and a large
transverse spread of the highly polarized protons with respect to the
direction of laser propagation.
Keywords: magnetic vortex acceleration, spin polarization, ion acceleration
Accepted for publication in Plasma Phys. Control. Fusion
## 1 Introduction
The acceleration of spin-polarized particles is interesting for a variety of
applications, from testing the Standard Model of particle physics [1] to
examining the structure of subatomic particles for further insight on QCD [2].
As laser-plasma based acceleration mechanisms have grown to be more prominent
due to the high achievable energies over a shorter distance than in
conventional accelerators [3, 4], it is the logical next step to study the
acceleration of spin-polarized particles in these regimes. The current state-
of-the-art is given in the paper by Büscher et al. [5].
In the case of electrons, Wu et al. [6, 7] have shown that high degrees of
polarization can be achieved via both laser-driven and particle beam-driven
wakefield acceleration, provided that an appropriately chosen laser pulse or
driving beam, respectively, is used. The real crux for generating
high-polarization electrons lies in the injection: due to strong azimuthal
magnetic fields during injection, the spins of the electrons precess strongly,
leading to a significant loss of polarization, while during the acceleration
phase, changes in polarization are mostly negligible.
For the acceleration of protons in general, various methods like Target Normal
Sheath Acceleration (TNSA) [8], Radiation Pressure Acceleration (RPA) [9] or
Magnetic Vortex Acceleration (MVA) [10, 11, 12] are feasible options.
Wakefield acceleration of protons is also possible, although significantly
higher laser intensities are necessary [13, 14]. If we need spin-polarized
beams, however, we have to restrict ourselves to setups in which we can
pre-polarize our targets, ruling out some of the options due to the material
properties that are required. Pre-polarizing the particles to be accelerated
is necessary, since at the time scales and field strengths relevant for
acceleration, significant polarization build-up during the process is not
possible [15].
Jin et al. [16] recently considered the acceleration of spin-polarized protons
using a near-critical density target. The process, identified as MVA, works as
follows: when the laser pulse enters the target, the ponderomotive force
pushes the electrons in the direction transverse to laser propagation, leaving
behind a channel of low electron density [17, 10]. Electrons can be
accelerated in the wake induced by the laser and form a central filament in
the channel. A strong azimuthal magnetic field is created by a current flowing
in the central filament along the axis and an opposing current along the
channel wall. This current also accelerates some ions in the filament
structure along the channel center. When leaving the interaction volume, the
magnetic fields can expand in the transverse region because of the decrease in
density. Strong longitudinal and transverse electric fields are induced by the
displacement of the electrons with respect to the ions. Finally, an ion beam
is obtained that is further accelerated by the prominent fields after leaving
the plasma. Jin et al. showed that while higher intensities lead to higher
energies ($\mathcal{E}_{p}>100$ MeV for a laser with normalized laser vector
potential $a_{0}=eA_{0}/(m_{e}c)=100$), this comes at the price of lower
polarization. Here, $m_{e}$ denotes the electron mass and $c$ the vacuum speed
of light.
In this paper, we investigate the effect of density down-ramps at the end of
the interaction volume on the obtained proton bunches, specifically the
degree of polarization. We consider a gaseous HCl target similar to Ref. [16]
in our PIC simulations, keeping all parameters except the length of the down-
ramp fixed throughout. It is shown that the degree of polarization is robust
against down-ramp length and that obtaining high-quality bunches is mainly
limited by the change in spatial beam structure due to the prevalent
electromagnetic fields. Only for longer ramps is the spatial structure
modulated so strongly that lower-polarization protons are collimated into the
beam. The results presented are discussed in the scope of the scaling laws of
Ref. [15]. We find that the accelerated proton bunch can be described as
consisting of three components, namely its back, middle and front. These parts
contain protons from different states of the acceleration process, leading to
distinct average polarizations. The extent of each of those parts is
determined by the slope of the down-ramp that influences the focusing of the
protons into the bunch and in turn the final beam quality.
Figure 1: Distribution of particle spin and field configuration for the case
of $L_{\mathrm{ramp}}=0\lambda_{L}$ at $t=320\tau_{0}$. All protons in the
plasma have initial polarization $s_{y}=1$. The electromagnetic fields are
normalized with $E_{0}=B_{0}=mc\omega_{0}/e$. It can be seen that the
accelerated proton bunch leaving the plasma maintains a high degree of
polarization, while protons surrounding the remaining filament of the coaxial
channel gain transverse polarization.
## 2 Simulation setup
For our simulations we use the PIC code VLPL [18] that includes the precession
of particle spin s according to the T-BMT equation
$\frac{\mathrm{d}\textbf{s}_{i}}{\mathrm{d}t}=-\boldsymbol{\Omega}\times\textbf{s}_{i}\;,$
(1)
where
$\boldsymbol{\Omega}=\frac{q}{mc}\left[\Omega_{\textbf{B}}\textbf{B}-\Omega_{\textbf{v}}\left(\frac{\textbf{v}}{c}\cdot\textbf{B}\right)\frac{\textbf{v}}{c}-\Omega_{\textbf{E}}\frac{\textbf{v}}{c}\times\textbf{E}\right]$
(2)
is the precession frequency for a particle with charge $q$, mass $m$ and
velocity v. The prefactors are given as
$\Omega_{\textbf{B}}=a+\frac{1}{\gamma}\;,\Omega_{\textbf{v}}=\frac{a\gamma}{\gamma+1}\;,\Omega_{\textbf{E}}=a+\frac{1}{1+\gamma}\;,$
(3)
with $a$ and $\gamma$ being the particle’s anomalous magnetic moment and its
Lorentz factor, respectively. This equation describes the change in spin for a
particle that traverses an arbitrary configuration of electric fields E and
magnetic fields B.
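As an illustration of how equations (1)-(3) can be integrated numerically, the following is a minimal sketch (not VLPL's actual implementation; the function name is ours) that advances a spin by one time step via an exact rotation about $\boldsymbol{\Omega}$, which preserves $|\textbf{s}|$ by construction. All quantities are in normalized units:

```python
import numpy as np

def tbmt_precession_step(s, v, E, B, q, m, a, dt, c=1.0):
    """Advance spin s by one step of the T-BMT equation ds/dt = -Omega x s,
    implemented as a Rodrigues rotation about Omega so |s| is conserved."""
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)
    # prefactors of equation (3)
    O_B = a + 1.0 / gamma
    O_v = a * gamma / (gamma + 1.0)
    O_E = a + 1.0 / (1.0 + gamma)
    beta = v / c
    # precession frequency of equation (2)
    Omega = (q / (m * c)) * (O_B * B
                             - O_v * np.dot(beta, B) * beta
                             - O_E * np.cross(beta, E))
    w = np.linalg.norm(Omega)
    if w == 0.0:
        return s
    n = Omega / w
    theta = -w * dt  # minus sign from ds/dt = -Omega x s
    # Rodrigues rotation of s about axis n by angle theta
    return (s * np.cos(theta)
            + np.cross(n, s) * np.sin(theta)
            + n * np.dot(n, s) * (1.0 - np.cos(theta)))
```

For a particle at rest in a pure magnetic field along $z$, the spin simply rotates in the $x$-$y$ plane with frequency $\Omega_{\textbf{B}}B$, which is a convenient sanity check for any implementation.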
In general, further spin-related effects would have to be considered, such as
the Stern-Gerlach force, which describes the effect of spin on a particle's
trajectory, and the Sokolov-Ternov effect, which links radiative effects with
spin. It has, however, been shown by Thomas et al. [15] that these two effects
can be neglected for the parameter regimes considered in the following.
For our setup, we choose a circularly polarized laser with $a_{0}=25$ and
wavelength $\lambda_{L}=800$ nm. The length of the pulse is
$\tau_{0}=10\lambda_{L}/c$ and it has a focal radius of $w_{0}=10\lambda_{L}$
(at $1/e^{2}$ of the intensity).
The target consists of HCl gas with a peak density of
$n_{\mathrm{H}}=n_{\mathrm{Cl}}=0.0122n_{\mathrm{cr}}$, leading to a near-
critical electron background. Here, $n_{\mathrm{cr}}$ denotes the critical
density. This specific gas is chosen because it allows for an easily
achievable pre-polarization of the protons (see Ref. [7] for a detailed
description of the process). In our case, for all protons, we initially choose
$s_{y}=1$.
The interaction volume starts with an up-ramp rising from vacuum to peak
density over a distance of $5\lambda_{L}$, then maintaining peak density for
$200\lambda_{L}$. The down-ramp length at the end is varied in the range of
$0\lambda_{L}$ up to $100\lambda_{L}$ (see Table 1).
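The target profile can be sketched as follows, assuming linear ramps (the ramp shape is not stated above, so this is an illustrative choice); positions are in units of $\lambda_{L}$ and the function name is ours:

```python
import numpy as np

def density_profile(x, n_peak, l_up=5.0, l_plateau=200.0, l_ramp=50.0):
    """Longitudinal density profile: linear up-ramp over l_up, plateau over
    l_plateau, linear down-ramp over l_ramp (l_ramp = 0 is a hard cut-off)."""
    x = np.asarray(x, dtype=float)
    n = np.zeros_like(x)
    up = (x >= 0) & (x < l_up)
    n[up] = n_peak * x[up] / l_up
    plateau = (x >= l_up) & (x <= l_up + l_plateau)
    n[plateau] = n_peak
    if l_ramp > 0:
        x0 = l_up + l_plateau
        down = (x > x0) & (x < x0 + l_ramp)
        n[down] = n_peak * (1.0 - (x[down] - x0) / l_ramp)
    return n
```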
In our simulations, we use a box of size $(100\times 60\times 60)\lambda_{L}$
that is moving alongside the laser pulse until the accelerated protons leave
the plasma. The grid size is chosen as $h_{x}=0.05\lambda_{L}$ (direction of
propagation) and $h_{y}=h_{z}=0.4\lambda_{L}$. We do, however, use a feature
of VLPL that allows for the increase of cell size the further we go from the
central axis in the transverse direction in order to reduce computational
effort. The solver used for the simulations is the RIP solver [19], which
requires that the time step is $\Delta t=h_{x}/c$.
Table 1: Results of the simulations with different ramp lengths in terms of average polarization and peak density of the proton bunch. The average polarization of the proton bunch is obtained by selecting the particles in the high-density region leaving the plasma channel. Note that for longer ramps ($75\lambda_{L}$, $100\lambda_{L}$) the shape of the proton bunch is increasingly ill-defined.
$L_{\mathrm{ramp}}\;[\lambda_{L}]$ | 0 | 25 | 50 | 75 | 100
---|---|---|---|---|---
avg. polarization $\langle s_{y}\rangle$ | 0.81 | 0.83 | 0.83 | 0.83 | 0.63
$n_{\mathrm{peak}}$ [$n_{\mathrm{cr}}$] | 0.209 | 0.126 | 0.044 | 0.027 | 0.025
Figure 2: Density and spin polarization for the simulations with ramp lengths
$0\lambda_{L}$, $50\lambda_{L}$ and $100\lambda_{L}$ (left to right) after the
accelerated proton bunch has left the plasma (with the end of the ramp in the
middle of the box). Note that the density plots are clipped at
$0.1n_{\mathrm{cr}}$ for better visibility. The density plots show that
increasing the ramp length is
accompanied by a higher transverse spread of the resulting proton beam, which
is located at $X\approx 320\lambda_{L}$ for the case of
$L_{\mathrm{ramp}}=0\lambda_{L}$.
## 3 Discussion
When the laser pulse enters the target, the electrons are driven out in the
direction transverse to laser propagation, leaving behind an ionic filament
that is pushed out at the end of the plasma due to the electromagnetic fields.
Since all of our simulations start from the same configuration, the created
proton bunch is identical until the start of the down-ramp. We can see
that the central filament initially keeps its polarization very well while the
region around it starts to depolarize due to the electromagnetic fields
(compare Fig. 1).
As we enter the down-ramp region, we can start to see the effects of the
different ramp lengths $L_{\mathrm{ramp}}$. For the target with a hard cut-off
in density, i.e. $L_{\mathrm{ramp}}=0\lambda_{L}$, the usual MVA fields can be
observed: the magnetic vortex starts to appear, together with a uniform
longitudinal electric field $E_{x}$ that drives the protons further out of the
plasma. The
proton energies that can be achieved for a comparable setup are discussed in
the work by Jin et al. [16], where they reached $\mathcal{E}_{p}\approx 53$
MeV for a laser with $a_{0}=25$ and a HCl plasma of similar density, but with
$L_{\mathrm{ramp}}=5\lambda_{L}$.
Going to a longer ramp length, we can see that, due to the lower densities in
those regions, the fields start to expand transversely while the proton bunch
is still in the plasma (not shown here). An approximation for the strength of
the magnetic field in a down-ramp is given by Nakamura et al. [10]. This
change in field configuration leads to differences that are clearly visible
when looking at the density plots in Fig. 2: the accelerated proton bunch is
modulated such that for longer ramps it spreads further in the transverse
direction. Especially in the cases of $L_{\mathrm{ramp}}=75\lambda_{L}$ and
$100\lambda_{L}$, the protons leaving the plasma hardly form a consistent
bunch structure anymore. The transverse density profiles of the different
beams as well as the peak densities are shown in Fig. 3 and Table 1.
Figure 3: Transverse beam profile (at the plane with peak density) for a
selection of different ramp lengths. Longer ramps lead to a widening of the
accelerated proton beam, reducing the peak density.
Figure 4: Exemplary polarization data for the case of
$L_{\mathrm{ramp}}=0\lambda_{L}$ at time step $300\tau_{0}$. The spin of each
PIC particle is assigned to a corresponding bin in the longitudinal direction,
for which the average spin polarization (red line) and the number of PIC
particles (blue, dashed) are given.
The change in bunch structure can be attributed to two factors. Firstly,
increasing the ramp length as we do in our simulations also effectively leads
to a longer interaction volume, meaning that the laser is depleted of more
energy. Secondly, the down-ramp allows for the transverse
fields to expand, leading to a wider channel (also visible at the left
boundary of the density plots in Fig. 2) and therefore the transverse growth
of the proton bunch. The defocusing of the proton bunch during the passage of
the down-ramp region is in agreement with the observations in [10, 17]. There,
the steepness of the density gradient was fixed to a value that allowed for
the best collimation possible.
Besides the quality of the bunch in terms of transverse and longitudinal
structure, the degree of polarization obtained at the end is of main interest.
We can directly tell by looking at the precession frequency
$\boldsymbol{\Omega}$ in equation (2) that the change in proton spin should be
significantly lower than for electrons, as $|\boldsymbol{\Omega}|\propto
m^{-1}$. To measure the polarization of the bunch, we consider the particles
close to the central axis. We subdivide the longitudinal direction into
several bins for which we calculate the average polarization $\langle
s_{y}\rangle$. Depending on the spatial beam structure, different degrees of
polarization can be located along the volume (compare Fig. 4). This is due to
the fact that protons that end up in the beam front experience different
electromagnetic fields than the ones in the beam’s stern, especially when
traversing through the down-ramp.
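The binning diagnostic described above can be sketched as follows; the function name and the default bin count are illustrative, not taken from the paper:

```python
import numpy as np

def binned_polarization(x, sy, x_min, x_max, n_bins=40):
    """Average spin polarization <s_y> and particle count per longitudinal
    bin, as in the Fig. 4-style diagnostic."""
    edges = np.linspace(x_min, x_max, n_bins + 1)
    # assign each particle to a bin; clip so x == x_max lands in the last bin
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    sums = np.bincount(idx, weights=sy, minlength=n_bins)
    # average s_y per bin, leaving empty bins at zero
    avg = np.divide(sums, counts, out=np.zeros(n_bins), where=counts > 0)
    return edges, counts, avg
```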
More precisely, if we subdivide our bunch into a back, middle and front part,
it becomes clear that protons in the front have been focused into the bunch
for the shortest amount of time. This is because here the laser pulse has only
just created the channel inside the plasma slab and, in turn, the filament.
Therefore, these protons experience a comparatively strong field, decreasing
the average polarization. In contrast, protons from the back of the bunch have
been propagating through the channel for a longer period of time and have
consequently experienced more depolarizing fields. This is why the spin
polarization at the back of the bunch (towards $x\approx 377\lambda_{L}$ in
Fig. 4) decreases even faster than at its front (towards $x\approx
380\lambda_{L}$). The middle part of the bunch encounters comparatively lower
field strengths and has been propagating for a moderate amount of time,
yielding a higher degree of polarization than the other two parts, in
accordance with the result that
$|\boldsymbol{\Omega}|\propto F:=\mathrm{max}(|\textbf{E}|,|\textbf{B}|)\;,$
(4)
which we can see from the derivation in [15].
As seen in Figure 4, this difference in polarization between the different
proton bunch “components” can already be seen in the absence of a down-ramp;
there, however, it is mostly negligible, as we still retain a considerable
average polarization. Once we move to longer ramps, the difference in
polarization degrees inside the bunch is strongly amplified, up to the point
where we see a significant reduction in average polarization. For these longer
ramps, we get more lower-polarization protons since on the one hand the
protons traverse through an effectively longer plasma, leading to further spin
precession. On the other hand, the flatter (i.e. longer) density gradient
amplifies the differences in the electromagnetic fields that the protons in
the bunch’s front and back experience, respectively. This is a direct
consequence of the fact that depending on the down-ramp slope, the focusing
(or compression) of the bunch becomes more or less pronounced: longer ramp
lengths lead to the compression of a larger number of low-polarization protons
into the bunch tail (which can be seen in Fig. 2). Further, the magnetic field
amplitude decreases for lower densities. Nakamura et al. [10] have found that
for a down-ramp like in our case the magnetic field decreases as
$B_{2}=B_{1}\frac{n_{1}+n_{2}}{2n_{1}}\;,$ (5)
where indices 1 and 2 denote the high- and low-density region, respectively.
This gives further insight into why the front portion of the bunch has a
slower decrease in polarization than the back.
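Equation (5) can be evaluated directly; the helper below is ours, for illustration. As a quick sanity check, a hard cut-off into vacuum ($n_{2}=0$) halves the field:

```python
def ramp_b_field(B1, n1, n2):
    """Magnetic field in the low-density region of a density down-ramp
    following Nakamura et al.: B2 = B1 * (n1 + n2) / (2 * n1)."""
    return B1 * (n1 + n2) / (2.0 * n1)
```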
In total, for most of the ramp lengths considered, a high polarization of
around 80% is maintained. Only in the simulation with
$L_{\mathrm{ramp}}=100\lambda_{L}$ do we see a significant decrease to roughly
60%. It must, however, be strongly emphasized that high-polarization protons
are still pushed in propagation direction (see spin plot in Fig. 2), only in a
non-collimated form. Instead, some protons with lower polarization (red region
in the spin plots) make up a significant part of the proton bunch visible in
the density plots.
These negative effects can partly be mitigated by choosing different
laser-plasma parameters, although it has to be noted that for higher laser
intensities the polarization degree will also decrease, as was shown in [16].
This, as well as the polarization decrease in the case of significantly longer
ramps, is explained by the scaling laws derived by Thomas et al. [15]: A
particle beam can be viewed as depolarized, once the angle between initial
polarization direction and the final spin vectors is in the range of $\pi/2$.
The time after which this is to be expected is called the minimum
depolarization time $T_{D,p}$ and scales as
$T_{D,p}=\frac{\pi}{6.6aF}\;.$ (6)
This means that stronger electromagnetic fields induced by the laser pulse
lead to a faster depolarization of the protons. Further, the longer
interaction volume due to longer down-ramps also may decrease the polarization
once we reach the range of the depolarization time. It has to be noted that in
the equation above, $F$ is assumed constant, i.e. this holds for constant
density plasma slabs as long as the laser pulse still has most of its energy.
Newly “born” protons, especially in down-ramp regions, can experience
differing (in the ramp: lower) field strengths. While shorter interaction
volumes are desirable for high-quality proton bunches, this may come at the
cost of experimental realizability due to limitations of the nozzles and
blades usable for the creation of a pre-polarized plasma target.
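The scaling law (6) is straightforward to evaluate; the helper below is our own sketch, taking $F$ in the normalized units of Ref. [15] and using the proton's anomalous magnetic moment $a\approx 1.79$ as the default:

```python
import numpy as np

A_PROTON = 1.793  # anomalous magnetic moment of the proton

def min_depolarization_time(F, a=A_PROTON):
    """Minimum depolarization time T_D = pi / (6.6 * a * F) following
    Thomas et al., with F = max(|E|, |B|) assumed constant."""
    return np.pi / (6.6 * a * F)
```

The inverse proportionality to $F$ makes the qualitative statement above explicit: doubling the field strength halves the time after which the bunch must be considered depolarized.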
In a publication by Sharma [20], it has been shown that the ideal
plasma (plateau) length for MVA scales as
$L_{\mathrm{ch}}=a_{0}c\tau_{0}\frac{n_{\mathrm{cr}}}{n_{e}}K\;,$ (7)
where $K=0.074$ (in 3D) is a geometric factor. This means that, depending
on the target density, we can adjust our laser parameters accordingly. Especially
for lower $a_{0}$ we are not limited to the pulse duration we have proposed
for the simulation setup. A different $\tau_{0}$ leads to a different time
scale over which the MVA structures are built up, meaning that the collimation
process of the protons into the final bunch can be aligned in such a way that
we achieve both a good spatial focusing as well as the collimation of highly
polarized protons.
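Equation (7) can be evaluated as follows; the function name is ours and the numbers in the test are illustrative, not taken from the simulations above. With the pulse length $c\tau_{0}$ given in laser wavelengths, $L_{\mathrm{ch}}$ comes out in laser wavelengths as well:

```python
def ideal_channel_length(a0, pulse_length, ne_over_ncr, K=0.074):
    """Ideal MVA plateau length L_ch = a0 * (c * tau0) * (n_cr / n_e) * K
    following Sharma, with K = 0.074 the 3D geometric factor."""
    return a0 * pulse_length * K / ne_over_ncr
```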
Another option to reduce spin precession due to the prevalent electromagnetic
fields would be to place a foil (e.g. made of carbon) that is able to shield
part of the fields. This setup would, however, be more in line with RPA [9].
In the case of electrons, a mechanical setup for filtering out unwanted spin
contributions has recently been proposed [21]. For protons, a similar setup
might be realizable. Depolarization after the initial acceleration of the
protons out of the channel becomes increasingly negligible, as the prefactors
of the precession frequency (3) decrease for higher Lorentz factors $\gamma$.
Lastly, we note that to experimentally test whether longer gas-jet targets are
suitable for polarized beam preparation, elements with more inert spins might
be employed. It has, e.g., been shown that ${}^{129}\mathrm{Xe}$ gas can be
nuclear-polarized to a high degree (see [22] and references therein). However,
in this case different densities (and, consequently, laser parameters), as
compared to a HCl target, have to be used.
## 4 Conclusion
We have studied the effect of down-ramp length for a near-critical HCl gas
target that we use to obtain highly spin-polarized proton bunches via MVA. The
interaction plasma has been pre-polarized, since polarization build-up over
the course of acceleration is negligible. We observe that longer down-ramps
modulate the spatial bunch structure, leading to ill-defined bunches. For most
of the ramp lengths examined, the obtained polarization robustly stays at
around 80% due to the inert proton spin. Significantly longer ramps lead to
the collimation of lower-polarization protons instead of the desired ones. The
difference in average bunch polarization along the propagation direction could
be explained in terms of the disparate field strengths different parts of the
bunch experience: the front-most part contains only recently collimated
protons that therefore experience weaker fields (especially in the down-ramp).
Further, they experience those fields for a shorter period of time than
protons from the bunch back, which have propagated longer distances and
consequently are depolarized further. The deteriorative effects of longer
down-ramps can, to some extent, be compensated by adjusting the laser and
plasma parameters. Generally, as-short-as-possible interaction
volumes are preferable, since the minimum depolarization time for the bunch is
inversely proportional to the field strength experienced by the protons. A
next step in this subject could be an extended (semi-)analytical description
of the collimation process and specifically its effect on the bunch
polarization.
L.R. would like to thank X.F. Shen and K. Jiang for the fruitful discussions.
This work has been supported by the DFG (project PU 213/9-1). The authors
gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-
centre.eu) for funding this project by providing computing time on the GCS
Supercomputer JUWELS at Jülich Supercomputing Centre (JSC). The work of A.H.
and M.B. has been carried out in the framework of the JuSPARC (Jülich Short-
Pulse Particle and Radiation Center) and has been supported by the ATHENA
(Accelerator Technology Helmholtz Infrastructure) consortium.
## References
* [1] D. Androic et al., Nature 557, 207 (2018)
* [2] M. Burkardt et al., Rep. Prog. Phys. 73, 016201 (2010)
* [3] A. Pukhov and J. Meyer-ter Vehn, Appl. Phys. B 74, 355 (2002)
* [4] J. Faure et al., Nature 431, 541 (2004)
* [5] M. Büscher et al., High Power Laser Sci 8, e36 (2020)
* [6] Y. Wu et al., Phys. Rev. E 100, 043202 (2019)
* [7] Y. Wu et al., New. J. Phys. 21, 073052 (2019)
* [8] M. Roth and M. Schollmeier, arXiv:1705.10569 (2017)
* [9] A. Macchi et al., JINST 12, C04016 (2017)
* [10] T. Nakamura et al., Phys. Rev. Lett. 105, 135002 (2010)
* [11] L. Willingale et al., Phys. Rev. Lett. 96, 245002 (2006)
* [12] L. Willingale et al., IEEE Trans. Plasma Sci. 36(4), 1825-1823 (2008)
* [13] A. Hützen et al., High Power Laser Sci 7, e16 (2019)
* [14] B. Shen et al., Phys. Rev. E 76, 055402(R) (2007)
* [15] J. Thomas et al., Phys. Rev. Accel. Beams 23, 064401 (2020)
* [16] L. Jin et al., Phys. Rev. E 102, 011201(R) (2020)
* [17] J. Park et al., Phys. Plasmas 26, 103108 (2019)
* [18] A. Pukhov, J. Plasma Phys. 61(3), 425-433 (1999)
* [19] A. Pukhov, J. Comp. Phys. 418, 109622 (2020)
* [20] A. Sharma, Sci. Rep. 8, 2191 (2018)
* [21] Y. Wu et al. Phys. Rev. Applied 13, 044064 (2020)
* [22] D. J. Kennedy et al., Sci. Rep. 7, 43994 (2017)
# Adversarial Learning of Poisson Factorisation Model for Gauging Brand
Sentiment in User Reviews
Runcong Zhao, Lin Gui, Gabriele Pergola, Yulan He
Department of Computer Science, University of Warwick, UK
<EMAIL_ADDRESS>
###### Abstract
In this paper, we propose the Brand-Topic Model (BTM) which aims to detect
brand-associated polarity-bearing topics from product reviews. Different from
existing models for sentiment-topic extraction which assume topics are grouped
under discrete sentiment categories such as ‘_positive_’, ‘_negative_’ and
‘_neutral_’, BTM is able to automatically infer real-valued brand-associated
sentiment scores and generate fine-grained sentiment-topics in which we can
observe continuous changes of words under a certain topic (e.g., ‘_shaver_’
or ‘_cream_’) while its associated sentiment gradually varies from negative
to positive. BTM is built on the Poisson factorisation model with the
incorporation of adversarial learning. It has been evaluated on a dataset
constructed from Amazon reviews. Experimental results show that BTM
outperforms a number of competitive baselines in brand ranking, achieving a
better balance of topic coherence and uniqueness, and extracting better-
separated polarity-bearing topics.
## 1 Introduction
Market intelligence aims to gather data from a company’s external environment,
such as customer surveys, news outlets and social media sites, in order to
understand customer feedback on its products and services and on its
competitors, for better decision making in its marketing strategies. Since
consumer purchase decisions are heavily influenced by online reviews, it is
important to automatically analyse customer reviews for online brand
monitoring. Existing sentiment analysis models either classify reviews into
discrete polarity categories such as ‘_positive_’, ‘_negative_’ or
‘_neutral_’, or perform more fine-grained sentiment analysis, in which the
aspect-level sentiment label is predicted, though still in the discrete polarity
category space. We argue that it is desirable to be able to detect subtle
topic changes under continuous sentiment scores. This allows us to identify,
for example, whether customers with slightly negative views share similar
concerns with those holding strong negative opinions; and what positive
aspects are praised by customers the most. In addition, deriving brand-
associated sentiment scores in a continuous space makes it easier to generate
a ranked list of brands, allowing for easy comparison.
Existing studies on brand topic detection were largely built on the Latent
Dirichlet Allocation (LDA) model Blei et al. (2003) which assumes that latent
topics are shared among competing brands for a certain market. They are,
however, not able to separate positive topics from negative ones. Approaches
to polarity-bearing topic detection can only identify topics under discrete
polarity categories such as ‘_positive_’ and ‘_negative_’. We instead
assume that each brand is associated with a latent real-valued sentiment score
falling into the range of $[-1,1]$, in which $-1$ denotes negative, $0$
neutral and $1$ positive sentiment, and propose a Brand-Topic Model built on the Poisson
Factorisation model with adversarial learning. Example outputs generated from
BTM are shown in Figure 1 in which we can observe a transition of topics with
varying topic polarity scores together with their associated brands.
Figure 1: Example topic results generated from proposed Brand-Topic Model. We
observe a transition of topics with varying topic polarity scores. Besides the
change of sentiment-related words (e.g., ‘_problem_’ in negative topics and
‘_better_’ in positive topics), we can also see a change of their
associated brands. Users are more positive about Braun, negative about
Remington, and have mixed opinions on Norelco.
More concretely, in BTM, a document-word count matrix is factorised into a
product of two positive matrices, a document-topic matrix and a topic-word
matrix. A word count in a document is assumed drawn from a Poisson
distribution with its rate parameter defined as a product of a document-
specific topic intensity and its word probability under the corresponding
topic, summing over all topics. We further assume that each document is
associated with a brand-associated sentiment score and a latent topic-word
offset value. The occurrence count of a word is then jointly determined by
both the brand-associated sentiment score and the topic-word offset value. The
intuition behind this is that if a word tends to occur in documents with
positive polarities but the brand-associated sentiment score is negative, then
the topic-word offset value will have an opposite sign, forcing the occurrence
count of such a word to be reduced. Furthermore, for each document, we can
sample its word counts from their corresponding Poisson distributions and form
a document representation which is subsequently fed into a sentiment
classifier to predict its sentiment label. If we reverse the sign of the
latent brand-associated sentiment score and sample the word counts again, then
the sentiment classifier fed with the resulting document representation should
generate an opposite sentiment label.
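The generative step described above can be sketched as follows. The exponential coupling between the brand sentiment score and the topic-word offset follows the TBIP parameterisation and is our assumption here; all names are illustrative:

```python
import numpy as np

def poisson_rate(theta, beta, eta, b):
    """Poisson rate for each word v of one document:
    lambda_v = sum_k theta_k * beta_kv * exp(b * eta_kv),
    so the brand sentiment score b and the topic-word offsets eta jointly
    raise or suppress a word's expected count."""
    return (theta[:, None] * beta * np.exp(b * eta)).sum(axis=0)

def sample_word_counts(theta, beta, eta, b, rng):
    """Draw one bag-of-words vector from the model."""
    return rng.poisson(poisson_rate(theta, beta, eta, b))
```

Flipping the sign of `b` shifts the expected counts toward the opposite-polarity words, which is exactly the behaviour the adversarial sentiment classifier is trained to detect.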
Our proposed BTM is partly inspired by the recently developed Text-Based Ideal
Point (TBIP) model Vafa et al. (2020) in which the topic-specific word choices
are influenced by the ideal points of authors in political debates. However,
TBIP is fully unsupervised and when used in customer reviews, it generates
topics with mixed polarities. In contrast, BTM makes use of the document-level
sentiment labels and is able to produce better-separated polarity-bearing
topics. As will be shown in the experiments section, BTM outperforms
TBIP on brand ranking, achieving a better balance of topic coherence and topic
uniqueness measures.
The contributions of the model are three-fold:
* •
We propose a novel model built on Poisson Factorisation with adversarial
learning for brand topic analysis which can disentangle the sentiment factor
from the semantic latent representations to achieve a flexible and
controllable topic generation;
* •
We approximate word count sampling from Poisson distributions by the Gumbel-
Softmax-based word sampling technique, and construct document representations
based on the sampled word counts, which can be fed into a sentiment
classifier, allowing for end-to-end learning of the model;
* •
The model, trained with the supervision of review ratings, is able to
automatically infer the brand polarity scores from review text only.
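The Gumbel-Softmax word sampling mentioned in the second contribution can be sketched generically as follows (this is the standard reparameterisation trick, not the authors' exact implementation):

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Differentiable stand-in for drawing a one-hot word sample: perturb
    the logits with Gumbel(0, 1) noise and apply a temperature-controlled
    softmax. As tau -> 0 the output approaches a one-hot vector; larger tau
    gives a smoother, near-uniform distribution."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = y - y.max()  # for numerical stability
    e = np.exp(y)
    return e / e.sum()
```

Because the output is a smooth function of the logits, gradients can flow from the downstream sentiment classifier back into the topic model, enabling the end-to-end training described above.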
The rest of the paper is organised as follows. Section 2 presents the related
work. Section 3 describes our proposed Brand-Topic Model. Sections 4 and 5
discuss the experimental setup and evaluation results, respectively. Finally,
Section 6 concludes the paper and outlines future research directions.
## 2 Related Work
Our work is related to the following research:
#### Poisson Factorisation Models
Poisson factorisation is a class of non-negative matrix factorisation in which
a matrix is decomposed into a product of matrices. It has been used in many
personalised applications such as personalised budget recommendation Guo et al.
(2017), ranking Kuo et al. (2018), and content-based social recommendation Su
et al. (2019); de Souza da Silva et al. (2017).
Poisson factorisation can also be used for topic modelling, where a document-
word count matrix is factorised into a product of two non-negative matrices: a
document-topic matrix and a topic-word matrix Gan et al. (2015); Jiang et al.
(2017). In such a setup, the count of a word in a document is assumed to be
drawn from a Poisson distribution whose rate parameter is the product of the
document-specific topic intensity and the word's probability under the
corresponding topic, summed over all topics.
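To make the setup concrete, the following sketch generates a document-word count matrix under a Poisson factorisation model; the dimensions and Gamma hyper-parameters are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

D, K, V = 4, 3, 10  # documents, topics, vocabulary size (illustrative)

# Non-negative factors: document-topic intensities and topic-word rates.
theta = rng.gamma(shape=0.3, scale=1.0, size=(D, K))  # document-topic matrix
beta = rng.gamma(shape=0.3, scale=1.0, size=(K, V))   # topic-word matrix

# Poisson rate for each (document, word) cell: sum over topics.
rate = theta @ beta  # shape (D, V)

# Each observed count c_dv is drawn from Pois(rate_dv).
counts = rng.poisson(rate)
print(counts.shape)  # (4, 10)
```

Inference then runs this process in reverse, recovering `theta` and `beta` from the observed counts.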
#### Polarity-bearing Topics Models
Early approaches to polarity-bearing topic extraction were built on LDA, in
which a word is assumed to be generated from a corpus-wide sentiment-topic-
word distribution Lin and He (2009). To separate topics bearing different
polarities, word prior polarity knowledge needs to be incorporated into model
learning. In recent years, neural-network-based topic models have been
proposed for many NLP tasks, such as information retrieval Xie et al. (2015),
aspect extraction He (2017) and sentiment classification He et al. (2018).
Most of them are built upon the Variational Autoencoder (VAE) Kingma and
Welling (2014), which constructs a neural network
to approximate the topic-word distribution in probabilistic topic models
Srivastava and Sutton (2017); Sønderby et al. (2016); Bouchacourt et al.
(2018). Intuitively, training the VAE-based supervised neural topic models
with class labels Chaidaroon and Fang (2017); Huang et al. (2018); Gui et al.
(2020) can introduce sentiment information into topic modelling, which may
generate better features for sentiment classification.
#### Market/Brand Topic Analysis
The classic LDA can also be used to analyse market segmentation and brand
reputation in various fields such as finance and medicine (Barry et al., 2018;
Doyle and Elkan, 2009). For market analysis, Iwata et al. (2009) proposed a
topic tracking model to analyse customers’ purchase probabilities and trends
without storing historical data for inference at the current time step. Topic
analysis can also be combined with additional market information for
recommendations: for example, based on user profiles and item topics, Gao et
al. (2017) dynamically modelled users’ items of interest for recommendation.
Zhang et al. (2015) focused on brand topic tracking. They built a dynamic
topic model to analyse texts and images posted on Twitter and track
competitions in the luxury market among given brands, in which topic words
were used to identify recent hot topics in the market (e.g. _Rolex watch_) and
brands over topics were used to identify the market share of each brand.
#### Adversarial Learning
Several studies have explored the application of adversarial learning
mechanisms to text processing, for example for style transfer and disentangled
representation learning John et al. (2019), and for topic modelling Masada and
Takasu (2018). In particular, Wang et al. (2019) proposed an Adversarial-
neural Topic Model (ATM) based on the Generative Adversarial Network (GAN)
Goodfellow et al. (2014), which employs an adversarial approach to train a
generator network that produces word distributions indistinguishable from the
topic distributions in the training set. Wang et al. (2020) further extended
ATM with the Bidirectional Adversarial Topic (BAT) model, using bidirectional
adversarial training to incorporate a Dirichlet prior and exploit the
information encoded in word embeddings. Similarly, Hu et al. (2020) built on
the aforementioned adversarial approach by adding cycle-consistent
constraints.
Although these methods make use of adversarial mechanisms to approximate the
posterior distribution of topics, to the best of our knowledge, none of them
has so far used adversarial learning to guide the generation of topics based
on their sentiment polarity, and none provides a mechanism for smooth
transitions between topics, as introduced in our Brand-Topic Model.
## 3 Brand-Topic Model (BTM)
Figure 2: The overall architecture of the Brand-Topic Model.
We propose a probabilistic model for monitoring the assessment of various
brands in the beauty market from Amazon reviews. We extend the Text-Based
Ideal Point (TBIP) model with adversarial learning and Gumbel-Softmax to
construct document features for sentiment classification. The overall
architecture of our proposed BTM is shown in Figure 2. In what follows, we
will first give a brief introduction of TBIP, followed by the presentation of
our proposed BTM.
### 3.1 Background: Text-Based Ideal Point (TBIP) model
TBIP Vafa et al. (2020) is a probabilistic model which aims to quantify
political positions (i.e. ideal points) from politicians’ speeches and tweets
via Poisson factorisation. In its generative process, political text is
generated from the interactions of several latent variables: the per-document
topic intensity $\theta_{dk}$ for $K$ topics and $D$ documents, the topics
$\beta_{k}$, each a $V$-vector over a vocabulary of size $V$, the author’s
ideal point, a real-valued scalar $x_{s}$, and the ideological topic, a
real-valued $V$-vector $\eta_{k}$. In particular, the ideological topic
$\eta_{k}$ adjusts a neutral topic (e.g. _gun_ , _abortion_ , etc.) according
to the author’s ideal point (e.g. _liberal_ , _neutral_ , _conservative_),
thus modifying the prominent words in the original topic (e.g. ‘ _gun
violence_ ’ or ‘ _constitutional rights_ ’). The observed variables are the
author $a_{d}$ of a document $d$ and the word count $c_{dv}$ of term $v$ in
$d$.
The TBIP model places a Gamma prior on $\bm{\beta}$ and $\bm{\theta}$, an
assumption inherited from Poisson factorisation, with $m$ and $n$ being
hyper-parameters:
$\theta_{dk}\sim\mbox{Gamma}(m,n)\quad\beta_{kv}\sim\mbox{Gamma}(m,n)$
It instead places a standard normal prior over the ideological topic
$\bm{\eta}$ and the ideal point $\bm{x}$:
$\eta_{kv}\sim\mathcal{N}(0,1)\quad x_{s}\sim\mathcal{N}(0,1)$
The word count $c_{dv}$ of term $v$ in $d$ is then modelled with a Poisson
distribution:
$c_{dv}\sim\text{Pois}\Big(\sum_{k}\theta_{dk}\beta_{kv}\exp\\{x_{a_{d}}\eta_{kv}\\}\Big)$
(1)
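The generative process above can be sketched as follows; the dimensions, Gamma and normal hyper-parameters, and author assignments are illustrative, not the values used in the original model:

```python
import numpy as np

rng = np.random.default_rng(1)
D, K, V, S = 5, 3, 8, 2  # documents, topics, vocab size, authors (illustrative)

# Priors from the TBIP generative process.
theta = rng.gamma(0.3, 1.0, size=(D, K))   # per-document topic intensity
beta = rng.gamma(0.3, 1.0, size=(K, V))    # neutral topics
eta = rng.normal(0.0, 1.0, size=(K, V))    # ideological topics
x = rng.normal(0.0, 1.0, size=S)           # per-author ideal points
author = rng.integers(0, S, size=D)        # observed author of each document

# Equation (1): rate_dv = sum_k theta_dk * beta_kv * exp(x_{a_d} * eta_kv)
rate = np.einsum('dk,kv,dkv->dv', theta, beta,
                 np.exp(x[author][:, None, None] * eta[None, :, :]))
counts = rng.poisson(rate)  # one draw of the observed word counts
```

Note how the ideal point only enters through the product $x_{a_d}\eta_{kv}$, so a neutral author ($x=0$) recovers the plain Poisson factorisation rate.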
### 3.2 Brand-Topic Model (BTM)
Inspired by the TBIP model, we introduce the Brand-Topic Model by
reinterpreting the ideal point $x_{s}$ as a brand-polarity score $x_{b}$,
expressing the overall sentiment derived from reviews related to a brand, and
the ideological topics $\eta_{kv}$ as opinionated topics, i.e. polarised
topics about brand qualities.
Thus, the count $c_{dv}$ of term $v$ in a product review $d$ derives from the
interactions of the hidden variables as $c_{dv}\sim\mbox{Pois}(\lambda_{dv})$,
where:
$\lambda_{dv}=\sum_{k}\theta_{dk}\exp\\{\log\beta_{kv}+x_{b_{d}}\eta_{kv}\\}$
(2)
with the priors over $\bm{\beta}$, $\bm{\theta}$, $\bm{\eta}$ and $\bm{x}$
initialised according to the TBIP model.
The intuition is that if a word tends to occur frequently in reviews with
positive polarity but the brand-polarity score of the current brand is
negative, then the expected count of that word is reduced, since $x_{b_{d}}$
and $\eta_{kv}$ have opposite signs.
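A tiny numerical instance of Equation (2) illustrates this intuition for a single topic and a single word; the values of $\theta$, $\beta$ and $\eta$ are made up for illustration:

```python
import math

theta = 1.0               # single topic intensity
log_beta = math.log(0.5)  # word's base log-rate under the topic
eta = 2.0                 # the word leans positive under this topic

# Equation (2) for one topic: lambda = theta * exp(log beta + x_b * eta)
def rate(x_b):
    return theta * math.exp(log_beta + x_b * eta)

print(rate(0.0))   # ~0.5: a neutral brand recovers the base rate beta
print(rate(1.0))   # positive brand score inflates the expected count
print(rate(-1.0))  # negative brand score suppresses it
```

With opposite signs of `x_b` and `eta`, the exponent turns negative and the word's expected count shrinks, exactly as described above.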
#### Distant Supervision and Adversarial Learning
Product reviews may contain both opinions about products and more general
remarks about the user experience (e.g. the delivery service) which are not
strictly related to the product itself and could mislead the inference of a
reliable brand-polarity score. Therefore, to generate topics which are mainly
characterised by product opinions, we provide an additional distant
supervision signal via the review ratings. To this aim, we use a sentiment
classifier, a simple linear layer, over the generated document representations
to infer topics that are discriminative of the review’s rating.
In addition, to deal with the imbalanced class distribution of the reviews, we
design an adversarial mechanism linking the brand-polarity score to the
topics, as shown in Figure 3. We contrastively sample adversarial training
instances by reversing the original brand-polarity score ($x_{b}\in[-1,1]$)
and generating the associated representations. These representations are fed,
together with the original ones, into the shared sentiment classifier to
maximise their distance in the latent feature space.
Figure 3: Process of Adversarial Learning (AL): (a) The imbalanced
distribution of different sentiment categories; (b) The biased estimation of
distribution from training samples; (c) Contrastive sample generation (white
triangles) by reversing the sampling results from biased estimation (white
dots); (d) Adjusting the biased estimation of (b) by the contrastive samples.
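The contrastive sampling step can be sketched as follows, under the simplifying assumption that a document representation is built from the Poisson rates of Equation (2); all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
K, V = 3, 6  # topics, vocabulary size (illustrative)
theta = rng.gamma(0.3, 1.0, size=K)
log_beta = np.log(rng.gamma(0.3, 1.0, size=(K, V)) + 1e-6)
eta = rng.normal(size=(K, V))

def doc_rates(x_b):
    """Poisson rates for one document given a brand-polarity score."""
    return theta @ np.exp(log_beta + x_b * eta)

x_b = 0.8                      # sampled brand-polarity score
original = doc_rates(x_b)      # representation for the real polarity
contrastive = doc_rates(-x_b)  # adversarial sample: polarity reversed
# Both representations go through the shared sentiment classifier; the
# training losses push the two apart in the latent feature space.
```

The key point is that the contrastive sample reuses all topic parameters and only flips the sign of $x_b$, so the classifier is forced to rely on the polarity-bearing part of the representation.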
#### Gumbel-Softmax for Word Sampling
As discussed earlier, in order to construct document features for sentiment
classification, we need to sample word counts from the Poisson distribution.
However, directly sampling word counts from the Poisson distribution is not
differentiable. To enable back-propagation of gradients, we apply Gumbel-
Softmax (Jang et al., 2017; Joo et al., 2020), a gradient estimator based on
the reparameterisation trick.
For a word $v$ in document $d$, its occurrence count,
$c_{dv}\sim\mbox{Pois}(\lambda_{dv})$, is a non-negative random variable with
Poisson rate $\lambda_{dv}$. We can approximate it by sampling from the
truncated Poisson distribution,
$c_{dv_{n}}\sim\mbox{TruncatedPois}(\lambda_{dv},n)$, where
$\displaystyle\pi_{k}=\Pr(c_{dv}=k)=\frac{\lambda_{dv}^{k}e^{-\lambda_{dv}}}{k!}\quad\mbox{for}\quad
k\in\\{0,1,\ldots,n-2\\},$
$\displaystyle\pi_{n-1}=1-\sum_{k=0}^{n-2}\pi_{k}.$
We can then draw samples $z_{dv}$ from the categorical distribution with class
probabilities $\pi$ = ($\pi_{0}$, $\pi_{1}$, $\cdots$, $\pi_{n-1})$ using:
$\displaystyle u_{i}\sim\mbox{Uniform}(0,1)\quad g_{i}=-\log(-\log(u_{i}))$
$\displaystyle w_{i}=\mbox{softmax}\big{(}(g_{i}+\log\pi_{i})/\tau\big{)}\quad
z_{dv}=\sum_{i}w_{i}c_{i}$
where $\tau$ is a constant referred to as the temperature and $c$ is the
outcome vector. By taking the weighted average of the outcomes, the process
becomes differentiable; the sampled word counts are then used to form the
document representation, which is fed as input to the sentiment classifier.
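The whole sampling procedure can be sketched in a few lines; the truncation $n$ and temperature $\tau$ below are illustrative choices, with a low temperature so that the soft count concentrates near a hard categorical draw:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_count(lam, n=10, tau=0.1):
    """Differentiable surrogate for a draw from Pois(lam), truncated to n outcomes."""
    # Truncated Poisson class probabilities pi_0 .. pi_{n-1}.
    pi = np.array([lam**k * math.exp(-lam) / math.factorial(k)
                   for k in range(n - 1)])
    pi = np.append(pi, max(1.0 - pi.sum(), 0.0))  # pi_{n-1} absorbs the tail mass
    # Gumbel noise g_i = -log(-log(u_i)).
    u = rng.uniform(size=n)
    g = -np.log(-np.log(u))
    logits = (g + np.log(pi + 1e-20)) / tau
    w = np.exp(logits - logits.max())
    w /= w.sum()                     # softmax weights w_i
    outcomes = np.arange(n)          # the outcome vector c (counts 0 .. n-1)
    return float(w @ outcomes)       # z_dv: soft, differentiable word count

samples = [gumbel_softmax_count(2.0) for _ in range(3000)]
print(np.mean(samples))  # close to the Poisson rate 2.0 at low temperature
```

As $\tau\to 0$ the weighted average collapses to the Gumbel-max categorical draw, so the surrogate matches the truncated Poisson sample while keeping gradients with respect to $\lambda_{dv}$.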
#### Objective Function
Our final objective function consists of three parts, including the Poisson
factorisation model, the sentiment classification loss, and the reversed
sentiment classification loss (for adversarial learning). For the Poisson
factorisation modelling part, mean-field variational inference is used to
approximate posterior distribution (Jordan et al., 1999; Wainwright and
Jordan, 2008; Blei et al., 2017).
$q_{\phi}(\theta,\beta,\eta,x)=\prod_{d,k,b}q(\theta_{d})q(\beta_{k})q(\eta_{k})q(x_{b})$
(3)
For optimisation, minimising the divergence between the approximation
$q_{\phi}(\theta,\beta,\eta,x)$ and the true posterior is equivalent to
maximising the evidence lower bound (ELBO):
$ELBO=\mathbb{E}_{q_{\phi}}[\log p(\theta,\beta,\eta,x)+\log
p(y|\theta,\beta,\eta,x)-\log q_{\phi}(\theta,\beta,\eta,x)]$ (4)
The Poisson factorisation model is pre-trained using the algorithm of Gan et
al. (2015), and the result is used to initialise the variational parameters of
$\theta_{d}$ and $\beta_{k}$. Our final objective function is:
$Loss=ELBO+\lambda(L_{s}+L_{a})$ (5)
where $L_{s}$ and $L_{a}$ are the cross-entropy losses of sentiment
classification for the sampled documents and the reversed sampled documents,
respectively, and $\lambda$ is the weight balancing the two parts of the loss,
set to 100 in our experiments.
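As a sketch of how the parts of Equation (5) combine in one training step, with made-up stand-in numbers for the model outputs and one plausible reading of the target used for the reversed sample:

```python
import numpy as np

def cross_entropy(probs, label):
    """Cross-entropy loss for one example given class probabilities."""
    return -float(np.log(probs[label] + 1e-12))

# Hypothetical stand-ins for one training step's quantities.
neg_elbo = 152.3                    # term from the Poisson factorisation part
p_orig = np.array([0.1, 0.2, 0.7])  # classifier output for a sampled document
p_rev = np.array([0.6, 0.3, 0.1])   # output for its polarity-reversed counterpart
label = 2                           # gold rating class (here: positive)
rev_label = 0                       # reversed sample targets the opposite class

lam = 100.0                          # lambda: loss weight (the paper's setting)
L_s = cross_entropy(p_orig, label)       # sentiment loss, original sample
L_a = cross_entropy(p_rev, rev_label)    # adversarial loss, reversed sample
loss = neg_elbo + lam * (L_s + L_a)      # combined objective of Eq. (5)
```

The large $\lambda$ means the supervised and adversarial terms dominate the gradient relative to the variational term early in training.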
## 4 Experimental Setup
#### Datasets
We construct our dataset by retrieving reviews in the Beauty category from the
Amazon review corpus (http://jmcauley.ucsd.edu/data/amazon/) (He and McAuley,
2016). Each review is accompanied by a rating score (between 1 and 5), the
reviewer name, and product meta-data such as product ID, description, brand
and image. We use the product meta-data to relate a product with its
associated brand. By selecting only brands with relatively many and balanced
reviews, our final dataset contains a total of 78,322 reviews from 45 brands.
Reviews with a rating score of 1 or 2 are grouped as negative; those with a
score of 3 are neutral; and the remaining are positive. The statistics of our
dataset are shown in Table 1 (the detailed rating score distributions of
brands and their average ratings are given in Table A1 in the Appendix). We
can observe that our data is highly imbalanced, with positive reviews far
outnumbering negative and neutral ones.
Dataset | Amazon-Beauty Reviews
---|---
Documents per class (Neg / Neu / Pos) | 9,545 / 5,578 / 63,199
Brands | 45
Total #Documents | 78,322
Avg. Document Length | 9.7
Vocabulary size | $\sim 5000$
Table 1: Dataset statistics of reviews within the Amazon dataset under the
Beauty category.
#### Baselines
We compare the performance of our model with the following baselines:
* •
Joint Sentiment-Topic (JST) model (Lin and He, 2009), built on LDA, can
extract polarity-bearing topics from text provided that it is supplied with
word prior sentiment knowledge. In our experiments, the MPQA subjectivity
lexicon (https://mpqa.cs.pitt.edu/lexicons/) is used to derive the word prior
sentiment information.
* •
Scholar (Card et al., 2018), a neural topic model built on VAE. It allows the
incorporation of meta-information such as document class labels into the model
for training, essentially turning it into a supervised topic model.
* •
Text-Based Ideal Point (TBIP) model (Vafa et al., 2020), an unsupervised
Poisson factorisation model which can infer latent brand sentiment scores.
#### Parameter setting
Since documents are represented as bags-of-words, which results in the loss of
word ordering and structural linguistic information, frequent bigrams and
trigrams such as ‘ _without doubt_ ’ and ‘ _stopped working_ ’ are also used
as features for document representation construction. Tokens, i.e. $n$-grams
($n=\\{1,2,3\\}$), occurring fewer than two times are filtered out. In our
experiments, we set aside 10% of the reviews (7,826) as the test set and use
the remaining 70,436 as the training set. For hyperparameters, we set the
batch size to 1,024, the maximum number of training steps to 50,000, the
number of topics to 30, and the temperature $\tau$ in the Gumbel-Softmax
equation in Section 3.2 to 1. Since our dataset is highly imbalanced, we
balance the data in each mini-batch by oversampling. For a fair comparison, we
report two sets of results for the baseline models: one trained on the
original data, the other trained on training data balanced by oversampling
negative reviews. The latter results in an enlarged training set of 113,730
reviews.
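The per-mini-batch oversampling described above can be sketched as follows; the label array mirrors the dataset's positive skew but its counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical label array with a heavy positive skew (0=neg, 1=neu, 2=pos).
labels = np.array([2] * 800 + [0] * 120 + [1] * 80)

def balanced_batch(labels, batch_size=1024):
    """Draw a mini-batch with equal class counts by oversampling rare classes."""
    classes = np.unique(labels)
    per_class = batch_size // len(classes)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=per_class, replace=True)
        for c in classes
    ])
    rng.shuffle(idx)
    return idx

batch = balanced_batch(labels)
# Each class contributes 1024 // 3 = 341 examples, regardless of raw frequency.
```

Sampling with replacement is what makes this oversampling: the 80 neutral reviews are reused until the class quota is filled.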
## 5 Experimental Results
In this section, we will present the experimental results in comparison with
the baseline models in brand ranking, topic coherence and uniqueness measures,
and also present the qualitative evaluation of the topic extraction results.
We will further discuss the limitations of our model and outline future
directions.
### 5.1 Comparison with Existing Models
Model | Spearman’s corr | $p$-val | Kendall’s tau corr | $p$-val
---|---|---|---|---
JST | 0.241 | 0.111 | 0.180 | 0.082
JST* | 0.395 | 0.007 | 0.281 | 0.007
Scholar | -0.140 | 0.358 | -0.103 | 0.318
Scholar* | 0.050 | 0.743 | 0.046 | 0.653
TBIP | 0.361 | 0.016 | 0.264 | 0.012
BTM | 0.486 | 0.001 | 0.352 | 0.001
Table 2: Brand ranking results generated by various models on the test set.
We report the correlation coefficients corr and their associated two-sided
$p$-values for both Spearman’s correlation and Kendall’s tau. * indicates
models trained on balanced training data.
#### Brand Ranking
We report in Table 2 the brand ranking results generated by the various models
on the test set. Two commonly used evaluation metrics for ranking tasks,
Spearman’s correlation and Kendall’s tau, are used here; both penalise
inversions equally across the ranked list. TBIP and BTM infer each brand’s
associated polarity score automatically, which can be used directly for
ranking. For JST and Scholar, we derive the polarity score of a brand by
aggregating the sentiment probabilities of its associated review documents and
then normalising over the total number of brand-related reviews. It can be
observed from Table 2 that JST outperforms both Scholar and TBIP, and that
balancing the distribution of sentiment classes improves the performance of
JST and Scholar. Overall, BTM gives the best results, showing the
effectiveness of adversarial learning.
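Both ranking metrics can be computed directly from a pair of score lists; the brand scores below are hypothetical, and the implementations assume tie-free inputs for simplicity:

```python
import numpy as np

def spearman(x, y):
    """Spearman's rho for tie-free lists: Pearson correlation of the ranks."""
    def ranks(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)
        return r
    return float(np.corrcoef(ranks(x), ranks(y))[0, 1])

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs."""
    n = len(x)
    s = sum(np.sign((x[i] - x[j]) * (y[i] - y[j]))
            for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

# Hypothetical inferred brand-polarity scores vs. gold average ratings.
pred = np.array([0.9, -0.3, 0.1, 0.6, -0.8])
gold = np.array([4.6, 2.8, 3.3, 4.1, 1.9])
print(spearman(pred, gold), kendall_tau(pred, gold))  # both 1.0: identical rankings
```

Because both metrics depend only on the ordering, any monotone rescaling of the inferred brand scores leaves the evaluation unchanged.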
#### Topic Coherence and Uniqueness
Here we choose the top 10 words of each topic to calculate the context-vector-
based topic coherence score Röder et al. (2015). For the topics generated by
TBIP and BTM, we can vary the topic polarity score to generate positive,
negative and neutral subtopics, as shown in Table 4. We would like to achieve
high topic coherence, but at the same time maintain a good level of topic
uniqueness across the sentiment subtopics, since they express different
polarities. Therefore, we additionally consider topic uniqueness (Nan et al.,
2019) to measure word redundancy among sentiment subtopics,
$TU=\frac{1}{LK}\sum_{k=1}^{K}\sum_{l=1}^{L}\frac{1}{cnt(l,k)}$, where
$cnt(l,k)$ denotes the number of times word $l$ appears across the _positive_
, _neutral_ and _negative_ subtopics under the same topic $k$. We can see from
Table 3 that both TBIP and BTM achieve higher coherence scores than JST and
Scholar. TBIP slightly outperforms BTM on topic coherence, but has a lower
topic uniqueness score. As shown in Table 4, topics extracted by TBIP contain
words that significantly overlap across sentiment subtopics. Scholar gives the
highest topic uniqueness score; however, it cannot separate topics with
different polarities. Overall, our proposed BTM achieves the best balance
between topic coherence and topic uniqueness.
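The $TU$ computation can be sketched on a toy example, treating the three sentiment subtopics of a single parent topic as the sets being compared; the words are taken loosely from Table 4:

```python
from collections import Counter

# Three sentiment subtopics (top-L words each) under one parent topic.
subtopics = {
    'positive': ['soft', 'pretty', 'brush', 'cheap'],
    'neutral':  ['cheap', 'set', 'feel', 'made'],
    'negative': ['cheap', 'plastic', 'flimsy', 'break'],
}

# cnt(l, k): how many of the subtopics a word appears in.
cnt = Counter(w for words in subtopics.values() for w in words)

L = 4               # words per subtopic
K = len(subtopics)  # number of subtopics compared (the three polarities)
TU = sum(1.0 / cnt[w] for words in subtopics.values() for w in words) / (L * K)
print(round(TU, 3))  # 0.833: 'cheap' in all three subtopics lowers uniqueness
```

$TU=1$ would mean no word is shared between subtopics, which is why a high score alongside high coherence indicates well-separated polarity-bearing topics.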
Model | Topic Coherence | Topic Uniqueness
---|---|---
JST | 0.1423 | 0.7699
JST* | 0.1317 | 0.7217
Scholar | 0.1287 | 0.9640
Scholar* | 0.1196 | 0.9256
TBIP | 0.1525 | 0.8647
BTM | 0.1407 | 0.9033
Table 3: Topic coherence/uniqueness measures of results generated by various models.
Topic Label | Sentiment | Top Words
---|---|---
BTM
Brush | Positive | brushes, cheap, came, pay, pretty, brush, okay, case, glue, soft
Neutral | cheap, feel, set, buy, cheaply made, feels, made, worth, spend, bucks
Negative | plastic, made, cheap, parts, feels, flimsy, money, break, metal, bucks
Oral Care | Positive | teeth, taste, mouth, strips, crest, mouthwash, tongue, using, white, rinse
Neutral | teeth, pain, mouth, strips, using, taste, used, crest, mouthwash, white
Negative | pain, issues, causing, teeth, caused, removing, wore, burn, little, cause
Duration | Positive | stay, pillow, comfortable, string, tub, mirror, stick, back, months
Neutral | months, year, lasted, stopped working, sorry, n, worked, working, u, last
Negative | months, year, last, lasted, battery, warranty, stopped working, died, less
TBIP
Brush | Positive | love, favorite, products, definitely recommend, forever, carry, brushes
Neutral | love, brushes, cute, favorite, definitely recommend, soft, cheap
Negative | love, brushes, cute, soft, cheap, set, case, quality price, buy, bag
Oral Care | Positive | teeth, strips, crest, mouth, mouthwash, taste, white, whitening, sensitivity
Neutral | teeth, strips, mouth, crest, taste, work, pain, using, white, mouthwash
Negative | teeth, strips, mouth, crest, taste, work, pain, using, white, mouthwash
Duration | Positive | great, love shampoo, great price, great product, lasts long time
Neutral | great, great price, lasts long time, great product, price, works expected
Negative | quality, great, fast shipping, great price, low price, price quality, hoped
Table 4: Example topics generated by BTM and TBIP on Amazon reviews. The
topic labels are assigned by manual inspection. Positive words are highlighted
with the blue colour, while negative words are marked with the red colour. BTM
generates better-separated sentiment topics compared to TBIP.
### 5.2 Example Topics Extracted from Amazon Reviews
We illustrate some representative topics generated by TBIP and BTM in Table 4.
It is worth noting that we can generate a smooth transition of topics by
varying the topic polarity score gradually, as shown in Figure 1. Due to space
limitations, we only show topics where the topic polarity score takes the
value of $-1$ (_negative_), $0$ (_neutral_) or $1$ (_positive_). It can be
observed that TBIP fails to separate subtopics bearing different sentiments.
For example, all the subtopics under ‘Duration’ express a positive polarity.
On the contrary, BTM produces better-separated sentiment subtopics. For
‘Duration’, we see positive words such as ‘ _comfortable_ ’ under the positive
subtopic, and words such as ‘ _stopped working_ ’ clearly expressing negative
sentiment under the negative subtopic. Moreover, the top words under different
sentiment subtopics largely overlap with each other for TBIP, whereas we
observe a more varied vocabulary in the sentiment subtopics of BTM.
TBIP was originally proposed for political speeches, in which speakers holding
different ideal points tend to use different words to express their stance on
the same topic. This is not the case in Amazon reviews, where the same word
can appear in both positive and negative reviews. For example, ‘ _cheap_ ’
could convey a positive polarity for lower-priced products, expressing value
for money, but it could also bear a negative polarity, implying poor quality.
As such, it is difficult for TBIP to separate words under different polarity-
bearing topics. On the contrary, with the incorporation of adversarial
learning, our proposed BTM is able to extract different sets of words
co-occurring with ‘ _cheap_ ’ under topics with different polarities, thus
accurately capturing the contextual polarity of the word. For example, ‘
_cheap_ ’ appears in both the positive and negative subtopics of ‘Brush’ in
Table 4, but we find other co-occurring words such as ‘ _pretty_ ’ and ‘
_soft_ ’ under the positive subtopic, and ‘ _plastic_ ’ and ‘ _flimsy_ ’ under
the negative subtopic, which help to infer the contextual polarity of ‘
_cheap_ ’.
TBIP also appears to have difficulty in dealing with highly imbalanced data.
In our constructed dataset, positive reviews significantly outnumber both
negative and neutral ones. For many topics extracted by TBIP, all the
sentiment subtopics convey a positive polarity. One example is the ‘Duration’
topic, where words such as ‘ _great_ ’ and ‘ _great price_ ’ appear in the
positive, negative and neutral subtopics alike. With the incorporation of
supervision signals such as document-level sentiment labels, our proposed BTM
is able to derive better-separated polarised topics.
As shown in the example in Figure 1, if we vary the polarity score of a topic
from $-1$ to $1$, we observe a smooth transition of its associated topic
words, gradually moving from negative to positive. Under the topic (_shaver_)
shown in this figure, four brand names appear: Remington, Norelco, Braun and
Lectric Shave. The first three brands can be found in our dataset. Remington
appears on the negative side and indeed has the lowest review score among
these three brands; Norelco appears most frequently and is indeed a popular
brand with mixed reviews; and Braun gets the highest score of the three, which
is also consistent with the observations in our data. Another interesting
finding concerns the brand Lectric Shave, which is not among the brands in our
dataset; nevertheless, we can predict from the results that it is a product
with relatively good reviews.
### 5.3 Limitations and Future work
Our model requires a vanilla Poisson factorisation model to initialise the
topic distributions before the adversarial learning mechanism of BTM is
applied to further split topics based on varying polarities. Essentially,
topics generated by a vanilla Poisson factorisation model can be considered
parent topics, while the polarity-bearing subtopics generated by BTM can be
considered child topics. Ideally, we would like the parent topics to be either
neutral or carrying a mixed sentiment, which would better facilitate the
learning of polarised subtopics. When parent topics carry either strongly
positive or strongly negative sentiment signals, BTM fails to produce
polarity-varying subtopics. One possible remedy is to filter out topics with
strong polarities at an earlier stage: for example, topic labelling Bhatia et
al. (2016) could be employed to obtain a rough estimate of the initial topic
polarities, and these labels could in turn be used to filter out topics
carrying strong sentiment polarities.
Although the adversarial mechanism tends to be robust to class imbalance, the
disproportion of available reviews with different polarities could still
hinder model performance. One promising approach compatible with the BTM
adversarial mechanism would be to decouple representation learning from
classification, as suggested in Kang et al. (2020), preserving the original
data distribution used by the model to estimate the brand scores.
## 6 Conclusion
In this paper, we presented the Brand-Topic Model (BTM), a probabilistic model
which is able to generate polarity-bearing topics of commercial brands.
Compared to other topic models, BTM infers real-valued brand-associated
sentiment scores and extracts fine-grained sentiment topics which vary
smoothly over a continuous range of polarity scores. It builds on the Poisson
factorisation model, combining it with an adversarial learning mechanism to
induce better-separated polarity-bearing topics. Experimental evaluation on
Amazon reviews against several baselines shows an overall improvement of topic
quality in terms of coherence, uniqueness and separation of polarised topics.
## Acknowledgements
This work is funded by the EPSRC (grant no. EP/T017112/1, EP/V048597/1). YH is
supported by a Turing AI Fellowship funded by the UK Research and Innovation
(UKRI) (grant no. EP/V020579/1).
## References
* Barry et al. (2018) Adam E. Barry, Danny Valdez, Alisa A. Padon, and Alex M. Russel. 2018. Alcohol advertising on twitter—a topic model. _American Journal of Health Education_ , pages 256–263.
* Bhatia et al. (2016) Shraey Bhatia, Jey Han Lau, and Timothy Baldwin. 2016. Automatic labelling of topics with neural embeddings. In _Proceedings of the 26th International Conference on Computational Linguistics_ , pages 953–963.
* Blei et al. (2017) David M Blei, Alp Kucukelbir, and Jon D McAuliffe. 2017. Variational inference: A review for statisticians. _Journal of the American Statistical Association_ , 112(518):859–877.
* Blei et al. (2003) David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. _Journal of Machine Learning Research_ , 3(2003):993–1022.
* Bouchacourt et al. (2018) Diane Bouchacourt, Ryota Tomioka, and Sebastian Nowozin. 2018. Multi-level variational autoencoder: Learning disentangled representations from grouped observations. In _The 32nd AAAI Conference on Artificial Intelligence_ , pages 2095–2102.
* Card et al. (2018) Dallas Card, Chenhao Tan, and Noah A. Smith. 2018. Neural models for documents with metadata. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics_ , pages 2031–2040.
* Chaidaroon and Fang (2017) Suthee Chaidaroon and Yi Fang. 2017. Variational deep semantic hashing for text documents. In _Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval_ , pages 75–84.
* Doyle and Elkan (2009) G. Doyle and C. Elkan. 2009. Financial topic models. In _NIPS Workshop for Applications for Topic Models: Text and Beyond_.
* Gan et al. (2015) Zhe Gan, Changyou Chen, Ricardo Henao, David E. Carlson, and Lawrence Carin. 2015. Scalable deep poisson factor analysis for topic modeling. In _Proceedings of the 32nd International Conference on Machine Learning_ , volume 37, pages 1823–1832.
* Gao et al. (2017) Li Gao, Jia Wu, Chuan Zhou, and Yue Hu. 2017. Collaborative dynamic sparse topic regression with user profile evolution for item recommendation. In _Proceedings of the 31st AAAI Conference on Artificial Intelligence_ , pages 1316–1322.
* Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In _Advances in Neural Information Processing Systems 27_ , pages 2672–2680.
* Gui et al. (2020) Lin Gui, Leng Jia, Jiyun Zhou, Ruifeng Xu, and Yulan He. 2020. Multi-task mutual learning for joint sentiment classification and topic detection. In _IEEE Transactions on Knowledge and Data Engineering_ , pages 1–1.
* Guo et al. (2017) Yunhui Guo, Congfu Xu, Hanzhang Song, and Xin Wang. 2017. Understanding users’ budgets for recommendation with hierarchical poisson factorization. In _Proceedings of the 26th International Joint Conference on Artificial Intelligence_ , pages 1781–1787.
* He (2017) Ruidan He. 2017. An unsupervised neural attention model for aspect extraction. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics_ , pages 388–397.
* He et al. (2018) Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018. Effective attention modeling for aspect-level sentiment classification. In _Proceedings of the 27th International Conference on Computational Linguistics_ , pages 1121–1131.
* He and McAuley (2016) Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In _Proceedings of the 25th International Conference on World Wide Web_.
* Hu et al. (2020) Xuemeng Hu, Rui Wang, Deyu Zhou, and Yuxuan Xiong. 2020. Neural topic modeling with cycle-consistent adversarial training. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing_.
* Huang et al. (2018) Minghui Huang, Yanghui Rao, Yuwei Liu, Haoran Xie, and Fu Lee Wang. 2018. Siamese network-based supervised topic modeling. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 4652–4662.
* Iwata et al. (2009) Tomoharu Iwata, Shinji Watanabe, Takeshi Yamada, and Naonori Ueda. 2009. Topic tracking model for analyzing consumer purchase behavior. In _Proceedings of the 21st International Joint Conference on Artificial Intelligence_ , pages 1427–1432.
* Jang et al. (2017) Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In _International Conference on Learning Representations_.
* Jiang et al. (2017) Haixin Jiang, Rui Zhou, Limeng Zhang, Hua Wang, and Yanchun Zhang. 2017. A topic model based on poisson decomposition. In _Proceedings of the 2017 ACM on Conference on Information and Knowledge Management_ , pages 1489–1498.
* John et al. (2019) Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In _Proceedings of the 57th Conference of the Association for Computational Linguistics_ , pages 424–434.
* Joo et al. (2020) Weonyoung Joo, Dongjun Kim, Seungjae Shin, and Il-Chul Moon. 2020. Generalized gumbel-softmax gradient estimator for various discrete random variables. _Computing Research Repository_ , arXiv:2003.01847v2.
* Jordan et al. (1999) Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. 1999. An introduction to variational methods for graphical models. _Machine Learning_ , 37(2):183–233.
* Kang et al. (2020) Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. 2020. Decoupling representation and classifier for long-tailed recognition. In _the 8th International Conference on Learning Representations_.
* Kingma and Welling (2014) Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational bayes. In _2nd International Conference on Learning Representations_.
* Kuo et al. (2018) Li-Yen Kuo, Chung-Kuang Chou, and Ming-Syan Chen. 2018. Personalized ranking on poisson factorization. In _Proceedings of the 2018 SIAM International Conference on Data Mining_ , pages 720–728.
* Lin and He (2009) Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In _Proceedings of the 18th ACM Conference on Information and Knowledge Management_ , pages 375–384.
* Masada and Takasu (2018) Tomonari Masada and Atsuhiro Takasu. 2018. Adversarial learning for topic models. In _Proceedings of the 14th International Conference on Advanced Data Mining and Applications_ , volume 11323 of _Lecture Notes in Computer Science_ , pages 292–302.
* Nan et al. (2019) Feng Nan, Ran Ding, Ramesh Nallapati, , and Bing Xiang. 2019. Topic modeling with wasserstein autoencoders. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , page 6345–6381.
* Röder et al. (2015) Michael Röder, Andreas Both, and Alexander Hinneburg. 2015. Exploring the space of topic coherence measures. In _The 8th ACM International Conference on Web Search and Data Mining_ , pages 399–408.
* de Souza da Silva et al. (2017) Eliezer de Souza da Silva, Helge Langseth, and Heri Ramampiaro. 2017. Content-based social recommendation with poisson matrix factorization. In _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_ , volume 10534 of _Lecture Notes in Computer Science_ , pages 530–546.
* Sønderby et al. (2016) Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. 2016. Ladder variational autoencoders. In _The Annual Conference on Neural Information Processing Systems_ , pages 3738–3746.
* Srivastava and Sutton (2017) Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. In _International Conference on Learning Representations_.
* Su et al. (2019) Yijun Su, Xiang Li, Wei Tang, Daren Zha, Ji Xiang, and Neng Gao. 2019. Personalized point-of-interest recommendation on ranking with poisson factorization. In _International Joint Conference on Neural Networks_ , pages 1–8.
* Vafa et al. (2020) Keyon Vafa, Suresh Naidu, and David M. Blei. 2020. Text-based ideal points. In _Proceedings of the 2020 Conference of the Association for Computational Linguistics_ , pages 5345–5357.
* Wainwright and Jordan (2008) Martin J Wainwright and Michael I Jordan. 2008. Graphical models, exponential families, and variational inference. _Foundations and Trends in Machine Learning_ , 1(1-2):1–305.
* Wang et al. (2020) Rui Wang, Xuemeng Hu, Deyu Zhou, Yulan He, Yuxuan Xiong, Chenchen Ye, and Haiyang Xu. 2020. Neural topic modeling with bidirectional adversarial training. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 340–350.
* Wang et al. (2019) Rui Wang, Deyu Zhou, and Yulan He. 2019. ATM: Adversarial-neural topic model. _Information Processing & Management_, 56(6):102098.
* Xie et al. (2015) Pengtao Xie, Yuntian Deng, and Eric P. Xing. 2015. Diversifying restricted boltzmann machine for document modeling. In _Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , pages 1315–1324.
* Zhang et al. (2015) Hao Zhang, Gunhee Kim, and Eric P. Xing. 2015. Dynamic topic modeling for monitoring market competition from online text and image data. In _Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , pages 1425–1434.
## Appendix A Appendix
Brand | Average Rating | Number of Reviews | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---|---|---
General | 3.478 | 1103 | 236 | 89 | 144 | 180 | 454
VAGA | 3.492 | 1057 | 209 | 116 | 133 | 144 | 455
Remington | 3.609 | 1211 | 193 | 111 | 149 | 282 | 476
Hittime | 3.611 | 815 | 143 | 62 | 110 | 154 | 346
Crest | 3.637 | 1744 | 352 | 96 | 159 | 363 | 774
ArtNaturals | 3.714 | 767 | 138 | 54 | 65 | 143 | 368
Urban Spa | 3.802 | 1279 | 118 | 105 | 211 | 323 | 522
GiGi | 3.811 | 1047 | 151 | 79 | 110 | 184 | 523
Helen Of Troy | 3.865 | 3386 | 463 | 20 | 325 | 472 | 1836
Super Sunnies | 3.929 | 1205 | 166 | 64 | 126 | 193 | 666
e.l.f | 3.966 | 1218 | 117 | 85 | 148 | 241 | 627
AXE PW | 4.002 | 834 | 85 | 71 | 55 | 169 | 454
Fiery Youth | 4.005 | 2177 | 208 | 146 | 257 | 381 | 1185
Philips Norelco | 4.034 | 12427 | 1067 | 818 | 1155 | 2975 | 6412
Panasonic | 4.048 | 2473 | 276 | 158 | 179 | 419 | 1441
SilcSkin | 4.051 | 710 | 69 | 49 | 58 | 135 | 399
Rimmel | 4.122 | 911 | 67 | 58 | 99 | 160 | 527
Avalon Organics | 4.147 | 1066 | 111 | 52 | 82 | 145 | 676
L’Oreal Paris | 4.238 | 973 | 88 | 40 | 72 | 136 | 651
OZ Naturals | 4.245 | 973 | 79 | 43 | 74 | 142 | 635
Andalou Naturals | 4.302 | 1033 | 58 | 57 | 83 | 152 | 683
Avalon | 4.304 | 1344 | 132 | 62 | 57 | 108 | 985
TIGI | 4.319 | 712 | 53 | 32 | 42 | 93 | 492
Neutrogena | 4.331 | 1200 | 91 | 55 | 66 | 142 | 846
Dr. Woods | 4.345 | 911 | 60 | 42 | 74 | 83 | 652
Gillette | 4.361 | 2576 | 115 | 94 | 174 | 555 | 1638
Jubujub | 4.367 | 1328 | 53 | 42 | 132 | 238 | 863
Williams | 4.380 | 1887 | 85 | 65 | 144 | 347 | 1246
Braun | 4.382 | 2636 | 163 | 85 | 147 | 429 | 1812
Italia-Deluxe | 4.385 | 1964 | 96 | 73 | 134 | 336 | 1325
Booty Magic | 4.488 | 728 | 28 | 7 | 48 | 144 | 501
Greenvida | 4.520 | 1102 | 55 | 33 | 51 | 108 | 855
Catrice | 4.527 | 990 | 49 | 35 | 34 | 99 | 773
NARS | 4.535 | 1719 | 60 | 36 | 107 | 237 | 1279
Astra | 4.556 | 4578 | 155 | 121 | 220 | 608 | 3474
Heritage Products | 4.577 | 837 | 25 | 18 | 52 | 96 | 646
Poppy Austin | 4.603 | 1079 | 36 | 31 | 38 | 115 | 859
Aquaphor | 4.633 | 2882 | 100 | 58 | 106 | 272 | 2346
KENT | 4.636 | 752 | 23 | 8 | 42 | 74 | 605
Perfecto | 4.801 | 4862 | 44 | 36 | 81 | 523 | 4178
Citre Shine | 4.815 | 713 | 17 | 5 | 3 | 43 | 645
Bath & Body Works | 4.819 | 2525 | 60 | 27 | 20 | 95 | 2323
Bonne Bell | 4.840 | 1010 | 22 | 9 | 6 | 35 | 938
Yardley | 4.923 | 788 | 3 | 4 | 3 | 31 | 747
Fruits & Passion | 4.932 | 776 | 3 | 2 | 3 | 29 | 739
Overall | 4.259 | 78322 | 5922 | 3623 | 5578 | 12322 | 50877
Table A1: Brand Statistics. The table shows the average rating score, the
total number of associated reviews, and the distribution of the number of
reviews for ratings ranging between 1 star to 5 stars, for each of the 45
brands.
# Mask-based Data Augmentation for Semi-supervised Semantic Segmentation
###### Abstract
Semantic segmentation using convolutional neural networks (CNN) is a crucial
component in image analysis. Training a CNN to perform semantic segmentation
requires a large amount of labeled data, where the production of such labeled
data is both costly and labor intensive. Semi-supervised learning algorithms
address this issue by utilizing unlabeled data and so reduce the amount of
labeled data needed for training. In particular, data augmentation techniques
such as CutMix and ClassMix generate additional training data from existing
labeled data. In this paper we propose a new approach for data augmentation,
termed ComplexMix, which incorporates aspects of CutMix and ClassMix with
improved performance. The proposed approach has the ability to control the
complexity of the augmented data while attempting to be semantically-correct
and address the tradeoff between complexity and correctness. The proposed
ComplexMix approach is evaluated on a standard dataset for semantic
segmentation and compared to other state-of-the-art techniques. Experimental
results show that our method yields improvement over state-of-the-art methods
on a standard dataset for semantic image segmentation.
Index Terms— Semi-supervised learning, semantic segmentation, data
augmentation, ComplexMix
Fig. 1: Illustration of our proposed approach to semi-supervised segmentation
via mask-based data augmentation. Our approach uses the mean-teacher strategy.
The top and bottom branches in this network belong to the teacher, which is
trained to produce semantic segmentation predictions, whereas the middle
branch belongs to the student, which attempts to match the mixed predictions
of the teacher with its own predictions on the mixed image input.
## 1 Introduction
Semantic segmentation is concerned with assigning a semantic label to pixels
belonging to certain objects in an image. Semantic segmentation is fundamental
to image analysis and serves as a high-level pre-processing step to support
many applications including scene understanding and autonomous driving. CNN-
based fully-supervised approaches have achieved remarkable results in semantic
segmentation of standard datasets. Generally, when sufficient labeled data is
available, training a state-of-the-art network can easily achieve high
accuracy. Labeling a large set of samples is expensive and time consuming and
so the goal in semi-supervised semantic segmentation is to use a small labeled
set and a large unlabeled set to train the network thus reducing the amount of
labeled data needed.
Consistency regularization has been applied to semi-supervised classification
[1, 2, 3] yielding significant progress in the past few years. The key idea
behind consistency regularization is to apply various data augmentations to
encourage consistent predictions for unlabeled samples. Its effectiveness
relies on the observation that the decision boundary of a classifier usually
lies in low density regions and so can benefit from clusters formed by
augmented data [4]. While consistency regularization has been successfully
employed for classification tasks, applying traditional data augmentation
techniques to semantic segmentation has been shown [5] to be less effective as
semantic segmentation may not exhibit low density regions around class
boundaries. Several approaches have been developed to address this issue by
applying augmentation on encoded space instead of input space [6], or by
enforcing consistent predictions for unsupervised mixed samples as in CutMix
[7, 5], CowMix [8], and ClassMix [9].
The method proposed in this paper belongs to the category of enforcing
consistent predictions for unsupervised mixed samples. We propose a more
effective mask-based augmentation strategy for segmentation maps, termed
ComplexMix, to address semi-supervised semantic segmentation. We hypothesize
that there is added value in increasing the complexity of semantically correct
augmentation and so attempt to produce complex augmentation which is
semantically correct. We do so by splitting the segmentation map of one image
into several squares of identical size and predicting semantic labels in each
square based on the current model. Following the augmentation strategy of
ClassMix [9], we then select in each square half of the predicted classes and
paste them onto the augmented image to form a new augmentation that respects
semantic boundaries. The complexity of the augmentation is controlled by the
number of squares generated in the initial split. Experimental evaluation
results demonstrate that the proposed ComplexMix augmentation is superior to
random augmentations or simple semantically correct augmentation techniques.
The key contribution of this paper is in employing consistency regularization
to semantic segmentation through a novel data augmentation strategy for
producing complex and semantically-correct data from unlabeled examples. The
proposed approach has the ability to control the complexity of the augmented
data and so balance a tradeoff between complexity and correctness. Experimental
evaluation results on a standard dataset demonstrate improved performance over
state-of-the-art techniques.
## 2 Related work
Semi-supervised semantic segmentation has been studied using different
mechanisms, including generative adversarial learning [10, 11], pseudo
labeling [12, 13], and consistency regularization [5, 14, 9].
Generative adversarial learning. GAN-based adversarial learning has been
applied to semi-supervised semantic segmentation in different ways. Mittal et
al. [11] use two network branches to link semi-supervised classification with
semi-supervised segmentation, including self-training, and so reduce both low-
and the high-level artifacts typical when training with few labels. In [10],
fully convolutional discriminator enables semi-supervised learning through
discovering trustworthy regions in predicted results of unlabeled images,
thereby providing additional supervisory signal.
Pseudo labeling. Pseudo labeling is a commonly used technique for semi-
supervised learning in semantic segmentation. Feng et al. [12] exploit inter-
model disagreement based on prediction confidence to construct a dynamic loss
which is robust against pseudo label noise, and so enable it to extend pseudo
labeling to class-balanced curriculum learning. Chen et al. [13] predict
pseudo-labels for unlabeled data and train subsequent models with both
manually-annotated and pseudo-labeled data.
Consistency regularization. Consistency regularization works by enforcing a
learned model to produce robust predictions for perturbations of unlabeled
samples. Consistency regularization for semantic segmentation was first
successfully used for medical imaging but has since been applied to other
domains. French et al. [5] attribute the challenge in semi-supervised semantic
segmentation to cluster assumption violations, and propose a data augmentation
technique termed CutMix [7] to solve it. Ouali et al. [6] apply perturbations
to the output of an encoder to preserve the cluster assumption. Olsson et al.
[9] propose a similar technique based on predictions by a segmentation network
to construct mixing, thus encouraging consistency over highly varied mixed
samples while respecting semantic boundaries in the original images. Our
proposed method incorporates ideas from [5] and [9] to enforce a tradeoff
between complexity and correctness and avoid the problem where large objects
dominate the mixing.
## 3 Proposed semi-supervised learning approach
In this section, we present our proposed approach for addressing semi-
supervised semantic segmentation. We introduce the proposed augmentation
strategy termed ComplexMix, discuss the loss functions used to guide the model
parameter estimate, and provide details of the training procedure.
### 3.1 ComplexMix for semantic segmentation
Mean-teacher framework. The proposed approach follows commonly employed state-
of-the-art semi-supervised learning techniques [5, 8, 9] by using the mean
teacher framework [15], where the student and teacher networks have identical
structure. In this approach the student network is updated by training whereas
the teacher network is updated by blending its parameters with that of the
student network. Our approach follows interpolation consistency training (ICT)
[16] by feeding an input image pair to the teacher network and a blended image
to the student network. We then enforce correspondence between student
predictions on blended input and blended teacher predictions. An illustration
of this framework is shown in Figure 1. In this figure, the student and
teacher segmentation networks are denoted by $f_{\theta}$ and $g_{\phi}$,
respectively, where $\theta$ and $\phi$ are the network parameters. The input
image pair to be mixed is denoted by $u_{a}$ and $u_{b}$, and the mixed image
is denoted by $u_{ab}$. The blending mask used to generate the mix is denoted
by $M$. To generate the mask $M$ the teacher provides predictions
$\hat{y}_{a}=g_{\phi}(u_{a})$ and $\hat{y}_{b}=g_{\phi}(u_{b})$. The
teacher’s mixed prediction for $u_{ab}$ is denoted by $\hat{y}_{ab}$ whereas
the student’s prediction for $u_{ab}$ is given by $f_{\theta}(u_{ab})$. The
consistency loss term enforcing correspondence between student and blended
teacher predictions is denoted by $\mathcal{L}_{u}$. All the data used in this
figure is unsupervised.
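The teacher update by parameter blending follows the mean-teacher recipe [15], an exponential moving average of the student parameters. A minimal sketch; the decay rate alpha = 0.99 is an illustrative assumption, not a value stated in the paper:

```python
def ema_update(phi, theta, alpha=0.99):
    """Blend teacher parameters phi with student parameters theta
    (exponential moving average, mean-teacher style)."""
    return [alpha * p + (1.0 - alpha) * t for p, t in zip(phi, theta)]
```

After each student gradient step, the teacher is refreshed via `phi = ema_update(phi, theta)`, so it tracks a smoothed trajectory of the student.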
Mixing strategy. Producing a mix of images for training the student is
possible in different ways. The proposed approach uses a mask $M$ to achieve
this. Given a pair of images ($u_{a}$, $u_{b}$) and a mask $M$, a portion of
$u_{a}$ defined by $M$ can be cut from $u_{a}$ and pasted onto $u_{b}$ to
create a mixed image $u_{ab}=M\odot u_{a}+(1-M)\odot u_{b}$. Likewise,
semantic labels $\hat{y}_{a}$ and $\hat{y}_{b}$ could be mixed using $M$ to
produce the mixed semantic label $\hat{y}_{ab}$. Different approaches for
generating the mask $M$ exist. The proposed ComplexMix strategy combines ideas
from CutMix and ClassMix to generate $M$. In CutMix [7, 5] the mask $M$ is a
random rectangular region with area covering half of the image. In ClassMix
[9], the mask $M$ is generated based on semantic labels produced by a network.
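The mask-based blend can be written directly from the formula above; a minimal NumPy sketch for color images, with pseudo-labels mixable the same way:

```python
import numpy as np

def mix(u_a, u_b, M):
    """Compute u_ab = M * u_a + (1 - M) * u_b for (H, W, 3) images
    and a binary (H, W) mask M."""
    M3 = M[..., None].astype(u_a.dtype)  # broadcast mask over color channels
    return M3 * u_a + (1.0 - M3) * u_b
```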
The motivation for the proposed ComplexMix strategy is to create complex and
semantically-correct mixing masks $M$. Given two images $u_{a}$ and $u_{b}$
with corresponding semantic labels $\hat{y}_{a}$ and $\hat{y}_{b}$, we split
$u_{a}$ and its corresponding semantic label $\hat{y}_{a}$ into $p\times p$
equal size blocks. In each block we randomly select $C/2$ classes (where $C$
is the total number of classes) and use the pixels belonging to the selected
classes (based on $\hat{y}_{a}$) to form the mask $M$.
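A minimal NumPy sketch of this mask construction, assuming $H$ and $W$ are divisible by $p$ and that half of the classes present in each block are selected (our reading of the selection rule, in the spirit of ClassMix):

```python
import numpy as np

def complexmix_mask(pred_a, p, rng=None):
    """Build a binary ComplexMix mask from the predicted label map of u_a.

    pred_a: (H, W) integer map of predicted class labels.
    p: number of blocks per side; H and W are assumed divisible by p.
    """
    rng = np.random.default_rng() if rng is None else rng
    H, W = pred_a.shape
    bh, bw = H // p, W // p
    mask = np.zeros((H, W), dtype=np.float32)
    for i in range(p):
        for j in range(p):
            block = pred_a[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            present = np.unique(block)            # classes in this block
            k = max(1, len(present) // 2)         # select half of them
            chosen = rng.choice(present, size=k, replace=False)
            mask[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = np.isin(block, chosen)
    return mask
```

Increasing `p` shrinks the blocks, so the selected regions form a more granular, more complex mask.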
The parameter $p$ is used to control the complexity of the mask. With a higher
value of $p$ there are more blocks and so we have a more granular mixing with
higher complexity. However, because the boundaries of blocks are arbitrary
they introduce errors into the mixing. There is, thus, a tradeoff between
complexity and correctness that needs to be balanced by the selection of the
parameter $p$. In our experiments we treat $p$ as a hyper parameter and
determine its value empirically. The selection of the parameter $p$ may depend
on the size of objects in the image (a larger $p$ is possible for small
objects). Subsequently, to account for different scales of objects in the
image, instead of a fixed value for $p$ we select it randomly during each
iteration from a possible set of values ($[4,16,64,128]$ in our experiments).
There are three key benefits to the proposed ComplexMix strategy: preventing
large objects from dominating the blended image, forcing mixed objects to have
more complex boundaries, and controlling the tradeoff between complexity and
correctness.
Algorithm. The student model $f_{\theta}$ is initially trained based on
labeled data using a supervised segmentation loss. The teacher model is then
initialized by copying the student network weights. Note that the student and
teacher networks are identical. We denote the supervised training set by
$S=\{(s,y)\,|\,s\in\mathbb{R}^{H\times W\times 3},\,y\in\{1,\dots,C\}^{H\times W}\}$, where
each sample $s$ is an $H\times W$ color image associated with a
ground-truth $C$-class segmentation map $y$. Each entry $y^{(i,j)}$ takes a class
label from the finite set $\{1,2,\dots,C\}$ or, equivalently, a one-hot vector
$[y^{(i,j,c)}]_{c}$. Similarly, we denote the unlabeled set by
$U=\{u\,|\,u\in\mathbb{R}^{H\times W\times 3}\}$.
After the initial training of the student using supervised data, the training
continues using both supervised and unsupervised data. Two images $u_{a}$ and
$u_{b}$ are randomly sampled from the unlabeled dataset $U$ and fed into the
teacher’s model $g_{\phi}$ to produce pseudo-labels (segmentation map
predictions) $\hat{y}_{a}=g_{\phi}(u_{a})$ and $\hat{y}_{b}=g_{\phi}(u_{b})$.
To improve performance we use a common self-supervised-learning (SSL) method
where pseudo-labels are assigned to unlabeled samples only when the
confidence in the label is sufficiently high. The pseudo-labels are then used
to produce a mixing mask $M$ and a mixed image $u_{ab}$ with a corresponding
pseudo-label $\hat{y}_{ab}$ as described in Section 3.1. The pseudo label
$\hat{y}_{ab}$ is used to train the student through the unsupervised loss term
$\mathcal{L}_{u}$. In addition, supervised images $s$ are selected from the
labeled set $S$ and used to train the student through the supervised loss term
$\mathcal{L}_{s}$.
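The confidence gating of pseudo-labels can be sketched as below; the threshold value `tau` is an illustrative assumption, since the paper does not state one:

```python
import numpy as np

def confident_pseudo_labels(probs, tau=0.95):
    """Keep the argmax label only where the teacher's confidence exceeds tau;
    low-confidence pixels receive the ignore index -1.

    probs: (H, W, C) or (N, H, W, C) softmax output of the teacher.
    """
    labels = probs.argmax(axis=-1)
    labels[probs.max(axis=-1) < tau] = -1  # ignored by the unsupervised loss
    return labels
```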
Table 1: Evaluation results showing mean IoU in percent for different portions of the data with labels. The different columns show the fraction of labeled data used in training. The symbol “-” indicates the result was not reported in the reference paper.

Group | Method | 1/30 | 1/8 | 1/4 | 1/2 | Full
---|---|---|---|---|---|---
1 | Deeplab-V2 | 43.84 | 54.84 | 60.08 | 63.02 | 66.19
2 | Adversarial [10] | - | 58.8 | 62.3 | 65.7 | N/A
| s4GAN [11] | - | 59.3 | 61.9 | - | N/A
| DST-CBC [12] | 48.7 | 60.5 | 64.4 | - | N/A
3 | French et al. [5] | 51.20 | 60.34 | 63.87 | - | N/A
| ClassMix [9] | 54.07 | 61.35 | 63.63 | 66.29 | N/A
4 | Ours (ComplexMix) | 53.88 $\pm$ 0.56 | 62.25 $\pm$ 1.22 | 64.07 $\pm$ 0.46 | 66.77 $\pm$ 0.83 | N/A
### 3.2 Loss function and training
Loss function. Our model is trained to minimize a combined loss composed of a
supervised loss term $\mathcal{L}_{s}$ and an unsupervised consistency loss
term $\mathcal{L}_{u}$:
$\mathcal{L}=\mathcal{L}_{s}(f_{\theta}(s),y)+\lambda\mathcal{L}_{u}(f_{\theta}(u),g_{\phi}(u))$
(1)
In this equation $\lambda$ is a hyper-parameter used to control the balance
between the supervised and unsupervised terms.
The supervised loss term $\mathcal{L}_{s}$ is used to train the student model
$f_{\theta}$ with labeled images in a supervised manner using the categorical
cross entropy loss:
$\mathcal{L}_{s}(f_{\theta}(s),y)=-\dfrac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{H\times
W}\sum_{c=1}^{C}y^{(i,j,c)}\log f_{\theta}(s)^{(i,j,c)}$ (2)
where $N$ is the total number of labeled examples. In this equation,
$y^{(i,j,c)}$ and $f_{\theta}(s)^{(i,j,c)}$ are the target and predicted
probabilities for pixel $(i,j)$ belonging to class $c$, respectively.
The unsupervised loss term $\mathcal{L}_{u}$ is used to train the student
model $f_{\theta}$ with unlabeled image pairs $u_{a}$ and $u_{b}$ using the
categorical cross entropy loss to match pseudo labels:
$\mathcal{L}_{u}(f_{\theta}(u_{ab}),\hat{y}_{ab})=-\dfrac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{H\times
W}\sum_{c=1}^{C}\hat{y}_{ab}^{(i,j,c)}\log f_{\theta}(u_{ab})^{(i,j,c)}$ (3)
where $u_{ab}$ is the mixed image of $u_{a}$ and $u_{b}$ using $M$, and
$\hat{y}_{ab}$ is the mixed pseudo label of $\hat{y}_{a}=g_{\phi}(u_{a})$ and
$\hat{y}_{b}=g_{\phi}(u_{b})$ based on $M$. As described earlier, the teacher
model is updated by blending its coefficients with updated student
coefficients.
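The two loss terms of Eqs. (2) and (3) share the same form and can be sketched in NumPy as follows (the array shapes and the small $\epsilon$ guard for the logarithm are implementation assumptions):

```python
import numpy as np

def cross_entropy(pred, target, n_samples):
    """Pixel-wise categorical cross entropy of Eqs. (2)-(3):
    -(1/N) * sum over samples, pixels and classes of target * log(pred).

    pred, target: (N, H*W, C) arrays of predicted probabilities and
    one-hot (or pseudo-label) targets.
    """
    eps = 1e-12  # numerical guard for log(0)
    return -np.sum(target * np.log(pred + eps)) / n_samples

def combined_loss(l_s, l_u, lam):
    """Total loss of Eq. (1): supervised term plus weighted consistency term."""
    return l_s + lam * l_u
```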
Training details. To obtain high-quality segmentation results, it is critical
to choose a strong base model. In this work, we use Deeplab-V2 [17] with a
pretrained ResNet-101 [18] model, as the base semantic segmentation network
$f_{\theta}$.
We use the PyTorch deep learning framework and implement our network on two
NVIDIA GPUs with $16$ GB memory in total. Stochastic Gradient Descent is
employed as the optimizer with momentum of $0.9$ and a weight decay of
$5\times 10^{-4}$ to train the model. The initial learning rate is set to
$2.5\times 10^{-4}$ and decayed using the polynomial decay schedule of [17].
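The polynomial decay schedule of [17] is commonly implemented as below; the exponent value 0.9 is the usual choice and is assumed here rather than stated in the text:

```python
def poly_lr(base_lr, it, max_it, power=0.9):
    """Polynomial learning-rate decay used in Deeplab-style training:
    the rate shrinks from base_lr to 0 over max_it iterations."""
    return base_lr * (1.0 - it / max_it) ** power
```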
## 4 Evaluation
In this section, we present experimental results using common metrics. We
evaluate the proposed approach and compare it with known approaches using
standard evaluation datasets.
Datasets. We demonstrate the effectiveness of our method on the standard
Cityscapes [19] urban scene dataset. The dataset consists of $2975$ training
images and $500$ validation images. We follow the common standard of the
baseline methods we compare to by resizing each image to $512\times 1024$
without any additional augmentation such as random cropping or scaling. The
batch size for labeled and unlabeled samples is set to $2$ for training, and
the total number of training iterations is set to $40k$ following the settings
in [10, 9].
Evaluation metrics. To evaluate the proposed method, we use Intersection over
Union (IoU), a commonly used metric for semantic segmentation. When training
on a fraction of the labeled data we repeated each experiment $5$ times and
computed the average IoU over all experiments and all classes in the
dataset.
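The per-class IoU computation can be sketched as follows (a minimal NumPy version; skipping classes absent from both prediction and ground truth is one common convention and an assumption here):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes for integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```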
Results. The evaluation results for Cityscapes are shown in Table 1 where
entries indicate mean intersection-over-union (mIoU) percentages. A higher
mIoU indicates better results. The different columns show the fraction of
labeled data used in training. We compare the proposed approach with six
baseline methods, all using the same DeepLab-v2 framework. The baseline result
in group 1 is a fully supervised method that does not take advantage of
unlabeled data. It is a lower bound for results. The methods in group 2 are
semi-supervised approaches using unlabeled data in an adversarial way. The
methods in group 3 use mask-based data augmentation, and are in the same
category as the proposed approach. N/A indicates the full labeled data set is
used for supervised learning, while “-” indicates the evaluation was not
reported in the reference paper. Note that the baseline Deeplab-V2 results
reported in [10, 11, 5, 9] show small, insignificant variations compared with
the results shown here.
As can be expected, smaller portions of labeled data result in reduced
performance. However, observe in the table that adding unlabeled data with
semi-supervised approaches improves performance in a meaningful way. The
methods in group 3, where augmentation is used, generally perform better than
the methods in group 2. The proposed ComplexMix approach belongs to group 3
and, as can be observed, obtains better results than the other group 3
methods in most cases.
## 5 Conclusion
In this paper, we address the problem of semi-supervised learning for semantic
segmentation using mask-based data augmentation. We propose a new augmentation
technique that can balance between complexity and correctness and show that by
using it we are able to improve on the state-of-the-art when evaluating
semantic segmentation over a standard dataset.
## References
* [1] Avital Oliver, Augustus Odena, Colin A Raffel, Ekin Dogus Cubuk, and Ian Goodfellow, “Realistic evaluation of deep semi-supervised learning algorithms,” in Advances in neural information processing systems, 2018, pp. 3235–3246.
* [2] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii, “Virtual adversarial training: a regularization method for supervised and semi-supervised learning,” IEEE transactions on pattern analysis and machine intelligence, vol. 41, no. 8, pp. 1979–1993, 2018.
* [3] Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel, “Fixmatch: Simplifying semi-supervised learning with consistency and confidence,” arXiv preprint arXiv:2001.07685, 2020.
* [4] Olivier Chapelle and Alexander Zien, “Semi-supervised classification by low density separation.,” in AISTATS. Citeseer, 2005, vol. 2005, pp. 57–64.
* [5] Geoff French, Timo Aila, Samuli Laine, Michal Mackiewicz, and Graham Finlayson, “Semi-supervised semantic segmentation needs strong, high-dimensional perturbations,” CoRR, abs/1906.01916, 2019.
* [6] Yassine Ouali, Céline Hudelot, and Myriam Tami, “Semi-supervised semantic segmentation with cross-consistency training,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 12674–12684.
* [7] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo, “Cutmix: Regularization strategy to train strong classifiers with localizable features,” in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 6023–6032.
* [8] Geoff French, Avital Oliver, and Tim Salimans, “Milking cowmask for semi-supervised image classification,” arXiv preprint arXiv:2003.12022, 2020.
* [9] Viktor Olsson, Wilhelm Tranheden, Juliano Pinto, and Lennart Svensson, “Classmix: Segmentation-based data augmentation for semi-supervised learning,” arXiv preprint arXiv:2007.07936, 2020.
* [10] Wei-Chih Hung, Yi-Hsuan Tsai, Yan-Ting Liou, Yen-Yu Lin, and Ming-Hsuan Yang, “Adversarial learning for semi-supervised semantic segmentation,” arXiv preprint arXiv:1802.07934, 2018.
* [11] Sudhanshu Mittal, Maxim Tatarchenko, and Thomas Brox, “Semi-supervised semantic segmentation with high- and low-level consistency,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
* [12] Zhengyang Feng, Qianyu Zhou, Guangliang Cheng, Xin Tan, Jianping Shi, and Lizhuang Ma, “Semi-supervised semantic segmentation via dynamic self-training and class-balanced curriculum,” arXiv preprint arXiv:2004.08514, 2020.
* [13] Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D Collins, Ekin D Cubuk, Barret Zoph, Hartwig Adam, and Jonathon Shlens, “Leveraging semi-supervised learning in video sequences for urban scene segmentation.,” arXiv preprint arXiv:2005.10266, 2020.
* [14] Jongmok Kim, Jooyoung Jang, and Hyunwoo Park, “Structured consistency loss for semi-supervised semantic segmentation,” arXiv preprint arXiv:2001.04647, 2020.
* [15] Antti Tarvainen and Harri Valpola, “Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results,” Advances in neural information processing systems, vol. 30, pp. 1195–1204, 2017.
* [16] Vikas Verma, Alex Lamb, Juho Kannala, Yoshua Bengio, and David Lopez-Paz, “Interpolation consistency training for semi-supervised learning,” arXiv preprint arXiv:1903.03825, 2019.
* [17] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 4, pp. 834–848, 2017.
* [18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
* [19] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele, “The cityscapes dataset for semantic urban scene understanding,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 3213–3223.
# On $w$-Optimization of the Split Covariance Intersection Filter
Hao Li
———
This preprint note is extracted from the officially published book
Fundamentals and Applications of Recursive Estimation Theory written by the
author. For referring to the preprint works, please use official citation as
follows:
H. Li, “Fundamentals and Applications of Recursive Estimation Theory”,
Shanghai Jiao Tong University Press, 2022.

H. Li, Assoc. Prof., is with Dept. Automation and SPEIT, Shanghai Jiao Tong
University (SJTU), Shanghai, 200240, China (e-mail: <EMAIL_ADDRESS>).
###### Abstract
The split covariance intersection filter (split CIF) is a useful tool for
general data fusion and has the potential to be applied in a variety of
engineering tasks. An indispensable optimization step (referred to as
w-optimization) involved in the split CIF concerns the performance and
implementation efficiency of the split CIF, but an explanation of
w-optimization is missing in the paper [1] that provides a theoretical
foundation for the split CIF. This note complements [1] by providing a
theoretical proof of the convexity of the w-optimization problem involved in
the split CIF (convexity is always a desired property for optimization
problems, as it facilitates optimization considerably).
## I Introduction
The paper [1] provides a theoretical foundation for the split covariance
intersection filter (split CIF). A closely related reference is [2], which
presents the split CIF heuristically, without theoretical analysis; [2]
originally coined it simply as “split covariance intersection”. In [1], the
term “filter” is added to form an analogy between the split CIF and the
well-known Kalman filter. Although the split CIF is called a “filter”, it is
not limited to temporal recursive estimation; just as the Kalman filter can
also be treated as a data fusion method, the split CIF can be used as a pure
data fusion method beyond the filtering sense. The split CIF can reasonably
handle both known independent information and unknown correlated information
in source data; it is a useful tool for general data fusion and has the
potential to be applied in a variety of engineering tasks [3] [4] [5] [6] [7].
An indispensable optimization step (referred to as w-optimization) involved in
the split CIF affects both the performance and the implementation efficiency
of the split CIF; however, an explanation of this w-optimization problem is
missing from [1]. As a consequence, readers may find it difficult to follow
the split CIF completely, as they are not told how the w-optimization problem
can be handled or whether it satisfies certain properties (especially
convexity) that facilitate optimization. To enable readers to better follow
the split CIF and incorporate it into their prospective research works, this
note complements [1] by providing a theoretical proof of the convexity of the
w-optimization problem involved in the split CIF (convexity is always a
desired property for optimization problems, as it facilitates optimization
considerably).
## II The w-optimization problem
Matrices mentioned in this note are symmetric by default. Let
$\mathbf{P}_{1d}$, $\mathbf{P}_{1i}$, $\mathbf{P}_{2d}$, and
$\mathbf{P}_{2i}$ be positive semi-definite matrices, i.e.
$\mathbf{P}_{1d}\geq\mathbf{0}$, $\mathbf{P}_{1i}\geq\mathbf{0}$,
$\mathbf{P}_{2d}\geq\mathbf{0}$, $\mathbf{P}_{2i}\geq\mathbf{0}$; this
notation follows the presentation of the split CIF in [1]. For $w\in[0,1]$,
define
$\displaystyle\mathbf{P}_{1}(w)$
$\displaystyle=\mathbf{P}_{1d}/w+\mathbf{P}_{1i}$
$\displaystyle\mathbf{P}_{2}(w)$
$\displaystyle=\mathbf{P}_{2d}/(1-w)+\mathbf{P}_{2i}$
$\displaystyle\mathbf{P}(w)$
$\displaystyle=(\mathbf{P}_{1}(w)^{-1}+\mathbf{P}_{2}(w)^{-1})^{-1}$ (1)
When $w=0$ or $w=1$, $\mathbf{P}(w)$ denotes the limit value as $w\to 0$ or
$w\to 1$, respectively. For $w\in(0,1)$, we further assume that
$\mathbf{P}_{1}(w)$ and $\mathbf{P}_{2}(w)$ are positive definite, i.e.
$\mathbf{P}_{1}(w)>0$, $\mathbf{P}_{2}(w)>0$; this mild assumption is well
rooted in real applications, where $\mathbf{P}_{1}(w)$ and $\mathbf{P}_{2}(w)$
normally correspond to covariances of certain estimates and hence are always
positive definite. With this assumption, we naturally have $\mathbf{P}(w)>0$.
The w-optimization problem involved in the split CIF [1] can be formalized as
follows:
$w=\arg\min_{w\in[0,1]}\det(\mathbf{P}(w))$ (2)
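For concreteness, the optimization (2) can be sketched in a few lines of Python (a minimal illustrative sketch, not the demo code linked in the Appendix; the function name, grid resolution, and test matrices are our own choices). It evaluates $\mathbf{P}(w)$ from (1) and minimizes $\det(\mathbf{P}(w))$ by a dense grid search, which suffices here because the objective is convex in $w$, as proved in Section III.

```python
import numpy as np

def w_optimize(P1d, P1i, P2d, P2i, grid=2001, eps=1e-6):
    """Minimize det(P(w)) over w in [0, 1] (problem (2)) by grid search.

    P1d, P1i, P2d, P2i: symmetric positive semi-definite arrays, with
    P1(w) = P1d/w + P1i and P2(w) = P2d/(1-w) + P2i assumed positive
    definite on (0, 1), as in the text."""
    best_w, best_logdet, best_P = None, np.inf, None
    for w in np.linspace(eps, 1.0 - eps, grid):
        P1 = P1d / w + P1i
        P2 = P2d / (1.0 - w) + P2i
        P = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
        # minimize log det instead of det: same minimizer, numerically safer
        _, logdet = np.linalg.slogdet(P)
        if logdet < best_logdet:
            best_w, best_logdet, best_P = w, logdet, P
    return best_w, best_P
```

Since the objective is convex (Section III), the grid search can be replaced by golden-section search or any scalar convex solver when efficiency matters.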
## III Convexity of the w-optimization problem
We provide a theoretical proof for the convexity of the w-optimization problem
formalized in the previous section. This is equivalent to proving that the
second-order differential of $\det(\mathbf{P}(w))$ in (2) is always non-
negative for $w\in(0,1)$:
$\frac{d^{2}}{dw^{2}}\det(\mathbf{P}(w))\geq 0$ (3)
Note that
$\displaystyle\frac{d^{2}}{dw^{2}}\ln\det(\mathbf{P}(w))$
$\displaystyle=\frac{\det(\mathbf{P}(w))\frac{d^{2}}{dw^{2}}\det(\mathbf{P}(w))-(\frac{d}{dw}\det(\mathbf{P}(w)))^{2}}{\det(\mathbf{P}(w))^{2}}$
$\displaystyle\leq\frac{\frac{d^{2}}{dw^{2}}\det(\mathbf{P}(w))}{\det(\mathbf{P}(w))}$
So if the following inequality (4) is proved, then (3) holds true as well.
$\frac{d^{2}}{dw^{2}}\ln\det(\mathbf{P}(w))\geq 0$ (4)
A detailed theoretical proof of (4) is given below. For conciseness of
notation in the following proof, we omit the explicit “$(w)$” for
$w$-parameterized variables; for example, we denote the above-mentioned
$\mathbf{P}_{1}(w)$, $\mathbf{P}_{2}(w)$, and $\mathbf{P}(w)$ simply as
$\mathbf{P}_{1}$, $\mathbf{P}_{2}$, and $\mathbf{P}$.
###### Lemma 1.
Given a first-order differentiable $w$-parameterized matrix $\mathbf{M}(w)$
(denoted shortly as $\mathbf{M}$) satisfying $\mathbf{M}(w)>0$, we have
$\frac{d}{dw}\ln\det(\mathbf{M})=tr\\{\mathbf{M}^{-1}\frac{d\mathbf{M}}{dw}\\}$
###### Proof.
According to Jacobi’s formula [8]
$\frac{d}{dw}\det(\mathbf{M})=\det(\mathbf{M})tr\\{\mathbf{M}^{-1}\frac{d\mathbf{M}}{dw}\\}$
Thus we have
$\frac{d}{dw}\ln\det(\mathbf{M})=\frac{1}{\det(\mathbf{M})}\frac{d}{dw}\det(\mathbf{M})=tr\\{\mathbf{M}^{-1}\frac{d\mathbf{M}}{dw}\\}$
∎
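Jacobi’s formula is easy to spot-check numerically. The sketch below assumes an affine parameterization $\mathbf{M}(w)=\mathbf{A}+w\mathbf{B}$ (our own choice, made only for the check), so that $\frac{d\mathbf{M}}{dw}=\mathbf{B}$, and compares the trace formula of Lemma 1 with a central finite difference:

```python
import numpy as np

# Assumed parameterization for the check: M(w) = A + w*B, with A symmetric
# positive definite and B symmetric, so that M(w) > 0 near w0.
rng = np.random.default_rng(1)
n = 4
S = rng.standard_normal((n, n))
A = S @ S.T + n * np.eye(n)   # symmetric positive definite
B = (S + S.T) / 2.0           # symmetric derivative dM/dw

M = lambda w: A + w * B
w0, h = 0.3, 1e-6

# central finite difference of ln det M(w) at w0
lhs = (np.linalg.slogdet(M(w0 + h))[1] - np.linalg.slogdet(M(w0 - h))[1]) / (2 * h)
# trace formula from Lemma 1
rhs = np.trace(np.linalg.inv(M(w0)) @ B)
assert abs(lhs - rhs) < 1e-5
```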
###### Lemma 2.
Given a second-order differentiable matrix $\mathbf{M}(w)$ satisfying
$\mathbf{M}(w)>0$, we have
$\frac{d^{2}}{dw^{2}}\ln\det(\mathbf{M})=tr\\{-\mathbf{M}^{-1}\frac{d\mathbf{M}}{dw}\mathbf{M}^{-1}\frac{d\mathbf{M}}{dw}+\mathbf{M}^{-1}\frac{d^{2}\mathbf{M}}{dw^{2}}\\}$
###### Proof.
Note that the differential of a matrix inverse can be computed as follows [8]:
$\frac{d\mathbf{M}^{-1}}{dw}=-\mathbf{M}^{-1}\frac{d\mathbf{M}}{dw}\mathbf{M}^{-1}$
Following Lemma 1 we have
$\displaystyle\frac{d^{2}}{dw^{2}}\ln\det(\mathbf{M})=\frac{d}{dw}tr\\{\mathbf{M}^{-1}\frac{d\mathbf{M}}{dw}\\}=tr\\{\frac{d}{dw}(\mathbf{M}^{-1}\frac{d\mathbf{M}}{dw})\\}$
$\displaystyle=tr\\{\frac{d\mathbf{M}^{-1}}{dw}\frac{d\mathbf{M}}{dw}+\mathbf{M}^{-1}\frac{d^{2}\mathbf{M}}{dw^{2}}\\}$
$\displaystyle=tr\\{-\mathbf{M}^{-1}\frac{d\mathbf{M}}{dw}\mathbf{M}^{-1}\frac{d\mathbf{M}}{dw}+\mathbf{M}^{-1}\frac{d^{2}\mathbf{M}}{dw^{2}}\\}$
∎
Following Lemma 2, we can compute the second-order differential of
$\ln\det(\mathbf{P}(w))$ as follows
$\displaystyle\frac{d^{2}}{dw^{2}}\ln\det\mathbf{P}=\frac{d^{2}}{dw^{2}}\ln\det((\mathbf{P}_{1}^{-1}+\mathbf{P}_{2}^{-1})^{-1})$
$\displaystyle=\frac{d^{2}}{dw^{2}}\ln\det\mathbf{P}_{1}+\frac{d^{2}}{dw^{2}}\ln\det\mathbf{P}_{2}-\frac{d^{2}}{dw^{2}}\ln\det(\mathbf{P}_{1}+\mathbf{P}_{2})$
$\displaystyle=tr\\{-\mathbf{P}_{1}^{-1}\frac{d\mathbf{P}_{1}}{dw}\mathbf{P}_{1}^{-1}\frac{d\mathbf{P}_{1}}{dw}+\mathbf{P}_{1}^{-1}\frac{d^{2}\mathbf{P}_{1}}{dw^{2}}\\}$
$\displaystyle+tr\\{-\mathbf{P}_{2}^{-1}\frac{d\mathbf{P}_{2}}{dw}\mathbf{P}_{2}^{-1}\frac{d\mathbf{P}_{2}}{dw}+\mathbf{P}_{2}^{-1}\frac{d^{2}\mathbf{P}_{2}}{dw^{2}}\\}$
$\displaystyle-
tr\\{-(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}\frac{d(\mathbf{P}_{1}+\mathbf{P}_{2})}{dw}(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}\frac{d(\mathbf{P}_{1}+\mathbf{P}_{2})}{dw}$
$\displaystyle~{}~{}~{}~{}~{}+(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}\frac{d^{2}(\mathbf{P}_{1}+\mathbf{P}_{2})}{dw^{2}}\\}$
(5)
###### Lemma 3.
Given two matrices $\mathbf{M}_{1}$ and $\mathbf{M}_{2}$ whose dimensions are
consistent with each other for multiplication $\mathbf{M}_{1}\mathbf{M}_{2}$
and $\mathbf{M}_{2}\mathbf{M}_{1}$, we have
$tr\\{\mathbf{M}_{1}\mathbf{M}_{2}\\}=tr\\{\mathbf{M}_{2}\mathbf{M}_{1}\\}$.
The proof of Lemma 3 can be found in [9]. More generally, given matrices
$\mathbf{M}_{1},\mathbf{M}_{2},\ldots,\mathbf{M}_{k}$, we have
$\displaystyle
tr\\{\mathbf{M}_{1}\mathbf{M}_{2}...\mathbf{M}_{k}\\}=tr\\{\mathbf{M}_{2}\mathbf{M}_{3}...\mathbf{M}_{k}\mathbf{M}_{1}\\}$
$\displaystyle~{}~{}~{}~{}=...=tr\\{\mathbf{M}_{k}\mathbf{M}_{1}...\mathbf{M}_{k-2}\mathbf{M}_{k-1}\\}$
which is called the cyclic property of the trace operation.
Define $\mathbf{D}_{1}(w)=\mathbf{P}_{1d}/w$ and
$\mathbf{D}_{2}(w)=\mathbf{P}_{2d}/(1-w)$ for $w\in(0,1)$. As
$\mathbf{P}_{1d}\geq 0$ and $\mathbf{P}_{2d}\geq 0$, we also have
$\mathbf{D}_{1}\geq 0$, $\mathbf{D}_{2}\geq 0$. Like $\mathbf{P}_{1d}$ and
$\mathbf{P}_{2d}$, $\mathbf{D}_{1}$ and $\mathbf{D}_{2}$ are also symmetric
matrices. From definitions given in (II) we have
$\displaystyle\frac{d\mathbf{P}_{1}}{dw}=-\frac{\mathbf{D}_{1}}{w}~{}~{}~{}~{}~{}~{}\frac{d\mathbf{P}_{2}}{dw}=\frac{\mathbf{D}_{2}}{1-w}$
$\displaystyle\frac{d^{2}\mathbf{P}_{1}}{dw^{2}}=\frac{2\mathbf{D}_{1}}{w^{2}}~{}~{}~{}~{}~{}~{}\frac{d^{2}\mathbf{P}_{2}}{dw^{2}}=\frac{2\mathbf{D}_{2}}{(1-w)^{2}}$
Substituting the above formulas into (5) and using Lemma 3 (the cyclic
property of the trace operation) where necessary in the following derivation,
we have
$\displaystyle\frac{d^{2}}{dw^{2}}\ln\det\mathbf{P}=tr\\{-\mathbf{P}_{1}^{-1}(-\frac{\mathbf{D}_{1}}{w})\mathbf{P}_{1}^{-1}(-\frac{\mathbf{D}_{1}}{w})+\mathbf{P}_{1}^{-1}\frac{2\mathbf{D}_{1}}{w^{2}}$
$\displaystyle~{}-\mathbf{P}_{2}^{-1}(\frac{\mathbf{D}_{2}}{1-w})\mathbf{P}_{2}^{-1}(\frac{\mathbf{D}_{2}}{1-w})+\mathbf{P}_{2}^{-1}\frac{2\mathbf{D}_{2}}{(1-w)^{2}}$
$\displaystyle~{}+(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}(\frac{\mathbf{D}_{2}}{1-w}-\frac{\mathbf{D}_{1}}{w})(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}(\frac{\mathbf{D}_{2}}{1-w}-\frac{\mathbf{D}_{1}}{w})$
$\displaystyle~{}-(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}(\frac{2\mathbf{D}_{1}}{w^{2}}+\frac{2\mathbf{D}_{2}}{(1-w)^{2}})\\}$
$\displaystyle=\frac{1}{w^{2}}\mathbf{T}_{1}+\frac{1}{(1-w)^{2}}\mathbf{T}_{2}-\frac{2}{w(1-w)}\mathbf{T}_{3}$
(6)
where
$\displaystyle\mathbf{T}_{1}=tr\\{2\mathbf{P}_{1}^{-1}\mathbf{D}_{1}-2(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}\mathbf{D}_{1}-\mathbf{P}_{1}^{-1}\mathbf{D}_{1}\mathbf{P}_{1}^{-1}\mathbf{D}_{1}$
$\displaystyle~{}~{}~{}~{}~{}~{}+(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}\mathbf{D}_{1}(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}\mathbf{D}_{1}\\}$
$\displaystyle\mathbf{T}_{2}=tr\\{2\mathbf{P}_{2}^{-1}\mathbf{D}_{2}-2(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}\mathbf{D}_{2}-\mathbf{P}_{2}^{-1}\mathbf{D}_{2}\mathbf{P}_{2}^{-1}\mathbf{D}_{2}$
$\displaystyle~{}~{}~{}~{}~{}~{}+(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}\mathbf{D}_{2}(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}\mathbf{D}_{2}\\}$
$\displaystyle\mathbf{T}_{3}=tr\\{(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}\mathbf{D}_{1}(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}\mathbf{D}_{2}\\}$
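Expression (6) can be verified numerically. The sketch below (helper names and diagonal test matrices are our own) evaluates $\frac{1}{w^{2}}\mathbf{T}_{1}+\frac{1}{(1-w)^{2}}\mathbf{T}_{2}-\frac{2}{w(1-w)}\mathbf{T}_{3}$ and compares it with a finite-difference second derivative of $\ln\det\mathbf{P}(w)$:

```python
import numpy as np

def logdet_P(w, P1d, P1i, P2d, P2i):
    """ln det P(w), with P(w) defined as in (1)."""
    P1 = P1d / w + P1i
    P2 = P2d / (1.0 - w) + P2i
    return np.linalg.slogdet(np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2)))[1]

def second_deriv_via_traces(w, P1d, P1i, P2d, P2i):
    """Right-hand side of (6): T1/w^2 + T2/(1-w)^2 - 2*T3/(w*(1-w))."""
    D1, D2 = P1d / w, P2d / (1.0 - w)
    P1, P2 = D1 + P1i, D2 + P2i
    iP1, iP2 = np.linalg.inv(P1), np.linalg.inv(P2)
    iP3 = np.linalg.inv(P1 + P2)
    tr = np.trace
    T1 = tr(2*iP1@D1 - 2*iP3@D1 - iP1@D1@iP1@D1 + iP3@D1@iP3@D1)
    T2 = tr(2*iP2@D2 - 2*iP3@D2 - iP2@D2@iP2@D2 + iP3@D2@iP3@D2)
    T3 = tr(iP3@D1@iP3@D2)
    return T1/w**2 + T2/(1.0-w)**2 - 2.0*T3/(w*(1.0-w))
```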
###### Lemma 4.
Given two positive semi-definite matrices $\mathbf{M}_{1}$ and
$\mathbf{M}_{2}$ (i.e. $\mathbf{M}_{1}\geq 0$, $\mathbf{M}_{2}\geq 0$), we
have
$tr\\{\mathbf{M}_{1}\mathbf{M}_{2}\\}=tr\\{\mathbf{M}_{2}\mathbf{M}_{1}\\}\geq
0$.
The proof of Lemma 4 can be found in [9].
###### Lemma 5.
Given symmetric matrices $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$
satisfying $0<\mathbf{X}\leq\mathbf{Y}$ and $0\leq\mathbf{Z}\leq\mathbf{X}$,
we have
$\displaystyle
tr\\{2\mathbf{X}^{-1}\mathbf{Z}-2\mathbf{Y}^{-1}\mathbf{Z}-\mathbf{X}^{-1}\mathbf{Z}\mathbf{X}^{-1}\mathbf{Z}+\mathbf{Y}^{-1}\mathbf{Z}\mathbf{Y}^{-1}\mathbf{Z}\\}$
$\displaystyle~{}~{}~{}~{}\geq
tr\\{(\mathbf{X}^{-1}-\mathbf{Y}^{-1})\mathbf{Z}(\mathbf{X}^{-1}-\mathbf{Y}^{-1})\mathbf{Z}\\}$
###### Proof.
Lemma 3 is used in the following derivation:
$\displaystyle
tr\\{2\mathbf{X}^{-1}\mathbf{Z}-2\mathbf{Y}^{-1}\mathbf{Z}-\mathbf{X}^{-1}\mathbf{Z}\mathbf{X}^{-1}\mathbf{Z}+\mathbf{Y}^{-1}\mathbf{Z}\mathbf{Y}^{-1}\mathbf{Z}\\}$
$\displaystyle~{}~{}~{}~{}-tr\\{(\mathbf{X}^{-1}-\mathbf{Y}^{-1})\mathbf{Z}(\mathbf{X}^{-1}-\mathbf{Y}^{-1})\mathbf{Z}\\}$
$\displaystyle=tr\\{2\mathbf{X}^{-1}\mathbf{Z}-2\mathbf{Y}^{-1}\mathbf{Z}-2\mathbf{X}^{-1}\mathbf{Z}\mathbf{X}^{-1}\mathbf{Z}$
$\displaystyle\quad\qquad\qquad\qquad+\mathbf{X}^{-1}\mathbf{Z}\mathbf{Y}^{-1}\mathbf{Z}+\mathbf{Y}^{-1}\mathbf{Z}\mathbf{X}^{-1}\mathbf{Z}\\}$
$\displaystyle=tr\\{2\mathbf{X}^{-1}\mathbf{Z}-2\mathbf{Y}^{-1}\mathbf{Z}-2\mathbf{X}^{-1}\mathbf{Z}\mathbf{X}^{-1}\mathbf{Z}+2\mathbf{X}^{-1}\mathbf{Z}\mathbf{Y}^{-1}\mathbf{Z}\\}$
$\displaystyle=2~{}tr\\{(\mathbf{I}-\mathbf{X}^{-1}\mathbf{Z})(\mathbf{X}^{-1}-\mathbf{Y}^{-1})\mathbf{Z}\\}$
$\displaystyle=2~{}tr\\{\mathbf{Z}(\mathbf{I}-\mathbf{X}^{-1}\mathbf{Z})(\mathbf{X}^{-1}-\mathbf{Y}^{-1})\\}$
$\displaystyle=2~{}tr\\{\mathbf{Z}(\mathbf{Z}^{-1}-\mathbf{X}^{-1})\mathbf{Z}(\mathbf{X}^{-1}-\mathbf{Y}^{-1})\\}$
As $0\leq\mathbf{Z}\leq\mathbf{X}$ implies $\mathbf{Z}^{-1}-\mathbf{X}^{-1}\geq
0$ when $\mathbf{Z}>0$ (if $\mathbf{Z}$ is singular, the same conclusion
follows by continuity, applying the argument to
$\mathbf{Z}+\epsilon\mathbf{I}\leq\mathbf{X}+\epsilon\mathbf{I}$ and letting
$\epsilon\to 0$), we have
$\displaystyle\mathbf{Z}(\mathbf{Z}^{-1}-\mathbf{X}^{-1})\mathbf{Z}=\mathbf{Z}^{T}(\mathbf{Z}^{-1}-\mathbf{X}^{-1})\mathbf{Z}\geq
0$
Besides, as $\mathbf{X}^{-1}-\mathbf{Y}^{-1}\geq 0$, following Lemma 4 we have
$tr\\{\mathbf{Z}(\mathbf{Z}^{-1}-\mathbf{X}^{-1})\mathbf{Z}(\mathbf{X}^{-1}-\mathbf{Y}^{-1})\\}\geq
0$. This completes the proof. ∎
Note that $\mathbf{P}_{1}$, $\mathbf{P}_{2}$, $\mathbf{D}_{1}$,
$\mathbf{D}_{2}$, and $\mathbf{P}_{1}+\mathbf{P}_{2}$ are symmetric matrices
satisfying
$\mathbf{P}_{1}+\mathbf{P}_{2}>\mathbf{P}_{1}=\mathbf{D}_{1}+\mathbf{P}_{1i}\geq\mathbf{D}_{1}\geq
0$ and
$\mathbf{P}_{1}+\mathbf{P}_{2}>\mathbf{P}_{2}=\mathbf{D}_{2}+\mathbf{P}_{2i}\geq\mathbf{D}_{2}\geq
0$; following Lemma 5 we have (denoting
$\mathbf{P}_{3}=\mathbf{P}_{1}+\mathbf{P}_{2}$)
$\displaystyle\mathbf{T}_{1}\geq
tr\\{(\mathbf{P}_{1}^{-1}-\mathbf{P}_{3}^{-1})\mathbf{D}_{1}(\mathbf{P}_{1}^{-1}-\mathbf{P}_{3}^{-1})\mathbf{D}_{1}\\}$
$\displaystyle\mathbf{T}_{2}\geq
tr\\{(\mathbf{P}_{2}^{-1}-\mathbf{P}_{3}^{-1})\mathbf{D}_{2}(\mathbf{P}_{2}^{-1}-\mathbf{P}_{3}^{-1})\mathbf{D}_{2}\\}$
Substituting the above inequalities into (6), we have
$\displaystyle\frac{d^{2}}{dw^{2}}\ln\det\mathbf{P}\geq
tr\\{(\mathbf{P}_{1}^{-1}-\mathbf{P}_{3}^{-1})\frac{\mathbf{D}_{1}}{w}(\mathbf{P}_{1}^{-1}-\mathbf{P}_{3}^{-1})\frac{\mathbf{D}_{1}}{w}\\}$
$\displaystyle\quad\qquad+tr\\{(\mathbf{P}_{2}^{-1}-\mathbf{P}_{3}^{-1})\frac{\mathbf{D}_{2}}{1-w}(\mathbf{P}_{2}^{-1}-\mathbf{P}_{3}^{-1})\frac{\mathbf{D}_{2}}{1-w}\\}$
$\displaystyle\quad\qquad-2~{}tr\\{\mathbf{P}_{3}^{-1}\frac{\mathbf{D}_{1}}{w}\mathbf{P}_{3}^{-1}\frac{\mathbf{D}_{2}}{1-w}\\}$
(7)
Denote $\mathbf{B}_{3}=\mathbf{P}_{1}^{-1}+\mathbf{P}_{2}^{-1}$. Note that
$\displaystyle\mathbf{P}_{3}^{-1}$
$\displaystyle=(\mathbf{P}_{1}+\mathbf{P}_{2})^{-1}=(\mathbf{P}_{1}(\mathbf{P}_{1}^{-1}+\mathbf{P}_{2}^{-1})\mathbf{P}_{2})^{-1}$
$\displaystyle=\mathbf{P}_{2}^{-1}(\mathbf{P}_{1}^{-1}+\mathbf{P}_{2}^{-1})^{-1}\mathbf{P}_{1}^{-1}$
$\displaystyle=\mathbf{P}_{2}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{1}^{-1}$
$\displaystyle\textnormal{or}\quad\mathbf{P}_{3}^{-1}$
$\displaystyle=(\mathbf{P}_{2}(\mathbf{P}_{1}^{-1}+\mathbf{P}_{2}^{-1})\mathbf{P}_{1})^{-1}=\mathbf{P}_{1}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{2}^{-1}$
We have
$\displaystyle\mathbf{P}_{1}^{-1}-\mathbf{P}_{3}^{-1}$
$\displaystyle=\mathbf{P}_{1}^{-1}-\mathbf{P}_{2}^{-1}(\mathbf{P}_{1}^{-1}+\mathbf{P}_{2}^{-1})^{-1}\mathbf{P}_{1}^{-1}$
$\displaystyle=((\mathbf{P}_{1}^{-1}+\mathbf{P}_{2}^{-1})-\mathbf{P}_{2}^{-1})(\mathbf{P}_{1}^{-1}+\mathbf{P}_{2}^{-1})^{-1}\mathbf{P}_{1}^{-1}$
$\displaystyle=\mathbf{P}_{1}^{-1}(\mathbf{P}_{1}^{-1}+\mathbf{P}_{2}^{-1})^{-1}\mathbf{P}_{1}^{-1}$
$\displaystyle=\mathbf{P}_{1}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{1}^{-1}$
Similarly we have
$\displaystyle\mathbf{P}_{2}^{-1}-\mathbf{P}_{3}^{-1}=\mathbf{P}_{2}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{2}^{-1}$
Therefore, (7) becomes
$\displaystyle\frac{d^{2}}{dw^{2}}\ln\det\mathbf{P}$ $\displaystyle\quad\geq
tr\\{\mathbf{P}_{1}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{1}^{-1}\frac{\mathbf{D}_{1}}{w}\mathbf{P}_{1}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{1}^{-1}\frac{\mathbf{D}_{1}}{w}\\}$
$\displaystyle\qquad+tr\\{\mathbf{P}_{2}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{2}^{-1}\frac{\mathbf{D}_{2}}{1-w}\mathbf{P}_{2}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{2}^{-1}\frac{\mathbf{D}_{2}}{1-w}\\}$
$\displaystyle\qquad-2~{}tr\\{\mathbf{P}_{2}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{1}^{-1}\frac{\mathbf{D}_{1}}{w}\mathbf{P}_{1}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{2}^{-1}\frac{\mathbf{D}_{2}}{1-w}\\}$
$\displaystyle\quad=tr\\{\mathbf{B}_{3}^{-1}\mathbf{P}_{1}^{-1}\frac{\mathbf{D}_{1}}{w}\mathbf{P}_{1}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{1}^{-1}\frac{\mathbf{D}_{1}}{w}\mathbf{P}_{1}^{-1}\\}$
$\displaystyle\qquad+tr\\{\mathbf{B}_{3}^{-1}\mathbf{P}_{2}^{-1}\frac{\mathbf{D}_{2}}{1-w}\mathbf{P}_{2}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{2}^{-1}\frac{\mathbf{D}_{2}}{1-w}\mathbf{P}_{2}^{-1}\\}$
$\displaystyle\qquad-2~{}tr\\{\mathbf{B}_{3}^{-1}\mathbf{P}_{1}^{-1}\frac{\mathbf{D}_{1}}{w}\mathbf{P}_{1}^{-1}\mathbf{B}_{3}^{-1}\mathbf{P}_{2}^{-1}\frac{\mathbf{D}_{2}}{1-w}\mathbf{P}_{2}^{-1}\\}$
$\displaystyle\quad=tr\\{\mathbf{B}_{3}^{-1}\mathbf{C}\mathbf{B}_{3}^{-1}\mathbf{C}\\}$
(8)
where
$\displaystyle\mathbf{C}=\mathbf{P}_{1}^{-1}\frac{\mathbf{D}_{1}}{w}\mathbf{P}_{1}^{-1}-\mathbf{P}_{2}^{-1}\frac{\mathbf{D}_{2}}{1-w}\mathbf{P}_{2}^{-1}$
As matrices $\mathbf{P}_{1}$, $\mathbf{P}_{2}$, $\mathbf{D}_{1}$, and
$\mathbf{D}_{2}$ are all symmetric, so is $\mathbf{C}$. Note that
$\mathbf{B}_{3}=\mathbf{P}_{1}^{-1}+\mathbf{P}_{2}^{-1}>0$ ($\mathbf{B}_{3}$
is symmetric as well) and hence $\mathbf{B}_{3}^{-1}>0$, we have
$\displaystyle\mathbf{C}\mathbf{B}_{3}^{-1}\mathbf{C}=\mathbf{C}^{T}\mathbf{B}_{3}^{-1}\mathbf{C}\geq
0$
Following (8) and Lemma 4, we have
$\displaystyle\frac{d^{2}}{dw^{2}}\ln\det\mathbf{P}\geq
tr\\{\mathbf{B}_{3}^{-1}\mathbf{C}\mathbf{B}_{3}^{-1}\mathbf{C}\\}\geq 0$
This completes the proof of (4). As explained at the beginning of this
section, (3) then holds true as well, and the convexity of the
$w$-optimization problem is proved.
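The proved convexity can also be spot-checked numerically with random positive semi-definite inputs (a verification sketch with our own helper names; it is not the demo code from the Appendix):

```python
import numpy as np

def logdet_P(w, P1d, P1i, P2d, P2i):
    """ln det P(w), with P(w) defined as in (1)."""
    P1 = P1d / w + P1i
    P2 = P2d / (1.0 - w) + P2i
    return np.linalg.slogdet(np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2)))[1]

def convexity_holds(trials=50, n=3, seed=0, h=1e-4):
    """Spot-check (4): second differences of ln det P(w) are non-negative."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        A = [rng.standard_normal((n, n)) for _ in range(4)]
        P1d, P2d = A[0] @ A[0].T, A[1] @ A[1].T   # positive semi-definite
        P1i = A[2] @ A[2].T + np.eye(n)           # positive definite
        P2i = A[3] @ A[3].T + np.eye(n)           # positive definite
        for w in np.linspace(0.1, 0.9, 9):
            second_diff = (logdet_P(w - h, P1d, P1i, P2d, P2i)
                           - 2.0 * logdet_P(w, P1d, P1i, P2d, P2i)
                           + logdet_P(w + h, P1d, P1i, P2d, P2i)) / h**2
            if second_diff < -1e-4:  # tolerance for finite-difference noise
                return False
    return True
```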
## IV Conclusion
An explanation of an indispensable optimization step (i.e. the
$w$-optimization problem) involved in the split CIF is missing from [1]; this
note complements [1] by providing a detailed theoretical proof of the
convexity of the $w$-optimization problem. As convexity facilitates
optimization considerably, readers can resort to convex optimization
techniques to solve the $w$-optimization problem when they intend to
incorporate the split CIF into their prospective research works.
## Appendix
Demo code: https://github.com/LI-Hao-SJTU/SplitCIF
## References
* [1] H. Li, F. Nashashibi, and M. Yang, “Split covariance intersection filter: Theory and its application to vehicle localization,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 14, no. 4, pp. 1860–1871, 2013.
* [2] S. Julier and J. Uhlmann, “General decentralized data fusion with covariance intersection (ci),” _Handbook of Data Fusion_ , 2001.
* [3] H. Li and F. Nashashibi, “Cooperative multi-vehicle localization using split covariance intersection filter,” _IEEE Intelligent Transportation Systems Magazine_ , vol. 5, no. 2, pp. 33–44, 2013.
* [4] T. R. Wanasinghe, G. K. I. Mann, and R. G. Gosine, “Decentralized cooperative localization for heterogeneous multi-robot system using split covariance intersection filter,” in _Canadian Conference on Computer and Robot Vision_ , 2014, pp. 167–174.
* [5] C. Pierre, R. Chapuis, R. Aufrère, J. Laneurit, and C. Debain, “Range-only based cooperative localization for mobile robots,” in _International Conference on Information Fusion_ , 2018, pp. 1933–1939.
* [6] X. Chen, M. Yang, W. Yuan, H. Li, and C. Wang, “Split covariance intersection filter based front-vehicle track estimation for vehicle platooning without communication,” in _IEEE Intelligent Vehicles Symposium_ , 2020, pp. 1510–1515.
* [7] C. Allig and G. Wanielik, “Unequal dimension track-to-track fusion approaches using covariance intersection,” _IEEE Transactions on Intelligent Transportation Systems_ , 2021.
* [8] R. Horn and C. Johnson, _Topics in Matrix Analysis_. Cambridge University Press, 1991.
* [9] R. Horn and C. Johnson, _Matrix Analysis_. Cambridge University Press, 1990.
# Planning to Repose Long and Heavy Objects Considering a Combination of
Regrasp and Constrained Drooping
Mohamed Raessa1, Weiwei Wan∗1, Keisuke Koyama1, and Kensuke Harada12 ∗ Weiwei
Wan is the corresponding author. Email<EMAIL_ADDRESS>1 Graduate
School of Engineering Science, Osaka University, Osaka, Japan. 2 National
Institute of Advanced Industrial Science and Technology (AIST), Tsukuba,
Japan.
###### Abstract
Purpose of this paper: This paper presents a hierarchical motion planner for
planning the manipulation motion to repose long and heavy objects considering
external support surfaces.
Design/methodology/approach: The planner includes a task level layer and a
motion level layer. We formulate the manipulation planning problem at the task
level by considering grasp poses as nodes and object poses for edges. We
consider regrasping and constrained in-hand slip (drooping) during building
graphs and find mixed regrasping and drooping sequences by searching the
graph. The generated sequences autonomously divide the object weight between
the arm and the support surface and avoid configuration obstacles. Cartesian
planning is used at the robot motion level to generate motions between
adjacent critical grasp poses of the sequence found by the task level layer.
Findings: Various experiments are carried out to examine the performance of
the proposed planner. The results show improved capability of robot arms to
manipulate long and heavy objects using the proposed planner.
What is original/value of paper: Our contribution is that we develop, for the
first time, a graph-based planning system that reasons about both in-hand and
regrasp manipulation motion considering external supports. On one hand, the
planner integrates regrasping and drooping to realize in-hand manipulation
with external support. On the other hand, it switches states by releasing and
regrasping objects when the object is stably placed. The search graph’s nodes
can be retrieved from remote cloud servers that provide a large amount of
pre-annotated data to implement cyber intelligence.
## I Introduction
Manipulating long and heavy objects using a single robot arm is challenging
because of the limited payload of robots and grippers. This difficulty
originates from the objects’ shapes and masses, which dictate how external
forces, such as gravity and inertia, affect the object’s stability during the
manipulation process. Previously, researchers considered overcoming the
problem by using multiple robots to share the object weight. Examples include,
but are not limited to, using multiple arms [1, 2, 3], multiple mobile robots
[4], multiple mobile manipulators [5], or a combination of robots and other
machines [6]. The drawback is that using multiple robots decreases the overall
automation system’s efficiency because of the high costs. Also, the
complications associated with multi-robot motion planning add additional
overhead to the system. To reduce the costs, this paper proposes planning the
manipulation of heavy objects using a single arm while keeping the object
partially supported by a support surface.
We consider regrasping and constrained drooping for effective maneuvering and
in-hand pose adjustment. Specifically, drooping refers to the in-hand sliding
motion caused by gravitational torque. The earliest studies on drooping
manipulation are [7, 8]. In our previous research [9], we examined the reasons
behind the drooping motion associated with heavy-object manipulation and
implemented a constrained motion planner to eliminate it. In this paper, we
take advantage of our understanding of drooping to transit between grasp poses
and realize in-hand pose adjustment. We constrain the drooping motion by
adjusting the robot gripper’s height above a support surface within a limited
range. One end of the object is grasped throughout a task while the other end
is kept in contact with the support surface. The heavy object’s weight is
shared between the gripper and the support surface. Meanwhile, the holding
gripper’s height is adjusted in a range computed considering the object’s
shape and the difference between the gripper’s current pose and a goal in-hand
pose. We formulate the manipulation planning problem by considering grasp
poses as nodes and object poses as edges. We use hierarchical motion planning
approaches to autonomously determine regrasp and drooping sequences and
generate robotic manipulation motion.
Our development is based on the following assumptions.
1. 1.
The grasped object is long and heavy enough to slip and rotate in a parallel
robot gripper. We refer to this slippage–rotation motion in the parallel
gripper as drooping.
2. 2.
The object needs to remain in contact with the support surface during
manipulation. The surface fully supports the object’s weight while the object
is being regrasped and partially supports it during drooping.
3. 3.
The gripper finger pads are made of soft materials, which enables the gripper
to exert friction torque on the object while partially holding it. The
soft-finger contact assumption allows dividing the object’s weight between the
gripper and the support surface.
We model and develop a combined regrasp and drooping planner based on these
assumptions and examine our development using real-world experiments. The
results show that our method can successfully find manipulation sequences for
a robot to maneuver long and heavy objects. The robot can autonomously
determine the switches between grasping poses and in-hand drooping poses and
finish reposing tasks. Fig.1 shows an example of the robot motion sequence
found by our planner. Here, the robot cannot fully lift the stick. Given the
start and goal poses, our planner finds a sequence (ID (1)-(4) in the figure)
to deliver the stick to the goal pose at ID (4).
Figure 1: An example of the robot motion sequence found by our planner. The
robot conducts regrasp from ID (1) to (3), and performs constrained drooping
considering the table as a support from ID (3) to (4). The object is
successfully delivered to a goal pose at ID (4) with the help combined
regrasping and drooping.
This paper is organized as follows. Section II presents related work. Section
III explains the drooping manipulation planning and the grasp transition
criterion. Section IV describes the regrasp planning and the integration of
drooping and regrasp. Section V presents the experiments and analysis. Section
VI discusses the performance of the proposed system and highlights its
strengths and challenges. Section VII concludes the study and discusses
potential future directions of improvement.
## II Related Work
### II-A Heavy Objects Manipulation
Different methods have been developed to solve the problem of heavy object
manipulation. Those methods assume that a single robot is not enough to get
such tasks done and employ multiple robots for help. For example, a method for
coordinating a multi-arm system’s motion to receive an object from a human
handover was proposed in [1]. In [2], the authors explored the changes in the
configuration space connectivity when a multi-arm system works together in a
closed chain to manipulate a large object. A method for stable planning of
heavy object carrying tasks using mobile manipulators was presented in [5].
The authors formulated the motion planning problem for several robots as an
optimization problem with a cost function that minimizes the mobile base
motion. In [10], a hybrid system consisting of a serial manipulator attached
to a mobile Stewart mechanism was proposed. The aim was to provide stable
maneuvers through the analysis of postural stability of the combined system
components. An approach that uses a group of mobile robots with a handcart for
heavy objects transportation was presented in [6]. In [3], the authors used
four robot arms to manipulate heavy objects. The object was modeled as a
virtual link to include in the dynamic model to improve the task accuracy.
Our study proposes an approach that enables a single robot arm to manipulate
heavy objects with the help of a support surface in the arm’s workspace. The
gravitational torque that long and heavy objects experience while being
manipulated is carefully controlled to adjust the object’s in-hand pose.
### II-B Manipulation with Regrasping
Industrial manipulators usually use simple two-jaw grippers to interact with
the environment. Such grippers do not possess the dexterity required for
manipulation tasks [11]. Therefore, multiple methods such as regrasping [12],
vision-based grasping [13], and dual-arm manipulation [14] have been developed
to fulfill the need for dexterity. In this study, the first manipulation
primitive motion we use is regrasping. In regrasping, the existence of an
external surface within the manipulator workspace is assumed. The surface
makes it possible to obtain stable placements of the manipulated objects. The
manipulation process becomes a search for a sequence of stable placements of
the object that connect the object’s start pose to the end pose. The
transition between the different grasps in this sequence is made by breaking
the grasp and moving to another grasp at the same object’s stable placement
pose. In our previous work [15, 16], we implemented regrasping through graph
search in three different steps – grasp planning, placement planning, and
graph construction. Then, we performed the regrasp task planning by searching
the shortest path between the start and end poses of the object [16, 17].
In this study, we integrate regrasping and constrained drooping to extend a
robot arm’s manipulation capability. A robot can manipulate objects with
autonomously determined regrasp and drooping motions while minimizing the
number of adjustments. Regrasping is used for discrete in-hand pose
adjustment. Constrained drooping is used for continuous adjustment without
releasing the object.
### II-C Manipulation with External Forces
During constrained drooping, one end of the grasped heavy object is allowed to
slip in-hand under the effect of gravity. Meanwhile, the other end is kept in
contact with the support surface in a controlled way to maintain the desired
grasps or transit between them. From a broader view, the constrained drooping
is one example of “manipulation with external forces and contacts”, namely
extrinsic manipulation. The gravitational torque is the external force that
induces the change of the in-hand pose. The in-hand slip is limited by keeping
the other end of the object always in contact with an external surface.
Previous research presenting multiple ways to manipulate objects with external
contacts and a simple gripper is available in [18]. Multiple non-prehensile
approaches for object manipulation have also been implemented. Examples
include, but are not limited to, planar pushing [19, 20, 21], pushing against
external supports [22, 23, 24], and pivoting [25, 8].
## III Constrained Drooping and Grasp Transition Criterion
In our research, we consider reposing manipulation using drooping, or in-hand
slip caused by gravitational torque. The following subsections explain the
principles of how a gravitational torque induces the drooping motion and how
we use it for grasp reposing manipulation. In particular, we focus on
constrained drooping, where a support surface keeps up one end of the grasped
heavy object while the object body slips and rotates in-hand under the
influence of gravity. We plan the robot motion to ensure the other end
continuously contacts the table surface in a controlled way to maintain the
desired grasp poses or transit between them.
### III-A Gravity Torque Effect
When a two-finger parallel gripper manipulates long and heavy objects, they
become prone to in-hand slippage (or drooping) due to the effect of gravity
torque. The drooping motion is determined by the gravitational torque and the
gripper’s frictional torque [9]. When a parallel gripper gets inclined, the
gravitational torque around the jaw-opening direction increases. A larger
inclination would further increase the gravitational torque, and at a certain
instant, the gravitational torque may exceed the maximum friction torque of
the gripper’s finger pads, causing the grasped object to droop in the robot
hand. The following equation relates the gravity torque to the various
parameters that affect drooping.
$\displaystyle T_{g}=\dfrac{mg}{2}sin(\theta)sin(\phi)(Obj_{CoM_{rel-
EE}}\hskip 2.84526ptsin(\phi))$
$\displaystyle+\dfrac{mg}{2}sin(\theta)cos(\phi)(EE_{length}+Obj_{CoM_{rel-
EE}}\hskip 2.84526ptcos(\phi)),$ (1)
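Equation (1) can be transcribed directly into code (a sketch; the argument names below are our own reading of the symbols: theta is the gripper inclination angle, phi the in-hand object angle, ee_length the end-effector length, and com_rel_ee the object CoM offset relative to the end effector):

```python
import math

def gravity_torque(m, theta, phi, ee_length, com_rel_ee, g=9.81):
    """Gravitational torque about the jaw-opening direction, Equation (1).

    m: object mass [kg]; theta: gripper inclination angle [rad];
    phi: in-hand object angle [rad]; ee_length: end-effector length [m];
    com_rel_ee: object CoM position relative to the end effector [m]."""
    term1 = math.sin(phi) * (com_rel_ee * math.sin(phi))
    term2 = math.cos(phi) * (ee_length + com_rel_ee * math.cos(phi))
    return (m * g / 2.0) * math.sin(theta) * (term1 + term2)
```

Consistent with the analysis that follows, the torque vanishes at $\theta=0$ and grows with the inclination angle on $[0,\pi/2]$.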
By observing Equation (1), we see that the parameter with the most significant
influence on the gravity torque is the inclination angle $\theta$. Thus, in
our drooping-based manipulation approach, we maximize an object’s drooping by
keeping the inclination angle $\theta$ at its maximum during the manipulation
task. In the next subsection, we explain how we use drooping to realize
in-hand pose adjustment and grasp transitions.
### III-B Grasp Transition Criterion
Based on Equation (1), we operate the robot within the range of gripper
inclination angles that causes the maximum possible drooping motion. An object
can freely droop in-hand within this range while being partially grasped by
the gripper and kept up by a support surface. The support surface acts as an
external pusher and changes the object’s in-hand pose as the gripper moves
upward or downward. We call such a change constrained drooping. In our
proposed planner, we employ constrained drooping to change the in-hand grasp
poses. By properly sequencing the gripper’s upward and downward motion, a
robot can autonomously change grasp poses and hence object poses. Essentially,
the criterion for a grasp transition is to change the gripper’s height over
the support surface by a distance corresponding to the change in angle between
two consecutive grasps. This criterion is described by Equations (2) and (3)
for the upward and downward motions, respectively.
$d_{up}=l_{stick}[\sin(\theta_{stick_{init}}+\theta_{target_{grasp}})-\sin(\theta_{stick_{init}})]$
(2)
$d_{down}=l_{stick}[\sin(\theta_{stick_{init}})-\sin(\theta_{stick_{init}}-\theta_{target_{grasp}})]$
(3)
Figure 2: Drooping based grasp transition for in-hand manipulation. The change
of the gripper’s height above the support table enables grasp transition
between different grasp poses.
Fig.2(a) shows a set of predefined grasp poses that hold the end of an object.
Fig.2(b) illustrates an example of the in-hand pose change based on the
drooping motion from grasp pose with ID (4) to grasp pose with ID (3). This
transition requires the object to move up a distance equivalent to the angle
between the two grasps, which is $45^{\circ}$ in the shown example. The same
criterion generalizes to any desired change of grasp angle. The method is also flexible with respect to horizontal position: during manipulation, the height change defines a plane parallel to the support surface, and any point in that plane fulfills the transition condition according to Equations (2) and (3). A plan that satisfies the condition therefore allows not only grasp transitions but also simultaneous changes of the object's translation and orientation. Conversely, to implement a grasp transition in the other direction, i.e., to transit from the grasp pose at ID (3) to (4), the gripper needs to move downward. The conversion between up-down motion and constrained drooping is effective as long as the gripper inclination angle is larger than the drooping threshold.
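The transition criterion in Equations (2) and (3) can be sketched as follows (names are illustrative):

```python
import math

def d_up(l_stick, theta_init, theta_target):
    """Eq. (2): required upward height change for a grasp transition."""
    return l_stick * (math.sin(theta_init + theta_target) - math.sin(theta_init))

def d_down(l_stick, theta_init, theta_target):
    """Eq. (3): required downward height change for the reverse transition."""
    return l_stick * (math.sin(theta_init) - math.sin(theta_init - theta_target))

# Example: a 45-degree grasp change on a 0.656 m stick initially flat on the table.
height_change = d_up(0.656, 0.0, math.pi / 4)
```

Note that an upward transition by $\theta_{target}$ followed by the matching downward transition returns the gripper to its original height, which is what lets the planner chain transitions in both directions.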
## IV Hierarchical Planning Considering Regrasping and Constrained Drooping
We use a hierarchical planning scheme for finding a sequence of constrained
drooping grasp poses that change the pose of an object. At the task level, we
employ a graph-based planner to generate sequences of object poses between the
start and the goal pose. We build a graph of grasp poses and object poses that
satisfy the contact condition and traverse the graph to find a sequence of the
object’s critical poses and a sequence of grasp poses for manipulating the
object between the given start and goal. Each grasp pose is modeled as a node
of the graph. They are connected by edges defined considering object poses,
robot payload, and the grasp transition criterion shown in Equations (2) and
(3).
Besides drooping, we expand our graph with regrasping by connecting the grasp
poses associated with stable object poses at the task level. These poses
indicate the critical poses for regrasping. By connecting them, we can search
across both regrasping and constrained drooping and implement combined sequence planning.
At the motion level, we use Cartesian planning to generate robot motions that
move the object between the critical poses designated by the task-level
planner. The sequence of critical grasp poses found at the task level is connected through Cartesian motion planning. Cartesian planning is used
because it helps find robot motion trajectories that satisfy the condition of
maintaining the contact between the object and the support surface. The
following three subsections present the details of the task level planning (A,
B) and the Cartesian motion planning (C), respectively.
### IV-A Task Level Planning
#### IV-A1 Drooping manipulation graph
The essential requirement for drooping manipulation of heavy objects is to
have the object always contact the support surface. This requirement is taken
into consideration when designing the graph nodes of a drooping manipulation
graph. The process of sampling graph nodes is illustrated in Fig. 3. The
process starts with an object at a placement point on the support surface.
Starting from this pose, the object is virtually rotated about the $X$ axis of
the placement point as shown in Fig. 3(b) to generate many different poses. In
the following step, every generated object pose from the previous rotation is
further rotated about the $Z$ axis of the placement point as shown in Fig.
3(c). The second set of rotations results in a bouquet of object poses that
share the same placement point. All the object poses in a single bouquet
satisfy the condition that the object must contact the support surface. After
that, we discretize the support surface into a grid of placement points and
repeat the bouquet generation process at each point to get the evenly sampled
object poses on the whole surface. Then, we transform pre-annotated grasp poses to each of the evenly sampled object poses and keep only the IK-reachable and collision-free ones. The remaining grasp poses after filtering are used as the graph nodes.
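The bouquet-generation step above can be sketched as follows (pose counts and step sizes are illustrative choices, not the paper's values):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def bouquet(n_x=10, n_z=12):
    """Object orientations sharing one placement point: rotate about X
    (0..90 deg, keeping the contact with the surface), then about Z
    (full circle) to sweep out the bouquet."""
    poses = []
    for tx in np.linspace(0.0, np.pi / 2, n_x):
        for tz in np.linspace(0.0, 2 * np.pi, n_z, endpoint=False):
            poses.append(rot_z(tz) @ rot_x(tx))
    return poses
```

Repeating this at every grid point of the discretized support surface gives the evenly sampled object poses; grasp poses are then attached to each orientation and filtered for reachability and collisions.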
After sampling the graph nodes, we connect them to complete the drooping manipulation graph. Whether two nodes can be connected is determined by considering the object poses, the robot payload, and the grasp transition criterion shown in Equations (2) and (3). After the edges are connected, the graph is ready to be searched for drooping manipulation sequences.
Figure 3: The process of generating object poses with the condition of always
being in contact with the support surface. (a) The process starts with a
virtual object placed on the support surface. (b) The object is rotated around
the $X$ axis of the placement point in steps between $0^{\circ}$ and
$90^{\circ}$. (c) Every resulting pose from the previous step is rotated about
the $Z$ axis of the placement point, and the result is a bouquet of object poses that share the same placement point. Figure 4: (a) Samples of
stable object poses on a support surface. (b) The pre-annotated grasp poses
for grasping the stick. The object poses that exist both in (a) and the set of
the bouquets in Fig.3(c) represent the connecting poses. Their associated
grasp poses are the candidate connecting nodes between the regrasp graph
component and the drooping graph component.
In detail, the edge connection between the graph nodes is determined
considering the criterion shown in Equations (2) and (3). If the height
between a pair of consecutive grasps matches the connection criterion, an edge
will be established in the graph. This edge is referred to as a grasp
transition edge, and it implies a constrained drooping action. Another
criterion considered for making edges is to connect graph nodes that share the
same grasp pose. This edge is referred to as the translation edge. Such edges
allow manipulating objects while maintaining the same relative grasp between
the gripper and the object. In this way, the resulting connected graph enables
both grasp transitions and object pose translation while being in contact with
the support surface.
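A simplified sketch of this edge-connection rule follows; the node fields, tolerance, and dict representation are our illustrative choices:

```python
import math
import itertools

def connect(nodes, l_stick, tol=1e-3):
    """Connect graph nodes. Each node is a dict with 'height' (gripper
    height above the table), 'theta' (grasp angle on the object), and
    'grasp_id' (which pre-annotated grasp it uses)."""
    edges = []
    for i, j in itertools.combinations(range(len(nodes)), 2):
        a, b = nodes[i], nodes[j]
        if a['grasp_id'] == b['grasp_id']:
            # Same relative grasp between gripper and object: translation edge.
            edges.append((i, j, 'translation'))
            continue
        # Grasp transition edge: the height change must match Eqs. (2)/(3).
        expected = l_stick * (math.sin(b['theta']) - math.sin(a['theta']))
        if abs((b['height'] - a['height']) - expected) < tol:
            edges.append((i, j, 'transition'))
    return edges
```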
#### IV-A2 Expansion with Regrasping Nodes
To further improve the dexterity of industrial robots, we expand the graph with regrasp nodes. Regrasp nodes and edges consider the stable
placement poses on the support surface during a manipulation task. In a
regrasp sequence, a robot will release and regrasp objects while they are
resting stably on the support surface. Thus, we further sample stable
placements and find their associated grasp poses, and connect these nodes to
the previously built drooping manipulation graph. Similar to the previous
step, the reachable, collision-free grasp poses are included as graph nodes,
and the unsatisfactory grasp poses are discarded. Fig. 4(a) shows an example
of the stable placements of a stick on a table surface. Fig. 4(b) shows the
pre-annotated grasp poses. They are associated with each of the sampled object poses to create candidate expansion nodes. The set of object poses that exist in both the stable placement set and the bouquet set represents the possible connecting poses between drooping and regrasp. The grasp poses associated with these connecting poses are the shared nodes. They represent the candidate interchange nodes for switching between drooping and regrasp.
Fig. 4(c) shows an example of the expanded manipulation graph. Here, the
regrasping nodes are illustrated in blue. The drooping nodes are illustrated
in green. The shared connecting nodes are shown in red. A planned path between
a given regrasping start node to a drooping goal node is shown on the graph
with yellow color. The expanded graph enables a robot to manipulate objects
from any given pose on the support surface into its workspace and complete
meaningful tasks.
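Once the expanded graph is built, finding a manipulation sequence reduces to an ordinary path search. The paper does not specify the search algorithm, so the following breadth-first search over an adjacency list is a stand-in sketch:

```python
from collections import deque

def plan_sequence(adj, start, goal):
    """Breadth-first search over the combined drooping + regrasp graph.
    adj maps a node id to the list of ids reachable via transition,
    translation, or regrasp edges. Returns the node sequence, or None."""
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        u = frontier.popleft()
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                frontier.append(v)
    return None  # start and goal lie in disconnected components
```

The red shared nodes of Fig. 4(c) are exactly the nodes where such a path can cross from the regrasp component into the drooping component.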
## V Experiments and Analysis
The experimental setup of our research is shown in Fig. 5. We use one of the
two UR3 arms and the Robotiq 2F-85 gripper attached to its end flange for
object manipulation. The gripper's finger pads are covered with rubber and form soft-finger contacts during grasping. Two wooden objects, a stick and a duck-board, are prepared to verify the proposed approach's efficacy.
The various parameters of the objects are listed in Table I.
Figure 5: The experimental setup of our system. One of the two UR3 arms and the Robotiq 2F-85 gripper attached to its end flange is used to examine our planner. The objects used in the experiments are shown in front of the robot. They include a wooden stick and a wooden duck-board.

Object | Length (mm) | Width (mm) | Thickness/Diameter (mm) | Weight (g)
---|---|---|---|---
Duck-board | 750 | 330 | 35 | 920
Stick | 656 | - | 32 | 280

TABLE I: Dimensions and weights of the objects used in the experiments.
We designed two sets of experiments to examine the developed planner. In the
first set, we only consider drooping manipulation. The goal is to verify that
our method can successfully carry out grasp transitions using the criterion
shown in section III-B. The second set concentrates on the hierarchical
planner’s efficacy in generating motion sequences of combined regrasping and
drooping.
The first set contains two tasks. In the first task, we require the UR3 arm to
move the wooden stick from a start pose on the table to a tilted goal pose
facing the right direction. The start and goal poses are denoted by green
arrows in Fig.6(a). Using the drooping manipulation graph, the robot
successfully found a sequence of critical poses to finish the task. The
sequence is marked by ID (1)-(3) in the figure. It involves one in-hand grasp transition at the edge that connects ID (1) and (2). At edge
(2)-(3), the robot kept the same grasp pose. The result of the real-world
execution for this task is shown in Fig.8(a). The second task’s start and goal
object poses are denoted by the green arrows in Fig.6(b). The planner found a sequence involving two in-hand grasp transitions. The sequence of
critical poses is denoted by ID (1)-(4) in the figure. The in-hand grasp
transitions appeared at edges (1)-(2) and (3)-(4). The result of the real-
world executions is shown in Fig.8(b). The second sequence shows interesting
behavior. The robot cannot complete the task by performing a direct upward
motion because a configuration obstacle blocked the direct connection between
the joint configurations at ID (1) and (4). To solve the problem, the planner
tried transiting to the grasp pose at ID (2). The robot could move from ID (1)
to (2) with a direct upward motion. However, the direct connection between ID
(2) and (4) remained obstructed by configuration obstacles. The planner
continued to search the graph and found an intermediate grasp pose ID (3). The
robot could then move downward from ID (2) to (3) and upward from (3) to (4), and thus successfully finish the task. The critical
grasp poses at ID (2) and (3) are the same in the object’s local coordinate
system. The edges at (1)-(2) and (3)-(4) indicate two in-hand grasp
transitions.
Figure 6: Results of the first experimental set. The goal of this set is to
examine the drooping manipulation graph. (a) The key poses of the first task
in this set. The sequence involves one grasp transition. The robot picks up
the object at the start pose using grasp pose (1), moves it up to the transit
pose using grasp pose (2) to change to a proper in-hand pose, and finally
moves the object to the goal pose while keeping the same grasp pose. The right
part of this subfigure shows the manipulation graph and the node/edge
information. The yellow segments are the planned path. (b) Key poses of the
second task. The sequence involves two grasp transitions. The robot moves up
from grasp pose (1) to (2) to realize the first grasp transition. From grasp
pose (2) to (3), the robot moves down while preserving the obtained grasp
transition in the previous step. Then, from pose (3) to (4), the second grasp
transition is conducted, and the object reached its goal pose. Like (a), the
right part of this subfigure shows the manipulation graph and the node/edge
information. The yellow segments are the planned path. Figure 7: Results of
the second experimental set. (a) The results of the first task in this set.
The key poses include two regrasps and one transition. The robot grasps the object using grasp pose (1) and slides it on the table to an intermediate pose while keeping the grasp. After that, the robot changes to grasp pose (3) to hold
one end of the object. From pose (3) to (4), the object is delivered to its
goal pose using constrained drooping. During the delivery, the grasp pose is
transited to (4). (b) The planned sequence for the second task in this set.
The planner finds a sequence where the robot slides the object to an
intermediate pose for constrained drooping. Like Fig.6, the right parts of the
subfigures show the manipulation graph and the node/edge information. The
yellow segments are the planned path. Figure 8: Real-world executions of the
four tasks. (a-b) The two tasks in the first experiment set. (c-d) The two
tasks in the second experiment set. Cartesian planning is used to interpolate
the intermediate motion between the critical poses found from the manipulation
graph.
The second set of experiments aims to examine the planner’s effectiveness in
combining regrasping and constrained drooping. It also includes two tasks. In
the first task, we ask the planner to move the stick from a start pose on the table to a goal pose facing forward, as shown in Fig. 7(a). The path found by our planner involved both regrasping and constrained drooping motion. Since the start pose was far from the robot, the robot grasped the stick using grasp pose ID (1) and slid it to an intermediate pose. The grasp pose was kept the same in the process. Then, at the intermediate pose, the robot regrasped the object and changed its grasp pose to ID (3). Finally, the robot performed constrained drooping to deliver the object to the goal pose. During the constrained drooping, the grasp pose was kept the same; there were no in-hand grasp transitions. The real-world execution of the planned sequence is shown in Fig. 8(c). In the second task, the robot is asked to manipulate the
duck-board. The sequence of key poses for this task is shown in Fig. 7(b). The
planner found a sequence that brought the object to an intermediate pose for
drooping. The intermediate pose was found at the connecting nodes in the
combined graph, and thus the robot did not conduct release and regrasp.
Instead, it directly delivered the object to its goal pose with the help of
constrained drooping. The real-world executions of the sequence are shown in
Fig. 8(d). The results showed that the combined planning of regrasping and constrained drooping effectively finds motion sequences for previously unsolvable tasks. In particular, regrasping enables a robot to repose an object into an appropriate state for drooping.
## VI Discussions
The simulation and real-world results show the feasibility of our proposed
method to manipulate long and heavy objects using a single arm and with the
support of a table surface. The planner can autonomously determine intermediate object and grasp poses for regrasping and drooping between the start and goal, and generate joint motion using Cartesian planning. In particular, constrained drooping enabled changing the in-hand pose of the object and improved the connections among the key poses of the manipulated object.
While previous research has focused on using multiple robots to manipulate
long and heavy objects, the results of this work demonstrate that a single arm
with a supporting surface can be satisfactory for the same purpose. This
finding can help reduce the scale of system integration because fewer
manipulators are needed to solve the same problem. However, the generality of the results is subject to limitations related to the objects' dimensions and physical properties. It is beyond this study's scope to determine the limits on the physical properties of objects that can be manipulated by a single arm and a support surface, or to solve the related control and learning problems [26,
27, 28]. These out-of-scope topics are interesting directions for future
studies.
## VII Conclusions
We presented a planner for improving two-finger parallel grippers’ dexterity
to manipulate long and heavy objects. The planner could find robot motion
sequences that manipulate objects while keeping them supported by an external
support surface. The planner’s essential part was the constrained drooping,
which allowed tilting an object around a supporting point on the support
surface with upward or downward motion. The planner considered a combination
of constrained drooping and regrasping to build a manipulation graph and
search the graph to get a manipulation sequence. The intermediate robot
motions between the sequences were generated using Cartesian planning. The
method was verified using various objects and tasks. The results showed that
the method enabled using a single manipulator to maneuver long and heavy
objects, rather than multiple arms as assumed in previous literature. In the
future, we hope to refine the model of the soft-finger contact with tactile
sensors and apply the method to objects with unknown materials and mass
distributions.
## References
* [1] S. Sina Mirrazavi Salehian, N. Figueroa, and A. Billard, “Coordinated multi-arm motion planning: Reaching for moving objects in the face of uncertainty,” in _Proceedings of Robotics: Science and Systems_ , 2016.
* [2] Z. Xian, P. Lertkultanon, and Q.-C. Pham, “Closed-chain manipulation of large objects by multi-arm robotic systems,” _IEEE Robotics and Automation Letters_ , vol. 2, no. 4, pp. 1832–1839, 2017.
* [3] N. Dehio, J. Smith, D. L. Wigand, G. Xin, H.-C. Lin, J. J. Steil, and M. Mistry, “Modeling and control of multi-arm and multi-leg robots: Compensating for object dynamics during grasping,” in _2018 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2018, pp. 294–301.
* [4] A. Yamashita, J. Sasaki, J. Ota, and T. Arai, “Cooperative manipulation of objects by multiple mobile robots with tools,” in _Proceedings of the 4th Japan-France/2nd Asia-Europe congress on mechatronics_ , vol. 310, 1998, p. 315.
* [5] K. Alipour and S. A. A. Moosavian, “Point-to-point stable motion planning of wheeled mobile robots with multiple arms for heavy object manipulation,” in _2011 IEEE International Conference on Robotics and Automation_. IEEE, 2011, pp. 6162–6167.
* [6] F. Ohashi, K. Kaminishi, J. D. F. Heredia, H. Kato, T. Ogata, T. Hara, and J. Ota, “Realization of heavy object transportation by mobile robots using handcarts and outrigger,” _Robomech Journal_ , vol. 3, no. 1, p. 27, 2016.
* [7] Y. Aiyama, M. Inaba, and H. Inoue, “Pivoting: A new method of graspless manipulation of object by robot fingers,” in _Proceedings of 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’93)_ , vol. 1. IEEE, 1993, pp. 136–143.
* [8] B. Carlisle, K. Goldberg, A. Rao, and J. Wiegley, “A pivoting gripper for feeding industrial parts,” in _Proceedings of the 1994 IEEE International Conference on Robotics and Automation_. IEEE, 1994, pp. 1650–1655.
* [9] M. Raessa, J. C. Y. Chen, W. Wan, and K. Harada, “Human-in-the-loop robotic manipulation planning for collaborative assembly,” _IEEE Transactions on Automation Science and Engineering_ , 2020.
* [10] S. A. A. Moosavian and A. Pourreza, “Heavy object manipulation by a hybrid serial-parallel mobile robot,” _International Journal of Robotics & Automation_, vol. 25, no. 2, p. 109, 2010.
* [11] A. Bicchi, “Hands for dexterous manipulation and robust grasping: A difficult road toward simplicity,” _IEEE Transactions on robotics and automation_ , vol. 16, no. 6, pp. 652–662, 2000.
* [12] P. Tournassoud, T. Lozano-Pérez, and E. Mazer, “Regrasping,” in _Robotics and Automation. Proceedings. 1987 IEEE International Conference on_ , vol. 4. IEEE, 1987, pp. 1924–1928.
* [13] C. Liu, H. Qiao, J. Su, and P. Zhang, “Vision-based 3-d grasping of 3-d objects with a simple 2-d gripper,” _IEEE Transactions on Systems, Man, and Cybernetics: Systems_ , vol. 44, no. 5, pp. 605–620, 2014.
* [14] R. O. Ambrose, H. Aldridge, R. S. Askew, R. R. Burridge, W. Bluethmann, M. Diftler, C. Lovchik, D. Magruder, and F. Rehnmark, “Robonaut: Nasa’s space humanoid,” _IEEE Intelligent Systems and Their Applications_ , vol. 15, no. 4, pp. 57–63, 2000.
* [15] W. Wan and K. Harada, “Reorientating objects with a gripping hand and a table surface,” in _Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on_. IEEE, 2015, pp. 101–106.
* [16] ——, “Regrasp planning using 10,000 s of grasps,” in _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2017, pp. 1929–1936.
* [17] R. Calandra, A. Owens, D. Jayaraman, J. Lin, W. Yuan, J. Malik, E. H. Adelson, and S. Levine, “More than a feeling: Learning to grasp and regrasp using vision and touch,” _IEEE Robotics and Automation Letters_ , vol. 3, no. 4, pp. 3300–3307, 2018.
* [18] N. C. Dafle, A. Rodriguez, R. Paolini, B. Tang, S. S. Srinivasa, M. Erdmann, M. T. Mason, I. Lundberg, H. Staab, and T. Fuhlbrigge, “Extrinsic dexterity: In-hand manipulation with external forces,” in _2014 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2014, pp. 1578–1585.
* [19] F. R. Hogan and A. Rodriguez, “Feedback control of the pusher-slider system: A story of hybrid and underactuated contact dynamics,” in _Algorithmic Foundations of Robotics XII_. Springer, 2020, pp. 800–815.
* [20] K. M. Lynch and M. T. Mason, “Stable pushing: Mechanics, controllability, and planning,” _The International Journal of Robotics Research_ , vol. 15, no. 6, pp. 533–556, 1996.
* [21] J. Zhou, R. Paolini, J. A. Bagnell, and M. T. Mason, “A convex polynomial force-motion model for planar sliding: Identification and application,” in _2016 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2016, pp. 372–377.
* [22] N. Chavan-Dafle and A. Rodriguez, “Prehensile pushing: In-hand manipulation with push-primitives,” in _2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2015, pp. 6215–6222.
* [23] A. Holladay, R. Paolini, and M. T. Mason, “A general framework for open-loop pivoting,” in _2015 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2015, pp. 3675–3681.
* [24] Y. Hou, Z. Jia, and M. T. Mason, “Fast planning for 3d any-pose-reorienting using pivoting,” in _2018 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2018, pp. 1631–1638.
* [25] N. Sawasaki and H. Inoue, “Tumbling objects using a multi-fingered robot,” _Journal of the Robotics Society of Japan_ , vol. 9, no. 5, pp. 560–571, 1991.
* [26] G. Peng, C. Yang, W. He, and C. L. P. Chen, “Force sensorless admittance control with neural learning for robots with actuator saturation,” _IEEE Transactions on Industrial Electronics_ , vol. 67, no. 4, pp. 3138–3148, 2020.
* [27] H. Huang, T. Zhang, C. Yang, and C. L. P. Chen, “Motor learning and generalization using broad learning adaptive neural control,” _IEEE Transactions on Industrial Electronics_ , vol. 67, no. 10, pp. 8608–8617, 2020.
* [28] C. Yang, C. Chen, W. He, R. Cui, and Z. Li, “Robot learning system based on adaptive neural control and dynamic movement primitives,” _IEEE Transactions on Neural Networks and Learning Systems_ , vol. 30, no. 3, pp. 777–787, 2019.
|
# Comment on “Constraints on Low-Energy Effective Theories from Weak Cosmic
Censorship”
Jie Jiang<EMAIL_ADDRESS>Department of Physics, Beijing Normal
University, Beijing 100875, China Aofei Sang<EMAIL_ADDRESS>Department of Physics, Beijing Normal University, Beijing 100875, China Ming
Zhang Corresponding author<EMAIL_ADDRESS>Department of Physics,
Jiangxi Normal University, Nanchang 330022, China
###### Abstract
Recently, it was argued in [Phys. Rev. Lett. 126, 031102 (2021)] that the weak cosmic censorship conjecture (WCCC) can serve as a constraint on high-order effective field theories. However, we find that there is a key error in their approximate black hole solution. After correcting it, their calculation cannot show the ability of the WCCC to constrain gravitational theories.
Weak cosmic censorship conjecture (WCCC) Penrose:1969pc is a basic principle
that guarantees the predictability of spacetime. One critical scientific
question is whether the WCCC can give a new constraint to the gravitational
theory. Recently, an attempt to this question was given in Ref. Chenprl .
Using Sorce-Wald’s method SW under the first-order approximation, the authors
showed that the WCCC fails for some possible generations; thus they argued
that the WCCC provides a constraint to the high-order low-energy effective
theories(EFT). However, after examining their letter, we found that key errors
occur in their approximate black hole solution. We will clarify this issue and
show that their discussion cannot give the constraint to the high-order
theories.
The Lagrangian of the EFT considered in Chenprl is given by
$\displaystyle\begin{aligned}
\mathcal{L}&=\frac{1}{2}R-\frac{1}{4}F_{ab}F^{ab}+c_{1}R^{2}+c_{2}R_{ab}R^{ab}+c_{3}R_{abcd}R^{abcd}\\\
&+c_{4}RF_{ab}F^{ab}+c_{5}R_{ab}F^{ac}F^{b}{}_{c}+c_{6}R_{abcd}F^{ab}F^{cd}\\\
&+c_{7}F_{ab}F^{ab}F_{cd}F^{cd}+c_{8}F_{ab}F^{bc}F_{cd}F^{da}\end{aligned}$
(1)
where $c_{i}$’s are some coupling constants which are treated as small
parameters in the calculations. The equation of motion (EOM) is given by
$\displaystyle\begin{aligned}
H^{ab}=0\,,\quad\quad\nabla_{a}E_{F}^{ab}&=0\,,\end{aligned}$ (2)
in which
$H^{ab}=E_{R}^{cde(a}R_{cde}{}^{b)}+2\nabla_{c}\nabla_{d}E_{R}^{(a|c|b)d}-E_{F}^{c(a}F^{b)}{}_{c}-\frac{1}{2}g^{ab}\mathcal{L}\,,$
with $E_{R}^{abcd}=\partial\mathcal{L}/\partial R_{abcd}$ and
$E_{F}^{ab}=\partial\mathcal{L}/\partial F_{ab}$.
First, we reexamine the solution given by Eqs. (6) and (7) in Chenprl . With a
straightforward calculation, it is easy to check
$\displaystyle\begin{aligned}
\nabla_{a}E_{F}^{ab}(dt)_{b}&=\frac{2q^{3}}{r^{7}}[c_{2}+4c_{3}+10c_{4}+3(c_{5}+c_{6})]+\mathcal{O}(c_{i}^{2})\quad\\\
&\neq 0\,,\end{aligned}$ (3)
which means that there are some errors in the solution given by Ref. Chenprl .
We start with the most general spherically symmetric static metric
$\displaystyle\begin{aligned}
ds^{2}=-f(r)dv^{2}+2\mu(r)dvdr+r^{2}(d\theta^{2}+\sin^{2}\theta
d\phi^{2})\,,\end{aligned}$ (4)
and the Maxwell field
$\displaystyle\begin{aligned} \bm{A}=\Psi(r)dv\,.\end{aligned}$ (5)
Under the leading-order correction of $c_{i}$, we can expand the solution to
$\displaystyle\begin{aligned}
&f_{0}(r)=1-\frac{m}{r}+\frac{q^{2}}{2r^{2}}+f_{1}(r)+\mathcal{O}(c_{i}^{2})\,,\\\
\mu_{0}(r)=1+&\mu_{1}(r)+\mathcal{O}(c_{i}^{2})\,,\,\Psi_{0}(r)=-\frac{q}{r}+\Psi_{1}(r)+\mathcal{O}(c_{i}^{2})\,,\end{aligned}$
(6)
where we used the fact that the background spacetime is a Reissner-Nordström black hole solution. Here $f_{1}(r)$, $\mu_{1}(r)$ and $\Psi_{1}(r)$ are linear functions of $c_{i}$.
From the EOM $\nabla_{a}E_{F}^{ab}=0$, we can obtain
$\displaystyle\begin{aligned}
\Psi_{1}(r)=&\frac{2q}{5r^{5}}[c_{5}q^{2}+c_{6}(6q^{2}-5mr)+(8c_{7}+4c_{8})q^{2}]\\\
&+q\int\frac{\mu_{1}(r)}{r^{2}}dr\,.\end{aligned}$ (7)
Substituting the above result into $H^{vv}=0$, it is easy to obtain
$\displaystyle\begin{aligned}
\mu_{1}(r)=\frac{q^{2}}{r^{4}}(c_{2}+4c_{3}+10c_{4}+3c_{5}+3c_{6})\,,\end{aligned}$
(8)
which gives
$\displaystyle\begin{aligned}
\Psi_{1}(r)=&-\frac{q^{3}}{5r^{2}}\left[c_{2}+4c_{3}+10c_{4}+c_{5}\right.\\\
&\left.-\left(9-10mrq^{-2}\right)c_{6}-16c_{7}-8c_{8}\right]\,.\end{aligned}$
(9)
This result gives a different expression for $A_{a}$ than Eq. (6) of Chenprl . Finally, using $H^{\theta\theta}=0$, we find that $f(r)$ has the same expression as $g_{tt}$ in Chenprl . Therefore, in the first-order
gedanken experiments, the condition of not destroying an extremal solution is
also given by Eq. (14) in Chenprl , i.e.,
$\displaystyle\begin{aligned} \delta m-\sqrt{2}\delta
q\left(1+\frac{4c_{0}}{5q^{2}}\right)\geq 0\,.\end{aligned}$ (10)
with
$\displaystyle\begin{aligned} c_{0}\equiv
c_{2}+4c_{3}+c_{5}+c_{6}+4c_{7}+2c_{8}\,.\end{aligned}$ (11)
The condition that the test particle can drop into the horizon or the
infalling matter satisfies the null energy condition is given by Eqs. (18) and
(27) of Chenprl , i.e.,
$\displaystyle\begin{aligned} \delta m-\Phi_{H}^{c}\delta q\geq
0\,,\end{aligned}$ (12)
in which $\Phi_{H}^{c}\equiv-\left.\xi^{a}A_{a}\right|_{H}$ with
$\xi^{a}=(\partial/\partial v)^{a}$ is the electric potential of the black
hole. Using our corrected expression (9) of $A_{a}$, for the extremal black
hole, we have
$\displaystyle\begin{aligned}
\Phi_{H}^{\text{ext}}=\sqrt{2}\left(1+\frac{4c_{0}}{5q^{2}}\right)+\mathcal{O}(c_{i}^{2})\,,\end{aligned}$
(13)
which is different from Eq. (11) of Chenprl . Then, inequality (12) takes the same form as inequality (10), which implies that the extremal charged black hole cannot be destroyed. This result is contrary to that of Ref. Chenprl , where the extremal black holes can be destroyed. This implies that, after correcting the solution, their letter Chenprl cannot show the ability of the WCCC to constrain gravitational theories.
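As a leading-order sanity check (independent of the $c_i$ corrections), one can verify with a computer algebra system that the background metric function $f_0=1-m/r+q^2/(2r^2)$ is extremal at $m=\sqrt{2}q$ with horizon radius $r_h=q/\sqrt{2}$, so that the zeroth-order electric potential at the horizon is $q/r_h=\sqrt{2}$, consistent with the leading term of Eq. (13):

```python
import sympy as sp

r, m, q = sp.symbols('r m q', positive=True)
f0 = 1 - m / r + q**2 / (2 * r**2)

# Extremality: f0 and its radial derivative vanish at the same radius.
sol = sp.solve([f0, sp.diff(f0, r)], [r, m], dict=True)[0]
r_h, m_ext = sol[r], sol[m]

assert sp.simplify(r_h - q / sp.sqrt(2)) == 0
assert sp.simplify(m_ext - sp.sqrt(2) * q) == 0
# Zeroth-order potential at the horizon: -Psi_0(r_h) = q/r_h = sqrt(2)
assert sp.simplify(q / r_h - sp.sqrt(2)) == 0
```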
## References
* (1) R. Penrose, Gravitational collapse: The role of general relativity, Riv. Nuovo Cim. 1 , 252-276 (1969).
* (2) B. Chen, F. L. Lin, B. Ning and Y. Chen, Constraints on low-energy effective theories from weak cosmic censorship, Phys. Rev. Lett. 126, 031102 (2021).
* (3) J. Sorce and R.M. Wald, Gedanken experiments to destroy a black hole. II. Kerr-Newman black holes cannot be overcharged or overspun, Phys. Rev. D 96, 104014 (2017).
|
HRI-RECAPP-2021-001
# Scalar Multiplet Dark Matter in a Fast Expanding Universe: resurrection of
the desert region
Basabendu Barman<EMAIL_ADDRESS>Department of Physics, IIT Guwahati,
Guwahati-781039, India Purusottam Ghosh<EMAIL_ADDRESS>Regional
Centre for Accelerator-based Particle Physics, Harish-Chandra Research
Institute, HBNI, Chhatnag Road, Jhunsi, Allahabad-211019, India Farinaldo S.
Queiroz<EMAIL_ADDRESS>International Institute of Physics,
Universidade Federal do Rio Grande do Norte, Campus Universitário, Lagoa Nova,
Natal-RN 59078-970, Brazil
Departamento de Física, Universidade Federal do Rio Grande do Norte,
59078-970, Natal, RN, Brasil Abhijit Kumar Saha<EMAIL_ADDRESS>School
of Physical Sciences, Indian Association for the Cultivation of Science, 2A
$\&$ 2B Raja S.C. Mullick Road, Kolkata 700 032, India
###### Abstract
Abstract: We examine the impact of a faster expanding Universe on the
phenomenology of scalar dark matter (DM) associated with $SU(2)_{L}$ multiplets. Earlier works assuming a radiation-dominated Universe have reported a desert region for both inert $SU(2)_{L}$ doublet and triplet DM candidates, where the DM is underabundant. We find that the existence of a faster expanding component before BBN can revive a substantial part of the desert parameter space consistent with relic density requirements and other direct and indirect search bounds. We also review the possible collider search prospects of the newly obtained parameter space and predict that such a region might be probed at future colliders with improved sensitivity via a disappearing/stable charged track.
## I Introduction
Production of dark matter (DM) in scenarios with a non-standard history has
gained growing interest in recent times Chung:1998rq ; Okada:2004nc ;
Okada:2007na ; Okada:2009xe ; Allahverdi:2018aux ; Baules:2019zwk ;
Waldstein:2016blt ; Arias:2019uol ; Cosme:2020nac ; Aparicio:2016qqb ;
Han:2019vxi ; Drees:2017iod ; Arcadi:2020aot ; Cosme:2020mck ; Bernal:2019mhf
; Allahverdi:2019jsc ; Maldonado:2019qmp ; Drees:2018dsj ; Bernal:2018ins ;
Visinelli:2017qga ; Arbey:2018uho ; Berger:2020maa ; McDonald:1989jd ;
Poulin:2019omz ; Hardy:2018bph ; Redmond:2017tja ; DEramo:2017gpl ;
DEramo:2017ecx ; Bernal:2018kcw ; Chanda:2019xyl ; Gelmini:2019esj ;
Biswas:2018iny ; Fernandez:2018tfa ; Betancur:2018xtj ; Mahanta:2019sfo ;
Allahverdi:2020bys . Since the cosmological history of the early Universe
prior to Big Bang Nucleosynthesis (BBN) is largely unknown, the possibility of
a non-standard era in the early Universe remains open. In fact, there is no
fundamental reason to assume that the Universe was radiation-dominated (RD) in
the pre-BBN regime at $t\sim 1~{}\rm sec$. The history of the Universe can be
modelled, in general, by the presence of a fluid with an arbitrary
equation-of-state parameter, which is zero for matter domination. If the
equation-of-state parameter of a fluid is larger than that of radiation, the
fluid acts as a fast expanding component.
The study of DM phenomenology in the presence of a modified cosmological epoch
has been performed widely and shows several significant observational
consequences Artymowski:2016tme ; Hardy:2018bph ; Redmond:2017tja . In
DEramo:2017gpl , a model-independent analysis of DM phenomenology in a fast
expanding Universe is worked out. It has been observed that if the DM freezes
out during the fast expansion of the Universe, the interaction strength
required to satisfy the relic bound from the PLANCK experiment is larger than
in the standard scenario. At some stage during the evolution of the Universe,
at latest before BBN, the domination of the fast expanding component has to
end so that the standard RD Universe takes over. A similar phenomenological
study with freeze-in production of DM in a fast expanding Universe has been
explored in DEramo:2017ecx . Following this proposal, further efforts have
been made to develop the DM phenomenology of such non-standard scenarios
within different well-established beyond-Standard-Model frameworks. For
example, the phenomenology of a real gauge-singlet scalar DM in non-standard
cosmologies can be found in Bernal:2018kcw . Well-motivated analyses of the
revival of $Z$-boson and Higgs mediated DM models with alternative cosmologies
(late matter decay) are presented in Chanda:2019xyl ; Hardy:2018bph . In
Arcadi:2020aot ; Biswas:2018iny ; Fernandez:2018tfa the possibility of sterile
neutrinos as dark matter candidates in a modified cosmology has been
discussed. Such sterile neutrinos can provide a sensitive probe of the pre-BBN
epoch, as pointed out in Gelmini:2019esj . In Betancur:2018xtj the case of
fermion DM originating from multiplets of different order is studied.
Motivated by these works, we aim in the present paper to resurrect the
so-called desert region in the parameter space of the $SU(2)_{L}$ inert
doublet (IDM) and triplet dark matter (ITDM) models Cirelli:2005uq ;
Hambye:2009pw by considering the presence of a faster-expanding component
(kination or faster than kination) in the early Universe. In the context of
single-component111In a multi-component DM framework, individual DM candidates
can be under-abundant, and the desert region is thus not an issue there. Such
frameworks involving multi-component DM are proposed in Bhattacharya:2019fgs ;
Chakrabarty:2021kmr ; DuttaBanik:2020jrj . IDM dark matter, it is well known
LopezHonorez:2010tb ; Borah:2017dfn that the intermediate DM mass regime
$80\lesssim m_{\text{DM}}\lesssim 525~{}\rm GeV$ suffers from
under-abundance. This occurs due to the large interaction rate of the DM
(mediated by the $SU(2)_{L}$ gauge bosons) with the SM particles, resulting in
late freeze-out and consequently a smaller abundance. This particular mass
window of the IDM is therefore referred to as the desert in the
relic-density-allowed parameter space of the DM. On the other hand, for
single-component DM that stems from an inert scalar triplet, the correct relic
density is achieved only at a very large DM mass $\gtrsim 2~{}\rm TeV$ under
standard freeze-out assumptions. This happens because the radiative mass
splitting between the charged and neutral components of the scalar triplet is
small, $\sim 166~{}\rm MeV$, leading to large co-annihilation and hence DM
under-abundance. Several prescriptions have been put forward for the revival
of the IDM desert. These ideas basically revolve around extending the SM
particle content Borah:2017dfn ; Chakraborti:2018aae . The case of the
scotogenic DM model in a modified cosmological scenario has been discussed
earlier in Mahanta:2019sfo . Although the authors of Mahanta:2019sfo briefly
remarked on the impact of a non-standard Universe on the DM relic abundance,
their work focuses on addressing neutrino mass and leptogenesis. Thus, a
detailed investigation of the DM phenomenology and of the impact of direct,
indirect and collider searches on the DM parameter space is warranted.
In the first part of this work we make a precise prediction for the allowed
parameter space of the usual IDM scenario in the presence of a fast expanding
Universe. We also elucidate in detail the effect of fast expansion on the
subsequent collider signatures of the model. We first obtain the parameter
space of the IDM that satisfies the relic abundance criterion by varying the
relevant parameters that control the expansion of the Universe. We find that a
significant part of the relic-allowed parameter space is further disfavored
upon imposing the direct and indirect search constraints together with the
requirement of DM thermalization, which in turn directly restricts the amount
of fast expansion. Since the mass differences of the DM with the other neutral
and charged eigenstates turn out to be small, the collider search of the
allowed parameter space is limited and can proceed through the identification
of the charged-track signal of a long-lived charged scalar. We anticipate that
the improved sensitivity of the CMS/ATLAS searches CMS:2014gxa ;
Khachatryan:2015lla ; Sirunyan:2018ldc can be used as a useful tool to test
the early-Universe history before BBN. In the later part we extend our
analysis to a $SU(2)_{L}$ triplet DM model with zero hypercharge. Similar to
the IDM case, the existence of a desert region for triplet DM has been noted
in earlier works FileviezPerez:2008bj ; Araki:2011hm ; Chao:2018xwz ;
Jangid:2020qgo ; Fiaschi:2018rky ; Betancur:2017dhy ; Lu:2016dbc ;
Lu:2016ucn ; Bahrami:2015mwa . We use the same methodology of
faster-than-usual expansion to revive part of the desert, confronting all
possible experimental bounds (including direct and indirect searches), which,
to the best of our knowledge, has not been done earlier.
The paper is organised as follows: in Sec. II we briefly sketch the
non-standard cosmological framework that arises due to fast expansion; the
phenomenology of inert doublet DM in the light of fast expansion is
elaborated in Sec. III, where we discuss the modification of the Boltzmann
equation due to the modified Hubble rate in subsection III.1.1; subsection
III.1.2 illustrates how the DM yield gets modified once fast expansion is
invoked; a detailed parameter-space scan showing the constraints from the DM
relic abundance and from direct and indirect searches is discussed in
subsection III.1.3; possible collider signatures of the revived parameter
space are discussed in subsection III.1.4; the fate of scalar triplet DM in a
fast expanding Universe is illustrated in Sec. III.2; and finally, in Sec. IV
we conclude by summarizing our findings.
## II Nonstandard scenarios of the Universe
Here we briefly present the recipe to analyze the early Universe considering
both standard and non-standard scenarios. The expansion rate of the Universe
is measured by the Hubble parameter $\mathcal{H}$, which is connected to the
total energy density of the Universe through the standard Friedmann equation.
In the standard case, it is assumed that the Universe was radiation dominated
from the reheating era up to BBN. Here we consider a somewhat different
possibility, namely that the Universe before BBN was occupied by two species,
radiation and $\eta$, with energy densities $\rho_{\rm rad}$ and
$\rho_{\eta}$ respectively. The equation of state for a particular component
is given by:
$\displaystyle p=\omega\rho,$ (1)
where $p$ stands for the pressure of that component. For radiation,
$\omega_{R}=\frac{1}{3}$, while for $\eta$, $\omega_{\eta}$ can be different.
The $\omega_{\eta}=0$ case is known as early matter domination, while
$\omega_{\eta}=1$ is dubbed a fast expanding Universe. Irrespective of the
nature of $\eta$, the energy component $\rho_{\eta}$ must become subdominant
compared to $\rho_{\rm rad}$ before BBN takes place. This poses a strong lower
bound $T\gtrsim(15.4)^{1/n}$ MeV on the temperature at which radiation
domination must be restored, before the onset of BBN (see Appendix B).
Considering the presence of the new species $\eta$ along with the radiation
field, the total energy budget of the Universe is
$\rho=\rho_{\text{rad}}+\rho_{\eta}$. In standard cosmology, the $\eta$ field
is absent, and we simply write $\rho=\rho_{\text{rad}}$. The energy density of
the radiation component can always be expressed as a function of temperature,
$\rho_{\text{rad}}(T)=\frac{\pi^{2}}{30}g_{*}(T)T^{4},$ (2)
where $g_{*}(T)$ stands for the effective number of relativistic degrees of
freedom at temperature $T$. Assuming entropy conservation per comoving volume,
i.e., $s\,a^{3}=$ const., one finds $\rho_{\text{rad}}(t)\propto a(t)^{-4}$.
In the case of a faster expansion of the Universe, the energy density of the
$\eta$ field is expected to redshift more rapidly than radiation. Accordingly,
one can write $\rho_{\eta}\propto a(t)^{-(4+n)}$ with $n>0$.
The entropy density of the Universe is parameterized as
$s(T)=\frac{2\pi^{2}}{45}\,g_{*s}(T)\,T^{3}$, where $g_{*s}$ is the effective
number of relativistic degrees of freedom contributing to the entropy density.
Utilizing energy conservation, a general form of $\rho_{\eta}$ can be
constructed as:
$\rho_{\eta}(T)=\rho_{\eta}(T_{R})\,\left(\frac{g_{*s}(T)}{g_{*s}(T_{R})}\right)^{(4+n)/3}\left(\frac{T}{T_{R}}\right)^{(4+n)}.$
(3)
The temperature $T_{R}$ is an unknown parameter ($>T_{\text{BBN}}$) and can
safely be taken as the point of equality of the two energy densities:
$\rho_{\eta}(T_{R})=\rho_{\text{rad}}(T_{R})$. Using this criterion, it is
simple to specify the total energy density at any temperature ($T>T_{R}$) as
DEramo:2017gpl
$\displaystyle\rho(T)$ $\displaystyle=\rho_{rad}(T)+\rho_{\eta}(T)$ (4)
$\displaystyle=\rho_{rad}(T)\left[1+\frac{g_{*}(T_{R})}{g_{*}(T)}\left(\frac{g_{*s}(T)}{g_{*s}(T_{R})}\right)^{(4+n)/3}\left(\frac{T}{T_{R}}\right)^{n}\right]$
(5)
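For concreteness, the scalings of Eqs. (2)-(5) can be sketched numerically. The following is a minimal sketch, assuming for simplicity that $g_{*}=g_{*s}$ is constant, so that the bracket in Eq. (5) reduces to $1+(T/T_{R})^{n}$; the function names are ours.

```python
import math

def rho_rad(T, g_star=106.75):
    """Radiation energy density of Eq. (2): (pi^2/30) g_* T^4  (T in GeV, rho in GeV^4)."""
    return (math.pi**2 / 30.0) * g_star * T**4

def rho_total(T, T_R, n, g_star=106.75):
    """Total energy density of Eq. (5) with g_* = g_*s taken constant,
    so the bracket reduces to 1 + (T/T_R)^n."""
    return rho_rad(T, g_star) * (1.0 + (T / T_R)**n)
```

At $T=T_{R}$ the two components contribute equally, while for $T\gg T_{R}$ the $(T/T_{R})^{n}$ term dominates, reproducing the $\eta$-dominated regime discussed above.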
From the above equation it is evident that the energy density of the Universe
at any temperature $T>T_{R}$ is dominated by the $\eta$ component. The
standard Friedmann equation connecting the Hubble parameter with the energy
density of the Universe reads:
$\displaystyle\mathcal{H}^{2}=\frac{\rho}{3M_{\text{Pl}}^{2}},$ (6)
with $M_{\text{Pl}}=2.4\times 10^{18}$ GeV being the reduced Planck mass. At
temperatures higher than $T_{R}$, and taking $g_{*}(T)=\bar{g}_{*}$ to be
approximately constant, the Hubble rate can be recast into the form
DEramo:2017gpl
$\displaystyle\mathcal{H}(T)$
$\displaystyle\approx\frac{\pi\bar{g}_{*}^{1/2}}{3\sqrt{10}}\frac{T^{2}}{M_{\text{Pl}}}\left(\frac{T}{T_{R}}\right)^{n/2},~{}~{}~{}~{}({\rm
with~{}~{}}T\gg T_{R}),$ (7)
$\displaystyle=\mathcal{H}_{R}(T)\left(\frac{T}{T_{R}}\right)^{n/2},$
where $\mathcal{H}_{R}(T)\simeq 0.33~{}\bar{g}_{*}^{1/2}\frac{T^{2}}{M_{\rm
Pl}}$ is the Hubble rate of a radiation-dominated Universe. For the SM,
$\bar{g}_{*}$ can be identified with the total SM degrees of freedom,
$g_{*}\text{(SM)}=106.75$. It is important to note from Eq. (7) that the
expansion rate is larger than in the standard cosmological background provided
$T>T_{R}$ and $n>0$. Hence, if the DM freezes out during $\eta$ domination,
the situation will differ significantly from that in standard cosmology.
Finally, it is worth noting that $T_{R}$ cannot be so small that it alters
standard BBN. For a given value of $n$, the BBN constraint provides a lower
limit on $T_{R}$, which we derive in Appendix B:
$\displaystyle T_{R}\gtrsim\left(15.4\right)^{1/n}~{}\text{MeV}.$ (8)
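Equations (7)-(8) are straightforward to evaluate numerically; the sketch below assumes a constant $\bar{g}_{*}=106.75$ (the function names are ours).

```python
import math

M_PL = 2.4e18      # reduced Planck mass in GeV
G_STAR = 106.75    # SM relativistic degrees of freedom, taken constant

def hubble_rd(T):
    """Radiation-dominated Hubble rate: (pi sqrt(g*) / (3 sqrt(10))) T^2 / M_Pl."""
    return math.pi * math.sqrt(G_STAR) / (3.0 * math.sqrt(10.0)) * T**2 / M_PL

def hubble_fast(T, T_R, n):
    """Modified Hubble rate of Eq. (7), valid for T >> T_R."""
    return hubble_rd(T) * (T / T_R)**(n / 2.0)

def t_r_lower_bound(n):
    """BBN lower bound on T_R from Eq. (8), returned in MeV."""
    return 15.4**(1.0 / n)
```

For kination ($n=2$) the bound of Eq. (8) evaluates to $T_{R}\gtrsim\sqrt{15.4}\simeq 3.9$ MeV.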
So far we have presented the prescription for DM freeze-out in a fast
expanding Universe in a model-agnostic way. Before moving on to the next
section, we would like to provide a few examples of possible physical
realizations of the new species $\eta$. Consider $\eta$ to be a real scalar
field minimally coupled to gravity. In that case, a specific form for
$\omega(=p/\rho)$ can be written as
$\displaystyle\omega=\frac{\frac{1}{2}\left(\frac{d\eta}{dt}\right)^{2}-V(\eta)}{\frac{1}{2}\left(\frac{d\eta}{dt}\right)^{2}+V(\eta)}$
(9)
The energy density of $\eta$ redshifts as DEramo:2017ecx
$\displaystyle\rho_{\eta}\propto a^{-3\left(1+\omega\right)},$ (10)
which can be converted to $\rho_{\eta}\propto a^{-(4+n)}$ with
$\omega=\frac{1}{3}(n+1)$. For a positive scalar potential, the two extreme
limits are $\left(\frac{d\eta}{dt}\right)^{2}\ll V(\eta)$ and
$\left(\frac{d\eta}{dt}\right)^{2}\gg V(\eta)$. These correspond to
$\omega\in\left(-1,+1\right)$, leading to $n\in\left(-4,+2\right)$. The $n=2$
case is realised for a Universe dominated by a kinaton, which can be
identified with a quintessence fluid Caldwell:1997ii ; PhysRevD.37.3406 . For
theories with $n>2$ one has to consider scenarios faster than quintessence,
with a negative potential. Examples of such theories can be found in
Khoury:2001wf ; Buchbinder:2007ad , where one assumes the presence of a
pre-big-bang “ekpyrotic” phase. The key ingredient of ekpyrosis is the same as
that of inflation, namely a scalar field rolling down a self-interaction
potential. The crucial difference is that, while inflation requires a flat and
positive potential, its ekpyrotic counterpart is steep and negative. Note that
in this work we consider the kination or faster-than-kination scenario with
$n\geq 2$.
## III Scalar Multiplet Dark Matter in a fast expanding Universe
In this section we perform a phenomenological analysis of DM belonging to
different representations of scalar multiplets, with the Hubble parameter
modified under the assumption of faster-than-usual expansion in the pre-BBN
era. As mentioned in the introduction, our analysis addresses two
well-motivated DM scenarios:
* •
The inert doublet model (IDM) where the second Higgs doublet carries a non-
zero hypercharge and the DM emerges either as the CP-even or as the CP-odd
component of the second Higgs.
* •
A hypercharge-less $(Y=0)$ inert triplet scalar under $SU(2)_{L}$, whose
neutral component can be a viable DM candidate. We shall call this the inert
triplet dark matter (ITDM).
In either case one has to impose a discrete symmetry to ensure the stability
of the DM. The DM phenomenology of both of these models has been studied in
great detail against the background of a standard radiation-dominated
Universe. From these analyses it has been found that in the IDM the DM mass
range $m_{W}(\sim 80)\lesssim m_{\text{DM}}\lesssim 525~{}\rm GeV$ is
under-abundant, while in the ITDM masses below 1.9 TeV are under-abundant.
Here we would like to mention that another possibility for a scalar triplet DM
is a $Y=2$ triplet; however, for such a non-zero hypercharge multiplet, the
$Z$-mediated direct detection bound becomes severe, making most of the DM
parameter space forbidden simply by the spin-independent direct detection
bound Araki:2011hm ; Kanemura:2012rj ; DuttaBanik:2020jrj . Therefore, we
shall focus only on the $Y=0$ triplet. Our goal is, as emphasized earlier, to
see how much of the parameter space ruled out in the standard cosmological
background can be revived under the assumption of fast expansion, without
extending the particle spectrum of either model. In the following sections we
shall furnish the details of the models and explicitly demonstrate how the
non-standard cosmological scenario drastically alters the standard picture.
### III.1 The inert doublet model
Here we briefly summarize the inert doublet model (IDM) framework. The IDM
consists of an extra scalar that transforms as a doublet under the SM gauge
symmetry. An additional $Z_{2}$ symmetry is imposed, under which all the SM
fields are even while the inert doublet transforms non-trivially. This
discrete symmetry remains unbroken since the extra scalar is assumed not to
acquire a vacuum expectation value (VEV). With this minimal particle content,
the scalar potential takes the form
LopezHonorez:2006gr ; LopezHonorez:2010tb ; Arhrib:2013ela ; Queiroz:2015utg ;
Belyaev:2016lok ; Alves:2016bib ; Borah:2017dfn ; Barman:2018jhz
$\displaystyle V(H,\Phi)=$
$\displaystyle-\mu_{H}^{2}|H|^{2}+\lambda_{H}|H|^{4}+\mu_{\Phi}^{2}(\Phi^{\dagger}\Phi)+\lambda_{\Phi}(\Phi^{\dagger}\Phi)^{2}$
$\displaystyle+\lambda_{1}(H^{\dagger}H)(\Phi^{\dagger}\Phi)+\lambda_{2}(H^{\dagger}\Phi)(\Phi^{\dagger}H)$
$\displaystyle+\frac{\lambda_{3}}{2}\left[(H^{\dagger}\Phi)^{2}+h.c.\right].$
(11)
After electroweak symmetry breaking (EWSB), the SM-like Higgs doublet acquires
a non-zero vacuum expectation value. In the unitary gauge, the two scalar
doublets can be expressed as
$\displaystyle\begin{aligned} &H=\begin{pmatrix}0\\\
\frac{h+v}{\sqrt{2}}\end{pmatrix},~{}~{}\Phi=\begin{pmatrix}H^{\pm}\\\
\frac{H^{0}+iA^{0}}{\sqrt{2}}\end{pmatrix}\end{aligned},$ (12)
where $v=246~{}\rm GeV$ is the SM Higgs VEV. After minimizing the potential
along different field directions, one can obtain the following relations
between the physical masses and the associated couplings
$\displaystyle\begin{aligned}
&\mu_{H}^{2}=\frac{m_{h}^{2}}{2},~{}\mu_{\Phi}^{2}~{}=~{}m_{H^{0}}^{2}-\lambda_{L}v^{2},~{}\lambda_{3}=\frac{1}{v^{2}}(m_{H^{0}}^{2}-m_{A^{0}}^{2}),\\\
&\lambda_{2}~{}=~{}\frac{1}{v^{2}}(m_{H^{0}}^{2}+m_{A^{0}}^{2}-2m_{H^{\pm}}^{2}),\\\
&\lambda_{1}=2\lambda_{L}-\frac{2}{v^{2}}(m_{H^{0}}^{2}-m_{H^{\pm}}^{2})\end{aligned}$
(13)
where $\lambda_{L}=\frac{1}{2}(\lambda_{1}+\lambda_{2}+\lambda_{3})$, and
$m_{h},m_{H^{0}},m_{A^{0}}$ are the mass eigenvalues of the SM-like neutral
scalar found at the LHC $(m_{h}=125.09~{}\text{GeV})$, the additional (heavier
or lighter) CP-even neutral scalar, and the CP-odd neutral scalar,
respectively. $m_{H^{\pm}}$ denotes the mass of the charged scalar
eigenstates. In our case, we consider $H^{0}$ as the DM candidate with mass
$m_{H^{0}}$, which automatically implies $m_{H^{0}}<m_{A^{0},H^{\pm}}$. We
also assume
$\displaystyle\Delta M=m_{A^{0}}-m_{H^{0}}=m_{H^{\pm}}-m_{H^{0}}$ (14)
to reduce the number of free parameters222Choosing $m_{A^{0}}\neq m_{H^{\pm}}$
does not alter our conclusions.. The masses and couplings are subject to a
number of theoretical and experimental constraints, which we briefly list
below.
$\bullet$ Vacuum Stability: Stability of the 2HDM potential is ensured by the
following conditions PhysRevD.18.2574 ; Ivanov:2006yq ,
$\displaystyle\begin{gathered}\lambda_{H},\,\lambda_{\Phi}>0\,;\lambda_{1}+2\sqrt{\lambda_{H}\lambda_{\Phi}}>0\,;\\\
\lambda_{1}+\lambda_{2}-|\lambda_{3}|+2\sqrt{\lambda_{H}\lambda_{\Phi}}>0\,.\end{gathered}$
(17)
These conditions are to ensure that the scalar potential is bounded from
below.
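As an illustration, the mass-to-coupling dictionary of Eq. (13) and the conditions of Eqs. (17)-(18) can be encoded as below. This is a sketch with hypothetical function names, in which $\lambda_{H}$ and $\lambda_{\Phi}$ are supplied by hand; by construction $\lambda_{1}+\lambda_{2}+\lambda_{3}=2\lambda_{L}$, which serves as a consistency check.

```python
import math

def idm_couplings(m_h, m_H0, m_A0, m_Hpm, lam_L, v=246.0):
    """Invert Eq. (13): physical masses plus lambda_L -> quartic couplings (GeV units)."""
    lam3 = (m_H0**2 - m_A0**2) / v**2
    lam2 = (m_H0**2 + m_A0**2 - 2.0 * m_Hpm**2) / v**2
    lam1 = 2.0 * lam_L - 2.0 * (m_H0**2 - m_Hpm**2) / v**2
    return lam1, lam2, lam3

def is_allowed(lam_H, lam_Phi, lam1, lam2, lam3):
    """Boundedness-from-below conditions of Eq. (17) plus the naive
    perturbativity requirement |lambda_i| < 4 pi of Eq. (18)."""
    if lam_H <= 0.0 or lam_Phi <= 0.0:
        return False
    root = 2.0 * math.sqrt(lam_H * lam_Phi)
    stable = (lam1 + root > 0.0) and (lam1 + lam2 - abs(lam3) + root > 0.0)
    pert = all(abs(l) < 4.0 * math.pi for l in (lam_H, lam_Phi, lam1, lam2, lam3))
    return stable and pert
```

For the small splittings $\Delta M\sim 1$ GeV considered below, the induced couplings stay well below order unity, as required above.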
$\bullet$ Perturbativity:
Tree-level unitarity imposes bounds on the size of the quartic couplings
$\lambda_{i}$ and on various combinations of them LopezHonorez:2006gr . In
addition, the theory remains perturbative at any given scale if, naively,
$\displaystyle\left|\lambda_{i}\right|\lesssim 4\pi,~{}i=1,2,3,H,\Phi.$ (18)
In view of the unitarity bound, we shall keep the magnitudes of all relevant
couplings below order unity.
$\bullet$ Oblique parameters:
The splitting between the heavy scalar masses is constrained by the oblique
electroweak $T$-parameter PhysRevD.46.381 whose expression in the alignment
limit is given by Barbieri:2006dq :
$\displaystyle\begin{aligned} &\Delta
T=\frac{g_{2}^{2}}{64\pi^{2}m_{W}^{2}}\Big{\\{}\zeta\left(m_{H^{\pm}}^{2},m_{A}^{2}\right)+\zeta\left(m_{H^{\pm}}^{2},m_{H}^{2}\right)\\\
&-\zeta\left(m_{A}^{2},m_{H}^{2}\right)\Big{\\}},\end{aligned}$ (19)
with,
$\displaystyle\zeta\left(x,y\right)=\begin{cases}\frac{x+y}{2}-\frac{xy}{x-y}\ln\left(\frac{x}{y}\right),&\text{if
$x\neq y$}.\\\ 0,&\text{if $x=y$}.\\\ \end{cases}$ (20)
The contribution to the $S$ parameter is always small Barbieri:2006dq and can
safely be neglected. We thus concentrate on the $T$-parameter only, which is
bounded by the global electroweak fit results PhysRevD.98.030001 as
$\displaystyle\Delta T=0.07\pm 0.12.$ (21)
It can be understood from Eqs. (19) and (20) that the constraint on the
oblique parameter typically prohibits large mass splittings among the inert
states. However, we shall see that satisfying the other DM-related constraints
generally requires relatively small mass splittings, so the model easily
bypasses the bounds arising from the electroweak parameters.
$\bullet$ Collider bounds:
In order to comply with the $Z$ decay width measured at LEP-II Abbiendi:2013hk
; Arbey:2017gmh , the new scalars should obey the inequality
$m_{Z}<m_{H^{0}}+m_{A^{0}}$. The LEP experiments have also performed direct
searches for a charged Higgs. A combination of LEP data from searches in the
$\tau\nu$ and $cs$ final states demands $m_{H^{\pm}}\gtrsim 80~{}\rm GeV$
under the assumption that the decay $H^{\pm}\to W^{\pm}h$ is absent
Abbiendi:2013hk ; Arbey:2017gmh . As discussed in Belanger:2015kga , Run-1 of
the LHC provides relevant constraints on the IDM that significantly extend the
previous limits from LEP. Run-1 ATLAS dilepton searches exclude, at 95% CL,
inert scalar masses up to about 35 GeV for pseudoscalar masses around 100 GeV,
with the limits becoming stronger for larger $m_{A^{0}}$ Belanger:2015kga .
Also, for $m_{H^{0}}<m_{h}/2$ the SM-like CP-even Higgs can decay invisibly to
a pair of inert DM particles, which is constrained by the invisible Higgs
decay width measurement at the LHC PhysRevD.98.030001 .
#### III.1.1 IDM dark matter in the light of fast expansion
As stated earlier, we refer to the intermediate DM mass range $m_{W}\lesssim
m_{H^{0}}\lesssim 525~{}\text{GeV}$ as the IDM desert, where the observed
relic abundance of the DM cannot be generated because the DM annihilation
cross section is larger than what is required to produce the correct abundance
through the vanilla freeze-out mechanism. The inert doublet DM can
(co-)annihilate to SM states through both Higgs and $Z,W^{\pm}$ mediated
processes. The dominant contribution to the DM abundance generally comes from
DM pair annihilation to gauge-boson final states, irrespective of the choice
of $\Delta M$. Although co-annihilation of the DM with its charged counterpart
$H^{\pm}$ turns out to be important for small $\Delta M\sim 1~{}\rm GeV$, we
have checked that it provides a sub-dominant contribution to the relic
abundance. Due to the large annihilation rates (involving gauge interactions),
the DM is under-abundant within this mass range. Without extending the model
further or resorting to other DM production mechanisms, our aim is to revive
the desert region with the help of non-standard cosmology.
Figure 1: (a): Evolution of the DM relic abundance as a function of
$x=m_{H^{0}}/T$ for a RD Universe (red) and in the presence of fast expansion
for different values of $n(>0)$. The analysis is for fixed $T_{R}=3$ GeV,
$\Delta M=1~{}\rm GeV$ and $m_{H^{0}}=300~{}\rm GeV$, with the different
choices $n=\\{2,4,6\\}$ shown in blue, green and brown respectively. (b): The
DM relic density as a function of $x$ for fixed $n=4$ and different choices of
$T_{R}=\\{3,4,5\\}~{}\rm GeV$, shown via the blue, green and brown curves
respectively. In both plots the red solid curve corresponds to the usual RD
Universe with $n=0$, and the thick dashed straight line indicates the central
value of the observed DM relic abundance. Figure 2: The modified Hubble rates
(dashed lines) as a function of temperature for different values of $n$. The
red solid line indicates the DM interaction rate $\Gamma_{\text{int}}$ (see
text for details) as a function of the temperature $T$ of the Universe. The
left panel is obtained using Eq. (29), while for the right panel we compute
the thermally averaged cross section numerically to determine the DM
interaction rate.
Figure 3: Relic-satisfied points (red, blue, orange) in the
$m_{H^{0}}-\Delta M$ plane as a function of the $n$ values, considering (a)
$T_{R}=3$ GeV, (b) $T_{R}=4$ GeV, (c) $T_{R}=5$ GeV and a uniform
$\lambda_{L}=0.01$ value. The cyan region is forbidden by the indirect search
bound on the $WW$ final state.
The Boltzmann equation (BEQ) that governs the evolution of the comoving number
density of the DM in the standard radiation-dominated Universe has the
familiar form Kolb:1990vq
$\displaystyle\frac{dY_{\rm DM}}{dx}=-\frac{\langle\sigma v\rangle
s}{\mathcal{H}_{R}(T)x}\Bigl{(}Y_{\rm DM}^{2}-Y_{\rm DM}^{\rm
eq^{2}}\Bigr{)},$ (22)
where $x=\frac{m_{H^{0}}}{T}$ and $\langle\sigma v\rangle$ stands for the
thermally averaged annihilation cross section. It is convenient to recast the
DM number density in terms of the dimensionless yield
$Y_{\text{DM}}=n_{\text{DM}}/s$, with $s$ the entropy density per comoving
volume. The equilibrium number density of the DM component, expressed in terms
of the yield, is given by
$\displaystyle
Y_{\text{DM}}^{\text{eq}}=\frac{45}{4\pi^{4}}\Biggl{(}\frac{g_{\text{DM}}}{g_{*s}}\Biggr{)}x^{2}K_{2}\left(x\right)$
(23)
where $K_{2}\left(x\right)$ is the modified Bessel function of the second kind.
For the fast expanding Universe, $\mathcal{H}_{R}$ in Eq. (22) will be
replaced by $\mathcal{H}$ of Eq. (7) leading to
$\displaystyle\begin{aligned} &\frac{dY_{\rm DM}}{dx}=-\frac{A\langle\sigma
v\rangle}{x^{2-n/2}\sqrt{x^{n}+\left(\frac{m_{\text{DM}}}{T_{R}}\right)^{n}}}\,\Bigl{(}Y_{\rm
DM}^{2}-Y_{\rm DM}^{\rm eq^{2}}\Bigr{)}\end{aligned}$ (24)
with
$A=\frac{2\sqrt{2}\pi}{3\sqrt{5}}\frac{g_{*s}}{\sqrt{g_{*}}}M_{\text{pl}}m_{\text{DM}}$.
This is the BEQ of our interest.
As clarified before, in the presence of the species $\eta$ with $n>0$, the
freeze-out of the DM occurs earlier than in the radiation-dominated Universe.
After freeze-out, the DM number density keeps decreasing, due to the faster
redshift of the energy density of $\eta$ and the constant attempt of the DM to
return to thermal equilibrium, until the Universe reaches radiation domination
and finally the interaction rate satisfies $\Gamma_{\rm
DM}\ll\mathcal{H}_{R}$. The decrease of the DM relic abundance in this phase
is more rapid for larger $n$. An approximate analytical solution for the DM
yield for $s$-wave annihilation in this regime reads
$\displaystyle\begin{aligned}
&Y_{\text{DM}}\left(x\right)\simeq\left\\{\begin{array}[]{ll}\frac{x_{r}}{A\langle\sigma
v\rangle}\Biggl{[}\frac{2}{x_{f}}+\log\left(x/x_{f}\right)\Biggr{]}^{-1},~{}~{}(n=2)\\\
\frac{x_{r}^{n/2}}{2A\langle\sigma
v\rangle}\Biggl{[}x_{f}^{n/2-2}+\frac{x^{n/2-1}-x_{f}^{n/2-1}}{n-2}\Biggr{]}^{-1}~{}~{}(n\neq
2)\\\ \end{array}\right.\end{aligned}$ (25)
as derived in Appendix A, with $x_{f(r)}=m_{\text{DM}}/T_{f(R)}$. It is
evident from Eq. (25) that for $n=2$ one observes a slow logarithmic decrease
of the DM yield after freeze-out (although faster than in the usual scenario).
This slow logarithmic decrease in the number density is the result of the
relentless effort of the DM to return to thermal equilibrium333This feature
has been referred to as ‘relentless’ DM in DEramo:2017gpl . This behaviour
continues until $T\simeq T_{R}$, after which the Universe becomes radiation
dominated and the DM comoving number density attains a constant value. For
$n>2$ the effect of fast expansion is even more pronounced, as the DM yield
has a pure power-law dependence instead of a logarithmic one. As before, the
DM number density keeps decreasing until radiation takes over. Similar
behaviour is found for $p$-wave annihilation, as elaborated in
DEramo:2017gpl . For different choices of the relevant parameters we shall
solve Eq. (24) numerically to obtain the DM relic abundance via
$\displaystyle\Omega_{\rm DM}h^{2}=2.82\times
10^{8}~{}m_{H^{0}}Y_{\text{DM}}\left(x=\infty\right).$ (26)
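The approximate solution of Eq. (25) and the conversion of Eq. (26) translate directly into code; a sketch with our own function names, all symbols as defined above:

```python
import math

def analytic_yield(x, x_f, x_r, A, sigma_v, n):
    """Approximate post-freeze-out yield of Eq. (25) for s-wave annihilation,
    valid between freeze-out (x_f) and the onset of radiation domination."""
    if n == 2:
        return (x_r / (A * sigma_v)) / (2.0 / x_f + math.log(x / x_f))
    return (x_r**(n / 2.0) / (2.0 * A * sigma_v)) / (
        x_f**(n / 2.0 - 2.0)
        + (x**(n / 2.0 - 1.0) - x_f**(n / 2.0 - 1.0)) / (n - 2.0))

def relic_density(m_dm, y_inf):
    """Omega_DM h^2 from the asymptotic yield, Eq. (26); m_dm in GeV."""
    return 2.82e8 * m_dm * y_inf
```

For instance, by Eq. (26) a 300 GeV DM requires an asymptotic yield of roughly $1.4\times 10^{-12}$ to saturate $\Omega_{\rm DM}h^{2}\simeq 0.12$.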
This brings us to the independent parameters of the IDM in a fast expanding
Universe that affect the DM relic abundance:
$\displaystyle\left\\{m_{H^{0}},\Delta M,\lambda_{L},n,T_{R}\right\\}.$ (27)
Note that the last two parameters arise from the consideration of fast
expansion.
Apart from the requirement of obtaining the PLANCK-observed relic abundance
($\Omega_{\rm DM}h^{2}=0.120\pm 0.001$ at 90$\%$ CL Aghanim:2018eyx ), there
are two other sources that impose severe bounds on the IDM desert region.
Spin-independent direct searches put a stringent bound on the IDM parameter
space by constraining the DM-nucleon direct detection cross section. At tree
level, the DM-nucleon scattering cross section mediated by the SM-like Higgs
boson reads Cline:2013gha
$\displaystyle\sigma_{n-H^{0}}^{\text{SI}}=\frac{\lambda_{L}^{2}f_{N}^{2}}{\pi}\frac{\mu^{2}m_{n}^{2}}{m_{h}^{4}m_{H^{0}}^{2}},$
(28)
where $f_{N}=0.2837$ is the nucleon form factor, $m_{n}=0.939$ GeV denotes the
nucleon mass and $\mu=m_{n}m_{H^{0}}/\left(m_{n}+m_{H^{0}}\right)$ is the
DM-nucleon reduced mass. The spin-independent direct search exclusion limit
constrains the model parameters, especially the coupling $\lambda_{L}$ and the
DM mass $m_{H^{0}}$, via Eq. (28), which in turn restricts the
relic-density-allowed parameter space. In this work we consider the recent
XENON1T bound Aprile:2018dbl to restrict the parameter space wherever
applicable.
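Equation (28) amounts to a few lines of code; a sketch (the function name is ours), using the standard conversion $1~{}\text{GeV}^{-2}\approx 3.894\times 10^{-28}~{}\text{cm}^{2}$:

```python
import math

GEV2_TO_CM2 = 3.894e-28  # 1 GeV^-2 in cm^2

def sigma_si(lam_L, m_dm, m_h=125.09, m_n=0.939, f_N=0.2837):
    """Spin-independent DM-nucleon cross section of Eq. (28), in GeV^-2."""
    mu = m_n * m_dm / (m_n + m_dm)  # DM-nucleon reduced mass
    return (lam_L**2 * f_N**2 / math.pi) * mu**2 * m_n**2 / (m_h**4 * m_dm**2)
```

For $\lambda_{L}=0.01$ and $m_{H^{0}}=300$ GeV this gives a few times $10^{-47}~\text{cm}^{2}$, which motivates the choice of $\lambda_{L}$ adopted in the scans below.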
The second most rigorous bound arises from indirect search experiments, which
look for astrophysical sources of SM particles produced through DM
annihilations or decays. Amongst these final states, the neutral and stable
particles, e.g., photons and neutrinos, can reach indirect detection
experiments without being affected much by the intervening medium. If the
emitted photons lie in the gamma-ray regime, they can be measured by
space-based telescopes like Fermi-LAT Fermi-LAT:2016uux or ground-based
telescopes like MAGIC Ahnen:2016qkx . It turns out that for a single-component
IDM candidate, the indirect searches severely restrict the thermally averaged
cross section of the $H^{0}H^{0}\rightarrow W^{+}W^{-}$ annihilation process.
Since the bounds on the other annihilation processes of the IDM candidate are
comparatively milder, we shall mostly focus on the bound on the $W^{+}W^{-}$
final state for constraining the parameter space. Equipped with these, we now
move on to investigate the fate of the IDM desert under the influence of fast
expansion.
#### III.1.2 IDM dark matter yield in fast expanding background
As stated earlier, we work in the standard freeze-out regime where we solve
Eq. (22) with the assumption that the DM was in thermal equilibrium in the
early Universe. In order to illustrate the effect of the modified BEQ on the DM abundance, we deliberately consider a few benchmark values such that they give under-abundance in a standard (RD) Universe, thus falling into the desert region. Before delving into the parameter scan we would first like to demonstrate the effect of fast expansion, i.e., of the parameters $\\{n,T_{R}\\}$, on the DM yield. In order to do that we fix the coupling
$\lambda_{L}=0.01$ and choose several values of $\\{m_{H^{0}},\Delta
M,n,T_{R}\\}$ and obtain the resulting DM yield by solving Eq. (24) numerically as stated earlier. As we shall see later, such a choice of $\lambda_{L}$ keeps the DM safe from the spin-independent (SI) direct search exclusion limits. We have used the publicly available code micrOmegas Belanger:2010pz to obtain the annihilation cross-section $\langle\sigma v\rangle$ and fed it to the modified BEQ in Eq. (24) to extract the DM yield.
Figure 4: Numerical estimate of the DM annihilation cross section to $W^{+}W^{-}$ final states for the relic satisfied points with different $n$ values as shown in Fig. 3, considering (a) $T_{R}=3$ GeV, (b) $T_{R}=4$ GeV, (c) $T_{R}=5$ GeV and a uniform $\lambda_{L}=0.01$ value. The black solid line represents the latest bound from the non-observation of a DM signal at the Fermi experiment.
* •
For the benchmark values $m_{H^{0}}=300~{}\rm GeV$ and $\Delta M=1~{}\rm GeV$
we fix $T_{R}=3$ GeV. In the left panel of Fig. 1, we show the evolution of the DM abundance as a function of $x=m_{H^{0}}/T$. The solid red curve is the case of the standard RD Universe $(n=0)$, which clearly shows that the DM relic is under-abundant for the chosen benchmark. As we increase the value of $n$ from zero, the final relic abundance gets enhanced, obeying Eq. (26), and for $n=2$ the relic abundance is satisfied. This behaviour surfaces because of the
presence of the fast expanding component $\eta$ during DM freeze-out. Since the Hubble rate is larger than in the RD Universe, the DM freezes out earlier, causing an over-production of relic that can be tamed down by a suitable choice of the free parameters $\\{n,T_{R}\\}$. Needless to say, $n\to 0$ for a fixed $T_{R}$ simply reproduces the RD scenario with the unmodified Hubble rate.
* •
Next, in the right panel of Fig. 1 we fix $n=4$ for the same DM mass of
$m_{H^{0}}=300~{}\rm GeV$ and choose different values of $T_{R}$. As one can
see from the left panel of Fig. 1, $n=4$ corresponds to DM over-abundance for $T_{R}=3~{}\rm GeV$ (shown by the green dashed curve). In order to obtain the right abundance for $n=4$ one then has to go to a larger $T_{R}$, obeying Eq. (26), to tame down the Hubble rate. This is exactly what we see here. The
correct DM abundance is achieved for $T_{R}=5~{}\rm GeV$ with $n=4$. Increasing $T_{R}$ further would render the DM under-abundant, since $T_{R}\to\infty$ for a fixed $n$ corresponds to the standard RD scenario.
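The interplay of $n$ and $T_{R}$ can be made concrete with a minimal sketch of the modified Hubble rate. Since Eq. (7) is not reproduced in this excerpt, the snippet assumes the standard fast-expansion parametrization (an extra component $\eta$ with $\rho_{\eta}\propto T^{4+n}$, matched to the radiation density at $T=T_{R}$, with $g_{*}$ held constant); the qualitative feature, a Hubble rate enhanced by $(T/T_{R})^{n/2}$ at $T\gg T_{R}$, is what drives the earlier freeze-out.

```python
import math

M_PL = 1.22e19   # Planck mass in GeV
G_STAR = 106.75  # relativistic degrees of freedom (assumed constant)

def hubble_rad(T):
    """Standard radiation-dominated Hubble rate in GeV."""
    return 1.66 * math.sqrt(G_STAR) * T**2 / M_PL

def hubble_fast(T, n, T_R):
    """Hubble rate with an extra fluid rho_eta ~ T^(4+n), matched to
    the radiation density at T = T_R (simplified, constant g*)."""
    return hubble_rad(T) * math.sqrt(1.0 + (T / T_R)**n)

# At T = 100 GeV with T_R = 3 GeV, larger n gives a faster expansion:
for n in (0, 2, 4, 6):
    print(n, hubble_fast(100.0, n, 3.0) / hubble_rad(100.0))
```

Raising $T_{R}$ at fixed $n$ shrinks the enhancement factor at a given freeze-out temperature, which is exactly the direction needed to tame the over-abundance discussed above.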
We thus see the general trend: once fast expansion is invoked through the Hubble parameter, for certain choices of $\\{n,T_{R}\\}$ it is indeed possible to revive regions of the DM parameter space that are otherwise under-abundant (shown by the red curve in each plot). Our next task is to identify the relic density allowed parameter space that survives once the direct and indirect search bounds are imposed.
Before we proceed, it is necessary to check whether the DM ever thermalizes in the fast expanding Universe at some early time, validating the BEQ (Eq. (24)) that we use to find its yield. Thermalization requires satisfying the condition $\Gamma_{\text{int}}>\mathcal{H}(T)$ at some high temperature above the weak scale ($\sim\mathcal{O}(1){\rm~{}TeV}$). For temperatures larger than the DM mass, the scattering rate of the DM can be approximated as DEramo:2017gpl
$\displaystyle\Gamma_{\text{int}}=n_{\text{DM}}\langle\sigma
v\rangle\simeq\frac{\zeta(3)T^{3}}{2\pi^{2}}\frac{g_{2}^{4}}{32\pi}\frac{T^{2}}{(T^{2}+M_{\rm
med}^{2})^{2}},$ (29)
where $g_{2}$ is the $SU(2)_{L}$ gauge coupling and $M_{\rm med}$ is the mediator mass (for a point interaction one can take $M_{\text{med}}\to 0$). For the inert doublet model, in principle $\lambda_{L}$ (one of the scalar couplings) should also enter Eq. (29). However, since $\lambda_{L}\ll g_{2}$ (motivated by the direct search bound) always holds in our analysis, we find that the DM pair annihilation is always dominated by the gauge boson final states, which are proportional to the coupling strength $g_{2}^{4}$ LopezHonorez:2006gr ; LopezHonorez:2010tb . In Fig. 2, we compare the modified Hubble rate with the DM interaction rate as a function of temperature $T$, considering $T_{R}=1$ GeV for different values of $n$. In the
left panel of Fig. 2 we consider the approximate analytical relation in Eq.
(29), while for the right panel we calculate the DM interaction rate
numerically by evaluating the annihilation cross-section using micrOMEGAS
Belanger:2008sj for a DM of mass $m_{H^{0}}=350$ GeV. We notice that the approximate expression closely follows the numerically obtained result, implying that the annihilation rate of the DM is largely independent of its mass. From these plots we see that, for $n=6$, thermalization is achieved at temperatures $T\gtrsim 2.5~{}\rm TeV$ for $T_{R}=1$ GeV. For $T_{R}>1~{}\rm GeV$ the DM
thermalizes much earlier (at a larger temperature) as the modified Hubble rate
decreases following Eq. (7) and it could allow higher $n>6$ values. The same
conclusion can be drawn for the case of inert triplet DM where the dominant
annihilation channel is again due to gauge boson final states, and hence
determined by the $SU(2)_{L}$ gauge coupling. In case where the DM interaction
rate is always below the Hubble rate, the thermal production of the DM is not
possible, and we would need to opt for the non-thermal case with modified Boltzmann equations. Taking thermalization of the DM in the early Universe into account, we confine ourselves within the range $2\leq n\leq 6$ with $T_{R}\gtrsim 1~{}\rm GeV$, unless otherwise mentioned explicitly (lowering $T_{R}$ below $1~{}\rm GeV$ disallows higher $n$-values by the requirement of thermalization above the weak scale). We find that, within the said range of
$n,T_{R}$, both inert doublet and inert triplet DM achieve thermal equilibrium
for the mass range $m_{\text{DM}}\lesssim 525~{}\rm GeV$ and
$m_{\text{DM}}\lesssim 1.9~{}\rm TeV$ respectively, at a temperature above the
weak scale.
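As an illustrative cross-check of this thermalization condition, one can scan for the temperature at which $\Gamma_{\rm int}$ of Eq. (29) first exceeds the Hubble rate. The sketch below uses the approximate rate with a $W$-mass mediator and a simplified fast-expansion Hubble rate (constant $g_{*}$, extra fluid $\rho_{\eta}\propto T^{4+n}$ matched to the radiation density at $T_{R}$, since Eq. (7) is not reproduced in this excerpt); the numbers are indicative only, not a substitute for the full numerical treatment.

```python
import math

M_PL, G_STAR = 1.22e19, 106.75  # Planck mass (GeV); assumed constant g*
G2, M_MED = 0.65, 80.4          # SU(2)_L coupling; W-boson mediator mass (GeV)
ZETA3 = 1.2020569

def gamma_int(T):
    """Approximate DM scattering rate of Eq. (29), in GeV."""
    n_dm = ZETA3 * T**3 / (2 * math.pi**2)
    return n_dm * (G2**4 / (32 * math.pi)) * T**2 / (T**2 + M_MED**2)**2

def hubble_fast(T, n, T_R):
    """Simplified fast-expansion Hubble rate (constant g*, fluid matched at T_R)."""
    return 1.66 * math.sqrt(G_STAR) * T**2 / M_PL * math.sqrt(1 + (T / T_R)**n)

def thermalization_temp(n, T_R):
    """Highest temperature (GeV) on a log grid where Gamma_int exceeds H,
    i.e. where the thermalization condition first becomes satisfied."""
    for logT in [x / 100 for x in range(600, 200, -1)]:  # scan 1e6 GeV downward
        T = 10**logT
        if gamma_int(T) > hubble_fast(T, n, T_R):
            return T
    return None

print(thermalization_temp(6, 1.0))  # a few TeV, cf. the ~2.5 TeV quoted in the text
```

Consistent with the discussion above, raising $T_{R}$ at fixed $n$ lowers the Hubble rate and pushes the crossing to higher temperatures, i.e. the DM thermalizes earlier.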
#### III.1.3 Allowed parameter space for IDM dark matter in a fast expanding
Universe
Figure 5: Relic satisfied points (red, blue, orange) are shown in
$m_{H^{0}}-\Delta M$ plane as a function of $n$ values considering (a) $T_{R}=3$ GeV, (b) $T_{R}=5$ GeV, (c) $T_{R}=8$ GeV and a uniform $\lambda_{L}=0.05$ value. The indirect search bound for the $W^{+}W^{-}$ final state forbids the cyan
region while the gray shaded region shows direct search exclusion limit from
XENON1T.
To find out how much of the relic density allowed parameter space is left in a
fast expanding framework after satisfying (in)direct detection bounds we would
like to perform a scan over the relevant parameter space. In order to do that,
first we fix $\lambda_{L}=0.01$ as before. In that case the remaining
parameters relevant for DM phenomenology are $\\{m_{H^{0}},\Delta
M,n,T_{R}\\}$. In Fig. 3 we have shown the relic satisfied points in $\Delta
M-m_{H^{0}}$ plane by varying $n$ considering $T_{R}=\\{3,4,5\\}$ GeV. The
cyan shaded region violates the indirect detection bound from the Fermi-LAT $WW$ final state and is hence forbidden. For constant $\Delta M$ and $T_{R}$, notice that a larger value of $n$ requires a smaller DM mass to satisfy the relic bound. This behaviour appears since a larger $n$ leads to an enhanced expansion rate of the Universe, so the DM annihilation rate must be large enough to avoid early freeze-out and the resulting over-abundance.
Thus, a smaller value of $m_{H^{0}}$ is preferred to be within the relic limit
since the annihilation rate of the DM goes roughly as $\langle\sigma
v\rangle\propto 1/m_{H^{0}}^{2}$. However, such a requirement of an enhanced annihilation cross section due to larger $n$ may be disfavored by the indirect search bound, as one can see in the leftmost panel of Fig. 3. This can be evaded by increasing $T_{R}$ as well, since that reduces the Hubble rate against larger $n$ following Eq. (7). This pattern can be observed in the other two panels for $T_{R}=\\{4,5\\}~{}\rm GeV$. The bound arising
from spin-independent direct detection cross section for $\lambda_{L}=0.01$ is
weak and does not appear in Fig. 3.
For a clearer insight into the detection prospects of the DM at indirect detection experiments, in Fig. 4 we estimate the numerical values of $\langle\sigma v\rangle$ for the $W^{+}W^{-}$ final state of the relic satisfied points shown earlier in Fig. 3. The latest exclusion bound from the Fermi experiment due to the non-observation of a DM signal is shown by the solid black line. In
accordance with the earlier trend it can be seen that increasing $T_{R}$
reduces the $\langle\sigma v\rangle$ for a particular $n$. Hence improved
sensitivity of the Fermi experiment has the ability to probe or rule out the
cases particularly with low $T_{R}$ values. So far we have worked with
$\lambda_{L}=0.01$. We would now like to see the consequence of a relatively
larger $\lambda_{L}=0.05$ on the DM phenomenology. As is evident from Eq. (28), the direct detection cross-section becomes important for a larger $\lambda_{L}$. In Fig. 5 we present the relic satisfied points in the bi-dimensional plane of $m_{H^{0}}-\Delta M$ for different sets of
$\\{n,T_{R}\\}$ values. As expected, we find that for $\lambda_{L}=0.05$ the
spin independent direct detection constraints become dominant over the
indirect detection ones in the mass region $m_{H^{0}}\lesssim 480~{}\rm GeV$.
The characteristics of the relic satisfied contours are the same as those portrayed for the case with $\lambda_{L}=0.01$, with a larger $\lambda_{L}$ effectively corresponding to a lower value of $T_{R}$ when the other parameters are kept fixed. As we see, for larger $T_{R}$ and smaller $n$, the
relic satisfied points with $0.01\lesssim\Delta M\lesssim 10~{}\rm GeV$ are
unconstrained from both direct and indirect detection. More precisely, $\Delta
M<3~{}\rm GeV$ is ruled out for $T_{R}=3~{}\rm GeV$ and $n=4$, but on
increasing $T_{R}$ to $8~{}\rm GeV$, the bound on $\Delta M$ is significantly
relaxed for DM mass in the same range with the same choices of $n$.
Figure 6: Relic satisfied points are shown in $n-T_{R}$ plane for a fixed DM
mass of 480 GeV and different choices of $\Delta M$ for (a) $\lambda_{L}=0.01$
and (b) $\lambda_{L}=0.05$. In both the plots the green region is forbidden
from the BBN bound on $T_{R}$ following Eq. (8) while the orange and the brown
regions are disallowed by the non-thermalization of DM above weak scale
(following Eq. (29)) and indirect search constraints respectively. Any point
in the $n-T_{R}$ plane is also subject to additional constraint arising from
the perturbative unitarity bound (discussed later) which is relatively weaker.
Figure 7: Left: The IDM parameter space in $m_{H^{0}}-\lambda_{L}$ plane
validated with PLANCK observed relic density bound considering standard
cosmology. The presence of the desert (relic under-abundant) region for
$80{\rm~{}GeV}\lesssim m_{H^{0}}\lesssim 525$ GeV can clearly be seen. Right:
Same as left but in a non-standard cosmological background where the desert
region has been revived satisfying all constraints: relic density, direct
detection due to XENON1T and indirect search. The values of the relevant
parameters are mentioned in the plot legends.
So far we have worked with some discrete values of $\\{n,T_{R}\\}$ with
$T_{R}\gtrsim 1$ GeV and $2\leq n\leq 6$. The explicit dependence of the DM relic on the fast expansion parameters $n,T_{R}$ is shown in Fig. 6 for a fixed DM mass of 480 GeV, considering two different values of $\lambda_{L}\sim$
0.01 and 0.05. Following the previous scans, we see a similar trend for $\lambda_{L}=\\{0.01,0.05\\}$ in the left and right panels respectively. With the increase in $\Delta M$, a smaller $T_{R}$ is required to satisfy the
observed relic abundance. This can be understood from the fact that a larger
$\Delta M$ leads to under abundance (since DM annihilation dominates over the
co-annihilation in the given range of $\Delta M$) and hence a smaller $T_{R}$
is required to trigger a faster expansion following Eq. (7) to satisfy the DM
abundance. For the same reason larger $\Delta M$ requires larger $n$’s for a
fixed $T_{R}$ to produce the right relic. Recall that smaller values of $T_{R}$ for a fixed $n$ (and vice versa) that violate the limit in Eq. (8) are disfavoured by the BBN bound. This BBN-excluded region is shown in green in both plots. Larger $\Delta M(\gtrsim 20~{}\rm GeV)$ regions, as they require
smaller $T_{R}$ to satisfy the relic abundance, get discarded from the BBN
bound. The brown region indicates the disallowed space by indirect search
constraint, which is also present in Figs. 3 and 5 (shown in cyan), while the orange region is disfavored by the violation of the DM thermalisation condition above the weak scale, following Eqs. (7) and (29).
In principle, a lower bound on $\Delta M$ should also be present in Fig. 6, arising from the condition that the heavier mass eigenstates decay completely before BBN. However, we find that the resulting bound already lies below our working range of $\Delta M$ as specified earlier and hence does not appear in Fig. 6. We also see that, for fixed $\Delta M$ and $m_{H^{0}}$,
larger $\lambda_{L}$ prefers low $T_{R}$ (for a fixed $n$) or larger $n$ (for
a fixed $T_{R}$). This is typically attributed to the DM annihilation cross-
section that has a quadratic dependence on $\lambda_{L}$. The requirement of
thermalization of the DM above the weak scale disallows larger values of $n$
for smaller $T_{R}$, as shown by the orange region. A smaller $T_{R}$ results in a faster expansion, causing the DM to fall out of thermal equilibrium at early times. This can be avoided by tuning $n$ to smaller values such that the DM thermalizes at temperatures above the weak scale; thus, larger $n$ values are discarded for smaller $T_{R}$. This bound remains the same for
$\lambda_{L}=0.05$ (shown in the right panel), since the DM annihilation is dominantly controlled by the gauge coupling $g_{2}$, as discussed earlier in detail. With these outcomes, it is clear that the fast expansion parameters are well restricted by the combined constraints irrespective of
the value of $\lambda_{L}$. Finally, it is crucial to note that the indirect search constraint disfavours DM masses $\lesssim 350~{}\rm GeV$ irrespective of the choice of $\lambda_{L}$, eliminating the possibility of resurrecting the low DM mass region while satisfying all relevant constraints (this implies that the desert region for the IDM, taking into account the indirect search bound, typically lies in the range $350\lesssim m_{H^{0}}\lesssim 525~{}\rm GeV$ for small $\lambda_{L}$). This, together with the direct detection bound (important for larger $\lambda_{L}$), typically rules out the parameter space for a DM mass of 200 GeV that was overlooked in earlier work Mahanta:2019sfo . This can further be verified from Fig. 7 where in the left
panel we present the allowed points from relic density considering standard
cosmology in the $m_{H^{0}}-\lambda_{L}$ plane. It clearly shows the presence of a void (under-abundant region) in the range $80{\rm~{}GeV}\lesssim m_{H^{0}}\lesssim 525$ GeV. In the right panel, considering a fast expanding Universe, we perform a random scan over different ranges of the relevant parameters and sort out the points satisfying the observed relic abundance, the indirect search bound and the XENON1T direct search exclusion. We find viable parameter space in the said mass range in the non-standard scenario satisfying all relevant bounds. Also, the non-existence of any allowed points for $m_{H^{0}}\lesssim 350$ GeV confirms our earlier observations (this lower bound takes the thermalization condition into account). From the right panel one can notice that, for a given DM
mass, it is possible to choose $\lambda_{L}$ as small as 0.001. For such small $\lambda_{L}~(\lesssim 0.01)$, the direct search cross section (Eq. (28)) becomes safe from the XENON1T exclusion limit, and the indirect search provides the most stringent bound on the DM mass (see Fig. 3). In contrast, for a larger $\lambda_{L}\gtrsim 0.05$, the direct search constraint becomes important (see Fig. 5). The DM annihilation cross-section (or equivalently, the relic abundance), however, is dominantly controlled by the $SU(2)_{L}$ gauge coupling, while $\lambda_{L}$ plays a sub-dominant role. Therefore, in the present set-up, it is possible to work with an even lower $\lambda_{L}~(\lesssim 0.001)$ satisfying all pertinent bounds, without altering the allowed range of DM mass.
#### III.1.4 Collider probe of the IDM desert region
Figure 8: Top Left: Variation of decay branching ratio for $H^{\pm}$ in the
bi-dimensional plane of $m_{H^{\pm}}-\Delta M$ where the relic density
satisfied benchmark points are denoted by “X”, “$\star$” and “+” for $n=2$ and
different choices of $T_{R}$ in GeV. Top Right: Same as top left but the
variation is shown against the lifetime $\tau$ (in ns) of $H^{\pm}$ decay.
Bottom Left: Total decay lifetime of $H^{\pm}$ as a function of $m_{H^{\pm}}$
where the relic satisfied points are marked by “X”, “$\star$” and “+” for
$n=2$ and different choices of $T_{R}$ (in GeV). On the same plane we also
show exclusion limits from CMS at $\sqrt{s}=13~{}\rm TeV$ corresponding to
100% (in red) and 95.5% (in blue) branching fraction (see text for details).
Bottom Right: Variation of production cross-section for $pp\to
H^{+}H^{-},H^{\pm}H^{0}(A^{0})$ at $\sqrt{s}=13~{}\rm TeV$ in the $\Delta
M-m_{H^{0}}$ plane where the colour bar represents the production cross-
section in units of fb. On the same plane we show the DM parameter space allowed
by relic density and (in)direct detection for $T_{R}=5~{}\rm GeV$ with
$n=2,4,6$ in red, blue and orange respectively.
As we have already seen, for the mass region of our interest, satisfying relic
abundance and exclusion limits from (in)direct searches, the mass splitting
$\Delta M$ can be at best a few GeV for any $n\geq 0$. Such small $\Delta M$
regions are indeed challenging to probe at colliders. This extremely compressed scenario can be probed by identifying the charged track signal of a long-lived charged scalar, which is $H^{\pm}$ in this case Belyaev:2016lok ; Bhardwaj:2019mts . For $\Delta M\approx 200~{}\text{MeV}$ the charged scalar
has the dominant decay mode: $H^{\pm}\to\pi^{\pm}H^{0}$. Following
Belyaev:2016lok one can analytically obtain the $H^{\pm}\to\pi^{\pm}H^{0}$
decay width in the $\Delta M/m_{H^{\pm}}\ll 1$ limit as:
$\displaystyle\Gamma_{H^{\pm}\to\pi^{\pm}H^{0}}=\frac{g_{2}^{4}f_{\pi}^{2}}{64\pi}\frac{\Delta
M^{3}}{m_{W}^{4}}\sqrt{1-\frac{m_{\pi^{\pm}}^{2}}{\Delta M^{2}}},$ (30)
where $g_{2}$ is the $SU(2)_{L}$ gauge coupling strength, the charged pion
mass, $m_{\pi^{\pm}}=139.57$ MeV and $f_{\pi}\approx 130~{}\text{MeV}$ is the
pion decay constant. Note that the decay width and hence the lifetime
$\tau_{H^{\pm}\to\pi^{\pm}H^{0}}\equiv\frac{1}{\Gamma_{H^{\pm}\to\pi^{\pm}H^{0}}}$
of the charged scalar is inversely proportional to the mass splitting.
Therefore, a large mass splitting shall produce a charged track of smaller
length and vice versa. Depending on $\Delta M$ two scenarios can arise:
* •
For $\Delta M\in\\{140-200\\}~{}\rm MeV$, $H^{\pm}$ gives rise to a disappearing charged track of length $L=c\tau\simeq\mathcal{O}\left(100-10\right)~{}\rm cm$ with a branching ratio (of $H^{\pm}\to\pi^{\pm}H^{0}$) close to $100\%$. For $\Delta M>200$ MeV the branching ratio is reduced as new decay modes start to appear.
* •
For $\Delta M<m_{\pi}$ the decay proceeds via the 3-body process $H^{\pm}\to W^{\star}\,H^{0}\to\ell\,\nu_{\ell}\,H^{0}$, whose width is proportional to $\Delta M^{5}/m_{W}^{4}$. The decay width of such a process turns out to be $\lesssim 10^{-18}~{}\rm GeV$, resulting in a decay length of $c\tau\gtrsim\mathcal{O}(\rm m)$, implying that $H^{\pm}$ remains stable on collider scales and decays outside the detector, giving rise to a stable charged track.
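Eq. (30) can be evaluated directly to translate a given mass splitting into a proper decay length. A minimal sketch, assuming standard values $g_{2}\simeq 0.65$ and $m_{W}=80.4$ GeV for the constants not fixed in the text:

```python
import math

G2, F_PI = 0.65, 0.130   # SU(2)_L gauge coupling; pion decay constant (GeV)
M_W, M_PI = 80.4, 0.13957  # W-boson and charged-pion masses (GeV)
HBAR = 6.582e-25         # GeV * s
C_CM = 2.998e10          # speed of light in cm/s

def gamma_pi(delta_m):
    """Two-body width H+- -> pi+- H0 of Eq. (30), in GeV (delta_m in GeV)."""
    if delta_m <= M_PI:
        return 0.0  # channel kinematically closed below the pion mass
    return (G2**4 * F_PI**2 / (64 * math.pi)) * delta_m**3 / M_W**4 \
           * math.sqrt(1 - (M_PI / delta_m)**2)

def ctau_cm(delta_m):
    """Proper decay length c*tau in cm."""
    return C_CM * HBAR / gamma_pi(delta_m)

for dm in (0.15, 0.175, 0.2):  # mass splittings in GeV
    print(f"dM = {dm*1e3:.0f} MeV: c*tau ~ {ctau_cm(dm):.1f} cm")
```

The output reproduces the quoted range: the track length falls from $\mathcal{O}(100)$ cm near threshold to $\mathcal{O}(10)$ cm at $\Delta M=200$ MeV, illustrating the inverse dependence of the lifetime on the mass splitting.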
We have used CalcHEP Belyaev:2012qa to compute the decay width (total and partial) numerically, taking into account both the 2-body and 3-body decays of $H^{\pm}$.
A disappearing track results from the decay products of a charged particle
which go undetected because they either have too small momentum to be
reconstructed or have interaction strength such that they do not produce hits
in the tracker and do not deposit significant energy in the calorimeters.
Searches for disappearing track signatures have been performed both by CMS
Sirunyan:2018ldc ; CMS-PAS-EXO-19-010 and ATLAS Aaboud:2017mpt in the context of supersymmetry for a center of mass energy of $\sqrt{s}=13~{}\rm TeV$, setting upper limits on the chargino mass and production cross-section. To recast the exact limits from CMS and ATLAS one has to perform a careful reconstruction and selection of events employing suitable cuts, taking into account the generator-level efficiency along with a background estimation, which is beyond the scope of this paper (a recent analysis can be found in Belyaev:2020wok ). Alternatively, here we make an estimate of the lifetime of $H^{\pm}$ for the allowed values of $\Delta M$ and $m_{H^{0}}$
and project the available limits from CMS CMS-PAS-EXO-19-010 to assess whether it is at all feasible to see the charged tracks at colliders. This in turn could provide a collider probe of an alternative cosmological history of the Universe.
As stated earlier, for $\Delta M\in\\{140-200\\}~{}\rm MeV$, $H^{\pm}$ decays dominantly into the $\pi^{\pm}H^{0}$ final state, while for $\Delta M<m_{\pi}$ the decay proceeds to a semi-leptonic 3-body final state. In the top left
panel of Fig. 8 we see a manifestation of this, where the branching
$\text{Br}\left(H^{\pm}\to\pi^{\pm},H^{0}\right)$ into pion final state
decreases with the increase in $\Delta M$ as the 3-body decay starts
dominating. Note that, in this case the DM mass also varies in the range
$m_{H^{0}}\in\\{450-463\\}~{}\rm GeV$. Following Eq. (30) we also expect, for
large $\Delta M$, the lifetime $\tau_{H^{\pm}\to\pi^{\pm}H^{0}}$ should
decrease producing a shorter disappearing track. This is exactly what we see
in the top right panel of Fig. 8. Thus, a larger
$\text{Br}\left(H^{\pm}\to\pi^{\pm},H^{0}\right)$ implies a longer lifetime
$\tau_{H^{\pm}\to\pi^{\pm}H^{0}}$ (and a smaller $\Delta M$) or equivalently a
longer track length. This, in turn, places constraints on the model parameters, which we discuss next. One should also note the presence of points satisfying the relic abundance for $n=2$ with different choices of $T_{R}$ on the same plane, indicating the possibility of testing benchmark points obtained from the analysis of the previous sections in collider experiments.
In the bottom left panel of Fig. 8 we project the experimental limit
Sirunyan:2018ldc ; CMS-PAS-EXO-19-010 from CMS on the decay lifetime of
$H^{\pm}$ obtained using our model parameters. The red line corresponds to the
CMS limit where the decaying charged particle has 100% decay branching
fraction into pion final state, whereas for the blue line the pion decay
branching fraction is 95.5%. The black thick curve shows the total lifetime of
$H^{\pm}$ as a function of $m_{H^{\pm}}$ obtained numerically for a fixed DM
mass of 450 GeV. We again show three benchmark points where observed relic
density can be obtained for $n=2$ with different $T_{R}$. We note that, based on this approximate analysis, $\Delta M\lesssim 200~{}\rm MeV$ is tightly constrained by CMS and likely to be ruled out, which also agrees with earlier observations Belyaev:2016lok . However, large $\Delta M~(>200~{}\rm MeV)$ regions with shorter lifetimes (for example the point denoted by “X” in the bottom left panel of Fig. 8) can still be seen lying beyond the reach of the present CMS bound. It is understandable that, by tuning $n,T_{R}$, it is always possible to accommodate relic-satisfying points with $\Delta M>200$ MeV that are safe from the CMS exclusion. We can thus infer that, for any given
$(n,T_{R})$, the regions of parameter space satisfying the DM constraints with lifetime $\lesssim 0.1~{}\rm ns$ (equivalent to a track length of $\lesssim 1~{}\rm cm$) are beyond the present sensitivity of the CMS experiment, and thus safe. Finally, in the bottom right panel we show the production cross-section
for the processes $pp\to H^{+}H^{-},H^{\pm}H^{0}(A^{0})$ at $\sqrt{s}=13~{}\rm
TeV$. A detailed analysis utilizing the numerically obtained production cross
section can constrain $m_{H^{\pm}}$ and therefore the DM mass, by providing
the number of disappearing track events for a given luminosity. However, here
we only show that our model parameters can give rise to a sizeable production cross-section at colliders while abiding by all DM constraints. For computing the
production cross-section we have again relied upon CalcHEP Belyaev:2012qa and
used CTEQ6l as the representative parton distribution function (PDF)
Placakyte:2011az . We see that, for DM masses $\gtrsim 400~{}\rm GeV$, the production cross-section is $\sim 2~{}\rm fb$. For all the plots, to show the corresponding DM parameter space, we have chosen $\lambda_{L}=0.01$ such that the DM is safe from the direct and indirect search constraints. We conclude this section by observing that a charged track of length $\lesssim\mathcal{O}(1)~{}\rm cm$ could indeed be a probe of non-standard cosmological parameters for the IDM, providing evidence for a fast expanding pre-BBN era at the LHC.
### III.2 ITDM in a fast expanding Universe
Figure 9: Top: Variation of the DM relic abundance with the ITDM mass, where the coloured curves correspond to different values of $n$ for a fixed $T_{R}$ as mentioned in the plot legends. Bottom: Relic density allowed parameter space in the $T_{R}-m_{T^{0}}$ plane for different choices of $n=2,4,6$, shown in red, green and blue respectively. The brown and the cyan regions respectively
show the DM mass region disallowed by the direct search (XENON1T) and indirect
search ($W^{+}W^{-}$ final state) data. The red dashed straight line in each
plot shows the limit from LHC on triplet mass for $36~{}\text{fb}^{-1}$ of
luminosity at $\sqrt{s}=13~{}\rm TeV$. In all cases we have set the portal
coupling to a fixed value of $\lambda_{HT}=0.01$.
As mentioned in the beginning, in order to recover the desert region beyond the IDM paradigm, we also apply the prescription of the modified Hubble rate due to fast expansion to scalar DM in a larger representation of $SU(2)_{L}$. Here we describe the general structure of an $SU(2)_{L}$ triplet dark matter model. In this set-up the SM is extended by an $SU(2)_{L}$ triplet scalar with hypercharge $Y=0$. An additional $Z_{2}$ symmetry is imposed under which the triplet transforms non-trivially, and the triplet is taken to have zero VEV. The scalar potential under the $\text{SM}\times Z_{2}$
symmetry then reads Araki:2011hm
$\displaystyle\begin{aligned}
&V\left(H,T\right)\supset\mu_{H}^{2}\left|H\right|^{2}+\lambda_{H}\left|H\right|^{4}+\frac{\mu_{T}^{2}}{2}\text{Tr}\Bigl{[}T^{2}\Bigr{]}\\\
&+\frac{\lambda_{T}}{4!}\Biggl{(}\text{Tr}\Bigl{[}T^{2}\Bigr{]}\Biggr{)}^{2}+\frac{\lambda_{HT}}{2}\left|H\right|^{2}\text{Tr}\left[T^{2}\right],\end{aligned}$
(31)
where $H$ is the SM-like Higgs doublet and the triplet $T$ is parameterized as
$\displaystyle T=\begin{pmatrix}T^{0}/\sqrt{2}&&-T^{+}\\\
-T^{-}&&-T^{0}/\sqrt{2}\end{pmatrix}.$ (32)
Now, after electroweak symmetry breaking the masses of the physical scalar
triplets are given by
$\displaystyle m_{T^{0},T^{\pm}}^{2}=\mu_{T}^{2}+\frac{\lambda_{HT}}{2}v^{2},$
(33)
with $v=246~{}\rm GeV$. Notice that although the masses of the neutral and charged triplet scalars are degenerate at tree level (Eq. (33)), a small mass difference $\delta m\simeq 166~{}\rm MeV$ is generated via the 1-loop radiative correction Cirelli:2009uv , which makes $T^{0}$ the lighter component and hence a stable DM candidate. This is a crucial difference between the IDM and scalar triplet DM: in the IDM the mass difference is a free parameter, while for the scalar triplet it is fixed by the 1-loop correction. The bounded-from-below conditions for
the scalar potential in all field directions in Eq. (31) require
$\displaystyle\begin{aligned} &\lambda_{H,T}\geq
0;~{}~{}\sqrt{\lambda_{H}\lambda_{T}}>\frac{1}{2}\left|\lambda_{HT}\right|.\end{aligned}$
(34)
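The conditions of Eq. (34) are straightforward to wire into a parameter scan; a minimal helper (a hypothetical function name, introduced here for illustration) might look like:

```python
import math

def is_bounded_from_below(lam_H, lam_T, lam_HT):
    """Vacuum stability conditions of Eq. (34) for the SM x Z2 triplet potential."""
    return (lam_H >= 0 and lam_T >= 0
            and math.sqrt(lam_H * lam_T) > 0.5 * abs(lam_HT))

print(is_bounded_from_below(0.13, 0.1, 0.01))  # small portal coupling: stable
print(is_bounded_from_below(0.13, 0.1, -0.5))  # large |lam_HT| spoils stability
```

Note the short-circuit ordering: the non-negativity of $\lambda_{H}$ and $\lambda_{T}$ is checked first, so the square root is never taken of a negative product.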
Apart from the theoretical constraints arising from the stability,
perturbativity and tree-level unitarity of the scalar potential one needs to
also consider the experimental constraints on the parameters of the scalar
potential. As the charged and neutral components of the triplet scalar are almost degenerate, the contributions to the $T$ and $U$ parameters are strongly suppressed in this scenario. However, the charged component $T^{\pm}$ can contribute significantly to the Higgs diphoton signal strength, which is accurately measured: $\mu_{\gamma\gamma}=0.99\pm 0.14$ from ATLAS Aaboud:2018xdt and $\mu_{\gamma\gamma}=1.17\pm 0.10$ from CMS. It has
recently been shown Chiang:2020rcv ; Bell:2020hnr that searches for
disappearing tracks at the LHC excludes a real triplet scalar lighter than 287
GeV using $36~{}\text{fb}^{-1}$ of data at $\sqrt{s}=13~{}\rm TeV$.
Figure 10: Required order of cross-sections to satisfy the observed DM
abundance for different choices of $\\{n,T_{R}\\}$ are shown as function of DM
mass. The purple region is disfavored by the perturbative unitarity bound on
DM pair annihilation cross-section (see text for details).
We again numerically solve the BEQ in Eq. (24) with the modified Hubble rate
in Eq. (7) and determine the subsequent DM relic density for different choices
of the fast expansion parameters $n,T_{R}$. In the top panels of Fig. 9 we show the variation of the DM relic abundance as a function of the ITDM mass. Here we have kept the portal coupling fixed and obtained the
resulting direct and indirect search exclusion regions for
$\lambda_{HT}=0.01$. The parameter space excluded by XENON1T limit is shown by
the brown region where the direct search cross-section is given by
DuttaBanik:2020jrj
$\displaystyle\sigma_{n-T^{0}}^{\text{SI}}=\frac{\lambda_{HT}^{2}f_{N}^{2}}{4\pi}\frac{\mu^{2}m_{n}^{2}}{m_{h}^{4}m_{T^{0}}^{2}}$
(35)
while the indirect search exclusion due to the $W^{+}W^{-}$ final state is
shown by the cyan region. Since the mass splitting $\delta m$ is no longer a free parameter but fixed to the small value $\delta m\simeq 166~{}\rm MeV$, co-annihilation plays the dominant role here. As a result, the right relic is obtained in the case of ITDM for a very large DM mass $m_{T^{0}}\sim 1.8~{}\rm TeV$, as shown by the red curve $(n=0)$ in each plot. Once fast expansion is
introduced, there is drastic improvement in the parameter space. As one can
see, for $T_{R}=1~{}\rm GeV$ right relic density is achievable for
$m_{T^{0}}\sim 800~{}\rm GeV$ with $n=2$ (blue curve). While for $T_{R}=2$
GeV, the relic satisfied mass is around 900 GeV with $n=2$. As inferred earlier, this happens because for smaller $T_{R}$ the expansion rate increases following Eq. (7); this is compensated by a smaller choice of the DM mass to satisfy the observed abundance, since $\langle\sigma v\rangle\propto 1/m_{T^{0}}^{2}$. Enhancing $n$ could yield an even smaller relic satisfied DM mass consistent with the direct, indirect and LHC searches. Varying $\lambda_{HT}$ would give similar results, since the effective annihilation cross section is mostly dominated by gauge-mediated co-annihilation and hence almost insensitive to $\lambda_{HT}$, unless it is very large ($\gtrsim 0.1$), which is anyway disfavored by the direct and indirect search bounds. In the
bottom panel of Fig. 9 we vary the DM mass $m_{T^{0}}$ by keeping
$\lambda_{HT}=0.01$ and obtain the resulting relic abundance allowed parameter
space in $T_{R}-m_{T^{0}}$ plane for different choices of $n$. Here we again
see the manifestation of faster expansion elaborated above, i.e., for a fixed
DM mass, a smaller $n$ (in red) needs a smaller $T_{R}$ in order to obtain the
observed relic density. Note that in all cases we have considered $T_{R}\geq
1~{}\rm GeV$ to ensure that the ITDM remains in thermal equilibrium at high
temperature. Limits from direct, indirect and LHC searches are also projected
with the same colour code as before. Taking all relevant constraints into
account, we see from the bottom panel of Fig. 9 that the region
$m_{T^{0}}\gtrsim 450$ GeV can be recovered considering $2\leq n\leq 6$ and
$T_{R}\gtrsim$ 1 GeV. We find that it is also possible to resurrect part of the
parameter space below 450 GeV for $T_{R}<1~{}\rm GeV$, provided the DM
thermalizes in the early Universe, depending on the choice of $n$. This is,
however, in contrast to the case of
IDM dark matter, where the lower bound on the allowed DM mass ($\gtrsim
350~{}\rm GeV$), satisfying thermalization criteria, is almost independent of
the fast expanding parameters.
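As a rough numerical illustration of the discussion above (not the full micrOMEGAs computation used for the figures), the following sketch integrates a Boltzmann equation of the form of Eq. (38) with a backward-Euler step, which stays stable while the yield tracks equilibrium; the values of $m_{\rm DM}$, $g_*$, $\langle\sigma v\rangle$ and $x_r=m_{\rm DM}/T_R$ are illustrative placeholders.

```python
import math

M_PL = 1.22e19          # Planck mass [GeV]
G_STAR = 100.0          # relativistic degrees of freedom (illustrative)

def relic_yield(n, x_r, m_dm=100.0, sigma_v=1e-9, x_max=100.0, h=1e-3):
    """Integrate dY/dx = -A <sigma v> x^(n/2-2) x_r^(-n/2) (Y^2 - Yeq^2),
    cf. Eq. (38), with a backward-Euler step (stable near equilibrium)."""
    A = (2 * math.sqrt(2) * math.pi / (3 * math.sqrt(5))) * math.sqrt(G_STAR) * m_dm * M_PL
    # nonrelativistic equilibrium yield, Y_eq ~ 0.145 x^{3/2} e^{-x} (internal dof factor dropped)
    yeq = lambda x: 0.145 * x ** 1.5 * math.exp(-x)
    x = 1.0
    Y = yeq(x)
    while x < x_max:
        x += h
        B = A * sigma_v * x ** (n / 2 - 2) / x_r ** (n / 2)
        src = Y + h * B * yeq(x) ** 2
        # positive root of  h*B*Y1^2 + Y1 - src = 0  (implicit update)
        Y = (-1 + math.sqrt(1 + 4 * h * B * src)) / (2 * h * B)
    return Y

y_std = relic_yield(n=0, x_r=1.0)      # standard radiation domination
y_fast = relic_yield(n=2, x_r=1000.0)  # fast expansion with x_r = m_dm / T_R
print(y_fast > y_std)
```

For $x<x_r$ the annihilation term is suppressed by $(x/x_r)^{n/2}$, so freeze-out occurs earlier and the surviving yield is larger, which is exactly why a heavier DM mass (or a larger cross-section) is needed to match the observed abundance.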
The discovery prospects for a real triplet extension of the SM at the
colliders have been discussed in Chiang:2020rcv ; Bell:2020hnr . As inferred
in Chiang:2020rcv , the $13~{}\rm TeV$ LHC excludes a real triplet lighter
than $\\{287,608,761\\}~{}\rm GeV$ for
$\mathcal{L}=\\{36,300,3000\\}~{}\text{fb}^{-1}$ of luminosity. The present
case, where the neutral triplet scalar is stable and contributes to the DM
content of the Universe (ITDM), can be probed at colliders via the
disappearing-track signature arising from the decay of the long-lived charged
component, $T^{\pm}\to\pi^{\pm}T^{0}$, due to the small mass splitting $\delta
m$. The situation is exactly analogous to that of IDM dark matter discussed in
Sec. III.1.4, and hence we do not repeat it here.
The requirement of perturbative unitarity of the DM annihilation cross-section
can forbid some part of the relic density allowed parameter space depending on
the choice of $\\{n,T_{R}\\}$ DEramo:2017gpl , thus providing a bound on the
DM mass. A general prescription for obtaining an upper bound on the thermal
dark matter mass using such a partial-wave unitarity analysis has been worked
out in Griest:1989wd . The upper limit on the thermally averaged DM interaction
cross-section is provided by Griest:1989wd
$\langle\sigma v\rangle_{\text{max}}\lesssim\frac{4\pi}{m_{\rm
DM}^{2}}\sqrt{\frac{x_{f}}{\pi}}$ (36)
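For orientation, Eq. (36) can be evaluated directly; the freeze-out value $x_f\simeq 25$ below is an assumed typical value, and the result is in natural units (GeV$^{-2}$).

```python
import math

def sigma_v_max(m_dm, x_f=25.0):
    """Partial-wave unitarity bound of Eq. (36), in GeV^-2 (natural units)."""
    return (4 * math.pi / m_dm ** 2) * math.sqrt(x_f / math.pi)

# The bound tightens quadratically with the DM mass:
for m in (500.0, 1000.0, 2000.0):   # GeV
    print(m, sigma_v_max(m))
```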
Following the approach in Kolb:1990vq and using the semi-analytical estimate of
the DM yield in Eq. (25), the freeze-out temperature $x_{f}$ can be determined
approximately by equating the DM abundances before and after freeze-out (see
Eq. (39)):
$\displaystyle
e^{x_{f}}x_{f}^{1/2}\simeq\frac{c(c+2)}{c+1}\times\frac{0.192~{}M_{\rm
pl}}{g_{*}^{1/2}}\frac{\langle\sigma v\rangle m_{\rm
DM}}{\left(\frac{x_{r}}{x}\right)^{n/2}},$ (37)
with $c$ an $\mathcal{O}(1)$ constant. We numerically calculate the
annihilation cross-section that gives rise to the correct relic abundance, and
compare it with the maximum cross-section allowed by partial-wave unitarity.
This eliminates a part of the parameter space for a fixed DM interaction rate,
as shown by the purple region in Fig. 10. It turns out that for both IDM
LopezHonorez:2010tb and ITDM Ayazi:2014tha , the leading contribution to the
DM annihilation cross-section is $s$-wave dominated. We find that regions with
large $n\geq 2$ (and small $T_{R}$) are typically in tension with the
unitarity bound at the higher range of DM mass. This is expected, since for
large $n$ (or small $T_{R}$) the Hubble parameter is large, so the
interaction rate needs to be larger to avoid overabundance. This conflicts
with the maximum allowed annihilation cross-section, which disfavours
large $\langle\sigma v\rangle$. Now, for the case of IDM, we are specifically
interested in the mass window $m_{W}\lesssim m_{H^{0}}\lesssim 525~{}\rm GeV$
while for ITDM $m_{T^{0}}\lesssim 2~{}\rm TeV$. On the other hand, as
explained earlier, we choose $n\leq 6$ ($T_{R}\gtrsim 1$ GeV) to ensure that
the DM thermalizes in the early Universe above the weak scale. Thus, within
our working regime of $n$ and $T_{R}$, we find the partial wave unitarity
bound does not pose any serious constraint for the DM mass range of our
interest.
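A minimal sketch of how the freeze-out condition in Eq. (37) can be solved for $x_f$ by fixed-point iteration, taking the $(x_r/x)^{n/2}$ factor at $x=x_f$; all parameter values ($m_{\rm DM}$, $\langle\sigma v\rangle$, $g_*$, $c$) are illustrative, not those used in the figures.

```python
import math

M_PL = 1.22e19  # Planck mass [GeV]

def x_freeze(n, x_r, m_dm=100.0, sigma_v=1e-9, g_star=100.0, c=1.0, iters=50):
    """Fixed-point solution of Eq. (37):
    e^{x_f} x_f^{1/2} = K (x_f/x_r)^{n/2},  K = c(c+2)/(c+1) * 0.192 M_pl <sv> m / sqrt(g_*)."""
    K = (c * (c + 2) / (c + 1)) * 0.192 * M_PL / math.sqrt(g_star) * sigma_v * m_dm
    x = 20.0  # initial guess
    for _ in range(iters):
        # take log of both sides and iterate
        x = math.log(K) + (n / 2) * math.log(x / x_r) - 0.5 * math.log(x)
    return x

print(x_freeze(0, 1.0))       # standard case, x_f ~ O(20)
print(x_freeze(2, 1000.0))    # fast expansion: earlier freeze-out (smaller x_f)
```

The iteration converges since the map has slope $|{\rm d}g/{\rm d}x| < 1$ near the root; the fast-expansion case returns a smaller $x_f$, consistent with the earlier freeze-out discussed in the text.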
## IV Conclusion
In this work, considering a form of alternative cosmology, we revisit two
popular DM scenarios where the DM is part of $SU(2)_{L}$ multiplets (other
than singlet). We first take up the minimal inert doublet model (IDM) where it
is observed that an intermediate DM mass range: $80~{}\text{GeV}\lesssim
m_{\text{DM}}\lesssim 525~{}\rm GeV$ is disfavored in a radiation dominated
Universe due to relic underabundance via freeze-out. In attempts to circumvent
this, extensions of the minimal inert doublet model or the existence of
multiple DM candidates have been proposed earlier. Here, we follow a different
route and find that, without resorting to an extended particle spectrum, a
revival of this desert region is possible in the presence of a non-standard
epoch in the early Universe. We obtain the parameter space accounting for the
correct relic abundance for the single-component inert doublet DM by varying
the relevant parameters responsible for the fast expansion of the Universe. Subsequently,
we see that a major part of the relic density allowed region gets ruled out
from DM direct and indirect search constraints and this in turn puts a
restriction on the fast expansion parameters. In particular, we find that for
$\lambda_{L}=0.01$, DM masses below $350$ GeV are ruled out irrespective of
the cosmological history of the early Universe. The bound becomes more severe
for larger $\lambda_{L}$, i.e., for a higher interaction rate. While for pure
IDM, bounds from relic density and (in)direct search experiments do not allow a
large mass splitting, for the inert scalar triplet this happens naturally due
to the small radiative mass splitting. We then discuss a possible collider
signature for pure IDM under the influence of fast expansion, and find that the
newly obtained parameter space can be probed via the identification of the
disappearing-track signal of a long-lived charged scalar.
The resulting track length depends on the mass splitting between the charged
and neutral component of the inert scalar doublet. The track length
$(\lesssim\mathcal{O}\left(1\right)~{}\rm cm)$ for such a long-lived scalar,
however, is below the sensitivity from the present CMS/ATLAS search and hence
leaves the possibility of being probed in future experiments. This also
implies the prospect of probing the modified cosmological history of the
Universe in collider experiments.
We extend our analysis by applying the same methodology to scrutinize the case
of the hyperchargeless real triplet scalar DM, anticipating that such a
modification of the DM parameter space should also be observed for larger
representations of the DM field. We show that a significant parameter space
($m_{T^{0}}\gtrsim 450$ GeV considering $2\leq n\leq 6$ and $T_{R}\gtrsim 1$
GeV) satisfying the relic density and other DM search bounds for
$m_{T^{0}}\lesssim 2$ TeV and portal coupling $\lambda_{HT}=0.01$ can indeed be
restored for the scalar triplet scenario, which is otherwise disallowed. We
thus conclude that this prescription can be applied to any DM candidate that is
part of an $SU(N)$ multiplet, or even to different multi-component DM
frameworks. Implications of our analysis for different aspects of particle
physics and cosmology, such as electroweak phase transitions, predictions of
gravitational waves, neutrino physics and leptogenesis, remain open. We leave
these studies for future work.
## Acknowledgement
One of the authors, AKS, thanks Sudipta Show for several discussions during the
course of this work. AKS is supported by NPDF grant PDF/2020/000797 from
Science and Engineering Research Board, Government of India. PG would like to
acknowledge the support from DAE, India for the Regional Centre for
Accelerator based Particle Physics (RECAPP), Harish Chandra Research
Institute. FSQ is supported by the Sao Paulo Research Foundation (FAPESP)
through grant 2015/15897-1 and ICTP-SAIFR FAPESP grant 2016/01343-7. FSQ
acknowledges support from CNPq grants 303817/2018-6 and 421952/2018-0 and the
Serrapilheira Institute (grant number Serra-1912-31613).
## Appendix A Semi-analytical freeze-out yield
To obtain a semi-analytical expression for the DM yield under the influence of
fast expansion, we closely follow Ref. DEramo:2017gpl . Assuming the DM freezes
out during the epoch of $\eta$-domination, i.e., $x_{f}\ll x_{r}$, the BEQ in
Eq. (24) can be approximated as
$\displaystyle\begin{aligned}
&\frac{dY_{\text{DM}}}{dx}\simeq-A\frac{\langle\sigma
v\rangle}{x^{2-n/2}x_{r}^{n/2}}\Bigl{(}Y_{\text{DM}}^{2}-Y_{\rm DM}^{\rm
eq^{2}}\Bigr{)}\end{aligned}$ (38)
with $A=\frac{2\sqrt{2}\pi}{3\sqrt{5}}\sqrt{g_{*}}m_{\text{DM}}M_{\text{pl}}$.
Defining $\Delta\equiv Y_{\text{DM}}-Y_{\text{DM}}^{\text{eq}}$, ignoring
terms of $\mathcal{O}\left[\Delta^{2}\right]$ at times well before freeze-out
(where the departure from equilibrium is minimal), and neglecting the
equilibrium distribution in the post-freeze-out regime, we obtain
$\displaystyle\
Y_{\text{DM}}\left(x\right)\simeq\begin{cases}Y_{\text{DM}}^{\text{eq}}\left(x\right)+\frac{x^{2-n/2}x_{r}^{n/2}}{2A\langle\sigma
v\rangle}\text{~{}~{}~{}~{}for~{}~{}}~{}1<x<x_{f}\\\
\Biggl{(}\frac{1}{Y_{\text{DM}}\left(x_{f}\right)}+A\,\xi\left(x\right)\Biggr{)}^{-1}\text{~{}for~{}~{}}~{}x_{f}<x<x_{r}\end{cases}$
(39)
where
$\displaystyle\xi\left(x\right)=\frac{1}{x_{r}^{n/2}}\int_{x_{f}}^{x}\,dx\,\frac{\langle\sigma
v\rangle}{x^{2-n/2}}.$ (40)
Now, one can expand the thermally averaged cross-section in terms of the
partial waves as: $\langle\sigma
v\rangle\simeq\sigma_{s}+\sigma_{p}/x+\mathcal{O}\left(x^{-2}\right)$.
Considering $s$-wave domination and substituting into Eq. (40), we find
$\displaystyle\xi\left(x\right)=\frac{\sigma_{s}}{x_{r}^{n/2}}\begin{cases}\frac{x_{f}^{n/2-1}-x^{n/2-1}}{1-n/2}\text{~{}~{}~{}~{}for~{}~{}}n\neq
2\\\
~{}\text{Log}\left[\frac{x}{x_{f}}\right]\text{~{}~{}~{}~{}~{}~{}~{}~{}for~{}~{}}n=2\end{cases}$
(41)
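As a consistency check, the closed form in Eq. (41) can be compared with a direct numerical quadrature of Eq. (40) (with $\langle\sigma v\rangle=\sigma_s$); the numerical values of $x$, $x_f$, $x_r$ and $\sigma_s$ below are arbitrary illustrative choices.

```python
import math

def xi_closed(x, x_f, x_r, n, sigma_s=1.0):
    """Eq. (41): closed form of the s-wave integral in Eq. (40)."""
    if n == 2:
        return (sigma_s / x_r) * math.log(x / x_f)
    return (sigma_s / x_r ** (n / 2)) * (x_f ** (n / 2 - 1) - x ** (n / 2 - 1)) / (1 - n / 2)

def xi_numeric(x, x_f, x_r, n, sigma_s=1.0, steps=100000):
    """Midpoint-rule evaluation of Eq. (40) with <sigma v> = sigma_s."""
    h = (x - x_f) / steps
    total = sum((x_f + (k + 0.5) * h) ** (n / 2 - 2) for k in range(steps))
    return sigma_s * h * total / x_r ** (n / 2)

for n in (0, 2, 4, 6):
    print(n, xi_closed(100.0, 20.0, 1000.0, n), xi_numeric(100.0, 20.0, 1000.0, n))
```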
After the end of the fast-expansion regime ($x>x_{r}$), radiation dominates the
energy density and the resulting DM yield reads
$\displaystyle\begin{aligned}
&Y_{\text{DM}}\left(x\right)\simeq\Biggl{(}\frac{1}{Y_{\text{DM}}\left(x_{f}\right)}+A\,\xi_{\text{rad}}\left(x\right)\Biggr{)}^{-1},~{}x>x_{r}\end{aligned}$
(42)
where
$\displaystyle\xi_{\text{rad}}\left(x\right)=\int_{x_{r}}^{x}\,dx\,\frac{\langle\sigma
v\rangle}{x^{2}}.$ (43)
## Appendix B BBN constraints
The effect of the new species $\eta$ can be parametrized by an effective
number of relativistic degrees of freedom (DOF), as is evident from Eq. (5):
$\displaystyle\begin{aligned}
&\rho\left(T\right)=\frac{\pi^{2}}{30}g_{*\text{eff}}T^{4}\end{aligned}$ (44)
with
$\displaystyle\begin{aligned} &g_{*\text{eff}}=g_{*}^{\text{SM}}+\Delta
g_{*}^{\eta}\\\ &=\Bigl{(}2+\frac{7}{8}\times
4\Bigr{)}+\Bigl{(}2\times\frac{7}{8}\times
N_{\nu}\Bigr{)}+\Bigl{(}2\times\frac{7}{8}\times\Delta
N_{\nu}\Bigr{)},\end{aligned}$ (45)
The first two terms in the last equation represent the SM contribution, with
$N_{\nu}$ denoting the effective number of neutrinos. The quantity
$\Delta N_{\nu}$ accounts for the $\eta$ contribution to the number of
relativistic degrees of freedom, as obtained from Eq. (5):
$\displaystyle\begin{aligned} &\Delta
N_{\nu}=\frac{4}{7}\,g_{*}\left(T_{R}\right)\,\Biggl{(}\frac{g_{*s}\left(T\right)}{g_{*s}\left(T_{R}\right)}\Biggr{)}^{(4+n)/3}\,\Biggl{(}\frac{T}{T_{R}}\Biggr{)}^{n}.\end{aligned}$
(46)
Considering $T_{R}$ around $T_{\text{BBN}}$ and $T\sim T_{\rm BBN}$, we can
assume $g_{*s}(T)\sim g_{*s}(T_{R})$. We also use
$g_{*}(T_{R})=\left(2+\frac{7}{8}\times 4+\frac{7}{8}\times 2\times 3\right)$
to include the contributions of photons, electrons and positrons, and
neutrinos, and arrive at
$\displaystyle\begin{aligned} &\Delta N_{\nu}\simeq
6.14\Biggl{(}\frac{T}{T_{R}}\Biggr{)}^{n}.\end{aligned}$ (47)
Since the additional contribution to $N_{\nu}$ is positive, we use the bound
$N_{\nu}+\Delta N_{\nu}\lesssim 3.4$ Cyburt:2015mya at 95% CL (2$\sigma$) and
$T\simeq 1~{}\rm MeV$ to obtain
$\displaystyle T_{R}\gtrsim\left(15.4\right)^{1/n}~{}\text{MeV}.$ (48)
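A one-line numerical check of Eq. (48), using $\Delta N_{\nu}\simeq 6.14\,(T/T_{R})^{n}$ from Eq. (47) and the bound $N_{\nu}+\Delta N_{\nu}\lesssim 3.4$ (i.e., $\Delta N_{\nu}\lesssim 0.4$) at $T\simeq 1~\rm MeV$:

```python
def t_r_min(n, T=1.0, dn_max=0.4, prefactor=6.14):
    """Lower bound on T_R [MeV] from Delta N_nu = 6.14 (T/T_R)^n <= 0.4, cf. Eq. (48)."""
    return T * (prefactor / dn_max) ** (1.0 / n)

for n in (2, 4, 6):
    print(n, round(t_r_min(n), 2))   # e.g. n=2 gives T_R >~ 3.92 MeV
```

Since $6.14/0.4\simeq 15.35$, the bound is mild: even for $n=2$ it only requires $T_R$ of a few MeV, well below the GeV-scale values considered in the main text.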
## References
* (1) D. J. Chung, E. W. Kolb, and A. Riotto, Production of massive particles during reheating, Phys. Rev. D 60 (1999) 063504, [hep-ph/9809453].
* (2) N. Okada and O. Seto, Relic density of dark matter in brane world cosmology, Phys. Rev. D 70 (2004) 083531, [hep-ph/0407092].
* (3) N. Okada and O. Seto, Gravitino dark matter from increased thermal relic particles, Phys. Rev. D 77 (2008) 123505, [arXiv:0710.0449].
* (4) N. Okada and S. Okada, Gauss-Bonnet braneworld cosmological effect on relic density of dark matter, Phys. Rev. D 79 (2009) 103528, [arXiv:0903.2384].
* (5) R. Allahverdi and J. K. Osiński, Nonthermal dark matter from modified early matter domination, Phys. Rev. D 99 (2019), no. 8 083517, [arXiv:1812.10522].
* (6) V. Baules, N. Okada, and S. Okada, Braneworld Cosmological Effect on Freeze-in Dark Matter Density and Lifetime Frontier, arXiv:1911.05344.
* (7) I. R. Waldstein, A. L. Erickcek, and C. Ilie, Quasidecoupled state for dark matter in nonstandard thermal histories, Phys. Rev. D 95 (2017), no. 12 123531, [arXiv:1609.05927].
* (8) P. Arias, N. Bernal, A. Herrera, and C. Maldonado, Reconstructing Non-standard Cosmologies with Dark Matter, JCAP 10 (2019) 047, [arXiv:1906.04183].
* (9) C. Cosme and T. Tenkanen, Spectator dark matter in non-standard cosmologies, arXiv:2009.01149.
* (10) L. Aparicio, M. Cicoli, B. Dutta, F. Muia, and F. Quevedo, Light Higgsino Dark Matter from Non-thermal Cosmology, JHEP 11 (2016) 038, [arXiv:1607.00004].
* (11) C. Han, Higgsino Dark Matter in a Non-Standard History of the Universe, Phys. Lett. B 798 (2019) 134997, [arXiv:1907.09235].
* (12) M. Drees and F. Hajkarim, Dark Matter Production in an Early Matter Dominated Era, JCAP 02 (2018) 057, [arXiv:1711.05007].
* (13) G. Arcadi, S. Profumo, F. Queiroz, and C. Siqueira, Right-handed Neutrino Dark Matter, Neutrino Masses, and non-Standard Cosmology in a 2HDM, arXiv:2007.07920.
* (14) C. Cosme, M. a. Dutra, T. Ma, Y. Wu, and L. Yang, Neutrino Portal to FIMP Dark Matter with an Early Matter Era, arXiv:2003.01723.
* (15) N. a. Bernal, F. Elahi, C. Maldonado, and J. Unwin, Ultraviolet Freeze-in and Non-Standard Cosmologies, JCAP 11 (2019) 026, [arXiv:1909.07992].
* (16) R. Allahverdi and J. K. Osiński, Freeze-in Production of Dark Matter Prior to Early Matter Domination, Phys. Rev. D 101 (2020), no. 6 063503, [arXiv:1909.01457].
* (17) C. Maldonado and J. Unwin, Establishing the Dark Matter Relic Density in an Era of Particle Decays, JCAP 06 (2019) 037, [arXiv:1902.10746].
* (18) M. Drees and F. Hajkarim, Neutralino Dark Matter in Scenarios with Early Matter Domination, JHEP 12 (2018) 042, [arXiv:1808.05706].
* (19) N. Bernal, C. Cosme, and T. Tenkanen, Phenomenology of Self-Interacting Dark Matter in a Matter-Dominated Universe, Eur. Phys. J. C 79 (2019), no. 2 99, [arXiv:1803.08064].
* (20) L. Visinelli, (Non-)thermal production of WIMPs during kination, Symmetry 10 (2018), no. 11 546, [arXiv:1710.11006].
* (21) A. Arbey, J. Ellis, F. Mahmoudi, and G. Robbins, Dark Matter Casts Light on the Early Universe, JHEP 10 (2018) 132, [arXiv:1807.00554].
* (22) D. Berger, S. Ipek, T. M. Tait, and M. Waterbury, Dark Matter Freeze Out during an Early Cosmological Period of QCD Confinement, JHEP 07 (2020) 192, [arXiv:2004.06727].
* (23) J. McDonald, {WIMP} Densities in Decaying Particle Dominated Cosmology, Phys. Rev. D 43 (1991) 1063–1068.
* (24) A. Poulin, Dark matter freeze-out in modified cosmological scenarios, Phys. Rev. D 100 (2019), no. 4 043022, [arXiv:1905.03126].
* (25) E. Hardy, Higgs portal dark matter in non-standard cosmological histories, JHEP 06 (2018) 043, [arXiv:1804.06783].
* (26) K. Redmond and A. L. Erickcek, New Constraints on Dark Matter Production during Kination, Phys. Rev. D 96 (2017), no. 4 043511, [arXiv:1704.01056].
* (27) F. D’Eramo, N. Fernandez, and S. Profumo, When the Universe Expands Too Fast: Relentless Dark Matter, JCAP 05 (2017) 012, [arXiv:1703.04793].
* (28) F. D’Eramo, N. Fernandez, and S. Profumo, Dark Matter Freeze-in Production in Fast-Expanding Universes, JCAP 02 (2018) 046, [arXiv:1712.07453].
* (29) N. Bernal, C. Cosme, T. Tenkanen, and V. Vaskonen, Scalar singlet dark matter in non-standard cosmologies, Eur. Phys. J. C 79 (2019), no. 1 30, [arXiv:1806.11122].
* (30) P. Chanda, S. Hamdan, and J. Unwin, Reviving $Z$ and Higgs Mediated Dark Matter Models in Matter Dominated Freeze-out, JCAP 01 (2020) 034, [arXiv:1911.02616].
* (31) G. B. Gelmini, P. Lu, and V. Takhistov, Visible Sterile Neutrinos as the Earliest Relic Probes of Cosmology, Phys. Lett. B 800 (2020) 135113, [arXiv:1909.04168].
* (32) A. Biswas, D. Borah, and D. Nanda, keV Neutrino Dark Matter in a Fast Expanding Universe, Phys. Lett. B 786 (2018) 364–372, [arXiv:1809.03519].
* (33) N. Fernandez and S. Profumo, Comment on “keV neutrino dark matter in a fast expanding universe” by Biswas et al., Phys. Lett. B 789 (2019) 603–604, [arXiv:1810.06795].
* (34) A. Betancur and O. Zapata, Phenomenology of doublet-triplet fermionic dark matter in nonstandard cosmology and multicomponent dark sectors, Phys. Rev. D 98 (2018), no. 9 095003, [arXiv:1809.04990].
* (35) D. Mahanta and D. Borah, TeV Scale Leptogenesis with Dark Matter in Non-standard Cosmology, JCAP 04 (2020), no. 04 032, [arXiv:1912.09726].
* (36) R. Allahverdi et al., The First Three Seconds: a Review of Possible Expansion Histories of the Early Universe, arXiv:2006.16182.
* (37) M. Artymowski, M. Lewicki, and J. D. Wells, Gravitational wave and collider implications of electroweak baryogenesis aided by non-standard cosmology, JHEP 03 (2017) 066, [arXiv:1609.07143].
* (38) M. Cirelli, N. Fornengo, and A. Strumia, Minimal dark matter, Nucl. Phys. B 753 (2006) 178–194, [hep-ph/0512090].
* (39) T. Hambye, F. S. Ling, L. Lopez Honorez, and J. Rocher, Scalar Multiplet Dark Matter, JHEP 07 (2009) 090, [arXiv:0903.4010]. [Erratum: JHEP 05, 066 (2010)].
* (40) S. Bhattacharya, P. Ghosh, A. K. Saha, and A. Sil, Two component dark matter with inert Higgs doublet: neutrino mass, high scale validity and collider searches, JHEP 03 (2020) 090, [arXiv:1905.12583].
* (41) N. Chakrabarty, R. Roshan, and A. Sil, Two Component Doublet-Triplet Scalar Dark Matter stabilising the Electroweak vacuum, arXiv:2102.06032.
* (42) A. Dutta Banik, R. Roshan, and A. Sil, Two component singlet-triplet scalar dark matter and electroweak vacuum stability, Phys. Rev. D 103 (2021), no. 7 075001, [arXiv:2009.01262].
* (43) L. Lopez Honorez and C. E. Yaguna, A new viable region of the inert doublet model, JCAP 01 (2011) 002, [arXiv:1011.1411].
* (44) D. Borah and A. Gupta, New viable region of an inert Higgs doublet dark matter model with scotogenic extension, Phys. Rev. D 96 (2017), no. 11 115012, [arXiv:1706.05034].
* (45) S. Chakraborti, A. Dutta Banik, and R. Islam, Probing Multicomponent Extension of Inert Doublet Model with a Vector Dark Matter, Eur. Phys. J. C 79 (2019), no. 8 662, [arXiv:1810.05595].
* (46) CMS Collaboration, V. Khachatryan et al., Search for disappearing tracks in proton-proton collisions at $\sqrt{s}=8$ TeV, JHEP 01 (2015) 096, [arXiv:1411.6006].
* (47) CMS Collaboration, V. Khachatryan et al., Constraints on the pMSSM, AMSB model and on other models from the search for long-lived charged particles in proton-proton collisions at sqrt(s) = 8 TeV, Eur. Phys. J. C 75 (2015), no. 7 325, [arXiv:1502.02522].
* (48) CMS Collaboration, A. M. Sirunyan et al., Search for disappearing tracks as a signature of new long-lived particles in proton-proton collisions at $\sqrt{s}=$ 13 TeV, JHEP 08 (2018) 016, [arXiv:1804.07321].
* (49) P. Fileviez Perez, H. H. Patel, M. J. Ramsey-Musolf, and K. Wang, Triplet Scalars and Dark Matter at the LHC, Phys. Rev. D 79 (2009) 055024, [arXiv:0811.3957].
* (50) T. Araki, C. Geng, and K. I. Nagao, Dark Matter in Inert Triplet Models, Phys. Rev. D 83 (2011) 075014, [arXiv:1102.4906].
* (51) W. Chao, G.-J. Ding, X.-G. He, and M. Ramsey-Musolf, Scalar Electroweak Multiplet Dark Matter, JHEP 08 (2019) 058, [arXiv:1812.07829].
* (52) S. Jangid and P. Bandyopadhyay, Distinguishing Inert Higgs Doublet and Inert Triplet Scenarios, Eur. Phys. J. C 80 (2020), no. 8 715, [arXiv:2003.11821].
* (53) J. Fiaschi, M. Klasen, and S. May, Singlet-doublet fermion and triplet scalar dark matter with radiative neutrino masses, JHEP 05 (2019) 015, [arXiv:1812.11133].
* (54) A. Betancur, R. Longas, and O. Zapata, Doublet-triplet dark matter with neutrino masses, Phys. Rev. D 96 (2017), no. 3 035011, [arXiv:1704.01162].
* (55) W.-B. Lu and P.-H. Gu, Mixed Inert Scalar Triplet Dark Matter, Radiative Neutrino Masses and Leptogenesis, Nucl. Phys. B 924 (2017) 279–311, [arXiv:1611.02106].
* (56) W.-B. Lu and P.-H. Gu, Leptogenesis, radiative neutrino masses and inert Higgs triplet dark matter, JCAP 05 (2016) 040, [arXiv:1603.05074].
* (57) S. Bahrami and M. Frank, Dark Matter in the Higgs Triplet Model, Phys. Rev. D 91 (2015) 075003, [arXiv:1502.02680].
* (58) R. Caldwell, R. Dave, and P. J. Steinhardt, Cosmological imprint of an energy component with general equation of state, Phys. Rev. Lett. 80 (1998) 1582–1585, [astro-ph/9708069].
* (59) B. Ratra and P. J. E. Peebles, Cosmological consequences of a rolling homogeneous scalar field, Phys. Rev. D 37 (Jun, 1988) 3406–3427.
* (60) J. Khoury, B. A. Ovrut, P. J. Steinhardt, and N. Turok, The Ekpyrotic universe: Colliding branes and the origin of the hot big bang, Phys. Rev. D 64 (2001) 123522, [hep-th/0103239].
* (61) E. I. Buchbinder, J. Khoury, and B. A. Ovrut, New Ekpyrotic cosmology, Phys. Rev. D 76 (2007) 123503, [hep-th/0702154].
* (62) S. Kanemura and H. Sugiyama, Dark matter and a suppression mechanism for neutrino masses in the Higgs triplet model, Phys. Rev. D 86 (2012) 073006, [arXiv:1202.5231].
* (63) L. Lopez Honorez, E. Nezri, J. F. Oliver, and M. H. Tytgat, The Inert Doublet Model: An Archetype for Dark Matter, JCAP 02 (2007) 028, [hep-ph/0612275].
* (64) A. Arhrib, Y.-L. S. Tsai, Q. Yuan, and T.-C. Yuan, An Updated Analysis of Inert Higgs Doublet Model in light of the Recent Results from LUX, PLANCK, AMS-02 and LHC, JCAP 06 (2014) 030, [arXiv:1310.0358].
* (65) F. S. Queiroz and C. E. Yaguna, The CTA aims at the Inert Doublet Model, JCAP 02 (2016) 038, [arXiv:1511.05967].
* (66) A. Belyaev, G. Cacciapaglia, I. P. Ivanov, F. Rojas-Abatte, and M. Thomas, Anatomy of the Inert Two Higgs Doublet Model in the light of the LHC and non-LHC Dark Matter Searches, Phys. Rev. D 97 (2018), no. 3 035011, [arXiv:1612.00511].
* (67) A. Alves, D. A. Camargo, A. G. Dias, R. Longas, C. C. Nishi, and F. S. Queiroz, Collider and Dark Matter Searches in the Inert Doublet Model from Peccei-Quinn Symmetry, JHEP 10 (2016) 015, [arXiv:1606.07086].
* (68) B. Barman, D. Borah, L. Mukherjee, and S. Nandi, Correlating the anomalous results in $b\to s$ decays with inert Higgs doublet dark matter and muon $(g-2)$, Phys. Rev. D 100 (2019), no. 11 115010, [arXiv:1808.06639].
* (69) N. G. Deshpande and E. Ma, Pattern of symmetry breaking with two higgs doublets, Phys. Rev. D 18 (Oct, 1978) 2574–2576.
* (70) I. Ivanov, Minkowski space structure of the Higgs potential in 2HDM, Phys. Rev. D 75 (2007) 035001, [hep-ph/0609018]. [Erratum: Phys.Rev.D 76, 039902 (2007)].
* (71) M. E. Peskin and T. Takeuchi, Estimation of oblique electroweak corrections, Phys. Rev. D 46 (Jul, 1992) 381–409.
* (72) R. Barbieri, L. J. Hall, and V. S. Rychkov, Improved naturalness with a heavy Higgs: An Alternative road to LHC physics, Phys. Rev. D 74 (2006) 015007, [hep-ph/0603188].
* (73) Particle Data Group Collaboration, M. Tanabashi and e. a. Hagiwara, Review of particle physics, Phys. Rev. D 98 (Aug, 2018) 030001.
* (74) ALEPH, DELPHI, L3, OPAL, LEP Collaboration, G. Abbiendi et al., Search for Charged Higgs bosons: Combined Results Using LEP Data, Eur. Phys. J. C 73 (2013) 2463, [arXiv:1301.6065].
* (75) A. Arbey, F. Mahmoudi, O. Stal, and T. Stefaniak, Status of the Charged Higgs Boson in Two Higgs Doublet Models, Eur. Phys. J. C 78 (2018), no. 3 182, [arXiv:1706.07414].
* (76) G. Belanger, B. Dumont, A. Goudelis, B. Herrmann, S. Kraml, and D. Sengupta, Dilepton constraints in the Inert Doublet Model from Run 1 of the LHC, Phys. Rev. D 91 (2015), no. 11 115011, [arXiv:1503.07367].
* (77) E. W. Kolb and M. S. Turner, The Early Universe, Front. Phys. 69 (1990) 1–547.
* (78) Planck Collaboration, N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641 (2020) A6, [arXiv:1807.06209].
* (79) J. M. Cline, K. Kainulainen, P. Scott, and C. Weniger, Update on scalar singlet dark matter, Phys. Rev. D 88 (2013) 055025, [arXiv:1306.4710]. [Erratum: Phys.Rev.D 92, 039906 (2015)].
* (80) XENON Collaboration, E. Aprile et al., Dark Matter Search Results from a One Ton-Year Exposure of XENON1T, Phys. Rev. Lett. 121 (2018), no. 11 111302, [arXiv:1805.12562].
* (81) Fermi-LAT, DES Collaboration, A. Albert et al., Searching for Dark Matter Annihilation in Recently Discovered Milky Way Satellites with Fermi-LAT, Astrophys. J. 834 (2017), no. 2 110, [arXiv:1611.03184].
* (82) MAGIC, Fermi-LAT Collaboration, M. Ahnen et al., Limits to Dark Matter Annihilation Cross-Section from a Combined Analysis of MAGIC and Fermi-LAT Observations of Dwarf Satellite Galaxies, JCAP 02 (2016) 039, [arXiv:1601.06590].
* (83) G. Belanger, F. Boudjema, A. Pukhov, and A. Semenov, micrOMEGAs: A Tool for dark matter studies, Nuovo Cim. C 033N2 (2010) 111–116, [arXiv:1005.4133].
* (84) G. Belanger, F. Boudjema, A. Pukhov, and A. Semenov, Dark matter direct detection rate in a generic model with micrOMEGAs 2.2, Comput. Phys. Commun. 180 (2009) 747–767, [arXiv:0803.2360].
* (85) A. Bhardwaj, P. Konar, T. Mandal, and S. Sadhukhan, Probing the inert doublet model using jet substructure with a multivariate analysis, Phys. Rev. D 100 (2019), no. 5 055040, [arXiv:1905.04195].
* (86) A. Belyaev, N. D. Christensen, and A. Pukhov, CalcHEP 3.4 for collider physics within and beyond the Standard Model, Comput. Phys. Commun. 184 (2013) 1729–1769, [arXiv:1207.6082].
* (87) CMS Collaboration, Search for disappearing tracks in proton-proton collisions at $\sqrt{s}=13$ TeV, tech. rep., CERN, Geneva, 2020.
* (88) ATLAS Collaboration, M. Aaboud et al., Search for long-lived charginos based on a disappearing-track signature in pp collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, JHEP 06 (2018) 022, [arXiv:1712.02118].
* (89) A. Belyaev, S. Prestel, F. Rojas-Abbate, and J. Zurita, Probing Dark Matter with Disappearing Tracks at the LHC, arXiv:2008.08581.
* (90) H1 and ZEUS Collaborations, R. Placakyte, Parton Distribution Functions, in 31st International Symposium on Physics In Collision, 11, 2011. arXiv:1111.5452.
* (91) M. Cirelli and A. Strumia, Minimal Dark Matter: Model and results, New J. Phys. 11 (2009) 105005, [arXiv:0903.3381].
* (92) ATLAS Collaboration, M. Aaboud et al., Measurements of Higgs boson properties in the diphoton decay channel with 36 fb-1 of $pp$ collision data at $\sqrt{s}=13$ TeV with the ATLAS detector, Phys. Rev. D 98 (2018) 052005, [arXiv:1802.04146].
* (93) C.-W. Chiang, G. Cottin, Y. Du, K. Fuyuto, and M. J. Ramsey-Musolf, Collider Probes of Real Triplet Scalar Dark Matter, arXiv:2003.07867.
* (94) N. F. Bell, M. J. Dolan, L. S. Friedrich, M. J. Ramsey-Musolf, and R. R. Volkas, A Real Triplet-Singlet Extended Standard Model: Dark Matter and Collider Phenomenology, arXiv:2010.13376.
* (95) K. Griest and M. Kamionkowski, Unitarity Limits on the Mass and Radius of Dark Matter Particles, Phys. Rev. Lett. 64 (1990) 615.
* (96) S. Yaser Ayazi and S. M. Firouzabadi, Constraining Inert Triplet Dark Matter by the LHC and FermiLAT, JCAP 11 (2014) 005, [arXiv:1408.0654].
* (97) R. H. Cyburt, B. D. Fields, K. A. Olive, and T.-H. Yeh, Big Bang Nucleosynthesis: 2015, Rev. Mod. Phys. 88 (2016) 015004, [arXiv:1505.01076].
# The fundamental gap of horoconvex domains in $\mathbb{H}^{n}$
Xuan Hien Nguyen Iowa State University<EMAIL_ADDRESS>, Alina Stancu
Concordia University<EMAIL_ADDRESS>and Guofang Wei UC Santa
Barbara<EMAIL_ADDRESS>
###### Abstract.
We show that, for horoconvex domains in the hyperbolic space, the product of
their fundamental gap with the square of their diameter has no positive lower
bound. The result follows from the study of the fundamental gap of geodesic
balls as the radius goes to infinity. In the process, we improve the lower
bound for the first eigenvalue of balls in hyperbolic space.
## 1\. Introduction
In this article, the fundamental gap of a domain is the difference between the
first two eigenvalues of the Laplacian with zero Dirichlet boundary
conditions. For convex domains in $\mathbb{R}^{n}$ or $\mathbb{S}^{n}$, $n\geq
2$, it is known from [1, 13, 7, 9] that $\lambda_{2}-\lambda_{1}\geq
3\pi^{2}/D^{2}$, where $D$ is the diameter of the domain.
In hyperbolic space, this quantity behaves very differently from the Euclidean
and spherical cases. Recently, the authors showed [5] that for any fixed
$D>0$, there are convex domains with diameter $D$ in $\mathbb{H}^{n}$, $n\geq
2$, such that $D^{2}(\lambda_{2}-\lambda_{1})$ is arbitrarily small. Since
convexity does not provide a lower bound, one naturally asks if imposing a
stronger notion of convexity, such as horoconvexity, would imply an estimate
for $D^{2}(\lambda_{2}-\lambda_{1})$ from below. Recall that for a domain with
smooth boundary, convexity corresponds to nonnegative principal curvatures of
the boundary, while horoconvexity corresponds to principal curvatures greater
than or equal to 1. We show that the quantity $D^{2}(\lambda_{2}-\lambda_{1})$
still tends to zero for all horoconvex domains in hyperbolic space when the
diameter tends to infinity.
###### Theorem 1.1.
For every $n\geq 2$, there exists a constant $C(n)$ such that the Dirichlet
fundamental gap of every horoconvex domain $\Omega$ with diameter $D\geq 4\ln
2$ satisfies
$\lambda_{2}(\Omega)-\lambda_{1}(\Omega)\leq\frac{C(n)}{D^{3}}.$
In particular, as $D\to\infty$, the quantity $(\lambda_{2}-\lambda_{1})D^{2}$
tends to $0$.
We prove this by first obtaining the following estimate for the fundamental
gap for special horoconvex domains, the geodesic balls in hyperbolic space.
###### Theorem 1.2.
Let $B_{R}$ be the geodesic ball of radius $R$ in $\mathbb{H}^{n}$ and
$\lambda_{i}(B_{R})$ be the $i$-th eigenvalue of the Laplace operator
$-\Delta$ in $B_{R}$ with Dirichlet boundary conditions. Then there is a
constant $C(n)$ so that
(1) $\lambda_{2}(B_{R})-\lambda_{1}(B_{R})\leq\frac{C(n)}{R^{3}}.$
In particular, as $R\to\infty$, the quantity $(\lambda_{2}-\lambda_{1})R^{2}$
tends to $0$.
In the authors’ earlier work [5], it was shown that, for any fixed $D>0$, one
can find a domain $\Omega$ for which
$(\lambda_{2}(\Omega)-\lambda_{1}(\Omega))D^{2}$ can be made arbitrarily
small. The domains $\Omega\subset\mathbb{H}^{n}$ in [5] are convex, but not
horoconvex. Their first eigenfunction is not log-concave either. In contrast,
note that the first eigenfunction of $B_{R}$ is log-concave (see [10,
Corollary 1.1] and Lemma 4.3). On the one hand, while the log-concavity of the
first eigenfunction plays a very important role in estimating the fundamental
gap of convex domains in the Euclidean space and sphere, Theorem 1.2 shows
that the log-concavity of the first eigenfunction in the hyperbolic case does
not imply a lower bound estimate for $(\lambda_{2}-\lambda_{1})D^{2}$. On the
other hand, we believe that $D^{2}$ is not the appropriate factor for domains
in the hyperbolic space, and we conjecture that, for all horoconvex
domains $\Omega\subset\mathbb{H}^{n}$, we have
$\lambda_{2}(\Omega)-\lambda_{1}(\Omega)\geq c(n,D)$ for some function
$c(n,D)$ depending on the dimension and diameter, leading to a lower
bound on the fundamental gap appropriately scaled with the diameter. This is
true for balls in $\mathbb{H}^{n}$, see (9).
Theorem 1.2 is proved by transforming the eigenvalue equation of balls to the
eigenvalue equation of a Schrödinger operator. As a result, we obtain some
immediate upper and lower bound estimates on the first two eigenvalues of
balls, which improve and simplify earlier estimates on the first eigenvalues
of balls. See Sections 2, 3.
To prove Theorem 1.1, we exploit the fact that all big horoconvex domains
contain a large ball [4], see Theorem 4.1. We then combine Theorem 1.2 with
Benguria and Linde’s [3] comparison result for the fundamental gap to conclude
the proof, see Section 4.
## 2. Basic Facts on Eigenvalues of Balls in $\mathbb{H}^{n}$
Here we review some basic facts about first two Dirichlet eigenvalues of balls
in the hyperbolic space. By transforming the eigenvalue equation of balls to
its Schrödinger form, we obtain some immediate upper and lower bound estimates
on the first two eigenvalues which improve and simplify earlier estimates.
### 2.1. The first eigenvalue
In this section, let $\lambda_{i}$ be the $i$-th eigenvalue of the Laplacian,
with Dirichlet boundary conditions, of geodesic balls with radius $r$ in
$\mathbb{H}^{n}$.
By [6, 3], the first eigenvalue $\lambda_{1}$ is the first eigenvalue of the
$1$-dimensional problem on $[0,r]$
(2) $u^{\prime\prime}+\frac{n-1}{\tanh t}u^{\prime}+\lambda u=0,\ \ u(r)=0,\
u^{\prime}(0)=0.$
With the change of variable $u(t)=(\sinh t)^{\frac{1-n}{2}}\bar{u}(t)$, we
have the associated Schrödinger equation
(3)
$-\frac{d^{2}}{dt^{2}}\bar{u}+\frac{n-1}{4}\left(n-1+\frac{n-3}{\sinh^{2}t}\right)\bar{u}=\lambda\bar{u}$
with Dirichlet boundary conditions at $0$ and $r$, and $\lambda_{1}$ is the
first eigenvalue of (3). Note that the nonconstant potential term changes sign
at $n=3$. We immediately notice that, when $n=3$,
$\lambda_{1}=1+\frac{\pi^{2}}{r^{2}}$. Since $\sinh^{-2}t\geq\sinh^{-2}r$ on
$(0,r]$, the ODE comparison theorem implies:
###### Lemma 2.1.
For $n>3$,
(4)
$\lambda_{1}>\frac{(n-1)^{2}}{4}+\frac{\pi^{2}}{r^{2}}+\frac{(n-1)(n-3)}{4\sinh^{2}r}.$
For $n=2$,
$\lambda_{1}\leq\frac{1}{4}+\frac{\pi^{2}}{r^{2}}-\frac{1}{4\sinh^{2}r}.$
The lower bound is sharper than the estimate of [2, (1.7)], which followed the
earlier estimate of McKean [11]. It is also an improvement over [12, Theorem
5.6] and an earlier estimate in [8, Theorem 5.2] when $r$ is large and $n>3$.
The upper bound in the case $n=2$ is that found by Gage [8, Theorem 5.2].
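As an illustrative aside (not part of the argument), the bounds of Lemma 2.1 can be checked numerically by a standard shooting method for (2): integrate outward from the series expansion $u(t)=1-\lambda t^{2}/(2n)+O(t^{4})$ valid near the singular endpoint $t=0$, and bisect in $\lambda$ until $u(r)=0$. For $n=3$ the result can be compared with the exact value $\lambda_{1}=1+\pi^{2}/r^{2}$ noted above. A self-contained Python sketch:

```python
import math

def shoot(lam, n, r, steps=2000):
    # Integrate u'' + (n-1) coth(t) u' + lam u = 0 with classical RK4,
    # starting from the series solution u(t) = 1 - lam t^2/(2n) near t = 0.
    t = 1e-6
    u = 1.0 - lam * t * t / (2 * n)
    v = -lam * t / n                      # u'(t) from the same expansion
    h = (r - t) / steps
    f = lambda t, u, v: (v, -(n - 1) / math.tanh(t) * v - lam * u)
    for _ in range(steps):
        k1u, k1v = f(t, u, v)
        k2u, k2v = f(t + h/2, u + h/2 * k1u, v + h/2 * k1v)
        k3u, k3v = f(t + h/2, u + h/2 * k2u, v + h/2 * k2v)
        k4u, k4v = f(t + h, u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return u                              # sign of u(r) locates lambda_1

def lambda_1(n, r, lo, hi):
    # Bisection: u(r) > 0 for lam < lambda_1, u(r) < 0 just above it,
    # assuming [lo, hi] brackets only the first eigenvalue.
    flo = shoot(lo, n, r)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        fmid = shoot(mid, n, r)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

print(lambda_1(3, 2.0, 2.0, 6.0))  # close to 1 + pi^2/4 ≈ 3.4674
```

The brackets passed to `lambda_1` must isolate the first eigenvalue; they were chosen here using the bounds of Lemma 2.1 and (5).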
The bounds in the other direction do not follow directly from the Schrödinger
equation (3). In [12, Theorem 5.6] the following uniform upper and lower
bounds for the first eigenvalue $\lambda_{1}$ are obtained for all $n\geq 2$:
(5)
$\frac{(n-1)^{2}}{4}+\frac{\pi^{2}}{r^{2}}-\frac{4\pi^{2}}{(n-1)r^{3}}\leq\lambda_{1}\leq\frac{(n-1)^{2}}{4}+\frac{\pi^{2}}{r^{2}}+\frac{C}{r^{3}},$
with
$C=\frac{\pi^{2}(n^{2}-1)}{2}\int_{0}^{\infty}\frac{t^{2}}{\sinh^{2}t}dt=\frac{\pi^{4}(n^{2}-1)}{12}$.
We will use this lower bound and improve the upper bound in Section 3.
### 2.2. The second eigenvalue
The second eigenvalue $\lambda_{2}$ is studied in [3, Lemma 3.1], where it is
shown that it is the first eigenvalue of the following equation (see also (16)
with $k=1,l=1$):
(6) $u^{\prime\prime}+\frac{n-1}{\tanh
t}u^{\prime}-\frac{n-1}{\sinh^{2}t}u+\lambda u=0,\ \ \ u(r)=0,\ u(t)\sim t\
\mbox{as}\ t\rightarrow 0.$
Again with the change of variable $u(t)=(\sinh t)^{\frac{1-n}{2}}\bar{u}(t)$,
we have the associated Schrödinger equation
(7)
$-\frac{d^{2}}{dt^{2}}\bar{u}+\frac{n-1}{4}\left(n-1+\frac{n+1}{\sinh^{2}t}\right)\bar{u}=\lambda\bar{u}$
with Dirichlet boundary conditions at $0$ and $r$, where the second eigenvalue
$\lambda_{2}$ is the first eigenvalue of (7). Using once more the ODE
comparison theorem, we obtain
(8)
$\lambda_{2}\geq\frac{(n-1)^{2}}{4}+\frac{\pi^{2}}{r^{2}}+\frac{n^{2}-1}{4\sinh^{2}r}.$
To find an upper bound estimate for $\lambda_{2}$, we will seek in the next
section an upper bound for the first eigenvalue of a more general Schrödinger
equation and, as such, we will simultaneously obtain an upper bound for
$\lambda_{1}$, slightly improving the one in (5).
From (3) and (7) we immediately have the following lower bound on the
fundamental gap of the ball $B_{R}\subset\mathbb{H}^{n}$ for all $n\geq 2$.
(9) $\lambda_{2}-\lambda_{1}\geq\frac{n-1}{\sinh^{2}R}.$
## 3. First Eigenvalue Upper Bound for Schrödinger Equation
Let $\lambda_{1}^{\alpha}$ be the first eigenvalue of the following equation
(10)
$-\frac{d^{2}}{dt^{2}}u+\frac{n-1}{4}\left(n-1+\frac{\alpha}{\sinh^{2}t}\right)u=\lambda
u$
with Dirichlet boundary conditions at $0$ and $r$.
###### Proposition 3.1.
For $\alpha\geq 0$, we have
(11)
$\lambda_{1}^{\alpha}<\frac{(n-1)^{2}}{4}+\frac{\pi^{2}}{r^{2}}+\frac{(n-1)\alpha}{12r^{3}}\pi^{4}.$
In particular, the first two eigenvalues of the geodesic ball of radius $r$ in
$\mathbb{H}^{n}$ satisfy
(12) $\displaystyle\lambda_{1}$ $\displaystyle<$
$\displaystyle\frac{(n-1)^{2}}{4}+\frac{\pi^{2}}{r^{2}}+\frac{(n-1)(n-3)}{12r^{3}}\pi^{4},\
\mbox{for}\ n\geq 3,$ (13) $\displaystyle\lambda_{2}$ $\displaystyle<$
$\displaystyle\frac{(n-1)^{2}}{4}+\frac{\pi^{2}}{r^{2}}+\frac{(n-1)(n+1)}{12r^{3}}\pi^{4},\
\mbox{for}\ n\geq 2.$
The upper bound (12) improves the upper bound in [12, Theorem 5.6], see (5).
###### Proof.
The first Dirichlet eigenvalue of a Schrödinger operator
$-u^{\prime\prime}+Vu$ is a minimizer of the Rayleigh quotient
$R[u]=\frac{\int|u^{\prime}|^{2}+Vu^{2}}{\int u^{2}},$
among all non-constant $u$ with $u(0)=u(r)=0$.
The equation (10) with $\alpha=0$ has its first eigenfunction equal to
$v=\sqrt{\frac{2}{r}}\sin(\pi t/r)$. It is normalized so that
$\int_{0}^{r}v^{2}dt=1$. Therefore by inserting $v$ into the Rayleigh quotient
associated to (10), we find
$\displaystyle\lambda_{1}^{\alpha}$
$\displaystyle\leq\frac{(n-1)^{2}}{4}+\int_{0}^{r}\left(\frac{dv}{dt}\right)^{2}dt+\int_{0}^{r}\frac{(n-1)\alpha}{4(\sinh
t)^{2}}v^{2}\,dt$
$\displaystyle=\frac{(n-1)^{2}}{4}+\frac{\pi^{2}}{r^{2}}+\frac{(n-1)\alpha}{4}\int_{0}^{r}\frac{v^{2}}{(\sinh
t)^{2}}\,dt.$
Using $|\sin x|\leq|x|$, we have
$r^{2}\int_{0}^{r}\left(\frac{\sin\left(\pi t/r\right)}{\sinh
t}\right)^{2}dt\leq\pi^{2}\int_{0}^{r}\left(\frac{t}{\sinh
t}\right)^{2}dt<\pi^{2}\int_{0}^{\infty}\left(\frac{t}{\sinh
t}\right)^{2}dt=\frac{\pi^{4}}{6}.$
This gives $\int_{0}^{r}\frac{v^{2}}{(\sinh
t)^{2}}\,dt<\frac{\pi^{4}}{3r^{3}}$, hence (11). ∎
Combining the lower bound in (5) with (13) gives the estimate (1) in Theorem
1.2.
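The only numerical input in the proof above is the identity $\int_{0}^{\infty}t^{2}/\sinh^{2}t\,dt=\pi^{2}/6$. As a quick sanity check (not a proof), both this value and the intermediate inequality can be confirmed with composite Simpson quadrature:

```python
import math

def simpson(f, a, b, n=20000):
    # Composite Simpson rule on [a, b]; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# (t / sinh t)^2 tends to 1 as t -> 0 and decays like 4 t^2 e^{-2t},
# so truncating the integral at t = 50 is harmless.
val = simpson(lambda t: (t / math.sinh(t)) ** 2, 1e-9, 50.0)
print(val, math.pi ** 2 / 6)  # both ≈ 1.6449
```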
## 4. Horoconvex domains in $\mathbb{H}^{n}$
A stronger definition of convexity in the hyperbolic space considers
horospheres as natural analogues of Euclidean hyperplanes supporting a convex
domain:
###### Definition.
A set $\Omega\subset\mathbb{H}^{n}$ is called horoconvex if, for every point
$p\in\partial\Omega$, there exists a horosphere ${\mathcal{H}}$ through $p$
such that $\Omega$ lies in the horoball bounded by ${\mathcal{H}}$.
Recall that a _horosphere_ is a sphere with center on the ideal boundary of
$\mathbb{H}^{n}$ and that a _horoball_ is a domain whose boundary is a
horosphere.
When $\Omega$ is a compact domain with smooth boundary in the hyperbolic space
of constant negative curvature $-1$, the domain $\Omega$ is horoconvex if and
only if all principal curvatures of the boundary hypersurface are greater than
or equal to one. As a special case, the geodesic ball $B_{R}$ of radius $R$ is
horoconvex, as each of the principal curvatures of its boundary is equal to
$\coth R$, and $\coth R>1$ for all $R>0$.
Finally, for any compact domain, recall that its inradius is the radius of the
largest ball contained in the domain, and that its circumradius is the radius
of the smallest ball containing the domain. Part of a result of Borisenko-
Miquel [4, Theorem 1] states the following:
###### Theorem 4.1.
[4] Let $\Omega$ be a compact horoconvex domain in $\mathbb{H}^{n}$ with
inradius $r$ and circumradius $R$. Denoting $\tau=\tanh\frac{r}{2}$, then
(14) $R-r\leq\ln\frac{(1+\sqrt{\tau})^{2}}{1+\tau}<\ln 2,$
and this bound is sharp.
An immediate consequence of (14) is that the diameter of the domain satisfies
$D\leq 2R\leq 2r+2\ln 2$. We are now ready to prove Theorem 1.1.
###### Proof.
Let $\Omega\subset\mathbb{H}^{n}$ be a horoconvex domain of diameter $D$.
Choose $R_{\Omega}$ such that the ball of radius $R_{\Omega}$ satisfies
$\lambda_{1}(B_{R_{\Omega}})=\lambda_{1}(\Omega)$. Theorem 4.1 implies that
$\Omega$ contains a ball of radius $r$ with $r\geq\frac{D}{2}-\ln 2$. By
domain monotonicity of the first eigenvalue, $R_{\Omega}\geq\frac{D}{2}-\ln
2$, hence
(15) $R_{\Omega}\geq\frac{D}{4},$
when $D\geq 4\ln 2$.
Using [3], Benguria-Linde’s upper bound on the second eigenvalue, we have that
$\lambda_{2}(\Omega)-\lambda_{1}(\Omega)\leq\lambda_{2}(B_{R_{\Omega}})-\lambda_{1}(B_{R_{\Omega}}).$
Applying the estimates (1) and (15) concludes the proof of Theorem 1.1. ∎
## Appendix
Small balls and log-concavity of the first eigenfunction of geodesic balls in
$\mathbb{M}^{n}_{K}$
To round out the discussion on the fundamental gap of balls in the hyperbolic
space, we include here an observation on the fundamental gap of balls of small
radii, as well as a simple argument proving the log-concavity of the first
eigenfunction of geodesic balls in simply connected Riemannian manifolds with
constant negative sectional curvature.
### 4.1. The gap of small balls in negatively curved manifolds
Let $\mathbb{M}^{n}_{K}$ be the simply connected Riemannian manifold with
constant sectional curvature $K$. Here, we assume that $K$ is negative and
write $K=-k^{2},\,(k>0)$. Denote by $\lambda_{i}(n,k,r)$ the eigenvalues of
the Laplacian for geodesic balls with radius $r$ in $\mathbb{M}^{n}_{K}$ with
Dirichlet boundary condition.
By separation of variables, see [6, 3], the eigenvalues $\lambda_{i}(n,k,r)$
are eigenvalues of
(16)
$u^{\prime\prime}+\frac{(n-1)k}{\tanh(kt)}u^{\prime}-\frac{l(l+n-2)k^{2}}{\sinh^{2}(kt)}u+\lambda
u=0,$
where $l=0,1,2,\cdots$, with boundary condition $u^{\prime}(0)=0$ for $l=0$,
$u(t)\sim t^{l}$ as $t\to 0$ for $l>0$, and $u(r)=0$.
By scaling, this immediately gives [3, Lemma 4.1], for $c>0$,
(17) $\lambda_{i}(n,\frac{1}{c}k,cr)=c^{-2}\lambda_{i}(n,k,r).$
Hence
(18) $\lambda_{i}(n,1,r)=r^{-2}\lambda_{i}(n,r,1).$
Therefore, for small balls in $\mathbb{H}^{n}$, the value
$r^{2}\lambda_{i}(n,1,r)$ is close to the corresponding one in the Euclidean
space, as one would expect. Namely,
###### Lemma 4.2.
$\lim_{r\to
0}r^{2}\lambda_{i}(n,1,r)=\lambda_{i}(n,0,1)=r^{2}\lambda_{i}(n,0,r),$
and
(19) $\lim_{r\to
0}r^{2}\left(\lambda_{2}(n,1,r)-\lambda_{1}(n,1,r)\right)=r^{2}(\lambda_{2}(n,0,r)-\lambda_{1}(n,0,r))=j_{\frac{n}{2},1}^{2}-j_{\frac{n}{2}-1,1}^{2},$
where $j_{p,k}$ is the $k$-th positive zero of the Bessel function $J_{p}(x)$.
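For $n=2$ the Euclidean limit in (19) is $j_{1,1}^{2}-j_{0,1}^{2}$. As a numerical aside (not used elsewhere), these first Bessel zeros can be computed from the power series of $J_{p}$ together with elementary bisection:

```python
import math

def bessel_j(p, x, terms=60):
    # Power series J_p(x) = sum_k (-1)^k / (k! Gamma(k+p+1)) (x/2)^{2k+p}
    return sum((-1) ** k / (math.factorial(k) * math.gamma(k + p + 1))
               * (x / 2) ** (2 * k + p) for k in range(terms))

def first_zero(p, lo, hi):
    # Bisection for j_{p,1}, assuming [lo, hi] brackets exactly one zero.
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bessel_j(p, lo) * bessel_j(p, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

j01 = first_zero(0, 2.0, 3.0)   # j_{0,1} ≈ 2.4048
j11 = first_zero(1, 3.0, 4.0)   # j_{1,1} ≈ 3.8317
print(j11 ** 2 - j01 ** 2)      # gap of the Euclidean unit disk, ≈ 8.899
```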
### 4.2. The first eigenfunction for balls
The first eigenfunction of balls is purely radial, so it is straightforward to
show that it is log-concave, as in the Euclidean and spherical case.
###### Lemma 4.3.
The first eigenfunction $u_{1}$ of (2) is strictly log-concave.
This is in [10, Corollary 1.1], where more general elliptic equations with
power are considered. For convenience, we give a simple and direct proof here.
###### Proof.
First we show $u_{1}$ is strictly decreasing. Multiplying both sides of (2) by
$\sinh^{n-1}t$, we have
$(u_{1}^{\prime}\sinh^{n-1}t)^{\prime}=-\lambda_{1}u_{1}\sinh^{n-1}t<0.$
Since $u_{1}^{\prime}(0)=0$, we have $u_{1}^{\prime}(t)<0$ for $t\in(0,r)$.
Let $\varphi=(\log u_{1})^{\prime}$. Then $\varphi(0)=0$, $\varphi<0$ on
$(0,r)$, and
$\varphi^{\prime}=\frac{u_{1}^{\prime\prime}}{u_{1}}-\left(\frac{u_{1}^{\prime}}{u_{1}}\right)^{2}=-\frac{n-1}{\tanh
t}\varphi-\lambda_{1}-\varphi^{2}.$
Taking the limit as $t\to 0$ gives
$\varphi^{\prime}(0)=-\lambda_{1}-(n-1)\lim_{t\to 0}\frac{\varphi}{\tanh
t}=-\lambda_{1}-(n-1)\varphi^{\prime}(0)$. Solving for $\varphi^{\prime}(0)$
gives $\varphi^{\prime}(0)=-\lambda_{1}/n<0$.
Now, we claim that $\varphi^{\prime}(t)<0$ on $[0,r)$. Otherwise, there exists
$t_{1}\in(0,r)$ such that $\varphi^{\prime}<0$ on $[0,t_{1})$,
$\varphi^{\prime}(t_{1})=0$ and $\varphi^{\prime\prime}(t_{1})\geq 0$. Note
that $\varphi^{\prime\prime}$ satisfies
$\varphi^{\prime\prime}=\frac{n-1}{\sinh^{2}t}\varphi-\frac{n-1}{\tanh
t}\varphi^{\prime}-2\varphi\varphi^{\prime}.$
Evaluating the two sides of the equation at $t_{1}$ gives
$0\leq\varphi^{\prime\prime}(t_{1})=\frac{n-1}{\sinh^{2}t_{1}}\varphi(t_{1})<0.$
This is a contradiction. ∎
## References
* [1] Ben Andrews and Julie Clutterbuck. Proof of the fundamental gap conjecture. J. Amer. Math. Soc., 24(3):899–916, 2011.
* [2] Sergei Artamoshin. Lower bounds for the first Dirichlet eigenvalue of the Laplacian for domains in hyperbolic space. Math. Proc. Cambridge Philos. Soc., 160(2):191–208, 2016.
* [3] Rafael D. Benguria and Helmut Linde. A second eigenvalue bound for the Dirichlet Laplacian in hyperbolic space. Duke Math. J., 140(2):245–279, 2007.
* [4] Alexandr A. Borisenko and Vicente Miquel. Total curvatures of convex hypersurfaces in hyperbolic space. Illinois J. of Math., 43(1):61–78, 1999.
* [5] Theodora Bourni, Julie Clutterbuck, Xuan Hien Nguyen, Alina Stancu, Guofang Wei, and Valentina-Mira Wheeler. The vanishing of the fundamental gap of convex domains in $\mathbb{H}^{n}$. arXiv:2005.11784, 2020.
* [6] Isaac Chavel. Eigenvalues in Riemannian geometry, volume 115 of Pure and Applied Mathematics. Academic Press Inc., Orlando, FL, 1984.
* [7] Xianzhe Dai, Shoo Seto, and Guofang Wei. Fundamental gap estimate for convex domains on sphere – the case $n=2$. To appear in Comm. in Analysis and Geometry, arXiv:1803.01115, 2018.
* [8] Michael E. Gage. Upper bounds for the first eigenvalue of the Laplace-Beltrami operator. Indiana Univ. Math. J., 29(6):897–912, 1980.
* [9] Chenxu He, Guofang Wei, and Qi S. Zhang. Fundamental gap of convex domains in the spheres. Amer. J. Math., 142(4):1161–1192, 2020.
* [10] Kazuhiro Ishige, Paolo Salani, and Asuka Takatsu. Power concavity for elliptic and parabolic boundary value problems on rotationally symmetric domains. arXiv:2002.1014, 2020.
* [11] Henry P McKean. An upper bound to the spectrum of $\Delta$ on a manifold of negative curvature. J. Differential Geometry, 4:359–366, 1970.
* [12] Alessandro Savo. On the lowest eigenvalue of the Hodge Laplacian on compact, negatively curved domains. Ann. Global Anal. Geom., 35(1):39–62, 2009.
* [13] Shoo Seto, Lili Wang, and Guofang Wei. Sharp fundamental gap estimate on convex domains of sphere. Journal of Differential Geometry, 112(2):347–389, 2019.
# Relative Canonical Network Ensembles –
(Mis)characterizing Small-World Networks
Oskar Pfeffer, Nora Molkenthin, Frank Hellmann
Potsdam Institute for Climate Impact Research
TU Berlin
###### Abstract
What do generic networks that have certain properties look like? We define
Relative Canonical Network ensembles as the ensembles that realize a property
R while being as indistinguishable as possible from a generic network
ensemble. This allows us to study the most generic features of the networks
giving rise to the property under investigation. To test the approach we apply
it first to the network measure “small-world-ness”, thought to characterize
small-world networks. We find several phase transitions as we go to less and
less generic networks in which cliques and hubs emerge. Such features are not
shared by typical small-world networks, showing that high “small-world-ness”
does not characterize small-world networks as they are commonly understood. On
the other hand we see that for embedded networks, the average shortest path
length and total Euclidean link length are better at characterizing small-
world networks, with hubs that emerge as a defining feature at low genericity.
We expect the overall approach to have wide applicability for understanding
network properties of real world interest.
Keywords: MCMC, Complex Networks, Small-world
## I Introduction
Network ensembles are sets of networks together with a probability
distribution of their occurrence and have been successfully used to model a
wide range of natural, social and technical systems, in which the interaction
structure is subject to, or the outcome of, stochasticity [1, 2, 3, 4, 5, 6].
Typically those ensembles are generated through a heuristic process, thought
to capture some aspect of the microscopic formation process, which underlies
the real-world system they are trying to model. The resulting ensemble can
then be studied and characterized by means of network measures that quantify
certain properties of the networks. Examples for this are Watts–Strogatz
networks, which are characterized by low average shortest path length and high
clustering coefficients [7], and Barabasi–Albert networks, which are
characterized by their power-law degree distribution [8].
Here we want to approach network ensembles from the other side. Rather than
trying to model real world networks we ask: What do generic networks that have
certain properties look like? Thus, we will _define_ ensembles through a
particular property captured by a “property function” $R(G)$ on networks $G$
and a background ensemble that defines our notion of generic networks in the
given context. To this end, we will consider slightly generalized exponential
random graphs. Exponential random graphs have long been a tool in network
science, starting with [9, 10, 11, 12], see [13] for a recent review, and are
also sometimes known as canonical network ensembles (CNE) [14, 15, 16]. We
will consider CNEs relative to the background ensemble of generic networks.
Given some set of networks $\mathcal{E}$ on a finite set of vertices, denote
the probability distribution of the background ensemble as $q(G)$ for
$G\in\mathcal{E}$. The relative canonical network ensemble (RCNE) of $R$
relative to $q$ is given by the probability distribution proportional to
$\exp(-\beta R(G))q(G)$.
We emphasize that our aim is not to model empirically observed network
ensembles with certain properties. There is no reason to expect empirical
networks, that are the outcome of subtle formation processes, to be generic.
Instead, we will study the properties themselves, specifically the most
generic features that produce them, and whether or not the properties suffice
to generically characterize the networks under study. Our aim in this is to
understand properties that are of considerable practical interest. Companion
papers will consider epidemic thresholds and the vulnerability to failure
cascades in power grids. To introduce our approach, this paper will instead
focus on well-known and well-established network measures that are
computationally challenging. Specifically, we will consider the notion of
“small-world-ness”.
We study two ensembles, the first defined by the _small-world-ness_, as
introduced in [17], the second defined by a combination of Euclidean link
length and average shortest path length similar to [18]. To study these
ensembles we sample them using the straightforward Metropolis-Hastings (MH)
algorithm [19, 20, 21].
In both cases we find phase transitions as we go from fully generic networks
to highly specific ones. At these phase transitions certain features arise,
e.g. hubs and cliques start appearing in the ensemble. Surprisingly we find
that generic networks with high small-world-ness do not resemble small-world
networks. Thus, we find that what [17] called small-world-ness does not
actually characterize small-world networks generically.
## II Relative Canonical Network Ensemble
Exponential random graphs were first introduced in [11, 9, 10]. Given the set
of simple graphs $\mathcal{E}_{N}$ on a set of $N$ vertices, they are defined
by the probability distribution over $\mathcal{E}_{N}$,
$p_{\beta}^{R}(G)={Z_{R}(\beta)}^{-1}\exp\left(-\beta R(G)\right)$. That is,
they are the Gibbs ensemble at temperature $T=1/(k_{B}\beta)$. The use of such
network ensembles is sometimes justified by the fact that these are maximum
entropy ensembles with a given expectation value for $R$. However, there is no
a priori reason to expect formation processes that lead to real world networks
to maximize entropy. For instance, typical formation processes do not resemble
exchange with an environment at fixed genericity (in analogy to a heat bath).
In fact, it was already noted in [12] that the maximum entropy ensembles do
not model real world systems easily and show unexpected structures,
interpreted there as an “unfortunate pathology”.
Instead, we want to understand the most generic features giving rise to a
property $R$. That is, a feature that is observed more frequently the more
the expectation value of $R$ differs from the value expected for generic
networks. As mentioned in the introduction, to define our notion of genericity
we specify a background ensemble $q(G)$ (for example an Erdős–Rényi ensemble at
a fixed number of edges). The relative canonical network ensemble of $R$
relative to $q$ is then given by:
$\displaystyle p^{R}_{\beta,q}(G)=\frac{1}{Z_{R,q}(\beta)}e^{-\beta
R(G)}q(G)\;,$ (1)
with normalization/partition function
$Z_{R,q}(\beta)=\sum_{G\in\mathcal{E}_{N}}e^{-\beta R(G)}q(G)$. This ensemble
is characterized by being the ensemble of minimum relative entropy $D(p||q)$
for a fixed expectation value of $R$. From an information-theoretic
perspective it is the ensemble hardest to distinguish from the generic
ensemble $q$ while having fixed expectation value $\langle R\rangle=R^{*}$,
for a more detailed discussion see Appendix A.
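On very small graphs the distribution (1) can be evaluated by brute force, which makes the role of $\beta$ concrete. The toy sketch below (ours, not from the paper's simulations) uses the triangle count as a stand-in property $R$ and the uniform fixed-edge-count ensemble as $q$; the expectation $\langle R\rangle$ equals the generic value at $\beta=0$ and moves monotonically away from it as $|\beta|$ grows:

```python
import math
import itertools

def triangles(edges):
    # count triangles in a graph given as a tuple of sorted vertex pairs
    s = set(edges)
    verts = sorted({v for e in edges for v in e})
    return sum(1 for a, b, c in itertools.combinations(verts, 3)
               if (a, b) in s and (a, c) in s and (b, c) in s)

def rcne_expectation(n, m, beta, R):
    # q: uniform over all simple graphs on n labelled vertices with m edges;
    # p(G) ∝ exp(-beta R(G)) q(G).  Return <R> under p by full enumeration.
    pairs = list(itertools.combinations(range(n), 2))
    z = num = 0.0
    for edges in itertools.combinations(pairs, m):
        w = math.exp(-beta * R(edges))
        z += w
        num += w * R(edges)
    return num / z

# beta < 0 favours high triangle counts, beta > 0 suppresses them;
# beta = 0 gives the generic mean, here exactly 5/6.
for beta in (-2.0, 0.0, 2.0):
    print(beta, rcne_expectation(5, 5, beta, triangles))
```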
The parameter $\beta$ moderates the trade-off between the generic ensemble and
highly specific ones peaked on networks that are high or low in $R$, see
Figure 1. It can range from $-\infty$ to $+\infty$ with the sign depending on
whether the expectation value of $R$ is higher or lower than in the generic
network ensemble given by $\beta=0$. At $\beta\rightarrow-\infty$ we have an
ensemble concentrated on $\max(R)$, while at $\beta\rightarrow+\infty$ it is
$\min(R)$. This, and the fact that the interpretation of the relative entropy
is purely information-theoretic (rather than thermodynamic), motivates us to
refer to $\beta^{-1}$ in this context as the genericity rather than as a
temperature.
Of particular note are phase transitions that occur as we lower the absolute
genericity. The structure of the ensemble changes at and beyond the phase
transition. This change in structure allows us to identify specific features
that contribute to property $R$ but are not generic enough to occur before.
Figure 1: The inverse genericity $\beta$ mediates between specific ensembles
concentrated on maximum $R$, the generic background ensemble $q$ with $\langle
R\rangle=\langle R\rangle_{q}$ and specific ensembles concentrated on minimum
$R$.
Throughout the rest of this manuscript we will consider canonical ensembles
relative to the Erdős–Rényi ensemble at fixed size $N$ and mean degree $k$,
that is, the equidistribution over all graphs with vertex set $\\{1,...,N\\}$
and $kN/2$ edges. Generally what counts as a generic network is highly
dependent on context. A generic social network does not look like a generic
power grid. In some contexts it might also be appropriate to use maximum
entropy null-models as generic ensembles[13].
Since exponential random graphs were first introduced, computing capabilities
profoundly increased. This means we can now use complex, practically relevant
network properties and analyze what features of networks generically give rise
to them. This approach may help in the future to gain a better understanding
of complex network measures and provide a way to find simpler network measures
to act as predictors for the characteristics defining the ensemble.
To study these ensembles we need to sample from them. An important property of
RCNEs is that they are well suited for sampling using Metropolis-Hastings (MH)
algorithms. To use MH on our relative ensemble, we require a background
process that generates proposed steps compatible with the background
distribution $q$. For $q_{Nk}$ this can be provided simply by considering
rewiring of edges. Starting from a system in state $x$ the algorithm proposes
rewirings that are accepted with probability
$\displaystyle P_{\beta}(x\rightarrow
y)=\min\left(1,\frac{p_{\beta}(R(y))}{p_{\beta}(R(x))}\right)=\min\left(1,e^{-\beta\Delta
R}\right),$ (2)
where $\Delta R=R(y)-R(x)$.
This algorithm satisfies the detailed balance condition, and the Markov chain
defined by it is strongly connected. In the limit of infinitely many steps,
the time average along the chain converges to the average over its stationary
distribution, which is the relative canonical network ensemble. Unfortunately, there
are no guarantees for finite time samples and we have to resort to heuristics
to understand whether convergence has occurred. To do so we typically also run
several chains in parallel from random initial conditions. This further allows
us to obtain less correlated samples. More details on our sampling approach
are provided in the next section.
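A minimal pure-Python version of this rewiring sampler (a sketch, not the authors' code: the triangle count stands in for the property $R$, and the additional rejection of disconnected proposals needed when $R$ involves $L$ is only indicated in a comment):

```python
import math
import random
import itertools

def triangle_count(edges, adj):
    # each triangle {a < b < c} is counted exactly once, via its edge (a, b)
    return sum(1 for (u, v) in edges for w in adj[u] & adj[v] if w > v)

def mh_rewire(n, m, beta, steps, seed=0):
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(n), 2))
    edges = set(rng.sample(pairs, m))          # background ensemble q_{Nk}
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    r_cur = triangle_count(edges, adj)
    history = []
    for _ in range(steps):
        old = rng.choice(tuple(edges))
        new = rng.choice([e for e in pairs if e not in edges])
        # apply the single-edge rewiring tentatively (edge count is conserved)
        edges.remove(old); adj[old[0]].discard(old[1]); adj[old[1]].discard(old[0])
        edges.add(new); adj[new[0]].add(new[1]); adj[new[1]].add(new[0])
        r_new = triangle_count(edges, adj)
        # NOTE: when R involves the shortest path length L, proposals that
        # disconnect the graph must additionally be rejected here.
        if rng.random() >= min(1.0, math.exp(-beta * (r_new - r_cur))):
            # reject: undo the rewiring
            edges.remove(new); adj[new[0]].discard(new[1]); adj[new[1]].discard(new[0])
            edges.add(old); adj[old[0]].add(old[1]); adj[old[1]].add(old[0])
        else:
            r_cur = r_new
        history.append(r_cur)
    return edges, history

edges, hist = mh_rewire(12, 20, beta=3.0, steps=2000, seed=1)
print(len(edges), hist[-1])
```

At $\beta=0$ every proposal is accepted and the chain simply explores the background ensemble; increasing $\beta$ drives the running triangle count down, as Eq. (2) prescribes.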
For more general background ensembles it might be complicated to find step
proposals. If $q$ is explicitly known it can be incorporated into the step
acceptance probability. If there is no explicit formula for $q$, for example
because it is implicitly defined by a stochastic growth process, it is
necessary to make use of the growth process directly to generate new
proposals.
## III Small-world properties
Figure 2: Small-world-ness increases significantly as the property function
approaches the global minimum. $R_{S}$-ensemble (circles) and
$R_{WL}$-ensemble (squares) for networks with $N=256$ and $\langle k\rangle=4$
at finite inverse genericities $\beta\in\\{2^{-2},\dots,2^{13}\\}$. At the
start of the range the ensembles are statistically indistinguishable from the
background ensemble $q$. a) shows the small-world-ness $S$ and b) the property
$R_{S}$ and $R_{WL}$ vs $\beta$. The stars on the right identify simulations
for $\beta\rightarrow\infty$. Each data point is averaged over 32 realizations
with $2^{24}$ MCMC steps each.
To demonstrate the approach, we analyze two different small-world-ness
properties. In particular we look at the features that give rise to them and
whether they generically characterize what is commonly known as small-world
networks. In the first instance we consider the Small-world-ness measure
$S=({C}/{L})\left({C_{\text{ER}}}/{L_{\text{ER}}}\right)^{-1}$, introduced in
[17], where $C$ is the global clustering coefficient, defined as the number of
closed triplets divided by the number of all triplets, $L$ is the average
shortest path length, $C_{\text{ER}}$ is the average clustering coefficient of
an Erdős–Rényi network [22] of the same size, and $L_{\text{ER}}$ is the
expected average shortest path length of an Erdős–Rényi network of the same
size. Finally, the form of the property we will consider is:
$\displaystyle R_{S}=\frac{L}{C}.$ (3)
That is, $R_{S}$ is proportional to the inverse of $S$. Thus, small values of
$R_{S}$ indicate high small-world-ness, and we are interested in positive
$\beta$.
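Both ingredients of $R_{S}$ are elementary to compute. The helper below (an illustrative pure-Python sketch with our own function names, not the simulation code) evaluates the global clustering coefficient, the average shortest path length via breadth-first search, and hence $R_{S}$, for a connected graph given as an adjacency dictionary; the ring lattice used as a test case has $C=1/2$ and $L=21/11$:

```python
from collections import deque

def global_clustering(adj):
    # closed triplets / all triplets = 3 * triangles / sum_v C(deg v, 2)
    triplets = sum(len(nb) * (len(nb) - 1) // 2 for nb in adj.values())
    triangles = sum(1 for u in adj for v in adj[u] if u < v
                    for w in adj[u] & adj[v] if w > v)
    return 3 * triangles / triplets if triplets else 0.0

def average_shortest_path(adj):
    # BFS from every vertex; assumes the graph is connected
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

def R_S(adj):
    return average_shortest_path(adj) / global_clustering(adj)

# ring lattice: 12 vertices, each joined to its 2 nearest neighbours per side
n, half = 12, 2
ring = {i: {(i + d) % n for d in range(-half, half + 1) if d != 0}
        for i in range(n)}
print(global_clustering(ring), average_shortest_path(ring), R_S(ring))
# ≈ 0.5, 21/11 ≈ 1.909, 42/11 ≈ 3.818
```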
Figure 3: At low genericity hub and clique structures emerge, transforming the
degree distribution. The degree distribution in a) shows three peaks for the
$R_{S}$-ensemble while the degree distribution of the $R_{WL}$-ensemble in c)
shows two peaks. b) and d) show a heat-map of degree distributions at various
inverse genericities. The degree distributions are an average of 32
realizations with $2^{24}$ MCMC steps each.
The second property
$\displaystyle R_{WL}=WL$ (4)
is given in terms of the average shortest path length $L$ and the wiring
length $W$ in an embedded network. $W$ is given by the sum of the Euclidean
length of all edges. The networks for this ensemble are embedded in a 2D
plane. The introduction of $W$ was inspired by [18], where it was argued that
small-world networks might arise as a secondary feature from a trade-off
between maximal connectivity and minimal edge lengths. Again, small $R_{WL}$
is expected to yield small-world networks, and we consider positive $\beta$.
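The only new ingredient relative to $R_{S}$ is the wiring length $W$; a minimal sketch (function and variable names are ours, positions given as coordinate tuples):

```python
import math

def wiring_length(edges, pos):
    # W: sum of the Euclidean lengths of all edges, for a planar embedding
    return sum(math.dist(pos[u], pos[v]) for u, v in edges)

# toy example: a 4-cycle on the corners of the unit square
pos = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(wiring_length(cycle, pos))  # 4.0
```

$R_{WL}$ is then this value multiplied by the average shortest path length of the same graph.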
Figure 4: Examples of the various phases for $N=128$ networks. The $R_{S}$
ensemble starts as a random network without any recognizable structure in a),
then a first large clique emerges in b) and finally a central hub that lowers
the average shortest path length in c), which is close to the network where
the small-world-ness is maximal in d). The $R_{WL}$ ensemble starts from a
random phase in e), then a central hub with long range connections emerges in
f). The network minimizing $R_{WL}$, shown in g), resembles a random geometric
network with a central hub.
Both ensembles are taken relative to a random Erdős–Rényi network with $N$
vertices and $M={\langle k\rangle N}/{2}$ edges, where $\langle k\rangle$ is
the average degree of the network. The positions of the vertices in the
embedded networks are initialized randomly on a 2D unit square.
The proposal for each Monte Carlo step is generated by rewiring a single edge,
i.e. deleting an existing edge at random and connecting two previously
unconnected vertices chosen at random, thus keeping the number of edges
constant. The proposal is then accepted with the transition probability given
in Eq. (2). Proposals of disconnected graphs are always rejected since $L$ is
infinite. To ensure convergence at low genericity, we use an exponential
schedule
$\beta^{-1}(t)=\beta^{-1}_{\text{start}}\alpha^{t}+\beta^{-1}_{\text{end}}$,
similar to the Simulated Annealing approach [23], where $t$ is the step
parameter and $\alpha=0.99$ is a simulation parameter;
$\beta^{-1}_{\text{start}}$ and $\beta^{-1}_{\text{end}}$ are the start and
final genericities. We generated ensembles of $(128,128,128,128,64,32,16)$
networks of size $N=(8,16,32,64,128,256,512)$, respectively, with average
degree $\langle k\rangle=4$. All the simulations appeared to converge,
allowing a qualitative evaluation. Throughout the manuscript,
genericity was decreased over $2^{11}$ equally long periods, each containing
$2^{13}$ MCMC steps for a total of $2^{24}$ MCMC steps. Note that while the
properties we consider here are conceptually simple, the presence of the
average shortest path length, which needs to be recomputed for every proposed
step, renders them computationally expensive. We further note that achieving
convergence for the $R_{S}$ ensemble was considerably harder than for
$R_{WL}$. Thus, they do constitute a real check of the ability of the approach
to study complex network properties.
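The annealing schedule itself can be tabulated directly (parameter names are ours; the start and end genericities below are arbitrary example values, and the default number of periods matches the $2^{11}$ used above):

```python
def genericity_schedule(beta_inv_start, beta_inv_end, alpha=0.99, periods=2 ** 11):
    # exponential annealing: beta^{-1}(t) = beta^{-1}_start * alpha^t + beta^{-1}_end
    return [beta_inv_start * alpha ** t + beta_inv_end for t in range(periods)]

sched = genericity_schedule(4.0, 2.0 ** -13)
print(sched[0], sched[-1])  # decays from start + end towards the end value
```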
As shown in Fig. 2, the Small-world-ness $S$ increases for both network
ensembles as they become less generic. For the $R_{S}$-ensemble, for which it
is mathematically guaranteed that the expectation value of $S$ increases for
decreasing genericity, this is an important sanity check on our sampling. In
the $R_{WL}$-ensemble this arises as a secondary effect as Euclidean and
network distances are reduced, showing that generic $R_{WL}$ networks do
indeed have high small-world-ness, as anticipated in [18].
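For reference, a small-world-ness index in the sense of [17] can be computed as sketched below. This is a simplified sketch: we use the transitivity as the clustering measure and average the random baseline over a few Erdős–Rényi graphs with the same $N$ and $M$; the exact baseline construction may differ from the one used in the paper:

```python
import networkx as nx


def small_world_ness(G, n_random=20, seed=0):
    """S = (C / C_rand) / (L / L_rand): clustering and average shortest
    path length relative to an Erdos-Renyi baseline of equal N and M."""
    C = nx.transitivity(G)
    L = nx.average_shortest_path_length(G)
    n, m = G.number_of_nodes(), G.number_of_edges()
    C_rand, L_rand, used = 0.0, 0.0, 0
    for i in range(n_random):
        R = nx.gnm_random_graph(n, m, seed=seed + i)
        if not nx.is_connected(R):
            continue  # skip disconnected baselines for simplicity
        C_rand += nx.transitivity(R)
        L_rand += nx.average_shortest_path_length(R)
        used += 1
    C_rand, L_rand = C_rand / used, L_rand / used
    return (C / C_rand) / (L / L_rand)
```

A Watts–Strogatz graph at low rewiring probability, for instance, yields $S$ well above 1, while an Erdős–Rényi graph gives $S\approx 1$ by construction.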
As a common and simple network measure, we now look at the degree
distribution. Fig. 3 shows the degree distributions of the two ensembles for
decreasing genericity. The shift from generic (Poisson distributed) to
specific networks is evident. The extremal $\beta\rightarrow\infty$ case
(simulated with the same exponential schedule as above until we observed
convergence) is shown explicitly in Fig. 3 a) and c), and we see highly
pronounced features in the degree distribution. Examples of networks in this
state are shown in Fig. 4 d) and g); inspecting them allows us to identify
the features in the degree distribution as major cliques and hubs. Note that
the degree distribution of the $R_{S}$ ensemble in particular does not
resemble that of the WS-ensemble.
The $R_{S}$ example network (Fig. 4 d) looks almost star-shaped with a very
highly connected central node and a few fully connected branches. This
indicates that the two components of the property, namely average shortest
path length and clustering are optimized in specialized areas of the network.
The star graph, which minimizes the average shortest path length among sparse
graphs, is thereby combined with many nodes in fully connected cliques. The $R_{WL}$ network
(Fig. 4 g) on the other hand looks like a sparse geometric network with star-
shaped shortcuts, making it much closer in spirit to the WS-ensemble and its
two-dimensional relatives.
As a result, the nodes in the $R_{S}$-networks can be categorized into hub-
nodes, clique-nodes, and the rest, where hub-nodes have very high degrees
($k\approx 20-30$), clique nodes above-average degrees ($k\approx 10-15$) and
the rest has low degrees. This can be seen in Fig. 3 b) as three major peaks.
To understand how these extremal cases come about, we consider the degree
distributions over the whole parameter space in (Fig. 3 b and d). Here we see
several abrupt transitions. For the $R_{S}$ ensemble the major clique starts
forming at $\beta\approx 2^{-2}$, while the hub only emerges at high inverse
genericities of around $\beta\approx 2^{8}$.
In the embedded networks, nodes fall into two categories: regional nodes and
inter-regional hubs. This can be seen in Fig. 3 c), where regional nodes fall
into the normal degree distribution of a (slightly sparser) geometric network
and hubs have higher degrees of $k\approx 40-50$. This hub emerges at around
$\beta\approx 2^{10}$.
Figure 5: The phase transition is characterized by a rise in the largest
eigenvalue. The largest eigenvalue is plotted in a) and d) for $R_{S}$ and
$R_{WL}$, respectively. b) and e) show the dependence of the largest degree on
the genericity, and c) and f) show the maximum k-core over genericity for
network sizes $N=\{8,16,32,64,128,256,512\}$ and average degree
$\langle k\rangle=4$. Simulation details: $2^{24}$ MCMC steps each; generated
ensemble size (in order of network size) $=\{128,128,128,128,64,32,16\}$.
The realizations for $R_{S},N=512,\log_{2}\beta\geq 7$ did not fully converge
and are not plotted.
## IV Genericity phase transition
Fig. 4 shows various examples of networks taken from different genericity
phases. We can now study the transition between these phases in more detail.
As seen in Fig. 3 b and d, both the $R_{S}$ and $R_{WL}$ ensembles show a
qualitative change in the degree distribution. At high genericity we have
essentially random graphs in both cases with the expected Poisson degree
distribution. At low genericity both ensembles show multiple peaks. In the
case of the $R_{WL}$-ensemble this comes as the sudden appearance of a second
peak at $\beta=2^{10}$. In the case of the $R_{S}$-ensemble the transition
appears less clear cut, with a structure that resembles branching at
$\beta\approx 2^{2}$ and almost merging again, while another peak appears at
$\beta\approx 2^{8}$.
To better understand these transitions we analyze the mean largest eigenvalues
$\lambda_{1}$ of the adjacency matrix, sizes of the largest non-empty $k$-core
and maximum degree as functions of the genericity for network sizes from
$N=2^{3}$ to $N=2^{9}$. The results are shown in Fig. 5. Both ensembles show a
phase transition in the largest eigenvalue between a low-$\lambda_{1}$ state
and a high-$\lambda_{1}$ state. This transition is located at $\beta\approx
2^{2}$ for the $R_{S}$-ensemble and at $\beta\approx 2^{10}$ for the
$R_{WL}$-ensemble.
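The three indicators can be computed directly from a graph, for example as below (a sketch using networkx and numpy; here the "maximum k-core" is taken as the largest core index $k$ with a non-empty $k$-core, and the function name is ours):

```python
import networkx as nx
import numpy as np


def order_parameters(G):
    """Transition indicators as in Fig. 5: largest adjacency eigenvalue,
    largest non-empty k-core index, and maximum degree."""
    A = nx.to_numpy_array(G)
    lambda_1 = float(np.max(np.linalg.eigvalsh(A)))  # symmetric matrix
    k_core_max = max(nx.core_number(G).values())
    k_max = max(dict(G.degree()).values())
    return lambda_1, k_core_max, k_max
```

For a star with $n$ leaves, $\lambda_{1}=\sqrt{n}$ while the maximum core index stays at 1; for a clique $K_{n}$, $\lambda_{1}=n-1$ and the core index is $n-1$, which is why the eigenvalue alone cannot distinguish the hub-driven from the clique-driven transition.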
This transition is mirrored by the maximum k-core in the case of the
$R_{S}$-ensemble (see Fig. 5 c)). This indicates that here the formation of
the first dense region in the graph is responsible for the phase transition.
This is clearly not the case for the $R_{WL}$-ensemble, where we find no
consistent transition genericity for the largest non-empty k-core, but a shift
in its rise depending on the network size as shown in Fig. 5 f).
Instead, the largest eigenvalue transition in the $R_{WL}$-ensemble is
mirrored by the maximum degree in the network, as displayed in Fig. 5 e). As
expected from the degree distributions shown above, the changes of the maximum
degree in the $R_{S}$-ensemble hint at two transitions, one at $\beta\approx
2^{2}$, in which the first dense region forms and one at $\beta\approx 2^{8}$,
at which the central hub forms. The phase transition giving birth to the first
dense region, found at $\beta\approx 2^{2}$, can be interpreted as similar to
[24], where a first-order phase transition was found analytically for
Strauss’s model of clustering [11].
These results show that discrete features suddenly emerge at certain
genericities. The transitions are qualitatively visible in the degree
distribution, appear clearly in the graphical representations of the networks,
and can be quantified in various network measures: the largest eigenvalue is a
good first indicator, while the maximum k-core and maximum degree resolve more
detail. These phase transitions and the emergence of hubs and cliques are a
driving element in the increase of the small-world-ness property.
## V Discussion and conclusion
Here we introduced the concept of relative canonical network ensembles of
arbitrary network properties, as a means to study what the most generic
networks with these properties look like. These ensembles are amenable to
Metropolis-Hastings and MCMC methods, providing a simple and straightforward
(if potentially computationally expensive) way of sampling from non-trivial
network ensembles defined through network measures of practical interest.
To challenge the method we studied two properties traditionally expected to
characterize small-world networks. Surprisingly, we found that generic
networks with a high small-world-ness index $S$ in the sense of [17] do not
resemble classic small-world networks. Instead we find that as $S$ increases,
the most generic networks with high $S$ contain first cliques and then hubs,
neither of which occur in the WS-ensemble. An alternative property, defined as
the product of wiring length and shortest path length, fared better: here,
too, hubs arise for the least generic networks, but the system appears to
resemble small-world networks more closely. This indicates that at least for
some networks, spatial embedding may actually be the defining feature, from
which high small-world-ness arises as a secondary effect.
The transition from highly generic to very specific ensembles in both cases is
characterized by well defined phase transitions. These are visible in a number
of network measures. Notably, in both cases we observe a rise of the largest
eigenvalue of the adjacency matrix. This, however, corresponds to the growth
of the first dense region in the $R_{S}$-ensemble and to the emergence of an
inter-regional hub in the $R_{WL}$-ensemble.
It is somewhat surprising that there are still new things to learn about
properties thought to characterize small-world networks. The fact that our
perspective of relative canonical network ensembles could uncover novel
features is a promising sign for the study of properties of greater practical
interest. In
companion papers we are considering epidemic thresholds, and the vulnerability
to cascading failures. More generally this method is of great interest
wherever we want to understand and design topologies that fulfill certain
functions, rather than describe empirical networks.
### Code and Data availability
All code and data used in this work will be made available at
https://doi.org/10.5281/zenodo.4462634.
## Appendix A Relative Entropy
The minimization of the relative entropy has an information theoretic
interpretation. Given a distribution $q$, the asymptotic probability to obtain
a sample that looks like $p$ goes as the exponential of the negative relative
entropy $D(p||q)$. This result of Chernoff [25] is known as Stein’s Lemma (for
a modern account phrased in terms of relative entropy see e.g. [26] Theorem
4.12) and forms the mathematical basis for the interpretation of the relative
entropy as a measure of distinguishability of probability distributions. Our
ensembles thus have an information theoretic interpretation as being the
ensembles that are hardest to distinguish from the generic ensemble $q$. In
particular we do not presuppose that real network formation processes maximize
entropy subject to some constraints, and do not interpret the resulting
ensembles as modeling real networks that have the property $R$.
For completeness, we recall here the standard argument that the relative
entropy, or the Kullback-Leibler divergence, is minimized by the exponential
ensemble. We are looking for
$\displaystyle p^{*}$
$\displaystyle=\operatorname*{arg\,min}_{\begin{subarray}{c}p\\\ \langle
R\rangle=R^{*}\end{subarray}}D(p||q)$
$\displaystyle=\operatorname*{arg\,min}_{\begin{subarray}{c}p\\\ \langle
R\rangle=R^{*}\end{subarray}}\sum_{i}p_{i}\ln\left(\frac{p_{i}}{q_{i}}\right)$
(5)
First, note that this formula diverges to positive infinity if $p$ has support
outside the support of $q$. We thus only consider $p$ whose support is
contained in that of $q$. Then, by introducing Lagrange multipliers for the
expectation value of $R$ as well as for the normalization condition on the
distribution $p$ we can rewrite the constrained minimization above as a free
minimization:
$\displaystyle p^{*}(\beta_{n},\beta_{R})$
$\displaystyle=\operatorname*{arg\,min}_{p}\sum_{i}p_{i}\ln\left(\frac{p_{i}}{q_{i}}\right)~{}+$
$\displaystyle\phantom{=}+\beta_{n}\left(\sum_{i}p_{i}-1\right)+\beta_{R}\left(\sum_{i}p_{i}R_{i}-R^{*}\right)$
(6) $\displaystyle R^{*}$
$\displaystyle=\sum_{i}R_{i}p_{i}^{*}(\beta_{n},\beta_{R})$ $\displaystyle 1$
$\displaystyle=\sum_{i}p_{i}^{*}(\beta_{n},\beta_{R})$
Now the variation in the direction $p_{j}$ produces the following condition:
$\displaystyle 0$ $\displaystyle=\frac{\partial}{\partial
p_{j}}\left[\sum_{i}p_{i}(\ln(p_{i})-\ln(q_{i}))~{}+\right.$
$\displaystyle\phantom{=}+\left.\beta_{n}\left(\sum_{i}p_{i}-1\right)+\beta_{R}\left(\sum_{i}p_{i}R_{i}-R^{*}\right)\right]$
$\displaystyle=\ln(p_{j})-\ln(q_{j})+1+\beta_{n}+\beta_{R}R_{j}$ (7)
From which we can conclude
$\displaystyle p^{*}_{j}$
$\displaystyle=\exp(\ln(q_{j})-1-\beta_{n}-\beta_{R}R_{j})$
$\displaystyle=\frac{1}{Z}\,e^{-\beta_{R}R_{j}}\,q_{j}$ (8)
with $Z=e^{1+\beta_{n}}=\sum_{i}e^{-\beta_{R}R_{i}}\,q_{i}$ fixed by the
condition $\sum_{i}p^{*}_{i}=1$ and $\beta_{R}$ determined implicitly by the
condition $R^{*}=\sum_{i}R_{i}p_{i}^{*}$.
Note that
$\displaystyle\frac{\partial}{\partial\beta_{R}}\langle R\rangle$
$\displaystyle=-\frac{1}{Z}\sum_{i}R_{i}^{2}e^{-\beta_{R}R_{i}}\,q_{i}$
$\displaystyle\phantom{=}\;-\frac{\partial
Z}{\partial\beta_{R}}\frac{1}{Z^{2}}\sum_{i}R_{i}e^{-\beta_{R}R_{i}}\,q_{i}$
$\displaystyle=-\langle R^{2}\rangle+\langle R\rangle^{2}$
$\displaystyle=-\mathrm{Var}(R)\leq 0\;.$ (9)
Further, for $\beta_{R}=-\infty$ we have the distribution peaked completely on
the global maxima: $\langle R\rangle=R_{\text{max}}$ and for
$\beta_{R}=+\infty$ we have the minima instead $\langle
R\rangle=R_{\text{min}}$. For $\beta_{R}=0$ we have exactly the expectation
value of $R$ in the generic background ensemble $q$.
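The two central claims of this appendix — that the exponentially tilted ensemble of Eq. (8) minimizes $D(p\|q)$ at fixed $\langle R\rangle$, and the variance identity of Eq. (9) — can be checked numerically on a small toy system (a sketch using numpy; all values are randomly generated for illustration):

```python
import numpy as np


def tilted(q, R, beta):
    """Exponential ensemble p* = e^{-beta R} q / Z from Eq. (8)."""
    w = np.exp(-beta * R) * q
    return w / w.sum()


def kl(p, q):
    """Relative entropy D(p||q) for strictly positive p, q."""
    return float(np.sum(p * np.log(p / q)))


rng = np.random.default_rng(0)
q = rng.random(6)
q /= q.sum()
R = rng.random(6)
beta = 1.7
p_star = tilted(q, R, beta)

# Feasible perturbations keep both sum(p) and <R> fixed, i.e. they lie in
# the null space of the constraint matrix A = [1; R].
A = np.vstack([np.ones(6), R])
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[2:]  # rows orthogonal to both constraint rows
for _ in range(100):
    p = p_star + 1e-4 * (rng.normal(size=4) @ null_basis)
    if np.any(p <= 0):
        continue
    assert kl(p, q) >= kl(p_star, q)  # p_star is the constrained minimum

# Check d<R>/dbeta = -Var(R), Eq. (9), by central finite differences.
eps = 1e-6
dR = (tilted(q, R, beta + eps) @ R - tilted(q, R, beta - eps) @ R) / (2 * eps)
var = float(p_star @ R**2 - (p_star @ R) ** 2)
```

Every feasible perturbation of $p^{*}$ increases $D(p\|q)$, and the finite-difference slope of $\langle R\rangle$ matches $-\mathrm{Var}(R)$ to numerical precision.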
## References
* Schultz _et al._ [2014] P. Schultz, J. Heitzig, and J. Kurths, A random growth model for power grids and other spatially embedded infrastructure networks, The European Physical Journal Special Topics 223, 2593 (2014).
* Opitz and Shavlik [1996] D. W. Opitz and J. W. Shavlik, Generating accurate and diverse members of a neural-network ensemble, in _Advances in neural information processing systems_ (1996) pp. 535–541.
* Snoeijer _et al._ [2004] J. H. Snoeijer, T. J. Vlugt, M. van Hecke, and W. van Saarloos, Force network ensemble: a new approach to static granular matter, Physical review letters 92, 054302 (2004).
* Dorogovtsev _et al._ [2000] S. N. Dorogovtsev, J. F. F. Mendes, and A. N. Samukhin, Structure of growing networks with preferential linking, Physical review letters 85, 4633 (2000).
* Molkenthin and Timme [2016] N. Molkenthin and M. Timme, Scaling laws in spatial network formation, Physical review letters 117, 168301 (2016).
* Molkenthin _et al._ [2018] N. Molkenthin, M. Schröder, and M. Timme, Adhesion-induced discontinuous transitions and classifying social networks, Physical review letters 121, 138301 (2018).
* Watts and Strogatz [1998] D. J. Watts and S. H. Strogatz, Collective dynamics of ‘small-world’ networks, Nature 393, 440 (1998).
* Barabási and Albert [1999] A.-L. Barabási and R. Albert, Emergence of scaling in random networks, Science 286, 509 (1999).
* Holland and Leinhardt [1981] P. W. Holland and S. Leinhardt, An exponential family of probability distributions for directed graphs, Journal of the American Statistical Association 76, 33 (1981).
* Frank and Strauss [1986] O. Frank and D. Strauss, Markov graphs, Journal of the American Statistical Association 81, 832 (1986).
* Strauss [1986] D. Strauss, On a general class of models for interaction, SIAM review 28, 513 (1986).
* Newman [2003] M. E. Newman, The structure and function of complex networks, SIAM review 45, 167 (2003).
* Cimini _et al._ [2019] G. Cimini, T. Squartini, F. Saracco, D. Garlaschelli, A. Gabrielli, and G. Caldarelli, The statistical physics of real-world networks, Nature Reviews Physics 1, 58 (2019).
* Park and Newman [2004] J. Park and M. E. Newman, Statistical mechanics of networks, Physical Review E 70, 066117 (2004).
* Bianconi [2007] G. Bianconi, The entropy of randomized network ensembles, EPL (Europhysics Letters) 81, 28005 (2007).
* Bianconi [2009] G. Bianconi, Entropy of network ensembles, Physical Review E 79, 036114 (2009).
* Humphries and Gurney [2008] M. D. Humphries and K. Gurney, Network ‘small-world-ness’: A quantitative method for determining canonical network equivalence, PLOS ONE 3, 1 (2008).
* Mathias and Gopal [2001] N. Mathias and V. R. Gopal, Small worlds: how and why, Physical Review E 63, 021117 (2001).
* Metropolis _et al._ [1953] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, Equation of state calculations by fast computing machines, Journal of Chemical Physics 21, 1087 (1953).
* Hastings [1970] W. K. Hastings, Monte Carlo sampling methods using Markov chains and their applications, Biometrika 57, 97 (1970).
* Iba _et al._ [2014] Y. Iba, N. Saito, and A. Kitajima, Multicanonical MCMC for sampling rare events: an illustrative review, Annals of the Institute of Statistical Mathematics 66, 611 (2014).
* Erdős and Rényi [1959] P. Erdős and A. Rényi, On random graphs i, Publ. math. debrecen 6, 18 (1959).
* Kirkpatrick _et al._ [1983] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Optimization by simulated annealing, Science 220, 671 (1983).
* Park and Newman [2005] J. Park and M. Newman, Solution for the properties of a clustered network, Physical Review E 72, 026136 (2005).
* Chernoff _et al._ [1952] H. Chernoff _et al._ , A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations, The Annals of Mathematical Statistics 23, 493 (1952).
* Jaksic [2018] V. Jaksic, Lectures on entropy. part i (2018), arXiv:1806.07249 [math-ph] .
# Machine Learning for the Detection and Identification of Internet of Things
(IoT) Devices: A Survey
Yongxin Liu, Jian Wang, Jianqiang Li, Shuteng Niu, and Houbing Song

Yongxin Liu and Jianqiang Li are with the College of Computer Science and Software Engineering, Shenzhen University, China. Yongxin Liu, Jian Wang, Houbing Song and Shuteng Niu are with Embry-Riddle Aeronautical University, Daytona Beach, FL 32114 USA. Corresponding authors: Jianqiang Li, Houbing Song.

Manuscript received October 18, 2020; revised XXX.
###### Abstract
The Internet of Things (IoT) is becoming an indispensable part of everyday
life, enabling a variety of emerging services and applications. However, the
presence of rogue IoT devices has exposed the IoT to untold risks with severe
consequences. The first step in securing the IoT is detecting rogue IoT
devices and identifying legitimate ones. Conventional approaches use
cryptographic mechanisms to authenticate and verify legitimate devices’
identities. However, cryptographic protocols are not available in many
systems. Meanwhile, these methods are less effective when legitimate devices
can be exploited or encryption keys are disclosed. Therefore, non-
cryptographic IoT device identification and rogue device detection offer
effective solutions for securing existing systems and provide additional
protection to systems with cryptographic protocols. Non-cryptographic
approaches require more effort and are not yet adequately investigated. In
this paper, we provide a comprehensive survey on machine learning technologies
for the identification of IoT devices along with the detection of compromised
or falsified ones from the viewpoint of passive surveillance agents or network
operators. We classify the IoT device identification and detection into four
categories: device-specific pattern recognition, Deep Learning enabled device
identification, unsupervised device identification, and abnormal device
detection. Meanwhile, we discuss various ML-related enabling technologies for
this purpose. These enabling technologies include learning algorithms, feature
engineering on network traffic traces and wireless signals, continual
learning, and abnormality detection.
###### Index Terms:
Internet of Things, Security, Physical-layer Security, Malicious Transmitter
Identification, Radiometric signature, Non-cryptographic identification,
Physical-layer identification.
## I Introduction
As a rapidly evolving field, the Internet of Things (IoT) involves the
interconnection and interaction of smart objects, i.e., IoT devices with
embedded sensors, onboard data processing capabilities, and means of
communication, to provide automated services that would otherwise not be
possible [1]. Trillions of network-connected IoT devices were expected to
emerge in the global network around 2020 [2]. The IoT is becoming a pervasive
part of everyday life, enabling a variety of emerging services and
applications in cities and communities [3], including health [4],
transportation [5], energy/utilities, and other areas.
analytics enables the move from the IoT to real-time control [6, 7, 8, 9].
However, the IoT is subject to threats stemming from increased connectivity
[10, 11]. For example, rogue IoT devices, defined as devices claiming a
falsified identity or compromised legitimate devices, have exposed the IoT to
untold risks with severe consequences. Rogue IoT devices could conduct various
attacks: forging the identity of trusted entities to access sensitive
resources, hijacking legitimate devices to participate in distributed denial
of service (DDoS) attacks [11], etc. The problem of rogue devices becomes
even more hazardous in wirelessly connected IoT, as the network traffic is
easier to intercept, falsify, and broadcast broadly. Hence, from the
perspective of network operators, the first step in securing the IoT from
risks due to rogue devices is identifying known (or unknown) devices and
detecting compromised ones. This survey defines the term Device Detection and
Identification to cover two perspectives: a) identity verification of known
devices, and b) detection of falsified or compromised devices.
Conventional cryptographic mechanisms use message authentication codes, digital
signatures, challenge-response sessions, etc. to authenticate legitimate
peers or verify the identities of message senders. These methods make it
computationally infeasible for malicious actors to forge legitimate
identities. Even though cryptographic mechanisms are effective as long as
critical keys are securely protected, security requirements may not be fully
satisfied in pervasively distributed IoT. Reports have shown that it is
possible to use reverse engineering to access encryption keys or conduct
further exploitations [12, 13, 14, 15, 16]. Moreover, it is impossible to
install cryptographic protocols on the huge number of insecure systems or
devices in a short time; some of them have already become part of critical
infrastructures [17, 18, 19, 20, 21, 22]. Finally, cryptographic approaches
become less effective in dealing with hijacked devices. Therefore, as a
supplement to existing cryptographic mechanisms, non-cryptographic Device
Identification with Rogue Device Detection functions are needed to secure the
IoT ecosystem, especially from the perspective of network operators and
cybersecurity surveillance agents.
Figure 1: Overview of ML for the Detection and Identification of Rogue IoT
Devices
Non-cryptographic device identification and rogue device detection have
emerged as essential requirements in safety-critical IoT [23, 24, 25].
Compared with cryptographic approaches, non-cryptographic approaches aim to
identify known devices and detect rogue devices by exploiting device-specific
signal patterns or behavior characteristics [26]. More importantly, non-
cryptographic approaches do not require modifications to existing systems that
cannot be upgraded easily, e.g., ADS-B [27] and AIS [28].
Non-cryptographic device identification and detection are still challenging.
Firstly, the flexible deployment scenarios and diverse specifications of
devices make it challenging to provide a general guideline to derive
distinctive features from signals or network traffic. Moreover, even though
machine learning (ML) and Deep Learning (DL) have the potential to
automatically discover distinctive latent features for accurate device
identification, state-of-the-art algorithms require intensive modifications to
be utilized in IoT [29]. Therefore, this domain has not yet been thoroughly
investigated, which motivated us to conduct a comprehensive survey that
summarizes existing works and anticipates the future development of this
domain from the perspective of machine learning.
The scope of this paper and related surveys are compared in Table I. In
general, existing surveys focus on presenting broad overviews of threats and
countermeasures in IoT. In this paper, we focus on a more specific point by
providing a comprehensive survey of machine learning for the detection and
identification of devices in IoT using passively collected traffic traces and
wireless signals, which are easily accessible to network operators and
surveillance agents. Figure 1 presents an overview of ML for the detection and
identification of IoT devices with relations between key concepts in Figure 2.
We classify the IoT device identification and detection into four categories:
device-specific pattern recognition, Deep Learning enabled device
identification, unsupervised device identification, and abnormal device
detection. We identify various ML-related enabling technologies and tools for
this purpose, including statistical learning, feature engineering, digital
signal processing, and deep learning. These tools include continual learning,
unsupervised learning, and anomaly detection.
TABLE I: A comparison with existing surveys

Surveys | Year | FD | DL | DT | UD | RD
---|---|---|---|---|---|---
[30] | 2020 | $\bullet$ | $\bullet$ | | | $\bullet$
[31] | 2019 | $\bullet$ | | $\bullet$ | | $\bullet$
[32] | 2017 | $\bullet$ | $\bullet$ | $\bullet$ | |
[33] | 2012 | $\bullet$ | | | | $\bullet$
[34] | 2010 | $\bullet$ | | | $\bullet$ | $\bullet$
This paper | 2021 | $\bullet$ | $\bullet$ | $\bullet$ | $\bullet$ | $\bullet$

* FD: Feature-based specific device identification; DL: Deep Learning enabled specific device identification; DT: Device type identification; UD: Unsupervised device identification; RD: Rogue device detection.
Figure 2: Key concepts in this survey.
The remainder of this paper is structured as follows. Section II presents a
general threat model and attack chain of rogue devices in IoT. In Section III,
we review device type identification (Section III-A) and statistical learning
on device-specific feature identification (Section III-B), including
conventional radiometric signature and statistical learning. In Section III-C
we review state-of-the-art Deep Learning (DL) based methods for device
identification with a focus on emerging issues such as continual learning,
abnormality detection, hyperparameter, and architecture search. A novel
emerging approach, unsupervised device detection, is reviewed in Section
III-D. In Section IV, we present methodologies to detect compromised wireless
devices using anomaly detection algorithms, which is complementary to device-
specific identification. Section V pinpoints the challenges and future
research directions with discussions on enabling technologies. Section VI
concludes this paper.
Figure 3: Attack chain in the IoT.

TABLE II: Comparison of cryptographic and non-cryptographic countermeasures

Methods | Principles | Advantages | Challenges
---|---|---|---
Cryptographic | Use shared secrets to make the decryption of sensitive information and the forging of identities computationally expensive. | $\bullet~{}$Device independent. $\bullet~{}$Protects confidentiality and can verify identity. | $\bullet~{}$Disclosure of secret keys. $\bullet~{}$Re-distribution of secret keys. $\bullet~{}$Needs special adaptation to existing systems.
Non-cryptographic | Extract and verify device-specific features from received messages to assure that messages are from known sources. | $\bullet~{}$Device-specific. $\bullet~{}$Can identify hijacked devices with abnormal behaviors. $\bullet~{}$Compatible with existing IoT. | $\bullet~{}$Computationally expensive. $\bullet~{}$Identity disclosure.
## II Threat model of rogue devices in IoT
This section briefly reviews the threat model of rogue devices along with
countermeasures in IoT. We analyze the attack chain and identify the essential
requirements of IoT device detection and identification: verifying legitimate
devices’ identities, detecting unknown or falsified devices, and detecting
compromised (hijacked) devices with abnormal behaviors.
The cyberinfrastructure of IoT allows sharing information and collaborating
among devices with different capacities and vulnerabilities. On the one hand,
this scheme cultivates a large open system with low entry restrictions. On the
other hand, adversaries can conduct rogue activities with great convenience
[35]. Generally, the attack modes of adversaries in IoT are twofold: passive
attacks and proactive attacks. In a passive attack, adversaries do not cause
damage or performance degradation for a long time; instead, they passively
analyze devices’ communication and activity patterns, providing road maps for
proactive attacks in the future. If passive attackers are spies secretly and
quietly gathering intelligence, proactive attackers do whatever possible to
degrade performance or exploit devices to conduct malicious activities.
Figure 4: Identity spoofing attacks.
In practical attacks, proactive and passive attacks are combined. A typical
attack chain against IoT systems is shown in Figure 3, with a more specific
demonstration of an identity spoofing attack depicted in Figure 4. We divide
the whole attack chain into five stages, as follows:
1.
Penetration: In this stage, the rogue IoT devices try to eavesdrop on
communication channels or attain the control privileges of vulnerable peers
for further actions. Research in [36] shows that using ARP (Address Resolution
Protocol) spoofing, the malicious can easily observe ongoing traffic generated
by connected IoT devices from more than 20 manufacturers. Nowadays, it is
still challenging to develop software stacks with assured security [37].
2.
Spying: In this stage, the attacker observes ongoing activities by
exploiting penetrated devices as its agents. As shown in [36], more than 50% of
tested popular smart home IoT devices contain at least one vulnerable port.
3.
Data analytics: The attacker analyzes the behaviors and evaluates the
vulnerabilities of the IoT from multiple perspectives. An example in [38]
reveals that even if encryption mechanisms are employed, an attacker can still
extract sensitive information, such as the manufacturer and device
functionality.
4.
Planning: In this stage, the adversaries perform strategic planning and wait
for the best time to minimize their risk while maximizing the rewards.
5.
Attack: In this stage, prevalent attacks are in action.
Among these stages, passive and proactive attacks are combined in the
penetration stage. From the perspective of network operators or cybersecurity
surveillance agents, if the adversaries can be prevented from successfully
impersonating legitimate devices in the first stage (penetration), or hijacked
devices can be identified in the second stage (spying), the whole attack chain
can be broken.
Various countermeasures, both cryptographic and non-cryptographic, can be
applied to secure IoT systems through device identification and detection. A
brief comparison is presented in Table II. Cryptographic methods are widely
used in computer networks and telecommunication systems, but special
modifications are needed to deploy cryptographic protocols to existing systems
that lack them, such as ADS-B and AIS. Non-cryptographic methods require
higher computational capacities to derive device-specific fingerprints, but
they are transparently compatible with existing systems.
## III Learning-Enabled Device Identification in IoT
This section reviews methods to recognize devices’ identities and types in
IoT. Most of them are based on network traffic and wireless signal pattern
recognition. We first review device type identification methods, which are
widely used in identifying commercial IoT devices. We then discuss and compare
corresponding signal feature-based device recognition approaches. In
particular, we discuss Deep Learning for device identification and its
emerging issues extensively. Finally, we review unsupervised device
identification and its open issues.
### III-A Device type identification
Even though device types are not directly related to devices’ identities, they
still provide essential information for network management and risk control. A
brief diagram of typical IoT devices is in Figure 5, and comparisons of their
Physical Layer, Data Link Layer as well as aggregated data transmission
characteristics are presented in [39], [40] and [41], respectively. As in
Figure 5, WiFi is pervasively utilized in smart homes while smart cities
prefer reliable cellular networks. Device type identification is frequently
performed at the network, transport, and application layers and implemented
in Software Defined Network (SDN) controllers or software routers [42, 43,
44]. Device types reveal functionalities and activity profiles. A taxonomy of
features for device type identification is presented in Figure 6.
Figure 5: Typical IoT devices and protocols.
As in Figure 6, remote service is a popular attack surface to disclose the
device type or even identity. The reason is that the IoT devices communicate
with remote service providers through the REST API [45]. Even though sensitive
data are encrypted, some unique strings in their Web requests can still be
exploited to infer device types. The authors in [46] show that, using only port numbers, domain names, and cipher suites, a Naive Bayesian classifier can reach high accuracy in classifying 28 commercial IoT devices.
Figure 6: Features for device type identification. Figure 7: General pipeline
of Software-Defined wireless signal identification.
Even though modeling devices’ remote service requests provides promising
results in device type identification, these solutions may not work if they
interact with anonymous service providers. For alleviation, their activity and
data flow patterns can be utilized. The authors in [47] report that their Random Forest classifier reaches an accuracy of 95% in identifying 20 IoT devices
when features of activities, network data flows, and remote service requests
are utilized simultaneously. In [48], devices’ types are identified based on
the periodicity of activities. The authors first use the Discrete Fourier
Transform (DFT) and discrete autocorrelation to find the dominant periods in
protocol-specific activities. They then use statistical and stability metrics
to model devices’ behaviors. Finally, the Bayesian-optimized k-Nearest
Neighbor algorithm is employed for classification. In [49] and [50], the
authors extract the protocols and network flow properties within a sliding
window to generate fingerprints of devices. They use one-versus-rest
classifiers to identify commercial devices. In [51], the authors first provide
a Random Forest classifier using TCP/IP stream features. They incorporate
confidence thresholds and averaged decisions within a sliding window to
identify known or unknown device types. Similar research is presented in [51]
and [52]. In [53], the authors also present that network traffic, device
types, and their operation states (boot, active, and idle) can be inferred
simultaneously.
To automate the process of deriving useful features, in [52], the authors
propose a Genetic Algorithm (GA) enabled feature selector. Furthermore, a Deep
Neural Network approach, which does not require complicated feature
engineering, is presented in [54].
An extra benefit of modeling device activity patterns is increasing the
chances of identifying behavioral variations. Such benefit directly
contributes to the detection of compromised devices or network attacks, which
will be discussed in section IV.
Deriving devices’ benign flow characteristics is nontrivial; therefore, the IETF standard Manufacturer Usage Description (MUD) profile [55] is proposed as
an initial static profile to describe IoT device network behavior and support
the creation of security policies. A collection of MUD profiles from 30 commercial devices is provided in [56]. The MUD profiles can be used to either verify
device types or detect devices under attack or being compromised [57].
However, one issue of using the static profiles is that longer observation
time is needed to make decisions.
Device identifiers based on network flow and activity patterns may encounter
emerging issues. First, IoT devices are becoming smart devices where new
extensions can be installed, and firmware upgrades can happen periodically,
thereby changing activity patterns or network flow statistics, as suggested in
[58, 59] and [46]. Second, device types do not necessarily correlate to their
identities. Therefore, behavior-independent specific device identification is
of great significance.
### III-B Feature-based statistical learning for specific device
identification
IoT device identification can be formalized as a classification problem. In
this section, we first introduce the generic pipeline for signal reception and
then focus on feature-based statistical learning approaches for specific
device identification from raw signals and their open issues.
#### III-B1 Generic wireless signal reception pipeline for device
identification
Software-Defined Radios (SDR) are multipurpose front-ends to deal with various
modulation and baseband encoding schemes in wireless device identification.
Fundamental technologies in SDR are quadrature modulation and demodulation
[60].
Generally, the wireless signal of an IoT device can be represented as
$S(t)=\boldsymbol{I(t)}\cdot\cos[2\pi(f_{c}+f^{\prime})t]+\boldsymbol{Q(t)}\cdot\sin[2\pi(f_{c}+f^{\prime})t]$,
where $\boldsymbol{I(t)}$ and $\boldsymbol{Q(t)}$ denote the in-phase and quadrature components, respectively. The key idea is to use $\boldsymbol{I(t)}$ and $\boldsymbol{Q(t)}$ to represent different modulation schemes.
A brief quadrature demodulation pipeline is given in Figure 7. We denote the
reconstructed version of $\boldsymbol{I(t)}$ and $\boldsymbol{Q(t)}$ as
$\boldsymbol{\hat{I}(t)}$ and $\boldsymbol{\hat{Q}(t)}$, respectively. We can
derive the signal's instantaneous amplitude, phase, and frequency as
$\hat{m}(t)=\sqrt{\hat{I}^{2}(t)+\hat{Q}^{2}(t)}$,
$\hat{\phi}(t)=\tan^{-1}(\hat{Q}(t)/\hat{I}(t))$, and
$\hat{f}(t)=\partial\hat{\phi}(t)/\partial t$.
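As a concrete illustration of these formulas, the numpy sketch below recovers amplitude, phase, and instantaneous frequency from demodulated I/Q samples (`arctan2` stands in for the quoted $\tan^{-1}$ so quadrant information is preserved; the derivative is approximated by a first-order difference):

```python
import numpy as np

def iq_features(i, q, fs):
    """Instantaneous amplitude, phase, and frequency from demodulated
    I/Q samples. fs is the sample rate in Hz."""
    i = np.asarray(i, float)
    q = np.asarray(q, float)
    amplitude = np.hypot(i, q)                     # sqrt(I^2 + Q^2)
    phase = np.unwrap(np.arctan2(q, i))            # continuous phase
    frequency = np.diff(phase) * fs / (2 * np.pi)  # Hz
    return amplitude, phase, frequency

# A pure tone at 50 Hz sampled at 1 kHz: constant amplitude and an
# instantaneous frequency of 50 Hz.
t = np.arange(1000) / 1000.0
i_sig = np.cos(2 * np.pi * 50 * t)
q_sig = np.sin(2 * np.pi * 50 * t)
amp, ph, freq = iq_features(i_sig, q_sig, fs=1000.0)
```

Deviations of these three quantities from their nominal values are exactly the side channels that the fingerprinting methods below exploit.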
Figure 8: Physical Layer device-specific features.
Manufacturing imperfections and channel characteristics can cause
$\hat{m}(t)$, $\hat{\phi}(t)$ and $\hat{f}(t)$ to deviate from their original forms, providing side channels to identify wireless devices. A brief overview
of features for IoT device identity verification using wireless signals in
Physical Layer is given in Figure 8. The features for wireless device
identification are also named Radiometric Fingerprints.
#### III-B2 Hardware imperfections
Heterogeneous imperfections exist in IoT devices’ wireless frontends. These
imperfections do not necessarily degrade the communication performance but
influence signal waveforms, thereby providing a side channel to identify
different devices. Such features enclosed in transmitted signals are named
Physical Unclonable Features (PUF) [61, 62], since regular users cannot clone or forge the characteristics of these manufacturing imperfections.
##### Error / noise patterns
The errors between expected rational signals and actual received signals can
disclose useful device-specific information. In [63] and [64], the authors use
phase errors of Phase Lock Loop (PLL) in transmitters as a distinctive
feature. Their simulations indicate promising results even with low SNR
(Signal-to-Noise Ratio). In [65], the authors use the instantaneous
differences between received I/Q signals and theoretically expected templates
to construct error vectors. They then combine error vectors’ statistics and
time-frequency domain statistics to synthesize the fingerprints of RF
transmitters.
In [66, 67, 68], the authors use the differential constellation trace figure
(DCTF), carrier frequency offset, phase offset, and I/Q offset to identify
different Zigbee devices. They develop a low-overhead classifier, which learns
how to adjust feature weights under different SNRs. The behaviors of their
classifiers are similar to k-NN algorithms. Authors in [69] use odd harmonics
of center frequencies as fingerprints for RFID transmitters. Their
classification (k-NN) test on 300 RFID cards shows zero error.
Figure 9: A brief dataflow of RF-DNA.
##### Persistent patterns
Persistent pattern recognition assumes that the statistics of consecutive subregions in received signals can disclose identity-related information. A typical method is RF-DNA (Distinct Native Attribute) fingerprinting [70, 71].
The basic idea is to use the statistical metrics of signals’ consecutive
subregions to form device fingerprints. A brief dataflow of RF-DNA is given in
Figure 9. In [72, 73, 74], the authors capture the preamble of WPAN (Wireless
Personal Area Network) signals and extract the variance, skewness, and
kurtosis of signals’ subregions (bins) as signatures. Research in [75] also
shows that the idea of RF-DNA can be applied in the Fourier transform of
messages’ signals.
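A minimal numpy sketch of the RF-DNA idea follows. The bin count, the use of amplitude only, and the helper name are our assumptions; real RF-DNA fingerprints also cover phase and frequency responses, as in [72, 73, 74]:

```python
import numpy as np

def rf_dna_fingerprint(samples, n_bins=8):
    """RF-DNA-style fingerprint: split a signal region (e.g. a preamble's
    instantaneous amplitude) into equal bins and collect the variance,
    skewness, and excess kurtosis of each bin."""
    x = np.asarray(samples, float)
    feats = []
    for b in np.array_split(x, n_bins):
        mu, sigma = b.mean(), b.std()
        z = (b - mu) / sigma if sigma > 0 else np.zeros_like(b)
        feats += [sigma**2, (z**3).mean(), (z**4).mean() - 3.0]
    return np.array(feats)  # 3 statistics per bin

fp = rf_dna_fingerprint(np.random.default_rng(0).normal(size=256))
```

The resulting fixed-length vector (here 8 bins × 3 statistics = 24 values) is what a downstream classifier consumes.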
Figure 10: Transient periods during wireless communication.
From the perspective of the Random Process, a sequence of signal symbols can
be regarded as a sample from some multivariate distributions. The parameters
of such distribution represent the unique fingerprints of devices’ wireless
transmitters. With this idea, the authors in [76] use the Central Limit Theorem and propose a repetitive stacking symbol-based algorithm. They model the preamble of each packet as a sample from a specific multivariate distribution.
They extract statistics from preambles of ZigBee devices and employ
Mahalanobis Distance and nearest neighbor algorithm to identify 50 Zigbee
devices.
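The matching step described for [76] reduces to a nearest-profile search under Mahalanobis distance. In the sketch below, the two device profiles are synthetic stand-ins for statistics extracted from ZigBee preambles, and the helper name is ours:

```python
import numpy as np

def mahalanobis_nearest(x, profiles):
    """Assign a preamble-statistics vector x to the device whose stored
    (mean, covariance) profile has the smallest Mahalanobis distance."""
    best, best_d = None, np.inf
    for dev, (mu, cov) in profiles.items():
        diff = np.asarray(x, float) - mu
        d = float(diff @ np.linalg.inv(cov) @ diff)  # squared distance
        if d < best_d:
            best, best_d = dev, d
    return best

profiles = {
    "dev_a": (np.array([0.0, 0.0]), np.eye(2)),
    "dev_b": (np.array([3.0, 3.0]), np.eye(2)),
}
label = mahalanobis_nearest([2.8, 3.1], profiles)  # → "dev_b"
```

With unit covariances this degenerates to Euclidean nearest-neighbor matching; the covariance term matters when feature dimensions are correlated or differently scaled.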
Regional statistic vectors from complete messages can unintentionally embed
protocol-dependent features and result in unreliable device identification
models. Therefore, if we only extract persistent features from the protocol-
agnostic part of signals (e.g., preambles), the resulting device
identification model will only focus on signal features rather than
communication protocols.
##### Transient patterns
Compared with persistent statistics of signals’ subregions, transient patterns are more difficult to forge over wireless channels [77]. An example of
transient periods in wireless communication is given in Figure 10. Transient
periods are commonly seen at the beginning and end of wireless packet
transmission. In [78], the authors employ the nonlinear in-band distortion and
spectral regrowth of the received signals (potentially caused by power
amplifiers of transmitters) to distinguish the masquerading device. In [79],
the authors derive the energy spectrum from transmitters’ turn-on transient
amplitude envelopes to classify eight different devices. Their experiment
shows that frequency-domain features are more reliable than time-domain
features. In [80] and [81], the time-domain statistical metrics and wavelet
features of transmitters’ turn-on transient signals are transformed into
devices’ RF fingerprints. Finally, it is notable that the authors in [82]
capture the turn-on transient signal of Bluetooth devices and extract 13 time-frequency domain features (via the Hilbert-Huang spectrum) to construct devices’
fingerprints. Their experiments show that well-designed fingerprints provide
promising results even without using complicated machine learning models.
The merit of transient features is that an adversary cannot forge such nonlinear features unless they can accurately forge the coupled characteristics of the pair-wise wireless channels and RF front-ends between victims and surveillance agents. On the other hand, transient features can be influenced by the locations of devices, since different locations result in varying RF channel characteristics (e.g., transient responses). As a consequence, machine learning algorithms can produce accurate but unreliable device identification results by exploiting RF channel characteristics rather than learning device-specific features.
TABLE III: Influential factors for feature-based specific device identification

| Influential factors¹ | Persistent feature recognition | Transient feature recognition | Channel status recognition | Cross-domain recognition | Hybrid approaches | Countermeasures | Reference |
|---|---|---|---|---|---|---|---|
| Stationary noise | Median (exc. noise pattern) | Median | Low | Median | Low | Denoise filtering; data augmentation | [83, 76] |
| Rx imperfections | Median | Median | Median | Median | Median | Adaptive filtering; calibrations | [84, 85] |
| Co-channel devices | High | High | Low | High | High | MIMO receivers; blind signal separation | [86, 87] |
| Channel features | Median | Median | High | Low | Low | Adaptive filtering | [84] |
| Baseband patterns | Median (exc. noise pattern) | Low | Median | Low | Low | Message-independent features | [88] |

¹ High: solutions include hardware modifications; Median: solutions are software-based but require high-capacity processors; Low: software-based optimal solutions are available and compatible with regular processors.
#### III-B3 Channel state features
From the perspective of signal propagation, the nonlinear characteristics of
radio channels can cause recognizable distortions to received signals. Those
distortions can become unique profiles of transmitters. Therefore, the channel
state recognition approach's basic idea is to: a) mathematically or statistically describe the nonlinear characteristics of the propagation channel between receivers and transmitters, and b) estimate whether a wireless device's signals' distortions comply with specific channel characteristics.
Typical work is presented in [89], where the authors use a kernel regression method to model the nonlinear pattern of signals' propagation channels. Their basic idea is that the combination of frequency offsets and special channel characteristics may not be forged easily and, therefore, can be used as a profile for wireless devices.
Channel state features are commonly seen in Orthogonal Frequency-Division Multiplexing (OFDM) modulated communication systems. In the OFDM and MIMO
schemes of wireless communication, the channel state information (CSI) [90,
91] can provide rich information on the time-varying characteristics of radio
channels. IEEE 802.11 receivers estimate CSI during the reception of each
packet’s preamble. For each packet, its CSI is expressed as a complex-valued
$T_{n}$ by $R_{m}$ by $K$ matrix $\boldsymbol{H}$ along with a noise component $\boldsymbol{n}\sim\mathcal{CN}(\boldsymbol{0},\boldsymbol{S})$, where $T_{n}$ denotes the number of transmitters' antennas, $R_{m}$ denotes the number of receivers' antennas, $K$ denotes the number of sub-carriers, and $\boldsymbol{n}$ denotes the complex-valued Gaussian random vector with mean zero and covariance matrix $\boldsymbol{S}$. Each complex-valued element in $\boldsymbol{H}$
provides instantaneous phase and amplitude response of antenna-wise channels
at specific subcarriers.
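In code, a per-packet CSI matrix is simply a complex tensor of shape $T_{n}\times R_{m}\times K$. The sketch below uses illustrative dimensions and random values; the averaging of amplitudes into a single fingerprint vector is our simplification of what CSI-based identifiers consume:

```python
import numpy as np

# One packet's CSI: T transmit antennas x R receive antennas x K
# subcarriers, complex-valued. Values are random placeholders for what
# an 802.11 receiver would estimate from the preamble.
T, R, K = 2, 2, 30
rng = np.random.default_rng(0)
H = rng.normal(size=(T, R, K)) + 1j * rng.normal(size=(T, R, K))

amplitude = np.abs(H)    # per-subcarrier amplitude response
phase = np.angle(H)      # per-subcarrier phase response

# A simple fingerprint candidate: amplitude averaged over antenna pairs,
# leaving one value per subcarrier.
fingerprint = amplitude.mean(axis=(0, 1))  # shape (K,)
```

Each element of `H` gives the instantaneous phase and amplitude response of one antenna-pair channel at one subcarrier, exactly as described above.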
Channel state information directly reveals the phase, frequency, and amplitude
responses of radio channels and has been utilized to identify fixed-position
wireless transmitters. Specifically, CSI is affected by propagation obstacles,
signal reflections, and even baseband data patterns [91]. In [92], a CSI based
device identification scheme is proposed. The authors use averaged CSI to
construct an SVM based profile for each legitimate device to prevent and
identify spoofing attacks. They compare CSI and RSS based approaches and
demonstrate the superiority of CSI. Another merit of their solution is
utilizing the two-cluster k-means algorithm to detect the presence of rogue
IoT transmitters when constructing legitimate devices’ profiles. Similar
research is presented in [93], where legitimate devices’ CSI from multiple locations is collected to train a more robust device identification model. Comparably,
in [94], the authors use the information from CSI to model the radiometric
signatures of obstacles within the signals’ propagation path. They provide an
iterative differentiation approach to derive the weights and factor out the
multipath components in received signals. The weights of reflection signals
can be used as a location-based signature of transmitters.
Beyond wireless channel characteristics, CSI can also disclose RF transmitter-specific information for persistent feature-based device identification. Related research includes the following:
* •
Carrier Frequency Offsets (CFOs): In [95], the authors propose to derive
Carrier Frequency Offsets (CFOs) from CSI as devices’ fingerprints. Their
primitive hypothesis is that the constant CFOs can cause a linearly varying
trend in instantaneous phases in received signals. Specifically, the authors
first use phase measurements on specifically selected subcarriers to eliminate
phase shifts at the receiver of the device identification oracle. They then
use the differentiated phases from adjacent packets to eliminate the phase
shifts introduced by the relative positions of transmitters. Finally, they
derive the carriers’ frequency offsets by the slope (relative to the time
intervals of adjacent packets) of the purified instantaneous phase.
* •
Phase errors: Authors in [96] use the summation of selected subcarriers' instant phases to extract the rational arrival phases of subcarriers. They then estimate and subtract the rational arrival phases and receivers' insertion phase lag to derive the phase error caused by transmitters' internal imperfections. A drawback of their approach is that they need to estimate the Time of Flight (ToF) of received packets.
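The CFO observation above reduces to fitting a line to the unwrapped instantaneous phase. A minimal sketch under that assumption (it omits the receiver- and position-induced phase-cancellation steps of [95], and the function name is ours):

```python
import numpy as np

def estimate_cfo(iq, fs):
    """Estimate a carrier frequency offset (Hz) from the slope of the
    unwrapped instantaneous phase of complex baseband samples."""
    phase = np.unwrap(np.angle(np.asarray(iq)))
    t = np.arange(len(phase)) / fs
    slope = np.polyfit(t, phase, 1)[0]   # radians per second
    return slope / (2 * np.pi)           # Hz

# A residual tone of 120 Hz after downconversion, sampled at 10 kHz:
fs = 10_000.0
t = np.arange(2000) / fs
iq = np.exp(2j * np.pi * 120.0 * t)
cfo = estimate_cfo(iq, fs)               # ≈ 120.0 Hz
```

A constant CFO yields exactly the linearly varying phase trend that [95] exploits; differencing phases across adjacent packets then removes position-dependent offsets.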
A summary of device identification based on channel state features is in
Figure 11. The drawbacks of channel state features are apparent. For one thing, research shows that channel state features can be influenced even by the motions of obstacles in the subcarriers' propagation path [97, 98, 99]. For another, the channel characteristics are environment-oriented. Therefore, using channel-state-feature-based device identifiers in indoor or mobile environments with human activities is still challenging [100, 101].
Figure 11: A brief overview of channel state recognition and related
approaches.
It should be noted that a great majority of CSI-enabled studies depend on a limited set of Network Interface Cards (NICs) for data collection, owing to the limitations of the CSI Tools [90]. However, the authors in [102] provide a new way: they use generic SDR transceivers to extract the Long
Training Sequences (LTS) in the preambles of IEEE 802.11n pilot carriers to
identify more than 50 Network Interface Cards. They show that by exploiting
the frequency offsets and comparing LTS frequency responses of adjacent pilot
carriers, they can even derive a location-agnostic device identification
model.
#### III-B4 Cross domain features
Many researchers convert signals to other domains in which devices are more distinguishable. A straightforward way is to remap signals into the time-frequency domain [103]. In [104], the authors use the STFT (Short-Time Fourier
Transform) with the SVM algorithm to identify four different transceivers.
This research is comparable to [105], where Discrete Gabor Transform (Gaussian
windowed STFT) is employed.
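A bare-bones version of the time-frequency remapping used in [104] can be written with a framed FFT. The frame/hop sizes, the Hann window, and the time-averaging into a fixed-length vector are illustrative choices; [104] pairs such features with an SVM:

```python
import numpy as np

def stft_features(x, frame=64, hop=32):
    """Compact time-frequency fingerprint: magnitude spectrogram of a
    windowed short-time Fourier transform, averaged over time."""
    x = np.asarray(x, float)
    window = np.hanning(frame)
    frames = [x[s:s + frame] * window
              for s in range(0, len(x) - frame + 1, hop)]
    spectrogram = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return spectrogram.mean(axis=0)   # mean magnitude per frequency bin

vec = stft_features(np.sin(2 * np.pi * 0.1 * np.arange(512)))
```

The resulting vector (here 33 bins for a 64-sample frame) is a fixed-length input suitable for any of the classifiers discussed in this section.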
Other domains can also be utilized as long as they can model devices' signal patterns. In [106, 107], the authors utilize the wavelet transform together with classifiers (SVM and a Probabilistic Neural Network) to construct a device identifier; compared with [104], they further use the PCA algorithm to reduce the redundancy of the extracted data. In [108], the authors provide a normal frequency-based method along with PCA and SVM to distinguish devices in the GSM band. They compare their method with the Hilbert-Huang Transform based method in [109]. Similar work presented in [110] shows that Variational Mode Decomposition theoretically provides even better performance than the conventional EMD method, even for relaying scenarios.
It is notable that the bispectrum is also widely utilized. In [111], the energy entropy and color moments of the bispectrum, combined with a Support Vector Machine (SVM), are employed in simulations of device identification. Their results indicate that higher-order statistics can theoretically improve identification performance under low SNR. However, other authors [112] claim that, compared to the bispectrum, the squared integral bispectra (SIB) are more robust to noise while providing the same amount of information as the bispectrum. In [113], the authors employ singular values of the Axial Integrated Wigner Bispectrum (AIWB) feature to separate spoofing signals from genuine signals in global navigation satellite systems (GNSS).
TABLE IV: A brief comparison of classifiers in deployable wireless transmitter identification systems

| Approach | Application overhead | Continual learning | Abnormality detection |
|---|---|---|---|
| k-NN | Depends on the size of the fingerprint library | Natively supported | Clustering or statistical models |
| SVM | Depends on the number of feature dimensions | Knowledge replay | One-class SVM |
| Random forest | Depends on the number of decision trees | Knowledge replay | Isolation forest |
| Neural network | Depends on structural complexity | Section III-C2 | Section III-C2 |
#### III-B5 Hybrid methods
A large number of device-specific features have been discovered along with
different signal transform techniques. Hybrid methods aim to find the
optimized combinations of features from different domains to derive robust
identification models. In [114], the authors extract the signals’ energy
distribution from wavelet coefficients and the marginal spectrum [115], and use k-NN and SVM to identify eight devices. Their tests show that the k-NN requires higher SNR than the SVM. In [116], the authors apply Intrinsic Time-Scale
Decomposition (ITD) [117] to input signals. They extract factual, bispectrum, and energy features from all subchannels of the ITD decomposition sub-signals; their test with an SVM shows that more features can significantly improve the device identifier's performance.
Although integrating signals’ features from multiple domains can provide
promising device identification results, the redundant information within the
integrated features requires complicated models and considerable processing
capacity. Therefore, automatic feature selection is introduced and becomes an
indispensable part. Research in [72] demonstrates that properly selected
features, particularly from the F-test and MLF methods, enable a significant
(80%) reduction of redundancy. In [118], the authors capture the pilot tones
of the OFDM signals and extract a series of features relative to the rational
signal. They use an information-theoretic approach to select useful features
for device identification. In [119], four types of features, scramble seed
similarity, carrier frequency offset, sampling clock offset, and transient
pattern, are suggested for the physical layer fingerprints of WiFi devices.
The authors also claim that by combining all these features, their device
identification accuracy reaches 95%.
As the comparison of device-specific feature-based approaches in Table III shows, hybrid approaches have superior performance under various influential factors, since the automatic feature selection methods can remove irrelevant information and provide an optimal combination of features. However, hybrid features could
bring side effects, especially in statistical learning algorithms: a) The
complicated combination of a large number of features can result in a highly
accurate identifier with its internal mechanism not interpretable. b) High
dimension features can potentially result in complicated models that are
computationally difficult to retrain for operational variations. We can make
better use of hybrid features in Deep Neural Networks, which will be discussed
in Section III-C.
#### III-B6 Open issues
TABLE V: Countermeasures to prevent learning from trivial features

| Reference | Methodology | Description | Challenges |
|---|---|---|---|
| [120] | Fragmenting | The raw I/Q signals are split into small fragments whose duration is shorter than that of the trivial parts, or only the preambles of packets are used. | Long-range dependent features will be destroyed after fragmenting. |
| [121] | Masking | One can directly mask or remove the trivial parts in raw signals. | The position and length of the masking bits or discontinuity can leak protocol information. |
| [122, 123] | Randomization | One can let transmitters send random contents. | One has to gain access to a large number of transmitters to train a reliable classifier. |
In general, the following issues need to be investigated in feature-based
statistical learning for specific device identification:
1. 1.
These methods require effort to manually extract features or high-order statistics, and the quality of handcrafted features dominates device identification performance. For example, the authors in [124] show that the combination of permutation entropy [125] and k-NN even surpasses the combination of bispectrum [126] and SVM in [111].
2. 2.
Experiments are conducted in controlled environments with a limited number (fewer than 30) of IoT devices. Therefore, publicly available datasets containing
signals from a larger number of IoT devices are needed to provide a reliable
benchmark. Currently, publicly available datasets for IoT device
identification from wireless signals are still limited. Some small datasets
are provided in [127, 128] and [129] while a larger dataset but with only
ADS-B signals is in [130].
3. 3.
There is no guarantee that a specific type of feature is time-invariant.
Therefore, this type of system should incorporate wireless channel estimation
approaches to identify real device-specific patterns.
4. 4.
A brief comparison of device-specific feature-based wireless device identification under influential factors is given in Table III; co-channel devices have the most significant impact across all solutions. Unfortunately, there is limited research on dealing with them.
5. 5.
A deployable wireless device identification system should have the capacity to
report unknown abnormalities and continually evolve and adapt to operational
variations. A comparison of frequently employed statistical learning
algorithms on continual learning and abnormality detection is in Table IV.
Among these algorithms, only k-NN provides intuitive and native supports for
continual learning and abnormality detection. However, k-NN is insufficient in
handling complicated features. Though SVM or Random Forest could handle more
complicated features, they lack the continual learning and abnormality
detection abilities and explainability.
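The contrast drawn in Table IV can be made concrete: for a k-NN identifier, continual learning is just appending fingerprints, and abnormality detection is a distance threshold on the nearest neighbor. A pure-numpy sketch (class name, 1-NN choice, and threshold value are illustrative):

```python
import numpy as np

class FingerprintKNN:
    """1-NN device identifier with a rejection threshold: enrolling a
    device is storing its fingerprints (continual learning), and a large
    nearest-neighbor distance flags an unknown transmitter."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.fps, self.labels = [], []

    def enroll(self, fingerprint, label):   # continual-learning step
        self.fps.append(np.asarray(fingerprint, float))
        self.labels.append(label)

    def identify(self, fingerprint):
        x = np.asarray(fingerprint, float)
        d = [np.linalg.norm(fp - x) for fp in self.fps]
        k = int(np.argmin(d))
        return self.labels[k] if d[k] <= self.threshold else "unknown"

knn = FingerprintKNN(threshold=0.5)
knn.enroll([0.0, 0.0], "dev_a")
knn.enroll([3.0, 3.0], "dev_b")
near = knn.identify([0.1, -0.1])    # close to dev_a's fingerprint
far = knn.identify([10.0, 10.0])    # far from every enrolled device
```

SVM or Random Forest counterparts would need retraining (knowledge replay) to enroll a new device, which is exactly the gap Table IV records.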
### III-C Deep Learning enabled specific device identification
The feature-based statistical learning approaches require manual selection of
useful transforms or features. In contrast, deep neural networks (DNN) can
incorporate existing features or directly deal with raw inputs and derive
latent distinctive features. Therefore, Deep Learning enabled device
identification mechanisms are increasingly investigated. A brief comparison of device-specific feature-based statistical learning and deep learning based approaches is presented in Table VII. In this section, we discuss typical
deep learning enabled wireless device identification solutions and then focus
on open issues that impede the application of deep learning in IoT device
identification.
Figure 12: Typical architecture of deep neural network classifiers
#### III-C1 Case studies and comparisons
A typical Deep Neural Network enabled classifier is depicted in Figure 12. Generally, it employs convolutional layers to extract latent features and uses fully connected dense layers to produce the final results. Deep Neural Networks with convolutional layers are also referred to as Convolutional Neural Networks (CNNs).
Deep neural networks can be seamlessly integrated with existing feature
engineering methods. In [122], the authors use the differential error between
reconstructed rational signals and received signals to train Deep Neural
Networks to distinguish Zigbee transceivers. In [131], the authors compare the
effects of short-time Fourier features and wavelet features for device
identification, and their results show that wavelet features can outperform
Fourier features. In [121], the authors extract the 1-D Regions of Interest
(ROIs) from 54 Zigbee devices’ preambles under different SNRs and then
resample signals within ROIs into various substreams with different sample
rates. Finally, the substreams are fed into a convolutional neural network for
identification. Similar ideas are proposed in [120, 132] and [133].
Compared with the conventional fully-connected neural network, convolutional
layers apply filters (a.k.a. kernels) with much fewer parameters to obtain
distinctive information. In [83], the authors propose a combined solution to
denoise signals and identify devices simultaneously using an autoencoder and a
CNN network. The authors use their encoder to automatically extract relevant
features from the received signals and use the derived features to train
another deep neural network for device identification. Similar methods are
presented in [134]. In [123], the authors provide an optimized Deep
Convolutional Neural Network approach to classify wireless devices in 2.4 GHz
channels and compare the performance with SVM and Logistic Regression. Their
results show that, even by using raw I/Q digital baseband signals, CNN can
achieve high accuracy and surpass the best performance of SVM and Logistic
Regression. In [127], neural networks were trained on raw I/Q samples using the open dataset from CorteXlab (https://wiki.cortexlab.fr/doku.php?id=tx-id).
Their results also show that CNN can achieve promising results even on raw I/Q
signals, but the movement of devices and the varying amplitudes will degrade
CNN’s performance.
An extensively discussed topic for Deep Learning based device identification
is preventing the network from learning only trivial features, such as
protocol identifiers, unique identifiers, etc. Generally, three types of
countermeasures are applied, and their comparison is provided in Table V.
Compared with feature-based device identification approaches, Deep Learning methods usually require a much larger dataset to initialize the network. To estimate how much training data is needed, in [135], CNN models are used to
classify different devices’ signals with controlled difficulty levels. The
classification accuracy of a fixed CNN network with different dataset sizes is
predicted using a power-law model and the Levenberg-Marquardt algorithm. Their
results show that the dataset size should be at least 10,000 to 30,000 times
the number of devices to be identified. However, this conclusion is only a
rough estimation.
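A sketch of the extrapolation idea behind [135]: fit a power law error ≈ a·N^(−b) to measured learning-curve points and extrapolate. [135] fits with the Levenberg-Marquardt algorithm; for noise-free points, the log-log linear fit below recovers the same parameters and keeps the example dependency-free (function name and the synthetic curve are ours):

```python
import numpy as np

def fit_power_law(n, err):
    """Fit err ≈ a * n**(-b) by least squares in log-log space."""
    slope, intercept = np.polyfit(np.log(n), np.log(err), 1)
    return np.exp(intercept), -slope

# Synthetic learning curve: classification error falling as n^-0.5
n = np.array([1e3, 3e3, 1e4, 3e4, 1e5])
a, b = fit_power_law(n, 2.0 * n ** -0.5)

# Extrapolate the expected error at one million training samples:
predicted = a * 1e6 ** -b
```

Real learning curves are noisy and often saturate, which is why [135] caution that such dataset-size estimates are only rough.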
New architectures in Deep Learning are emerging and can significantly
influence the performance of device identification systems. In [120], the
authors use Convolutional Deep Complex-valued Neural Network (CDCN) and
Recurrent Deep Complex-valued Neural Network [136] to address the device
identification problem. Their networks utilize fragments of raw I/Q symbols as
input, and their test is conducted on both WiFi and ADS-B datasets. Their
experiments show that the Complex-valued neural networks surpass regular real-
valued deep neural networks. In [137, 138], a zero-bias dense layer is
proposed. The authors show that their solution enables deep neural networks’
final decision stage to be interpretable. Their solution maintains equivalent
identification accuracy and outperforms regular DNN and one-class SVM in
detecting unknown devices.
TABLE VI: Methods for unknown device recognition

| Methods | Description | Complexity | Memory | Pros & Cons | Reference |
|---|---|---|---|---|---|
| GAN | Use the discriminator from a GAN model as an outlier detector. | High¹ | Depends on the final network | Can catch deep latent features; hard to design and train. | [132, 139] |
| Autoencoder | Train a deep autoencoder on known signals and use its reconstruction error to judge outliers. | High¹ | Depends on the final network | Can catch deep latent features; easier than a GAN to design and train. | [140, 141] |
| Statistic metrics | Measure the confidence of whether a signal or its fingerprint is generated by a given category. | Low | Low | Provides explainable results; accuracy depends on the fingerprinting method. | [142, 143, 144, 138] |
| Clustering | Perform clustering analysis to judge whether a fingerprint falls into an identical cluster as known ones. | Median² | Depends on the number of existing fingerprints | Provides explainable results; accuracy depends on the fingerprinting method. | [142, 145] |

¹ Needs to specify both the network architecture and hyperparameters.
² Needs to specify the clustering algorithm to use.
#### III-C2 Open issues in Deep Learning for IoT device identification
Deep Learning is becoming a promising technology in this domain. However, as in other domains, it encounters several challenges. Although research on IoT device identification rarely covers these issues, we briefly discuss their current states and solutions.
##### Hyperparameter searching
One critical problem in using deep neural networks is hyperparameter tuning. Hyperparameters such as the learning rate, mini-batch size, and dropout rate initialize and steer the training process, and they can significantly impact the performance of deep neural networks. For instance, in [146], the authors compare the performance of Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) networks for device identification on raw I/Q signals directly. Their results show that the CNN performs best, followed by the DNN and the LSTM. They point out that the hyperparameters, especially the network architectural parameters, strongly influence the upper bound of performance.
Obtaining optimized hyperparameters is computationally expensive. Several
strategies are proposed for efficient hyperparameter searching, such as grid
search, random search, prediction-based approaches, and evolutionary
algorithms. Their characteristics are as follows:
* •
Grid search: Grid search divides the whole parameter space into identical intervals and performs brute-force trials to find optimal parameter combinations. However, this strategy is inefficient since useless combinations of parameters cannot be pruned quickly.
* •
Random search: In random search, sample points are distributed uniformly in
search space. This strategy increases the variation and outperforms the grid
search when only a small number of parameters can impact the network
performance.
* •
Prediction-based: In prediction-based approaches, the algorithms first perform random trials to model the relation between network performance and the hyperparameters. The algorithms then perform new trials based on parameters that are more likely to yield better results, and this trial-model-predict paradigm is repeated [147]. A typical prediction strategy is the Bayesian optimization process [148], in which the algorithms model the target outcome space as Gaussian processes.
* •
Evolution based: In evolutionary algorithm based approaches, the heuristic
searches are performed as in other nonlinear optimization problems. In [149],
the authors utilize the Genetic Algorithm to find the optimal hyperparameters
of a neural network. Compared with prediction-based approaches, evolutionary algorithms provide a best guess via bio-inspired strategies, but there is no guarantee on their performance.
##### Neural network Architecture search
Network Architecture Search (NAS) is another challenging task in designing
neural networks. Network architecture defines the flow of tensors and could
significantly affect the complexity and performance of neural networks [150,
151]. At the current stage, most network architectures are specified manually or by trial and error.
Architecture searching algorithms are provided by several Automatic Machine
Learning (AutoML) platforms. A brief comparison of their functionality and
performance on different datasets is in [152]. A collection of recent
literature and open-source tools are given in [153] and [154] respectively.
These efforts can be classified into three categories: (i) network pruning
[155], (ii) progressively growing [156], and (iii) heuristic network
architecture search [157]. Their features are as follows:
* •
Network pruning: Network pruning algorithms use group sparsity regularizers
[158] to remove unimportant connections from a regularly trained network. Then
the pruned network will be retrained to fine-tune the weights of the remaining
connections [159, 160]. A key benefit of network pruning is that it can
greatly compress neural networks and make them suitable to deploy in low
capacity IoT devices.
* •
Progressively growing: This strategy grows a neural network architecture
during training. It is effective in simple networks with only one hidden layer
[161, 162]. More recent advances employ growing strategies to progressively
add nodes and layers to increase the network’s approximation ability [163,
164].
* •
Heuristic network search: In heuristic network search, the architecture of the Deep Neural Network (either block-wise [165] or element-wise [166]) is first represented in a high-dimensional space with billions of parameters. Next, heuristic search algorithms are applied to traverse this space to find optimal solutions. Examples are given in [167, 157] and [168], where the authors use the Genetic Algorithm to find candidate network structures. Notably, the Genetic Algorithm fits NAS problems well since it allows length-varying variables (genes) to encode the candidate solutions. An empirical example is the NeuroEvolution of Augmenting Topologies (NEAT) algorithm [167].
* •
Reinforcement Learning: Reinforcement learning (RL) has become a popular
strategy in NAS [169, 170, 171]. Its basic idea is to let a deep learning-
enabled agent explore network architectures’ representative space and use
validation accuracy or other metrics as rewards to adjust the agents’
solutions. Ideally, as the RL process proceeds, an agent finds an optimal search strategy and discovers novel architectures. Intuitively, evolutionary algorithms use a fixed strategy to discover the optimal architecture, while RL agents learn their own strategies and are better at avoiding bad solutions.
* •
Differentiable space search: The aforementioned NAS strategies use discrete spaces to encode the architecture of neural networks, which are not differentiable and lack efficiency. Therefore, differentiable representations of neural network architectures have been proposed, on which efficient off-the-shelf optimization algorithms can operate. Typical solutions are given in [172, 173], where the DARTS (Differentiable ARchiTecture Search) algorithm is presented. The authors use the Softmax function to represent discrete selections in a numerically continuous domain and then use gradient descent to explore the search space. Similar work with an enhanced stochastic adaptive search strategy is presented in [174]. Block-wise representations of the neural network, together with differentiable search spaces, are bringing NAS into practice.
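The continuous relaxation at the heart of this family of methods can be illustrated in a few lines: the discrete choice among candidate operations on an edge of the computation graph is replaced by a Softmax-weighted mixture governed by learnable architecture parameters. The operations and values below are toy stand-ins, not the actual DARTS search space:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Illustrative candidate operations on one edge of the computation graph.
ops = [
    lambda x: x,                 # identity / skip connection
    lambda x: np.maximum(x, 0),  # ReLU (stand-in for a conv block)
    lambda x: np.zeros_like(x),  # the "none" operation
]

alpha = np.array([0.2, 1.5, -0.5])  # learnable architecture parameters
x = np.array([-1.0, 0.5, 2.0])

# Mixed output: a Softmax-weighted sum over all candidate operations,
# which makes the architecture choice differentiable in alpha.
weights = softmax(alpha)
mixed = sum(w * op(x) for w, op in zip(weights, ops))

# After the search, the edge is discretized to the highest-weighted op.
chosen = int(np.argmax(weights))
print(weights, mixed, chosen)
```

Because `mixed` is differentiable with respect to `alpha`, gradient descent can update the architecture parameters jointly with the network weights.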
Network architecture search has become an emerging topic for deep neural network research, with publicly available benchmarking tools in [175] and [176].
##### Openset recognition
A critical problem for learning based device identification is that classifiers only recognize pretrained devices' signals but cannot deal with novel ones that are not in the training dataset. In [145], the authors address
it as a semi-supervised learning problem. They first train a CNN model with
the last layer as a Softmax output on a collection of known devices. They then
remove the Softmax function and turn the neural network into a nonlinear
feature extractor. Finally, they use the DBSCAN algorithm to perform cluster
analysis on the remapped features of raw I/Q signals. Their results show that
such a semi-supervised learning method has the potential of detecting a
limited number of untrained devices. Similarly, in [177], the authors use an incremental learning approach to train neural networks to classify newly registered devices.
From the perspective of Artificial Intelligence, this issue falls under the Open Set Recognition [178, 179] and Abnormality Detection problems. A taxonomy of existing approaches is given in Table VI. In [132], the authors
use the Generative Adversarial Network (GAN) to generate highly realistic fake
signals. Then they exploit the discriminator network to distinguish whether an
input is from an abnormal source. In [142], the authors provide two methods to
deal with unknown devices: i) Reuse trained convolutional layers to transform
signals to feature vectors, and then use Mahalanobis distance to judge the
outliers. ii) Reuse pretrained convolutional layers to transform signals to
feature vectors, and then perform k-means (k = 2) clustering to group
outliers.
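The first of the two methods above, Mahalanobis-distance outlier scoring over extracted feature vectors, can be sketched as follows. The feature vectors here are synthetic stand-ins for the outputs of a pretrained convolutional feature extractor, and the 99% quantile threshold is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature vectors of known devices (stand-in for the output
# of a pretrained convolutional feature extractor).
known = rng.normal(loc=0.0, scale=1.0, size=(500, 4))

mu = known.mean(axis=0)
cov = np.cov(known, rowvar=False)
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    # Distance of x from the known-device distribution, accounting for
    # the covariance between feature dimensions.
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Threshold chosen from the empirical distances of the known samples.
dists = np.array([mahalanobis(x) for x in known])
threshold = np.quantile(dists, 0.99)

inlier = rng.normal(0.0, 1.0, size=4)
outlier = np.full(4, 8.0)  # far from the known distribution
print(mahalanobis(inlier), mahalanobis(outlier), threshold)
```

Samples whose distance exceeds the threshold are flagged as signals from unknown devices.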
Figure 13: Transfer learning and continual learning.
TABLE VII: Brief comparison of IoT device identification and detection methods
Device identification approaches | Technology branch | Feature requirement | Model explainability | Continuous learning | Anomaly detection | Challenges
---|---|---|---|---|---|---
Feature based device identification | Supervised learning | High1 | Strong (k-NN) / median (SVM) | Easy (k-NN) / median (PCA-SVM) | Low (k-NN) / Median (k-Means) | Cannot discover latent features.
Deep learning enabled device identification | Supervised learning | Low | Weak2 | Hard (EWC)3 | High (Autoencoder) / Median (clustering) | Learning from trivial features.
Unsupervised device detection and identification | Unsupervised learning | High1 | Strong | N/A | Low | Cannot be applied to complex environments.
* 1
Requires an extra feature engineering process.
* 2
Please refer to Explainable AI (XAI) in [180].
* 3
Please refer to Section III-C2.
##### Continual learning
In practical scenarios, deep neural networks have to evolve continuously to adapt to operational variations. Intuitively, a deep learning enabled IoT device identifier has to learn new devices' characteristics throughout its life cycle; such functionality is referred to as lifelong learning. Generally, there are two ways to achieve this goal: Transfer Learning (TL) and Continual Learning (CL). In transfer learning, neural networks are pre-trained in the lab and then fine-tuned for deployment using practical data [181]. In continual learning, neural networks are trained incrementally as new data arrive progressively [182]. Unlike transfer learning, continual learning does not allow neural networks to forget what they learned in earlier stages; the phenomenon in which a neural network forgets previously learned knowledge after training on new data is called Catastrophic Forgetting. Therefore, transfer learning is useful when deploying new signal identification systems, while continual learning is useful in regular software updates and maintenance, as depicted in Figure 13. The strategies to implement continual learning for deep neural networks are as follows:
* •
Knowledge replay: An intuitive solution for continual learning is to replay data from old tasks while training neural networks on new tasks. However, such a solution requires longer training time and larger memory consumption. Besides, one cannot judge how many old samples suffice to capture sufficient variation. Therefore, some studies employ data-generator networks to replay data from old tasks. For instance, in [183], Generative Adversarial Network (GAN) based scholar networks are proposed to generate old samples that are mixed into the current task's training data. In this way, the deep neural network can be trained on varied data without consuming large amounts of memory to retain old training data.
* •
Regularization: Initially, regularization is employed to prevent models from
overfitting by penalizing the magnitude of parameters [184]. In continual
learning, regularization is employed to prevent model parameters from changing
dramatically. In this way, the knowledge (represented by weights) learned from
the old tasks will be less likely to vanish when an old network is trained on
new tasks. There are two types of regularization strategies: global
regularization and local regularization. Global regularization penalizes the
whole network’s parameters from rapid change but impedes the network from
learning new tasks. In local regularization strategies, such as Elastic Weight Consolidation (EWC) [185], the algorithms identify important connections and protect them from changing dramatically, while non-critical connections are used to learn new tasks.
* •
Dynamic network expansion: Network expansion strategies lock the weights of
existing connections and supplement additional structures for new tasks. For
instance, the Dynamic Expanding Network (DEN) [186] algorithm first trains an existing network on a new dataset with regularization. The algorithm then compares the weights of each neuron to identify task-relevant units. Finally, critical neurons are duplicated to expand the network capacity adaptively.
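The local-regularization idea behind EWC can be sketched as a quadratic penalty weighted by the (diagonal) Fisher information, so that weights important for old tasks resist change while unimportant ones stay free to learn. The weight and Fisher values below are illustrative:

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    # EWC penalty: weights with high Fisher information (important for
    # old tasks) are anchored to their post-old-task values.
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

theta_old = np.array([1.0, -0.5, 2.0])   # weights after the old task
fisher    = np.array([10.0, 0.01, 5.0])  # estimated importance per weight

# Moving an important weight (index 0) by 0.5 is penalized far more
# heavily than moving an unimportant weight (index 1) by the same amount.
p_important   = ewc_penalty(theta_old + np.array([0.5, 0.0, 0.0]), theta_old, fisher)
p_unimportant = ewc_penalty(theta_old + np.array([0.0, 0.5, 0.0]), theta_old, fisher)
print(p_important, p_unimportant)
```

During training on the new task, this penalty is added to the task loss, steering updates toward the non-critical connections.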
Continual learning algorithms, as well as abnormality detection, together
provide great potential for deploying the neural networks in complex,
uncertain scenarios.
##### Summary
A brief comparison of Deep Learning and other statistical learning methods is given in Table IV. Compared with statistical learning, Deep Learning is not yet an ideal solution. However, its unified development pipeline and its capability of handling high-dimensional features make it easy to use. Furthermore, for practical issues such as continual learning and abnormality detection, deep learning performs better than the majority of statistical learning algorithms. In short, although deep learning is not theoretically novel, it earns its place by providing the most balanced merits.
### III-D Unsupervised device detection and identification
Figure 14: Unsupervised device detection and identification
Feature-based statistical learning and deep learning are supervised learning schemes in which classifiers must learn the features of legitimate devices in advance. Unsupervised device detection and identification are required in scenarios where the identities of devices are not directly available [187]. Generally, the methods in this topic fall into two categories: device behavior modeling and signal propagation pattern modeling. The essence of unsupervised device detection is to map devices' signals or activity profiles into latent representative spaces, where different devices are represented by separated clusters or probabilistic distributions. If behavior or signal propagation patterns are strictly correlated with specific devices, the extracted features can be used to verify device identities. Compared with supervised learning based device identification (see also Table VII):
* •
The training data does not directly indicate device specific information
(device identifier, device type, and etc.).
* •
The number of devices may not be known in advance.
As depicted in Figure 14, the workflow of unsupervised learning enabled device detection and identification consists of three steps: a) feature engineering on IoT devices' signals or behavior profiles, including feature selection and mapping; b) modeling the latent spaces, i.e., finding cluster centers, probabilistic distributions, related decision boundaries, or state transition models; c) matching input signals or behavior profiles to the most likely clusters, or reporting abnormalities.
#### III-D1 Device behavior modeling
Device behavior modeling extracts distinctive features from the input data and finds the number of different devices using unsupervised learning algorithms. However, the physical layer does not provide much information for device behavior modeling, so these methods are more frequently employed in the upper layers; related techniques include unsupervised feature engineering, clustering, and Software-Defined Networking [44].
In [188] and [189], data traffic attributes are obtained from flow-level network telemetry to recognize different IoT devices. The authors utilize Principal Component Analysis along with an adaptive one-class clustering algorithm to find the optimal representative components and cluster centers for each device, and they provide a conflict resolution mechanism to associate different device types with the corresponding cluster centers in the representative spaces. A similar approach using Deep Learning is presented in [190]. The authors use each device's TCP traffic to train an LSTM-enabled autoencoder that maps inputs into a representative feature space, then apply a clustering algorithm to divide the training samples into their natural clusters, and finally use probabilistic modeling to associate new data with known clusters for device identification. Unfortunately, their experiments show that unsupervised behavior identification may fail once multiple devices of an identical model are present.
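The clustering step of such pipelines can be illustrated with a small k-means run on synthetic latent features. The cited works use DBSCAN or adaptive one-class clustering; plain k-means is used here only as a simpler stand-in, and the two-dimensional features are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic latent features of two device types (stand-in for the output
# of an autoencoder or PCA mapping of traffic features).
dev_a = rng.normal([0.0, 0.0], 0.3, size=(100, 2))
dev_b = rng.normal([4.0, 4.0], 0.3, size=(100, 2))
X = np.vstack([dev_a, dev_b])

def kmeans(X, k, iters=50, seed=0):
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=2)
print(centers)
```

Well-separated latent clusters like these are exactly what the feature-mapping step is supposed to produce; when two devices of the same model emit overlapping features, the clusters merge and identification fails.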
#### III-D2 Signal propagation pattern modeling
TABLE VIII: Comparison of device localization methods in IoT
Methods | Requirements | Unit cost1 | Precision | Weakness | References
---|---|---|---|---|---
Signal propagation modeling | Multiple collaborative transmitters to construct a signal strength map. | Low | Depends on environmental features and refresh rate of respondent data. | $\bullet$ Depends highly on signal propagation models of a certain area. $\bullet$ Results do not directly indicate certain device types or identities. | [191]
Coherent TDoA2 | At least 4 coherent receivers; 5 receivers are recommended to linearize the computational process. | Median | Depends on the estimation of signals' Time of Arrival (ToA). | $\bullet$ Receivers need to be strictly synchronized. | [192]
Sync-free TDoA2 | At least 4 receivers that are able to communicate mutually. | Median | Same as coherent TDoA. | $\bullet$ Needs specific hardware with known response latency. | [193, 194]
* 1
Low: does not require extra RF receivers; Median: requires commercially available RF receivers with unit cost less than $1000; High: requires special hardware and specific processing stacks.
* 2
Requires multiple distributed receivers.
In the Physical Layer, signal propagation patterns provide information for device identification. On the one hand, if devices' positions are unique and known in advance, we may directly use wireless localization algorithms to check whether a received data packet comes from its claimed identity. Corresponding surveys on wireless device localization are given in [195, 196, 197], and we provide a brief comparison of the widely employed methods in Table VIII.
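The quantity underlying the TDoA methods in Table VIII can be sketched numerically: the difference in arrival times, scaled by the speed of light, equals the difference in emitter-to-receiver ranges, which is what the hyperbolic positioning equations are built on. The emitter and receiver positions below are arbitrary illustrative values:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

emitter = np.array([120.0, 80.0])
receivers = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])

# Arrival time at each receiver under ideal, noise-free propagation.
toa = np.linalg.norm(receivers - emitter, axis=1) / C

# TDoA measurements relative to the first receiver; multiplying by c
# converts time differences into range differences.
tdoa = toa[1:] - toa[0]
range_diff = C * tdoa

# Each range difference matches the geometric distance difference that
# defines one hyperbola of possible emitter positions.
geom_diff = (np.linalg.norm(receivers[1:] - emitter, axis=1)
             - np.linalg.norm(receivers[0] - emitter))
print(np.allclose(range_diff, geom_diff))
```

With four or more receivers, intersecting these hyperbolic constraints pins down the emitter position, which is why the table lists at least four receivers as a requirement.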
On the other hand, signal propagation modeling derives the path loss or attenuation patterns of received signals to detect different devices using unsupervised learning algorithms [34]. In [198], the authors exploit the signals' propagation path effect: they discover that the received signal strength from transmitters at the same location shows very similar varying trends. They convert signal strength metrics into time series, apply the Dynamic Time Warping algorithm to align and compare received signals, and finally use a clustering algorithm to identify signals from active transmitters. In [199], the authors assume that the received signals' Power Spectral Density coefficients of each device, in a specific time window, form a mixture model dominated by a weighted sum of Gaussian distributions and propagation path related Rayleigh distributions. They then use the Expectation-Maximization algorithm to estimate the composition (different transmitters) of the received signals.
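The Expectation-Maximization step can be sketched for a simplified mixture: two one-dimensional Gaussian components standing in for two transmitters. The cited work also includes Rayleigh components; both components are Gaussian here for brevity, and the observations are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic observations from two transmitters (simplified: both mixture
# components are Gaussian here, unlike the Gaussian/Rayleigh mix in [199]).
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 1.0, 200)])

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Initial guesses for mixture weights, means, and standard deviations.
w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    dens = np.stack([wi * normal_pdf(x, mi, si) for wi, mi, si in zip(w, mu, sigma)])
    resp = dens / dens.sum(axis=0)
    # M-step: re-estimate the parameters from responsibility-weighted data.
    nk = resp.sum(axis=1)
    w = nk / len(x)
    mu = (resp * x).sum(axis=1) / nk
    sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)

print(w, mu, sigma)
```

The fitted weights estimate the share of observations from each transmitter, which is the "composition" the cited work recovers from received signals.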
Signal propagation pattern modeling only provides an indirect assessment of whether specific signals come from devices at close locations or with similar propagation paths. Although the related methods are not widely utilized for commercial IoT devices owing to their complicated deployment environments, they provide a useful solution for preventing identity spoofing attacks in ADS-B systems [200, 201].
#### III-D3 Open issues
Unsupervised device identification provides a novel solution when the identities of devices are not directly available. In essence, unsupervised device identification and detection resemble automatic knowledge discovery, with the following issues to be addressed:
1. 1.
Feature engineering: Unsupervised device identification relies on feature
engineering since representative vectors of devices are supposed to form
distinctive clusters. Feature selection is still conducted manually, and there
is no guarantee on whether the outputs of the mapped feature can form
distinctive clusters.
2. 2.
Clustering: Clustering in the latent space can be challenging if the number of devices is unknown. Although one may use adaptive algorithms such as DBSCAN [202], OPTICS [203], or X-Means [204], properly configuring these algorithms for the latent space is still difficult; similar obstacles arise when setting hyperparameters in Deep Neural Networks (Section III-C2).
3. 3.
Decision boundaries: Even if we know the number of devices, we can still obtain clusters with uncertain shapes or densities, in which case decision boundaries between different devices are difficult to define, as indicated in [188].
4. 4.
Direct identity verification: Research on unsupervised device identification using behavior-independent and location-agnostic device specific features is still limited. Although unsupervised behavioral modeling has shown promising results in identifying different types of devices, whether these methods remain effective in distinguishing devices of the same model needs further investigation.
Therefore, we believe learning-based unsupervised device detection is promising and novel, but the topic needs substantial further investigation.
## IV Learning-Enabled Abnormal Device detection
Previous sections discussed methods to identify specific IoT devices. Beyond device identity verification, detecting compromised devices with abnormal behaviors is needed to alert on ongoing attacks and discover system vulnerabilities.
In general, abnormal device detection algorithms are deployed in network and
application layers. The detection algorithms first collect a certain amount of
normal operation data from devices to create reference models (or signatures).
Then IoT devices’ operational data are collected and compared with reference
models to judge whether significant deviations appear. Compared with device-
specific identification schemes, the key idea is abnormality detection with
both unsupervised learning approaches [205] and supervised learning with
confidence thresholds [206].
### IV-A Statistical Modeling
Statistical modeling aims to judge whether devices are in abnormal situations. In [207], Markov models are utilized to judge whether IEEE 802.11 devices are compromised by calculating the probabilities of the sequential transitions of their protocol state machines. In [208], the authors model the Electromagnetic (EM) harmonic peaks of medical IoT devices as probabilistic distributions to assess whether a specific device is under attack. They assume that when a device operates in an abnormal scenario (with rogue shellcode executing), its EM radiometric signals deviate from the known scenarios. However, statistical modeling requires manual selection of potentially informative features and manual definition of their importance.
To reduce the cost of modeling IoT devices’ normal behavior, Manufacturer
Usage Description (MUD) profile [55] is proposed. A collection of MUD profiles
for 30 commercial devices is provided in [56]. The MUD profiles enable
operators to know devices’ network flow patterns and dynamically monitor their
behavioral changes. Several open-source tools are provided to dynamically
generate, validate, and compare IoT devices' MUD profiles in [57]. Besides, the authors show that by comparing devices' run-time MUD profiles with the static ones, one can identify behavioral deviations or even device types. In [209], MUD profiles of devices are translated into flow-table rules, which also helps select appropriate features. The authors then use PCA to map each device's data traffic from sliding windows into its own representative one-class space, where X-Means [204] and Markov chains are used to partition the space and model the state transitions between cluster centers. Finally, an exception is triggered when either the mapped traffic pattern falls outside the boundaries or the state transitions do not comply with the reference model. Their experiments show the accurate detection of several types of volumetric attacks.
### IV-B Reconstruction Approaches
Reconstruction approaches aim to learn and reconstruct domain-specific patterns from devices' normal operation records. In other words, we need a model that "memorizes" the normal schemes of IoT devices by producing low reconstruction errors, while producing high reconstruction errors for unknown scenarios or behavioral deviations. This goal is generally achieved using deep autoencoders. Since the encoder removes a great amount of information, the decoder has to reconstruct the lost information from domain-specific memories. Consequently, once abnormal inputs are given to a well-trained autoencoder, its decoder cannot reconstruct such unknown inputs and yields a high abnormality score (reconstruction error). In [210, 211, 212], the authors utilize autoencoders to detect abnormal activities by modeling the data traffic and content of IoT devices. In [213], the authors show that, compared with other anomaly detection methods (one-class SVM [214], Isolation Forest [215], and Local Outlier Factor [216]), the deep autoencoder yields the best results in terms of reliability and accuracy.
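The reconstruction-error principle can be demonstrated with PCA acting as a linear stand-in for the deep autoencoder's compress-then-reconstruct mapping. The data are synthetic, and a deep autoencoder would learn a nonlinear version of the same map:

```python
import numpy as np

rng = np.random.default_rng(3)

# Normal operation records: 10-D samples lying mostly in a 3-D subspace
# (a stand-in for structured device-traffic features).
basis = rng.normal(size=(3, 10))
normal = rng.normal(size=(1000, 3)) @ basis + 0.05 * rng.normal(size=(1000, 10))

# "Training": fit a linear encoder/decoder via PCA on normal data only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:3]  # encoder: projection onto the top 3 components

def reconstruction_error(x):
    code = (x - mean) @ components.T   # encode (compress)
    recon = code @ components + mean   # decode (reconstruct)
    return float(np.sum((x - recon) ** 2))

# Threshold from the empirical errors of the normal training records.
threshold = np.quantile([reconstruction_error(x) for x in normal], 0.99)

anomaly = rng.normal(size=10) * 3.0  # input off the learned subspace
print(reconstruction_error(anomaly) > threshold)
```

Inputs that do not fit the learned structure cannot be reconstructed accurately and therefore score above the threshold, which is exactly the detection signal the cited works use.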
### IV-C Prediction Approaches
Prediction approaches utilize the temporal information in devices' operation records. Corresponding methods model each IoT device's operational data as multi-dimensional time series. Then, device-specific prediction models are trained using time series from normal schemes. When devices are hijacked for rogue activities, they no longer behave as predicted, causing the corresponding time-series predictors to output high prediction errors.
In [217], the authors employ a CNN based predictor to analyze abnormal behaviors in devices' network traffic. They show that predictors trained without abnormal data are sensitive (yield high prediction errors) to anomalies. Similar work is shown in [218], where the authors use an autoregression model to capture the normal varying trend of devices' traffic volumes. However, modeling a single variable may not be sufficient for complicated scenarios. Recent studies combine deep autoencoders with Long Short-Term Memory (LSTM) to derive abstract representations of complex scenarios and make predictions. In [219] and [220], a Deep Predictive Coding Neural Network [221] is used to predict consecutive frames of time-frequency video streams of wireless devices. The authors can even specify the type of attack using the spatial distribution of error pixels in the reconstructed frames.
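The prediction-error idea can be sketched with a first-order autoregressive model as a minimal stand-in for the cited predictors; the traffic-volume series, coefficient, and threshold rule below are all synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Normal traffic volume: a stable AR(1) process x_t = 0.8 x_{t-1} + noise.
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal(0.0, 0.1)

# Fit the AR(1) coefficient by least squares on normal data only.
phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
errors = x[1:] - phi * x[:-1]
threshold = 4.0 * errors.std()  # flag errors far beyond normal residuals

def is_anomalous(prev, current):
    # Large one-step prediction error => behavior deviates from the model.
    return abs(current - phi * prev) > threshold

print(is_anomalous(x[-1], 0.8 * x[-1]),   # expected continuation
      is_anomalous(x[-1], x[-1] + 5.0))   # sudden volume spike
```

A hijacked device generating a sudden traffic spike produces a prediction error far outside the residual distribution of normal operation, triggering the alert.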
### IV-D Open issues
Methods in this topic overlap with open set recognition in Deep Learning. We briefly list several open issues:
* •
Selection of behavioral features: Many studies use manual feature selection along with dimensionality reduction. A concern is that we cannot guarantee that the selected features will be sensitive to unknown future intrusions.
* •
Processing of abnormality metrics: Generally, intrusion detection approaches provide metrics corresponding to the degree of deviation. However, the output error metrics require posterior processing, e.g., selecting appropriate decision thresholds or aggregation window lengths, which balances the true-positive rate, the false-negative rate, and the response latency. One solution is to regard the corresponding parameters as hyperparameters and use cross-validation to tune them. The processing of error metrics remains a case-specific open issue.
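Treating the decision threshold as a tunable parameter can be sketched as follows: given validation anomaly scores for normal and attack traffic (synthetic here), pick the lowest threshold whose false-positive rate stays under a target bound, then check the resulting detection rate:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic validation anomaly scores: normal traffic vs. known attacks.
normal_scores = rng.normal(1.0, 0.3, 1000)
attack_scores = rng.normal(3.0, 0.5, 100)

def rates(threshold):
    tpr = float((attack_scores > threshold).mean())  # detection rate
    fpr = float((normal_scores > threshold).mean())  # false-alarm rate
    return tpr, fpr

# Pick the lowest threshold whose false-positive rate stays under 1%.
candidates = np.sort(normal_scores)
threshold = next(t for t in candidates if rates(t)[1] <= 0.01)
tpr, fpr = rates(threshold)
print(threshold, tpr, fpr)
```

Raising the target false-positive bound lowers the threshold and trades false alarms for detection rate, which is precisely the balance described above.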
## V Challenges and Future Research Directions
Our literature review has shown that device detection and identification
provide another layer of security features to IoT. However, the existing
solutions are still far from perfect. This section summarizes the existing
challenges of IoT device identification and detection as well as future
research directions.
### V-A Challenges in machine learning models
#### V-A1 Unknown device recognition
Existing works focus on the accuracy obtainable on a fixed dataset with all devices labeled, for which black-box models (e.g., Deep Learning and SVM) are commonly employed. In practical scenarios, these models can produce wrong answers when encountering novel devices, so additional mechanisms are needed to identify unknown signals. Although we can use the one-versus-rest technique to train a group of classifiers and avoid producing results on unknown devices, once new devices need to be registered, all classifiers in the group have to be retrained from scratch. Therefore, we need a solution that verifies the known devices. Meanwhile, we need to identify:
* •
Devices that are entirely outside the scope of the identification system.
* •
Unknown devices that are from identical manufacturers. Devices of the same
model from an identical manufacturer can share similar behavior patterns,
e.g., network flow characteristics. Such similarities can impede identity
verification in the network, transportation, or application layers.
The latter is more challenging and requires extracting behavior-independent
characteristics. We believe that without the capability of unknown device
recognition, these types of systems are still far from practice.
#### V-A2 Continual learning on new devices
Continual or incremental learning [182] in this domain emphasizes that an identification or detection model should be able to learn newly registered devices without retraining on a large dataset containing both new and old devices, because retaining the old dataset or deriving generators for knowledge replay is computationally expensive. This topic faces several challenges:
* •
Knowing the capacity or the maximum number of devices a model can memorize,
especially for the Black-Box models, e.g., the Deep Neural Networks.
* •
Expanding models dynamically as new devices are being added. Continual
learning is natively supported in Nearest Neighbor algorithms but is
challenging to implement in Deep Neural Networks.
#### V-A3 Deployment of device identification models
The deployment sites and the model provider's lab can differ dramatically, which can impair identification accuracy. This issue is more severe for device identification models using wireless signals, due to differences in wireless channel characteristics. For mitigation, extra work is needed:
* •
Deriving features that are independent of wireless channels or deployment sites. Research in [222, 223] suggests that neural networks may learn channel-specific rather than device-specific features.
* •
Occasional on-site fine-tuning, aided by continual or transfer learning, is needed to adapt to variations.
* •
Using data augmentation to simulate operational variations during lab
training, as suggested in [224].
* •
Using multi-domain training to derive multi-purpose feature extractors that
serve as building blocks for domain-specific device identification models.
Diverse training data from different domains could yield more robust feature
extractors.
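As an illustration of the augmentation point, the sketch below (the IQ-sample
representation and the distortion ranges are assumptions for illustration,
not taken from [224]) simulates channel variation on a complex baseband burst
with a random phase offset, a random gain, and additive noise.

```python
import numpy as np

def augment_iq(iq, rng):
    """Simulate operational channel variation on a complex IQ burst:
    random carrier phase offset, random path gain, and additive white
    Gaussian noise. Training on such augmented bursts pushes a feature
    extractor toward channel-independent device characteristics."""
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi))   # phase rotation
    gain = rng.uniform(0.5, 1.5)                     # large-scale fading
    noise = (rng.normal(0, 0.05, iq.shape)
             + 1j * rng.normal(0, 0.05, iq.shape))   # AWGN
    return gain * phase * iq + noise

rng = np.random.default_rng(1)
burst = np.exp(1j * 2 * np.pi * 0.1 * np.arange(128))   # clean tone burst
augmented = [augment_iq(burst, rng) for _ in range(8)]  # 8 channel realizations
print(len(augmented), augmented[0].shape)
```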
#### V-A4 Reliable benchmark datasets
IoT device identification is a pattern recognition problem on signals or
behaviors, so a common benchmark dataset is critical for reliably comparing
methods for device identification and rogue device detection. However, as of
this survey, we found only a limited number of datasets providing devices’
raw signals or network traffic traces in diverse scenarios. Some datasets are
provided in [127], [128], and [129]. For Physical Layer device
identification, a larger dataset containing raw signals from more than 100
airborne transponders is provided in [130], but it covers only the ADS-B
protocol. Another dataset containing traffic traces of more than 30 IoT
devices under volumetric-attack and benign scenarios is available in [56].
Such datasets are important because they enable fair comparisons between
algorithms. Additionally, models trained on large datasets can be efficiently
transferred to more specific applications [225, 226].
### V-B Challenges in feature engineering
#### V-B1 The robustness of features
Although many existing works claim the effectiveness of their discovered
features, very few evaluate the features’ robustness under varying scenarios
in terms of device mobility patterns, temperature, obstacles, etc. Feature
robustness has limited influence on device type identification in the network
or higher layers. However, in Physical Layer identification of wireless
devices, non-robust features can severely impair the final model. Currently,
a popular way to enforce robust feature discovery is data augmentation that
simulates various scenarios. Besides, in neural networks, regularization and
dropout methods can encourage models to make full use of the input data and
discover robust latent features, but their effectiveness needs further study.
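As a minimal illustration of the dropout method mentioned above (standard
inverted dropout in plain NumPy, not a technique specific to any cited work),
randomly zeroing hidden units during training prevents the model from leaning
on any single, possibly non-robust latent feature:

```python
import numpy as np

def dropout(activations, p, rng):
    """Inverted dropout: zero a random fraction p of hidden units during
    training and rescale the survivors by 1/(1-p), so the expected
    activation is unchanged while no single unit can be relied upon."""
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(2)
h = np.ones(1000)                     # hidden activations (toy example)
h_train = dropout(h, p=0.5, rng=rng)  # roughly half the units are zeroed
# The mean activation stays close to the original value of 1.0.
print(round(h_train.mean(), 2))
```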
#### V-B2 Making use of time-varying features
Some device detection and identification models make use of protocol-agnostic
and behavior-independent features from Physical Layer wireless signals.
However, in mobile environments, device movement produces time-varying
channel conditions that can impair identification methods based on static
channel characteristics. On the other hand, the varying patterns of channels,
signal strength, etc. also encode valuable features, e.g., locations and
distances, that describe IoT devices [227, 228]. Therefore, both discovering
time-invariant features and exploiting time-varying features remain open
issues in device identification and detection.
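One simple way to exploit time-varying patterns is to summarize them with
sliding-window statistics. The sketch below (the RSSI traces, window size,
and function name are hypothetical, for illustration only) turns an RSSI time
series into per-window mean/spread features, where the spread reflects
mobility-induced fading:

```python
import numpy as np

def windowed_rssi_features(rssi, win):
    """Convert a time-varying RSSI trace into per-window (mean, std)
    features: the mean tracks distance/location, while the std captures
    mobility-induced fading beyond any static channel snapshot."""
    n = len(rssi) // win
    windows = np.asarray(rssi[:n * win]).reshape(n, win)
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1)])

# A stationary device vs. a moving one (synthetic traces).
rng = np.random.default_rng(3)
static = -50 + rng.normal(0, 0.5, 200)                                  # stable channel
moving = -50 + 10 * np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 0.5, 200)

f_static = windowed_rssi_features(static, win=50)
f_moving = windowed_rssi_features(moving, win=50)
print(f_static[:, 1].mean() < f_moving[:, 1].mean())  # moving shows larger spread
```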
#### V-B3 Challenges from deep generative attackers
The use of GANs brings challenges to device identification, especially in the
Physical Layer. With GAN models, an attacker can train highly realistic
signal or data-packet generators that mimic a victim’s signal
characteristics. Research in [229] shows that GANs can increase the success
rate of spoofing attacks from less than 10% to approximately 80%. Fortunately,
a simple remedy is to use MIMO receivers and wireless localization methods to
estimate whether a transmitter is at a plausible location. Additionally,
controlled imperfections can be dynamically imprinted into devices’ signals
or data flows in a time-varying manner driven by a pseudorandom noise code
[223], which is cryptographically infeasible to predict.
### V-C Future research trends
#### V-C1 Deep identification models with explainable behaviors and assured
performance
The convenience of Deep Neural Networks makes them a versatile tool for
implementing IoT device identification and rogue device detection systems,
but more effort is needed, especially on model explainability and performance
assurance. On the one hand, we have limited knowledge of the decision
process, in particular how a deep neural network reaches its final decisions
and where its decision boundaries lie; without knowing these, there is no way
to assure its performance. On the other hand, research on the explainability
of Deep Neural Networks focuses on explaining model behavior but does not
provide guidelines for deriving assurable performance. Without
explainability, we cannot assure the performance of models.
#### V-C2 Unsupervised and continual learning enabled deep identification
model
With a large number of devices being connected to the IoT, device
identification and detection models need to continually adapt to operational
variations in real time. A solution could be the seamless integration of the
feature abstraction capability of deep neural networks with continual and
unsupervised learning. Knowledge of using deep neural networks to perform
unsupervised learning for IoT device identification and detection is
currently limited. Meanwhile, continual learning in deep models for device
identification and detection is also rarely investigated.
#### V-C3 Controlled imprinting of verifiable patterns
Compared with the passive non-cryptographic device identification and
detection methods in this survey, a proactive approach is to imprint
verifiable patterns into devices’ transmitted signals or activity patterns.
As suggested in [230], controlled imperfections can be utilized as verifiable
patterns, and embedding these patterns in signals could significantly enhance
the performance of device identification. However, a critical concern is how
to prevent adversaries from collecting and learning the imprinted identity
verification information. As suggested in [222], a possible solution is to
dynamically change the identity verification patterns according to a pair of
synchronized pseudorandom code generators, whose initialization keys are
shared only between the device and the corresponding device identifier.
Methods for imprinting verifiable patterns that are difficult to learn are
still limited.
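The synchronized-generator idea can be sketched as follows (a simplified
illustration, not the scheme of [222]: a SHA-256-based derivation stands in
for the pseudorandom code generators, and `prn_pattern`, the epoch, and the
key are all hypothetical). The device and its verifier derive the same
epoch-dependent imprint pattern from a shared key, while an adversary without
the key cannot predict it:

```python
import hashlib

def prn_pattern(key: bytes, epoch: int, nbits: int = 16) -> list:
    """Derive the time-varying imprint pattern for a given epoch from a
    shared secret key. Device and verifier run the same derivation, so
    the verifier can predict the pattern for the current epoch."""
    digest = hashlib.sha256(key + epoch.to_bytes(8, "big")).digest()
    bits = []
    for byte in digest:
        for i in range(8):
            bits.append((byte >> i) & 1)
    return bits[:nbits]

key = b"shared-secret"                  # provisioned on device and verifier
device_imprint = prn_pattern(key, epoch=42)
verifier_expected = prn_pattern(key, epoch=42)
print(device_imprint == verifier_expected)  # identity check passes
# An impostor without the key derives a different bit pattern with
# overwhelming probability, so its imprint fails verification.
```

Because the pattern changes every epoch, a recorded imprint is useless for
replay in later epochs.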
## VI Conclusion
This survey aims to provide a comprehensive overview of existing technologies
for IoT device detection and identification from passively collected network
traffic traces and wireless signal patterns. We discuss existing non-
cryptographic IoT device identification mechanisms from the perspective of
machine learning and pinpoint several key developing trends, such as
continual learning, abnormality detection, and deep unsupervised learning
with explainability. We find that a multi-perspective IoT wireless device
detection and identification framework is needed, and that future research on
rogue IoT device identification and detection must cope with challenges
beyond signal processing and borrow ideas from advanced topics in Artificial
Intelligence and Knowledge Discovery.
## Acknowledgment
This research was partially supported through Embry-Riddle Aeronautical
University’s Faculty Innovative Research in Science and Technology (FIRST)
Program and the National Science Foundation under Grant No. 1956193.
## References
* [1] S. Jeschke, C. Brecher, H. Song, and D. Rawat, _Industrial Internet of Things: Cybermanufacturing Systems_. Cham, Switzerland: Springer, 2017.
* [2] Y. Sun, H. Song, A. J. Jara, and R. Bie, “Internet of things and big data analytics for smart and connected communities,” _IEEE Access_ , vol. 4, pp. 766–773, 2016.
* [3] H. Song, R. Srinivasan, T. Sookoor, and S. Jeschke, _Smart Cities: Foundations, Principles and Applications_. Hoboken, NJ: Wiley, 2017.
* [4] Y. Zhang, L. Sun, H. Song, and X. Cao, “Ubiquitous wsn for healthcare: Recent advances and future prospects,” _IEEE Internet of Things Journal_ , vol. 1, no. 4, pp. 311–318, Aug 2014.
* [5] Y. Sun and H. Song, _Secure and Trustworthy Transportation Cyber-Physical Systems_. Singapore: Springer, 2017.
* [6] G. Dartmann, H. Song, and A. Schmeink, _Big Data Analytics for Cyber-Physical Systems: Machine Learning for the Internet of Things_. Elsevier, 2019.
* [7] Y. Jiang, Y. Liu, D. Liu, and H. Song, “Applying machine learning to aviation big data for flight delay prediction,” in _2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech)_. IEEE, 2020, pp. 665–672.
* [8] K. Zhang, Y. Liu, J. Wang, H. Song, and D. Liu, “Tree-based airspace capacity estimation,” in _2020 Integrated Communications Navigation and Surveillance Conference (ICNS)_. IEEE, 2020, pp. 5C1–1.
* [9] Y. Liu, J. Li, Z. Ming, H. Song, X. Weng, and J. Wang, “Domain-specific data mining for residents’ transit pattern retrieval from incomplete information,” _Journal of Network and Computer Applications_ , vol. 134, pp. 62–71, 2019.
* [10] H. Song, G. A. Fink, and S. Jeschke, _Security and Privacy in Cyber-Physical Systems: Foundations, Principles and Applications_. Chichester, UK: Wiley-IEEE Press, 2017.
* [11] I. Butun, P. Österberg, and H. Song, “Security of the internet of things: Vulnerabilities, attacks and countermeasures,” _IEEE Communications Surveys Tutorials_ , pp. 1–1, 2019.
* [12] J. Wurm, K. Hoang, O. Arias, A.-R. Sadeghi, and Y. Jin, “Security analysis on consumer and industrial iot devices,” in _2016 21st Asia and South Pacific Design Automation Conference (ASP-DAC)_. IEEE, 2016, pp. 519–524.
* [13] O. Shwartz, Y. Mathov, M. Bohadana, Y. Elovici, and Y. Oren, “Reverse engineering iot devices: Effective techniques and methods,” _IEEE Internet of Things Journal_ , vol. 5, no. 6, pp. 4965–4976, 2018.
* [14] A. Gupta, _The IoT Hacker’s Handbook_. Springer, 2019.
* [15] K. Liu, M. Yang, Z. Ling, H. Yan, Y. Zhang, X. Fu, and W. Zhao, “On manually reverse engineering communication protocols of linux based iot systems,” _arXiv preprint arXiv:2007.11981_ , 2020.
* [16] L. Costa, J. P. Barros, and M. Tavares, “Vulnerabilities in iot devices for smart home environment,” in _Proceedings of the 5th International Conference on Information Systems Security e Privacy, ICISSP 2019._ , vol. 1. SCITEPRESS, 2019, pp. 615–622.
* [17] J. Pollack and P. Ranganathan, “Aviation navigation systems security: Ads-b, gps, iff,” in _Proceedings of the International Conference on Security and Management (SAM)_. The Steering Committee of The World Congress in Computer Science, Computer …, 2018, pp. 129–135.
* [18] M. R. Manesh and N. Kaabouch, “Analysis of vulnerabilities, attacks, countermeasures and overall risk of the automatic dependent surveillance-broadcast (ads-b) system,” _International Journal of Critical Infrastructure Protection_ , vol. 19, pp. 16–31, 2017.
* [19] D. Dimitrios, M. Baldauf, and M. Kitada, “Vulnerabilities of the automatic identification system in the era of maritime autonomous surface ships,” Jun. 2018.
* [20] C. Ray, R. Gallen, C. Iphar, A. Napoli, and A. Bouju, “Deais project: detection of ais spoofing and resulting risks,” in _OCEANS 2015-Genova_. IEEE, 2015, pp. 1–6.
* [21] F. L. Yihunie, A. K. Singh, and S. Bhatia, “Assessing and exploiting security vulnerabilities of unmanned aerial vehicles,” in _Smart Systems and IoT: Innovations in Computing_. Springer, 2020, pp. 701–710.
* [22] A. Koubaa, A. Allouch, M. Alajlan, Y. Javed, A. Belghith, and M. Khalgui, “Micro air vehicle link (mavlink) in a nutshell: A survey,” _IEEE Access_ , vol. 7, pp. 87 658–87 680, 2019.
* [23] L. Y. Paul, J. S. Baras, and B. M. Sadler, “Physical-layer authentication,” _IEEE Transactions on Information Forensics and Security_ , vol. 3, no. 1, pp. 38–51, 2008.
* [24] W. Wang, Z. Sun, S. Piao, B. Zhu, and K. Ren, “Wireless physical-layer identification: Modeling and validation,” _IEEE Transactions on Information Forensics and Security_ , vol. 11, no. 9, pp. 2091–2106, 2016.
* [25] H. A. Khattak, M. A. Shah, S. Khan, I. Ali, and M. Imran, “Perception layer security in internet of things,” _Future Generation Computer Systems_ , vol. 100, pp. 144–164, 2019.
* [26] V. Brik, S. Banerjee, M. Gruteser, and S. Oh, “Wireless device identification with radiometric signatures,” in _Proceedings of the 14th ACM international conference on Mobile computing and networking_. ACM, 2008, pp. 116–127.
* [27] J. Sun, “An open-access book about decoding mode-s and ads-b data,” https://mode-s.org/, May 2017.
* [28] B. J. Tetreault, “Use of the automatic identification system (ais) for maritime domain awareness (mda),” in _Proceedings of Oceans 2005 Mts/Ieee_. IEEE, 2005, pp. 1590–1594.
* [29] M. A. Amanullah, R. A. A. Habeeb, F. H. Nasaruddin, A. Gani, E. Ahmed, A. S. M. Nainar, N. M. Akim, and M. Imran, “Deep learning and big data technologies for iot security,” _Computer Communications_ , vol. 151, pp. 495–517, 2020.
* [30] M. A. Al-Garadi, A. Mohamed, A. Al-Ali, X. Du, I. Ali, and M. Guizani, “A survey of machine and deep learning methods for internet of things (iot) security,” _IEEE Communications Surveys & Tutorials_, 2020.
* [31] N. Wang, P. Wang, A. Alipour-Fanid, L. Jiao, and K. Zeng, “Physical-layer security of 5g wireless networks for iot: Challenges and opportunities,” _IEEE Internet of Things Journal_ , vol. 6, no. 5, pp. 8169–8181, 2019.
* [32] G. Baldini and G. Steri, “A survey of techniques for the identification of mobile phones using the physical fingerprints of the built-in components,” _IEEE Communications Surveys & Tutorials_, vol. 19, no. 3, pp. 1761–1789, 2017.
* [33] B. Danev and S. Capkun, “Physical-layer identification of wireless sensor nodes,” _Technical report/ETH Zürich, Department of Computer Science_ , vol. 604, 2012.
* [34] K. Zeng, K. Govindan, and P. Mohapatra, “Non-cryptographic authentication and identification in wireless networks,” _network security_ , vol. 1, p. 3, 2010.
* [35] K.-H. Wang, C.-M. Chen, W. Fang, and T.-Y. Wu, “On the security of a new ultra-lightweight authentication protocol in iot environment for rfid tags,” _The Journal of Supercomputing_ , vol. 74, no. 1, pp. 65–70, 2018.
* [36] F. Loi, A. Sivanathan, H. H. Gharakheili, A. Radford, and V. Sivaraman, “Systematically evaluating security and privacy for consumer iot devices,” in _Proceedings of the 2017 Workshop on Internet of Things Security and Privacy_ , 2017, pp. 1–6.
* [37] D. S. Guamán, J. M. Del Alamo, and J. C. Caiza, “A systematic mapping study on software quality control techniques for assessing privacy in information systems,” _IEEE Access_ , vol. 8, pp. 74 808–74 833, 2020.
* [38] J. Ren, D. J. Dubois, D. Choffnes, A. M. Mandalari, R. Kolcun, and H. Haddadi, “Information exposure from consumer iot devices: A multidimensional, network-informed measurement approach,” in _Proceedings of the Internet Measurement Conference_ , 2019, pp. 267–279.
* [39] S. Al-Sarawi, M. Anbar, K. Alieyan, and M. Alzubaidi, “Internet of things (iot) communication protocols,” in _2017 8th International conference on information technology (ICIT)_. IEEE, 2017, pp. 685–690.
* [40] I. Jawhar, N. Mohamed, and J. Al-Jaroodi, “Networking architectures and protocols for smart city systems,” _Journal of Internet Services and Applications_ , vol. 9, no. 1, p. 26, 2018.
* [41] F. Metzger, T. Hoßfeld, A. Bauer, S. Kounev, and P. E. Heegaard, “Modeling of aggregated iot traffic and its application to an iot cloud,” _Proceedings of the IEEE_ , vol. 107, no. 4, pp. 679–694, 2019.
* [42] S. Han, K. Jang, K. Park, and S. Moon, “Packetshader: a gpu-accelerated software router,” _ACM SIGCOMM Computer Communication Review_ , vol. 40, no. 4, pp. 195–206, 2010.
* [43] F. Hu, Q. Hao, and K. Bao, “A survey on software-defined network and openflow: From concept to implementation,” _IEEE Communications Surveys & Tutorials_, vol. 16, no. 4, pp. 2181–2206, 2014.
* [44] W. Rafique, L. Qi, I. Yaqoob, M. Imran, R. ur Rasool, and W. Dou, “Complementing iot services through software defined networking and edge computing: A comprehensive survey,” _IEEE Communications Surveys & Tutorials_, 2020.
* [45] L. Gao, C. Zhang, and L. Sun, “Restful web of things api in sharing sensor data,” in _2011 International Conference on Internet Technology and Applications_. IEEE, 2011, pp. 1–4.
* [46] A. Sivanathan, “Iot behavioral monitoring via network traffic analysis,” _arXiv preprint arXiv:2001.10632_ , 2020.
* [47] A. Sivanathan, D. Sherratt, H. H. Gharakheili, A. Radford, C. Wijenayake, A. Vishwanath, and V. Sivaraman, “Characterizing and classifying iot traffic in smart cities and campuses,” in _2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)_. IEEE, 2017, pp. 559–564.
* [48] S. Marchal, M. Miettinen, T. D. Nguyen, A.-R. Sadeghi, and N. Asokan, “Audi: Toward autonomous iot device-type identification using periodic communication,” _IEEE Journal on Selected Areas in Communications_ , vol. 37, no. 6, pp. 1402–1412, 2019.
* [49] M. Miettinen, S. Marchal, I. Hafeez, N. Asokan, A.-R. Sadeghi, and S. Tarkoma, “Iot sentinel: Automated device-type identification for security enforcement in iot,” in _2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)_. IEEE, 2017, pp. 2177–2184.
* [50] Y. Meidan, M. Bohadana, A. Shabtai, J. D. Guarnizo, M. Ochoa, N. O. Tippenhauer, and Y. Elovici, “Profiliot: a machine learning approach for iot device identification based on network traffic analysis,” in _Proceedings of the symposium on applied computing_ , 2017, pp. 506–509.
* [51] Y. Meidan, M. Bohadana, A. Shabtai, M. Ochoa, N. O. Tippenhauer, J. D. Guarnizo, and Y. Elovici, “Detection of unauthorized iot devices using machine learning techniques,” _arXiv preprint arXiv:1709.04647_ , 2017.
* [52] A. Aksoy and M. H. Gunes, “Automated iot device identification using network traffic,” in _ICC 2019-2019 IEEE International Conference on Communications (ICC)_. IEEE, 2019, pp. 1–7.
* [53] A. Sivanathan, H. H. Gharakheili, and V. Sivaraman, “Managing iot cyber-security using programmable telemetry and machine learning,” _IEEE Transactions on Network and Service Management_ , vol. 17, no. 1, pp. 60–74, 2020.
* [54] J. Kotak and Y. Elovici, “Iot device identification using deep learning,” _arXiv preprint arXiv:2002.11686_ , 2020.
* [55] E. Lear, R. Droms, and D. Romascanu, “Rfc 8520: Manufacturer usage description specification,” _Internet Engineering Task Force_ , 2019.
* [56] H. Hassan, S. Vijay, H. Ayyoob, S. Arunan, and P. Arman, “IoT device profiles, attack traces and traffic traces,” https://iotanalytics.unsw.edu.au/index, Accessed on Oct. 2020.
* [57] A. Hamza, D. Ranathunga, H. H. Gharakheili, T. A. Benson, M. Roughan, and V. Sivaraman, “Verifying and monitoring iots network behavior using mud profiles,” _IEEE Transactions on Dependable and Secure Computing_ , 2020.
* [58] Y. Ashibani and Q. H. Mahmoud, “A user authentication model for iot networks based on app traffic patterns,” in _2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)_. IEEE, 2018, pp. 632–638.
* [59] ——, “Design and evaluation of a user authentication model for iot networks based on app event patterns,” _Cluster Computing_ , pp. 1–14, 2020.
* [60] Z. Zvonar and J. Mitola, _Software radio technologies: selected readings_. Wiley-IEEE Press, 2001.
* [61] B. Chatterjee, D. Das, S. Maity, and S. Sen, “Rf-puf: Enhancing iot security through authentication of wireless nodes using in-situ machine learning,” _IEEE Internet of Things Journal_ , vol. 6, no. 1, pp. 388–398, 2018.
* [62] C. Herder, M.-D. Yu, F. Koushanfar, and S. Devadas, “Physical unclonable functions and applications: A tutorial,” _Proceedings of the IEEE_ , vol. 102, no. 8, pp. 1126–1141, 2014.
* [63] A. C. Polak and D. L. Goeckel, “Wireless device identification based on rf oscillator imperfections,” _IEEE Transactions on Information Forensics and Security_ , vol. 10, no. 12, pp. 2492–2501, 2015.
* [64] M. Azarmehr, A. Mehta, and R. Rashidzadeh, “Wireless device identification using oscillator control voltage as rf fingerprint,” in _2017 IEEE 30th Canadian Conference on Electrical and Computer Engineering (CCECE)_. IEEE, 2017, pp. 1–4.
* [65] Z. Zhuang, X. Ji, T. Zhang, J. Zhang, W. Xu, Z. Li, and Y. Liu, “Fbsleuth: Fake base station forensics via radio frequency fingerprinting,” in _Proceedings of the 2018 on Asia Conference on Computer and Communications Security_. ACM, 2018, pp. 261–272.
* [66] L. Peng, A. Hu, Y. Jiang, Y. Yan, and C. Zhu, “A differential constellation trace figure based device identification method for zigbee nodes,” in _2016 8th International Conference on Wireless Communications & Signal Processing (WCSP)_. IEEE, 2016, pp. 1–6.
* [67] L. Peng, A. Hu, J. Zhang, Y. Jiang, J. Yu, and Y. Yan, “Design of a hybrid rf fingerprint extraction and device classification scheme,” _IEEE Internet of Things Journal_ , vol. 6, no. 1, pp. 349–360, 2018.
* [68] L. Peng, J. Zhang, M. Liu, and A. Hu, “Deep learning based rf fingerprint identification using differential constellation trace figure,” _IEEE Transactions on Vehicular Technology_ , 2019.
* [69] G. Zhang, L. Xia, S. Jia, and Y. Ji, “Identification of cloned hf rfid proximity cards based on rf fingerprinting,” in _2016 IEEE Trustcom/BigDataSE/ISPA_. IEEE, 2016, pp. 292–300.
* [70] W. E. Cobb, E. W. Garcia, M. A. Temple, R. O. Baldwin, and Y. C. Kim, “Physical layer identification of embedded devices using rf-dna fingerprinting,” in _2010-Milcom 2010 Military Communications Conference_. IEEE, 2010, pp. 2168–2173.
* [71] C. Wang, Y. Lin, and Z. Zhang, “Research on physical layer security of cognitive radio network based on rf-dna,” in _2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C)_. IEEE, 2017, pp. 252–255.
* [72] T. J. Bihl, K. W. Bauer, and M. A. Temple, “Feature selection for rf fingerprinting with multiple discriminant analysis and using zigbee device emissions,” _IEEE Transactions on Information Forensics and Security_ , vol. 11, no. 8, pp. 1862–1874, 2016.
* [73] B. W. Ramsey, T. D. Stubbs, B. E. Mullins, M. A. Temple, and M. A. Buckner, “Wireless infrastructure protection using low-cost radio frequency fingerprinting receivers,” _international journal of critical infrastructure protection_ , vol. 8, pp. 27–39, 2015.
* [74] C. K. Dubendorfer, B. W. Ramsey, and M. A. Temple, “An rf-dna verification process for zigbee networks,” in _MILCOM 2012-2012 IEEE Military Communications Conference_. IEEE, 2012, pp. 1–6.
* [75] P. K. Harmer, M. A. Temple, M. A. Buckner, and E. Farquahar, “Using differential evolution to optimize 'learning from signals' and enhance network security,” in _Proceedings of the 13th annual conference on Genetic and evolutionary computation_. ACM, 2011, pp. 1811–1818.
* [76] X. Zhou, A. Hu, G. Li, L. Peng, Y. Xing, and J. Yu, “Design of a robust rf fingerprint generation and classification scheme for practical device identification,” in _2019 IEEE Conference on Communications and Network Security (CNS)_. IEEE, 2019, pp. 196–204.
* [77] B. Danev, H. Luecken, S. Capkun, and K. El Defrawy, “Attacks on physical-layer identification,” in _Proceedings of the third ACM conference on Wireless network security_. ACM, 2010, pp. 89–98.
* [78] A. C. Polak and D. L. Goeckel, “Identification of wireless devices of users who actively fake their rf fingerprints with artificial data distortion,” _IEEE Transactions on Wireless Communications_ , vol. 14, no. 11, pp. 5889–5899, 2015.
* [79] M. Köse, S. Taşcioğlu, and Z. Telatar, “Rf fingerprinting of iot devices based on transient energy spectrum,” _IEEE Access_ , vol. 7, pp. 18 715–18 726, 2019.
* [80] Z. Zhang, Y. Li, C. Wang, M. Wang, Y. Tu, and J. Wang, “An ensemble learning method for wireless multimedia device identification,” _Security and Communication Networks_ , vol. 2018, 2018.
* [81] Y. Tu, Z. Zhang, Y. Li, C. Wang, and Y. Xiao, “Research on the internet of things device recognition based on rf-fingerprinting,” _IEEE Access_ , vol. 7, pp. 37 426–37 431, 2019.
* [82] A. M. Ali, E. Uzundurukan, and A. Kara, “Assessment of features and classifiers for bluetooth rf fingerprinting,” _IEEE Access_ , vol. 7, pp. 50 524–50 535, 2019.
* [83] J. Yu, A. Hu, F. Zhou, Y. Xing, Y. Yu, G. Li, and L. Peng, “Radio frequency fingerprint identification based on denoising autoencoders,” _arXiv preprint arXiv:1907.08809_ , 2019.
* [84] J. M. T. Romano, R. Attux, C. C. Cavalcante, and R. Suyama, _Unsupervised signal processing: channel equalization and source separation_. CRC Press, 2018.
* [85] F. Restuccia, S. D’Oro, A. Al-Shawabka, M. Belgiovine, L. Angioloni, S. Ioannidis, K. Chowdhury, and T. Melodia, “Deepradioid: Real-time channel-resilient optimization of deep learning-based radio fingerprinting algorithms,” _arXiv preprint arXiv:1904.07623_ , 2019.
* [86] C. Xu, G. Huang, and X. Hou, “Research on communication fm signal blind separation under jamming environments,” in _2nd International Forum on Management, Education and Information Technology Application (IFMEITA 2017)_. Atlantis Press, 2018.
* [87] S. N. Baliarsingh, A. Senapati, A. Deb, and J. S. Roy, “Adaptive beam formation for smart antenna for mobile communication network using new hybrid algorithms,” in _2016 International Conference on Communication and Signal Processing (ICCSP)_. IEEE, 2016, pp. 2146–2151.
* [88] S. S. Hanna and D. Cabric, “Deep learning based transmitter identification using power amplifier nonlinearity,” in _2019 International Conference on Computing, Networking and Communications (ICNC)_. IEEE, 2019, pp. 674–680.
* [89] T. Zheng, Z. Sun, and K. Ren, “Fid: Function modeling-based data-independent and channel-robust physical-layer identification,” in _IEEE INFOCOM 2019-IEEE Conference on Computer Communications_. IEEE, 2019, pp. 199–207.
* [90] D. Halperin, W. Hu, A. Sheth, and D. Wetherall, “Tool release: Gathering 802.11 n traces with channel state information,” _ACM SIGCOMM Computer Communication Review_ , vol. 41, no. 1, pp. 53–53, 2011.
* [91] Y. Ma, G. Zhou, and S. Wang, “Wifi sensing with channel state information: A survey,” _ACM Computing Surveys (CSUR)_ , vol. 52, no. 3, p. 46, 2019.
* [92] H. Liu, Y. Wang, J. Liu, J. Yang, and Y. Chen, “Practical user authentication leveraging channel state information (csi),” in _Proceedings of the 9th ACM symposium on Information, computer and communications security_. ACM, 2014, pp. 389–400.
* [93] S. Zaman, C. Chakraborty, N. Mehajabin, M. Mamun-Or-Rashid, and M. A. Razzaque, “A deep learning based device authentication scheme using channel state information,” in _2018 International Conference on Innovation in Engineering and Technology (ICIET)_. IEEE, 2018, pp. 1–5.
* [94] Y. Zou, Y. Wang, S. Ye, K. Wu, and L. M. Ni, “Tagfree: Passive object differentiation via physical layer radiometric signatures,” in _2017 IEEE International Conference on Pervasive Computing and Communications (PerCom)_. IEEE, 2017, pp. 237–246.
* [95] J. Hua, H. Sun, Z. Shen, Z. Qian, and S. Zhong, “Accurate and efficient wireless device fingerprinting using channel state information,” in _IEEE INFOCOM 2018-IEEE Conference on Computer Communications_. IEEE, 2018, pp. 1700–1708.
* [96] P. Liu, P. Yang, W.-Z. Song, Y. Yan, and X.-Y. Li, “Real-time identification of rogue wifi connections using environment-independent physical features,” in _IEEE INFOCOM 2019-IEEE Conference on Computer Communications_. IEEE, 2019, pp. 190–198.
* [97] Z. Wang, B. Guo, Z. Yu, and X. Zhou, “Wi-fi csi-based behavior recognition: From signals and actions to activities,” _IEEE Communications Magazine_ , vol. 56, no. 5, pp. 109–115, 2018.
* [98] F. Hong, X. Wang, Y. Yang, Y. Zong, Y. Zhang, and Z. Guo, “Wfid: Passive device-free human identification using wifi signal,” in _Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services_. ACM, 2016, pp. 47–56.
* [99] Y. Zeng, P. H. Pathak, C. Xu, and P. Mohapatra, “Your ap knows how you move: fine-grained device motion recognition through wifi,” in _Proceedings of the 1st ACM workshop on Hot topics in wireless_. ACM, 2014, pp. 49–54.
* [100] Y. Wang, J. Liu, Y. Chen, M. Gruteser, J. Yang, and H. Liu, “E-eyes: device-free location-oriented activity identification using fine-grained wifi signatures,” in _Proceedings of the 20th annual international conference on Mobile computing and networking_. ACM, 2014, pp. 617–628.
* [101] Y. Wang, K. Wu, and L. M. Ni, “Wifall: Device-free fall detection by wireless networks,” _IEEE Transactions on Mobile Computing_ , vol. 16, no. 2, pp. 581–594, 2016.
* [102] G. Li, J. Yu, Y. Xing, and A. Hu, “Location-invariant physical layer identification approach for wifi devices,” _IEEE Access_ , vol. 7, pp. 106 974–106 986, 2019.
* [103] C. Xu, B. Chen, Y. Liu, F. He, and H. Song, “Rf fingerprint measurement for detecting multiple amateur drones based on stft and feature reduction,” in _2020 Integrated Communications Navigation and Surveillance Conference (ICNS)_. IEEE, 2020, pp. 4G1–1.
* [104] S. Chen, F. Xie, Y. Chen, H. Song, and H. Wen, “Identification of wireless transceiver devices using radio frequency (rf) fingerprinting based on stft analysis to enhance authentication security,” in _2017 IEEE 5th International Symposium on Electromagnetic Compatibility (EMC-Beijing)_. IEEE, 2017, pp. 1–5.
* [105] D. R. Reising, M. A. Temple, and J. A. Jackson, “Authorized and rogue device discrimination using dimensionally reduced rf-dna fingerprints,” _IEEE Transactions on Information Forensics and Security_ , vol. 10, no. 6, pp. 1180–1192, 2015.
* [106] Y. Li, L. Chen, J. Chen, F. Xie, S. Chen, and H. Wen, “A low complexity feature extraction for the rf fingerprinting process,” in _2018 IEEE Conference on Communications and Network Security (CNS)_. IEEE, 2018, pp. 1–2.
YongXin Liu<EMAIL_ADDRESS>received his first Ph.D. from South China University of Technology (SCUT) and is currently working towards his second Ph.D. in the Department of Electrical Engineering and Computer Science, Embry-Riddle Aeronautical University, Daytona Beach, FL. His major research interests include data mining, wireless networks, the Internet of Things, and unmanned aerial vehicles.
Jian Wang<EMAIL_ADDRESS>is a Ph.D. student in the Department of Electrical, Computer, Software, and Systems Engineering (ECSSE), Embry-Riddle Aeronautical University (ERAU), Daytona Beach, Florida, and a graduate research assistant in the Security and Optimization for Networked Globe Laboratory (SONG Lab, www.SONGLab.us). He received his M.S. from South China Agricultural University (SCAU) in 2017 and B.S. from Nanyang Normal University in 2014. His major research interests include wireless networks, unmanned aerial systems, and machine learning.
Jianqiang Li<EMAIL_ADDRESS>received his B.S. and Ph.D. degrees from the South China University of Technology in 2003 and 2008, respectively. He is a Professor with the College of Computer and Software Engineering, Shenzhen University, Shenzhen, China. He is leading two projects funded by the National Natural Science Foundation of China and two projects funded by the Natural Science Foundation of Guangdong, China. His major research interests include the Internet of Things, robotics, hybrid systems, and embedded systems.
Shuteng Niu<EMAIL_ADDRESS>is a Ph.D. student in the Department of Electrical, Computer, Software, and Systems Engineering (ECSSE), Embry-Riddle Aeronautical University (ERAU), Daytona Beach, Florida, and a graduate research assistant in the Security and Optimization for Networked Globe Laboratory (SONG Lab, www.SONGLab.us). He received his M.S. from Embry-Riddle Aeronautical University (ERAU) in 2018 and B.S. from Civil Aviation University of China (CAUC) in 2015. His major research interests include machine learning, data mining, and signal processing.
Houbing Song (M'12-SM'14) received the Ph.D. degree in electrical engineering from the University of Virginia, Charlottesville, VA, in August 2012. In August 2017, he joined the Department of Electrical Engineering and Computer Science, Embry-Riddle Aeronautical University, Daytona Beach, FL, where he is currently an Assistant Professor and the Director of the Security and Optimization for Networked Globe Laboratory (SONG Lab, www.SONGLab.us). He has served as an Associate Technical Editor for IEEE Communications Magazine (2017-present), an Associate Editor for IEEE Internet of Things Journal (2020-present) and IEEE Journal on Miniaturization for Air and Space Systems (J-MASS) (2020-present), and a Guest Editor for IEEE Journal on Selected Areas in Communications (J-SAC), IEEE Internet of Things Journal, IEEE Transactions on Industrial Informatics, IEEE Sensors Journal, IEEE Transactions on Intelligent Transportation Systems, and IEEE Network. He is the editor of six books, including Big Data Analytics for Cyber-Physical Systems: Machine Learning for the Internet of Things (Elsevier, 2019); Smart Cities: Foundations, Principles and Applications (Hoboken, NJ: Wiley, 2017); Security and Privacy in Cyber-Physical Systems: Foundations, Principles and Applications (Chichester, UK: Wiley-IEEE Press, 2017); Cyber-Physical Systems: Foundations, Principles and Applications (Boston, MA: Academic Press, 2016); and Industrial Internet of Things: Cybermanufacturing Systems (Cham, Switzerland: Springer, 2016). He is the author of more than 100 articles. His research interests include cyber-physical systems, cybersecurity and privacy, the Internet of Things, edge computing, AI/machine learning, big data analytics, unmanned aircraft systems, connected vehicles, smart and connected health, and wireless communications and networking. His research has been featured by popular news media outlets, including IEEE GlobalSpec's Engineering360, USA Today, U.S. News & World Report, Fox News, Association for Unmanned Vehicle Systems International (AUVSI), Forbes, WFTV, and New Atlas. Dr. Song is a senior member of ACM and an ACM Distinguished Speaker. Dr. Song was a recipient of the Best Paper Award from the 12th IEEE International Conference on Cyber, Physical and Social Computing (CPSCom-2019), the Best Paper Award from the 2nd IEEE International Conference on Industrial Internet (ICII 2019), the Best Paper Award from the 19th Integrated Communication, Navigation and Surveillance technologies (ICNS 2019) Conference, the Best Paper Award from the 6th IEEE International Conference on Cloud and Big Data Computing (CBDCom 2020), and the Best Paper Award from the 15th International Conference on Wireless Algorithms, Systems, and Applications (WASA 2020).
# Modeling and simulation of vascular tumors embedded in evolving capillary networks
Marvin Fritz1, Prashant K. Jha2, Tobias Köppl1,∗, J. Tinsley Oden2,
Andreas Wagner1, and Barbara Wohlmuth1,3
###### Abstract.
In this work, we present a coupled 3D-1D model of solid tumor growth within a
dynamically changing vascular network to facilitate realistic simulations of
angiogenesis. Additionally, the model includes erosion of the extracellular
matrix, interstitial flow, and coupled flow in blood vessels and tissue. We
employ continuum mixture theory with stochastic Cahn–Hilliard type phase-field
models of tumor growth. The interstitial flow is governed by a mesoscale
version of Darcy's law. The flow in the blood vessels is modeled as
Poiseuille flow, and Starling's law is applied to model the mass transfer in
and out of blood vessels. The evolution of the network of blood vessels is
orchestrated by the concentration of tumor angiogenesis factors (TAFs):
blood vessels grow towards regions of increasing TAF concentration. This
process is not deterministic, allowing for random growth of blood vessels;
through the coupling of nutrients in tissue and vessels, this in turn renders
tumor growth stochastic. We demonstrate the performance of the model by applying it to a
variety of scenarios. Numerical experiments illustrate the flexibility of the
model and its ability to generate satellite tumors. Simulations of the effects
of angiogenesis on tumor growth are presented, along with sample-independent
features of cancer.
###### Key words and phrases:
tumor growth, 3D-1D coupled blood flow models, angiogenesis, finite elements,
finite volume
###### 2020 Mathematics Subject Classification:
65M08, 65M60, 76S05, 76Z05, 92C17, 92C42.
∗Corresponding author
1Department of Mathematics, Technical University of Munich, Germany
2Oden Institute for Computational Engineering and Sciences, The University of
Texas at Austin, USA
3Department of Mathematics, University of Bergen, Allegaten 41, 5020 Bergen,
Norway
## 1. Introduction
In this work, we present new computational models and algorithms for
simulating and predicting a broad range of biological and physical phenomena
related to cancer at the tissue scale. We consider the growth of solid
vascular tumors inside living tissue containing a dynamically evolving
vasculature. One of the main goals of this work is to provide realistic
simulations of the vascular growth characterizing angiogenesis, whereby blood
vessels sprout and invade the domain of the solid tumor when prompted by
concentrations of proteins collectively referred to as tumor angiogenesis
factors (TAFs); these proteins are produced by nutrient-starved cancerous
cells. The tumor growth is necessarily depicted by a multispecies model in
which tumor cell concentrations are categorized as proliferative, hypoxic, or
necrotic. To capture the complex interaction of cell species and the evolving
interfaces between species, continuum mixture theory is used as a framework
for constructing mesoscale stochastic phase-field models of the Cahn–Hilliard
type. Other critical phenomena are also addressed by this class of models,
including the erosion of the extracellular matrix (ECM) due to concentrations
of matrix-degrading enzymes (MDEs, such as matrix metalloproteinases and
urokinase plasminogen activators) that erode the ECM and permit the invasion
of tumor cells as a prelude to metastasis [25, 41].
The volume of an isolated colony of tumor cells will not generally grow beyond
approximately $1\,\mathrm{mm}^{3}$ [31, 46, 42] unless sufficient
nutrients and oxygen are supplied for proliferation. To acquire such
nutrients, cancerous cells promote angiogenesis [8, 47]. Low levels of oxygen
and nutrients result in tumor cells entering a hypoxic phase during which
they remain dormant and release various proteins such as TAFs that promote the
proliferation of endothelial cells and new vessel formation. Similarly, low
oxygen levels can generate irregular invasive tumors governed by haptotaxis
[25, 41]. Because angiogenesis is one of the major processes through which
tumors grow, anti-angiogenic drugs that inhibit the formation of the new
vascular structure are often identified as one of the approaches to delay or
arrest the growth of cancer. Thus, a realistic model of angiogenesis is of
critical importance for studying the effectiveness of anti-angiogenic drugs.
Typically, the vasculature near the tumor core in the early stages of tumor
growth may not effectively supply nutrients to the tumor. The vasculature
evolves rapidly; consequently, the vessel walls are not fully developed and
may be destroyed by elevated pressure (the proliferation of tumor cells
raises the pressure nearby), by the pruning of vessels after sustained
periods of insufficient flow, and by vasculature adaptation and remodeling
[45, 52, 50, 51, 59]. Highly interconnected and irregular vasculature with
inefficient blood
vessels causes low blood flow rates to the tumor, making it possible that
therapeutic drugs miss the tumor mass altogether [59]. Shear and
circumferential stresses due to blood flow result in vascular adaptation
effects such as vessel radii adaptation, see [60, 38, 51]. All of these
phenomena are represented by the models described herein.
Earlier models taking into account angiogenesis include lattice-probabilistic
network models, see [3, 69, 37, 59, 60, 38, 45]. An overview of this class of
models is given in [15]. Another class, referred to as agent-based models,
has been proposed and extensively studied. The idea there is to introduce a
phase-field for the tip endothelial cells that takes the value $1$ inside the
vessel and $0$ outside; the agents can move anywhere in the simulation domain
following prescribed rules, which can be designed to trigger the sprouting of
new vessels; see [36, 62, 64, 48]. These models do not capture blood
circulation in the vessels and therefore cannot be fully coupled to the tumor
growth. In [66], a dimensionally
coupled model for drug delivery based on MRI data and a study of dosing
protocols is considered with drug flow in the vessels governed by algebraic
rules instead of PDEs. More recently, vasculature models involving a network
of straight cylindrical vessels supporting the 1D flow of nutrient, oxygen,
and therapeutic drugs and coupled to the 3D tissue domain by versions of the
Starling or Kedem–Katchalsky law have been presented; see [67, 68, 33].
We consider a class of 3D-1D vascular tumor models [21] that approximates the
flow within the blood vessels by one-dimensional flow based on the
Poiseuille law, effectively reducing the flow in the three-dimensional
vessels to flow in a network of one-dimensional vessel segments. While
coupling the flow in
the vessels and tissue, the blood vessels’ three-dimensional nature is
retained by approximating the vessels as a network of straight-cylinders and
applying the fluid exchange at the walls of cylindrical segments. From a
mathematical and computational point of view, a complicating factor is the use
of one-dimensional characterizations of vessel segments in the vascular
network embedded in three-dimensional domains of the tissue and the tumor
within the tissue and the assignment of mechanical models to this 3D-1D system
to depict interstitial flow and pressure fluctuations. Mathematical analysis
showing well-posedness and existence of weak solutions for the class of 3D-1D
model considered in this work is performed in a recent paper [21].
In our model, the flow in vessels is governed by the 1D Poiseuille law,
whereas the flow in tissue is derived by treating the tissue domain as a
porous medium and applying a version of Darcy’s law. The model tracks
nutrients in both the tissue and the vessels; nutrients in the vessels are
governed by a 1D advection-diffusion equation, and nutrients in the tissue by
an advection-diffusion-reaction equation.
Flow and nutrients in the tissue and vessel are coupled; we assume that vessel
walls are porous, resulting in the advection and diffusion-driven exchange of
nutrients and coupling of the extravascular and intravascular pressures. Some
aspects of the 1D model architecture and coupling of 3D and 1D models are
based on previous works, see [30, 33, 32]. The 3D tissue domain includes, in
addition to the nutrients, ECM, tumor species such as proliferative, hypoxic,
necrotic, and diffusive molecules such as TAF and MDE.
As noted earlier, the 3D tumor model is derived from the balance laws of
continuum mixture theory as in [7, 12, 43, 36, 24, 13], and representations of
the principal mechanisms governing the development and evolution of cancer,
see, e.g., [36, 27]. In particular, we note the comprehensive developments
of diffuse-interface multispecies models presented in [65, 19], the
ten-species models derived in [36], and the nonlocal multispecies models of
adhesion and tumor invasion due to ECM degradation described in [22].
Angiogenesis models embedded in models of hypoxic and cell growth are
presented in [36, 67, 68, 21]. Related models of extracellular matrix (ECM)
degradation due to matrix-degrading enzymes (MDEs) and subsequent tumor
invasion and metastasis are discussed in [22, 11, 10, 17]. Several of the
earlier mechanistic models of tumor growth focused on modeling the effects of
mechanical deformation and blood flow, and fluid pressure on tumor growth,
e.g., [1, 2, 49, 5, 6, 34].
A key new feature of the models proposed here is the dynamic growth/deletion
of the vascular network and full coupling between the dynamic network and
tumor system in the tissue microenvironment. In response to TAF generated by
nutrient-starved hypoxic cells, new vessels are formed. The formation of new
vessels changes local conditions near the tumor, such as the nutrient
concentration, which in turn affects TAF production and promotes a higher
proliferation of tumor cells. The rules by which the network grows, or
existing vessels are
deleted due to insufficient flow and dormancy, are based on the experimentally
known causes of angiogenesis and are parameterized so that various aspects of
the network growth algorithm can be adjusted based on available experimental
data. By including the time-evolution of the larger vascular tissue domain and
the sprouting, growth, bifurcation, and pruning of the vascular network
orchestrated by a combination of blood supply and tumor-generated growth
factors, a more realistic depiction of tumor growth than the more common
isolated-tumor (avascular) models is obtained.
This article is organized as follows: In Section 2, we introduce various
components of the model, such as the tissue and 1D network domain and the
equations governing various fields. The details associated with the vessel
network growth are presented in Section 3. Spatial and temporal discretization
and solver schemes for the highly nonlinear coupled system of equations are
discussed in Section 4. We apply mixed finite volume and finite element
approximations to the model equations. The systems of equations arising in
each time step are solved using a semi-implicit fixed point iteration scheme.
In Section 5, the model is applied to various situations, and several
simulation experiments are presented. For further details on our
implementation of the solver, we refer to the open-source code at
https://github.com/CancerModeling/Angiogenesis3D1D. Concluding comments are
given in Section 6.
## 2\. Mathematical Modeling
In this work, a colony of tumor cells in an open bounded domain
$\Omega\subset\mathbb{R}^{3}$, e.g., representing an organ, is considered. It
is supported by a system of macromolecules consisting of collagen, enzymes,
and various proteins, that constitute the extracellular matrix. We focus on
phenomenological characterizations to capture mesoscale and macroscale events.
Additionally, we consider a one-dimensional graph-like structure $\Lambda$
inside of $\Omega$ forming a microvascular network, see Figure 1.
Figure 1. Setup of the domain $\Omega$ with the 1D microvascular network
$\Lambda$ and a tumor mass, which is composed of its proliferative
($\phi_{P}$), hypoxic ($\phi_{H}$) and necrotic ($\phi_{N}$) phases.
The single edges of $\Lambda$ of vessel components are denoted by
$\Lambda_{i}$ such that $\Lambda$ is given by
$\Lambda=\bigcup_{i=1}^{N}\Lambda_{i}$ and each edge $\Lambda_{i}$,
$i\in\\{1,\dots,N\\}$, is parameterized by a corresponding curve parameter
$s_{i}$ such that
$\Lambda_{i}=\\{\bm{x}\in\Omega\mid\bm{x}=\Lambda_{i}(s_{i})=\bm{x}_{i,1}+s_{i}\cdot(\bm{x}_{i,2}-\bm{x}_{i,1}),\;s_{i}\in(0,1)\\},$
where $\bm{x}_{i,1}\in\Omega$ and $\bm{x}_{i,2}\in\Omega$ mark the boundary
nodes of $\Lambda_{i}$, see Figure 2. For the total 1D network $\Lambda$, we
introduce a global curve parameter $s$, which is interpreted in the following
way: $s=s_{i}$, if $\bm{x}=\Lambda(s)=\Lambda_{i}(s_{i})$. At each value of
the curve parameter $s$, various 1D constituents exist, which interact with
their respective 3D counterpart in $\Omega$.
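As a concrete illustration, this edge parameterization can be sketched in a few lines of Python (the class and variable names here are hypothetical, not taken from the accompanying code):

```python
import numpy as np

class Edge:
    """One straight vessel segment Lambda_i with boundary nodes x1, x2."""
    def __init__(self, x1, x2, radius):
        self.x1 = np.asarray(x1, dtype=float)
        self.x2 = np.asarray(x2, dtype=float)
        self.radius = radius

    def point(self, s):
        """Lambda_i(s) = x_{i,1} + s * (x_{i,2} - x_{i,1}), s in (0, 1)."""
        return self.x1 + s * (self.x2 - self.x1)

    def length(self):
        return float(np.linalg.norm(self.x2 - self.x1))

# The network Lambda is the union of its edges Lambda_i.
network = [Edge([0, 0, 0], [1, 0, 0], radius=0.10),
           Edge([1, 0, 0], [1, 1, 0], radius=0.08)]
```

Evaluating `point(s)` for `s` on a uniform grid recovers the local curve parameterization $s_i$ of each segment.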
We introduce the surface $\Gamma$ of the microvascular network $\Lambda$ to
formulate the coupling between the 3D and 1D constituents in Subsection 2.2
and Subsection 2.4. For simplicity, it is assumed that the surface for a
single vessel is approximated by a cylinder with a constant radius, see Figure
2. The radius of a vessel that is associated with $\Lambda_{i}$ is given by
$R_{i}$ and the corresponding surface is denoted by $\Gamma_{i}$; i.e., we
have as the total surface $\Gamma=\bigcup_{i=1}^{N}\Gamma_{i}.$
Figure 2. Modeling of a single edge $\Lambda_{i}$ contained in the 1D graph-
like structure with boundary nodes $\bm{x}_{i,1}$ and $\bm{x}_{i,2}$. The
cylinder $\Gamma_{i}$ has a constant radius $R_{i}$.
### 2.1. Governing constituents
The principal dependent variables characterizing the growth and decline of the
tumor mass are taken to be a set of scalar-valued fields $\phi_{\alpha}$ with
values $\phi_{\alpha}(\bm{x},t)$ at a time $t\in[0,T]$ and point
$\bm{x}\in\Omega\subset\mathbb{R}^{3}$, representing the volume fractions of
constituents in the space-time domain $\Omega\times[0,T]$. The primary feature
of our model of tumor growth is the application of the framework of continuum
mixture theory in which multiple mechanical and chemical species can exist at
a point $\bm{x}\in\Omega$ at time $t>0$. Therefore, for a medium with
${N_{\alpha}}\in\mathbb{N}$ interacting constituents, the volume fraction of
each species $\phi_{\alpha}$, $\alpha\in\\{1,\dots,{N_{\alpha}}\\}$, is
represented by a field $\phi_{\alpha}$ with the value
$\phi_{\alpha}(\bm{x},t)$ at $(\bm{x},t)$ and the property
$\sum_{\alpha}\phi_{\alpha}(\bm{x},t)=1$.
We separate the tumor volume fraction $\phi_{T}=\phi_{T}(\bm{x},t)$ into the
sum of three phases $\phi_{T}=\phi_{P}+\phi_{H}+\phi_{N}$, where
$\phi_{P}=\phi_{P}(\bm{x},t)$ is the volume fraction of proliferative cells,
$\phi_{H}=\phi_{H}(\bm{x},t)$ that of hypoxic cells, and
$\phi_{N}=\phi_{N}(\bm{x},t)$ is the volume fraction of necrotic cells, see
Figure 1. Proliferative cells have a high probability of mitosis, i.e.,
division into twin cells, and to produce growth of the tumor mass. Hypoxic
cells are those tumor cells which are deprived of sufficient nutrient to
become or remain proliferative. Lastly, necrotic cells have died due to the
lack of nutrients.
The nutrient concentration and the tumor angiogenesis factor (TAF) over
$\Omega\times[0,T]$ are represented by scalar fields
$\phi_{\sigma}=\phi_{\sigma}(\bm{x},t)$ and $\phi_{TAF}=\phi_{TAF}(\bm{x},t)$,
respectively. The tumor cells’ response to hypoxia, i.e., when $\phi_{\sigma}$
is below a certain threshold, is the production of an enzyme that increases
cell mobility and activates the secretion of angiogenesis promoting factors
characterized by $\phi_{{TAF}}$. As a particular case of TAFs, we consider the
vascular endothelial growth factor (VEGF), which promotes sprouting of
endothelial cells forming the tubular structure of blood vessels, which grow
into new vessels and supply nutrients to the hypoxic volume fraction
$\phi_{H}$.
Another consequence of hypoxia is the release of matrix-degrading enzymes
(MDEs), e.g., urokinase plasminogen and matrix metalloproteinases, by the
hypoxic cells. We denote the volume fraction of the MDEs by
$\phi_{{MDE}}=\phi_{{MDE}}(\bm{x},t)$. The primary feature of the MDEs is the
erosion of the extracellular matrix, whose volume fraction is denoted by
$\phi_{{ECM}}=\phi_{{ECM}}(\bm{x},t)$. Consequently, the erosion of the ECM
produces room for the invasion of tumor cells, which increases $\phi_{T}$ in
the ECM domain and therefore raises the likelihood of metastasis. Below a
certain nutrient level, necrosis occurs and cells die, entering the necrotic
phase $\phi_{N}$. Tumor cells may also die naturally, in a process called
apoptosis.
Within the one-dimensional network $\Lambda$, we introduce the constituents
$\phi_{v}=\phi_{v}(s,t)$ and $v_{v}=v_{v}(s,t)$, which represent the one-
dimensional counterparts of the local nutrient concentration $\phi_{\sigma}$
and the volume-averaged velocity $v$. Additionally, we consider the pressures
$p_{v}=p_{v}(s,t)$ and $p=p(\bm{x},t)$ in the network and tissue domain,
respectively. In summary, we refer to the table below for the primary
variables and constituents of the model.
Constituents
---
${\bm{\phi}}$ | Vector of all 3D species volume fractions
$\phi_{\alpha}$ | Volume fraction of 3D species $\alpha\in\mathcal{A}=\\{P,H,N,\sigma,MDE,TAF,ECM\\}$
$\mu_{\beta}$ | Chemical potential, $\beta\in\\{P,H\\}$
$\phi_{v}$ | Volume fraction of nutrients in 1D network $\Lambda$
Flow model
$v$ | Convective velocity in tissue domain $\Omega$
$p$ | Pressure in tissue domain $\Omega$
$p_{v}$ | Pressure in 1D network domain $\Lambda$
$v_{v}$ | Velocity of blood flow in 1D network domain $\Lambda$
Functions
$\Psi$ | Double-well potential, see Eq. 2.2
$m_{\alpha}$ | Mobility function, see Eq. 2.1
$S_{\alpha}$ | Mass source, see Eq. 2.5
$W_{\alpha}$ | Wiener process, see Eq. 2.3
$J_{\sigma v}$ | Mass source density of nutrient due to 1D network Eq. 2.6
### 2.2. Three-dimensional model governing the tumor constituents
The evolution of the constituents $\phi_{\alpha}$ must obey the balance laws
of continuum mixture theory (e.g., see [36, 23]). Assuming constant and equal
mass densities of the constituents, the mass balance equations for the mixture
read as follows:
$\partial_{t}\phi_{\alpha}+\text{div}(\phi_{\alpha}v_{\alpha})=-\text{div}J_{\alpha}(\bm{\phi})+S_{\alpha}(\bm{\phi}),$
where $v_{\alpha}$ is the cell velocity of the $\alpha$-th constituent, and
$S_{\alpha}$ describes a mass source term that may depend on all species
$\bm{\phi}=(\phi_{P},\phi_{H},\phi_{N},\phi_{\sigma},\phi_{{MDE}},\phi_{{TAF}},\phi_{{ECM}})$.
Moreover, $J_{\alpha}$ represents the mass flux of the $\alpha$-th constituent
and is given by:
$J_{\alpha}(\bm{\phi})=-m_{\alpha}(\bm{\phi})\nabla\mu_{\alpha},$
where $\mu_{\alpha}$ denotes the chemical potential of the $\alpha$-th
species, and $m_{\alpha}$ is its corresponding mobility function. Generally,
the mobilities may depend on many species, but in this work we consider the
following cases,
(2.1) $\displaystyle m_{\alpha}(\bm{\phi})$
$\displaystyle=M_{\alpha}\phi_{\alpha}^{2}(1-\phi_{T})^{2}I_{d},$
$\displaystyle\alpha\in\\{P,H\\},$ $\displaystyle m_{\beta}(\bm{\phi})$
$\displaystyle=M_{\beta}I_{d},$
$\displaystyle\beta\in\\{\sigma,{MDE},{TAF}\\},$
where $M_{\alpha}$ are mobility constants, and $I_{d}$ is the $(d\times
d)$-dimensional identity matrix. For the remaining species $\phi_{N}$ and
$\phi_{{ECM}}$, we choose $m_{N}=m_{{ECM}}=0$ in accordance with the non-
diffusivity of the necrotic cells and the ECM; see [41]. Following [28, 36,
65], we define the chemical potential $\mu_{\alpha}$ as the first variation
(Gâteaux derivative) with respect to $\phi_{\alpha}$ of the
Ginzburg–Landau–Helmholtz free energy functional $\mathcal{E}(\bm{\phi})$. The
free energy in this work is designed to capture the following key effects:
* •
Phase change in tumor species $\phi_{T},\phi_{P},\phi_{H}$. For example,
$\phi_{T}$ can change (conditions permitting) from a healthy phase
$\phi_{T}=0$ to a tumor phase $\phi_{T}=1$. This is typically achieved by
introducing a double-well potential
(2.2) $
\Psi=\Psi(\phi_{T},\phi_{P},\phi_{H})=\sum_{\alpha\in\\{T,P,H\\}}C_{\Psi_{\alpha}}\phi_{\alpha}^{2}(1-\phi_{\alpha})^{2}$
to the free energy, where $C_{\Psi_{\alpha}}$, $\alpha\in\\{T,P,H\\}$, are
constants. In addition to phase separation between healthy and cancer phases
(using the energy term $C_{\Psi_{T}}\phi_{T}^{2}(1-\phi_{T})^{2}$), we have
also introduced energy terms that promote phase separation between
proliferative and non-proliferative and hypoxic and non-hypoxic phases. It is
possible to consider different forms of the double-well potential [21],
however, in this work we will consider $\Psi$ in Eq. 2.2 with
$C_{\Psi_{P}}=C_{\Psi_{H}}=0$, see Section 5.
* •
Promote phase separation between two phases of species
$\phi_{T},\phi_{P},\phi_{H}$. For example, a model could exhibit phase values
at $\bm{x}$ between, say, $\phi_{\alpha}=0$ and $\phi_{\alpha}=1$, with a
change in gradient, $\nabla\phi_{\alpha}$, at the interface of these phases.
Such changes are manifested as surface energy terms in the form of penalties
on the magnitude of $\nabla\phi_{\alpha}$ of the form
$\frac{\varepsilon_{\alpha}^{2}}{2}|\nabla\phi_{\alpha}|^{2},$
where $\varepsilon_{\alpha}$ controls the thickness of the phase interface.
* •
Diffusion driven mobilities of species
$\phi_{\sigma},\phi_{{TAF}},\phi_{{MDE}}$. These effects are captured by
adding the diffusive energies
$\frac{D_{\beta}}{2}\phi_{\beta}^{2},$
where $D_{\beta}$, $\beta\in\\{\sigma,{TAF},{MDE}\\}$, are diffusion
coefficients.
* •
Chemotaxis and haptotaxis effects. Chemotaxis represents a movement of cells
towards a gradient of nutrients (i.e., along the direction of increasing
nutrients). Similar to chemotaxis, the tumor cells show a tendency to move
along the ECM gradient, and this phenomenon is referred to as haptotaxis.
These effects are incorporated via the terms [29, 61]
$-(\chi_{c}\phi_{\sigma}+\chi_{h}\phi_{{ECM}})\sum_{\alpha\in\\{P,H\\}}\phi_{\alpha},$
where $\chi_{c},\chi_{h}$ are chemotaxis and haptotaxis coefficients,
respectively. In the above energy terms, we exclude necrotic cells to be
consistent with our assumption that necrotic cells are immobile.
Combining these effects, the free energy takes the form
$\mathcal{E}(\bm{\phi})=\int_{\Omega}\Big{\\{}\Psi(\phi_{P},\phi_{H},\phi_{N})+\sum_{\alpha\in\\{P,H\\}}\frac{\varepsilon_{\alpha}^{2}}{2}|\nabla\phi_{\alpha}|^{2}+\sum_{\beta\in{\mathcal{R}\mathcal{D}}}\frac{D_{\beta}}{2}\phi_{\beta}^{2}-(\chi_{c}\phi_{\sigma}+\chi_{h}\phi_{{ECM}})\sum_{\alpha\in\\{P,H\\}}\phi_{\alpha}\Big{\\}}\text{
d}\bm{x},$
where ${\mathcal{R}\mathcal{D}}=\\{\sigma,{MDE},{TAF},{ECM}\\}$ is the set of
species driven by reaction-diffusion type equations. We assume a volume-
averaged velocity $v$ for the proliferative cells, hypoxic cells, and the
nutrient concentration. This assumption is regarded as reasonable whenever
cells are tightly packed.
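To make the free-energy contributions concrete, the following Python sketch evaluates the pointwise energy density with $C_{\Psi_{P}}=C_{\Psi_{H}}=0$ (as later assumed in Section 5); all coefficient values are placeholders, not the paper's calibrated parameters:

```python
import numpy as np

def double_well(phi, C=1.0):
    """Psi term C * phi^2 * (1 - phi)^2 with minima at phi = 0 and phi = 1."""
    return C * phi**2 * (1.0 - phi)**2

def energy_density(phi_P, phi_H, phi_N, grad_P, grad_H, phi_sigma, phi_ECM,
                   eps_P=0.01, eps_H=0.01, D_sigma=1.0,
                   chi_c=0.05, chi_h=0.05, C_T=1.0):
    """Pointwise integrand of E(phi): double well + interface penalty +
    diffusive energy + chemotaxis/haptotaxis adhesion (placeholder values)."""
    phi_T = phi_P + phi_H + phi_N
    psi = double_well(phi_T, C_T)                     # C_Psi_P = C_Psi_H = 0
    interface = 0.5 * eps_P**2 * np.dot(grad_P, grad_P) \
              + 0.5 * eps_H**2 * np.dot(grad_H, grad_H)
    diffusive = 0.5 * D_sigma * phi_sigma**2          # analogous for TAF, MDE
    adhesion = -(chi_c * phi_sigma + chi_h * phi_ECM) * (phi_P + phi_H)
    return psi + interface + diffusive + adhesion
```

The double well vanishes in both pure phases and is maximal at $\phi_{T}=1/2$, which is what drives the phase separation described above.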
In thin subdomains at the interfaces of the phase fields, stochastic
variations of the phase concentrations are possible. The variations in these
regions of random behavior are bounded by noise parameters
$\phi_{\alpha}^{\omega}$ and noise intensity $\omega_{\alpha}$; the variations
(along with the noise intensity) in $\phi_{\alpha}$, $\alpha\in\\{P,H\\}$, are
restricted to interface regions using function $G_{\alpha}$ given by
(2.3)
$G_{\alpha}(\phi_{P},\phi_{H},\phi_{N})=\omega_{\alpha}\mathcal{H}((\phi_{\alpha}-\phi_{\alpha}^{\omega})(1-\phi_{\alpha}-\phi_{\alpha}^{\omega}))\mathcal{H}((\phi_{T}-\phi_{T}^{\omega})(1-\phi_{T}-\phi_{T}^{\omega})).$
Here, $\mathcal{H}$ denotes the Heaviside step function. Typically, the
randomness in the evolution of species near the interface is incorporated into
the model in the form of a cylindrical Wiener process on $L^{2}(\Omega)$, see
[14, 44, 4]; we add $G_{P}\dot{W}_{P}$ and $G_{H}\dot{W}_{H}$ to the mass
balance equation for $\phi_{P}$ and $\phi_{H}$. To keep the mass balance
equations in standard form, we slightly abuse the standard notation and use
notation $\dot{W}_{\alpha}$ such that
$\dot{W}_{\alpha}\textup{d}t=\textup{d}W_{\alpha}$. Further details on Wiener
processes $W_{\alpha}$ and numerical discretization are provided in Section 4.
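A minimal sketch of the interface-localization function $G_{\alpha}$ of Eq. 2.3 (the threshold and intensity values below are illustrative):

```python
import numpy as np

def heaviside(x):
    """H(x) = 1 for x > 0, else 0 (matching the gating in Eq. 2.3)."""
    return np.where(x > 0.0, 1.0, 0.0)

def noise_gate(phi_a, phi_T, omega=0.1, phi_a_w=0.05, phi_T_w=0.05):
    """G_alpha is nonzero only where phi_alpha and phi_T both lie strictly
    inside (threshold, 1 - threshold), i.e. near a diffuse interface."""
    return omega * heaviside((phi_a - phi_a_w) * (1.0 - phi_a - phi_a_w)) \
                 * heaviside((phi_T - phi_T_w) * (1.0 - phi_T - phi_T_w))
```

In a pure phase ($\phi_{\alpha}=0$ or $1$) one of the two products is negative, the gate closes, and the stochastic forcing is switched off.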
Following these assumptions and conventions, we arrive at the following system
of equations governing the model:
(2.4) $\displaystyle\partial_{t}\phi_{P}+\textup{div}(\phi_{P}v)$
$\displaystyle=\textup{div}(m_{P}(\bm{\phi})\nabla\mu_{P})+S_{P}(\bm{\phi})+G_{P}(\phi_{P},\phi_{H},\phi_{N})\dot{W}_{P},$
$\displaystyle\mu_{P}$
$\displaystyle=\partial_{\phi_{P}}\Psi(\phi_{P},\phi_{H},\phi_{N})-\varepsilon^{2}_{P}\Delta\phi_{P}-\chi_{c}\phi_{\sigma}-\chi_{h}\phi_{{ECM}},$
$\displaystyle\partial_{t}\phi_{H}+\textup{div}(\phi_{H}v)$
$\displaystyle=\textup{div}(m_{H}(\bm{\phi})\nabla\mu_{H})+S_{H}(\bm{\phi})+G_{H}(\phi_{P},\phi_{H},\phi_{N})\dot{W}_{H},$
$\displaystyle\mu_{H}$
$\displaystyle=\partial_{\phi_{H}}\Psi(\phi_{P},\phi_{H},\phi_{N})-\varepsilon^{2}_{H}\Delta\phi_{H}-\chi_{c}\phi_{\sigma}-\chi_{h}\phi_{{ECM}},$
$\displaystyle\partial_{t}\phi_{N}$ $\displaystyle=S_{N}(\bm{\phi}),$
$\displaystyle\partial_{t}\phi_{\sigma}+\textup{div}(\phi_{\sigma}v)$
$\displaystyle=\textup{div}(m_{\sigma}(\bm{\phi})(D_{\sigma}\nabla\phi_{\sigma}\\!-\\!\chi_{c}\nabla(\phi_{P}+\phi_{H}))+S_{\sigma}(\bm{\phi})+J_{\sigma
v}(\phi_{\sigma},p,\Pi_{\Gamma}\phi_{v},\Pi_{\Gamma}p_{v})\delta_{\Gamma},$
$\displaystyle\partial_{t}\phi_{{MDE}}+\textup{div}(\phi_{{MDE}}v)$
$\displaystyle=\textup{div}(m_{{MDE}}(\bm{\phi})D_{{MDE}}\nabla\phi_{{MDE}})+S_{{MDE}}(\bm{\phi}),$
$\displaystyle\partial_{t}\phi_{{TAF}}+\textup{div}(\phi_{{TAF}}v)$
$\displaystyle=\textup{div}(m_{{TAF}}(\bm{\phi})D_{{TAF}}\nabla\phi_{{TAF}})+S_{{TAF}}(\bm{\phi}),$
$\displaystyle\partial_{t}\phi_{{ECM}}$ $\displaystyle=S_{{ECM}}(\bm{\phi}),$
$\displaystyle-\textup{div}(K\nabla p)$
$\displaystyle=J_{pv}(p,\Pi_{\Gamma}p_{v})\delta_{\Gamma}-\textup{div}(KS_{p}(\bm{\phi},\mu_{P},\mu_{H})),$
$\displaystyle v$ $\displaystyle=-K(\nabla
p-S_{p}(\bm{\phi},\mu_{P},\mu_{H})),$
in the space-time domain $\Omega\times(0,T)$ and we supplement the system with
homogeneous Neumann boundary conditions. In the above set of governing
equations, the velocity $v$ is given by modified Darcy’s law, where $K$
denotes the hydraulic conductivity. The source term $S_{p}$ (defined below)
represents a form of the elastic Korteweg force, e.g., see [20], and includes
a correction of the chemical potential by the haptotaxis and chemotaxis
adhesion terms following [24]. Here $J_{pv}$ and $J_{\sigma v}$ are the fluid
flux and nutrient flux as described in Subsection 2.3. We consider the
following choices of the coupling source functions; see [21],
(2.5) $\displaystyle S_{P}(\bm{\phi})$
$\displaystyle=\lambda^{\\!\textup{pro}}_{P}\phi_{\sigma}\phi_{P}(1-\phi_{T})-\lambda^{\\!\textup{deg}}_{P}\phi_{P}-\lambda_{P\\!H}\mathcal{H}(\sigma_{P\\!H}-\phi_{\sigma})\phi_{P}+\lambda_{H\\!P}\mathcal{H}(\phi_{\sigma}-\sigma_{H\\!P})\phi_{H},$
$\displaystyle S_{H}(\bm{\phi})$
$\displaystyle=\lambda^{\\!\textup{pro}}_{H}\phi_{\sigma}\phi_{H}(1-\phi_{T})-\lambda^{\\!\textup{deg}}_{H}\phi_{H}+\lambda_{P\\!H}\mathcal{H}(\sigma_{P\\!H}-\phi_{\sigma})\phi_{P}-\lambda_{H\\!P}\mathcal{H}(\phi_{\sigma}-\sigma_{H\\!P})\phi_{H}$
$\displaystyle\qquad-\lambda_{H\\!N}\mathcal{H}(\sigma_{H\\!N}-\phi_{\sigma})\phi_{H},$
$\displaystyle S_{N}(\bm{\phi})$
$\displaystyle=\lambda_{H\\!N}\mathcal{H}(\sigma_{H\\!N}-\phi_{\sigma})\phi_{H},$
$\displaystyle S_{{ECM}}(\bm{\phi})$
$\displaystyle=-\lambda^{\\!\textup{deg}}_{{ECM}}\phi_{{ECM}}\phi_{{MDE}}+\lambda^{\\!\textup{pro}}_{{ECM}}\phi_{\sigma}(1-\phi_{{ECM}})\mathcal{H}(\phi_{{ECM}}-\phi^{\text{pro}}_{{ECM}}),$
$\displaystyle S_{\sigma}(\bm{\phi})$
$\displaystyle=-\lambda^{\\!\textup{pro}}_{P}\phi_{\sigma}\phi_{P}-\lambda^{\\!\textup{pro}}_{H}\phi_{\sigma}\phi_{H}+\lambda^{\\!\textup{deg}}_{P}\phi_{P}+\lambda^{\\!\textup{deg}}_{H}\phi_{H}-\lambda^{\\!\textup{pro}}_{{ECM}}\phi_{\sigma}(1-\phi_{{ECM}})\mathcal{H}(\phi_{{ECM}}-\phi^{\text{pro}}_{{ECM}})$
$\displaystyle\qquad+\lambda^{\\!\textup{deg}}_{{ECM}}\phi_{{ECM}}\phi_{{MDE}},$
$\displaystyle S_{{MDE}}(\bm{\phi})$
$\displaystyle=-\lambda^{\\!\textup{deg}}_{{MDE}}\phi_{{MDE}}+\lambda^{\\!\textup{pro}}_{{MDE}}(\phi_{P}+\phi_{H})\phi_{{ECM}}\frac{\sigma_{H\\!P}}{\sigma_{H\\!P}+\phi_{\sigma}}(1-\phi_{{MDE}})-\lambda^{\\!\textup{deg}}_{{ECM}}\phi_{{ECM}}\phi_{{MDE}},$
$\displaystyle S_{{TAF}}(\bm{\phi})$
$\displaystyle=\lambda^{\\!\textup{pro}}_{{TAF}}(1-\phi_{{TAF}})\phi_{H}\mathcal{H}(\phi_{H}-\phi_{H_{P}})-\lambda_{TAF}^{\deg}\phi_{TAF},$
$\displaystyle S_{p}(\bm{\phi},\mu_{P},\mu_{H})$
$\displaystyle=(\mu_{P}+\chi_{c}\phi_{\sigma}+\chi_{h}\phi_{{ECM}})\nabla\phi_{P}+(\mu_{H}+\chi_{c}\phi_{\sigma}+\chi_{h}\phi_{{ECM}})\nabla\phi_{H}.$
Here, $\lambda^{\\!\textup{pro}}_{\alpha}$ and
$\lambda^{\\!\textup{deg}}_{\alpha}$ denote the proliferation and degradation
rate of the $\alpha$-th species, respectively, $\lambda_{\alpha\beta}$ the
transition rate from the $\alpha$-th to the $\beta$-th volume fraction,
$\sigma_{\alpha\beta}$ the corresponding nutrient threshold for the
transition, and $\mathcal{H}$ is the Heaviside step function. Further,
$\phi^{\text{pro}}_{{ECM}}$ denotes the threshold level for the ECM density in
order to activate the production of ECM fibers. Moreover, we introduce the
projection $\Pi_{\Gamma}$ of the 1D quantities onto the cylinder $\Gamma$ by
extending the function values, $\Pi_{\Gamma}\phi_{v}(s)=\phi_{v}(s_{i})$ for
all $s\in\partial B_{R_{i}}(s_{i})$.
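As an illustration of how the Heaviside-gated transitions in Eq. 2.5 act, here is a hedged Python sketch of the necrotic and proliferative source terms (all rates and thresholds are placeholder values, not the calibrated ones used in Section 5):

```python
def H(x):
    """Heaviside step: 1 if x > 0, else 0."""
    return 1.0 if x > 0 else 0.0

def S_N(phi_sigma, phi_H, lam_HN=1.0, sigma_HN=0.2):
    """Necrotic source: hypoxic cells die once nutrients drop below sigma_HN."""
    return lam_HN * H(sigma_HN - phi_sigma) * phi_H

def S_P(phi, lam_pro=1.0, lam_deg=0.1, lam_PH=1.0, lam_HP=1.0,
        sigma_PH=0.4, sigma_HP=0.6):
    """Proliferative source: growth, degradation, and the gated transitions
    proliferative <-> hypoxic, mirroring the structure of Eq. 2.5."""
    phi_P, phi_H, phi_N, phi_sigma = phi
    phi_T = phi_P + phi_H + phi_N
    return (lam_pro * phi_sigma * phi_P * (1.0 - phi_T)
            - lam_deg * phi_P
            - lam_PH * H(sigma_PH - phi_sigma) * phi_P
            + lam_HP * H(phi_sigma - sigma_HP) * phi_H)
```

The nutrient thresholds $\sigma_{P\!H}$, $\sigma_{H\!P}$, and $\sigma_{H\!N}$ act as switches that move mass between the three tumor phases.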
### 2.3. Interaction between the 3D and 1D model
We apply the Kedem–Katchalsky law [26] to quantify the flux of nutrients
across the vessel surface; i.e., $J_{\sigma v}$ in Eq. 2.4 is given by
(2.6) $J_{\sigma
v}(\overline{\phi}_{\sigma},\overline{p},\phi_{v},p_{v})=(1-r_{\sigma})J_{pv}(\overline{p},p_{v})\phi_{\sigma}^{v}+L_{\sigma}(\phi_{v}-\overline{\phi}_{\sigma}),$
where $J_{pv}$ denotes the flux, which is caused by the flux of blood plasma
from the vessels into the tissue or vice versa. Further, $J_{pv}$ is governed
by Starling’s law [56], i.e.,
$J_{pv}(\overline{p},p_{v})=L_{p}(p_{v}-\overline{p})$ where $\overline{p}$
denotes an averaged pressure over the circumference of cylinder cross-sections
and is computed in the following way: For each point $s_{i}$ on the curve
$\Lambda_{i}$, we consider the circle $\partial B_{R_{i}}(s_{i})$ of radius
$R_{i}$, which is perpendicular to $\Lambda_{i}$; see Figure 2. Thus, the
tissue pressure $p$ is averaged with respect to $\partial B_{R_{i}}(s_{i})$,
$\overline{p}(s_{i})=\frac{1}{2\pi R_{i}}\int_{\partial
B_{R_{i}}(s_{i})}p|_{\Gamma}\,\textup{d}S.$
The part $J_{pv}\phi_{\sigma}^{v}$ in the Kedem–Katchalsky law Eq. 2.6 is
weighted by a factor $1-r_{\sigma}$, $r_{\sigma}$ being a reflection
parameter, introduced to account for the permeability of the vessel wall with
respect to the nutrients. The value of $\phi_{\sigma}^{v}$ is set to
$\phi_{v}$ for $p_{v}\geq\overline{p}$ and to $\overline{\phi}_{\sigma}$
otherwise. The second term on the right hand side of Eq. 2.6 is a Fickian type
law, accounting for the tendency of the nutrients to balance their
concentration levels, where the permeability of the vessel wall is represented
by the parameter $L_{\sigma}$.
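The Starling and Kedem–Katchalsky fluxes can be sketched as follows (parameter values are illustrative; the upwind choice of $\phi_{\sigma}^{v}$ follows the rule stated above):

```python
def J_pv(p_bar, p_v, L_p=1.0):
    """Starling's law: plasma flux across the vessel wall."""
    return L_p * (p_v - p_bar)

def J_sigma_v(phi_sigma_bar, p_bar, phi_v, p_v,
              r_sigma=0.5, L_p=1.0, L_sigma=1.0):
    """Kedem-Katchalsky nutrient flux (Eq. 2.6): advective part weighted by
    (1 - r_sigma) with upwinded phi_sigma^v, plus a Fickian exchange term."""
    phi_up = phi_v if p_v >= p_bar else phi_sigma_bar
    return (1.0 - r_sigma) * J_pv(p_bar, p_v, L_p) * phi_up \
           + L_sigma * (phi_v - phi_sigma_bar)
```

With $p_{v}>\overline{p}$, plasma and nutrients leave the vessel; reversing the pressure difference reverses both fluxes.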
The interaction between the vascular network and the tissue occurs at the
vessel surface $\Gamma$, and thus, we concentrate the flux $J_{\sigma v}$ by
means of the Dirac measure $\delta_{\Gamma}$; i.e., we define
$\delta_{\Gamma}(\varphi)=\int_{\Gamma}\varphi|_{\Gamma}\,\textup{d}S,$
for a sufficiently smooth test function $\varphi$ with compact support.
### 2.4. One-dimensional model for transport in the vascular network
The one-dimensional vessel variables $\phi_{v}$ and $p_{v}$ represent averages
across the cross-sections of the blood vessels. Thus, the one-dimensional
variables
$\phi_{v}$ and $p_{v}$ on a 1D vessel $\Lambda_{i}$, $i\in\\{1,\dots,N\\}$,
depend only on $s_{i}$. See also [33] for more details related to the
derivation of the 1D pipe flow and transport models. With these conventions,
the 1D model equations for flow and transport on $\Lambda_{i}$ are given by
(2.7) $\displaystyle\partial_{t}\phi_{v}+\partial_{s_{i}}(v_{v}\phi_{v})$
$\displaystyle=\partial_{s_{i}}(m_{v}(\phi_{v})D_{v}\partial_{s_{i}}\phi_{v})-2\pi
R_{i}J_{\sigma v}(\overline{\phi}_{\sigma},\overline{p},\phi_{v},p_{v}),$
$\displaystyle R_{i}^{2}\pi\;\partial_{s_{i}}(K_{v,i}\;\partial_{s_{i}}p_{v})$
$\displaystyle=2\pi R_{i}J_{pv}(\overline{p},p_{v}).$
Here, we have introduced the permeability
$K_{v,i}=\tfrac{1}{8}R_{i}^{2}/\mu_{bl}$ of the $i$-th vessel with $\mu_{bl}$
being the viscosity of blood. We assign $\mu_{bl}$ a constant value, i.e.,
non-Newtonian behavior of blood is not considered. The diffusivity parameter
$D_{v}$ is set to the same value as $D_{\sigma}$. The blood velocity $v_{v}$
is given by the Darcy equation $v_{v}=-K_{v,i}\partial_{s_{i}}p_{v}.$
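A short sketch of the Poiseuille-based permeability $K_{v,i}=\tfrac{1}{8}R_{i}^{2}/\mu_{bl}$ and the resulting Darcy-type velocity (the default viscosity value is a placeholder):

```python
def vessel_permeability(R, mu_bl=4e-3):
    """K_{v,i} = R^2 / (8 * mu_bl) for Poiseuille flow in a cylinder."""
    return R**2 / (8.0 * mu_bl)

def blood_velocity(R, dp_ds, mu_bl=4e-3):
    """Darcy-type relation v_v = -K_{v,i} * dp_v/ds along the vessel axis."""
    return -vessel_permeability(R, mu_bl) * dp_ds
```

Note the quadratic dependence on the radius: the radius adaptation discussed in Section 3 therefore changes the permeability strongly.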
In order to interconnect $p_{v}$ and $\phi_{v}$ on $\Lambda_{i}$ at the inner
network nodes $\bm{x}\in\partial\Lambda_{i}\setminus\partial\Lambda$, we
require continuity
conditions on $p_{v}$ and $\phi_{v}$. Moreover, we enforce conservation of
mass to obtain a physically relevant solution. To formulate these coupling
conditions in a mathematical way, we define for each bifurcation point
$\bm{x}$ an index set
$N(\bm{x})=\\{i\in\\{1,\ldots,N\\}\mid\bm{x}\in\partial\Lambda_{i}\\}.$
We state the following continuity and mass conservation conditions at an inner
node $\bm{x}\in\partial\Lambda_{i}$:
$\displaystyle p_{v}|_{\Lambda_{i}}(\bm{x})-p_{v}|_{\Lambda_{j}}(\bm{x})$
$\displaystyle=0,\quad\text{ for all }j\in N(\bm{x})\backslash\\{i\\},$
$\displaystyle\phi_{v}|_{\Lambda_{i}}(\bm{x})-\phi_{v}|_{\Lambda_{j}}(\bm{x})$
$\displaystyle=0,\quad\text{ for all }j\in N(\bm{x})\backslash\\{i\\},$
$\displaystyle\sum_{j\in
N(\bm{x})}-\frac{R_{j}^{4}\pi}{8\mu_{\mathrm{bl}}}\frac{\partial
p_{v}}{\partial s_{j}}\Big{|}_{\Lambda_{j}}(\bm{x})$ $\displaystyle=0,$
$\displaystyle\sum_{j\in
N(\bm{x})}\Big{(}v_{v}\phi_{v}-m_{v}(\phi_{v})D_{v}\frac{\partial\phi_{v}}{\partial
s_{j}}\Big{)}\Big{|}_{\Lambda_{j}}(\bm{x})$ $\displaystyle=0.$
## 3\. Angiogenesis: Network Growth Algorithm
As noted earlier, angiogenesis is triggered by an increased TAF concentration
$\phi_{{TAF}}$ around the pre-existing blood vessels. After the TAF molecules
are emitted by the dying hypoxic tumor cells, they move through the tissue
matrix and may encounter sensor ligands on the vessel surfaces. If the TAF
concentration is large enough at the vessel surfaces, an increased number of
sensors in the vessel wall are activated and a reproduction of endothelial
cells forming the vessel walls is initiated. As a result, the affected vessels
can grow, and two different kinds of growth can occur. In the medical
literature, see, e.g., [54, 16], these processes are referred
to as apical growth and sprouting of vessels. The term apical growth is
derived from the term apex denoting the tip of a blood vessel, i.e., apical
growth is the type of growth occurring at the tip of a vessel. On the other
hand, the sprouting of new vessels results in the formation of new vessels at
other places on the vessel surface. In order to increase or decrease the flow
of blood and nutrients through the vessels, the newly formed blood vessels can
adapt their radii, which is triggered, e.g., by an increased wall shear stress
at the inner side of the vessel walls. Combining
these mechanisms, an increased supply of nutrients for both the healthy and
cancerous tissue can be achieved such that the tumor can continue to grow.
In the following, we describe how an angiogenesis step can be realized within
our mathematical model. It is assumed that in such a step the apical growth is
considered first and then the sprouting of new vessels is simulated. At the
end of an angiogenesis step, the radii of the vessels are adapted to regulate
the blood flow. The 1D network that is updated during an angiogenesis step is
denoted by $\Lambda_{{\text{old}}}$.
### 3.1. Apical growth
Since the apical growth occurs only at the tips of the blood vessels, we
consider all the boundary nodes $\bm{x}$ of the network
$\Lambda_{{\text{old}}}$ contained in the inner part of $\Omega$, i.e.,
$\bm{x}\in\partial\Lambda_{{\text{old}}}$ and $\bm{x}\notin\partial\Omega$.
Moreover, we assume that $\bm{x}$ is contained in the segment
$\Lambda_{i}\subset\Lambda_{{\text{old}}}$. At $\bm{x}$, the value of the TAF
concentration is denoted by $\phi_{{TAF}}(\bm{x})$. If this value exceeds a
certain threshold $Th_{{TAF}}$, i.e., $\phi_{{TAF}}(\bm{x})\geq Th_{{TAF}}$,
the tip of the corresponding vessel is considered as a candidate for growth.
There are two types of growth that are allowed to occur at the apex of a
vessel: either the vessel can further elongate or it can bifurcate. In order
to decide which event occurs, a probabilistic method is used. According to
[57] and the references therein, the ratio $r_{i}=l_{i}/R_{i}$ of the vessel
$\Lambda_{i}$ follows a log-normal distribution:
(3.1)
$p_{b}(r)\sim\mathcal{L}\mathcal{N}(r,\mu_{r},\sigma_{r})=\frac{1}{r\sqrt{2\pi\sigma_{r}^{2}}}\exp\bigg{(}-\frac{({\ln
r}-\mu_{r})^{2}}{2\sigma_{r}^{2}}\bigg{)}.$
The parameters $\mu_{r}$ and $\sigma_{r}$ represent the mean value and
standard deviation of the probability distribution $p_{b}$, respectively.
Using the cumulative distribution function of $p_{b}$, we decide whether a
bifurcation is considered or not. This means that a bifurcation at
$\bm{x}\in\partial\Lambda_{i}\cap\partial\Lambda$ is formed with the
probability:
(3.2) $P_{b}(r)=\Phi\bigg{(}\frac{\ln
r-\mu_{r}}{\sigma_{r}}\bigg{)}=\frac{1}{2}+\frac{1}{2}\text{erf}\bigg{(}\frac{\ln
r-\mu_{r}}{\sqrt{2\sigma_{r}^{2}}}\bigg{)},$
where $\Phi$ denotes the standard normal cumulative distribution function and
$x\mapsto\text{erf}(x)$ the Gaussian error function. We refer to Figure 3 for
an illustration of an exemplary bifurcating vessel. Moreover, we depict the
distribution of the ratio $l_{i}/R_{i}$ according to Eq. 3.1, the radii of its
branches, see Eq. 3.5 below, and the probability of the occurrence of a
bifurcation, see Eq. 3.2.
Figure 3. Given a vessel with length $l_{i}$ and radius $R_{i}$, we plot the
probability of the occurrence of a bifurcation (red curve in figure (b)), the
ratio of its length over the radius (blue curve in figure (a)), and the
distribution of the radii of the sproutings (green curve in figure (c)); we
choose $R_{i}=1.5\cdot 10^{-2}$, $R_{c}=2^{-\frac{1}{3}}R_{i}$ according to
Eq. 3.5, $\mu_{r}=1$, $\sigma_{r}=0.2$, $R_{\min}=9\cdot 10^{-3}$,
$R_{\max}=3.5\cdot 10^{-2}$ according to Table 2,
$\overline{R}_{\max}=\min\{R_{\max},R_{i}\}=R_{i}$.
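The decision rule (3.2) is straightforward to evaluate numerically; the following Python sketch computes the bifurcation probability from the length-to-radius ratio using the error function (the default values of $\mu_{r}$ and $\sigma_{r}$ are the illustrative ones from Figure 3):

```python
import math

def bifurcation_probability(l, R, mu_r=1.0, sigma_r=0.2):
    """Probability (3.2) that a vessel tip bifurcates: the log-normal CDF
    of the length-to-radius ratio r = l / R, via the error function."""
    r = l / R
    return 0.5 + 0.5 * math.erf((math.log(r) - mu_r) / math.sqrt(2.0 * sigma_r**2))

# At the median of the log-normal (ln r = mu_r) the probability is 1/2;
# longer, thinner vessels are increasingly likely to bifurcate.
p_median = bifurcation_probability(math.e, 1.0)
p_long = bifurcation_probability(2.0 * math.e, 1.0)
```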
If a single vessel is formed at $\bm{x}$, the direction of growth
$\mathbf{d}_{g}$ is based on the TAF concentration:
(3.3)
$\mathbf{d}_{g}(\bm{x})=\frac{\nabla\phi_{{TAF}}(\bm{x})}{\left\|\nabla\phi_{{TAF}}(\bm{x})\right\|}+\lambda_{g}\frac{\mathbf{d}_{i}}{\left\|\mathbf{d}_{i}\right\|},$
where $\|\cdot\|$ denotes the Euclidean norm. The vector
$\mathbf{d}_{i}=\bm{x}_{i,2}-\bm{x}_{i,1}$ is the orientation of the vessel
$\Lambda_{i}$, and the value
$\lambda_{g}\in\left(0,1\right]$ represents a
regularization parameter that can be used to circumvent the formation of sharp
bendings and corners. This is necessary if the TAF gradient at $\bm{x}$
encloses an acute angle with $\mathbf{d}_{i}$. The radius $R_{i^{\prime}}$ of
the new vessel $\Lambda_{i^{\prime}}$ is taken over from $\Lambda_{i}$, i.e.,
$R_{i^{\prime}}=R_{i}$. Having the radius $R_{i^{\prime}}$ at hand, we use the
ratio distribution (3.1) to determine the length $l_{i^{\prime}}$ of
$\Lambda_{i^{\prime}}$.
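The growth direction (3.3) can be sketched in a few lines of Python; the value of $\lambda_{g}$ below is an illustrative choice:

```python
import math

def growth_direction(grad_taf, d_i, lam_g=0.5):
    """Growth direction (3.3): normalized TAF gradient plus a contribution
    lam_g * d_i / ||d_i|| of the current vessel orientation, which
    regularizes sharp bends and corners."""
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    g, d = unit(grad_taf), unit(d_i)
    return tuple(gc + lam_g * dc for gc, dc in zip(g, d))

# Gradient along x, vessel oriented along y: the resulting direction is
# tilted between the two instead of forming a sharp corner.
d_g = growth_direction((2.0, 0.0, 0.0), (0.0, 3.0, 0.0))
```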
Before $\Lambda_{i^{\prime}}$ is incorporated into the network
$\Lambda_{{\text{old}}}$, we check, whether it intersects another vessel in
the network. If this is the case, $\Lambda_{i^{\prime}}$ is not added to
$\Lambda_{{\text{old}}}$. In order to test whether a new vessel intersects an
existing vessel that is not directly connected, we compute the distance
between the centerlines of the new vessel and the existing vessel. If this
distance is smaller than the sum of the radii for any of the existing vessels,
the new vessel is considered too close to existing vessels, and, therefore,
the new vessel is not inserted into the network.
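The intersection test described above can be sketched as follows. For brevity, the minimal distance between two straight centerlines is approximated by sampling rather than by an exact segment-segment distance computation; all names and the sampling resolution are illustrative:

```python
import math

def centerline_distance(a0, a1, b0, b1, n=64):
    """Approximate the minimal distance between two straight vessel
    centerlines (segments a0-a1 and b0-b1) by sampling n points on each;
    a simple stand-in for an exact segment-segment distance computation."""
    def lerp(p, q, t):
        return tuple(pc + t * (qc - pc) for pc, qc in zip(p, q))
    def dist(p, q):
        return math.sqrt(sum((pc - qc) ** 2 for pc, qc in zip(p, q)))
    ts = [i / (n - 1) for i in range(n)]
    return min(dist(lerp(a0, a1, s), lerp(b0, b1, t)) for s in ts for t in ts)

def too_close(a0, a1, Ra, b0, b1, Rb):
    """Reject a candidate vessel whose centerline distance to an existing
    vessel is below the sum of the radii, as described in the text."""
    return centerline_distance(a0, a1, b0, b1) < Ra + Rb

# Two parallel vessels 0.05 apart with radii 0.015 each: 0.05 > 0.03,
# so the candidate would be accepted.
accept = not too_close((0, 0, 0), (1, 0, 0), 0.015, (0, 0.05, 0), (1, 0.05, 0), 0.015)
```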
In the case of bifurcations, we have to choose the radii, orientations and
lengths of the new branches $b_{1}$ and $b_{2}$. The radii of the new branches
are computed based on a Murray-type law. It relates the radius $R_{i}$ of the
father vessel to the radius $R_{i,b_{1}}$ of branch $b_{1}$ and the radius
$R_{i,b_{2}}$ of branch $b_{2}$ as follows [40]:
(3.4) $R_{i}^{\gamma}=R_{i,b_{1}}^{\gamma}+R_{i,b_{2}}^{\gamma},$
where $\gamma$ denotes the bifurcation exponent. According to [57], $\gamma$
can vary between $2.0$ and $3.5$. In addition to (3.4), we require an
additional equation to determine the radii of the branches. Towards this end,
it is assumed that $R_{b_{1}}$ follows a truncated Gaussian normal
distribution:
(3.5) $R_{c}=2^{-\frac{1}{\gamma}}R_{i},\;\qquad
R_{b_{k}}\sim\mathcal{N}^{t}(R,\mu=R_{c},\sigma=R_{c}/35),\;\qquad
k\in\{1,2\},$
which is set to zero outside of the interval $[R_{\min},\overline{R}_{\max}]$
with $\overline{R}_{\max}=\min\{R_{\max},R_{i}\}$; we refer to Table 2 for the
choice of the parameters $R_{\min}$ and $R_{\max}$. In this way, the radius of
the parent vessel acts as a natural upper bound for the radii of its
branches.
The selection of $R_{b_{k}}$ is motivated as follows: Using the radius $R_{i}$
of $\Lambda_{i}$, we compute the expected radius $R_{c}$ resulting from
Murray’s law for a symmetric bifurcation $(R_{b_{1}}=R_{b_{2}})$. Here,
$R_{c}$ is used as a mean value for a Gaussian normal distribution, with a
small standard deviation. This yields bifurcations that are slightly deviating
from a symmetric bifurcation which is in accordance with Murray’s law. Having
$R_{b_{1}}$ and $R_{b_{2}}$ at hand, we compute the corresponding lengths
$l_{b_{1}}$ and $l_{b_{2}}$ as in the case of a single vessel.
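Steps (3.4)-(3.5) can be sketched as follows; the rejection loop implements the truncation of the Gaussian, the upper truncation bound is taken as the minimum of $R_{\max}$ and the parent radius so that the parent bounds its branches, and the parameter values follow Figure 3 and Table 2:

```python
import random

def branch_radii(R_i, gamma=3.0, R_min=9e-3, R_max=3.5e-2, rel_sigma=1.0 / 35.0):
    """Sample the radii of the two branches of a bifurcation: the mean R_c is
    the symmetric-bifurcation radius from Murray's law (3.4); each radius is
    drawn from a narrow Gaussian around R_c (3.5), truncated by rejection to
    [R_min, min(R_max, R_i)]."""
    R_c = 2.0 ** (-1.0 / gamma) * R_i
    hi = min(R_max, R_i)
    def draw():
        while True:
            R = random.gauss(R_c, rel_sigma * R_c)
            if R_min <= R <= hi:
                return R
    return draw(), draw()

random.seed(1)
R_b1, R_b2 = branch_radii(1.5e-2)   # both close to R_c ~ 1.19e-2
```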
We refer to Figure 4 for the distribution of the radii of the bifurcating
vessels. Note that the ideal case is a symmetric bifurcation, i.e., both radii
coincide with the mean. In addition, we depict two asymmetric cases in which
the radii deviate from the mean.
Figure 4. Distribution of the radii of the bifurcating vessels, choosing
$R_{i}=0.015$, $R_{c}=2^{-\frac{1}{3}}R_{i}$. Examples of bifurcations with
different radii are given, $R=1.08\cdot 10^{-2}$ (case (a)), $R=R_{c}$ (case
(b)), $R=1.25\cdot 10^{-2}$ (case (c)).
The creation of a bifurcation is accomplished by specifying the orientations
of the two branches. At first, we define the plane in which the bifurcation is
contained. The normal vector $\mathbf{n}_{p}$ of this plane is given by the
cross product of the vessel orientation $\mathbf{d}_{i}$ and the growth
direction $\mathbf{d}_{g}$ from the non-bifurcating case:
(3.6)
$\mathbf{n}_{p}(\bm{x})=\frac{\mathbf{d}_{i}\times\mathbf{d}_{g}}{\left\|\mathbf{d}_{i}\times\mathbf{d}_{g}\right\|}.$
The exact location of the plane is determined such that the vessel
$\Lambda_{i}$ is contained in this plane. Further constraints for the
bifurcation configuration are related to the bifurcation angles. In [40, 39],
it is shown how optimality principles like minimum work and minimum energy
dissipation can be utilized to derive formulas relating the radii of the
branches to the branching angles $\alpha_{i}^{(1)}$ and $\alpha_{i}^{(2)}$:
(3.7)
$\cos\big{(}\alpha_{i}^{(1)}\big{)}=\frac{R_{i}^{4}+R_{b_{1}}^{4}-R_{b_{2}}^{4}}{2\cdot
R_{i}^{2}R_{b_{1}}^{2}}\;\text{ and
}\;\cos\big{(}\alpha_{i}^{(2)}\big{)}=\frac{R_{i}^{4}+R_{b_{2}}^{4}-R_{b_{1}}^{4}}{2\cdot
R_{i}^{2}R_{b_{2}}^{2}}.$
The value $\alpha_{i}^{(k)}$ denotes the bifurcation angle between branch
$k\in\{1,2\}$ and the father vessel.
Rotating the vector $\mathbf{d}_{g}$ at $\mathbf{x}$ around the axis defined
by $\mathbf{n}_{p}(\bm{x})$ counterclockwise by
$\alpha_{i}^{(1)}+\alpha_{i}^{(2)}$, we obtain two new growth directions
$\mathbf{d}_{b_{1}}=\mathbf{d}_{g}$ and $\mathbf{d}_{b_{2}}$. These vectors
are used to define the main axes of the two cylinders representing the two
branches. This choice of the growth directions can be considered as a
compromise between the optimality principles provided by [40, 39] and the
tendency of the network to adapt its growth direction to the nutrient demand
of the surrounding tissue. At the end of the apical growth phase, we obtain a
1D network denoted by $\Lambda_{\text{ap}}$.
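A minimal numerical sketch of this construction, assuming unit vectors for $\mathbf{d}_{g}$ and $\mathbf{n}_{p}$, evaluates (3.7) and applies Rodrigues' rotation formula:

```python
import math

def bifurcation_angles(R_i, R_b1, R_b2):
    """Branching angles between each branch and the father vessel, Eq. (3.7)."""
    a1 = math.acos((R_i**4 + R_b1**4 - R_b2**4) / (2.0 * R_i**2 * R_b1**2))
    a2 = math.acos((R_i**4 + R_b2**4 - R_b1**4) / (2.0 * R_i**2 * R_b2**2))
    return a1, a2

def rotate(v, k, theta):
    """Rodrigues' formula: rotate v counterclockwise around the unit axis k."""
    c, s = math.cos(theta), math.sin(theta)
    kv = sum(ki * vi for ki, vi in zip(k, v))
    kxv = (k[1] * v[2] - k[2] * v[1],
           k[2] * v[0] - k[0] * v[2],
           k[0] * v[1] - k[1] * v[0])
    return tuple(vi * c + kxvi * s + ki * kv * (1.0 - c)
                 for vi, kxvi, ki in zip(v, kxv, k))

# Symmetric bifurcation (R_b1 = R_b2 = 2^(-1/3) R_i): both angles coincide,
# and rotating d_g by a1 + a2 around n_p yields the second branch direction.
a1, a2 = bifurcation_angles(1.0, 2.0 ** (-1.0 / 3.0), 2.0 ** (-1.0 / 3.0))
d_b2 = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), a1 + a2)
```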
Input: network $\Lambda_{\text{old}}$; Output: new network $\Lambda_{\text{ap}}$
for each $\mathbf{x}\in\partial\Lambda_{\text{old}}\cap\Omega$ do
  Compute the TAF concentration $\phi_{{TAF}}(\bm{x})$ at $\mathbf{x}$;
  if $\phi_{{TAF}}(\bm{x})\geq Th_{{TAF}}$ then
    Consider the edge $\Lambda_{i}$ containing $\mathbf{x}$, i.e., $\mathbf{x}\in\partial\Lambda_{i}\cap\partial\Lambda_{\text{old}}$, with orientation $\mathbf{d}_{i}$ and radius $R_{i}$;
    Compute the gradient $\nabla\phi_{{TAF}}(\bm{x})$ and the growth direction $\mathbf{d}_{g}$ using (3.3);
    Compute the probability $P_{b}(\mathbf{x})$ given by (3.2);
    Form a bifurcation with probability $P_{b}(\mathbf{x})$;
    if a bifurcation is formed then
      Determine the radii $R_{b_{1}}$ and $R_{b_{2}}$ of the new branches according to (3.4) and (3.5);
      Compute the bifurcation angles $\alpha_{i}^{(1)}$ and $\alpha_{i}^{(2)}$ according to (3.7);
      Rotate $\mathbf{d}_{g}(\bm{x})$ counterclockwise by the angle $\alpha_{i}^{(1)}+\alpha_{i}^{(2)}$ around the rotation axis defined by $\mathbf{n}_{p}(\bm{x})$ (computed using (3.6)) to obtain the second growth direction $\mathbf{d}_{b_{2}}(\bm{x})$;
      Determine the ratios $r_{b_{1}}$ and $r_{b_{2}}$ according to the probability distribution (3.1);
      Construct new edges $\Lambda_{b_{1}}$ and $\Lambda_{b_{2}}$ with radii $R_{b_{1}}$ and $R_{b_{2}}$, lengths $l_{b_{1}}=r_{b_{1}}\cdot R_{b_{1}}$ and $l_{b_{2}}=r_{b_{2}}\cdot R_{b_{2}}$, and orientations $\mathbf{d}_{b_{1}}=\mathbf{d}_{g}(\mathbf{x})$ and $\mathbf{d}_{b_{2}}$;
      if $\Lambda_{b_{1}}$ and $\Lambda_{b_{2}}$ do not intersect existing vessels and $R_{b_{1}},R_{b_{2}}\in[R_{\min},\overline{R}_{\max}]$ then
        Add $\Lambda_{b_{1}}$ and $\Lambda_{b_{2}}$ to $\Lambda_{i}$ at the node $\mathbf{x}$;
      end if
    else
      Set the radius of the new edge to $R_{i}$;
      Determine the ratio $r_{i}$ according to the probability distribution (3.1);
      Construct a new edge $\Lambda_{i^{\prime}}$ with radius $R_{i}$, length $l_{i^{\prime}}=r_{i}\cdot R_{i}$, and orientation $\mathbf{d}_{g}(\mathbf{x})$;
      if $\Lambda_{i^{\prime}}$ does not intersect existing vessels and $R_{i}\in[R_{\min},\overline{R}_{\max}]$ then
        Add $\Lambda_{i^{\prime}}$ to $\Lambda_{i}$ at the node $\mathbf{x}$;
      end if
    end if
  end if
end for
Algorithm 1 Apical growth algorithm
### 3.2. Sprouting of new vessels
In the second phase of the angiogenesis process, we examine each vessel or
segment $\Lambda_{i}\subset\Lambda_{\text{ap}}$. As has already been
mentioned, the sprouting of inner vessels is triggered by TAF molecules
binding to sensor ligands in the vessel walls. Therefore, we determine, within
the middle region of each segment, i.e.,
$\Lambda_{i}(s_{i})\subset\Lambda_{i},\;s_{i}\in(0.25,0.75)$, the location at
which the averaged TAF concentration $\overline{\phi}_{{TAF}}$ attains its
maximum $\overline{\phi}_{{TAF}}^{(\max)}$. As in the previous section,
$\overline{\phi}_{{TAF}}$ is determined by means of an integral expression:
$\overline{\phi}_{{TAF}}(s_{i})=\frac{1}{2\pi R_{i}}\int_{\partial
B_{R_{i}}(s_{i})}\phi_{{TAF}}(\bm{x})\,\textup{d}S,\;s_{i}\in(0.25,0.75).$
We consider only the parameters $s_{i}\in(0.25,0.75)$, since we want to avoid
a sprouting of new vessels at the boundaries of $\Lambda_{i}$. Furthermore,
boundary edges are not considered, and we demand that the edges should have a
minimal length $l_{\min}$ to avoid the formation of tiny vessels. If
$\overline{\phi}_{{TAF}}^{(\max)}$ is larger than $Th_{{TAF}}$, we attach a
new vessel $\Lambda_{i^{\prime}}$ at $\bm{x}$. As in the case of apical
growth, the local TAF gradient is considered as the preferred growth direction
of the new vessel:
$\mathbf{d}_{g}(\bm{x})=\frac{\nabla\phi_{{TAF}}(\bm{x})}{\left\|\nabla\phi_{{TAF}}(\bm{x})\right\|}.$
In order to prevent $\Lambda_{i^{\prime}}$ from growing in the direction of
$\Lambda_{i}$, we demand that $\mathbf{d}_{g}$ encloses an angle of at least
$\frac{10}{180}\pi$ with $\Lambda_{i}$. The new radius $R_{i^{\prime}}$ is
computed as follows:
$\tilde{R}_{i}={\zeta}R_{i},\qquad\tilde{R}_{i^{\prime}}=\big(\tilde{R}_{i}^{\gamma}-R_{i}^{\gamma}\big)^{\frac{1}{\gamma}}=({\zeta}^{\gamma}-1)^{\frac{1}{\gamma}}R_{i},\qquad R_{i^{\prime}}=\begin{cases}R_{i^{\prime}}\sim\mathcal{U}(1.25\cdot R_{\min},\tilde{R}_{i^{\prime}})&\text{ if }1.25\cdot R_{\min}<\tilde{R}_{i^{\prime}},\\ R_{\min}&\text{ otherwise.}\end{cases}$
Here, ${\zeta}>1$ is a fixed parameter, $R_{\min}$ denotes the minimal radius
of a blood vessel, and $\mathcal{U}$ stands for the uniform distribution,
i.e., the new segment radius $R_{i^{\prime}}$ is chosen uniformly from the
interval $[1.25\cdot R_{\min},\tilde{R}_{i^{\prime}}]$. For a given radius
$R_{i^{\prime}}$, the new length $l_{i^{\prime}}$ of $\Lambda_{i^{\prime}}$ is
determined by means of (3.1).
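This radius rule can be sketched as follows; the $\gamma$-power (Murray-type) form of the budget $\tilde{R}_{i^{\prime}}$ is assumed in analogy to (3.4), and the values of $\zeta$ and $\gamma$ are illustrative:

```python
import random

def sprout_radius(R_i, zeta=1.3, gamma=3.0, R_min=9e-3):
    """Radius of a sprouting vessel: a Murray-type budget tilde_R' is derived
    from zeta * R_i (gamma-power form assumed, by analogy to (3.4)); then a
    uniform draw on [1.25 R_min, tilde_R'] is used whenever that interval is
    nonempty, and R_min otherwise. zeta and gamma are illustrative values."""
    R_tilde = (zeta**gamma - 1.0) ** (1.0 / gamma) * R_i
    if 1.25 * R_min < R_tilde:
        return random.uniform(1.25 * R_min, R_tilde)
    return R_min

random.seed(2)
r_new = sprout_radius(1.5e-2)   # lies in [1.25 * R_min, tilde_R']
```

For very thin parent vessels the interval is empty and the sprout falls back to the minimal radius $R_{\min}$.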
Finally, three new vessels are added to the network $\Lambda_{\text{ap}}$: the
two parts $\Lambda_{i_{1}}$ and $\Lambda_{i_{2}}$ into which $\Lambda_{i}$ is
split at the attachment point, and the sprout $\Lambda_{i^{\prime}}$. As in
the case of apical growth, we test whether a new vessel intersects an existing
vessel, before we incorporate $\Lambda_{i^{\prime}}$ into
$\Lambda_{\text{ap}}$. In addition, we check whether a terminal vessel, i.e.,
a vessel that is part of $\partial\Lambda_{\text{ap}}$ can be linked to
another vessel. For this purpose, the distance of the point
$\bm{x}_{b}\in\partial\Lambda_{\text{ap}}\cap\partial\Lambda_{i}$ to its
neighboring network nodes that are not directly linked to $\bm{x}_{b}$ is
computed. If the distance is below a certain threshold
$\text{dist}_{\text{link}}$, the corresponding network node is considered as a
candidate to be linked with $\bm{x}_{b}$. If $\bm{x}_{b}$ is part of an
artery, i.e., of the high-pressure region of $\Lambda_{\text{ap}}$, we
preferably link it with the candidate at minimal distance whose pressure lies
in the low-pressure (venous) region. If $\bm{x}_{b}$ is part of a vein, the
roles are switched.
### 3.3. Adaption of the vessel radii
In the final phase of the angiogenesis step, we iterate over all the vessels
$\Lambda_{i}\subset\Lambda_{\text{sp}}$ and compute for each vessel the wall
shear stress $\mathbf{\tau}_{w}$ by:
$\mathbf{\tau}_{w,i}=\frac{4\,\mu_{\text{bl}}}{\pi
R_{i}^{3}}\left|Q_{i}\right|,\qquad Q_{i}=-K_{v,i}\frac{R_{i}^{2}\pi\,\Delta
p_{v,i}}{l_{i}},$
where $\Delta p_{v,i}$ is the pressure drop along $\Lambda_{i}$. By means of
the 1D wall shear stress, the wall shear stress stimulus for the vessel
adaption is given by [60]:
(3.8) $S_{{\text{WSS}},i}={\ln}(\mathbf{\tau}_{w,i}+\tau_{\text{ref}}).$
Here, $\tau_{\text{ref}}$ is a constant that is included to avoid a singular
behavior at lower wall shear stresses [50]. Following the model for radius
adaptation in [58], the change in radius $\Delta R_{i}$ over a time step
$\Delta t$ is assumed to be proportional to the stimulus $S_{{\text{WSS}},i}$
and current radius $R_{i}$:
(3.9) $\Delta R_{i}=\left(k_{{\text{WSS}}}\cdot
S_{{\text{WSS}},i}-k_{s}\right)\cdot\Delta t\cdot R_{i},$
where $k_{s}$ is a constant that controls the natural shrinking tendency of
the blood vessel and $k_{{\text{WSS}}}$ a proportionality constant that
controls the effect of stimulus $S_{{\text{WSS}},i}$. Once we have $\Delta
R_{i}$, we can compute the updated radius of vessels using
$R_{\textup{new},i}=R_{i}+\Delta R_{i}$. If
$R_{\textup{new},i}\in\left[R_{\min},1.25\cdot R_{i}\right],$
where $R_{\min}$ is some fixed constant, we update the vessel radius of
$\Lambda_{i}$. Otherwise, if $R_{\textup{new},i}<R_{\min}$ and $\Lambda_{i}$
is a terminal vessel, i.e.,
$\partial\Lambda_{i}\cap\partial\Lambda_{\text{sp}}\neq\emptyset$, the vessel
is removed from the network. Finally, after following the procedure discussed
in this section, we obtain a new network $\Lambda_{\textup{new}}$.
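One adaptation step of (3.8)-(3.9) can be sketched as follows; all parameter values ($\mu_{\text{bl}}$, $\tau_{\text{ref}}$, $k_{\text{WSS}}$, $k_{s}$, $\Delta t$, $R_{\min}$) are illustrative placeholders:

```python
import math

def adapt_radius(R, dp, l, mu_bl=3.5e-3, tau_ref=0.05,
                 k_wss=1.0, k_s=1.0, dt=0.1, R_min=9e-3):
    """One radius-adaptation step: Poiseuille flux and wall shear stress,
    logarithmic stimulus (3.8), proportional update (3.9); the update is
    accepted only inside the admissible range [R_min, 1.25 R]. All
    parameter values are illustrative."""
    K_v = R**2 / (8.0 * mu_bl)                    # vessel permeability
    Q = -K_v * R**2 * math.pi * dp / l            # flux from the pressure drop
    tau_w = 4.0 * mu_bl * abs(Q) / (math.pi * R**3)
    S_wss = math.log(tau_w + tau_ref)
    R_new = R + (k_wss * S_wss - k_s) * dt * R
    return R_new if R_min <= R_new <= 1.25 * R else R

# A vessel under a weak wall-shear-stress stimulus shrinks in one step.
R_next = adapt_radius(1.5e-2, dp=1.0, l=0.1)
```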
## 4\. Numerical Discretization
With our mathematical models for tumor growth, blood flow and nutrient
transport as well as angiogenesis processes presented in previous sections, we
now turn our attention to numerical solution strategies. Toward this end, let
us consider a time step $n$ given by the interval
$\left[t_{n},t_{n+1}\right]$, with $\Delta t=t_{n+1}-t_{n}$. At the beginning
of a time step $n$, we decide whether an
angiogenesis process has to be simulated or not. In the simulations presented
here, an angiogenesis step is performed after every third time step.
If angiogenesis has to be taken into account, we follow the steps described in
Section 3. Given the 1D network $\Lambda$ at the time point $t_{n}$, we first
apply the algorithm for the apical growth. Afterwards, the sprouting of new
vessels and the adaption of the vessel radii is simulated. Finally, we obtain
a new network $\Lambda_{\textup{new}}$ for the new time point $t_{n+1}$. If
the simulation of angiogenesis is omitted in the respective time step $n$,
$\Lambda$ is directly used for the simulation of the tumor growth as well as
blood flow and nutrient transport, see Figure 5.
Figure 5. Simulation steps within a single time step.
For the time discretization of the 3D model equations in Section 2, the
semi-implicit Euler method is used, i.e., we keep the linear terms implicit
and the nonlinear terms explicit with respect to time. Discretizing the model
equations in space, standard conforming trilinear $Q1$ finite elements are
employed for the partial differential equations governing the tumor growth
(2.4), whereas the PDEs for pressure $(p)$ and nutrient transport
$(\phi_{\sigma})$ are solved by means of cell centered finite volume methods.
The computational mesh is given by a union of cubes having an edge length of
$h_{3D}$.
We use finite elements to approximate the higher-order Cahn–Hilliard type
equations as well as the advection-reaction-diffusion equations corresponding
to the species $\phi_{TAF},\phi_{ECM},\phi_{MDE}$ in Eq. 2.4. In order to ensure mass
conservation for both flow and nutrient transport in the interstitial domain,
finite volume schemes are taken into account, since they are locally mass
conservative.
In order to solve a 3D-1D coupled system, such as the pressure system
$(p_{v},p)$, the iterative Block-Gauss-Seidel method is used, i.e., in each
iteration, we first
solve the equation system for the 1D system. Then the updated 1D solution is
used to solve the equation system derived from the 3D problem. We stop the
iteration when the change in the current and previous iteration solution is
within a small tolerance. At each time step, we first solve the $(p_{v},p)$
coupled system. Afterwards the $(\phi_{v},\phi_{\sigma})$ coupled system is
solved. Next, we solve the remaining equations in the 3D system. This is
summarized in Algorithm 2. In the remainder of this section, the
discretizations of the 1D and 3D systems are outlined.
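The decoupled solution strategy can be sketched in a few lines; here the two blocks are stand-in scalar solves (in the actual model they are the 1D and 3D linear systems, and the change would be measured in a vector norm), and the stopping criterion compares successive iterates:

```python
def block_gauss_seidel(solve_1d, solve_3d, p_v0, p0, tol=1e-10, max_it=200):
    """Block-Gauss-Seidel iteration for a 3D-1D coupled system: per sweep,
    first solve the 1D block with the 3D solution frozen, then the 3D block
    with the freshly updated 1D solution; stop once the change between two
    successive sweeps falls below tol."""
    p_v, p = p_v0, p0
    for _ in range(max_it):
        p_v_new = solve_1d(p)       # 1D block, 3D unknowns lagged
        p_new = solve_3d(p_v_new)   # 3D block, updated 1D data
        change = max(abs(p_v_new - p_v), abs(p_new - p))
        p_v, p = p_v_new, p_new
        if change < tol:
            break
    return p_v, p

# Toy scalar "blocks" standing in for the 1D and 3D solves:
# p_v = 1 + 0.5 p and p = 2 + 0.25 p_v, with fixed point (16/7, 18/7).
pv, p = block_gauss_seidel(lambda p: 1.0 + 0.5 * p,
                           lambda pv: 2.0 + 0.25 * pv, 0.0, 0.0)
```

Because the coupling in the toy example is contractive, the sweep converges geometrically to the fixed point.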
### 4.1. VGM discretization of the 1D PDEs
It remains to specify the numerical solution techniques for the 1D network
equations. The time integration is based on the implicit Euler method. For the
spatial discretization of the 1D equations, the Vascular Graph Method (VGM) is
considered. This method corresponds to a node centered finite volume method
[53, 63]. In the following, we briefly describe this numerical method as well as the
discretization of the terms arising in the context of the 3D-1D coupling. We
restrict ourselves to the pressure equations.
As mentioned in Section 2, the 1D network is given by a graph-like structure,
consisting of edges $\Lambda_{i}\subset\Lambda$ and network nodes
$\bm{x}_{i}\in\Lambda$. In a first step, we assign to each network node
$\bm{x}_{i}$ an unknown for the pressure that is denoted by $p_{v,i}$. Let us
assume that the edges containing $\bm{x}_{i}$ are given by
$\Lambda_{i_{1}},\ldots,\Lambda_{i_{N}}$ and its midpoints by
$\mathbf{m}_{i_{1}},\ldots,\mathbf{m}_{i_{N}}$, see Figure 6.
Figure 6. Notation for the Vascular Graph Method.
On each edge,
$\Lambda_{k}\in\left\{\Lambda_{i_{1}},\ldots,\Lambda_{i_{N}}\right\}$,
we consider the following PDE for the pressure; see also (2.7). For
convenience, the curve parameter is simply denoted by $s$.
$-R_{k}^{2}\pi\;\partial_{s}(K_{v,k}\;\partial_{s}p_{v})=-2\pi
R_{k}L_{p}(p_{v}-\overline{p}).$
Next, we establish for the node $\bm{x}_{i}$ a mass balance equation taking
the fluxes across the cylinders $Z_{i_{l}}$ into account. $Z_{i_{l}}$ is a
cylinder having the edge $\Lambda_{i_{l}}$ as a rotation axis and the radius
$R_{i_{l}}$. Furthermore its top and bottom facets are located at
$\mathbf{m}_{i_{l}}$ and $\bm{x}_{i}$, respectively (see Figure 6). The
corresponding curve parameters are denoted by $s(\bm{x}_{i})$ and
$s(\bm{x}_{i_{l}}),\;l\in\mathopen{}\mathclose{{}\left\\{1,\ldots,N}\right\\}$.
Accordingly, the mass balance equation reads as follows:
$-\sum_{l=1}^{N}\int_{s(\bm{x}_{i})}^{s(\mathbf{m}_{i_{l}})}R_{i_{l}}^{2}\pi\;\partial_{s}(K_{v,{i_{l}}}\;\partial_{s}p_{v})\,\textup{d}s=-2\pi
L_{p}\sum_{l=1}^{N}\int_{s(\bm{x}_{i})}^{s(\mathbf{m}_{i_{l}})}R_{i_{l}}(p_{v}-\overline{p})\,\textup{d}s.$
Integration yields:
$-\sum_{l=1}^{N}R_{i_{l}}^{2}\pi
K_{v,{i_{l}}}\left.\partial_{s}p_{v}\right|_{s(\mathbf{m}_{i_{l}})}+\sum_{l=1}^{N}R_{i_{l}}^{2}\pi
K_{v,{i_{l}}}\left.\partial_{s}p_{v}\right|_{s(\bm{x}_{i})}=-2\pi
L_{p}\sum_{l=1}^{N}\int_{s(\bm{x}_{i})}^{s(\mathbf{m}_{i_{l}})}{R_{i_{l}}(p_{v}-\overline{p})}\,\textup{d}s.$
Approximating the derivatives by central finite differences and using the mass
conservation equation (see Subsection 2.4):
$\sum_{l=1}^{N}R_{i_{l}}^{2}\pi
K_{v,{i_{l}}}\left.\partial_{s}p_{v}\right|_{s(\bm{x}_{i})}=0,$
at an inner node $\bm{x}_{i}$, it follows that
$\sum_{l=1}^{N}R_{i_{l}}^{2}\pi
K_{v,{i_{l}}}\;\frac{p_{v,i}-p_{v,i_{l}}}{l_{i_{l}}}=-2\pi
L_{p}\sum_{l=1}^{N}\int_{s(\bm{x}_{i})}^{s(\mathbf{m}_{i_{l}})}{R_{i_{l}}(p_{v}-\overline{p})}\,\textup{d}s,$
where $l_{i_{l}}$ denotes the length of the edge $\Lambda_{i_{l}}$. Denoting
the mantle surface of $Z_{i_{l}}$ by $S_{i_{l}}$, we have:
$\sum_{l=1}^{N}\frac{R_{i_{l}}^{2}\pi
K_{v,{i_{l}}}}{l_{i_{l}}}\;(p_{v,i}-p_{v,i_{l}})=-L_{p}\sum_{l=1}^{N}\left|S_{i_{l}}\right|p_{v,i}+L_{p}\sum_{l=1}^{N}\int_{S_{i_{l}}}p\,\textup{d}S.$
Computing the integrals $\int_{S_{i_{l}}}p\,\textup{d}S$, we introduce the
decomposition of $\Omega$ into $M$ finite volume cells ${CV}_{k}$:
$\Omega=\bigcup_{k=1}^{M}{CV}_{k}$. The pressure unknown assigned to
${CV}_{k}$ is given by $p_{k}$. Using this notation, one obtains:
$\int_{S_{i_{l}}}p\,\textup{d}S=\sum_{{CV}_{k}\cap
S_{i_{l}}\neq\emptyset}\int_{{CV}_{k}\cap
S_{i_{l}}}p\,\textup{d}S\approx\sum_{{CV}_{k}\cap
S_{i_{l}}\neq\emptyset}\left|{CV}_{k}\cap
S_{i_{l}}\right|p_{k}=\left|S_{i_{l}}\right|\sum_{{CV}_{k}\cap
S_{i_{l}}\neq\emptyset}\underbrace{\frac{\left|{CV}_{k}\cap
S_{i_{l}}\right|}{\left|S_{i_{l}}\right|}}_{=:w_{ki_{l}}}\;p_{k}.$
In order to estimate the weights $w_{ki_{l}}$ we discretize the mantle surface
$S_{i_{l}}$ by $N_{s}$ nodes. For our simulations, we used $N_{s}=400$ nodes.
$S_{i_{l}}$ intersects some finite volume cells $CV_{k}$. The number of nodes
contained in $CV_{k}$ is denoted by $N_{ki_{l}}$. Using these definitions, the
weights $w_{ki_{l}}$ are computed as follows: $w_{ki_{l}}=N_{ki_{l}}/N_{s}$.
As an example, consider Figure 7 below, where we show the discretization of
the surface of a cylinder. We note that one has to guarantee that the relation
$\sum_{{CV}_{k}\cap S_{i_{l}}\neq\emptyset}w_{ki_{l}}=1$ holds. Otherwise, a
consistent mass exchange between the vascular system and the tissue could not
be enforced. All in all, we obtain a linear system of equations for computing
the pressure values.
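The node-counting construction of the weights can be sketched as follows; for simplicity the cylinder axis is aligned with the $z$-axis and the background mesh consists of axis-aligned cubes of edge length `cell_h` (the general case would additionally rotate the surface nodes):

```python
import math
from collections import Counter

def surface_weights(center, R, length, cell_h, n_s=400):
    """Estimate the weights w_{k,i_l} = N_{k,i_l} / N_s: place N_s nodes on
    the cylinder mantle and count how many fall into each cubic finite-volume
    cell (cells are indexed by integer triples at spacing cell_h). For
    simplicity the cylinder axis points in the z-direction."""
    counts = Counter()
    n_circ = 20                      # nodes per circle on the mantle
    n_ax = n_s // n_circ             # circles distributed along the axis
    for i in range(n_ax):
        z = center[2] + (i + 0.5) / n_ax * length
        for j in range(n_circ):
            phi = 2.0 * math.pi * j / n_circ
            x = center[0] + R * math.cos(phi)
            y = center[1] + R * math.sin(phi)
            counts[(int(x // cell_h), int(y // cell_h), int(z // cell_h))] += 1
    n_total = n_ax * n_circ
    return {cell: n / n_total for cell, n in counts.items()}

# Cylinder of radius 0.1 through a mesh of cubes with edge length 0.25:
# the weights over all intersected cells sum to one by construction.
w = surface_weights((0.5, 0.5, 0.0), 0.1, 1.0, 0.25)
```

The normalization $\sum_{k}w_{ki_{l}}=1$ holds by construction, which is exactly the consistency requirement for the mass exchange stated above.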
Figure 7. Typical discretization of the cylinder surface (left). Cross
section through a mesh composed of finite volume cells $CV_{k}$ and a cylinder
with the mantle surface $S_{i_{l}}$. $S_{i_{l}}$ is discretized by $N_{s}$
nodes, which are contained in different finite volume cells. Nodes belonging
to different cells are colored differently. The number of nodes contained in
$CV_{k}$ is denoted by $N_{ki_{l}}$.
On closer examination, it can be noted that the corresponding matrix is
composed of four blocks, i.e., two coupling blocks as well as one block each
for the 1D and the 3D diffusion terms. As mentioned earlier, at each time
step, we decouple the 1D and 3D pressure equations and use a Block-Gauss-
Seidel iteration to solve the two systems until the 3D pressure is converged.
The discretization of the nutrient equation is carried out in a similar
manner, the main difference being an additional upwinding procedure for the
convective term. At each time step, the nutrient equations are also solved
using a Block-Gauss-Seidel iteration.
#### 4.1.1. Initial and boundary conditions for the 1D PDEs
Since we use a transient PDE to simulate the transport of nutrients, we
require an initial condition for the variable $\phi_{v}$. In doing so, a
threshold $R_{T}$ for the radii is introduced in order to distinguish between
arteries and veins. If the radius of a certain vessel is below $R_{T}$, the
vessel is considered as an artery and otherwise as a vein. In case of an
artery, we set $\phi_{v}(t=0)=1$ and in case of a vein $\phi_{v}(t=0)=0$ is
used. When the network starts to grow, initial values for the newly created
vessels have to be provided. If a new vessel is created due to sprouting
growth, we consider the vessel or edge to which the new vessel is attached. At
the point of the given vessel, where the new vessel or edge is added,
$\phi_{v}$ is interpolated linearly. For this purpose the two values of
$\phi_{v}$ located at the nodes of the existing vessel are used. The
interpolated value is assigned to both nodes of the newly created vessel.
When apical growth takes place, a new vessel is added to
$\bm{x}\in\partial\Lambda$. In this case, we consider $\phi_{v}(\bm{x},t)$ at
the time point $t$ and assign it to the newly created node, since we assume
that no-flow boundary conditions are enforced there. With respect to the boundary
conditions for the 1D pressure PDE, the following distinction of cases is
made:
* •
Dirichlet boundary for $p_{v}$ if
$\bm{x}\in\partial\Lambda\cap\partial\Omega$. In this case, we set:
$p_{v}(\bm{x})=p_{v,D}(\bm{x})$, where $p_{v,D}$ is a given Dirichlet value at
$\bm{x}$. Numerically, we enforce this boundary condition by setting in the
corresponding line of the matrix all entries to zero except for the entry on
the diagonal which is fixed to the value $1$. Additionally, the corresponding
component of the right hand side vector contains the Dirichlet value $p_{D}$.
Let us assume that $\bm{y}$ is the neighbor of $\bm{x}$ on the edge
$\Lambda_{1}$. The other edges adjacent to $\bm{y}$ are denoted by
$\Lambda_{2},\ldots,\Lambda_{N}$. Then the balance equation for $\bm{y}$ has
to be adapted as follows to account for the Dirichlet boundary condition
$p_{v,D}$:
$\displaystyle\frac{R_{1}^{2}\pi
K_{v,{1}}}{l_{1}}\;(p_{v}(\bm{y})-p_{v,D})+\sum_{j=2}^{N}\frac{R_{j}^{2}\pi
K_{v,{j}}}{l_{j}}\;(p_{v}(\bm{y})-p_{v,j})$
$\displaystyle\quad=-L_{p}\left|\tilde{S}_{1}\right|p_{v}(\bm{y})+L_{p}\left|\tilde{S}_{1}\right|\sum_{{CV}_{k}\cap\tilde{S}_{1}\neq\emptyset}w_{k1}\;p_{k}-L_{p}\sum_{j=2}^{N}\left|S_{j}\right|p_{v}(\bm{y})+L_{p}\sum_{j=2}^{N}\left|S_{j}\right|\sum_{{CV}_{k}\cap
S_{j}\neq\emptyset}w_{kj}\;p_{k},$
where $\tilde{S}_{1}$ is the mantle surface of the cylinder covering the whole
edge $\Lambda_{1}$.
* •
Homogeneous Neumann boundary for $p_{v}$ if
$\bm{x}\in\partial\Lambda\cap\Omega$. Let
$\bm{x}\in\partial\Lambda_{i}\cap\partial\Lambda\cap\Omega$, then we set
$\left.-R_{i}^{2}\pi\;K_{v,i}\partial_{s}p_{v}\right|_{\bm{x}}=0$,
resulting in the following discretization:
$R_{i}^{2}\pi\;K_{v,i}\frac{p_{v}(\bm{x})-p_{v}(\bm{y})}{l_{i}}=-L_{p}\left|S_{i_{1}}\right|p_{v}(\bm{x})+L_{p}\left|S_{i_{1}}\right|\sum_{{CV}_{k}\cap
S_{i_{1}}\neq\emptyset}w_{ki_{1}}\;p_{k},$
where $\bm{y}\in\partial\Lambda_{i}\cap\Lambda$ and $l_{i}$ is the length of
the edge $\Lambda_{i}$.
Summarizing, for the pressure in the network we consider Dirichlet boundary
conditions at the boundary of the 3D domain $\Omega$ and homogeneous Neumann
boundary conditions in the interior of $\Omega$. For the nutrients, the
implementation of boundary conditions is more challenging, since an upwinding
procedure has to be taken into account.
* •
Dirichlet boundary for $\phi_{v}$ if
$\bm{x}\in\partial\Lambda_{i}\cap\partial\Lambda\cap\partial\Omega$ and
$v_{v}(\bm{x})\approx-
K_{v,i}\frac{p_{v}\mathopen{}\mathclose{{}\left(\bm{y}}\right)-p_{v,D}(\bm{x})}{l_{i}}>0.$
In this case, we set: $\phi_{v}(\bm{x},t)=\phi_{v,D}(\bm{x})$, where
$\phi_{v,D}$ is a given Dirichlet value at $\bm{x}$. The numerical
implementation is carried out analogously to the case of the pressure $p_{v}$.
* •
Homogeneous Neumann boundary for $\phi_{v}$ if
$\bm{x}\in\partial\Lambda\cap\Omega$. Let
$\bm{x}\in\partial\Lambda_{i}\cap\partial\Lambda\cap\Omega$; then we set
$(\mathopen{}\mathclose{{}\left.v_{v}\phi_{v}-D_{v}\partial_{s_{i}}\phi_{v})}\right|_{\bm{x}}=0,$
resulting in the following discretization:
$\displaystyle\frac{l_{i}}{2}\frac{\phi_{v}(\bm{x},t+\Delta
t)-\phi_{v}(\bm{x},t)}{\Delta
t}+\mathopen{}\mathclose{{}\left.v_{v}\phi_{v}}\right|_{\mathbf{m}_{i}}-D_{v}\frac{\phi_{v}(\bm{y},t+\Delta
t)-\phi_{v}(\bm{x},t+\Delta t)}{l_{i}}$ $\displaystyle\quad=-2\pi
R_{i}\mathopen{}\mathclose{{}\left[(1-r_{\sigma})\int_{s(\bm{x})}^{s(\mathbf{m}_{i})}J_{pv}(\overline{p},p_{v})\cdot\phi_{\sigma}^{v}(s,t+\Delta
t)\;\textup{d}s+L_{\sigma}\int_{s(\bm{x})}^{s(\mathbf{m}_{i})}\phi_{v}(s,t+\Delta
t)-\overline{\phi}_{\sigma}(s,t+\Delta t)\,\textup{d}s}\right],$
where $\bm{y}\in\partial\Lambda_{i}\cap\Lambda$. The integrals modeling the
exchange terms are discretized as in the case of the pressure equations.
* •
Upwinding boundary for $\phi_{v}$ if
$\bm{x}\in\partial\Lambda\cap\partial\Lambda_{i}\cap\partial\Omega$ and
$v_{v}(\mathbf{m}_{i})\approx-
K_{v,i}\frac{p_{v}\mathopen{}\mathclose{{}\left(\bm{y}}\right)-p_{v}\mathopen{}\mathclose{{}\left(\mathbf{x}}\right)}{l_{i}}\leq
0.$
Here, we obtain the following semi-discrete equation:
$\displaystyle\frac{l_{i}}{2}\frac{\phi_{v}(\bm{x},t+\Delta
t)-\phi_{v}(\bm{x},t)}{\Delta
t}+\mathopen{}\mathclose{{}\left.v_{v}}\right|_{\mathbf{m}_{i}}\phi_{v}(\bm{y},t+\Delta
t)-\mathopen{}\mathclose{{}\left.v_{v}}\right|_{\mathbf{m}_{i}}\phi_{v}(\bm{x},t+\Delta
t)$ $\displaystyle\quad=-2\pi
R_{i}\mathopen{}\mathclose{{}\left[(1-r_{\sigma})\int_{s(\bm{x})}^{s(\mathbf{m}_{i})}J_{pv}(\overline{p},p_{v})\cdot\phi_{\sigma}^{v}(s,t+\Delta
t)\;\textup{d}s+L_{\sigma}\int_{s(\bm{x})}^{s(\mathbf{m}_{i})}\phi_{v}(s,t+\Delta
t)-\overline{\phi}_{\sigma}(s,t+\Delta t)\,\textup{d}s}\right],$
where $\bm{y}\in\partial\Lambda_{i}\cap\Lambda$.
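The upwind choice used in these boundary discretizations — taking the transported value from the side the velocity comes from — can be illustrated with a minimal Python sketch (the function name and sign convention are our own, not from the paper's implementation):

```python
def upwind_flux(v_m, phi_x, phi_y):
    """Advective flux v*phi at the midpoint m of the edge between nodes x and y.

    Sign convention (an assumption for this sketch): v_m > 0 means the flow
    goes from x towards y, so the upwind value is phi at x; otherwise it is
    phi at y.
    """
    return v_m * (phi_x if v_m > 0 else phi_y)
```

In the semi-discrete equations above, the same idea is applied with the transported value taken implicitly at $t+\Delta t$ from the upstream node.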
### 4.2. Discretization of the 3D PDEs
Suppose that $\phi_{\alpha_{n}},\mu_{\alpha_{n}},p_{n},p_{v_{n}},\bm{v}_{n}$
denote the various fields at time $t_{n}$. Let $V_{h}$ be a subspace of
$H^{1}(\Omega,\mathbb{R})$ consisting of continuous piecewise trilinear
functions on a uniform mesh $\Omega_{h}$. We consider $\phi_{\alpha_{n}}\in
V_{h}$ for $\alpha\in\\{P,H,N,TAF,ECM,MDE\\}$, $\mu_{P_{n}},\mu_{H_{n}}\in
$V_{h}$, and $\bm{v}_{n}\in[V_{h}]^{3}$. The test functions are denoted by
$\tilde{\phi}\in V_{h}$ for species in $V_{h}$, $\tilde{\mu}\in V_{h}$ for
chemical potentials $\mu_{P},\mu_{H}$, and $\tilde{\bm{v}}\in[V_{h}]^{3}$ for
velocity.
Given a time step $n$ and solutions
$\phi_{\alpha_{n}},\mu_{\alpha_{n}},p_{n},p_{v_{n}},\bm{v}_{n}$, we are
interested in the solution at the next time step. For the 3D-1D coupled
pressure $(p_{v_{n+1}},p_{n+1})$, as mentioned earlier, we utilize a block
Gauss-Seidel iteration, where the discretization of the 1D equation is
discussed in Subsection 4.1 and the discretization of the 3D equation with a
finite-volume scheme is given in Eq. 4.3. Similarly,
$(\phi_{v_{n+1}},\phi_{\sigma_{n+1}})$ is solved using a block Gauss-Seidel
iteration, where the 1D equation is discretized along the lines of the 1D
pressure equation and the 3D equation is discretized as in Eq. 4.4. We then
solve the proliferative, hypoxic, necrotic, MDE, ECM, and TAF systems
sequentially. Once we have the pressure $p_{n+1}$, we compute the velocity
$\bm{v}_{n+1}\in[V_{h}]^{3}$ using the weak form:
(4.1) $(\bm{v}_{n+1},\tilde{\bm{v}})=(-K(\nabla
p_{n+1}-S_{p_{n}}),\tilde{\bm{v}}),\qquad\forall\tilde{\bm{v}}\in[V_{h}]^{3},$
where $S_{p_{n}}=S_{p}({\bm{\phi}}_{n},\mu_{P_{n}},\mu_{H_{n}})$, see Eq. 2.5.
For the advection terms, using the fact that $\nabla p\cdot\bm{n}=0$ on
$\partial\Omega$ and so $\bm{v}_{n+1}\cdot\bm{n}=0$ on $\partial\Omega$, we
can write
(4.2)
$\displaystyle\mathopen{}\mathclose{{}\left({\nabla\cdot(\phi_{\alpha}\bm{v}_{n+1})},{\tilde{\phi}}}\right)=-\mathopen{}\mathclose{{}\left({\phi_{\alpha}\bm{v}_{n+1}},{\nabla\tilde{\phi}}}\right),\qquad\forall\tilde{\phi}\in
V_{h},\;\forall\alpha\in\\{P,H,TAF,MDE\\}.$
In what follows, we use the expression on the right-hand side of the above
equation for the advection terms.
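The block Gauss-Seidel coupling mentioned above alternates between the 1D and 3D solves until the iterates stop changing. A minimal fixed-point sketch in Python (the two solver callbacks are placeholders for the discretizations of Subsections 4.1 and 4.2; here they act on scalars purely for illustration):

```python
def block_gauss_seidel(solve_1d, solve_3d, p_v0, p0, tol=1e-10, max_iter=200):
    """Fixed-point loop for the coupled (p_v, p) system.

    solve_1d(p)   -> updated network pressure p_v given tissue pressure p
    solve_3d(p_v) -> updated tissue pressure p given network pressure p_v
    """
    p_v, p = p_v0, p0
    for _ in range(max_iter):
        p_v_new = solve_1d(p)      # 1D solve uses the latest 3D iterate
        p_new = solve_3d(p_v_new)  # 3D solve uses the fresh 1D iterate
        if max(abs(p_v_new - p_v), abs(p_new - p)) < tol:
            return p_v_new, p_new
        p_v, p = p_v_new, p_new
    return p_v, p

# Toy contraction: p_v = 1 + p/2, p = p_v/2, with fixed point (4/3, 2/3).
p_v, p = block_gauss_seidel(lambda p: 1 + 0.5 * p, lambda pv: 0.5 * pv, 0.0, 0.0)
```

The key Gauss-Seidel feature is that the second solve already uses the freshly updated first iterate, which typically accelerates convergence compared to a Jacobi-style update.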
For fields $\phi_{a}$, $a\in\\{P,H,N,\sigma,TAF,ECM,MDE\\}$, and chemical
potentials $\mu_{P},\mu_{H}$, we assume homogeneous Neumann boundary condition
on $\partial\Omega$. Next, we describe the discretization of the scalar fields
in the 3D model.
* •
Pressure. Let ${CV}\in\Omega_{h}$ denote the typical finite volume cell and
$\sigma\in\partial{CV}$ face of a cell ${CV}$. Let
$(p_{v_{n+1}}^{k},p_{{n+1}}^{k})$ denote the pressures at $k^{\text{th}}$
iteration and time $t_{n+1}$. Suppose we have solved for $p_{v_{n+1}}^{{k+1}}$
following Subsection 4.1. To solve $p_{n+1}^{{k+1}}$, we consider, for all
${CV}\in\Omega_{h}$,
$\displaystyle-\sum_{\sigma\in\partial{CV}}\int_{\sigma}{K\nabla
p_{n+1}^{{k+1}}}\cdot\bm{n}\,\textup{d}S$
$\displaystyle=-\sum_{\sigma\in\partial{CV}}\int_{\sigma}{KS_{p_{n}}}\cdot\bm{n}\,\textup{d}S+\int_{\Gamma\cap{CV}}J_{pv}(p_{n+1}^{{k+1}},\Pi_{\Gamma}p_{v_{n+1}}^{{k+1}})\,\textup{d}S$
(4.3)
$\displaystyle=-\sum_{\sigma\in\partial{CV}}\int_{\sigma}{KS_{p_{n}}}\cdot\bm{n}\,\textup{d}S+\sum_{i=1}^{N}\int_{\Gamma_{i}\cap{CV}}J_{pv}(p_{n+1}^{{k+1}},\Pi_{\Gamma_{i}}p_{v_{n+1}}^{{k+1}})\,\textup{d}S.$
The above follows by integrating the pressure equation in Eq. 2.4 over
${CV}$ and applying the divergence theorem. Here
$J_{pv}(p,p_{v})=L_{p}(p_{v}-p)$, $\Gamma=\cup_{i=1}^{N}\Gamma_{i}$ is the
total vascular surface, and $\Pi_{\Gamma_{i}}(p_{v})$ is the projection of the
1D pressure defined on the centerline $\Lambda_{i}$ onto the surface of the
cylinder, $\Gamma_{i}$.
* •
Nutrients. Suppose we have solved for $\phi_{v_{n+1}}^{{k+1}}$. To solve
$\phi_{{\sigma}_{n+1}}^{{k+1}}$, we consider, for all ${CV}$,
$\displaystyle\int_{{CV}}{\frac{\phi_{\sigma_{n+1}}^{{k+1}}-\phi_{\sigma_{n}}}{\Delta
t}}\,\textup{d}V+\sum_{\sigma\in\partial{CV}}\int_{\sigma}{\phi_{\sigma_{n+1}}^{{k+1}}\hat{\bm{v}}_{n+1}}\cdot\bm{n}\,\textup{d}S-\sum_{\sigma\in\partial{CV}}\int_{\sigma}{m_{\sigma}(\bm{\phi}_{n})\mathopen{}\mathclose{{}\left(D_{\sigma}\nabla\phi_{\sigma_{n+1}}^{{k+1}}}\right)}\cdot\bm{n}\,\textup{d}S$
$\displaystyle\quad+\int_{{CV}}{\lambda^{\\!\textup{pro}}_{P}\phi_{P_{n}}\phi_{\sigma_{n+1}}^{{k+1}}}\,\textup{d}V+\int_{{CV}}{\lambda^{\\!\textup{pro}}_{H}\phi_{H_{n}}\phi_{\sigma_{n+1}}^{{k+1}}}\,\textup{d}V$
$\displaystyle\quad+\int_{{CV}}{\lambda^{\\!\textup{pro}}_{ECM}(1-{\phi_{{ECM}}}_{n})\mathcal{H}({\phi_{{ECM}}}_{n}-\phi^{\textmd{pro}}_{{ECM}})\phi_{\sigma_{n+1}}^{{k+1}}}\,\textup{d}V$
$\displaystyle=\int_{{CV}}{\lambda^{\\!\textup{deg}}_{P}\phi_{P_{n}}+\lambda^{\\!\textup{deg}}_{H}\phi_{H_{n}}}\,\textup{d}V+\int_{{CV}}{\lambda^{\\!\textup{deg}}_{{ECM}}{\phi_{{ECM}}}_{n}{\phi_{{MDE}}}_{n}}\,\textup{d}V$
(4.4)
$\displaystyle-\sum_{\sigma\in\partial{CV}}\int_{\sigma}{m_{\sigma}(\bm{\phi}_{n})\chi_{c}\nabla\mathopen{}\mathclose{{}\left(\phi_{P_{n}}+\phi_{H_{n}}}\right)}\cdot\bm{n}\,\textup{d}S+\sum_{i=1}^{N}\int_{\Gamma_{i}\cap{CV}}J_{\sigma
v}(\phi_{\sigma_{n+1}}^{{k+1}},p_{n+1}^{{k+1}},\Pi_{\Gamma_{i}}\phi_{{v}_{n+1}}^{{k+1}},\Pi_{\Gamma_{i}}p_{v_{n+1}}^{{k+1}})\,\textup{d}S,$
where $J_{\sigma v}$ is given by Eq. 2.6. Noting that the velocity is
$\hat{\bm{v}}_{n+1}=-K\nabla p_{n+1}+KS_{p_{n}},$
we divide the advection term, for $\sigma\in\partial{CV}$, into two parts:
(4.5)
$\displaystyle\int_{\sigma}{\phi_{\sigma_{n+1}}^{{k+1}}\hat{\bm{v}}_{n+1}}\cdot\bm{n}\,\textup{d}S$
$\displaystyle=-K\int_{\sigma}{\phi_{\sigma_{n+1}}^{{k+1}}\nabla
p_{n+1}}\cdot\bm{n}\,\textup{d}S+K\int_{\sigma}{\phi_{\sigma_{n+1}}^{{k+1}}S_{p_{n}}}\cdot\bm{n}\,\textup{d}S.$
For the first term, we apply the upwinding scheme. For the second term, we
use a quadrature approximation to compute the integral over the face
$\sigma$.
* •
Proliferative. For a general double-well potential
$\Psi(\phi_{P},\phi_{H},\phi_{N})=\sum_{a\in\\{T,P,H\\}}C_{\Psi_{a}}\phi_{a}^{2}(1-\phi_{a})^{2}$,
we consider the convex-concave splitting, see [18], as follows
(4.6)
$\displaystyle\Psi(\phi_{P},\phi_{H},\phi_{N})=\sum_{a\in\\{T,P,H\\}}\frac{3}{2}C_{\Psi_{a}}\phi_{a}^{2}+\sum_{a\in\\{T,P,H\\}}C_{\Psi_{a}}(\phi_{a}^{4}-2\phi_{a}^{3}-\frac{1}{2}\phi_{a}^{2}).$
This results in
(4.7)
$\displaystyle\partial_{\phi_{P}}\Psi(\phi_{P},\phi_{H},\phi_{N})=\sum_{a\in\\{T,P\\}}3C_{\Psi_{a}}\phi_{a}+\sum_{a\in\\{T,P\\}}C_{\Psi_{a}}\phi_{a}(4\phi_{a}^{2}-6\phi_{a}-1).$
The expression for $\partial_{\phi_{H}}\Psi$ can be derived analogously. In
our implementation, $\phi_{P},\phi_{H},\phi_{N}$ are the main state variables
and $\phi_{T}$ is computed using $\phi_{T}=\phi_{P}+\phi_{H}+\phi_{N}$. Let
the mobility $\bar{m}_{P_{n}}$ at the current step be given by
(4.8)
$\displaystyle\bar{m}_{P_{n}}=M_{P}\mathopen{}\mathclose{{}\left[(\phi_{P_{n}})^{+}(1-\phi_{T_{n}})^{+}}\right]^{2},$
where for a field $f$, $\mathopen{}\mathclose{{}\left(f}\right)^{+}$ is the
projection onto $[0,1]$ given by
(4.9) $\displaystyle\mathopen{}\mathclose{{}\left(f}\right)^{+}$
$\displaystyle=\begin{cases}f\qquad\text{if }f\in[0,1],\\\ 0\qquad\text{if
}f\leq 0,\\\ 1\qquad\text{if }f\geq 1.\end{cases}$
We solve for $\phi_{P_{n+1}},\mu_{P_{n+1}}$ using the weak forms below
$\displaystyle\mathopen{}\mathclose{{}\left({\frac{\phi_{P_{n+1}}-\phi_{P_{n}}}{\Delta
t}},{\tilde{\phi}}}\right)-\mathopen{}\mathclose{{}\left({\phi_{P_{n+1}}\bm{v}_{n+1}},{\nabla\tilde{\phi}}}\right)+\mathopen{}\mathclose{{}\left({\bar{m}_{P_{n}}\nabla\mu_{P_{n+1}}},{\nabla\tilde{\phi}}}\right)$
$\displaystyle\quad-\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{pro}}_{P}\phi_{\sigma_{n+1}}(1-\phi_{T_{n}})^{+}\phi_{P_{n+1}}},{\tilde{\phi}}}\right)+\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{deg}}_{P}\phi_{P_{n+1}}},{\tilde{\phi}}}\right)$
$\displaystyle=\mathopen{}\mathclose{{}\left({\lambda_{HP}\mathcal{H}(\phi_{\sigma_{n+1}}-\sigma_{HP})\mathopen{}\mathclose{{}\left(\phi_{H_{n}}}\right)^{+}},{\tilde{\phi}}}\right)-\mathopen{}\mathclose{{}\left({\lambda_{P\\!H}\mathcal{H}(\sigma_{P\\!H}-\phi_{\sigma_{n+1}})\mathopen{}\mathclose{{}\left(\phi_{P_{n}}}\right)^{+}},{\tilde{\phi}}}\right)$
(4.10) $\displaystyle\quad+\frac{1}{\Delta
t}\mathopen{}\mathclose{{}\left({G_{P_{n}}\int_{t_{n}}^{t_{n+1}}\textup{d}{W}_{P}},{\tilde{\phi}}}\right)$
and
$\displaystyle\mathopen{}\mathclose{{}\left({\mu_{P_{n+1}}},{\tilde{\mu}}}\right)-\mathopen{}\mathclose{{}\left({3(C_{\Psi_{T}}+C_{\Psi_{P}})\phi_{P_{n+1}}},{\tilde{\mu}}}\right)-\mathopen{}\mathclose{{}\left({\epsilon_{P}^{2}\nabla\phi_{P_{n+1}}},{\nabla\tilde{\mu}}}\right)$
$\displaystyle=\mathopen{}\mathclose{{}\left({C_{\Psi_{T}}\phi_{T_{n}}(4\phi_{T_{n}}^{2}-6\phi_{T_{n}}-1)},{\tilde{\mu}}}\right)+\mathopen{}\mathclose{{}\left({C_{\Psi_{P}}\phi_{P_{n}}(4\phi_{P_{n}}^{2}-6\phi_{P_{n}}-1)},{\tilde{\mu}}}\right)$
(4.11)
$\displaystyle\quad+\mathopen{}\mathclose{{}\left({3C_{\Psi_{T}}(\phi_{H_{n}}+\phi_{N_{n}})},{\tilde{\mu}}}\right)-\mathopen{}\mathclose{{}\left({\chi_{c}\phi_{\sigma_{n+1}}+\chi_{h}{\phi_{{ECM}}}_{n}},{\tilde{\mu}}}\right),$
where $(\cdot)^{+}$ is the projection to $[0,1]$ defined in Eq. 4.9,
$G_{P_{n}}=G_{P}(\phi_{P_{n}},\phi_{H_{n}},\phi_{N_{n}})$ is given by Eq. 2.3,
and $W_{P}$ is the cylindrical Wiener process. We discuss the computation of
the stochastic term in more detail in Subsection 4.2.1.
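As a quick numerical sanity check on the convex-concave splitting (Eqs. 4.6 and 4.7) and on the projection and mobility (Eqs. 4.8 and 4.9), consider the following single-phase sketch in Python (the function names are ours):

```python
def clip01(f):
    """Projection (f)^+ onto [0, 1], Eq. 4.9."""
    return min(1.0, max(0.0, f))

def mobility(M, phi_a, phi_T):
    """Degenerate mobility of Eq. 4.8: M * [(phi_a)^+ (1 - phi_T)^+]^2."""
    return M * (clip01(phi_a) * clip01(1.0 - phi_T)) ** 2

def well(C, p):
    """One term of the double-well potential: C p^2 (1 - p)^2."""
    return C * p ** 2 * (1.0 - p) ** 2

def well_split(C, p):
    """Convex plus concave part of the splitting in Eq. 4.6 (single phase)."""
    return 1.5 * C * p ** 2 + C * (p ** 4 - 2.0 * p ** 3 - 0.5 * p ** 2)

def dwell_split(C, p):
    """Single-phase analogue of the derivative in Eq. 4.7."""
    return 3.0 * C * p + C * p * (4.0 * p ** 2 - 6.0 * p - 1.0)
```

The split reproduces the original potential exactly and its derivative agrees with the analytic derivative of $C p^{2}(1-p)^{2}$; in the scheme above, the convex part is treated implicitly and the concave part explicitly.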
* •
Hypoxic. Let the mobility $\bar{m}_{H_{n}}$ be given by
(4.12)
$\displaystyle\bar{m}_{H_{n}}=M_{H}\mathopen{}\mathclose{{}\left[(\phi_{H_{n}})^{+}(1-\phi_{T_{n}})^{+}}\right]^{2}.$
To solve for $\phi_{H_{n+1}},\mu_{H_{n+1}}$, we consider
$\displaystyle\mathopen{}\mathclose{{}\left({\frac{\phi_{H_{n+1}}-\phi_{H_{n}}}{\Delta
t}},{\tilde{\phi}}}\right)-\mathopen{}\mathclose{{}\left({\phi_{H_{n+1}}\bm{v}_{n+1}},{\nabla\tilde{\phi}}}\right)+\mathopen{}\mathclose{{}\left({\bar{m}_{H_{n}}\nabla\mu_{H_{n+1}}},{\nabla\tilde{\phi}}}\right)$
$\displaystyle\quad-\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{pro}}_{H}\phi_{\sigma_{n+1}}(1-\phi_{T_{n}})^{+}\phi_{H_{n+1}}},{\tilde{\phi}}}\right)+\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{deg}}_{H}\phi_{H_{n+1}}},{\tilde{\phi}}}\right)$
$\displaystyle=\mathopen{}\mathclose{{}\left({\lambda_{PH}\mathcal{H}(\sigma_{PH}-\phi_{\sigma_{n+1}})\mathopen{}\mathclose{{}\left(\phi_{P_{n}}}\right)^{+}},{\tilde{\phi}}}\right)-\mathopen{}\mathclose{{}\left({\lambda_{H\\!P}\mathcal{H}(\phi_{\sigma_{n+1}}-\sigma_{H\\!P})\mathopen{}\mathclose{{}\left(\phi_{H_{n}}}\right)^{+}},{\tilde{\phi}}}\right)$
(4.13)
$\displaystyle\quad-\mathopen{}\mathclose{{}\left({\lambda_{H\\!N}\mathcal{H}(\sigma_{H\\!N}-\phi_{\sigma_{n+1}})\mathopen{}\mathclose{{}\left(\phi_{H_{n}}}\right)^{+}},{\tilde{\phi}}}\right)+\frac{1}{\Delta
t}\mathopen{}\mathclose{{}\left({G_{H_{n}}\int_{t_{n}}^{t_{n+1}}\textup{d}{W}_{H}},{\tilde{\phi}}}\right)$
and
$\displaystyle\mathopen{}\mathclose{{}\left({\mu_{H_{n+1}}},{\tilde{\mu}}}\right)-\mathopen{}\mathclose{{}\left({3(C_{\Psi_{T}}+C_{\Psi_{H}})\phi_{H_{n+1}}},{\tilde{\mu}}}\right)-\mathopen{}\mathclose{{}\left({\epsilon_{H}^{2}\nabla\phi_{H_{n+1}}},{\nabla\tilde{\mu}}}\right)$
$\displaystyle=\mathopen{}\mathclose{{}\left({C_{\Psi_{T}}\phi_{T_{n}}(4\phi_{T_{n}}^{2}-6\phi_{T_{n}}-1)},{\tilde{\mu}}}\right)+\mathopen{}\mathclose{{}\left({C_{\Psi_{H}}\phi_{H_{n}}(4\phi_{H_{n}}^{2}-6\phi_{H_{n}}-1)},{\tilde{\mu}}}\right)$
(4.14)
$\displaystyle\quad+\mathopen{}\mathclose{{}\left({3C_{\Psi_{T}}(\phi_{P_{n}}+\phi_{N_{n}})},{\tilde{\mu}}}\right)-\mathopen{}\mathclose{{}\left({\chi_{c}\phi_{\sigma_{n+1}}+\chi_{h}{\phi_{{ECM}}}_{n}},{\tilde{\mu}}}\right),$
where $G_{H_{n}}=G_{H}(\phi_{P_{n}},\phi_{H_{n}},\phi_{N_{n}})$ is given by
Eq. 2.3 and $W_{H}$ is the cylindrical Wiener process.
* •
Necrotic.
(4.15)
$\displaystyle\mathopen{}\mathclose{{}\left({\frac{\phi_{N_{n+1}}-\phi_{N_{n}}}{\Delta
t}},{\tilde{\phi}}}\right)=\mathopen{}\mathclose{{}\left({\lambda_{HN}\mathcal{H}(\sigma_{HN}-\phi_{\sigma_{n+1}})\mathopen{}\mathclose{{}\left(\phi_{H_{n}}}\right)^{+}},{\tilde{\phi}}}\right).$
* •
MDE.
$\displaystyle\mathopen{}\mathclose{{}\left({\frac{{\phi_{{MDE}}}_{n+1}-{\phi_{{MDE}}}_{n}}{\Delta
t}},{\tilde{\phi}}}\right)-\mathopen{}\mathclose{{}\left({{\phi_{{MDE}}}_{n+1}\bm{v}_{n+1}},{\nabla\tilde{\phi}}}\right)+\mathopen{}\mathclose{{}\left({m_{{MDE}}(\bm{\phi}_{n})D_{{MDE}}\nabla{\phi_{{MDE}}}_{n+1}},{\nabla\tilde{\phi}}}\right)$
$\displaystyle\quad+\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{deg}}_{{MDE}}{\phi_{{MDE}}}_{n+1}},{\tilde{\phi}}}\right)+\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{pro}}_{{MDE}}(\phi_{P_{n}}+\phi_{H_{n}}){\phi_{{ECM}}}_{n}\frac{\sigma_{H\\!P}}{\sigma_{H\\!P}+\phi_{\sigma_{n+1}}}{\phi_{{MDE}}}_{n+1}},{\tilde{\phi}}}\right)$
$\displaystyle=\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{pro}}_{{MDE}}(\phi_{P_{n}}+\phi_{H_{n}}){\phi_{{ECM}}}_{n}\frac{\sigma_{H\\!P}}{\sigma_{H\\!P}+\phi_{\sigma_{n+1}}}},{\tilde{\phi}}}\right)$
(4.16)
$\displaystyle\quad-\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{deg}}_{ECM}{\phi_{{ECM}}}_{n}{\phi_{{MDE}}}_{n}},{\tilde{\phi}}}\right).$
* •
ECM.
(4.17)
$\displaystyle\mathopen{}\mathclose{{}\left({\frac{{\phi_{{ECM}}}_{n+1}-{\phi_{{ECM}}}_{n}}{\Delta
t}},{\tilde{\phi}}}\right)$
$\displaystyle=\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{pro}}_{ECM}\phi_{\sigma_{n+1}}\mathcal{H}({\phi_{{ECM}}}_{n}-\phi^{\text{pro}}_{{ECM}})(1-{\phi_{{ECM}}}_{n})},{\tilde{\phi}}}\right)-\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{deg}}_{ECM}{\phi_{{MDE}}}_{n}{\phi_{{ECM}}}_{n}},{\tilde{\phi}}}\right)$
* •
TAF.
$\displaystyle\mathopen{}\mathclose{{}\left({\frac{{\phi_{{TAF}}}_{n+1}-{\phi_{{TAF}}}_{n}}{\Delta
t}},{\tilde{\phi}}}\right)-\mathopen{}\mathclose{{}\left({{\phi_{{TAF}}}_{n+1}\bm{v}_{n+1}},{\nabla\tilde{\phi}}}\right)+\mathopen{}\mathclose{{}\left({D_{TAF}\nabla{\phi_{{TAF}}}_{n+1}},{\nabla\tilde{\phi}}}\right)$
$\displaystyle\quad+\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{pro}}_{{TAF}}\phi_{H_{n+1}}{\phi_{{TAF}}}_{n+1}\mathcal{H}(\phi_{H_{n+1}}-\phi_{H_{P}})},{\tilde{\phi}}}\right)$
(4.18)
$\displaystyle=\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{pro}}_{{TAF}}\phi_{H_{n+1}}\mathcal{H}(\phi_{H_{n+1}}-\phi_{H_{P}})},{\tilde{\phi}}}\right)-\mathopen{}\mathclose{{}\left({\lambda^{\\!\textup{deg}}_{{TAF}}\phi_{{TAF}_{n}}},{\tilde{\phi}}}\right).$
The steps followed in solving the coupled system of equations including the
angiogenesis step are summarized in Algorithm 2.
1 Input: Model parameters, $\phi_{\alpha_{0}},v_{0},\Delta t,T,\text{TOL}$ for
$\alpha\in\mathcal{A}:=\\{P,H,N,\sigma,{TAF},{MDE},{ECM}\\}$
2 Output:
$\phi_{\alpha_{n}},\mu_{P_{n}},\mu_{H_{n}},p_{n},\bm{v}_{n},p_{v_{n}},\phi_{{\sigma
v}_{b}}$ for all $n$
3 $n=0$, $t=0$
4 while _$t\leq T$ _ do
5 $\phi_{\alpha_{n}}=\phi_{\alpha_{n+1}}$ $\forall\alpha\in\mathcal{A}$,
$\mu_{P_{n}}=\mu_{P_{n+1}}$, $\mu_{H_{n}}=\mu_{H_{n+1}}$,
$p_{v_{n}}=p_{v_{n+1}}$, $\phi_{{v}_{n}}=\phi_{{v}_{n+1}}$
6 if _$\text{apply\\_angiogenesis(n)}==\text{True}$_ then
7 apply the angiogenesis model described in Section 3
8 update 1D systems if the network is changed
9 end if
10
11 solve coupled linear system $(p_{v_{n+1}},p_{n+1})$ using block Gauss-
Seidel iteration and Subsection 4.1
12 solve coupled linear system $(\phi_{v_{n+1}},\phi_{\sigma_{n+1}})$ using
block Gauss-Seidel iteration and Subsection 4.1
13 solve velocity $\bm{v}_{n+1}$ using Eq. 4.1
14 solve $\mathopen{}\mathclose{{}\left(\phi_{P_{n+1}},\mu_{P_{n+1}}}\right)$
using the semi-implicit scheme in Eqs. 4.10 and 4.11
15 solve $\mathopen{}\mathclose{{}\left(\phi_{H_{n+1}},\mu_{H_{n+1}}}\right)$
using the semi-implicit scheme in Eqs. 4.13 and 4.14
16 solve $\phi_{N_{n+1}}$ using the semi-implicit scheme in Eq. 4.15
17 solve ${\phi_{{MDE}}}_{n+1}$ using the semi-implicit scheme in Eq. 4.16
18 solve ${\phi_{{ECM}}}_{n+1}$ using the semi-implicit scheme in Eq. 4.17
19 solve ${\phi_{{TAF}}}_{n+1}$ using the semi-implicit scheme in Eq. 4.18
20 $n\mapsto n+1$, $t\mapsto t+\Delta t$
21 end while
Algorithm 2 The 3D-1D tumor growth model solver with angiogenesis step
###### Remark 1.
If we ignore the advection and reaction terms of the given system and set
$\chi_{c}=0$, we can show that our algorithm is unconditionally gradient
stable.
This is due to the fact that if we freeze the field $\phi_{H_{n}}$ in both
the convex and the concave part of our double-well potential and solve Eqs.
4.10 and 4.11 for $\phi_{P_{n+1}}$, then due to the given convex-concave
splitting we get
$\mathcal{E}(\phi_{\sigma_{n+1}},\phi_{P_{n+1}},\phi_{H_{n}},\phi_{N_{n}},\phi_{MDE_{n}},\phi_{ECM_{n}},\phi_{TAF_{n}})-\mathcal{E}(\phi_{\sigma_{n+1}},\phi_{P_{n}},\phi_{H_{n}},\phi_{N_{n}},\phi_{MDE_{n}},\phi_{ECM_{n}},\phi_{TAF_{n}})\leq
0.$
Similarly, if we now freeze $\phi_{P_{n+1}}$ in both parts of the potential
and solve Eqs. 4.13 and 4.14 for $\phi_{H_{n+1}}$, we get from the
unconditional gradient stability of the sub-scheme that
$\mathcal{E}(\phi_{\sigma_{n+1}},\phi_{P_{n+1}},\phi_{H_{n+1}},\phi_{N_{n}},\phi_{MDE_{n}},\phi_{ECM_{n}},\phi_{TAF_{n}})-\mathcal{E}(\phi_{\sigma_{n+1}},\phi_{P_{n+1}},\phi_{H_{n}},\phi_{N_{n}},\phi_{MDE_{n}},\phi_{ECM_{n}},\phi_{TAF_{n}})\leq
0.$
This observation extends to arbitrarily large systems of Cahn-Hilliard
equations, and since Eqs. (4.4), (4.16), and (4.18) can be considered as very
simple Cahn-Hilliard equations, it also extends to them. Finally, we note
that $\phi_{N_{n+1}}=\phi_{N_{n}}$ and $\phi_{ECM_{n+1}}=\phi_{ECM_{n}}$ hold
trivially without source terms. Hence, using a telescope sum over all the
energy decrements due to solving Eqs. (4.4), (4.10), (4.13), (4.15), (4.16),
(4.17), and (4.18), we get
$\mathcal{E}(\mathbf{\bm{\phi}}_{n+1})-\mathcal{E}(\mathbf{\bm{\phi}}_{n})\leq
0$
independent of our time-step size, which provides a strong motivation for the
stability of our algorithm.
With the stochastic terms, tumor mass can be generated, and hence
$\mathcal{E}$ does not have to decrease. Moreover, even though the reaction
terms all add up to zero, this does not necessarily mean that $\mathcal{E}$
decreases, since they are not part of our gradient flow. For arbitrary
initial states we therefore cannot expect $\frac{d}{dt}\mathcal{E}\leq 0$ to
hold even for our continuous system.
#### 4.2.1. Stochastic component of the system
Generally, the cylindrical Wiener processes $W_{\alpha}$,
$\alpha\in\\{P,H\\}$, on $L^{2}(\Omega)$ with $\Omega=(0,2)^{3}$ can be
written as
$W_{\alpha}(t)(\bm{x})=\sum_{i,j,k=1}^{\infty}\eta^{\alpha}_{ijk}(t)\underbrace{\cos(i\pi
x_{1}/L)\cos(j\pi x_{2}/L)\cos(k\pi x_{3}/L)}_{=:e_{ijk}},$
where $\bm{x}=(x_{1},x_{2},x_{3})$, $L$ is the edge length of the cubed domain
$\Omega$, $\\{e_{ijk}\\}$ form the orthonormal basis of $L^{2}(\Omega)$, and
$\\{\eta^{\alpha}_{ijk}\\}_{i,j,k\in\mathbb{N}}$ is a family of real-valued,
independent, and identically distributed Brownian motions. Following [9, 4],
we approximate the term involving the Wiener process in the fully discretized
system as follows
(4.19) $\frac{1}{\Delta
t}\mathopen{}\mathclose{{}\left(\int_{t_{n}}^{t_{n+1}}\textup{d}W_{\alpha}(t),\xi}\right)_{L^{2}}\approx\frac{1}{\Delta
t}\sum_{\begin{subarray}{c}i,j,k,\\\
i+j+k<I_{\alpha}\end{subarray}}\eta^{\alpha}_{ijk}(e_{ijk},\xi)_{L^{2}},$
where $\xi\in V_{h}$ is a test function,
$\eta^{\alpha}_{ijk}\sim\mathcal{N}(0,\Delta t)$ are independent Gaussians,
and $I_{\alpha}$ controls the number of basis functions.
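A minimal sketch of how the truncated increment in Eq. 4.19 can be sampled and evaluated (pure Python; the function names, the seeding, and the point-evaluation helper are our own illustration, not the paper's code):

```python
import math
import random

def sample_noise_coeffs(I_alpha, dt, seed=0):
    """Draw eta_{ijk} ~ N(0, dt) for all modes with i + j + k < I_alpha."""
    rng = random.Random(seed)
    coeffs = {}
    for i in range(1, I_alpha):
        for j in range(1, I_alpha - i):
            for k in range(1, I_alpha - i - j):
                coeffs[(i, j, k)] = rng.gauss(0.0, math.sqrt(dt))
    return coeffs

def eval_noise(coeffs, x, L=2.0):
    """Evaluate the truncated noise increment at a point x = (x1, x2, x3)."""
    x1, x2, x3 = x
    return sum(eta
               * math.cos(i * math.pi * x1 / L)
               * math.cos(j * math.pi * x2 / L)
               * math.cos(k * math.pi * x3 / L)
               for (i, j, k), eta in coeffs.items())
```

With $I_{\alpha}=17$ (Table 1), 560 modes are kept; at $\bm{x}=\bm{0}$ all cosines equal one, so the increment is simply the sum of the sampled coefficients.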
## 5\. Numerical simulations
In this section, we apply the models described in Sections 2 and 3 and use the
numerical discretization steps discussed in Section 4. We consider examples
that showcase the effects of angiogenesis on the tumor growth. For this
purpose, the model parameters and the basic setting for our simulations are
introduced in Table 2. In the base setting, we consider two vessels, one
representing an artery and the other a vein, and introduce an initially
spherical tumor core. Based on this setting, tumor growth is simulated first
without considering angiogenesis, i.e., the growth algorithm from Section 3 is
not applied. Afterwards, we repeat the same simulation including the
angiogenesis effects and study the differences between the corresponding
simulation results.
Figure 8. Initial setting with boundary conditions for a first numerical
experiment. The pressures at the top ($z=0$) and bottom ($z=2$) ends of the
artery are $3000$ and $2000$, respectively. Similarly, the pressures at the
top and bottom ends of the vein are fixed at $1100$ and $1600$, respectively.
The high-pressure end of the artery is its inlet, and there we set the
nutrient value to $1$. The high-pressure end of the vein is also its inlet,
and there we set the nutrient value to $0$. At the remaining ends, we apply
the upwinding scheme for solving the nutrient equation.
We then consider a scenario consisting of a tumor core surrounded by a small
capillary network. We obtain the network from
https://physiology.arizona.edu/sites/default/files/brain99.txt.
First, we rescale the network so that it fits into the domain
$\Omega=(0,2)^{3}$. The vessel radii are rescaled such that the maximal and
minimal vessel radius is given non-dimensionally by $0.05606$ and $0.025$,
respectively. In all of the simulations, we consider the double-well potential
of the form:
$\Psi=C_{\Psi_{T}}\phi_{T}^{2}(1-\phi_{T})^{2},$
where $C_{\Psi_{T}}$ is a constant. Since the model involves stochastic PDEs
as well as stochastic angiogenesis, we employ Monte-Carlo approximation based
on samples of the probability distributions characterizing the white noise
terms, using 10 samples for the case without angiogenesis and 50 samples for
the case with angiogenesis. We use the samples to report statistics of
quantity of interests such as total tumor volume, vessel volume density, etc.
### 5.1. Setup and model parameters for the two-vessel problem
Table 1. List of parameters and their values for the numerical simulations. Unlisted parameters are set to zero. $\phi_{\alpha}^{\omega}$, $\omega_{\alpha}$, and $I_{\alpha}$ are parameters related to Wiener process, see Eq. 2.3 and Eq. 4.19. Parameter | Value | Parameter | Value | Parameter | Value
---|---|---|---|---|---
$\lambda^{\\!\textup{pro}}_{P}$ | 5 | $\lambda^{\\!\textup{pro}}_{H}$ | 0.5 | $\lambda^{\\!\textup{deg}}_{P},\lambda^{\\!\textup{deg}}_{H}$ | 0.005
$\lambda^{\\!\textup{pro}}_{{ECM}}$ | 0.01 | $\lambda^{\\!\textup{pro}}_{{MDE}},\lambda^{\\!\textup{deg}}_{{MDE}}$ | 1 | $\lambda^{\\!\textup{deg}}_{{ECM}}$ | 5
$\lambda_{P\\!H}$ | 1 | $\lambda_{H\\!P}$ | 1 | $\lambda_{H\\!N}$ | 1
$\sigma_{P\\!H}$ | 0.55 | $\sigma_{H\\!P}$ | 0.65 | $\sigma_{H\\!N}$ | 0.44
$M_{P}$ | 50 | $M_{H}$ | 25 | $C_{\Psi_{T}}$ | 0.03
$\varepsilon_{P}$ | 0.005 | $\varepsilon_{H}$ | 0.005 | $\lambda^{\\!\textup{pro}}_{{TAF}}$ | 10
$D_{{TAF}}$ | 0.5 | $M_{{TAF}}$ | 1 | $L_{p}$ | $10^{-7}$
$D_{\sigma}$ | 3 | $M_{\sigma}$ | 1 | $K$ | $10^{-9}$
$D_{{MDE}}$ | 0.5 | $M_{{MDE}}$ | 1 | $L_{\sigma}$ | $4.5$
$D_{v}$ | $0.1$ | $\mu_{\text{bl}}$ | 1 | $r_{\sigma}$ | $0.95$
$I_{\alpha}$, $\alpha\in\\{P,H\\}$ | $17$ | $\phi_{\alpha}^{\omega}$, $\alpha\in\\{P,H,T\\}$ | $0.1$ | $\omega_{\alpha}$, $\alpha\in\\{P,H\\}$ | $0.0025$
Table 2. List of parameters and their values for the growth algorithm and numerical discretization Parameter | Value | Function
---|---|---
$Th_{{TAF}}$ | $7.5\cdot 10^{-3}$ | Threshold for the TAF concentration (sprouting)
$\mu_{r}$ | $1.0$ | Mean value for the log-normal distribution (ratio radius/vessel length)
$\sigma_{r}$ | $0.2$ | Standard dev. for the log-normal distribution (ratio radius/vessel length)
$\lambda_{g}$ | $1.0$ | Regularization parameter to avoid bendings and sharp corners
$\gamma$ | $3.0$ | Murray parameter determining the radii at a bifurcation
$R_{\min}$ | $9.0\cdot 10^{-3}$ | Minimal vessel radius
$l_{\min}$ | $0.13$ | Minimal vessel length for which sprouting is activated
$R_{\max}$ | $0.035$ | Maximal vessel radius
$R_{T}$ | $0.05$ | Threshold for the radius to distinguish between arteries and veins for $t=0$
${\zeta}$ | $1.05$ | Sprouting parameter
$\text{dist}_{\text{link}}$ | $0.08$ | Maximal distance at which a terminal vessel is linked to the network
$\tau_{\text{ref}}$ | $0.02$ | Lower bound for the wall shear stress
$k_{{\text{WSS}}}$ | $0.4$ | Proportional constant (wall shear stress)
$k_{s}$ | $0.14$ | Shrinking parameter
$\Delta t$ | $0.0025$ | Time step size
$h_{3D}$ | $0.0364$ | Mesh size of the 3D grid
$h_{1D}$ | $0.25$ | Mesh size of the initial 1D grid
$\Delta t_{net}$ | $2\Delta t$ | Angiogenesis (network update) time interval
As a computational domain $\Omega$, we choose a cube,
$\Omega=\mathopen{}\mathclose{{}\left(0,2}\right)^{3}$. Within $\Omega$ two
different vessels are introduced: an artery and a vein; see Figure 8. The
radius of the vein $R_{v}$ is given by $R_{v}=0.0625$, and the radius of the
artery $R_{a}$ is set to $R_{a}=0.047$. The centerlines of both vessels are
given by straight lines. In the case of the artery, the centerline starts at
$\mathopen{}\mathclose{{}\left(0.1875,0.1875,0}\right)$ and ends at
$\mathopen{}\mathclose{{}\left(0.1875,0.1875,2}\right)$, whereas the vein
starts at $\mathopen{}\mathclose{{}\left(1.8125,1.8125,0}\right)$ and ends at
$\mathopen{}\mathclose{{}\left(1.8125,1.8125,2}\right)$.
At the boundaries of the vessels, we choose Dirichlet boundaries for the
pressure, see also Figure 8. These boundary conditions imply that the artery
provides nutrients for the tissue block $\Omega$, while the vein will take up
nutrients. For the nutrients in the blood vessels, mixed boundary conditions
are considered, as depicted in Figure 8.
As initial conditions for $\phi_{v}$, we choose $\phi_{v}=1$ in the artery and
$\phi_{v}=0$ in the vein. The initial value for the nutrient variable
$\phi_{\sigma}$ in the tissue matrix is given by $\phi_{\sigma}=0.5$. In order
to define the initial conditions for the tumor, we consider a ball $B_{T}$ of
radius $r_{c}=0.3$ around the center
$\bm{x}_{c}=\mathopen{}\mathclose{{}\left(1.0,0.8,1.0}\right)$. Within
$B_{T}$, the total tumor volume fraction $\phi_{T}$ smoothly decays from $1$
in the center to $0$ on the boundary of the ball:
(5.1) $\displaystyle\phi_{T}(\bm{x},t=0)$
$\displaystyle=\begin{cases}\begin{aligned}
&\exp\mathopen{}\mathclose{{}\left(1-\frac{1}{1-(|\bm{x}-\bm{x}_{c}|/r_{c})^{4}}}\right),&&\text{if
}|\bm{x}-\bm{x}_{c}|<r_{c},\\\ &0,&&\text{otherwise}.\end{aligned}\end{cases}$
The necrotic and hypoxic volume fractions, $\phi_{N}$ and $\phi_{H}$, are
initially set to zero. In the other parts of the domain, all the volume
fractions for the tumor species are set to $0$ at $t=0$. In Table 1, the
parameters for the model equations in Section 2 are listed and Table 2
contains the parameters for the growth algorithm described in Section 3 as
well as the discretization parameters. In particular, the parameters for the
stochastic distributions are listed, which determine the radii and vessel
lengths, the probability of bifurcations, and the sprouting probability of new
vessels.
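The initial tumor profile in Eq. 5.1 can be evaluated pointwise as follows (a direct transcription in Python; `math.dist` requires Python 3.8+):

```python
import math

def phi_T_init(x, xc=(1.0, 0.8, 1.0), rc=0.3):
    """Initial total tumor volume fraction of Eq. 5.1: a smooth bump
    equal to 1 at the center xc and 0 outside the ball of radius rc."""
    r = math.dist(x, xc) / rc
    if r >= 1.0:
        return 0.0
    return math.exp(1.0 - 1.0 / (1.0 - r ** 4))
```

The exponential form makes $\phi_{T}$ infinitely smooth across the ball boundary, which avoids introducing artificial gradients into the Cahn-Hilliard dynamics at $t=0$.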
### 5.2. Robustness of the 3D-1D solver
To ascertain the accuracy and robustness of the proposed solver, we performed
several studies with varying mesh sizes and time steps. We found that the
solver is robust and that the time step and mesh size employed in the studies
below balance computational cost and numerical accuracy well. To strengthen
these claims, we consider the two-vessel setup described above without
stochasticity and network growth. We run the simulations using four different
time steps $\Delta t_{i}=0.01/2^{i-1}$, $i=1,2,3,4$, and compute the rate of
convergence of quantities of interest such as the $L^{2}$ or $L^{1}$ norms of
the tumor species and nutrients. In Figure 9, we
plot the $L^{2}$ norm of tumor species and nutrients for different time steps.
We see that the difference between the curves for different $\Delta t$ is very
small. Let $Q_{i}(t)$ denote the quantity of interest ($L^{2}$ norm of
species) at time $t$ for $\Delta t_{i}$. We can approximately compute the rate
of convergence of $Q$ using the formula:
$\displaystyle
r(t)=\frac{\log(|Q_{1}(t)-Q_{4}(t)|)-\log(|Q_{2}(t)-Q_{4}(t)|)}{\log(\Delta
t_{1})-\log(\Delta t_{2})}.$
For $Q(t)=\mathopen{}\mathclose{{}\left\|\phi_{T}(t)}\right\|_{L^{2}}$, we
found $r(1)=0.894,r(2)=1.03,r(3)=1.025,r(4)=0.531,r(5)=1.692$.
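The rate formula can be evaluated directly; a minimal sketch (using a synthetic first-order quantity with an exact reference, not the paper's data):

```python
import math

def convergence_rate(Q1, Q2, Q4, dt1, dt2):
    """Observed convergence rate r(t), with Q4 playing the role of the
    reference (finest time step) solution."""
    return (math.log(abs(Q1 - Q4)) - math.log(abs(Q2 - Q4))) \
           / (math.log(dt1) - math.log(dt2))

# Synthetic first-order behaviour Q(dt) = Q* + C*dt with an exact reference:
dt1, dt2 = 0.01, 0.005
r = convergence_rate(2.0 + 0.3 * dt1, 2.0 + 0.3 * dt2, 2.0, dt1, dt2)
```

With an exact reference the synthetic rate is exactly 1; in the study above, $Q_{4}$ is itself a computed solution at the finest time step, which explains why the observed rates fluctuate around first order.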
We also remark that the proposed solver, see Algorithm 2, does not involve
nonlinear iterations to compute the
$\phi_{P},\phi_{H},\phi_{N},\phi_{{MDE}},\phi_{{ECM}}$ solutions at the
current time step. We compared the results of the current solver with a
solver involving nonlinear iterations and observed that the latter still
required small time steps. Moreover, the difference between the solutions of
the two solvers decreases with mesh refinement and smaller time steps. These
observations motivated us to use the proposed solver for all numerical tests
in the sections below.
Figure 9. Plot of the $L^{2}$ norm of various species using four different
time steps.
### 5.3. Tumor growth without angiogenesis
Figure 10. Top left: Tumor growing in a mouse that is treated with anti-VEGF
agents. As a consequence, tumor satellites in the vicinity of the main tumor
can be detected. Image taken from [55], with permission from Elsevier. Top
right: $\phi_{T}$ presented in a plane at $z=1$ perpendicular to the $z$-axis.
As seen in the medical experiments the formation of satellites and the
accumulation of tumor cells at the nutrient-rich artery are reproduced in the
simulations. Bottom left: Distribution of necrotic cells ($\phi_{N}$). It can
be seen that the main tumor consists of a large necrotic kernel. Bottom right:
Distribution of nutrients ($\phi_{\sigma}$).
The simulation results for tumor growth without angiogenesis are summarized in
Figure 10. For $t=8$, the tumor cell distribution within the plane
perpendicular to the $z$-axis at $z=1.0$ is shown. In three subfigures, the
volume fraction variables $\phi_{T}=\phi_{P}+\phi_{H}+\phi_{N}$, $\phi_{N}$,
as well as the nutrients $\phi_{\sigma}$ are presented. It can be observed
that the primary tumor is enlarged and small satellites are formed in the
vicinity of the main tumor. The distribution of the necrotic cells indicates
that the main tumor consists mostly of necrotic cells, while the hypoxic and
proliferative cells gather around the nutrient-rich blood vessels. This means
that the tumor cells can migrate against the flow from the artery towards the
vein. Apparently, the chemical potential caused by the nutrient source
dominates the interstitial flow.
These observations are consistent with simulation results and measurements
discussed, e.g., in [19, 35, 55]. In [35, 55], a tumor is introduced into a
mouse while anti-VEGF agents are injected, so that the sprouting of new
vessels growing towards the tumor is prevented. This process leads to the
formation of satellites in the vicinity of the primary tumor as well as the
accumulation of tumor cells at nutrient-rich vessels. Furthermore, the
primary tumor stops growing and forms a large necrotic core.
Figure 11. Quantities of interest (QoIs) related to the tumor species over the
time interval $[0,8]$ for the two-vessel setting. For the case without
angiogenesis, the mean QoIs computed from 10 samples are shown. For the
angiogenesis case, we compute the mean and standard deviation from 50 samples.
The solid line shows the mean QoI as a function of time. The shaded band
around the solid line corresponds to the interval
$(\mu_{\alpha}(t)-\sigma_{\alpha}(t),\mu_{\alpha}(t)+\sigma_{\alpha}(t))\cap[0,\infty)$,
for $t\in[0,T]$, where $\mu_{\alpha}(t),\sigma_{\alpha}(t)$ are the mean and
standard deviation of the QoI
$\alpha\in\{\|\phi_{T}\|_{L^{1}},\|\phi_{P}\|_{L^{1}},\|\phi_{H}\|_{L^{1}},\|\phi_{N}\|_{L^{1}}\}$
at time $t$. The variations in the QoIs for the non-angiogenesis case are very
small. The mean of the total tumor volume fraction
$\|\phi_{T}\|_{L^{1}}$ at the final time
for the angiogenesis case is about 1.7 times that of the non-angiogenesis
case.
In Figure 11, the $L^{1}$-norms of the tumor species over time are presented
for the cases with and without angiogenesis. While the profiles of the
individual species look similar in both cases, the total tumor volume fraction
is about 70$\%$ higher when angiogenesis is active. The diffusivity
$D_{\sigma}=3$ is large enough that the nutrients originating from
the nutrient-rich vessels diffuse quickly throughout the domain.
In summary, one can conclude that without angiogenesis a tumor can grow to a
certain extent before the primary tumor starts to collapse, i.e., a large
necrotic core is formed. However, this does not mean that the tumor cells are
entirely removed from the healthy tissue. If a source of nutrients, such as an
artery transporting nutrient-rich blood, is close by, a portion of the tumor
cells can survive by migrating towards it.
### 5.4. Tumor growth with angiogenesis
As in the previous subsection, we compute the $L^{1}$-norms of the tumor
species at time $t=8$. However, since several stochastic processes are
involved in the network growth, and Wiener processes also appear in the
proliferative and hypoxic cell mass balances, several data sets have to be
generated in order to rule out statistical variations. This raises the
question of how many samples are needed to obtain a representative value. To
investigate this, we compute for every sample $i$ the $L^{1}$-norm of the
tumor species, denoted by
$\phi_{\alpha_{L_{1},i}}$, $\alpha\in\{T,P,H,N\}$.
Additionally, the volume of the blood vessel network $V_{i}$ is computed. For
each data set with $i$ samples, we compute the mean values:
$\textup{mean}_{i}\left(V\right)=\frac{1}{i}\sum_{j=1}^{i}V_{j},\qquad\textup{mean}_{i}\left(\phi_{\alpha_{L_{1}}}\right)=\frac{1}{i}\sum_{j=1}^{i}\phi_{\alpha_{L_{1},j}},\qquad\alpha\in\{T,P,H,N\}.$
In Figure 12, the mean values of the vessel volume $V$ and of
$\phi_{\alpha_{L_{1}}}$, $\alpha\in\{T,P,H,N\}$, are shown. From the plots we
see that the mean of $\|\phi_{T}\|_{L^{1}}$ stabilizes after about 25 samples.
For the vessel volume, the fluctuations in the sample means decrease with an
increasing number of samples and become small after about 30 samples. While
the results in Figure 12 show that the means of the QoIs stabilize and
converge as the number of samples grows, the trajectories in the figure depend
on the ordering of the samples: if we shuffle the samples and recompute the
quantities in Figure 12, the curves may look different.
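The sample-size study described above amounts to computing running means. A minimal sketch (with hypothetical vessel-volume samples standing in for the simulation outputs) also illustrates why shuffling the samples changes the trajectory of the curve but not its final value:

```python
import random

def running_means(samples):
    """Running mean mean_i = (1/i) * sum_{j<=i} x_j, i = 1..n, as used to
    judge how many samples give a representative value of a QoI."""
    means, total = [], 0.0
    for i, x in enumerate(samples, start=1):
        total += x
        means.append(total / i)
    return means

# Hypothetical vessel-volume samples V_j standing in for the stochastic
# network simulations.
random.seed(0)
samples = [0.119 + random.gauss(0.0, 0.002) for _ in range(50)]
means = running_means(samples)

# Shuffling the samples changes the trajectory of the running-mean curve,
# but the last entry is always the full-sample mean.
shuffled = samples[:]
random.shuffle(shuffled)
assert abs(running_means(shuffled)[-1] - means[-1]) < 1e-12
```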
Figure 12. The mean values of the $L^{1}$-norms of the tumor cell volume
fractions $\phi_{T},\phi_{P},\phi_{H},\phi_{N}$ and of the volume of the blood
vessel network at time $t=8$ for an increasing number of samples. Results
correspond to the two-vessel setting. The mean of the total tumor volume
fraction appears to be stable after about 28 samples. The fluctuations of the
mean vessel volume become smaller as the number of samples in the data set
grows.
As mentioned earlier, Figure 11 presents the $L^{1}$-norms of the tumor
species. For the angiogenesis simulations, we compute the mean and standard
deviation using 50 samples. We see that the total tumor volume fraction varies
from sample to sample, as expected. Both the hypoxic and the total tumor cells
show exponential growth after $t\approx 4$; see Figure 11. After decreasing
until $t\approx 3$, the proliferative cell mass grows from $t\approx 4$
onward. In the case of angiogenesis, the overall nutrient concentration is
higher than without angiogenesis, while the spatial variation of the nutrient
is the same in both cases; hence the tumor growth is similar in the two cases,
except that the tumor grows more rapidly with angiogenesis. We will see in our
second example, where $D_{\sigma}=0.05$ is much smaller, that the nutrient
concentration is then higher near the nutrient-rich vessels and tumor growth
is more concentrated near these regions; see Figure 18.
Figure 13. Growth of the tumor and the network at four different time points
$t\in\{0.24,0.64,3.20,5.60\}$ for one specific sample. We show the contour
$\phi_{T}=0.9$. Figure 14. Tumor cell distributions in the plane $z=1$ with
angiogenesis for one sample. Top left: $\phi_{T}$. Top right: $\phi_{N}$.
Bottom left: $\phi_{H}$. Bottom right: $\phi_{P}$.
In Figure 13, we show the evolving network together with the contour plot
$\phi_{T}=0.9$ of the total tumor species. At time $t=0.24$ (top left), we
observe that new vessels originate from the artery and move towards the
hypoxic core; the directions of these vessels are based on the gradient of TAF
with random perturbations. At $t=0.64$ (top right), we see a large number of
new vessels formed, as predicted by the angiogenesis algorithm. At time
$t=3.2$ (bottom left), however, the vessels adapt: due to lower flow rates,
some newly created vessels are gradually removed, and thus the number of
vessels decreases. Comparing $t=3.2$ and $t=5.6$ (bottom right), we see that
the network has stabilized and little has changed in this time window. From
Figure 13 we can also observe that the tumor growth is directed towards the
nutrient-rich vessels.
Next, we plot the tumor species in the plane $z=1$ along with the nutrient
distribution in the vessels in Figure 14. The plot corresponds to time $t=8$.
The plot of the necrotic species shows that the necrotic concentration is
typically higher away from the nutrient-rich vessels. The hypoxic
concentration, in contrast, is higher near these vessels: as soon as new tumor
cells proliferate there, the nutrient concentration drops below the
proliferative-to-hypoxic transition threshold, and the newly added
proliferative cells convert to hypoxic cells. A further transition to necrotic
cells takes place where the nutrients fall below the hypoxic-to-necrotic
transition threshold. This is also consistent with the increased concentration
of proliferative cells near the outer interface between the tumor and the
healthy cell phase.
### 5.5. Sensitivity of the growth parameters
Figure 15. Results of a parametric study in which certain growth parameters
$\gamma$ and $k_{{\text{WSS}}}$ are varied to measure their impact on the
total network length and volume. For these studies, we considered a coarse
mesh for the 3D domain with $16^{3}$ uniform cells and time step $\Delta
t=0.01$. The network update time step was fixed to $\Delta t_{net}=2\Delta t$.
Figure 16. Network structure for different values of $\gamma$ at time
$t=2.4$. Left: $\gamma=2$. Middle: $\gamma=3$. Right: $\gamma=4$. Figure 17.
Network structure for different values of $k_{\text{WSS}}$ at time $t=6$.
Left: $k_{\text{WSS}}=4$. Middle: $k_{\text{WSS}}=0.4$. Right: $k_{\text{WSS}}=0.001$.
In Figure 15, we present the results of a parametric study designed to test
the robustness of the vascular network model to changes in the values of the
parameters $\gamma$ and $k_{WSS}$. It is observed that changes in these
parameters can produce significant changes in the network structure for given
values of the other model parameters.
The parameter $\gamma$, for example, appears in Murray’s law (3.4) and
controls the radii of the network branches, with larger $\gamma$ leading to
larger radii of bifurcating vessels. Vessels with larger radii have a higher
probability of connecting with neighboring vessels, which increases the flow
and allows them to continue to evolve; for high $\gamma$, the networks are
therefore denser and the total network length is larger (see Figure 16,
right). Conversely, small values of $\gamma$ promote thin network segments
with a lower probability of connecting to neighboring vessels; see Figure 16,
left.
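Assuming Murray's law takes its standard power-law form $r_p^{\gamma}=r_1^{\gamma}+r_2^{\gamma}$ (we take this as a hedged reading of equation (3.4); the helper name below is hypothetical), the effect of $\gamma$ on the radius of a bifurcating child vessel can be checked directly:

```python
def murray_child_radius(r_parent, r_sibling, gamma):
    """Radius of the second child vessel at a bifurcation, assuming Murray's
    law in its standard form r_p^gamma = r_1^gamma + r_2^gamma."""
    return (r_parent**gamma - r_sibling**gamma) ** (1.0 / gamma)

# With fixed parent and first-child radii, the second child's radius grows
# with gamma, which is consistent with high gamma producing denser networks.
for gamma in (2.0, 3.0, 4.0):
    print(gamma, round(murray_child_radius(1.0, 0.8, gamma), 4))
```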
The change in vessel radius due to the wall shear stress is proportional to
the constant $k_{\text{WSS}}$ and the stimulus $S_{\text{WSS},i}$; see (3.8)
and (3.9). Furthermore, the vessels shrink naturally, an effect controlled by
the constant $k_{s}$ (a higher $k_{s}$ means faster radius decay). In our
study, we varied the parameter $k_{\text{WSS}}$ and found that, for large
$k_{\text{WSS}}$, the radii of sprouting vessels decrease and new sprouts are
removed in their early stage of growth, before they can join nearby vessels;
see Figure 17, left. As a result, the total network length and the vessel
volume stay essentially constant in time, very close to their initial values.
For a very small $k_{\text{WSS}}$ with fixed $k_{s}$, the radii do not change
much during the early phase of the simulation. In the later phases, however,
the radii begin to decay, and even for large flow rates (i.e., large wall
shear stress) their decay is unavoidable, since the term
$k_{\text{WSS}}S_{\text{WSS},i}$ is too small to counter the effect of
$k_{s}$. In summary, in the long run the radii of the vessels decrease with
time; see Figure 17, right. We also observed that for values of
$k_{\text{WSS}}$ within certain bounds, the impact on the network morphology
is low. However, when $k_{\text{WSS}}$ is outside these bounds, some care is
required so that the vessel radii do not tend to zero with time.
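As an illustration of this competition between stimulus and shrinkage, one can consider the simple relative growth law $\dot r=(k_{\text{WSS}}S_{\text{WSS}}-k_s)\,r$. This is only a caricature of the actual stimuli in (3.8)-(3.9) and all parameter values below are hypothetical, but it reproduces the qualitative behaviour described above:

```python
def adapt_radius(r0, k_wss, s_wss, k_s, dt, steps):
    """Explicit-Euler sketch of radius adaptation, assuming the relative
    growth law dr/dt = (k_WSS * S_WSS - k_s) * r: the wall-shear-stress
    stimulus competes with the natural shrinkage controlled by k_s."""
    r = r0
    for _ in range(steps):
        r = max(r + dt * (k_wss * s_wss - k_s) * r, 0.0)
    return r

# If k_WSS * S_WSS stays below k_s, the radius decays no matter how large
# the flow rate is, matching the behaviour seen for very small k_WSS.
r_small = adapt_radius(1.0, k_wss=0.001, s_wss=1.0, k_s=0.1, dt=0.01, steps=1000)
r_large = adapt_radius(1.0, k_wss=0.4, s_wss=1.0, k_s=0.1, dt=0.01, steps=1000)
assert r_small < 1.0 < r_large
```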
### 5.6. Angiogenesis and tumor growth for the capillary network
Returning to (5.1), let us consider a smooth spherical tumor core with center
at $\bm{x}_{c}=(1.3,0.9,0.7)$ and radius $r_{c}=0.3$ in the domain
$\Omega=(0,2)^{3}$. The initial blood vessel network and the boundary
conditions for pressure and nutrient on the network are described in Figure 20.
In the simulation, the vessels are uniformly refined twice. We fix
$\phi_{a}=0$ for $a\in\{H,N,{TAF}\}$ and $\phi_{\sigma}=0.5$ at $t=0$. The
domain is discretized uniformly with a mesh size $h_{3D}=0.0364$ and the time
step size is $\Delta t=0.0025$. We identify four inlet ends (see Figure 20) at
which the unit nutrient $\phi_{v}=\phi_{v_{in}}=1$ and the pressure
$p_{v}=p_{in}=8000.0$ are prescribed as Dirichlet boundary conditions. At
the remaining boundaries, we prescribe the pressure $p_{v}=p_{out}=1000.0$ and
apply an upwinding scheme for the nutrients.
At $t=0$, $p_{v}$ and $\phi_{v}$ at internal network nodes are set to zero.
Furthermore, we set $L_{\sigma}=0.5$, $D_{\sigma}=0.05$, $D_{TAF}=0.1$,
$Th_{{TAF}}=0.0075$, $\mu_{r}=1.5$, $k_{{\text{WSS}}}=0.45$, and $\Delta
t_{net}=10\Delta t$. All the other parameter values remain unchanged, see
Table 1 and Table 2.
Figure 18. Quantities of interest (QoIs) related to the tumor species over the
time interval $[0,8]$ for the capillary network setting. As in the two-vessel
simulation, we compute the mean and standard deviation using 50 and 10
samples for the cases with and without angiogenesis, respectively. We refer to
Figure 11 for more details on the plots. As in the two-vessel setting, the
variation in the QoIs is much smaller for the non-angiogenesis case. The mean
of the total tumor volume fraction $\|\phi_{T}\|_{L^{1}}$ for the
angiogenesis case is about 1.62 times that of the non-angiogenesis case.
Figure 19. The mean values of the $L^{1}$-norms of the tumor cell volume
fractions $\phi_{T},\phi_{P},\phi_{H},\phi_{N}$ and of the volume of the blood
vessel network at time $t=8$. Results correspond to the capillary network
setting. The mean of the total tumor volume fraction stabilizes already for
small sample sizes. This agrees with Figure 18, which shows that the
variations in the $L^{1}$-norm QoIs are overall smaller. The mean of the
vessel volume shows some change when the number of samples is small and
stabilizes as the data set grows.
We first compare the tumor volume with and without angiogenesis; see Figure
18. The results are similar to the two-vessel setting. They show that the
overall tumor growth is higher with angiogenesis, as expected. We also observe
that the proliferative cells start to grow rapidly at $t\approx 3.5$ with
angiogenesis, as compared to $t\approx 4.75$ without angiogenesis. The
production of necrotic cells is higher in the non-angiogenesis case until time
$t\approx 5$. Compared to Figure 11 for the two-vessel setting, the variations
in the tumor-species QoIs are much smaller in Figure 18. This may be due to
the fact that the diffusivity of the nutrients is much smaller in the latter
case, and that $L_{\sigma}$ is also smaller, resulting in a smaller exchange
of nutrients. Next, we plot the means of the QoIs for increasing data-set
sizes in Figure 19; the results show that the means of the tumor-species QoIs
are stable and can be computed accurately using fewer samples. This relates to
the smaller variation of the $L^{1}$-norm of the tumor species $\phi_{T}$ seen
in Figure 18. The mean of the vessel volume shows some variation for smaller
data sets, which decreases later on; in any case the variations are very small
and contained in the range $[0.117,0.121]$.
In Figures 20, 21, and 22, some results for the capillary network are
summarized: Figure 20 shows the growing network, and Figure 21 shows the
vessel network resulting from angiogenesis at the final simulation time $t=8$
for three samples. In Figure 22, we compare the tumor species at time
$t=5.12$ for the angiogenesis and non-angiogenesis cases. As in the
two-vessel case, the tumor starts to grow faster after it is vascularized.
Apparently, the tumor cells tend to migrate towards the nutrient-rich part of
the computational domain despite the fact that they have to move against the
direction of the flow induced by the pressure gradient. Not surprisingly, the
volume fraction of the necrotic cells is larger in the part facing away from
the nutrient-rich region, and relative to the whole tumor it remains small.
It is interesting to observe that, as in the two-vessel case, the contour plot
of $\phi_{T}$ for $\phi_{T}=0.9$ exhibits a secondary structure, while this
effect cannot be seen in the simulation without angiogenesis. Moreover, as in
the two-vessel experiment, the tumor contains a large necrotic kernel if there
is no angiogenesis, indicating that the tumor has almost died. This simulation
shows once again that angiogenesis can play a crucial role in the evolution
of tumor growth.
Figure 20. Top left: Spherical tumor core (contour plot for $\phi_{T}=0.9$)
at $\bm{x}_{c}=(1.3,0.9,0.7)$ with radius $r_{c}=0.3$, surrounded by a network
of vessels. We identify four inlet ends (red cross-sections) at which the unit
nutrient $\phi_{v}=\phi_{v_{in}}=1.0$ and the pressure $p_{v}=p_{in}=8000$ are
prescribed as Dirichlet boundary conditions. Top right: Formation of the first
sprouts at $t=1.20$, growing towards the tumor core. Bottom left: Around
$t=3.04$ a complex vascular network has formed and the tumor starts to grow
towards the nutrients. Bottom right: At $t=5.60$ the tumor is significantly
enlarged and creates satellites near the nutrient-rich vessels. Figure 21.
Plot of the vessel network at $t=8$ from three samples for the capillary
network setting.
Figure 22. Distribution of tumor cells at $t=5.12$. The simulation results
for tumor growth supported by angiogenesis are shown on the left, while the
results for tumor growth without angiogenesis are presented on the right. The
necrotic ($\phi_{N}$) and hypoxic ($\phi_{H}$) volume fractions are visualized
in the plane $z=0.7$. For $\phi_{T}$, in both cases a contour plot for
$\phi_{T}=0.9$ is shown.
## 6. Summary and outlook
In this work, we presented a stochastic model for tumor growth, described by a
system of nonlinear PDEs of Cahn–Hilliard type coupled with a model of an
evolving vascular network. A 3D-1D model is developed to simulate flow and
nutrient transport within both the network and the porous tissue so as to
capture the phenomenon of angiogenesis. In this model, the blood vessel
network is given by a 1D graph-like structure coupling the flow and transport
processes in the network and the tissue. Furthermore, the model facilitates
the handling of a growing network structure with bifurcating vessels, which is
crucial for the simulation of angiogenesis. The angiogenesis process is
simulated by an iterative algorithm starting from a single artery and a vein,
or from a given network. The blood vessel network supplying the tumor employs
Murray’s law to determine the radii at a bifurcation of network capillaries.
The choice of radii and lengths of new vessels, as well as the bifurcations,
are governed by stochastic algorithms. The direction of growth of the vessels
is determined by the gradient of the local TAF concentration.
We demonstrated that the model is capable of simulating the development of
satellites of tumor concentration near nutrient-rich vessels, together with
necrotic cores, in agreement with some experimental observations. Also, as
expected, rapid growth of the solid tumor mass accompanies the increased
supply of nutrients through angiogenesis.
We believe that our model can serve as a starting point for important
predictive simulations of cancer therapy; in particular, the effect of anti-
angiogenic drugs could be studied using models of this type. However, models
of this kind require experimental data, such as MR imaging data, that inform
the vasculature in the tissue as well as the parameters in the tumor growth
and vascular flow models. In the present work, the adaptation of the vessel
radii is related to the wall shear stress. Other effects that influence the
vessel radii could be included, such as a metabolic hematocrit-related
stimulus, which may also lead to significant pruning and restructuring of the
network. Further work on refining and improving the computational algorithms
is also needed, such as the development of efficient distributed solvers for
the 3D-1D systems. We hope to address these issues and other extensions in
future work.
## Acknowledgements
The authors gratefully acknowledge the support of the Deutsche
Forschungsgemeinschaft (DFG) through TUM International Graduate School of
Science and Engineering (IGSSE), GSC 81. The work of PKJ and JTO was supported
by the U.S. Department of Energy, Office of Science, Office of Advanced
Scientific Computing Research, Mathematical Multifaceted Integrated Capability
Centers (MMICCS), under Award Number DE-SC0019303.
## References
* [1] D. Ambrosi, F. Bussolino, and L. Preziosi, A review of vasculogenesis models, Journal of Theoretical Medicine, 6 (2005), pp. 1–19.
* [2] D. Ambrosi and L. Preziosi, Cell adhesion mechanisms and stress relaxation in the mechanics of tumours, Biomechanics and Modeling in Mechanobiology, 8 (2009), pp. 397–413.
* [3] A. R. Anderson and M. A. J. Chaplain, Continuous and discrete mathematical models of tumor-induced angiogenesis, Bulletin of Mathematical Biology, 60 (1998), pp. 857–899.
* [4] D. Antonopoulou, Ĺ. Baňas, R. Nürnberg, and A. Prohl, Numerical approximation of the stochastic Cahn–Hilliard equation near the sharp interface limit, Numerische Mathematik, 147 (2021), pp. 505–551.
* [5] N. Bellomo, N. Li, and P. K. Maini, On the foundations of cancer modelling: Selected topics, speculations, and perspectives, Mathematical Models and Methods in Applied Sciences, 18 (2008), pp. 593–646.
* [6] N. Bellomo and L. Preziosi, Modelling and mathematical problems related to tumor evolution and its interaction with the immune system, Mathematical and Computer Modelling, 32 (2000), pp. 413–452.
* [7] H. Byrne and L. Preziosi, Modelling solid tumour growth using the theory of mixtures, Mathematical Medicine and Biology: A Journal of the IMA, 20 (2003), pp. 341–366.
* [8] P. Carmeliet and R. K. Jain, Molecular mechanisms and clinical applications of angiogenesis, Nature, 473 (2011), pp. 298–307.
* [9] S. Chai, Y. Cao, Y. Zou, and W. Zhao, Conforming finite element methods for the stochastic Cahn–Hilliard–Cook equation, Applied Numerical Mathematics, 124 (2018), pp. 44–56.
* [10] M. A. Chaplain, M. Lachowicz, Z. Szymańska, and D. Wrzosek, Mathematical modelling of cancer invasion: The importance of cell-cell adhesion and cell-matrix adhesion, Mathematical Models and Methods in Applied Sciences, 21 (2011), pp. 719–743.
* [11] M. A. Chaplain and G. Lolas, Mathematical modelling of cancer cell invasion of tissue: The role of the urokinase plasminogen activation system, Mathematical Models and Methods in Applied Sciences, 15 (2005), pp. 1685–1734.
* [12] V. Cristini, X. Li, J. S. Lowengrub, and S. M. Wise, Nonlinear simulations of solid tumor growth using a mixture model: invasion and branching, Journal of Mathematical Biology, 58:723 (2009).
* [13] V. Cristini and J. Lowengrub, Multiscale Modeling of Cancer: An Integrated Experimental and Mathematical Modeling Approach, Cambridge University Press, 2010.
* [14] G. Da Prato and A. Debussche, Stochastic Cahn–Hilliard equation, Nonlinear Analysis: Theory, Methods & Applications, 26 (1996), pp. 241–263.
* [15] M. Dorraki, A. Fouladzadeh, A. Allison, C. S. Bonder, and D. Abbott, Angiogenic networks in tumors—insights via mathematical modeling, IEEE Access, 8 (2020), pp. 43215–43228.
* [16] H. M. Eilken and R. H. Adams, Dynamics of endothelial cell behavior in sprouting angiogenesis, Current Opinion in Cell Biology, 22 (2010), pp. 617–625.
* [17] C. Engwer, C. Stinner, and C. Surulescu, On a structured multiscale model for acid-mediated tumor invasion: The effects of adhesion and proliferation, Mathematical Models and Methods in Applied Sciences, 27 (2017), pp. 1355–1390.
* [18] D. J. Eyre, Unconditionally gradient stable time marching the Cahn–Hilliard equation, in Materials Research Society Symposium Proceedings, vol. 529, Materials Research Society, 1998, pp. 39–46.
* [19] H. B. Frieboes, F. Jin, Y.-L. Chuang, S. M. Wise, J. S. Lowengrub, and V. Cristini, Three-dimensional multispecies nonlinear tumor growth – II: Tumor invasion and angiogenesis, Journal of Theoretical Biology, 264 (2010), pp. 1254–1278.
* [20] S. Frigeri, K. F. Lam, E. Rocca, and G. Schimperna, On a multi-species Cahn–Hilliard–Darcy tumor growth model with singular potentials, Communications in Mathematical Sciences, 16 (2018), pp. 821–856.
* [21] M. Fritz, P. K. Jha, T. Köppl, J. T. Oden, and B. Wohlmuth, Analysis of a new multispecies tumor growth model coupling 3D phase-fields with a 1D vascular network, Nonlinear Analysis: Real World Applications, 61 (2021), p. 103331.
* [22] M. Fritz, E. Lima, V. Nikolic, J. T. Oden, and B. Wohlmuth, Local and nonlocal phase-field models of tumor growth and invasion due to ECM degradation, Mathematical Models and Methods in Applied Sciences, 29 (2019), pp. 2433–2468.
* [23] M. Fritz, E. Lima, J. T. Oden, and B. Wohlmuth, On the unsteady Darcy–Forchheimer–Brinkman equation in local and nonlocal tumor growth models, Mathematical Models and Methods in Applied Sciences, 29 (2019), pp. 1691–1731.
* [24] H. Garcke, K. F. Lam, R. Nürnberg, and E. Sitka, A multiphase Cahn–Hilliard–Darcy model for tumour growth with necrosis, Mathematical Models and Methods in Applied Sciences, 28 (2018), pp. 525–577.
* [25] A. Gerisch and M. Chaplain, Mathematical modelling of cancer cell invasion of tissue: Local and non-local models and the effect of adhesion, Journal of Theoretical Biology, 250 (2008), pp. 684–704.
* [26] B. Ginzburg and A. Katchalsky, The frictional coefficients of the flows of non-electrolytes through artificial membranes, The Journal of General Physiology, 47 (1963), pp. 403–418.
* [27] D. Hanahan and R. A. Weinberg, Hallmarks of cancer: The next generation, Cell, 144 (2011), pp. 646–674.
* [28] A. Hawkins-Daarud, K. G. van der Zee, and J. T. Oden, Numerical simulation of a thermodynamically consistent four-species tumor growth model, International Journal for Numerical Methods in Biomedical Engineering, 28 (2012), pp. 3–24.
* [29] T. Hillen, K. J. Painter, and M. Winkler, Convergence of a cancer invasion model to a logistic chemotaxis model, Mathematical Models and Methods in Applied Sciences, 23 (2013), pp. 165–198.
* [30] E. Hodneland, X. Hu, and J. Nordbotten, Well-posedness, discretization and preconditioners for a class of models for mixed-dimensional problems with high dimensional gap, arXiv preprint arXiv:2006.12273, (2020).
* [31] L. Holmgren, M. S. O’Reilly, and J. Folkman, Dormancy of micrometastases: balanced proliferation and apoptosis in the presence of angiogenesis suppression, Nature Medicine, 1 (1995), pp. 149–153.
* [32] T. Koch, M. Schneider, R. Helmig, and P. Jenny, Modeling tissue perfusion in terms of 1d-3d embedded mixed-dimension coupled problems with distributed sources, Journal of Computational Physics, 410:100050 (2020).
* [33] T. Köppl, E. Vidotto, and B. Wohlmuth, A 3D-1D coupled blood flow and oxygen transport model to generate microvascular networks, International Journal for Numerical Methods in Biomedical Engineering, e3386 (2020).
* [34] P. Koumoutsakos, I. Pivkin, and F. Milde, The fluid mechanics of cancer and its therapy, Annual Review of Fluid Mechanics, 45 (2013), pp. 325–355.
* [35] P. Kunkel, U. Ulbricht, P. Bohlen, M. Brockmann, R. Fillbrandt, D. Stavrou, M. Westphal, and K. Lamszus, Inhibition of glioma angiogenesis and growth in vivo by systemic treatment with a monoclonal antibody against vascular endothelial growth factor receptor-2, Cancer Research, 61 (2001), pp. 6624–6628.
* [36] E. Lima, J. T. Oden, and R. Almeida, A hybrid ten-species phase-field model of tumor growth, Mathematical Models and Methods in Applied Sciences, 24 (2014), pp. 2569–2599.
* [37] S. R. McDougall, A. Anderson, M. Chaplain, and J. Sherratt, Mathematical modelling of flow through vascular networks: implications for tumour-induced angiogenesis and chemotherapy strategies, Bulletin of Mathematical Biology, 64 (2002), pp. 673–702.
* [38] S. R. McDougall, A. R. Anderson, and M. A. Chaplain, Mathematical modelling of dynamic adaptive tumour-induced angiogenesis: clinical implications and therapeutic targeting strategies, Journal of theoretical biology, 241 (2006), pp. 564–589.
* [39] C. Murray, The physiological principle of minimum work applied to the angle of branching of arteries, The Journal of General Physiology, 9 (1926), pp. 835–841.
* [40] C. Murray, The physiological principle of minimum work: I. The vascular system and the cost of blood volume, Proceedings of the National Academy of Sciences of the United States of America, 12 (1926), pp. 207–214.
* [41] N. Nargis and R. Aldredge, Effects of matrix metalloproteinase on tumour growth and morphology via haptotaxis, J. Bioengineer. & Biomedical Sci., 6:1000207 (2016).
* [42] N. Nishida, H. Yano, T. Nishida, T. Kamura, and M. Kojiro, Angiogenesis in cancer, Vascular Health and Risk Management, 2 (2006), pp. 213–219.
# Combinatorics of KP hierarchy structural constants
A. Andreeva,c, A. Popolitova,b,c, A. Sleptsova,b,c, A. Zhabina,c
andreev.av@phystech.edu popolit@gmail.com sleptsov<EMAIL_ADDRESS>
MIPT/TH-19/20
ITEP/TH-34/20
IITP/TH-21/20
a Institute for Theoretical and Experimental Physics, Moscow 117218, Russia
b Institute for Information Transmission Problems, Moscow 127994, Russia
c Moscow Institute of Physics and Technology, Dolgoprudny 141701, Russia
Dedicated to the memory of Sergey Mironovich Natanzon
ABSTRACT
Following Natanzon-Zabrodin, we explore the Kadomtsev–Petviashvili hierarchy
as an infinite system of mutually consistent relations on the second
derivatives of the free energy with some universal coefficients. From this
point of view, various combinatorial properties of these coefficients
naturally highlight certain non-trivial properties of the KP hierarchy.
Furthermore, this approach allows us to suggest several interesting directions
of the KP deformation via a deformation of these coefficients. We also
construct an eigenvalue matrix model, whose correlators fully describe the
universal KP coefficients, which allows us to further study their properties
and generalizations.
This paper is just the beginning of a very large program of multi-faceted
study of the KP hierarchy suggested to us by Sergey Natanzon. He had his own
special view of the KP hierarchy, which made it possible to see in it some new
interesting structures that are completely invisible with other approaches. We
are deeply grateful to him for numerous scientific discussions, for fueling
our interest in the KP hierarchy and for his characteristic style of
discussing science.
## 1 Introduction
The Kadomtsev-Petviashvili (KP) hierarchy has many different applications in
modern physics and mathematics. Historically it was studied as equations with
soliton solutions, but very soon it was discovered that partition functions
and correlators of some field theories are solutions of the hierarchy as well.
It often happens that a partition function can be represented as a matrix model,
which provides a connection between the KP hierarchy and matrix models. Probably
the most famous example is the Kontsevich matrix model [1], which is a
partition function of 2D gravity. Other important examples include lattice gauge
theories of QCD [2, 3], the Ooguri-Vafa partition function for HOMFLY
polynomials of any torus knot [4, 5], and the generating function for simple
Hurwitz numbers [6, 7, 8]. Moreover, interest in the KP hierarchy has recently
resurged due to rapid progress in understanding the superintegrability
properties of a particular version of KP, the so-called BKP hierarchy [9, 10, 11, 8].
The KP hierarchy can be understood as an infinite system of compatible non-
linear differential equations. All the equations may be encoded in the Hirota
bilinear identity:
$\oint_{\infty}e^{\xi(\overline{\textbf{t}},z)}\,\tau(\textbf{t}+\overline{\textbf{t}}-[z^{-1}])\,\tau(\textbf{t}-\overline{\textbf{t}}+[z^{-1}])\,dz=0,$
(1)
where we used a standard notation
$\begin{gathered}\xi(\textbf{t},z)=\sum_{k=1}^{\infty}t_{k}z^{k}\\\
\textbf{t}\pm[z^{-1}]=\left\\{t_{1}\pm\frac{1}{z},t_{2}\pm\frac{1}{2z^{2}},t_{3}\pm\frac{1}{3z^{3}},\dots\right\\}\end{gathered}$
(2)
Expanding the integrand near $z=\infty$ and extracting the coefficient in
front of $z^{-1}$, one obtains, for each monomial in
${\bf\bar{t}}$, an equation for $\tau({\bf t})$. Functions that satisfy
(1) are called $\tau$-functions. They may depend on an infinite number of
variables $\textbf{t}=\\{t_{1},t_{2},t_{3},\dots\\}$ called "times".
The previously mentioned partition and generating functions are KP
$\tau$-functions. According to the works of the Kyoto school [12, 13], the KP
hierarchy is closely related to rich mathematical structures, such as infinite-
dimensional Lie algebras, projective manifolds, symmetric functions and the boson-
fermion correspondence. Each of these mathematical structures provides an
alternative language for the description of KP solutions and highlights
different properties of the solutions. Moreover, looking at any particular solution from
several points of view provides deep insights into its structure.
All mentioned examples of $\tau$-functions, and many others, have a geometric
expansion over compact Riemann surfaces (genus expansion). The genus expansion of
$\tau$-functions coincides with the expansion in the parameter $\hbar$ of the
$\hbar$-KP hierarchy [14]. The introduction of the parameter $\hbar$ slightly
modifies the hierarchy and allows one, among other things, to obtain solutions
of the classical KP hierarchy for $\hbar=1$ and of the dispersionless KP hierarchy for
$\hbar\rightarrow 0$ [15, 16]. This $\hbar$-formulation of the KP hierarchy
was first studied by Takasaki and Takebe in [17, 18], where they described a
method for deformation of the classical $\tau$-function.
Natanzon and Zabrodin formulated another approach [19, 20] to the description of
the $\hbar$-KP hierarchy. The advantage of their approach is that formal solutions for
the $F$-function ($F=\log\tau$) can be explicitly expressed in terms of
boundary data using universal integer coefficients that define the
entire $\hbar$-KP hierarchy. Moreover, an arbitrary solution of the $\hbar$-KP
hierarchy can be restored from its boundary data using these coefficients and
their higher analogs, which are determined recursively. Namely, the set of
integer coefficients $P_{i,j}(s_{1},\dots,s_{m})$, which we also call the
universal KP coefficients, enters the KP equations as (see, for instance,
[21])
$\frac{\partial^{\hbar}_{i}\partial^{\hbar}_{j}F}{ij}=\sum\limits_{m\geq
1}\frac{(-1)^{m+1}}{m}\sum\limits_{s_{1},\dots,s_{m}\geq
1}P_{i,j}(s_{1},\dots,s_{m})\,\frac{\partial_{x}\partial^{\hbar}_{s_{1}}F}{s_{1}}\dots\frac{\partial_{x}\partial^{\hbar}_{s_{m}}F}{s_{m}}.$
(3)
where $\partial^{\hbar}_{i}$ is the $\hbar$-deformed derivative with respect to
$t_{i}$, see formula (16) below. From these equations we see that
$P_{i,j}(s_{1},\dots,s_{m})$ are one of the central ingredients of the KP
equations. The definition of these coefficients can be given in combinatorial
terms by enumerating sequences of positive integers (see section 2, formula
(17)).
The main goal of this paper is to establish and develop the relation between
combinatorics and integrability. We want to find out how basic properties of
the combinatorial coefficients $P_{i,j}(s_{1},\dots,s_{m})$ affect the various
properties of $\tau$-functions. The purpose of the paper is to point out new
interesting research directions, but we do not develop them exhaustively in
this short note. Therefore, in many cases we stop after providing the first non-
trivial example, just enough to demonstrate that a particular direction is
potentially interesting and worth studying.
The paper is organized as follows. In section 2, we introduce all the
necessary definitions and theorems.
Section 3 is devoted to various approaches to the calculation of the combinatorial
coefficients. We show that they can be calculated using an explicit formula
that involves a sum of binomial coefficients and has a clear geometric
meaning. In addition, we consider two different generating functions for the
universal coefficients. One of them, up to normalization, has the simple form
of a sum over Young diagrams of length $\ell(\lambda)\leq 2$:
$F(y_{1},y_{2};\mathbf{x})\sim\sum\limits_{\lambda}S_{\lambda}(y_{1},y_{2})S_{\lambda}(\mathbf{x}),$
(4)
where $S_{\lambda}$ is the Schur polynomial. This generating function becomes a
$\tau$-function of the KP hierarchy itself after the standard change of variables
$kt_{k}=\sum_{i}x^{k}_{i}$, which gives us a hint on a possible deformation of
the universal coefficients (section 6): considering other solutions of the KP
hierarchy as generating functions of new coefficients.
The second generating function corresponds to the so-called Fay identity
and, as we discuss in section 4, allows us to obtain some restrictions on
resolvents in topological recursion [22, 23, 24, 25, 26, 27, 28].
In section 5 we construct a simple eigenvalue matrix model whose correlators
give the universal KP coefficients. The form of these correlators also makes
it possible to generalize the coefficients. The generalization of the matrix model has
the following motivation. There are Ward identities in matrix models which can
be solved recursively, and we expect the corresponding recursion relations to be
related to the recursion relations for the higher analogs of the universal coefficients.
Furthermore, the generating function for the averages of Schur
polynomials $\langle S_{\lambda_{1}}\dots S_{\lambda_{m}}\rangle$ depends on
the set of time variables $\\{\mathbf{t}^{(1)},\dots,\mathbf{t}^{(m)}\\}$, and
in the simplest case (4) we obtain a $\tau$-function of the KP hierarchy, so the
generalized matrix model may be connected with the $m$-component KP
hierarchy.
In Section 6 we discuss a possible approach to KP deformation via deformation of
the generating functions of the combinatorial coefficients. We suggest other
deformed generating functions that have the same properties as the initial
one. Such considerations may help to understand the role of the
combinatorial coefficients in the $\hbar$-KP hierarchy: are they responsible for
integrability, or is the specific form of equations (3) what matters?
The last section 7 is a discussion, where we list the main results of this paper
and questions for further research.
## 2 Definitions
Schur polynomials. Following [29], we define a Young diagram as a sequence of
ordered positive integers $\lambda_{1}\geq\dots\geq\lambda_{\ell(\lambda)}>0$
and denote it as $\lambda=[\lambda_{1},\dots,\lambda_{\ell(\lambda)}]$;
$\ell(\lambda)$ is the length of the Young diagram. Schur polynomials
$S_{\lambda}(\mathbf{x})$ are symmetric functions depending on an arbitrary
set of variables $\mathbf{x}=\\{x_{1},x_{2},\dots\\}$ and a Young diagram
$\lambda$:
$S_{\lambda}(x_{1},\dots,x_{n}):=\frac{\det\limits_{1\leq i,j\leq
n}\left(x_{i}^{\lambda_{j}+n-j}\right)}{\det\limits_{1\leq i,j\leq
n}\left(x_{i}^{n-j}\right)}$ (5)
If $n>\ell(\lambda)$, then $\lambda_{j}$ is set to zero for $j>\ell(\lambda)$.
Schur polynomials labeled by Young diagrams of length $\ell(\lambda)=1$
we call symmetric Schur polynomials. Although all Schur polynomials are
symmetric functions, this name for single-row Young diagrams comes from
representation theory. Sometimes Schur polynomials are considered in the variables
$\mathbf{t}=\\{t_{1},t_{2},\dots\\}$. The change from the variables $\mathbf{x}$
is given via
$t_{k}=\frac{1}{k}\sum_{i\geq 1}x_{i}^{k}.$ (6)
An important property of Schur polynomials that we frequently use in what
follows is the Cauchy-Littlewood identity:
$\sum_{\lambda}S_{\lambda}(\textbf{t})S_{\lambda}(\overline{\textbf{t}})=\exp\left(\sum_{k=1}^{\infty}kt_{k}\overline{t}_{k}\right)$
(7)
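As a quick numerical sanity check (our own sketch, not from the paper), one can evaluate $S_{\lambda}$ via the bialternant determinant formula (the standard form of (5), with exponents $\lambda_{j}+n-j$) and verify the Cauchy-Littlewood identity in its product form $\sum_{\lambda}S_{\lambda}(\mathbf{x})S_{\lambda}(\mathbf{y})=\prod_{i,j}(1-x_{i}y_{j})^{-1}$ (cf. (27)) by truncating the sum over Young diagrams; all helper names below are ours:

```python
def det(m):
    # determinant via Gaussian elimination; adequate for the tiny matrices here
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        if abs(m[piv][c]) == 0.0:
            return 0.0
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return d

def schur(lam, xs):
    """S_lambda(x_1,...,x_n) via the bialternant (determinant) formula."""
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))  # pad the diagram with zeros
    num = [[x ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs]
    den = [[x ** (n - 1 - j) for j in range(n)] for x in xs]  # Vandermonde
    return det(num) / det(den)

x, y = (0.3, 0.1), (0.2, 0.05)
# for two variables, S_lambda vanishes unless l(lambda) <= 2,
# so the sum over all Young diagrams reduces to pairs a >= b >= 0
lhs = sum(schur((a, b), x) * schur((a, b), y)
          for a in range(31) for b in range(a + 1))
rhs = 1.0
for xi in x:
    for yj in y:
        rhs /= (1.0 - xi * yj)
print(abs(lhs - rhs) < 1e-9)  # True: the truncated sum matches the product
```

The series converges rapidly here since the largest $x_{i}y_{j}$ is $0.06$, so thirty diagram rows already give full floating-point accuracy.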
$\hbar$-KP hierarchy. We briefly review the main facts about the KP equations
and solutions. For a detailed explanation see [30]. The KP hierarchy is an
infinite set of non-linear differential equations, with the first equation
given by
$\frac{1}{4}\frac{\partial^{2}F}{\partial
t_{2}^{2}}=\frac{1}{3}\frac{\partial^{2}F}{\partial t_{1}\partial
t_{3}}-\frac{1}{2}\left(\frac{\partial^{2}F}{\partial
t_{1}^{2}}\right)^{2}-\frac{1}{12}\frac{\partial^{4}F}{\partial t_{1}^{4}}$
(8)
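For illustration (a sketch of ours, not taken from the paper), equation (8) can be checked numerically on the standard one-soliton $\tau$-function $\tau=1+a\,e^{\eta_{1}t_{1}+\eta_{2}t_{2}+\eta_{3}t_{3}}$ with $\eta_{k}=p^{k}-q^{k}$; since the exponent is linear in the times, all derivatives of $F=\log\tau$ reduce to derivatives of $\varphi(x)=\log(1+a\,e^{x})$:

```python
from math import exp

# One-soliton tau-function of KP: tau = 1 + a*exp(e1*t1 + e2*t2 + e3*t3),
# e_k = p^k - q^k.  With s = a*e^x / (1 + a*e^x) and phi(x) = log(1 + a*e^x):
#   phi''   = s(1-s),   phi'''' = s(1-s)(1 - 6s + 6s^2)
p, q, a = 1.3, 0.4, 0.7
e1, e2, e3 = p - q, p**2 - q**2, p**3 - q**3

def kp_residual(t1, t2, t3):
    x = e1 * t1 + e2 * t2 + e3 * t3
    s = a * exp(x) / (1 + a * exp(x))
    w = s * (1 - s)
    F22 = e2**2 * w
    F13 = e1 * e3 * w
    F11 = e1**2 * w
    F1111 = e1**4 * w * (1 - 6 * s + 6 * s**2)
    # equation (8) rearranged: should vanish identically
    return F22 / 4 - F13 / 3 + 0.5 * F11**2 + F1111 / 12

print(abs(kp_residual(0.1, -0.2, 0.3)) < 1e-10)  # True
```

The residual vanishes because $3\eta_{2}^{2}=4\eta_{1}\eta_{3}-\eta_{1}^{4}$ holds identically for $\eta_{k}=p^{k}-q^{k}$, which is exactly the dispersion relation hidden in (8).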
It is more common to work with the $\tau$-function
$\tau(\textbf{t})=\exp(F(\textbf{t}))$ than with the free energy $F(\textbf{t})$.
We assume that $\tau(\textbf{t})$ is at least a formal power series in the times
$t_{k}$, and possibly even a convergent series. The entire set of equations of the
hierarchy can be written in terms of the $\tau$-function using the Hirota bilinear
identity (1), which, in turn, is equivalent to the following functional
equation
$(z_{1}-z_{2})\tau^{[z_{1},z_{2}]}\tau^{[z_{3}]}+(z_{2}-z_{3})\tau^{[z_{2},z_{3}]}\tau^{[z_{1}]}+(z_{3}-z_{1})\tau^{[z_{3},z_{1}]}\tau^{[z_{2}]}=0$
(9)
where
$\tau^{[z_{1},\dots,z_{m}]}(\textbf{t})=\tau\left(\textbf{t}+\sum_{i=1}^{m}[z_{i}^{-1}]\right)$ (10)
and the shift of times is the same as in (2). Equation (9) should be satisfied
for arbitrary $z_{1},z_{2},z_{3}$. One can expand the $\tau$-function in the
vicinity of $z_{i}=\infty$ and obtain a partial differential equation for the
$\tau$-function from every term $z_{1}^{-k_{1}}z_{2}^{-k_{2}}z_{3}^{-k_{3}}$.
All formal power series solutions of the KP hierarchy can be decomposed over the
basis of Schur polynomials:
$\tau(\textbf{t})=\sum_{\lambda}C_{\lambda}S_{\lambda}(\textbf{t}).$ (11)
A function written as a formal sum over Schur polynomials is a KP solution if
and only if the coefficients $C_{\lambda}$ satisfy the Plücker relations. The
first such relation is
$C_{[2,2]}C_{[\varnothing]}-C_{[2,1]}C_{[1]}+C_{[2]}C_{[1,1]}=0.$ (12)
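As a sanity check (our own illustration), the Schur coefficients $C_{\lambda}=S_{\lambda}(y_{1},y_{2})$ of the generating function (4) satisfy (12) identically, as they must for (4) to be a $\tau$-function; here the few required two-variable Schur polynomials are hard-coded:

```python
def plucker_residual(y1, y2):
    # two-variable Schur polynomials, specialized from the determinant formula
    S = {
        (): 1.0,
        (1,): y1 + y2,
        (2,): y1 * y1 + y1 * y2 + y2 * y2,
        (1, 1): y1 * y2,
        (2, 1): y1 * y2 * (y1 + y2),
        (2, 2): (y1 * y2) ** 2,
    }
    # the first Plucker relation (12)
    return S[(2, 2)] * S[()] - S[(2, 1)] * S[(1,)] + S[(2,)] * S[(1, 1)]

print(abs(plucker_residual(0.7, -1.3)) < 1e-12)  # True
```

Expanding by hand, the three terms cancel as $y_{1}y_{2}\left[y_{1}y_{2}-(y_{1}+y_{2})^{2}+y_{1}^{2}+y_{1}y_{2}+y_{2}^{2}\right]=0$.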
The simplest way to define the $\hbar$-KP hierarchy is to deform the bilinear
equations (9) for the $\tau$-function of the classical KP hierarchy in the
following way [19, 21]:
$\begin{gathered}(z_{1}-z_{2})\tau^{[z_{1},z_{2}]}\tau^{[z_{3}]}+(z_{2}-z_{3})\tau^{[z_{2},z_{3}]}\tau^{[z_{1}]}+(z_{3}-z_{1})\tau^{[z_{3},z_{1}]}\tau^{[z_{2}]}=0\\\
\tau^{[z_{1},\dots,z_{m}]}(\textbf{t})=\tau\left(\textbf{t}+\hbar\sum_{i=1}^{m}[z_{i}^{-1}]\right)\\\
\textbf{t}+\hbar[z^{-1}]=\left\\{t_{1}+\frac{\hbar}{z},t_{2}+\frac{\hbar}{2z^{2}},t_{3}+\frac{\hbar}{3z^{3}},\dots\right\\}\end{gathered}$
(13)
By setting the parameter $\hbar=1$ we obtain the classical KP hierarchy, and the limit
$\hbar\rightarrow 0$ provides the celebrated dispersionless hierarchy [15, 16].
Another equivalent way to encode all the ($\hbar$-)KP equations is the
differential Fay identity:
$\Delta(z_{1})\Delta(z_{2})F=\log\left(1-\frac{\Delta(z_{1})\partial_{1}F-\Delta(z_{2})\partial_{1}F}{z_{1}-z_{2}}\right),$
(14)
where
$\Delta(z)=\frac{e^{\hbar D(z)}-1}{\hbar},\>\>\>D(z)=\sum\limits_{k\geq
1}\frac{z^{-k}}{k}\partial_{k}.$ (15)
The KP hierarchy can be considered as an infinite set of compatible differential
equations on the $F$-function, where
$F(\textbf{t})=\hbar^{2}\log(\tau(\textbf{t}))$. To describe the equations in
an unfolded form we need two more definitions. The first is the deformed partial
derivative $\partial_{k}^{\hbar}$, defined via the symmetric Schur
polynomials in the t-variables, where each $t_{i}$ is replaced by
$\frac{\hbar}{i}\partial_{i}$:
$\partial_{k}^{\hbar}:=\frac{k}{\hbar}S_{[k]}(\hbar\widetilde{\partial}),\;\;\;\;\;\widetilde{\partial}=\left\\{\partial_{1},\frac{1}{2}\partial_{2},\frac{1}{3}\partial_{3},\dots\right\\}$
(16)
The limit $\hbar\rightarrow 0$ transforms the deformed derivatives
$\partial_{k}^{\hbar}$ into the usual ones $\partial_{k}$.
The next definition is the main topic of our study. Let us define the
combinatorial coefficients $P_{i,j}(s_{1},\dots,s_{m})$ as the number of pairs of
sequences $(i_{1},\dots,i_{m})$ and $(j_{1},\dots,j_{m})$ of positive integers
such that $i_{1}+\dots+i_{m}=i$, $j_{1}+\dots+j_{m}=j$ and
$i_{k}+j_{k}=s_{k}+1$. These coefficients can also be understood as the number
of matrices of size $2\times m$ with fixed sums over rows and columns:
$\boxed{P_{i,j}(s_{1},\dots,s_{m}):=\\#\left\\{\begin{pmatrix}i_{1}&\dots&i_{m}\\\
j_{1}&\dots&j_{m}\\\
\end{pmatrix}\Bigg{|}i_{k},j_{k}\in\mathbb{N},\begin{array}[]{cc}i_{1}+\dots+i_{m}=i\\\
j_{1}+\dots+j_{m}=j\\\ i_{k}+j_{k}=s_{k}+1\;\;\forall
k\in\overline{1,m}\end{array}\right\\}}$ (17)
Coefficients (17) are fundamental in the following sense: they allow us to
express all the KP equations in an explicit form and fully determine the
$\hbar$-KP hierarchy.
Following [19, Lemma 3.2], the $\hbar$-KP hierarchy can be rewritten as the
system of equations:
$\frac{\partial^{\hbar}_{i}\partial^{\hbar}_{j}F}{ij}=\sum\limits_{m\geq
1}\frac{(-1)^{m+1}}{m}\sum\limits_{s_{1},\dots,s_{m}\geq
1}P_{i,j}(s_{1},\dots,s_{m})\frac{\partial_{x}\partial^{\hbar}_{s_{1}}F}{s_{1}}\dots\frac{\partial_{x}\partial^{\hbar}_{s_{m}}F}{s_{m}}$
(18)
for the function $F(x;\mathbf{t})=F(t_{1}+x,t_{2},t_{3},\dots)$. Note that the sum
on the r.h.s. of (18) is finite: for fixed $i$ and $j$ there is a restriction
on the $s_{k}$. The sum of all matrix elements computed over rows must coincide
with the sum computed over columns, $i+j=s_{1}+\dots+s_{m}+m$, so for large
enough values of $s_{k}$, or a large number $m$, the coefficients
$P_{i,j}(s_{1},\dots,s_{m})$ are equal to zero.
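The definition (17) is directly computable; a small brute-force sketch (helper names ours) that enumerates the top row and reconstructs the bottom one from $i_{k}+j_{k}=s_{k}+1$:

```python
from itertools import product

def P(i, j, s):
    """Number of 2 x m positive-integer matrices with row sums i, j
    and column sums s_k + 1, as in definition (17)."""
    count = 0
    for top in product(*[range(1, sk + 1) for sk in s]):
        bottom = [sk + 1 - ik for sk, ik in zip(s, top)]
        if sum(top) == i and sum(bottom) == j:
            count += 1
    return count

print(P(1, 1, (1,)))    # 1: the single matrix with rows (1) and (1)
print(P(3, 3, (2, 2)))  # 2: top rows (1,2) and (2,1)
print(P(2, 2, (1,)))    # 0: violates i + j = s_1 + m
```

The last example illustrates the finiteness remark above: whenever $i+j\neq s_{1}+\dots+s_{m}+m$ the count vanishes.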
The next step is to determine all the solutions of the hierarchy. To this
end we need Cauchy-like data, which is a set of functions of the variable $x$:
$\partial_{k}^{\hbar}F^{\hbar}(x,\textbf{t})\lvert_{\textbf{t}=0}=f_{k}^{\hbar}(x)$.
If we consider formal solutions, i.e. not necessarily convergent series, any
solution can be expressed through the Cauchy-like data using the universal
coefficients $P^{\hbar}_{\lambda}\begin{pmatrix}s_{1}\dots s_{m}\\\ l_{1}\dots
l_{m}\end{pmatrix}$, which were mentioned before as higher analogs of the
coefficients $P_{i,j}(s_{1},\dots,s_{m})$.
It was shown by Natanzon and Zabrodin [19, Theorem 4.3] that for an arbitrary
set of smooth functions
$\textbf{f}=\\{f_{0}^{\hbar}(x),f_{1}^{\hbar}(x),\dots\\}$
there exists a unique solution $F^{\hbar}(x,\textbf{t})$ of the $\hbar$-KP
hierarchy with Cauchy-like data f. This solution is of the form
$F^{\hbar}(x,\textbf{t})=f_{0}^{\hbar}(x)+\sum_{|\lambda|\geq
1}\frac{f_{\lambda}^{\hbar}(x)}{\sigma(\lambda)}t_{\lambda}^{\hbar}$ (19)
where $f_{[k]}^{\hbar}(x)=f_{k}^{\hbar}(x)$ and
$f_{\lambda}^{\hbar}(x)=\sum_{m\geq
1}\sum_{\begin{subarray}{c}s_{1}+l_{1}+\dots+s_{m}+l_{m}=|\lambda|\\\ 1\leq
s_{i};\;1\leq l_{i}\leq
l(\lambda)-1\end{subarray}}P^{\hbar}_{\lambda}\begin{pmatrix}s_{1}\dots
s_{m}\\\ l_{1}\dots
l_{m}\end{pmatrix}\partial_{x}^{l_{1}}f_{s_{1}}^{\hbar}(x)\dots\partial_{x}^{l_{m}}f_{s_{m}}^{\hbar}(x)$
(20)
for $l(\lambda)>1$. Here $\sigma(\lambda)=\prod_{i\geq 1}m_{i}!$, where exactly
$m_{i}$ parts of the partition $\lambda$ are equal to $i$.
The full recursive definition of the universal coefficients $P_{\lambda}^{\hbar}$
is quite unwieldy and can be found in [19]. In this paper we are interested in
the simplest coefficients, those with $l_{1}=\dots=l_{m}=1$ and $\lambda=[i,j]$. They are
defined as the coefficients (17) with a normalization factor:
$P^{\hbar}_{[i,j]}\begin{pmatrix}s_{1}&\dots&s_{m}\\\
1&\dots&1\end{pmatrix}:=\frac{(-1)^{m+1}ij}{m\cdot s_{1}\dots
s_{m}}P_{i,j}(s_{1},\dots,s_{m})$ (21)
The other coefficients, with $\ell(\lambda)\geq 2$ and $l_{i}>1$, can be
obtained from (17) using the recursion relations.
## 3 Remarkable properties of combinatorial coefficients
$P_{i,j}(s_{1},\dots,s_{m})$
As claimed in (18), we can rewrite all the KP equations with the help of the
combinatorial coefficients $P_{i,j}(s_{1},\dots,s_{m})$. So it is natural to
ask whether there is some connection between the properties of the KP hierarchy and the
properties of these combinatorial objects. Therefore, in this section we
recall the most prominent properties of the constants
$P_{i,j}(s_{1},\dots,s_{m})$, as well as the combinatorial context around them.
We postpone the discussion of the connection with KP till the next section.
The coefficients $P_{i,j}(s_{1},\dots,s_{m})$ and their n-point generalizations
(23) in fact arise in the theory of flow networks [31] and are very well
studied. A standard problem in the theory of flow networks is finding the
maximum flow, which gives the largest total flow from the source to the sink.
We are interested here in a simpler question: the number of different
flows on the graph where all $n$ sources and $m$ sinks are connected by edges,
which is exactly the coefficient $P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})$.
Since the combinatorial coefficients have a rich combinatorial structure,
there are many different ways to calculate them, each having
potential implications for our topic: an explicit formula as a sum over the
vertices of a hypercube, a recursion formula and a generating function.
* •
First of all, there is an explicit approach to the calculation of the coefficients
using a geometric interpretation and the inclusion-exclusion principle:
the combinatorial coefficients $P_{i,j}(s_{1},\dots,s_{m})$ can be represented as
a sum over the vertices of the $m$-dimensional hypercube
$P_{i,j}(s_{1},\dots,s_{m})=\delta_{s_{1}+\dots+s_{m}+m,i+j}\sum\limits_{\\{\sigma_{k}=\\{0,1\\}|k=1,\dots,m\\}}(-1)^{\sigma_{1}+\dots+\sigma_{m}}{i-\sigma_{1}s_{1}-\dots-\sigma_{m}s_{m}-1\choose
m-1}$ (22)
The cube is parametrized by the sequences of zeros and ones
$(\sigma_{1},\dots,\sigma_{m})$. Note that we take the binomial coefficient
${m\choose k}$ to be zero if $m<k$, $m<0$ or $k<0$ (see Appendix A for
the details of the derivation).
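A sketch of ours implementing formula (22) (the binomial convention above via Python's `math.comb`), cross-checked against the brute-force count from definition (17):

```python
from itertools import product
from math import comb

def binom(n, k):
    # the convention stated above: zero whenever n < k, n < 0 or k < 0
    return comb(n, k) if 0 <= k <= n else 0

def P_explicit(i, j, s):
    m = len(s)
    if sum(s) + m != i + j:  # the Kronecker delta in (22)
        return 0
    total = 0
    for sigma in product((0, 1), repeat=m):  # vertices of the m-cube
        shift = sum(sg * sk for sg, sk in zip(sigma, s))
        total += (-1) ** sum(sigma) * binom(i - shift - 1, m - 1)
    return total

def P_brute(i, j, s):
    # direct enumeration per definition (17)
    return sum(1 for top in product(*[range(1, sk + 1) for sk in s])
               if sum(top) == i
               and sum(sk + 1 - ik for sk, ik in zip(s, top)) == j)

print(P_explicit(3, 3, (2, 2)), P_brute(3, 3, (2, 2)))  # 2 2
```

Inclusion-exclusion is doing the work here: the $\sigma=0$ vertex counts all compositions of $i$ into $m$ parts, and each $\sigma_{k}=1$ subtracts those violating $i_{k}\leq s_{k}$.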
* •
There is a natural generalization of the combinatorial coefficients.
Matrices of size $2\times m$ are distinguished in KP theory,
but from the point of view of combinatorics one may consider the number of
matrices of size $n\times m$ with fixed sums over rows and columns:
$P_{i_{1}\dots
i_{n}}(s_{1},\dots,s_{m}):=\\#\left\\{\begin{pmatrix}i_{1}^{(1)}&\dots&i_{m}^{(1)}\\\
\vdots&\ddots&\vdots\\\ i_{1}^{(n)}&\dots&i_{m}^{(n)}\\\
\end{pmatrix}\Bigg{|}i_{k}^{(l)}\in\mathbb{N},\;\begin{array}[]{cc}i_{1}^{(l)}+\dots+i_{m}^{(l)}=i_{l},&\forall
l\in\overline{1,n}\\\ i_{k}^{(1)}+\dots+i_{k}^{(n)}=s_{k}+n-1,&\forall
k\in\overline{1,m}\end{array}\right\\}$ (23)
Such objects arise in the simplest flow network problem: this is the number of
integer flows on a complete bipartite graph [31].
Note that the coefficients so defined are symmetric under permutations within the set
of parameters $s_{k}$ and within the set of indices $i_{l}$. Thus, one may
consider ordered sets of indices and parameters labeled by Young diagrams
$\lambda$ and $\mu$. More information about the combinatorial meaning and
various applications of such coefficients, denoted $N(\lambda,\mu)$, can
be found in [32].
This interpretation in terms of the number of certain matrices (23) allows one
to obtain the following recursion relations [33]:
$P_{i_{1}\dots
i_{n}}(s_{1},\dots,s_{m})=\sum\limits_{\left\\{{i_{1}^{(n)}+\dots+i_{m}^{(n)}=i_{n}\atop
1\leq i_{k}^{(n)}\leq s_{k}|k=1,\dots,m}\right\\}}P_{i_{1}\dots
i_{n-1}}(s_{1}-i_{1}^{(n)}+1,\dots,s_{m}-i_{m}^{(n)}+1)$ (24)
and
$P_{i_{1}\dots
i_{n}}(s_{1},\dots,s_{m})=\sum\limits_{\left\\{{i_{m}^{(1)}+\dots+i_{m}^{(n)}=s_{m}+n-1\atop
1\leq i_{m}^{(l)}\leq
i_{l}-m+1|l=1,\dots,n}\right\\}}P_{i_{1}-i_{m}^{(1)},\dots,i_{n}-i_{m}^{(n)}}(s_{1},\dots,s_{m-1})$
(25)
Note that (24) and (25) are the same up to the symmetry between the indices
$\\{i_{l}\\}$ and the parameters $\\{s_{k}\\}$ mentioned above.
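A brute-force sketch (our own helpers) of the generalized count (23), enumerating each column as a composition of $s_{k}+n-1$ into $n$ positive parts, together with a check of the recursion (24), which strips the last row:

```python
from itertools import product

def compositions(total, parts):
    # positive-integer compositions of `total` into exactly `parts` parts
    if parts == 1:
        if total >= 1:
            yield (total,)
        return
    for first in range(1, total - parts + 2):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def P_gen(rows, s):
    """Number of n x m positive-integer matrices with row sums `rows`
    and column sums s_k + n - 1, as in (23)."""
    n = len(rows)
    cols = [list(compositions(sk + n - 1, n)) for sk in s]
    return sum(1 for cs in product(*cols)
               if all(sum(c[l] for c in cs) == rows[l] for l in range(n)))

def recursion_rhs(rows, s):
    # right-hand side of (24): sum over the entries of the removed last row
    *head, i_n = rows
    total = 0
    for last in compositions(i_n, len(s)):
        if all(1 <= last[k] <= s[k] for k in range(len(s))):
            total += P_gen(tuple(head),
                           tuple(s[k] - last[k] + 1 for k in range(len(s))))
    return total

print(P_gen((3, 3, 3), (2, 3)), recursion_rhs((3, 3, 3), (2, 3)))  # 3 3
```

For $n=2$ the function reduces to the coefficients of definition (17), e.g. `P_gen((3, 3), (2, 2))` returns 2.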
* •
The last approach to the calculation of the combinatorial coefficients is by means
of generating functions. We can construct such a function in two different ways;
both highlight interesting properties on the KP hierarchy side, which we
discuss in section 4. First, we can write it in the following way:
$\widetilde{G}_{nm}(\mathbf{x},\mathbf{y})=\sum\limits_{i_{1}\geq
1,\dots,i_{n}\geq 1}y_{1}^{i_{1}}\dots y_{n}^{i_{n}}\sum\limits_{s_{1}\geq
1,\dots,s_{m}\geq 1}x_{1}^{s_{1}}\dots x_{m}^{s_{m}}P_{i_{1}\dots
i_{n}}(s_{1},\dots,s_{m})=\left(\prod\limits_{l=1}^{m}x_{l}\right)\left(\prod\limits_{k=1}^{n}y_{k}^{m}\right)\sum\limits_{\lambda}S_{\lambda}(\mathbf{x})S_{\lambda}(\mathbf{y})$
(26)
which can be rewritten more naturally with the help of shifts:
$i_{k}^{1}+\dots+i_{k}^{m}=i_{k}\mathbf{+m}$ for $k=1,\dots,n$ and
$i_{1}^{l}+\dots+i_{n}^{l}=s_{l}\mathbf{+n}$ for $l=1,\dots,m$:
$G_{mn}(\mathbf{x},\mathbf{y})=\sum\limits_{\lambda}S_{\lambda}(\mathbf{x})S_{\lambda}(\mathbf{y})=\prod\limits_{i,j}\frac{1}{1-x_{i}y_{j}}$
(27)
This formula is well known [32], but in Appendix B we give a short calculation
showing how it follows from the recursion relations (24).
We also consider another generating function, in the variables $p_{k}$:
$H(\mathbf{p};y_{1},y_{2})=\sum\limits_{m\geq
1}\frac{(-1)^{m+1}}{m}\sum\limits_{i,j\geq 1}y_{1}^{i}y_{2}^{j}\sum\limits_{s_{1},\dots,s_{m}\geq 1}
p_{s_{1}}\dots
p_{s_{m}}P_{ij}(s_{1},\dots,s_{m})=\ln\left(1+y_{1}y_{2}\sum\limits_{k=1}^{\infty}p_{k}\frac{y_{1}^{k}-y_{2}^{k}}{y_{1}-y_{2}}\right)$
(28)
The choice of these variables is motivated by formula (18), where the factors
$\partial_{x}\partial_{s}F$ enter the equation in the same way as the $p_{s}$
enter this generating function. Formula (28) can be obtained from
the first generating function (27) by the replacement $p_{k}=\sum_{i}x_{i}^{k}$
(a more detailed calculation can be found in Appendix B).
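Identity (28) can be tested term by term (a sketch of ours; the sample values of $p_{k}$ are arbitrary rationals chosen just for the test): expand $\ln(1+u)$ as a truncated bivariate series in $y_{1},y_{2}$ over exact rationals and compare its coefficients with the combinatorial sums on the left-hand side:

```python
from fractions import Fraction
from itertools import product

D = 5  # truncate the series at degree D in each of y1, y2

def mul(a, b):
    # product of two truncated bivariate series stored as {(i, j): coeff}
    out = {}
    for (i1, j1), c1 in a.items():
        for (i2, j2), c2 in b.items():
            if i1 + i2 <= D and j1 + j2 <= D:
                key = (i1 + i2, j1 + j2)
                out[key] = out.get(key, 0) + c1 * c2
    return out

# arbitrary rational sample values for the p_k (an assumption for the test)
p = {k: Fraction(k, k + 1) for k in range(1, 2 * D)}

# u = y1*y2 * sum_k p_k (y1^k - y2^k)/(y1 - y2)
u = {}
for k, pk in p.items():
    for a in range(k):  # (y1^k - y2^k)/(y1 - y2) = sum_{a+b=k-1} y1^a y2^b
        i, j = a + 1, k - a
        if i <= D and j <= D:
            u[(i, j)] = u.get((i, j), 0) + pk

# log(1 + u) = sum_{m>=1} (-1)^(m+1) u^m / m
log_u, upow = {}, {(0, 0): Fraction(1)}
for m in range(1, 2 * D + 1):
    upow = mul(upow, u)
    for key, c in upow.items():
        log_u[key] = log_u.get(key, 0) + Fraction((-1) ** (m + 1), m) * c

def P(i, j, s):
    # brute-force count per definition (17)
    return sum(1 for top in product(*[range(1, sk + 1) for sk in s])
               if sum(top) == i
               and sum(sk + 1 - ik for sk, ik in zip(s, top)) == j)

def lhs(i, j):
    # the combinatorial side of (28) at fixed powers y1^i y2^j
    total = Fraction(0)
    for m in range(1, i + j):
        for s in product(range(1, i + j), repeat=m):
            if sum(s) + m == i + j:
                c = P(i, j, s)
                if c:
                    term = Fraction((-1) ** (m + 1), m) * c
                    for sk in s:
                        term *= p[sk]
                    total += term
    return total

print(lhs(2, 2) == log_u[(2, 2)])  # True: p3 - p1^2/2 on both sides
```

Since the series $u$ has no constant term, the truncation is exact for all coefficients with $i,j\leq D$, so the comparison uses exact rational arithmetic throughout.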
## 4 Connection with KP hierarchy
Now we discuss what the explicit form of the generating functions
(27), (28) means for the KP hierarchy. First of all, we argue that the
generating function (27) becomes a KP tau-function after a simple change
of variables; this becomes effective in section 6.2, where we describe
possible deformations.
Second, the other generating function (28) allows one to easily derive the
Fay-identity form of the KP hierarchy.
We also discuss here an interpretation of the combinatorial formula (20) in terms
of solutions that can be restored using topological recursion.
* •
The generating function (27) of the redefined coefficients can be rewritten in
other variables by the replacement $kt_{k}=\sum\limits_{i}x_{i}^{k}$ and
$k\bar{t}_{k}=\sum\limits_{i}y_{i}^{k}$. In these variables, using the Cauchy-
Littlewood identity (7), we obtain:
$G(\mathbf{x},\mathbf{y})=\sum_{\lambda}S_{\lambda}(\mathbf{t})S_{\lambda}(\mathbf{\bar{t}})=e^{\sum_{k}kt_{k}\bar{t}_{k}}$
(29)
which is trivially a $\tau$-function of the KP (or Toda) hierarchy, where
$\mathbf{t}$ and $\mathbf{\bar{t}}$ are the corresponding times. So the generating
function for the coefficients, which define the $\hbar$-KP hierarchy, is itself the trivial
$\tau$-function. We return to this property in section 6, where we try to
deform the combinatorial coefficients.
* •
The second generating function allows us to write the analog of the Fay
identity in the following way: it gives the generating function for all the KP
equations (18) under the replacement
$p_{k}\rightarrow\frac{\partial\partial_{k}^{\hbar}F}{k}$:
$\frac{\partial^{\hbar}_{i}\partial^{\hbar}_{j}F}{ij}=\left[y_{1}^{i}y_{2}^{j}\right]\ln\left(1+y_{1}y_{2}\sum\limits_{k=1}^{\infty}\frac{\partial\partial^{\hbar}_{k}F}{k}\frac{y_{1}^{k}-y_{2}^{k}}{y_{1}-y_{2}}\right)$
(30)
On the other hand, the Fay identity for the $\hbar$-KP hierarchy has the form
(14). Using the replacement $z_{i}\rightarrow\frac{1}{y_{i}}$ and the fact
that $\partial_{1}=\partial_{x}=\partial$, we obtain exactly (30). The explicit
derivation of (30) from the Fay identity can be found in [19].
As we can see, the combinatorial properties of the coefficients
$P_{i,j}(s_{1},\dots,s_{m})$ in the form (24) lead to the generating function
(28), which is exactly the Fay identity in KP terms.
* •
Let us now turn to the question of which restrictions the explicit form of the KP
equations imposes on topological recursion. Many solutions of the KP
hierarchy (e.g., simple Hurwitz numbers [6], the Hermitian matrix model [34, 35,
36], the Kontsevich $\tau$-function [1]) allow one to construct multi-
differentials, which are related by the so-called spectral curve topological
recursion [22, 23, 24, 25, 26, 27, 28]. The initial data for the recursion
procedure are the 1-point and 2-point functions of genus 0, which are expected to be
independent. However, naively, formula (20) implies that the two-point
functions $f^{\hbar}_{\lambda_{1},\lambda_{2}}$ can be expressed via the one-point
functions $f^{\hbar}_{k}$.
Let us recall the main concepts of topological recursion. This approach
first arose in the theory of matrix models, where all correlators have a
natural genus expansion [37, 38]. In such theories we consider the following
correlators, called resolvents:
$W_{n}(p_{1},\dots,p_{n})=\left\langle{\rm Tr}\,\frac{1}{p_{1}-X}\dots\rm
Tr\,\frac{1}{p_{n}-X}\right\rangle_{Conn}$ (31)
where we integrate over matrices $X$ with some measure, and the subscript $Conn$
means that only connected diagrams are taken into account. The resolvents also
admit a genus expansion
$W_{n}=\sum\limits_{g}\hbar^{2g}W_{g,n}$ (32)
Topological recursion allows one to recover all the resolvents $W_{g,n}$
recursively from those of lower genus, given the initial data: the spectral curve, $W_{0,1}$
and $W_{0,2}$.
Moreover, in many cases where topological recursion is applicable, the
logarithm of the partition function, $F=\hbar^{2}\log(Z)$, turns out to be a
solution of the $\hbar$-KP hierarchy. We can also represent the resolvents via $F$ in the
following way
$W(p_{1},\dots,p_{s})=-\frac{\partial}{\partial
V(p_{1})}\dots\frac{\partial}{\partial V(p_{s})}F\Big{|}_{\mathbf{t}=0,x=0}$
(33)
where
$\frac{\partial}{\partial
V(p)}=-\sum\limits_{j=1}^{\infty}\frac{1}{p^{j+1}}\frac{\partial}{\partial
t_{j}}$ (34)
is the loop insertion operator [26].
Returning to the Natanzon-Zabrodin formulation of the KP hierarchy, we can consider
the Cauchy-like data as genus-zero resolvents, since in the limit
$\hbar\rightarrow 0$ formula (19) gives:
$F^{\hbar=0}(x,\textbf{t})=f_{0}^{\hbar=0}(x)+\sum_{|\lambda|\geq
1}\frac{f_{\lambda}^{\hbar=0}(x)}{\sigma(\lambda)}t_{\lambda}$ (35)
and
$W_{0}(p_{1},\dots,p_{n})=(-1)^{n}\sum\limits_{\lambda_{1},\dots,\lambda_{n}\geq 1}\frac{1}{p_{1}^{\lambda_{1}+1}}\dots\frac{1}{p_{n}^{\lambda_{n}+1}}\partial_{\lambda_{1}}\dots\partial_{\lambda_{n}}F\Big{|}_{\mathbf{t}=0,x=0,\hbar=0}=(-1)^{n}\sum\limits_{\lambda_{1}\geq\dots\geq\lambda_{n}\geq 1}\frac{1}{p_{1}^{\lambda_{1}+1}}\dots\frac{1}{p_{n}^{\lambda_{n}+1}}f^{\hbar=0}_{\lambda}\Big{|}_{x=0}$
(36)
Now it is clear that the KP hierarchy imposes some restrictions, since this formula
connects the resolvents with the functions $\partial f_{k}|_{x=0}$, which
correspond to two-point resolvents in the following way. Let
$W_{0}(p_{1},p_{2})=\sum\limits_{\lambda_{1}\geq\lambda_{2}\geq
1}\frac{1}{p_{1}^{\lambda_{1}+1}p_{2}^{\lambda_{2}+1}}\omega_{\lambda_{1}\lambda_{2}}$
(37)
then $f^{\hbar=0}_{\lambda_{1}\lambda_{2}}=\omega_{\lambda_{1}\lambda_{2}}$
and $\omega_{\lambda_{1}1}=\partial f^{\hbar=0}_{\lambda_{1}}|_{x=0}$. It is
now possible to write a nontrivial condition on the two-point resolvents using (20)
for $\ell(\lambda)=2$:
$\boxed{\frac{\omega_{\lambda_{1},\lambda_{2}}}{\lambda_{1}\lambda_{2}}=\left[y_{1}^{\lambda_{1}}y_{2}^{\lambda_{2}}\right]\ln\left(1+y_{1}y_{2}\sum\limits_{k=1}^{\infty}\frac{\omega_{k,1}}{k}\frac{y_{1}^{k}-y_{2}^{k}}{y_{1}-y_{2}}\right)}$
(38)
Summarizing, the combinatorial view on the KP hierarchy allows us to obtain a
nontrivial condition on those solutions of the KP hierarchy that admit recovery via
topological recursion. This equation means that all genus zero
two-point resolvents can be expressed using only the data $\omega_{k,1}$. It would be very
interesting to see whether these KP restrictions are related to the
decomposition property ([39, Lemma 4.1]), which under certain mild assumptions
holds for $W_{0,2}$. This question is left for further research.
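As an illustration, condition (38) can be tested order by order with a computer algebra system. For arbitrarily chosen sample values of $\omega_{k,1}$ (hypothetical data, not taken from any particular solution), the coefficient extraction reproduces the input one-point data and yields symmetric two-point data; a minimal sketch (helper names are ours):

```python
import sympy as sp

y1, y2, t = sp.symbols('y1 y2 t')
K = 5  # truncation order (our choice)
w1 = {1: 2, 2: -1, 3: 3, 4: 1, 5: 2}  # arbitrary sample data for omega_{k,1}

# argument of the logarithm in (38); (y1^k - y2^k)/(y1 - y2) expanded as a polynomial
arg = y1 * y2 * sum(sp.Rational(w1[k], k)
                    * sum(y1**a * y2**(k - 1 - a) for a in range(k))
                    for k in w1)
# expand log(1 + arg) up to total degree 2K via an auxiliary scaling variable t
Ft = sp.expand(sp.series(sp.log(1 + arg).subs({y1: t * y1, y2: t * y2}),
                         t, 0, 2 * K + 1).removeO())

def omega(l1, l2):
    """omega_{l1,l2} = l1 * l2 * [y1^l1 y2^l2] log(1 + ...), per (38)."""
    return l1 * l2 * Ft.coeff(t, l1 + l2).coeff(y1, l1).coeff(y2, l2)

# setting lambda_2 = 1 must return the input one-point data
assert all(omega(k, 1) == w1[k] for k in range(1, 5))
# symmetry of the two-point data is automatic, since the argument is symmetric
assert omega(3, 2) == omega(2, 3)
```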
## 5 Eigenvalue model
In this section we provide a complete description of the combinatorial
coefficients $P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})$ in terms of an
eigenvalue model. The model is an integral over eigenvalues of a matrix with a
simple measure. The combinatorial coefficients turn out to be certain correlators in
the model, i.e. averages of products of $m$ symmetric Schur polynomials
$\langle S_{s_{1}-1}\dots S_{s_{m}-1}\rangle$. An arbitrary correlator in the
model may be expressed with the help of the full basis of observables. The
basis is obtained as a natural generalization of the combinatorial coefficients
$P_{i_{1},\dots,i_{n}}(s)$ with one parameter $s$ and coincides with a subset
of Kostka numbers. The partition function of the model can be calculated
explicitly. A common property of matrix models is the existence of Ward
identities, which can sometimes be solved recursively. In this model the Ward identities
give new recursion relations on the combinatorial coefficients
$P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})$.
The model takes its simplest form for slightly modified coefficients
$P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})$ with a symmetric definition for both the
lower indices $i_{k}$ and the integers $s_{j}$:
$\begin{gathered}i_{1}^{(k)}+\dots+i_{m}^{(k)}=i_{k}+m-1,\;\;\;1\leq k\leq
n\\\ i_{j}^{(1)}+\dots+i_{j}^{(n)}=s_{j}+n-1,\;\;\;1\leq j\leq
m\end{gathered}$ (39)
Note that this definition differs from (23) by a shift of $i_{k}$. However,
both definitions provide coefficients that are in one-to-one correspondence under
the shift of indices, so we denote them by
$P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})$ as well.
Let us introduce an eigenvalue model
$\mathcal{Z}_{n}(\textbf{t})=\frac{1}{(2\pi i)^{n}}\oint dz_{1}\dots\oint
dz_{n}\left(\prod_{k=1}^{n}z_{k}^{-i_{k}}\right)\exp\left(\sum_{k=1}^{\infty}t_{k}{\rm
Tr}\,Z^{k}\right),$ (40)
where $z_{k}$ are complex variables, the integration contours are unit circles and
$Z$ is the diagonal matrix $Z=\text{diag}(z_{1},\dots,z_{n})$. Using the Cauchy-
Littlewood identity (7), we rewrite it in the form
$\mathcal{Z}_{n}(\textbf{t})=\sum_{\lambda}\left\\{\frac{1}{(2\pi i)^{n}}\oint
dz_{1}\dots\oint
dz_{n}\left(\prod_{k=1}^{n}z_{k}^{-i_{k}}\right)S_{\lambda}(Z)\right\\}S_{\lambda}(\textbf{t})\equiv\sum_{\lambda}\langle
S_{\lambda}\rangle S_{\lambda}(\textbf{t}),$ (41)
which can be understood as a generating function for the correlators $\langle
S_{\lambda}\rangle$. The combinatorial coefficients
$P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})$ turn out to be correlators of a
specific form in this eigenvalue model.
Any combinatorial coefficient $P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})$ can
be represented as an average of $m$ symmetric Schur polynomials:
$P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})=\frac{1}{(2\pi i)^{n}}\oint
dz_{1}\dots\oint
dz_{n}\left(\prod_{k=1}^{n}z_{k}^{-i_{k}}\right)\left(\prod_{j=1}^{m}S_{s_{j}-1}(Z)\right)\equiv\langle
S_{s_{1}-1}\dots S_{s_{m}-1}\rangle$ (42)
Although this integral seems complicated, it, in fact, has the simple meaning of
extracting a certain coefficient in front of certain powers of the $z$-variables of the
integrand: $P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})=[z_{1}^{i_{1}-1}\dots
z_{n}^{i_{n}-1}]\left(\prod_{j=1}^{m}S_{s_{j}-1}(Z)\right)$. This formula can
be obtained as follows. Restrictions (39) allow us to represent the definition
of the combinatorial coefficients as a sum over products of delta-symbols (each
restriction corresponds to one delta-symbol):
$P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})=\sum_{\begin{subarray}{c}i_{1}^{(1)}\geq
1\\\ \dots\\\ i_{m}^{(1)}\geq
1\end{subarray}}\dots\sum_{\begin{subarray}{c}i_{1}^{(n)}\geq 1\\\ \dots\\\
i_{m}^{(n)}\geq
1\end{subarray}}\left(\prod_{k=1}^{n}\delta_{i_{1}^{(1)}+\dots+i_{m}^{(1)},i_{k}+m-1}\right)\left(\prod_{j=1}^{m}\delta_{i_{j}^{(1)}+\dots+i_{j}^{(n)},s_{j}+n-1}\right)$
(43)
The delta-symbols are replaced with contour integrals with the help of the simple
relation
$\delta_{n,m}=\frac{1}{2\pi i}\oint dzz^{n-m-1}.$ (44)
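Relation (44) is simply the residue of $z^{n-m-1}$ at the origin. As a quick numerical illustration, one may discretize the unit circle (a sketch; the helper name is ours):

```python
import numpy as np

def delta_contour(n, m, points=2048):
    """Approximate (1 / 2 pi i) * contour integral of z^(n-m-1) over |z| = 1."""
    theta = np.linspace(0.0, 2.0 * np.pi, points, endpoint=False)
    z = np.exp(1j * theta)
    dz = 1j * z * (2.0 * np.pi / points)  # dz = i e^{i theta} d theta
    return np.sum(z ** (n - m - 1) * dz) / (2.0j * np.pi)

assert abs(delta_contour(3, 3) - 1.0) < 1e-12  # n = m gives 1
assert abs(delta_contour(5, 2)) < 1e-12        # n != m gives 0
```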
We replace the first $n$ delta-symbols with integrals in this way. The obtained
expression is of the form
$P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})=\frac{1}{(2\pi i)^{n}}\oint
dz_{1}\dots\oint
dz_{n}\left(\prod_{k=1}^{n}z_{k}^{-i_{k}}\right)\prod_{j=1}^{m}\left[\sum_{i_{j}^{(1)}\geq
1}\dots\sum_{i_{j}^{(n)}\geq 1}z_{1}^{i_{j}^{(1)}-1}\dots
z_{n}^{i_{j}^{(n)}-1}\delta_{i_{j}^{(1)}+\dots+i_{j}^{(n)},s_{j}+n-1}\right]$
(45)
The expression in square brackets can be evaluated independently for each $j$
and depends only on $s_{j}$. It is equal to the Schur polynomial
$S_{s_{j}-1}(z_{1},\dots,z_{n})$. Detailed calculations are presented in
Appendix C. Thus, we have proved formula (42).
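Formula (42) is easy to test numerically: by (39), $P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})$ counts positive integer $n\times m$ matrices with row sums $i_{k}+m-1$ and column sums $s_{j}+n-1$, and this count should match the coefficient of $z_{1}^{i_{1}-1}\dots z_{n}^{i_{n}-1}$ in $\prod_{j}S_{s_{j}-1}(Z)$. A brute-force sketch (helper names are ours):

```python
from itertools import product
from collections import defaultdict

def count_matrices(i_list, s_list):
    """P per (39): non-negative n x m matrices with row sums i_k - 1
    and column sums s_j - 1 (equivalent to the positive-matrix count)."""
    n, m = len(i_list), len(s_list)
    rows = [i - 1 for i in i_list]
    cols = [s - 1 for s in s_list]
    def compositions(total, parts):
        if parts == 1:
            yield (total,)
            return
        for first in range(total + 1):
            for rest in compositions(total - first, parts - 1):
                yield (first,) + rest
    count = 0
    for matrix in product(*(list(compositions(r, m)) for r in rows)):
        if all(sum(row[j] for row in matrix) == cols[j] for j in range(m)):
            count += 1
    return count

def h_poly(d, n):
    """Complete homogeneous symmetric polynomial h_d = S_d in n variables,
    stored as a dict {exponent tuple: coefficient}."""
    poly = defaultdict(int)
    def rec(pos, remaining, expo):
        if pos == n:
            if remaining == 0:
                poly[tuple(expo)] += 1
            return
        for e in range(remaining + 1):
            rec(pos + 1, remaining - e, expo + [e])
    rec(0, d, [])
    return poly

def mul(p, q):
    r = defaultdict(int)
    for ea, ca in p.items():
        for eb, cb in q.items():
            r[tuple(a + b for a, b in zip(ea, eb))] += ca * cb
    return r

def coeff_extraction(i_list, s_list):
    """[z^{i-1}] prod_j S_{s_j - 1}(z), per formula (42)."""
    n = len(i_list)
    poly = {tuple([0] * n): 1}
    for s in s_list:
        poly = mul(poly, h_poly(s - 1, n))
    return poly.get(tuple(i - 1 for i in i_list), 0)

# the two descriptions agree on small examples
assert count_matrices([2, 2], [2, 2]) == coeff_extraction([2, 2], [2, 2]) == 2
assert count_matrices([3, 2], [2, 3]) == coeff_extraction([3, 2], [2, 3])
```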
Eigenvalue model (41) is a natural generalization of the coefficients
$P_{i_{1},\dots,i_{n}}(s)=\langle S_{s-1}\rangle$, i.e. one may consider
averages of arbitrary Schur polynomials $\langle S_{\lambda}\rangle$, not
only symmetric ones. All other coefficients, such as
$P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})=\langle S_{s_{1}-1}\dots
S_{s_{m}-1}\rangle$ or their natural generalizations $\langle
S_{\lambda_{1}}\dots S_{\lambda_{m}}\rangle$, can be expressed as
linear combinations of $\langle S_{\lambda}\rangle$: a product of Schur
polynomials decomposes into a linear combination of single Schur polynomials
with Littlewood-Richardson coefficients [29]. Thus, the correlators $\langle
S_{\lambda}\rangle$ form an appropriate full basis in the space of
observables of the model.
Moreover, the correlators $\langle S_{\lambda}\rangle$ coincide with Kostka
numbers. One of the definitions of Kostka numbers $K_{\lambda,\mu}$ is via the
decomposition of a Schur polynomial into the sum over monomial symmetric
functions $m_{\lambda}(z_{1},\dots,z_{n})$ or, equivalently, into the sum over
all weak compositions $\alpha$ with $n$ parts [32]:
$S_{\lambda}(z_{1},\dots,z_{n})=\sum_{\mu}K_{\lambda,\mu}m_{\mu}(z_{1},\dots,z_{n})=\sum_{\alpha}K_{\lambda,\alpha}z^{\alpha},$
(46)
where $z^{\alpha}$ denotes the monomial $z_{1}^{\alpha_{1}}\dots
z_{n}^{\alpha_{n}}$. The simple form of average (41) exactly coincides with the
coefficient in front of one monomial in the Schur polynomial decomposition:
$\langle S_{\lambda}(Z)\rangle=[z_{1}^{i_{1}-1}\dots
z_{n}^{i_{n}-1}]S_{\lambda}(Z)$. The latter is the Kostka number
$K_{\lambda,\widetilde{\alpha}}$, where
$\widetilde{\alpha}=(i_{1}-1,\dots,i_{n}-1)$. Finally, we can write
$\langle S_{\lambda}(Z)\rangle=K_{\lambda,\widetilde{\alpha}}.$ (47)
The set of basis observables in the eigenvalue model is thus a subset of Kostka
numbers, and all correlators in the model may be expressed in terms of Kostka
numbers.
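For example, $\langle S_{(2,1)}(Z)\rangle$ with $n=3$ and $\widetilde{\alpha}=(1,1,1)$ equals $K_{(2,1),(1,1,1)}=2$. This can be checked by expanding the Schur polynomial via the bialternant formula and reading off monomial coefficients; a computer-algebra sketch (variable names are ours):

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
zs = [z1, z2, z3]
lam = (2, 1, 0)  # the Young diagram (2,1) padded to three parts
n = 3

# bialternant formula: S_lambda = det(z_i^(lambda_j + n - j)) / Vandermonde
num = sp.Matrix(n, n, lambda i, j: zs[i] ** (lam[j] + n - 1 - j)).det()
den = sp.Matrix(n, n, lambda i, j: zs[i] ** (n - 1 - j)).det()
schur = sp.expand(sp.cancel(num / den))

# monomial coefficients of the Schur polynomial are Kostka numbers, cf. (46)-(47)
assert schur.coeff(z1, 1).coeff(z2, 1).coeff(z3, 1) == 2  # K_{(2,1),(1,1,1)}
assert schur.coeff(z1, 2).coeff(z2, 1).coeff(z3, 0) == 1  # K_{(2,1),(2,1,0)}
```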
The complete information about the eigenvalue model is given by an explicit
expression for generating function (41). It is possible to calculate not only
$\mathcal{Z}_{n}(\textbf{t})$ but also the more general generating function:
$\mathcal{Z}_{n}(\textbf{t}^{(1)},\dots,\textbf{t}^{(m)})=\sum_{\lambda_{1}}\dots\sum_{\lambda_{m}}\langle
S_{\lambda_{1}}\dots S_{\lambda_{m}}\rangle
S_{\lambda_{1}}(\textbf{t}^{(1)})\dots S_{\lambda_{m}}(\textbf{t}^{(m)}),$
(48)
where $\textbf{t}^{(k)}=(t_{1}^{(k)},t_{2}^{(k)},t_{3}^{(k)},\dots)$ is an
infinite vector of times for each $k$.
First of all, the Cauchy-Littlewood identity (7) allows us to evaluate the sums over
partitions $\lambda_{1},\dots,\lambda_{m}$ and obtain an expression similar to
(40):
$\mathcal{Z}_{n}(\textbf{t}^{(1)},\dots,\textbf{t}^{(m)})=\frac{1}{(2\pi
i)^{n}}\oint dz_{1}\dots\oint
dz_{n}\left(\prod_{k=1}^{n}z_{k}^{-i_{k}}\right)\left(\prod_{k=1}^{m}\exp\left\\{\sum_{l=1}^{\infty}t_{l}^{(k)}(z_{1}^{l}+\dots+z_{n}^{l})\right\\}\right)$
(49)
Splitting the sum over the eigenvalues $z_{\alpha}$ in the exponent into a product of
exponentials and, in turn, combining the product over $k$ into a single sum of
times in the exponent, we obtain the following expression
$\mathcal{Z}_{n}(\textbf{t}^{(1)},\dots,\textbf{t}^{(m)})=\prod_{k=1}^{n}\oint\frac{dz_{k}}{2\pi
i}z_{k}^{-i_{k}}\exp\left\\{\sum_{l=1}^{\infty}(t_{l}^{(1)}+\dots+t_{l}^{(m)})z_{k}^{l}\right\\},$
(50)
where each integral can be calculated. The exponential is the
generating series for symmetric Schur polynomials in the variables
$\textbf{t}^{(1)}+\dots+\textbf{t}^{(m)}$ [29], so the contour integral is exactly
$S_{i_{k}-1}$ for each $k$ and we obtain the product of $n$ Schur polynomials:
$\mathcal{Z}_{n}(\textbf{t}^{(1)},\dots,\textbf{t}^{(m)})=\prod_{k=1}^{n}S_{i_{k}-1}(\textbf{t}^{(1)}+\dots+\textbf{t}^{(m)}).$
(51)
The particular case $m=1$ is the eigenvalue model (41). A generating function
of the form (51) is not very convenient for restoring the coefficients $\langle
S_{\lambda_{1}}\dots S_{\lambda_{n}}\rangle$, since one has to differentiate it
with the operator $S(\tilde{\partial}^{(1)})\dots S(\tilde{\partial}^{(n)})$ at
$\textbf{t}^{(1)}=\dots=\textbf{t}^{(n)}=0$. However, it contains a product
of Schur polynomials, which resembles the Frobenius formula. The
difference is that the Frobenius formula contains a sum over products
of Schur polynomials. One may hope that adding the times in the Schur polynomials as
in (51) leads to some good properties.
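The step from (50) to (51) is pure coefficient extraction: each contour integral picks out the coefficient of $z_{k}^{i_{k}-1}$ in the exponential, which is by definition the symmetric Schur polynomial of the (summed) times. A numerical sketch of one such integral, with arbitrarily chosen sample times (helper names are ours):

```python
import numpy as np

times = {1: 0.3, 2: -0.2, 3: 0.1}  # sample times, arbitrary values

def contour_coeff(i, points=4096):
    """(1 / 2 pi i) * contour integral of z^(-i) exp(sum_l t_l z^l) over |z| = 1."""
    theta = np.linspace(0.0, 2.0 * np.pi, points, endpoint=False)
    z = np.exp(1j * theta)
    f = np.exp(sum(tl * z ** l for l, tl in times.items()))
    return np.mean(z ** (1 - i) * f)  # dz / (2 pi i) turns into an average

def schur_times(k):
    """S_k(t): coefficient of z^k in exp(sum_l t_l z^l), via the Taylor series."""
    g = np.zeros(k + 1)
    for l, tl in times.items():
        if l <= k:
            g[l] = tl
    coeffs = np.zeros(k + 1)
    coeffs[0] = 1.0            # m = 0 term of exp(g) = sum_m g^m / m!
    power = coeffs.copy()
    factorial = 1.0
    for m in range(1, k + 1):  # g has no constant term, so g^m has degree >= m
        power = np.convolve(power, g)[: k + 1]
        factorial *= m
        coeffs = coeffs + power / factorial
    return coeffs[k]

# the contour integral with z^{-i} reproduces S_{i-1}(t), as in (51)
assert abs(contour_coeff(3) - schur_times(2)) < 1e-10
assert abs(contour_coeff(4) - schur_times(3)) < 1e-10
```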
One more question which arises while studying matrix models is that of
recursion relations. On the one hand, we have already mentioned recursion
relations (24) and (25). On the other hand, a matrix model always possesses Ward
identities, which can sometimes be solved recursively. It turns out that the
recursion relations obtained from eigenvalue model (40) differ from both
(24) and (25). Namely, the Ward identities yield the following new
recursion relations for the combinatorial coefficients
$P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})$ with arbitrary parameters
$i_{1},\dots,i_{n}$ and $s_{1},\dots,s_{m}$:
$P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})=\frac{1}{i_{1}-1}\sum_{k=1}^{m}\sum_{l=1}^{s_{k}-1}P_{i_{1}-s_{k}+l,i_{2},\dots,i_{n}}(s_{1},\dots,s_{k-1},l,s_{k+1},\dots,s_{m}).$
(52)
As usual for matrix models, Ward identities are obtained with the help of a
change of variables under the integral that does not change the integral itself.
In the case of $P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})=\langle
S_{s_{1}-1}\dots S_{s_{m}-1}\rangle$ in the form (42), the change of variables is
a dilatation of the first variable, $z_{1}\rightarrow(1+q)z_{1},\;q\neq 0$.
The integration contour encircles $z_{1}=0$, so this change of
variables is admissible and the deformed integral is independent of $q$. The explicit
calculation of the deformed integral and the proof of (52) are presented in Appendix
C.
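Recursion (52) can also be verified directly on small examples by computing both sides from definition (39) by brute force (a sketch; helper names are ours, and we take $i_{1}>1$ so that the prefactor in (52) is well defined):

```python
from itertools import product

def P(i_list, s_list):
    """P_{i_1..i_n}(s_1..s_m) per (39): positive integer n x m matrices with
    row sums i_k + m - 1 and column sums s_j + n - 1, counted by brute force."""
    n, m = len(i_list), len(s_list)
    row_sums = [i + m - 1 for i in i_list]
    col_sums = [s + n - 1 for s in s_list]
    top = max(col_sums)
    count = 0
    for flat in product(range(1, top + 1), repeat=n * m):
        M = [flat[k * m:(k + 1) * m] for k in range(n)]
        if all(sum(M[k]) == row_sums[k] for k in range(n)) and \
           all(sum(M[k][j] for k in range(n)) == col_sums[j] for j in range(m)):
            count += 1
    return count

def rhs(i_list, s_list):
    """Right-hand side of the Ward-identity recursion (52), times (i_1 - 1)."""
    i1 = i_list[0]
    total = 0
    for k, sk in enumerate(s_list):
        for l in range(1, sk):
            new_s = list(s_list)
            new_s[k] = l
            total += P([i1 - sk + l] + list(i_list[1:]), new_s)
    return total

i_list, s_list = (3, 2), (2, 3)
assert (i_list[0] - 1) * P(i_list, s_list) == rhs(i_list, s_list)  # both equal 4
```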
We finish this section with a brief review of the obtained results. We have provided
a complete description of the combinatorial coefficients
$P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})$ in terms of eigenvalue model
(40). The combinatorial coefficients are certain correlators in the model (42) –
averages of symmetric Schur polynomials $\langle S_{s_{1}-1}\dots
S_{s_{m}-1}\rangle$. The full basis of observables is the set of averages of
arbitrary Schur polynomials $\langle S_{\lambda}\rangle$, which is a certain
subset of Kostka numbers (47). The generating function
$\mathcal{Z}_{n}(\textbf{t})$ can be calculated explicitly (51). The Ward
identities give new recursion relations (52) for the combinatorial coefficients
$P_{i_{1},\dots,i_{n}}(s_{1},\dots,s_{m})$.
## 6 Towards deformations of KP hierarchy
In the previous sections we discussed the appearance of the coefficients
$P_{i,j}(s_{1},\dots,s_{m})$ in the KP equations (18). In this section we are
interested in a possible generalization of the theory mentioned above, i.e. in
a deformation of the KP equations.
As often happens, various deformations help to understand the underlying
structure of a formula and to find out which parts are essential and which can be
deformed. We try to reveal what role the combinatorial coefficients
$P_{i,j}(s_{1},\dots,s_{m})$ play in equations (18). Integrability might be
determined by the combinatorial coefficients or might be a consequence of the
particular form of the equations. We deform only the combinatorial coefficients,
leaving the form of the equations unchanged. It turns out that an arbitrary
deformation is not possible: equations (18) contain some restrictions that
come from the fact that some of the equations should be fulfilled trivially.
These restrictions appear even before the question about the compatibility of the
obtained system of differential equations arises. However, there is a promising
deformation direction.
The idea of the deformation is based on the fact that we know an explicit expression
for the generating function of the coefficients $P_{i,j}(s_{1},\dots,s_{m})$ of
the form (27). Let us deform this generating function. Deformed coefficients
$P_{i,j}^{(def)}(s_{1},\dots,s_{m})$ are obtained as coefficients in the
expansion of the deformed generating function, similarly to the original ones.
At first glance, the deformation of the generating function can be done in many
ways. For example, we know that generating function (27) is a KP
$\tau$-function. So, one can try to let the new generating function be another
KP $\tau$-function of a similar form, i.e. a $\tau$-function of hypergeometric
type [2, 40]. Another way to deform the generating function is to replace
the Schur polynomials with some other polynomials, which are
considered as deformed Schur polynomials, for example, MacDonald polynomials
[29]. We consider these two types of generating functions below.
First of all, let us examine which of equations (18) are trivial. It is
obvious that equations (18) are symmetric under the permutation
$i\leftrightarrow j$. Therefore, we can consider only ordered pairs of indices
$i>j$ or, equivalently, equations labeled by all Young diagrams of length
2. In the case of $i=n$ and $j=1$:
$\partial_{n}^{\hbar}\partial_{1}F=\sum_{\begin{subarray}{c}s_{1}\geq 1\\\
s_{1}=n\end{subarray}}P_{n,1}(s_{1})\partial_{1}\partial_{s_{1}}^{\hbar}F+\underbrace{\sum_{\begin{subarray}{c}s_{1},s_{2}\geq
1\\\
s_{1}+s_{2}=n-1\end{subarray}}\frac{(-1)}{2s_{1}s_{2}}P_{n,1}(s_{1},s_{2})\partial_{1}\partial_{s_{1}}^{\hbar}F\cdot\partial_{1}\partial_{s_{2}}^{\hbar}F+\text{higher
m}}_{=0}$ (53)
Since one of the indices is equal to 1, there is no matrix of size $2\times m$
with $m\geq 2$ that has positive integer elements and a row sum equal to 1.
Therefore, for any positive integer $n$ equation (53) reduces to
$\partial_{1}\partial_{n}^{\hbar}F=P_{n,1}(n)\partial_{1}\partial_{n}^{\hbar}F$
(54)
For any $n$ it is easy to calculate that $P_{n,1}(n)=1$; thus, equations (54)
hold trivially.
When we replace the coefficients $P_{i,j}(s_{1},\dots,s_{m})$ with the deformed
ones $P_{i,j}^{(def)}(s_{1},\dots,s_{m})$, the latter are calculated via the
deformed generating function in the following way. Since
$P_{i,j}(s_{1},\dots,s_{m})=[x_{1}^{i-1}x_{2}^{j-1}y_{1}^{s_{1}-1}\dots
y_{m}^{s_{m}-1}]G(\textbf{x},\textbf{y})$, the deformed coefficients are obtained
similarly as:
$P_{i,j}^{(def)}(s_{1},\dots,s_{m})=[x_{1}^{i-1}x_{2}^{j-1}y_{1}^{s_{1}-1}\dots
y_{m}^{s_{m}-1}]G^{(def)}(\textbf{x},\textbf{y})$ (55)
The deformed equations in the case $i=n,j=1$ are of the form
$\partial_{1}\partial_{n}^{\hbar}F=P_{n1}^{(def)}(n)\partial_{1}\partial_{n}^{\hbar}F$
(56)
and should again be fulfilled trivially. Thus, we have the following condition on the
deformed coefficients:
$P_{n1}^{(def)}(n)=1,\;\;\forall\;n\in\mathbb{N}\;\Leftrightarrow\;[x_{1}^{k}y_{1}^{k}]G^{(def)}(\textbf{x},\textbf{y})=1,\;\;\forall\;k\in\mathbb{N}\cup\\{0\\}$
(57)
We consider this condition as a necessary condition for the deformed
generating function.
### 6.1 Hurwitz deformation
Let us consider the generating function for simple Hurwitz numbers as a new
generating function for combinatorial coefficients. This generating function
is a member of the set of hypergeometric $\tau$-functions [41] and can be
written as:
$G^{H}(\textbf{x},\textbf{y})=\sum_{\lambda}e^{\frac{u}{2}C_{2}(\lambda)}S_{\lambda}(\textbf{x})S_{\lambda}(\textbf{y})$
(58)
where $C_{2}(\lambda)=\sum_{i=1}^{\ell(\lambda)}\lambda_{i}(\lambda_{i}-2i+1)$
is the eigenvalue of the second Casimir operator [6].
The first few terms of the generating function are
$G^{H}(\textbf{x},\textbf{y})=1+(x_{1}+x_{2})(y_{1}+y_{2})+e^{u}(x_{1}^{2}+x_{1}x_{2}+x_{2}^{2})(y_{1}^{2}+y_{1}y_{2}+y_{2}^{2})+e^{-u}(x_{1}x_{2})(y_{1}y_{2})+\dots$
(59)
Already in the second order
$[x_{1}^{2}y_{1}^{2}]G^{H}(\textbf{x},\textbf{y})=e^{u}$; thus, such deformed
coefficients violate the necessary condition (57). We conclude that Hurwitz
numbers are a bad choice for the deformed combinatorial coefficients.
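This violation can be reproduced with a few lines of computer algebra, building the truncation (59) of $G^{H}$ in two $x$- and two $y$-variables from the bialternant formula (a sketch; helper names are ours):

```python
import sympy as sp

x1, x2, y1, y2, u = sp.symbols('x1 x2 y1 y2 u')

def schur2(lam, a, b):
    """Schur polynomial in two variables via the bialternant formula."""
    l1, l2 = lam
    num = a**(l1 + 1) * b**l2 - b**(l1 + 1) * a**l2
    return sp.cancel(num / (a - b))

def casimir2(lam):
    """Eigenvalue of the second Casimir, C_2 = sum lambda_i (lambda_i - 2i + 1)."""
    return sum(li * (li - 2 * (i + 1) + 1) for i, li in enumerate(lam))

# partitions with at most two rows and size <= 2 (sufficient for this order)
parts = [(0, 0), (1, 0), (2, 0), (1, 1)]
GH = sp.expand(sum(sp.exp(sp.Rational(1, 2) * u * casimir2(lam))
                   * schur2(lam, x1, x2) * schur2(lam, y1, y2)
                   for lam in parts))

c11 = GH.coeff(x1, 1).coeff(x2, 0).coeff(y1, 1).coeff(y2, 0)
c22 = GH.coeff(x1, 2).coeff(x2, 0).coeff(y1, 2).coeff(y2, 0)
assert c11 == 1            # the degree-1 coefficient satisfies (57)
assert c22 == sp.exp(u)    # but [x1^2 y1^2] = e^u violates (57)
```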
### 6.2 MacDonald $(q,t)$-deformation
Although a smart $(q,t)$-deformation of the KP hierarchy that possesses an
underlying structure of some algebra and solutions like $(q,t)$-deformed
matrix models [42, 43] is still unknown, we make an attempt to construct
$(q,t)$-deformed KP equations. Let us consider the sum over MacDonald
polynomials as the deformed generating function for the combinatorial
coefficients:
$G^{(q,t)}(\textbf{x},\textbf{y})=\sum_{\lambda}M_{\lambda}(\textbf{x})M_{\lambda}(\textbf{y}).$
(60)
The necessary condition (57) is fulfilled at least for the first few polynomials
($P_{n1}^{(q,t)}(n)=1$ for $n=2,3,4,5$), so it is possible that it holds for
arbitrary $n$.
The first non-trivial equation of the $(q,t)$-deformed hierarchy is ($i=2,j=2$):
$\partial_{2}^{\hbar}\partial_{2}^{\hbar}F=\frac{4}{3}\left(1+\frac{q-t}{1-qt}\right)\partial_{1}\partial_{3}^{\hbar}F-2\left(\partial_{1}^{2}F\right)^{2}$
(61)
The second non-trivial equation of the $(q,t)$-deformed hierarchy is ($i=3,j=2$):
$\partial_{3}^{\hbar}\partial_{2}^{\hbar}F=\frac{3}{2}\left(1+\frac{(q-t)(q+1)}{1-q^{2}t}\right)\partial_{1}\partial_{4}^{\hbar}F-3\left(\partial_{1}^{2}F\right)\left(\partial_{1}\partial_{2}^{\hbar}F\right)$
(62)
Both equations become equations of the classical KP hierarchy in the limit $q=t$.
However, the question about the compatibility of the deformed differential equations is
still open and deserves a separate study.
Unfortunately, generating function (60) does not satisfy equations (61) and
(62). Thus, it cannot be considered as a trivial $\tau$-function of the
deformed hierarchy, analogous to (27), which is a trivial $\tau$-function of the
non-deformed KP. However, the form of the equations remains the same as in the
classical KP hierarchy: each term contains at least two derivatives. Thus, any
linear combination of the times $t_{k}$ is a solution of these equations. A
possible candidate for the deformed trivial $\tau$-function comes from the
modification of the Cauchy-Littlewood identity (7) for MacDonald polynomials [29]:
$\sum_{\lambda}\frac{C_{\lambda}}{C^{\prime}_{\lambda}}M_{\lambda}(t_{k})M_{\lambda}(\overline{t}_{k})=\exp\left(\sum_{k=1}^{\infty}[\beta]_{q}kt_{k}\overline{t}_{k}\right)$
(63)
where
$C_{\lambda}=\prod_{(i,j)\in\lambda}\left[\beta
Arm_{\lambda}(i,j)+Leg_{\lambda}(i,j)+1\right]_{q},\;\;\;\;\;\;\;\;C^{\prime}_{\lambda}=\prod_{(i,j)\in\lambda}\left[\beta
Arm_{\lambda}(i,j)+Leg_{\lambda}(i,j)+\beta\right]_{q}$ (64)
Here $[x]_{q}$ denotes the quantum number, $t=q^{\beta}$, and
$Arm_{\lambda}(i,j),Leg_{\lambda}(i,j)$ denote the arms and legs of the Young
diagram $\lambda$ (for a detailed description of these objects see, for
example, [44]). The $F$-function is the logarithm of (63) and is just a
linear combination of the times $t_{k}$ for fixed parameters $\overline{t}_{k}$.
Therefore, it satisfies the deformed equations (61), (62) and might be a possible
candidate for a trivial $\tau$-function.
This approach contains some promising directions that will be considered in more
detail elsewhere. At the moment, generating function (60) seems a possible
choice for the deformed combinatorial coefficients.
## 7 Discussion
In this paper we presented a combinatorial view on the $\hbar$-KP hierarchy
based on the Natanzon-Zabrodin approach with the universal combinatorial
coefficients $P_{ij}(s_{1},\dots,s_{m})$. We showed that the study of the
combinatorial coefficients naturally highlights certain properties of the KP
hierarchy:
* •
generating function (27) is itself a KP $\tau$-function, and generating
function (28) gives the Fay identity (30). These properties give an idea about
possible deformations of the KP hierarchy from the combinatorial point of view: we
expect that deformations of generating function (27) will lead to some
interesting deformations of the KP hierarchy.
* •
generating function (28) and the form of solutions (19) give information about
the conditions on the Cauchy-like data that correspond to genus zero resolvents in
topological recursion for $\hbar$-KP solutions. In particular, this may be
used as a quick test for putative spectral curves for enumerative problems
known to be KP integrable.
* •
the combinatorial coefficients $P_{ij}(s_{1},\dots,s_{m})$ have a complete
description in terms of the quite simple eigenvalue matrix model (40). This
approach allows us to derive the non-trivial recursion relation (52) on the
combinatorial coefficients. This matrix model may be used in studying the KP
hierarchy in terms of the combinatorial coefficients, and it raises new
questions about the interpretation of the corresponding averages in terms of the KP
hierarchy.
The aim of this paper is to demonstrate that the combinatorial approach to the KP
hierarchy is instrumental in giving motivation and insights for further study
of emergent properties of KP. Here we list some questions that appear
naturally when applying this approach:
* •
The question about a combinatorial deformation of the KP hierarchy is still open:
can we deform the combinatorial coefficients in equations (18) in such a way that
we obtain an integrable hierarchy? (Discussed in section 6)
* •
What do the coefficients $\langle S_{\lambda_{1}}\dots S_{\lambda_{n}}\rangle$
mean in terms of combinatorial objects or the KP hierarchy? (Discussed in section
5)
* •
It is easy to generalize the combinatorial definition of the coefficients by
replacing matrices with tensors. For example, the number of three-tensors with
fixed sums over two of the three indices is called a Kronecker coefficient, which
has a lot of different applications [45, 46]. It is natural to ask: is there
any integrable hierarchy formulated via Kronecker coefficients in the same way
as the $\hbar$-KP?
* •
How does one write a matrix model for such generalizations, and what do the Ward
identities in this model look like?
* •
According to [19] it is possible to recover any formal solution of $\hbar$-KP
from Cauchy-like data (20) using the higher coefficients
$P^{\hbar}_{\lambda}\begin{pmatrix}s_{1}\dots s_{m}\\\ l_{1}\dots
l_{m}\end{pmatrix}$. Is there any simple combinatorial description of these
coefficients? Are they connected with Kronecker numbers in some way? Or maybe
there is some matrix model generating these coefficients?
We hope to address some, or all, of these intriguing questions in the future.
## Acknowledgements
This work was funded by the Russian Science Foundation (Grant No.20-71-10073).
We are grateful to Sergey Fomin and Anton Zabrodin for very useful discussions
and remarks. Our special acknowledgement is to Sergey Natanzon for a
formulation of the problem and for inspiring us to work on this project.
## Appendix A. Explicit calculation of $P_{i,j}(s_{1},\dots,s_{m})$
We start here from the sum that follows from the definition:
$P_{i,j}(s_{1},\dots,s_{m})=\sum\limits_{\\{1\leq
i_{k}|k=1,\dots,m\\}}\sum\limits_{\\{1\leq
j_{k}|k=1,\dots,m\\}}\delta_{i_{1}+\dots+i_{m}=i}\delta_{j_{1}+\dots+j_{m}=j}\delta_{i_{1}+j_{1}=s_{1}+1}\dots\delta_{i_{m}+j_{m}=s_{m}+1}$
(65)
Resolving the equations $i_{k}+j_{k}=s_{k}+1$, we obtain:
$P_{i,j}(s_{1},\dots,s_{m})=\delta_{s_{1}+\dots+s_{m}+m,i+j}\sum\limits_{\\{1\leq
i_{l}\leq s_{l}|l=1,\dots,m\\}}\delta_{i_{1}+\dots+i_{m},i}$ (66)
The sum on the r.h.s. has a geometric interpretation as the section of the
$m$-dimensional parallelepiped $R_{s_{1},\dots,s_{m}}=\\{i_{k}|1\leq i_{k}\leq
s_{k},k=1,\dots,m\\}$ by the $(m-1)$-dimensional hyperplane $i_{1}+\dots+i_{m}=i$.
In order to calculate this sum we use the inclusion-exclusion principle for
$m$-dimensional "quadrants" $Q_{a_{1},\dots,a_{m}}=\\{i_{k}|a_{k}\leq
i_{k},k=1,\dots,m\\}$. The contribution from the $m$-dimensional parallelepiped
$R_{s_{1},\dots,s_{m}}$ is then expressed as the sum over all "quadrants" whose
vertices coincide with vertices of $R_{s_{1},\dots,s_{m}}$:
$R^{Cont}_{s_{1},\dots,s_{m}}=\sum\limits_{\\{\sigma_{k}=\\{0,1\\}|k=1,\dots,m\\}}(-1)^{\sigma_{1}+\dots+\sigma_{m}}Q^{Cont}_{1+\sigma_{1}s_{1},\dots,1+\sigma_{m}s_{m}}$
(67)
where the set of variables $\sigma_{k}$ enumerates all the vertices.
The next step is to calculate the contribution of the "quadrant"
$Q^{Cont}_{1,\dots,1}$, which is just the number of ordered partitions of $i$
into $m$ positive parts:
$Q^{Cont}_{1,\dots,1}=\sum\limits_{1\leq
i_{k}}\delta_{i_{1}+\dots+i_{m},i}={i-1\choose m-1}.$ (68)
Shifting the "quadrant" $Q_{\dots,1,\dots}\rightarrow Q_{\dots,1+s_{k},\dots}$
is equivalent to shifting $i\rightarrow i-s_{k}$, so for the contribution of
$Q_{1+\sigma_{1}s_{1},\dots,1+\sigma_{m}s_{m}}$ we have the following formula:
$Q_{1+\sigma_{1}s_{1},\dots,1+\sigma_{m}s_{m}}={i-\sigma_{1}s_{1}-\dots-\sigma_{m}s_{m}-1\choose
m-1}$ (69)
Combining now (69) and (67), we obtain:
$R^{Cont}_{s_{1},\dots,s_{m}}=\sum\limits_{\\{\sigma_{k}=\\{0,1\\}|k=1,\dots,m\\}}(-1)^{\sigma_{1}+\dots+\sigma_{m}}{i-\sigma_{1}s_{1}-\dots-\sigma_{m}s_{m}-1\choose
m-1}$ (70)
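As a check, formula (70) can be compared with the direct sum (66) for small data; a brute-force sketch (helper names are ours):

```python
from itertools import product

def P_direct(i, j, s_list):
    """P_{i,j}(s_1..s_m) via (66): lattice points of the box 1 <= i_l <= s_l
    on the hyperplane i_1 + ... + i_m = i, under s_1+...+s_m + m = i + j."""
    m = len(s_list)
    if sum(s_list) + m != i + j:
        return 0
    return sum(1 for pt in product(*(range(1, s + 1) for s in s_list))
               if sum(pt) == i)

def binom(a, b):
    """Binomial coefficient with the convention C(a, b) = 0 for a < b."""
    if b < 0 or a < b:
        return 0
    out = 1
    for k in range(b):
        out = out * (a - k) // (k + 1)
    return out

def P_incl_excl(i, j, s_list):
    """Formula (70): inclusion-exclusion over the 'quadrants'."""
    m = len(s_list)
    if sum(s_list) + m != i + j:
        return 0
    total = 0
    for sigma in product((0, 1), repeat=m):
        shift = sum(sg * s for sg, s in zip(sigma, s_list))
        total += (-1) ** sum(sigma) * binom(i - shift - 1, m - 1)
    return total

for s_list in [(2, 3), (1, 2, 2), (3, 3, 1)]:
    m = len(s_list)
    for i in range(1, sum(s_list) + 1):
        j = sum(s_list) + m - i
        assert P_direct(i, j, s_list) == P_incl_excl(i, j, s_list)
```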
## Appendix B. Calculation of generating functions
Here we give an approach to the calculation of the generating functions.
In order to obtain the $G$-generating function (27) it is convenient to use
recursion relation (24). Let us substitute (24) into the generating function:
$\widetilde{G}_{nm}(\mathbf{x},\mathbf{y})=\sum\limits_{i_{1}\geq
1,\dots,i_{n}\geq 1}y_{1}^{i_{1}}\dots y_{n}^{i_{n}}\sum\limits_{s_{1}\geq
1,\dots,s_{m}\geq 1}x_{1}^{s_{1}}\dots
x_{m}^{s_{m}}\sum\limits_{\left\\{{i_{n}^{1}+\dots+i_{n}^{m}=i_{n}\atop 1\leq
i_{n}^{l}\leq s_{l}|l=1,\dots,m}\right\\}}P_{i_{1}\dots
i_{n-1}}(s_{1}-i_{n}^{1}+1,\dots,s_{m}-i_{n}^{m}+1)$ (71)
The next step is to swap the two sums on the right and rewrite each
$x_{l}^{s_{l}}$ as $x_{l}^{i_{n}^{l}-1}x_{l}^{s_{l}-i_{n}^{l}+1}$:
$\widetilde{G}_{nm}(\mathbf{x},\mathbf{y})=\sum\limits_{i_{1}\geq
1,\dots,i_{n}\geq 1}y_{1}^{i_{1}}\dots
y_{n}^{i_{n}}\sum\limits_{\left\\{{i_{n}^{1}+\dots+i_{n}^{m}=i_{n}\atop 1\leq
i_{n}^{l}|l=1,\dots,m}\right\\}}x_{1}^{i_{n}^{1}-1}\dots
x_{m}^{i_{n}^{m}-1}\sum\limits_{i_{n}^{l}\leq s_{l}\atop
l=1,\dots,m}x_{1}^{s_{1}-i_{n}^{1}+1}\dots
x_{m}^{s_{m}-i_{n}^{m}+1}P_{i_{1}\dots
i_{n-1}}(s_{1}-i_{n}^{1}+1,\dots,s_{m}-i_{n}^{m}+1)$ (72)
After the replacement $s^{\prime}_{l}=s_{l}-i_{n}^{l}+1$ for $l=1,\dots,m$ we
obtain a simple recursion relation:
$\widetilde{G}_{nm}(\mathbf{x},\mathbf{y})=\sum\limits_{i_{n}\geq 1}y_{n}^{i_{n}}\sum\limits_{\left\\{{i_{n}^{1}+\dots+i_{n}^{m}=i_{n}\atop 1\leq i_{n}^{l}|l=1,\dots,m}\right\\}}x_{1}^{i_{n}^{1}-1}\dots x_{m}^{i_{n}^{m}-1}\,\widetilde{G}_{(n-1)m}(\mathbf{x},\mathbf{y})=\widetilde{G}_{(n-1)m}(\mathbf{x},\mathbf{y})\prod\limits_{l=1}^{m}\frac{y_{n}}{(1-x_{l}y_{n})}$
(73)
where the sums over $i_{n}^{l}$ are independent and each of them is a geometric
progression. It is now easy to write down the entire generating function:
$\widetilde{G}_{nm}(\mathbf{x},\mathbf{y})=\widetilde{G}_{1m}(\mathbf{x},\mathbf{y})\prod\limits_{l=1}^{m}\prod\limits_{k=2}^{n}\frac{y_{k}}{(1-x_{l}y_{k})},$
(74)
where, according to our definition of the coefficients:
$P_{i_{1}}(s_{1},\dots,s_{m})=\delta_{s_{1}+\dots+s_{m},i_{1}}$ (75)
and hence
$\widetilde{G}_{1m}(\mathbf{x},\mathbf{y})=\sum\limits_{i_{1}\geq
1}y_{1}^{i_{1}}\sum\limits_{s_{1}\geq 1,\dots,s_{m}\geq 1}x_{1}^{s_{1}}\dots
x_{m}^{s_{m}}\delta_{s_{1}+\dots+s_{m},i_{1}}=\prod\limits_{l=1}^{m}\frac{x_{l}y_{1}}{(1-x_{l}y_{1})}.$
(76)
Finally, the generating function is of the form:
$\widetilde{G}_{nm}(\mathbf{x},\mathbf{y})=\prod\limits_{l=1}^{m}x_{l}\prod\limits_{k=1}^{n}\frac{y_{k}}{(1-x_{l}y_{k})}=\left(\prod\limits_{l=1}^{m}x_{l}\right)\left(\prod\limits_{k=1}^{n}y_{k}^{m}\right)\sum\limits_{\lambda}S_{\lambda}(\mathbf{x})S_{\lambda}(\mathbf{y})$
(77)
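As a consistency check, the expansion of (77) for $n=m=2$ reproduces the direct count of matrices from the definition (a computer-algebra sketch; helper names and the truncation degree are ours):

```python
import sympy as sp
from itertools import product

x1, x2, y1, y2, t = sp.symbols('x1 x2 y1 y2 t')

def P_count(i_list, s_list):
    """Coefficients in the convention of the appendices (cf. (65), (75)):
    positive integer n x m matrices with row sums i_k and column sums s_j + n - 1."""
    n, m = len(i_list), len(s_list)
    cols = [s + n - 1 for s in s_list]
    cnt = 0
    for flat in product(range(1, max(cols) + 1), repeat=n * m):
        M = [flat[k * m:(k + 1) * m] for k in range(n)]
        if all(sum(M[k]) == i_list[k] for k in range(n)) and \
           all(sum(M[k][j] for k in range(n)) == cols[j] for j in range(m)):
            cnt += 1
    return cnt

# generating function (77) for n = m = 2
G = (x1 * y1 * y2 / ((1 - x1 * y1) * (1 - x1 * y2))) \
    * (x2 * y1 * y2 / ((1 - x2 * y1) * (1 - x2 * y2)))
D = 8  # total-degree truncation, our choice
Gt = sp.expand(sp.series(G.subs({x1: t * x1, x2: t * x2, y1: t * y1, y2: t * y2}),
                         t, 0, D + 1).removeO())

def coeff(s1, s2, i1, i2):
    c = Gt.coeff(t, s1 + s2 + i1 + i2)
    return c.coeff(x1, s1).coeff(x2, s2).coeff(y1, i1).coeff(y2, i2)

assert coeff(1, 1, 2, 2) == P_count([2, 2], [1, 1]) == 1
assert coeff(2, 1, 3, 2) == P_count([3, 2], [2, 1])
```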
Now, using this result, we can calculate the second generating function (28).
The main idea is to make the replacement $p_{k}=\sum_{i}x_{i}^{k}$:
$H(\mathbf{p};y_{1},y_{2})=\sum\limits_{m\geq
1}\frac{(-1)^{m+1}}{m}\sum\limits_{ij}y_{1}^{i}y_{2}^{j}\sum\limits_{s_{1},\dots,s_{m}}\left(\sum\limits_{i_{1}}x_{i_{1}}^{s_{1}}\right)\dots\left(\sum\limits_{i_{m}}x_{i_{m}}^{s_{m}}\right)P_{ij}(s_{1},\dots,s_{m})$
(78)
Using generating function $\tilde{G}_{2m}$ to evaluate the sums over $s_{1},\dots,s_{m}$, we obtain
$H(\mathbf{p};y_{1},y_{2})=\sum\limits_{m\geq 1}\frac{(-1)^{m+1}}{m}\left(\sum\limits_{i}\frac{x_{i}y_{1}y_{2}}{(1-y_{1}x_{i})(1-y_{2}x_{i})}\right)^{m}.$
(79)
The summand can be rewritten via partial fractions as
$\sum\limits_{i}\frac{x_{i}y_{1}y_{2}}{(1-y_{1}x_{i})(1-y_{2}x_{i})}=\frac{y_{1}y_{2}}{y_{1}-y_{2}}\sum\limits_{i}\left(\frac{1}{1-y_{1}x_{i}}-\frac{1}{1-y_{2}x_{i}}\right)$
(80)
and, expanding the geometric progressions, we obtain a function of the $p_{k}$ variables:
$\frac{y_{1}y_{2}}{y_{1}-y_{2}}\sum\limits_{k=1}^{\infty}\left(y_{1}^{k}-y_{2}^{k}\right)p_{k}=y_{1}y_{2}\sum\limits_{k=1}^{\infty}p_{k}\frac{y_{1}^{k}-y_{2}^{k}}{y_{1}-y_{2}}$
(81)
Now, summing over $m$, we obtain the logarithm and hence the generating function.
## Appendix C. Eigenvalue model calculations
First, we show that the expression in brackets in (45) is a Schur polynomial,
thus proving formula (42). It is obvious that it can be calculated for each
$j$ independently, so we omit the index $j$ in the proof. The expression
in brackets is equal to
$\begin{gathered}\frac{z_{n}^{s+n-2}}{z_{1}\dots
z_{n-1}}\sum_{i^{(1)}=1}^{s}\dots\sum_{i^{(n-2)}=1}^{s}\sum_{i^{(n-1)}=1}^{s+n-2-i^{(1)}-\dots-i^{(n-2)}}\left(\frac{z_{1}}{z_{n}}\right)^{i^{(1)}}\dots\left(\frac{z_{n-1}}{z_{n}}\right)^{i^{(n-1)}}\equiv
A_{s-1}\end{gathered}$ (82)
Let us denote this expression by $A_{s-1}$ and calculate its generating series
$A(\xi)=\sum_{s=1}^{\infty}A_{s-1}\xi^{s-1}.$ (83)
To perform the calculation we need to swap the sum over $s$ with the other $(n-1)$ sums over the $i^{(k)}$. All possible values of the indices lie inside an $n$-dimensional semi-infinite triangle and, as usual, changing the order of summation changes the order in which we move inside this triangle, with new restrictions on the indices. After swapping the sums one obtains the following expression
$A(\xi)=\sum_{i^{(1)}=1}^{\infty}\dots\sum_{i^{(n-1)}=1}^{\infty}\sum_{s=i^{(1)}+\dots+i^{(n-1)}-n+2}^{\infty}\frac{z_{n}^{s+n-2}}{z_{1}\dots
z_{n-1}}\left(\frac{z_{1}}{z_{n}}\right)^{i^{(1)}}\dots\left(\frac{z_{n-1}}{z_{n}}\right)^{i^{(n-1)}}\xi^{s-1},$
(84)
which is now easy to evaluate: one only has to sum infinite geometric progressions:
$\begin{gathered}A(\xi)=\sum_{i^{(1)}=1}^{\infty}\left(\frac{z_{1}}{z_{n}}\right)^{i^{(1)}}\dots\sum_{i^{(n-1)}=1}^{\infty}\left(\frac{z_{n-1}}{z_{n}}\right)^{i^{(n-1)}}\cdot\frac{z_{n}^{n-2}}{z_{1}\dots
z_{n-1}}\frac{(z_{n}\xi)^{i^{(1)}+\dots+i^{(n-1)}-n+2}}{\xi(1-\xi z_{n})}=\\\
=\left(\sum_{i^{(1)}=1}^{\infty}\frac{1}{z_{1}\xi}(z_{1}\xi)^{i^{(1)}}\right)\dots\left(\sum_{i^{(n-1)}=1}^{\infty}\frac{1}{z_{n-1}\xi}(z_{n-1}\xi)^{i^{(n-1)}}\right)\cdot\frac{1}{1-\xi
z_{n}}=\prod_{\alpha=1}^{n}\frac{1}{1-\xi z_{\alpha}}\end{gathered}$ (85)
The last expression in (85) is exactly the generating function of the one-row (complete homogeneous) Schur polynomials [29]; thus each $A_{s-1}$ equals the Schur polynomial $S_{s-1}$, which proves (42).
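This identity is easy to check numerically. A minimal Python sketch, using the fact that the one-row Schur polynomial $S_{j}$ coincides with the complete homogeneous symmetric polynomial $h_{j}$ [29]; the numeric values of $z$ are arbitrary test data:

```python
from itertools import combinations_with_replacement
from math import prod

def schur_one_row(j, zs):
    """One-row Schur polynomial S_j(z) = complete homogeneous symmetric h_j."""
    return 1.0 if j == 0 else sum(prod(c) for c in combinations_with_replacement(zs, j))

def product_series(zs, order):
    """Taylor coefficients in xi of prod_a 1/(1 - xi*z_a) up to xi^order."""
    coeffs = [1.0] + [0.0] * order
    for z in zs:
        new = [0.0] * (order + 1)
        for i, c in enumerate(coeffs):
            zk = 1.0
            for k in range(order + 1 - i):   # multiply by sum_k (z*xi)^k
                new[i + k] += c * zk
                zk *= z
        coeffs = new
    return coeffs

zs = [0.3, -0.7, 1.1]                        # generic numeric test values
coeffs = product_series(zs, 5)
for j in range(6):
    assert abs(coeffs[j] - schur_one_row(j, zs)) < 1e-9
```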
Second, we spell out the derivation of the recursion relations (52) using the same technique as for Ward identities in ordinary matrix models. Let us rescale the first variable under the integral, $z_{1}\rightarrow(1+q)z_{1}$. There are no singularities except at the point $z_{1}=\dots=z_{n}=0$, so this change of variables preserves the value of the integral:
$I(q)=\frac{1}{(2\pi i)^{n}}\oint(1+q)dz_{1}\dots\oint
dz_{n}(1+q)^{-i_{1}}\left(\prod_{k=1}^{n}z_{k}^{-i_{k}}\right)\left(\prod_{j=1}^{m}S_{s_{j}-1}((1+q)z_{1},\dots,z_{n})\right)$
(86)
This expression is independent of $q$, so its derivative vanishes, $\frac{\partial I}{\partial q}=0$. We evaluate the derivative at the point $q=0$. It acts on each Schur polynomial independently, so let us first compute it for a single Schur polynomial:
$\begin{gathered}\frac{\partial I(q)}{\partial
q}\Bigg{|}_{q=0}=(1-i_{1})P_{i_{1},\dots,i_{n}}(s)+\oint dz_{1}\dots\oint
dz_{n}\left(\prod_{k=1}^{n}z_{k}^{-i_{k}}\right)\left(\frac{\partial}{\partial
q}S_{s-1}((1+q)z_{1},\dots,z_{n},0,\dots)\Bigg{|}_{q=0}\right)=0.\end{gathered}$
(87)
To calculate the derivative of the Schur polynomial we use the generating function $A(\xi)$ from (85), with the first variable rescaled:
$A(q,\xi)=\sum_{j=0}^{\infty}S_{j}((1+q)z_{1},\dots,z_{n})\xi^{j}=\frac{1}{1-(1+q)z_{1}\xi}\prod_{\alpha=2}^{n}\frac{1}{1-z_{\alpha}\xi}.$
(88)
We can now differentiate the obtained expression:
$\frac{\partial A(q,\xi)}{\partial
q}\Bigg{|}_{q=0}=\frac{z_{1}\xi}{1-z_{1}\xi}\left(\prod_{\alpha=1}^{n}\frac{1}{1-z_{\alpha}\xi}\right)=\sum_{j=0}^{\infty}\sum_{p=0}^{\infty}S_{j}z_{1}^{p+1}\xi^{j+p+1}$
(89)
Let us change the summation index in the last expression to $a=j+p+1$:
$\frac{\partial A(q,\xi)}{\partial
q}\Bigg{|}_{q=0}=\sum_{a=1}^{\infty}\xi^{a}\left(\sum_{p=0}^{a-1}S_{p}z_{1}^{a-p}\right)$
(90)
Comparing this with expression (88), we obtain the following result:
$\frac{\partial S_{a}}{\partial
q}\Bigg{|}_{q=0}=\sum_{p=0}^{a-1}S_{p}z_{1}^{a-p},$ (91)
which we substitute into formula (87):
$(1-i_{1})P_{i_{1},\dots,i_{n}}(s)+\oint dz_{1}\dots\oint dz_{n}\left(\prod_{k=1}^{n}z_{k}^{-i_{k}}\right)\left(\sum_{p=0}^{s-2}S_{p}z_{1}^{s-1-p}\right)=0.$
(92)
Let us simplify the last expression so that it is written only through the combinatorial coefficients:
$\begin{gathered}0=(1-i_{1})P_{i_{1},\dots,i_{n}}(s)+\sum_{p=0}^{s-2}\oint
dz_{1}\dots\oint
dz_{n}z_{1}^{-i_{1}+s-1-p}\left(\prod_{k=2}^{n}z_{k}^{-i_{k}}\right)S_{p}=\\\
=(1-i_{1})P_{i_{1},\dots,i_{n}}(s)+\sum_{p=1}^{s-1}P_{i_{1}-s+p,i_{2},\dots,i_{n}}(p)\end{gathered}$
(93)
It is now straightforward to repeat the same calculation for several parameters $s_{1},\dots,s_{m}$: the derivative acts on each Schur polynomial labeled by these parameters independently. The result is formula (52).
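As a sanity check, the derivative formula (91) can be verified numerically for one-row Schur polynomials, computed as complete homogeneous symmetric polynomials; the numeric $z$ values below are arbitrary test data:

```python
from itertools import combinations_with_replacement
from math import prod

def h(j, zs):
    """One-row Schur polynomial S_j = complete homogeneous symmetric h_j."""
    return 1.0 if j == 0 else sum(prod(c) for c in combinations_with_replacement(zs, j))

def dS_dq(a, zs):
    """d/dq of S_a((1+q)z_1, z_2, ..., z_n) at q = 0, i.e. z_1 * dS_a/dz_1."""
    total = 0.0
    for idx in combinations_with_replacement(range(len(zs)), a):
        k = idx.count(0)                     # multiplicity of z_1 in the monomial
        total += k * prod(zs[i] for i in idx)
    return total

zs = [0.4, -0.9, 1.3, 0.2]
for a in range(1, 7):
    rhs = sum(h(p, zs) * zs[0] ** (a - p) for p in range(a))   # formula (91)
    assert abs(dS_dq(a, zs) - rhs) < 1e-9
```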
## References
* [1] M. Kontsevich, “Intersection theory on the moduli space of curves and the matrix Airy function,” Communications in Mathematical Physics, vol. 147, no. 1, pp. 1–23, 1992.
* [2] S. Kharchev, A. Marshakov, A. Mironov, and A. Morozov, “Generalized Kazakov-Migdal-Kontsevich Model: group theory aspects,” International Journal of Modern Physics A, vol. 10, pp. 2015–2051, Jun 1995.
* [3] A. Mironov, A. Morozov, and G. W. Semenoff, “Unitary matrix integrals in the framework of the generalized Kontsevich model,” International Journal of Modern Physics A, vol. 11, no. 28, pp. 5031–5080, 1996.
* [4] A. Mironov, A. Morozov, and A. Morozov, “Character expansion for HOMFLY polynomials I: Integrability and difference equations,” in Strings, gauge fields, and the geometry behind: the legacy of Maximilian Kreuzer, pp. 101–118, World Scientific, 2013.
* [5] P. Dunin-Barkowski, M. Kazarian, A. Popolitov, S. Shadrin, and A. Sleptsov, “Topological recursion for the extended Ooguri-Vafa partition function of colored HOMFLY-PT polynomials of torus knots,” arXiv preprint arXiv:2010.11021, 2020.
* [6] A. Alexandrov, A. Mironov, A. Morozov, and S. Natanzon, “Integrability of Hurwitz partition functions,” Journal of Physics A: Mathematical and Theoretical, vol. 45, no. 4, p. 045209, 2012.
* [7] S. M. Natanzon and A. Y. Orlov, “Hurwitz numbers from matrix integrals over Gaussian measure,” arXiv e-prints, 2020.
* [8] A. Mironov, A. Y. Morozov, S. Natanzon, and A. Y. Orlov, “Around spin Hurwitz numbers,” arXiv preprint arXiv:2012.09847, 2020.
* [9] A. Alexandrov, “Intersection numbers on $\overline{\mathcal{M}}_{g,n}$ and BKP hierarchy,” arXiv preprint arXiv:2012.07573, 2020.
* [10] A. Alexandrov, “KdV solves BKP,” arXiv preprint arXiv:2012.10448, 2020.
* [11] S. Natanzon and A. Y. Orlov, “Hurwitz numbers and BKP hierarchy,” arXiv preprint arXiv:1407.8323, 2014.
* [12] M. Jimbo and T. Miwa, “Solitons and infinite dimensional Lie algebras,” Publications of the Research Institute for Mathematical Sciences, vol. 19, no. 3, pp. 943–1001, 1983.
* [13] E. Date, M. Jimbo, M. Kashiwara, and T. Miwa, “Transformation groups for soliton equations,” Publications of the Research Institute for Mathematical Sciences, vol. 18, no. 3, pp. 1077–1110, 1982.
* [14] A. Andreev, A. Popolitov, A. Sleptsov, and A. Zhabin, “Genus expansion of matrix models and $\hbar$ expansion of KP hierarchy,” Journal of High Energy Physics, vol. 2020, no. 12, pp. 1–32, 2020.
* [15] I. Krichever, “The dispersionless Lax equations and topological minimal models,” Communications in mathematical physics, vol. 143, no. 2, pp. 415–429, 1992.
* [16] B. Dubrovin, “Hamiltonian formalism of Whitham-type hierarchies and topological Landau-Ginsburg models,” Communications in mathematical physics, vol. 145, no. 1, pp. 195–207, 1992.
* [17] K. Takasaki and T. Takebe, “Quasi-classical limit of KP hierarchy, W-symmetries and free fermions,” arXiv preprint hep-th/9207081, 1992.
* [18] K. Takasaki and T. Takebe, “Integrable hierarchies and dispersionless limit,” Reviews in Mathematical Physics, vol. 07, pp. 743–808, Jul 1995.
* [19] S. M. Natanzon and A. V. Zabrodin, “Formal solutions to the KP hierarchy,” Journal of Physics A: Mathematical and Theoretical, vol. 49, p. 145206, Feb 2016.
* [20] S. Natanzon and A. Zabrodin, “Symmetric solutions to dispersionless 2d toda hierarchy, hurwitz numbers, and conformal dynamics,” International Mathematics Research Notices, vol. 2015, no. 8, pp. 2082–2110, 2015.
* [21] B. A. Dubrovin and S. M. Natanzon, “Real theta-function solutions of the Kadomtsev–Petviashvili equation,” Mathematics of the USSR-Izvestiya, vol. 32, no. 2, p. 269, 1989.
* [22] L. Chekhov, B. Eynard, and N. Orantin, “Free energy topological expansion for the 2-matrix model,” Journal of High Energy Physics, vol. 2006, no. 12, p. 053, 2006.
* [23] L. Chekhov and B. Eynard, “Hermitian matrix model free energy: Feynman graph technique for all genera,” Journal of High Energy Physics, vol. 2006, no. 03, p. 014, 2006.
* [24] B. Eynard and N. Orantin, “Invariants of algebraic curves and topological expansion,” arXiv preprint math-ph/0702045, 2007.
* [25] B. Eynard, “Topological expansion for the 1-hermitian matrix model correlation functions,” Journal of High Energy Physics, vol. 2004, no. 11, p. 031, 2004.
* [26] A. Alexandrov, A. Morozov, and A. Mironov, “Partition functions of matrix models: first special functions of string theory,” International Journal of Modern Physics A, vol. 19, no. 24, pp. 4127–4163, 2004.
* [27] A. S. Alexandrov, A. D. Mironov, and A. Y. Morozov, “M-theory of matrix models,” Theoretical and Mathematical Physics, vol. 150, no. 2, pp. 153–164, 2007.
* [28] A. Alexandrov, A. Mironov, and A. Morozov, “Instantons and merons in matrix models,” Physica D: Nonlinear Phenomena, vol. 235, no. 1-2, pp. 126–167, 2007.
* [29] I. G. Macdonald, Symmetric functions and Hall polynomials. Oxford university press, 1998.
* [30] T. Miwa, M. Jimbo, and E. Date, Solitons: Differential equations, symmetries and infinite dimensional algebras, vol. 135. Cambridge University Press, 2000.
* [31] A. Barvinok, Combinatorics and complexity of partition functions, vol. 9. Springer, 2016.
* [32] R. Stanley, “Enumerative Combinatorics, volume 2, with appendix by S. Fomin,” Cambridge, 1999.
* [33] S. Natanzon, “Differential equations on the Prym theta function. a realness criterion for two-dimensional, finite-zone, potential Schrödinger operators,” Functional Analysis and Its Applications, vol. 26, no. 1, pp. 13–20, 1992.
* [34] A. Gerasimov, A. Marshakov, A. Mironov, A. Morozov, and A. Orlov, “Matrix models of two-dimensional gravity and Toda theory,” Nuclear Physics B, vol. 357, no. 2-3, pp. 565–618, 1991.
* [35] S. Kharchev, A. Marshakov, A. Mironov, A. Orlov, and A. Zabrodin, “Matrix models among integrable theories: Forced hierarchies and operator formalism,” Nuclear Physics B, vol. 366, no. 3, pp. 569–601, 1991.
* [36] S. Kharchev, A. Marshakov, A. Mironov, and A. Morozov, “Generalized Kontsevich model versus Toda hierarchy and discrete matrix models,” Nuclear Physics B, vol. 397, no. 1-2, pp. 339–378, 1993.
* [37] A. Mironov and A. Morozov, “On the complete perturbative solution of one-matrix models,” Physics Letters B, vol. 771, pp. 503–507, Aug 2017.
* [38] B. Eynard, “Topological expansion for the 1-hermitian matrix model correlation functions,” Journal of High Energy Physics, vol. 2004, p. 031, Nov 2004.
* [39] B. Eynard, “Invariants of spectral curves and intersection theory of moduli spaces of complex curves,” arXiv preprint arXiv:1110.2949, 2011.
* [40] A. Y. Orlov and D. Scherbin, “Hypergeometric solutions of soliton equations,” Theoretical and Mathematical Physics, vol. 128, no. 1, pp. 906–926, 2001\.
* [41] M. E. Kazarian and S. K. Lando, “Combinatorial solutions to integrable hierarchies,” Russian Mathematical Surveys, vol. 70, no. 3, p. 453, 2015\.
* [42] R. Lodin, A. Popolitov, S. Shakirov, and M. Zabzine, “Solving q-Virasoro constraints,” Letters in Mathematical Physics, vol. 110, pp. 179–210, Sep 2019.
* [43] L. Cassia, R. Lodin, A. Popolitov, and M. Zabzine, “Exact SUSY Wilson loops on ${S}^{3}$ from q-Virasoro constraints,” Journal of High Energy Physics, vol. 2019, Dec 2019.
* [44] A. Mironov, A. Morozov, S. Shakirov, and A. Smirnov, “Proving AGT conjecture as HS duality: extension to five dimensions,” Nuclear Physics B, vol. 855, no. 1, pp. 128–151, 2012.
* [45] C. Ikenmeyer, K. D. Mulmuley, and M. Walter, “On vanishing of Kronecker coefficients,” Computational Complexity, vol. 26, no. 4, pp. 949–992, 2017.
* [46] J. B. Geloun and S. Ramgoolam, “Tensor models, Kronecker coefficients and permutation centralizer algebras,” Journal of High Energy Physics, vol. 2017, no. 11, p. 92, 2017.
11institutetext: Kainat Khowaja 22institutetext: International Research Training Group 1792 "High Dimensional Nonstationary Time Series", Humboldt-Universität zu Berlin, Berlin, Germany; Ivan Franko National University of Lviv, Ukraine; University of L’Aquila, Italy. 22email: kainat.khowaja@hu-berlin.de 33institutetext: Mykhaylo Shcherbatyy 44institutetext: Ivan Franko National University of Lviv, Ukraine. 44email<EMAIL_ADDRESS>55institutetext: Wolfgang Karl Härdle 66institutetext: BRC Blockchain Research Center, Humboldt-Universität zu Berlin, Germany; Sim Kee Boon Institute, Singapore Management University, Singapore; WISE Wang Yanan Institute for Studies in Economics, Xiamen University, China; Dept. Information Science and Finance, National Chiao Tung University, Taiwan, ROC; Dept. Mathematics and Physics, Charles University, Czech Republic. Grants DFG IRTG 1792, CAS: XDA 23020303 and COST Action CA19130 gratefully acknowledged. 66email: haerdle@hu-berlin.de
# Surrogate Models for Optimization of Dynamical Systems ††thanks: This
research was supported by Joint MSc in Applied and Interdisciplinary
Mathematics, coordinated by the University of L’Aquila (UAQ) in Italy,
Department of Information Engineering, Computer Science and Mathematics
(DISIM) and the Deutsche Forschungsgemeinschaft through the International
Research Training Group 1792 "High Dimensional Nonstationary Time Series", the
Yushan Scholar Program of Taiwan, and the European Union’s Horizon 2020 training and innovation programme “FIN-TECH”, under grant No. 825215 (Topic ICT-35-2018, Type of action: CSA).
Kainat Khowaja, Mykhaylo Shcherbatyy, and Wolfgang Karl Härdle
###### Abstract
Driven by the increased complexity of dynamical systems, the solution of systems of differential equations through numerical simulation in optimization problems has become computationally expensive. This paper provides a smart data-driven mechanism to construct low-dimensional surrogate models. These surrogate models reduce the computational time for solving complex optimization problems by using training instances derived from evaluations of the true objective functions. The surrogate models are constructed using a combination of proper orthogonal decomposition and radial basis functions and provide system responses by simple matrix multiplication. Using relative maximum absolute error as the measure of approximation accuracy, it is shown that surrogate models with Latin hypercube sampling and spline radial basis functions outperform variable-order methods in the computational time of optimization while preserving accuracy. These surrogate models are also robust in the presence of model non-linearities. Such computationally efficient predictive surrogate models are therefore applicable in various fields, in particular for solving inverse problems and optimal control problems, some examples of which are demonstrated in this paper.
Keywords: Proper Orthogonal Decomposition, SVD, Radial Basis Functions,
Optimization, Surrogate Models, Smart Data Analytics, Parameter Estimation
## Chapter 1 Introduction
Over the years, mathematical modeling and optimization techniques have effectively described complex real-life dynamical structures using systems of differential equations. The dynamical behavior of such models, especially in optimization and inverse problems (problems where some of the ’effects’ (responses) are known but the ’causes’ (parameters) leading to them are unknown), often necessitates the repeated solution of the model equations under slight changes in the system parameters. While numerical models have replaced experimental methods due to their robustness, accuracy, and speed, their increasing complexity, high cost, and long simulation times have limited their application in domains where multiple evaluations of the model differential equations are demanded.
To mitigate this trade-off between computational cost and accuracy, one needs to focus on Reduced Order Models (ROM), which provide compact, accurate, and computationally efficient representations of ODEs and PDEs for solving such multi-query problems. These approximation models, also commonly known as surrogate models or meta-models shcherbatyy2018 (21), allow the determination of the solution of the model equations for any arbitrary combination of input parameters at a cost that is independent of the dimension of the original problem. Accordingly, they meet the most essential criterion of every analysis problem: highest fidelity at the lowest possible computational cost, where high fidelity is defined by the efficacy of theoretical methods to replicate the physical phenomena with the least possible error Emiliano2013 (14).
This paper employs Proper Orthogonal Decomposition (POD), a model reduction technique originating in statistical analysis and known for its optimality, as it captures the most dominant components of the data in the most efficient way hinze (11). POD serves the purpose of dimension reduction by extracting hidden structures from high-dimensional data and projecting them onto a lower-dimensional space springer2005 (15). In this work, POD is used to derive low-order models of dynamical systems by reducing a high number of interdependent variables to a much smaller number of uncorrelated variables while retaining as much as possible of the variation in the original variables.
Over a century ago, Pearson proposed the idea of representing statistical data in high-dimensional space using a straight line or plane, hence discovering a finite-dimensional equivalent of POD as a tool for graphical analysis Pearson1901 (19). In the years following Pearson’s paper, the technique was independently rediscovered by several other scientists, including Kosambi, Hotelling and Van Loan, and appears in the literature under different names such as Principal Component Analysis (PCA), the Hotelling Transformation and the Karhunen-Loève Expansion, depending on the branch in which it is being tackled. Despite its early discovery, the computational resources required to compute POD modes were limited in earlier years, and the technique remained virtually unused until the 1950s. Technological advancement took an upturn after that with the invention of powerful computers and led to the popularity of POD springer2005 (15). Since then, the development and applications of POD have been widely investigated in diverse disciplines such as structural mechanics springer2005 (15), aerodynamics Emiliano2013 (14), signal and image processing Benaarbia2017 (4), etc. Due to its strong theoretical foundations, the technique has been used in many applications, such as damage detection Lanata2006 (16), human face recognition Kirby1987 (23), detection of signals in multi-channel time series Wax1985 (25), exploration of peak clustering berardi2015 (5), and many more.
In general, a non-equivalent variant of POD, known as factor analysis, is well established and has been used for various applications Felix2018 (1, 2, 3, 18). Unlike POD, factor analysis assumes that the data have a strict factor structure and looks for the factors that account for the common variance in the data. In contrast, PCA, the finite-dimensional counterpart of POD, accounts for the maximal amount of variance of the observed variables. A PCA analysis consists of identifying the set of variables, also known as principal components, that retain as much variation from the original set of variables as possible. Similarly, Principal Expectile Analysis (PEC), which generalizes PCA to expectiles, was recently developed as a dimension reduction tool for extreme value theory Haerdle2019 (24). These POD-equivalent tools have also been adopted in analyses on several occasions, such as Felix2018 (1, 9, 17, 24). Yet, most of the literature exploits only real-life data for dimension reduction. Even though some analyses rely heavily on real-life data, there is a pressing need for tools that utilize simulated data generated from non-standard models with nonlinear differential equations, which are on a constant rise and hold potential for enriching the analysis.
Moreover, optimal control problems and mathematical optimization models are widely used in various applications. These models are often used for normative purposes to solve minimization/maximization problems and require repetitive evaluation in various contexts with different parameter values to find the optimum set of parameters. This parameter exploration process can be computationally intense, especially in complex non-linear systems, which emphasizes the need for dimension reduction for these models.
Through this research, the application of POD to reduce the dimensionality of dynamical systems is proposed. The present work explores the efficacy of POD on a few common applications, using models that have been previously defined and are commonly used. We hypothesize that the system responses of dynamical models can be obtained with very high accuracy, but at lower computational cost, through model reduction techniques. The novelty of this hypothesis lies in the fact that dimension reduction techniques have rarely been explored for optimal control problems; in particular, the combination of POD and Radial Basis Functions (RBF) to build surrogate models is quite underutilized, especially for the models discussed in this paper.
The computational procedure of the research is decomposed into an offline and an online phase. The offline phase (training of the model) entails the use of sampling techniques to generate data, computation of the snapshot matrix of model solutions using variable-order methods for solving the ODEs (the model of the dynamical system), obtaining the proper orthogonal modes via Singular Value Decomposition (SVD), and estimation of the POD expansion coefficients that approximate the POD basis (via radial basis function interpolation). The online phase (testing of the model) involves redefinition of the model equations in terms of surrogate models and computation of the system responses corresponding to any arbitrary set of input parameters in the given domain shcherbatyy2018 (21). Next, the quality of the model is validated and evaluated by carrying out an error analysis; various experimental designs are employed by varying the sampling and interpolation techniques and changing the size of the training set to determine the combination that results in the least maximum absolute error. Finally, using that experimental design, the optimization criteria are calculated with both models to evaluate the accuracy of the surrogate. For the computations, MATLAB software was developed by the authors, utilizing a combination of built-in and user-defined functions. The illustrations used in this work are also generated using MATLAB.
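The offline/online split described above can be sketched end to end in a few lines. Everything below is an illustrative assumption rather than the actual models of this paper: a toy scalar state equation with a known solution stands in for the ODE solver, a crude one-dimensional Latin hypercube for the sampling, and a multiquadric kernel (shape parameter c) for the RBF interpolation of the POD coefficients.

```python
import numpy as np

def solve_ode(b, t):
    # toy state equation y' = -b*y, y(0) = 1 (stands in for a real ODE solver)
    return np.exp(-b * t)

# --- offline phase: sample, build snapshot matrix, POD basis, RBF weights ---
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)
n_s = 12
edges = np.linspace(0.5, 3.0, n_s + 1)          # parameter domain [0.5, 3.0]
samples = rng.uniform(edges[:-1], edges[1:])    # one draw per stratum (1-D LHS)

Y = np.column_stack([solve_ode(b, t) for b in samples])   # snapshot matrix

U, s, _ = np.linalg.svd(Y, full_matrices=False)
k = 4
Phi = U[:, :k]                                  # POD basis (k modes)
A = Phi.T @ Y                                   # training expansion coefficients

c = 0.5                                         # multiquadric shape parameter
rbf = lambda r: np.sqrt(r**2 + c**2)
G = rbf(np.abs(samples[:, None] - samples[None, :]))
W = np.linalg.solve(G, A.T)                     # RBF weights for coefficients

# --- online phase: system response by matrix multiplication only ---
def surrogate(b):
    return Phi @ (rbf(np.abs(b - samples)) @ W)

err = np.max(np.abs(surrogate(1.7) - solve_ode(1.7, t)))
assert err < 0.05
```

The online evaluation never touches the ODE again: a new parameter value costs one kernel evaluation against the training sites and two small matrix products.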
The next chapter lays down the theoretical concepts related to POD, SVD and RBF, and shows how surrogate models are constructed to project the dynamical system onto subspaces consisting of basis elements that contain characteristics of the expected solution. Chapter 3 explains the computational procedure (algorithm), and Chapter 4 implements the concepts developed in Chapter 2 and the methodology presented in Chapter 3 on a set of dynamical systems. Finally, the last chapter concludes the main results and provides a summary of the current research, its limitations, as well as future prospects.
## Chapter 2 Mathematical Formulation
Model reduction techniques have been known for their ability to reduce the
computational complexity of mathematical models in numerical simulations. The
main reason ROM has found applications in various disciplines is due to its
strong theoretical foundations and the demand of model reduction techniques in
ever-so-rising computational complexities and intrinsic property of high
dimensionality of physical system. ROM addresses these issues effectively by
providing low dimensional approximations.
Although a variety of dimensionality reduction techniques exist, such as operational-based reduction methods Schilders2008 (20) and reduced basis methods Boyaval2010 (6), the ROM methodology is often based upon POD. Analogous to PCA, POD theory requires finding components of the system, known as Proper Orthogonal Modes (POMs), ordered in such a way that each subsequent mode holds less energy than the previous one. As stated earlier, POD is ubiquitous in the dimensionality reduction of physical systems. It is the optimal technique for capturing the system modes in the least-squares sense: for any system, incorporating k POMs gives the best k-component approximation of that system. This assures that any approximation formulated using POD is the best possible one; no other method can achieve the same accuracy with a lower number of components or modes.
This chapter discusses in depth the mathematical concepts associated with POD and its correspondence with SVD and RBF for the construction of surrogate models. The computational procedure adopted in Chapter 3 and Chapter 4 is strictly based on the theory formulated in this chapter.
### 1 Formulation of Optimization Problem
Many optimal control problems focus on minimization or maximization. In order to find an optimal set of parameters, optimization models are usually defined in which the problems are summarized by an objective function. These optimization parameters are called control parameters, and they affect the choice of allocation. In optimal control problems, these parameters are time paths which are chosen within certain constraints so as to minimize or maximize the objective functional. The applications presented in Chapter 4 are optimization problems, the general structure of which is discussed in the next paragraph.
Let us consider an optimization problem which consists of finding a vector of optimization parameters $u^{*}\in U_{S}$ and a proper state function $y^{*}\in Y_{S}$ that minimize the optimization criterion (objective function)
$\psi_{0}=\tilde{\psi}_{0}(u^{*},y^{*})=\min_{(u,y)\in U_{S}\times
Y_{S}}\tilde{\psi}_{0}(u,y)$ (2.1)
subject to ODEs (state equation)
$c(y,u)=0\sim\begin{dcases}y_{i}^{\prime}-f(t,u,y)=0,\ t\in[t_{0},T],\\\
y(t_{0})-y_{0}=0,\end{dcases}$ (2.2)
box constraints on the control variable
$U=\\{u\in U_{S}:u^{-}\leq u\leq u^{+},u^{-}\in U_{S},u^{+}\in U_{S}\\}$ (2.3)
and possibly additional equality and inequality constraints on state and control
$\begin{matrix}\tilde{\psi_{j}}(u,y)=0,j=1,\ldots,m_{1},\\\
\tilde{\psi_{j}}(u,y)\leq 0,j=m_{1}+1,\ldots,m.\end{matrix}$ (2.4)
where $U_{S}$ and $Y_{S}$ are real Banach spaces, $u=u(t)=[u_{1}(t),\ldots,u_{n_{u}}(t)]^{\top}\in U_{S}$, $y=y(t)=[y_{1}(t),\ldots,y_{n_{y}}(t)]^{\top}\in Y_{S}$, and $\tilde{\psi}_{j}:U_{S}\times Y_{S}\rightarrow\mathbb{R},j=0,1,\ldots,m$.
We assume that for each $u\in U$ there exists a unique solution $y(u)$ of the state equation $c(y,u)=0$. For compactness, we write the optimization problem (2.1)–(2.4) in reduced form: find a function $u^{*}$ such that
$\begin{matrix}u^{*}\in U_{\partial_{u}},\psi_{0}\left(u^{*}\right)=\displaystyle\min_{u\in U_{\partial_{u}}}\psi_{0}(u)\\\ U_{\partial_{u}}=\left\\{u:u\in U;\psi_{j}(u)=0,j=1,\ldots,m_{1};\psi_{j}(u)\leq 0,j=m_{1}+1,\ldots,m\right\\}\\\ c(y(u),u)=0\\\ \psi_{j}(u)=\tilde{\psi}_{j}(u,y(u)),j=0,1,\ldots,m\end{matrix}$ (2.5)
The optimal control problems in this research are solved using a direct method. Each problem is transformed into a nonlinear programming problem, i.e., it is first discretized and then the resulting nonlinear programming problem is optimized. The advantage of direct methods is that the optimality conditions of a nonlinear programming problem are generic, whereas the optimality conditions of undiscretized optimal control problems need to be re-established for each new problem and often require partial a-priori knowledge of the mathematical structure of the solution, which in general is not available for many practical problems.
The first step in the direct method is to approximate each component of the control vector by a function of a finite number of parameters, $u_{i}(t)=u_{i}(t,b^{(i)}),b^{(i)}=[b^{(i)}_{1},...,b^{(i)}_{n_{i}}]^{\top},i=1,\ldots,n_{u}$. As a result, we write the control function $u(t)$ as a function of the vector of optimization parameters $b$: $u(t)=u(t,b)$. In this paper we use a piecewise-linear or piecewise-constant approximation for each function $u_{i}(t),i=1,\ldots,n_{u}$.
The optimization problem can then be written as a nonlinear programming problem: find a vector $b^{*}$ such that
$\begin{matrix}b^{*}\in
U_{\partial},\psi_{0}\left(b^{*}\right)=\displaystyle\min_{b\in
U_{\partial}}\psi_{0}(b)\\\ U_{\partial}=\left\\{b:b\in
U_{b},\psi_{j}(b)=0,j=1,\ldots,m_{1};\psi_{j}(b)\leq
0,j=m_{1}+1,\ldots,m\right\\}\\\ U_{b}=\left\\{b:b\in R^{n},b^{-}\leq b\leq
b^{+},b^{-}\in\mathbb{R}^{n},b^{+}\in\mathbb{R}^{n}\right\\}\\\ c(y(b),b)=0\\\
\psi_{j}(b)=\tilde{\psi}_{j}(u(b),y(b)),j=0,1,\ldots,m\end{matrix}$ (2.6)
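A minimal sketch of this direct approach on a hypothetical scalar problem: the state equation y' = -y + u, the quadratic criterion and the two-segment piecewise-constant control are illustrative assumptions, and a coarse grid search over the box stands in for a proper nonlinear programming solver.

```python
import numpy as np

def objective(b, n_t=200, T=2.0):
    """psi_0(b) for a piecewise-constant control u(t,b) with two segments:
    RK4-integrate y' = -y + u, y(0) = 1, and accumulate int (y^2 + u^2) dt."""
    dt = T / n_t
    y, J = 1.0, 0.0
    for i in range(n_t):
        u = b[0] if i * dt < T / 2 else b[1]
        f = lambda y: -y + u
        k1 = f(y); k2 = f(y + dt / 2 * k1)
        k3 = f(y + dt / 2 * k2); k4 = f(y + dt * k3)
        J += (y**2 + u**2) * dt
        y += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return J

# coarse search over the box U_b = [-1, 1]^2 in place of an NLP solver
grid = np.linspace(-1.0, 1.0, 21)
best = min((objective([b1, b2]), b1, b2) for b1 in grid for b2 in grid)
assert best[0] < objective([0.0, 0.0])   # some control beats doing nothing
```

Each evaluation of psi_0 requires a full integration of the state equation, which is exactly the cost the surrogate model of the next section is designed to avoid.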
### 2 Surrogate Model for Optimization Problem
The solution of the optimization problem in equation (2.6) requires multiple solutions of the state equation $c(y(b),b)=0$ and the calculation of the optimization criterion $\tilde{\psi}_{0}$ and constraints $\tilde{\psi}_{j},j=1,\ldots,m$ of the system for different values of the optimization parameters $b$. The complexity of the mathematical models (state equations) that describe the state and behavior of the considered dynamical system requires significant computing resources (CPU time, memory, …) and occasionally puts the solvability of the optimization problem itself in question. In order to solve multi-query problems within a limited computational budget, one needs to construct approximation models (also known as surrogate models, meta-models or ROMs). A surrogate model replaces the high-fidelity problem with one of much lower numerical complexity.
In this paper surrogate models are constructed by first selecting a sampling strategy. Then, $n_{s}$ sampling points are generated and, for each sample point $b^{(i)}$, we solve the state equation in (2.6) (the ODEs) and obtain $n_{s}$ vectors of solutions (snapshots) $Y_{i}=\left[y\left(t_{1},b^{(i)}\right)^{\top},\ldots,y\left(t_{n_{t}},b^{(i)}\right)^{\top}\right]^{\top}\in\mathbb{R}^{m},m=n_{y}\times n_{t}$ at the time instances $t_{0}<t_{1}<t_{2}<\ldots<t_{n_{t}}=T.$ The snapshot vectors $Y_{i}$ form the snapshot matrix $Y=\left[Y_{1},Y_{2},\ldots,Y_{n_{s}}\right]\in\mathbb{R}^{m\times n_{s}}$.
Next, we construct the surrogate model using POD and RBF and calculate the values of the functionals $\hat{\psi}_{j}(b)=\tilde{\psi}_{j}(b,\hat{y}),j=0,1,\ldots,m$. A detailed description of the POD-RBF procedure is presented in the following sections of this chapter. The formulation of the optimal control problem for the surrogate model is to find a vector $\hat{b}^{*}$ such that:
$\begin{matrix}\hat{b}^{*}\in
U_{\partial},\hat{\psi}_{0}\left(\hat{b}^{*}\right)=\displaystyle\min_{b\in
U_{\partial}}\hat{\psi}_{0}(b)\\\ U_{\partial}=\left\\{b:b\in
U_{b},\hat{\psi}_{j}(b)=0,j=1,\ldots,m_{1};\hat{\psi}_{j}(b)\leq
0,j=m_{1}+1,\ldots,m\right\\}\\\ U_{b}=\left\\{b:b\in\mathbb{R}^{n},b^{-}\leq
b\leq b^{+},b^{-}\in\mathbb{R}^{n},b^{+}\in\mathbb{R}^{n}\right\\}\\\
\quad\hat{y}=S(b)\\\
\hat{\psi}_{j}(b)=\tilde{\psi}_{j}(u(b),\hat{y}),j=0,1,\ldots,m\end{matrix}$
(2.7)
Replacing the state equation in (2.6) with the surrogate model given in
equation (2.7) is hypothesized to decrease the computational time
significantly, because the surrogate is free of the complexity of the initial
problem and involves only matrix multiplications, which can be carried out in
much less time than solving ordinary differential equations with high-fidelity
methods. The hypothesis is tested by comparing the accuracy of the system
responses and the computation time for both equation (2.6) and equation (2.7).
The detailed procedure for testing the surrogate model is discussed in the
next chapter.
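The snapshot-generation step described above can be sketched as follows. The state equation, parameter samples and solver settings here are illustrative stand-ins (a scalar decay ODE $y'=-by$), not the actual model of equation (2.6):

```python
import numpy as np
from scipy.integrate import solve_ivp

def build_snapshot_matrix(rhs, y0, samples, t_eval):
    """Solve the state ODE for every sample point b^(i) and stack the
    flattened trajectories as columns of the snapshot matrix Y (m x n_s)."""
    cols = []
    for b in samples:
        sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), y0,
                        t_eval=t_eval, args=(b,))
        # y(t_1), ..., y(t_nt) flattened into one column of length m = n_y * n_t
        cols.append(sol.y.T.reshape(-1))
    return np.column_stack(cols)

# toy stand-in for the state equation: y' = -b * y, y(0) = 1
rhs = lambda t, y, b: -b * y
t_eval = np.linspace(0.0, 1.0, 21)
samples = [0.5, 1.0, 1.5, 2.0]          # sample points b^(i), e.g. from LHS
Y = build_snapshot_matrix(rhs, [1.0], samples, t_eval)
print(Y.shape)                          # (m, n_s) = (21, 4)
```

Each column corresponds to one sample point, each block of $n_{y}$ rows to one time instance, matching the layout of $Y$ above.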
### 3 Initial Sampling and Method of Snapshots
The method of snapshots for POD was developed by Sirovich [22] in 1987.
Generally, it consists of evaluating the model equations for a number of
sampling points at various time instances. Each model response is called a
snapshot and is recorded in a matrix collectively called the snapshot matrix.
The initial dimension of the problem is equal to the number of snapshots
$n_{s}$ recorded at each time instance $t_{i},i=1,...,n_{t}$. There is no
standard method for generating the sampling points. Nevertheless, the choice
of sampling method has a direct effect on the accuracy of the model and is
therefore regarded as a problem in its own right. This research briefly
explores the initial sampling problem by comparing various classical a priori
sampling methods. The deeper questions of sampling that relate to the choice
of surrogate model, the nature of the objective function and the analysis are
left for the reader to explore in recommended sources such as [14].
The main sampling methodology used in the computational procedure is Latin
Hypercube Sampling (LHS) and its variant Symmetric Latin Hypercube Sampling
(SLHS). LHS is a near-random sampling technique that aims at spreading the
sample points evenly across the surface. In statistics, a square grid
containing sample positions is a Latin square if and only if there is exactly
one sampling point in each row and each column. A Latin hypercube is the
generalization of this concept to an arbitrary number of dimensions, whereby
each sample is the only one in each axis-aligned hyperplane containing it.
Unlike Random Sampling (RS), frequently referred to as the Monte-Carlo method
in finance, LHS uses a stratified sampling technique that remembers the
positions of previous sampling points and shuffles the inputs before
determining the next sampling points. It is considered more efficient in a
large range of conditions and has proved to have faster speed and lower
sampling error than RS [10].
SLHS was introduced as an extension of LHS that achieves the purpose of
optimal design in a relatively more efficient way. It was also established
that SLHS sometimes has a higher minimum distance between randomly generated
points than LHS. In a nutshell, both LHS and SLHS are hypothesized to perform
better than RS. Nevertheless, sampling is performed using all three techniques
in this work to determine which technique provides optimal sampling of the
underlying space and maximizes the system accuracy. A simple sampling
distribution of each of the three techniques is illustrated in figure 2.1.
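The Latin property described above is easy to demonstrate in code. The sketch below uses SciPy's quasi-Monte Carlo module with illustrative bounds, and checks that each one-dimensional stratum receives exactly one point, which plain random sampling does not guarantee:

```python
import numpy as np
from scipy.stats import qmc

n_s, dim = 10, 2
sampler = qmc.LatinHypercube(d=dim, seed=42)
unit = sampler.random(n=n_s)                        # n_s points in [0, 1)^dim
samples = qmc.scale(unit, [0.1, 0.1], [0.6, 0.6])   # map to parameter bounds

# Latin property: each of the n_s equal strata per axis holds exactly one point
for j in range(dim):
    strata = np.floor(unit[:, j] * n_s).astype(int)
    assert sorted(strata) == list(range(n_s))
```

Replacing `qmc.LatinHypercube` with `np.random.default_rng().random((n_s, dim))` gives plain RS, for which the stratum check generally fails.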
Figure 2.1: Comparison of various sampling techniques.
### 4 Approximation
The overarching goal of the POD method is to provide a fit of the desired data
by extracting interpolation functions from the information available in the
data set. Geometrically, it derives ROMs by projecting the original model onto
the reduced space spanned by the POD modes [14]. A simple mathematical
formulation of the POD technique is laid out in this section, closely
following the references [7, 8, 21].
Suppose that we wish to approximate the response of the system given by the
output parameters $y\in\mathbb{R}^{m}$, where $m=n_{y}\times n_{t}$, using the
set of input parameters $b\in\mathbb{R}^{n_{u}}$ over a certain domain
$\Omega$. The ROMs approximate the state function $y(t)$ in the domain
$\Omega$ using a linear combination of basis functions
$\phi^{i}\left(t\right)$ such that
$y\left(t\right)\approx\sum_{i=1}^{M}{a_{i}\,\phi^{i}\left(t\right)}$ (2.8)
where $a_{i}$ are the unknown amplitudes of the expansion and $t$ is the
temporal coordinate. The first step in this process is to find the basis, and
the choice is clearly not unique. Once the basis functions are chosen, the
amplitudes are determined by a minimization process and the least-squares
error of the approximation is calculated. It is ideal to take an orthonormal
set as the basis, with the property
$\int_{\Omega}{\phi_{k_{1}}\left(t\right)\,\phi_{k_{2}}\left(t\right)dt=\left\{\begin{matrix}1&k_{1}=k_{2}\\
0&k_{1}\neq k_{2}\end{matrix}\right.}$ (2.9)
This way, the determination of the amplitude $a_{k}$ depends only on the
function $\phi_{k}(t)$ and not on any other $\phi$. Along with being
orthonormal, the basis should approximate the function in the best possible
way in terms of the least-squares error. Once found, these special ordered
orthogonal functions are called the POMs (proper orthogonal modes) of the
function $y(t)$, and equation (2.8) is called the POD of $y(t)$.
In order to determine the number of POMs that should be used in the
approximation of the lower-dimensional space, we use the fact that POD
inherently orders the basis elements by their relative importance. This is
further clarified in the context of SVD in the next section.
### 5 Singular Value Decomposition
There prevails a misconception amongst researchers about the distinction
between SVD and POD. As opposed to the common understanding, POD and SVD are
not strictly the same: the former is a model reduction technique whereas the
latter is merely a method of calculating the orthogonal basis. Since the
theory of SVD is so widespread, this section will only highlight the most
general and relevant details of SVD that are helpful in the derivation of POMs
and the POD basis.
In general, SVD is a technique that decomposes any real rectangular matrix $Y$
into three matrices $U$, $\Sigma$ and $V$, where $U$ and $V$ are orthogonal
matrices and $\Sigma$ is a diagonal matrix that contains the singular values
$\sigma_{i}$ of $Y$, sorted in decreasing order such that
$\sigma_{1}\geq\sigma_{2}\geq...\geq\sigma_{d}\geq 0$, where $d$ is the number
of non-zero singular values of $Y$.
The singular values can then be used as a guide to determine the POD basis. If
a $k$-dimensional approximation of the original surface is required, where
$k<d$, the first $k$ columns of the matrix $U$ serve as the basis
$\phi^{i},i=1,...,k$. This set of columns, gathered in the matrix $\Phi$,
forms an orthonormal basis for our new low-dimensional surface, and $k$ is
referred to as the rank.
After collecting the basis using SVD, it is easy to calculate the matrix of
amplitudes $A_{k}$. Let $\Phi_{k}$ be the matrix of the first $k$ columns of
$U$, corresponding to the $k$ largest singular values of our initial matrix
$Y$; then the matrix of amplitudes is given by $A_{k}=\Phi_{k}^{\top}Y$, so
that $Y\approx Y_{k}=\Phi_{k}A_{k}$.
Literature on SVD has established that the relative magnitude of each singular
value with respect to all the others gives a measure of the importance of the
corresponding eigenfunction in representing elements of the input collection.
Based on the same idea, a common approach to selecting the number of POMs $k$
is to set a desired error margin $\epsilon_{\text{POD}}$ for the problem under
consideration and choose $k$ as the minimum integer such that the cumulative
energy $E(k)$ captured by the first $k$ singular values (now POMs) is at least
$1-\epsilon^{2}_{\text{POD}}$, i.e.
$E(k)=\frac{\displaystyle\sum_{i=1}^{k}\sigma_{i}^{2}}{\displaystyle\sum_{i=1}^{d}\sigma_{i}^{2}}\geq
1-\epsilon^{2}_{\text{POD}}$ (2.10)
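The energy criterion translates directly into code, reading (2.10) as "smallest $k$ whose cumulative energy reaches $1-\epsilon^{2}_{\text{POD}}$". The snapshot matrix below is a synthetic example with prescribed singular values, so the selected rank can be checked:

```python
import numpy as np

def pod_basis(Y, eps_pod=0.01):
    """Return the POD basis Phi_k (first k left singular vectors of Y),
    with k the smallest rank whose cumulative energy reaches 1 - eps_pod**2."""
    U, s, _ = np.linalg.svd(Y, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 1.0 - eps_pod**2) + 1)
    return U[:, :k], k

# synthetic snapshot matrix with singular values 10, 5 and 1e-6:
# two modes carry essentially all of the energy
rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.standard_normal((50, 3)))
V0, _ = np.linalg.qr(rng.standard_normal((8, 3)))
Y = U0 @ np.diag([10.0, 5.0, 1e-6]) @ V0.T
Phi_k, k = pod_basis(Y)
print(k)   # 2
```

The third mode contributes a fraction of the energy far below $\epsilon^{2}_{\text{POD}}=10^{-4}$, so the criterion truncates the basis at $k=2$.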
### 6 Radial Basis Functions
With the basis vectors and the amplitude matrix, a low-dimensional
approximation of our problem has been constructed using discrete POD theory.
However, the formulation is not yet very useful, since the new model can only
give the responses of the system for a discrete number of parameter
combinations (those that were previously used to generate the snapshot
matrix). In many practical applications (optimization and inverse analysis),
even though the values of the input parameters may fall in a particular range,
they can be any arbitrary combination of numbers within that range. That is
why the newly constructed model needs to be approximated in a better way. In
this research, POD is coupled with RBF to create a low-order parameterization
of high-order systems for accurate prediction of system responses.
RBF is an interpolation technique that determines one continuous function
defined over the whole domain. It is a widely used smoothing and
multidimensional approximation technique. For the construction of the
surrogate model using our current basis, consider a function $f(b)=y$, where
$b$ is the vector of parameters and $y$ is the output of the system that has
to be estimated. Let $Y_{k}$ be the reduced-dimensional matrix calculated by
multiplying the basis and amplitude matrices. It is now easy to apply RBF to
the reduced-dimensional space, where the system responses are expressed as
amplitudes in the matrix $A_{k}$. Therefore, the surrogate model takes the
form $f_{a}(b)=a$, where $a$ is the vector of amplitudes. Hence,
$f(b)=y\approx\Phi_{k}f_{a}(b)$ (2.11)
When RBF is applied for the approximation of $f_{a}$, $f_{a}$ is written as a
linear combination of basis functions $g_{i}$ such that
$f_{a}(b)=\begin{bmatrix}a_{1}\\ a_{2}\\ \vdots\\ a_{K}\end{bmatrix}=\begin{bmatrix}d_{11}\\ d_{21}\\ \vdots\\ d_{K1}\end{bmatrix}g_{1}(b)+\begin{bmatrix}d_{12}\\ d_{22}\\ \vdots\\ d_{K2}\end{bmatrix}g_{2}(b)+\ldots+\begin{bmatrix}d_{1N}\\ d_{2N}\\ \vdots\\ d_{KN}\end{bmatrix}g_{N}(b)=D\,g(b)$ (2.12)
Once the basis functions $g_{i}$ are known, the aim is to solve for the
interpolation coefficients, collectively stored in the matrix $D$. Since we
already have the values of the amplitudes $A_{k}$ from the last step, the
matrix $D$ can be easily obtained from the interpolation conditions
$A_{k}=DG$, i.e. $D=A_{k}G^{-1}$, where $G$ collects the values of the basis
functions at the training points, $G_{ij}=g_{i}(b^{(j)})$. Finally, using
equation (2.11), our initial space $y$ can be approximated by:
$y\approx\Phi\,D\,g(b)=\hat{y}$ (2.13)
In this work, linear and cubic spline RBFs are used for the analysis, given by:
$\text{linear spline}:\ g_{j}(b)=||b-b_{j}||;\qquad\text{cubic
spline}:\ g_{j}(b)=||b-b_{j}||^{3}$ (2.14)
Since the matrices $\Phi$ and $D$ are calculated once and for all, one only
needs to compute the vector $g(b)$ for any arbitrary combination of parameters
to obtain the system response.
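Putting the pieces together, the offline fit (SVD basis plus linear-spline RBF coefficients) and the online evaluation $\hat{y}=\Phi D g(b)$ can be sketched as below. The state model $y(t;b)=e^{-bt}$ and all numerical values are illustrative, not the thesis's models:

```python
import numpy as np

def fit_pod_rbf(B, Y, k, kernel=lambda r: r):
    """Offline phase: POD basis Phi (m x k) and RBF coefficients D (k x n_s).
    B: (n_s, n) sample points, Y: (m, n_s) snapshot matrix."""
    Phi, _, _ = np.linalg.svd(Y, full_matrices=False)
    Phi = Phi[:, :k]
    A = Phi.T @ Y                                    # amplitude matrix A_k
    G = kernel(np.linalg.norm(B[:, None] - B[None, :], axis=-1))
    D = np.linalg.solve(G.T, A.T).T                  # solve A_k = D G
    return Phi, D

def predict(Phi, D, B, b, kernel=lambda r: r):
    g = kernel(np.linalg.norm(np.atleast_2d(b) - B, axis=-1))
    return Phi @ (D @ g)                             # y_hat = Phi D g(b), (2.13)

# toy model y(t; b) = exp(-b t) on 21 time instances, 8 training samples of b
t = np.linspace(0.0, 1.0, 21)
B = np.linspace(0.5, 2.0, 8)[:, None]
Y = np.exp(-B * t).T                                 # snapshot matrix (21, 8)
Phi, D = fit_pod_rbf(B, Y, k=4)
y_hat = predict(Phi, D, B, [1.1])                    # b = 1.1 lies between nodes
```

At a training node the prediction reproduces the snapshot up to the rank-$k$ truncation; between nodes the accuracy is governed by the RBF interpolation of the amplitudes.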
Replacing the state equation (2.2) with the surrogate model given in equation
(2.13) is hypothesized to decrease the computational time significantly,
because the surrogate is free of the complexity of the initial problem and
involves only matrix multiplications, which can be carried out in much less
time than solving ordinary differential equations with high-fidelity methods.
The hypothesis is tested by comparing the accuracy of the system responses and
the computation time for both equation (2.2) and equation (2.13). The detailed
procedure for testing the surrogate model is discussed in the next section.
### 7 Error Analysis
The final step in the analysis of surrogate models is to determine how
accurate the low-dimensional surrogate models are in determining the system
responses.
This is done by generating $n_{g}$ sample points of the parameter set $P$,
using the same sampling technique that was adopted for generating the training
set. It must be noted that the newly generated test points are not the same as
the ones used to train the model; hence, the system responses at these points
fall between the nodes and are ideal for checking the accuracy of the models.
The system responses
$Y_{g}=[y_{1},y_{2},...,y_{n_{g}}]\in\mathbb{R}^{m\times n_{g}}$ are obtained
using the initial numerical method (which solves the entire system), and
$\hat{Y}_{g}=[\hat{y}_{1},\hat{y}_{2},...,\hat{y}_{n_{g}}]\in\mathbb{R}^{m\times
n_{g}}$ are recorded using the newly constructed surrogate model. The accuracy
and error measures are then calculated using the following formulas:
$R^{2}=1-\frac{\displaystyle\sum_{1}^{n_{g}}|y_{j}-\hat{y}_{j}|}{\displaystyle\sum_{1}^{n_{g}}|y_{j}-\overline{y_{j}}|}$
(2.15)
$\text{MAE}=\frac{1}{n_{g}}\sum_{1}^{n_{g}}|y_{j}-\hat{y}_{j}|$ (2.16)
$\text{MXAE}=\max_{1\leq j\leq n_{g}}|y_{j}-\hat{y}_{j}|$ (2.17)
$\text{RMAE}=\max_{1\leq i\leq m}\max_{1\leq j\leq
n_{g}}\frac{|y_{ji}-\hat{y}_{ji}|}{y_{ji}}$ (2.18)
All four measures are put to use at various instances in the thesis. For
example, the coefficient of determination ($R^{2}$) in equation (2.15), the
Mean Absolute Error (MAE) in equation (2.16) and the Maximum Absolute Error
(MXAE) in equation (2.17) are evaluated for various rank approximations of the
SVD, whereas a tolerance threshold for the Relative Maximum Absolute Error
(RMAE) in equation (2.18) is defined for testing the accuracy of the
optimization results obtained through the original and surrogate models.
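The three absolute measures are one-liners in code. The sketch below uses a tiny hand-made pair of response matrices; note that MAE here averages over all entries rather than over response vectors, and RMAE assumes the reference responses are non-zero:

```python
import numpy as np

def error_measures(Yg, Yg_hat):
    """MAE (2.16), MXAE (2.17) and RMAE (2.18) for original responses Yg
    and surrogate responses Yg_hat, both of shape (m, n_g)."""
    abs_err = np.abs(Yg - Yg_hat)
    mae = abs_err.mean()
    mxae = abs_err.max()
    rmae = (abs_err / np.abs(Yg)).max()
    return mae, mxae, rmae

Yg = np.array([[1.0, 2.0], [4.0, 5.0]])
Yg_hat = np.array([[1.1, 2.0], [4.0, 4.5]])
mae, mxae, rmae = error_measures(Yg, Yg_hat)
print(mae, mxae, rmae)   # approximately 0.15, 0.5, 0.1
```
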
## Chapter 3 Algorithm
While an understanding of the mathematical formulation of the POD-RBF
procedure presented in Chapter 2 is essential, its implementation can be quite
technical, as it involves high-dimensional matrices, a series of functions,
complex loops and iterative processes. The idea of this chapter is to give a
detailed description of the algorithm that was implemented in MATLAB for this
research. The whole computational procedure is divided into three parts for
simplicity: experimental design, training phase and testing phase. A section
of the chapter is devoted to each part, in which the importance of the steps
of the algorithm is discussed and the intricacies of the computational
procedure are highlighted. Finally, the iterative nature of the algorithm is
elaborated in Section 4.
### 1 Experimental Design
For the construction of a surrogate model for a dynamical system, the proper
definition of the optimal control problem and the planning of an appropriate
experimental design are highly important, since these conditions are
hypothesized to reflect on the accuracy of the model. This pivotal decision
relies on the choice of fixed and variable parameters, the values of the
constants for the fixed parameters, the domain of the variable/control
parameters, the number of initial sampling points, the number of time
instances for the computation of snapshots, the sampling strategy, the
interpolation technique, and the minimum error of approximation/stopping
criterion.
Because of the inherent dependence of the model on the factors listed above,
the decision about the experimental design has to be made before the
construction of the surrogate model. In this research, various combinations of
these factors are considered to determine which experimental design results in
the highest accuracy while satisfying the time constraints for the generation
of the snapshots. The error of approximation can be defined for the accuracy
of the system responses, the computational time, or both.
### 2 Offline/Training Phase
The offline phase (training of the model) entails the use of sampling
techniques to generate data, the computation of the snapshot matrix of model
solutions using variable-order methods for solving the ODEs (the model of the
dynamical system), the extraction of the proper orthogonal modes via singular
value decomposition, and the estimation of the POD expansion coefficients that
approximate the POD basis via RBFs.
The next step in the analysis is to determine the appropriate number of POD
modes to be used in the surrogate model. For that, the orthogonal basis is
found using SVD, and the error measures (2.15), (2.16) and (2.17) are used to
determine the singular values (rank) whose corresponding eigenvectors will be
used as the POD basis. Next, the amplitudes of the approximation $a_{i}$ are
computed using the basis vectors $\phi_{i},i=1,...,k$ and stored in the
amplitude matrix $A_{k}$. With this, the dimensionality of the problem is cut
from $n_{s}$ to just $k$ (the rank). Now, to obtain the system response for
any arbitrary data point, it is sufficient to simply multiply the reduced
basis with the corresponding amplitudes.
In the final part of the offline phase, POD is combined with RBF. The
coefficients of the RBF interpolation, collected in the matrix $D$, are
calculated using our initial data points $b^{(i)}$ and our final matrix of
amplitudes $A_{k}$ as inputs. With this, the training phase comes to an end.
Now, for the computation of the system responses, $y\approx\Phi\,D\,g(b)$, the
surrogate model can be used with only $g(b)$ remaining to be calculated, which
depends upon the test points.
### 3 Online/Testing Phase
The last step in the construction of the low-dimensional model is to check the
overall error of approximation. This is done by taking the sample points and,
for each data point, first recording the original response of the function by
solving the ODEs using the MATLAB solver ode15s, and then using the newly
developed surrogate model to calculate the system response for the same data
point. Finally, the error measure RMAE (2.18) is calculated for each
experimental design and compared to determine which combination meets the
required tolerance level.
After deciding on the final sampling strategy, number of sampling points and
interpolation technique, the optimization problem is solved using the system
responses of both the original and surrogate models. Then, RMAE is calculated
to evaluate the accuracy of the surrogate model. If the error is above the
decided threshold, the algorithm enters an iterative process that decreases
the width of the domain of the control parameters. A detailed discussion of
the iterative process is presented in the next section.
### 4 Iterative Process
As stated in the previous section, when the optimization results are obtained
using both the original and surrogate models, sometimes the desired accuracy
of the model is not obtained in the first iteration, despite selecting the
best experimental design. This is because the optimal values are usually
corner points, and predictive models in general tend to perform poorly at the
extreme ends. One of the most effective methods to overcome this issue is the
use of adaptive sampling, a method that has been used by many researchers,
such as [14], with the aim of finding optimal design space points. Despite the
effectiveness of the approach, it was not adopted in this work due to the
limited scope of the research, as previously explained in Section 3.
The algorithm used for this research, on the other hand, caters to the
aforementioned issue in two ways. Firstly, it trains the initial model with
sampling points from a slightly wider domain than the domain in which the
optimization is performed. This way, the corner points are incorporated into
the sampling space and the surrogate model tends to provide a better
approximation for the optimal points. Secondly, in order to minimize the error
of approximation, the algorithm allows the width of the domain of the control
parameters to be decreased at each iteration. By decreasing the size of the
design space, the sampling points move closer together, and even if the corner
points are not accounted for in the sampling design, the smallest distance
between a corner and the neighboring points is lower in the smaller domain,
resulting in a better approximation and higher accuracy. If the required
accuracy is not achieved, the iterative algorithm becomes active: every time
the error of approximation is higher than the tolerance level, it shortens the
domain and reconstructs the surrogate model for analysis. The iterative
process can be summarized in four steps:
1. Initialization: In this step, the parameters of the algorithm that are
required for the iterative process are initialized, such as the width (the
length of the domains of the control parameters), the desired tolerance level
and $b^{(0)}$, the initial guess for $b$ (the optimization parameters).
2. Setting up the bounds: In this step, upper and lower bounds of the domain
are defined for each control parameter. This is done by taking $b^{(0)}$,
interpolating it and substituting it as the value of the control variables in
the data structure. Next, the new bounds are created centered at $b^{(0)}$.
The width of the domain for each subsequent iteration is lower than for the
previous iteration. The value of $b^{(0)}$ is replaced with the optimal value
of $b$ obtained using the surrogate model ($\hat{b}^{*}$) in the previous
iteration. Finally, it is checked whether the new bounds are within the bounds
that were defined at the beginning of the problem. If not, the algorithm
restricts them so that they do not exceed the initial bounds. Step 2 of the
iterative process is depicted for two optimization parameters in figure 3.1.
Figure 3.1: Example of iterative algorithm of two optimization parameters
$b_{1}\text{ and }b_{2}$ with iterations $i=1,\ldots,5$ and recursively
decreasing lengths $l_{i},i=1,\ldots,5$
3. Optimization: This is the main step of the algorithm, which was discussed
in detail in the second and third sections of this chapter. In summary, we
create the sampling set and snapshots, construct the surrogate model, solve
the optimization problem and calculate the error.
4. Updating parameters: This step prepares the parameters for the next
iteration in the case when the error of approximation exceeds the tolerance
level. In general, the algorithm replaces $b^{(0)}$ with the optimized value
$\hat{b}^{*}$ from the surrogate response of the current iteration and
shortens the width using a predefined multiplier. If the tolerance criterion
is met, the iterative process stops; else it goes back to step 2.
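The four steps above can be condensed into a short driver loop. Here `solve_on` stands in for step 3 (train the surrogate on the current box and optimize); the quadratic toy objective, grid search and all constants are illustrative:

```python
import numpy as np

def iterative_optimize(solve_on, b0, lo, hi, width, tol, shrink=0.5, max_iter=10):
    """Steps 1-4: centre a shrinking box at the current best point, rebuild
    the surrogate and optimize on it (via solve_on), and stop once the error
    of approximation eps meets tol. Boxes are clipped to the initial bounds."""
    b0 = np.asarray(b0, dtype=float)
    for _ in range(max_iter):
        lb = np.clip(b0 - width / 2, lo, hi)      # step 2: bounds around b0
        ub = np.clip(b0 + width / 2, lo, hi)
        b_star, eps = solve_on(lb, ub)            # step 3: build + optimize
        if eps <= tol:                            # tolerance criterion met
            return b_star
        b0, width = b_star, width * shrink        # step 4: recentre, shrink
    return b0

# toy step 3: grid-search a quadratic on the box; error proxy = grid spacing
def solve_on(lb, ub):
    grid = np.linspace(lb[0], ub[0], 11)
    b_star = grid[np.argmin((grid - 0.37) ** 2)]
    return np.array([b_star]), (ub[0] - lb[0]) / 10

b = iterative_optimize(solve_on, [0.5], lo=0.1, hi=0.6, width=0.5, tol=0.01)
```

Each pass halves the search box around the latest optimum, so the surrogate is retrained on an ever-smaller neighbourhood of the solution.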
The computational procedure discussed throughout this chapter is summarized in
the flowchart presented in figure 3.2.
Figure 3.2: POD-RBF algorithm flowchart
## Chapter 4 Application of POD-RBF Procedure on Dynamical Systems
In this chapter, the POD-RBF procedure is trained and used to construct
surrogate models for real-life dynamical systems and to solve the associated
optimization problems. Two dynamical systems of varying complexity are
presented, with model 1 being a simple non-linear ODE problem and model 2
featuring a non-linear system of equations with complex optimization criteria.
For each model, a description of the problem is presented and the values of
the initial parameters used in the numerical experiments are defined. Next,
numerical experiments are performed to first decide the combination of
sampling technique, interpolation method and number of sampling points optimal
for that model; then the optimization problem is solved to evaluate the
accuracy of the surrogate responses and the difference in the computational
time of the optimization with the original and POD-RBF methods.
As a convention for this chapter, variables with a hat represent results
obtained using the surrogate model, and those without a hat stand for results
from the original model. The descriptions of the common variable names are
summarized in table 4.1.
Notation | Description
---|---
$b^{(0)}$ | Initial value of optimization parameter
$\hat{b}^{*}$ | Optimal value of optimization parameter, surrogate model
$b^{*}$ | Optimal value of optimization parameter, original model
$\psi_{0}(b^{(0)})$ | Value of optimization criteria for $b^{(0)}$, original model
$\psi_{0}(\hat{b}^{*})$ | Value of optimization criteria for $\hat{b}^{*}$, original model
$\widehat{\psi_{0}}(\hat{b}^{*})$ | Value of optimization criteria for $\hat{b}^{*}$, surrogate model
$\psi_{0}(b^{*})$ | Value of optimization criteria for $b^{*}$, original model
$\psi_{i}(b^{(0)})$ | Value of $i^{th}$ optimization constraint for $b^{(0)}$, original model
$\psi_{i}(\hat{b}^{*})$ | Value of $i^{th}$ optimization constraint for $\hat{b}^{*}$, original model
$\widehat{\psi_{i}}(\hat{b}^{*})$ | Value of $i^{th}$ optimization constraint for $\hat{b}^{*}$, surrogate model
$\psi_{i}(b^{*})$ | Value of $i^{th}$ optimization constraint for $b^{*}$, original model
Table 4.1: Details of notations used in the following analysis
### 1 Model 1: Science Policy
#### 1.1 Description of the Model
This section features a very interesting application of optimal control theory
in economics. The problem is one of the oldest optimal control problems in
economics, known as science policy, and was originally introduced in 1966 by
M.D. Intriligator and B.L.R. Smith in their paper "Some Aspects of the
Allocation of Scientific Effort between Teaching and Research" [13]. Science
policy addresses the important issue of the allocation of new scientists
between teaching and research staff, in order to maintain the strength of
educational processes or, alternatively, to avoid any other dangers caused by
an inappropriate allocation between scientific careers [12]. In order to find
the optimal allocation, the optimal control problem was formulated as follows:
$\max_{(u,y)\in U\times Y}\tilde{\psi}_{0}=\int_{t_{0}}^{T}[0.5y_{1}(t)+0.5y_{2}(t)]\,dt,$ (4.1)
subject to
$c(y,u)=0\sim\begin{dcases}y_{1}^{\prime}(t)-u(t)gy_{1}(t)+\delta y_{1}(t)=0,\quad t\in[t_{0},T]\\ y_{2}^{\prime}(t)-(1-u(t))gy_{1}(t)+\delta y_{2}(t)=0\\ y_{1}(t_{0})-y_{10}=0,\quad y_{2}(t_{0})-y_{20}=0\end{dcases}$
$\begin{bmatrix}\tilde{\psi}_{1}\\ \tilde{\psi}_{2}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}\sim\begin{dcases}y_{1}(T)-y_{1T}=0\\ y_{2}(T)-y_{2T}=0\end{dcases}$
$u^{-}\leq u(t)\leq u^{+}$
In this formulation, the state variables $y_{1}$ and $y_{2}$ represent the
teaching scientists and research scientists, respectively, at any given time
$t$. The detailed description of all the parameters and their values is
summarized in table 4.2. As the control variable $u$ represents the proportion
of new scientists becoming teachers, $(1-u)$ represents the proportion of
researchers. Hence, the differential equations determine the rate of change of
the number of teachers and researchers by subtracting the exiting proportion
from the allocated proportion. The upper and lower limits of the control
function indicate the limits of science policy in affecting initial career
choices, via government contracts, grants, incentive schemes, etc.
Parameter | Definition | Value
---|---|---
$u(t_{0})$ | Proportion of new scientists becoming teachers at initial time | 0.5
$g$ | Number of scientists annually produced by one scientist | 0.14
$\delta$ | Rate of exit of scientists due to death, retirement or transfer | 0.02
$y_{10}$ | Number of initial scientists working as teachers | 100
$y_{20}$ | Number of initial scientists working as researchers | 80
$T$ | Final time for the analysis in this policy | 15
$y_{1T}$ | Number of final scientists working as teachers | 200
$y_{2T}$ | Number of final scientists working as researchers | 240
$u^{-}$ | Lower limit of control function | 0.1
$u^{+}$ | Upper limit of control function | 0.6
Table 4.2: Description of parameters for Model 1
The problem is one of choosing a trajectory for the allocation $u(t)$ such
that welfare, given by the objective function in equation (4.1), is maximized.
The terminal part $g_{1}(.,.)$ of welfare is not accounted for in the
objective function, but the state constraints $y_{1}(T)-y_{1T}=0$ and
$y_{2}(T)-y_{2T}=0$ are added to compensate for it. The optimization process
is focused on maximizing the intermediate value $g_{2}(.,.,.)$ of welfare. The
welfare function is taken to be additive in the individual utilities, along
the lines of the utilitarian approach. The utilities are set as linear
functions, with the assumption that teachers and researchers are perfect
substitutes and that the allocation of any scientist to one career leads to a
complete abandonment of the other. The assumption, even though unrealistic, is
granted for simplicity and can be refined at later stages.
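The state equations of (4.1) with the parameter values of table 4.2 are straightforward to integrate for the initial constant control $u(t)=0.5$; the solver choice and tolerances below are incidental:

```python
import numpy as np
from scipy.integrate import solve_ivp

g, delta = 0.14, 0.02            # production and exit rates (table 4.2)
u = lambda t: 0.5                # constant control: the initial guess

def rhs(t, y):
    y1, y2 = y                   # teachers, researchers
    return [u(t) * g * y1 - delta * y1,          # y1' = u g y1 - delta y1
            (1.0 - u(t)) * g * y1 - delta * y2]  # y2' = (1-u) g y1 - delta y2

sol = solve_ivp(rhs, (0.0, 15.0), [100.0, 80.0], rtol=1e-8, atol=1e-8)
y1T, y2T = sol.y[:, -1]
print(y1T - 200.0, y2T - 240.0)  # terminal-constraint values psi_1
```

For constant $u$ the first equation has the closed form $y_{1}(t)=100\,e^{0.05t}$, giving $y_{1}(T)\approx 212$ and $y_{2}(T)\approx 197$: the first terminal constraint is missed by roughly $+12$ and the second by roughly $-43$, consistent in magnitude with the $\psi_{1}(b^{(0)})$ values reported in table 4.4.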
#### 1.2 Simulation
This system of equations is solved for $n_{s}=40,60,80$ training points,
generated with LHS, SLHS and RS to create the snapshot matrix. The desired
tolerance level is $\epsilon_{\text{POD}}=0.01$. The plots of the singular
values depicted a similar pattern for all the experimental designs. The
singular value plot for one specific example, SLHS with $n_{s}=40$, is
presented in figure 4.1 and shows that the first 4 singular values explain
almost 100% of the variance. Given the criterion in equation (2.10), we choose
the rank $k=4$.
Figure 4.1: Cumulative energy plot to determine singular values for Model 1.
The surrogate model was constructed for each of the variants with this rank,
and the RMAE values are reported in table 4.3. The table shows that the lowest
RMAE was obtained for LHS, followed by SLHS and RS. As the theory suggests,
RMAE is observed to decrease with an increasing number of sampling points,
with the exception of the cubic spline in random sampling. The anomalous
behavior of RS can be associated with its randomness, which sometimes
generates sampling points that belong to only one region of the surface,
leading to a higher variance in the model and a higher error of approximation,
even with an increasing number of training points. Another trend that can be
consistently observed is that the linear spline RBF tends to perform better
than the cubic spline for this model. Overall, the best experimental design
for this model is the combination of LHS with linear spline RBF and
$n_{s}=80$. The surrogate model approximation for the initial control value
$u=0.5$ and the original system response are plotted in figure 4.2 and show
that the approximated responses are very close to the actual responses.
Sampling | Interpolation | $n_{s}=40$ | $n_{s}=60$ | $n_{s}=80$
---|---|---|---|---
LHS | Linear | 0.02034 | 0.00293 | 0.00150
LHS | Cubic | 0.05316 | 0.00647 | 0.00641
SLHS | Linear | 0.03825 | 0.00679 | 0.00437
SLHS | Cubic | 0.05175 | 0.00897 | 0.00861
RS | Linear | 0.01525 | 0.02410 | 0.02792
RS | Cubic | 0.16457 | 0.26597 | 12.91601
Table 4.3: RMAE for various experimental designs of Model 1
Figure 4.2: Actual surface vs approximated surface for Model 1.
#### 1.3 Optimization
For the final step of the analysis, the surrogate model was constructed with
40 training points, LHS, and linear spline RBF. Here $n_{s}=40$ was used
because, given the simplicity of the problem, the accuracy required for
optimization can be achieved with a small number of training points. The
optimization problem is solved with two optimization parameters for the
control function using both the original and surrogate models. The results of
the optimization are given in table 4.4. The problem started with an equal
proportion of new scientists allocated to both careers, with the initial value
of the state constraint $\psi_{1}(b^{(0)})=[11.8001;43.0163]$ indicating that
the terminal numbers of teachers and researchers missed $y_{1T}$ and $y_{2T}$
by about 12 and 43 units, respectively. The solution to the problem allocates
around 52% of new scientists to teaching at the beginning of the time horizon.
This proportion decreases as time passes, with around 47% of scientists
allocated to the teaching staff at the end (see figure 4.3(b)). The optimal
surface in figure 4.3(a) shows that the number of teaching staff was allocated
to be higher than the number of researchers until the end time. The surrogate
model gave consistent results, with an error of approximation (the relative
error between $\psi_{0}(\hat{b}^{*})$ and $\widehat{\psi_{0}}(\hat{b}^{*})$)
as low as 0.005 in the first iteration.
Even though the optimization using the surrogate model was slightly quicker
than with the original model, the time taken for the construction of the
surrogate model was considerable. Hence, despite the highly accurate system
responses of the surrogate model, substituting the original model with the
POD-RBF model might not be useful here, as the total time taken by the
surrogate approach (training + optimization) was much longer than for the
original model. This example gives us insight into why surrogate modelling was
previously avoided in such applications: the simple nature of the optimization
models for some applications does not require high computational resources,
while the construction of surrogate models is much more computationally
expensive and may not be desirable.
Field | Value | Field | Value
---|---|---|---
$b^{(0)}$ | [0.5000 0.5000] | Bounds | [0.1000,0.6000]
$b^{*}$ | [0.6000,0.3461] | $\hat{b}^{*}$ | [0.5187,0.4730]
$\psi_{0}(b^{(0)})$ | 210.6500 | $\psi_{0}(\hat{b}^{*})$ | 209.7600
$\psi_{0}(b^{*})$ | 212.8400 | $\widehat{\psi_{0}}(\hat{b}^{*})$ | 210.9900
$\psi_{1}(b^{(0)})$ | ${[}11.8001,43.0163{]}^{\top}$ | $\psi_{1}(\hat{b}^{*})$ | ${[}0.0003,0.0014{]}^{\top}$
$\psi_{1}(b^{*})$ | ${[}0.000,0.000{]}^{\top}$ | $\widehat{\psi_{1}}(\hat{b}^{*})$ | ${[}0.0000,0.0023{]}^{\top}$
$\text{Time}_{orig}$ | 2.8109 sec | $\text{Time}_{surr}$ | 2.3694 sec
$\text{Time}_{cnstr}$ | 37.8406 sec | $\epsilon$ | 0.0058
Table 4.4: Optimization results of Model 1
Figure 4.3: Optimal surface and control functions for Model 1.
### 2 Model 2: Population Dynamics
#### 2.1 Description of the Model
In this section, a more complex application of optimal control theory is
presented with a general model of non-linear system of ODEs defined by:
$c(y,u)=0\sim\left\\{\begin{array}[]{l}\left\\{\begin{array}[]{l}y_{1}^{\prime}-p_{1}y_{1}-p_{2}y_{2}^{2}-u_{1}y_{1}F\left(y_{1},t\right)y_{2}=0,\\\
y_{2}^{\prime}-p_{3}y_{2}-p_{4}y_{2}^{2}-u_{1}u_{2}y_{1}F\left(y_{1},t\right)y_{2}=0,\end{array}t\in\Omega_{t}=\left(t_{0},T\right]\right.\\\
y_{1}\left(t_{0}\right)-y_{10}=0\\\ y_{2}\left(t_{0}\right)-y_{20}=0\\\
F\left(y_{1},t\right)=1-e^{-p_{5}y_{1}}\end{array}\right.$ (4.2)
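As a concrete illustration, state system (4.2) can be integrated numerically. The sketch below uses the parameter values listed later in the simulation subsection; the initial states and the constant control values, which are not fixed at this point in the text, are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters as in the numerical experiments of this section; the
# initial states and the constant controls are illustrative assumptions.
p1, p2, p3, p4, p5 = 0.734, 0.175, -0.500, -0.246, 0.635
y10, y20 = 4.0, 3.0            # assumed initial populations
u1, u2 = -0.4, -0.9            # assumed constant controls within the bounds

def rhs(t, y):
    y1, y2 = y
    F = 1.0 - np.exp(-p5 * y1)                       # F(y1, t) = 1 - e^{-p5 y1}
    dy1 = p1 * y1 + p2 * y2**2 + u1 * y1 * F * y2    # first equation of (4.2)
    dy2 = p3 * y2 + p4 * y2**2 + u1 * u2 * y1 * F * y2
    return [dy1, dy2]

sol = solve_ivp(rhs, (0.0, 10.0), [y10, y20],
                t_eval=np.linspace(0.0, 10.0, 101))
print(sol.y.shape)   # (2, 101): y1 and y2 at 101 time instances
```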
This type of dynamical problem is commonly observed in population dynamics in
biology, ecology and environmental economics. Such problems are variations of
the Lotka-Volterra predator-prey model. This section aims at generalizing the
POD-RBF approach to these non-linear models without providing specific details
of the model parameters of the optimization problem.
The optimization problem considered here consists of finding a value of the
control function $u^{*}=\left[u^{*}_{1},u^{*}_{2}\right]$ that minimizes the
distance between $y_{1}$ and its desired value $y_{1d}$. The value of the
control function is restricted by two-sided pointwise constraints, and the
state $y_{2}$ must not exceed the maximum value $y_{2d}$. The optimization
problem can be formulated in the following manner: find $u^{*}$ that minimizes
the optimization criterion
$\psi_{0}\left(u^{*}\right)=\min_{u}\int_{t_{0}}^{T}\left(y_{1}(t,u)-y_{1d}\right)^{2}dt$
(4.3)
subject to state equation (4.2), box constraints on the control
$U=\left\\{u:u^{-}(t)\leq u(t)\leq u^{+}(t)\right\\}$ (4.4)
and pointwise constraint on state
$y_{2}(t)\leq y_{2}^{+}$ (4.5)
The pointwise state constraint (4.5) is transformed into an equivalent
equality constraint of the integral type
$\psi_{1}(u)=\tilde{\psi}_{1}(u,y(u))=\int_{t_{0}}^{T}\left(\left|y_{2}(t,u)-y_{2d}\right|+y_{2}(t,u)-y_{2d}\right)^{2}dt$
(4.6)
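Note that $(|v|+v)^2 = 4\max(v,0)^2$, so the integrand vanishes wherever the pointwise constraint holds and is positive where it is violated. A minimal sketch of evaluating this penalty on a discretized trajectory (the trajectory itself is illustrative):

```python
import numpy as np

def psi1(y2, t, y2d):
    """Integral-type penalty (4.6): zero iff y2(t) <= y2d on the grid."""
    v = y2 - y2d
    g = (np.abs(v) + v) ** 2                    # = 4 * max(v, 0)**2
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t)))  # trapezoid rule

t = np.linspace(0.0, 10.0, 101)
print(psi1(5.0 + np.sin(t), t, 6.0))    # 0.0: trajectory never exceeds y2d = 6
print(psi1(6.5 + 0.0 * t, t, 6.0) > 0)  # True: a constant violation is penalized
```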
Taking into account equations (4.3)-(4.6), the optimization problem can be
written in a reduced form as follows:
$\displaystyle\psi_{0}\left(u^{*}\right)=$ $\displaystyle\min_{u\in
U_{\partial}}\int_{t_{0}}^{T}\left(y_{1}(t,u)-y_{1d}\right)^{2}dt$ (4.7)
$\displaystyle U_{\partial}=\\{u$ $\displaystyle\left.:u\in
U;\psi_{1}(u)=\tilde{\psi}_{1}(u,y(u))=0\right\\}$ $\displaystyle c(y(u),u)$
$\displaystyle=0$
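One standard way to treat the reduced problem numerically is to fold the equality constraint $\psi_1(u)=0$ into the objective as a quadratic penalty and run a bounded minimizer. The sketch below does this for constant controls $b=[u_1,u_2]$; the initial states, the penalty weight, and the use of `scipy.optimize` are assumptions here, not the thesis's actual solver:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Problem data following this section; initial states are assumptions.
p = [0.734, 0.175, -0.500, -0.246, 0.635]
y0, (t0, T) = [4.0, 3.0], (0.0, 10.0)
y1d, y2d, weight = 5.0, 6.0, 1.0e3
t_grid = np.linspace(t0, T, 101)

def solve_state(b):
    u1, u2 = b
    def rhs(t, y):
        y1, y2 = y
        F = 1.0 - np.exp(-p[4] * y1)
        return [p[0]*y1 + p[1]*y2**2 + u1*y1*F*y2,
                p[2]*y2 + p[3]*y2**2 + u1*u2*y1*F*y2]
    # Tight tolerances keep finite-difference gradients smooth.
    return solve_ivp(rhs, (t0, T), y0, t_eval=t_grid,
                     rtol=1e-8, atol=1e-10).y

def trapezoid(f):
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t_grid)))

def objective(b):
    y1, y2 = solve_state(b)
    psi0 = trapezoid((y1 - y1d) ** 2)              # criterion (4.3)
    v = y2 - y2d
    psi1 = trapezoid((np.abs(v) + v) ** 2)         # penalty term (4.6)
    return psi0 + weight * psi1

res = minimize(objective, x0=[-0.425, -0.912], method="L-BFGS-B",
               bounds=[(-0.550, -0.300), (-1.037, -0.787)])
print(res.x)   # constant controls minimizing the penalized criterion
```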
#### 2.2 Simulation
For numerical experiments we select the following values of the input
parameters:
$\left[p_{1},p_{2},p_{3},p_{4},p_{5}\right]=[0.734,0.175,-0.500,-0.246,0.635],\left[t_{0},T\right]=[0,10]$,
$n_{u}=2,u^{-}=\left[u_{1}^{-},u_{2}^{-}\right]=[-0.5500,-1.0370],u^{+}=\left[u_{1}^{+},u_{2}^{+}\right]=[-0.300,-0.7870]$,
$y_{1d}=5,y_{2}^{+}=6$. The control functions $u_{1}(t),u_{2}(t)$ on the
interval $\left[t_{0},T\right]$ are approximated by linear functions. Thus,
the vector of optimization parameters $b$ consists of four components:
$b=\left[b_{1}^{(1)},b_{2}^{(1)},b_{1}^{(2)},b_{2}^{(2)}\right]^{T}=\left[b_{1},b_{2},b_{3},b_{4}\right]^{T}$.
For the numerical simulations, LHS, SLHS and RS are used to define the
sampling matrix with $n_{s}=40,60\ \text{and}\ 80$. Also, linear-spline and
cubic-spline RBF interpolations are used for comparison of the results. The
solution $y=[y_{1},y_{2}]$, where $n_{y}=2$, was then computed at the time
instances $t_{i}$ with $t_{0}<t_{i}<t_{n_{t}}$, for $n_{t}=100$ equally spaced
instances of $t$ and $n_{s}$ sampling points, and the system responses were
collected to generate the snapshot matrix. The error of approximation was
fixed at $\epsilon_{\text{POD}}=0.01$.
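The snapshot-generation step just described can be sketched as follows; the LHS design comes from `scipy.stats.qmc`, and the initial states are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import qmc

p = [0.734, 0.175, -0.500, -0.246, 0.635]
lo = np.array([-0.550, -0.550, -1.037, -1.037])   # bounds on b (u1, then u2 part)
hi = np.array([-0.300, -0.300, -0.787, -0.787])
n_s, n_t = 40, 100
t_grid = np.linspace(0.0, 10.0, n_t)

def response(b):
    """Solve (4.2) with piecewise-linear controls given by b; return the
    flattened response [y1; y2] on the time grid (length n_y * n_t = 200)."""
    def rhs(t, y):
        y1, y2 = y
        F = 1.0 - np.exp(-p[4] * y1)
        a1 = np.interp(t, [0.0, 10.0], b[:2])      # linear control u1(t)
        a2 = np.interp(t, [0.0, 10.0], b[2:])      # linear control u2(t)
        return [p[0]*y1 + p[1]*y2**2 + a1*y1*F*y2,
                p[2]*y2 + p[3]*y2**2 + a1*a2*y1*F*y2]
    sol = solve_ivp(rhs, (0.0, 10.0), [4.0, 3.0], t_eval=t_grid)
    return sol.y.ravel()

# Latin Hypercube design in [0,1]^4, scaled to the parameter box.
samples = qmc.scale(qmc.LatinHypercube(d=4, seed=0).random(n_s), lo, hi)
snapshots = np.column_stack([response(b) for b in samples])
print(snapshots.shape)   # (200, 40): one column of responses per sample point
```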
Next, the POD-RBF approach is applied to this model: first, the dimension of
the POD basis is determined through SVD using the cumulative energy method
(this is done for all experimental designs), and it is concluded that 3
singular values should be retained as the rank of the POD basis, as shown by
the plot of singular values in figure 4.4. It can be clearly noticed that the
magnitudes of all subsequent singular values are very small compared to the
first singular value; the relative cumulative energy $E(i)$ of the first
singular value alone is more than 99%. This shows that the responses of the
system are strongly correlated. Hence, the rank-3 approximation is very
accurate, and adding more vectors to the approximation (by increasing the
rank) yields only marginal gains in precision.
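The cumulative energy criterion itself is only a few lines: compute the singular values of the snapshot matrix and keep the smallest rank whose cumulative (squared) singular-value energy exceeds the threshold. A sketch on a synthetic one-mode snapshot matrix:

```python
import numpy as np

def pod_rank(snapshots, energy=0.99):
    """Smallest rank i whose cumulative energy E(i), the sum of the first
    i squared singular values over their total, reaches the threshold."""
    s = np.linalg.svd(snapshots, compute_uv=False)
    E = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(E, energy) + 1)

# Toy snapshot matrix dominated by a single mode, as in figure 4.4.
rng = np.random.default_rng(0)
base = np.outer(rng.standard_normal(200), rng.standard_normal(40))
noisy = base + 1e-3 * rng.standard_normal((200, 40))
print(pod_rank(noisy))   # 1: the first mode alone carries more than 99% energy
```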
Figure 4.4: Cumulative energy plot to determine singular values for Model 2.
Having chosen $k=3$, the numerical simulations are performed for model (4.2).
For testing of the model, $n_{g}=10$ points were used to calculate the RMAE
for each combination. Table 4.5 shows that, among all the surrogate models
trained using different numbers of sample points, different sampling
techniques and RBF interpolations, the cubic-spline RBF produced the lowest
error for both LHS and SLHS in general, with a few exceptions. Also, as
expected, the error of approximation shows a decreasing pattern as the number
of sample points increases from 60 to 80, except for RS, where the RMAE
follows no particular trend. The lowest RMAE was obtained for the model
trained on 80 data points from SLHS with the cubic-spline RBF. For one such
sample point, $b=[-0.425,-0.425,-0.912,-0.912]$, the POD-RBF responses were
obtained for $n_{s}=40$, and the original and approximated $y_{1}$ and $y_{2}$
were plotted as shown in figure 4.5. For this point, the POD-RBF approximation
gave a relative maximum absolute error of less than 1%, as desired.
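The testing step (interpolating the POD coefficients over the parameter space with an RBF and scoring a held-out point) can be sketched on a toy response function. The response function below is an assumption standing in for the ODE solution, and RMAE is taken here as the maximum absolute error relative to the maximum response magnitude:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

def truth(b):
    """Stand-in response: each row of b yields a 50-point trajectory."""
    t = np.linspace(0.0, 1.0, 50)
    return np.outer(np.sin(3.0 * b[:, 0] + b[:, 1]), t) + b[:, :1] * t**2

B_train = rng.uniform(-1.0, -0.3, size=(80, 2))       # sampled parameter points
Y = truth(B_train)                                    # snapshots, one row each
U, s, Vt = np.linalg.svd(Y.T, full_matrices=False)
Phi = U[:, :3]                                        # rank-3 POD basis
A = Y @ Phi                                           # training coefficients
model = RBFInterpolator(B_train, A, kernel="cubic")   # cubic-spline RBF

b_test = np.array([[-0.6, -0.5]])                     # held-out test point
y_hat = model(b_test) @ Phi.T                         # POD-RBF response
y_true = truth(b_test)
rmae = np.max(np.abs(y_true - y_hat)) / np.max(np.abs(y_true))
print(rmae)   # relative maximum absolute error at the held-out point
```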
Sampling | Interpolation | $n_{s}=40$ | $n_{s}=60$ | $n_{s}=80$
---|---|---|---|---
LHS | Linear | 0.45112 | 0.32948 | 0.18871
LHS | Cubic | 0.28229 | 0.24010 | 0.15794
SLHS | Linear | 0.26162 | 0.19198 | 0.19204
SLHS | Cubic | 0.23986 | 0.18685 | 0.15376
RS | Linear | 0.59500 | 0.55080 | 0.86405
RS | Cubic | 0.92109 | 0.15595 | 0.19902
Table 4.5: RMAE for various experimental designs of Model 2
#### 2.3 Optimization
In previous subsection, the best results were obtained for $n_{s}=80$ with
SLHS and cubic spline RBF. That experimental design is used to solve the
optimization problem (4.7) and the results are summarized in table 4.6. For
simplicity, the number of optimization parameters for each control variable
are taken to be 2. We could, however, allows specification of different number
of optimization parameters for each control variable. The optimization results
of this model apparently highlight the efficiency of surrogate modeling. As
the table 4.6 reports, the tolerance level was met in the first iteration,
with error between approximated and actual responses being less than 0.01 in
first iteration. Hence, the desired accuracy was achieved and no further
iterations were required. Also, the optimization criteria obtained using
surrogate model $\widehat{\psi_{0}}(\hat{b}^{*})=43.5647$ is very close to
$\psi_{0}(b^{*})=43.3287$. Moreover, since results of optimization problem
were obtained within one iteration, the construction time of surrogate model
can be considered once for all. Therefore, the total computational time for
optimization through surrogate model of 6.6 seconds + 15.35 seconds is less
than 23.40 seconds taken by original problem. Relatively, the surrogate method
was four times faster than the original method in solving optimization
problem. In a nutshell, for this highly non-linear model, surrogate model gave
highly accurate and computationally efficient result of the optimization
problem.
Figure 4.5: Actual vs approximated surface of Model 2.
Field | Value | Field | Value
---|---|---|---
$b^{(0)}$ | [-0.4250,-0.4250, | Bounds | [-0.5500, -0.300];
| -0.9120,-0.9120] | | [-1.0370,-0.7870]
$b^{*}$ | [-0.5006,-0.3250, | $\hat{b}^{*}$ | [-0.4922,-0.3334,
| -1.0120,-1.0120] | | -1.0120,-1.0120]
$\psi_{0}(b^{(0)})$ | 55.2817 | $\psi_{0}(\hat{b}^{*})$ | 43.9127
$\psi_{0}(b^{*})$ | 43.3287 | $\widehat{\psi_{0}}(\hat{b}^{*})$ | 43.5647
$\psi_{1}(b^{(0)})$ | 22.9396 | $\psi_{1}(\hat{b}^{*})$ | 0.0162
$\psi_{1}(b^{*})$ | 0.0000 | $\widehat{\psi_{1}}(\hat{b}^{*})$ | 0.0000
$\text{Time}_{orig}$ | 23.3983 sec | $\text{Time}_{surr}$ | 6.6241 sec
$\text{Time}_{cnstr}$ | 15.3470 sec | $\epsilon$ | 0.0081
Table 4.6: Optimization results of Model 2
## Chapter 5 Conclusions
This research employed Proper Orthogonal Decomposition (POD), a surrogate
modeling technique integrated into an optimization framework for dimension
reduction, extracting hidden structures from high-dimensional data and
projecting them onto a lower-dimensional space. In the first instance, POD was
coupled with various Radial Basis Functions (RBF), a smoothing technique, and
the computational procedure was hypothesized to provide compact, accurate and
computationally efficient solutions of optimal control problems. Surrogate
models using POD-RBF were constructed, and the computational procedure was
divided into a problem-setup phase and a training/testing phase for effective
implementation of the reduced-order modeling techniques. Furthermore, an
iterative algorithm was introduced to methodically achieve more accurate
results.
The algorithm and computational procedure were implemented on two real-life
optimal control problems taken directly from the literature. It was
demonstrated that the dimensionality of high-order models, in the form of ODEs
of dynamical systems, could be reduced substantially, to as low as 3, with a
relative maximum absolute error of less than 0.01 between the original and
approximated system responses. Hence the surrogate model gave a good
alternative method for the solution of ODEs with low CPU intensity. The
simulation part of the POD-RBF procedure was carried out by varying the number
of sample points, the sampling strategy, and the RBF interpolation type in the
training phase. The results showed that the approximation was more precise
when the model was trained on a higher number of sample points. Also, the
surrogate model constructed using the cubic-spline RBF led to better results
for the complex model than its linear counterpart. Furthermore, LHS and SLHS
both led to better approximations than RS, which is consistent with the
theory.
In the solution of the optimization problems, the system responses obtained by
the surrogate model invariably gave accurate results with improved
computational time. On the whole, both models supported the hypothesis of this
work: surrogate models can increase the computational efficiency of the
solution of dynamical systems while maintaining the accuracy of the system
responses. However, the computational performance is subject to the available
computational resources, and the numerical simulation might be much faster on
a high-performance computer, compensating for the time used in the iterative
POD-RBF algorithm.
### Limitations and Future Work
ROMs are usually thought of as computationally inexpensive mathematical
representations that offer the potential for near real-time analysis, and the
hypothesis of this research was based on the same notion. However, while
analyzing the performance of the POD-RBF procedure on non-linear dynamical
systems in the last chapter of this thesis, it became apparent that, even
though the optimization process itself was faster with surrogate responses,
their construction was sometimes computationally expensive, as it involved
accumulating a large number of system responses to the input parameters. It is
also noteworthy that ROMs sometimes lack robustness with respect to parameter
changes. These limitations were considered and elaborated throughout the
analysis, and the scope for extending this research was discussed alongside.
In the future, the performance of surrogate models can be evaluated on more
complicated models consisting of highly non-linear ordinary and partial
differential equations. Other sampling techniques that allow the inclusion of
corner and optimization points in the training set, other methods of obtaining
POMs, and other interpolation methods can also be explored as extensions of
this work. Furthermore, the computational time of each model can be measured
on more efficient machines in a homogeneous computing environment to obtain
near-exact insight into the performance of the surrogate models.
# Writers Gonna Wait: The Effectiveness of Notifications to Initiate Aversive
Action in Writing Procrastination
Chatchai Wangwiwattana
Sunjoli Aggarwal
Eric C. Larson
Southern Methodist University, Department of Computer Science
###### Abstract
This paper evaluates the use of notifications to reduce aversive-task
procrastination by helping initiate action. Specifically, we focus on aversion
to graded writing tasks. We evaluate software designs commonly used by
behavior change applications, such as goal setting and action support systems.
We conduct a two-phase controlled-trial experiment with 21 college students
tasked to write two 3000-word writing assignments (14 students fully completed
the experiment). Participants use a customized text editor designed to
continuously collect writing behavior. The results from the study reveal that
notifications have minimal effect in encouraging users to get started. They
can also increase negative effects on participants. Other techniques, such as
eliminating distraction and showing simple writing statistics, yield higher
satisfaction among participants as they complete the writing task.
Furthermore, the incorporation of text mining decreases aversion to the task
and helps participants overcome writer’s block. Finally, we discuss lessons
learned from our evaluation that help quantify the difficulty of behavior
change for writing procrastination, with emphasis on goals for the HCI
community.
###### category:
H.5.m. Information Interfaces and Presentation (e.g. HCI) Miscellaneous
###### keywords:
Procrastination, Behavior Change, Writing
## 1 Introduction
For decades, the HCI community has researched persuasive design in behavior
change in applications ranging from health improvement [34, 38, 42], to well-
being [36, 49], to sustainability [37, 39]. These researchers seek to bridge
the gaps between practical HCI design and behavioral psychology (or,
alternatively, behavioral economics or neuroscience); nonetheless, this gap
has proven difficult to close [64]. Because of the generalized nature of
behavioral theory, there are many possible ways to apply the same knowledge in
practical applications [64]. That is, a technique that works well within one
context may not work in another.
One widely used mechanism in behavior change is the notification. The
notification is used to draw users’ attention to a task or part of a task with
the hope that action will be initiated. In theory, a psychological trigger,
whether internal or external, is the first step of any behavior [63].
Notifications, therefore, can be categorized as effective external cues to
initiate behavior [14]. However, many studies investigate the use of
notifications for tasks that users already have an internal motivation to
complete; tasks for which internal motivation is absent have not been studied
as thoroughly. Such tasks are also known as “aversive tasks.” High intention
to perform aversive tasks does not guarantee that the behavior will occur [7].
Thus, it is an open question to what degree notifications are effective
psychological triggers for completing a task. In this paper, we evaluate the
usage of notifications for changing behavior towards aversive tasks, in which
there is no internal motivation present. More specifically, we focus on
reducing writing procrastination among college students.
We outline our contributions as follows: (i) We investigate the role of human
computer interaction in formal psychological theories of procrastination
behavior. (ii) We evaluate the efficacy of various notification styles in
reducing procrastination of tedious writing tasks. (iii) We evaluate popular
text editor designs and their perceived effects on procrastination behavior.
(iv) We summarize lessons learned from working on procrastination research
from an HCI perspective.
Writing is considered an unpleasant activity by many individuals because it
requires tremendous effort, is susceptible to judgment, and has delayed
reward; nonetheless it is vital to succeeding in many professions and,
therefore, cannot be treated as an optional lifestyle choice. The conflict
between "have to" and "want to", which psychologists call cognitive
dissonance, elicits
negative effects such as guilt and distress [16]. These negative effects enter
into a positive feedback loop—the more a student procrastinates, the worse
he/she feels, and these negative feelings block him/her from having an
environment conducive to writing, leading to further procrastination [24]. Due
to its cyclic nature, writing procrastination is an extremely challenging
application of aversive behavior change research, but as stated, trying to
reduce this phenomenon is also extremely relevant.
In this paper, we evaluate the effectiveness of notifications and other
persuasive HCI design elements on reducing procrastination and initiating
action for writing tasks. We developed a research instrument for collecting
data and facilitating the experiment (i.e., a custom text editor that tracks
interaction and can distribute self-evaluation surveys). The custom text
editor has a number of features, including a notification system, a goal
progress bar, distraction-free mode, and a writing assistance system. All of
these features are designed based on psychological intervention theory. We
conducted an experiment with 21 college students who used the editor to
complete graded writing assignments. The evaluation lasted for eighteen days.
Students worked on two separate writing assignments: the first assignment
lasted for nine days without notifications and was used as a baseline
evaluation. In a second nine-day experiment, we divided the same users into
two groups based on the type of notifications they received (normal or
actioned notifications). We recorded users’ progress using word count, writing
time, and start time, and recorded users’ interaction with notifications for
both assignments. We also collected self-reported survey data on
procrastination with PASS (Procrastination Assessment Scale), and user
satisfaction with SUS (System Usability Scale). In the follow-up study, 14
participants used the editor regularly enough for analysis. The results showed
minimal change in writing procrastination behavior regardless of the type or
number of notifications received. Surprisingly, participants who received more
notifications reported lower satisfaction than others who received fewer
notifications. In addition, we found our custom writing assistance system,
which employs text mining and concept models to help mitigate writer’s block,
reduced aversion towards the task and provided a more positive writing
experience. Other persuasive techniques, such as a simple goal progress bar
and distraction-free mode, also showed positive outcomes.
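Of the two questionnaires above, the SUS has a fixed standard scoring rule: odd-numbered items contribute their rating minus 1, even-numbered items contribute 5 minus their rating, and the sum is scaled by 2.5 onto a 0-100 range. A minimal sketch, independent of the paper's own analysis code:

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 item ratings."""
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # i even -> odd-numbered item
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))   # 100.0 (best possible)
print(sus_score([3] * 10))                         # 50.0 (all neutral)
```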
## 2 Understanding Procrastination
In order to bring the HCI community together around behavior change for
procrastination, we first describe the psychology community perspectives
(which are competing in some instances). This also helps to properly ground
our HCI methodologies in the context of current psychological research trends.
A valid question to ask before going further is: Should we be trying to reduce
procrastination behavior? In general, social psychologists argue that
procrastination is an irrational and maladaptive behavior [19]. There is
considerable evidence that procrastination not only harms productivity, but
also increases stress and contributes to a poorer quality of life. Studies
show that procrastination can affect people’s health, ranging from headaches,
body aches, and colds to tooth decay, stress, and strokes [58, 59, 60].
Moreover, in academic literature, studies have shown that academic
procrastinators are likely to engage in cheating and plagiarism, as well as
have problems with self-esteem and self-confidence [19]. Procrastinators also
have a high degree of self-handicapping behaviors, such as indecision,
rebelliousness, societal demands for perfection, and a low degree of optimism,
self-esteem, life satisfaction, and self-confidence [25]. In contrast to these
negative effects, some research suggests that procrastination exists for good
reasons. From an evolutionary perspective, procrastination yields benefit
because the long term goals that a non-procrastinator would focus on may
actually distract from short term survival goals that a procrastinator would
gravitate towards [29]. Norman argues that procrastination provides the
maximum time to think, plan, and determine alternatives, giving more
flexibility for change in the future and allowing requirements to evolve [45].
In this paper, we do not argue whether procrastination is or is not
beneficial. Instead, we argue that individuals who wish to change their
behavior should have a right to do so. Our research, thus, provides valuable
knowledge for individuals who wish to reduce their procrastination behavior
using technology aides.
Procrastination can be viewed from various perspectives. Cognitive Science
views procrastination as a subtle executive dysfunction [50]. Executive
functions rely on a number of interconnected cortical and sub-cortical brain
regions. Together, these areas are responsible for the self-regulation of
cognition and for all cognitive processes that enable planning for complex
actions [50]. In contrast to Cognitive Scientists, Evolutionary Psychologists
view procrastination as a result of human evolution. They believe that
focusing on short term survival results in a greater chance of passing on a
gene, whereas long term planning is merely a distraction. According to them,
procrastination is an evolutionary by-product of impulsiveness [29]. Social
Psychologists have a similar view to Cognitive Scientists. Piers Steel defined
procrastination as “to voluntarily delay an intended course of action despite
expecting to be worse off for the delay” [61]. To emphasize its negative
nature, Kyle further defines procrastination as “the voluntary, irrational
postponement of an intended course of action despite the knowledge that this
delay will come at a cost to or have negative affects on the individual” [35].
In short, many psychologists agree that procrastination is a form of self-
regulation failure. It is important to note that not all delays are
categorized as procrastination. For example, a planned delay or a delay from
external factors is not procrastination; procrastination must include an
irrational delay. With this definition, Norman’s argument (to maximize time to
think, plan, and determine alternatives, give more flexibility for future
change, and allow requirements to change [45]) would not be considered
procrastination; it is time management.
### 2.1 Characteristics of Procrastinators
Individuals who procrastinate often repeat procrastination behavior;
therefore, the psychology community categorizes people as procrastinators and
non-procrastinators. Procrastinators have unique characteristics compared to
non-procrastinators. For example, many procrastinators have a misbelief that
pressure motivates them to do their best work [35]; furthermore,
procrastinators have high sensitivity to immediate gratification and have
trouble focusing on tasks [17, 18, 26]. Procrastinators also typically have
low self-control and low self-reinforcement, meaning they are unable to reward
themselves for success [23]. They also have decreased ability to regulate
their performance or speed under restricted time frames [18] and reflect on
the future negatively. In the specific case of students, distractions come
easily [15, 17, 22] and the ability to estimate the amount of time necessary
to finish a task is lacking [24].
### 2.2 Causes of Procrastination
Causes of procrastination are complex. Procrastination can stem from the task
itself or from individual differences in terms of personality and genetic
uniqueness. Task Aversion, dysphoric affect [43], or task appeal [31] refer to
an action that one finds unpleasant. By this definition, the more aversive the
task, the more likely one is to avoid it. Timing, rewards, and punishments
also influence the path towards procrastination. This is known in the
behavioral economics community as intertemporal choice or discounted utility
[41]. Marketing researchers view procrastination as deciding to perform a
certain task in a certain amount of time based on the perfect compromise
between cost and value. To a procrastinator, present cost is usually perceived
as higher than future costs, while value remains constant. This leads one to
act closer to the deadline, even though the action may be an enjoyable
activity [57].
Social psychologists argue that procrastination is stimulated by negative
causes. For example, fear of failure can contribute to procrastination [53].
Ferrari, based on 20 years of research, proposed three models of
procrastination: Arousal, Avoidant, and Decisional. Avoidant procrastinators
have the tendency to avoid certain outcomes such as fear of failure, success,
social isolation, or feeling like an impostor. Arousal procrastinators rely on
pressure in order to work. Indecisive procrastinators intentionally decide not
to act [20, 21]. Indecisive procrastination is related to a lack of competence
or time urgency. It is not related to laziness, but is rather more about not
understanding the trade-off between speed and accuracy [21]. Ferrari also
argues that learned helplessness, or the situation of experiencing a series of
uncontrollable and unpredictable unpleasant events [56], contributes to
procrastination. For example, some procrastinators use procrastination as a
self-handicapping strategy. When procrastinators perceive low-competence, they
blame external factors (such as not having enough time) in order to protect
their self-esteem and themselves from social judgment for poor performance.
The behavior-intention gap, addressed in Theory of Reasoned Action, is where
there is a gap between behavioral intention and behavior. Intention is a
strong predictor of behavior; however, that behavior is not guaranteed even
when the person believes that the act is worthwhile [6, 7]. Enjoyable
activities that can be done right away have more *utility* than non-urgent or
undesirable tasks. This result was supported by Haghbin et al. [30].
### 2.3 Possible Psychological Interventions
Examples of psychological interventions to overcome procrastination are
plentiful. A popular intervention is isolating a task and breaking it into
small and attainable steps [32]. The effect can be enhanced by incorporating
goal setting theory, entailing the creation of small sub goals and enforcing
regular deadlines. This helps users regain self-efficacy and narrow the
intention-action gap [61]. In addition, Ainslie and Haslam suggest training
procrastinators to separate negative effects from taking action [41]. This can
be seen as a combination of willpower and mindfulness training. Schouwenburg
suggests using commitment devices to limit and eliminate short-term
temptations altogether such as turning on do not disturb mode or disconnecting
the Internet [54]. Instead of trying to rely on willpower or commitment
devices, Implementation Intention exploits psychological triggers and the
power of habit. It consists of forming an intention with specific action
plans. That intention acts as a cue that triggers the follow-on behavior. For
example, "If I go to the restroom, I will also get a cup of water." A study shows
that this makes individuals nearly eight times more likely to follow through
with a task [46]. Time Traveling, in contrast, focuses on mood regulation.
This approach encourages individuals to flip negative and positive reflections
toward tasks by asking them to think about how they will feel after the task
is completed [48]. Recent work in psychology explores the idea of having
individuals build a relationship with their future-selves to help
procrastinators make more rational decisions [27, 28]. The idea of the future-
self focuses on how current decisions will affect one’s self in the future.
For an academic setting, Ferrari suggests (1) finding the part of the paper
with the most individual interest, (2) creating an outline, and (3) writing in
small sessions [24]. Pychyl believes that just getting started is the most
effective way to decrease procrastination [48], arguing that by splitting the
task into small, manageable sub-goals, it becomes easier to start.
Although these techniques have reported success, one limitation is the
training and active commitment required to implement them into one’s life.
Like other behavioral interventions, “only knowing how” does not cause changes
in behavior [55]. Even if one implements such a technique, it is unclear how
long the initial commitment will last. We argue that exploring how
technology can augment psychological interventions is a vital role for the HCI
community. In the study presented here, we build the interventions explicitly
into our text editor system, hypothesizing that the system might guide
productive behavior if it is a part of the holistic intervention process.
## 3 Is Technology a New Thief of Time?
Joseph Ferrari condemns modern technology in his book, “Technology the New
Thief of Time” [24]. There is evidence to support his claim. In 2008, office
workers spent 28% of their time managing technology interruptions, and 46% of
those interruptions (nearly half) were neither necessary nor
urgent [20]. This report does not include the increasing number of
notifications from applications that demand attention. Cyber-psychologists
have coined this “e-procrastination” and associate this with “attitudes like a
sense of low control over one’s time and a desire to take risks by visiting
Web sites that are forbidden” [62]. Although this evidence positions modern
technology in a bad light, it does not represent the entire story. Technology
can also be a tool, depending on how we use it. Some practitioners recognize
the problem of procrastination and offer various solutions. “Stop
Procrastination,” for example, is an application that blocks distracting sites
and emails [4]. It is designed to eliminate interruptions, which has some
positive result [47]. “Avoiding Procrastination 101” teaches users various
techniques about procrastination [2]. “Write or Die 2” allows users to choose
either negative or positive consequences when they lose focus [5]. However,
there has been little evidence to support whether or not these techniques
work. In this paper, we evaluate some popular technological techniques of
behavior change: triggers, eliminating distraction, splitting to smaller sub-
tasks, goal setting, and machine augmented intelligence.
## 4 From Theories to UI Elements
### 4.1 Notifications for Getting Started
Push notifications are common in behavior change applications. Studies show
some types of notifications are capable of creating behavior and users
appreciate having notifications as reminders [10]; nonetheless, not all
notifications are equally important, and users are more likely to react to
important notifications [52]. Eyal, the author of Hooked, argues that a good
trigger should be well timed, actionable, and spark interest [10, 14].
Interestingly, these qualities are hard to achieve for aversive tasks such as
academic writing. Aversive tasks are unlikely to spark authors' interest, they
are perceived as requiring great time and effort, and the moments when one
feels like writing are unpredictable. We could say that notifications about
aversive tasks are themselves aversive notifications: no one wants to get one,
because it is a reminder of an aversive task. This leads us to ask, can
notifications trigger actions in aversive tasks?
To answer this question, we implemented standard clickable notifications in
our customized editor. We utilized psychological intervention and persuasive
techniques to guide the content of the notifications. Moreover, we compare it
with a new type of notification we call an actioned notification: focusing on
eliciting an immediate action.
#### 4.1.1 Standard Clickable Notifications
We implement a standard clickable notification in our text editor that can
contain various messages. Users can click on the notification to open the
editor (see Figure 1). This notification is commonly used in behavior-change
applications. We use it to remind, inform, and motivate users. We group the
messages into five categories based upon the intent of the notification:
1.
Standard Reminder (e.g., "It is your writing time."): This type of notification
only acts as a reminder. This concept is used in many task management
applications such as to-dos and calendars. We require users to set their daily
writing times, and they receive reminder notifications when the time is
reached. A study shows that students who set their own deadline for their
writing assignments perform better than those who do not [9]. The strength of
this notification type is that it is intuitive—most users are already familiar
with this style of notification.
2.
Encouraging Reminder (e.g., “Great Job! You have written 2000 words”): This
type of notification is employed by many fitness and online learning
applications. It attempts to increase user’s motivation through positive
reinforcement of previous activity.
3.
Inviting Action (e.g., "Let's write for 2 minutes!"): Unlike the encouraging
reminder notification, Inviting Action notifications attempt to create the
perception that a task requires only a small effort. Wendel defines the Minimum
Viable Action as the smallest version of the target behavior; if the action is
small enough, users are more likely to enact the behavior [63].
4.
Tips and Tricks (e.g., "Writing Tip: An idea is nothing more nor less than a
new combination of old elements. – The Pareto Principle”): This type of
notification is based on a Knowledge Deficit Model [55]. This model suggests
that not knowing how to perform the behavior can block the behavior from
happening, even though users have the right attitude. These notifications are
meant to invoke thought about writing behavior via trying new suggested tips.
5.
Mood Regulation (e.g., “Imagine how good it will feel to finish the
project.”): This notification is based on the psychological intervention Time
Traveling. It is a mood regulation technique that attempts to convert negative
reflections on doing the task to positive feelings about task completion.
Figure 1: An example push notification (top), action notification with text
prompt (middle), and concept expansion action notification (bottom).
#### 4.1.2 Actioned Notifications
Unlike standard clickable notifications, actioned notifications encourage
performing an action instead of attempting to increase motivation. The system
splits a task into small, manageable chunks and presents each chunk as a
question-and-answer prompt. We hypothesized that this would reduce users'
effort and increase the likelihood of the action occurring [33]. Actioned
notifications contain a question and a text input box (see Figure 1). Users
can answer the question right away in the text box. Furthermore, we
hypothesized that the notification would help users focus on the action rather
than on the feelings surrounding the action, potentially reducing aversion to
the notification.
Figure 2: The main user interface for our custom text editor (left), markdown
display (center), and distraction free mode (right).
### 4.2 Persuasive Elements for Retaining Behavior
In addition to notifications, we designed our text editor with elements we
hypothesized would be beneficial to reducing procrastination such as an
immersive mode, a goal progress bar, and a writing assistance system.
#### 4.2.1 Immersive Mode
This feature aims to curtail impulsiveness in procrastinators (see Figure 2).
Studies show that procrastinators are sensitive to distraction [15, 17, 22].
Immersive mode is a state in which all external user interfaces are hidden and
the editor expands itself full screen on top of other applications. Many text
editors offer immersive mode. To further encourage users to focus on writing
and not editing, the customized editor supports Markdown Language. Markdown
allows the screen to be free from tools and buttons, making the interface
simpler. All participants are computer science students, so the markdown
format is familiar to them. Note that using markdown or immersive mode is
optional.
#### 4.2.2 Goal Setting Theory: Goal Progress Bar
Latham argues in his book, “Goal setting theory is among the most valid and
useful theories of motivation of organizational behavior” [40]. Nonetheless,
it is not a perfect solution. Goal Setting depends on value of the outcome,
task difficulty, specificity, and feedback [40]. In other words, a goal that
has no appealing reward, is too easy or too hard, is vague, or has no
feedback, is not an effective goal. In addition, fantasizing negatively about
approaching the goal can increase stress and anxiety [12, 13]. For example, “I
will write to demonstrate my capability” vs “I will write to avoid being
punished.” Both might produce a similar outcome, but an avoidance goal might
be more susceptible to procrastination behavior, because it is driven by
negative thoughts. Selecting appropriate goals is a challenging task by
itself. Thus, in this study we set our goal to be word-count, because it is
specific, measurable, able to give real-time feedback, and nonjudgmental. We
provide real-time feedback with a small progress bar showing the current
number of words compared to the goal word count. Users can set their own goal,
but it is optional. In addition, we intentionally place the progress bar at a
noticeable place at the top of the screen so that users can easily get access
to the information (Figure 2). We hypothesize that this progress bar increases
conscious motivation.
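The bar's feedback rule is simple enough to sketch directly; the clamping behavior below is an assumption about how overshooting the goal is displayed:

```python
def goal_progress(word_count, goal):
    """Fraction of the word-count goal reached, clamped to [0, 1] so the
    bar never overflows. A goal of None (the user skipped goal setting)
    disables the bar, since goals are optional."""
    if goal is None or goal <= 0:
        return None
    return min(word_count / goal, 1.0)
```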
#### 4.2.3 Writing Assistance System
Procrastinators are poor at estimating the time necessary to finish a task
[24]. Haycock suggests that splitting a task into small manageable chunks can
help users get started [32]. With this system, we help users create a
framework for their paper, as well as to divide a long paper into manageable
sections. The application has a section panel to encourage users to create an
outline. It allows users to only focus on one section at a time as opposed to
scrolling through a long document. Users can search for certain sections via a
search box (Figure 2).
As discussed, a significant difficulty in writing is writer's block. Rose
defines it as "[…] an inability to begin or continue writing for reasons other
than a lack of basic skill or commitment." He demonstrates that writer's block
can be caused by a lack of creative ideas [51]. Aren, in addition, defined
writer’s block as “a condition producing a transitory inability to express
one’s thoughts on paper. It is characterized by feelings of frustration,
rather than dread, hatred or panic” [8]. To help reduce writer’s block, we
used two text mining techniques: concept mining and concept expansion. Concept
mining uses the initial content in a user’s document to build a concept graph.
These abstract concepts are used as keywords to search external sources and
expand other related concepts in order to trigger creativity. In this paper,
we used the IBM Watson Concept Insights and Concept Expansion APIs [3].
Once users had written 1000 words or more, we extracted the initial texts to
find the top three concepts in the student’s paper. Then, we used those
concepts to find the top three TED talks that were most relevant to those
topics. The system also uses the extracted concepts to search more adjacent
concepts in the concept graph, providing additional TED talks about related
areas. Finally, the system sends the result back to users via a clickable
notification. We hypothesized that the system would help reduce anxiety and
increase creativity, as well as provide an incentive to start writing early so
that the concept map could be generated in time for the student to use the
additional information.
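The pipeline can be sketched as follows. Because the concept-mining service is external, a crude term-frequency ranking stands in for it here, and `search_fn` is a hypothetical callback that would query a talk catalog; only the 1000-word threshold and the top-three shape come from the text above.

```python
import re
from collections import Counter

# Minimal stopword list; a real system would use a proper one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "that",
             "for", "on", "with", "this", "it", "as", "are", "was"}

def top_concepts(text, n=3):
    """Crude stand-in for concept mining: rank content words by frequency."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [w for w, _ in counts.most_common(n)]

def recommend_talks(text, search_fn, min_words=1000):
    """Trigger recommendations only once the draft reaches `min_words`,
    mirroring the 1000-word threshold; one lookup per extracted concept."""
    if len(text.split()) < min_words:
        return []
    return [search_fn(concept) for concept in top_concepts(text)]
```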
Finally, the writing assistance system creates custom question sets about the
writing a student has generated. The goal was to have these questions
stimulate creativity and create structure in the paper. These question sets
are “wizard-of-ozed”—that is, the questions are generated by researchers, but
students did not know if they came from a human or computer. The advantage of
question sets over concept maps is that these question sets can be generated
with less student writing and take less time for students to review.
## 5 Evaluation
The custom text editor tracked word count, time spent typing, and document
versions for further analysis. To evaluate whether the design strategies
affect users’ procrastination behavior and satisfaction, we conducted a
controlled trial experiment. The experiment consisted of two phases: a
baseline phase and a follow-up phase. The baseline study was used to determine
causes of procrastination and understand users’ writing behavior without any
notification system. In the follow-up experiment, notifications were added to
the text editor. Moreover, subjects were grouped in the follow-up study by
whether they did or did not receive actioned notifications. During both phases
of the experiment we collected information about perceived procrastination
behavior and usability of the custom text editor. We used the PASS survey and
open-ended questions to determine the level of procrastination in our study
population (discussed below). All participants were required to fill out the
PASS survey and a system usability survey (System Usability Scale or SUS). All
experiments were conducted with proper IRB approval.
### 5.1 Baseline Study Design
The objective of the baseline study was to identify potential procrastinators
and non-procrastinators and preliminarily evaluate the editor for any major
usability issues that could possibly contribute to procrastination. Based on
data collected in the baseline study, we wanted to find groupings of
participants for the follow-up experimental study (explained in the next
section). It was also meant to help familiarize participants with using the
editor. All participants received the same version of the editor, but they did
not receive any notifications. Participants were college students enrolled in
a course on ubiquitous computing. They used the editor on a graded 3000-word
writing assignment about the history of UbiComp from Weiser's vision to
present day. The writing also involved a creative component where students
argued if certain elements from Weiser’s vision had come to pass, been
discarded, or evolved to different elements. The participants were a mix of
undergraduate and graduate students. They used the editor for nine days
leading up to the paper turn-in deadline.
Grouping by Writing Performance: The course instructor graded all paper
assignments. Participants were separately divided into groups based on their
assignment grade, above average and below average. It should be noted that all
students showed mostly good writing skills and were motivated to perform well
because the course was elective and presumably the students had interest in
the subject matter.
Grouping by Procrastination: Once the baseline experiment had ended, two
annotators independently reviewed figures of word count and writing time for
each participant. Annotators did not have access to the performance grouping
of the participants. The annotators reviewed the graphs and settled on two
different criteria in order to divide the participants into procrastinators
and non-procrastinators: the number of days before the deadline when a user
started (Group X: More than 3 days before, Group Y: Day before) and the number
of writing sessions and length of time spent on writing (Group A: Many
sessions, Group B: A few long sessions or one long session). Using these
criteria, the participants were grouped into high procrastination and low
procrastination. The high procrastination group always started the day before
(or day of) the deadline and spent 1 or 2 long sessions writing. Both
researchers unanimously agreed on which students were in the high
procrastination group. There was some disagreement on medium versus low
procrastination for participants that started early, but only wrote a few long
sessions; therefore, it was decided to group all participants who started
early into the low procrastination group.
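The annotators' two criteria can be combined into a simple heuristic. This sketch approximates their rules; the actual grouping was a human judgment, not a formula:

```python
def classify_procrastination(start_days_before_deadline, num_sessions):
    """High procrastination: started the day before (or day of) the deadline
    and wrote in only one or two long sessions. Everyone who started early is
    grouped as low, mirroring the annotators' final decision."""
    started_late = start_days_before_deadline <= 1
    few_sessions = num_sessions <= 2
    return "high" if (started_late and few_sessions) else "low"
```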
Final Grouping for Follow-Up: Finally, the researchers divided subjects using
both the low/high procrastination grouping and above/below average groups as
shown in Table 1. For several participants, there was not enough writing data
to divide them into high/low procrastination groups. In this case, they were
placed in the “no data” group. This could occur, for example, for participants
who never connected to the internet or whose firewall prevented the editor
from sending word count and writing time information.
| Procrastination Level | Performance: Below Average | Performance: Above Average |
|---|---|---|
| No Data | 2 | 3 |
| Low | 3 | 5 |
| High | 4 | 4 |
Table 1: Breakdown of the number of participants by procrastination and
performance level.
It is interesting to note that, while most low procrastinators performed well
on the assignment, many high procrastinators also did well. Furthermore, the
distinguishing of “procrastinator” here is not a perfect measure because we
cannot be sure whether or not any negative feelings or planned delays
contributed to working on the paper the day before it was due. In other words,
behavior that may have seemed like procrastination may have been planned by
the student. The PASS survey results, however, support a conclusion that this
student behavior was procrastination.
The PASS (Procrastination Assessment Scale for Students) [44] survey
instrument is well accepted in the psychology community for self-report of
procrastination behavior. The PASS survey consists of two sections. The first
section evaluates levels of procrastination and consists of eighteen items
scaled 1-5. The second section identifies 13 reasons for procrastination and
consists of twenty-six items scaled 1-5. The first section of the PASS survey
yielded procrastination scores on a 0-10 scale. The high procrastination group
had an average of 8.2 points (sd=0.45), and the low procrastination group had
an average of 6.6 points (sd=1.52). Thus, the results of the PASS survey
support the manual grouping that was based solely on writing behavior.
To create two final experimental groups, we chose an equal number of students
from each cell in Table 1. This ensured that each experimental group had an
approximately equal level of procrastination and writing performance. That is,
each group was a representative sample of the class across procrastination
level and writing performance. These experimental groups are designated as
GRP1 and GRP2 for the remainder of this paper.
### 5.2 Baseline-Study Results
Although users were given nine days to work on the paper, Figure 3 shows that
on average, most participants started writing 2-3 days before the deadline.
While GRP2 had two students that started on the assignment more than two days
before the deadline (as opposed to GRP1 with only one student), both groups
had a similar number of writing sessions per participant and a similar total
writing time. GRP1 had an average of 2.8 writing sessions (sd=0.98), and GRP2
had an average of 2.0 writing sessions (sd=1.0).
Figure 3: Number of words written graphed over time for baseline experiment.
Each line represents a separate participant.
Figure 4 shows the reasons for self-reported procrastination via the PASS survey
for both baseline and follow-up studies. The top three reasons for
procrastination were aversion to the task, laziness, and time management.
Participants had varying reasons for aversion such as “It’s hard to put my
thoughts onto paper” and “I never know how [or] where to begin.”
The SUS results show that the software usability score is about average
(mean=65.8, std=7.07). Most participants liked the editor for the
cleanness and simplicity of the user interface, with quotes such as “Very
simple UI” and “I liked the clean-ness of it.”
The main concerns were software reliability and stability, as shown by
comments such as “I just need to be guaranteed my work isn’t going to be lost
when the program crashes (which happened) otherwise I don’t have enough
confidence to use it” and “crashing at the beginning freaking me out….”
Based on the given feedback, we implemented a set of upgrades to the editor
before the second experiment. We retained the clean and simple UI while also
minimizing user concerns by allowing users to export the document. We also
conducted more extensive software testing to eliminate crashing.
Figure 4: Comparing reasons to procrastinate between two groups for baseline
and follow-up experiments.
## 6 Experimental Follow-Up Study Design
To answer whether notifications help users get started in academic writing,
the same 21 college students participated in a follow-up study. The students
were assigned another graded 3000-word writing assignment. For this paper,
students were asked to summarize two application areas of UbiComp and
hypothesize about a research or class project that could contribute to one or
both of these application areas. Students were given nine days to complete the
assignment. As described, participants were divided into two representative
groups that used the custom text editor, GRP1 and GRP2. The program recorded
the following user actions related to writing: writing content, word count,
typing time, received notifications time, and users’ responses. For the second
experiment, the word count and writing time sampling rates were increased from
once per half hour to once per two minutes. This ensured a more reliable
estimation of the length and number of writing sessions in which users
engaged; moreover, the software was always running in the background, allowing
us to continually collect data and push out notifications. Each group received
a different set of features. GRP1 received standard clickable notifications
only. In contrast, GRP2 received both standard clickable notifications and
actioned notifications. We also sent out notifications that asked users their
reasons for accepting or dismissing the notification. Finally, all
participants again completed the Procrastination Assessment Scale for Students
(PASS) after the experiment, and filled out the SUS survey after the
experiment.
## 7 Experimental Follow-Up Study Results
Over the nine-day experiment period, we closely observed participants’
behavior and provided a series of standard clickable and actioned
notifications. Although all 21 participants agreed to take part in the study,
1 person was excluded because of low writing competency, 4 participants did
not install the application, and 2 participants installed the software but
never used it, despite receiving several reminder emails. Therefore, 14
participants successfully installed and used the software. Half of the
participants (n=7) were online regularly and the remaining 7 people were
online a few hours per day. Fortunately, the number of disqualified
participants was about the same in GRP1 and GRP2, leaving 7 people in each
notification group.
In this study, we measured procrastination in two ways: writing statistics and
standard self-reports. Writing statistics was measured by the number of
sessions, starting writing date, and time spent writing per session.
Figure 5: Time spent writing continuously versus overall time before deadline
for two example participants.
Figure 5 shows example writing sessions for two users. The x-axis
represents the number of days before the deadline, and the y-axis represents
the amount of time spent writing continuously in minutes. These two
participants represent two different behaviors from many participants: Some
students had multiple short sessions, and some wrote in one long session close
to the deadline. We used this data to identify the number of writing sessions
for each participant. GRP1 had 2.29 sessions on average (sd=1.82, n=7), and
GRP2 had 2.43 sessions on average (sd=0.9, n=7). GRP1 wrote for 52.45 minutes
on average (sd=44.11, n=7), and GRP2 wrote for 44.24 minutes on average
(sd=21.56, n=7).
The data show that both groups spent short sessions at the beginning of the
assignment and long sessions near the deadline.
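With typing sampled every two minutes, sessions can be recovered by gap detection. A minimal sketch; the 10-minute idle threshold is an assumption, as the text does not state the gap used:

```python
def sessions_from_samples(sample_minutes, gap=10):
    """Group periodic typing samples (timestamps in minutes) into writing
    sessions: a new session starts whenever the gap between consecutive
    samples exceeds `gap` minutes. Returns (start, end) pairs."""
    sessions = []
    for t in sorted(sample_minutes):
        if sessions and t - sessions[-1][-1] <= gap:
            sessions[-1].append(t)  # continue the current session
        else:
            sessions.append([t])    # idle gap exceeded: start a new session
    return [(s[0], s[-1]) for s in sessions]
```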
To compare the performance of participants in both groups, the course
instructor coded grades based upon two aspects: the novelty of the content (50
points), and how well they supported their arguments (50 points). To reduce
the subjectivity of comparing point by point, we converted the range 0-100 to
the discrete range F-A, and calculated the step difference. For example, if a
participant got a B on the first paper and an A- on the second paper, we assign
a two-step difference (B, B+, A-). GRP1 had a 0.75 step difference on
average (sd=1.28, n=7) and GRP2 had a -0.29 step difference on average
(sd=1.50, n=7).
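The step-difference computation can be sketched directly; the exact grade scale is not stated, so the twelve-step letter ladder below is an assumption:

```python
# Assumed letter-grade ladder; the study does not spell out its exact scale.
GRADE_SCALE = ["F", "D-", "D", "D+", "C-", "C", "C+", "B-", "B", "B+", "A-", "A"]

def step_difference(first_grade, second_grade):
    """Steps between the two paper grades; positive means improvement
    (e.g. B on the first paper and A- on the second is two steps)."""
    return GRADE_SCALE.index(second_grade) - GRADE_SCALE.index(first_grade)
```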
Figure 6: Word count versus time before deadline for each group in the follow-
up experiment. Each line represents a separate participant.
Figure 6 shows word count over the four days before the deadline. The x-axis
indicates the number of days before the deadline and the y-axis shows the
number of words over time. GRP2 does appear to have started slightly earlier
than GRP1, but the small number of participants makes statistical testing
inappropriate here. Qualitatively, GRP2 spent the full day and night before
the deadline writing, whereas GRP1 mostly started writing the night before.
Figure 7: Cumulative typing time versus time before deadline for each group in
the follow-up experiment. Each line represents a separate participant.
Figure 7 shows the time spent on writing over 4 days before the deadline. The
x-axis shows the number of days before the deadline and the y-axis shows the
cumulative amount of time spent on typing in minutes. Despite clickable and
actioned notifications being sent regularly for the full nine-day period, 5
people in GRP1 started around a day before the deadline, while 4 people in
GRP2 started around 2 days before the deadline.
Participants received 143 notifications (118 clickable and 25 actioned
notifications). All participants immediately dismissed all 118 clickable
notifications. When we asked the reasons for dismissal, all participants
claimed they were in the middle of something else (many students noted that
they had mid-terms the week of the paper deadline). For the actioned
notifications, on the other hand, 36% were responded to (9 out of 25). Of the
9 notifications, 7 were related to outline-generation or content expansion.
The last 2 notifications were short questions with a prompting text box. In
these responses, the students only entered 2-3 words. When we asked the reason
for such short answers, they also responded by saying that they were in the
middle of another task. For students having trouble starting the assignment,
we asked a random subset of the participants why they felt they did not get
started quickly. The responses related to low self-efficacy, such as "Not
settle with the topic,” “not knowing what to write about,” and “Lack of
ideas.”
Our hypothesis was that actioned notifications would bypass negative
reflection on the task by prompting users for quick snippets of thought that
make writing more manageable; however, overusing notifications
ended up increasing negative feelings. For example, “It will be better if it
knows when I am writing and then decide not to pop up questions, it is kind of
a distraction” and “Annoying notifications.”
We asked participants what features were most useful and influenced their
decision making towards the writing tasks, summarized in Figure 8.
Figure 8: Survey response summary for the notification system.
In GRP1, 50% of participants agreed that clickable notifications helped them be
more aware of the task, 25% said it helped them to get started, 12.5% agreed
that it motivated them, and 25% thought writing tips were helpful. In
contrast, in GRP2 (receiving clickable and actioned notifications) 43% agreed
that notifications helped them to be more aware of the task and thought
writing tips were helpful. Only 14.29% agreed that it helped them or motivated
them to get started. The starkest difference between the groups, then, was
their opinion of writing tips, where GRP1 was mostly neutral and GRP2 had
stronger opinions about the tips, both positive and negative.
Figure 9: Survey response summary for various elements of the custom text
editor.
Both groups used distraction free mode. Figure 9 summarizes the perceptions of
goal progress and distraction free mode. The result showed that 40% agreed
that it helped them write faster, and 60% agreed that it helped them focus
more on the task. Both groups used a goal progress bar indicator. The result
showed that 80% agreed that it helped them evaluate writing time better, and
73.33% agreed that it motivated them to reach their set goal. Participants
stated “I like the word counting bar a lot!” and “I like Word count and
overall UI.”
Only GRP2 received writing assistance notifications. The result showed that
30% agreed it helped them overcome writer’s block, 40% agreed that it helped
them focus more on the task, structure their thoughts better, and be more
creative (Figure 9). Participants had particularly strong opinions about these
notifications: “The automatic content generator learning system was absolutely
amazing. I was shocked to see how accurate it was. It truly helped me when I
was stuck and motivated me to keep going. If the negative aspects of the app
are removed (listed below), I will absolutely use this app in the future,” and
“I like the earlier planning questions to help me get started,” and “The
planning questions, definitely helpful.”
SUS scores for the updated application had a mean of 60.7 points, with a
standard deviation of 12.30 points (n=14). In qualitative comments, 11 of the
14 participants praised the simplicity of the user interface: “I really
enjoyed using it to write my paper. The UI was very simple and did not have a
lot of distractions,” “Looks very clean,” and “it’s very minimalistic and easy
to use.”
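For readers unfamiliar with the scale, the SUS figure above is the standard 0-100 score computed from ten 1-5 Likert items. A minimal sketch of Brooke's standard scoring procedure follows; the `sus_score` helper is our own illustration, not part of the study software:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5
    Likert responses, following Brooke's standard scoring: odd items
    contribute (response - 1), even items contribute (5 - response),
    and the sum is scaled by 2.5."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A fully neutral respondent (all 3s) lands exactly at the midpoint.
print(sus_score([3] * 10))  # 50.0
```

Per-participant scores computed this way are what the mean of 60.7 and standard deviation of 12.30 summarize.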
Dislike of the application stemmed from annoyance with the notifications and
from the fact that the application was always running. The memory usage of the
application is about 60 MB (30 MB compressed), roughly 50% less than the
Dropbox or Google Drive syncing services. Still, the visibility of an always-
running application that users could not control increased negative feelings:
“I did not like having to keep the app running at all times” and “It will
better if it knows when I am writing and then decide not to pop up questions,
it is kind of a distraction.”
The outlining system also contributed to low usability scores. Many
individuals did not understand how to use the section panel (despite attending
a tutorial on using the application). A number of comments were similar to:
“[…] I was confused by the sections/files on the left side. What were they
actually for?”
## 8 Discussion
Our results show little to no difference in procrastination behavior after the
technological intervention; however, because we only captured time and word-
count information, it was unclear when participants were conducting background
research for their papers. Comparing the PASS self-report surveys taken at the
end of the baseline and at the end of the follow-up study, the data show an
insignificant difference between baseline and follow-up procrastination
levels; however, Figure 4 shows that the reason for procrastination shifted
from “aversion toward the task” to “time management”. The reason may be that
the experimental study took place the week before mid-term exams; many course
projects were due that week, including our writing assignment. Qualitative
data also support this hypothesis, with several participants mentioning mid-
term exams. On the other hand, writing assistants significantly decreased task
aversiveness and difficulty in decision making compared to the baseline study
(p<0.05), while other factors remained the same. This conclusion is also
supported by qualitative data from participants. These results suggest that
the perceived benefit of attending to a notification must be considerable for
it to be perceived as positive: the actioned notifications were the only ones
attended to, yet they required considerable time and user writing data to
prepare.
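The paper does not state which significance test underlies the p<0.05 claim; a paired comparison of each participant's baseline and follow-up PASS subscale scores is one common choice. A minimal sketch with purely illustrative data (the scores below are invented, not study data):

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic: mean of the per-participant differences
    divided by the standard error of those differences."""
    diffs = [b - a for b, a in zip(before, after)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Illustrative task-aversiveness subscale scores for 7 participants.
baseline = [4.0, 3.5, 4.5, 3.0, 4.0, 3.5, 4.5]
followup = [3.0, 2.5, 3.5, 2.5, 3.0, 3.0, 3.5]

t_stat = paired_t(baseline, followup)
# With df = 6, the two-tailed 5% critical value is about 2.447, so
# t_stat > 2.447 would correspond to p < 0.05.
```

This is only a sketch of the kind of analysis involved; the actual subscales, sample pairing, and test used in the study may differ.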
### 8.1 Notifications
Although notifications are useful as external triggers against
procrastination, a notification not only triggers memory of the task, but may
also trigger the feelings associated with it. If the task is aversive, those
negative feelings (anxiety, guilt, or boredom) are more likely to be
triggered; an aversive notification is thus more likely to provoke annoyance,
guilt, and anger than a desirable one. This is consistent with the basic
psychological concept of conditioning, exemplified by a trigger (a doorbell)
associated with a particular outcome (food) [56]. In addition, our results
imply that users may perceive an aversive notification as a reminder,
regardless of the motivational text written in it. We hypothesized that the
tips-and-tricks notifications might help users increase writing competency,
leading to the desired amount of writing; however, it appears that users
perceived them as a disguised version of reminders. Thus, the number,
frequency, and prominence of aversive notifications must be carefully
considered. Some users may appreciate reminders; however, users should be
given full control over whether they want to be notified, postpone the task,
or not interact with it at all. Our experiment implies that notifications by
themselves are not enough to encourage users to get started on aversive tasks.
### 8.2 UI Elements and Writing Assistance
Word count as a goal does indeed motivate users, as we expected; most users
reported high satisfaction. The reason word count is so effective might be its
specific, measurable, and nonjudgmental nature. More importantly, it gives
users real-time feedback: they can see clearly how much they have invested in
their work, and they focus on the number of words instead of on perfecting
those words. One may argue that the goal of academic writing is quality, not
quantity. However, we argue that there will be no quality without some
quantity: quality has to start from quantity and iterative improvement. If
users fear starting, they are less likely to produce any quality work [11].
A number of participants liked distraction-free mode, and self-reports show
that they felt more focused on the task when entering this mode. Since we did
not track users outside our application, it remains unclear whether
distraction-free mode affected users at the behavioral level.
The results imply that users appreciate systems that help them minimize the
time and effort required to finish a task. Writing assistant systems can
reduce writers’ anxiety, and concept extraction gives useful feedback to
users. Because concept extraction is a machine operation, there is little to
no fear of social judgment on the quality of the work. Adding semi-intelligent
systems that help users finish aversive tasks more easily and quickly is a
promising strategy to reduce procrastination and increase satisfaction. Human
feedback may give strong social reinforcement, triggering bursts of motivation
greater than any machine could; at the same time, social judgment can paralyze
users from getting started. Machine intelligence, in contrast, provides purely
non-judgmental feedback: users still get feedback while feeling safe from
social judgment and loss of self-efficacy. Balancing human and machine
feedback could help reduce procrastination behavior.
In our experimental design, we had no placebo group for this set of features;
nonetheless, writing is a familiar behavior among college students, and most
students have experience with text editors without special features. Any self-
evaluation is likely to be compared with this previous experience. Even so,
this was not explicitly controlled for, so the source of changes in attitude
remains unclear.
### 8.3 General Lessons Learned
A key consideration for software designers should be data transparency. In
this study, we collected basic user statistics such as word count, writing
time, and notification reaction time, and through informed consent we told
users exactly what data we collected, including the words they explicitly
wrote. In addition, a graded writing assignment is extremely important to the
user, so users have high expectations for software stability. We understood
their concerns and told them that we had a versioning system and that all data
was backed up periodically; nonetheless, some users still showed some level of
discomfort, which might be caused by their inability to manage their own data.
Thus, offering appropriate data transparency may create more trust between
users and researchers.
### 8.4 Summary of Lessons Learned
1. If the goal is to create an immediate action, all blockers have to be
identified and eliminated before sending triggers.
2. Notifications also trigger emotions that are associated with the task. Be
mindful before using them.
3. Notifications are effective at decreasing users’ burden of keeping track of
a list of tasks, but they are less useful in persuading actions that are
aversive.
4. Users may perceive aversive notifications as negative reminders, regardless
of the motivational text written in the notification.
5. Showing users a proper, measurable, and specific goal helps them evaluate
their work better.
6. Keep users’ autonomy in mind: if you decide to use notifications, make sure
you provide options for users to opt out.
7. Adding semi-intelligent systems that help users finish aversive tasks more
easily and quickly is a promising strategy to reduce procrastination and
increase satisfaction.
8. Questions are useful for users who are already motivated to answer them,
but backfire for users who are not. Before asking questions, make sure users
have enough answers or motivation to provide them.
9. Users are concerned about their own data. Considering data transparency
helps users trust the service and increases overall satisfaction.
## 9 Conclusion
We discussed the current psychological literature regarding procrastination
and evaluated various technological interventions to decrease writing
procrastination among college students. We also outlined the challenges and
lessons learned through conducting procrastination research. The notifications
used in this paper did little to decrease procrastination behavior; moreover,
users who received more notifications reported lower satisfaction than their
peers. Helping users clear their motivation blockers is the first step toward
performing any task. Goal Setting Theory proved effective in increasing
motivation, and machine learning aids can decrease aversion toward the task.
Thus, providing tools that make aversive tasks easier and less fearsome is a
promising strategy to decrease procrastination, but it must be carefully
applied, especially when employing notifications as motivators.
## References
* [2] 2012. 101 Top Tips for Avoiding Procrastination. (May 2012).
* [3] 2016. Concept Expansion | IBM Watson Developer Cloud. (2016). http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/concept-expansion.html
* [4] 2016. Stop Procrastinating. (2016). http://www.stopprocrastinatingapp.com/
* [5] 2016. Write or Die 2. (2016). http://www.writeordie.com/
* [6] Icek Ajzen. 1991. The theory of planned behavior. Organizational Behavior and Human Decision Processes 50, 2 (Dec. 1991), 179–211. DOI:http://dx.doi.org/10.1016/0749-5978(91)90020-T
* [7] Icek Ajzen and Martin Fishbein. 1977. Attitude-behavior relations: A theoretical analysis and review of empirical research. Psychological Bulletin (1977), 888–918.
* [8] Cynthia A. Arem. 2011. Conquering Writing Anxiety (1 edition ed.). Morton Publishing Company.
* [9] Dan Ariely and Klaus Wertenbroch. 2002. Procrastination, Deadlines, and Performance: Self-Control by Precommitment. PSYCHOLOGICAL SCIENCE Research 13 (2002), 219–224. http://people.duke.edu/
* [10] Frank Bentley and Konrad Tollmar. 2013. The power of mobile notifications to increase wellbeing logging behavior. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 1095–1098. http://dl.acm.org/citation.cfm?id=2466140
* [11] Robert Boice. 1990. Professors as Writers: A Self-Help Guide to Productive Writing. New Forums Press.
* [12] Andrew J. Elliot and Ron Friedman. 2007. Approach-avoidance: A central characteristic of personal goals. Lawrence Erlbaum Associates Publishers. 97–118 pages.
* [13] Andrew J. Elliot and Kennon M. Sheldon. 1997. Avoidance Achievement Motivation: A Personal Goals Analysis. Journal of Personality and Social Psychology Copyright 73, 1 (1997), 171–185.
* [14] Nir Eyal. 2014. Hooked: How to Build Habit-Forming Products. Portfolio, New York, New York. 256 pages. http://www.amazon.com/dp/1591847788/?tag=hooked-us-20
* [15] Joseph R. Ferrari. 1992. Psychometric validation of two Procrastination inventories for adults: Arousal and avoidance measures. Journal of Psychopathology and Behavioral Assessment 14, 2 (June 1992), 97–110. DOI:http://dx.doi.org/10.1007/BF00965170
* [16] Joseph R. Ferrari. 1994. Dysfunctional procrastination and its relationship with self-esteem, interpersonal dependency, and self-defeating behaviors. Personality and Individual Differences 17, 5 (11 1994), 673–679. DOI:http://dx.doi.org/10.1016/0191-8869(94)90140-6
* [17] Joseph R. Ferrari. 2000. Procrastination and attention: Factor analysis of attention deficit, boredomness, intelligence, self-esteem, and task delay frequencies. Journal of Social Behavior & Personality 15, 5 (2000), 185–196.
* [18] Joseph R. Ferrari. 2001. Procrastination as self-regulation failure of performance: effects of cognitive load, self-awareness, and time limits on “working best under pressure”. European Journal of Personality 15, 5 (Sept. 2001), 391–406. DOI:http://dx.doi.org/10.1002/per.413
* [19] Joseph R. Ferrari. 2010. Still Procrastinating: The No Regrets Guide to Getting It Done (1 edition ed.). Wiley, Hoboken, N.J.
* [20] Joseph R. Ferrari. 2011. AARP Still Procrastinating: The No-Regrets Guide to Getting It Done. John Wiley & Sons.
* [21] Joseph R. Ferrari and John F. Dovidio. 1997. Some experimental assessments of indecisives: Support for a non-cognitive failures hypothesis. Journal of Social Behavior and Personality 12, 2 (1997), 527. http://search.proquest.com/openview/82a0720ceb3f44678e13c079c5345c27/1?pq-origsite=gscholar&cbl=1819046
* [22] Joseph R. Ferrari, Juan Francisco Díaz-Morales, Jean O’Callaghan, Karem Díaz, and Doris Argumedo. 2007. Frequent Behavioral Delay Tendencies By Adults International Prevalence Rates of Chronic Procrastination. Journal of Cross-Cultural Psychology 38, 4 (July 2007), 458–464. DOI:http://dx.doi.org/10.1177/0022022107302314
* [23] Joseph R. Ferrari and R. A. Emmons. 1995. Methods of procrastination and their relation to self-control and self-reinforcement: An empirical study. Journal of Social Behavior and Personality 10 (1995).
* [24] Joseph R. Ferrari, Judith L. Johnson, and William G. McCown. 2013. Procrastination and Task Avoidance: Theory, Research, and Treatment (1995 edition ed.). Springer.
* [25] Joseph R. Ferrari and Tina Patel. 2004. Social comparisons by procrastinators: rating peers with similar or dissimilar delay tendencies. Personality and Individual Differences 37, 7 (Nov. 2004), 1493–1501. DOI:http://dx.doi.org/10.1016/j.paid.2004.02.006
* [26] Joseph R. Ferrari and Timothy A. Pychyl. 2007. Regulating speed, accuracy and judgments by indecisives: Effects of frequent choices on self-regulation depletion. Personality and Individual Differences 42, 4 (March 2007), 777–787. DOI:http://dx.doi.org/10.1016/j.paid.2006.09.001
* [27] Dan Gilbert. 2014. The psychology of your future self. (2014). https://www.ted.com/talks/dan_gilbert_you_are_always_changing?language=en
* [28] Daniel Goldstein. 2001. The battle between your present and future self. (2001). https://www.ted.com/talks/daniel_goldstein_the_battle_between_your_present_and_future_self?language=en
* [29] Daniel E. Gustavson, Akira Miyake, John K. Hewitt, and Naomi P. Friedman. 2014. Genetic Relations Among Procrastination, Impulsivity, and Goal-Management Ability Implications for the Evolutionary Origin of Procrastination. Psychological Science 25, 6 (6 2014), 1178–1188. DOI:http://dx.doi.org/10.1177/0956797614526260
* [30] Mohsen Haghbin, Adam McCaffrey, and Timothy A. Pychyl. 2012. The Complexity of the Relation between Fear of Failure and Procrastination. Journal of Rational-Emotive & Cognitive-Behavior Therapy 30, 4 (March 2012), 249–263. DOI:http://dx.doi.org/10.1007/s10942-012-0153-9
* [31] Nancy. N. Harris and Robert I. Sutton. 1983. Task Procrastination in Organizations: A Framework for Research. Human Relations 36, 11 (11 1983), 987–995. DOI:http://dx.doi.org/10.1177/001872678303601102
* [32] Laurel A. Haycock, Patricia McCarthy, and Carol L. Skay Skay. 1998. Procrastination in college students: The role of self-efficacy and anxiety. Journal of Counseling and Development 76, 3 (1998), 137.
* [33] George Kingsley Zipf. 2016. Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology. https://books.google.com/books?hl=en
* [34] Predrag Klasnja, Sunny Consolvo, and Wanda Pratt. 2011. How to evaluate technologies for health behavior change in HCI research. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 3063–3072. http://dl.acm.org/citation.cfm?id=1979396
* [35] W. Kyle and Timothy A. Pychyl. 2009. In search of the arousal procrastinator: Investigating the relation between procrastination, arousal-based personality traits and beliefs about procrastination motivations. Personality and Individual Differences 47, 8 (2009), 906–911. DOI:http://dx.doi.org/10.1016/j.paid.2009.07.013
* [36] Nicholas D. Lane, Mu Lin, Mashfiqui Mohammod, Xiaochao Yang, Hong Lu, Giuseppe Cardone, Shahid Ali, Afsaneh Doryab, Ethan Berke, Andrew T. Campbell, and Tanzeem Choudhury. 2014. BeWell: Sensing Sleep, Physical Activities and Social Interactions to Promote Wellbeing. Mobile Networks and Applications 19, 3 (June 2014), 345–359. DOI:http://dx.doi.org/10.1007/s11036-013-0484-5
* [37] Eric Larson, Jon Froehlich, Tim Campbell, Conor Haggerty, Les Atlas, James Fogarty, and Shwetak N. Patel. 2012a. Disaggregated water sensing from a single, pressure-based sensor: An extended analysis of HydroSense using staged experiments. Pervasive and Mobile Computing 8, 1 (Feb. 2012), 82–102. DOI:http://dx.doi.org/10.1016/j.pmcj.2010.08.008
* [38] Eric C. Larson, Mayank Goel, Gaetano Boriello, Sonya Heltshe, Margaret Rosenfeld, and Shwetak N. Patel. 2012b. SpiroSmart: using a microphone to measure lung function on a mobile phone. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing. ACM, 280–289. http://dl.acm.org/citation.cfm?id=2370261
* [39] David Lehrer and Janani Vasudev. 2011. Evaluating a Social Media Application for Sustainability in the Workplace. In CHI ’11 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’11). ACM, New York, NY, USA, 2161–2166. DOI:http://dx.doi.org/10.1145/1979742.1979935
* [40] Edwin A. Locke, Gary P. Latham, Ken J. Smith, Robert E. Wood, and Albert Bandura. A Theory of Goal Setting & Task Performance.
* [41] George Loewenstein. 1992. Choice Over Time (1st editio ed.). Russell Sage Foundation. https://www.amazon.com/Choice-Over-Time-George-Loewenstein/dp/0871545586
* [42] Anmol Madan, Manuel Cebrian, David Lazer, and Alex Pentland. 2010. Social sensing for epidemiological behavior change. In Proceedings of the 12th ACM international conference on Ubiquitous computing. ACM, 291–300. http://dl.acm.org/citation.cfm?id=1864394
* [43] Norman A. Milgram, Gila Batori, and Doron Mowrer. 1993. Correlates of academic procrastination. Journal of School Psychology 31, 4 (1993), 487–500. DOI:http://dx.doi.org/10.1016/0022-4405(93)90033-F
* [44] Forough Mortazavi, Saideh S. Mortazavi, and Razieh Khosrorad. 2015. Psychometric Properties of the Procrastination Assessment Scale-Student (PASS) in a Student Sample of Sabzevar University of Medical Sciences. Iranian Red Crescent Medical Journal 17, 9 (Sept. 2015). DOI:http://dx.doi.org/10.5812/ircmj.28328
* [45] Don Norman. 2014. Why Procrastination Is Good. (12 2014). https://www.linkedin.com/pulse/why-procrastination-good-don-norman
* [46] Shane Gregory Owens, Christine G. Bowman, and Charles A. Dill. 2008. Overcoming Procrastination: The Effect of Implementation Intentions1. Journal of Applied Social Psychology 38, 2 (Feb. 2008), 366–384. DOI:http://dx.doi.org/10.1111/j.1559-1816.2007.00309.x
* [47] Richard W. Patterson. 2014. Can Behavioral Tools Improve Online Student Outcomes? Experimental Evidence from a Massive Open Online Course. Technical Report. http://www.ilr.cornell.edu/sites/ilr.cornell.edu/files/cheri_wp165_0.pdf
* [48] Timothy A. Pychyl. 2013. Solving the Procrastination Puzzle: A Concise Guide to Strategies for Change (reprint edition ed.). TarcherPerigee, New York.
* [49] Mashfiqui Rabbi, Shahid Ali, Tanzeem Choudhury, and Ethan Berke. 2011. Passive and in-situ assessment of mental and physical well-being using mobile sensors. In Proceedings of the 13th international conference on Ubiquitous computing. ACM, 385–394. http://dl.acm.org/citation.cfm?id=2030164
* [50] Laura A. Rabin, Joshua Fogel, and Katherine E. Nutter-Upham. 2011. Academic procrastination in college students: The role of self-reported executive function. Journal of Clinical and Experimental Neuropsychology 33, 3 (March 2011), 344–357. DOI:http://dx.doi.org/10.1080/13803395.2010.518597
* [51] Mike Rose. Writer’s Block: The Cognitive Dimension (Studies in Writing and Rhetoric). https://www.amazon.com/Writers-Block-Cognitive-Dimension-Rhetoric/dp/0809329239
* [52] Alireza Sahami Shirazi, Niels Henze, Tilman Dingler, Martin Pielot, Dominik Weber, and Albrecht Schmidt. 2014. Large-scale assessment of mobile notifications. ACM Press, 3055–3064. DOI:http://dx.doi.org/10.1145/2556288.2557189
* [53] Henri C. Schouwenburg. 1992. Procrastinators and fear of failure: an exploration of reasons for procrastination. European Journal of Personality 6, 3 (Sept. 1992), 225–236. DOI:http://dx.doi.org/10.1002/per.2410060305
* [54] Henri C. Schouwenburg. 1995. Academic Procrastination:Theoretical Notions, Measurement, and Research. In Procrastination and Task Avoidance. Springer US, Boston, MA, 71–96. DOI:http://dx.doi.org/10.1007/978-1-4899-0227-6{_}4
* [55] P. Wesley Schultz. 2002. Knowledge, information, and household recycling: Examining the knowledge-deficit model of behavior change. New tools for environmental protection: Education, information, and voluntary measures (2002), 67–82.
* [56] Martin E. P. Seligman, Peter Schulman, Robert J. DeRubeis, and Steven D. Hollon. 1999. The prevention of depression and anxiety. Prevention & Treatment 2, 1 (1999). DOI:http://dx.doi.org/10.1037/1522-3736.2.1.28a
* [57] Suzanne B. Shu and Ayelet Gneezy. 2013. Procrastination of Enjoyable Experiences. 47, 5, 933–944. DOI:http://dx.doi.org/10.1509/jmkr.47.5.933. http://www.jstor.org/stable/20751554
* [58] Fuschia M Sirois. 2004. Procrastination and intentions to perform health behaviors: The role of self-efficacy and the consideration of future consequences. Personality and Individual Differences 37, 1 (July 2004), 115–128. DOI:http://dx.doi.org/10.1016/j.paid.2003.08.005
* [59] Fuschia M. Sirois. 2007. “I’ll look after my health, later”: A replication and extension of the procrastination–health model with community-dwelling adults. Personality and Individual Differences 43, 1 (July 2007), 15–26. DOI:http://dx.doi.org/10.1016/j.paid.2006.11.003
* [60] Fuschia M Sirois, Michelle L Melia-Gordon, and Timothy A Pychyl. 2003. “I’ll look after my health, later”: an investigation of procrastination and health. Personality and Individual Differences 35, 5 (Oct. 2003), 1167–1184. DOI:http://dx.doi.org/10.1016/S0191-8869(02)00326-4
* [61] Piers Steel. 2007. The nature of procrastination: A meta-analytic and theoretical review of quintessential self-regulatory failure. Psychological Bulletin 133, 1 (2007), 65–94. DOI:http://dx.doi.org/10.1037/0033-2909.133.1.65
* [62] Andrew Thatcher, Gisela Wretschko, and Peter Fridjhon. 2008. Online flow experiences, problematic Internet use and Internet procrastination. Computers in Human Behavior 24, 5 (Sept. 2008), 2236–2254. DOI:http://dx.doi.org/10.1016/j.chb.2007.10.008
* [63] Steven Wendel. 2013. Designing for Behavior Change: Applying Psychology and Behavioral Economics (1 edition ed.). O’Reilly Media, Sebastopol, California. 394 pages.
* [64] John Zimmerman, Jodi Forlizzi, and Shelley Evenson. 2007. Research Through Design As a Method for Interaction Design Research in HCI (CHI ’07). ACM, New York, NY, USA, 493–502. DOI:http://dx.doi.org/10.1145/1240624.1240704
# Superconductivity and normal-state properties of kagome metal RbV3Sb5 single
crystals
Qiangwei Yin†, Zhijun Tu†, Chunsheng Gong†, Yang Fu, Shaohua Yan, and Hechang
Lei<EMAIL_ADDRESS>Department of Physics and Beijing Key Laboratory of Opto-
electronic Functional Materials $\&$ Micro-nano Devices, Renmin University of
China, Beijing 100872, China
###### Abstract
We report the discovery of superconductivity and detailed normal-state
physical properties of RbV3Sb5 single crystals with a V kagome lattice.
RbV3Sb5 single crystals show a superconducting transition at $T_{c}\sim$ 0.92
K. Meanwhile, resistivity, magnetization and heat capacity measurements
indicate anomalies at $T^{*}\sim$ 102 - 103 K, possibly related to the
formation of a charge ordering state. When $T$ is lower than $T^{*}$, the Hall
coefficient $R_{\rm H}$ undergoes a drastic change and a sign reversal from
negative to positive, which can be partially explained by the enhanced
mobility of hole-type carriers. In addition, quantum oscillations reveal
several very small Fermi surfaces with low effective masses, consistent with
the existence of multiple highly dispersive Dirac bands near the Fermi energy
level.
The two-dimensional (2D) kagome lattice composed of corner-sharing triangles
and hexagons is one of the most studied systems of the last decades due to its
unique structural features. On the one hand, if only the spin degree of
freedom is considered, insulating magnetic kagome materials can host exotic
magnetic ground states, like the quantum spin liquid state, because of the
strong geometrical frustration of the kagome lattice Balents ; Shores ; HanTH
; FuM . On the other hand, when the charge degree of freedom becomes dominant
(partial filling), the band topology starts to manifest its features in kagome
metals, such as nontrivial Dirac points and flat bands in the band structure
KangM ; LiuZ ; KangM2 . More interestingly, when both spin and charge degrees
of freedom are present, many exotic phenomena appear in correlated magnetic
kagome metals. For example, in the ferromagnetic kagome metals Fe3Sn2 and
TbMn6Sn6, due to spin-orbit coupling and the breaking of time-reversal
symmetry, a Chern gap can be opened at the Dirac point, leading to a large
anomalous Hall effect (AHE), topological edge states and large magnetic-field
tunability YeL ; YinJX ; YinJX2 . Moreover, the antiferromagnetic kagome metal
Mn3Sn and the ferromagnetic kagome metal Co3Sn2S2 exhibit large intrinsic AHE,
which is related to the existence of Weyl nodes in these materials Kuroda ;
LiuE ; WangQ .
Besides the intensively studied magnetic kagome metals, other correlation
effects and ordering states in the partially filled kagome lattice have also
attracted great interest. Theoretical studies suggest that the doped kagome
lattice could host unconventional superconductivity Balents ; Anderson ; Ko ;
WangWS ; Kiesel . In particular, when the kagome lattice is filled near the
van Hove filling, the Fermi surface (FS) is perfectly nested and has saddle
points at the $M$ point of the Brillouin zone (BZ) WangWS . Depending on the
on-site Hubbard interaction $U$ and the Coulomb interaction on nearest-
neighbor bonds $V$, the system can develop different ground states, including
unconventional superconductivity, ferromagnetism, charge bond order, charge
density wave (CDW) order and so on WangWS ; Kiesel . However, realizations of
superconducting and charge ordering states are still scarce in kagome metals.
Very recently, a novel family of kagome metals, AV3Sb5 (A = K, Rb and Cs), was
discovered Ortiz1 . Among them, KV3Sb5 and CsV3Sb5 exhibit superconductivity
with transition temperatures $T_{c}=$ 0.93 and 2.5 K, respectively Ortiz2 ;
Ortiz3 . Proximity-induced spin-triplet superconductivity was also observed in
Nb/KV3Sb5 devices WangY . More importantly, theoretical calculations and
angle-resolved photoemission spectroscopy (ARPES) demonstrate that there are
several Dirac nodal points near the Fermi energy level ($E_{\rm F}$) with a
non-zero $Z_{2}$ topological invariant in KV3Sb5 and CsV3Sb5 Ortiz1 ; Ortiz2 ;
Ortiz3 ; YangSY . Moreover, AV3Sb5 exhibits transport and magnetic anomalies
at $T^{*}\sim$ 80 - 110 K Ortiz1 ; Ortiz2 ; Ortiz3 . X-ray diffraction (XRD)
and scanning tunnelling microscopy (STM) measurements on KV3Sb5 and CsV3Sb5
indicate that a 2$\times$2 superlattice emerges below $T^{*}$, i.e., a charge
order (CDW-like state) forms Ortiz2 ; JiangYX . Furthermore, STM spectra show
that this charge order has a chiral anisotropy, which can be tuned by magnetic
field and may lead to an anomalous Hall effect at low temperature even though
KV3Sb5 does not exhibit magnetic order or local moments YangSY ; JiangYX ;
Kenney .
Motivated by these studies, in this work we carried out a comprehensive study
of the physical properties of RbV3Sb5 single crystals. We find that RbV3Sb5
shows a superconducting transition at $T_{c}\sim$ 0.92 K, which coexists with
anomalies of physical properties at $T^{*}\sim$ 102 - 103 K; this could be
related to the emergence of a charge ordering state. Below $T^{*}$, the
transport properties change significantly, possibly rooted in dramatic changes
of the electronic structure due to the formation of charge order. Furthermore,
the analysis of low-temperature quantum oscillations indicates that there are
small Fermi surfaces (FSs) with low effective masses in RbV3Sb5, revealing the
existence of highly dispersive bands near the Fermi energy level $E_{\rm F}$.
Single crystals of RbV3Sb5 were grown from Rb ingot (purity 99.75%), V powder
(purity 99.9%) and Sb grains (purity 99.999%) using the self-flux method
Ortiz2 . The eutectic mixture of RbSb and Rb3Sb7 was mixed with VSb2 to form a
composition of approximately 50 at.% RbxSby and 50 at.% VSb2. The mixture was
put into an alumina crucible and sealed in a quartz ampoule under partial
argon atmosphere. The sealed quartz ampoule was heated to 1273 K in 12 h and
soaked there for 24 h. Then it was cooled down to 1173 K at 50 K/h and further
to 923 K at a slow rate. Finally, the ampoule was taken out of the furnace and
decanted with a centrifuge to separate RbV3Sb5 single crystals from the flux.
Except for the sealing and heat-treatment procedures, all other preparation
steps were carried out in an argon-filled glove box in order to prevent the
reaction of Rb with air and water. The obtained crystals have a typical size
of 2 $\times$ 2 $\times$ 0.02 mm3 and are stable in air. The XRD pattern was
collected using a Bruker D8 X-ray diffractometer with Cu $K_{\alpha}$
radiation ($\lambda=$ 0.15418 nm) at room temperature. Elemental analysis was
performed using energy-dispersive X-ray spectroscopy (EDX). Electrical
transport and heat capacity measurements were carried out in a Quantum Design
physical property measurement system (PPMS-14T). The longitudinal and Hall
electrical resistivities were measured using a five-probe method with the
current flowing in the $ab$ plane of the crystal. The Hall resistivity was
obtained from the difference in the transverse resistivity measured at
positive and negative fields in order to remove the longitudinal contribution
due to voltage-probe misalignment, i.e.,
$\rho_{yx}(\mu_{0}H)=[\rho_{yx}(+\mu_{0}H)-\rho_{yx}(-\mu_{0}H)]/2$. The
$c$-axis resistivity was measured by attaching current and voltage wires on
opposite sides of the plate-like crystal. Magnetization measurements were
performed in a Quantum Design magnetic property measurement system (MPMS3).
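The field antisymmetrization step in the Hall measurement can be sketched numerically. The `antisymmetrize_hall` helper and the synthetic data below are our own illustration, not code from the measurement setup:

```python
import numpy as np

def antisymmetrize_hall(rho_pos, rho_neg):
    """Hall resistivity from transverse resistivity measured at +H and -H:
    rho_yx(H) = [rho_yx(+H) - rho_yx(-H)] / 2.
    The field-symmetric (longitudinal) pickup from voltage-probe
    misalignment cancels in the difference."""
    return (np.asarray(rho_pos) - np.asarray(rho_neg)) / 2.0

# Synthetic check: a true (field-antisymmetric) Hall signal plus a
# field-symmetric misalignment offset.
true_hall = np.array([0.02, 0.05, 0.11])   # arbitrary units
offset = 0.30                              # misalignment pickup
recovered = antisymmetrize_hall(true_hall + offset, -true_hall + offset)
```

Here `recovered` matches `true_hall`, showing how the procedure removes the misalignment contribution regardless of its size.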
Figure 1: (a) Crystal structure of RbV3Sb5. The big green, small red, medium
blue and cyan balls represent Rb, V, Sb1 and Sb2 sites, respectively. (b) XRD
pattern of a RbV3Sb5 single crystal. Inset: photo of typical RbV3Sb5 single
crystals on a 1 mm grid paper.
As shown in the left panel of Fig. 1(a), RbV3Sb5 has a layered structure with
hexagonal symmetry (space group $P6/mmm$, No. 191). It consists of Rb layers
and V-Sb slabs stacking alternately along the $c$ axis, isostructural to KV3Sb5
and CsV3Sb5 Ortiz1 . The key structural ingredient of this material is the
two-dimensional (2D) kagome layer formed by the V atoms in the V-Sb slab (right
panel of Fig. 1(a)). There are two kinds of Sb sites: the Sb atoms at the Sb1
site occupy the centers of the V hexagons, while the Sb atoms at the Sb2 site
are located below and above the centers of the V triangles, forming
graphene-like hexagon layers. The XRD pattern of a RbV3Sb5 single crystal (Fig. 1(b))
reveals that the crystal surface is parallel to the $(00l)$-plane. The
estimated $c$-axial lattice constant is about 9.114 Å, close to previously
reported values Ortiz1 . The thin-plate-like crystals (inset of Fig. 1(b)) are
also consistent with the layered structure of RbV3Sb5. EDX measurements at
multiple points on the crystals give an atomic ratio of Rb : V : Sb =
0.90(6) : 3 : 5.07(4) when the V content is set to 3. The Rb composition is
slightly less than 1, indicating that there may be a small amount of Rb
deficiency in the present RbV3Sb5 crystals.
Fig. 2(a) exhibits the temperature dependence of the in-plane resistivity
$\rho_{ab}(T)$ and the $c$-axial resistivity $\rho_{c}(T)$ of a RbV3Sb5 single
crystal from 2 K to 300 K. The zero-field $\rho_{ab}(T)$ exhibits metallic
behavior over the measured temperature range, and the residual resistivity ratio
(RRR), defined as $\rho_{ab}$(300 K)/$\rho_{ab}$(2 K), is about 44, indicating
the high quality of the crystals. At $T^{*}\sim$ 103 K, $\rho_{ab}(T)$ shows
an inflection point, which is related to the onset of the charge ordering
transition Ortiz2 ; JiangYX . It should be noted that this $T^{*}$ is higher
than those of KV3Sb5 and CsV3Sb5 Ortiz2 ; Ortiz3 , implying that the
relationship between $T^{*}$ and the lattice parameters (or the ionic radius of
the alkali metal) is not monotonic. At $\mu_{0}H=$ 14 T, $\rho_{ab}(T)$ is
insensitive to the magnetic field when $T>T^{*}$, but a significant
magnetoresistance (MR) gradually appears below $T^{*}$. On the other hand, the
$\rho_{c}(T)$ has a much larger absolute value than $\rho_{ab}(T)$. The
ratio $\rho_{c}/\rho_{ab}$ is about 7 at 300 K and increases to about 33
as the temperature decreases to 2 K, manifesting the significantly 2D nature of
RbV3Sb5. However, this anisotropy is smaller than that of CsV3Sb5, which could
be partially ascribed to the smaller interlayer spacing between two V-Sb slabs
Ortiz2 . More importantly, in contrast to $\rho_{ab}(T)$, the $\rho_{c}(T)$
shows a remarkable upturn starting from $T^{*}$ with a maximum at about 97 K;
this behavior is distinctly different from that of CsV3Sb5 Ortiz2 . It
suggests that the $\textbf{q}_{\rm CDW}$ in RbV3Sb5 might have a $c$-axial
component, leading to a significantly gapped FS along the $k_{z}$ direction.
Similar behavior has also been observed in PdTeI with CDW vector
$\textbf{q}_{\rm CDW}$ = (0, 0, 0.396) LeiHC and GdSi with spin density wave
(SDW) vector $\textbf{q}_{\rm SDW}$ = (0, 0.483, 0.092) FengY . Fig. 2(b)
shows $\rho_{ab}(T)$ below 1.3 K. A sharp resistivity drop appears in the
zero-field $\rho_{ab}(T)$ curve, corresponding to the superconducting transition. The
onset superconducting transition temperature $T_{c,\rm onset}$, determined from
the crossing point of the two lines extrapolated from the high-temperature normal
state and the low-temperature superconducting state, is 0.92 K, with a
transition width $\Delta T_{c}=$ 0.17 K. This $T_{c}$ is lower than that of
CsV3Sb5 ($T_{c}\sim$ 2.5 K) but very close to that of KV3Sb5 ($T_{c}\sim$ 0.93
K) Ortiz2 ; Ortiz3 .
Figure 2: (a) Temperature dependence of $\rho_{ab}(T)$ and $\rho_{c}(T)$ at
zero field and 14 T between 2 K and 300 K. (b) Temperature dependence of zero-
field $\rho_{ab}(T)$ below 1.3 K. (c) Temperature dependence of $M(T)$ at
$\mu_{0}H=$ 1 T for $H\parallel c$ with ZFC and FC modes. (d) Temperature
dependence of $C_{p}(T)$ at zero field between 2 K and 117 K. Inset: $C_{p}/T$
vs. $T^{2}$ in the low-temperature region. The red solid line represents the
linear fit using the formula $C_{p}/T=\gamma+\beta T^{2}$.
The charge ordering transition also has a remarkable influence on the magnetic
property of RbV3Sb5. As shown in Fig. 2(c), the magnetization $M(T)$ curve
exhibits a relatively weak temperature dependence with a small absolute value
above $T^{*}$, reflecting the Pauli paramagnetism of RbV3Sb5. In contrast,
when $T<T^{*}$, there is a sharp drop in the $M(T)$ curve because of the
decreased carrier density originating from the partially gapped FS by the
charge ordering transition. In addition, the nearly overlapping zero-field-
cooled (ZFC) and field-cooled (FC) $M(T)$ curves suggest that this
anomaly is due to a density-wave-type transition rather than an
antiferromagnetic one. Fig. 2(d) shows the temperature dependence of the heat
capacity $C_{p}(T)$ of RbV3Sb5 single crystals measured between $T=$ 2 and 117
K at zero field. It can be seen that there is a jump at $\sim$ 102 K, in
agreement with the $T^{*}$ obtained from resistivity and magnetization
measurements. The jump in the $C_{p}(T)$ curve of RbV3Sb5 is similar to those of
KV3Sb5 and CsV3Sb5 Ortiz1 ; Ortiz2 ; Ortiz3 , suggesting that this
heat-capacity anomaly also originates from the charge ordering transition. The
electronic specific heat coefficient $\gamma$ and phonon specific heat
coefficient $\beta$ can be obtained from the linear fit of low-temperature
heat capacity using the formula $C_{p}/T=\gamma+\beta T^{2}$ (inset of Fig.
2(d)). The fitted $\gamma$ and $\beta$ are 17(1) mJ mol$^{-1}$ K$^{-2}$ and
3.63(2) mJ mol$^{-1}$ K$^{-4}$, respectively. The latter gives a Debye
temperature $\Theta_{D}=$ 168.9(3) K via the formula
$\Theta_{D}=(12\pi^{4}N{\rm R}/5\beta)^{1/3}$, where $N$ is the number of atoms
per formula unit and R is the gas constant. The electron-phonon coupling
$\lambda_{e-ph}$ can be estimated with the values of $\Theta_{D}$ and $T_{c}$
using McMillan’s formula McMillan ,
$\lambda_{e-ph}=\frac{1.04+\mu^{\ast}\ln(\Theta_{D}/1.45T_{c})}{(1-0.62\mu^{\ast})\ln(\Theta_{D}/1.45T_{c})-1.04}$
(1)
where $\mu^{\ast}$ is the screened Coulomb pseudopotential, usually
between 0.1 and 0.15. Assuming $\mu^{\ast}=$ 0.13, the calculated
$\lambda_{e-ph}$ is about 0.489, implying that RbV3Sb5 is a weakly coupled BCS
superconductor Allen .
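The quoted $\Theta_{D}$ and $\lambda_{e-ph}$ follow directly from the fitted $\beta$, the measured $T_{c}$, and the assumed $\mu^{\ast}$. A minimal sketch (function names are ours; $N=9$ counts the Rb + 3 V + 5 Sb atoms per formula unit):

```python
import math

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def debye_temperature(beta_mJ, n_atoms):
    """Theta_D = (12*pi^4*N*R / (5*beta))^(1/3), with beta in mJ mol^-1 K^-4."""
    beta = beta_mJ * 1e-3  # convert to J mol^-1 K^-4
    return (12 * math.pi**4 * n_atoms * R_GAS / (5 * beta)) ** (1 / 3)

def mcmillan_lambda(theta_D, Tc, mu_star=0.13):
    """Electron-phonon coupling from McMillan's inverted formula, eq. (1)."""
    L = math.log(theta_D / (1.45 * Tc))
    return (1.04 + mu_star * L) / ((1 - 0.62 * mu_star) * L - 1.04)

theta_D = debye_temperature(3.63, n_atoms=9)  # -> about 169 K
lam = mcmillan_lambda(theta_D, Tc=0.92)       # -> about 0.49
```

Running this reproduces $\Theta_{D}\approx 168.9$ K and $\lambda_{e-ph}\approx 0.489$, the values stated in the text.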
Figure 3: (a) and (b) Field dependence of MR and $\rho_{yx}(T,\mu_{0}H)$ at
various temperatures up to 9 T. Inset of (a) shows the field dependence of MR
at 2 K with the field up to 14 T. The red line represents the fit using the
formula MR $=A(\mu_{0}H)^{\alpha}$. (c) Temperature dependence of $R_{\rm
H}(T)$ obtained from the linear fits of $\rho_{yx}(T,\mu_{0}H)$ curves. (d)
Temperature dependence of $R_{\rm H}/\rho_{ab}(0)$. Inset: the enlarged part
of $R_{\rm H}/\rho_{ab}(0)$ near $T^{*}$ and the vertical red line represents
the temperature of $T^{*}$.
The MR [$=(\rho_{ab}(\mu_{0}H)-\rho_{ab}(0))/\rho_{ab}(0)$] of RbV3Sb5 is
negligible above $T^{*}$ and increases gradually below $T^{*}$ (Fig. 3(a)),
consistent with the $\rho_{ab}(T)$ data (Fig. 2(a)). At low temperature, the
MR does not saturate up to 14 T, and the Shubnikov-de Haas (SdH) quantum
oscillations (QOs) can be clearly observed in the low-temperature, high-field
region (inset of Fig. 3(a)). The MR at 2 K can be fitted using the formula MR
$=A(\mu_{0}H)^{\alpha}$ with $\alpha=$ 1.001(5) (inset of Fig. 3(a)); such
linear MR behavior extends up to $T^{*}$, especially at $\mu_{0}H>$ 3 T. Fig.
3(b) shows the field dependence of Hall resistivity $\rho_{yx}(T,\mu_{0}H)$ at
several typical temperatures. At high temperature, the values of
$\rho_{yx}(T,\mu_{0}H)$ are negative with nearly linear dependence on field.
Upon decreasing the temperature below 50 K, $\rho_{yx}(T,\mu_{0}H)$ becomes
positive, while the nearly linear field dependence remains almost unchanged in the
high-field region. Similar to the MR curves, the SdH QOs appear at low temperatures. The
Hall coefficient $R_{\rm H}$ obtained from the linear fits of
$\rho_{yx}(T,\mu_{0}H)$ curves is shown in Fig. 3(c). The strong temperature
dependence of $R_{\rm H}$ implies that RbV3Sb5 is a multi-band metal,
consistent with theoretical calculations and ARPES measurements of KV3Sb5 and
CsV3Sb5 Ortiz1 ; Ortiz2 ; YangSY . At high temperature, the negative $R_{\rm
H}$ suggests that the electron-type carriers are dominant, which could
originate from the electron pockets around $\Gamma$ and $K$ points of BZ
Ortiz1 ; Ortiz2 ; YangSY . The most remarkable feature is that the weakly
temperature dependent $R_{\rm H}$ starts to decrease rapidly below $T^{*}$ and
changes sign to positive at about 40 K. Such behavior is very similar to
that of the typical CDW materials NbSe2 and TaSe2 Naito , and the SDW system GdSi FengY .
Both theory and STM results indicate that the $\textbf{q}_{\rm CDW}$ connects
the $M$ point when the Fermi level is close to the van Hove filling as in the
case of AV3Sb5 JiangYX ; WangWS ; Kiesel ; Ortiz1 ; Ortiz2 . Moreover, there
is a band with a van Hove singularity and a pair of Dirac-cone-like bands near
the $M$ point JiangYX , which can form hole pockets, especially when
$E_{\rm F}$ shifts slightly downward due to the slight Rb deficiency Ortiz1 ;
Ortiz2 . Therefore, the charge order may open gaps on the hole bands rather
than the electron ones. It seems very peculiar that $\rho_{ab}(T)$ becomes
smaller with positive $R_{\rm H}$ in the charge ordering state even though the
portions of hole-type FSs are gapped. Here, we explain this phenomenon in the
framework of a two-band model. According to the two-band model in the low-field
region Ziman ,
$R_{\rm
H}=\frac{\rho_{yx}}{\mu_{0}H}=\frac{n_{h}\mu_{h}^{2}-n_{e}\mu_{e}^{2}}{e(n_{h}\mu_{h}+n_{e}\mu_{e})^{2}}$
(2)
where $\mu_{e,h}$ and $n_{e,h}$ are the mobilities and densities of electron-
and hole-type carriers, respectively. Since the zero-field resistivity is
$\rho_{ab}(0)=1/\sigma_{ab}(0)=1/(n_{h}e\mu_{h}+n_{e}e\mu_{e})$, we have
$R_{\rm
H}/\rho_{ab}(0)=\frac{n_{h}\mu_{h}^{2}-n_{e}\mu_{e}^{2}}{n_{h}\mu_{h}+n_{e}\mu_{e}}$
(3)
The derived $R_{\rm H}/\rho_{ab}(0)$ with the dimension of mobility is shown
in Fig. 3(d). According to eq. (3), if $n_{e}\mu_{e}^{2}$ is much larger
than $n_{h}\mu_{h}^{2}$, which should be the case above $T^{*}$, then
$R_{\rm H}/\rho_{ab}(0)$ is negative and $1/|R_{\rm H}|$ will be close to
$n_{e}$, which is about 1.6$\times$10$^{22}$ cm$^{-3}$ at 300 K. On the other
hand, when $T$ is just below $T^{*}$, $\mu_{h}$ may not yet have increased
remarkably while $n_{h}$ decreases continuously, because the FS reconstruction
has not finished yet, as manifested by the drop of the $M(T)$ curve shown in
Fig. 2(c). This would result in an even more negative value of
$R_{\rm H}/\rho_{ab}(0)$, which can be clearly seen in the inset of Fig. 3(d).
In contrast, when $T$ is far below $T^{*}$ ($<$ 70 K), $n_{h}$ becomes
insensitive to temperature and $\mu_{h}$ may be much larger than $\mu_{e}$,
because the electron and hole mobilities both follow a temperature dependence
$BT^{-n}$ with different $B$ and $n$ values. This leads to a sign reversal of
$R_{\rm H}/\rho_{ab}(0)$ to positive, even though $n_{h}$ is smaller than its
value above $T^{*}$. This also explains the even smaller $\rho_{ab}(T)$ below $T^{*}$.
Since the strongly CDW-coupled portions of the FSs near the $M$ point may play a
negative role in the conductivity above $T^{*}$, the carrier scattering around
this area can be effectively reduced upon entering the charge ordering state, and
thus $\mu_{h}$ can be enhanced significantly Valla . A similar discussion of
the sign change of $R_{\rm H}$ was developed by Ong for 2D multiband
systems and applied to Sr2RuO4 and the CDW material 2H-NbSe2 Ong ; Mackenzie ; LiL .
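The sign-change mechanism of eqs. (2)-(3) can be illustrated numerically. The carrier densities and mobilities below are hypothetical round numbers chosen only to demonstrate the two regimes, not fitted values from this work:

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def hall_coefficient(n_e, mu_e, n_h, mu_h):
    """Low-field two-band Hall coefficient, eq. (2); SI units
    (densities in m^-3, mobilities in m^2 V^-1 s^-1)."""
    num = n_h * mu_h**2 - n_e * mu_e**2
    den = E_CHARGE * (n_h * mu_h + n_e * mu_e) ** 2
    return num / den

# hypothetical illustration: fixed densities, only mu_h changes across T*
n_e, n_h = 1.6e28, 1.0e27  # m^-3
R_above = hall_coefficient(n_e, mu_e=1e-2, n_h=n_h, mu_h=5e-3)  # mu_h small
R_below = hall_coefficient(n_e, mu_e=1e-2, n_h=n_h, mu_h=2e-1)  # mu_h enhanced
```

With the modest hole mobility, $n_{e}\mu_{e}^{2}$ dominates and $R_{\rm H}<0$; once $\mu_{h}$ is strongly enhanced, $n_{h}\mu_{h}^{2}$ wins and $R_{\rm H}>0$ even though $n_{h}<n_{e}$, mirroring the argument in the text.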
Figure 4: (a) SdH QOs
$\Delta\rho_{ab}=\rho_{ab}-\left\langle\rho_{ab}\right\rangle$ as a function
of 1/($\mu_{0}H$) at various temperatures. (b) FFT spectra of the QOs between
4 T and 14 T at various temperatures. (c) The temperature dependence of FFT
amplitude of $F_{\alpha}$ frequency. The solid line is the fit using the LK
formula to extract the effective mass.
Analysis of the SdH QOs provides further insight into the features of the FSs
and carriers. After subtracting the slowly varying part of $\rho_{ab}(\mu_{0}H)$
($\equiv\left\langle\rho_{ab}\right\rangle$), the oscillatory part of the
resistivity $\Delta\rho_{ab}=\rho_{ab}-\left\langle\rho_{ab}\right\rangle$ as
a function of 1/($\mu_{0}H$) for $H\|c$ at several representative temperatures
is shown in Fig. 4(a). The amplitudes of the QOs decrease with increasing
temperature or decreasing field, but are still observable at 30 K. The fast
Fourier transform (FFT) spectra of the QOs reveal two principal frequencies
$F_{\alpha}=$ 33.5 T and $F_{\beta}=$ 117.2 T (Fig. 4(b)). Both of them are
slightly smaller than those in KV3Sb5 YangSY , indicating that RbV3Sb5 has smaller
extremal FS orbits than KV3Sb5. According to the Onsager relation
$F=(\hbar/2\pi e)A_{F}$ where $A_{F}$ is the area of extremal orbit of FS, the
determined $A_{F}$ is 0.0032 and 0.011 Å$^{-2}$ for the $\alpha$ and $\beta$ extremal
orbits, respectively. These $A_{F}$ values are very small, occupying only about 0.0934%
and 0.321% of the whole BZ area in the $k_{x}$-$k_{y}$ plane for
the lattice parameter $a=$ 5.4715 Å Ortiz1 . The effective mass $m^{*}$ can be
extracted from the temperature dependence of the amplitude of FFT peak using
the Lifshitz-Kosevich (LK) formula,
$\Delta\rho_{ab}\propto\frac{X}{\sinh X}$ (4)
where $X=2\pi^{2}k_{B}T/\hbar\omega_{c}=14.69m^{*}/\mu_{0}H_{\rm avg}$ with
$\hbar\omega_{c}$ being the cyclotron frequency and $\mu_{0}H_{\rm avg}$ (= 9
T) being the average value of the field window used for the FFT of QOs
Shoenberg ; Rhodes . As shown in Fig. 4(c), the temperature dependence of FFT
amplitude of $F_{\alpha}$ can be fitted very well using eq. (4) and the
obtained $m^{*}$ is 0.091(2) $m_{e}$, where $m_{e}$ is the bare electron mass.
This value is even smaller than that in KV3Sb5 (0.125 $m_{e}$ for the $\alpha$
orbit) YangSY . The small extremal cross sections of the FSs, accompanied by
such a light $m^{*}$, could be related to the highly dispersive bands near either
the $M$ point or along the $\Gamma-K$ path of the BZ Ortiz1 ; Ortiz2 ; YangSY ;
JiangYX .
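The Onsager conversion and the thermal damping factor of the LK formula, eq. (4), can be sketched as follows (function names are ours; the constant 14.69 corresponds to $2\pi^{2}k_{B}m_{e}/\hbar e$ in units of T/K):

```python
import numpy as np

HBAR = 1.054571817e-34  # reduced Planck constant, J s
E = 1.602176634e-19     # elementary charge, C

def extremal_orbit_area(F_tesla):
    """Onsager relation A_F = (2*pi*e/hbar)*F, returned in Angstrom^-2."""
    return 2 * np.pi * E * F_tesla / HBAR * 1e-20

def lk_thermal_factor(T, m_star_ratio, B_avg):
    """Temperature damping X/sinh(X) of eq. (4), with
    X = 14.69 * (m*/m_e) * T / B_avg."""
    X = 14.69 * m_star_ratio * T / B_avg
    return X / np.sinh(X)

A_alpha = extremal_orbit_area(33.5)    # -> about 0.0032 A^-2
A_beta = extremal_orbit_area(117.2)    # -> about 0.011 A^-2
```

Evaluating `lk_thermal_factor` over the measured temperatures with $\mu_{0}H_{\rm avg}=$ 9 T and fitting the FFT amplitudes is what yields $m^{*}=$ 0.091(2) $m_{e}$ for the $\alpha$ orbit.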
In summary, we carried out a detailed study of the physical properties of
RbV3Sb5 single crystals grown by the self-flux method. RbV3Sb5 single crystals
exhibit a superconducting transition at $T_{c,\rm onset}$ = 0.92 K with weak
coupling strength, accompanied by anomalies in various properties at
$T^{*}\sim$ 102 - 103 K. The high-temperature anomaly could be related to the
formation of a charge ordering state, and it results in the sign change of
$R_{\rm H}$, which can be partially ascribed to the enhanced mobility of
hole-type carriers due to the reduced carrier scattering caused by the gapping of the
strongly CDW-coupled portions of the FSs. Furthermore, there are some very
small FSs with rather low $m^{*}$, indicating the existence of highly
dispersive bands near $E_{\rm F}$ in RbV3Sb5. Moreover, due to the similar
electronic structure of RbV3Sb5 to KV3Sb5 and CsV3Sb5 Ortiz1 , RbV3Sb5 should
also be a candidate $Z_{2}$ topological metal. Therefore, the V-based
kagome metals AV3Sb5 provide a unique platform to explore the interplay
between nontrivial band topology, electronic correlation and possible
unconventional superconductivity.
This work was supported by National Natural Science Foundation of China (Grant
No. 11822412 and 11774423), Ministry of Science and Technology of China (Grant
No. 2018YFE0202600 and 2016YFA0300504), Beijing Natural Science Foundation
(Grant No. Z200005), and Fundamental Research Funds for the Central
Universities and Research Funds of Renmin University of China (RUC) (Grant No.
18XNLG14 and 19XNLG17).
† Q.W.Y., Z.J.T. and C.S.G. contributed equally to this work.
## References
* (1) L. Balents, Nature 464, 199-208 (2010).
* (2) M. P. Shores, E. A. Nytko, B. M. Bartlett, and D. G. Nocera, J. Am. Chem. Soc. 127, 13462-13463 (2005).
* (3) T.-H. Han, J. S. Helton, S. Chu, D. G. Nocera, J. A. Rodriguez-Rivera, C. Broholm, and Y. S. Lee, Nature 492, 406-410 (2012).
* (4) M. Fu, T. Imai, T.-H. Han, and Y. S. Lee, Science 350, 655-658 (2015).
* (5) M. Kang, L. Ye, S. Fang, J.-S. You, A. Levitan, M. Han, J. I. Facio, C. Jozwiak, A. Bostwick, E. Rotenberg, M. K. Chan, R. D. McDonald, D. Graf, K. Kaznatcheev, E. Vescovo, D. C. Bell, E. Kaxiras, J. van den Brink, M. Richter, M. P. Ghimire, J. G. Checkelsky, and R. Comin, Nat. Mater. 19, 163-169 (2020).
* (6) Z. Liu, M. Li, Q. Wang, G. Wang, C. Wen, K. Jiang, X. Lu, S. Yan, Y. Huang, D. Shen, J.-X. Yin, Z. Wang, Z. Yin, H. Lei, and S. Wang, Nat. Commun. 11, 4002 (2020).
* (7) M. Kang, S. Fang, L. Ye, H. C. Po, J. Denlinger, C. Jozwiak, A. Bostwick, E. Rotenberg, E. Kaxiras, J. G. Checkelsky, and R. Comin, Nat. Commun. 11, 4004 (2020).
* (8) L. Ye, M. Kang, J. Liu, F. von Cube, C. R. Wicker, T. Suzuki, C. Jozwiak, A. Bostwick, E. Rotenberg, D. C. Bell, L. Fu, R. Comin, and J. G. Checkelsky, Nature 555, 638-642 (2018).
* (9) J.-X. Yin, W. Ma, T. A. Cochran, X. Xu, S. S. Zhang, H.-J. Tien, N. Shumiya, G. Cheng, K. Jiang, B. Lian, Z. Song,G. Chang, I. Belopolski, D. Multer, M. Litskevich, Z.-J. Cheng, X. P. Yang, B. Swidler, H. Zhou, H. Lin, T. Neupert, Z. Wang, N. Yao, T.-R. Chang, S. Jia, and M. Z. Hasan, Nature 583, 533-536 (2020).
* (10) J.-X. Yin, S. S. Zhang, H. Li, K. Jiang, G. Chang, B. Zhang, B. Lian, C. Xiang, I. Belopolski, H. Zheng, T. A. Cochran, S.-Y. Xu, G. Bian, K. Liu, T.-R. Chang, H. Lin, Z.-Y. Lu, Z. Wang, S. Jia, W. Wang, and M. Z. Hasan, Nature 562, 91-95 (2018).
* (11) K. Kuroda, T. Tomita, M.-T. Suzuki, C. Bareille, A. A. Nugroho, P. Goswami, M. Ochi, M. Ikhlas, M. Nakayama, S. Akebi, R. Noguchi, R. Ishii, N. Inami, K. Ono, H. Kumigashira, A. Varykhalov, T. Muro, T. Koretsune, R. Arita, S. Shin, T. Kondo, and S. Nakatsuji, Nat. Mater. 16, 1090-1095 (2017).
* (12) E. Liu, Y. Sun, N. Kumar, L. Muechler, A. Sun, L. Jiao, S.-Y. Yang, D. Liu, A. Liang, Q. Xu, J. Kroder, V. Süß, H. Borrmann, C. Shekhar, Z. Wang, C. Xi, W. Wang, W. Schnelle, S. Wirth, Y. Chen, S. T. B. Goennenwein, and C. Felser, Nat. Phys. 14, 1125-1131 (2018).
* (13) Q. Wang, Y. Xu, R. Lou, Z. Liu, M. Li, Y. Huang, D. Shen, H. Weng, S. Wang, and H. Lei, Nat. Commun. 9, 3681 (2018).
* (14) P. W. Anderson, Mater. Res. Bull. 8, 153-160 (1973).
* (15) W.-H. Ko, P. A. Lee, and X.-G. Wen, Phys. Rev. B 79, 214502 (2009).
* (16) W.-S. Wang, Z.-Z. Li, Y.-Y. Xiang, and Q.-H. Wang, Phys. Rev. B 87, 115135 (2013).
* (17) M. L. Kiesel, C. Platt, and R. Thomale, Phys. Rev. Lett. 110, 126405 (2013).
* (18) B. R. Ortiz, L. C. Gomes, J. R. Morey, M. Winiarski, M. Bordelon, J. S. Mangum, I. W. H. Oswald, J. A. Rodriguez-Rivera, J. R. Neilson, S. D. Wilson, E. Ertekin, T. M. McQueen, and E. S. Toberer, Phys. Rev. Mater. 3, 094407 (2019).
* (19) B. R. Ortiz, S. M. L. Teicher, Y. Hu, J. L. Zuo, P. M. Sarte, E. C. Schueller, A. M. M. Abeykoon, M. J. Krogstad, S. Rosenkranz, R. Osborn, R. Seshadri, L. Balents, J. He, and S. D. Wilson, Phys. Rev. Lett. 125, 247002 (2020).
* (20) B. R. Ortiz, P. M. Sarte, E. Kenney, M. J. Graf, S. M. L. Teicher, R. Seshadri, and S. D. Wilson, arXiv: 2012.09097 (2020).
* (21) Y. Wang, S.-Y. Yang, P. K. Sivakumar, B. R. Ortiz, S. M. L. Teicher, H. Wu, A. K. Srivastava, C. Garg, D. Liu, S. S. P. Parkin, E. S. Toberer, T. McQueen, S. D. Wilson, and M. N. Ali, arXiv: 2012.05898 (2020).
* (22) S.-Y. Yang, Y. Wang, B. R. Ortiz, D. Liu, J. Gayles, E. Derunova, R. Gonzalez-Hernandez, L. Šmejkal, Y. Chen, S. S. P. Parkin, S. D. Wilson, E. S. Toberer, T. McQueen, and M. N. Ali, Sci. Adv. 6, eabb6003 (2020).
* (23) Y.-X. Jiang, J.-X. Yin, M. M. Denner, N. Shumiya, B. R. Ortiz, J. He, X. Liu, S. S. Zhang, G. Chang, I. Belopolski, Q. Zhang, M. S. Hossain, T. A. Cochran, D. Multer, M. Litskevich, Z.-J. Cheng, X. P. Yang, Z. Guguchia, G. Xu, Z. Wang, T. Neupert, S. D. Wilson, and M. Z. Hasan, arXiv: 2012.15709 (2020).
* (24) E. M. Kenney, B. R. Ortiz, C. Wang, S. D. Wilson, and M. J. Graf, arXiv: 2012.04737 (2020).
* (25) H. C. Lei, K. Liu, J.-i. Yamaura, S. Maki, Y. Murakami, Z.-Y. Lu, and H. Hosono, Phys. Rev. B 93, 121101(R) (2016).
* (26) Y. Feng, J. Wang, D. M. Silevitch, B. Mihaila, J. W. Kim, J.-Q. Yan, R. K. Schulze, N. Woo, A. Palmer, Y. Ren, J. van Wezel, P. B. Littlewood, and T. F. Rosenbaum, Proc. Natl. Acad. Sci. 110, 3287-3292 (2013).
* (27) W. L. McMillan, Phys. Rev. 167, 331-344 (1968).
* (28) P. B. Allen and R. C. Dynes, Phys. Rev. B 12, 905-922 (1975).
* (29) M. Naito and S. Tanaka, J. Phys. Soc. Jpn. 51, 219-227 (1982).
* (30) J. M. Ziman, Electrons and Phonons, Clarendon Press, Oxford, England (1960).
* (31) T. Valla, A. V. Fedorov, P. D. Johnson, P-A. Glans, C. McGuinness, K. E. Smith, E. Y. Andrei, and H. Berger, Phys. Rev. Lett. 92, 086104 (2004).
* (32) N. P. Ong, Phys. Rev. B 43, 193-201 (1991).
* (33) A. P. Mackenzie, N. E. Hussey, A. J. Diver, S. R. Julian, Y. Maeno, S. Nishizaki, and T. Fujita, Phys. Rev. B 54, 7425-7429 (1996).
* (34) L. Li, J. Shen, Z. Xu, and H. Wang, Int. J. Mod. Phys. B 19, 275-279 (2005).
* (35) D. Shoenberg, Magnetic Oscillations in Metals, Cambridge University Press, Cambridge, England (1984).
* (36) D. Rhodes, S. Das, Q. R. Zhang, B. Zeng, N. R. Pradhan, N. Kikugawa, E. Manousakis, and L. Balicas, Phys. Rev. B 92, 125152 (2015).
# Quantum Polarization of Qudit Channels
Ashutosh Goswami1 Mehdi Mhalla2 Valentin Savin3
1 Univ. Grenoble Alpes, Grenoble INP, LIG, F-38000 Grenoble, France
2 Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, F-38000 Grenoble, France
3 Univ. Grenoble Alpes, CEA, LETI, F-38054 Grenoble, France
###### Abstract
We provide a generalization of quantum polar codes to quantum channels with
qudit-input, achieving the symmetric coherent information of the channel. Our
scheme relies on a channel combining and splitting construction, where a two-
qudit unitary randomly chosen from a unitary 2-design is used to combine two
instances of a qudit-input channel. The inputs to the synthesized bad channels
are frozen by sharing EPR pairs between the sender and the receiver, so our
scheme is entanglement assisted. Using the fact that the generalized two-qudit
Clifford group forms a unitary 2-design, we conclude that the channel
combining operation can be chosen from this set. Moreover, we show that
polarization also happens for a much smaller subset of two-qudit Cliffords,
which is not a unitary 2-design. Finally, we show how to decode the proposed
quantum polar codes on Pauli qudit channels.
## 1 Introduction
In classical information theory, polar codes are the first explicit
construction provably achieving the symmetric capacity of any discrete
memoryless channel [1]. The construction is based on the recursive application
of a channel combining and splitting procedure. It first combines two
instances of the transmission channel, using a controlled-NOT gate as channel
combiner, and then splits the combined channel into two virtual channels,
referred to as good and bad channels. Applied recursively $n$ times, the above
procedure yields $N=2^{n}$ virtual channels. These virtual channels exhibit a
polarization property, in the sense that they tend to become either completely
noisy or noiseless, as $N$ goes to infinity. Polar coding consists of
efficient encoding and decoding algorithms that take effective advantage of
the channel polarization property.
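The recursive combining described above can be sketched for the classical case. This is a standard textbook-style implementation of the recursive Arikan transform (our illustration for context, not code from [1]), omitting the bit-reversal permutation:

```python
def polar_transform(u):
    """Recursive polar encoding x = u * F^{(tensor n)} over GF(2),
    with kernel F = [[1,0],[1,1]] (the controlled-NOT combiner)."""
    if len(u) == 1:
        return list(u)
    half = len(u) // 2
    # channel combining: the first half carries u_i XOR u_{i+half}
    top = polar_transform([a ^ b for a, b in zip(u[:half], u[half:])])
    bot = polar_transform(u[half:])
    return top + bot

x = polar_transform([1, 0, 1, 1])  # encode a length-4 input block
```

Applied to blocks of length $N=2^{n}$, this is exactly the combining step whose repeated application synthesizes the polarized good and bad channels.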
Polar codes have been generalized to classical-quantum channels with binary
and non-binary classical input in [2, 3]. For the transmission of quantum
information over quantum channels with qubit-input, two approaches have been
considered in the literature. The first approach is based on CSS-like
constructions, which essentially exploit polarization in either amplitude or
phase basis [4, 5, 6]. The second approach relies on a purely quantum
polarization construction [7, 8], where the synthesized virtual channels tend
to become either completely noisy or noiseless as quantum channels, not merely
in one basis. This approach uses a randomized channel combining, employing a
random two-qubit Clifford unitary as channel combiner.
In this work, we extend the work in [7] to the case of quantum channels with
qudit-input. To the best of our knowledge, this is the first generalization of
polar codes to qudit-input channels. First, we show that purely quantum
polarization (in the sense of [7]) happens for any qudit-input quantum
channel, using as channel combiner a random two-qudit unitary, chosen from a
unitary 2-design. Further, we provide a simple proof of the fact that the
generalized two-qudit Clifford group forms a unitary 2-design, therefore the
channel combining operation can be randomly chosen from this set. Moreover,
when the qudit dimension $d$ is a prime, we show that polarization happens for
a subset of two-qudit Clifford unitaries containing only $d^{4}+d^{2}-2$
elements, which is not a unitary 2-design. Hence, unitary 2-designs are not
necessary for the quantum polarization of qudit-input channels.
To exploit the above polarization property, the inputs to the synthesized
noisy channels are frozen by presharing EPR pairs between the sender and the
receiver. Hence, our polar coding scheme is entanglement assisted. Finally, we
consider the case of Pauli qudit channels. Similarly to [7], we associate a
classical counterpart channel to a Pauli qudit channel. Then, we show that a
quantum polar code on a Pauli qudit channel yields a classical polar code on
the classical counterpart channel. Hence, we show that Pauli errors can be
identified by decoding the polar code on the classical counterpart channel,
using classical polar decoding.
The paper is organized as follows. Section 2 provides the basic definitions
needed for quantum polarization. Section 3 contains our main polarization
results for qudit-input quantum channels. Section 4 discusses the decoding of
our quantum polar codes on Pauli qudit channels.
## 2 Preliminaries
We consider $d$-dimensional quantum systems, referred to as qudits, where
$d\geq 2$ is fixed throughout the paper. We denote by $\rho_{A}$ a quantum
state (i.e., density matrix) of a quantum system $A$. When no confusion is
possible, we shall discard the quantum system from the notation. For a
bipartite quantum state $\rho_{AB}$, we shall denote by
$\rho_{B}:=\operatorname{Tr}_{A}(\rho_{AB})$ the quantum state of the system
$B$, obtained by tracing out the system $A$. The identity matrix is denoted by
either $\mathbbm{1}$ or $I$, with the former notation used for quantum states,
and the latter for quantum operators. Throughout the paper, logarithm is taken
in base $d$.
###### Definition 1 (von Neumann entropy).
(a) The von Neumann entropy of a quantum state $\rho$ is defined as
$H(\rho):=-\operatorname{Tr}\left(\rho\log\rho\right).$
(b) The conditional von Neumann entropy of a bipartite quantum state
$\rho_{AB}$ is defined as
$H(A|B)_{\rho_{AB}}=H(\rho_{AB})-H(\rho_{B}).$
###### Definition 2 (Conditional sandwiched Rényi entropy of order 2).
Let $\rho_{AB}$ be a quantum state. Then,
$\tilde{H}^{\downarrow}_{2}(A|B)_{\rho}:=-\log\operatorname{Tr}\left[\rho_{B}^{-\frac{1}{2}}\rho_{AB}\rho_{B}^{-\frac{1}{2}}\rho_{AB}\right].$
###### Definition 3 (Petz-Rényi entropy of order $\frac{1}{2}$).
Let $\rho_{AB}$ be a quantum state. Then,
$H^{\uparrow}_{\frac{1}{2}}(A|B)_{\rho}:=2\log\sup_{\sigma_{B}}\operatorname{Tr}\left[\rho_{AB}^{\frac{1}{2}}\sigma^{\frac{1}{2}}_{B}\right],$
where the supremum is taken over all quantum states $\sigma_{B}$.
We consider quantum channels $\mathcal{W}_{A^{\prime}\rightarrow B}$, with
qudit input system $A^{\prime}$, and output system $B$ of arbitrary dimension.
When no confusion is possible, we shall discard the channel input and output
systems from the notation. An EPR pair on two qudit systems $A$ and
$A^{\prime}$ is the quantum state
$\Phi_{AA^{\prime}}:=|\Phi_{AA^{\prime}}\rangle\langle\Phi_{AA^{\prime}}|$,
with
$|\Phi_{AA^{\prime}}\rangle:=\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|i\rangle_{A}|i\rangle_{A^{\prime}}$. Given a
quantum channel $\mathcal{W}_{A^{\prime}\rightarrow B}$, we denote by
$\mathcal{W}(\Phi_{AA^{\prime}}):=(I_{A}\varotimes\mathcal{W})(\Phi_{AA^{\prime}})$
the quantum state on the $AB$ system obtained by applying $\mathcal{W}$ on the
$A^{\prime}$-half of the EPR pair $\Phi_{AA^{\prime}}$.
###### Definition 4 (Symmetric coherent information).
Let $\mathcal{W}_{A^{\prime}\rightarrow B}$ be a channel with qudit input
$A^{\prime}$ and output system $B$ of arbitrary dimension. The symmetric
coherent information of $\mathcal{W}$ is defined as the coherent information
of the channel for a uniformly distributed input, that is
$I(\mathcal{W}):=-H(A|B)_{\mathcal{W}(\Phi_{AA^{\prime}})}\in[-1,1].$
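Definition 4 can be evaluated numerically from a Kraus representation of $\mathcal{W}$; the following sketch (our illustration, not part of the paper's formalism) recovers the two extremes of the range $[-1,1]$, the noiseless channel and the completely depolarizing channel:

```python
import numpy as np

def entropy(rho, d):
    """von Neumann entropy with the logarithm taken in base d
    (the convention used throughout the paper)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum() / np.log(d))

def symmetric_coherent_info(kraus, d):
    """I(W) = -H(A|B) = H(rho_B) - H(rho_AB) on the state W(Phi_AA')."""
    phi = np.zeros(d * d, dtype=complex)
    phi[::d + 1] = 1 / np.sqrt(d)   # |Phi> = (1/sqrt d) sum_i |i>_A |i>_A'
    Phi = np.outer(phi, phi.conj())
    IA = np.eye(d)
    rho_AB = sum(np.kron(IA, K) @ Phi @ np.kron(IA, K).conj().T for K in kraus)
    rho_B = np.einsum('ijik->jk', rho_AB.reshape(d, d, d, d))  # trace out A
    return entropy(rho_B, d) - entropy(rho_AB, d)

# noiseless qutrit channel vs. completely depolarizing qubit channel
I_noiseless = symmetric_coherent_info([np.eye(3, dtype=complex)], 3)  # -> 1
sigmas = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
I_depol = symmetric_coherent_info([s / 2 for s in sigmas], 2)         # -> -1
```

For the noiseless channel the Choi state is pure with a maximally mixed marginal, giving $I(\mathcal{W})=1$; for the completely depolarizing channel the Choi state is maximally mixed, giving $I(\mathcal{W})=-1$.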
We further introduce the following parameter of a quantum channel, which can
be seen as the quantum counterpart of the classical Bhattacharyya parameter
[7], and which we refer to as the “Rényi-Bhattacharyya” parameter.
###### Definition 5 (Rényi-Bhattacharyya parameter).
Let $\mathcal{W}_{A^{\prime}\rightarrow B}$ be a channel with qudit input
$A^{\prime}$ and output system $B$ of arbitrary dimension. Then,
$R(\mathcal{W}):=d^{H^{\uparrow}_{\frac{1}{2}}(A|B)_{\mathcal{W}(\Phi_{AA^{\prime}})}}=d^{-\tilde{H}^{\downarrow}_{2}(A|E)_{\mathcal{W}^{c}(\Phi_{AA^{\prime}})}}\in\left[\tfrac{1}{d},d\right],$
where $\mathcal{W}^{c}$ denotes the complementary channel associated with
$\mathcal{W}$ [9], and the equality
$H^{\uparrow}_{\frac{1}{2}}(A|B)_{\mathcal{W}(\Phi_{AA^{\prime}})}=-\tilde{H}^{\downarrow}_{2}(A|E)_{\mathcal{W}^{c}(\Phi_{AA^{\prime}})}$
follows from [10, Theorem 2].
We will also need the definitions of the generalized (qudit) Pauli and
Clifford groups [11, 12], and unitary $2$-designs [13].
###### Definition 6 (Generalized Pauli Group).
(a) The Pauli operators $X$ and $Z$ for a qudit quantum system are defined as
$X=\sum_{j=0}^{d-1}|j\rangle\langle j\oplus 1|$, and
$Z=\sum_{j=0}^{d-1}\omega^{j}|j\rangle\langle j|$,
where $\oplus$ denotes the sum modulo $d$, and
$\omega=e^{\frac{2\pi\imath}{d}}$.
(b) The generalized Pauli group on one qudit is defined as
$\mathcal{P}_{d}^{1}:=\{\omega^{\lambda}P_{r,s}\mid\lambda,r,s=0,\dots,d-1\}$,
where $P_{r,s}:=X^{r}Z^{s}$.
(c) The generalized Pauli group on $n$ qudits is defined as
$\mathcal{P}_{d}^{n}:=\mathcal{P}_{d}^{1}\varotimes\mathcal{P}_{d}^{1}\varotimes\cdots\varotimes\mathcal{P}_{d}^{1}$.
It is easily seen that $X^{d}=Z^{d}=I$ and $XZ=\omega ZX$, hence
$\mathcal{P}_{d}^{1}$ is indeed a group. Applying the commutation relation
$XZ=\omega ZX$ appropriately many times, we have that
$P_{r,s}P_{t,u}=\omega^{ru-st}P_{t,u}P_{r,s}.$ (1)
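Definition 6 and the commutation relation (1) are easy to verify numerically; a small sketch (our own illustration, here for $d=5$):

```python
import numpy as np

def qudit_paulis(d):
    """X = sum_j |j><j+1| and Z = sum_j omega^j |j><j|, as in Definition 6."""
    omega = np.exp(2j * np.pi / d)
    X = np.zeros((d, d), dtype=complex)
    for j in range(d):
        X[j, (j + 1) % d] = 1.0   # matrix element <j| X |j+1 mod d>
    Z = np.diag(omega ** np.arange(d))
    return X, Z, omega

d = 5
X, Z, omega = qudit_paulis(d)

def P(r, s):
    """P_{r,s} = X^r Z^s."""
    return np.linalg.matrix_power(X, r) @ np.linalg.matrix_power(Z, s)
```

One can check that $X^{d}=Z^{d}=I$, that $XZ=\omega ZX$, and that eq. (1) holds for arbitrary exponents.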
###### Definition 7 (Generalized Clifford Group).
The Clifford group $\mathcal{C}_{d}^{n}$ is the unitary group on $n$ qudits
that takes $\mathcal{P}_{d}^{n}$ to $\mathcal{P}_{d}^{n}$ by conjugation.
Let $\mathcal{U}(d^{n})$ be the set of unitary operators on $n$ qudits, and
$\mathcal{W}_{n}$ be a quantum channel with $n$-qudit input. The twirling of
$\mathcal{W}_{n}$ with respect to $\mathcal{U}(d^{n})$ is defined as the
quantum channel that maps an $n$-qudit quantum state $\rho$ to $\int
U^{\dagger}\mathcal{W}_{n}(U\rho U^{\dagger})Ud\eta$, where
$U\in\mathcal{U}(d^{n})$ is randomly chosen according to the Haar measure
$\eta$. The twirling of $\mathcal{W}_{n}$ with respect to a finite subset
$\mathcal{U}\subset\mathcal{U}(d^{n})$ is defined as the quantum channel
acting as
$\rho\mapsto\frac{1}{|\mathcal{U}|}\sum_{U\in\mathcal{U}}U^{\dagger}\mathcal{W}_{n}(U\rho
U^{\dagger})U$.
###### Definition 8 (Unitary 2-Design).
A finite subset $\mathcal{U}\subset\mathcal{U}(d^{n})$ is said to form a
unitary 2-design if it satisfies the following, for all $n$-qudit input
quantum channels $\mathcal{W}_{n}$, and all $n$-qudit quantum states $\rho$:
$\frac{1}{|\mathcal{U}|}\sum_{U\in\mathcal{U}}U^{\dagger}\mathcal{W}_{n}(U\rho
U^{\dagger})U=\int U^{\dagger}\mathcal{W}_{n}(U\rho U^{\dagger})Ud\eta.$ (2)
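One consequence of twirling over a set that is a group up to phase factors (such as the Pauli group without phases, introduced above) is that the twirl is idempotent: twirling an already-twirled map changes nothing. The sketch below checks this for a single qubit, using a generic linear map $\rho\mapsto A\rho B$ (the same form used later in the proof of Theorem 10); the random $A$, $B$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# Single-qubit Paulis without phase factors: P_{r,s} = X^r Z^s, r,s in {0,1}
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
paulis = [I2, X, Z, X @ Z]

def W(rho):
    # A generic linear map; phases of the twirling unitaries cancel in the
    # sandwich U^dag W(U rho U^dag) U, so the phase-free set suffices.
    return A @ rho @ B

def twirl(channel, unitaries):
    def twirled(rho):
        return sum(U.conj().T @ channel(U @ rho @ U.conj().T) @ U
                   for U in unitaries) / len(unitaries)
    return twirled

W1 = twirl(W, paulis)
W2 = twirl(W1, paulis)      # twirling twice over a (projective) group

rho = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
idempotent = np.allclose(W1(rho), W2(rho))
```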
## 3 Quantum Polarization of Qudit Channels
### 3.1 Main polarization results
Throughout this section ${\cal W}_{A^{\prime}\rightarrow B}$ denotes a quantum
channel with qudit input, and arbitrary dimension output. Our quantum
polarization scheme is based on the channel combining and splitting operations
depicted in the following figure.
[circuit diagram] (a) Combined channel
[circuit diagram] (b) Bad channel $\mathcal{W}_{C}^{(0)}$
[circuit diagram] (c) Good channel $\mathcal{W}_{C}^{(1)}$
Figure 1: Channel combining and splitting. (a) combined channel: a two-qudit
unitary $C$ is applied on the two inputs. (b) bad channel: we input a totally
mixed state into the second input. (c) good channel: we input half of an EPR
pair into the first input, and the other half becomes the output $A_{1}$.
First, two instances of ${\cal W}$ are combined, by entangling their inputs
through a two-qudit unitary $C$. The combined channel is then split into one
bad and one good channel. The bad channel $\mathcal{W}_{C}^{(0)}$ is a channel
from $A^{\prime}_{1}$ to $B_{1}B_{2}$ that acts as
$\mathcal{W}_{C}^{(0)}(\rho)\break=\mathcal{W}^{\varotimes
2}\left(C(\rho\varotimes\frac{\mathbbm{1}_{A^{\prime}_{2}}}{d})C^{\dagger}\right)$,
where $\frac{\mathbbm{1}_{A^{\prime}_{2}}}{d}$ is the completely mixed state.
The good channel $\mathcal{W}_{C}^{(1)}$ is a channel from $A^{\prime}_{2}$ to
$A_{1}B_{1}B_{2}$ that acts as
$\mathcal{W}_{C}^{(1)}(\rho)=\mathcal{W}^{\varotimes
2}\left(C(\Phi_{A_{1}A^{\prime}_{1}}\varotimes\rho)C^{\dagger}\right)$, where
$\Phi_{A_{1}A^{\prime}_{1}}$ is an EPR pair.
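As a sanity check, the splittings are again quantum channels. The sketch below builds the Choi state of the bad channel $\mathcal{W}_{C}^{(0)}$ for $d=2$, taking $\mathcal{W}$ to be a depolarizing channel and $C=\mathrm{CNOT}$ (both illustrative assumptions, not fixed by the text), and verifies positivity and trace preservation.

```python
import numpy as np

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

# Illustrative choices: W = qubit depolarizing channel, C = CNOT
p = 0.3
kraus = [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]],
                dtype=complex)

# EPR pair Phi_{A A'_1}
phi = np.zeros(4, dtype=complex); phi[0] = phi[3] = 1 / np.sqrt(2)
Phi = np.outer(phi, phi.conj())

# Choi state of the bad channel: (id_A (x) W_C^{(0)})(Phi_{A A'_1}),
# with systems ordered A, A'_1, A'_2
rho = kron(Phi, I2 / 2)                       # feed 1/d into A'_2
U = kron(I2, CNOT)                            # C acts on A'_1 A'_2
rho = U @ rho @ U.conj().T
rho = sum(kron(I2, K1, K2) @ rho @ kron(I2, K1, K2).conj().T
          for K1 in kraus for K2 in kraus)    # apply W (x) W

psd = bool(np.all(np.linalg.eigvalsh(rho) > -1e-10))
# Trace preservation: tracing out the outputs B_1 B_2 leaves Phi's marginal 1/2
red_A = np.einsum('ijkj->ik', rho.reshape(2, 4, 2, 4))
tp_ok = np.allclose(red_A, I2 / 2)
```

The good channel $\mathcal{W}_{C}^{(1)}$ can be checked the same way, feeding half of an EPR pair into $A^{\prime}_{1}$.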
The polarization construction is obtained by recursively applying the above
channel combining and splitting operations, while choosing $C$ randomly from
some finite set of unitaries, denoted by ${\cal U}\subset{\cal U}(d^{2})$. To
accommodate the random choice of $C\in{\cal U}$, a classical description of
$C$ is included as part of the output of the bad and good channels. Hence, for
$i=0,1$, we define:
${\cal W}^{(i)}(\rho)=\frac{1}{|{\cal U}|}\sum_{C\in{\cal
U}}\mathchoice{{\left\lvert C\right\rangle}}{{\lvert C\rangle}}{{\lvert
C\rangle}}{{\lvert C\rangle}}\mathchoice{{\left\langle
C\right\rvert}}{{\langle C\rvert}}{{\langle C\rvert}}{{\langle
C\rvert}}\varotimes{\cal W}_{C}^{(i)}(\rho),$ (3)
where $\\{\mathchoice{{\left\lvert C\right\rangle}}{{\lvert C\rangle}}{{\lvert
C\rangle}}{{\lvert C\rangle}}\\}_{C\in{\cal U}}$ is an orthogonal basis of
some auxiliary system. Applying twice the transformation ${\cal
W}\mapsto\left({\cal W}^{(0)},{\cal W}^{(1)}\right)$, we get channels ${\cal
W}^{(i_{1}i_{2})}:=\left({\cal W}^{(i_{1})}\right)\,\\!^{(i_{2})}$, where
$(i_{1}i_{2})\in\\{00,01,10,11\\}$. In general, after $n$ levels of recursion,
we obtain $2^{n}$ channels:
${\cal W}^{(i_{1}\dots i_{n})}:=\left({\cal W}^{(i_{1}\dots
i_{n-1})}\right)\,\\!^{(i_{n})},\ \forall(i_{1}\dots i_{n})\in\\{0,1\\}^{n}.$
(4)
The quantum polarization theorem below states that the symmetric coherent
information of the synthesized channels ${\cal W}^{(i_{1}\dots i_{n})}$
polarizes, meaning that it goes to either $-1$ or $+1$ as $n$ goes to infinity
(except possibly for a vanishing fraction of channels), provided that ${\cal
U}$ is a unitary $2$-design. The second theorem states that polarization also
happens when ${\cal U}$ is taken to be the generalized Clifford group on two
qudits, ${\cal C}_{d}^{2}$, or some specific subset of it.
###### Theorem 9.
Let $\mathcal{U}$ be a unitary 2-design. For any qudit-input quantum channel
${\cal W}$, let $\break\left\\{{\cal W}^{(i_{1}\dots i_{n})}:(i_{1}\dots
i_{n})\in\\{0,1\\}^{n}\right\\}$ be the set of channels defined in (4), with
channel combining unitary $C$ randomly chosen from ${\cal U}$. Then, for any
$\delta>0$,
$\displaystyle\lim_{n\rightarrow\infty}\frac{\\#\\{(i_{1}\dots
i_{n})\in\\{0,1\\}^{n}:I\left({\cal W}^{(i_{1}\dots
i_{n})}\right)\in(-1+\delta,1-\delta)\\}}{2^{n}}=0$
and furthermore,
$\displaystyle\lim_{n\rightarrow\infty}\frac{\\#\left\\{(i_{1},\dots,i_{n})\in\\{0,1\\}^{n}:I(\mathcal{W}^{(i_{1},\dots,i_{n})})\geqslant
1-\delta\right\\}}{2^{n}}=\frac{I(\mathcal{W})+1}{2}.$
###### Theorem 10.
(a) The generalized Clifford group on two qudits, ${\cal C}_{d}^{2}$, is a
unitary $2$-design. Thus, polarization happens when the channel combining
unitary $C$ is randomly chosen from ${\cal C}_{d}^{2}$.
(b) If $d$ is prime, there exists a subset ${\cal U}\subset{\cal C}_{d}^{2}$,
of size $|{\cal U}|=d^{4}+d^{2}-2$, which is not a unitary $2$-design, and
such that polarization happens when the channel combining unitary $C$ is
randomly chosen from ${\cal U}$.
We note that part (a) of Theorem 10 may be inferred from Lemmas 1, 2 and 3 in
[14]. We will give an alternative and more elementary proof in Section 3.3, by
generalizing the proof from [13] to the qudit case.
### 3.2 Proof of Theorem 9 (quantum polarization)
To prove the polarization theorem, we essentially need three ingredients, as
follows.
1. 1.
For any two-qudit unitary $C$, the total symmetric coherent information is
preserved under channel combining and splitting, that is, $I({\cal
W}_{C}^{(0)})+I({\cal W}_{C}^{(1)})=2I({\cal W})$. We omit the proof of this,
as the proof given in [8, Lemma 10] for qubit-input channels remains valid in
the qudit case, with minor adjustments.
2. 2.
The symmetric coherent information $I({\cal W})$ approaches $\\{-1,+1\\}$
values if and only if the Rényi-Bhattacharyya parameter $R({\cal W})$
approaches $\\{d,1/d\\}$ values. This follows from Lemma 11, below.
3. 3.
Taking the good channel yields a guaranteed improvement of the average Rényi-
Bhattacharyya parameter, in the sense of Lemma 12, below.
The proof of Theorem 9 then follows by using [8, Lemma 7], similar to the
proof of quantum polarization for qubit-input channels in [8].
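Ingredient 1 (conservation of the total symmetric coherent information) can be checked numerically for a concrete example. The sketch below uses a qubit depolarizing channel and $C=\mathrm{CNOT}$ (both arbitrary illustrative choices) and base-2 logarithms, so that for $d=2$ the symmetric coherent information $I(\mathcal{W})=-H(A|B)$ is already normalized to $[-1,1]$.

```python
import numpy as np

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace_first(rho, d1, d2):
    """Trace out the first factor of a d1 x d2 bipartition."""
    return np.einsum('ijik->jk', rho.reshape(d1, d2, d1, d2))

def ptrace_last(rho, d1, d2):
    """Trace out the last factor of a d1 x d2 bipartition."""
    return np.einsum('ijkj->ik', rho.reshape(d1, d2, d1, d2))

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
p = 0.3                                   # arbitrary depolarizing strength
kraus = [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]],
                dtype=complex)

phi = np.zeros(4, dtype=complex); phi[0] = phi[3] = 1 / np.sqrt(2)
Phi = np.outer(phi, phi.conj())

def apply(rho, ops):
    return sum(O @ rho @ O.conj().T for O in ops)

# I(W) = -H(A|B) of (id (x) W)(Phi), in bits (d = 2)
rho = apply(Phi, [kron(I2, K) for K in kraus])
I_W = entropy(ptrace_first(rho, 2, 2)) - entropy(rho)

# Bad channel: systems ordered A, A'_1, A'_2 -> A, B_1, B_2
rho0 = kron(Phi, I2 / 2)
rho0 = apply(rho0, [kron(I2, CNOT)])
rho0 = apply(rho0, [kron(I2, K1, K2) for K1 in kraus for K2 in kraus])
I_bad = entropy(ptrace_first(rho0, 2, 4)) - entropy(rho0)

# Good channel: systems ordered A_1, A'_1, A'_2, R -> A_1, B_1, B_2, R,
# where R purifies the EPR half fed into A'_2
rho1 = kron(Phi, Phi)                      # Phi_{A_1 A'_1} (x) Phi_{A'_2 R}
rho1 = apply(rho1, [kron(I2, CNOT, I2)])
rho1 = apply(rho1, [kron(I2, K1, K2, I2) for K1 in kraus for K2 in kraus])
I_good = entropy(ptrace_last(rho1, 8, 2)) - entropy(rho1)

conservation_gap = abs(I_bad + I_good - 2 * I_W)
```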
###### Lemma 11.
Let $\mathcal{W}_{A^{\prime}\rightarrow B}$ be a channel with qudit input.
Then,
1. 1.
$R(\mathcal{W})\leqslant\frac{1}{d}+\delta\Rightarrow I(\mathcal{W})\geqslant
1-\log(1+d\delta)$.
2. 2.
$R(\mathcal{W})\geqslant d-\delta\Rightarrow
I(\mathcal{W})\leqslant-1+2\sqrt{\frac{\delta}{d}}+\frac{\sqrt{d}+\sqrt{\delta}}{\sqrt{d}}h\left(\frac{\sqrt{\delta}}{\sqrt{d}+\sqrt{\delta}}\right)$,
where $h(\cdot)$ denotes the binary entropy function.
###### Proof.
We prove first 1). For $\rho_{AB}=\mathcal{W}(\Phi_{AA^{\prime}})$, we have
that
$\frac{1}{d}+\delta\geqslant
R(\mathcal{W})=d^{H^{\uparrow}_{\frac{1}{2}}(A|B)_{\rho}}\geqslant
d^{H(A|B)_{\rho}}=d^{-I(\mathcal{W})},$
where we have used $H^{\uparrow}_{\frac{1}{2}}(A|B)_{\rho}\geqslant
H(A|B)_{\rho}$ for the second inequality, which follows from the monotonically
decreasing property of the conditional Petz-Rényi entropy with respect to its
order [15, Theorem 7]. Hence, $I(\mathcal{W})\geqslant 1-\log(1+d\delta)$.
We now turn to point 2). We have that
$\displaystyle d-\delta$ $\displaystyle\leqslant R(\mathcal{W})$
$\displaystyle=\max_{\sigma_{B}}\operatorname{Tr}\left[\rho^{\frac{1}{2}}_{AB}(\mathbbm{1}_{A}\varotimes\sigma_{B})^{\frac{1}{2}}\right]^{2}$
$\displaystyle=d\max_{\sigma_{B}}\operatorname{Tr}\left[\sqrt{\rho_{AB}}\sqrt{\frac{\mathbbm{1}_{A}}{d}\varotimes\sigma_{B}}\right]^{2}$
$\displaystyle\leqslant
d\max_{\sigma_{B}}\left\|\sqrt{\rho_{AB}}\sqrt{\frac{\mathbbm{1}_{A}}{d}\varotimes\sigma_{B}}\right\|_{1}^{2}$
(5)
$\displaystyle=d\max_{\sigma_{B}}F\left(\rho_{AB},\frac{\mathbbm{1}_{A}}{d}\varotimes\sigma_{B}\right)^{2}$
(6)
Using the Fuchs-van de Graaf inequalities [16], we get that there exists a
$\sigma_{B}$ such that
$\break\frac{1}{2}\left\|\rho_{AB}-\frac{\mathbbm{1}_{A}}{d}\varotimes\sigma_{B}\right\|_{1}\leqslant\sqrt{\frac{\delta}{d}}$.
We are now in a position to use the Alicki-Fannes-Winter inequality [17,
Lemma 2], which states that
$\displaystyle\left|H(A|B)_{\rho}-1\right|\leqslant
2\sqrt{\frac{\delta}{d}}+\frac{\sqrt{d}+\sqrt{\delta}}{\sqrt{d}}h\left(\frac{\sqrt{\delta}}{{\sqrt{d}+\sqrt{\delta}}}\right).$
Since $I(\mathcal{W})=-H(A|B)_{\rho}$, this yields the claimed bound on $I(\mathcal{W})$ and concludes the proof of the lemma. ∎
###### Lemma 12.
Let $\mathcal{W}_{A^{\prime}\rightarrow B}$ be a channel with qudit input.
Then,
$\mathbb{E}_{C}R\left(\mathcal{W}_{C}^{(1)}\right)=\frac{d}{d^{2}+1}\left(1+R(\mathcal{W})^{2}\right)\leq
R(\mathcal{W}),$
where $\mathbb{E}_{C}$ denotes the expectation operator, $C$ is the channel
combining unitary, chosen uniformly at random from a unitary $2$-design ${\cal
U}$. Moreover, equality happens if and only if $R(\mathcal{W})\in\\{1/d,d\\}$.
###### Proof.
Let $\mathcal{W}^{c}_{A^{\prime}\rightarrow E}$ and
$(\mathcal{W}_{C}^{(1)})_{{{A^{\prime}_{2}\to E_{1}E_{2}}}}^{c}$ be the
complementary channels associated with $\mathcal{W}_{A^{\prime}\rightarrow B}$
and the good channel $\mathcal{W}^{(1)}_{C_{A^{\prime}_{2}\to
A_{1}B_{1}B_{2}}}$, respectively. The complementary channel of the good
channel acts as
$(\mathcal{W}_{C}^{(1)})^{c}(\rho)=(\mathcal{W}^{c}\varotimes\mathcal{W}^{c})\left(C\left(\frac{\mathbbm{1}_{A^{\prime}_{1}}}{d}\varotimes\rho\right)C^{\dagger}\right)$
(see [8, Appendix A] for a proof). Therefore,
$R(\mathcal{W}_{C}^{(1)})=d^{-\tilde{H}^{\downarrow}_{2}(A_{2}|E_{1}E_{2})_{\rho}}$,
where
$\rho_{A_{2}E_{1}E_{2}}=(\mathcal{W}_{C}^{(1)})^{c}(\Phi_{A_{2}A^{\prime}_{2}})$.
Note that
$\rho_{E_{1}E_{2}}=\mathcal{W}^{c}\left(\frac{\mathbbm{1}}{d}\right)\varotimes\mathcal{W}^{c}\left(\frac{\mathbbm{1}}{d}\right)$,
which is independent of $C$. To compute the expected value of
$R(\mathcal{W}_{C}^{(1)})$ with respect to $C$, we proceed as follows.
$\displaystyle\mathbb{E}_{C}d^{-\tilde{H}^{\downarrow}_{2}(A_{2}|E_{1}E_{2})_{\rho}}$
$\displaystyle=\mathbb{E}_{C}\operatorname{Tr}\left[\left(\rho_{E_{1}E_{2}}^{-\frac{1}{4}}\rho_{A_{2}E_{1}E_{2}}\rho_{E_{1}E_{2}}^{-\frac{1}{4}}\right)^{2}\right]$
$\displaystyle=\mathbb{E}_{C}\operatorname{Tr}\left[\left(\rho_{E_{1}E_{2}}^{-\frac{1}{4}}(\mathcal{W}^{c}\varotimes\mathcal{W}^{c})\left(C\left(\frac{\mathbbm{1}_{A^{\prime}_{1}}}{d}\varotimes\Phi_{A_{2}A^{\prime}_{2}}\right)C^{\dagger}\right)\rho_{E_{1}E_{2}}^{-\frac{1}{4}}\right)^{2}\right].$
Note that this is basically the same calculation as in [18, Equation (3.32)]
(there, $U$ is chosen according to the Haar measure over the full unitary
group, but all that is required is a unitary 2-design). However, we will not
make the simplifications after (3.44) and (3.45) in [18], but will instead
keep all the terms. We therefore get
$\mathbb{E}_{C}d^{-\tilde{H}^{\downarrow}_{2}(A_{2}|E_{1}E_{2})_{\rho}}=\alpha\operatorname{Tr}\left[(\frac{\mathbbm{1}_{A_{2}}}{d})^{2}\right]+\beta\operatorname{Tr}\left[(\frac{\mathbbm{1}_{A_{1}^{\prime}}}{d}\varotimes\Phi_{A_{2}A^{\prime}_{2}})^{2}\right]=\frac{1}{d}\alpha+\frac{1}{d}\beta$,
where
$\alpha=\frac{d^{4}}{d^{4}-1}-\frac{d^{2}}{d^{4}-1}d^{-\tilde{H}^{\downarrow}_{2}(A_{1}A_{2}|E_{1}E_{2})_{\omega}}$,
$\beta=\frac{d^{4}}{d^{4}-1}d^{-\tilde{H}^{\downarrow}_{2}(A_{1}A_{2}|E_{1}E_{2})_{\omega}}-\frac{d^{2}}{d^{4}-1}$,
and
$\omega_{A_{1}A_{2}E_{1}E_{2}}:=(\mathcal{W}^{c}\varotimes\mathcal{W}^{c})(\Phi_{A_{1}A^{\prime}_{1}}\varotimes\Phi_{A_{2}A^{\prime}_{2}})$.
Hence,
$\displaystyle\mathbb{E}_{C}d^{-\tilde{H}^{\downarrow}_{2}(A_{2}|E_{1}E_{2})_{\rho}}$
$\displaystyle=\frac{d}{d^{2}+1}+\frac{d}{d^{2}+1}d^{-\tilde{H}^{\downarrow}_{2}(A_{1}A_{2}|E_{1}E_{2})_{\omega}}$
$\displaystyle=\frac{d}{d^{2}+1}(1+R(\mathcal{W})^{2}),$
where the second equality follows from
$d^{-\tilde{H}^{\downarrow}_{2}(A_{1}A_{2}|E_{1}E_{2})_{\omega}}=R(\mathcal{W})^{2}$
using the fact that conditional sandwiched Rényi entropy of order 2 is
additive with respect to tensor-product states. It is easily seen that the
function $f(R)=\frac{d}{d^{2}+1}(1+R^{2})$ is a convex function satisfying
$f(R)=R$ for $R\in\\{\frac{1}{d},d\\}$ and $f(R)<R$ for $R\in(\frac{1}{d},d)$.
∎
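The final claim about $f(R)=\frac{d}{d^{2}+1}(1+R^{2})$ is elementary; a quick numerical confirmation of the fixed points and of the strict improvement inside $(\frac{1}{d},d)$:

```python
import numpy as np

checks = []
for d in (2, 3, 5):
    f = lambda R, d=d: d * (1 + R ** 2) / (d ** 2 + 1)
    # fixed points at R = 1/d and R = d
    fixed = np.isclose(f(1 / d), 1 / d) and np.isclose(f(d), d)
    # strict contraction f(R) < R strictly inside (1/d, d)
    Rs = np.linspace(1 / d + 1e-6, d - 1e-6, 1000)
    strict = bool(np.all(f(Rs) < Rs))
    checks.append(fixed and strict)
```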
### 3.3 Proof of Theorem 10
Proof of part (a). It is shown in [13, Theorem 1] (see also [19]) that the
Clifford group on $n$ qubits forms a unitary $2$-design for any $n\geq 1$.
Here, we generalize the proof from [13] to the qudit case, for $n=2$. We
need to prove that the Clifford group $\mathcal{C}_{d}^{2}$ satisfies
Definition 8. For this, it is sufficient to prove (2), with
$\mathcal{U}=\mathcal{C}_{d}^{2}$, for two-qudit input quantum channels of the
form $\mathcal{W}_{2}(\rho):=A\rho B$ (since any quantum channel is a convex
combination of quantum channels of this form).
We first consider the twirling of $\mathcal{W}_{2}$ with respect to the
Clifford group $\mathcal{C}_{d}^{2}$. Since the Pauli group
$\mathcal{P}_{d}^{2}$ is a normal subgroup of $\mathcal{C}_{d}^{2}$, we may
choose a subset $\bar{\mathcal{C}}_{d}^{2}\subset\mathcal{C}_{d}^{2}$
containing one representative for each equivalence class in the quotient group
$\mathcal{C}_{d}^{2}/\mathcal{P}_{d}^{2}$. Thus, any element of
$\mathcal{C}_{d}^{2}$ can be uniquely written as a product $CP$, where
$C\in\bar{\mathcal{C}}_{d}^{2}$, and $P\in\mathcal{P}_{d}^{2}$. Therefore, in
order to twirl $\mathcal{W}_{2}$ with respect to $\mathcal{C}_{d}^{2}$, we may
first twirl it with respect to $\mathcal{P}_{d}^{2}$, then twirl again the
obtained channel with respect to $\bar{\mathcal{C}}_{d}^{2}$.
The elements of $\mathcal{P}_{d}^{2}$ have the form
$\omega^{\lambda}P_{r,s}\varotimes P_{r^{\prime},s^{\prime}}$, with
$\lambda,r,s,r^{\prime},s^{\prime}=0,\dots,d-1$. Hence, twirling
$\mathcal{W}_{2}$ with respect to $\mathcal{P}_{d}^{2}$ gives a quantum
channel, denoted $\mathcal{W}_{2}^{\prime}$, defined below
$\displaystyle\mathcal{W}_{2}^{\prime}(\rho)$
$\displaystyle:=\frac{1}{d^{5}}\sum_{\lambda,r,s,r^{\prime},s^{\prime}}\left(\omega^{\lambda}P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}\right)^{\dagger}A\left(\omega^{\lambda}P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}\right)\rho\left(\omega^{\lambda}P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}\right)^{\dagger}B\left(\omega^{\lambda}P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}\right),$
$\displaystyle=\frac{1}{d^{4}}\sum_{r,s,r^{\prime},s^{\prime}}(P_{r,s}^{\dagger}\varotimes
P_{r^{\prime},s^{\prime}}^{\dagger})A\left(P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}\right)\rho(P_{r,s}^{\dagger}\varotimes
P_{r^{\prime},s^{\prime}}^{\dagger})B\left(P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}\right).$ (7)
The last equality from the above shows that it is actually enough to twirl
$\mathcal{W}_{2}$ with respect to the subset
$\bar{\mathcal{P}}_{d}^{2}:=\left\\{P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}\mid r,s,r^{\prime},s^{\prime}=0,\dots,d-1\right\\}$,
obtained by omitting phase factors. Since $\bar{\mathcal{P}}_{d}^{2}$ forms an
operator basis (for two-qudit operators), we may write
$A=\sum_{r,s,r^{\prime},s^{\prime}}\alpha(r,s,r^{\prime},s^{\prime})P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}$, and
$B=\sum_{r,s,r^{\prime},s^{\prime}}\beta(r,s,r^{\prime},s^{\prime})P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}$. The following two lemmas are proven in Appendix A
and Appendix B, respectively.
###### Lemma 13.
The quantum channel $\mathcal{W}_{2}^{\prime}$, obtained by twirling
$\mathcal{W}_{2}$ with respect to $\bar{\mathcal{P}}_{d}^{2}$, is a Pauli
channel satisfying the following
$\displaystyle\mathcal{W}_{2}^{\prime}(\rho)$
$\displaystyle=\sum_{r,s,r^{\prime},s^{\prime}}\gamma_{r,s,r^{\prime},s^{\prime}}\left(P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}\right)\rho(P_{r,s}^{\dagger}\varotimes
P_{r^{\prime},s^{\prime}}^{\dagger}),$ (8)
where
$\gamma_{r,s,r^{\prime},s^{\prime}}:=\omega^{rs+r^{\prime}s^{\prime}}\alpha(r,s,r^{\prime},s^{\prime})\beta(-r,-s,-r^{\prime},-s^{\prime})$
and $-x$ denotes the additive inverse of $x$ modulo $d$.
###### Lemma 14.
The quantum channel obtained by twirling $\mathcal{W}_{2}^{\prime}$ with
respect to $\bar{\mathcal{C}}_{d}^{2}$, is the quantum channel
$\mathcal{W}_{2}^{\prime\prime}$ acting as
$\displaystyle\mathcal{W}_{2}^{\prime\prime}(\rho)=\frac{\operatorname{Tr}(AB)}{d^{4}}\mathbbm{1}\varotimes\mathbbm{1}+\frac{d^{2}\operatorname{Tr}(A)\operatorname{Tr}(B)-\operatorname{Tr}(AB)}{d^{2}(d^{4}-1)}\left(\rho-\frac{1}{d^{2}}\mathbbm{1}\varotimes\mathbbm{1}\right).$
(9)
Now, the quantum channel $\mathcal{W}_{2}^{\prime\prime}$ from (9) is the
twirling of $\mathcal{W}_{2}$ with respect to $\mathcal{C}_{d}^{2}$. To
conclude that $\mathcal{C}_{d}^{2}$ is a unitary 2-design, we need to show
that twirling $\mathcal{W}_{2}$ with respect to $\mathcal{U}(d^{2})$ yields
the same channel, which follows from [20].
Proof of part (b). We will need the following two lemmas. The first is
basically the same as [8, Lemma 14] and the proof can be easily generalized.
The second is proven in Appendix C.
###### Lemma 15.
Consider $C,C^{\prime}\in\mathcal{C}_{d}^{2}$, such that
$C^{\prime}=C(C_{1}\varotimes C_{2})$, for some
$C_{1},C_{2}\in\mathcal{C}_{d}^{1}$. Then, $C$ and $C^{\prime}$
yield the same Rényi-Bhattacharyya parameter for both good and bad channels,
i.e., the following equalities hold:
* 1)
$R(\mathcal{W}_{C}^{(0)})=R(\mathcal{W}_{C^{\prime}}^{(0)}).$
* 2)
$R(\mathcal{W}_{C}^{(1)})=R(\mathcal{W}_{C^{\prime}}^{(1)}).$
###### Lemma 16.
If $d$ is a prime number, $|\mathcal{C}_{d}^{1}|=d^{3}(d^{2}-1)$ and
$|\mathcal{C}_{d}^{2}|=d^{8}(d^{4}-1)(d^{2}-1)$.
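For prime $d$, the first formula can be reproduced by direct counting, using the standard fact that a Clifford is determined by commutation-preserving images $X\mapsto\omega^{\lambda_{1}}P_{r,s}$, $Z\mapsto\omega^{\lambda_{2}}P_{t,u}$ with $ru-st\equiv 1\ (\text{mod }d)$, and that every such assignment is realized. A sketch of this count, together with the index $|\mathcal{C}_{d}^{2}|/|\mathcal{C}_{d}^{1}\varotimes\mathcal{C}_{d}^{1}|=d^{4}+d^{2}$ used in the proof of Theorem 10(b):

```python
# Numerical check of Lemma 16 for small prime d: count commutation-preserving
# image pairs (each with d^2 phase choices), then verify the coset index.
results = []
for d in (2, 3, 5):
    pairs = sum(1
                for r in range(d) for s in range(d)
                if (r, s) != (0, 0)
                for t in range(d) for u in range(d)
                if (r * u - s * t) % d == 1)
    c1 = d ** 2 * pairs                      # include phases omega^lambda
    c2 = d ** 8 * (d ** 4 - 1) * (d ** 2 - 1)
    results.append((c1 == d ** 3 * (d ** 2 - 1),
                    c2 % (c1 * c1) == 0,
                    c2 // (c1 * c1) == d ** 4 + d ** 2))
```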
We are now in a position to prove part (b) of the theorem. The group
$\mathcal{C}_{d}^{2}$ can be decomposed into left cosets with respect to the
subgroup
$\mathcal{C}_{d}^{1}\varotimes\mathcal{C}_{d}^{1}\subset\mathcal{C}_{d}^{2}$.
From Lemma 15, it follows that any two elements in the same left coset, when
used as channel combiners, yield the same Rényi-Bhattacharyya parameter for
both good and bad channels. Therefore, polarization also happens for any
subset ${\cal L}\subset\mathcal{C}_{d}^{2}$, containing one representative of
each left coset (since $\mathbb{E}_{C\in{\cal
L}}R(\mathcal{W}_{C}^{(1)})=\mathbb{E}_{C\in\mathcal{C}_{d}^{2}}R(\mathcal{W}_{C}^{(1)})$,
thus the guaranteed improvement of the average Rényi-Bhattacharyya parameter,
in the sense of Lemma 12, still holds when $C$ is randomly chosen from ${\cal
L}$). Using Lemma 16, the number of cosets of
$\mathcal{C}_{d}^{1}\varotimes\mathcal{C}_{d}^{1}$ in $\mathcal{C}_{d}^{2}$ is
equal to
$\frac{|\mathcal{C}_{d}^{2}|}{|\mathcal{C}_{d}^{1}\varotimes\mathcal{C}_{d}^{1}|}=d^{4}+d^{2}$,
therefore ${\cal L}$ contains $d^{4}+d^{2}$ representatives, two of which may
be chosen to be the identity ($I$) and the swap ($S$) operators. Since
$R(\mathcal{W}_{I}^{(1)})=R(\mathcal{W}_{S}^{(1)})=R(\mathcal{W})\geq\mathbb{E}_{C\in{\cal
L}}R(\mathcal{W}_{C}^{(1)})$, we may further remove $I$ and $S$ from ${\cal
L}$, thus getting a subset
$\mathcal{L}^{\prime}:=\mathcal{L}\setminus\\{I,S\\}$ containing
$d^{4}+d^{2}-2$ elements, which still ensures polarization of qudit-input
quantum channels. From [21, 22], we know that a set of unitaries in dimension
$\delta$ can only form a unitary 2-design if it has at least
$\delta^{4}-2\delta^{2}+2$ elements. As we consider a two-qudit system
(dimension $\delta=d^{2}$), a unitary 2-design would have at least
$d^{8}-2d^{4}+2$ two-qudit unitaries, which is clearly bigger than
$d^{4}+d^{2}-2$. Hence, the set $\mathcal{L}^{\prime}$ is not a unitary
$2$-design. This completes the proof of part (b). ∎
One may try to further reduce the size of ${\cal L}^{\prime}$, by considering
the action of the swap gate $S$. Indeed, it can be seen that the two
equalities from Lemma 15 also hold for two
$C,C^{\prime}\in\mathcal{C}_{d}^{2}$, such that $C^{\prime}=SC$ (see also [8,
Lemma 15]). Hence, if both $C$ and $C^{\prime}$ belong to ${\cal L}^{\prime}$,
one of them can be removed, while still ensuring polarization. Now,
multiplying by $S$ on the left induces a permutation on the left cosets of
$\mathcal{C}_{d}^{1}\varotimes\mathcal{C}_{d}^{1}$ in $\mathcal{C}_{d}^{2}$,
which in turn induces a permutation ${\cal
L}^{\prime}\stackrel{{\scriptstyle\sim}}{{\rightarrow}}{\cal L}^{\prime}$. In
the qubit case ($d=2$), this permutation has no fixed points, thus the size of
${\cal L}^{\prime}$ can be reduced by half. However, in general the above
permutation may have fixed points. We provide such an example in Appendix D,
where we show that for $d=5$, there exist $C\in\mathcal{C}_{d}^{2}$ and
$C_{1},C_{2}\in\mathcal{C}_{d}^{1}$, such that $SC=C(C_{1}\varotimes C_{2})$.
## 4 Quantum Polar codes on Pauli Qudit channels
In this section, we discuss the decoding of quantum polar codes on a Pauli
qudit channel. We shall assume that all channel combining unitaries are
Clifford unitaries.
A Pauli qudit channel $\mathcal{W}$ is defined as the quantum channel that
maps a qudit quantum state $\rho$ to $\sum_{r,s}a_{r,s}P_{r,s}\rho
P_{r,s}^{\dagger}$, where $a_{r,s}\geq 0$ with $\sum_{r,s}a_{r,s}=1$. Similar
to [8, Definition 17], we associate a classical channel with $\mathcal{W}$,
which is referred to as the classical counterpart of $\mathcal{W}$, and
denoted by $\mathcal{W}^{\\#}$. The classical counterpart $\mathcal{W}^{\\#}$
is a classical channel with input and output alphabet
$\bar{\mathcal{P}}_{d}^{1}:=\\{P_{r,s}\mid r,s=0,\dots,d-1\\}$, and transition
probabilities $\mathcal{W}^{\\#}(P_{r,s}\mid P_{t,u})=a_{v,w}$, where
$v=r+t\>(\text{mod }d)$ and $w=s+u\>(\text{mod }d)$. Consider now the channel
combining and splitting procedure on $\mathcal{W}$, where
$C\in\mathcal{C}_{d}^{2}$ is used to combine the two copies of $\mathcal{W}$.
Let
$\Gamma_{C}:\bar{\mathcal{P}}_{d}^{1}\varotimes\bar{\mathcal{P}}_{d}^{1}\mapsto\bar{\mathcal{P}}_{d}^{1}\varotimes\bar{\mathcal{P}}_{d}^{1}$
be the permutation induced by the conjugate action of $C$. We may define a
channel combining and splitting procedure on the classical
$\mathcal{W}^{\\#}$, using $\Gamma_{C}$ to combine the two copies of
$\mathcal{W}^{\\#}$. Similarly to [8], we may prove (but the proof is omitted
here) that the Pauli qudit channel $\mathcal{W}$ and its classical counterpart
$\mathcal{W}^{\\#}$ polarize simultaneously, in the sense of [8, Proposition
$20$ and Corollary $21$], under their respective channel combining and
splitting procedure. As a consequence, to a quantum polar code on the Pauli
qudit channel $\mathcal{W}$, we may associate a classical polar code on
$\mathcal{W}^{\\#}$, then exploit classical polar decoding in order to decode
Pauli errors, as explained below (see also [8, Section 6]). Let $\mathbf{P}$
denote the unitary corresponding to a quantum polar code of length $N$ qudits
(see also [8, Section 5]), and $\mathbf{P}^{\\#}$ the linear map corresponding
to the classical polar code. To perform decoding, we first apply
$\mathbf{P}^{\dagger}$ on the $N$-qudit channel output, that is, the encoded
quantum state corrupted by some Pauli error, say
$E\in(\bar{\mathcal{P}}_{d}^{1})^{\varotimes N}$ (we may omit phase factors).
Hence, applying $\mathbf{P}^{\dagger}$ brings it back to the original (un-
encoded) state, which is however corrupted by a Pauli error
$E^{\prime}\in(\bar{\mathcal{P}}_{d}^{1})^{\varotimes N}$, such that
$\mathbf{P}^{\\#}(E^{\prime})=E$. We are now in a position to decode
$E^{\prime}$, provided that we have been given the errors corresponding to the
noisy virtual channels. We know that the inputs to the noisy channels are
halves of preshared EPR pairs. Hence, we may perform projective measurements
on the preshared EPR pairs, with respect to the generalized Bell basis
$\\{I\varotimes
P_{r,s}\mathchoice{{\left\lvert\Phi_{AA^{\prime}}\right\rangle}}{{\lvert\Phi_{AA^{\prime}}\rangle}}{{\lvert\Phi_{AA^{\prime}}\rangle}}{{\lvert\Phi_{AA^{\prime}}\rangle}}\mid P_{r,s}\in\bar{\mathcal{P}}_{d}^{1}\\},
which give us the errors, i.e., the $E^{\prime}$ components, on the noisy
virtual channels, as desired. Finally, we may decode the classical polar code
to determine $E^{\prime}$, and subsequently apply $E^{\prime\dagger}$ to
return the system to the original quantum state.
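The ingredients of this reduction can be sketched numerically for $d=2$. The block below builds the transition matrix of $\mathcal{W}^{\\#}$ from an arbitrary (assumed, illustrative) set of Pauli-channel coefficients $a_{r,s}$, and the permutation $\Gamma_{C}$ induced by $C=\mathrm{CNOT}$ (also an illustrative choice); it checks that $\mathcal{W}^{\\#}$ is column-stochastic and that $\Gamma_{C}$ is indeed a bijection on the 16 Pauli pairs, up to phase.

```python
import numpy as np

d = 2
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
P = {(r, s): np.linalg.matrix_power(X, r) @ np.linalg.matrix_power(Z, s)
     for r in range(d) for s in range(d)}
labels = list(P)

# Arbitrary example Pauli-channel coefficients a_{r,s} (assumption)
a = {(0, 0): 0.7, (1, 0): 0.1, (0, 1): 0.1, (1, 1): 0.1}

# Classical counterpart W^#: transition probability a_{r+t, s+u} (mod d)
T = np.array([[a[((r + t) % d, (s + u) % d)]
               for (t, u) in labels] for (r, s) in labels])
stochastic = np.allclose(T.sum(axis=0), 1.0)

# Permutation Gamma_C induced by conjugation with C = CNOT, up to phase
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]],
                dtype=complex)

def match(M):
    """Identify M with a Pauli pair, ignoring phase factors."""
    for l1 in labels:
        for l2 in labels:
            Q = np.kron(P[l1], P[l2])
            c = np.trace(Q.conj().T @ M) / 4      # Pauli pairs are orthogonal
            if abs(abs(c) - 1) < 1e-9:
                return (l1, l2)
    raise ValueError("not a Pauli pair up to phase")

gamma = {}
for l1 in labels:
    for l2 in labels:
        M = CNOT @ np.kron(P[l1], P[l2]) @ CNOT.conj().T
        gamma[(l1, l2)] = match(M)
is_permutation = len(set(gamma.values())) == 16
```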
## 5 Conclusion and perspectives
The goal of this work has been to generalize the purely quantum polarization
construction to higher dimensional quantum systems. We have introduced the
necessary definitions and worked out the proof of quantum polarization,
assuming the channel combining unitary is randomized over (1) a unitary
2-design, (2) the two-qudit Clifford group, or (3) a smaller subset of two-
qudit Cliffords. Using Clifford channel combining unitaries is important, as
we showed it allows reducing the decoding problem to a classical polar code
decoding, for qudit Pauli channels. However, we note that the reliability of
the classical polar code decoding also depends on the speed of polarization
[1]. We believe that fast polarization properties can also be generalized to
the qudit case, although we leave this here as an open question.
## Acknowledgements
This research was supported in part by the “Investissements d’avenir”
(ANR-15-IDEX-02) program of the French National Research Agency. Ashutosh
Goswami acknowledges the European Union’s Horizon 2020 research and innovation
programme, under the Marie Skłodowska Curie grant agreement No 754303.
## Appendix A Proof of Lemma 13
Recall that $\bar{\mathcal{P}}_{d}^{2}=\left\\{P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}\mid r,s,r^{\prime},s^{\prime}=0,\dots,d-1\right\\}$
is the subset of two-qudit Paulis, without phase factors. Hence, twirling of
$\mathcal{W}_{2}$ with respect to $\bar{\mathcal{P}}_{d}^{2}$ gives
$\mathcal{W}_{2}^{\prime}(\rho)=\frac{1}{d^{4}}\sum_{r,s,r^{\prime},s^{\prime}}(P_{r,s}^{\dagger}\varotimes
P_{r^{\prime},s^{\prime}}^{\dagger})A\left(P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}\right)\rho(P_{r,s}^{\dagger}\varotimes
P_{r^{\prime},s^{\prime}}^{\dagger})B\left(P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}\right)$ (10)
Since $\bar{\mathcal{P}}_{d}^{2}$ forms an operator basis, we may write
$\displaystyle A$
$\displaystyle=\sum_{r,s,r^{\prime},s^{\prime}}\alpha(r,s,r^{\prime},s^{\prime})P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}},$ (11) $\displaystyle B$
$\displaystyle=\sum_{r,s,r^{\prime},s^{\prime}}\beta(r,s,r^{\prime},s^{\prime})P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}$ (12)
Substituting $A$ and $B$ in the above equation, we get
$\displaystyle\mathcal{W}_{2}^{\prime}(\rho)$
$\displaystyle=\frac{1}{d^{4}}\sum_{t,u,t^{\prime},u^{\prime}}\>\sum_{v,w,v^{\prime},w^{\prime}}\alpha(t,u,t^{\prime},u^{\prime})\beta(v,w,v^{\prime},w^{\prime})\kappa,$
(13) $\displaystyle\text{where }\kappa$
$\displaystyle:=\sum_{r,r^{\prime},s,s^{\prime}}(P_{r,s}^{\dagger}P_{t,u}P_{r,s})\varotimes(P_{r^{\prime},s^{\prime}}^{\dagger}P_{t^{\prime},u^{\prime}}P_{r^{\prime},s^{\prime}})\rho\break(P_{r,s}^{\dagger}P_{v,w}P_{r,s})\varotimes(P_{r^{\prime},s^{\prime}}^{\dagger}P_{v^{\prime},w^{\prime}}P_{r^{\prime},s^{\prime}}).$
(14)
From (1), we have that $P_{t,u}P_{r,s}=\omega^{-ru+st}P_{r,s}P_{t,u}$. Then,
we may write
$\displaystyle\kappa$ $\displaystyle=k(P_{t,u}\varotimes
P_{t^{\prime},u^{\prime}})\rho(P_{v,w}\varotimes P_{v^{\prime},w^{\prime}})$
(15) $\displaystyle\text{with }k$
$\displaystyle:=\sum_{r,s}\omega^{-r(u+w)+s(v+t)}\sum_{r^{\prime},s^{\prime}}\omega^{-r^{\prime}(u^{\prime}+w^{\prime})+s^{\prime}(v^{\prime}+t^{\prime})}.$
(16)
When $u+w=v+t=0\ (\text{mod }d)$, we have
$\sum_{r,s}\omega^{-r(u+w)+s(v+t)}=d^{2}$. When either $u+w\neq 0\ (\text{mod
}d)$ or $v+t\neq 0\ (\text{mod }d)$, the sum factorizes into geometric sums,
at least one of which vanishes, since
$\sum_{r}\omega^{-r(u+w)}=\frac{\omega^{-d(u+w)}-1}{\omega^{-(u+w)}-1}=0$ for
$u+w\neq 0\ (\text{mod }d)$; hence $\sum_{r,s}\omega^{-r(u+w)+s(v+t)}=0$.
Therefore,
$k=\begin{cases}d^{4},&\text{when
}u+w=v+t=u^{\prime}+w^{\prime}=v^{\prime}+t^{\prime}=0\ (\text{mod }d)\\\
0,&\text{otherwise }\end{cases}$ (17)
The condition $u+w=v+t=0\ (\text{mod }d)$ implies that
$P_{t,u}P_{v,w}=X^{t}Z^{u}X^{v}Z^{w}=\omega^{-uv}I$. Using $t=-v\ (\text{mod
}d)$, we have that $P_{v,w}=\omega^{tu}P_{t,u}^{\dagger}$. Plugging $\kappa$
into (13), we get
$\displaystyle\mathcal{W}_{2}^{\prime}(\rho)$
$\displaystyle=\sum_{t,u,t^{\prime},u^{\prime}}\gamma_{t,u,t^{\prime},u^{\prime}}(P_{t,u}\varotimes
P_{t^{\prime},u^{\prime}})\rho(P_{t,u}^{\dagger}\varotimes
P_{t^{\prime},u^{\prime}}^{\dagger}),$ (18) $\displaystyle\text{where
}\gamma_{t,u,t^{\prime},u^{\prime}}$
$\displaystyle:=\omega^{tu+t^{\prime}u^{\prime}}\alpha(t,u,t^{\prime},u^{\prime})\beta(-t,-u,-t^{\prime},-u^{\prime}).$
(19)
Hence, $\mathcal{W}_{2}^{\prime}$ is a qudit Pauli channel, as desired. ∎
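The computation above can be verified numerically: for $d=3$ and random $A$, $B$ (illustrative choices), the direct twirl (10) agrees with the Pauli-channel form (18)-(19) on a random input.

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
Xop = np.zeros((d, d), dtype=complex)
for j in range(d):
    Xop[j, (j + 1) % d] = 1
Zop = np.diag(omega ** np.arange(d))
P = {(r, s): np.linalg.matrix_power(Xop, r) @ np.linalg.matrix_power(Zop, s)
     for r in range(d) for s in range(d)}
idx = [(r, s, rp, sp) for r in range(d) for s in range(d)
       for rp in range(d) for sp in range(d)]
PP = {k: np.kron(P[k[:2]], P[k[2:]]) for k in idx}   # two-qudit Paulis

rng = np.random.default_rng(7)
D = d * d
A = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
B = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
rho = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))

# Direct twirl of rho -> A rho B over \bar{P}_d^2, as in (10)
direct = sum(Q.conj().T @ A @ Q @ rho @ Q.conj().T @ B @ Q
             for Q in PP.values()) / d ** 4

# Pauli-basis coefficients: A = sum alpha(...) P (x) P', likewise B
alpha = {k: np.trace(PP[k].conj().T @ A) / D for k in idx}
beta = {k: np.trace(PP[k].conj().T @ B) / D for k in idx}

# Pauli-channel form (18)-(19)
pauli_form = sum(
    omega ** (t * u + tp * up)
    * alpha[(t, u, tp, up)]
    * beta[((-t) % d, (-u) % d, (-tp) % d, (-up) % d)]
    * PP[(t, u, tp, up)] @ rho @ PP[(t, u, tp, up)].conj().T
    for (t, u, tp, up) in idx)

twirl_matches = np.allclose(direct, pauli_form)
```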
## Appendix B Proof of Lemma 14
Recall that $\bar{\mathcal{C}}_{d}^{2}\subset\mathcal{C}_{d}^{2}$ is a subset
containing one representative for each equivalence class in the quotient group
$\mathcal{C}_{d}^{2}/\mathcal{P}_{d}^{2}$. Twirling of
$\mathcal{W}_{2}^{\prime}$ with respect to $\bar{\mathcal{C}}_{d}^{2}$ gives
$\mathcal{W}_{2}^{\prime\prime}(\rho)=\sum_{t,u,t^{\prime},u^{\prime}}\gamma_{t,u,t^{\prime},u^{\prime}}\frac{1}{|\bar{\mathcal{C}}_{d}^{2}|}\sum_{C\in\bar{\mathcal{C}}_{d}^{2}}C^{\dagger}(P_{t,u}\varotimes
P_{t^{\prime},u^{\prime}})C\rho C^{\dagger}(P_{t,u}^{\dagger}\varotimes
P_{t^{\prime},u^{\prime}}^{\dagger})C.$ (20)
We know that the conjugate action of the entire set
$\bar{\mathcal{C}}_{d}^{2}$ maps any $P_{t,u}\varotimes
P_{t^{\prime},u^{\prime}}\neq I\varotimes I$ to all $d^{4}-1$ two-qudit Paulis
excluding $I\varotimes I$, an equal number of times. In other words,
$P_{t,u}\varotimes P_{t^{\prime},u^{\prime}}\neq I\varotimes I$ gets mapped to
a Pauli $P_{r,s}\varotimes P_{r^{\prime},s^{\prime}}\neq I\varotimes I$,
$\frac{|\bar{\mathcal{C}}_{d}^{2}|}{d^{4}-1}$ times. Further, $I\varotimes I$
is always mapped to $I\varotimes I$. Therefore, we have that
$\displaystyle\mathcal{W}_{2}^{\prime\prime}(\rho)$
$\displaystyle=\gamma_{0,0,0,0}\rho+\frac{1}{d^{4}-1}\gamma^{\prime}\sum_{(r,s,r^{\prime},s^{\prime})\neq(0,0,0,0)}(P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}})\rho(P_{r,s}^{\dagger}\varotimes
P_{r^{\prime},s^{\prime}}^{\dagger}),$ (21) $\displaystyle\text{where
}\gamma^{\prime}$
$\displaystyle:=\sum_{(t,u,t^{\prime},u^{\prime})\neq(0,0,0,0)}\gamma_{t,u,t^{\prime},u^{\prime}}.$
(22)
Using the following three identities, we can easily transform (21) into the
form of (9).
1. 1.
$\displaystyle\gamma_{0,0,0,0}=\frac{\text{Tr}(A)\text{Tr}(B)}{d^{4}}$.
2. 2.
$\displaystyle\sum_{t,u,t^{\prime},u^{\prime}}\gamma_{t,u,t^{\prime},u^{\prime}}=\frac{\text{Tr}(AB)}{d^{2}}$.
3. 3.
$\displaystyle\sum_{r,s,r^{\prime},s^{\prime}}(P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}})\rho(P_{r,s}^{\dagger}\varotimes
P_{r^{\prime},s^{\prime}}^{\dagger})=d^{2}I\varotimes I$.
Proof of identity 1) We have that
$\gamma_{0,0,0,0}=\alpha(0,0,0,0)\beta(0,0,0,0)$. Also,
$\text{Tr}(P_{r,s})=\begin{cases}d,&\text{when }P_{r,s}=I\\\
0,&\text{otherwise }\end{cases}$
Using (11) and (12), we get $\text{Tr}(A)=\alpha(0,0,0,0)d^{2}$ and
$\text{Tr}(B)=\beta(0,0,0,0)d^{2}$. Hence,
$\gamma_{0,0,0,0}=\frac{\text{Tr}(A)\text{Tr}(B)}{d^{4}}$.
Proof of identity 2) We have,
$\displaystyle\text{Tr}(AB)$
$\displaystyle=\sum_{t,u,t^{\prime},u^{\prime}}\>\sum_{v,w,v^{\prime},w^{\prime}}\alpha(t,u,t^{\prime},u^{\prime})\beta(v,w,v^{\prime},w^{\prime})\text{Tr}(P_{t,u}P_{v,w})\text{Tr}(P_{t^{\prime},u^{\prime}}P_{v^{\prime},w^{\prime}})$
$\displaystyle=\sum_{t,u,t^{\prime},u^{\prime}}d^{2}\omega^{tu+t^{\prime}u^{\prime}}\alpha(t,u,t^{\prime},u^{\prime})\beta(-t,-u,-t^{\prime},-u^{\prime})$
$\displaystyle=d^{2}\sum_{t,u,t^{\prime},u^{\prime}}\gamma_{t,u,t^{\prime},u^{\prime}}.$
Proof of identity 3) Let
$\rho=\sum_{r,s,r^{\prime},s^{\prime}}\rho_{r,s,r^{\prime},s^{\prime}}P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}}$. Since $\rho$ is a density matrix, we have
$\rho_{0,0,0,0}=\frac{\operatorname{Tr}(\rho)}{d^{2}}=\frac{1}{d^{2}}$. Hence,
$\displaystyle\sum_{r,s,r^{\prime},s^{\prime}}(P_{r,s}\varotimes
P_{r^{\prime},s^{\prime}})\rho(P_{r,s}^{\dagger}\varotimes
P_{r^{\prime},s^{\prime}}^{\dagger})$
$\displaystyle=\sum_{r,s,r^{\prime},s^{\prime}}\>\sum_{t,u,t^{\prime},u^{\prime}}\rho_{t,u,t^{\prime},u^{\prime}}(P_{r,s}P_{t,u}P_{r,s}^{\dagger})\varotimes(P_{r^{\prime},s^{\prime}}P_{t^{\prime},u^{\prime}}P_{r^{\prime},s^{\prime}}^{\dagger})$
$\displaystyle=\sum_{t,u,t^{\prime},u^{\prime}}\rho_{t,u,t^{\prime},u^{\prime}}\left(\sum_{r,s,r^{\prime},s^{\prime}}\omega^{-st+ru}\omega^{-s^{\prime}t^{\prime}+r^{\prime}u^{\prime}}\right)P_{t,u}\varotimes
P_{t^{\prime},u^{\prime}}$ $\displaystyle=d^{4}\rho_{0,0,0,0}I\varotimes I$
$\displaystyle=d^{2}I\varotimes I.$
We get (9) from (21) by using the above identities, while also substituting
the notation $\mathbbm{1}$ for the identity matrix $I$, as it denotes a
quantum state here. ∎
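Identity 3 is the standard Pauli-twirl relation, and it is easy to spot-check numerically. The sketch below (not from the paper) uses the clock-and-shift construction of the generalized Paulis for $d=3$; the paper's $P_{r,s}$ agree with $X^{r}Z^{s}$ up to phases, which cancel under conjugation:

```python
import numpy as np

d = 3  # qudit dimension (prime)
w = np.exp(2j * np.pi / d)

# Clock-and-shift construction of the generalized Paulis.
X = np.roll(np.eye(d), 1, axis=0)        # X|k> = |k+1 mod d>
Z = np.diag([w**k for k in range(d)])    # Z|k> = w^k |k>

def P(r, s):
    return np.linalg.matrix_power(X, r) @ np.linalg.matrix_power(Z, s)

# A random two-qudit density matrix rho (positive, unit trace).
rng = np.random.default_rng(0)
M = rng.normal(size=(d*d, d*d)) + 1j * rng.normal(size=(d*d, d*d))
rho = M @ M.conj().T
rho /= np.trace(rho)

# Identity 3: summing P rho P^dagger over all d^4 two-qudit Paulis
# completely depolarizes rho, leaving d^2 * (I (x) I).
S = sum(np.kron(P(r, s), P(rp, sp)) @ rho @ np.kron(P(r, s), P(rp, sp)).conj().T
        for r in range(d) for s in range(d)
        for rp in range(d) for sp in range(d))
assert np.allclose(S, d**2 * np.eye(d * d))
```

The check succeeds for any density matrix, since only the $\rho_{0,0,0,0}$ component survives the twirl, exactly as in the derivation above.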
## Appendix C Proof of Lemma 16
Consider the one-qudit Clifford group $\mathcal{C}_{d}^{1}$. We count first
the permutations generated by $\mathcal{C}_{d}^{1}$ on
$\bar{\mathcal{P}}_{d}^{1}:=\\{P_{r,s}|r,s=0,\dots,d-1\\}$, and later we will
accommodate the phase factors. Any Clifford $C\in\mathcal{C}_{d}^{1}$ is
uniquely determined by its conjugate action on the generators of the Pauli
group, $X$ and $Z$. Suppose that $C$ maps $X\mapsto P_{r,s}$ and $Z\mapsto
P_{t,u}$ via its conjugate action, where $P_{r,s},P_{t,u}\neq I$. On the one
hand, since commutation relations are preserved under unitary conjugation,
$P_{r,s}$ and $P_{t,u}$ must satisfy $P_{r,s}P_{t,u}=\omega P_{t,u}P_{r,s}$.
On the other hand, from (1), we have that $P_{r,s}P_{t,u}=\omega^{ru-
st}P_{t,u}P_{r,s}$. Therefore, $r,u,s,t$ must be such that $ru-st=1\ (\text{mod }d)$. We fix $r,s$ and solve for $t,u$. Since $P_{r,s}\neq I$, it
follows that either $r$ or $s$ is non-zero. Without loss of generality, we may
assume that $r\neq 0$. Since $d$ is a prime number, $r$ is invertible under
multiplication modulo $d$. Therefore, for any $t\in\\{0,\dots,d-1\\}$, there
exists a unique $u:=r^{-1}(1+st)\ (\text{mod }d)$, satisfying $ru-st=1$.
Hence, there are exactly $d$ choices for the $t,u$ pair. Since we have
$d^{2}-1$ choices for the $r,s$ pair, it follows that there are $d(d^{2}-1)$
pairs of Paulis, $P_{r,s}$ and $P_{t,u}$, such that $P_{r,s}P_{t,u}=\omega
P_{t,u}P_{r,s}$. Taking into account the phase factors,
$\omega^{\lambda},\lambda\in\\{0,\dots,d-1\\}$, it follows that
$\mathcal{C}_{d}^{1}$ has $d^{3}(d^{2}-1)$ elements.
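This counting argument can be verified by brute force; a minimal check (not from the paper) that enumerates exponent quadruples for a few small primes:

```python
# Count quadruples (r, s, t, u) with r*u - s*t = 1 (mod d), i.e. valid images
# (X -> P_{r,s}, Z -> P_{t,u}); the text predicts d*(d^2 - 1) of them. With
# the d^2 phase choices (one omega^lambda per generator image) this gives
# |C_d^1| = d^3 * (d^2 - 1).
def clifford_pair_count(d):
    return sum(1
               for r in range(d) for s in range(d)
               for t in range(d) for u in range(d)
               if (r * u - s * t) % d == 1)

for d in (2, 3, 5, 7):
    assert clifford_pair_count(d) == d * (d**2 - 1)
```

Equivalently, this is the order of $SL(2,\mathbb{Z}_{d})$, as expected from the symplectic representation of the Clifford group.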
We now count the number of elements in $\mathcal{C}_{d}^{2}$. The two-qudit
Pauli group $\mathcal{P}_{d}^{2}$ is generated by a set of four Paulis
$I\varotimes X,I\varotimes Z,X\varotimes I$ and $Z\varotimes I$, and any
Clifford $C\in\mathcal{C}_{d}^{2}$ is uniquely determined by its conjugate
action on these four generators. The commutation relations between the four
generators are illustrated in Fig. 2.
$I\varotimes Z$ $I\varotimes X$ $Z\varotimes I$ $X\varotimes I$ Figure 2:
Connected Paulis satisfy $AB=\omega BA$, where $A$ is the Pauli on the top row
and $B$ the Pauli on the bottom row. Paulis that are not connected commute.
Consider a mapping $I\varotimes X\mapsto A$, $I\varotimes Z\mapsto B$,
$X\varotimes I\mapsto A^{\prime}$, $Z\varotimes I\mapsto B^{\prime}$, where
$A,B,A^{\prime},B^{\prime}\in\bar{\mathcal{P}}_{d}^{2}$, that preserves all
the commutation relations between generators. Pauli $I\varotimes X$ can be
mapped to any two-qudit Pauli $A\neq I\varotimes I$, so there are $d^{4}-1$
choices for $A$. It is not very difficult to see that for any $A\neq
I\varotimes I$ there are $d^{3}$ choices for $B$ such that $AB=\omega BA$.
Further, there are $d(d^{2}-1)$ pairs of two-qudit Paulis $A^{\prime}$ and
$B^{\prime}$, which commute with both $A$ and $B$, and satisfy
$A^{\prime}B^{\prime}=\omega B^{\prime}A^{\prime}$. Therefore, we have
$d^{4}(d^{4}-1)(d^{2}-1)$ possible permutations on
$\bar{\mathcal{P}}_{d}^{2}$, which satisfy all the commutation relations.
Taking into account the phase factors, it follows that $\mathcal{C}_{d}^{2}$
has $d^{8}(d^{4}-1)(d^{2}-1)$ elements. ∎
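The two-qudit count can also be checked by brute force for the smallest prime, $d=2$, where the predicted number of permutations is $d^{4}(d^{4}-1)(d^{2}-1)=720$. The sketch below (our own encoding, not from the paper) treats each two-qudit Pauli as an exponent vector in $\mathbb{Z}_{d}^{4}$ with the symplectic form read off from (1):

```python
from itertools import product

d = 2  # smallest prime; predicted permutation count is d^4 (d^4-1)(d^2-1) = 720

def symp(a, b):
    """Symplectic exponent from (1): P_a P_b = w^symp(a,b) P_b P_a,
    with a = (r, s, rp, sp) labelling P_{r,s} (x) P_{rp,sp}."""
    r, s, rp, sp = a
    t, u, tp, up = b
    return ((r * u - s * t) + (rp * up - sp * tp)) % d

vecs = list(product(range(d), repeat=4))
zero = (0,) * 4

count = 0
for A in vecs:
    if A == zero:
        continue
    for B in (b for b in vecs if symp(A, b) == 1):
        for Ap in (a for a in vecs if symp(A, a) == 0 and symp(B, a) == 0):
            count += sum(1 for Bp in vecs
                         if symp(A, Bp) == 0 and symp(B, Bp) == 0
                         and symp(Ap, Bp) == 1)

# 720 is also the order of Sp(4,2); with the d^4 phase choices this
# reproduces |C_d^2| = d^8 (d^4-1)(d^2-1) as claimed.
assert count == d**4 * (d**4 - 1) * (d**2 - 1)
```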
## Appendix D Example of left coset fixed by the swap gate
We consider $d=5$. Let $C_{1}=I$ be the identity, and
$C_{2}^{\prime}\in\mathcal{C}_{d}^{1}$ be such that it maps $X\mapsto X^{4}$
and $Z\mapsto Z^{4}$, via conjugation. Since $X^{4}Z^{4}=\omega Z^{4}X^{4}$,
$C_{2}^{\prime}$ is indeed a one-qudit Clifford. We define
$C_{2}=C_{2}^{\prime}X^{2}Z^{2}$. Further, let $C\in\mathcal{C}_{d}^{2}$, such
that its conjugate action generates the following permutation on the
generators of $\mathcal{P}_{d}^{2}$,
$\displaystyle I\varotimes X$ $\displaystyle\mapsto X^{4}Z\varotimes XZ^{4},$
$\displaystyle I\varotimes Z$ $\displaystyle\mapsto XZ\varotimes X^{4}Z^{4},$
$\displaystyle X\varotimes I$ $\displaystyle\mapsto X^{4}Z\varotimes X^{4}Z,$
$\displaystyle Z\varotimes I$ $\displaystyle\mapsto XZ\varotimes XZ.$
Using (1), it is easily seen that the above permutation preserves all the
commutation relations between the generators. Now, the conjugate actions of
$SC$ and $C(C_{1}\varotimes C_{2})$ generate the same permutation on
$\mathcal{P}_{d}^{2}$. Therefore, $SC=C(C_{1}\varotimes C_{2})$.
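Since conjugation by a Clifford acts linearly on Pauli exponent vectors once phases are dropped, the identity $SC=C(C_{1}\varotimes C_{2})$ can be checked at the level of $4\times 4$ matrices over $\mathbb{Z}_{5}$. A small sketch (the matrix encoding is our own, not from the paper):

```python
import numpy as np

d = 5
# Conjugation acts linearly (mod d) on the exponent vector (r, s, r', s') of
# P_{r,s} (x) P_{r',s'}; overall phases drop out. Columns of M_C are the
# images of X(x)I, Z(x)I, I(x)X, I(x)Z given in the text.
M_C = np.array([(4, 1, 4, 1),    # X(x)I -> X^4 Z (x) X^4 Z
                (1, 1, 1, 1),    # Z(x)I -> X Z   (x) X Z
                (4, 1, 1, 4),    # I(x)X -> X^4 Z (x) X Z^4
                (1, 1, 4, 4)]).T # I(x)Z -> X Z   (x) X^4 Z^4

# Swap gate: (r, s, r', s') -> (r', s', r, s).
M_S = np.array([(0, 0, 1, 0),
                (0, 0, 0, 1),
                (1, 0, 0, 0),
                (0, 1, 0, 0)])

# C1 (x) C2: identity on the first qudit; C2' sends X -> X^4, Z -> Z^4, and
# the extra Pauli factor X^2 Z^2 contributes only phases.
M_T = np.diag([1, 1, 4, 4])

# SC and C(C1 (x) C2) generate the same permutation on exponent vectors.
assert np.array_equal((M_S @ M_C) % d, (M_C @ M_T) % d)
```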
# Swarming bottom feeders: Flocking at solid-liquid interfaces
Niladri Sarkar <EMAIL_ADDRESS> Instituut-Lorentz, Leiden University, P.O. Box 9506, 2300 RA Leiden, The Netherlands
Abhik Basu <EMAIL_ADDRESS> Condensed Matter Physics Division, Saha Institute of Nuclear Physics, Calcutta 700064, West Bengal, India
John Toner <EMAIL_ADDRESS> Department of Physics and Institute of Theoretical Science, University of Oregon, Eugene, Oregon 97403, USA
###### Abstract
We present the hydrodynamic theory of coherent collective motion (“flocking”)
at a solid-liquid interface, and many of its predictions for experiment. We
find that such systems are stable, and have long-range orientational order,
over a wide range of parameters. When stable, these systems exhibit “giant
number fluctuations”, which grow as the 3/4th power of the mean number. Stable
systems also exhibit anomalous rapid diffusion of tagged particles suspended
in the passive fluid along any directions in a plane parallel to the solid-
liquid interface, whereas the diffusivity along the direction perpendicular to
the plane is not anomalous. In the remaining parameter space, the system
becomes unstable.
Many “active” systems consist of macroscopically large numbers of self-
propelled particles that align their directions of motion. This occurs both in
living kruse04 ; kruse05 ; goldstein13 ; saintillan08 ; hatwalne04 and
synthetic saha14 ; cates15 ; narayan07 ; lubensky09 ; marchetti08 systems.
Such “active orientationally ordered phases” exhibit many phenomena impossible
in their equilibrium analogs (e.g., nematics deGennes ), including spontaneous
breaking of continuous symmetries in two dimensions vicsek95 ; tonertu95 ;
toner98 ; toner05 , instability in the extreme Stokesian limit simha2002 , and
giant number fluctuations Chate+Giann ; toner2019giant ; ramaswamy03 .
“Dry” active systems, i.e., those lacking momentum conservation due to, e.g.,
friction with a substrate wolgemuth2002 ; toner98 ; ramaswamy03 , behave
quite differently from “wet” active fluids (i.e., those with momentum
conservation) lushi2014 .
In this paper, we present the first theory of a natural hybrid of these two
cases: polar active particles at a solid-liquid interface (see figure (1)). We
are motivated by experiments schaller13 in which highly concentrated actin
filaments on a solid-fluid interface are propelled by motor proteins, and
those of Bricard et al bricard2013 ; Geyer17 , who studied the emergence of
macroscopically directed motion in “Quincke rollers”. The latter are motile
colloids, spontaneously rolling on a solid substrate when a sufficiently
strong electric field is applied.
These systems differ from both dry and wet active matter, as defined above, by
having both friction from the underlying solid substrate and the long range
hydrodynamic interactions due to the overlying bulk passive fluid.
The geometry we consider here, as in Ref. schaller13 ; bricard2013 , places a
collection of polar, self-propelled particles at the flat interface (the
$x$-$y$ plane of our coordinate system) between a solid substrate and a semi-
infinite bulk isotropic and incompressible passive liquid, as illustrated in
Fig. 1. We consider the extreme Stokesian limit, in which inertial forces are
completely negligible compared to viscous forces.
Figure 1: (Color online) Schematic diagram of our system: a layer of active
polar particles moving on a solid substrate with a passive ambient (“bulk”)
fluid above.
The most surprising result of our work is that, even in the presence of noise,
this system can be in a stable, long-range ordered polar state, in sharp
contrast to “wet” active systems, which are generically unstable simha2002 at
low Reynolds number, and equilibrium systems, which cannot display long range
orientational order in two dimensions at finite temperature MW ; xtalfoot ;
2dxtal ; teth .
Remarkably, this ordered state is predicted even by a linear theory.
Furthermore, this linear theory provides an asymptotically exact long
wavelength description, in contrast to dry polar active systems, which can
only be correctly described by a non-linear theory. Indeed, dry polar active
systems can only exhibit long range order due to non-linear effects vicsek95 ;
tonertu95 ; toner98 ; toner05 .
Concomitant with the long-range polar order, the density fluctuations are
giant: the standard deviation $\sqrt{\langle(N-\langle N\rangle)^{2}\rangle}$
of the number $N$ of the active particles contained in a fixed open area
scales with its average $\langle N\rangle$ according to
$\sqrt{\langle(N-\langle N\rangle)^{2}\rangle}\propto\langle
N\rangle^{3/4}\,.$ (1)
This agrees very well with the experiments of schaller13 , which found
$\sqrt{\langle(N-\langle N\rangle)^{2}\rangle}\propto\langle N\rangle^{0.8}$.
Note that our prediction should not be confused with qualitatively similar
predictions for dry active matter Chate+Giann ; GNF and active nematics AN ,
for which the exponent is different, because they belong to different
universality classes.
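The exponent $3/4$ in (1) can be read off heuristically from the $1/r$ decay of the equal-time density correlations reported below: for an open region $A$ of linear size $L$,

$\langle(N-\langle N\rangle)^{2}\rangle=\int_{A}d^{2}r\int_{A}d^{2}r^{\prime}\,C_{\rho\rho}(\mathbf{r}-\mathbf{r}^{\prime})\propto\int_{A}d^{2}r\int_{A}d^{2}r^{\prime}\,{1\over|\mathbf{r}-\mathbf{r}^{\prime}|}\sim L^{3}\,,$

so, with $\langle N\rangle\propto\rho_{0}L^{2}$, the standard deviation scales as $L^{3/2}\propto\langle N\rangle^{3/4}$.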
We also find that the fluctuations in the active fluid layer stir the bulk
fluid above it, making the diffusion of a passive tagged particle parallel to
the active fluid layer anomalous: specifically, the mean squared displacement
grows with time $t$ as $t\ln t$, whereas the diffusive motion perpendicular
to the active fluid layer remains conventional, i.e., the mean squared
displacement scales like $t$.
To understand the physics of this system, we have constructed a theory which,
when linearized for small fluctuations about a uniform reference state, is
asymptotically exact in the long wavelength limit, and gives the above
results. We define $\hat{{\bf p}}(\mathbf{r}_{{}_{\parallel}},t)$ as the
coarse grained polarization of the active particles, and
$\rho(\mathbf{r}_{{}_{\parallel}},t)$ as the conserved areal density of the
active polar particles on the surface. Taking our uniform reference state to
be $\hat{{\bf p}}(\mathbf{r}_{{}_{\parallel}},t)=\hat{{\bf x}}$ (see Fig. 1),
and $\rho=\rho_{0}$, one hydrodynamic variable is the transverse fluctuations
$p_{y}$ of $\hat{{\bf p}}(\mathbf{r}_{{}_{\parallel}},t)$, which we take to
have unit magnitude, i.e., $|\hat{{\bf p}}|^{2}=1$. This is a non-conserved
broken-symmetry (i.e., “Goldstone”) mode. Our second hydrodynamic variable
is the fluctuations
$\delta\rho(\mathbf{r}_{{}_{\parallel}},t)\equiv\rho(\mathbf{r}_{{}_{\parallel}},t)-\rho_{0}$
of the density from its mean value.
These variables couple to the bulk passive fluid velocity
$\mathbf{v}(\mathbf{r}_{{}_{\parallel}},z,t)$ via an active boundary condition
given below in (16). Eliminating $\mathbf{v}(\mathbf{r}_{{}_{\parallel}},z,t)$
by solving the Stokes equation for the bulk fluid subject to this active
boundary condition gives the equations of motion for the spatially Fourier
transformed fields $p_{y}(\mathbf{q},t)$ and $\delta\rho(\mathbf{q},t)$:
$\displaystyle\partial_{t}\delta\rho(\mathbf{q},t)=-iv_{\rho}[q_{x}\delta\rho(\mathbf{q},t)+\rho_{c}q_{y}p_{y}]+i\mathbf{q}\cdot{\bf f}_{\rho}(\mathbf{q},t)\,,$ (2)
$\displaystyle\partial_{t}p_{y}(\mathbf{q},t)=-iv_{p}q_{x}p_{y}(\mathbf{q},t)-\gamma\left({q^{2}+q_{y}^{2}\over q}\right)p_{y}(\mathbf{q},t)-\left({\gamma_{\rho}\over\rho_{c}}\right)\left({q_{x}q_{y}\over q}\right)\delta\rho(\mathbf{q},t)-i\sigma_{t}q_{y}\delta\rho(\mathbf{q},t)+f_{y}(\mathbf{q},t)\,,$ (3)
where $v_{\rho}$, $v_{p}$, $\gamma$, $\gamma_{\rho}$, $\rho_{c}$, and
$\sigma_{t}$ are parameters of our model. Note the non-analytic character of
the damping $\gamma$ and $\gamma_{\rho}$ terms in (3), which arises from the
long-ranged hydrodynamic interactions mediated by the bulk passive fluid.
In (2) and (3), ${\bf f}_{\rho}$ and $f_{y}$ are zero-mean Gaussian white
noises whose variances are parameters of our model.
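These statements can be sanity-checked by diagonalizing the $2\times 2$ linear system (2)-(3) numerically. The parameter values below are hypothetical, with $\gamma_{\rho}=0$ chosen as one simple way to sit inside the stable regime discussed later:

```python
import numpy as np

# Hypothetical parameter values (not from the paper); gamma_rho = 0 is one
# simple choice that lands inside the stable region discussed in the text.
v_rho, v_p, gamma, gamma_rho, rho_c, sigma_t = 1.0, 2.0, 0.3, 0.0, 1.0, 0.1

def growth_rates(qx, qy):
    """Eigenvalues of the 2x2 linear system (2)-(3) for the mode q = (qx, qy)."""
    q = np.hypot(qx, qy)
    M = np.array([
        [-1j * v_rho * qx, -1j * v_rho * rho_c * qy],
        [-(gamma_rho / rho_c) * qx * qy / q - 1j * sigma_t * qy,
         -1j * v_p * qx - gamma * (q**2 + qy**2) / q],
    ])
    return np.sort_complex(np.linalg.eigvals(M))

for theta in np.linspace(0.05, 2 * np.pi, 60):
    qx, qy = np.cos(theta), np.sin(theta)
    lam = growth_rates(qx, qy)
    # Fluctuations decay (or are marginal) for every direction of q ...
    assert np.all(lam.real <= 1e-9)
    # ... and every term in (2)-(3) is linear in |q| at fixed direction, so
    # the decay rate and the propagation frequency both scale linearly with q.
    assert np.allclose(growth_rates(2 * qx, 2 * qy), 2 * lam)
```

The linear-in-$q$ scaling of both eigenvalues is exact here, since every term in (2)-(3) is first order in $|q|$ at fixed direction of $\mathbf{q}$.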
For stability, fluctuations must decay for all directions of $\mathbf{q}$. We
show in the associated long paper (ALP) alp that this condition is satisfied
provided that the analogs of the bulk compressibility and the shear and bulk
viscosities in our system are all positive, and that the coupling of the
density of the active particles to their self-propelled speeds is not too
strong.
Thus, in contrast to “wet” active matter in the “Stokesian” limit simha2002 ;
toner05 , our “mixed” system can be generically stable. Indeed, the
requirements for stability are almost as easily met for these systems as for
an equilibrium fluid. Furthermore, when the stability conditions are met,
fluctuations about the uniform ordered state in this model decay with a rate
that scales linearly with $q$, quite different from the linear theory of dry
active matter. They also propagate nondispersively, with a wavespeed independent
of $q$.
This unusual damping in this linear theory is responsible for many novel
phenomena: most strikingly, it makes $\langle p_{y}^{2}({\bf
r}_{\perp},t)\rangle$ asymptotically independent of the lateral size of the
system, a tell-tale signature of orientational long-range order. It also leads
to giant number fluctuations of the active particles given by (1), as
mentioned earlier.
In the ordered state, the active particles “stir” the passive fluid above
them. The mean squared components $\langle
v_{x}^{2}(\mathbf{r}_{{}_{\parallel}},z,t)\rangle$, and $\langle
v_{y}^{2}(\mathbf{r}_{{}_{\parallel}},z,t)\rangle$ of the passive fluid
velocity field $\mathbf{v}(\mathbf{r}_{{}_{\parallel}},z,t)$ thereby induced
are inversely proportional to the distance $z$ from the solid-fluid interface.
The unequal-time correlations $\langle
v_{x,y}(\mathbf{r}_{{}_{\parallel}},z,t)v_{x,y}(\mathbf{r}_{{}_{\parallel}},z,0)\rangle$
of the in-plane velocity fluctuations of the passive fluid also exhibit long
temporal correlations, which decay as $1/t$, whereas the correlation $\langle
v_{z}(\mathbf{r}_{{}_{\parallel}},z,t)v_{z}(\mathbf{r}_{{}_{\parallel}},z,0)\rangle$
of the bulk fluid velocity perpendicular to the surface decays as
$1/t^{3}$.
The correlations of the in-plane velocity in turn lead to anomalous diffusion
of neutrally buoyant passive particles in the $x$\- and $y$-direction, with
variances of the displacements growing faster with time than the linear
dependence found for simple Brownian particles. Specifically, we find, for a
particle that is initially a height $z_{0}$ above the solid-liquid interface:
$\displaystyle\langle(r_{i}(t)-r_{i}(0))^{2}\rangle=\begin{cases}2D_{i}t\left[\ln\left({v_{0}t\over z_{0}}\right)+O(1)\right],&t\ll{z_{0}^{2}\over D_{z}}\,,\\ D_{i}t\left[\ln\left({v_{0}^{2}t\over D_{z}}\right)+O(1)\right],&t\gg{z_{0}^{2}\over D_{z}}\,,\end{cases}$ (8)
where $i=x,y$, $v_{0}$ is a system-dependent characteristic speed (roughly
speaking, the self-propulsion speed of the active particles), $z_{0}$ is the
initial distance from the surface, and $D_{x,y,z}$ are diffusion constants
which are independent of $z_{0}$. Note that the mean square displacements
depend on the initial height $z_{0}$ for short times $t\ll z_{0}^{2}/D_{z}$,
but not for long times $t\gg z_{0}^{2}/D_{z}$. The crossover between these
limits is the time $t=z_{0}^{2}/D_{z}$ it takes for a neutrally buoyant
particle to diffuse a distance $z_{0}$ in the $z$-direction.
Diffusion in the $z$-direction remains conventional, controlled by a
$z$-independent diffusivity.
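A small numerical reading of (8), with hypothetical parameter values and the $O(1)$ terms dropped, illustrates the superdiffusive $t\ln t$ growth and the crossover time:

```python
import numpy as np

# Hypothetical values (arbitrary units), chosen only for illustration.
D_i, D_z, v0, z0 = 1e-2, 1e-2, 1.0, 0.1
t_cross = z0**2 / D_z   # crossover time between the two regimes of (8)

def msd(t):
    """In-plane mean-squared displacement from (8), O(1) terms dropped."""
    if t < t_cross:
        return 2 * D_i * t * np.log(v0 * t / z0)
    return D_i * t * np.log(v0**2 * t / D_z)

# Superdiffusive: in the late-time regime the MSD grows faster than
# linearly in t, since the logarithm keeps increasing.
t1, t2 = 10 * t_cross, 100 * t_cross
assert msd(t2) > 10 * msd(t1)
```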
This set of predictions could also be tested experimentally by particle
tracking of neutrally buoyant tracer particles in the passive fluid.
Particles denser than the passive fluid, which therefore sediment, will also
be affected by this activity induced flow. We find that particles sedimenting
at a speed $v_{\rm sed}\ll v_{0}$ from an initial height $z_{0}$ will, when
they reach the surface, be spread out over a region of RMS dimensions
$\sqrt{\langle(x(z=0)-x(z=z_{0}))^{2}\rangle}$ and
$\sqrt{\langle(y(z=0)-y(z=z_{0}))^{2}\rangle}$ in the $x$ and $y$ directions,
respectively, with
$\langle(r_{i}(t)-r_{i}(0))^{2}\rangle=2D_{i}\left({z_{0}\over v_{\rm
sed}}\right)\ln\left({v_{0}\over v_{\rm sed}}\right)\ \ \ ,\ \ \ v_{\rm
sed}\ll v_{0}\,,$ (9)
where $v_{0}$ is roughly the mean speed of the active particles, and $v_{\rm
sed}$ is the speed at which the sedimenting particles sink.
Once again, these predictions should be readily testable in particle tracking
experiments.
We find that the polarization $\hat{{\bf p}}$, has a simple scaling form for
its spatio-temporally Fourier transformed correlation function:
$C_{pp}({\bf
q},\omega)\equiv\langle|p_{y}(\mathbf{q},\omega)|^{2}\rangle=\left({1\over
q^{2}}\right)F_{pp}\bigg{(}\left({\omega\over
q}\right),\theta_{\mathbf{q}}\bigg{)}\,,$ (10)
where the scaling function $F_{pp}(u,\theta_{\mathbf{q}})$ is given in the
ALP, and $\theta_{\bf q}\equiv\arctan({q_{y}/q_{x}})$ is the angle between
$\mathbf{q}$ and the direction $\hat{{\bf x}}$ of the mean polarization. The
positions of the peaks in $C_{pp}(\mathbf{q},\omega)$ versus $\omega$ unreal
(but most definitely not their widths), are precisely those found for dry
active matter in tonertu95 ; toner98 ; toner05 ; i.e., $\omega_{\rm
peak}=c_{\pm}(\theta_{\mathbf{q}})q$, where $c_{\pm}(\theta_{\mathbf{q}})$ is
given by (19) and plotted in Figure (2).
Figure 2: (Color online) Polar plot of the sound speeds; the polarization
points directly to the right. That is, the distance along a straight line
drawn from the origin and making an angle $\theta$ with the $x$-axis to its
intersection with the curve is proportional to the sound speed of a mode
propagating at the same angle $\theta$ to the mean polarization direction
$\hat{{\bf x}}$. There are two intersections for each such line, corresponding
to the two roots given in equation (19) for the sound speeds. Here we have
taken $v_{\rho}=1$, $v_{p}=c_{0}=2$, and $\gamma=0.3$ (all in arbitrary units).
These peak positions agree with those found in the experiments of Geyer17 on
Quincke rollers.
The density-density correlation function
$C_{\rho\rho}(\mathbf{q},\omega)\equiv\langle|\delta\rho(\mathbf{q},\omega)|^{2}\rangle$,
and the density-polarization cross-correlation
$C_{p\rho}(\mathbf{q},\omega)\equiv\langle
p_{y}(\mathbf{q},\omega)\delta\rho(-\mathbf{q},-\omega)\rangle$, both obey
similar scaling laws, which are given in detail in the ALP.
Integrating these spatio-temporally Fourier-transformed correlation functions
over all frequencies $\omega$ shows that the equal time correlation functions
$C_{pp}(\mathbf{q})\equiv\langle|p_{y}(\mathbf{q},t)|^{2}\rangle$,
$C_{\rho\rho}(\mathbf{q})\equiv\langle|\delta\rho(\mathbf{q},t)|^{2}\rangle$,
and $C_{p\rho}(\mathbf{q})\equiv\langle
p_{y}(\mathbf{q},t)\delta\rho(-\mathbf{q},t)\rangle$ all scale like $1/q$.
Their dependence on the direction of $\mathbf{q}$ is given explicitly in the
ALP.
Fourier transforming these in space shows that the real space, equal-time
correlation functions $C_{pp}(\mathbf{r})=\langle
p_{y}(\mathbf{r}+\mathbf{R},t)p_{y}(\mathbf{R},t)\rangle$,
$C_{\rho\rho}(\mathbf{r})\equiv\langle\delta\rho(\mathbf{r}+\mathbf{R},t)\delta\rho(\mathbf{R},t)\rangle$,
and $C_{p\rho}(\mathbf{r})\equiv\langle
p_{y}(\mathbf{r}+\mathbf{R},t)\delta\rho(\mathbf{R},t)\rangle$ all scale like
$1/r$, and depend on the direction of $\mathbf{r}$. Explicit expressions for
this direction-dependence are given in the ALP.
These predictions could also be tested experimentally in systems in which the
active particles can be imaged, like those of schaller13 ; bricard2013 .
Although the anisotropy of the system ensures that all the correlators are
anisotropic functions of distance $\bf r$, nonetheless, their spatial scaling
remains isotropic. That is, the anisotropy exponent $\zeta$ that determines
the relative scaling between $x$ and $y$ is $\zeta=1$, in contrast to the
Toner-Tu model toner98 .
The correlator $C_{\rho\rho}(\mathbf{r}-\mathbf{r}^{\prime})$ can be used to
obtain the result (1) for the giant number fluctuations. The bulk velocity can
be obtained from $p_{y}(\mathbf{r},t)$ and $\delta\rho(\mathbf{r},t)$ through
the aforementioned solution of the Stokes equation subject to the active
boundary condition. This in turn allows us to derive the anomalous diffusion
(8); see the ALPalp for detailed derivations.
We will now provide an outline of how we obtained these results. Details can
be found in the ALP.
In the presence of friction from the substrate, there is no momentum
conservation on the surface, so the only conserved variable on the surface is
the active particle number. We also include the bulk fluid velocity
$\mathbf{v}(\mathbf{r},t)$, which is defined throughout the semi-infinite
three dimensional (3D) space above the surface, since in that space momentum
(which is equivalent to velocity in the limit of an incompressible bulk fluid)
is conserved. However, we work in the Stokesian limit, in which viscous forces
dominate inertial ones.
We formulate the hydrodynamic equations for these variables by expanding their
equations of motion phenomenologically in powers of fluctuations of both
fields $\hat{{\bf p}}$ and $\rho$ from their mean values, and in spatio-
temporal gradients. In doing so, we respect all symmetries and conservation
laws of the underlying dynamics. In our non-equilibrium system, additional
equilibrium constraints like detailed balance do not apply. Our system has
underlying rotational invariance in the plane of the surface, which is
spontaneously broken by the active particles when they align their
polarizations.
Conservation of the active particles implies that
$\rho(\mathbf{r}_{{}_{\parallel}},t)$ obeys a continuity equation:
$\displaystyle\partial_{t}\rho+{\bm{\nabla}}_{s}\cdot{\bf J}_{\rho}$
$\displaystyle=$ $\displaystyle 0\,,$ (11)
where ${\bm{\nabla}}_{s}\equiv{\hat{\bf x}}\partial/\partial x+{\hat{\bf
y}}\partial/\partial y$ is the 2D gradient operator, with ${\hat{\bf x}}$ and
${\hat{\bf y}}$ the unit vectors along the $x$ and $y$ axis respectively. We
phenomenologically expand the active particle current ${\bf J}_{\rho}$ to
leading order in powers of the bulk velocity evaluated at the surface ${\bf
v}(\mathbf{r}_{{}_{\parallel}},z=0)$, and gradients, while respecting rotation
invariance. In practice, this means we can make the vector ${\bf J}_{\rho}$
only out of vectors the system itself chooses, i.e., out of gradients, the
surface velocity $\mathbf{v}_{s}(\mathbf{r}_{{}_{\parallel}},t)\equiv{\bf
v}(\mathbf{r}_{{}_{\parallel}},z=0,t)$, and the polarization $\hat{{\bf
p}}(\mathbf{r}_{{}_{\parallel}},t)$. These constraints force ${\bf J}_{\rho}$
to take the form:
$\displaystyle{\bf J_{\rho}}(\mathbf{r}_{{}_{\parallel}})$ $\displaystyle=$
$\displaystyle\rho_{e}(\rho,|\mathbf{v}_{s}|){\bf
v}_{s}(x,y)+\kappa(\rho,|\mathbf{v}_{s}|)\hat{{\bf p}}$ (12)
to leading order in gradients. The factor $\kappa(\rho,|\mathbf{v}_{s}|)$ is
an active parameter reflecting the self-propulsion of the particles through
interaction with the solid substrate, while the $\rho_{e}$ term reflects
convection of the active particles by the passive fluid above them. The
parameter $\rho_{e}\neq\rho$ in general due to drag between the active
particles and the substrate.
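To see how (12) feeds into the linearized density equation (2), here is a heuristic sketch (it suppresses the surface-velocity contribution $\rho_{e}\mathbf{v}_{s}$, which the Stokes solution and the boundary condition fold into the effective coefficients): writing $\rho=\rho_{0}+\delta\rho$ and $\hat{{\bf p}}\approx\hat{{\bf x}}+p_{y}\hat{{\bf y}}$, the current (12) linearizes to

$\delta{\bf J}_{\rho}\approx\left(\partial_{\rho}\kappa\right)\big{|}_{\rho_{0}}\,\delta\rho\,\hat{{\bf x}}+\kappa(\rho_{0})\,p_{y}\,\hat{{\bf y}}+\dots\,,$

and substituting this into the continuity equation (11) in Fourier space gives

$\partial_{t}\delta\rho=-i\left[q_{x}\left(\partial_{\rho}\kappa\right)\big{|}_{\rho_{0}}\delta\rho+q_{y}\,\kappa(\rho_{0})\,p_{y}\right]\,,$

which has the structure of (2), with $v_{\rho}\sim\partial_{\rho}\kappa|_{\rho_{0}}$ and $v_{\rho}\rho_{c}\sim\kappa(\rho_{0})$, both shifted by the convective $\rho_{e}\mathbf{v}_{s}$ terms.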
In calculating the bulk velocity
$\mathbf{v}(\mathbf{r}_{{}_{\parallel}},z,t)$, we assume the bulk fluid is in
the extreme “Stokesian” limit, in which inertia is negligible relative to
viscous drag. This should be appropriate for most systems in which the active
particles are microscopic, since the Reynolds number will be extremely low
for such particles. It is, however, certainly not valid for bottom-feeding
fish, so the title of this paper takes some poetic license!
In this limit, the three-dimensional (3D) incompressible bulk velocity field
${\bf v}=(v_{i},v_{z}),\,i=x,y$ satisfies the 3D Stokes’ equation
$\eta\nabla^{2}_{3}v_{\alpha}(\mathbf{r}_{{}_{\parallel}},z)=\partial_{\alpha}\Pi(\mathbf{r}_{{}_{\parallel}},z),$
(13)
where $\eta$ is the bulk viscosity of the fluid, together with the
incompressibility constraint ${\bm{\nabla}}_{3}\cdot{\bf v}=0$. Here
${\bm{\nabla}}_{3}\equiv{\hat{\bf x}}\partial/\partial x+{\hat{\bf
y}}\partial/\partial y+{\hat{\bf z}}\partial/\partial z$ is the full three-
dimensional gradient operator, with ${\hat{\bf x}}$, ${\hat{\bf y}}$, and
${\hat{\bf z}}$ as the unit vectors along the $x$, $y$, and $z$ axes
respectively, and $\Pi$ is the bulk pressure which enforces the
incompressibility constraint.
This equation (13) can be solved exactly for the bulk velocity
$\mathbf{v}(\mathbf{r}_{{}_{\parallel}},z,t)$ in terms of the surface velocity
$\mathbf{v}_{s}(\mathbf{r}_{{}_{\parallel}},t)$. If we Fourier expand the
surface velocity:
$\mathbf{v}_{s}(\mathbf{r}_{{}_{\parallel}},t)={1\over\sqrt{L_{x}L_{y}}}\sum_{\mathbf{q}}\mathbf{v}_{s}(\mathbf{q},t)e^{i\mathbf{q}\cdot\mathbf{r}_{{}_{\parallel}}}$
(14)
where $(L_{x},L_{y})$ are the linear dimensions of our (presumed rectangular)
surface, then, as we show in the ALP, the bulk velocity
$\mathbf{v}(\mathbf{r}_{{}_{\parallel}},z,t)$ is given by
$\mathbf{v}(\mathbf{r}_{{}_{\parallel}},z,t)={1\over\sqrt{L_{x}L_{y}}}\sum_{\mathbf{q}}[\mathbf{v}_{s}(\mathbf{q},t)-z(\mathbf{q}\cdot\mathbf{v}_{s})({\hat{\mathbf{q}}}+i{\hat{\mathbf{z}}})]e^{-qz+i\mathbf{q}\cdot\mathbf{r}_{{}_{\parallel}}}\,.$
(15)
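Equation (15) can be spot-checked numerically: for a single Fourier mode, the bulk field should be divergence-free everywhere in $z>0$. A minimal check with hypothetical mode parameters (a finite-difference divergence test, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
qx, qy = 0.7, -0.4                       # one Fourier mode q (hypothetical)
q = np.hypot(qx, qy)
a = np.array([0.3 + 0.2j, -1.1 + 0.5j])  # surface amplitude v_s(q) (hypothetical)
qhat = np.array([qx, qy]) / q
qdotv = qx * a[0] + qy * a[1]

def v(x, y, z):
    """Single-mode bulk velocity field from (15): components (vx, vy, vz)."""
    phase = np.exp(-q * z + 1j * (qx * x + qy * y))
    vpar = (a - z * qdotv * qhat) * phase   # in-plane part: v_s - z (q.v_s) q_hat
    vz = -1j * z * qdotv * phase            # the i z_hat piece of (15)
    return np.array([vpar[0], vpar[1], vz])

# Central-difference divergence at random points in the bulk (z > 0).
h = 1e-6
for _ in range(5):
    x0, y0, z0 = rng.uniform(0.5, 2.0, size=3)
    div = ((v(x0 + h, y0, z0)[0] - v(x0 - h, y0, z0)[0])
           + (v(x0, y0 + h, z0)[1] - v(x0, y0 - h, z0)[1])
           + (v(x0, y0, z0 + h)[2] - v(x0, y0, z0 - h)[2])) / (2 * h)
    assert abs(div) < 1e-6
```

The exponential decay in $z$ and the $-iz(\mathbf{q}\cdot\mathbf{v}_{s})$ vertical component cancel exactly in the divergence, which is what the test confirms.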
The last ingredient in our theory is the boundary condition on the bulk fluid
velocity at the interface. The active particles at the solid-liquid interface
generate active forces, which change the boundary condition from the familiar
partial-slip boundary condition to:
$v_{si}(\mathbf{r}_{{}_{\parallel}},t)=v_{a}(\rho)p_{i}(\mathbf{r}_{{}_{\parallel}},t)+\zeta_{1}(\rho)\hat{{\bf
p}}\cdot\nabla_{S}p_{i}+\zeta_{2}(\rho)p_{i}\nabla_{S}\cdot\hat{{\bf
p}}+p_{i}\hat{{\bf p}}\cdot\nabla_{S}\zeta(\rho)+\mu\eta\bigg{(}\frac{\partial
v_{i}(\mathbf{r}_{{}_{\parallel}},z,t)}{\partial
z}\bigg{)}_{z=0}-\partial_{i}P_{s}(\rho)\,,$ (16)
where $v_{a}(\rho)$ is the spontaneous self-propulsion speed of the active
particles relative to the solid substrate, $\zeta_{1,2}$ and $\zeta$ are
coefficients of the active stresses permitted by symmetry, and $P_{s}(\rho)$
is a surface osmotic pressure. As before $i=(x,y)$. For a system in thermal
equilibrium, $v_{a}=0=\zeta_{1,2}(\rho)=\zeta(\rho)$, and (16) reduces to the
well-known equilibrium partial slip boundary condition partial-slip .
We now turn to the equation of motion for $\hat{{\bf p}}$. As the active
particles are polar, the system lacks $\hat{{\bf p}}\rightarrow-\hat{{\bf p}}$
symmetry. This allows $\partial_{t}\hat{{\bf p}}$ to contain terms even in
$\hat{{\bf p}}$. The most general equation of motion for $p_{k}$ allowed by
symmetry, neglecting “irrelevant” terms, is
$\displaystyle\partial_{t}p_{k}=T_{ki}\bigg{(}\alpha
v_{si}-\lambda_{pv}(\mathbf{v}_{s}\cdot\nabla_{s})p_{i}+\left({\nu_{1}-1\over
2}\right)p_{j}\partial_{i}v_{sj}+\left({\nu_{1}+1\over 2}\right)(\hat{{\bf
p}}\cdot\nabla_{s})v_{si}-\lambda(\hat{{\bf
p}}\cdot\nabla_{s})p_{i}-\partial_{i}P_{p}(\rho)+f_{i}\bigg{)},$ (17)
where the projection operator $T_{ki}\equiv\delta^{s}_{ki}-p_{k}p_{i}$ ensures
that the fixed length condition $|\hat{{\bf p}}|=1$ on $\hat{{\bf p}}$ is
preserved. It is the breaking of Galilean invariance by the solid substrate
that allows $\lambda_{pv}$ to differ from $1$, and that permits the
“self-advection” term $\alpha$ in (17). The terms proportional to $\nu_{1}$ are
“flow alignment terms”, identical in form to those found in nematic liquid
crystals martin1972 . The term with coefficient $\lambda$ is allowed by the
polar symmetry of the particles, and can be interpreted as self advection of
the particle polarity in its own direction. The function $P_{p}(\rho)$ is a
density dependent “surface polarization pressure” independent of the “osmotic
pressure” $P_{s}(\rho)$ introduced earlier. We have also added to the equation
of motion (17) a white noise ${\bf f}$ with statistics
$\langle f_{i}({\bf r}_{{}_{\perp}},t)f_{j}({\bf
r}_{{}_{\perp}}^{\prime},t^{\prime})\rangle=2D_{p}\delta_{ij}\delta({\bf
r}_{{}_{\perp}}-{\bf r}_{{}_{\perp}}^{\prime})\delta(t-t^{\prime})\,.$ (18)
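In a numerical integration of (17), the white noise (18) would be sampled on a grid: with cell area $a^{2}$ and timestep $dt$, the delta functions become $1/a^{2}$ and $1/dt$, so each component is Gaussian with variance $2D_{p}/(a^{2}dt)$. A sketch with illustrative values of $D_{p}$, $a$, and $dt$:

```python
import numpy as np

# Discretized sampling of the white noise f of Eq. (18): on a grid with cell
# area a^2 and timestep dt, the spatial and temporal delta functions become
# 1/a^2 and 1/dt, so each noise component is Gaussian with variance
# 2 D_p / (a^2 dt). The values of D_p, a, and dt are illustrative.
rng = np.random.default_rng(1)
D_p, a, dt = 0.5, 0.1, 1e-3
var_target = 2 * D_p / (a**2 * dt)
f = np.sqrt(var_target) * rng.standard_normal((2, 256, 256))
```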
Our hydrodynamic model, then, is summarized by the equations of motion (11),
(12), and (17) for $\rho$ and $\hat{{\bf p}}$, respectively, and the solution
(15) of the Stokes equation (13) for the bulk velocity field
$\mathbf{v}(x,y,z,t)$ obtained with the boundary condition (16). Fluctuations
also involve the noise correlations (18).
These equations of motion and boundary conditions have an obvious spatially
uniform, steady state solution:
$\rho(\mathbf{r}_{{}_{\parallel}},t)=\rho_{0}\,,\hat{{\bf
p}}(\mathbf{r}_{{}_{\parallel}},t)=\hat{{\bf x}}$, where we have defined
$v_{0}\equiv v_{a}(\rho_{0})$ and have chosen the $\hat{{\bf x}}$ axis of our
coordinate system to be along the (spontaneously chosen) direction of
polarization, as illustrated in figure (1).
To study fluctuations about this steady state, we expand the equations of
motion (11), (12), and (17) for $\rho$ and $\hat{{\bf p}}$, and the boundary
condition (16), to linear order in $\delta\rho$ and $p_{y}$. We obtain the
bulk velocity $\mathbf{v}(\mathbf{r}_{{}_{\parallel}},z,t)$ from the surface
velocity $\mathbf{v}_{s}(\mathbf{r}_{{}_{\parallel}},t)$ using our solution
(15) of the Stokes equation. This ultimately produces Eqs. (2) and (3), where
the phenomenological hydrodynamic parameters $v_{\rho}$, $v_{p}$,
$\gamma$, $\rho_{c}$, and $\sigma_{t}$ are all related to the expansion
coefficients of the various parameters introduced above when expanded in
powers of the small fluctuations $\delta\rho$ and $p_{y}$. The rather involved
details of this calculation are given in the ALP.
The correlation functions can be straightforwardly determined from these
equations of motion, and shown to have peaks at $\omega_{\rm
peak}=c_{\pm}(\theta_{\mathbf{q}})q$, where $c_{\pm}(\theta_{\mathbf{q}})$ is
given by
$\displaystyle c_{\pm}\left(\theta_{\mathbf{q}}\right)$ $\displaystyle=$
$\displaystyle\pm\sqrt{{1\over
4}\left(v_{\rho}-v_{p}\right)^{2}\cos^{2}\theta_{\mathbf{q}}+c^{2}_{0}\sin^{2}\theta_{\mathbf{q}}}$
(19) $\displaystyle+\left({v_{\rho}+v_{p}\over
2}\right)\cos\theta_{\mathbf{q}}\quad\,.$
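The direction dependence of these speeds is easy to evaluate; along the ordering direction ($\theta_{\mathbf{q}}=0$) the two speeds reduce to $v_{\rho}$ and $v_{p}$, while perpendicular to it they are $\pm c_{0}$. A sketch with illustrative parameter values (the phenomenological coefficients are not fixed by the text):

```python
import numpy as np

# The anisotropic sound speeds of Eq. (19), for illustrative parameter
# values (v_rho, v_p, c_0 are phenomenological and not fixed by the text).
def c_pm(theta, v_rho=1.0, v_p=0.4, c0=0.3):
    root = np.sqrt(0.25 * (v_rho - v_p)**2 * np.cos(theta)**2
                   + c0**2 * np.sin(theta)**2)
    drift = 0.5 * (v_rho + v_p) * np.cos(theta)
    return drift + root, drift - root
```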
We have presented a comprehensive hydrodynamic theory of flocking at a solid-
liquid interface. This theory makes quantitative, experimentally testable
predictions about orientational long range order, spatio-temporal scaling of
fluctuations, giant number fluctuations and anomalous diffusion along
directions parallel to the solid-liquid interface. These predictions are exact
in the asymptotic long wavelength limit, as will be shown in the ALP using
renormalization group arguments. One simple variant of our system would be to
replace the bulk isotropic fluid with a nematic.
Acknowledgements: One of us (AB) thanks the SERB, DST (India) for partial
financial support through the MATRICS scheme [file no.: MTR/2020/000406]. NS
is partially supported by Netherlands Organization for Scientific Research
(NWO), through the Vidi grant No. 2016/N/00075794. We thank S. Ramaswamy for
sharing reference maitra2018 with us. NS thanks Institut Curie and MPIPKS for
their support through postdoctoral fellowships while some of this work was
being done. AB thanks the MPIPKS, Dresden for their hospitality, and their
support through their Visitors’ Program, while a portion of this work was
underway. JT likewise thanks the MPIPKS for their hospitality, and their
support through the Martin Gutzwiller Fellowship, and the Higgs Center of the
University of Edinburgh for their support with a Higgs Fellowship.
## References
* (1) K. Kruse, J.F. Joanny, F. Jülicher, J. Prost, and K. Sekimoto, Asters, vortices, and rotating spirals in active gels of polar filaments, Phys. Rev. Lett. 92, 078101(2004).
* (2) K. Kruse, J.F. Joanny, F. Jülicher, J. Prost, and K. Sekimoto, Generic theory of active polar gels: a paradigm for cytoskeletal dynamics, Eur. Phys. J. E, 16, 5, (2005).
* (3) H. Wioland, F. G. Woodhouse, J. Dunkel, J. O. Kessler, and R.E. Goldstein, Confinement stabilizes a bacterial suspension into a spiral vortex, Phys. Rev. Lett., 110, 268102, (2013).
* (4) D. Saintillan, and M. J. Shelley, Instabilities, pattern formation, and mixing in active suspensions, Phys. Fluids 20, 123304, (2008).
* (5) Y. Hatwalne, S. Ramaswamy, M. Rao, S. Madan and A. Simha, Rheology of active-particle suspensions, Phys. Rev. Lett. 92, 118101 (2004).
* (6) S. Saha, R. Golestanian, and S. Ramaswamy, Clusters, asters, and collective oscillations in chemotactic colloids, Phys. Rev. E, 89, 062316, (2014).
* (7) B. Liebchen, D. Marenduzzo, I. Pagonabarraga, and M.E. Cates, Clustering and Pattern Formation in Chemorepulsive Active Colloids, Phys. Rev. Lett. 115, 258301(2015).
* (8) V. Narayan, S. Ramaswamy, and N. Menon, Long-Lived Giant Number Fluctuations in a Swarming Granular Nematic, Science 317, 105, (2007).
* (9) L.J. Daniels, Y. Park, T. C. Lubensky, and D. J. Durian, Dynamics of gas-fluidized granular rods, Phys. Rev. E79, 041301(2009).
* (10) A. Baskaran, and M.C. Marchetti, Hydrodynamics of self-propelled hard rods, Phys. Rev. E 77, 011920 (2008).
* (11) P.G. de Gennes and J. Prost, _The Physics of Liquid Crystals_ (Oxford University Press, Oxford, 1995)
* (12) T. Vicsek, A. Czirók, E. Ben-Jacob, I. Cohen, and O. Shochet, Novel type of phase transition in a system of self-driven particles, Phys. Rev. Lett. 75, 1226, (1995).
* (13) J. Toner and Y. Tu, Long-Range Order in a Two-Dimensional Dynamical $\mathrm{XY}$ Model: How Birds Fly Together, Phys. Rev. Lett. 75, 4326 (1995).
* (14) J. Toner and Y. Tu, Flocks, herds, and schools: A quantitative theory of flocking, Phys. Rev. E 58, (1998).
* (15) J. Toner, Y. Tu, and S. Ramaswamy, Hydrodynamics and phases of flocks, Annals of Physics 318, 170 (2005).
* (16) R.A. Simha and S. Ramaswamy, Hydrodynamic fluctuations and instabilities in ordered suspensions of self-propelled particles, Phys. Rev. Lett. 89, 058101(2002).
* (17) H. Chaté, F. Ginelli, G. Gregoire and F. Raynaud, Collective motion of self-propelled particles interacting without cohesion, Phys Rev E 77, 046113 (2008); F. Ginelli, The Physics of the Vicsek model, Eur. Phys. J. Special Topics 225, 2099 (2016).
* (18) J. Toner, Giant number fluctuations in dry active polar fluids: A shocking analogy with lightning rods, J. Chem. Phys. 150, 154120 (2019).
* (19) S. Ramaswamy, R.A. Simha, and J. Toner, Active nematics on a substrate: Giant number fluctuations and long-time tails, Europhys. Lett. 62, 196 (2003).
* (20) C. Wolgemuth, E. Hoiczyk, D. Kaiser, and G. Oster, How myxobacteria glide, Current Biology, 12, 369 (2002).
* (21) E. Lushi, and H. Wioland, and R. Goldstein, Fluid flows created by swimming bacteria drive self-organization in confined suspensions, Proceedings of the National Academy of Sciences 111, 9733 (2014).
* (22) V. Schaller, and A.R. Bausch, Topological defects and density fluctuations in collectively moving systems, Proceedings of the National Academy of Sciences 110, 4488 (2013).
* (23) A. Bricard,J.-B. Caussin, N. Desreumaux, O. Dauchot, and D. Bartolo, Emergence of macroscopic directed motion in populations of motile colloids, Nature 503, 95 (2013).
* (24) D. Geyer, A. Morin and D. Bartolo, Sounds and hydrodynamics of polar active fluids, Nature Materials 17, 789 (2018).
* (25) N. D. Mermin and H. Wagner, Absence of Ferromagnetism or Antiferromagnetism in One- or Two-Dimensional Isotropic Heisenberg Models, Phys. Rev. Lett. 17, 1133 (1966); P. C. Hohenberg, Existence of Long-Range Order in One and Two Dimensions, Phys. Rev. 158, 383 (1967); N. D. Mermin, Absence of Ordering in Certain Classical Systems, J. Math. Phys. 8, 1061 (1967).
* (26) Exceptions to this result are two dimensional crystals (see, e.g., 2dxtal ) and fluctuating tethered membranes (see, e.g., teth ).
* (27) B. I. Halperin and D. R. Nelson, Theory of Two-Dimensional Melting, Phys. Rev. Lett. 41, 121 (1978).
* (28) Y. Kantor, M. Kardar, and D. R. Nelson, Statistical Mechanics of Tethered Surfaces, Phys. Rev. Lett. 57, 791 (1986).
* (29) S. Shankar, S. Ramaswamy, and M. C. Marchetti, Low-noise phase of a two-dimensional active nematic system, Phys. Rev. E 97, 012707 (2018); S. Mishra, A. Baskaran, and M. C. Marchetti, Fluctuations and pattern formation in self-propelled particles, Phys. Rev. E 81, 061916 (2010).
* (30) S. Ramaswamy, R. A. Simha, and J. Toner, Active nematics on a substrate: Giant number fluctuations and long-time tails, Europhys. Lett. 62, 196 (2003).
* (31) N. Sarkar, A. Basu and J. Toner, Associated long paper.
* (32) These are not actually the sound speeds one would obtain from the real part of the eigenfrequencies $\omega$.
* (33) Y. Zhu and S. Granick, No-Slip Boundary Condition Switches to Partial Slip When Fluid Contains Surfactant, Langmuir 18, 10058 (2002).
* (34) P.C. Martin, O. Parodi, and P.S. Pershan, Unified hydrodynamic theory for crystals, liquid crystals, and normal fluids, Phys. Rev. A 6, 2401 (1972).
* (35) A. Maitra, P. Srivastava, M.C. Marchetti, J.S. Lintuvuori, S. Ramaswamy, and M. Lenz, A nonequilibrium force can stabilize 2D active nematics, Proceedings of the National Academy of Sciences 115, 6934 (2018).
# In-silico modeling of early-stage biofilm formation
Pin Nie Division of Physics and Applied Physics, School of Physical and
Mathematical Sciences, Nanyang Technological University, Singapore 637371,
Singapore Francisco Alarcon Oseguera Departamento de Estructura de la
Materia, Fisica Termica y Electronica, Facultad de Ciencias Fisicas,
Universidad Complutense de Madrid, 28040 Madrid, Spain Departamento de
Ingeniería Física, División de Ciencias e Ingenierías, Universidad de
Guanajuato, Loma del Bosque 103, 37150 León, Mexico Iván López-Montero
Instituto de Investigación Hospital 12 de Octubre (i+12), 28041 Madrid, Spain
Departamento de Química Física, Universidad Complutense de Madrid, 28040
Madrid, Spain Belén Orgaz Departamento de Farmacia Galénica y Tecnología
Alimentaria, Universidad Complutense de Madrid, 28040 Madrid, Spain Chantal
Valeriani Departamento de Estructura de la Materia, Fisica Termica y
Electronica, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid,
28040 Madrid, Spain Massimo Pica Ciamarra<EMAIL_ADDRESS>Division of
Physics and Applied Physics, School of Physical and Mathematical Sciences,
Nanyang Technological University, Singapore 637371, Singapore CNR–SPIN,
Dipartimento di Scienze Fisiche, Università di Napoli Federico II, I-80126,
Napoli, Italy
(August 27, 2024)
###### Abstract
Many bacteria and bacterial strains form biofilms under widely different
environmental conditions, e.g. of pH, temperature and nutrient availability.
Biofilm growth is therefore an extremely robust process. Because of this, even
though biofilm growth is a complex process affected by several variables,
insights into biofilm formation can be obtained by studying simple schematic
models. In this manuscript, we
describe a hybrid molecular dynamics and Monte Carlo model for the simulation
of the early stage formation of a biofilm, to explicitly demonstrate that it
is possible to account for most of the processes expected to be relevant. The
simulations account for the growth and reproduction of the bacteria, for their
interaction and motility, for the synthesis of extracellular polymeric
substances and Psl trails. We describe the effect of these processes on the
early stage formation of biofilms, in two dimensions, and also discuss
preliminary three-dimensional results.
Biofilms are self-organized bacteria communities comprising the bacteria and a
matrix of extracellular polymeric substances (EPS) Vert _et al._ (2012).
Biofilms are arguably the most resilient form of life on Earth: they survive
in hot, salty, acidic and alkaline waters, as well as at extremely low
temperatures. Biofilms colonize their host environment, including humans,
in which case they are frequently the cause of persistent infections. Their
resilience mainly originates from the EPS matrix, which might account for up
to 90% of the dry biofilm weight. Besides allowing for a spatial and social
supracellular organization Flemming _et al._ (2016), the matrix provides a
physical scaffold that keeps the cells together and protects them from
antimicrobial compounds (antibiotics) Organization _et al._ (2014). EPS also
play a prominent role in the early stage biofilm formation, by promoting the
attachment of bacteria on surfaces Berne _et al._ (2018).
The societal need for research on biofilms is enormous. Biofilms grow on the
surface of teeth, causing dental plaque Marsh (2006). More worryingly, they
grow on medical devices Francolini and Donelli (2010) such as prosthetic heart
valves, orthopaedic devices, skull implants, and might trigger virulent
rejection reaction. Pseudomonas aeruginosa, for example, can enter the blood
circulation Micek _et al._ (2005) through open wounds to infect organs of the
urinary and respiratory systems. In a different context, biofilms cause
billions of dollars in damage to metal pipes in the oil and gas industry Xu
and Gu (2015); Ashraf _et al._ (2014). Sulfate-reducing bacteria Enning and
Garrelfs (2014), for example, transform molecular hydrogen into hydrogen
sulfide which, in turn, produces sulfuric acid that destroys metal surfaces
causing catastrophic failures. In the water supply system, biofilm can grow in
pipes, clogging them due to their biomass Mazza (2016). It is of enormous
interest to develop surfaces to which bacteria are not able to attach. To
date, no surface able to reliably inhibit the formation of biofilms is known
Mon (1995).
On the other hand, one might also tame biofilms to benefit from them. For
example, we could exploit biofilms in environmental biotechnology, e.g., in
wastewater treatment Lazarova and Manem (1995), or for in situ immobilization
of heavy metals in soil Flemming _et al._ (1996). Biofilms naturally grow by
consuming organic materials in the fluid. Microorganisms (typically bacteria
and fungi) can be used for microbial leaching, e.g., to extract metals from ores.
Copper, uranium, and gold are examples of metals commercially recovered by
microorganisms Mazza (2016).
The life cycle of a biofilm is traditionally described as consisting of five
phases: reversible attachment, irreversible attachment, growth, maturation and
dispersion. The first three phases identify the early-stage biofilm formation.
Understanding this phase is of particular interest, as it might allow for the
design of mechanisms able to prevent the formation of a biofilm. There is
mounting evidence that mechanical forces play a crucial role in this stage
Allen and Waclaw (2019), affecting the growth dynamics as
bacteria diffuse on the surface to be colonized, interacting among themselves
and with a chemical environment affected by their secretions. These include
EPS, and in particular, Psl exopolysaccharide, which promotes surface
attachment.
The observation that biofilms are formed by different bacteria and bacterial
strains, under highly variable external conditions, suggests that schematic
models could provide critical insights into biofilm formation. Indeed, several
models have been introduced in the literature Rudge _et al._ (2012); Winkle
_et al._ (2017); Mattei _et al._ (2018), e.g. to investigate biofilm jamming
Delarue _et al._ (2016), nematic ordering Dell’Arciprete _et al._ (2018);
Acemel _et al._ (2018), role of psl trails Zhao _et al._ (2013), nutrient
concentration Rana _et al._ (2017), phase separation Ghosh _et al._ (2015),
front propagation Farrell _et al._ (2017).
In this manuscript, we introduce a flexible computational model for the
investigation of the early-stage biofilm formation. As in previous models, we
describe a biofilm as a collection of growing and self-replicating rod-shaped
particles. We do, however, also consider the role of Psl trails reproducing
previous experimental results Zhao _et al._ (2013), and model for the first
time the growth of an EPS matrix. The article is structured as follows. In Sec.
I we introduce the numerical model, detailing all of the features we consider
as well as those we decided to neglect. We then examine the behavior of the
model, investigating different scenarios in increasing order of complexity:
Growth of non-motile cells, Sec II; competition between growth rate and
motility, Sec. III; multi-species biofilms, Sec. IV; role of Psl trails, Sec.
V; formation of the EPS matrix, Sec. VI. We conclude by discussing the
transition from two- to three-dimensional colonies, Sec. VII, and future
research directions.
## I Numerical model
Modelling the biofilm early-stage formation is a challenging task, as one
needs to account for several biological and out-of-equilibrium processes. The
microscopic model also needs to be supplemented by several parameters, e.g. to
describe motility, reproduction, EPS production, etc. We describe in the
following the main features of the computational model we have implemented.
While the model is general, we have calibrated the values of its many
parameters by referring to previous experimental investigation of the pathogen
Pseudomonas aeruginosa, whenever possible.
We describe in the following the implementation of different features of the
model, in order of complexity, which are schematically illustrated in Fig. 1.
Figure 1: Schematic illustration of the considered model. a) Bacteria are
modeled as collections of particles. Isolated bacteria undergo a
run-and-tumble motion, which we realize by adding a propelling force and a
torque in a viscous background. b) Consecutive particles making up a bacterium
interact via a harmonic spring of rest length $l_{0}$. We model bacterial
growth by making $l_{0}$ time dependent. A bacterium reproduces when its size
doubles. c) Bacteria may deposit a Psl trail (red dots) as they move on the
surface. These immobile Psl particles attract the particles making up a
bacterium, effectively exerting a net force and torque. Because of this,
moving bacteria preferentially follow existing Psl trails. d) Bacteria may
produce EPS, which we model as small particles. Permanent bonds are formed
between the EPS particles, and between the EPS particles and those making up
the bacteria. This polymerization process leads to the formation of an EPS
matrix.
### I.1 Isolated non-reproducing bacterium
We model a bacterium as a spherocylinder, which we construct by lumping
together $7$ point particles. Point particles of different bacteria interact
via a Weeks-Chandler-Andersen (WCA) potential. This is a Lennard-Jones
potential with energy scale $\epsilon$ and diameter $\sigma$, which we cut at its
minimum $d_{\rm b,b}=2^{1/6}\sigma$. This distance fixes the transverse width
of the bacteria that, in our units, is $w=d_{\rm b,b}=0.6\mu$m. Consecutive
particles of a bacterium interact via a harmonic spring with stiffness $k_{\rm
b}=250\epsilon/w^{2}$ and initial rest length $l_{0}$, which we fix so that the
bacterium aspect ratio is $[(n-1)l_{0}+w]/w=3$. These values for the size of a
bacterium mimic those of Pseudomonas aeruginosa. Bending rigidity is provided
by introducing harmonic angular interactions, with rest angle $\pi$ and stiffness
$k_{\rm a}=20\epsilon$, between any three consecutive particles. The value of
the stiffness coefficient is high enough for the bending deformation of the
bacteria to be negligible, for the range of parameters we will consider.
We assume the bacteria to follow an overdamped dynamics, which we realize by
applying to each particle making up a bacterium a viscous force $-\gamma v$
proportional to its velocity. Here $\gamma$ is a viscous friction coefficient.
We further assume the bacteria to perform a run and tumble motion. During a
‘run’ period, whose duration is a random number drawn from an exponential
distribution with time constant $t_{\rm run}=3$ min, we apply to the particles
making up a bacterium a force $F=\gamma v_{\rm run}$, where $v_{\rm run}=0.12\,\mu$m/s
is the velocity of the particles in the running state. During a ‘tumble’
period, whose duration is a random number drawn from an exponential
distribution with time constant $t_{\rm tumble}=0.5$ min, we apply to the
bacterium a torque $T$, which fixes a rotational velocity. The equations of
motion are solved with a Verlet algorithm with timestep $5\cdot 10^{-3}s$. The
dynamical properties of a bacterium depend on the species, the mutant, as well
as on the experimental conditions. The values described above reasonably
reproduce the time dependence of the mean square displacement curves of Ref.
Conrad _et al._ (2011), measured in the early stage of formation of P.
aeruginosa biofilms. In particular, the resulting diffusion coefficient is
$D\simeq 0.7\,\mu\mathrm{m}^{2}/\mathrm{s}$.
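A kinematic sketch of this run-and-tumble dynamics with the quoted time constants follows; it reduces the force-based overdamped implementation above to its essentials (ballistic runs, orientation-randomizing tumbles):

```python
import numpy as np

# Kinematic run-and-tumble sketch with the quoted parameters: runs of mean
# duration t_run = 3 min at v_run = 0.12 µm/s, tumbles of mean duration
# t_tumble = 0.5 min that randomize the orientation. This simplifies the
# force-based overdamped dynamics of the model to its essential kinematics.
rng = np.random.default_rng(2)
t_run, t_tumble, v_run = 180.0, 30.0, 0.12   # s, s, µm/s
pos, theta, t = np.zeros(2), 0.0, 0.0
while t < 3600.0:                            # simulate one hour
    dt = rng.exponential(t_run)              # run: ballistic displacement
    pos += v_run * dt * np.array([np.cos(theta), np.sin(theta)])
    t += dt + rng.exponential(t_tumble)      # tumble: no net displacement
    theta = rng.uniform(0.0, 2.0 * np.pi)    # new random orientation
```

Averaging the mean square displacement of many such walkers over long times yields the effective diffusion coefficient of the motion.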
### I.2 Growth and reproduction
We model the growth of a bacterium by making the rest lengths of the springs
connecting the beads making up the bacterium time dependent. Precisely, the
rest lengths grow linearly in $\min(t-t_{\rm b},1.2t_{r})$, where $t$ is the
current time and $t_{\rm b}$ the time of birth of the bacterium, with a growth
rate set such that an isolated bacterium doubles its length in $t_{r}$, where for each
bacterium $t_{r}$ is taken from an exponential distribution with mean $\langle
t_{r}\rangle=1$h. The maximum value of the rest length has a cutoff to avoid
the unbounded growth of the pressure of a bacterium not able to grow, e.g. as
in a dense environment. A bacterium reproduces when its length equals twice
the original one. We implement the reproduction by replacing a bacterium with
two daughter cells, which occupy the same volume as the original one. The
polarity of the daughter cells is that of their parent.
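A minimal sketch of this growth-and-division rule, tracking only the spring rest length; for clarity $t_{r}$ is fixed here rather than drawn from an exponential distribution:

```python
# Sketch of the growth-and-division rule: the spring rest length grows
# linearly after birth, capped at 1.2 t_r to bound the pressure of jammed
# cells; a cell whose rest length has doubled is replaced by two daughters.
# For clarity, t_r is fixed here instead of exponentially distributed.
l0, t_r = 0.6, 3600.0   # initial rest length (µm), division time (s)

def rest_length(t_birth, t):
    tau = min(t - t_birth, 1.2 * t_r)       # capped linear growth
    return l0 * (1.0 + tau / t_r)           # doubles at t - t_birth = t_r

def divide(cells, t):
    """cells: list of birth times; return the list after divisions at time t."""
    out = []
    for tb in cells:
        if rest_length(tb, t) >= 2 * l0:
            out += [t, t]                   # two daughters born at time t
        else:
            out.append(tb)
    return out
```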
### I.3 Psl exopolysaccharide trails
When moving on a surface, bacteria may secrete Psl exopolysaccharide. Psl
promotes attachment, effectively acting as a glue Zhao _et al._ (2013).
Describing this process requires keeping track of the spatial locations visited
by the moving bacteria. From a computational viewpoint, we do that by
superimposing a square grid on the computational domain, with grid size
$l\simeq w/20$, where $w$ is the width of the bacteria. As the bacteria move
on the surface, we record how many times each cell is visited. Specifically,
considering our coarse-grained description of the bacteria as a collection of
particles, we focus on the position of the central one. We indicate with
$n_{v}({\bf r},t)$ the number of times the grid cell in ${\bf r}$ has been
visited; this number originates from the superimposition of the trails left by
all bacteria. We assume $n_{v}({\bf r},t)$ to be proportional to the amount of
Psl deposited by the bacteria in ${\bf r}$.
To model the interaction between the bacteria and the trail pattern, we add to
the energy of our model the following term:
$V_{\rm trail}(t)=\sum_{b}\sum_{r_{i}\in b}\sum_{r}n_{v}({\bf r},t)v_{\rm
Gauss}({\bf r}-{\bf r_{i}}),$ (1)
where the first sum runs over all bacteria, the second one over the particles
of a bacterium, and the third one over the cells of the grid we use to record
the trail pattern. The interaction between each cell element and each particle
of our bacteria is given by an attractive potential, whose amplitude is
proportional to the number of times the grid element has been visited. We
model this attractive potential with an attractive Gaussian potential $v_{\rm
Gauss}$, with a width equal to half of the bacterial width. Notice that the
trail interaction acting on each bacterium exerts a torque, whose net effect
is that of aligning the bacteria to the trail.
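A sketch of the trail bookkeeping and of the attraction (1) follows. The grid spacing and Gaussian width follow the text, but the sparse-dictionary data layout and the amplitude are illustrative implementation choices:

```python
import numpy as np

# Sketch of the Psl-trail bookkeeping and of the attraction of Eq. (1): a
# visit counter n_v on a square grid of spacing l = w/20, and an attractive
# Gaussian potential of width w/2 between each visited cell and a particle.
# The sparse dictionary layout is an illustrative implementation choice.
w = 0.6                 # bacterial width (µm)
l = w / 20              # grid spacing
gwidth = w / 2          # Gaussian width
n_v = {}                # sparse visit counter keyed by grid-cell index

def deposit(r):
    """Record one visit of the bacterium's central particle at position r."""
    key = (int(r[0] // l), int(r[1] // l))
    n_v[key] = n_v.get(key, 0) + 1

def v_trail(r, eps=1.0):
    """Attractive (negative) trail potential felt by a particle at r."""
    v = 0.0
    for (i, j), n in n_v.items():
        cell = np.array([(i + 0.5) * l, (j + 0.5) * l])
        d2 = float(np.sum((np.asarray(r) - cell) ** 2))
        v -= n * eps * np.exp(-d2 / (2 * gwidth ** 2))
    return v
```

Because the potential is lower on a deposited trail than away from it, a moving bacterium is pulled toward, and torqued into alignment with, existing trails.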
In this model, the interaction potential is characterised by a typical energy
scale, $\epsilon$. We do not find literature data discussing the strength of
this interaction. Also, the rate of which bacteria deposit Psl has not been
discussed in the literature. Nevertheless, we understand that if bacteria
deposit Psl too frequently, and if the attraction is too strong, then the
bacteria will quickly bind to the deposited Psl, and will stop diffusing Tsori
and De Gennes (2004); Sengupta _et al._ (2009). This self-trapping appears to
be unrealistic. On the other hand, if the deposition rate is too small, then
the bacteria deposit Psl in uncorrelated locations, not on a trail. This
scenario also appears unrealistic. We have, therefore, arbitrarily chosen
simulation parameters for which the concept of a trail is well defined.
### I.4 Extracellular Polymeric Substances
EPS production is essential to the growth of biofilm in vivo, as it bridges
bacteria cell together and to the hosting surface Xiao and Koo (2009). In the
early stage formation, EPS production appears to cooperate with bacterial
motility, e.g. twitching motility Conrad _et al._ (2011), as bacteria need to
be close in space to agglomerate. Indeed, motility suppression may hinder the
formation of microcolonies and biofilms Recht _et al._ (2000), at least if
the bacteria do not explore their environment via other physical processes,
e.g. diffusion or drift in a flow.
The theoretical and numerical description of the role of EPS is arduous and
limited. Here, we develop a numerical model for EPS along the lines of the
only literature work we are aware of that explicitly models EPS particles,
Ghosh _et al._ (2015), but we also introduce substantial advancements. Considering EPS
as polymer coils, Ref. Ghosh _et al._ (2015) has modelled EPS as point
particles interacting via a purely repulsive potential. These particles have
been considered as passive and not able to form bonds to give rise to an EPS
matrix. In this condition, EPS and bacteria have been found to phase separate,
a result rationalized invoking a depletion-like interaction Ghosh _et al._
(2015). Regardless, the features of the observed phase separation depend on
the rate at which EPS particles are produced. More recent results have also
highlighted the interplay between motility and depletion-like interactions
Porter _et al._ (2019).
The main novelty of our approach is in the introduction of a polymerization
dynamics, allowing EPS particles to bond among themselves and with the
bacteria, to create an EPS matrix. Specifically, we describe EPS particles and
their dynamics as follows:
1.
Extracellular polymeric substances (EPSs) are represented as small spheres,
whose size is half of the width of the bacteria, $\sigma_{\rm eps}=w/2$.
2.
EPS particles interact among them with a purely repulsive WCA potential, with
energy scale $\epsilon$, as the particles of different bacteria.
3.
EPS particles are inserted by the bacteria in their surroundings, at a rate
$\tau_{\rm eps}^{-1}$. An EPS particle is inserted only if it does not
interact with any other particle or bacterium. This ensures numerical
stability. Hence, EPS production is suppressed in crowded conditions.
4.
Every $\Delta t$, where $\Delta t$ is a random variable taken from an
exponential distribution with average value $\Delta_{t}^{*}$, we look for all
possible pairs of interacting EPS particles. If two EPS particles are
interacting, we add a harmonic bond $v(r)=10^{2}\epsilon(r-\sigma_{\rm
eps})^{2}$ between them, provided that they are not already bonded, with a
probability $p_{b}$.
5.
Similarly, every $\Delta t$ we add a bond between an EPS particle and a
bacterial particle that it interacts with, provided that they are not already bonded,
with probability $p_{b}$. In this case, the bond energy is
$v(r)=10^{2}\epsilon\left[r-\left(\frac{\sigma_{\rm
eps}+w}{2}\right)\right]^{2}$. (16)
The steps 1-3 above essentially reproduce the model of Ref. Ghosh _et al._
(2015). On the other hand, steps 4-5 describe the dynamics of a polymerization
process. The ratio between the mass $m_{\rm eps}$ of an EPS particle and the
mass $M$ of a bacterium is $m_{\rm eps}/M\ll 1$. The motion of EPS particles
follows a Langevin dynamics, with parameters fixed so that a particle has
thermal velocity $\sqrt{2k_{B}T/m_{\rm eps}}=0.18\,\mu$m/s, and a diffusion
coefficient roughly 100 times smaller than that of bacteria in dilute
conditions. This means that
the bacteria de-facto move in a bath of almost immobile EPS particles.
The EPS model has two parameters, $\Delta_{t}^{*}$ and $p_{b}$, and the rate
at which bonds are formed between eligible pairs of particles is
$p_{b}/\Delta_{t}^{*}$. It is not easy to estimate these parameters from the
experiments. Besides, we notice that the EPS production rate depends on the
growing condition. Here, we decided to fix $\Delta_{t}^{*}=1$min
$=\tau_{r}/60$, and have investigated the dependence of the growing dynamics
on the bond probability $p_{b}$. We consider the bond between bacterial and
EPS particles to be permanent.
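The stochastic bonding steps 4-5 above can be sketched as follows. The $O(N^{2})$ pair search and the set-of-index-pairs bookkeeping are illustrative choices, not the model's actual implementation:

```python
import numpy as np

# Sketch of the stochastic bonding step (items 4-5): every interacting pair
# of EPS particles that is not already bonded gains a permanent bond with
# probability p_b. "Interacting" means closer than the WCA cutoff. The O(N^2)
# pair search and the set-of-index-pairs bookkeeping are illustrative.
rng = np.random.default_rng(3)
sigma_eps = 0.3
cutoff = 2 ** (1 / 6) * sigma_eps     # WCA interaction range

def bonding_step(pos, bonds, p_b):
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            if (i, j) in bonds:
                continue              # bonds are permanent, never redrawn
            if np.linalg.norm(pos[i] - pos[j]) < cutoff:
                if rng.random() < p_b:
                    bonds.add((i, j))
    return bonds
```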
### I.5 What is not in the model
This model takes into account all of the processes that appear to be relevant,
such as motility, reproduction, production of Psl trail, EPS matrix, etc.
Some features that we believe to be less relevant are neglected for now. For
instance, we neglect hydrodynamic interactions, which after the initial
docking of the bacteria should be minor, due to the small Reynolds number.
Indeed, bacteria swim in bulk with velocity $\simeq 30\,\mu$m/s, and on a
surface with velocity $\simeq 1\,\mu$m/s. The Reynolds number is
${\rm Re}=\frac{\rho_{f}vL}{\nu}$, where $\rho_{f}$ is the density of the fluid,
$\nu$ is its viscosity ($\nu=10^{-3}\,{\rm Pa\,s}$ for water), $v$ is the relative
velocity of the particle with respect to the fluid, and $L$ is the typical length
of a bacterium (around $1\,\mu$m). Thus, for bacteria swimming in bulk, the
Reynolds number is $\sim 3\times 10^{-5}$, and for bacteria on the surface,
the Reynolds number is $\sim 10^{-6}$. Bacterial motion is thus in a low
Reynolds number regime where viscous forces dominate over inertial ones.
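The two estimates quoted above follow directly from the numbers in the text:

```python
# Reynolds-number estimates quoted in the text: Re = rho_f v L / nu, with
# water (rho_f = 1e3 kg/m^3, nu = 1e-3 Pa s) and bacterial length L ~ 1 µm.
rho_f, nu, L = 1e3, 1e-3, 1e-6        # SI units

def reynolds(v):
    return rho_f * v * L / nu

re_bulk = reynolds(30e-6)             # swimming in bulk at ~30 µm/s
re_surface = reynolds(1e-6)           # moving on a surface at ~1 µm/s
```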
Furthermore, we do not consider the diffusion of nutrients and hence the
possibility that the growth rate and the motility properties might spatially
vary. In the early-stage formation, in which the biofilm is essentially
two-dimensional, we do not expect the diffusion of nutrients to be significantly affected
by the forming biofilm. Indeed, experimental results suggest that the growth
rate in the interior and the periphery of a biofilm are comparable Zachreson
_et al._ (2017).
## II Growth in the absence of motility
Figure 2: Growth of a colony of non-motile bacteria, imaged every $4$h. The
colour code reflects the angle between the bacteria and a fixed spatial
direction. Hence, patches with the same colour correspond to regions with the
same nematic director. See here for the corresponding animation.
We begin illustrating our model at work with the simplest possible example:
the growth of a colony of non-motile bacteria, in the absence of Psl and EPS.
In this scenario, we expect the number of bacteria to grow exponentially
with time. Saturation occurs at large times due to finite-size effects. This
jamming transition induced by reproduction has been considered before Delarue
_et al._ (2016).
We illustrate the expanding colony in Fig. 2, where a fixed time interval
separates consecutive snapshots. The number of bacteria $n$ present at each
time is specified in each panel. The direct visualization of the colony
suggests that the bacteria tend to align with each other. Nematic ordering is
indeed commonly observed in experiments Volfson _et al._ (2008);
Dell’Arciprete _et al._ (2018); Yaman _et al._ (2019); the order is short-
ranged due to the emergence of buckling instabilities Boyer _et al._ (2011).
To investigate this issue, we colour code each bacterium according to the
angle its director forms with a given axis (modulo $\pi$, given that in the
absence of motility the bacteria are not polar). As the colony grows, we see
the emergence of domains with the same colour, corresponding to regions of
local nematic alignment.
## III Motility vs. growth rate
Figure 3: Growth of microcolonies of bacteria having different typical
velocity $v_{\rm run}$ and fixed average reproduction time, $\tau_{r}=1$h. See
these links for the corresponding animations: slow, medium, fast.
The motility properties of bacteria are highly variable. Different species
have different motility properties, and for each species, motility depends on
the mutant, e.g. on the presence of type-IV pili or of flagella. Besides,
motility depends on the external environment, e.g. on the presence of
nutrients. Because of this variability, it is interesting to consider the
dependence of the early-stage formation on the motility properties within our
numerical model.
Here we consider that, once a bacterium adheres to the surface and seeds a
microcolony, the subsequent evolution depends on the competition of two
physical processes, reproduction and motility. To clarify the origin of this
competition, we start by considering the time dependence of the radius of a
microcolony, assuming the bacteria to have no motility. In this condition, a
colony expands as bacteria duplicate and push against each other.
To model this situation, we assume the colony to have a constant number
density $\rho$ (number of bacteria per unit area), so that the number $n$
of bacteria in a colony of radius $R$ is $n(R)=\rho\pi R^{2}$. How does $R$
evolve with time? To predict $R(t)$, we assume the bacteria to reproduce at
a constant rate $\tau_{r}^{-1}$, so that $\frac{dn}{dt}=\frac{n}{\tau_{r}}$.
From this assumption, we get
$\frac{n}{\tau_{r}}=\frac{dn}{dt}=2\pi\rho R\frac{dR}{dt}.$ (2)
Hence, the radial expansion velocity of the colony is
$v_{R}=\frac{dR}{dt}=\frac{R}{2\tau_{r}}.$ (3)
Interestingly, this model predicts that the expansion velocity grows linearly
with the cluster size. One might expect this to occur in the early-stage
development of a microcolony. At later times, the bacteria deep inside the
colony stop reproducing because of the limited nutrient diffusing to the core
or because of the high mechanical pressure.
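Equation (3) implies exponential growth of the colony radius, $R(t)=R_{0}e^{t/2\tau_{r}}$; a short numerical check of this solution, in illustrative model units:

```python
import math

def colony_radius(R0, tau_r, t, dt=1e-3):
    """Forward-Euler integration of dR/dt = R / (2 * tau_r), the radial
    expansion law of a reproducing, non-motile colony."""
    R = R0
    for _ in range(int(round(t / dt))):
        R += dt * R / (2.0 * tau_r)
    return R

tau_r = 1.0                                # reproduction time, model units
R_num = colony_radius(1.0, tau_r, t=2.0)   # numerical solution
R_exact = math.exp(2.0 / (2.0 * tau_r))    # analytical: R0 * exp(t / 2 tau_r)
```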
If the bacteria are motile, then another typical velocity scale enters the
problem: the characteristic bacterial velocity $v_{\rm run}$. It turns out
that $v_{R}$ and $v_{\rm run}$ compete. Precisely, when $v_{\rm run}\gg
v_{R}$, bacteria swim away from each other before they reproduce. Conversely,
when $v_{\rm run}\ll v_{R}$, they reproduce while still close to each other.
Since $v_{R}$ grows with the colony size, there is a characteristic colony
radius $R\simeq 2v_{\rm run}{\tau_{r}}$ above which the radial velocity due to
reproduction overcomes the swimming velocity of the bacteria. When this
occurs, the colony starts becoming compact.
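A quick check of this crossover, in arbitrary model units: at $R=R^{*}=2v_{\rm run}\tau_{r}$, the expansion velocity $v_{R}=R/(2\tau_{r})$ equals the run speed.

```python
def crossover_radius(v_run, tau_r):
    """Radius R* = 2 * v_run * tau_r above which the reproduction-driven
    expansion velocity v_R = R / (2 * tau_r) exceeds the run speed."""
    return 2.0 * v_run * tau_r

v_run, tau_r = 5.0, 1.0          # illustrative model units
R_star = crossover_radius(v_run, tau_r)
v_R_at_star = R_star / (2.0 * tau_r)   # equals v_run by construction
```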
As an example, we illustrate in Fig. 3 the development of three different
microcolonies, which only differ in the magnitude of the typical bacterial
velocity. At small velocities, the microcolony is nearly compact at all times.
At intermediate velocities, bacteria spread on the surface at short times, as
apparent in the configuration reached at $8\tau_{r}$, but then become part of
a dense microcolony. At even larger velocities, a compact shape is attained at
a longer time, possibly not yet achieved in our simulation with
$v_{\rm run}=50$.
It is interesting to notice that, in this picture, a compact colony emerges
when the reproduction rate dominates over the motility of the particles. In
this respect, while microcolony formation visually resembles the activity-
driven phase separation of active systems of spherical Redner _et al._
(2013); Wysocki _et al._ (2014); Fily and Marchetti (2012); Buttinoni _et
al._ (2013); Palacci _et al._ (2013); Theurkauff _et al._ (2012); Ginot _et
al._ (2018); Nie _et al._ (2020a, b) or dumbbell-shaped Suma _et al._
(2014); Petrelli _et al._ (2018) particles, the underlying physical driving
force is different.
It is, however, arduous to assess the experimental relevance of these
findings. Indeed, one might expect that before a compact shape is attained,
the colony stops expanding in two dimensions and starts growing in the
vertical one. We discuss such a transition in Sec. VII. Besides, in the
picture we are considering, no bacteria in the planktonic state join the
colony, and no bacteria move from the colony to the planktonic state. We do
not consider these processes in our numerical model, although it would be
straightforward to include them, as the rates of attachment and detachment
have not yet been thoroughly characterized experimentally.
## IV Coexistence of different species
Figure 4: Early stage formation of a two-species biofilm. Blue bacteria (left
in the figures) are non-motile, while red bacteria (right in the figures) are
motile. The motile bacteria are faster in the bottom row. See here for an
animation.
Biofilms are often multispecies Røder _et al._ (2016). Our computational
model allows considering the coexistence of bacteria with different
properties. Here, as an example, we consider the coexistence of bacteria with
different motility properties.
Fig. 4 illustrates the growth of a colony of non-motile bacteria (blue, on the
left) and a colony of motile ones (red, on the right). In the top row, we
consider the case in which the colony of motile bacteria becomes compact
before the two colonies start interacting. Hence, when the two microcolonies
come into contact, both of them are compact. As a consequence, a sharp
interface between the two colonies develops. Notice that this interface is not
straight, but slightly curved. This curvature reflects the anisotropy of the
microcolony of non-motile bacteria, which is ellipsoidal at short times.
The bottom row of Fig. 4 illustrates a case in which the motile bacteria
are fast, so that when the two colonies start interacting, their microcolony
is not compact. In this case, the interface between the two colonies is rough.
A close look suggests that the interface might have a wavy appearance
reminiscent of the viscous-fingering Saffman–Taylor instability, which
develops when fluids with different viscosities are pushed against each other.
In this respect, we notice that such an instability has been reported at the
interface of cell populations growing at different rates Mather _et al._
(2010), and in a variety of other contexts Pica Ciamarra _et al._ (2005a, b).
## V The role of Psl
Figure 5: Early stage biofilm formation in the presence of Psl production. The
red lines are the trails left by the bacteria as they explore the surface.
Bacteria interact with the trails through an attractive force. The attraction
to a particular location in space is proportional to the number of times this
location has been visited by the bacteria. See here for the corresponding
animation.
While exploring a surface, bacteria may leave a Psl trail, to which other
bacteria are subsequently attracted. Psl trails thus resemble the pheromone
trails left by ants. The statistical features of the motion of particles
attracted by substances they secrete, generally known as reinforced random
walks, have been extensively investigated in the literature Allen and Waclaw
(2019). For the case of a single bacterium attracted to its own secreted
substance, for instance, Tsori and de Gennes Tsori and De Gennes (2004)
suggested the presence of self-trapping in one and two spatial dimensions, but
not in three. More recent numerical simulations indicate that there is no
self-trapping, but rather a prolonged sub-diffusive transient Sengupta _et
al._ (2009). Here, we consider the growth of a microcolony, seeded by a single
bacterium, in the presence of Psl production.
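A minimal lattice caricature of such a reinforced random walk (not the off-lattice model used in this work; the reinforcement rule and parameters are illustrative): the walker steps to a neighbouring site with probability proportional to one plus the number of previous visits to that site, mimicking attraction to its own trail.

```python
import random

def reinforced_walk(n_steps, alpha=1.0, seed=1):
    """Reinforced random walk on a square lattice: step probabilities are
    proportional to 1 + alpha * (previous visits to the target site).
    Returns a dict mapping site -> visit count."""
    rng = random.Random(seed)
    x, y = 0, 0
    visits = {(x, y): 1}
    for _ in range(n_steps):
        nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        weights = [1.0 + alpha * visits.get(p, 0) for p in nbrs]
        (x, y) = rng.choices(nbrs, weights=weights)[0]
        visits[(x, y)] = visits.get((x, y), 0) + 1
    return visits

visits = reinforced_walk(5000, alpha=2.0)
# strong reinforcement confines the walker to a compact, heavily revisited region
n_sites = len(visits)
```

Larger `alpha` concentrates the visits on fewer sites, which is the qualitative mechanism behind the self-trapping and sub-diffusive behaviour discussed above.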
In Fig. 5, we illustrate a representative time evolution of a bacterial
colony. Besides drawing the bacteria, we illustrate the corresponding trails,
which are clearly visible at short times, before the trails of different
bacteria overlap. Qualitatively, these results are analogous to those
experimentally reported in Ref. Zhao _et al._ (2013).
To be more quantitative, we have determined the time evolution of the
probability distribution of the number of times a particular spatial location
has been visited. Here, by location, we mean grid elements of side length
equal to 1/20th of the bacterial width. This visit-frequency distribution
compares favourably with experimental results. Fig. 6a,b presents
experimental results for this probability distribution Zhao _et al._ (2013);
Gelimson _et al._ (2016). The probability distribution decays as a power law,
with a large exponent that decreases as time evolves. In panel c of the same
figure, we present our numerical results for the same quantity. The numerical
model reproduces the experimental results well, both as concerns the presence
of a power-law decay in the probability distribution and as concerns the value
of the decay exponent and its time dependence.
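The distribution itself is computed mechanically from the per-site visit counts; a small helper (with a toy input) of the kind used for this analysis might look as follows.

```python
from collections import Counter

def visit_count_distribution(visits):
    """Probability P(k) that a visited site has been visited exactly k
    times, given a dict mapping site -> visit count (e.g. produced by a
    trail-tracking simulation)."""
    counts = Counter(visits.values())
    total = sum(counts.values())
    return {k: c / total for k, c in sorted(counts.items())}

# toy input: two sites visited once, two sites visited three times
dist = visit_count_distribution({(0, 0): 3, (0, 1): 1, (1, 0): 1, (1, 1): 3})
```

Plotting $P(k)$ versus $k$ on log-log axes then exposes the power-law decay discussed in the text.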
Figure 6: Experimental and numerical results for the time evolution of the
probability distribution of the number of times a point (pixel) has been
visited by a bacterium. Panels a and b report experimental results from Ref.
Gelimson _et al._ (2016) (with permission) and Ref. Zhao _et al._ (2013)
(with permission), respectively. Panel c illustrates the results of our
numerical model.
## VI EPS matrix
Since EPS has come into the focus of the research community only recently, the
current knowledge of its role in early-stage biofilm development pales when
compared to the extensive understanding of biofilm formation in the absence of
EPS production, in particular for non-motile bacteria. The role of EPS has not
been considered in the earlier literature Wingender _et al._ (1999), as
“traditionally, microbiologists used to study and to subculture individual
bacterial strains in pure cultures using artificial growth media. Under these
in vitro conditions, bacterial isolates did not express EPS-containing
structures or even lost their ability to produce EPS”. However, it is nowadays
clear that EPS is of fundamental importance, as it allows for a spatial and
social supracellular organization Flemming _et al._ (2016), while providing a
physical scaffold that keeps the cells together, protects them from
antimicrobial compounds and heavy metals Nadell _et al._ (2015), and can also
retain water Wingender _et al._ (1999). EPS also appears to play a prominent
role in early-stage biofilm formation, by promoting the attachment of
bacteria to surfaces Berne _et al._ (2018).
Figure 7: Effect of the bonding probability on the number of bacteria. Panel
(a) illustrates the time dependence of the number of bacteria on the surface.
Different curves refer to different values of the bonding probability,
$p_{b}$. Panel (b) shows the dependence of the asymptotic steady state number
of bacteria on the bonding probability $p_{b}$. The fitting line is an
exponential one, $n_{\infty}+(n_{0}-n_{\infty})e^{-p_{b}/p_{b}^{*}}$.
In our numerical model, two control parameters affect the role of EPS. First,
there is the rate at which individual bacteria secrete EPS particles in their
surroundings, provided that these new particles do not overlap with other EPS
particles or bacteria. We keep this rate at 1/60th of the reproduction rate.
Secondly, there is the probability $p_{b}$ that two EPS particles, or an EPS
particle and a bacterium, form a bond if close enough.
Here, we investigate the dynamics and the steady state as a function of the
bonding probability $p_{b}$. Fig. 7a illustrates the time dependence of the
number of bacteria for different values of $p_{b}$. At short times, $t<2$h,
the production of EPS does not quantitatively affect the dynamics, as the
different curves collapse onto each other. At larger times, the population
grows exponentially but then saturates. This saturation is not a finite-size
effect. This is a key result, as it clarifies that in the presence of EPS a
microcolony stops spreading in two dimensions. Indeed, we do expect a
transition towards a three-dimensional condition. Fig. 7b shows that the
asymptotic number of bacteria decreases exponentially with the bonding
probability. If $p_{b}$ is very high, growth stops with just a few bacteria on
the surface. This finding is reminiscent of early speculations for isolated
non-reproducing particles Tsori and De Gennes (2004).
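The fitting function of Fig. 7b interpolates between free growth at $p_{b}\to 0$ and a residual population $n_{\infty}$ at large $p_{b}$; a quick check of these limits with illustrative (not fitted) parameter values:

```python
import math

def n_steady(p_b, n0, n_inf, p_star):
    """Fitting form of Fig. 7b for the steady-state population:
    n(p_b) = n_inf + (n0 - n_inf) * exp(-p_b / p_star)."""
    return n_inf + (n0 - n_inf) * math.exp(-p_b / p_star)

# illustrative values only, not the fitted parameters of the paper
n0, n_inf, p_star = 1000.0, 10.0, 0.1

low = n_steady(0.0, n0, n_inf, p_star)    # p_b -> 0 recovers n0
high = n_steady(10.0, n0, n_inf, p_star)  # large p_b approaches n_inf
```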
Figure 8: Evolution of a system of bacteria (red) which produce EPS (blue).
The EPS particles can bond to each other, and to the bacteria, with
probability $p_{b}$. Different rows correspond to different values of the
bonding probability $p_{b}$, as indicated.
To rationalize these results, we provide snapshots illustrating the time
evolution of the investigated system in Fig. 8. In this figure, the columns
correspond to different times, the rows to different values of the bond
probability $p_{b}$, as indicated. In all cases, at long times, we do see the
formation of small clusters of bacteria. These bacteria are glued together
by the EPS particles. For small values of $p_{b}$, these clusters only
form when there are many EPS particles in the system. Conversely, for
larger values of $p_{b}$, a few EPS particles suffice to glue the bacteria
together. Bacteria are therefore self-trapped by the EPS particles they
produce Tsori and De Gennes (2004); Sengupta _et al._ (2009). The exponential
dependence of the number of bacteria on $p_{b}$ observed in Fig. 7b is not
simply recovered in a mean-field approximation, starting from rate equations
for the total number of bacteria and the number of trapped bacteria. Spatial
correlations, which are apparent in Fig. 8, therefore appear to play an
important role in determining the size of the final population.
## VII From two- to three-dimensional microcolonies
Figure 9: Evolution of a three-dimensional microcolony of non-motile bacteria.
The microcolony develops with the bacteria embedded in an EPS gel matrix.
All investigations reported so far have been restricted to the early-stage
formation of a biofilm, which is essentially a two-dimensional process.
However, biofilms then develop as structured three-dimensional aggregates.
Here, without the aim of being quantitative, we demonstrate that the numerical
approach we have developed is also able to describe this transition. To this
end, we extended the model to allow the bacteria to move in the vertical
direction.
In the absence of EPS, the transition from two- to three-dimensional colonies
has been suggested to originate from extrusion driven by the compression of
the cells Farrell _et al._ (2017); Grant _et al._ (2014), as in
epithelial cell tissues. In the presence of EPS, a different mechanism appears
to be at work. Indeed, while the bacteria are still on the plane, EPS
particles also move in the vertical direction, and their polymerization leads
to a three-dimensional network. The stress induced in this network by the
continuous growth and reproduction of the bacteria leads to upward forces
acting on the bacteria, which push them out of the horizontal plane. A small
tilt of a bacterium is enough to seed the transition from a two- to a three-
dimensional biofilm.
Fig. 9 illustrates the development of a three-dimensional biofilm for non-
motile bacteria. Clearly, the bacteria end up embedded in a growing EPS
matrix. We leave the quantitative investigation of three-dimensional growth
to future studies, also because of its high computational cost.
## VIII Conclusions
In this manuscript, we have illustrated a computational model for the
simulation of early-stage biofilm formation. The model reproduces results
reported in previous numerical studies, such as the emergence of local nematic
order, as well as the role of Psl trails. Our model, however, shows for the
first time that it is possible to describe in a numerical setting the
production of EPS and the resulting growth of the extracellular matrix, in a
coarse-grained fashion.
The main limitation of our model, and of related ones, appears to be the
presence of many parameters. Specifically, the issue concerns the absence of
proper experimental measurements of them, for most species. This makes a
quantitative comparison with experimental results difficult. Nevertheless, the
universality of the discussed phenomenology suggests that our model could
suffice to pinpoint the key physical processes at work in the early-stage
formation of a biofilm.
In this respect, our work suggests that not only the production of Psl trails
Zhao _et al._ (2013), but also that of EPS, might induce the formation of
microcolonies. Specifically, EPS leads to the formation of an extracellular
matrix which traps the bacteria in what are de facto microcolonies (see the
red regions in Fig. 8). Besides, we have observed for the first time that the
incipient EPS matrix appears to foster the transition from a two- to a three-
dimensional morphology.
## Acknowledgements
N.P. and M.P.C. acknowledge support from the Singapore Ministry of Education
through the Academic Research Fund MOE2017-T2-1-066 (S), and are grateful to
the National Supercomputing Centre (NSCC) for providing computational
resources.
## References
* Vert _et al._ (2012) M. Vert, Y. Doi, K.-H. Hellwich, M. Hess, P. Hodge, P. Kubisa, M. Rinaudo, and F. Schué, Pure and Applied Chemistry , 377 (2012).
* Flemming _et al._ (2016) H.-C. Flemming, J. Wingender, U. Szewzyk, P. Steinberg, S. A. Rice, and S. Kjelleberg, Nature Reviews Microbiology 14, 563 (2016).
* Organization _et al._ (2014) W. H. Organization _et al._ , _Antimicrobial resistance: global report on surveillance_ (World Health Organization, 2014).
* Berne _et al._ (2018) C. Berne, C. K. Ellison, A. Ducret, and Y. V. Brun, Nature Reviews Microbiology 16, 616 (2018).
* Marsh (2006) P. D. Marsh, BMC Oral Health 6, S14 (2006).
* Francolini and Donelli (2010) I. Francolini and G. Donelli, FEMS Immunology & Medical Microbiology 59, 227 (2010).
* Micek _et al._ (2005) S. T. Micek, A. E. Lloyd, D. J. Ritchie, R. M. Reichley, V. J. Fraser, and M. H. Kollef, Antimicrobial Agents and Chemotherapy 49, 1306 (2005).
* Xu and Gu (2015) D. Xu and T. Gu, J Microb Biochem Technol 7, 5 (2015).
* Ashraf _et al._ (2014) M. A. Ashraf, S. Ullah, I. Ahmad, A. K. Qureshi, K. S. Balkhair, and M. Abdur Rehman, Journal of the Science of Food and Agriculture 94, 388 (2014).
* Enning and Garrelfs (2014) D. Enning and J. Garrelfs, “Corrosion of iron by sulfate-reducing bacteria: New views of an old problem,” (2014).
* Mazza (2016) M. G. Mazza, Journal of Physics D: Applied Physics 49, 203001 (2016).
* Mon (1995) _Microbial Biofilms_, Biotechnology Research (Cambridge University Press, 1995).
* Lazarova and Manem (1995) V. Lazarova and J. Manem, Water Research 29, 2227 (1995).
* Flemming _et al._ (1996) H.-C. Flemming, J. Schmitt, and K. C. Marshall, in _Sediments and Toxic Substances_ (Springer Berlin Heidelberg, 1996) pp. 115–157.
* Allen and Waclaw (2019) R. J. Allen and B. Waclaw, Reports on Progress in Physics 82, 016601 (2019).
* Rudge _et al._ (2012) T. J. Rudge, P. J. Steiner, A. Phillips, and J. Haseloff, Biol 1, 35 (2012).
* Winkle _et al._ (2017) J. J. Winkle, O. A. Igoshin, M. R. Bennett, K. Josić, and W. Ott, Physical Biology 14, 055001 (2017).
* Mattei _et al._ (2018) M. R. Mattei, L. Frunzo, B. D’Acunto, Y. Pechaud, F. Pirozzi, and G. Esposito, Journal of Mathematical Biology 76, 945 (2018).
* Delarue _et al._ (2016) M. Delarue, J. Hartung, C. Schreck, P. Gniewek, L. Hu, S. Herminghaus, and O. Hallatschek, Nature Physics 12, 762 (2016).
* Dell’Arciprete _et al._ (2018) D. Dell’Arciprete, M. Blow, A. Brown, F. Farrell, J. S. Lintuvuori, A. McVey, D. Marenduzzo, and W. C. Poon, Nature communications 9, 4190 (2018).
* Acemel _et al._ (2018) R. D. Acemel, F. Govantes, and A. Cuetos, Scientific Reports 8, 5340 (2018).
* Zhao _et al._ (2013) K. Zhao, B. S. Tseng, B. Beckerman, F. Jin, M. L. Gibiansky, J. J. Harrison, E. Luijten, M. R. Parsek, and G. C. L. Wong, Nature 497, 388 (2013).
* Rana _et al._ (2017) N. Rana, P. Ghosh, and P. Perlekar, Physical Review E 96, 052403 (2017).
* Ghosh _et al._ (2015) P. Ghosh, J. Mondal, E. Ben-Jacob, and H. Levine, Proceedings of the National Academy of Sciences of the United States of America 112, E2166 (2015).
* Farrell _et al._ (2017) F. D. Farrell, M. Gralka, O. Hallatschek, and B. Waclaw, Journal of The Royal Society Interface 14, 20170073 (2017).
* Conrad _et al._ (2011) J. Conrad, M. Gibiansky, F. Jin, V. Gordon, D. Motto, M. Mathewson, W. Stopka, D. Zelasko, J. Shrout, and G. Wong, Biophysical Journal 100, 1608 (2011).
* Tsori and De Gennes (2004) Y. Tsori and P. G. De Gennes, Europhysics Letters 66, 599 (2004).
* Sengupta _et al._ (2009) A. Sengupta, S. Van Teeffelen, and H. Löwen, Phys. Rev. E 80, 031122 (2009).
* Xiao and Koo (2009) J. Xiao and H. Koo, Journal of Applied Microbiology 108, 2103 (2009).
* Recht _et al._ (2000) J. Recht, A. Martínez, S. Torello, and R. Kolter, Journal of bacteriology 182, 4348 (2000).
* Porter _et al._ (2019) M. K. Porter, A. P. Steinberg, and R. F. Ismagilov, Soft Matter 15, 7071 (2019).
* Zachreson _et al._ (2017) C. Zachreson, X. Yap, E. S. Gloag, R. Shimoni, C. B. Whitchurch, and M. Toth, Physical Review E 96, 042401 (2017).
* Volfson _et al._ (2008) D. Volfson, S. Cookson, J. Hasty, and L. S. Tsimring, Proceedings of the National Academy of Sciences 105, 15346 (2008).
* Yaman _et al._ (2019) Y. I. Yaman, E. Demir, R. Vetter, and A. Kocabas, Nature communications 10, 2285 (2019).
* Boyer _et al._ (2011) D. Boyer, W. Mather, O. Mondragón-Palomino, S. Orozco-Fuentes, T. Danino, J. Hasty, and L. S. Tsimring, Physical Biology 8, 026008 (2011).
* Redner _et al._ (2013) G. S. Redner, M. F. Hagan, A. Baskaran, and M. Fisher, Phys Rev Lett 110, 055701 (2013).
* Wysocki _et al._ (2014) A. Wysocki, R. G. Winkler, and G. Gompper, EPL (Europhysics Letters) 105, 48004 (2014).
* Fily and Marchetti (2012) Y. Fily and M. C. Marchetti, Physical Review Letters 108, 235702 (2012).
* Buttinoni _et al._ (2013) I. Buttinoni, J. Bialké, F. Kümmel, H. Löwen, C. Bechinger, and T. Speck, Phys. Rev. Lett. , 238301 (2013).
* Palacci _et al._ (2013) J. Palacci, S. Sacanna, A. P. Steinberg, D. J. Pine, and P. M. Chaikin, Science 339, 936 (2013).
* Theurkauff _et al._ (2012) I. Theurkauff, C. Cottin-Bizonne, J. Palacci, C. Ybert, and L. Bocquet, Physical Review Letters , 268303 (2012).
* Ginot _et al._ (2018) F. Ginot, I. Theurkauff, F. Detcheverry, C. Ybert, and C. Cottin-Bizonne, Nature Communications 9, 696 (2018).
* Nie _et al._ (2020a) P. Nie, J. Chattoraj, A. Piscitelli, P. Doyle, R. Ni, and M. P. Ciamarra, Physical Review Research 2, 23010 (2020a).
* Nie _et al._ (2020b) P. Nie, J. Chattoraj, A. Piscitelli, P. Doyle, R. Ni, and M. P. Ciamarra, Phys. Rev. E 102, 32612 (2020b).
* Suma _et al._ (2014) A. Suma, G. Gonnella, D. Marenduzzo, and E. Orlandini, EPL (Europhysics Letters) 108, 56004 (2014).
* Petrelli _et al._ (2018) I. Petrelli, P. Digregorio, L. F. Cugliandolo, G. Gonnella, and A. Suma, European Physical Journal E , 128 (2018).
* Røder _et al._ (2016) H. L. Røder, S. J. Sørensen, and M. Burmølle, Trends in microbiology 24, 503 (2016).
* Mather _et al._ (2010) W. Mather, O. Mondragón-Palomino, T. Danino, J. Hasty, and L. S. Tsimring, Phys. Rev. Lett. 104, 208101 (2010).
* Pica Ciamarra _et al._ (2005a) M. Pica Ciamarra, A. Coniglio, and M. Nicodemi, Physical Review Letters 94, 188001 (2005a).
* Pica Ciamarra _et al._ (2005b) M. Pica Ciamarra, A. Coniglio, and M. Nicodemi, Journal of Physics: Condensed Matter 17, S2549 (2005b).
* Gelimson _et al._ (2016) A. Gelimson, K. Zhao, C. K. Lee, W. T. Kranz, G. C. Wong, and R. Golestanian, Physical review letters 117, 178102 (2016).
* Wingender _et al._ (1999) J. Wingender, T. R. Neu, and H.-C. Flemming, in _Microbial Extracellular Polymeric Substances_ (Springer Berlin Heidelberg, Berlin, Heidelberg, 1999) pp. 1–19.
* Nadell _et al._ (2015) C. D. Nadell, K. Drescher, N. S. Wingreen, and B. L. Bassler, The ISME Journal 9, 1700 (2015).
* Grant _et al._ (2014) M. A. A. Grant, B. Waclaw, R. J. Allen, and P. Cicuta, Journal of The Royal Society Interface 11, 20140400 (2014).
# Proba-V-ref: Repurposing the Proba-V challenge
for reference-aware super resolution
Ngoc Long Nguyen1 Jérémy Anger1,2 Axel Davy1 Pablo Arias1 Gabriele Facciolo1
(1Université Paris-Saclay, CNRS, ENS Paris-Saclay, Centre Borelli, France
2Kayrros SAS)
###### Abstract
The PROBA-V Super-Resolution challenge distributes real low-resolution image
series and corresponding high-resolution targets to advance research on Multi-
Image Super Resolution (MISR) for satellite images. However, in the PROBA-V
dataset the low-resolution image corresponding to the high-resolution target
is not identified. We argue that in doing so, the challenge ranks the proposed
methods not only by their MISR performance, but mainly by the heuristics used
to guess which image in the series is the most similar to the high-resolution
target. We demonstrate this by improving the performance obtained by the two
winners of the challenge only by using a different reference image, which we
compute following a simple heuristic. Based on this, we propose PROBA-V-REF,
a variant of the PROBA-V dataset in which the reference image in the low-
resolution series is provided, and show that the ranking between the methods
changes in this setting. This is relevant to many practical use cases of MISR
where the goal is to super-resolve a specific image of the series, i.e. the
reference is known. The proposed PROBA-V-REF should better reflect the
performance of the different methods for this reference-aware MISR problem.
## 1 Introduction
Figure 1: The PROBA-V dataset (top) does not make any distinction between the
LR images. One of them was acquired at the same time as the target HR image
which is used for training and evaluation. The MISR methods need to determine
a reference without knowing which one corresponds to the target. We propose
PROBA-V-REF (bottom), a version of PROBA-V where the identity of the true
reference is known.
Earth monitoring plays an important role in our understanding of the Earth
systems, including climate, natural resources, ecosystems, and natural and
human-induced disasters. Some Earth monitoring applications require high-
resolution images, such as monitoring human activity or deforestation.
Lately, computational super-resolution is being adopted as a cost-effective
solution to increase the spatial resolution of satellite images [12, 2]. We
refer to [13, 15] for a comprehensive review of the super-resolution problem.
In general, approaches to image super-resolution can be classified into
single image super-resolution (SISR) and multi-image super-resolution (MISR).
Single image super-resolution has recently attracted considerable attention in
the image processing community [5, 8]. It is a highly ill-posed problem:
during the acquisition of the low-resolution (LR) images, some high-frequency
components are lost or aliased, hindering their correct reconstruction. In
contrast, MISR aims to recover the true details of the super-resolved (SR)
image by combining the non-redundant information in multiple LR observations.
In 2019, the Advanced Concepts Team of the European Space Agency (ESA)
organised a challenge [9] with the goal of super-resolving multi-temporal
images coming from the PROBA-V satellite. The challenge dataset consists of
sets of LR images acquired within a time window of one month over a set of
sites. For each site, a high-resolution (HR) target image is also provided. In
each sequence, one of the LR images was acquired on the same date as the HR
image. We call this image the true reference. Knowing the LR reference can
help produce a result that better matches the HR image, as there can be
significant changes between images taken at different dates. However, the
identity of these true reference images is not provided in the challenge.
Several teams participated in the challenge, and since it finished, a
“post-mortem” contest continues to benchmark new MISR methods. All these
works try to solve the problem without knowledge of the reference images. We
believe that the problem of MISR without a reference image is interesting and
could have several applications. However, in such a problem, the reference
image needs to be completely random, which is not the case in the PROBA-V
challenge, where, for example, a cloud-free LR image is more likely to be the
reference than a cloudy one. Such a bias introduces noise in the resulting
benchmark: a method might achieve good performance not because of a more
suitable architecture or training, but because of a better heuristic to select
the reference image.
On the other hand, reference-aware MISR is a relevant problem in itself.
Indeed, in many practical cases, the goal is to super-resolve a specific image
of the sequence (for example we might be interested in a specific date).
Although this problem is considerably easier, it is far from being solved. In
other domains, such as super-resolution of videos or bursts of images, the
standard definition of the MISR problem includes the reference image. Hence,
we are convinced that a variant of the PROBA-V dataset with the true reference
images would be valuable for the computer vision community.
In this work, we first demonstrate the impact of the heuristic used to select
the reference LR image in the PROBA-V challenge. We do this by improving the
performance obtained with the two winning methods of the contest simply by
replacing their reference images with a different one, chosen following a
simple heuristic. We then point out that the true reference image can be
recovered in the training and validation splits of the dataset by comparison
with the HR target, and propose PROBA-V-REF, a version of the PROBA-V dataset
with the true LR references. Finally, we retrain the first- and second-best
methods of the challenge on the proposed PROBA-V-REF dataset and show that the
ranking between them becomes inverted.
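The recovery of the true reference can be sketched as follows: downsample the HR target to LR resolution and pick the LR frame with the smallest discrepancy. This is a hypothetical illustration of the idea (the helper name, box downsampling, and MSE criterion are our assumptions); the actual procedure may differ, e.g. in the handling of status maps or radiometric normalization.

```python
import numpy as np

def find_true_reference(lr_images, hr, scale=3):
    """Guess which LR frame corresponds to the HR target by box-downsampling
    the HR image to LR resolution and picking the frame with the smallest
    mean squared error. Illustrative sketch, not the paper's exact method."""
    h, w = hr.shape
    # average over scale x scale blocks (384x384 -> 128x128 for PROBA-V)
    hr_ds = hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    errors = [np.mean((lr - hr_ds) ** 2) for lr in lr_images]
    return int(np.argmin(errors))

# synthetic check: frame 1 is a noisy copy of the downsampled HR image
rng = np.random.default_rng(0)
hr = rng.random((12, 12))
hr_ds = hr.reshape(4, 3, 4, 3).mean(axis=(1, 3))
lrs = [rng.random((4, 4)),
       hr_ds + 0.01 * rng.standard_normal((4, 4)),
       rng.random((4, 4))]
idx = find_true_reference(lrs, hr, scale=3)
```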
## 2 Related works
Lately, deep learning algorithms have proven successful in super-resolution.
However, these methods are data-hungry, and their performance heavily relies
on the quality and abundance of the training dataset.
The importance of training with realistic data was highlighted in [4] for SISR
algorithms. The authors of [4] proposed a dataset comprised of real pairs of
LR/HR images and showed that the models trained on it achieved much better
results than those trained on synthetic data [1].
Realistic MISR datasets are usually small and can only be used to test an MISR
algorithm (for example the MDSP dataset,
http://www.soe.ucsc.edu/~milanfar/software/sr-datasets.html). Most deep
learning MISR algorithms are trained on simulated data [6, 10]. It was not
until the publication of the PROBA-V dataset that the training of deep
learning MISR methods could be done on a real-world dataset. The PROBA-V
satellite is equipped with two types of cameras with different resolutions and
revisit times. This interesting setup opens the way to a supervised learning
of new MISR methods with real-world data.
However, a limitation of the PROBA-V dataset is that the reference image is
not identified, which limits its potential. Indeed, most traditional MISR
methods, such as shift-and-add, kernel regression [14], or polynomial fitting
[2], start by registering all the LR images to a common domain, which is
usually chosen to be that of one LR image in the series (typically the one we
are interested in super-resolving). The two top
performing methods of the PROBA-V challenge, DeepSUM [11] and HighRes-net [7],
also pick a specific LR image as an anchor for the reconstruction. DeepSUM
selects the LR image with the highest clearance as the reference for the
registration step. HighRes-net chooses the median of $9$ clearest LR images as
the reference in the fusion step.
## 3 Recovering the true LR reference
The PROBA-V dataset contains $566$ scenes from the NIR spectral band and $594$
scenes from the RED band. For each scene, there is only one HR image of
$384\times 384$ pixels and several LR images (from $9$ to $35$ images taken
over a period of one month) of $128\times 128$ pixels. The LR images in one
set can be very different due to change of illumination, presence of clouds,
shadows or ice/snow cover. A status map is provided to indicate which
pixels of an LR image are reliable for fusion. The “clearance score” of an
image is defined as the percentage of clear pixels in its status map. The
dataset is carefully hand-picked such that the LR images have at least $60\%$
clearance and the HR image has at least $75\%$ clearance. Within a 30-day period,
even if more than one HR image satisfies this condition, only the one with the
highest clearance is selected as the target. Since the PROBA-V dataset does
not make any distinction between the LR images, the MISR methods have to
produce some kind of average SR image. To help them recover the true details
on the SR image, we need the information of the true LR reference (see Fig.
1).
For each element of the training set, we retrieve the true LR reference by
determining the LR image that is the most “similar” to the HR. To this end,
a filtered version of the HR image, subsampled by a factor of $3$, is first computed.
Then, we align the LR frames with the downsampled HR using the inverse
compositional algorithm [3] and compute the pixel-wise root-mean-square error
between them. The true reference is chosen as the LR image that minimizes
this error. The computed indices of the true references for the PROBA-V dataset can
be found at https://github.com/cmla/PROBAVref.
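The selection procedure above can be sketched in a few lines of numpy. This is a simplified illustration, not the paper's exact implementation: the sub-pixel alignment with the inverse compositional algorithm [3] is omitted, plain block averaging stands in for the filtering-plus-subsampling step, and the function names are ours.

```python
import numpy as np

def downsample_hr(hr, factor=3):
    """Filter and subsample the HR image; simple block averaging is
    used here as a stand-in for the paper's low-pass filter."""
    h = hr.shape[0] - hr.shape[0] % factor
    w = hr.shape[1] - hr.shape[1] % factor
    return hr[:h, :w].reshape(h // factor, factor,
                              w // factor, factor).mean(axis=(1, 3))

def true_reference_index(lr_stack, hr, factor=3):
    """Index of the LR image with the smallest RMSE against the
    downsampled HR (the alignment step is omitted in this sketch)."""
    target = downsample_hr(hr, factor)
    errors = [np.sqrt(np.mean((lr - target) ** 2)) for lr in lr_stack]
    return int(np.argmin(errors))
```

For a $384\times 384$ HR image this yields a $128\times 128$ target that is directly comparable to the LR frames.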
## 4 Experiments
In this section, we demonstrate that the reference image is as important as
the technique used. Then we illustrate and discuss the benefit of the PROBA-V-
REF dataset for real-world applications.
For evaluating the quality of the reconstructions we adopt the “corrected
clear” PSNR (cPSNR) metric [9] introduced for the PROBA-V challenge. The
specificity of this metric is that it takes the status map of the ground-truth
HR into account and tolerates intensity biases and small pixel translations
between the super-resolved image and the target.
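As an illustration, a simplified version of such a metric can be written as follows. The exact constants of the official cPSNR (border crop, shift range, status-map handling) are defined in [9]; the values below are assumptions of this sketch.

```python
import numpy as np

def cpsnr(sr, hr, clear, max_shift=3):
    """Simplified corrected-clear PSNR: for every small integer shift,
    remove the mean intensity bias over clear pixels, compute the MSE
    on those pixels, and keep the best resulting PSNR."""
    h, w = hr.shape
    best = -np.inf
    sr_c = sr[max_shift:h - max_shift, max_shift:w - max_shift]
    for du in range(-max_shift, max_shift + 1):
        for dv in range(-max_shift, max_shift + 1):
            hr_c = hr[max_shift + du:h - max_shift + du,
                      max_shift + dv:w - max_shift + dv]
            m = clear[max_shift + du:h - max_shift + du,
                      max_shift + dv:w - max_shift + dv]
            diff = (hr_c - sr_c)[m]
            bias = diff.mean()               # intensity bias over clear pixels
            mse = np.mean((diff - bias) ** 2)
            best = max(best, -10.0 * np.log10(mse + 1e-12))
    return best
```

A globally biased or one-pixel-shifted reconstruction is thus not penalized, which is the intended behaviour of the challenge metric.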
### 4.1 Experimental settings
As mentioned earlier, the two top competitors of the PROBA-V challenge use a
specific LR image in the series as anchor.
DeepSUM [11] — is the winner of the challenge. It uses the LR with the highest
clearance as the reference. A registration step aligns all other images to the
reference.
HighRes-net [7] — achieved second place in the challenge. The median of
the $9$ clearest images is used as a shared representation for the multiple LR
images. Each LR image is embedded jointly with this reference image before
being recursively fused.
To show that the choices of the reference images by DeepSUM and HighRes-net
are suboptimal we retrain them from scratch using the true LR references (see
Sec. 3) and name these two adjusted methods DeepSUM-ref and HighRes-net-ref
respectively. Furthermore, we demonstrate that an SISR algorithm trained on the
true references can achieve a better score than DeepSUM and HighRes-net. To this
end, we introduce DeepSUM-SI, a version of DeepSUM modified to perform SISR by
replacing all input images with the true reference.
Tables 1 and 2 show the performances of these methods on the validation set
for the NIR spectral band, consisting of 170 scenes.
We consider different ways of choosing the reference on the validation set:
Similarity — is the true reference as computed in Sec. 3.
Highest clearance — chooses the LR view that has the best clearance score, as
in [11].
Median — takes the median of the $9$ clearest LR observations as the
reference, as in [7].
Heuristic — On the test set, the ground-truth HR images are not available, so we
use a heuristic to predict the reference images by minimizing the objective
function
$i_{\text{heur}}=\text{argmin}_{i}\,\Big\{\|\text{Mask}^{\text{LR}}_{i}-\text{Downscale}(\text{Mask}^{\text{HR}})\|_{1}+\alpha\left|\text{median}(\text{LR}_{i})-\text{median}(\text{LRset})\right|+\beta\,\text{clearance}(\text{LR}_{i})\Big\},$ (1)
where Mask designates the status map of an image, LRset is the set of input LR
images, and clearance is the sum of all clear pixels of an LR image. With this
heuristic we manage to guess the true references in more than $50\%$ of the
scenes of the training set. We set $\alpha=0.1$ and $\beta=0.3$ in our experiments.
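A direct implementation of this heuristic could look as follows. Block averaging for the Downscale operator and binary status maps are assumptions of this sketch; the function name is ours.

```python
import numpy as np

def heuristic_reference(lr_stack, lr_masks, hr_mask, alpha=0.1, beta=0.3):
    """Score each LR image with the three terms of Eq. (1) and return
    the index of the minimizer."""
    f = hr_mask.shape[0] // lr_masks[0].shape[0]       # resolution ratio
    h, w = hr_mask.shape
    down = hr_mask.reshape(h // f, f, w // f, f).mean(axis=(1, 3))
    med_set = np.median(np.stack(lr_stack))
    scores = [np.abs(m - down).sum()                   # status-map term
              + alpha * abs(np.median(lr) - med_set)   # median-intensity term
              + beta * m.sum()                         # clearance term
              for lr, m in zip(lr_stack, lr_masks)]
    return int(np.argmin(scores))
```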
Table 1: Average cPSNR (dB) over the validation dataset for DeepSUM and
HighRes-net. The original performance is highlighted in orange and the best
performances are highlighted in blue
| Methods | Training ref. | Eval.: Simil. | Eval.: Clearance | Eval.: Median | Eval.: Heuristic |
|---|---|---|---|---|---|
| DeepSUM | Clearance | $\mathbf{47.99}$ | $\mathbf{47.75}$ | $47.62$ | $47.87$ |
| HighRes-net | Median | $\mathbf{47.77}$ | $47.26$ | $\mathbf{47.48}$ | $47.57$ |
Table 2: Average cPSNR (dB) over the validation dataset for DeepSUM-SI,
DeepSUM-ref and HighRes-net-ref. For each method, the best performance is
highlighted in blue.
| Methods | Training ref. | Eval.: Simil. | Eval.: Clearance | Eval.: Median | Eval.: Heuristic |
|---|---|---|---|---|---|
| DeepSUM-ref | Similarity | $\mathbf{50.24}$ | $46.38$ | $46.69$ | $\mathbf{49.10}$ |
| HighRes-net-ref | Similarity | $\mathbf{50.49}$ | $46.35$ | $46.47$ | $\mathbf{49.29}$ |
| DeepSUM-SI | Similarity | $\mathbf{49.05}$ | $45.57$ | $45.85$ | $\mathbf{47.96}$ |
### 4.2 Discussion
Inspecting the results (Table 1), we observe that the two top competitors of
the PROBA-V challenge are affected by the type of reference images. Without
retraining, using the true references or even the “heuristic references”
systematically improves the results. In this setting, DeepSUM is better than
HighRes-net.
When trained with the true references (Table 2), DeepSUM-ref and HighRes-net-
ref are superior to the original DeepSUM and HighRes-net by a very large
margin ($2.49$ and $3.01$ dB). With the “heuristic references”, they can still
surpass the original methods by $1.35$ and $1.81$ dB respectively. We acknowledge
that, by using $\text{Mask}^{\text{HR}}$, this method does not follow the rules
of the contest. However, as a proof of concept, we submitted the results of
HighRes-net-ref with the “heuristic references” to the official post-mortem
PROBA-V challenge (https://kelvins.esa.int/PROBA-v-super-resolution-postmortem).
At the time of writing, the resulting method ranks second on the leaderboard
and significantly surpasses the performance of the original DeepSUM and
HighRes-net. Although this heuristic is based on the mask of the HR, it shows
the impact that the choice of the reference image can have on the results.
Furthermore, observe that in this situation where the true references are
provided, HighRes-net-ref is better than DeepSUM-ref. We can conclude that the
design of the challenge strongly affects its outcome.
On the other hand, the SISR algorithm DeepSUM-SI achieves much better results
than the MISR algorithm DeepSUM. This is due to the temporal variability
between LR observations. In some sense, networks trained without the knowledge
of the reference image have to deal with two different tasks: guessing the
reference and super-resolving that specific image using the complementary
information from other images in the set. Of course the guess is random (at
least among the LR images with high clearance), thus the network will predict
some sort of average SR image. Adding the information about the reference
helps the networks to focus on the super-resolution problem.
Figure 2: Examples of reconstruction by DeepSUM-ref and DeepSUM with different
references (in false color). The first line corresponds to crops of three
different LR images in a set. The second line and the third line show the
reconstruction by DeepSUM-ref and DeepSUM respectively when using each of
these three LR as the reference image.
To evaluate the impact of the reference on the results of DeepSUM and DeepSUM-
ref, we select three LR images taken on different days as the reference (see
Fig. 2). In each case, DeepSUM-ref faithfully recovers fine details in the SR
image. On the other hand, the vegetation cover in the outputs of DeepSUM is
inconsistent with that of the references: the reconstruction of DeepSUM
correlates poorly with the reference. Consequently, DeepSUM-ref is better
suited to practical uses of super-resolution, since we usually want to
super-resolve a specific image in a time series.
## 5 Conclusion
In this work, we have demonstrated that the PROBA-V challenge, by not
providing the true LR reference, evaluates not only the MISR performance of
the methods, but also the way in which the LR reference images are chosen. The
latter aspect is irrelevant in the many practical use cases where the reference
image is dictated by the application. To address this use case, we proposed
PROBA-V-REF, a variant of the dataset with the true reference images in the
training and validation splits. These were obtained by comparing the LR images
with a downscaled version of the ground-truth HR. We believe that, by using the
provided true LR images, future methods will be able to use this unique real
dataset to focus on the core problem of MISR: making the most out of the
complementary information in the LR images.
## Acknowledgements
This work was supported by a grant from Région Île-de-France. It was also
partly financed by IDEX Paris-Saclay IDI 2016, ANR-11-IDEX-0003-02, Office of
Naval research grant N00014-17-1-2552, DGA Astrid project « filmer la Terre »
no ANR-17-ASTR-0013-01, MENRT. This work was performed using HPC resources
from GENCI–IDRIS (grant 2020-AD011011801) and from the “Mésocentre” computing
center of CentraleSupélec and ENS Paris-Saclay supported by CNRS and Région
Île-de-France (http://mesocentre.centralesupelec.fr/).
## References
* [1] E. Agustsson and R. Timofte. Ntire 2017 challenge on single image super-resolution: Dataset and study. In CVPRW, 2017.
* [2] J. Anger, T. Ehret, C. de Franchis, and G. Facciolo. Fast and accurate multi-frame super-resolution of satellite images. ISPRS, 2020.
* [3] S. Baker and I. Matthews. Equivalence and efficiency of image alignment algorithms. In CVPR, 2001.
* [4] J. Cai, H. Zeng, H. Yong, Z. Cao, and L. Zhang. Toward real-world single image super-resolution: A new benchmark and a new model. In ICCV, 2019.
* [5] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. TPAMI, 2015.
* [6] B. Wronski et al. Handheld multi-frame super-resolution. ACM TOG, 2019.
* [7] M. Deudon et al. Highres-net: Recursive fusion for multi-frame super-resolution of satellite imagery. arXiv, 2020.
* [8] J. Kim, J. Kwon Lee, and K. Mu Lee. Deeply-recursive convolutional network for image super-resolution. In CVPR, 2016.
* [9] M. Märtens, D. Izzo, A. Krzic, and D. Cox. Super-resolution of proba-v images using convolutional neural networks. Astrodynamics, 2019.
* [10] E. M. Masutani, N. Bahrami, and A. Hsiao. Deep learning single-frame and multiframe super-resolution for cardiac mri. Radiology, 2020.
* [11] A. B. Molini, D. Valsesia, G. Fracastoro, and E. Magli. Deepsum: Deep neural network for super-resolution of unregistered multitemporal images. IEEE TGRS, 2019.
* [12] K. Murthy, M. Shearn, B. D. Smiley, A. H. Chau, J. Levine, and M. D. Robinson. Skysat-1: very high-resolution imagery from a small satellite. In Sens. Syst. Next-Gener. Satell., 2014.
* [13] K. Nasrollahi and T. B. Moeslund. Super-resolution: a comprehensive survey. Mach. Vis. Appl., 2014.
* [14] H. Takeda, S. Farsiu, and P. Milanfar. Kernel regression for image processing and reconstruction. IEEE TIP, 2007.
* [15] L. Yue, H. Shen, J. Li, Q. Yuan, H. Zhang, and L. Zhang. Image super-resolution: The techniques, applications, and future. Signal Process., 2016.
# Back–Projection Pipeline
Pablo Navarrete Michelini,1 Hanwen Liu,1 Yunhua Lu,1 Xingqun Jiang1
###### Abstract
We propose a simple extension of residual networks that works simultaneously
in multiple resolutions. Our network design is inspired by the iterative
back–projection algorithm but seeks the more difficult task of learning how to
enhance images. Compared to similar approaches, we propose a novel solution to
make back–projections run in multiple resolutions by using a data pipeline
workflow. Features are updated at multiple scales in each layer of the
network. The update dynamic through these layers includes interactions between
different resolutions in a way that is causal in scale, and it is represented
by a system of ODEs, as opposed to a single ODE in the case of ResNets. The
system can be used as a generic multi–resolution approach to enhance images.
We test it on several challenging tasks, with special focus on super–resolution
and raindrop removal. Our results are competitive with the state of the art and
show a strong ability of our system to learn both global and local image
features.
## Introduction
Image enhancement is the process of taking an impaired image as input and
returning an image of better quality. The current trend to achieve this target is
to learn a mapping between impaired and enhanced images using example data.
Deep learning is leading this fast–growing quest in a number of applications,
including: denoising (Lefkimmiatis 2018), deblurring (Tao et al. 2018),
super–resolution (Timofte et al. 2018), demosaicking (Kokkinos and Lefkimmiatis
2018), compression artifact removal (Lu et al. 2018), dehazing (Ancuti et al. 2018b),
deraining (Wang et al. 2019), raindrop removal (Qian et al. 2018a), HDR (Wu et al.
2018), and colorization (He et al. 2018). Progress in network architectures
often succeeds in image enhancement, as seen for example in image
super–resolution, with CNNs applied in SRCNN (Dong et al. 2014), ResNets (He
et al. 2016) applied in EDSR (Lim et al. 2017), DenseNets (Huang et al. 2017)
applied in RDN (Zhang et al. 2018d), attention (Hu, Shen, and Sun 2018)
applied in RCAN (Zhang et al. 2018a), and non–local attention (Wang et al.
2018) applied in RNAN (Zhang et al. 2019). In all these examples, arguably the
most influential practice is the use of residual networks (ResNets). Here, we
define the _network state_ as the internal representation of an image in a
network, commonly referred to as latent or feature space in the literature.
The idea of ResNets is to represent an impaired image as a network state and
progressively change it by adding residuals, as seen in Figure 1. This gives a
compositional hierarchy (Poggio et al. 2017) of progressive local processing
steps (e.g. convolutional layers) that transform the input image. The update
strategy of residual networks can be seen as a dynamical system where depth
represents time and a differential equation models the evolution of the
state (Liao and Poggio 2016).
Figure 1: Our system (BPP) works as a multi–scale ResNet with state updates
that interact with lower resolution states. Information travels forward in
depth and upwards in scale.
Our proposed system, a Back–Projection Pipeline (BPP), works as a residual
network that carries many (instead of one) resolution states at a given time
step as seen in Figure 1. Although similar in spirit to U–Nets (Ronneberger,
Fischer, and Brox 2015), this multi–resolution state is fundamentally
different. U–Nets hold high resolution states to re–enter the network in later
stages, whereas in BPP the state is created as initial conditions at multiple
resolutions and gets updated synchronously through the network. Another
distinctive property of BPPs is _scale causality_. Namely, after
initialization, low resolution states do not depend on higher resolution
states. Information travels forward in depth, same as in ResNets, and upwards
in scale, as shown in Figure 1. Scale causality is inspired by scale–space
(Lindeberg 1994) and multi–resolution analysis(Mallat 1998) to express the
nested nature of details. A simple example is that when we see an image of a
keyboard we expect to see letters, but not necessarily the other way around.
Finally, the interpretation of BPPs as an extension of ResNets becomes
clearer from the dynamic of the network. We will show that BPP updates can be
modeled by a non–autonomous system of differential equations, as opposed to a
single ODE for ResNets.
Related Work. With regard to applications, BPP gives us a generic
multi–resolution approach to transform images into a desired target. Current
benchmarks in image enhancement often use different architectures for
different tasks. It is important to distinguish between local and global
targets. In the problem of super–resolution, for example, we need to calculate
pixel values around a local area, and distant pixels become less relevant. In
a different problem, contrast enhancement, we want to change the histogram of
an image, which contains statistics that represent global features. General
image enhancement is gaining interest in research and has been considered in
the context of:
* •
_Mixed Local Problems_: In (Zhang et al. 2019), for example, the authors solve
denoising, super–resolution and deblurring tasks using a single architecture
with different parameters for each problem. In (Gharbi et al. 2016; Ehret et al.
2019) the authors solve joint demosaicking and denoising, and in (Qian et al.
2019) additionally super–resolution, all with a single architecture and the
same model parameters. In (Zhang, Zuo, and Zhang 2018) the authors tackle
super–resolution and deblurring, training a single system to handle different
image degradations.
* •
_Global and Local Problems_: The authors in (Soh, Park, and Cho 2019; Kim, Oh,
and Kim 2019; Kinoshita and Kiya 2019) consider the joint solution of
low–to–high dynamic range enhancement together with image SR. In (Kim, Oh, and
Kim 2019) the authors generate an image in HDR display format, whereas (Soh,
Park, and Cho 2019; Kinoshita and Kiya 2019) use the same input and output
format. Both use U–Net configurations, while (Soh, Park, and Cho 2019) uses a
two–stage Retinex decomposition network.
Regarding architecture, BPP uses a multi–resolution workflow, which is
different from U–Nets (Ronneberger, Fischer, and Brox 2015). This workflow
follows from the Iterative Back–Projection (Irani and Peleg 1991) (IBP)
algorithm. In this respect, Multi–Grid Back–Projection (Navarrete Michelini,
Liu, and Zhu 2019) (MGBP) is the closest super–resolution system that is
state–of–the–arts for lightweight systems with small number of parameters. It
is based on a multi–resolution back–projection algorithm that uses a multigrid
recursion(Trottenberg and Schuller 2001). This recursion violates
scale–causality as it sends network states back to low–resolution to restart
iterations. We also notice that BPP follows the wide–activation design in (Yu
et al. 2018) in the sense that features are increase before activations and
reduced before updating. BPP shows a workflow structure similar to the
Multi–scale DenseNet architecture in (Huang et al. 2018), except in the latter
scale–causality moves downwards in scale, it does not use back–projections and
it focuses on a label prediction problem. The WaveNet architecture (Oord et
al. 2016) also shares the property of scale causality but without
back–projections, moving information upwards in scale without any step back.
Finally, a similar causality and adaptation in the number of channels per
scale exists in the SlowFast architecture (Feichtenhofer et al. 2019) but
again without back–projections.
Contributions. Our major contribution is the introduction of a new network
architecture that extends ResNets from single to multiple resolutions, with a
clear representation in terms of its ODE dynamic. Our main focus is to evaluate
this extension and to show that it is beneficial compared to conventional
ResNets. We also verify that the multi–scale dynamic of the network is being
used to achieve improved performance and we visualize the dynamic of the
network in solving different problems. BPP can be used to solve joint local
problems, as well as combinations of global and local problems, obtaining
state-of-the-art results in image SR and competitive results for other problems
using a single network configuration. Finally, we also show empirical evidence
that BPP effectively uses both local and global information to solve problems.
Figure 2: Pipelining Iterative Back–Projections.

Figure 3: Back–Projection Pipeline network diagram. On the left, a detailed diagram shows all back–projection modules. On the right, the diagram is simplified by using _Flux_ units.

Algorithm 1 Back–Projection Pipeline (BPP)

_BPP_ $(input,L,D)$: (requires image $input$, integers $L\geqslant 1$, $D\geqslant 1$; returns image $output$)
1: $s^{A}_{L}=input$
2: for $k=L-1,\ldots,1$ do
3:  $s^{A}_{k}=Scaler^{A}_{k}(s^{A}_{k+1})$
4: end for
5: $x_{L}=Analysis^{A}_{L}(s^{A}_{L})$
6: for $k=1,\ldots,L-1$ do
7:  $x_{k}=Analysis^{A}_{k}(s^{A}_{k})$
8:  $s^{B}_{k}=Scaler^{B}_{k}(s^{A}_{k+1})$
9:  $p_{k}=Analysis^{B}_{k}(s^{B}_{k})$
10: end for
11: for $l=1,\ldots,D$ do
12:  $x,p=FluxBlock(x,p,L)$
13: end for
14: $output=input+Synthesis(x_{L})$

_FluxBlock_ $(x_{k},p_{k},L)$: (requires initial $x_{k},p_{k}$ for $k=1,\ldots,L$, integer $L\geqslant 1$; returns updated $x_{k},p_{k}$)
1: $e_{2},x_{1},\_=Flux(0,x_{1},p_{1})$
2: for $k=2,\ldots,L-1$ do
3:  $e_{k+1},x_{k},p_{k-1}=Flux(e_{k},x_{k},p_{k})$
4: end for
5: $\_,x_{L},p_{L-1}=Flux(e_{L},x_{L},0)$

_Flux_ $(e_{in},x_{in},p_{in})$: (returns $e_{out},x_{out},p_{out}$)
1: $c=x_{in}+e_{in}$
2: $e_{out}=Upscale([\,p_{in},c\,])$
3: $p_{out}=Downscale(c)$
4: $x_{out}=Update(c)$
## Architecture Design
In Figure 2 (a) we observe the workflow of the Iterative Back–Projections
(Irani and Peleg 1991) (IBP) algorithm:
$h^{0}=P\,x\,,\qquad h^{t+1}=h^{t}+P\,e(h^{t})\,,\qquad e(h^{t})=x-R\,h^{t}\,.$ (1)
IBP upscales an image $x$ with a linear operator $P$ and sends it back to
low–resolution to verify the downscaling model represented by a linear
operator $R$. Now, we propose to extend the IBP algorithm to multiple scales
by using the data pipeline approach shown in Figure 2 (b). Specifically, as
soon as we obtain the first upscaled image, we take it as a reference and start a
new upscaling to a higher resolution. Next, we downscale the second
high–resolution image to verify the downscaling model. However, the reference
image has been changed by the back–projection update at the lower level. At
the lowest resolution the image never changes, and upper level iterations need
to keep track of the lower level updates. In Figure 2 (b) we identify the
essential computational block to assemble the pipeline: the _Flux_ unit. The
_Flux_ unit is what makes scale travel possible by connecting input and output
images from different levels.
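With linear toy operators, the classical IBP iteration of (1) takes only a few lines. Nearest-neighbour upscaling for $P$ and block averaging for $R$ are illustrative choices of ours, not the operators used in the paper's experiments.

```python
import numpy as np

def P(x):    # toy linear upscaler (nearest neighbour, 2x)
    return np.kron(x, np.ones((2, 2)))

def R(h):    # toy linear downscaler (2x2 block average)
    H, W = h.shape
    return h.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def ibp(x, iters=20):
    """Iterative Back-Projection, Eq. (1): start from an upscaled guess
    and repeatedly add the back-projected low-resolution error."""
    h = P(x)
    for _ in range(iters):
        h = h + P(x - R(h))    # e(h^t) = x - R h^t
    return h
```

At a fixed point $R\,h=x$, i.e. the reconstruction is consistent with the observed low-resolution image, which is the verification role of the downscaling model described above.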
Network Architecture. Without loss of generality, we tackle the image
enhancement problem with an input resolution equal to the output resolution.
In the case of image SR, which requires increasing the image resolution, we add a
pre–processing stage where the input image is upscaled using a standard method
(e.g. bicubic). This makes the system more general across
applications. For example, we can easily solve the problem of fractional
upscaling factors(Hu et al. 2019) or multiple upscaling factors(Zhang, Zuo,
and Zhang 2018) by simply using different pre–processing bicubic upscalers.
The full BPP algorithm and network configuration is specified in Algorithm 1
and Figure 3. To extend the pipelining approach into a network configuration,
first, we initialize the network states $x_{k}$ and down–projections $p_{k}$
using linear downscalers and single convolutional layers in the _Analysis_
modules to increase the number of channels. Second, states are updated using
the _Flux–Blocks_ defined in Algorithm 1, calculating residuals $e_{k}$ and
updating states upwards in scale with flux units. Third, the output state in
the highest resolution is converted into a residual image by a convolutional
layer in the _Synthesis_ module and added to the input image.
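To make the dataflow of Algorithm 1 concrete, here is a numpy sketch of the _Flux_ unit and one _FluxBlock_ sweep, with level $k=1$ as the lowest resolution. The learned _Upscale_, _Downscale_ and _Update_ modules are replaced by toy linear stand-ins, so this only illustrates the tensor shapes and the scale-causal update order, not the trained network.

```python
import numpy as np

def up2(a):    # stands in for the learned Upscale module (2x)
    return np.kron(a, np.ones((2, 2)))

def down2(a):  # stands in for the learned Downscale module (2x)
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def flux(e_in, x_in, p_in):
    """Flux unit: c = x_in + e_in; the residual e_out travels one level
    up in scale, the projection p_out one level down."""
    c = x_in + e_in
    e_out = up2(p_in + c)      # toy stand-in for Upscale([p_in, c])
    p_out = down2(c)
    x_out = c                  # toy Update: identity
    return e_out, x_out, p_out

def flux_block(x, p):
    """One FluxBlock sweep over levels 1..L (0-indexed lists); states
    are updated upwards in scale, so level k never sees level k+1."""
    L = len(x)
    e, x[0], _ = flux(np.zeros_like(x[0]), x[0], p[0])
    for k in range(1, L - 1):
        e, x[k], p[k - 1] = flux(e, x[k], p[k])
    _, x[L - 1], p[L - 2] = flux(e, x[L - 1], np.zeros_like(x[L - 1]))
    return x, p
```

Running a sweep on states of sizes $4$, $8$ and $16$ preserves the per-level resolutions while residuals propagate upwards in scale.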
Network Dynamic. The restriction operators $R_{k}$ (_Downscale_ module) and
interpolation operators $P_{k}$ (_Upscale_ module) are now non–linear and do
not share parameters (time dependent). When we interpret depth as time $t$,
the dynamic is described in Figure 4 and leads to the following set of
difference equations with their correspondent extension to continuous time:
$h_{k}^{t+1}=h_{k}^{t}+P_{k}\big(R_{k}(h_{k}^{t},t),\,h_{k-1}^{t+1},\,t\big)\,,\qquad h_{1}^{t+1}=h_{1}^{t}\,,\qquad\stackrel{\text{cont. time}}{\Rightarrow}\qquad\frac{dh_{k}}{dt}=P_{k}\big(R_{k}(h_{k},t),\,h_{k-1},\,t\big)\,,\qquad h_{1}(x,y,t)=h_{1}(x,y,0)\,.$ (2)
In the case of ResNets, the dynamical systems is given by
$h^{t+1}=h^{t}+f(h^{t},t)$ and $\tfrac{dh}{dt}=f(h,t)$ in continuous time.
Therefore, BPP extends the model of ResNets from a single ODE to a system of
coupled equations. Scale–causality follows from (2) as state $h_{k}$ only
depends on $h_{k-1},h_{k-2},\ldots$. The multi–scale nature follows from the
spatial dimension of the state vectors $h_{k}$, explicitly expressed in the
operators $P_{k}:\mathbb{R}^{\frac{H}{2}\times\frac{W}{2}}\rightarrow\mathbb{R}^{H\times W}$
and $R_{k}:\mathbb{R}^{H\times W}\rightarrow\mathbb{R}^{\frac{H}{2}\times\frac{W}{2}}$. In continuous space
we could also express the multi–scale nature of the equations by using initial
conditions $h_{k-s}(x,y,t=0)=h_{k}(2^{s}x,2^{s}y,t=0)$ with $s\in\mathbb{N}$
with no filtering needed in continuous space, since aliasing effects do not
exist. We observe that initial conditions are self-similar in scale(Mallat
1998). Whether this property is maintained in time depends on the evolution of
the network state. In the continuous time model, the restriction operator
$R_{k}$ in (2) represents a renormalization–group transformation of the
network state, similar to those used in particle physics and ODEs to ensure
self–similarity(Fisher 1974; Chen, Goldenfeld, and Oono 1996). In this sense,
using different parameters at each scale allows the model to adjust the level
of self–similarity that works better for a given problem. On the other hand,
using different parameters in time can also be beneficial. It has been
observed in (Liao and Poggio 2016) that normalization layers do not work well
in recurrent networks, which share parameters in time. But in time–dependent
systems, these layers become beneficial. Since the BPP configurations in our
experiments use IN–layers, we chose to use different parameters in time. This
does not have a significant effect on performance, because the flux–block
structure in Algorithm 1 uses inline updates that avoid storing old network
states. During training, a checkpointing strategy can effectively reduce the
memory footprint (Chen et al. 2016).
Figure 4: State diagram of the depth transitions in the BPP architecture. The
residual structure leads to a non–autonomous system of differential equations.
Using pipelining to extend IBP into multiple scales is simple and this is the
major strength of this approach. There are several ways to extend IBP to
multiple scales. We mentioned MGBP as a relevant but different approach. BPP
is simpler, and that simplicity translates into a clear ODE model that is
difficult to obtain otherwise. Most importantly, this ODE model is very
expressive about the connection to IBP. It follows directly from (2) that if the
composition of the $P$ and $R$ operations forms a contraction mapping, then the
ODE model converges, which is the same argument used in convergence proofs of
IBP in the linear case (Irani and Peleg 1991). At this point BPP departs from
IBP. Because BPP is trained in a supervised fashion, we do not know a priori
how this dynamic is going to be driven towards the target. Overall, the BPP
model inherits the essence of IBP in terms of an iteration that updates
residuals upwards in scale, which can now be trained to reach diverse targets
in a non–linear fashion using convolutional networks. The main purpose of our
investigation is: first, to generalize the IBP dynamic to multiple scales in
sequence; and second, to study how powerful this dynamic is at solving more
general problems.
Finally, we note that the continuous model in (2) allows BPP to work as a
Neural–ODE system(Chen et al. 2018). For the sake of simplicity, in this work
we do not explore this direction. However, it stands as an interesting
direction for future research.
Figure 5: a) Qualitative evaluation for SR methods. b) Validation MSE for $4\times$ SR.

Table 1: Quantitative evaluation for SR. A more extensive comparison is available in the Appendix.

| Algorithm | Scale | Set14 PSNR–$Y_{M}$ | Set14 SSIM–$Y_{M}$ | BSDS100 PSNR–$Y_{M}$ | BSDS100 SSIM–$Y_{M}$ | Urban100 PSNR–$Y_{M}$ | Urban100 SSIM–$Y_{M}$ | Manga109 PSNR–$Y_{M}$ | Manga109 SSIM–$Y_{M}$ |
|---|---|---|---|---|---|---|---|---|---|
Bicubic | $2\times$ | 30.34 | 0.870 | 29.56 | 0.844 | 26.88 | 0.841 | 30.84 | 0.935
MSLapSRN | | 33.28 | 0.915 | 32.05 | 0.898 | 31.15 | 0.919 | 37.78 | 0.976
D-DBPN | | 33.85 | 0.919 | 32.27 | 0.900 | 32.70 | 0.931 | 39.10 | 0.978
EDSR | | 33.92 | 0.919 | 32.32 | 0.901 | 32.93 | 0.935 | 39.10 | 0.977
RDN | | 34.28 | 0.924 | 32.46 | 0.903 | 33.36 | 0.939 | 39.74 | 0.979
BPP–SRx2x3x4x8 | | 33.27 | 0.913 | 31.21 | 0.879 | 31.67 | 0.921 | 38.31 | 0.975
BPP–SRx2 | | 34.23 | 0.922 | 31.63 | 0.886 | 33.07 | 0.935 | 39.19 | 0.977
Bicubic | $3\times$ | 27.55 | 0.774 | 27.21 | 0.739 | 24.46 | 0.735 | 26.95 | 0.856
MSLapSRN | | 29.97 | 0.836 | 28.93 | 0.800 | 27.47 | 0.837 | 32.68 | 0.939
EDSR | | 30.52 | 0.846 | 29.25 | 0.809 | 28.80 | 0.865 | 34.17 | 0.948
RDN | | 30.74 | 0.850 | 29.38 | 0.812 | 29.18 | 0.872 | 34.81 | 0.951
BPP–SRx2x3x4x8 | | 30.23 | 0.838 | 28.81 | 0.794 | 28.43 | 0.852 | 33.75 | 0.943
BPP–SRx3 | | 30.78 | 0.848 | 29.14 | 0.804 | 29.56 | 0.873 | 34.49 | 0.948
Bicubic | $4\times$ | 26.10 | 0.704 | 25.96 | 0.669 | 23.15 | 0.659 | 24.92 | 0.789
MSLapSRN | | 28.26 | 0.774 | 27.43 | 0.731 | 25.51 | 0.768 | 29.54 | 0.897
D-DBPN | | 28.82 | 0.786 | 27.72 | 0.740 | 26.54 | 0.795 | 31.18 | 0.914
EDSR | | 28.80 | 0.788 | 27.71 | 0.742 | 26.64 | 0.803 | 31.02 | 0.915
RDN | | 29.01 | 0.791 | 27.85 | 0.745 | 27.01 | 0.812 | 31.74 | 0.921
BPP–SRx2x3x4x8 | | 28.55 | 0.778 | 27.43 | 0.728 | 26.48 | 0.791 | 30.81 | 0.909
BPP–SRx4 | | 29.07 | 0.791 | 27.89 | 0.745 | 27.55 | 0.819 | 31.63 | 0.918
Bicubic | $8\times$ | 23.19 | 0.568 | 23.67 | 0.547 | 20.74 | 0.516 | 21.47 | 0.647
MSLapSRN | | 24.57 | 0.629 | 24.65 | 0.592 | 22.06 | 0.598 | 23.90 | 0.759
D-DBPN | | 25.13 | 0.648 | 24.88 | 0.601 | 22.83 | 0.622 | 25.30 | 0.799
EDSR | | 24.94 | 0.640 | 24.80 | 0.596 | 22.47 | 0.620 | 24.58 | 0.778
RDN | | 25.38 | 0.654 | 25.01 | 0.606 | 23.04 | 0.644 | 25.48 | 0.806
BPP–SRx2x3x4x8 | | 25.10 | 0.642 | 24.89 | 0.598 | 22.72 | 0.626 | 24.78 | 0.785
BPP–SRx8 | | 25.53 | 0.655 | 25.11 | 0.607 | 23.17 | 0.649 | 25.28 | 0.800
## Experiments
In our experiments we found that placing IN–layers before ReLU units, as
shown in Figure 3, helps the network converge faster in early training,
independently of initialization. Figure 5 (b) shows this effect, and we also
see that IN–layers are not required for BPP in the long run. In early
training, IN–layers placed before ReLUs force a $50\%$ activation rate in all
flux units across all scales, which proves to be a good way to initialize the
parameters. Alternatively, we found that the most effective way to avoid
IN–layers is to initialize weights with Dirac–kernels plus additive Gaussian
noise. This initialization was used for the learning curve _BPP (no IN)_ in
Figure 5 (b) and is the closest we have found to removing normalization layers
altogether.
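The effect of instance normalization on ReLU activation rates can be illustrated with a toy NumPy calculation; this is a sketch with a synthetic feature map, not the actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature map: one channel with a large bias, as can happen
# with an unlucky weight initialization.
features = 5.0 + 3.0 * rng.standard_normal((64, 64))

# Without normalization almost all units pass the ReLU.
frac_active_raw = np.mean(features > 0)

# Instance normalization: zero mean, unit variance per channel.
normed = (features - features.mean()) / (features.std() + 1e-5)

# Centered at zero, the ReLU now passes roughly half of the units.
frac_active_in = np.mean(normed > 0)

print(f"active without IN: {frac_active_raw:.2f}")
print(f"active with IN:    {frac_active_in:.2f}")
```

With the biased initialization nearly every unit is active; normalization pulls the activation rate towards $50\%$, which is the effect used here to stabilize early training.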
Because of memory limitations we used a patch–based training strategy, where
smaller–sized patches are taken from training–set images. Patch–based learning
reduces the receptive field of the network during training, and at inference
the performance of the network degrades if the mean and variance of IN–layers
are computed on an image larger than the training patches. To solve these
problems we: first, divide input images into overlapping patches (of the same
size as the training patches); second, multiply each output by a Hamming
window (Harris 1978); and third, average the results. In all our experiments
we use overlapping patches separated by $16$ pixels in the vertical and
horizontal directions. The weighted average helps to avoid blocking artifacts.
On one hand, this approach introduces redundancy and reduces performance for
medium size images. On the other hand, it also allows the algorithm to run on
very large images (e.g. 8K) and can be massively parallelized by batch
processing in multiple GPUs.
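The three inference steps above (overlapping patches, Hamming weighting, averaging) can be sketched as follows; here `enhance` is a hypothetical stand-in for the trained network, replaced by an identity map for a quick sanity check:

```python
import numpy as np

def blend_patches(image, patch, stride, enhance):
    """Run `enhance` on overlapping patches and average the results with a
    2-D Hamming window so patch borders do not create blocking artifacts."""
    h, w = image.shape
    window = np.outer(np.hamming(patch), np.hamming(patch))
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    ys = list(range(0, h - patch + 1, stride))
    xs = list(range(0, w - patch + 1, stride))
    if ys[-1] != h - patch:          # make the last patches reach the border
        ys.append(h - patch)
    if xs[-1] != w - patch:
        xs.append(w - patch)
    for y in ys:
        for x in xs:
            restored = enhance(image[y:y + patch, x:x + patch])
            out[y:y + patch, x:x + patch] += window * restored
            weight[y:y + patch, x:x + patch] += window
    return out / np.maximum(weight, 1e-12)

# Sanity check: with an identity "network" the blended output
# reproduces the input wherever patches cover the image.
img = np.random.default_rng(1).random((96, 96))
result = blend_patches(img, patch=48, stride=16, enhance=lambda p: p)
print(np.max(np.abs(result - img)))  # small
```

Because every pixel is a window-weighted average of patch outputs, seams between neighbouring patches fade smoothly rather than appearing as blocks.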
Configuration. In the following experiments we use a BPP configuration with
$16$ back–projection layers (flux–blocks), $4$ resolution levels and $256$,
$128$, $64$ and $48$ features per level from lowest to highest resolution,
respectively. All convolutional layers use $3\times 3$ as kernel size, and
scalers are initialized with bicubic filters of size $9\times 9$ and trained
as additional parameters. A fully unrolled diagram is shown in the Appendix.
The configuration was tuned according to validation performance on the most
challenging problems (e.g. SR–$8\times$). By fixing the configuration we can
potentially have the architecture hardwired in silicon and update its model
parameters to switch between different problems.
Performance. The BPP architecture is multi–scale and sequential. The so–called
_Flux–Block_ in Algorithm 1 represents the sequential block and consists of
one Flux unit per level. This sequential structure is more convenient for
memory performance as it avoids buffering of features from previous blocks.
Architectures such as Dense–Nets, U–Nets and MGBP need to buffer features in
skipped connections and thus need more memory. Because the configuration is
fixed, the performance of the system can be roughly estimated from average
statistics. The system has a total of $19$ million parameters and it can
process $1.7$ million pixels per second on a Titan X GPU using $16$–bit
floating point precision. This means, for example, that it takes $3.7$ seconds
to process a Full–HD image in RGB format ($3\times 1920\times 1080$ pixels).
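The sequential, causal-in-scale structure described above can be sketched with stand-in operators. In this toy NumPy version `R`, `P` and `up` are random placeholders for the learned analysis, back-projection and upscaling modules (feature maps are flattened to vectors), not the trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

L, D = 4, 16                 # resolution levels and depth (flux blocks)
sizes = [48, 64, 128, 256]   # features per level, finest (k=1) to coarsest (k=4)

# Random placeholders for the learned operators: R_k (analysis),
# P_k (back-projection) and the upscalers between adjacent levels.
R = [rng.standard_normal((s, s)) * 0.05 for s in sizes]
P = [rng.standard_normal((s, 2 * s)) * 0.05 for s in sizes]
up = [rng.standard_normal((sizes[k], sizes[k + 1])) * 0.05 for k in range(L - 1)]

h = [rng.standard_normal(s) for s in sizes]  # one state vector per level
initial_lowest = h[-1].copy()                # the fixed low-resolution reference

for t in range(D):                   # one iteration per flux block
    new_h = list(h)
    # Causal in scale: update coarse-to-fine; the lowest level never changes.
    for k in range(L - 2, -1, -1):
        coarser = up[k] @ new_h[k + 1]             # already-updated state below
        residual = P[k] @ np.concatenate([R[k] @ h[k], coarser])
        new_h[k] = h[k] + residual                 # residual (ODE-like) update
    h = new_h

print([v.shape for v in h])
```

The sketch shows why no cross-block buffering is needed: each flux block only reads the current states, so memory usage is constant in depth, unlike dense or U-Net skip connections.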
P1: Image Super–Resolution. We use DIV2K (Agustsson and Timofte 2017) and
FLICKR–2K datasets for training and the following datasets for test:
Set–14(Zeyde, Elad, and Protter 2010), BSDS–100(Martin et al. 2001),
Urban–100(Huang, Singh, and Ahuja 2015) and Manga–109(Matsui et al. 2017).
Impaired images were obtained by downscaling and then upscaling ground–truth
images with a bicubic scaler, using scaling factors $2\times$, $3\times$,
$4\times$ and $8\times$. Here, we consider two cases: we trained models
BPP–SR$\times f$ for each upscaling factor $f=2,3,4$ and $8$; and we also
trained a single model BPP–SRx2x3x4x8 to restore impaired images with unknown
upscaling factors. We use $16$ patches per mini–batch with patch size
$48f\times 48f$ for known upscaling factor $f$, and $192\times 192$ for
unknown upscaling factor, all at high resolution.
Table 1 and Figure 5 (a) show quantitative and qualitative results compared to
other methods. We focus our comparison on the following methods: Bicubic (the
baseline); EDSR (Lim et al. 2017), with major processing in $1$ resolution
level using a $32$–layer ResNet; Dense–DBPN (Haris, Shakhnarovich, and Ukita
2018), with major processing in $2$ resolution levels using $12$ densely
connected up/down back–projections; and RDN (Zhang et al. 2018c), with major
processing in $1$ resolution level using $20$ densely connected
residual–dense–blocks. We show EDSR and DBPN because they are both closely
related to BPP in their residual and back–projection structures, respectively,
and we show RDN as a top reference for the current state of the art. Further
comparisons with other methods can be found in the Appendix.
Overall, for the problem of super–resolution we find that BPP achieves
excellent results, reaching the state of the art in both quantitative and
qualitative evaluations, but its performance decreases when we test a more
general problem. First, the BPP–SR$\times f$ models get the best scores in
most quantitative and qualitative evaluations, with RDN slightly outperforming
BPP at $2\times$ and $3\times$ upscaling factors. This setting, including the
datasets used for training and test, is the most common evaluation procedure
for supervised SR techniques. In terms of application it is useful when we
need to enhance an image upscaled with a bicubic upscaler at a specific, known
factor. It often happens, however, that an image has been upscaled with an
unknown factor, in which case we do not know which model parameters to load.
Here the BPP–SRx2x3x4x8 model offers a general upscaling solution. The
performance of this model decreases and does not reach the state of the art.
Although it is reasonably close, often outperforming EDSR, we would have
expected it to perform better than the BPP–SR$\times f$ models if the
architecture were able to generalize effectively to this more general setting.
In fact, it has been observed in VDSR (Kim, Lee, and Lee 2016a) and MDSR (Lim
et al. 2017) that training with unknown upscaling factors can improve the
performance of the network. These empirical results therefore show that BPP
can be very effective for fixed upscaling factors but does not generalize as
well as other architectures to arbitrary upscaling factors.
P2: Raindrop Removal. We use the DeRaindrop (Qian et al. 2018a, b) dataset for
training and test. This dataset provides paired images, one degraded by
raindrops and the other free from raindrops. These were obtained by using two
pieces of exactly the same glass: one sprayed with water and the other left
clean. In each training batch, we take $1$ patch of size $528\times 528$. We
train a BPP model using an $L_{1}$ loss and patch size $456\times 456$. More
details of the training settings are provided in the Appendix.
This problem is very different in nature from super–resolution. On one hand, a
significant portion of the pixels contain (uncorrupted) high–resolution
information that must move to the output with little or no change. On the
other hand, the network needs to identify the irregular distribution of
raindrops, which come in different sizes, and fill in those areas by
predicting the content within. In some images the content within raindrops is
of little use, making the problem similar to inpainting. Thus, the problem
requires processing of both local and global information in order to fill in
raindrops.
Even though we only trained our system with an $L_{1}$ loss, it performs
similarly to the state–of–the–art DeRaindrop network (Qian et al. 2018a), as
seen in Table 2 and Figure 6. The DeRaindrop network in (Qian et al. 2018a)
uses an attentive GAN approach that estimates raindrop masks to focus
restoration on those areas. The PSNR score of BPP is better than that of
DeRaindrop without adversarial training, and its SSIM score is better than
those of all other systems in Table 2. The qualitative evaluation shows that
BPP achieves reasonable quality, considering the fact that it has not been
trained using GANs. Here, the BPP architecture appears to be effective. In the
next section we inspect properties of the network that reveal the underlying
mechanism used by BPP to obtain its solutions.
Other Problems. The performance in other problems, including mobile–to–DSLR
photo translation, dehaze and joint HDR+SR are included in the Appendix.
Figure 6: Qualitative evaluation for raindrop removal. Table 2: Quantitative
results of raindrop removal.
Method | PSNR–$Y_{P}$ | SSIM–$Y_{P}$
---|---|---
Eigen13 | 28.59 | 0.6726
Pix2Pix | 30.14 | 0.8299
DeRaindrop (No GAN) | 29.25 | 0.7853
DeRaindrop | 31.57 | 0.9023
BPP | 30.85 | 0.9180
Inspection of ODE updates. We conduct experiments to measure the magnitude of
the updates in equation (2) to better understand the dynamic of the network
when solving different problems. The arrangement of Flux units in BPP networks
forms an array of size $L\times D$ (number of levels times depth) and we
compare the magnitude of the residual updates in each of these units. In
Figure 7 we display the result of measuring
$\left\|\frac{dh_{k}}{dt}\right\|_{2}=\|P_{k}(R_{k}(h_{k}^{t},t),h_{k-1}^{t+1},t)\|_{2}\;,$
(3)
for every flux unit, averaged over all images in the validation sets, and
normalized to the maximum value (fixed to $100$). At the lowest resolution
($k=4$) the reference image never changes and thus the update is always zero.
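This measurement protocol can be sketched as follows; `residual_update` is a hypothetical stand-in, since a real measurement would hook the flux units of the trained network:

```python
import numpy as np

L, D = 4, 16                     # levels x depth, as in the BPP configuration
rng = np.random.default_rng(0)

def residual_update(k, t, x):
    # Hypothetical stand-in for P_k(R_k(h_k^t, t), h_{k-1}^{t+1}, t);
    # here just synthetic values that decay with level index k.
    return rng.standard_normal(x.shape) / (1.0 + k + 0.1 * t)

validation_set = [rng.standard_normal(64) for _ in range(8)]

magnitudes = np.zeros((L, D))
for image in validation_set:
    for t in range(D):
        for k in range(L - 1):                  # the lowest level never updates
            magnitudes[k, t] += np.linalg.norm(residual_update(k, t, image))
magnitudes /= len(validation_set)               # average over validation images
magnitudes *= 100.0 / magnitudes.max()          # normalize: maximum fixed to 100

print(magnitudes.round(1))
```

The resulting $L\times D$ grid is what Figure 7 visualizes: each cell is the average $L2$ magnitude of one flux unit's update, with the largest cell fixed at $100$.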
Figure 7: Average $L2$–magnitudes of residual updates normalized by the
maximum update with fixed value $100$.
Interestingly, we observe that the dynamic is far from the original
contraction–mapping design of IBP, which would result in an exponential decay
of updates along depth. Here, we should remember that the dynamic is driven
exclusively by the result of training the network in a supervised manner.
Instead of an exponential decay, the network consistently shows a bimodal
statistic with one peak very close to the input and another very close to the
output. Also, the highest resolution receives very small updates, meaning that
these features move more or less unchanged, with an increased update towards
the end. The major processing happens at the middle levels: in SR the updates
are stronger at the lower resolution ($k=3$), and for raindrop removal they
are stronger at the higher resolution ($k=2$). The bimodal statistic is
reminiscent of interpretability results for VGG networks in classification,
which show higher contributions to label outputs very early and very late in a
sequential configuration (Navarrete Michelini et al. 2019). Nevertheless, in
BPP the updates focus on one or two resolution levels, as opposed to VGG
networks, which are designed to process high resolutions early in the network
and very low resolutions towards the end. Despite this important difference,
these results suggest that sequential networks find solutions in two steps:
analysis in the first layers, and fusion towards the very end.
Figure 8: Local and global contributions ($Fx$ and $r$) for $3$ systems using
deep filter visualization(Navarrete Michelini, Liu, and Zhu 2019). EDSR relies
on local contributions while BPP balances both local and global contributions.
Interpretability. We apply the _LinearScope_ method from (Navarrete Michelini
et al. 2019) to analyze the learning process in global and local problems. The
general methodology is as follows. The BPP architecture contains several
non–linear modules consisting of ReLUs and IN–layers. The decision of which
pixels pass or stop at ReLUs, and which mean and variance to use in IN–layers,
is non–linear. But the actions of these layers are linear: masking and
normalizing. For a given input image $x$, the action of all non–linear modules
(ReLU and IN–layers) can be fixed: $1/0$–masks for ReLUs and fixed mean and
variance for IN–layers. This gives a linear system of the form $y=Fx+r$ that
generates the same output as the non–linear system for the input $x$, and
represents the overall action of the network on that image.
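The construction can be demonstrated on a toy network (a two-layer ReLU model standing in for the full BPP architecture, without IN-layers for brevity): freezing the ReLU masks for one input yields a linear system whose output matches the non-linear network exactly on that input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer ReLU network standing in for the full BPP model.
W1, b1 = rng.standard_normal((32, 16)), rng.standard_normal(32)
W2, b2 = rng.standard_normal((16, 32)), rng.standard_normal(16)

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x = rng.standard_normal(16)

# Freeze the ReLU decisions for this particular input as a 0/1 mask.
mask = (W1 @ x + b1 > 0).astype(float)

# With the mask fixed, the network becomes linear: y = F x + r.
F = W2 @ (mask[:, None] * W1)    # local (filtering) contribution
r = W2 @ (mask * b1) + b2        # global residual created by the
                                 # non-linear units, fixed for this x

print(np.max(np.abs((F @ x + r) - net(x))))  # ~0: outputs coincide
```

Note that $F$ and $r$ are valid only for the particular input $x$ that produced the masks; changing the input changes the masks and thus the linear system, which is exactly what makes $F$ and $r$ informative about how the network treats that image.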
The matrix $F$ represents the interpolation filters used by the network to
solve the problem, and thus shows the _local processing_. The residual $r$ is
a fixed _global_ image created by non–linear modules. Figure 8 shows the local
contributions, $Fx$, and global contributions, $r$, for three systems. We
observe that EDSR almost purely relies on local processing to obtain an
output. BPP, on the other hand, relies mostly on local processing but the
contribution of $r$ is much larger than the one in EDSR. This shows a
significantly different approach followed by BPP, compared to EDSR, to solve
the super–resolution problem.
The BPP system for raindrop removal reveals a much larger contribution of $r$,
which resembles a mask of raindrops. This means that BPP uses a local approach
in areas without raindrops (via $Fx$) and a global approach on the raindrops
determined by the residual $r$. The mechanism used by the network to obtain
the residual $r$ is non–linear. Overall, we observe that for this problem the
BPP network divides the task into two parts: a local adaptive filter in clean
areas, to nearly copy–paste the input into the output; and a non–linear global
approach to fill in raindrop areas.
## Conclusions
We propose Back–Projection Pipeline as a simple yet non–trivial extension of
residual networks (ResNets) to run in multiple resolutions. The update dynamic
through the layers of the network includes interactions between different
resolutions in a way that is causal in scale, and it is represented by a
system of ODEs. We use it as a generic multi–resolution approach to enhance
images. The focus of our investigation is to evaluate this multi–scale
residual approach. Overall, our empirical results show that BPP can achieve
excellent results in traditional supervised learning. Our BPP configuration
gets state–of–the–art results in SR for fixed upscaling factors and
competitive results for raindrop removal as well as other problems (see
Appendix). We also observe a lack of generalization for the problem of SR with
unknown upscaling factors. Inspection of the residual updates in the network
shows that all resolution levels are being used, with higher intensity in
lower resolutions, showing that supervised training gives preference to the
multi–scale setting over traditional residual networks. Based on our results,
we cannot conclude that scale causality is beneficial. Nevertheless, we can at
least conclude that this strong simplification in the flow of network
information, inherited from IBP, does not prevent the architecture from
achieving competitive results. Further investigation is necessary in this
regard (especially regarding generalization) and could open interesting
research directions in network architecture search and design.
Figure 9: Detail diagram of the $4$–level, $16$–layers BPP configuration used
in our experiments.
## References
* Agustsson and Timofte (2017) Agustsson, E.; and Timofte, R. 2017. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_.
* Ancuti, Ancuti, and Timofte (2019) Ancuti, C.; Ancuti, C. O.; and Timofte, R. 2019. NTIRE–2019 Dehaze Evaluation code. https://competitions.codalab.org/my/datasets/download/a85cc0d2-cf8b-4ec8-bf83-243c7bcda515. [Online; accessed 20-May-2019].
* Ancuti et al. (2018a) Ancuti, C.; Ancuti, C. O.; Timofte, R.; and De Vleeschouwer, C. 2018a. I-HAZE: a dehazing benchmark with real hazy and haze-free indoor images. In _International Conference on Advanced Concepts for Intelligent Vision Systems_ , 620–631. Springer.
* Ancuti et al. (2018b) Ancuti, C.; Ancuti, C. O.; Timofte, R.; Van Gool, L.; Zhang, L.; and Yang, M. 2018b. NTIRE 2018 Challenge on Image Dehazing: Methods and Results 891–901.
* Ancuti et al. (2019) Ancuti, C. O.; Ancuti, C.; Sbert, M.; and Timofte, R. 2019. Dense Haze: A benchmark for image dehazing with dense-haze and haze-free images. _CoRR_ abs/1904.02904. URL http://arxiv.org/abs/1904.02904.
* Ancuti et al. (2018c) Ancuti, C. O.; Ancuti, C.; Timofte, R.; and De Vleeschouwer, C. 2018c. O-HAZE: a dehazing benchmark with real hazy and haze-free outdoor images. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_ , 754–762.
* Chen, Goldenfeld, and Oono (1996) Chen, L. Y.; Goldenfeld, N. D.; and Oono, Y. 1996. Renormalization group and singular perturbations: Multiple scales, boundary layers, and reductive perturbation theory. _Physical Review E_ 54(1): 376–394.
* Chen et al. (2018) Chen, T.; Rubanova, Y.; Bettencourt, J.; and Duvenaud, D. K. 2018. Neural Ordinary Differential Equations. _neural information processing systems_ 6571–6583.
* Chen et al. (2016) Chen, T.; Xu, B.; Zhang, C.; and Guestrin, C. 2016. Training Deep Nets with Sublinear Memory Cost. _arXiv: Learning_ .
* Dong et al. (2014) Dong, C.; Loy, C. C.; He, K.; and Tang, X. 2014. Learning a Deep Convolutional Network for Image Super–Resolution. In _in Proceedings of European Conference on Computer Vision (ECCV)_.
* Dong, Loy, and Tang (2016) Dong, C.; Loy, C. C.; and Tang, X. 2016. Accelerating the Super–Resolution Convolutional Neural Network. In _in Proceedings of European Conference on Computer Vision (ECCV)_.
* Ehret et al. (2019) Ehret, T.; Davy, A.; Arias, P.; and Facciolo, G. 2019. Joint demosaicing and denoising by overfitting of bursts of raw images.
* Feichtenhofer et al. (2019) Feichtenhofer, C.; Fan, H.; Malik, J.; and He, K. 2019. SlowFast Networks for Video Recognition. In _The IEEE International Conference on Computer Vision (ICCV)_.
* Fisher (1974) Fisher, M. E. 1974. The renormalization group in the theory of critical behavior. _Reviews of Modern Physics_ 46(4): 597–616.
* Gharbi et al. (2016) Gharbi, M.; Chaurasia, G.; Paris, S.; and Durand, F. 2016. Deep Joint Demosaicking and Denoising. _ACM Trans. Graph._ 35(6): 191:1–191:12. ISSN 0730-0301. doi:10.1145/2980179.2982399. URL http://doi.acm.org/10.1145/2980179.2982399.
* Haris, Shakhnarovich, and Ukita (2018) Haris, M.; Shakhnarovich, G.; and Ukita, N. 2018. Deep Back–Projection Networks for Super–Resolution. In _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Harris (1978) Harris, F. J. 1978. On the use of windows for harmonic analysis with the discrete Fourier transform. _Proceedings of the IEEE_ 66(1): 51–83.
* He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. _computer vision and pattern recognition_ 770–778.
* He et al. (2018) He, M.; Chen, D.; Liao, J.; Sander, P. V.; and Yuan, L. 2018. Deep exemplar-based colorization. _ACM Transactions on Graphics (TOG)_ 37(4): 47.
* Hu, Shen, and Sun (2018) Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-Excitation Networks. _computer vision and pattern recognition_ 7132–7141.
* Hu et al. (2019) Hu, X.; Mu, H.; Zhang, X.; Wang, Z.; Tan, T.; and Sun, J. 2019. Meta-SR: A Magnification-Arbitrary Network for Super-Resolution. _arXiv: Computer Vision and Pattern Recognition_ .
* Huang et al. (2018) Huang, G.; Chen, D.; Li, T.; Wu, F.; Der Maaten, L. V.; and Weinberger, K. Q. 2018\. Multi-Scale Dense Networks for Resource Efficient Image Classification. _International Conference on Learning Representations_ .
* Huang et al. (2017) Huang, G.; Liu, Z.; van der Maaten, L.; and Weinberger, K. Q. 2017. Densely connected convolutional networks. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_.
* Huang, Singh, and Ahuja (2015) Huang, J.; Singh, A.; and Ahuja, N. 2015. Single image super-resolution from transformed self-exemplars 5197–5206.
* Ignatov et al. (2017) Ignatov, A.; Kobyshev, N.; Timofte, R.; Vanhoey, K.; and Van Gool, L. 2017. DSLR-quality photos on mobile devices with deep convolutional networks. In _Proceedings of the IEEE International Conference on Computer Vision_ , 3277–3285.
* Irani and Peleg (1991) Irani, M.; and Peleg, S. 1991. Improving Resolution by Image Registration. _CVGIP: Graph. Models Image Process._ 53(3): 231–239. ISSN 1049-9652. doi:10.1016/1049-9652(91)90045-L. URL http://dx.doi.org/10.1016/1049-9652(91)90045-L.
* Kim, Lee, and Lee (2016a) Kim, J.; Lee, J. K.; and Lee, K. M. 2016a. Accurate Image Super–Resolution Using Very Deep Convolutional Networks. In _The IEEE Conference on Computer Vision and Pattern Recognition_.
* Kim, Lee, and Lee (2016b) Kim, J.; Lee, J. K.; and Lee, K. M. 2016b. Deeply-Recursive Convolutional Network for Image Super–Resolution. In _The IEEE Conference on Computer Vision and Pattern Recognition_.
* Kim, Oh, and Kim (2019) Kim, S. Y.; Oh, J.; and Kim, M. 2019. Deep SR-ITM: Joint Learning of Super-Resolution and Inverse Tone-Mapping for 4K UHD HDR Applications. In _The IEEE International Conference on Computer Vision (ICCV)_.
* Kingma and Ba (2015) Kingma, D. P.; and Ba, J. 2015. Adam: A method for stochastic optimization. _international conference on learning representations_ .
* Kinoshita and Kiya (2019) Kinoshita, Y.; and Kiya, H. 2019. Convolutional Neural Networks Considering Local and Global features for Image Enhancement. In _The IEEE International Conference on Image Processing (ICIP)_.
* Kokkinos and Lefkimmiatis (2018) Kokkinos, F.; and Lefkimmiatis, S. 2018. Deep image demosaicking using a cascade of convolutional residual denoising networks. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , 303–319.
* Kundu et al. (2017a) Kundu, D.; Ghadiyaram, D.; Bovik, A. C.; and Evans, B. L. 2017a. Evaluation code for HIGRADE metric. http://live.ece.utexas.edu/research/Quality/higradeRelease.zip. [Online; accessed 20-May-2019].
* Kundu et al. (2017b) Kundu, D.; Ghadiyaram, D.; Bovik, A. C.; and Evans, B. L. 2017b. No-Reference Quality Assessment of Tone-Mapped HDR Pictures. _IEEE Transactions on Image Processing_ 26(6): 2957–2971.
* Lai et al. (2018) Lai, W.; Huang, J.; Ahuja, N.; and Yang, M. 2018. Fast and Accurate Image Super–Resolution with Deep Laplacian Pyramid Networks. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 31(3): 2599–2613.
* Lai et al. (2017) Lai, W.-S.; Huang, J.-B.; Ahuja, N.; and Yang, M.-H. 2017. Deep Laplacian Pyramid Networks for Fast and Accurate Super–Resolution. In _IEEE Conference on Computer Vision and Pattern Recognition_.
* Lefkimmiatis (2018) Lefkimmiatis, S. 2018. Universal denoising networks: a novel CNN architecture for image denoising. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 3204–3213.
* Liao and Poggio (2016) Liao, Q.; and Poggio, T. 2016. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. _arXiv preprint arXiv:1604.03640_ .
* Lim et al. (2017) Lim, B.; Son, S.; Kim, H.; Nah, S.; and Lee, K. M. 2017. Enhanced Deep Residual Networks for Single Image Super–Resolution. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_.
* Lindeberg (1994) Lindeberg, T. 1994. _Scale-Space Theory in Computer Vision_. ISBN 0-7923-9418-6. doi:10.1007/978-1-4757-6465-9.
* Lu et al. (2018) Lu, G.; Ouyang, W.; Xu, D.; Zhang, X.; Gao, Z.; and Sun, M.-T. 2018. Deep Kalman filtering network for video compression artifact reduction. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , 568–584.
* Ma et al. (2017) Ma, C.; Yang, C.-Y.; Yang, X.; and Yang, M.-H. 2017. Learning a No-Reference Quality Metric for Single-Image Super-Resolution. _Computer Vision and Image Understanding_ 1–16.
* Ma et al. (2018) Ma, C.; Yang, C.-Y.; Yang, X.; and Yang, M.-H. 2018. Evaluation code for Ma–metric. https://github.com/chaoma99/sr-metric. [Online; accessed 20-May-2019].
* Mallat (1998) Mallat, S. 1998. _A Wavelet Tour of Signal Processing_. Academic Press.
* Martin et al. (2001) Martin, D. R.; Fowlkes, C. C.; Tal, D.; and Malik, J. 2001. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics 2: 416–423.
* Matsui et al. (2017) Matsui, Y.; Ito, K.; Aramaki, Y.; Fujimoto, A.; Ogawa, T.; Yamasaki, T.; and Aizawa, K. 2017. Sketch-based manga retrieval using manga109 dataset. _Multimedia Tools and Applications_ 76(20): 21811–21838.
* Navarrete Michelini et al. (2019) Navarrete Michelini, P.; Liu, H.; Lu, Y.; and Jiang, X. 2019. A Tour of Convolutional Networks Guided by Linear Interpreters. In _The IEEE International Conference on Computer Vision (ICCV)_. IEEE. URL https://arxiv.org/abs/1908.05168.
* Navarrete Michelini, Liu, and Zhu (2019) Navarrete Michelini, P.; Liu, H.; and Zhu, D. 2019. Multigrid Backprojection Super–Resolution and Deep Filter Visualization. In _Proceedings of the Thirty–Third AAAI Conference on Artificial Intelligence (AAAI 2019)_. AAAI.
* Nemoto et al. (2015) Nemoto, H.; Korshunov, P.; Hanhart, P.; and Ebrahimi, T. 2015. Visual attention in LDR and HDR images. Technical report. URL https://mmspg.epfl.ch/downloads/hdr-eye/.
* Oord et al. (2016) Oord, A. v. d.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; and Kavukcuoglu, K. 2016. Wavenet: A generative model for raw audio. _arXiv preprint arXiv:1609.03499_ .
* Paszke et al. (2017) Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; and Lerer, A. 2017. Automatic differentiation in PyTorch. In _NIPS-W_.
* Poggio et al. (2017) Poggio, T.; Mhaskar, H.; Rosasco, L.; Miranda, B.; and Liao, Q. 2017. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: a review. _International Journal of Automation and Computing_ 14(5): 503–519.
* Qian et al. (2019) Qian, G.; Gu, J.; Ren, J. S.; Dong, C.; Zhao, F.; and Lin, J. 2019. Trinity of Pixel Enhancement: a Joint Solution for Demosaicking, Denoising and Super-Resolution. _arXiv e-prints_ arXiv:1905.02538.
* Qian et al. (2018a) Qian, R.; Tan, R. T.; Yang, W.; Su, J.; and Liu, J. 2018a. Attentive Generative Adversarial Network for Raindrop Removal From a Single Image. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Qian et al. (2018b) Qian, R.; Tan, R. T.; Yang, W.; Su, J.; and Liu, J. 2018b. De-Raindrop dataset. https://drive.google.com/open?id=1e7R76s6vwUJxILOcAsthgDLPSnOrQ49K. [Online; accessed 20-May-2019].
* Qian et al. (2018c) Qian, R.; Tan, R. T.; Yang, W.; Su, J.; and Liu, J. 2018c. Evaluation code for Raindrop removal. https://github.com/rui1996/DeRaindrop/blob/master/metrics.py. [Online; accessed 20-May-2019].
* Reinhard and Devlin (2005) Reinhard, E.; and Devlin, K. 2005. Dynamic range reduction inspired by photoreceptor physiology. _IEEE Transactions on Visualization and Computer Graphics_ 11(1): 13–24.
* Ronneberger, Fischer, and Brox (2015) Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. _medical image computing and computer assisted intervention_ 234–241.
* Soh, Park, and Cho (2019) Soh, J. W.; Park, J. S.; and Cho, N. I. 2019. Joint High Dynamic Range Imaging and Super-Resolution from a Single Image. _arXiv:1905.00933 [cs, eess]_ URL http://arxiv.org/abs/1905.00933. ArXiv: 1905.00933.
* Tao et al. (2018) Tao, X.; Gao, H.; Shen, X.; Wang, J.; and Jia, J. 2018. Scale-recurrent Network for Deep Image Deblurring. In _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Timofte et al. (2018) Timofte, R.; Gu, S.; Wu, J.; Van Gool, L.; Zhang, L.; Yang, M.-H.; and et al. 2018\. NTIRE 2018 Challenge on Single Image Super-Resolution: Methods and Results. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_.
* Timofte and Smet (2014) Timofte, R.; De Smet, V.; and Van Gool, L. 2014. A+: Adjusted anchored neighborhood regression for fast super–resolution. In _Proceedings of the Asian Conference on Computer Vision (ACCV)_.
* Trottenberg and Schuller (2001) Trottenberg, U.; and Schuller, A. 2001. _Multigrid_. Orlando, FL, USA: Academic Press, Inc. ISBN 0-12-701070-X.
* Wang et al. (2013) Wang, S.; Zheng, J.; Hu, H.-M.; and Li, B. 2013. Naturalness preserved enhancement algorithm for non-uniform illumination images. _IEEE Transactions on Image Processing_ 22(9): 3538–3548.
* Wang et al. (2019) Wang, T.; Yang, X.; Xu, K.; Chen, S.; Zhang, Q.; and Lau, R. 2019. Spatial Attentive Single-Image Deraining with a High Quality Real Rain Dataset. _arXiv preprint arXiv:1904.01538_ .
* Wang et al. (2018) Wang, X.; Girshick, R. B.; Gupta, A.; and He, K. 2018. Non-local Neural Networks. _computer vision and pattern recognition_ 7794–7803.
* Wu et al. (2018) Wu, S.; Xu, J.; Tai, Y.-W.; and Tang, C.-K. 2018. Deep High Dynamic Range Imaging with Large Foreground Motions. In _The European Conference on Computer Vision (ECCV)_.
* Yu et al. (2018) Yu, J.; Fan, Y.; Yang, J.; Xu, N.; Wang, Z.; Wang, X.; and Huang, T. S. 2018. Wide Activation for Efficient and Accurate Image Super-Resolution. _arXiv: Computer Vision and Pattern Recognition_ .
* Zeyde, Elad, and Protter (2010) Zeyde, R.; Elad, M.; and Protter, M. 2010. On single image scale-up using sparse-representations 711–730.
* Zhang, Sindagi, and Patel (2018) Zhang, H.; Sindagi, V.; and Patel, V. M. 2018. Multi–scale Single Image Dehazing using Perceptual Pyramid Deep Network. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_.
* Zhang, Zuo, and Zhang (2018) Zhang, K.; Zuo, W.; and Zhang, L. 2018. Learning a Single Convolutional Super-Resolution Network for Multiple Degradations. _computer vision and pattern recognition_ 3262–3271.
* Zhang et al. (2018a) Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; and Fu, Y. 2018a. Image super-resolution using very deep residual channel attention networks. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , 286–301.
* Zhang et al. (2019) Zhang, Y.; Li, K.; Li, K.; Zhong, B.; and Fu, Y. 2019. Residual Non–local Attention Networks for Image Restoration. _international conference on learning representations_ .
* Zhang et al. (2018b) Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; and Fu, Y. 2018b. Evaluation code for Residual Dense Networks. https://github.com/yulunzhang/RDN/blob/master/RDN_TestCode/Evaluate_PSNR_SSIM.m. [Online; accessed 20-May-2019].
* Zhang et al. (2018c) Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; and Fu, Y. 2018c. Residual Dense Network for Image Restoration.
* Zhang et al. (2018d) Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; and Fu, Y. 2018d. Residual Dense Network for Image Super-Resolution. In _CVPR_.
## Appendix
### Diagrams
In an effort to make diagrams easy to read, concise, and precise in meaning,
we introduce the notation in Figure 10. That is, lines connected to the left
side of any given module represent different inputs to that module. Every
module can have several inputs but only one output. Lines connected to the
right side of a given module represent copies of the same output.
Figure 9 shows an expanded diagram of the single BPP configuration used in our
experiments. It uses $16$ back–projection layers (flux blocks), $4$ resolution
levels and $256$, $128$, $64$ and $48$ features per level from lowest to
highest resolution, respectively. All convolutional layers use $3\times 3$ as
kernel size, and scalers are initialized with bicubic filters of size $9\times
9$ and trained as additional parameters.
We observe that, after initialization, the lowest–resolution network state (at
the bottom of the diagram) never changes. Thus, the highest–resolution state
(at the top of the diagram) is always $3$–layers away from this fixed state.
This is similar to a long–range skip–connection in DenseNet (Huang et al.
2017), but in BPP this shortcut moves through a different resolution. Because
of scale causality, the next low–resolution level moves relatively close to
the fixed state and we can interpret it as a shorter–range skip–connection.
Thus, the particular structure of BPP allows quick paths from the output to
every layer of the network, similar to DenseNets, which is convenient for the
gradient flow during back–propagation steps.
### Evaluation Metrics
Figure 10: Diagram notation.
Quantitative evaluations in our experiments include three objective metrics:
PSNR, SSIM and HIGRADE–2. From these, PSNR and SSIM are reference–based
metrics that measure the difference between an impaired image and ground
truth. Higher values are better in both cases. The PSNR (range $0$ to
$\infty$) is a log–scale version of mean–square–error and SSIM (range $0$ to
$1$) uses image statistics to better correlate with human perception. Full
expressions are as follows:
$PSNR(X,Y)=10\cdot\log_{10}\left(\frac{255^{2}}{MSE}\right)\;,$ (4)
$SSIM(X,Y)=\frac{(2\mu_{X}\mu_{Y}+c_{1})(2\sigma_{XY}+c_{2})}{(\mu_{X}^{2}+\mu_{Y}^{2}+c_{1})(\sigma_{X}^{2}+\sigma_{Y}^{2}+c_{2})}\;,$ (5)
where $MSE=\mathbb{E}\left[(X-Y)^{2}\right]$ is the mean square error of the
difference between $X$ and $Y$; $\mu_{X}$ and $\mu_{Y}$ are the averages of
$X$ and $Y$, respectively; $\sigma_{X}^{2}$ and $\sigma_{Y}^{2}$ are the
variances of $X$ and $Y$, respectively; $\sigma_{XY}$ is the covariance of X
and Y; $c_{1}=6.5025$ and $c_{2}=58.5225$.
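For concreteness, the two expressions can be evaluated directly with NumPy. Note that Eq. (5) is written with global image statistics; standard SSIM implementations average the same expression over local windows, so this sketch will not reproduce benchmark SSIM values exactly:

```python
import numpy as np

def psnr(x, y):
    """PSNR in dB for 8-bit images, Eq. (4): 10*log10(255^2 / MSE)."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """SSIM with global statistics, Eq. (5); real SSIM averages this locally."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

The constants $c_{1}=(0.01\cdot 255)^{2}$ and $c_{2}=(0.03\cdot 255)^{2}$ stabilize the ratio when the local statistics are close to zero.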
HIGRADE–2(Kundu et al. 2017b) is a non–reference image quality metric based on
gradient scene–statistics defined in the LAB color space and it is often used
to evaluate high–dynamic–range images. Here, we used the Matlab code available
in (Kundu et al. 2017a).
In the case of PSNR and SSIM metrics, we follow existing benchmarks that use
different versions of these metrics. We used the following three definitions
in our experiments:
* •
$\boldsymbol{PSNR/SSIM-Y_{M}}$: Based on the Matlab code available in (Zhang
et al. 2018b), computes PSNR/SSIM on the $Y$ channel. Matlab uses a conversion
of RGB to YUV color–spaces following the BT.709 standard, including offsets
that are often avoided in other implementations.
* •
$\boldsymbol{PSNR/SSIM-Y_{P}}$: Based on the Python code available in (Qian et
al. 2018c), computes PSNR/SSIM on the $Y$ channel. The code uses an OpenCV
function to convert from RGB to YCbCr color–space.
* •
$\boldsymbol{PSNR/SSIM-RGB}$: Based on the Python code available in (Ancuti,
Ancuti, and Timofte 2019), computes the average PSNR/SSIM for pairs of RGB
images.
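The practical difference between these variants comes down to the RGB-to-luma conversion. Below is a minimal sketch of the two common conventions (the coefficients shown are the usual BT.601 values; exact constants vary between implementations, so treat this as illustrative rather than the benchmarks' exact code):

```python
import numpy as np

def rgb_to_y_offset(rgb):
    """Studio-range luma with offset: Y in [16, 235] for RGB in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def rgb_to_y_full(rgb):
    """Full-range luma without offset: Y in [0, 255] for RGB in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Because the offset and scaling change the MSE, PSNR values computed under the two conventions are not directly comparable, which is why the metric variant is reported alongside each benchmark.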
### Training Settings
#### Image Super–Resolution
We use DIV2K(Agustsson and Timofte 2017) and FLICKR–2K datasets for training
and the following datasets for test: Set–14(Zeyde, Elad, and Protter 2010),
BSDS–100(Martin et al. 2001), Urban–100(Huang, Singh, and Ahuja 2015) and
Manga–109(Matsui et al. 2017). Impaired images were obtained by downscaling
and then upscaling ground truth images, using Bicubic scaler, with scaling
factors: $2\times$, $3\times$, $4\times$ and $8\times$. Our target is to
recover the ground truth so we use a loss function that measures the $L_{1}$
distance between impaired images and ground truth. For evaluation we measure
PSNR and SSIM on the Y–channel using the Matlab code from (Zhang et al.
2018b).
We follow the training settings from (Lim et al. 2017). In each training
batch, we randomly take $16$ impaired patches from our training set ($800$
DIV2K plus $2,650$ FLICKR–2K images). We consider two cases: we train a model
BPP–SR$\times f$ for each upscaling factor $f=2,3,4$ and $8$; and we also
train a model BPP–SRx2x3x4x8 to restore impaired images with unknown upscaling
factor. We use patch size $48f\times 48f$, for $f=2,3$ and $4$, and $192\times
192$ for $f=8$ and unknown upscaling factor. We augment the patches by random
horizontal/vertical flipping and rotating $90^{\circ}$. We use Adam
optimizer(Kingma and Ba 2015) with learning rate initialized to $10^{-4}$ and
decreased by half every $200,000$ back–propagation steps.
The training data used for the BPP–SRx2x3x4x8 model includes all images used
for training the upscaling factors $f=2,3,4$ and $8$. We could have chosen to
train our model using a random and fractional upscaling factor $2.0\leqslant
f\leqslant 8.0$, but this would have made it difficult to reproduce the
training settings.
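The schedule and augmentation described above are simple to state in code; the following is an illustrative sketch, not the exact training script:

```python
import numpy as np

def learning_rate(step, base_lr=1e-4, halve_every=200_000):
    """Adam learning rate: initialized to 1e-4, halved every 200,000 steps."""
    return base_lr * 0.5 ** (step // halve_every)

def augment(patch, rng):
    """Random horizontal/vertical flip and 90-degree rotation of an HWC patch."""
    if rng.random() < 0.5:
        patch = patch[:, ::-1]   # horizontal flip
    if rng.random() < 0.5:
        patch = patch[::-1, :]   # vertical flip
    if rng.random() < 0.5:
        patch = np.rot90(patch)  # rotate by 90 degrees (square patches only)
    return patch
```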
Table 3: Extended quantitative evaluation for super–resolution.
Algorithm | Scale | Set14 PSNR–$Y_{M}$ | Set14 SSIM–$Y_{M}$ | BSDS100 PSNR–$Y_{M}$ | BSDS100 SSIM–$Y_{M}$ | Urban100 PSNR–$Y_{M}$ | Urban100 SSIM–$Y_{M}$ | Manga109 PSNR–$Y_{M}$ | Manga109 SSIM–$Y_{M}$
---|---|---|---|---|---|---|---|---|---
Bicubic | $2\times$ | 30.34 | 0.870 | 29.56 | 0.844 | 26.88 | 0.841 | 30.84 | 0.935
A+ (Timofte and Smet 2014) | $2\times$ | 32.40 | 0.906 | 31.22 | 0.887 | 29.23 | 0.894 | 35.33 | 0.967
FSRCNN (Dong, Loy, and Tang 2016) | $2\times$ | 32.73 | 0.909 | 31.51 | 0.891 | 29.87 | 0.901 | 36.62 | 0.971
SRCNN (Dong et al. 2014) | $2\times$ | 32.29 | 0.903 | 31.36 | 0.888 | 29.52 | 0.895 | 35.72 | 0.968
MSLapSRN (Lai et al. 2018) | $2\times$ | 33.28 | 0.915 | 32.05 | 0.898 | 31.15 | 0.919 | 37.78 | 0.976
VDSR (Kim, Lee, and Lee 2016a) | $2\times$ | 32.97 | 0.913 | 31.90 | 0.896 | 30.77 | 0.914 | 37.16 | 0.974
LapSRN (Lai et al. 2017) | $2\times$ | 33.08 | 0.913 | 31.80 | 0.895 | 30.41 | 0.910 | 37.27 | 0.974
DRCN (Kim, Lee, and Lee 2016b) | $2\times$ | 32.98 | 0.913 | 31.85 | 0.894 | 30.76 | 0.913 | 37.57 | 0.973
MGBP (Navarrete Michelini, Liu, and Zhu 2019) | $2\times$ | 33.27 | 0.915 | 31.99 | 0.897 | 31.37 | 0.920 | 37.92 | 0.976
D-DBPN (Haris, Shakhnarovich, and Ukita 2018) | $2\times$ | 33.85 | 0.919 | 32.27 | 0.900 | 32.70 | 0.931 | 39.10 | 0.978
EDSR (Lim et al. 2017) | $2\times$ | 33.92 | 0.919 | 32.32 | 0.901 | 32.93 | 0.935 | 39.10 | 0.977
RDN (Zhang et al. 2018c) | $2\times$ | 34.28 | 0.924 | 32.46 | 0.903 | 33.36 | 0.939 | 39.74 | 0.979
RCAN (Zhang et al. 2018a) | $2\times$ | 34.12 | 0.921 | 32.41 | 0.903 | 33.34 | 0.938 | 39.44 | 0.979
BPP–SRx2x3x4x8 | $2\times$ | 33.27 | 0.913 | 31.21 | 0.879 | 31.67 | 0.921 | 38.31 | 0.975
BPP–SRx2 | $2\times$ | 34.23 | 0.922 | 31.63 | 0.886 | 33.07 | 0.935 | 39.19 | 0.977
Bicubic | $3\times$ | 27.55 | 0.774 | 27.21 | 0.739 | 24.46 | 0.735 | 26.95 | 0.856
SRCNN (Dong et al. 2014) | $3\times$ | 29.30 | 0.822 | 28.41 | 0.786 | 26.24 | 0.799 | 30.48 | 0.912
MSLapSRN (Lai et al. 2018) | $3\times$ | 29.97 | 0.836 | 28.93 | 0.800 | 27.47 | 0.837 | 32.68 | 0.939
LapSRN (Lai et al. 2017) | $3\times$ | 29.87 | 0.832 | 28.82 | 0.798 | 27.07 | 0.828 | 32.21 | 0.935
EDSR (Lim et al. 2017) | $3\times$ | 30.52 | 0.846 | 29.25 | 0.809 | 28.80 | 0.865 | 34.17 | 0.948
RDN (Zhang et al. 2018c) | $3\times$ | 30.74 | 0.850 | 29.38 | 0.812 | 29.18 | 0.872 | 34.81 | 0.951
BPP–SRx2x3x4x8 | $3\times$ | 30.23 | 0.838 | 28.81 | 0.794 | 28.43 | 0.852 | 33.75 | 0.943
BPP–SRx3 | $3\times$ | 30.78 | 0.848 | 29.14 | 0.804 | 29.56 | 0.873 | 34.49 | 0.948
Bicubic | $4\times$ | 26.10 | 0.704 | 25.96 | 0.669 | 23.15 | 0.659 | 24.92 | 0.789
A+ (Timofte and Smet 2014) | $4\times$ | 27.43 | 0.752 | 26.82 | 0.710 | 24.34 | 0.720 | 27.02 | 0.850
FSRCNN (Dong, Loy, and Tang 2016) | $4\times$ | 27.70 | 0.756 | 26.97 | 0.714 | 24.61 | 0.727 | 27.89 | 0.859
SRCNN (Dong et al. 2014) | $4\times$ | 27.61 | 0.754 | 26.91 | 0.712 | 24.53 | 0.724 | 27.66 | 0.858
MSLapSRN (Lai et al. 2018) | $4\times$ | 28.26 | 0.774 | 27.43 | 0.731 | 25.51 | 0.768 | 29.54 | 0.897
VDSR (Kim, Lee, and Lee 2016a) | $4\times$ | 28.03 | 0.770 | 27.29 | 0.726 | 25.18 | 0.753 | 28.82 | 0.886
LapSRN (Lai et al. 2017) | $4\times$ | 28.19 | 0.772 | 27.32 | 0.728 | 25.21 | 0.756 | 29.09 | 0.890
DRCN (Kim, Lee, and Lee 2016b) | $4\times$ | 28.04 | 0.770 | 27.24 | 0.724 | 25.14 | 0.752 | 28.97 | 0.886
MGBP (Navarrete Michelini, Liu, and Zhu 2019) | $4\times$ | 28.43 | 0.778 | 27.42 | 0.732 | 25.70 | 0.774 | 30.07 | 0.904
D-DBPN (Haris, Shakhnarovich, and Ukita 2018) | $4\times$ | 28.82 | 0.786 | 27.72 | 0.740 | 26.54 | 0.795 | 31.18 | 0.914
EDSR (Lim et al. 2017) | $4\times$ | 28.80 | 0.788 | 27.71 | 0.742 | 26.64 | 0.803 | 31.02 | 0.915
RDN (Zhang et al. 2018c) | $4\times$ | 29.01 | 0.791 | 27.85 | 0.745 | 27.01 | 0.812 | 31.74 | 0.921
RCAN (Zhang et al. 2018a) | $4\times$ | 28.87 | 0.789 | 27.77 | 0.744 | 26.82 | 0.809 | 31.22 | 0.917
BPP–SRx2x3x4x8 | $4\times$ | 28.55 | 0.778 | 27.43 | 0.728 | 26.48 | 0.791 | 30.81 | 0.909
BPP–SRx4 | $4\times$ | 29.07 | 0.791 | 27.89 | 0.745 | 27.55 | 0.819 | 31.63 | 0.918
Bicubic | $8\times$ | 23.19 | 0.568 | 23.67 | 0.547 | 20.74 | 0.516 | 21.47 | 0.647
A+ (Timofte and Smet 2014) | $8\times$ | 23.98 | 0.597 | 24.20 | 0.568 | 21.37 | 0.545 | 22.39 | 0.680
FSRCNN (Dong, Loy, and Tang 2016) | $8\times$ | 23.93 | 0.592 | 24.21 | 0.567 | 21.32 | 0.537 | 22.39 | 0.672
SRCNN (Dong et al. 2014) | $8\times$ | 23.85 | 0.593 | 24.13 | 0.565 | 21.29 | 0.543 | 22.37 | 0.682
MSLapSRN (Lai et al. 2018) | $8\times$ | 24.57 | 0.629 | 24.65 | 0.592 | 22.06 | 0.598 | 23.90 | 0.759
VDSR (Kim, Lee, and Lee 2016a) | $8\times$ | 24.21 | 0.609 | 24.37 | 0.576 | 21.54 | 0.560 | 22.83 | 0.707
LapSRN (Lai et al. 2017) | $8\times$ | 24.44 | 0.623 | 24.54 | 0.586 | 21.81 | 0.582 | 23.39 | 0.735
MGBP (Navarrete Michelini, Liu, and Zhu 2019) | $8\times$ | 24.82 | 0.635 | 24.67 | 0.592 | 22.21 | 0.603 | 24.12 | 0.765
D-DBPN (Haris, Shakhnarovich, and Ukita 2018) | $8\times$ | 25.13 | 0.648 | 24.88 | 0.601 | 22.83 | 0.622 | 25.30 | 0.799
EDSR (Lim et al. 2017) | $8\times$ | 24.94 | 0.640 | 24.80 | 0.596 | 22.47 | 0.620 | 24.58 | 0.778
RDN (Zhang et al. 2018c) | $8\times$ | 25.38 | 0.654 | 25.01 | 0.606 | 23.04 | 0.644 | 25.48 | 0.806
RCAN (Zhang et al. 2018a) | $8\times$ | 25.23 | 0.651 | 24.98 | 0.606 | 23.00 | 0.645 | 25.24 | 0.803
BPP–SRx2x3x4x8 | $8\times$ | 25.10 | 0.642 | 24.89 | 0.598 | 22.72 | 0.626 | 24.78 | 0.785
BPP–SRx8 | $8\times$ | 25.53 | 0.655 | 25.11 | 0.607 | 23.17 | 0.649 | 25.28 | 0.800
Figure 11: Extended qualitative evaluation for super–resolution.
#### Mobile–to–DSLR Photo Translation
We use the DPED(Ignatov et al. 2017) dataset for training and test. This
dataset provides $100\times 100$ aligned patches taken from iPhone–mobile
photos (impaired) and DSLR–Canon photos (ground truth). There are $160,471$
patches available for training and $4,353$ patches for test. We take $400$
patches from the test set for validation during training. We use full size
iPhone images from DPED for qualitative results. For loss function we use the
negative SSIM between impaired and ground truth patches. We find SSIM to be
more effective than $L_{1}$ and MSE losses in this problem. For evaluation we
measure the average PSNR and SSIM metrics for RGB pairs, using the code from
(Ancuti, Ancuti, and Timofte 2019), and the non–reference metric
HIGRADE–2(Kundu et al. 2017b) using the Matlab code available from (Kundu et
al. 2017a).
In each training batch, we take $16$ patches of size $100\times 100$. We use
Adam optimizer(Kingma and Ba 2015) with learning rate initialized to $10^{-4}$
and decreased by half every $200,000$ back–propagation steps. We do not
observe improvements after $200$ epochs.
Figure 12: Extended qualitative evaluation for Mobile–to–DSLR photo
translation.
#### Image Dehaze
We use the following real haze datasets: I–Haze(Ancuti et al. 2018a),
O–Haze(Ancuti et al. 2018c) and Dense–Haze(Ancuti et al. 2019). We follow the
training setting from (Zhang, Sindagi, and Patel 2018). In each training
batch, we take $1$ patch of size $528\times 528$. The training set is
augmented by rescaling the images, using bicubic scaler, to $1.25\times$,
$1\times$, $0.625\times$ and $0.3125\times$ the original size. We use Adam
optimizer(Kingma and Ba 2015) with learning rate initialized to $10^{-4}$ and
decreased by half every $200,000$ back–propagation steps. We train the system
for $10,000$ epochs.
Figure 13: Extended qualitative evaluation for image dehaze for Indoor/Outdoor
datasets. Figure 14: Extended qualitative evaluation for image dehaze for
Dense dataset.
#### Joint HDR and Super–Resolution
We use the HDR–Eye(Nemoto et al. 2015) dataset for training and Wang LDR(Wang
et al. 2013) dataset for test. HDR–Eye(Nemoto et al. 2015) provides HDR images
constructed from multi–exposure photographs. Following the training
configuration in (Soh, Park, and Cho 2019), we select $40$ from a total of
$46$ standard–exposed and HDR–constructed pairs of images (we excluded images
C01.png, C04.png, C13.png, C28.png, C38.png and C42.png, because of visible
misalignment problems in the HDR image constructions). Then, we take each
standard–exposed image and we: first, downscale it by factor $2$; and then
upscale it by factor $2$ (both with bicubic scaler), and use this output as
impaired image. We follow the configuration in (Soh, Park, and Cho 2019)
although their tone–mapping algorithms are not specified and tone–mapped
images are not provided. We tried several tone–mapping algorithms until we
were able to produce competitive quantitative and qualitative outputs. For our
final results we used the OpenCV implementation of Reinhard–Devlin
tone–mapping (Reinhard and Devlin 2005) with parameters $\text{gamma}=2.2$,
$\text{intensity}=0$, $\text{light\\_adapt}=0$, and $\text{color\\_adapt}=0$.
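For readers who want a comparable output without OpenCV, the sketch below implements a simplified global Reinhard-style operator (luminance compression $L/(1+L)$ plus gamma encoding). It is only a stand-in for the Reinhard–Devlin implementation actually used, which additionally models the intensity and adaptation parameters listed above:

```python
import numpy as np

def reinhard_global(hdr, gamma=2.2):
    """Simplified global Reinhard operator: compress luminance with L/(1+L),
    apply the same per-pixel scale to RGB, then gamma-encode to [0, 1]."""
    hdr = np.maximum(hdr, 0.0)
    lum = 0.299 * hdr[..., 0] + 0.587 * hdr[..., 1] + 0.114 * hdr[..., 2]
    scale = 1.0 / (1.0 + lum)            # tone-compression factor per pixel
    ldr = np.clip(hdr * scale[..., None], 0.0, 1.0)
    return ldr ** (1.0 / gamma)
```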
We train our system using patches of size $456\times 456$. Following the
analysis in (Soh, Park, and Cho 2019), we use the non–reference image quality
metrics: Ma(Ma et al. 2017, 2018), to evaluate SR improvements; and
HIGRADE–2(Kundu et al. 2017b, a) to evaluate HDR improvements. We augment the
patches by random horizontal/vertical flipping and rotating $90^{\circ}$. We
use Adam optimizer(Kingma and Ba 2015) with learning rate initialized to
$10^{-4}$ and decreased by half every $200,000$ back–propagation steps. We
train the system for $10,000$ epochs.
Figure 15: Extended qualitative evaluation for joint HDR+SR enhancement.
#### Raindrop Removal
We use the DeRaindrop(Qian et al. 2018a, b) dataset for training and test.
This dataset provides paired images, one degraded by raindrops and the other
one free from raindrops. In each training batch, we take $1$ patch of size
$528\times 528$. We train a BPP model using $L_{1}$ loss and patch size
$456\times 456$. We use Adam optimizer(Kingma and Ba 2015) with learning rate
initialized to $10^{-4}$ and decreased by half every $200,000$
back–propagation steps. We train the system for $10,000$ epochs.
Figure 16: Extended qualitative evaluation for raindrop removal.
### Computing Infrastructure
All training processes run on Linux operating system, using implementations in
Python language with software packages: Numpy, Pytorch(Paszke et al. 2017),
Scilab, Pillow and OpenCV. We used NVIDIA Tesla M40 (24GB) GPU for training
and NVIDIA Titan–X Maxwell (12GB) for tests.
### Reproducibility
All output images of the BPP systems obtained in our experiments can be
downloaded from the following link (2.77 GB). This can be used to reproduce
all quantitative evaluations in our experiments. We have provided external
links to all evaluation scripts used in our evaluations.
# Asymmetric Si-Slot Coupler with Nonreciprocal Response Based on Graphene
Saturable Absorption
Alexandros Pitilakis, Dimitrios Chatzidimitriou, Traianos Yioultsis,
and Emmanouil E. Kriezis Manuscript received January 25, 2021; revised March
11, 2021; accepted March 31, 2021. This research is co-financed by Greece and
the European Union (European Social Fund-ESF) through the Operational
Programme “Human Resources Development, Education and Lifelong Learning
2014-2020” in the context of the project “Design of nonlinear silicon devices
incorporating graphene and using the Parity-Time symmetry concept” (MIS
5047874). (Corresponding author: Alexandros Pitilakis.) All authors are with
the Aristotle University of Thessaloniki, School of Electrical and Computer
Engineering, 54124 Greece (email: alexpiti@auth.gr). © 2021 IEEE. Personal use
of this material is permitted, but republication/redistribution requires IEEE
permission. Refer to IEEE Copyright and Publication Rights for more details.
Digital Object Identifier (DOI): 10.1109/JQE.2021.3071247. IEEE Xplore
URL: https://ieeexplore.ieee.org/document/9395480
###### Abstract
We present the study of a proof-of-concept integrated device that can be used
as a nonlinear broadband isolator. The device is based on the asymmetric
loading of a highly-confining silicon-slot photonic coupler with graphene
layers, whose ultrafast and low-threshold saturable absorption can be
exploited for nonreciprocal transmission between the cross-ports of the
coupler. The structure is essentially a non-Hermitian system, whose
exceptional points are briefly discussed. The nonlinear device is modeled with
a coupled Schrödinger equation system whose validity is checked by full-vector
finite element-based beam-propagation method simulations in CW. The
numerically computed performance reveals a nonreciprocal intensity range
(NRIR) in the vicinity of 100 mW peak power with a bandwidth spanning tens of
nanometers, from CW down to ps-long pulses. Finally, the combination of
saturable absorption and self-phase modulation (Kerr effect) in graphene is
studied, indicating the existence of two NRIRs with opposite directionality.
###### Index Terms:
Nonlinear optics, nonreciprocity, graphene, silicon photonics, directional
coupler, beam propagation method.
## I Introduction
The majority of passive and tunable photonic integrated circuits (PIC) and
components are reciprocal, i.e., they exhibit exactly equal forward and
backward transmission. Nonreciprocity is an often misunderstood [1, 2]
electromagnetic (EM) property, denoting the absence of reciprocity, i.e.,
unequal transmission when input and output ports are interchanged. The
archetype nonreciprocal component in guided-wave devices is the isolator, a
two-port unidirectional device that allows low-loss forward transmission while
blocking the backward one. The three-port extension of the isolator is a
device with circular (azimuthal) symmetry which allows “unirotational”
transmission between its ports, e.g., the input signal is only forwarded to
the adjacent port in a fixed sense of rotation. Isolator and circulator
functionalities are invaluable to source protection and full-duplex
communication channels, respectively [3]. Specifically for optical
communications, isolators are required to protect laser source cavities from
destructive back-reflections, or to isolate parts of a circuit from harmful
interference; similarly, circulators enable bi-directional communication over
the same transmission channel, e.g., a single-mode fiber. Both functionalities
are vital to optical transceivers, themselves essential to high-speed optical
interconnects in datacenters, or emerging photonic applications such as LiDAR
[4] or sensors [5].
Fundamental EM theory allows three avenues to “breaking” reciprocity, i.e.,
time-reversal symmetry: (i) magnetic properties [6], (ii) space-time
modulation [7], or (iii) nonlinearity combined with spatial asymmetry [8]. The
present work focuses in the latter, which does not require active elements or
multiple waves (unlike space-time modulation) and does not implicate magneto-
optic materials which are bulky and incompatible with contemporary PIC
technologies, e.g. SOI (silicon-on-insulator) or SiN (silicon nitride), with
few exceptions [9]. Nonreciprocity through nonlinearity, see Section XXI in
[2], additionally requires spatial asymmetry in the structure; moreover,
nonlinear isolators are subject to inherent bounds such as a limited range of
powers, half-duplex operation in CW (i.e., simultaneous excitation from both
directions is prohibited), or asymptotic performance thresholds. Partially
overcoming these limitations, and building upon expertise in nonlinear
graphene-comprising [10, 11, 12] and hybrid silicon photonic design [13, 14],
we demonstrate a proof-of-concept device based on graphene saturable
absorption (SA) in a non-resonant structure operating in the NIR (1550 nm)
region. Our device is an asymmetrically-loaded SOI directional coupler, where
the loading consists of graphene sheets [15, 16], motivated by the broadband
response and the rather low SA intensity-threshold [17, 18]. The technological
maturity of the SOI platform is indispensable in engineering tightly confining
graphene-loaded waveguides, so as to maximize the loss-contrast between the
low- and high-power regime, simultaneously decreasing the power-threshold of
SA-onset and eliminating unwanted nonlinear effects, e.g., from silicon.
This device has three operation regimes: bidirectional isolation at low
powers, half-duplex isolation for powers inside the nonreciprocal intensity
range (NRIR), and bidirectional transmission above a higher “breakdown” power.
Our approach deviates in two aspects from the more frequently used phase-
related nonlinearities (e.g., Kerr effect) implemented in resonant cavities
[19, 20, 4], offering half-duplex isolator performance in a very large
bandwidth, and thus has potential applications in high-fluence fs-pulsed on-
chip sources. Moreover, we offer a novel design concept, based on a non-
Hermitian system, i.e., a pair of coupled subsystems with asymmetry in their
loss, with its signature exceptional points delimiting sharp changes in their
response [21, 11, 22]; note that a special class of non-Hermitian systems are
those exhibiting parity-time ($\mathcal{PT}$) symmetry, where exactly balanced
gain and loss are present. Finally, we note that graphene SA has potential
applications in all-optical interconnects or pulsed-source components, e.g.,
as an SA mirror [23].
The remainder of this paper is organized as follows: Section II presents the
device concept, physical description of the graphene SA used, and coupled-
equation modeling of the non-Hermitian system. Section III contains the
implementation in a graphene-clad Si-photonic waveguide coupler and its
simulated CW performance. Section IV addresses the pulsed regime performance
and the combined effect of SA and Kerr effect. Section V provides the
conclusions of our work.
## II Device Concept and Framework
### II-A Nonreciprocal Asymmetrically-loaded Coupler
A schematic of the directional coupler is illustrated in Fig. 1, where a
graphene ribbon asymmetrically loads only one of the silicon-slot waveguides;
the device $z$-length $L$ is a few hundred microns and the $x$-gap between the
two slot waveguides is $g\approx 1~{}\mu$m. The nonreciprocal response is due
to the SA in graphene and manifests as unequal forward and backward “cross-
port” transmission, $T_{F}=T_{2\leftarrow 1}\neq T_{B}=T_{1\leftarrow 2}$. In
the CW regime, only half-duplex isolation can be achieved, i.e., we can excite
only one port at a time (1:forward, 2:backward); full-duplex isolation is
possible in the pulsed regime, provided that the pulse duration is short and
the repetition-rate is low. Note that the underlying photonic coupler in the
absence of graphene loading is synchronized, i.e., its two Si-slot waveguides
are identical in all their geometric and EM parameters. Also, the structure is
$z$-invariant and all ports are non-reflecting.
Figure 1: Schematic of the asymmetrically loaded Si-slot waveguide coupler
with annotated dimensions; $xy$-axes are in-scale with $g\approx 1~{}\mu$m and
$z$-length $L$ is a few hundred microns. When used as a two-port nonreciprocal
structure, the forward and backward transmission is defined between the
“cross” ports of the coupler, i.e., $T_{F}=T_{2\leftarrow 1}$ and
$T_{B}=T_{1\leftarrow 2}$. Bottom right-hand inset: Due to the symmetry in the
structure, we can interchange primed and unprimed ports, and in all cases the
unused “bar” output ports are assumed matched.
Assuming that the directional coupler $z$-length is approximately equal to the
coupling length ($L_{c}$) of the device in the absence of the graphene-SA
loading, the operation concept can be described as follows. In the low-power
(linear) regime, the large asymmetry in the losses between the two waveguides
means that coupling is inhibited, and cross-transmission is very low; in this
regime the two-port device is reciprocal with very low transmission in both
directions, $T_{F}\approx T_{B}\rightarrow 0$. Now, nonreciprocity is attained
in the nonlinear regime, for input power inside the NRIR, which lies above the
loaded-waveguide SA threshold. When exciting the graphene-loaded waveguide,
the high power quenches its losses thanks to SA so that both waveguides are
practically lossless and the coupler is almost synchronized; this allows the
signal to cross to the lossless waveguide and this is the “forward” or through
direction, with high transmission $T_{F}\rightarrow 1$, Fig. 2(a). On the
contrary, when exciting the lossless waveguide with a moderately high power,
the losses in the opposite (graphene-loaded) waveguide remain high so that
cross-coupling is inhibited due to the asymmetry; this is the “backward” or
isolated direction, with low transmission $T_{B}\rightarrow 0$, Fig. 2(b).
Finally, when the backward excitation power exceeds a threshold value, cross-
saturation synchronizes the coupler allowing high backward transmission; this
is the “breakdown” regime of the device with quasi-reciprocal high-
transmission in both directions, $T_{F}\approx T_{B}\rightarrow 1$. The
asymmetry between the transmission in the two directions for powers inside the
NRIR can be engineered in a half-duplex isolator, based on the nonlinearity
and on the asymmetric graphene-loading.
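The three regimes described above can be reproduced qualitatively with a toy coupled-mode model: two coupled amplitudes, one of which is subject to a saturable loss. All numerical values below (coupling rate, loss rate, saturation power) are arbitrary illustrative choices, not the device parameters of Section III:

```python
import numpy as np

def cross_transmission(p_in, excite_loaded, kappa=np.pi / 2, alpha0=500.0,
                       p_sat=1e-3, length=1.0, steps=20_000):
    """Euler integration of a toy two-guide coupler; guide 1 carries the
    saturable absorber:
        da1/dz = i*kappa*a2 - [alpha0/2 / (1 + |a1|^2/p_sat)] * a1
        da2/dz = i*kappa*a1
    Returns the cross-port transmission |a_other(L)|^2 / p_in."""
    a1 = complex(np.sqrt(p_in)) if excite_loaded else 0j
    a2 = 0j if excite_loaded else complex(np.sqrt(p_in))
    dz = length / steps
    for _ in range(steps):
        loss = 0.5 * alpha0 / (1.0 + abs(a1) ** 2 / p_sat)
        a1, a2 = (a1 + dz * (1j * kappa * a2 - loss * a1),
                  a2 + dz * (1j * kappa * a1))
    out = a2 if excite_loaded else a1
    return abs(out) ** 2 / p_in

t_forward = cross_transmission(2.5, excite_loaded=True)    # SA quenches the loss
t_backward = cross_transmission(2.5, excite_loaded=False)  # absorber stays lossy
```

With these toy numbers the forward cross-transmission is high (the input bleaches the absorber) while the backward one stays orders of magnitude lower, mimicking the half-duplex isolation inside the NRIR; raising `p_in` a few-fold further saturates the absorber from the backward side as well and reproduces the quasi-reciprocal breakdown regime.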
Figure 2: Concept illustration of the (a) high forward and (b) low backward
transmission that can be attained for input powers inside the NRIR. The length
of the asymmetrically-loaded nonlinear device is equal to the coupling-length
of the underlying Si-slot waveguide coupler (in the absence of graphene).
Indicative geometric dimensions can be found in Fig. 1, 4, 5, and 6.
### II-B Saturable Absorption in Graphene
As described in Section II-A, the device operation relies on saturable
absorption, i.e., the nonlinear quenching of losses with increasing power. In
a perturbative third-order nonlinear regime, SA can be treated with a term
similar to the one commonly used for two-photon absorption (TPA) but of
opposite sign. TPA is a nonlinear process that increases the losses for high
intensity signals, thus the sign reversal, and manifests in semiconductors by
absorbing photons above half the bandgap energy and generating free carriers
[24]. In most materials, SA is typically observed at higher power thresholds,
closer to optical damage, in which case it ceases to fall into the
perturbative regime. Most of the atoms absorbing the radiation energy are
excited to higher energy states and can no longer relax their energy to the
lattice quickly enough to be re-excited and absorb more energy. This process
leads to SA and culminates in the breakdown of the
material (irreversible damage) as power is further increased. The critical
parameters for any SA material are the saturation intensity (in W/m2, defined
as the CW intensity for which absorption reduces to half of the low-power
regime) or fluence (in J/m2, for pulsed excitation), and the relaxation time,
i.e., the time required for the material to desaturate, shedding its energy to
the lattice as its atoms decay to lower energy states.
Graphene, a 2D semi-metal or zero bandgap semiconductor [15], can be cast as a
high-contrast SA material in the NIR, owing to its mono-atomic thickness and
the gap-less Dirac-cone dispersion. Various theoretical models have been
proposed for its nonlinear behaviour, in perturbative [25, 26] and non-
perturbative [18, 27, 28] regimes, using semi-classical and/or thermodynamic
tools. These models lead to standard third-order nonlinear response (Kerr
effect, self/cross-phase modulation, four-wave mixing, parametric conversion)
or to a more complicated response, when coupled to the photo-excited carrier
plasma [29, 30, 31]. All these models predict a SA regime for graphene, when
it is biased below the half-photon energy where interband electronic
transitions are not restricted by Pauli blocking and, consequently, absorption
is high (metallic regime). When biased above that threshold energy, graphene
is practically transparent in the NIR due to the absence of interband
mechanism (dielectric regime) and, moreover, it exhibits TPA [25, 26]. One can
understand the SA behavior in simplistic thermodynamic terms as follows: In
the loss regime, graphene carriers absorb the EM energy and their excitation
leads to a nearly instantaneous (tens of fs timescale) heating; the effect on
the surface conductivity of this large temperature increase is a blurring
between the inter- and intraband mechanisms [25] and eventually a transition
between its two regimes, the high loss (metallic) and low loss (dielectric).
Desaturation happens at slower timescales, in the ps-order, due to interband
recombination and various scattering processes [28]. So, if the graphene
Fermi energy is set within the bounds of the high-loss regime, then high-
intensity illumination will decrease the losses; the higher the intensity, the
higher the saturation of losses and the higher the loss contrast, i.e., the
difference in losses between the linear (low-power) and the nonlinear (high-
power) regime. We note that this field is currently under intense
investigation, with large deviations in the reported nonlinear parameters and
the thresholds between perturbative/non-perturbative regimes. These aspects
transcend the scope of this work, which is to investigate the performance of
SA-enabled nonreciprocity in a realistic proof-of-concept device. In this
spirit, we assume an instantaneous SA response with a phenomenological model
for graphene conductivity and study its spatially averaged effect on the
optical propagation in picosecond temporal regimes.
In such a model, the graphene conductivity can be separated in two parts, the
non-saturable and the saturable, which are directly attributed to the
intraband [$\sigma^{(1)}_{i}$] and interband [$\sigma^{(1)}_{e}$] mechanisms,
respectively. The sum of these terms forms the total linear conductivity of
graphene, $\sigma^{(1)}=\sigma^{(1)}_{i}+\sigma^{(1)}_{e}$, and depends on its
effective chemical potential (assumed fixed and below the half-photon energy,
$|\mu_{c}|<\hbar\omega/2$) and its temperature (assumed fixed at equilibrium,
$T=300$ K); exact expressions can be found in [10]. The non-saturable part is
independent of the incident radiation whereas the saturable part is assumed to
scale with the phenomenological law $1/(1+\rho)$, with $\rho$ being
proportional to the optical intensity; the linear and SA regimes are denoted
by $\rho\ll 1$ and $\rho>1$, respectively. In this work
$\rho=|\mathbf{E}_{\parallel}|^{2}/E_{\mathrm{sat}}^{2}$, where
$\mathbf{E}_{\parallel}$ is the E-field component parallel to the graphene
sheet(s), $E_{\mathrm{sat}}^{2}=2Z_{0}I_{\mathrm{sat}}$, $Z_{0}=377$ Ohm, and
$I_{\mathrm{sat}}$ is the saturation intensity. For the latter, we use the
value $I_{\mathrm{sat}}=1$ MW/cm2 [17, 32]. In terms of our full-wave EM
simulations the “effective” surface conductivity across the structure is
$\sigma^{(1)}(x,y,z)=\sigma^{(1)}_{i}+\sigma^{(1)}_{e}\frac{1}{1+|\mathbf{E_{\parallel}}(x,y,z)|^{2}/E_{\mathrm{sat}}^{2}},$
(1)
where $\sigma^{(1)}_{i,e}$ are constants, assuming uniform graphene sheets
(fixed $\mu_{c}$, $T$ and $\omega$). Consequently, the macroscopic spatial
inhomogeneity in this effective $\sigma^{(1)}$ depends only on the local
E-field intensity, and is thus nonlinear. To further simplify our analysis and
concept implementation, focusing on the upper performance threshold, we assume
that since graphene is biased below the half-photon energy, the interband
conductivity dominates [$\sigma^{(1)}_{i}\approx 0$] and it moreover acquires
a real constant value, i.e.,
$\sigma^{(1)}_{e}\approx\sigma_{0}=e^{2}/4\hbar\approx 61~{}\mu$S, where
$\sigma_{0}$ is the “universal” optical conductivity of graphene responsible
for the 2.3% absorption through an air-suspended monolayer.
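Under the simplifications above [$\sigma^{(1)}_{i}\approx 0$, $\sigma^{(1)}_{e}\approx\sigma_{0}$], the local saturable conductivity of (1) reduces to a one-line function. The following is an illustrative sketch (the names and structure are ours, not from any library):

```python
# Physical constants and model parameters from the text.
E_CHARGE = 1.602176634e-19           # elementary charge, C
HBAR = 1.054571817e-34               # reduced Planck constant, J*s
SIGMA_0 = E_CHARGE**2 / (4 * HBAR)   # "universal" conductivity, about 61 uS
Z0 = 377.0                           # free-space impedance, Ohm
I_SAT = 1e6 * 1e4                    # 1 MW/cm^2 expressed in W/m^2
E_SAT_SQ = 2 * Z0 * I_SAT            # saturation field squared, (V/m)^2

def sigma_eff(e_par_sq, sigma_i=0.0, sigma_e=SIGMA_0):
    """Effective surface conductivity of Eq. (1) for a local |E_par|^2."""
    return sigma_i + sigma_e / (1.0 + e_par_sq / E_SAT_SQ)

print(f"sigma_0 = {SIGMA_0 * 1e6:.1f} uS")            # about 61 uS
print(f"linear regime:    {sigma_eff(0.0) * 1e6:.1f} uS")
print(f"at |E|^2=E_sat^2: {sigma_eff(E_SAT_SQ) * 1e6:.1f} uS (halved)")
```

At $|\mathbf{E}_{\parallel}|^{2}=E_{\mathrm{sat}}^{2}$ the saturable part is, by construction, exactly halved.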
Note that, as high-confinement waveguides support hybrid modes, we account for
the tensor properties of the 2D material. In this sense, the value of (1)
corresponds to both nonzero elements of the main diagonal of the second-rank
tensor describing graphene as an isotropic 2D material [10].
### II-C Coupled Mode Framework
For the mathematical modeling of light propagation in this nonreciprocal
device, we employ a coupled-mode theory approach, specifically, a pair of
coupled nonlinear Schrödinger equations (NLSE). This framework properly
accounts for the waveguide geometry and the linear/nonlinear macroscopic
response of constituting materials on the spatial distribution of the guided
modes, through effective parameters rigorously calculated for the specific
physical implementation.
One of the prerequisites for the NLSE derivation and validity is that the
spatial eigenmode profiles are unaltered during propagation, which is the case
for perturbative nonlinearity (Kerr effect) in multimode single-core
waveguides, such as birefringent fibers. As long as all guided modes are only
slightly perturbed during propagation, this concept can be extended to multi-
core waveguides such as directional couplers [13], where we have the
“supermodes”, i.e., eigenmodes with symmetric and anti-symmetric profiles, in
the synchronized case. Introducing non-perturbative nonlinearity to the
coupler will force the coupler eigenmodes to substantially change along the
propagation and their evolution will moreover depend on the symmetry (or lack)
of the initial excitation. Additionally, if the structure is asymmetric with
respect to the material absorption, then we have a non-Hermitian system with
exceptional points (EP), [11], which non-trivially affect the eigenmode
profiles. Specifically, in the asymmetric SA-loaded directional coupler
structure, there is one EP that can be identified as the SA level where the
two eigenmodes of the structure coalesce, i.e., when the eigenvalues and mode
profiles converge; this mode coalescence must not be confused with mirror
symmetry or degeneracy, as we are considering asymmetric single-polarization
waveguides. The combination of these modifications (non-perturbative
nonlinearity and loss asymmetry) renders the coupled-supermode NLSE framework
unusable, because crossing the EP imparts a substantial change in the mode
profiles during propagation.
To overcome this obstacle, we assume that the modification of the coupler
eigenmodes is almost solely attributed to the non-Hermitian nature of the
system and not to the modification of the underlying material EM properties.
In other words, nonlinearity does not substantially modify the mode profiles
of the isolated waveguides. This assumption holds true for NIR waveguides
comprising graphene sheets, which are not contributing to mode confinement or
guidance. Thus, in this work we derive two separate NLSEs, one for each
isolated waveguide of the coupler. We then couple the two equations with a
coefficient derived in the linear regime and when the asymmetry (the graphene
loading) is absent. This approach implies that only “self-acting” nonlinear
effects (such as self-SA and Kerr) are considered and phenomena like direct
cross-phase/amplitude modulation are negligible. This approximation is valid
in the weak-coupling regime that we are considering, as will be demonstrated
in Section III-B by means of numerical simulations. Do note, however, that
indirect cross-effects such as cross-absorption modulation are still allowed,
as power is exchanged between the waveguides.
The derivation of the NLSE is a subject extensively covered in literature,
e.g., in [24, 13]. Here, we directly present the general form of the loosely
coupled NLSE system, under the $e^{+j\omega t}$ harmonic oscillation phase
convention,
$\dfrac{\partial}{\partial z}\begin{bmatrix}A_{1}\\ A_{2}\end{bmatrix}=\begin{bmatrix}+\delta^{(1)}&-j\kappa\\ -j\kappa&+\delta^{(2)}\end{bmatrix}\begin{bmatrix}A_{1}\\ A_{2}\end{bmatrix},$ (2)
where $A_{k}=A_{k}(z,\tau)$ are the complex amplitudes of the guided mode
envelopes in the $k=\\{1,2\\}$ waveguide (e.g., the loaded and unloaded
waveguides in Fig. 1, respectively), measured in $\mathrm{W}^{1/2}$,
$\kappa=\pi/(2L_{c})$ the coupling coefficient, and $\delta^{(k)}$ the $k$-th
mode “self-acting” term:
$\delta^{(k)}=-\frac{\alpha^{(k)}}{2}+j\Delta\beta_{\mathrm{NL}}^{(k)}+j\gamma^{(k)}|A_{k}|^{2}+D^{(k)}.$
(3)
In this compact term, $\alpha^{(k)}$ is the power loss/gain coefficient (if
positive/negative, respectively), $\gamma^{(k)}$ the complex third-order
nonlinear parameter (including Kerr effect and perturbative SA/TPA), and
$\Delta\beta_{\mathrm{NL}}^{(k)}$ includes nonlinear phase-dispersion
contributions excluding third-order effects which are included in
$\gamma^{(k)}$. $D^{(k)}$ is the linear dispersion operator,
$D^{(k)}=\left(\frac{1}{\overline{v}_{\mathrm{g}}}-\frac{1}{v^{(k)}_{\mathrm{g}}}\right)\frac{\partial}{\partial\tau}+\sum_{m=2}^{\infty}(-j)^{m+1}\frac{\beta_{m}^{(k)}}{m!}\frac{\partial^{m}}{\partial\tau^{m}},$
(4)
where
$\overline{v}_{\mathrm{g}}=(v^{(1)}_{\mathrm{g}}+v^{(2)}_{\mathrm{g}})/2$ is
the mean group velocity, $v^{(k)}_{\mathrm{g}}$ the group velocity,
$\beta_{m}^{(k)}$ are the $m$-th dispersion parameters ($m=2,3$ is group-
velocity dispersion, GVD, and third-order dispersion, TOD, respectively), and
$\tau$ is a retarded time frame, moving with $\overline{v}_{\mathrm{g}}$. All
parameters in (2), (3) and (4) are evaluated at a central frequency
$\omega_{0}$, and all are real-valued unless explicitly stated. Finally, note
that $\Delta\beta_{\mathrm{NL}}^{(k)}$ and $\alpha^{(k)}$ allow only
“self-acting” nonlinearity, i.e., they exclusively depend on $A_{k}(z,\tau)$.
This models effects that do not fall into standard categories (these being the
higher-order dispersion terms and the third-order effects, modeled by
$D^{(k)}$ and $\gamma^{(k)}$, respectively), such as non-perturbative SA or
saturable photo-generated carrier refraction [33, 31]. In the latter case, an
additional rate equation is required, coupled to the NLSE system through
$\Delta\beta_{\mathrm{NL}}^{(k)}$ and/or $\alpha^{(k)}$, which governs the
temporal dynamics of the free-carrier plasma generated by the optical envelope
[29].
The coupling coefficient ($\kappa$) is the only parameter evaluated from the
coupler as a whole and not from the individual waveguides. Specifically, if
$\beta_{\mathrm{S}}$ and $\beta_{\mathrm{A}}$ are the phase constants of the
symmetric and anti-symmetric supermodes, respectively, then the coupling
length is given by
$L_{c}=\pi/(\beta_{\mathrm{S}}-\beta_{\mathrm{A}})=\pi/(2\kappa)$. The frequency
dispersion of $\kappa$ can be added to the coupled system in the frequency
domain, either with a Taylor series expansion around $\omega_{0}$ or directly,
from its spectrum $\kappa(\omega)$.
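As a minimal numerical illustration of this relation, with hypothetical supermode effective indices standing in for the FEM-computed values:

```python
import math

lam0 = 1550e-9                      # vacuum wavelength, m
n_S, n_A = 2.0010, 2.0000           # assumed symmetric/anti-symmetric indices
beta_S = 2 * math.pi * n_S / lam0   # supermode phase constants, 1/m
beta_A = 2 * math.pi * n_A / lam0
L_c = math.pi / (beta_S - beta_A)   # coupling (full power-transfer) length
kappa = math.pi / (2 * L_c)         # coupling coefficient, 1/m
print(f"L_c = {L_c * 1e6:.0f} um, kappa = {kappa:.0f} 1/m")
```

Evaluating $\beta_{\mathrm{S}}-\beta_{\mathrm{A}}$ at several wavelengths yields the spectrum $\kappa(\omega)$ mentioned above.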
Figure 3: Evolution of eigenvalues in a non-Hermitian system. (a) Asymmetric
coupler with one transparent waveguide and one waveguide with loss or,
hypothetically, gain. (b) $\mathcal{PT}$-symmetric coupler, with exactly equal
loss and gain in each of its waveguides, not further studied in this work.
To gain insight into the non-Hermitian system evolution, (3) can be cast in a
much simpler form, including only the parameters relevant to our
asymmetrically SA-loaded coupler. Specifically, assuming the CW regime (where
the time-derivatives vanish), absence of third-order nonlinearity, and the
implications of our instantaneous SA model presented in Section II-B, the
coupled equation system for the asymmetric coupler can be dramatically
simplified to
$\dfrac{\partial}{\partial z}\begin{bmatrix}A_{1}\\ A_{2}\end{bmatrix}=\begin{bmatrix}-\alpha/2&-j\kappa\\ -j\kappa&0\end{bmatrix}\begin{bmatrix}A_{1}\\ A_{2}\end{bmatrix},$ (5)
which entails only the coupling coefficient, $\kappa$, and the power-
attenuation factor in the first waveguide, $\alpha=\alpha(|A_{1}|^{2})$, where
we have dropped the superscript. The latter describes the saturation curve of
the graphene-loaded waveguide, having a high value ($\alpha_{0}$) at low
powers and decreasing monotonically as the power increases; the saturation power
($P_{\mathrm{sat}}$) is defined as the value of $|A_{1}|^{2}$ for which
$\alpha=\alpha_{0}/2$. The two eigenvalues $\nu_{1,2}$ of the system can be
easily computed from the matrix in (5) as a function of the normalized
parameter $\alpha/2\kappa$. This unveils the EP at $\alpha/2\kappa=2$ where
the modes coalesce, Fig. 3(a), as well as the hypothetical case of a gain
factor, which has a symmetric EP at $\alpha/2\kappa=-2$. It is worth depicting
the eigenvalues of the $\mathcal{PT}$-symmetric case, i.e., the special case
of a non-Hermitian system with exactly matched gain and loss in each of the
coupler waveguides, Fig. 3(b), whose EPs lie at $\alpha/2\kappa=\pm 1$. More
detailed discussions on the nuances and potential applications of these
features can be found in [11, 21].
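The EP of the asymmetric coupler at $\alpha/2\kappa=2$ is straightforward to verify numerically. A minimal sketch in normalized units ($\kappa=1$):

```python
import numpy as np

def eigenvalues(alpha_over_2kappa, kappa=1.0):
    """Eigenvalues of the CW system matrix of (5): [[-a/2, -jk], [-jk, 0]]."""
    a = 2.0 * kappa * alpha_over_2kappa
    M = np.array([[-a / 2.0, -1j * kappa], [-1j * kappa, 0.0]])
    return np.linalg.eigvals(M)

print(eigenvalues(1.0))  # below the EP: complex (oscillatory) pair
print(eigenvalues(2.0))  # at the EP: both eigenvalues coalesce at -kappa
print(eigenvalues(3.0))  # above the EP: two distinct real decay rates
```

Below the EP the pair has imaginary parts (beating between the waveguides); above it, both eigenvalues become purely real decay rates, the overdamped regime in which the device operates.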
## III Physical Implementation and CW Performance
### III-A Waveguide and Coupler Design
In order to enhance the light-matter interaction in our structure, and so
boost the nonlinear effects originating from the 2D material (graphene), we
select the slot waveguide geometry, where light is confined in a low index
material (air) between two high-index ridges (silicon). The waveguide cross-
section is depicted in the inset of Fig. 4, characterized by high confinement
as the slot width is reduced. This applies to the quasi-TE mode, whose
horizontally polarized transverse E-field component is parallel to a 2D
material patterned as a ribbon that extends to the outer vertical walls of the
Si-ridges. In order to optimize the waveguide dimensions, i.e.,
the Si-ridge height and width, and the slot width, we employ a finite element
method (FEM) based eigenmode solver [34] and extract the modal attenuation
factor. As the absorption in the waveguide is exclusively due to graphene
conductivity ($\mathrm{Re}\\{\sigma^{(1)}\\}$), we seek the geometric
dimensions that maximize the modal power-loss constant, $\alpha$. If graphene
sheets are included in the eigenmode problem, then
$\alpha=-2k_{0}\mathrm{Im}\\{n_{\mathrm{eff}}\\}$, where $n_{\mathrm{eff}}$ is
the complex effective index of the eigenmode. Note that, as graphene does not
contribute to the waveguiding in the NIR, i.e., $|\mathrm{Im}\\{\sigma^{(1)}\\}|$
stays in the few-$\mu$S region, comparable to
$\omega\varepsilon_{0}d_{\mathrm{gr}}$ (here
$\varepsilon_{r,\mathrm{eff}}=1-j\sigma^{(1)}/(\omega\varepsilon_{0}d_{\mathrm{gr}})$,
with $d_{\mathrm{gr}}=0.35$ nm the effective thickness of a graphene monolayer;
in the FIR/THz region, by contrast, the large negative
$\mathrm{Im}\\{\sigma^{(1)}\\}$ leads to
$\mathrm{Re}\\{\varepsilon_{r,\mathrm{eff}}\\}\ll-1$ which, in turn, gives rise
to plasmonic waveguiding, i.e., strong confinement of the E-field
perpendicular to graphene), and as the 2D material tensor is isotropic, we can
accurately estimate the waveguide losses perturbatively:
$\alpha=\frac{1}{2\mathcal{P}}\int_{G}\sigma^{(1)}(x,y)|\mathbf{e}_{\parallel}(x,y)|^{2}\mathrm{d}\ell.$
(6)
In this expression, vector $\mathbf{e}(x,y)$ is the eigenmode profile
extracted by the solver in the absence of graphene-loading, the line-integral
is performed in the waveguide cross-section assumed to be occupied by graphene
sheets, $\sigma^{(1)}(x,y)\neq 0$, and it uses only the E-field components
that are parallel to graphene ($\mathbf{e}_{\parallel}$). The scalar
$\mathcal{P}=0.5\iint\mathrm{Re}\\{\mathbf{e}\times\mathbf{h}^{*}\\}\cdot\hat{\mathbf{z}}\mathrm{d}x\mathrm{d}y$
is an eigenmode-dependent normalization constant, in Watt.
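A discretized sketch of (6) is given below. The Gaussian profile is a made-up stand-in for the FEM-computed eigenmode, so the resulting number only exercises the bookkeeping, not the physics of the actual waveguide:

```python
import numpy as np

def trapezoid(f, x):
    """Plain trapezoidal rule (kept explicit for portability)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def modal_loss(x, sigma, e_par_sq, power):
    """Perturbative loss of Eq. (6): (1/2P) * integral of sigma * |e_par|^2."""
    return 0.5 / power * trapezoid(sigma * e_par_sq, x)

x = np.linspace(-200e-9, 200e-9, 2001)   # graphene ribbon span, m
sigma = np.full_like(x, 61e-6)           # uniform 61 uS monolayer
e_par_sq = np.exp(-(x / 50e-9) ** 2)     # assumed |e_par|^2, arbitrary units
alpha = modal_loss(x, sigma, e_par_sq, power=1.0)
print(f"alpha = {alpha:.3e} (arbitrary units)")
```

With a FEM-computed $\mathbf{e}(x,y)$ and its normalization $\mathcal{P}$, the same bookkeeping yields $\alpha$ in physical units.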
Assuming a finite-width graphene ribbon (patterned to cover the air-slot and
the two Si-ridges) of uniform $\sigma^{(1)}=61~{}\mu$S, we numerically
optimize the geometric parameters seeking for a maximization of the
propagation losses for the $x$-polarized mode at $\lambda_{0}=1550$ nm. The
oxide and silicon refractive indices were $n_{\mathrm{Ox}}=1.45$ and
$n_{\mathrm{Si}}=3.47$, respectively. We noted that the optimal silicon-ridge
height is below 200 nm, so that the E-field in the upper part of the air-slot
can sufficiently overlap with graphene, and as close as possible to the cut-
off thickness where the mode leaks into the oxide substrate. Fixing the height
at the technologically acceptable value of 140 nm, we calculate the optimal
combination of Si-ridge width and slot width, 300 nm and 20 nm, respectively,
depicted in Fig. 4. The maximum propagation loss is almost 0.27 dB/$\mu$m with
a reasonable tolerance on the geometric parameters, ensuring a low fabrication
sensitivity in this design. Note that for Si-ridge widths above 300 nm the
waveguide also supports a low-loss anti-symmetric $x$-polarized mode,
localized inside the silicon cores, which is unwanted. If the monolayer is
replaced by few-layer graphene, then the absorption is expected to increase
proportionally to the number of layers, as long as the layers are assumed
uncoupled and sub-$\mathrm{nm}$ thick in total; for instance, an uncoupled
bilayer ribbon will have $\sigma^{(1)}=122~{}\mu$S which will lead to 0.54
dB/$\mu$m losses. Finally, we note that selecting an infinite-width graphene
monolayer sheet instead of a ribbon, would slightly increase the losses, up to
0.31 dB/$\mu$m, owing to the E-field concentrated in the upper/outer corners
of the Si-ridges. We nevertheless opt for the ribbon design as we anticipate
that it would limit the diffusion of photo-excited carriers in graphene, which
reduces the local carrier density and consequently increases the saturation
intensity of the waveguide [12], an unwanted effect in our device.
Figure 4: Propagation losses (dB/$\mu$m) as a function of the waveguide cross-
section, depicted in the inset. The silicon ridge height is 140 nm and the
graphene ribbon (thick red line in the inset) has a uniform
$\sigma^{(1)}=61~{}\mu$S.
Having selected the Si-slot waveguide geometric parameters, we can estimate
the SA curve in the waveguide, i.e., how the losses ($\alpha$) depend on the
CW power launched into the mode ($P_{\mathrm{in}}$). We use the waveguide mode
profile in the absence of graphene with the approximation of (6), where now
$\sigma^{(1)}$ is power-dependent, i.e., as in (1) with
$|\mathbf{E_{\parallel}}(x,y)|^{2}\longrightarrow(P_{\mathrm{in}}/\mathcal{P})|\mathbf{e_{\parallel}}(x,y)|^{2}$;
for the uniform graphene ribbon we assume $\sigma^{(1)}_{i}=0$ and
$\sigma^{(1)}_{e}=61$ $\mu$S. The resulting loss-saturation curve is depicted
in Fig. 5, with a thick black line, from which we extract a sub-mW saturation
power of $P_{\mathrm{sat}}\approx-6$ dBm or 0.22 mW. We also compare the
numerically calculated curve with commonly used phenomenological models
$\alpha=\alpha_{0}/(1+\rho)$ and $\alpha=\alpha_{0}/\sqrt{1+3\rho}$, where
$\rho=P_{\mathrm{in}}/P_{\mathrm{sat}}$ and $\alpha_{0}$ are the low-power
losses. While all models qualitatively agree below or close to
$P_{\mathrm{sat}}$, the deviations become non-negligible at higher powers
which is expected to influence the component performance. The two insets in
Fig. 5 depict the very high confinement of the $xz$-polarized E-field
components inside the slot, leading to a deep saturation of graphene
conductivity in its vicinity, even at modest power levels.
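Both phenomenological trends halve $\alpha_{0}$ exactly at $P_{\mathrm{in}}=P_{\mathrm{sat}}$ (the role of the factor 3 under the square root), but diverge from each other at higher powers. A quick sketch using the extracted monolayer values:

```python
import math

ALPHA_0 = 0.27       # low-power loss, dB/um (monolayer-loaded waveguide)
P_SAT = 0.22e-3      # saturation power, W

def alpha_model_a(p_in):
    """alpha_0 / (1 + rho)."""
    return ALPHA_0 / (1.0 + p_in / P_SAT)

def alpha_model_b(p_in):
    """alpha_0 / sqrt(1 + 3*rho)."""
    return ALPHA_0 / math.sqrt(1.0 + 3.0 * p_in / P_SAT)

# Both models halve alpha_0 at P_sat by construction:
print(alpha_model_a(P_SAT), alpha_model_b(P_SAT))
# Far above P_sat the square-root model decays much more slowly:
print(alpha_model_a(100 * P_SAT), alpha_model_b(100 * P_SAT))
```

This slower high-power decay of the square-root trend is qualitatively closer to the numerically calculated curve of Fig. 5.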
Figure 5: Nonlinear loss-saturation curve for the optimized graphene
monolayer-loaded waveguide, compared to phenomenological models where
$\rho=P_{\mathrm{in}}/P_{\mathrm{sat}}$. For the selected waveguide design,
$P_{\mathrm{sat}}\approx-6$ dBm is where the waveguide losses are halved with
respect to the low-power (linear) regime. The right inset depicts the
$|\mathbf{E}_{\parallel}|^{2}$ profile in the cross-section. The left inset
depicts the saturation of graphene’s local conductivity over the slot region
(horizontal axis) as the input power increases (vertical axis), with dark and
light colors denoting absorptive and transparent regions, respectively.
After the numerical design of the Si-slot waveguide in the linear and SA
regime, we move on to the design of the coupler. We assume the two waveguides
are at a sub-$\mu$m distance, measured by the gap ($g$) between their inner
Si-ridge walls, Fig. 6(a), which leads to weak coupling for the tightly
confining slot waveguides. Graphene sheets are omitted in these simulations as
their effect is primarily absorptive and we are interested in the ideal,
synchronized lossless coupler. We extract the coupling length from the
difference in phase constant of the two $x$-polarized (quasi-TE) supermodes of
the coupler, the symmetric and anti-symmetric one, using a FEM-based mode
solver. Figures 6(b) and (c) present the geometric and frequency dispersion of
the coupling length, respectively; the latter is the primary parameter
affecting the bandwidth of the device, which will be quantified in Section
III-C. We also quantify the effect of nm-sized geometric deviations in the
critical parameters of the coupler: the Si-ridge width ($w$), the air-slot
size ($s$), and the gap between the two waveguides ($g$); offsets in the
air-slot size have the largest effect on the coupling length, making it the
critical feature in a fabricated device.
Figure 6: (a) Cross section of the symmetric (unloaded) Si-slot waveguide
coupler and primary dimensions. (b) Coupling length vs. gap for $w=300$ nm and
$s=20$ nm, also presenting the deviation for few-nm sized offsets from these
nominal parameters. (c) Wavelength dispersion of the coupling length around
$1550$ nm for three gap values spaced by 10 nm.
### III-B Performance in CW Regime
To assess the nonreciprocal device performance, we start from the CW regime,
where a harmonic signal at $\lambda_{0}=1550$ nm excites one of the coupler
ports at various input powers. In terms of the coupled-equation system
integrated to extract the output transmission, we use the simplified system of
(5) where SA is the only nonlinear mechanism; the Kerr effect, in conjunction
to SA, will be addressed in Section IV-B. The attenuation coefficient for the
graphene-loaded Si-slot waveguide has been numerically calculated in the
saturation curve of Fig. 5 as a function of the CW input power; in a simpler
case, one could use the $1/(1+\rho)$ phenomenological model applied directly
to the attenuation coefficient, taking only the pair of $\alpha_{0}$ and
$P_{\mathrm{sat}}$ values from the numerical solution,
$\alpha=\alpha_{0}/(1+P_{\mathrm{in}}/P_{\mathrm{sat}})$. As explained in
Section II-C, this coupled-equation approach is valid under two justified
approximations: (i) Graphene conductivity negligibly affects the phase
constants of the waveguide modes, and thus their spatial profile, owing to the
fact that $\mathrm{Im}\\{\sigma^{(1)}\\}$ is practically zero. (ii) We use
single-polarization waveguides that form a coupler whose isolated modes have
negligibly small spatial overlap, translating in very weak coupling.
Consequently, all cross-nonlinear parameters are very close to zero and can be
safely excluded from the coupled system; the two equations correspond to the
isolated graphene-loaded and unloaded waveguides, which are weakly coupled
through $\kappa=\pi/(2L_{c})$.
In order to attain a reasonably wide NRIR with realistic device footprint, we
have numerically identified that a good choice is an asymmetric loading
consisting of an uncoupled bilayer graphene ribbon, with low-power losses
$\alpha_{0}=0.54$ dB/$\mu$m, and a coupling length of $L_{c}=600$ $\mu$m,
realised by a gap $g=880$ nm between the two Si-slot waveguides. This
corresponds to a normalized $\alpha_{0}/\kappa\approx 48$ indicating that the
device is far above the EP, owing to the large asymmetry in losses. We
numerically integrate the coupled-equation system and extract the results for
the CW case, presented in Fig. 7 as the forward and backward transmission
against the input power, with black solid ($T_{F}$) and dashed ($T_{B}$)
curves, respectively. In panels (a) and (b), the device length is equal to
$L_{c}$ and $L_{c}/2$, respectively; both were found to exhibit approximately
the same NRIR for the selected performance metrics, $T_{F}\geq-6$ dB and
$T_{B}\leq-15$ dB, corresponding to moderate forward insertion losses and
adequate backward isolation, respectively. For these specifications, the
nonreciprocal window spans from 100 mW to 160 mW, i.e., $\mathrm{NRIR}\approx
2$ dB. With the saturation power of 0.22 mW, the normalized input powers that
delimit the NRIR are approximately $[430,700]P_{\mathrm{sat}}$, i.e., far above
the SA threshold. This can be explained by the relatively slow decrease of the
numerically calculated SA curve, Fig. 5, where an order of magnitude decrease
in $\alpha$ happens 20 dB above $P_{\mathrm{sat}}$. In Fig. 7, we also show
the transmission curves when using the $1/(1+\rho)$ phenomenological model for
the losses, with red curves, clearly leading to more optimistic device
performance, namely 4 dB larger NRIR and 10 dB lower power thresholds. This
result is also in line with the corresponding saturation curve in Fig. 5,
which decreases more rapidly towards zero than the numerically calculated one.
Another remark that can be extracted is that the upper power limit of the NRIR
is very sharp for the $1/(1+\rho)$ model, indicating that the transition from
the isolation (nonreciprocal) to the breakdown (quasi-reciprocal) regime is
abrupt. Finally, we note that, due to the nonlinear nature of the device, an
optimal length can potentially be found between $L_{c}$ and $L_{c}/2$, for
given $T_{F,B}$ and NRIR limits.
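The CW nonreciprocity can be reproduced qualitatively by integrating (5) directly with a standard RK4 stepper. The sketch below uses the $1/(1+\rho)$ phenomenological loss model with the parameters quoted above ($\alpha_{0}=0.54$ dB/$\mu$m, $L_{c}=600~\mu$m, $P_{\mathrm{sat}}=0.22$ mW); the 20 mW input is an illustrative point inside this model's nonreciprocal window, not the numerically calculated 100-160 mW NRIR:

```python
import numpy as np

ALPHA0 = 0.54 * np.log(10) / 10   # 0.54 dB/um converted to 1/um (power)
L_C = 600.0                       # coupling length, um
KAPPA = np.pi / (2 * L_C)         # coupling coefficient, 1/um
P_SAT = 0.22e-3                   # saturation power, W

def rhs(A):
    """Right-hand side of (5); guide 0 carries the saturable loss."""
    a1, a2 = A
    alpha = ALPHA0 / (1.0 + abs(a1) ** 2 / P_SAT)
    return np.array([-0.5 * alpha * a1 - 1j * KAPPA * a2,
                     -1j * KAPPA * a1])

def propagate(p_in, excite, length=L_C, steps=6000):
    """Classic RK4 integration of the CW amplitudes over the device."""
    A = np.zeros(2, dtype=complex)
    A[excite] = np.sqrt(p_in)
    h = length / steps
    for _ in range(steps):
        k1 = rhs(A)
        k2 = rhs(A + 0.5 * h * k1)
        k3 = rhs(A + 0.5 * h * k2)
        k4 = rhs(A + h * k3)
        A = A + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return A

P_IN = 20e-3                                          # input power, W
T_F = abs(propagate(P_IN, excite=0)[1]) ** 2 / P_IN   # loaded in, cross out
T_B = abs(propagate(P_IN, excite=1)[0]) ** 2 / P_IN   # unloaded in, cross out
print(f"T_F = {10 * np.log10(T_F):.1f} dB, T_B = {10 * np.log10(T_B):.1f} dB")
```

Forward excitation saturates the loaded guide and couples across with moderate loss, while backward excitation leaves the loaded guide unsaturated and strongly absorbing, giving the expected isolation contrast.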
Figure 7: Forward (solid) and backward (dashed) transmission as a function of
CW power. Panels (a) and (c) are for coupler length equal to $L_{c}$, while
(b) and (d) are for $L_{c}/2$, with $L_{c}=600~{}\mu$m. Panels (a)-(b) compare
the transmission curves for the phenomenological $1/(1+\rho)$ trend for the
losses against the numerically calculated curve of Fig. 5. Panels (c)-(d)
compare the latter against NL-BPM simulation (markers).
The coupled-equation results in the CW regime were corroborated by nonlinear
full-vector 3D beam propagation method (BPM) simulations. The BPM is a
spectral paraxial method using an implicit stepping algorithm to propagate a
vector excitation from an input cross-section ($xy$-plane) along the optical
axis ($z$), until its output cross-section, from where the transmission at
each port can be extracted for an integrated device such as the coupler. The
propagation is done assuming a fixed reference index for the envelope phase,
typically corresponding to the effective index of the propagation medium. BPM
is valid under the slowly-varying envelope approximation justified for
$z$-invariant reflectionless structures, or when the variations along the
$z$-direction are “slow” inside each step. Our BPM was implemented with higher
order triangular finite elements in the cross-section [34] and an iterative
wide-angle (multi-step) correction in conjunction with the Crank-Nicolson
scheme in the propagation direction [14]. The difference in the phase constant
(real part of effective index) of the two isolated waveguide modes is very
small due to $\mathrm{Im}\\{\sigma^{(1)}\\}\approx 0$, so the BPM
applicability is ensured despite the high index-contrast waveguides used. The
material nonlinearity, i.e., the E-field-dependent index or conductivity
perturbation, is locally applied before each step of the BPM algorithm.
Iterative stabilization is performed in each step (usually 2-3 iterations are
enough) to account for the nonlinear perturbations. The nonlinear effect
considered in this work is the saturation of the graphene surface
conductivity, (1), but other effects can also be incorporated, such as self-
acting third-order effects from complex tensorial $\chi^{(3)}$ and
$\sigma^{(3)}$, or perturbations from coupled systems (e.g., thermal effects,
optically generated carrier diffusion/drift in silicon or graphene, electro-
optic effects, multi-channel effects etc.). The NL-BPM results are depicted
with markers in Fig. 7(c)-(d), and are very close to the coupled-equation
system solution (curves), validating its use. In our BPM simulations of the
structure in Fig. 1, the cross-section $xy$-plane was finely meshed resulting
in approximately $10^{5}$ degrees of freedom and the $z$-propagation step-size
was in the order of $\lambda_{0}$.
### III-C Bandwidth Estimation
In order to demonstrate the broadband nature of this device, we evaluate the
NRIR across a 100 nm spectral window. Due to the broadband SA of graphene, the
negligible imaginary part in its conductivity, and the symmetry of the Si-slot
waveguides in the coupler, the main parameter defining the device bandwidth in
the CW regime is the coupling length dispersion, Fig. 6(c). We numerically
extract the threshold input powers for the previously used performance
metrics, namely, $T_{F}\geq-6$ dB and $T_{B}\leq-15$ dB, that delimit the
NRIR. We analyzed both the full-length and half-length coupler, i.e., assuming
device length equal to $L_{c}$ and $L_{c}/2$, respectively, with the
corresponding results presented in the two panels in Fig. 8. We also evaluated
the NRIR dispersion both for the numerically extracted loss-saturation curve
and the phenomenological curve $1/(1+\rho)$ that uses the low-power losses and
the numerically extracted saturation power, Fig. 5.
For the physically modeled full-length device [black curves in Fig. 8(a)] we
observe that the tolerable NRIR $\approx 2$ dB calculated for the central 1550
nm wavelength approximately covers a 70 nm band, and moreover improves to over
5 dB at lower wavelengths. This increase is due to the longer coupling length
(smaller coupling coefficient) at lower wavelengths, which pushes the backward
power threshold (“cross-saturation” from the lossless waveguide excitation)
higher than the forward threshold (“self-saturation” from the SA waveguide
excitation). The conclusion drawn here is that the bandwidth, like the NRIR,
non-trivially depends on the saturation-curve of the waveguide, and an optimal
component length can typically be found between $L_{c}/2$ and $L_{c}$, for the
prescribed metrics.
Figure 8: Forward (solid) and backward (dashed) input power limits for
nonreciprocal CW operation vs. wavelength, accounting for coupling length
dispersion. Panels (a) and (b) are for coupler length equal to $L_{c}$ and
$L_{c}/2$, respectively, with fixed $L_{c}=600~{}\mu$m as calculated at 1550
nm. The NRIR is delimited between the solid and dashed lines of same color.
Black and red curves correspond to the numerically calculated and the
phenomenological model for the loss-saturation, respectively, Fig. 5.
For the half-length device, black curves in Fig. 8(b), we find a narrower
bandwidth as the NRIR closes entirely at $\pm 30$ nm around the central
wavelength. Note that when the forward power threshold curve (solid lines)
crosses the backward threshold (dashed lines), the directionality of the
nonreciprocity begins to invert; extracting the opposite
metrics from the transmission curves (high $T_{B}$ and low $T_{F}$) can
potentially unveil an opposite polarity regime for the same isolator device.
Finally, we observe once more the overly optimistic performance predicted by
the phenomenological trend (red curves in Fig. 8), leading to wider NRIR,
lower power thresholds, and broader bandwidth.
## IV Further Considerations
### IV-A Performance in Pulsed Regime
The device performance can also be assessed in the pulsed regime, taking into
account the frequency dispersion in the system, Eq. (4). In this work, the SA
is assumed broadband and instantaneous, the Kerr effect is neglected (more
details in Section IV-B), and $v_{g}^{(1)}\approx v_{g}^{(2)}$ owing to
$\mathrm{Im}\\{\sigma^{(1)}\\}=0$; so, we use only the single-waveguide
dispersion parameters $\beta_{2,3}$ (GVD and TOD) and the coupling length
dispersion. For the former, numerical simulations accounting for both
waveguide and material (silicon and oxide) dispersion at $\lambda_{0}=1550$ nm
were used to extract $\beta_{2}=+6.7$ ps2/m and $\beta_{3}=-0.015$ ps3/m;
these parameters vary negligibly in the 100 nm window around 1550 nm. The full
coupling length dispersion was directly plugged into the equation system at
the frequency domain, using the data from Fig. 6(c); the dominant dispersion
term is approximately +48 $\mu$m/THz.
In the pulsed regime, the coupled NLSE system of (2) is integrated using the
split-step Fourier method (SSFM), by driving a 1 ps FWHM pulse into the
graphene-loaded or the unloaded waveguide port, at various peak powers. The
normalized cross-transmitted pulses at the output of the $L_{c}$-long coupler
are depicted in Fig. 9, where we identify the trends predicted from the CW
regime, without noticeable distortion. It is worth noting the twin-peak output
pulse shape in the bar-ports when exciting the lossless waveguide, dashed
curves in Fig. 9(b), with peak powers above the NRIR: Only the central part of
the pulse, that has sufficient power to saturate the graphene-loaded
waveguide, is transmitted to the cross port. In this regime the device
regresses to a quasi-reciprocal response, i.e., it has approximately the same
cross-port transmission in both directions, e.g., the 400 mW curves in Fig. 9.
Finally, we estimate the onset of pulse distortion at 0.5 ps, primarily due to
TOD (imparting an asymmetry in the temporal and spectral response) and
secondarily due to GVD and/or coupling-length dispersion. This means that 1 ps
pulses, requiring a bandwidth in the order of 10 nm, can be accommodated by
the device whereas shorter pulses would require dispersion engineering.
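A minimal single-waveguide split-step Fourier sketch is shown below (first-order splitting: the dispersion step acts in the frequency domain under the $e^{+j\omega t}$ convention, the instantaneous SA loss step in time). The pulse parameters are illustrative and coupling is omitted:

```python
import numpy as np

BETA2 = 6.7e-24                            # GVD, s^2/m (from the text)
ALPHA0 = 0.54 * np.log(10) / 10 * 1e6      # 0.54 dB/um converted to 1/m
P_SAT = 0.22e-3                            # saturation power, W

N = 2048
T_WIN = 20e-12                             # time window, s
tau = np.linspace(-T_WIN / 2, T_WIN / 2, N, endpoint=False)
omega = 2 * np.pi * np.fft.fftfreq(N, T_WIN / N)

def ssfm(A, length, steps=200):
    """First-order split-step propagation with GVD and saturable loss."""
    h = length / steps
    lin = np.exp(-1j * (BETA2 / 2) * omega ** 2 * h)     # dispersion step
    for _ in range(steps):
        A = np.fft.ifft(lin * np.fft.fft(A))
        alpha = ALPHA0 / (1.0 + np.abs(A) ** 2 / P_SAT)  # local SA loss
        A = A * np.exp(-0.5 * alpha * h)
    return A

t0 = 1e-12 / (2 * np.sqrt(np.log(2)))      # 1 ps FWHM Gaussian
P_PEAK = 50e-3                             # illustrative peak power, W
A0 = np.sqrt(P_PEAK) * np.exp(-(tau / t0) ** 2 / 2)
A_out = ssfm(A0, length=600e-6)            # one coupling length
print("peak transmission:", np.abs(A_out).max() ** 2 / P_PEAK)
```

The saturable loss transmits the intense pulse center while absorbing the low-power wings, so the output pulse is shortened, the same mechanism behind the twin-peak bar-port shapes noted above.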
Figure 9: Normalized output pulses at the cross port of the coupler, for
various peak-powers ($P_{p,\mathrm{in}}$), when exciting (a) the graphene-SA
loaded waveguide in the forward direction, or (b) the lossless waveguide in
the backward direction. The dashed curves in panel (b) correspond to the bar-
port output.
### IV-B Third Order Effects in Graphene
Concerning the Kerr effect in integrated waveguides comprising graphene,
various implementations, both theoretical and experimental, have revealed
interesting phenomena, particularly in the perturbative regime, such as gate-
tunable nonlinearity [29]. Kerr-induced nonreciprocity arises at more
exaggerated power levels than SA (or it would necessitate higher graphene
nonlinearity values), and usually relies on narrow-bandwidth (high-Q)
resonators to further boost the nonlinear response [19]. In the $L_{c}$-long
directional coupler, and in the absence of SA, the Kerr-induced nonreciprocity
has an opposite isolation direction with respect to the SA-induced one: When
exciting the graphene-loaded waveguide, the strong Kerr effect (either focusing
or defocusing) desynchronizes the coupler and inhibits coupling to the other
waveguide, which leads to low cross-port transmission. When exciting the
unloaded waveguide, coupling efficiency is not perturbed (coupler remains
synchronized), so we have a high transmission; however, above a certain power
threshold, phase modulation due to cross-coupling can desynchronize the
coupler, leading, again, to low transmission and quasi-reciprocal response. In
these cases, a geometric desynchronization of the waveguide coupler, e.g.,
different slot widths, can be used to tailor the response and reverse the
polarity.
The third-order nonlinear effects were so far omitted to keep the focus on the
SA phenomenon. Moreover, even though mathematically straightforward,
incorporation of such effects in the coupled NLSE is physically not as simple,
for a number of reasons: First and foremost is the free-carrier refraction, a
non-perturbative process accompanying the photo-excited carrier induced SA,
that has been shown to overshadow (perturbative) Kerr-type effects [31, 33]. A
second reason is the carrier-related nonlinearity coupled to the optical pulse
propagation [28], whose implementation is complicated (both physically and
computationally) but nonetheless important in the high-power regime. Thirdly,
perturbative models typically predict that high values of
$|\mathrm{Im}\\{\sigma^{(3)}\\}|$ (which contributes to the real part of
$\gamma$) are attained for chemical potential close to the two-photon
absorption resonance ($|\mu_{c}|\approx\hbar\omega/2$), with considerable
dispersion, i.e., a tenfold (or more) decrease when frequency and/or
$|\mu_{c}|$ are tuned away from that [25, 26]. Lastly, another point of
caution is that solutions of perturbative models diverge at low-$|\mu_{c}|$
regions. Taking all these into account, and recalling that
$|\mu_{c}|\rightarrow 0$ is required for high-contrast (saturable) losses in
graphene, reveals that including the Kerr effect in our proof-of-concept
device should be done with caution and mainly in the direction of exploring
other possible system dynamics.
In this spirit, we explore Kerr-induced nonreciprocity in the nonlinear
coupler, in the presence or absence of SA. To fully exploit the Kerr effect
one should bias the graphene ribbon so that its chemical potential is above
the half-photon energy, where graphene is almost transparent. Accurate
expressions for interband and intraband linear monolayer conductivity at room
temperature (quasi-equilibrium regime) reveal a local minimum of
$\mathrm{Re}\\{\sigma^{(1)}\\}\approx\sigma_{0}/20$ at $|\mu_{c}|\approx 0.55$
eV. Using this value for linear surface conductivity together with a
defocusing value of $\sigma^{(3)}=+j1.4\times 10^{-21}$ S(m/V)$^{2}$ for the
third-order nonlinear surface conductivity of a graphene monolayer [12], we can
extract $\gamma=-44000$ and $+45$ m$^{-1}$W$^{-1}$ for the graphene-bilayer
loaded and unloaded Si-slot waveguides, respectively, using the expressions in
[10]; the nonlinear index of silicon is $n_{2}=2.5\times 10^{-18}$ m$^{2}$/W.
The rather high
value for graphene $\gamma$ is due to the extremely high overlap of graphene
with the $x$-polarized mode in the slot waveguide; it is worth pointing out
that the maximization of $\gamma$ effectively coincides with the maximization
of $a_{0}$, Fig. 4, in the sense that they both depend on the graphene/E-field
overlap (maximal light-matter interaction) in the waveguide cross-section. In
this work, the real part of $\sigma^{(3)}$, related to perturbative SA or TPA
(for negative or positive sign, respectively), is omitted as theoretical
predictions show that it is generally lower and moreover exhibits a transition
near half-photon energy [25, 26]. Moreover, we verified that Si-originating
TPA and corresponding free-carrier absorption and refraction [13] were
negligible in the intensity ranges considered, due to the low E-field overlap
with the Si ridges in the slot waveguide.
Inserting the nonlinear parameters contributing to a power-dependent self-
phase shift in each of the coupled equations [i.e., CW form of (2) with now
asymmetric non-zero $\gamma^{(1,2)}$], we extract the forward and backward
transmission curves against the input power, Fig. 10, for four scenarios: (i)
SA only with $\alpha_{0}=0.54$ dB/$\mu$m, (ii) Kerr and SA with
$\alpha_{0}=0.54$ dB/$\mu$m, (iii) Kerr and SA now with low saturable losses,
with $\alpha_{0}=0.027$ dB/$\mu$m, and (iv) a hypothetical lossless graphene
configuration that only exhibits Kerr effect. In all scenarios the same SA
curve shape of Fig. 5 was assumed and $\gamma^{(1)}=-44000$ and
$\gamma^{(2)}=+45$ m$^{-1}$W$^{-1}$, except in scenario (i) where
$\gamma^{(1,2)}=+45$ m$^{-1}$W$^{-1}$. In Fig. 10(a), we observe that the
combination of Kerr and SA opens
two non-overlapping, equally-sized NRIR of opposite polarity, with the Kerr
window appearing at three times higher power. When SA is diminished or
switched off, Fig. 10(b), the NRIR opens slightly lower, and is well predicted
by the nonlinear coupler theory [13],
$P_{\mathrm{in}}\approx\pi\sqrt{3}/(|\gamma^{(1)}|L_{c})$. Note that if
non-saturable (background) graphene losses, e.g., due to intraband absorption,
were considered in the Kerr-only scenario (iv), the cross-transmission would be
considerably reduced in both directions; this is due to the very low
$L_{\mathrm{eff}}\approx 1/\alpha\ll L_{c}$.
Figure 10: Forward and backward transmission as a function of input CW power.
(a) SA only vs. SA+Kerr, where two nonreciprocal intensity ranges of opposite
polarity open. (b) Kerr, with low saturable losses vs. hypothetical lossless
case. The device length in all cases is $L_{c}=600~{}\mu$m.
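As a quick sanity check on the quoted numbers, the nonlinear-coupler estimate and the effective-length argument can be evaluated directly. A short sketch using $|\gamma^{(1)}|$, $L_{c}$, and $\alpha_{0}$ as stated in the text:

```python
import math

gamma1 = 44_000.0         # |gamma^(1)| of the graphene-loaded waveguide, 1/(W m)
L_c    = 600e-6           # coupler length, m
alpha0_dB_per_m = 0.54e6  # saturable-loss coefficient (0.54 dB/um)

# Kerr NRIR onset predicted by nonlinear-coupler theory [13]
P_th = math.pi*math.sqrt(3)/(gamma1*L_c)    # in W

# effective interaction length if background losses were kept in scenario (iv)
alpha = alpha0_dB_per_m*math.log(10)/10     # convert dB/m to 1/m (Napierian)
L_eff = 1/alpha                             # in m

print(f"P_th ~ {1e3*P_th:.0f} mW, L_eff ~ {1e6*L_eff:.1f} um << L_c")
```

This gives $P_{\mathrm{th}}\approx 0.21$ W and $L_{\mathrm{eff}}\approx 8~\mu$m, consistent with the Kerr window opening a few times above the $\sim$100 mW SA window and with $L_{\mathrm{eff}}\ll L_{c}$.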
As a closing remark in this subsection, we note that the manifestation of
third-order effects (such as self/cross phase/amplitude modulation, or four-
wave mixing in general) is actually enabled by SA. The combination of SA and
self-defocusing Kerr in graphene can give rise to interesting phenomena, such
as soliton-like pulse compression in the normal-dispersion regime,
$\beta_{2}>0$ [12]. However, we stress that high-power illumination non-
negligibly alters the 2D material and therefore its nonlinear parameters
cannot be safely considered constant across so high power-contrast, especially
in dynamic situations (fs-pulse regime), or when carrier thermodynamics are
involved.
### IV-C Future prospects
Two future steps are easily identified: Firstly, developing a theoretical
model for this non-Hermitian system, which can be used for performance
prediction rules, in-line with the present numerical study. Secondly,
implementing a more elaborate physical description of graphene nonlinear
response, incorporating photo-generated carrier effects, diffusion, and
thermodynamic aspects; this will improve the accuracy of the predicted
performance, possibly to more favorable metrics.
Further development on the present concept could also include re-engineering
of the waveguides and coupler to optimize specific aspects of the
nonreciprocal response: adjusted transmission limits ($T_{F,B}$), maximized
NRIR and/or bandwidth, minimized footprint, dual/opposite isolation directions
based on interplay between Kerr and SA, etc. All these amount to tailoring the
saturation curve, i.e., how the material properties are imprinted onto the
waveguide mode through light-matter interaction, acting on the imaginary part
of the effective modal index across different power regimes. Apart from minor
geometry tweaks, this tailoring can possibly be accomplished by introducing
more elaborate features, e.g., multiple differently-biased graphene layers,
longitudinally varying or patterned sheets, or bulk semiconductor/plasmonic
materials. More intricate pulsed-regime studies would include dispersion
engineering for fs pulse-shaping, spectral broadening, or full-duplex
operation. Finally, this component could be considered as part of a more
complex system, such as a tunable cavity, ring laser, or a space-time modulated
structure.
## V Conclusions
We have proposed and numerically studied a proof-of-concept broadband
nonreciprocal integrated device relying on a directional Si-photonic coupler
asymmetrically loaded with graphene, a nonlinear 2D material exhibiting
broadband SA even at low intensities. We adopted an instantaneous model for
graphene SA to probe the limitations in the device and engineered the
structure for sub-mW saturation power ($P_{\mathrm{sat}}$) in the graphene-
loaded waveguide. We unveiled the non-Hermitian nature of the system and the
underlying EP, and proposed a coupled-NLSE formulation for its analysis, using
parameters rigorously extracted from a full-vector FEM-based mode solver; the
validity of this formulation was checked against a nonlinear FEM-based full-
vector BPM. Our results indicate a nonreciprocal window (NRIR) for 100 mW peak
powers, with a high bandwidth of tens of nanometers, owing to the non-resonant
nature of the structure. We also found that the NRIR (i) lies at much higher
power than the $P_{\mathrm{sat}}$ of the isolated waveguide, (ii) depends
non-trivially on the shape and steepness of the waveguide saturation curve, and
(iii) cannot be safely estimated by phenomenological models for SA, such as
$\alpha=\alpha_{0}/[1+(P_{\mathrm{in}}/P_{\mathrm{sat}})]$, which are over-
optimistic. In conclusion, usable half-duplex isolator performance can be
attained, provided that sufficient saturable losses and/or component length
are available.
## References
* [1] D. Jalas _et al._ , “What is — and what is not — an optical isolator,” _Nat. Photonics_ , vol. 7, no. 8, pp. 579–582, 2013.
* [2] C. Caloz _et al._ , “Electromagnetic nonreciprocity,” _Phys. Rev. Appl._ , vol. 10, no. 4, 2018.
* [3] D. L. Sounas and A. Alù, “Non-reciprocal photonics based on time modulation,” _Nat. Photonics_ , vol. 11, no. 12, pp. 774–783, 2017.
* [4] K. Y. Yang _et al._ , “Inverse-designed non-reciprocal pulse router for chip-based LiDAR,” _Nat. Photonics_ , vol. 14, no. 6, pp. 369–374, 2020.
* [5] K. Xu, Y. Chen, T. A. Okhai, and L. W. Snyman, “Micro optical sensors based on avalanching silicon light-emitting devices monolithically integrated on chips,” _Opt. Mater. Express_ , vol. 9, no. 10, p. 3985, 2019.
* [6] H. Dötsch _et al._ , “Applications of magneto-optical waveguides in integrated optics: review,” _J. Opt. Soc. Am. B_ , vol. 22, no. 1, p. 240, 2005.
* [7] D. L. Sounas, C. Caloz, and A. Alù, “Giant non-reciprocity at the subwavelength scale using angular momentum-biased metamaterials,” _Nat. Commun._ , vol. 4, no. 1, 2013.
* [8] D. L. Sounas, J. Soric, and A. Alù, “Broadband passive isolators based on coupled nonlinear resonances,” _Nat. Electronics_ , vol. 1, no. 2, pp. 113–119, 2018.
* [9] D. Huang _et al._ , “Electrically driven and thermally tunable integrated optical isolators for silicon photonics,” _IEEE J. Sel. Top. Quantum Electron._ , vol. 22, no. 6, pp. 271–278, 2016.
* [10] D. Chatzidimitriou, A. Pitilakis, and E. E. Kriezis, “Rigorous calculation of nonlinear parameters in graphene-comprising waveguides,” _J. Appl. Phys._ , vol. 118, no. 2, p. 023105, 2015.
* [11] D. Chatzidimitriou and E. E. Kriezis, “Optical switching through graphene-induced exceptional points,” _J. Opt. Soc. Am. B_ , vol. 35, no. 7, pp. 1525–1535, 2018.
* [12] ——, “Light propagation in nanophotonic waveguides considering graphene’s saturable absorption,” _Phys. Rev. A_ , vol. 102, p. 053512, 2020.
* [13] A. Pitilakis and E. E. Kriezis, “Highly nonlinear hybrid silicon-plasmonic waveguides: analysis and optimization,” _J. Opt. Soc. Am. B_ , vol. 30, no. 7, p. 1954, 2013.
* [14] O. Tsilipakos, A. Pitilakis, A. C. Tasolamprou, T. V. Yioultsis, and E. E. Kriezis, “Computational techniques for the analysis and design of dielectric-loaded plasmonic circuitry,” _Opt. Quant. Electron._ , vol. 42, no. 8, pp. 541–555, 2011.
* [15] A. C. Ferrari _et al._ , “Science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems,” _Nanoscale_ , vol. 7, no. 11, pp. 4598–4810, 2015.
* [16] N. Chamanara, D. Sounas, and C. Caloz, “Non-reciprocal magnetoplasmon graphene coupler,” _Optics Express_ , vol. 21, no. 9, p. 11248, 2013.
* [17] Q. Bao _et al._ , “Atomic-layer graphene as a saturable absorber for ultrafast pulsed lasers,” _Adv. Funct. Mater._ , vol. 19, no. 19, pp. 3077–3083, 2009.
* [18] A. Marini and F. G. de Abajo, “Graphene-based active random metamaterials for cavity-free lasing,” _Phys. Rev. Lett._ , vol. 116, no. 21, 2016.
* [19] L. D. Bino _et al._ , “Microresonator isolators and circulators based on the intrinsic nonreciprocity of the Kerr effect,” _Optica_ , vol. 5, no. 3, p. 279, 2018.
* [20] S. R. K. Rodriguez, V. Goblot, N. C. Zambon, A. Amo, and J. Bloch, “Nonreciprocity and zero reflection in nonlinear cavities with tailored loss,” _Phys. Rev. A_ , vol. 99, no. 1, 2019.
* [21] Y. Kominis, T. Bountis, and S. Flach, “Stability through asymmetry: Modulationally stable nonlinear supermodes of asymmetric non-hermitian optical couplers,” _Physical Review A_ , vol. 95, no. 6, 2017.
* [22] M.-A. Miri and A. Alù, “Exceptional points in optics and photonics,” _Science_ , vol. 363, no. 6422, 2019.
* [23] A. Mock, “Modeling passive mode-locking via saturable absorption in graphene using the finite-difference time-domain method,” _IEEE J. Quantum Electron._ , vol. 53, no. 5, pp. 1–10, 2017.
* [24] Q. Lin, O. J. Painter, and G. P. Agrawal, “Nonlinear optical phenomena in silicon waveguides: modeling and applications,” _Opt. Express_ , vol. 15, no. 25, p. 16604, 2007.
* [25] J. L. Cheng, N. Vermeulen, and J. E. Sipe, “Third-order nonlinearity of graphene: Effects of phenomenological relaxation and finite temperature,” _Phys. Rev. B_ , vol. 91, no. 23, 2015.
* [26] S. A. Mikhailov, “Quantum theory of the third-order nonlinear electrodynamic effects of graphene,” _Phys. Rev. B_ , vol. 93, no. 8, 2016.
* [27] A. Marini, J. D. Cox, and F. J. G. de Abajo, “Theory of graphene saturable absorption,” _Phys. Rev. B_ , vol. 95, no. 12, 2017.
* [28] S. A. Mikhailov, “Theory of the strongly nonlinear electrodynamic response of graphene: A hot electron model,” _Phys. Rev. B_ , vol. 100, no. 11, 2019.
* [29] K. Alexander, N. A. Savostianova, S. A. Mikhailov, D. V. Thourhout, and B. Kuyken, “Gate-tunable nonlinear refraction and absorption in graphene-covered silicon nitride waveguides,” _ACS Photonics_ , vol. 5, no. 12, pp. 4944–4950, 2018.
* [30] P. Demongodin _et al._ , “Ultrafast saturable absorption dynamics in hybrid graphene/Si3N4 waveguides,” _APL Photonics_ , vol. 4, no. 7, p. 076102, 2019.
* [31] D. Castelló-Lurbe, H. Thienpont, and N. Vermeulen, “Predicting graphene's nonlinear-optical refractive response for propagating pulses,” _Laser Photonics Rev._ , vol. 14, no. 6, p. 1900402, 2020.
* [32] F. Zhang, S. Han, Y. Liu, Z. Wang, and X. Xu, “Dependence of the saturable absorption of graphene upon excitation photon energy,” _Appl. Phys. Lett._ , vol. 106, no. 9, p. 091102, 2015.
* [33] N. Vermeulen _et al._ , “Graphene’s nonlinear-optical physics revealed through exponentially growing self-phase modulation,” _Nat. Commun._ , vol. 9, no. 1, 2018.
* [34] S. Selleri, L. Vincetti, A. Cucinotta, and M. Zoboli, “Complex FEM modal solver of optical waveguides with PML boundary conditions,” _Opt. Quant. Electron._ , vol. 33, no. 4/5, pp. 359–371, 2001.
Virasoro algebras, kinematic space and the spectrum of modular Hamiltonians in
CFT2
Suchetan Das1,2, Bobby Ezhuthachan2, Somnath Porey2, Baishali Roy2
1Department of Physics,
Indian Institute of Technology Kanpur,
Kanpur 208016, India.
2Ramakrishna Mission Vivekananda Educational and Research Institute,
Belur Math,
Howrah-711202, West Bengal, India.
suchetan[at]iitk.ac.in, bobby.ezhuthachan[at]rkmvu.ac.in,
somnathhimu00[at]gm.rkmvu.ac.in, baishali.roy025[at]gm.rkmvu.ac.in
We construct an infinite class of eigenmodes with integer eigenvalues for the
vacuum modular Hamiltonian of a single interval $N$ in 2d CFT and study some
of their interesting properties, including their action on OPE blocks as well
as their bulk duals. Our analysis suggests that these eigenmodes, like the OPE
blocks, have a natural description on the so-called kinematic space of CFT2
and in particular realize the Virasoro algebra of the theory on this kinematic
space. Taken together, our results hint at the possibility of an effective
description of the CFT2 in the kinematic space language.
###### Contents
1. 1 Introduction
2. 2 Modular Hamiltonian in 2D CFT and its spectrum
1. 2.1 OPE Blocks
2. 2.2 A new class of modular eigenmodes and its properties
3. 2.3 Action on the OPE blocks
4. 2.4 MVA and the Kinematic space
3. 3 The global subalgebra of the MVA
1. 3.1 Symmetries of the CFT2 causal diamonds
2. 3.2 g-MVA and modular inclusions
4. 4 Pulling the $\mathbb{L}_{n}$ into the bulk
5. 5 Discussion
6. A Modular inclusion in CFT2 and finite dimensional system
1. A.1 Modular inclusion
2. A.2 Modular inclusion in vacuum CFT2
3. A.3 Modular inclusion in finite dimensional Hilbert space
7. B Commutation relation of modular generators and Virasoro algebra
## 1 Introduction
Research over the past several years has made it abundantly clear that Quantum
information/entropy related ideas play a crucial role in developing a deeper
understanding of Quantum Field Theory and Quantum Gravity. The algebraic
formulation of QFT (AQFT) in terms of algebra of observables associated to
causal domains of spatial subregions [1],[2], seems to be particularly well
suited for such entropic studies. The many successes of this approach include
formulating a precise version of various entropy bounds in QFT [3]-[5],
developing a deeper understanding of RG flows in terms of relative entropy of
states, [6]-[10], proofs of various null energy conditions in QFT [11]-[14],
developing a more precise understanding of bulk reconstruction [15]-[30] among
others.
A key role in most of these studies is played by the (total) modular
hamiltonian [31]111Total modular hamiltonian is defined as the difference of
the modular hamiltonians of a given subregion and its complement. In the rest
of the note, ‘modular hamiltonian’ refers to the total modular hamiltonian.. In
the AQFT formulation, the modular hamiltonian operator $K^{\psi}_{\Sigma}$,
for a particular state $|\psi\rangle$ generates an automorphism of the algebra
of the operators localized in the causal diamond $\mathcal{D}(\Sigma)$
associated with the spatial subregion $\Sigma$. Under this automorphism,
operators localized within $\mathcal{D}(\Sigma)$ transform into each other,
thus generating a flow called the modular flow 222 Under this flow, an
operator $\mathcal{O}\rightarrow\mathcal{O}(s)$, where $\mathcal{O}(s)\equiv
e^{isK}\mathcal{O}e^{-isK}$. Both
$\mathcal{O}\;\textrm{and}\;\mathcal{O}_{s}\;\textrm{have support
within}\;\mathcal{D}_{\Sigma}$. In applications to holography, the importance
of the modular hamiltonian operator comes from its identification, at leading
order in the inverse bulk Newton’s constant ($\frac{1}{G_{N}}$), with the
corresponding bulk modular hamiltonian operator, where the corresponding bulk
region is the bulk causal diamond associated with the region bounded by the RT
surface and $\Sigma$ [15]. These modular flows play an important role in the
entanglement wedge reconstruction program333 Recently, a different, but
related, notion of the Connes cocycle flow has also been discussed in the
context of extracting bulk information from the entanglement wedge region
which is causally disconnected from the boundary [32],[33]. It has also been
argued that the emergence of a semiclassical bulk spacetime might itself be
understood from the algebra of modular hamiltonians of all subregions in the
boundary QFT [30].
Given its relevance, particularly in the context of bulk reconstruction
program alluded to above, it would be a useful endeavor to study the modular
hamiltonian operator in detail, both in general QFTs as well as more specifically
in simple but concrete examples. One way to characterize these operators would
be through the spectrum of their eigenstates. It is reasonable to expect that
this spectrum would encode the entanglement content of the QFT. The modular
hamiltonian $K^{\psi}_{\Sigma}$ for a state $|\psi\rangle$ and a spatial
region $\Sigma$ annihilates the state, i.e., $K^{\psi}_{\Sigma}|\psi\rangle=0$.
One may then construct its eigenstates by acting on $|\psi\rangle$ with a
special class of operators ($\mathcal{O}_{\omega}$) which has the following
commutation relation with the modular hamiltonian
$[K,\mathcal{O}_{\omega}]=\omega\mathcal{O}_{\omega}$. These are referred to
as the modular eigenmodes. It’s easy to see that the Fourier transforms of the
modular evolved operators (i.e.,
$\mathcal{O}_{\omega}=\int ds\,e^{is\omega}\mathcal{O}(s)$) are
modular eigenmodes. These eigenmodes, in particular the zero modes, play a
crucial role in reconstruction of bulk fields inside the entanglement wedge
[17],[19]. The zero modes, which commute with the modular hamiltonian may be
thought of as local symmetries of the corresponding state, in the sense that
correlation function of operators inside the region $\mathcal{D}(\Sigma)$,
would be invariant under transformations (of the operators) generated by the
zero modes. These are local because for the same given state, but for a
different region, the modular hamiltonian and therefore the zero modes would
be different. It has been argued that in the bulk these local symmetries
generated by the zero modes correspond to large diffeomorphisms which are
not trivial on the RT surface [22]. Thus the modular eigenmodes seem to have a
very important bearing on the emergence of bulk geometry itself.
Given the above motivations, a detailed study of the modular eigenmodes in
these theories would be interesting. While for generic states and regions the
modular hamiltonian as well as its eigenmodes are nonlocal operators, there are a
few examples, where they take a simple form as an integral of local fields.
The simplest example of which is the case of the single interval in the vacuum
state of a $\textrm{CFT}_{2}$. In [37], two of us showed that OPE blocks of
primary fields of different spins are nonzero modular eigenmodes of the
modular hamiltonian of the single interval, in the vacuum of CFT2, where the
endpoints of the interval corresponds to the location of the two primary
fields whose expansion define the OPE block. This generalizes known results in
the literature that scalar OPE blocks are modular zero eigenmodes [20],[17].
In this note, we continue with the study initiated in [37], of the eigenmodes
of the vacuum modular hamiltonian for a single interval (labelled as $N$) in
2D CFT. We find a new infinite class of modular eigenmodes with integer
eigenvalues and discuss some of their interesting properties. The key point we
want to make here is that like the OPE blocks, these new eigenmodes we
construct here have a natural description on the so-called kinematic
space (k-space) [34], which is essentially the space of causal diamonds in CFT2.
In particular, they realize the Virasoro algebra of the CFT2 on this k-space.
As evidence of this fact we show that OPE blocks, which are local fields in
the k-space description, transform as modes of a primary field under this
‘k-space Virasoro algebra’, which we refer to as the modular Virasoro
algebra (MVA) in the bulk of the text. Moreover, as we show, a subset of the
new modular eigenmodes, which generate the global subalgebra of the MVA
representation can be directly identified with the modular hamiltonians of
subregions of $N$. We believe that these observations, taken together, hint at
the possibility of an equivalent effective description of the CFT2 in the
k-space language, a detailed study of which we leave for future work.
This draft is organized as follows. In the next section, after presenting a
brief summary of the known examples of modular eigenmodes- the OPE blocks, and
their bulk duals, we present the new class of eigenmodes and discuss its
interesting properties. Specifically, we show that these eigenmodes together
with the modular hamiltonian satisfy the Virasoro algebra. The details of this
calculation are presented in appendix B. We also compute the commutator of
this new class of modular eigenmodes with the OPE blocks, and show that the
result is the same as that of the usual (local) Virasoro generators with the
modes of a primary field in CFT2. This fact suggests that the OPE blocks transform
as modes of a primary field under conformal transformations generated by the
MVA. Since the OPE blocks can be described naturally as fields living on the
so-called kinematic space (k-space), this also suggests that the new eigenmodes
have a natural action on the k-space. We explore the kinematic aspects of this
question in subsection 2.4.
In section 3, we focus on the global subalgebra of the MVA. Interestingly, we
show that it is isomorphic to the algebra of the modular hamiltonians of $N$
as well as its subregions $N^{\prime}$ and $N^{\prime\prime}$ and that they
implement the so-called modular inclusion within the lightcone of $N$. For
completeness we review the definition of the modular inclusion and, as an
example, discuss the modular inclusion for finite dimensional Hilbert space in
appendix A.
In section 4, we discuss the bulk dual of our construction. In particular we
show the emergence of the RT geodesic very naturally from our constructions.
We conclude with a summary of our results as well as a discussion of some of the
questions and directions opened up in the light of these results, in section
5.
Note Added: While we were in the final stages of preparing this article, [58]
was posted on the arXiv, in which the authors construct a Virasoro algebra from
generators constructed out of light-ray operators. Although the expressions for
those operators are similar to what we refer to here as ‘modular Virasoro
generators’, the context and motivation of the approaches seem to us to be
different.
## 2 Modular Hamiltonian in 2D CFT and its spectrum
The Modular Hamiltonian of a single interval with endpoints ($z_{2}$, $z_{3}$)
on a constant time slice ($t=0$) in the vacuum of 2D CFT is given by the
following integral expression:
$K=\int^{\infty}_{-\infty}d\zeta\frac{(z_{2}-\zeta)(\zeta-
z_{3})}{z_{2}-z_{3}}T_{\zeta\zeta}(\zeta)+\int^{\infty}_{-\infty}d\bar{\zeta}\frac{(z_{2}-\bar{\zeta})(\bar{\zeta}-z_{3})}{z_{2}-z_{3}}T_{\bar{\zeta}\bar{\zeta}}(\bar{\zeta})$
(2.1)
Modular eigenmodes are operators which satisfy the following commutation
relation with the modular hamiltonian: $[K,\mathcal{O}]=\lambda\mathcal{O}$. In
[17],[37], it was shown that global OPE blocks in CFT2 are eigenmodes of the
vacuum modular Hamiltonian. We therefore begin this section with a brief
review of the OPE blocks in $\textrm{CFT}_{2}$.
### 2.1 OPE Blocks
In CFT, a global OPE block $B^{ij}_{k}$ is defined as the contribution of a
conformal family (i.e., a given primary field $\mathcal{O}_{k}$ of dimensions
$h_{k}$, $\bar{h}_{k}$ and all its global descendants) to the OPE of two
primary operators ($\mathcal{O}_{i}$, $\mathcal{O}_{j}$) of dimensions
($h_{i}$, $\bar{h}_{i}$) and ($h_{j}$, $\bar{h}_{j}$) respectively [34].
Mathematically,
$\displaystyle\mathcal{O}_{i}(z_{1},\bar{z}_{1})\mathcal{O}_{j}(z_{2},\bar{z}_{2})=z_{12}^{-(h_{i}+h_{j})}\bar{z}_{12}^{-(\bar{h}_{i}+\bar{h}_{j})}\sum_{k}C_{ijk}B_{k}^{ij}(z_{1},\bar{z}_{1};z_{2},\bar{z}_{2})$
(2.2)
Here, $C_{ijk}$ is the OPE coefficient, which is a dynamical input of the
theory. The above equation tells us how the OPE block transforms under global
conformal transformations and this is enough to fix the form of the OPE
blocks. Indeed, $B^{ij}_{k}$ has an integral expression which can be derived
[34], [38] using the shadow operator formalism [39], [40] and takes the
following form,
$\displaystyle B^{ij}_{k}(z_{1},\bar{z}_{1};z_{2},\bar{z}_{2})=$
$\displaystyle\int_{z_{1}}^{z_{2}}d\zeta\int_{\bar{z}_{1}}^{\bar{z}_{2}}d\bar{\zeta}\left(\frac{(\zeta-
z_{1})(z_{2}-\zeta)}{z_{2}-z_{1}}\right)^{h_{k}-1}\left(\frac{z_{2}-\zeta}{\zeta-
z_{1}}\right)^{h_{ij}}\times$
$\displaystyle\left(\frac{(\bar{\zeta}-\bar{z}_{1})(\bar{z}_{2}-\bar{\zeta})}{\bar{z}_{2}-\bar{z}_{1}}\right)^{\bar{h}_{k}-1}\left(\frac{\bar{z}_{2}-\bar{\zeta}}{\bar{\zeta}-\bar{z}_{1}}\right)^{\bar{h}_{ij}}\mathcal{O}_{k}(\zeta,\bar{\zeta})$
(2.3)
One can now show444See appendix A of [37], for the details of the proof.,
using the OPE of $T$, $\bar{T}$ with the primary field $\mathcal{O}$, that
these OPE blocks are indeed eigenmodes of $K$, with eigenvalue proportional to
the spin difference ($l_{ij}$) of the two operators.
$\Big{[}K,B^{ij}_{k}\Big{]}=2\pi il_{ij}B^{ij}_{k}$ (2.4)
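The holomorphic half of this statement can be verified at the level of kernels. Reading (2.1) together with the $T\mathcal{O}$ commutator of section 2.3, $[K,B^{ij}_{k}]$ reduces, after integration by parts, to the action of the kernel $-v\,\partial_{\zeta}w+(h_{k}-1)v'w$ on $\mathcal{O}_{k}$, where $v=(\zeta-z_{1})(z_{2}-\zeta)/(z_{2}-z_{1})$ is the modular flow vector field of the interval and $w$ is the holomorphic weight in (2.1); the holomorphic eigenvalue is then $h_{ij}$, with the antiholomorphic half contributing $-\bar{h}_{ij}$, combining to $l_{ij}$. A minimal sympy sketch of this kernel identity (our reading of the proof in appendix A of [37], checked for sample integer weights):

```python
import sympy as sp

z, z1, z2 = sp.symbols('zeta z_1 z_2')

def residual(h, a):
    # v: modular-flow vector field of the interval (z1, z2)
    v = (z - z1)*(z2 - z)/(z2 - z1)
    # w: holomorphic OPE-block weight from (2.1), with a = h_ij
    w = v**(h - 1)*((z2 - z)/(z - z1))**a
    # kernel of [K, B] after integration by parts, divided by w,
    # minus the claimed eigenvalue a
    bracket = -v*sp.diff(w, z) + (h - 1)*sp.diff(v, z)*w
    return sp.simplify(bracket/w - a)

# vanishes for sample conformal weights h_k and weight differences h_ij
assert all(residual(h, a) == 0 for h in (1, 2, 3) for a in (0, 1, 2))
```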
The scalar zero-modes have been shown to be dual to the so-called geodesic
operators [34], which are essentially smeared geodesic integrals of the
appropriate bulk dual field of $\mathcal{O}_{k}$. The geodesic endpoints are
the locations of the two primary fields whose OPE defines the specific OPE
block in question.
$B^{ij}_{k}=\int_{\lambda}ds\;e^{-s\Delta_{ij}}\phi(x(s),z(s),t(s))$ (2.5)
Here $\phi$ is the dual scalar field to $\mathcal{O}_{k}$. ($x,\;t$) are the
boundary coordinates while $z$ is the bulk coordinate, and the integral is
over a geodesic with end points on the boundary. The smearing function is
$e^{-s\Delta_{ij}}$, with $\Delta_{ij}=\Delta_{i}-\Delta_{j}$ being the
difference in scaling dimensions of the two operators. This can be generalized
to nonzero scalar modes [37], where the integral is now over a Lorentzian
cylinder. The cylindrical surface is generated by $K$ and $P_{D}$ which
generate boosts in the plane normal to the geodesic and translations along the
geodesic respectively.
$B^{ij}_{k}=c_{k}\int_{cylinder}d\tilde{t}ds\;e^{-s(\theta)\Delta_{ij}}e^{-\tilde{t}(\rho,\theta)l_{ij}}\phi(x(\rho,\theta),z(\rho,\theta),t(\rho,\theta))$
(2.6)
Here $l_{ij}$ is the spin difference between the two operators, $\tilde{t}$
and $s$ label the coordinates on the cylinder, and $c_{k}$ is a
normalization constant which can be fixed by the appropriate boundary
condition. See [37] for the details of the derivation. In the next section, we
introduce the new class of eigenmodes which are smeared integrals of $T$ and
$\bar{T}$ and discuss their bulk duals.
### 2.2 A new class of modular eigenmodes and its properties
We now present a new class of integrated operators, which are all eigenmodes
of the modular hamiltonian. Unlike the OPE blocks, these exist in any 2d CFT
and are not theory dependent. As advertised earlier, these also satisfy a
Virasoro algebra. For this reason, we label them as $\mathbb{L}_{n}$ and
$\bar{\mathbb{L}}_{n}$. The explicit expressions are given below.
$\displaystyle\mathbb{L}_{n}=a_{n}\int^{\infty}_{-\infty}d\zeta\;\frac{(z_{2}-\zeta)^{-n+1}(\zeta-
z_{3})^{n+1}}{z_{2}-z_{3}}T(\zeta)$ (2.7)
$\displaystyle\bar{\mathbb{L}}_{n}=\bar{a}_{n}\int^{\infty}_{-\infty}d\bar{\zeta}\;\frac{(z_{2}-\bar{\zeta})^{n+1}(\bar{\zeta}-z_{3})^{-n+1}}{z_{2}-z_{3}}\bar{T}(\bar{\zeta})$
(2.8)
Note that naively the integrands in the above formulae blow up at $z_{2}$ and
$z_{3}$; however, we can regulate the integrals by choosing a deformed contour
that does not pass through $z_{2}$ and $z_{3}$. Equivalently, we can
give both $z_{2}$ and $z_{3}$ a small imaginary component. The $a_{n}$ are
arbitrary normalization constants555If we choose the normalization constants to
be independent of the endpoints $z_{2}$ and $z_{3}$, then it is easy to see
that the $\mathbb{L}_{n}$ and $\bar{\mathbb{L}}_{n}$ are really only a function
of ($z_{2}-z_{3}$); however, if the normalization constants are non trivial
functions of the endpoints, then the eigenmodes are bi-local in $z_{2}$ and
$z_{3}$. In this notation, $\mathbb{L}_{0}+\bar{\mathbb{L}}_{0}$, with
$a_{0}=\bar{a}_{0}=1$, is the modular hamiltonian of the single interval with
endpoints $z_{2}$ and $z_{3}$. It can be shown that they satisfy
$[\mathbb{L}_{0},\mathbb{L}_{n}]=-n\mathbb{L}_{n},\;\;[\bar{\mathbb{L}}_{0},\bar{\mathbb{L}}_{n}]=-n\bar{\mathbb{L}}_{n}$
(2.9)
It follows that the $\mathbb{L}_{n}$ and $\bar{\mathbb{L}}_{n}$ for $n\neq 0$
are indeed modular eigenmodes. Furthermore, if we normalize the
$\mathbb{L}_{n}$ suitably, which can be done without any loss of generality,
such that $a_{n}=r^{n}$ and $\bar{a}_{n}=\bar{r}^{n}$, where $r$ and
$\bar{r}$ are two arbitrary constants, then these eigenmodes in fact satisfy
the Virasoro algebra, with the correct central charge term.
$\displaystyle[\mathbb{L}_{m},\mathbb{L}_{n}]=(m-n)\mathbb{L}_{m+n}+\frac{c}{12}n(n^{2}-1)\delta_{m+n,0}$
(2.10)
For this reason, we refer to this as the modular Virasoro algebra (MVA). As
we explain in detail in section 3, there is a nice geometric interpretation of
the global $SO(2,2)$ subalgebra of the MVA. In particular, the generators of
this global subalgebra, i.e., $\mathbb{L}_{0,\pm}$ and
$\bar{\mathbb{L}}_{0,\pm}$, are linear combinations of the holomorphic and
antiholomorphic components of the modular hamiltonians corresponding to the
subregions $N^{\prime}(z_{1},z_{2})$ and $N^{\prime\prime}(z_{3},z_{1})$ of
$N$. See figure 1. This is particularly transparent if one parameterizes the
normalization constant as follows:
$r=\frac{1}{\bar{r}}=\left(\frac{z_{2}-z_{1}}{z_{3}-z_{1}}\right)$. As is
clear from figure 1, the $z_{1}$ in this parametrization is the point within
the line segment $N$, which divides it into $N^{\prime}$ and
$N^{\prime\prime}$. For this reason, in the remainder of the note, we use this
normalization for the $\mathbb{L}_{n}$ and $\bar{\mathbb{L}}_{n}$.
Finally, we note that there is an interesting ’duality’ between the standard
generators of the CFT, which we denote as ${\bf L}_{n}$, and the
$\mathbb{L}_{n}$ we construct here. In particular, there exists a conformal
transformation which interchanges the two. The explicit map between the two
conformal frames is given in equation 4.2. Under this transformation,
$\mathbb{L}_{n}\Longleftrightarrow{\bf L}_{n}$. In particular, this means that
the modular Hamiltonian gets interchanged with the usual CFT Hamiltonian.
As we will argue in the rest of the note, it is natural to interpret the
$\mathbb{L}_{n}$ as realizing the Virasoro algebra on the space of causal
diamonds in the CFT2, which is termed the kinematic space (k-space).
Evidence of this is provided in the following sections, where we analyze the
action of $\mathbb{L}_{n}$ on the OPE blocks which are bilocal operators in
the CFT2 but have a simple local description in the k-space, and later when we
understand the geometric meaning of the global subalgebra of the MVA.
### 2.3 Action on the OPE blocks
We can compute the commutator of the “modular” $\mathbb{L}_{n}$ operators with
the OPE blocks by using the commutator of $T$ with primary operators, which
can be obtained from the $T\mathcal{O}$ OPE.
$\displaystyle[T(\omega),\mathcal{O}_{k}(\zeta,\bar{\zeta})]=2\pi
i(h\partial_{\zeta}\delta(\zeta-\omega)+\delta(\zeta-\omega)\partial_{\zeta})\mathcal{O}_{k}(\zeta,\bar{\zeta}),$
(2.11)
$\displaystyle[\bar{T}(\bar{\omega}),\mathcal{O}_{k}(\zeta,\bar{\zeta})]=-2\pi
i(h\partial_{\bar{\zeta}}\delta(\bar{\zeta}-\bar{\omega})+\delta(\bar{\zeta}-\bar{\omega})\partial_{\bar{\zeta}})\mathcal{O}_{k}(\zeta,\bar{\zeta})$
(2.12)
One then needs to evaluate the action of $\mathbb{L}_{n}$ on the primary field
$\mathcal{O}_{k}(\zeta,\bar{\zeta})$. Using (2.7) and (2.11), we get
$\displaystyle[\mathbb{L}_{n},\mathcal{O}_{k}(\zeta,\bar{\zeta})]$
$\displaystyle=\frac{2\pi i}{z_{2}-z_{3}}(z_{2}-\zeta)^{-n}(\zeta-
z_{3})^{n}\left(\frac{z_{21}}{z_{31}}\right)^{n}$
$\displaystyle\times\Big{[}h_{k}\left(n(z_{2}-z_{3})+(z_{2}+z_{3}-2\zeta)\right)+(z_{2}-\zeta)(\zeta-
z_{3})\partial_{\zeta}\Big{]}\mathcal{O}(\zeta,\bar{\zeta})$ (2.13)
$\displaystyle[\bar{\mathbb{L}}_{n},\mathcal{O}_{k}(\zeta,\bar{\zeta})]$
$\displaystyle=\frac{2\pi
i}{z_{2}-z_{3}}(z_{2}-\bar{\zeta})^{n}(\bar{\zeta}-z_{3})^{-n}\left(\frac{z_{21}}{z_{31}}\right)^{-n}$
$\displaystyle\times\Big{[}\bar{h}_{k}\left(-n(z_{2}-z_{3})+(z_{2}+z_{3}-2\bar{\zeta})\right)+(z_{2}-\bar{\zeta})(\bar{\zeta}-z_{3})\partial_{\bar{\zeta}}\Big{]}\mathcal{O}(\zeta,\bar{\zeta})$
(2.14)
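The structure of these commutators follows from integrating the delta functions in (2.11) against the smearing function $w_{n}(\zeta)$ of $\mathbb{L}_{n}$: the $h_{k}$-term of (2.13) is exactly $w_{n}^{\prime}(\zeta)$. As a quick symbolic sketch of this step (our own consistency check, not code from the paper; we set $a_{n}=r^{n}$ with $r$ kept symbolic):

```python
import sympy as sp

zeta, z2, z3, r = sp.symbols('zeta z2 z3 r')

def w(n):
    # smearing function of L_n (cf. eq. 2.7), with a_n = r^n
    return r**n * (z2 - zeta)**(-n + 1) * (zeta - z3)**(n + 1) / (z2 - z3)

def claimed(n):
    # coefficient structure read off from the h_k term of eq. (2.13)
    return (r**n / (z2 - z3)) * (z2 - zeta)**(-n) * (zeta - z3)**n \
           * (n * (z2 - z3) + (z2 + z3 - 2 * zeta))

for n in range(-3, 4):
    assert sp.simplify(sp.diff(w(n), zeta) - claimed(n)) == 0
print("w_n'(zeta) reproduces the h_k term of (2.13) for |n| <= 3")
```

The $\partial_{\zeta}$-term of (2.13) is just $2\pi i\,w_{n}(\zeta)\partial_{\zeta}$ itself, so no further check is needed there.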
Using (2.1) and (2.3), we now present the commutator of the $\mathbb{L}_{n}$
and the $B^{ij}_{k}$.
$\displaystyle[\mathbb{L}_{n},B^{ij}_{k}(z_{2},z_{3})]$
$\displaystyle=\int_{z_{3}}^{z_{2}}d\zeta\int_{\bar{z}_{3}}^{\bar{z}_{2}}d\bar{\zeta}\left(\frac{(\zeta-
z_{3})(z_{2}-\zeta)}{z_{2}-z_{3}}\right)^{h_{k}-1}\left(\frac{z_{2}-\zeta}{\zeta-
z_{3}}\right)^{h_{ij}}\times$
$\displaystyle\left(\frac{(\bar{\zeta}-\bar{z}_{3})(\bar{z}_{2}-\bar{\zeta})}{\bar{z}_{2}-\bar{z}_{3}}\right)^{\bar{h}_{k}-1}\left(\frac{\bar{z}_{2}-\bar{\zeta}}{\bar{\zeta}-\bar{z}_{3}}\right)^{\bar{h}_{ij}}\frac{2\pi
i}{z_{2}-z_{3}}(z_{2}-\zeta)^{-n}(\zeta-
z_{3})^{n}\left(\frac{z_{21}}{z_{31}}\right)^{n}$
$\displaystyle\times\Big{[}h_{k}\left(n(z_{2}-z_{3})+(z_{2}+z_{3}-2\zeta)\right)+(z_{2}-\zeta)(\zeta-
z_{3})\partial_{\zeta}\Big{]}\mathcal{O}_{k}(\zeta,\bar{\zeta})$
$\displaystyle=(\text{T.D})+2\pi
i(nh_{k}-n+h_{ij})\left(\frac{z_{21}}{z_{31}}\right)^{n}\int_{z_{3}}^{z_{2}}d\zeta\int_{\bar{z}_{3}}^{\bar{z}_{2}}d\bar{\zeta}\left(\frac{(\zeta-
z_{3})(z_{2}-\zeta)}{z_{2}-z_{3}}\right)^{h_{k}-1}$
$\displaystyle\times\left(\frac{z_{2}-\zeta}{\zeta-
z_{3}}\right)^{h_{ij}}\left(\frac{(\bar{\zeta}-\bar{z}_{3})(\bar{z}_{2}-\bar{\zeta})}{\bar{z}_{2}-\bar{z}_{3}}\right)^{\bar{h}_{k}-1}\left(\frac{\bar{z}_{2}-\bar{\zeta}}{\bar{\zeta}-\bar{z}_{3}}\right)^{\bar{h}_{ij}}\left(\frac{z_{2}-\zeta}{\zeta-
z_{3}}\right)^{-n}\mathcal{O}_{k}(\zeta,\bar{\zeta})$ $\displaystyle=2\pi
i\left(\frac{z_{21}}{z_{31}}\right)^{n}[n(h_{k}-1)+h_{ij}]B^{ij-n}_{k}\;;\;\text{for}\;\Big{(}h_{ij}-h_{k}\leq
n\leq h_{ij}+h_{k}\Big{)}$ (2.15)
Here (T.D) is the total derivative term which vanishes for
$\Big{(}h_{ij}-h_{k}\leq n\leq h_{ij}+h_{k}\Big{)}$.
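The integration by parts behind the second equality can be isolated as an algebraic identity on the weight functions. Writing the $\zeta$-dependence of the integrand as $P(\zeta)\big[h_{k}w_{n}^{\prime}(\zeta)+w_{n}(\zeta)\partial_{\zeta}\big]\mathcal{O}_{k}$, with $P$ the OPE-block weight and $w_{n}$ the smearing function of $\mathbb{L}_{n}$, one needs $h_{k}w_{n}^{\prime}P-(Pw_{n})^{\prime}=\left[n(h_{k}-1)+h_{ij}\right]r^{n}P\left(\frac{z_{2}-\zeta}{\zeta-z_{3}}\right)^{-n}$. A sympy sketch of this identity (our own consistency check, verified on a grid of integer weights):

```python
import sympy as sp

zeta, z2, z3, r = sp.symbols('zeta z2 z3 r')
u, v, s = z2 - zeta, zeta - z3, z2 - z3

def check(hk, hij, n):
    P = (u * v / s)**(hk - 1) * (u / v)**hij   # OPE-block weight in the integrand
    w = r**n * u**(1 - n) * v**(1 + n) / s     # smearing function of L_n, a_n = r^n
    lhs = hk * sp.diff(w, zeta) * P - sp.diff(P * w, zeta)
    rhs = (n * (hk - 1) + hij) * r**n * P * (u / v)**(-n)
    return sp.simplify(lhs - rhs) == 0

assert all(check(hk, hij, n)
           for hk in (1, 2, 3) for hij in (0, 1, 2) for n in range(-2, 3))
print("IBP identity behind (2.15) verified on the test grid")
```

The identity holds for generic weights; integer values are used here only to keep the symbolic simplification elementary.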
Equation (2.15) is identical to the action of the usual Virasoro generator
$l_{n}$ in CFT2 on the modes $\phi_{m}$ of a primary field $\phi$,
$\displaystyle[l_{n},\phi_{m}]=[n(h-1)-m]\phi_{n+m}$ (2.16)
with the identification
$\phi_{m}=\left(\frac{z_{21}}{z_{31}}\right)^{-h_{ij}}B^{ij}_{k}$. Thus we see
that the OPE blocks play the role of modes of some highest-weight primary
field representation of the MVA.
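Equation (2.16) itself follows from coefficient matching in the standard representation $[l_{n},\phi(\zeta)]=\zeta^{n+1}\partial_{\zeta}\phi+h(n+1)\zeta^{n}\phi$ acting on the mode expansion $\phi(\zeta)=\sum_{m}\phi_{m}\zeta^{-m-h}$. A sympy sketch of that matching (our own check, with these standard conventions assumed):

```python
import sympy as sp

z, h, m = sp.symbols('z h m')

def l_action(n, mode_index):
    # [l_n, .] acts as z^{n+1} d/dz + h (n+1) z^n on a mode z^{-mode_index - h}
    mode = z**(-mode_index - h)
    return z**(n + 1) * sp.diff(mode, z) + h * (n + 1) * z**n * mode

# The z^{-m-h} component of [l_n, phi(z)] comes from the mode phi_{m+n}
# and carries the factor n(h-1) - m, reproducing (2.16).
for n in range(-3, 4):
    lhs = l_action(n, m + n)
    rhs = (n * (h - 1) - m) * z**(-m - h)
    assert sp.simplify(lhs - rhs) == 0
print("[l_n, phi_m] = (n(h-1) - m) phi_{n+m} verified for |n| <= 3")
```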
To summarize, the key results of this section are:
* a.
The integrated stress tensor operators $\mathbb{L}_{n}$ and
$\bar{\mathbb{L}}_{n}$ form an infinite class of modular eigenmodes of the
modular Hamiltonian corresponding to a single interval $N$ with endpoints
($z_{2},z_{3}$).
* b.
These modular eigenmodes are bi-local, similar to the OPE blocks, in that they
are functions of the endpoints ($z_{2},z_{3}$) of the interval $N$.
* c.
Their commutators satisfy the Virasoro algebra. For obvious reasons we refer
to this representation of the Virasoro algebra as the modular Virasoro algebra
(MVA). Moreover, under the conformal transformation given in equation 4.2,
${\mathbb{L}}_{n}\Longleftrightarrow{\bf L}_{n}$. In particular, the modular
Hamiltonian is interchanged with the usual CFT Hamiltonian.
* d.
Under the transformations generated by the $\mathbb{L}_{n}$ and
$\bar{\mathbb{L}}_{n}$, the OPE blocks transform as the modes of a primary
operator in CFT2 should.
Now, the OPE blocks, which are bi-local fields in the CFT2, have a natural
description as local fields on the so-called kinematic space (k-space)
[34]-[36]. In light of [c] and [d], it is natural to wonder whether there
exists an “effective CFT” description in the k-space itself, with the OPE
blocks being the primary fields in this “k-space CFT”. We explore the
kinematical aspects of this question in the next section.
### 2.4 MVA and the Kinematic space
The kinematic space of CFT2 is defined as the space of a ‘pair of spacelike
points’ in the CFT. Thus it is a four-dimensional space, with coordinates
given by the coordinates of the two points ($t_{2},x_{2};t_{3},x_{3}$) and
signature ($+,-,+,-$). One can fix the metric on this space by demanding its
invariance under conformal transformations of both points. This leads to a
unique metric.
$ds^{2}_{kspace}=2\Big{[}\frac{dz_{2}dz_{3}}{(z_{2}-z_{3})^{2}}+\frac{d\bar{z}_{2}d\bar{z}_{3}}{(\bar{z}_{2}-\bar{z}_{3})^{2}}\Big{]}\;\;\textrm{with}\;\;z_{i}=t_{i}+x_{i},\;\bar{z}_{i}=t_{i}-x_{i}$
(2.17)
Thus the 4d space factorizes into two 2d conformally flat spacetimes, spanned
by the two sets of k-space light cone coordinates $(z_{2},\;z_{3})$ and
$(\bar{z}_{2},\;\bar{z}_{3})$ respectively.
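The claimed invariance is straightforward to verify: under a simultaneous Möbius map $z_{i}\rightarrow(az_{i}+b)/(cz_{i}+d)$ of both endpoints, the holomorphic factor $dz_{2}dz_{3}/(z_{2}-z_{3})^{2}$ of (2.17) is unchanged (and similarly for the antiholomorphic factor). A sympy sketch of this check (our own, not from the paper):

```python
import sympy as sp

z2, z3, a, b, c, d = sp.symbols('z2 z3 a b c d')

mob = lambda w: (a * w + b) / (c * w + d)

# Jacobian factors dz_i'/dz_i and the transformed separation:
j2 = sp.diff(mob(z2), z2)
j3 = sp.diff(mob(z3), z3)
sep = mob(z2) - mob(z3)

# dz2' dz3' / (z2' - z3')^2  ==  dz2 dz3 / (z2 - z3)^2
ratio = sp.simplify(j2 * j3 * (z2 - z3)**2 / sep**2)
assert ratio == 1
print("k-space metric (2.17) is Mobius invariant")
```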
The k-space formalism allows us to visualize the OPE blocks, which are bi-
local fields in the CFT2, as local fields living on this k-space. Moreover, it
geometrizes the conformal kinematic properties of these OPE blocks. In
particular, the conformal Casimir equation which the OPE blocks satisfy,
derived from their conformal transformation properties as obtained from the
definition in 2.2, translates in the k-space terminology into an ‘equation of
motion’ to be satisfied by the OPE blocks on the k-space [34],[35]. For scalar
OPE blocks, this is just the Klein-Gordon (KG) equation, while the spinning
OPE blocks satisfy a slightly modified KG equation [38].
The short distance behaviour of the OPE blocks 666In the short distance limit
of an OPE, the OPE block behaves like a single primary, i.e.:
$\lim_{z_{2},\bar{z}_{2}\rightarrow
z_{3},\bar{z}_{3}}B_{k}^{ij}(z_{2},\bar{z}_{2};z_{3},\bar{z}_{3})\sim
z_{23}^{h_{k}}\bar{z}_{23}^{\bar{h}_{k}}\mathcal{O}_{h_{k},\bar{h}_{k}}(z_{3},\bar{z}_{3})$.
is now interpreted as a boundary condition to be imposed along the k-space
coordinates ($z_{23},\;\bar{z}_{23}\rightarrow 0$).
#### 2.4.1 Realizing $\mathbb{L}_{n}$ on k-space
Points [c] and [d] of the last section seem to hint at the possibility of an
effective CFT description in the k-space, with the OPE blocks being modes of
the highest weight representation of the corresponding MVA, which in this
interpretation would act locally on the k-space. From the explicit form of
the $\mathbb{L}_{n}$, it is clear that they are functions of ($z_{2}-z_{3}$).
In our case, we had chosen the interval to be on a constant time slice, so
that $z_{2}=\bar{z}_{2}$ and $z_{3}=\bar{z}_{3}$. If, on the other hand, we
had chosen an arbitrary spacelike interval, then the $\mathbb{L}_{n}$ and
$\bar{\mathbb{L}}_{n}$ are functions of ($z_{23}=z_{2}-z_{3}$) and
($\bar{z}_{23}=\bar{z}_{2}-\bar{z}_{3}$) respectively. As is clear from the
k-space metric 2.17, the $z_{23}$ and $\bar{z}_{23}$ are spatial coordinates
on the two decoupled spaces and not light cone coordinates. Thus the
$\mathbb{L}_{n}$ and $\bar{\mathbb{L}}_{n}$ should be thought of as
generating independent spatial diffeomorphisms along the $z_{23}$ and
$\bar{z}_{23}$ directions rather than generating conformal transformations in
a 2d space.
It is still possible that there is a useful effective k-space description in
terms of a product of two CFT1’s, with 1d stress tensors which we denote as
$\mathbb{T}(\zeta)$ and $\bar{\mathbb{T}}(\bar{\zeta})$, given by
$\mathbb{T}(\zeta)=\sum_{n}\frac{\mathbb{L}_{n}(z_{23})}{(\zeta-
z_{23})^{n+2}},\;\;\textrm{and}\;\;\bar{\mathbb{T}}(\bar{\zeta})=\sum_{n}\frac{\bar{\mathbb{L}}_{n}(\bar{z}_{23})}{(\bar{\zeta}-\bar{z}_{23})^{n+2}}$
(2.18)
such that the OPE blocks are modes of a field $\Phi(\zeta,\bar{\zeta})$ with
respect to both of the 1d CFTs, where the field $\Phi$ can be formally mode-
expanded in terms of the OPE blocks as follows:
$\Phi(\zeta,\bar{\zeta})=\sum_{h_{ij},\bar{h}_{ij}}\frac{\mathcal{B}^{ij}_{k}(z_{2},z_{3};\bar{z}_{2},\bar{z}_{3})}{(\zeta-
z_{23})^{h_{k}-h_{ij}}(\bar{\zeta}-\bar{z}_{23})^{\bar{h}_{k}-\bar{h}_{ij}}}$
(2.19)
However, establishing whether a consistent description of this type can be
constructed would involve proving that it satisfies consistent crossing
equations, among other things. We do not attempt to answer this question here.
The only point we want to make is that the algebra of the OPE blocks with the
$\mathbb{L}_{n}$ is consistent with the existence of such an effective
k-space description.
Of course, if such an effective description does exist in k-space, it would
only be a reformulation of the original 2d CFT in the k-space language, and
the stress tensors defined via equation 2.18 would also be related to the CFT
stress tensor components. Nevertheless, we know that the k-space description
is a useful intermediary between the AdS and CFT descriptions, because the
k-space has the advantage of being directly identified as the space of bulk
geodesics which end on the boundary [36]. This fact has been used to derive a
very simple proof of the identification 2.5, 2.6 of OPE blocks as geodesic
operators in AdS [34]. For these reasons, a way to incorporate CFT dynamics in
the k-space language would be interesting from the AdS/CFT perspective. We
hope to come back to this issue in the near future.
## 3 The global subalgebra of the MVA
In this section, we focus on the global subalgebra of the MVA, which is
spanned by $\mathbb{L}_{0,\pm 1}$ and $\mathbb{\bar{L}}_{0,\pm 1}$. We point
out that this subset of the $\mathbb{L}_{n}$ has a nice geometric
interpretation in terms of modular Hamiltonians of $N$ itself as well as of
its subparts, labelled $N^{\prime}$ and $N^{\prime\prime}$. This in turn
realizes the action of ’modular inclusion’ [50]-[53] within
$\mathcal{D}_{N}$. For brevity, we refer to this subalgebra as the g-MVA in
the rest of the note. We begin with a short discussion of the symmetries of
causal diamonds in CFT2.
### 3.1 Symmetries of the CFT2 causal diamonds
CFT2 causal diamonds associated with intervals on a constant time slice are
preserved by an $SO(1,1)\times SO(1,1)$ subset of the global conformal
symmetry group $SO(2,2)$. Due to the chiral structure of the symmetry
algebra, one can find a right-moving and a left-moving conformal Killing
vector (CKV) which stabilize the diamond. For a causal diamond with upper and
lower tips at ($v,\bar{v}$) and ($u,\bar{u}$) respectively (in the light cone
coordinates), the CKVs take the following form,
$\displaystyle
K^{\zeta}\partial_{\zeta}=\frac{(v-\zeta)(\zeta-u)}{v-u}\partial_{\zeta}\;,\;K^{\bar{\zeta}}\partial_{\bar{\zeta}}=\frac{(\bar{v}-\bar{\zeta})(\bar{\zeta}-\bar{u})}{\bar{v}-\bar{u}}\partial_{\bar{\zeta}}$
(3.1)
Here $\zeta(=X+T),\bar{\zeta}(=X-T)$ are the lightcone coordinates. One can
similarly define the corresponding conserved charges as:
$\displaystyle
K^{R}=\int^{\infty}_{-\infty}d\zeta\frac{(v-\zeta)(\zeta-u)}{u-v}T_{\zeta\zeta}(\zeta)\;,\;K^{L}=\int^{\infty}_{-\infty}d\bar{\zeta}\frac{(\bar{v}-\bar{\zeta})(\bar{\zeta}-\bar{u})}{\bar{v}-\bar{u}}T_{\bar{\zeta}\bar{\zeta}}(\bar{\zeta})$
(3.2)
If we take an interval on the $T=0$ slice with endpoints $(z_{2},z_{3})$, the
upper and lower tips of the corresponding causal diamond (say $N$) are located
at $y^{\mu}=(\frac{z_{2}-z_{3}}{2},\frac{z_{2}+z_{3}}{2})$ and
$x^{\mu}=(\frac{z_{3}-z_{2}}{2},\frac{z_{2}+z_{3}}{2})$ respectively. In this
case: $(u,\bar{u})\equiv(x^{1}-x^{0},x^{1}+x^{0})=(z_{2},z_{3})$ and
$(v,\bar{v})\equiv(y^{1}-y^{0},y^{1}+y^{0})=(z_{3},z_{2})$. The total modular
Hamiltonian $K_{N}$ for the interval $N$ is the sum of $K^{R}_{N}$ and
$K^{L}_{N}$, i.e.
$\displaystyle
K=K^{R}_{N}+K^{L}_{N}=\int^{\infty}_{-\infty}d\zeta\frac{(z_{2}-\zeta)(\zeta-
z_{3})}{z_{2}-z_{3}}T_{\zeta\zeta}(\zeta)+\int^{\infty}_{-\infty}d\bar{\zeta}\frac{(z_{2}-\bar{\zeta})(\bar{\zeta}-z_{3})}{z_{2}-z_{3}}T_{\bar{\zeta}\bar{\zeta}}(\bar{\zeta})$
(3.3)
This can be derived from the expression for the modular Hamiltonian of the
Rindler half-space by a conformal transformation from the Rindler wedge to
the CFT2 causal diamond. One can similarly define $P_{D}$ as the
antisymmetric combination of $K^{R}$ and $K^{L}$, i.e. $P_{D}=K^{R}-K^{L}$
[20]. Together, $K$ and $P_{D}$ generate geometrical flows which preserve the
diamond: $K$ generates a flow from the lower tip to the upper tip, while
$P_{D}$ generates a flow from the left tip to the right tip of the diamond.
Consider a CFT2 interval $N(z_{3},z_{2})$ on the time slice $T=0$777i.e.
$z_{3,2}=\bar{z}_{3,2}$. Divide this line segment into two parts
$N^{\prime}(z_{1},z_{2})$ and $N^{\prime\prime}(z_{3},z_{1})$ around a point
$z_{1}$. The corresponding causal diamonds of $N^{\prime}$ and
$N^{\prime\prime}$ divide the causal diamond of $N$ into four parts such that
($N\supset{N^{\prime},N^{\prime\prime},U,L}$), where $U$ and $L$ are the upper
and lower diamonds as shown in Figure 1888We will be using the same labels
interchangeably for the line segments as well as the corresponding causal
diamonds..
Figure 1: Causal diagram of different regions on a $T=0$ slice.
Using the $TT$ OPE, it can be shown that the
$K_{(N^{\prime},N,N^{\prime\prime})}^{R}$ satisfy the following algebra:
$[K_{N^{\prime}}^{R},K_{N}^{R}]=2\pi
i\left(K_{N}^{R}-K_{N^{\prime}}^{R}\right)$ (3.4)
$[K_{N}^{R},K_{N^{\prime\prime}}^{R}]=2\pi
i\left(K_{N}^{R}-K_{N^{\prime\prime}}^{R}\right)$ (3.5)
$[K_{N^{\prime}}^{R},K_{N^{\prime\prime}}^{R}]=-2\pi
i\left(K_{N^{\prime}}^{R}+K_{N^{\prime\prime}}^{R}\right)$ (3.6)
This is isomorphic to the holomorphic $SO(2,1)$ subsector of the full
$SO(2,2)$ conformal algebra, with the following identifications:999We have
absorbed the $2\pi i$ factor by redefining $K^{R,L}$ as $\frac{1}{2\pi
i}K^{R,L}$.
$K_{N^{\prime\prime}}^{R}-K_{N}^{R}=\mathbb{L}_{1}\;;\;K_{N}^{R}=\mathbb{L}_{0}\;;\;K_{N^{\prime}}^{R}-K_{N}^{R}=\mathbb{L}_{-1}$
(3.7)
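At the classical level, the closure (3.4)-(3.6) can be checked directly on the conformal Killing vectors $K^{\zeta}\partial_{\zeta}$ of (3.1)/(3.3) for the three intervals. A sympy sketch (our own consistency check; the Lie brackets of the vector fields reproduce the charge algebra up to the overall factor of $2\pi i$ and a sign fixed by the operator conventions of footnote 9):

```python
import sympy as sp

z, z1, z2, z3 = sp.symbols('zeta z1 z2 z3')

# CKV coefficient g(z) = (b - z)(z - a)/(b - a) for an interval (a, b), cf. (3.1)/(3.3)
g = lambda a, b: (b - z) * (z - a) / (b - a)
gN, gNp, gNpp = g(z3, z2), g(z1, z2), g(z3, z1)  # N=(z3,z2), N'=(z1,z2), N''=(z3,z1)

def lie(f1, f2):
    # Lie bracket of the vector fields f1(z) d/dz and f2(z) d/dz
    return sp.simplify(f1 * sp.diff(f2, z) - f2 * sp.diff(f1, z))

assert sp.simplify(lie(gNp, gN) - (gNp - gN)) == 0      # cf. (3.4)
assert sp.simplify(lie(gN, gNpp) - (gNpp - gN)) == 0    # cf. (3.5)
assert sp.simplify(lie(gNp, gNpp) - (gNp + gNpp)) == 0  # cf. (3.6)
print("Closure of the three diamond CKVs into SO(2,1) verified")
```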
Similarly, $K_{(N^{\prime},N,N^{\prime\prime})}^{L}$ can be shown to satisfy
the following commutation relations:
$[K_{N^{\prime}}^{L},K_{N}^{L}]=2\pi
i\left(K_{N^{\prime}}^{L}-K_{N}^{L}\right)$ (3.8)
$[K_{N}^{L},K_{N^{\prime\prime}}^{L}]=2\pi
i\left(K_{N^{\prime\prime}}^{L}-K_{N}^{L}\right)$ (3.9)
$[K_{N^{\prime}}^{L},K_{N^{\prime\prime}}^{L}]=2\pi
i\left(K_{N^{\prime}}^{L}+K_{N^{\prime\prime}}^{L}\right)$ (3.10)
This is again isomorphic to the anti-holomorphic $SO(2,1)$ sub-algebra, with
the identification:
$K_{N^{\prime}}^{L}-K_{N}^{L}=\bar{\mathbb{L}}_{1}\;;\;K_{N}^{L}=\bar{\mathbb{L}}_{0}\;;\;K_{N^{\prime\prime}}^{L}-K_{N}^{L}=\bar{\mathbb{L}}_{-1}$
(3.11)
The remaining right and left chiral generators of the diamond $U$ and $L$ can
be expressed in terms of $K^{R,L}_{N,N^{\prime},N^{\prime\prime}}$ as follows:
$\displaystyle
K^{R}_{U}=K^{R}_{N^{\prime\prime}}\;,\;K^{L}_{U}=K^{L}_{N^{\prime}}\;,\;K^{R}_{L}=K^{R}_{N^{\prime}}\;,\;K^{L}_{L}=K^{L}_{N^{\prime\prime}}$
(3.12)
Thus, the six modular generators of the three diamonds ($K^{R(L)}_{N,U,L}$)
also satisfy the $SO(2,1)\times SO(2,1)$ global conformal algebra.
We denote the CKVs associated with the generators $\mathbb{L}_{0,\pm 1}$ as
$L_{0,\pm 1}$. It is easy to see that these are simply linear combinations of
the standard representations of the global conformal generators $l_{1,0,-1}$
defined earlier. The exact relations between them are given by:
$\displaystyle
l_{1}=\frac{2z_{2}z_{3}}{z_{3}-z_{2}}L_{0}+\frac{z_{3}^{2}(z_{1}-z_{2})}{(z_{1}-z_{3})(z_{3}-z_{2})}L_{-1}+\frac{z_{2}^{2}(z_{1}-z_{3})}{(z_{1}-z_{2})(z_{3}-z_{2})}L_{1}$
(3.13) $\displaystyle
l_{0}=\frac{(z_{2}+z_{3})}{z_{2}-z_{3}}L_{0}+\frac{z_{3}(z_{1}-z_{2})}{(z_{1}-z_{3})(z_{2}-z_{3})}L_{-1}+\frac{z_{2}(z_{1}-z_{3})}{(z_{1}-z_{2})(z_{2}-z_{3})}L_{1}$
(3.14) $\displaystyle
l_{-1}=\frac{2}{z_{3}-z_{2}}L_{0}+\frac{(z_{1}-z_{2})}{(z_{1}-z_{3})(z_{3}-z_{2})}L_{-1}+\frac{(z_{1}-z_{3})}{(z_{1}-z_{2})(z_{3}-z_{2})}L_{1}$
(3.15)
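These relations can be verified symbolically from the explicit CKVs $L_{0,\pm 1}$ of equation (4.1), once a representation for the global generators is fixed; the choice consistent with $\delta_{n}\zeta^{\prime}=(-\zeta^{\prime})^{n+1}$ of section 4 is $l_{n}=(-\zeta)^{n+1}\partial_{\zeta}$, which we assume here. A sympy sketch (our own check):

```python
import sympy as sp

z, z1, z2, z3 = sp.symbols('zeta z1 z2 z3')
r = (z2 - z1) / (z3 - z1)

# Coefficient functions of the CKVs L_n from eq. (4.1):
L = lambda n: (z2 - z)**(-n + 1) * (z - z3)**(n + 1) / (z3 - z2) * r**n
# Global generators in the convention l_n = (-zeta)^{n+1} d/dzeta:
l = lambda n: (-z)**(n + 1)

# Right-hand sides of (3.13)-(3.15):
rhs1 = (2*z2*z3/(z3 - z2))*L(0) + (z3**2*(z1 - z2)/((z1 - z3)*(z3 - z2)))*L(-1) \
       + (z2**2*(z1 - z3)/((z1 - z2)*(z3 - z2)))*L(1)
rhs0 = ((z2 + z3)/(z2 - z3))*L(0) + (z3*(z1 - z2)/((z1 - z3)*(z2 - z3)))*L(-1) \
       + (z2*(z1 - z3)/((z1 - z2)*(z2 - z3)))*L(1)
rhsm1 = (2/(z3 - z2))*L(0) + ((z1 - z2)/((z1 - z3)*(z3 - z2)))*L(-1) \
        + ((z1 - z3)/((z1 - z2)*(z3 - z2)))*L(1)

assert sp.simplify(rhs1 - l(1)) == 0    # (3.13)
assert sp.simplify(rhs0 - l(0)) == 0    # (3.14)
assert sp.simplify(rhsm1 - l(-1)) == 0  # (3.15)
print("(3.13)-(3.15) verified with l_n = (-zeta)^{n+1} d/dzeta")
```

With any other sign convention for the $l_{n}$, the same relations hold up to overall signs.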
A similar set of relations holds between $\bar{l}_{1,0,-1}$ and
$\bar{L}_{1,0,-1}$. As is clear from the definitions, in our set-up,
$K_{N}=\mathbb{L}_{0}+\bar{\mathbb{L}}_{0}$, and $\mathbb{L}_{1,-1}$ and
$\bar{\mathbb{L}}_{1,-1}$ are its $\pm 1$ eigenmodes. These eigenmodes are
constructed from the $K$s and $P_{D}$s of the regions which reside inside $N$
itself. This set-up exhibits some further features, such as modular
inclusion, as we discuss in the next subsection.
### 3.2 g-MVA and modular inclusions
Among the left- and right-moving CKVs, one can notice that $K_{N}$, $K_{U}$,
$K_{L}$ and $P_{D,N}$, $P_{D,N^{\prime}}$, $P_{D,N^{\prime\prime}}$ each
close into an $SO(2,1)$ subalgebra separately101010However, these two
$SO(2,1)$’s do not commute with each other..
$\displaystyle[P_{D,N},P_{D,N^{\prime}}]=2\pi i(P_{D,N^{\prime}}-P_{D,N})$
(3.16) $\displaystyle[P_{D,N},P_{D,N^{\prime\prime}}]=2\pi
i(P_{D,N}-P_{D,N^{\prime\prime}})$ (3.17)
$\displaystyle[P_{D,N^{\prime\prime}},P_{D,N^{\prime}}]=2\pi
i(P_{D,N^{\prime\prime}}+P_{D,N^{\prime}})$ (3.18)
Hence, $P_{D,N^{\prime\prime}}-P_{D,N}$, $P_{D,N}$ and
$P_{D,N^{\prime}}-P_{D,N}$ satisfy the $SO(2,1)$ sub-algebra.
In a similar fashion, one also obtains the following:
$\displaystyle[K_{N},K_{U}]=K_{N}-K_{U}$ (3.19)
$\displaystyle[K_{N},K_{L}]=K_{L}-K_{N}$ (3.20)
$\displaystyle[K_{U},K_{L}]=K_{U}+K_{L}$ (3.21)
Here $K_{U}-K_{N}$, $K_{N}$ and $K_{L}-K_{N}$ form another $SO(2,1)$
sub-algebra.
The above commutation relations (3.16) and (3.19) have the structure of
modular inclusion, as we discuss in detail in appendix A. In particular,
(3.16) gives a unitary geometric operation with which one can map the algebra
of observables between the nested diamonds $N$, $N^{\prime}$ and
$N^{\prime\prime}$. Similarly, using (3.19) we have a map of the algebras of
observables between $N$, $U$ and $L$. See appendix A for further details.
Using the inclusion properties and the fact that the $K$ and $P_{D}$ of any
diamond can be constructed in the basis of modular generators of $N$,
$N^{\prime}$, $N^{\prime\prime}$ as in (3.13), we could in principle
construct the algebra of observables of any region (diamond), or provide a
map from $N$ to any diamond in the spacetime. However, since the modular
generators in the vacuum are constructed out of conformal symmetries, this
inclusion property, i.e. the map between different causal domains, is just an
artefact of the global conformal symmetry.
## 4 Pulling the $\mathbb{L}_{n}$ into the bulk
Given the explicit form of the $\mathbb{L}_{n}$ and $\bar{\mathbb{L}}_{n}$,
one can read off the corresponding CKVs ($L_{n}$) from equations 2.7 and 3.1.
Their explicit forms are as follows.
$\displaystyle L_{n}=\frac{(z_{2}-\zeta)^{-n+1}(\zeta-
z_{3})^{n+1}}{z_{3}-z_{2}}\left(\frac{z_{2}-z_{1}}{z_{3}-z_{1}}\right)^{n}\partial_{\zeta},$
$\displaystyle\bar{L}_{n}=-\frac{(z_{2}-\bar{\zeta})^{n+1}(\bar{\zeta}-z_{3})^{-n+1}}{z_{3}-z_{2}}\left(\frac{z_{3}-z_{1}}{z_{2}-z_{1}}\right)^{n}\partial_{\bar{\zeta}}$
(4.1)
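As a consistency check on (2.10), the classical part of the modular Virasoro algebra can be verified directly from these CKVs: their Lie brackets close as $[L_{m},L_{n}]=(m-n)L_{m+n}$, the central term being invisible at the level of vector fields. A sympy sketch (our own check):

```python
import sympy as sp

z, z1, z2, z3 = sp.symbols('zeta z1 z2 z3')
r = (z2 - z1) / (z3 - z1)

def L(n):
    # coefficient function of the CKV L_n in eq. (4.1)
    return (z2 - z)**(-n + 1) * (z - z3)**(n + 1) / (z3 - z2) * r**n

def lie(m, n):
    # Lie bracket [L_m, L_n] of the vector fields L_k(z) d/dz
    return sp.simplify(L(m) * sp.diff(L(n), z) - L(n) * sp.diff(L(m), z))

# Witt algebra: [L_m, L_n] = (m - n) L_{m+n}
for m in range(-2, 3):
    for n in range(-2, 3):
        assert sp.simplify(lie(m, n) - (m - n) * L(m + n)) == 0
print("Witt algebra of the L_n CKVs verified for -2 <= m, n <= 2")
```

The special case $m=0$ reproduces the eigenmode equation (2.9).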
Following [42], in this section we extend the CKVs into the bulk, where they
generate bulk diffeomorphisms. From our previous discussion in section 3, we
already know what these are for the special case of $n=0,\pm 1$. Since the
$\mathbb{L}_{0,\pm 1}$ and $\bar{\mathbb{L}}_{0,\pm 1}$ generate isometries
of the causal diamonds of the subregions $N^{\prime}$ and $N^{\prime\prime}$,
their duals generate boosts around, and translations along, the respective RT
geodesics for $N^{\prime}$ and $N^{\prime\prime}$.
For the generic $n$ case, we proceed as follows. We first use the following
transformation
$(\zeta,\bar{\zeta})\rightarrow(\zeta^{\prime},\bar{\zeta^{\prime}})$ to
transform $(L_{n},\bar{L}_{n})\rightarrow(l_{n},\bar{l}_{n})$.
$\zeta^{\prime}=\frac{1}{\beta}(\frac{\zeta-
z_{3}}{z_{2}-\zeta}),\;\;\;\bar{\zeta}^{\prime}=\beta\frac{z_{2}-\bar{\zeta}}{\bar{\zeta}-z_{3}},\;(\textrm{with}\;\beta=\frac{z_{13}}{z_{21}})$
(4.2)
One can then extend these transformations into the bulk: ($y$, $\zeta$,
$\bar{\zeta}$) $\rightarrow$ ($y^{\prime}$, $\zeta^{\prime}$,
$\bar{\zeta}^{\prime}$). Working in the Fefferman-Graham gauge [41], the
corresponding dual bulk transformations are given by:
$\displaystyle\zeta^{\prime}=\frac{1}{\beta}[\frac{\zeta-
z_{3}}{z_{2}-\zeta}-\frac{z_{23}}{z_{2}-\zeta}\frac{y^{2}}{y^{2}-(z_{2}-\zeta)(\bar{\zeta}-z_{3})}]$
$\displaystyle\bar{\zeta}^{\prime}=\beta[\frac{z_{2}-\bar{\zeta}}{\bar{\zeta}-z_{3}}-\frac{z_{23}}{\bar{\zeta}-z_{3}}\frac{y^{2}}{y^{2}-(z_{2}-\zeta)(\bar{\zeta}-z_{3})}]$
$\displaystyle y^{\prime}=\frac{y}{y^{2}-(z_{2}-\zeta)(\bar{\zeta}-z_{3})}$
(4.3)
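A sympy sketch (our own consistency check) that this bulk map reduces to the boundary map (4.2) as $y\rightarrow 0$, with $y^{\prime}$ vanishing linearly:

```python
import sympy as sp

y, z, zb, z1, z2, z3 = sp.symbols('y zeta zetabar z1 z2 z3')
beta = (z1 - z3) / (z2 - z1)            # beta = z13 / z21, as in eq. (4.2)
z23 = z2 - z3
RT = y**2 - (z2 - z) * (zb - z3)        # the RT-geodesic combination

# The bulk transformation (4.3):
zp = ((z - z3)/(z2 - z) - z23/(z2 - z) * y**2/RT) / beta
zbp = beta * ((z2 - zb)/(zb - z3) - z23/(zb - z3) * y**2/RT)
yp = y / RT

# Boundary limit y -> 0 reproduces (4.2), and y' vanishes linearly in y:
assert sp.simplify(zp.subs(y, 0) - (z - z3)/(beta*(z2 - z))) == 0
assert sp.simplify(zbp.subs(y, 0) - beta*(z2 - zb)/(zb - z3)) == 0
assert sp.simplify(sp.limit(yp/y, y, 0) + 1/((z2 - z)*(zb - z3))) == 0
print("Bulk map (4.3) reduces to (4.2) at y = 0")
```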
In the $y\rightarrow 0$ limit, this equation reduces to equation (4.2), as it
should. From the above transformations, we can now obtain the expression for
the bulk counterparts of the $L_{n}$’s. In the primed coordinates, the action
of the bulk $l_{n}$ is given by [42]
$l^{(b)}_{n}=\delta_{n}\zeta^{\prime}\partial_{\zeta^{\prime}}+\delta_{n}\bar{\zeta^{\prime}}\partial_{\bar{\zeta^{\prime}}}+\delta_{n}y^{\prime}\partial_{y^{\prime}}$
(4.4)
where:
$\delta_{n}\zeta^{\prime}=(-\zeta^{\prime})^{n+1},\;\delta_{n}\bar{\zeta^{\prime}}=-n(n+1)y^{\prime
2}(-\zeta^{\prime})^{n-1},\;\delta_{n}y^{\prime}=\frac{1}{2}(n+1)y^{\prime}(-\zeta^{\prime})^{n}$
(4.5)
A similar expression may be obtained for the $\bar{l}^{b}_{n}$. By using
equations (4.3) and (4.5) in equation (4.4), we can obtain the explicit
expression for $L^{b}_{n}$
$L^{b}_{n}=\frac{{(-\zeta^{\prime})}^{n+1}}{z_{21}\cdot A\cdot RT}[U\partial_{\zeta}+V\partial_{\bar{\zeta}}+W\partial_{y}]$
(4.6)
Here, the expression for $\zeta^{\prime}$ is given by the first of equations
(4.3), and the explicit expressions for $U$, $V$, $W$, $A$ and $RT$ are given
below.
$\displaystyle U=y^{4}n(n+1)-A(A+z_{21}(n+1)y^{2}),$ $\displaystyle
V=-A(A+z_{21}(n+1)(z_{2}-\zeta)(\bar{\zeta}-z_{3}))+n(n+1)(\bar{\zeta}-z_{3})^{2}(z_{2}-\zeta)^{2},$
$\displaystyle
W=A(2A+z_{21}((\bar{\zeta}-z_{3})(z_{2}-\zeta)+y^{2}))+2ny^{2}(\bar{\zeta}-z_{3})(n-z_{2}+\zeta)$
$\displaystyle+nz_{21}[(z_{2}-\zeta)^{2}(\bar{\zeta}-z_{3})^{2}(\zeta-
z_{3})+y^{2}(y^{2}(z_{2}+z_{3}-2\zeta)+(z_{2}-\zeta)^{2}(\bar{\zeta}-z_{3}))],$
$\displaystyle A=(\bar{\zeta}-z_{3})(\zeta-
z_{3})(z_{2}-\zeta)+y^{2}(z_{2}+z_{3}-2\zeta),$ $\displaystyle
RT=(y^{2}-(z_{2}-\zeta)(\bar{\zeta}-z_{3}))$ (4.7)
An interesting feature of the above formulae is the emergence of the RT
geodesic. For instance, notice the appearance of the RT geodesic expression
($y^{2}-(z_{2}-\zeta)(\bar{\zeta}-z_{3})$) in the RHS of equations (4.3).
Thus these equations blow up on the RT surface. This means that the bulk
coordinates $(\zeta,\bar{\zeta},y)$ cover only the region between the
geodesic and the boundary. Thus they provide a natural set of coordinates for
the entanglement wedge associated to $N$. The RT geodesic also appears in the
expressions for the bulk counterparts of the $L_{n}$’s given in equation
(4.6). The fact that the bulk coordinates and the bulk extensions of the
$L_{n}$ ‘know’ about the RT geodesic is not surprising. It is simply a
reflection of the fact that the boundary $L_{n}$ are modular eigenmodes by
construction and thus have information about the boundary causal diamond of
$N$.
## 5 Discussion
In summary, we have constructed an infinite class of modular eigenmodes
($\mathbb{L}_{n}$) for the single interval in the vacuum of CFT2. These are
expressed as smeared integrals of the stress tensor components and thus exist
in any CFT2 111111Such smeared integrals of the stress tensor have appeared
in several different contexts recently, for instance in the study of light
ray operators [54]-[58] as well as in the context of the so-called dipolar
quantization of CFT2 as discussed in [59],[60]. We thank Bartek Czech for
bringing these works on the dipolar quantization to our attention.. Our
construction of these eigenmodes is intimately tied to the causal diamond of
$N$. This fact manifests itself in many of their interesting features. For
instance, this connection to the causal diamond shows up in the way the
$\mathbb{L}_{n}$ act on the OPE blocks. We showed that this action is
identical to the action of conformal generators on local primary fields in
CFT2. Coupled with the fact that the OPE blocks have a local description as
fields living on the k-space, which is the space of causal diamonds of the
CFT2, this hints at the possibility of finding an equivalent effective
description of the CFT on k-space. We argued that on this k-space, the
$\mathbb{L}_{n}$ seem to generate 1d diffeomorphisms along two independent
directions. Unfortunately, our discussions are only at a kinematic level, and
it would be nice if these ideas could be made more concrete.
The connection to the causal diamonds is even more transparent in the
subclass of eigenmodes corresponding to $n=0,\pm 1$. In fact, as we showed,
these generators are essentially linear combinations of the modular
Hamiltonians of the causal diamonds for the subregions $N^{\prime}$ and
$N^{\prime\prime}$ of $N$. We further showed how this structure of the g-MVA
realizes modular inclusions within this setup. The half-sided modular
inclusion has been studied previously in some examples, like certain regions
on the null plane in higher dimensions [47], and it has been used to show
that in certain special situations the black hole interior can be
reconstructed from the algebra of the exterior region [48]. In our example,
the inclusion structure emerges quite naturally due to the rich symmetry
structure of the vacuum121212As a testing ground for such algebraic
structures or modular properties, it is always very useful to study them in
quantum mechanical systems having finite-dimensional Hilbert spaces [2]. With
this motivation in mind, in appendix A we study inclusion properties in an
example of a finite-dimensional Hilbert space where the inclusion algebras
are satisfied trivially..
Finally we also discussed the action of the bulk counterparts of the
$\mathbb{L}_{n}$ on the bulk spacetime. We saw that these dual descriptions
already ‘know’ about the bulk RT geodesic, which is again a reflection of the
close connection of our construction with the causal diamond.
A natural question that arises is whether one can extend this construction of
the algebra and its representation beyond the vacuum in CFT, for at least
some class of excited states131313For locally excited states in CFT2 which
are connected to the vacuum by a local conformal transformation, we do have a
local expression for the modular Hamiltonian of a single interval [19],
[45]. However, we expect this case to be almost identical to the vacuum
case.. Perhaps a more tractable direction to pursue would be to find the
extension of such algebras for disconnected multi-interval cases, where
analytic expressions for the modular Hamiltonian are known [43],[44].
141414Recently, analytic expressions for the modular Hamiltonian of intervals
in BMS-invariant field theories have been discussed, for which one could
attempt a similar construction [46].
We hope to return to some of these questions in the near future.
Acknowledgment: SD would like to acknowledge the support provided by the Max
Planck Partner Group grant MAXPLA/PHY/2018577. The work of SP and BR was
supported by a Junior Research Fellowship (JRF) from UGC.
## Appendix A Modular inclusion in CFT2 and finite dimensional system
### A.1 Modular inclusion
The Reeh-Schlieder theorem states 151515The readers may look at [2] for a
recent review of algebraic QFT and modular theory. that an algebra
$\mathcal{A}_{V}$, made out of bounded operators restricted to an arbitrarily
small open set $V$ in (flat) spacetime, is enough to generate (by acting on
the vacuum) the full vacuum sector of the Hilbert space. Due to this property
the vacuum state is said to be ‘cyclic’ w.r.t. the algebra of operators
$\mathcal{A}_{V}$ in that small open region $V$. Incorporating
microcausality, an obvious conclusion can be drawn that such a state is also
separating w.r.t. $\mathcal{A}_{V}$, which means that there exists no
operator in $V$ which annihilates the vacuum. In algebraic QFT, some useful
quantum information quantities, like the relative entropy and the total
modular Hamiltonian, can be rigorously constructed for such cyclic and
separating states of the QFT Hilbert space. In particular, a self-adjoint
‘modular operator’ $\Delta$ ($=e^{-K}$, where $K$ is the total modular
Hamiltonian) and an antiunitary ‘modular conjugation’ operator $J$ are the
central objects of Tomita-Takesaki theory, which lies at the foundation of
modular theory or modular algebra. The main result of the Tomita-Takesaki
theory is that $\Delta$ defines an automorphism which maps the algebra of a
region to itself, while $J$ defines an isomorphism from the algebra to its
commutant $\mathcal{A}^{\prime}_{V}$.
$\displaystyle\Delta^{is}A\Delta^{-is}=\tilde{A}\;;\;JAJ=A^{\prime},\;\;\;(A,\tilde{A})\in\mathcal{A}_{V},\;A^{\prime}\in\mathcal{A}^{\prime}_{V},\;\forall
s\in\mathbb{R}$ (A.1)
$\Delta$ generates a modular flow w.r.t. the total modular Hamiltonian $K$.
Here, the algebra $\mathcal{A}_{V}$ is taken to be a von Neumann algebra such
that $\mathcal{A}_{V}=\hat{\mathcal{A}}_{V}$, where $\hat{\mathcal{A}}_{V}$
is the algebra of the causal domain of the region $V$.
Within the context of the Tomita-Takesaki theory, a notion of inclusion of
algebras has been discussed - the so-called ‘half-sided modular inclusion’
(hsmi) [49]-[51]. Take two von Neumann algebras of observables $M,\tilde{M}$,
such that the vacuum $\Omega$ is a common cyclic and separating state for
both of them. We can define $\tilde{M}\subset M$ as a +hsmi if it satisfies
the condition that $\tilde{M}$ is preserved under the modular flow of $M$,
i.e.
Here $\Delta_{M},\Delta_{\tilde{M}}$ are the modular operators of $M,\tilde{M}$
(the corresponding modular conjugation operators are
$J_{M},J_{\tilde{M}}$; however, in the present context, we won’t need the
properties of the $J$s and only focus on the modular flows generated by the $\Delta$s;
for further details, we refer the reader to
[50],[51]). Once the above condition is satisfied, one can construct a one-
parameter unitary group $U(a)$ on the Hilbert space such that,
parameter unitary group $U(a)$ on the Hilbert space such that,
$\displaystyle
U(a)=e^{iap};\;p\equiv\frac{1}{2\pi}(\ln\Delta_{\tilde{M}}-\ln\Delta_{M})\geq
0;\;\forall a\in\mathbb{R}$ (A.3)
The generator $p$ is a positive operator. In such settings, the following
properties hold:
$\displaystyle\Delta_{M}^{it}U(a)\Delta_{M}^{-it}$
$\displaystyle=\Delta_{\tilde{M}}^{it}U(a)\Delta_{\tilde{M}}^{-it}=U(e^{-2\pi
t}a);\;\forall a,t\in\mathbb{R}$ (A.4) $\displaystyle\Delta_{\tilde{M}}^{it}$
$\displaystyle=U(1)\Delta^{it}_{M}U(-1);\;\forall t\in\mathbb{R}$ (A.5)
$\displaystyle\tilde{M}$ $\displaystyle=U(1)MU(-1)$ (A.6)
$\displaystyle\Delta_{M}^{it}\Delta_{\tilde{M}}^{-it}$
$\displaystyle=e^{i\left(-1+e^{-2\pi t}\right)p}$ (A.7)
One can see that the first two relations are solved by
$\displaystyle[K_{M},K_{\tilde{M}}]=2\pi
ip;\;K_{M,\tilde{M}}=-\ln\Delta_{M,\tilde{M}}$ (A.8)
The last two relations follow from the first two. Hence, if $\tilde{M}\subset
M$ is a modular inclusion, then (A.8) must be satisfied. When the condition of
inclusion (A.2) is satisfied for $t\leq 0$, the inclusion is called a -hsmi. In that case,
the commutation relation of the modular Hamiltonians is
$[K_{M},K_{\tilde{M}}]=-2\pi ip$. Using these $\pm$hsmi, a representation of
$SL(2,\mathbb{R})$ can be constructed in the following way [52],[53]
Theorem:
Let $M,M_{1},M_{2}$ be von Neumann algebras on a Hilbert space $\mathcal{H}$,
and let $\Omega\in\mathcal{H}$ be a common cyclic and separating state. Assume:
* •
$M_{1}\subset M$ is a -hsmi
* •
$M_{2}\subset M$ is a +hsmi
* •
$M_{2}\subset M^{\prime}_{1}$ is a -hsmi
(where $M^{\prime}_{1}$ is the commutant of $M_{1}$). Then
$\Delta_{M}^{it},\Delta_{M_{1}}^{ir},\Delta^{is}_{M_{2}}$ ,
$t,r,s\in\mathbb{R}$ generate a representation of $SL(2,\mathbb{R})$ where,
$\displaystyle
P\equiv\frac{1}{2\pi}\left(\ln\Delta_{M_{1}}-\ln\Delta_{M}\right);\;K\equiv\frac{1}{2\pi}\left(\ln\Delta_{M_{2}}-\ln\Delta_{M}\right);\;D\equiv\frac{1}{2\pi}\ln\Delta_{M}$
(A.9)
In this way, the algebraic structure of modular inclusion provides an
interesting way to construct the chiral part of the 2D conformal algebra.
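As a reminder of the target algebra, generators $P$, $K$, $D$ of the type appearing in (A.9) close into the standard $sl(2,\mathbb{R})$ commutation relations; in one common convention, $[D,P]=-P$, $[D,K]=K$, $[P,K]=2D$. The following SymPy sketch checks these relations in the familiar one-dimensional differential-operator representation $P=\partial_x$, $D=x\partial_x$, $K=x^{2}\partial_x$ (an illustration of the abstract algebra, not the modular construction itself):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# One-dimensional differential-operator representation of sl(2,R):
# translations P, dilations D, special conformal transformations K.
P = lambda g: sp.diff(g, x)
D = lambda g: x * sp.diff(g, x)
K = lambda g: x**2 * sp.diff(g, x)

# Commutator of two operators acting on a test function g.
comm = lambda A, B, g: sp.simplify(A(B(g)) - B(A(g)))

assert sp.simplify(comm(D, P, f) + P(f)) == 0          # [D, P] = -P
assert sp.simplify(comm(D, K, f) - K(f)) == 0          # [D, K] = +K
assert sp.simplify(comm(P, K, f) - 2 * D(f)) == 0      # [P, K] = 2D
```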
### A.2 Modular inclusion in vacuum CFT2
Within the setup of section 3, we can explicitly see that (3.16) and (3.19)
exhibit both $\pm$hsmi structures. Note that (3.16) is constructed out of
$P_{D}$, which is not the modular Hamiltonian; nevertheless, the $P_{D}$s of
$N$, $N^{\prime}$, $N^{\prime\prime}$ satisfy all the criteria of hsmi.
Hence, in the CFT2 vacuum, we define two types of inclusion structure, which we
call ‘$K$-inclusion’ and ‘$P_{D}$-inclusion’.
$P_{D}$-inclusion
Let us first consider the two nested diamonds $N^{\prime}$ and $N$ where
$N^{\prime}\subset N$. Since $P_{D}$ generates a geometric flow from the left
tip to the right tip of a diamond, the algebra of the smaller nested diamond
$N^{\prime}$ remains invariant under the flow of $P_{D}$ of the larger diamond
$N$, i.e. $e^{-iP_{D,N}t}N^{\prime}e^{iP_{D,N}t}\subset N^{\prime}$. In such a
case, we call the inclusion $N^{\prime}\subset N$ the ‘$P_{D}$-inclusion’,
which satisfies (A.2). From the algebra of (3.16) we have seen that
$P_{D,N^{\prime}}$ and $P_{D,N}$ indeed satisfy the half-sided modular inclusion
algebra, namely $[P_{D,N},P_{D,N^{\prime}}]=2\pi
i(P_{D,N^{\prime}}-P_{D,N})$. Since $P_{D}$ is self-adjoint, we can
construct a self-adjoint $p\equiv P_{D,N^{\prime}}-P_{D,N}$. Using $U(a)$, we can check
the following inclusion property $N^{\prime}=U(1)NU(-1)$, where
$U(a)=e^{iap}$.
Here in the spacetime representation,
$\displaystyle
p_{(N,N^{\prime})}=\frac{z_{31}(z_{2}-\zeta)^{2}}{z_{12}z_{32}}\partial_{\zeta}+\frac{z_{31}(z_{2}-\bar{\zeta})^{2}}{z_{12}z_{32}}\partial_{\bar{\zeta}}$
(A.10)
Hence the action of $U(1)$ on spacetime point $(\zeta,\bar{\zeta})$ gives
$\displaystyle e^{p_{(N,N^{\prime})}}(\zeta,\bar{\zeta})=\left(\frac{\alpha
z_{2}(\zeta-z_{2})+\zeta}{\alpha(\zeta-z_{2})+1},\frac{\alpha
z_{2}(\bar{\zeta}-z_{2})+\bar{\zeta}}{\alpha(\bar{\zeta}-z_{2})+1}\right)\;;\;\alpha=\frac{z_{31}}{z_{12}z_{32}}$
(A.11)
Here this particular $SL(2,\mathbb{R})$ transformation
$\zeta\rightarrow\frac{(\alpha z_{2}+1)\zeta-\alpha
z_{2}^{2}}{\alpha\zeta+1-\alpha z_{2}}$ gives the map from the larger diamond
$N$ to the smaller diamond $N^{\prime}$. For instance, the left tip $(z_{3},z_{3})$ maps to
$(z_{1},z_{1})$, the upper tip $(z_{2},z_{3})$ maps to that of $N^{\prime}$, i.e.
$(z_{2},z_{1})$, and so on. Using the reverse transformation $U(-1)$ one can
construct $N$ from $N^{\prime}$. Similarly, we can treat
$N^{\prime\prime}\subset N$ as a -half-sided $P_{D}$-inclusion, as the
commutator picks up an overall minus sign. In the same way, one can define
$\displaystyle
p_{(N,N^{\prime\prime})}=\frac{z_{12}(z_{3}-\zeta)^{2}}{z_{32}z_{31}}\partial_{\zeta}+\frac{z_{12}(z_{3}-\bar{\zeta})^{2}}{z_{32}z_{31}}\partial_{\bar{\zeta}}$
(A.12)
Here the action of $U(-1)$ is given by
$\displaystyle
e^{-p_{(N,N^{\prime\prime})}}(\zeta,\bar{\zeta})=\left(\frac{(\beta
z_{3}-1)\zeta-\beta z_{3}^{2}}{\beta\zeta-\beta z_{3}-1},\frac{(\beta
z_{3}-1)\bar{\zeta}-\beta z_{3}^{2}}{\beta\bar{\zeta}-\beta
z_{3}-1}\right)\;;\;\beta=\frac{z_{12}}{z_{32}z_{31}}$ (A.13)
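The tip-mapping properties of the Möbius maps (A.11) and (A.13) can be verified numerically; the following sketch uses arbitrary sample tips and assumes the convention $z_{ij}\equiv z_{i}-z_{j}$, consistent with the surrounding formulas:

```python
# Numerical sanity check of the Mobius maps (A.11) and (A.13).
# Assumptions: z_ij = z_i - z_j, with arbitrarily chosen sample tips.
z1, z2, z3 = 0.5, 2.0, -1.0
z12, z31, z32 = z1 - z2, z3 - z1, z3 - z2

alpha = z31 / (z12 * z32)
beta = z12 / (z32 * z31)

def U1(zeta):
    """Action of U(1) in (A.11): maps the diamond N to N'."""
    return (alpha * z2 * (zeta - z2) + zeta) / (alpha * (zeta - z2) + 1)

def Um1(zeta):
    """Action of U(-1) in (A.13): maps the diamond N to N''."""
    return ((beta * z3 - 1) * zeta - beta * z3**2) / (beta * zeta - beta * z3 - 1)

# (A.11): the left tip z3 maps to z1, while the right tip z2 is fixed.
assert abs(U1(z3) - z1) < 1e-12 and abs(U1(z2) - z2) < 1e-12
# (A.13): z2 maps to z1, while z3 remains unchanged.
assert abs(Um1(z2) - z1) < 1e-12 and abs(Um1(z3) - z3) < 1e-12
```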
In the map (A.13), $z_{2}\rightarrow z_{1}$ while $z_{3}$ remains unchanged, and thus it
transforms $N$ to $N^{\prime\prime}$. Hence, using $p_{(N,N^{\prime})}$ and
$p_{(N,N^{\prime\prime})}$ consecutively, we can map $N^{\prime}$ to
$N^{\prime\prime}$ and vice versa. In this way, the $P_{D}$-inclusion gives a
natural way to map between diamonds with structures like $N$,
$N^{\prime}$, $N^{\prime\prime}$.
$K$-inclusion
Let us look at another set of algebras, described in (3.19), which provides
a further notion of modular inclusion, which we call the ‘$K$-modular inclusion’. It
consists of three diamonds $N$, $U$, $L$ such that $U$, $L\subset N$. Since
the total modular Hamiltonian $K$ generates a flow from the lower to the upper tip,
the algebras of $U$ and $L$ are left unchanged under the flow of $K^{N}$, i.e.
$e^{-iK^{N}t}(U,L)e^{iK^{N}t}\subset(U,L)$. In analogy with the
$P_{D}$-inclusion, here we can define a self-adjoint $p\equiv K^{N}-K^{U,L}$.
For instance, considering the inclusion $U\subset N$, we have
$\displaystyle p_{(N,U)}=\frac{z_{12}(z_{3}-\zeta)^{2}}{z_{32}z_{31}}\partial_{\zeta}+\frac{z_{13}(z_{2}-\bar{\zeta})^{2}}{z_{12}z_{32}}\partial_{\bar{\zeta}}$
(A.14)
Hence the action of $U(1)$ gives,
$\displaystyle e^{p_{(N,U)}}(\zeta,\bar{\zeta})=\left(\frac{(\beta
z_{3}-1)\zeta-\beta z_{3}^{2}}{\beta\zeta-\beta z_{3}-1},\frac{\alpha
z_{2}(\bar{\zeta}-z_{2})+\bar{\zeta}}{\alpha(\bar{\zeta}-z_{2})+1}\right)$
(A.15)
In this map, one can see that the left tip of $N$, i.e. $(z_{3},z_{3})$, maps to the
left tip of $U$, i.e. $(z_{3},z_{1})$; the right tip $(z_{2},z_{2})$ of $N$ maps
to that of $U$, i.e. $(z_{1},z_{2})$; the lower tip $(z_{2},z_{3})$ maps to
$(z_{1},z_{1})$; and the upper tip remains unchanged for both diamonds. In
a similar fashion, we can obtain the map from $N$ to $L$ using the -half-sided
$K$-inclusion of $L\subset N$.
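The tip mapping claimed for (A.15) can likewise be checked numerically; the combined map acts with the holomorphic factor of (A.13) and the antiholomorphic factor of (A.11). Sample tips are again an assumption, with $z_{ij}\equiv z_{i}-z_{j}$:

```python
# Numerical check of the tip mapping of the combined map (A.15).
# Assumptions: z_ij = z_i - z_j, with arbitrarily chosen sample tips.
z1, z2, z3 = 0.5, 2.0, -1.0
z12, z31, z32 = z1 - z2, z3 - z1, z3 - z2
alpha = z31 / (z12 * z32)
beta = z12 / (z32 * z31)

def holo(z):   # holomorphic factor of (A.15), same form as (A.13)
    return ((beta * z3 - 1) * z - beta * z3**2) / (beta * z - beta * z3 - 1)

def antiholo(zb):  # antiholomorphic factor of (A.15), same form as (A.11)
    return (alpha * z2 * (zb - z2) + zb) / (alpha * (zb - z2) + 1)

tip = lambda z, zb: (holo(z), antiholo(zb))

# Left tip (z3,z3) -> (z3,z1); right tip (z2,z2) -> (z1,z2);
# lower tip (z2,z3) -> (z1,z1); upper tip (z3,z2) unchanged.
checks = [(tip(z3, z3), (z3, z1)), (tip(z2, z2), (z1, z2)),
          (tip(z2, z3), (z1, z1)), (tip(z3, z2), (z3, z2))]
assert all(abs(a - b) < 1e-12 for got, want in checks
           for a, b in zip(got, want))
```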
Both the $K$-inclusion and the $P_{D}$-inclusion, of the forms (3.19) and (3.16),
satisfy the $SL(2,\mathbb{R})$ algebra which we described above as a theorem.
Using the fact that the modular generators of any diamond can be constructed
from the modular algebra of $N^{\prime}$, $N$, $N^{\prime\prime}$, together with
the above-mentioned $K$- and $P_{D}$-inclusions, we can now reproduce all causal
diamonds and the fields on them from the modular conformal generators.
### A.3 Modular inclusion in finite dimensional Hilbert space
Let us consider a finite-dimensional quantum system and divide it into four
subsystems $A,A^{\prime},B$ and $B^{\prime}$, such that the dimensions of the
subsystems are related in the following way:
$\displaystyle H_{tot}=H_{A}\otimes H_{A^{\prime}}\otimes H_{B}\otimes
H_{B^{\prime}};\;d_{A}=d_{A^{\prime}}=N,d_{B}=d_{B^{\prime}}=N^{\prime}.$
(A.16)
Without any loss of generality, we also assume that the total Hilbert space
can be factorized as $H_{tot}=H_{AA^{\prime}}\otimes H_{BB^{\prime}}$, such
that there exist state vectors $\ket{\psi}\in H_{tot}$, $\ket{\phi}\in
H_{AA^{\prime}}$ and $\ket{\chi}\in H_{BB^{\prime}}$ which satisfy
$\ket{\psi}=\ket{\phi}_{AA^{\prime}}\otimes\ket{\chi}_{BB^{\prime}}$
We will first show that for such construction of the state $\ket{\psi}$, the
modular inclusion criterion (A.2) will be automatically satisfied. Here we
take $M$ to be the system $AB$ and $\tilde{M}$ to be $A$. To show this, we
first define the corresponding density matrices and reduced density matrices
as follows:
$\displaystyle\rho_{\psi}=\ket{\phi}\bra{\phi}\otimes\ket{\chi}\bra{\chi}$
(A.17)
$\displaystyle\rho_{\\!{}_{AB}}=tr_{\\!{}_{A^{\prime}B^{\prime}}}\rho=tr_{A^{\prime}}\ket{\phi}\bra{\phi}\otimes
tr_{B^{\prime}}\ket{\chi}\bra{\chi}$ (A.18)
$\displaystyle\rho_{\\!{}_{A^{\prime}B^{\prime}}}=tr_{\\!{}_{AB}}\rho=tr_{A}\ket{\phi}\bra{\phi}\otimes
tr_{B}\ket{\chi}\bra{\chi}$ (A.19)
$\displaystyle\rho_{\\!{}_{A}}=tr_{\\!{}_{A^{\prime}BB^{\prime}}}\rho=tr_{A^{\prime}}\ket{\phi}\bra{\phi}$
(A.20)
$\displaystyle\rho_{\\!{}_{A^{\complement}}}=tr_{\\!{}_{A}}\rho=tr_{A}\ket{\phi}\bra{\phi}\otimes\ket{\chi}\bra{\chi}$
(A.21)
The total modular Hamiltonians $K_{M,\tilde{M}}$ or the modular operators
$\Delta_{M,\tilde{M}}$ for the regions $M$ and $\tilde{M}$ are defined as:
$\Delta_{M}\equiv\Delta_{AB}=\rho_{\\!{}_{AB}}\otimes{\rho_{\\!{}_{A^{\prime}B^{\prime}}}}^{-1}$
(A.22)
$\Delta_{\tilde{M}}\equiv\Delta_{A}=\rho_{\\!{}_{A}}\otimes{\rho_{\\!{}_{A^{\complement}}}}^{-1}$
(A.23)
To begin with, we use the Schmidt decompositions of $\ket{\phi}_{AA^{\prime}}$ and
$\ket{\chi}_{BB^{\prime}}$ as follows:
$\ket{\phi}_{AA^{\prime}}=\sum\limits_{i=1}^{N}C_{i}\ket{i}_{A}\otimes\ket{i}_{A^{\prime}}$
(A.24)
and
$\ket{\chi}_{BB^{\prime}}=\sum\limits_{k=1}^{N^{\prime}}D_{k}\ket{k}_{B}\otimes\ket{k}_{B^{\prime}}$
(A.25)
Hence, in this basis, we get
$\rho_{\\!{}_{AB}}=\sum\limits_{i,k=1}^{N,N^{\prime}}|C_{i}|^{2}|D_{k}|^{2}\ket{i}_{A}\ket{k}_{B}\bra{i}_{A}\bra{k}_{B}$
$\rho_{\\!{}_{A^{\prime}B^{\prime}}}=\sum\limits_{i,k}|C_{i}|^{2}|D_{k}|^{2}\ket{i}_{A^{\prime}}\ket{k}_{B^{\prime}}\bra{i}_{A^{\prime}}\bra{k}_{B^{\prime}}$
Using the definition of (A.22), we get:
${\Delta_{AB}}^{it}=\sum\limits_{i,j,k,l}\left(\frac{|C_{i}|^{2}|D_{j}|^{2}}{|C_{k}|^{2}|D_{l}|^{2}}\right)^{it}(\ket{i}_{A}\ket{j}_{B}\bra{i}_{A}\bra{j}_{B})(\ket{k}_{A^{\prime}}\ket{l}_{B^{\prime}}\bra{k}_{A^{\prime}}\bra{l}_{B^{\prime}}).$
(A.26)
To show the inclusion condition (A.2), we need to define an operator which has
support only in the region A, as :
$\sum_{m,n}\mathcal{O}_{m,n}\ket{m}_{A}\bra{n}_{A}\otimes\mathbb{I}_{\\!{}_{A^{\prime}}}\otimes\mathbb{I}_{\\!{}_{B}}\otimes\mathbb{I}_{\\!{}_{B^{\prime}}}$
(A.27)
Using the definition of $\Delta$, it is straightforward to show that,
${\Delta_{AB}}^{-it}(\sum_{m,n}\mathcal{O}_{m,n}\ket{m}_{A}\bra{n}_{A}\otimes\mathbb{I}_{\\!{}_{A^{\prime}}}\otimes\mathbb{I}_{\\!{}_{B}}\otimes\mathbb{I}_{\\!{}_{B^{\prime}}}){\Delta_{AB}}^{it}=\sum_{m,n}\left(\frac{|C_{m}|^{2}}{|C_{n}|^{2}}\right)^{it}\mathcal{O}_{m,n}\ket{m}_{A}\bra{n}_{A}\otimes\mathbb{I}_{\\!{}_{A^{\prime}}}\otimes\mathbb{I}_{\\!{}_{B}}\otimes\mathbb{I}_{\\!{}_{B^{\prime}}}$
(A.28)
From the above equation, it is clear that the state $\ket{\psi}$ satisfies
equation (A.2). With this, we want to check explicitly whether it satisfies the
condition (A.8). To do so, we need to evaluate
$\rho_{\\!{}_{A^{\complement}}}$. However,
$\rho_{\\!{}_{A^{\complement}}}^{-1}$ may not be defined; since we are only
computing $\ln\Delta_{A}$, this won’t matter. We can write,
$\displaystyle\ln\Delta_{AB}=\ln\rho_{\\!{}_{AB}}\otimes{\mathbb{I}_{\\!{}_{A^{\prime}B^{\prime}}}}-\mathbb{I}_{\\!{}_{AB}}\otimes{\ln\rho_{\\!{}_{A^{\prime}B^{\prime}}}}$
(A.29)
$\displaystyle\ln\Delta_{A}=\ln\rho_{\\!{}_{A}}\otimes{\mathbb{I}_{\\!{}_{A^{\complement}}}}-\mathbb{I}_{\\!{}_{A}}\otimes{\ln\rho_{\\!{}_{A^{\complement}}}}$
(A.30)
To calculate the commutation, we act $\ln\Delta_{AB}$ and $\ln\Delta_{A}$
consecutively on a basis state
$\ket{i}_{A}\ket{j}_{A^{\prime}}\ket{i}_{B}\ket{j}_{B^{\prime}}$ . One can see
that,
$\ln\Delta_{AB}\ket{i}_{A}\ket{j}_{A^{\prime}}\ket{i}_{B}\ket{j}_{B^{\prime}}=0$
So,
$\bra{j^{\prime}}_{B^{\prime}}\bra{i^{\prime}}_{B}\bra{j^{\prime}}_{A^{\prime}}\bra{i^{\prime}}_{A}(\ln\Delta_{A}\ln\Delta_{AB})\ket{i}_{A}\ket{j}_{A^{\prime}}\ket{i}_{B}\ket{j}_{B^{\prime}}=0$
(A.31)
In the similar manner we can also check that,
$\ln\Delta_{A}\ket{i}_{A}\ket{j}_{A^{\prime}}\ket{i}_{B}\ket{j}_{B^{\prime}}=(\ln|C_{i}|^{2}-\ln|C_{j}|^{2})\ket{i}_{A}\ket{j}_{A^{\prime}}\ket{i}_{B}\ket{j}_{B^{\prime}}$
Since
$\ln\Delta_{AB}\ket{i}_{A}\ket{j}_{A^{\prime}}\ket{i}_{B}\ket{j}_{B^{\prime}}=0$,
it follows from above that,
$\bra{j^{\prime}}_{B^{\prime}}\bra{i^{\prime}}_{B}\bra{j^{\prime}}_{A^{\prime}}\bra{i^{\prime}}_{A}\ln\Delta_{AB}\ln\Delta_{A}\ket{i}_{A}\ket{j}_{A^{\prime}}\ket{i}_{B}\ket{j}_{B^{\prime}}=0$
(A.32)
So from the above equations, we finally get
$\bra{j^{\prime}}_{B^{\prime}}\bra{i^{\prime}}_{B}\bra{j^{\prime}}_{A^{\prime}}\bra{i^{\prime}}_{A}[\ln\Delta_{AB},\ln\Delta_{A}]\ket{i}_{A}\ket{j}_{A^{\prime}}\ket{i}_{B}\ket{j}_{B^{\prime}}=0$
(A.33)
Similarly, one can easily check that
$\bra{j^{\prime}}_{B^{\prime}}\bra{i^{\prime}}_{B}\bra{j^{\prime}}_{A^{\prime}}\bra{i^{\prime}}_{A}\left(\ln\Delta_{AB}-\ln\Delta_{A}\right)\ket{i}_{A}\ket{j}_{A^{\prime}}\ket{i}_{B}\ket{j}_{B^{\prime}}=0$
(A.34)
Therefore, in this example we get the desired inclusion property (since it holds
for any such basis state)
$[\ln\Delta_{AB},\ln\Delta_{A}]=\ln\Delta_{AB}-\ln\Delta_{A}$
Thus, modular inclusion also holds for such a finite-dimensional quantum system.
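The factorized-state construction above can be checked numerically. A minimal NumPy sketch (the dimensions, seed, and the basis ordering $H_{A}\otimes H_{A^{\prime}}\otimes H_{B}\otimes H_{B^{\prime}}$ are assumptions) verifying that the flow of $\Delta_{AB}$ keeps an operator supported on $A$ inside the algebra of $A$, cf. (A.2) and (A.28), and that the diagonal generators commute as in (A.33):

```python
import numpy as np

# Assumed sample dimensions N = 2, N' = 3, fixed random seed.
rng = np.random.default_rng(0)
N, Np = 2, 3

C = rng.random(N) + 0.1
C /= np.linalg.norm(C)   # Schmidt coefficients of |phi>_{AA'}
D = rng.random(Np) + 0.1
D /= np.linalg.norm(D)   # Schmidt coefficients of |chi>_{BB'}

# ln Delta_AB is diagonal in the Schmidt product basis |a,a',b,b'> with
# eigenvalue ln(|C_a|^2 |D_b|^2) - ln(|C_a'|^2 |D_b'|^2), cf. (A.26).
a, ap, b, bp = np.meshgrid(np.arange(N), np.arange(N),
                           np.arange(Np), np.arange(Np), indexing='ij')
lnD_AB = (np.log(C[a]**2 * D[b]**2) - np.log(C[ap]**2 * D[bp]**2)).ravel()
# Support-restricted ln Delta_A, matching the action computed in the text.
lnD_A = np.log(C[a]**2 / C[ap]**2).ravel()

t = 0.7
U = np.diag(np.exp(-1j * t * lnD_AB))    # Delta_AB^{-it}
Ud = np.diag(np.exp(1j * t * lnD_AB))    # Delta_AB^{+it}

# A random operator supported only on A: O_A (x) 1_{A'BB'}, cf. (A.27).
d_rest = N * Np * Np
O_A = rng.random((N, N)) + 1j * rng.random((N, N))
O = np.kron(O_A, np.eye(d_rest))

flowed = U @ O @ Ud

# The flowed operator is again of the form X (x) 1_{A'BB'}: the modular
# flow of Delta_AB keeps it inside the algebra of A, as in (A.2)/(A.28).
X = flowed[::d_rest, ::d_rest]
assert np.allclose(flowed, np.kron(X, np.eye(d_rest)))

# Both generators are diagonal in this basis, so their commutator
# vanishes, consistent with the matrix elements in (A.33).
comm = np.diag(lnD_AB) @ np.diag(lnD_A) - np.diag(lnD_A) @ np.diag(lnD_AB)
assert np.allclose(comm, 0)
```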
## Appendix B Commutation relation of modular generators and Virasoro algebra
Here we reproduce the Virasoro algebra (2.10) from the expression for the
$L_{n}$s of the form (2.7), using the commutation relations of the stress-
energy tensor. In CFT2, the $TT$ OPE takes the following form,
$\displaystyle
T(z)T(\omega)=\frac{c/2}{(z-\omega)^{4}}+\frac{2T(\omega)}{(z-\omega)^{2}}+\frac{\partial
T(\omega)}{z-\omega}+\text{regular terms}$ (B.1)
Using the Sokhotski-Plemelj formula, after analytically continuing to
lightcone coordinates via the $i\epsilon$ prescription, we get the following stress-
tensor commutators, which we need in order to evaluate the $L_{n}$ commutators.
$\displaystyle[T(\zeta),T(\omega)]=2\pi
i[-\frac{c}{12}\partial^{3}_{\omega}\delta(\omega-\zeta)+\delta(\omega-\zeta)\partial_{\omega}T(\omega)+2\partial_{\omega}\delta(\omega-\zeta)T(\omega)]$
(B.2) $\displaystyle[\bar{T}(\bar{\zeta}),\bar{T}(\bar{\omega})]=-2\pi
i[-\frac{c}{12}\partial^{3}_{\bar{\omega}}\delta(\bar{\omega}-\bar{\zeta})+\delta(\bar{\omega}-\bar{\zeta})\partial_{\bar{\omega}}\bar{T}(\bar{\omega})+2\partial_{\bar{\omega}}\delta(\bar{\omega}-\bar{\zeta})\bar{T}(\bar{\omega})]$
(B.3)
Inserting this into the commutator of $\mathbb{L}_{m}$, we have
$\displaystyle[\mathbb{L}_{m},\mathbb{L}_{n}]$
$\displaystyle=\int^{\infty}_{-\infty}d\zeta\int^{\infty}_{-\infty}d\omega\frac{(z_{2}-\zeta)^{-m+1}(\zeta-
z_{3})^{m+1}}{(z_{2}-z_{3})^{2}}\left(\frac{z_{21}}{z_{31}}\right)^{m+n}(z_{2}-\omega)^{-n+1}(\omega-
z_{3})^{n+1}$ $\displaystyle\times 2\pi
i[-\frac{c}{12}\partial^{3}_{\omega}\delta(\omega-\zeta)+\delta(\omega-\zeta)\partial_{\omega}T(\omega)+2\partial_{\omega}\delta(\omega-\zeta)T(\omega)]$
(B.4)
First, let us consider the last two terms of the $[T,T]$ commutator (ignoring
the $c$ term); we have
$\displaystyle\frac{1}{2\pi i}[\mathbb{L}_{m},\mathbb{L}_{n}]^{(1)}$
$\displaystyle=-\int^{\infty}_{-\infty}d\zeta\frac{(z_{2}-\zeta)^{-m-n+2}(\zeta-
z_{3})^{m+n+2}}{(z_{2}-z_{3})^{2}}\left(\frac{z_{21}}{z_{31}}\right)^{m+n}\partial_{\zeta}T(\zeta)+2\text{(T.D)}_{1}$
$\displaystyle-2\int^{\infty}_{-\infty}d\zeta\frac{(z_{2}-\zeta)^{-m+1}(\zeta-
z_{3})^{m+1}}{(z_{2}-z_{3})^{2}}\left(\frac{z_{21}}{z_{31}}\right)^{m+n}T(\zeta)$
$\displaystyle\times\left[(n-1)(z_{2}-\zeta)^{-n}(\zeta-
z_{3})^{n+1}+(n+1)(z_{2}-\zeta)^{-n+1}(\zeta-z_{3})^{n}\right]$
$\displaystyle=2\text{(T.D)}_{1}-\text{(T.D)}_{2}$
$\displaystyle+\int^{\infty}_{-\infty}d\zeta\frac{(z_{2}-\zeta)^{-m-n+1}(\zeta-
z_{3})^{m+n+1}}{(z_{2}-z_{3})^{2}}\left(\frac{z_{21}}{z_{31}}\right)^{m+n}\left[(m+n-2)(\zeta-
z_{3})+(m+n+2)(z_{2}-\zeta)\right]T(\zeta)$
$\displaystyle-\int^{\infty}_{-\infty}d\zeta\frac{(z_{2}-\zeta)^{-m-n+1}(\zeta-
z_{3})^{m+n+1}}{(z_{2}-z_{3})^{2}}\left(\frac{z_{21}}{z_{31}}\right)^{m+n}\left[(2n-2)(\zeta-
z_{3})+(2n+2)(z_{2}-\zeta)\right]T(\zeta)$
$\displaystyle=2\text{(T.D)}_{1}-\text{(T.D)}_{2}+(m-n)\int^{\infty}_{-\infty}d\zeta\frac{(z_{2}-\zeta)^{-m-n+1}(\zeta-
z_{3})^{m+n+1}}{z_{2}-z_{3}}\left(\frac{z_{21}}{z_{31}}\right)^{m+n}T(\zeta)$
(B.5)
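The cancellation between the two bracketed factors in the intermediate lines of (B.5) can be verified symbolically; a small SymPy sketch:

```python
import sympy as sp

m, n, z2, z3, zeta = sp.symbols('m n z2 z3 zeta')

# The two bracketed factors appearing in the intermediate lines of (B.5):
first = (m + n - 2) * (zeta - z3) + (m + n + 2) * (z2 - zeta)
second = (2 * n - 2) * (zeta - z3) + (2 * n + 2) * (z2 - zeta)

# Their difference collapses to (m - n)(z2 - z3), which cancels one power
# of (z2 - z3) in the denominator and produces the (m - n) L_{m+n} term.
assert sp.simplify((first - second) - (m - n) * (z2 - z3)) == 0
```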
We can identify the last term as $(m-n)\mathbb{L}_{m+n}$. Here
$(\text{T.D})_{1,2}$ are two total derivative terms coming from the intermediate
steps of the partial integration. Explicitly,
$\displaystyle\text{(T.D)}_{1}=\int^{\infty}_{-\infty}d\zeta\frac{(z_{2}-\zeta)^{-m+1}(\zeta-
z_{3})^{m+1}}{(z_{2}-z_{3})^{2}}\left(\frac{z_{21}}{z_{31}}\right)^{m+n}\bigg{[}(z_{2}-\omega)^{-n+1}(\omega-
z_{3})^{n+1}\delta(\omega-\zeta)T(\omega)\bigg{]}^{\omega=\infty}_{\omega=-\infty}$
(B.6)
Due to the Dirac delta function, the boundary term inside the bracket
vanishes; hence $(\text{T.D})_{1}=0$. The other term is,
$\displaystyle\text{(T.D)}_{2}=\left(\frac{z_{21}}{z_{31}}\right)^{m+n}\left[\frac{(z_{2}-\zeta)^{-m-n+2}(\zeta-
z_{3})^{m+n+2}}{(z_{2}-z_{3})^{2}}T(\zeta)\right]^{\zeta=\infty}_{\zeta=-\infty}$
(B.7)
To analyze this term, we need to look at the behavior of the stress tensor
near spacetime infinity. From the transformation property of the stress tensor we
know $T^{\prime}(\zeta^{\prime})=\left(\frac{\partial\zeta^{\prime}}{\partial
\zeta}\right)^{-2}T(\zeta)+$ Schwarzian derivative term. We choose a global
transformation (so that the Schwarzian derivative term drops out)
$\zeta^{\prime}=\frac{a\zeta+b}{c\zeta+d}$ at
$\zeta=\zeta_{0}=-\frac{d}{c}+\epsilon$, such that
$\zeta^{\prime}_{0}\sim\frac{1}{\epsilon}$. For this choice, the stress tensor
transforms as
$\displaystyle
T^{\prime}(\zeta^{\prime}_{0})=(c\zeta_{0}+d)^{4}T(\zeta_{0})\sim\frac{1}{\epsilon^{4}}T(\zeta_{0})$
(B.8)
Hence, for $\epsilon\rightarrow 0$, we get the behavior of the stress tensor
near infinity as $T(\zeta)|_{\zeta\rightarrow\infty}\sim\frac{1}{\zeta^{4}}$.
Using this, if we look at the term $(T.D)_{2}$, we get,
$\displaystyle\text{(T.D)}_{2}=\left(\frac{z_{21}}{z_{31}}\right)^{m+n}\lim_{\Lambda\rightarrow\infty}\left[\frac{(z_{2}-\Lambda)^{-m-n+2}(\Lambda-
z_{3})^{m+n+2}}{(z_{2}-z_{3})^{2}}\frac{1}{\Lambda^{4}}-\Big{(}\Lambda\rightarrow-\Lambda\Big{)}\right]=0$
(B.9)
Hence, both total derivative terms vanish. Let us now consider the
contribution coming from the central charge ($c$) part of the stress-tensor
commutator.
$\displaystyle\frac{1}{2\pi i}[\mathbb{L}_{m},\mathbb{L}_{n}]^{(2)}$
$\displaystyle=-\frac{c}{12}\left(\frac{z_{21}}{z_{31}}\right)^{m+n}\int^{\infty}_{-\infty}d\zeta\int^{\infty}_{-\infty}d\omega\frac{(z_{2}-\zeta)^{-m+1}(\zeta-
z_{3})^{m+1}}{(z_{2}-z_{3})^{2}}(z_{2}-\omega)^{-n+1}(\omega-
z_{3})^{n+1}\partial^{3}_{\omega}\delta(\omega-\zeta)$ (B.10)
Let us denote the constant term
$\frac{c}{12}\left(\frac{z_{21}}{z_{31}}\right)^{m+n}\frac{1}{(z_{2}-z_{3})^{2}}\equiv
A$. In a similar fashion of the previous calculation, after some simple
algebraic steps the final integration is of the following form
$\displaystyle\frac{1}{2\pi i}[\mathbb{L}_{m},\mathbb{L}_{n}]^{(2)}$
$\displaystyle=\text{(T.D)}_{3}+\text{(T.D)}_{4}+\text{(T.D)}_{5}+n(n^{2}-1)A(z_{2}-z_{3})^{3}\int^{\infty}_{-\infty}d\zeta(z_{2}-\zeta)^{-m-n-1}(\zeta-
z_{3})^{m+n-1}$ (B.11)
Here, the total derivative terms $(\text{T.D})_{3,4,5}$ vanish due to the
presence of the Dirac delta function and its derivatives, as argued
before. After carefully choosing a contour, we get the final result of the
complex integration as (we choose $\text{Re}[z_{2}]>0,\text{Re}[z_{3}]<0$)
$\displaystyle\frac{1}{2\pi
i}[\mathbb{L}_{m},\mathbb{L}_{n}]^{(2)}=\frac{c}{12}n(n^{2}-1)\left(\frac{z_{21}}{z_{31}}\right)^{m+n}\frac{\left(\frac{-z_{2}}{z_{2}}\right)^{m+n}-\left(\frac{-z_{3}}{z_{3}}\right)^{m+n}}{m+n};$
(B.12)
This term vanishes for any nonzero integer $m+n$. To extract the
contribution for $m+n=0$, we can perform an analytic continuation by setting
$m+n=\epsilon$ and taking $\epsilon\rightarrow 0$. This gives,
$\displaystyle\frac{1}{2\pi
i}[\mathbb{L}_{m},\mathbb{L}_{n}]^{(2)}=\frac{c}{12}n(n^{2}-1)\lim_{\epsilon\rightarrow
0}\left(\frac{z_{21}}{z_{31}}\right)^{\epsilon}\frac{(-1)^{\epsilon}-(-1)^{-\epsilon}}{\epsilon}=\frac{c}{12}n(n^{2}-1)2\pi
i$ (B.13)
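The analytic continuation in this last step can be checked symbolically; the identification $(-1)^{\pm\epsilon}=e^{\pm i\pi\epsilon}$ below reflects the branch choice stated above:

```python
import sympy as sp

eps = sp.symbols('epsilon', real=True)

# With Re[z2] > 0 and Re[z3] < 0, the two (-1)^(+-eps) factors in (B.12)
# pick up opposite branches: (-1)^(+-eps) = exp(+-i*pi*eps).
expr = sp.simplify((sp.exp(sp.I * sp.pi * eps)
                    - sp.exp(-sp.I * sp.pi * eps)) / eps)

# The epsilon -> 0 limit reproduces the factor 2*pi*i quoted in (B.13).
assert sp.limit(expr, eps, 0) == 2 * sp.pi * sp.I
```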
Hence, combining $[\mathbb{L}_{m},\mathbb{L}_{n}]^{(1)}$ and
$[\mathbb{L}_{m},\mathbb{L}_{n}]^{(2)}$, we finally have
$\displaystyle[\mathbb{L}_{m},\mathbb{L}_{n}]=(m-n)\mathbb{L}_{m+n}+\frac{c}{12}n(n^{2}-1)\delta_{m+n,0}$
(B.14)
Here, the $\mathbb{L}_{m,n}$ have been rescaled as
$\mathbb{L}_{m,n}\rightarrow\frac{1}{2\pi i}\mathbb{L}_{m,n}$.
## References
* [1] R. Haag, “Local quantum physics: Fields, particles, algebras,”
* [2] E. Witten, “APS Medal for Exceptional Achievement in Research: Invited article on entanglement properties of quantum field theory,” Rev. Mod. Phys. 90, no.4, 045003 (2018)
* [3] H. Casini, “Relative entropy and the Bekenstein bound,” Class. Quant. Grav. 25, 205021 (2008)
* [4] R. Bousso, H. Casini, Z. Fisher and J. Maldacena, “Proof of a Quantum Bousso Bound,” Phys. Rev. D 90, no.4, 044002 (2014)
* [5] R. Bousso, H. Casini, Z. Fisher and J. Maldacena, “Entropy on a null surface for interacting quantum field theories and the Bousso bound,” Phys. Rev. D 91, no.8, 084030 (2015)
* [6] H. Casini and M. Huerta, “A Finite entanglement entropy and the c-theorem,” Phys. Lett. B 600, 142-150 (2004)
* [7] H. Casini and M. Huerta, “On the RG running of the entanglement entropy of a circle,” Phys. Rev. D 85, 125016 (2012)
* [8] H. Casini, E. Teste and G. Torroba, “Relative entropy and the RG flow,” JHEP 03, 089 (2017)
* [9] H. Casini, E. Testé and G. Torroba, “Markov Property of the Conformal Field Theory Vacuum and the a Theorem,” Phys. Rev. Lett. 118, no.26, 261602 (2017)
* [10] H. Casini, I. Salazar Landea and G. Torroba, “Irreversibility in quantum field theories with boundaries,” JHEP 04, 166 (2019)
* [11] T. Faulkner, R. G. Leigh, O. Parrikar and H. Wang, “Modular Hamiltonians for Deformed Half-Spaces and the Averaged Null Energy Condition,” JHEP 09, 038 (2016)
* [12] J. Koeller, S. Leichenauer, A. Levine and A. Shahbazi-Moghaddam, “Local Modular Hamiltonians from the Quantum Null Energy Condition,” Phys. Rev. D 97, no.6, 065011 (2018)
* [13] S. Balakrishnan, T. Faulkner, Z. U. Khandker and H. Wang, “A General Proof of the Quantum Null Energy Condition,” JHEP 09, 020 (2019)
* [14] F. Ceyhan and T. Faulkner, “Recovering the QNEC from the ANEC,” Commun. Math. Phys. 377, no.2, 999-1045 (2020)
* [15] D. L. Jafferis, A. Lewkowycz, J. Maldacena and S. J. Suh, “Relative entropy equals bulk relative entropy,” JHEP 06, 004 (2016)
* [16] T. Faulkner and A. Lewkowycz, “Bulk locality from modular flow,” JHEP 07, 151 (2017)
* [17] D. Kabat and G. Lifschytz, “Local bulk physics from intersecting modular Hamiltonians,” JHEP 06, 120 (2017)
* [18] G. Sárosi and T. Ugajin, “Modular Hamiltonians of excited states, OPE blocks and emergent bulk fields,” JHEP 01, 012 (2018)
* [19] S. Das and B. Ezhuthachan, “Modular Hamiltonians and large diffeomorphisms in AdS3,” JHEP 12, 096 (2018)
* [20] B. Czech, L. Lamprou, S. Mccandlish and J. Sully, “Modular Berry Connection for Entangled Subregions in AdS/CFT,” Phys. Rev. Lett. 120, no.9, 091601 (2018)
* [21] T. Faulkner, M. Li and H. Wang, “A modular toolkit for bulk reconstruction,” JHEP 04, 119 (2019)
* [22] B. Czech, J. De Boer, D. Ge and L. Lamprou, “A modular sewing kit for entanglement wedges,” JHEP 11, 094 (2019)
* [23] J. De Boer and L. Lamprou, “Holographic Order from Modular Chaos,” JHEP 06, 024 (2020)
* [24] D. D. Blanco, H. Casini, L. Y. Hung and R. C. Myers, “Relative Entropy and Holography,” JHEP 08, 060 (2013)
* [25] N. Lashkari, J. Lin, H. Ooguri, B. Stoica and M. Van Raamsdonk, “Gravitational positive energy theorems from information inequalities,” PTEP 2016, no.12, 12C109 (2016)
* [26] D. Blanco, H. Casini, M. Leston and F. Rosso, “Modular energy inequalities from relative entropy,” JHEP 01, 154 (2018)
* [27] T. Faulkner, M. Guica, T. Hartman, R. C. Myers and M. Van Raamsdonk, “Gravitation from Entanglement in Holographic CFTs,” JHEP 03, 051 (2014)
* [28] T. Faulkner, F. M. Haehl, E. Hijano, O. Parrikar, C. Rabideau and M. Van Raamsdonk, “Nonlinear Gravity from Entanglement in Conformal Field Theories,” JHEP 08, 057 (2017)
* [29] S. R. Roy and D. Sarkar, “Bulk metric reconstruction from boundary entanglement,” Phys. Rev. D 98, no.6, 066017 (2018)
* [30] D. Kabat and G. Lifschytz, “Emergence of spacetime from the algebra of total modular Hamiltonians,” JHEP 05, 017 (2019)
* [31] H. Casini, M. Huerta and R. C. Myers, “Towards a derivation of holographic entanglement entropy,” JHEP 05, 036 (2011)
* [32] R. Bousso, V. Chandrasekaran, P. Rath and A. Shahbazi-Moghaddam, “Gravity dual of Connes cocycle flow,” Phys. Rev. D 102, no.6, 066008 (2020)
* [33] A. Levine, A. Shahbazi-Moghaddam and R. M. Soni, “Seeing the entanglement wedge,” JHEP 06, 134 (2021)
* [34] B. Czech, L. Lamprou, S. McCandlish, B. Mosk and J. Sully, “A Stereoscopic Look into the Bulk,” JHEP 07, 129 (2016)
* [35] J. de Boer, F. M. Haehl, M. P. Heller and R. C. Myers, “Entanglement, holography and causal diamonds,” JHEP 08, 162 (2016)
* [36] B. Czech, L. Lamprou, S. McCandlish and J. Sully, “Integral Geometry and Holography,” JHEP 10, 175 (2015)
* [37] S. Das and B. Ezhuthachan, “Spectrum of Modular Hamiltonian in the Vacuum and Excited States,” JHEP 10, 009 (2019)
* [38] S. Das, “Comments on spinning OPE blocks in AdS3/CFT2,” Phys. Lett. B 792, 397-405 (2019)
* [39] S. Ferrara, A. F. Grillo, G. Parisi and R. Gatto, “Covariant expansion of the conformal four-point function,” Nucl. Phys. B 49, 77 (1972) Erratum: [Nucl. Phys. B 53, 643 (1973)].
* [40] D. Simmons-Duffin, “Projectors, Shadows, and Conformal Blocks,” JHEP 1404, 146 (2014)
* [41] M. Banados, “Three-dimensional quantum geometry and black holes,” AIP Conf. Proc. 484, no.1, 147-169 (1999)
* [42] N. Anand, H. Chen, A. L. Fitzpatrick, J. Kaplan and D. Li, “An Exact Operator That Knows Its Location,” JHEP 02, 012 (2018)
* [43] H. Casini and M. Huerta, “Reduced density matrix and internal dynamics for multicomponent regions,” Class. Quant. Grav. 26, 185005 (2009)
* [44] J. Erdmenger, P. Fries, I. A. Reyes and C. P. Simon, “Resolving modular flow: a toolkit for free fermions,” JHEP 12, 126 (2020)
* [45] J. Cardy and E. Tonni, “Entanglement hamiltonians in two-dimensional conformal field theory,” J. Stat. Mech. 1612, no.12, 123103 (2016)
* [46] L. Apolo, H. Jiang, W. Song and Y. Zhong, “Modular Hamiltonians in flat holography and (W)AdS/WCFT,” JHEP 09, 033 (2020)
* [47] H. Casini, E. Teste and G. Torroba, “Modular Hamiltonians on the null plane and the Markov property of the vacuum state,” J. Phys. A 50, no.36, 364001 (2017)
* [48] R. Jefferson, “Comments on black hole interiors and modular inclusions,” SciPost Phys. 6, no.4, 042 (2019)
* [49] H. J. Borchers, “The CPT theorem in two-dimensional theories of local observables,” Commun. Math. Phys. 143, 315-332 (1992)
* [50] H. W. Wiesbrock, “Half sided modular inclusions of von Neumann algebras,” Commun. Math. Phys. 157, 83-92 (1993)
* [51] H. J. Borchers, “On revolutionizing quantum field theory with Tomita’s modular theory,” J. Math. Phys. 41, 3604-3673 (2000)
* [52] H. W. Wiesbrock, “Symmetries and modular intersections of von Neumann algebras,” Lett. Math. Phys. 39, 203-212 (1997)
* [53] H. W. Wiesbrock, “Modular intersections of von Neumann algebras in quantum field theory,” Commun. Math. Phys. 193, 269-285 (1998)
* [54] P. Kravchuk and D. Simmons-Duffin, “Light-ray operators in conformal field theory,” JHEP 11, 102 (2018)
* [55] K. W. Huang, “Lightcone Commutator and Stress-Tensor Exchange in $d>2$ CFTs,” Phys. Rev. D 102, no.2, 021701 (2020)
* [56] A. Belin, D. M. Hofman, G. Mathys and M. T. Walters, “On the Stress Tensor Light-ray Operator Algebra,”
* [57] K. W. Huang, “$d>2$ Stress-Tensor OPE near a Line,”
* [58] M. Besken, J. de Boer and G. Mathys, “On Local and Integrated Stress-Tensor Commutators,” [arXiv:2012.15724 [hep-th]].
* [59] N. Ishibashi and T. Tada, “Infinite circumference limit of conformal field theory,” J. Phys. A 48, no.31, 315402 (2015)
* [60] N. Ishibashi and T. Tada, “Dipolar quantization and the infinite circumference limit of two-dimensional conformal field theories,” Int. J. Mod. Phys. A 31, no.32, 1650170 (2016)
# A Trigger-Sense Memory Flow Framework for Joint Entity and Relation
Extraction
Yongliang Shen, Zhejiang University<EMAIL_ADDRESS>; Xinyin Ma, Zhejiang
University<EMAIL_ADDRESS>; Yechun Tang, Zhejiang University
<EMAIL_ADDRESS>; and Weiming Lu, Zhejiang University<EMAIL_ADDRESS>
(2021)
###### Abstract.
A joint entity and relation extraction framework constructs a unified model that
performs entity recognition and relation extraction simultaneously, which can
exploit the dependency between the two tasks to mitigate the error-propagation
problem suffered by pipeline models. Current efforts on joint entity and
relation extraction focus on enhancing the interaction between entity
recognition and relation extraction through parameter sharing, joint decoding,
or other ad-hoc tricks (e.g., modeling the task as a semi-Markov decision
process, or casting it as a multi-round reading comprehension task). However, two
issues remain. First, the interaction exploited by most methods is still
weak and uni-directional, and thus unable to model the mutual dependency
between the two tasks. Second, most methods ignore relation triggers,
which help explain why humans would extract a relation from a sentence;
they are essential for relation extraction but overlooked. To this end, we
present a Trigger-Sense Memory Flow Framework (TriMF) for joint entity and
relation extraction. We build a memory module to remember category
representations learned in entity recognition and relation extraction tasks.
And based on it, we design a multi-level memory flow attention mechanism to
enhance the bi-directional interaction between entity recognition and relation
extraction. Moreover, without any human annotations, our model can enhance
relation trigger information in a sentence through a trigger sensor module,
which improves the model performance and makes model predictions better
interpretable. Experimental results show that our proposed framework achieves
state-of-the-art results, improving the relation F1 to 52.44% (+3.2%) on
SciERC, 66.49% (+4.9%) on ACE05, 72.35% (+0.6%) on CoNLL04 and 80.66% (+2.3%)
on ADE.
copyright: iw3c2w3; journalyear: 2021; doi: 10.1145/3442381.3449895;
conference: Proceedings of the Web Conference 2021, April 19–23, 2021, Ljubljana, Slovenia;
booktitle: Proceedings of the Web Conference 2021 (WWW ’21), April 19–23, 2021, Ljubljana, Slovenia;
isbn: 978-1-4503-8312-7/21/04; ccs: Computing methodologies, Information extraction
## 1\. Introduction
Entity recognition and relation extraction aim to extract structured knowledge
from unstructured text and hold a critical role in information extraction and
knowledge base construction. For example, given the following text: Ruby shot
Oswald to death with the 0.38-caliber Colt Cobra revolver in the basement of
Dallas City Jail on Nov. 24, 1963, two days after President Kennedy was
assassinated., the goal is to recognize entities about People, Location and
extract relations about Kill, Located in held between recognized entities.
There are two things of interest to humans when carrying out this task. First,
potential constraints between the relation type and the entity type, e.g., the
head and tail entities of the Kill are of People type, and the tail entity of
the Located in is of Location type. Second, triggers for relations, e.g. with
words shot and death, the fact (Ruby, Kill, Oswald) can be easily extracted
from the above example.
Current entity recognition and relation extraction methods fall into two
categories: pipeline methods and joint methods. Pipeline methods label
entities in a sentence through an entity recognition model and then predict
the relation between them through a relation extraction model (Chan and Roth,
2011; Lin et al., 2016). Although it is flexible to build pipeline methods,
there are two common issues with these methods. First, they are more
susceptible to error propagation, wherein prediction errors from entity
recognition can affect relation extraction. Second, they lack effective
interaction between entity recognition and relation extraction, ignoring the
intrinsic connection and dependency between the two tasks. To address these
issues, many joint entity and relation extraction methods are proposed and
have achieved superior performance to traditional pipeline methods. In these
methods, an entity recognition model and a relation extraction model are
unified through different strategies, including constraint-based joint
decoding (Li and Ji, 2014; Wang et al., 2018), parameter sharing (Bekoulis et
al., 2018b; Luan et al., 2018; Eberts and Ulges, 2019), cast as a reading
comprehension task (Li et al., 2019; Zhao et al., 2020) or hierarchical
reinforcement learning (Takanobu et al., 2019). Current joint extraction
models have made great progress, but the following issues still remain:
1. (1)
Trigger information is underutilized in entity recognition and relation
extraction. Before neural information extraction models, rule-based entity
recognition and relation extraction frameworks were widely used. They were
devoted to mining hard template-based rules or soft feature-based rules from
text and match them with instances (Hearst, 1992; Jones et al., 1999;
Agichtein and Gravano, 2000; Batista et al., 2015; Aone et al., 1998; Miller
et al., 2000; Fundel et al., 2007). Such methods provide good explanations for
the extraction work, but the formulation of rules requires domain expert
knowledge or automatic discovery from a large corpus, suffering from tedious
data processing and incomplete rule coverage. End-to-end neural network
methods have made great progress in the field of information extraction in
recent years. To exploit the rules, many works have begun to combine
traditional rule-based methods by introducing a neural matching module (Zhou
et al., 2020; Lin et al., 2020; Wang et al., 2019). However, these methods
still need to formulate seed rules or label seed relation triggers manually,
and iteratively expand them.
2. (2)
The interaction between entity recognition and relation extraction is
insufficient and uni-directional. Entity recognition and relation extraction
tasks are supposed to be mutually beneficial, but joint extraction methods do
not take full advantage of dependency between the two tasks. Most joint
extraction models are based on parameter sharing, where different task modules
share input features or internal hidden layer states. However, these methods
usually use independent decoding algorithms, resulting in a weak interaction
between the entity recognition module and the relation extraction module. The
joint decoding-based extraction model strengthens the interaction between
modules, but it requires a trade-off between the richness of features for
different tasks and joint decoding accuracy. Other joint extraction methods,
such as modeling the task as a reading comprehension problem (Li et al., 2019;
Zhao et al., 2020) or a semi-Markov process (Takanobu et al., 2019), still
suffer from a lack of bi-directional interaction due to the sequential order
of subtasks. More specifically, if relation extraction follows entity
recognition, the entity classification task will ignore the solution of the
relation classification task.
3. (3)
There is no distinction between the syntactic and semantic importance of words
in a sentence. We note that some words have a significant syntactic role but
contribute little to the semantics of a sentence, such as prepositions and
conjunctions. While some words are just the opposite, they contribute
significantly to the semantics, such as nouns and notional verbs. When
encoding context, most methods are too simple to inject syntactic features
into the word vector, ignoring the fact that words differ in their semantic
and syntactic importance. For example, some methods concatenate part of speech
tags of words onto their semantic vectors via an embedding layer (Miwa and
Bansal, 2016; Fu et al., 2019). Other methods combine the word, lexical, and
entity class features of the nodes on the shortest entity path in the
dependency tree to get the final features, which are then concatenated onto
the semantic vector (Bunescu and Mooney, 2005; Miwa and Bansal, 2016). These
methods do not distinguish the two roles of a word for sentence semantics and
syntax, but rather treat both roles of all words as equally important.
In this paper, we propose a novel framework for joint entity and relation
extraction to address the issues mentioned above. First, our model makes full
use of relation triggers, which can indicate a specific type of relation.
Without any relation trigger annotations, our model can extract relation
triggers in a sentence and provide them as an explanation for model
predictions. Second, to enhance the bi-directional interaction between entity
recognition and relation extraction tasks, we design a Memory Flow Attention
module. It stores the already learned entity category and relation category
representations in memory. Then we adopt a memory flow attention mechanism to
compute memory-aware sentence encoding, and make the two subtasks mutually
boosted by enhancing task-related information of a sentence. The Memory Flow
Attention module can easily be extended to multiple language levels, enabling
the interaction between the two subtasks at both subword-level and word-level.
Finally, we distinguish the syntactic and semantic importance of a word in a
sentence and propose a node-wise Graph Weighted Fusion module to dynamically
fuse the syntactic and semantic information of words.
Our main contributions are as follows:
* •
Considering the relation triggers, we propose the Trigger Sensor module, which
implicitly extracts the relation triggers from a sentence and then aggregates
the information of triggers into span-pair representation. Thus, it can
improve the model performance and strengthen the model interpretability.
* •
To model the mutual dependency between entity recognition and relation
extraction, we propose the Multi-level Memory Flow Attention module. This
module constructs entity memory and relation memory to preserve the learned
representations of entity and relation categories. Through the memory flow
attention mechanism, it enables the bi-directional interaction between entity
recognition and relation extraction tasks at multiple language levels.
* •
Since the importance of semantic and syntactic roles that words play in a
sentence are different, we propose a node-wise Graph Weighted Fusion module to
dynamically fuse semantic and syntactic information.
* •
Experiments show that our model achieves state-of-the-art performance
consistently on the SciERC, ACE05, CoNLL04, and ADE datasets, and outperforms
several competing baseline models on relation F1 score by 3.2% on SciERC, 4.9%
on ACE05, 0.6% on CoNLL04 and 2.3% on ADE.
## 2\. Related Work
### 2.1. Rule-based Relation Extraction
Traditional relation extraction methods utilize template-based rules (Aone et
al., 1998; Miller et al., 2000; Fundel et al., 2007), which are first
formulated by domain experts or automatically generated from a large corpus
based on statistical methods. Then, they apply hard matching to extract the
relation facts corresponding to the rules. Later on, some works
change the template-based rules to feature-based rules (such as TF-IDF, CBOW)
and extract relations by soft matching (Kambhatla, 2004; Zhang et al., 2006;
Jiang and Zhai, 2007; Bui et al., 2011), but still could not avoid mining the
rule features from a large corpus using statistical methods. In short, rule-
based relation extraction models typically suffer from a number of
disadvantages, including tedious efforts on the rule formulation, a lack of
extensibility, and low accuracy due to incomplete rule coverage, but they can
provide a new idea for neural relation extraction systems.
Some recent efforts on neural extraction systems attempt to focus on rules or
natural language explanations (Wang et al., 2019). NERO (Zhou et al., 2020)
explicitly exploits labeling rules over unmatched sentences as supervision for
training RE models. It consists of a sentence-level relation classifier and a
soft rule matcher. The former learns the neural representations of sentences
and classifies which relation it talks about. The latter is a learnable module
that produces matching scores for unmatched sentences with collected rules.
NERO labels sentences according to predefined rules, and makes full use of
information from unmatched instances. However, it is still a tedious process
to formulate seed rules manually, and the quality of rule-making affects the
performance of the entire system.
### 2.2. Joint Entity and Relation Extraction
Previous entity and relation extraction models are pipelined (Chan and Roth,
2011; Lin et al., 2016). In these methods, an entity recognition model first
recognizes entities of interest, and a relation extraction model then predicts
the relation type between the recognized entities. Although pipeline models
have the flexibility of integrating different model structures and learning
algorithms, they suffer significantly from error propagation. To tackle this
issue, joint learning models have been proposed. They fall into two main
categories: parameter sharing and joint decoding methods.
Most methods jointly model the two tasks through parameter sharing (Miwa and
Bansal, 2016; Zheng et al., 2017). They unite entity recognition and relation
extraction modules by sharing input features or internal hidden layer states.
Specifically, these methods use the same encoder to provide sentence encoding
for both the entity recognition module and the relation extraction module.
Some methods (Bekoulis et al., 2018a; Luan et al., 2018; Luan et al., 2019;
Wadden et al., 2019) perform entity recognition first and then pair entities
of interest for relation classification. While other methods (Takanobu et al.,
2019; Yuan et al., 2020) are the opposite, they predict possible relations
first and then recognize the entities in the sentence. DygIE (Luan et al.,
2019) constructs a span-graph and uses message propagation methods to enhance
interaction between entity recognition and relation extraction. HRL (Takanobu
et al., 2019) models the joint extraction problem as a semi-Markov decision
process, and uses hierarchical reinforcement learning to extract entities and
relations. CASREL (Wei et al., 2020) considers the general relation
classification as a tagging task. Each relation corresponds to a tagger that
recognizes the tail entities based on a head entity and context. CopyMTL (Zeng
et al., 2018) casts the extraction task as a generation task and proposes an
encoder-decoder model with a copy mechanism to extract relation tuples with
overlapping entities. Although entity recognition and relation extraction
modules can adopt different structures in these methods, their independent
decoding algorithms result in insufficient interaction between the two
modules. Furthermore, subtasks are performed sequentially in these methods, so
the interaction between two tasks is uni-directional.
To enhance the bi-directional interaction between entity recognition and
relation extraction tasks, some joint decoding algorithms have been proposed.
(Yang and Cardie, 2013) proposes to use integer linear programming to enforce
constraints on the prediction results of the entity and relation models.
(Katiyar and Cardie, 2016) uses conditional random fields for both entity and
relation models and obtains the output results of the entity and relation by
the Viterbi decoding algorithm. Although the joint decoding-based extraction
model strengthens the interaction between two modules, it still requires a
trade-off between the richness of features required for different tasks and
the accuracy of joint decoding.
## 3\. Trigger-Sense Memory Flow Framework
### 3.1. Framework Overview
In this section, we will introduce the Trigger-Sense Memory Flow Framework
(TriMF) for joint entity and relation extraction, which consists of five main
modules: Memory module, Multi-Level Memory Flow Attention module, Syntactic-
Semantic Graph Weighted Fusion module, Trigger Sensor module, and Memory-Aware
Classifier module.
The overall architecture of the TriMF is illustrated in Figure 2. We first
initialize the Memory, including an Entity Memory
$\mathbf{M}^{\mathcal{E}}\in\mathbb{R}^{n^{e}\times h_{me}}$ and a Relation
Memory $\mathbf{M}^{\mathcal{R}}\in\mathbb{R}^{n^{r}\times h_{mr}}$, where
$n^{e}$ and $n^{r}$ denote the number of entity categories and relation
categories, $h_{me}$ and $h_{mr}$ denote the slot size of entity memory and
the relation memory.
Figure 1. Four Levels Encoding
Figure 2. Trigger-Sense Memory Flow Framework (TriMF) Overview
Our model performs a four-level sentence encoding (subword, word, span, and
span-pair, as shown in Figure 1) and two-step classification (entity
classification and relation classification). More specifically, a sentence is
encoded by BERT (Devlin et al., 2018) to obtain subword sequence encoding
$\mathbf{E}^{d}\in\mathbb{R}^{m\times h}$, where $m$ denotes the number of
subwords in the sentence, and $h$ denotes the hidden state size of BERT. Based
on $\mathbf{M}^{\mathcal{R}}$, $\mathbf{M}^{\mathcal{E}}$ and
$\mathbf{E}^{d}$, we perform the first Memory Flow Attention at the subword-
level. Then we use $f_{w}$ to aggregate the subword sequence encoding into a
word sequence encoding $\mathbf{E}^{w}\in\mathbb{R}^{n\times h_{w}}$, where $n$
denotes the number of words in the sentence, and $h_{w}$ denotes the size of
the word vector. Here for $f_{w}$, we adopt the max-pooling function. Based on
$\mathbf{M}^{\mathcal{R}}$, $\mathbf{M}^{\mathcal{E}}$ and $\mathbf{E}^{w}$,
we perform the second Memory Flow Attention at the word-level. After that, the
word sequence encoding is fed into the Syntactic-Semantic Graph Weighted
Fusion module to fuse semantic and syntactic information at the word-level.
Then, we combine the word sequence encodings by $f_{s}$ to obtain the span
sequence encodings $\mathbf{E}^{s}\in\mathbb{R}^{N\times h_{s}}$, where $N$
denotes the number of spans in the sentence, and $h_{s}$ denotes the size of
the span vector. Here for $f_{s}$, we adopt a method of concatenating a span-
size embedding onto max-pooled word embeddings. We filter out the spans that
are classified as the None category by a Memory-Aware Entity Classifier. After
pairing the spans of interest, we compute the local-context representation
$\mathbf{g}_{local}$ and the full-context span-pair specific trigger representation
$\mathbf{g}_{trigger}$ using the Trigger Sensor. We combine the encodings of
the head span, the tail span, $\mathbf{g}_{local}$ and $\mathbf{g}_{trigger}$ to obtain the
encoding $\mathbf{E}^{r}\in\mathbb{R}^{M\times h_{r}}$, where
$\mathbf{E}^{r}_{\left(ij\right)}$ denotes the span pair encoding consisting
of the $i^{th}$ and $j^{th}$ spans, $M$ denotes the number of candidate span
pairs, and $h_{r}$ denotes the size of the span pair encoding. Lastly, we
input the candidate span-pair representation to the Memory-Aware Relation
Classifier and predict the relation type between the two spans. In the
following sections, we describe the five main modules of our model in detail.
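As a minimal sketch of the two aggregation steps in this pipeline, the NumPy snippet below implements max-pooling $f_w$ (subword to word) and $f_s$ (word to span, with a concatenated span-size embedding). The function names, the group/span index format, and the fixed random `size_emb` lookup table are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def f_w(E_sub, word_groups):
    """Aggregate subword encodings into word encodings by max-pooling
    each group of subword indices belonging to one word."""
    return np.stack([E_sub[list(g)].max(axis=0) for g in word_groups])

def f_s(E_w, spans, size_emb):
    """Combine word encodings into span encodings: max-pool the words in
    the span [a, b) and concatenate a span-size embedding (a learned
    lookup table in the model; a fixed matrix here for illustration)."""
    rows = []
    for a, b in spans:
        pooled = E_w[a:b].max(axis=0)
        rows.append(np.concatenate([pooled, size_emb[b - a]]))
    return np.stack(rows)
```

With `h_w = 4` word features and a 2-dimensional span-size embedding, each span encoding has 6 dimensions.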
### 3.2. Memory
Memory holds category representations learned from historical training
examples, and consists of an entity memory and a relation memory. Each slot of these two
memories indicates an entity category and a relation category respectively.
The category representation is held in the corresponding memory slot, which
can be used by the Memory Flow Attention module to enhance information related
to the tasks in a sentence, or by the Trigger Sensor module to sense triggers.
In the Memory module, we define two types of processes, Memory Read Process
and Memory Write Process, to manipulate the memory.
Memory Read Process Given an input $\mathbf{E}$ and our memory $\mathbf{M}$,
we define two processes to read memory: normal read process and inverse read
process. The normal read process takes the input as query, the memory as key
and value. First, we calculate the attention weights of the input $\mathbf{E}$
on the memory $\mathbf{M}$ by a bilinear similarity function, and then we weight
the memory by these weights.
(1)
$\operatorname{A}_{norm}\left(\mathbf{E},\mathbf{M}\right)=\operatorname{softmax}\left(\mathbf{E}\mathbf{W}\mathbf{M}^{T}\right)$
(2)
$\operatorname{Read}_{norm}\left(\mathbf{E},\mathbf{M}\right)=\operatorname{A}_{norm}\left(\mathbf{E},\mathbf{M}\right)\mathbf{M}$
where $\mathbf{W}$ is a learnable parameter for the bilinear attention
mechanism. While the inverse read process takes the memory as query, the input
as key and value. We first compute a 2D attention weight matrix through a
bilinear similarity function, and then sum it over the memory-slot dimension
to obtain a 1D attention weight vector over the input
$\mathbf{E}$. Elements of the input that are more relevant to the memory receive larger
weights. We then multiply the 1D attention weight vector with $\mathbf{E}$ to
get a memory-aware sequence encoding:
(3)
$\operatorname{A}_{inv}\left(\mathbf{E},\mathbf{M}\right)=\sum\limits_{i=1}^{|\mathbf{M}|}\operatorname{softmax}\left(\mathbf{M}_{i}\mathbf{W}\mathbf{E}^{T}\right)$
(4)
$\operatorname{Read}_{inv}(\mathbf{E},\mathbf{M})=\operatorname{A}_{inv}\left(\mathbf{E},\mathbf{M}\right)\mathbf{E}$
where $\mathbf{W}$ is a learnable parameter for the bilinear attention
mechanism and $|\mathbf{M}|$ denotes the number of slots in the memory
$\mathbf{M}$.
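The two read processes of Eqs. (1)-(4) can be sketched directly in NumPy; the separate weight matrices `Wn` and `Wi` and all tensor shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def read_normal(E, M, W):
    """Eqs. (1)-(2): input as query, memory as key and value."""
    A = softmax(E @ W @ M.T, axis=-1)    # (seq_len, n_slots)
    return A @ M                         # weighted memory, (seq_len, h_m)

def read_inverse(E, M, W):
    """Eqs. (3)-(4): memory as query; sum the 2D weights over the
    memory-slot dimension to get one weight per input element."""
    A2d = softmax(M @ W @ E.T, axis=-1)  # (n_slots, seq_len)
    a = A2d.sum(axis=0)                  # (seq_len,)
    return a[:, None] * E                # memory-aware encoding
```

The inverse read preserves the input shape, which is what lets it act as a reweighting of the sentence encoding.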
Memory Write Process We write entity memory using gradients of entity
classification losses and write relation memory using gradients of relation
classification losses. If the gradient of the current instance’s
classification loss is large, it means that the classified instance (span or
span-pair) representation is far away from the corresponding memory slot
(entity or relation category representation of ground truth) while closer to
the memory slots of the other categories, and we need to assign a large weight
to this instance when writing it into memory. This makes the representations
of the categories stored in memory more accurate. The write process for entity
memory and relation memory is described below:
(5)
$\mathbf{M}^{\mathcal{E}}_{e}=\mathbf{M}^{\mathcal{E}}_{e}-\mathbf{E}^{s}_{i}\mathbf{W}^{e}\frac{\partial\mathcal{L}^{e}}{\partial
logit_{e}}lr$
(6)
$\mathbf{M}^{\mathcal{R}}_{r}=\mathbf{M}^{\mathcal{R}}_{r}-\mathbf{E}^{r}_{(ij)}\mathbf{W}^{r}\frac{\partial\mathcal{L}^{r}}{\partial
logit_{r}}lr$
(7) $logit_{e}=log\left(\frac{p(s_{i}=e)}{1-p(s_{i}=e)}\right)$
(8) $logit_{r}=log\left(\frac{p(r_{ij}=r)}{1-p(r_{ij}=r)}\right)$
where $\mathcal{L}^{e}$ and $\mathcal{L}^{r}$ denote entity classification
loss and relation classification loss, $lr$ denotes the learning rate,
$\mathbf{W}^{e}$ and $\mathbf{W}^{r}$ are two weight matrices, $p(s_{i}=e)$
denotes the probability of span $s_{i}$ belonging to entity type $e$,
$p(r_{ij}=r)$ denotes the probability of span-pair’s relation $r_{ij}$
belonging to relation type $r$, and $\mathbf{E}^{s}_{i}$,
$\mathbf{E}^{r}_{ij}$ denote candidate span and span-pair encoding,
respectively. The above symbols are formally defined in Sec. 3.6.
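A minimal sketch of the write process (Eqs. (5) and (7)) follows: the gradient of the classification loss with respect to the logit scales how strongly the instance encoding is written into the slot of its ground-truth category. Passing the gradient in as a precomputed scalar is a simplifying assumption.

```python
import numpy as np

def logit(p):
    """Eqs. (7)-(8): log-odds of a predicted probability."""
    return np.log(p / (1.0 - p))

def write_entity_memory(M_e, cat, span_enc, W_e, grad_logit, lr):
    """Eq. (5): gradient-weighted write of a span encoding into the
    memory slot of its ground-truth entity category `cat`; a larger
    loss gradient produces a larger update."""
    M_e = M_e.copy()
    M_e[cat] -= (span_enc @ W_e) * grad_logit * lr
    return M_e
```

The relation-memory write of Eq. (6) has the same form with the span-pair encoding and $\mathbf{W}^{r}$.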
### 3.3. Multi-level Memory Flow Attention
We perform a memory flow attention mechanism between the memory and the input
sequence to enhance task-relevant information, such as entity surface names
and trigger words. Entity memory and relation memory can enhance entity-
related and relation-related information in the input instance for the two
tasks respectively, thus they can help to strengthen bi-directional
interaction between tasks.
Memory Flow Attention In order to enhance the task-relevant information in a
sentence, we design the Memory Flow Attention based on the Memory. Given a
memory $\mathbf{M}$ and a sequence encoding $\mathbf{E}$, we calculate the
memory-aware sequence encoding by running the memory inverse read process:
(9)
$\operatorname{MFA}_{s}\left(\mathbf{E},\mathbf{M}\right)=\operatorname{Read}_{inv}\left(\mathbf{E},\mathbf{M}\right)$
A single memory flow can be extended to multiple memory flows. We consider two
types in our work: relation memory flow and entity memory flow. So we design a
Multi-Memory Flow Attention mechanism, which is calculated as follows:
(10)
$\operatorname{MFA}_{m}(\mathbf{E},\mathbf{M}^{\mathcal{R}},\mathbf{M}^{\mathcal{E}})=\operatorname{mean}\left(\operatorname{MFA}_{s}\left(\mathbf{E},\mathbf{M}^{\mathcal{R}}\right),\operatorname{MFA}_{s}\left(\mathbf{E},\mathbf{M}^{\mathcal{E}}\right)\right)$
where $\mathbf{M}^{\mathcal{E}}$ and $\mathbf{M}^{\mathcal{R}}$ denote entity
and relation memory, respectively. We know that languages are hierarchical, and
different levels represent semantic information at different levels of
granularity. As shown in Figure 3, we extend the multi-memory flow attention
mechanism to multiple levels (subword-level and word-level), and design a
Multi-Level Multi-Memory Flow Attention mechanism:
Figure 3. Multi-Level Multi-Memory Flow Attention
(11)
$\overline{\mathbf{E}}^{d}=\operatorname{MFA}_{m}(\mathbf{E}^{d},\mathbf{M}^{\mathcal{R}},\mathbf{M}^{\mathcal{E}})$
(12) $\mathbf{E}^{w}=f_{w}\left(\mathbf{\overline{E}}^{d}\right)$
(13)
$\overline{\mathbf{E}}^{w}=\operatorname{MFA}_{m}(\mathbf{E}^{w},\mathbf{M}^{\mathcal{R}},\mathbf{M}^{\mathcal{E}})$
where $\mathbf{\overline{E}}^{d}$ and $\mathbf{\overline{E}}^{w}$ denote
memory-aware sequence encoding at subword-level and word-level respectively.
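Eqs. (9)-(13) can be sketched as the following NumPy pipeline; the `f_w` grouping format and the per-memory weight matrices are illustrative assumptions under the stated max-pooling choice for $f_w$.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mfa_single(E, M, W):
    """Eq. (9): the inverse read yields a memory-aware encoding."""
    a = softmax(M @ W @ E.T, axis=-1).sum(axis=0)  # one weight per token
    return a[:, None] * E

def mfa_multi(E, M_r, W_r, M_e, W_e):
    """Eq. (10): mean of the relation and entity memory flows."""
    return 0.5 * (mfa_single(E, M_r, W_r) + mfa_single(E, M_e, W_e))

def f_w(E_sub, word_groups):
    """Max-pool subword vectors into word vectors (the f_w in the text)."""
    return np.stack([E_sub[list(g)].max(axis=0) for g in word_groups])
```

Running `mfa_multi` before and after `f_w` reproduces the subword-level and word-level attention of Eqs. (11)-(13).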
### 3.4. Syntactic-Semantic Graph Weighted Fusion
The semantic information and syntactic structure of a sentence are important
for both entity recognition and relation extraction. We consider both by
constructing semantic and syntactic graphs from a sentence, with nodes in the
graph refer to words in the sentence. We update a node representation based on
its neighbor nodes’ representations and the graph structure in the two graphs.
We note that some words have a significant syntactic role but contribute
little to the semantics of a sentence, such as prepositions and conjunctions.
While some words are just the opposite, they contribute significantly to the
semantics, such as nouns and notional verbs. Therefore, we need to fuse
syntactic and semantic graphs based on the relative importance of the
syntactic role and semantic role. First, the nodes in the two graphs are
initialized as:
(14) $\mathbf{H}^{(0)}=\overline{\mathbf{E}}^{w}$
Syntactic Graph We construct a directed syntactic graph from a sentence based
on dependency parsing, with the word as a node and the dependency between
words as an edge. We then use the R-GCN (Schlichtkrull et al., 2018) to update
node representations. The node representations of the syntactic graph
$\widehat{\mathbf{H}}^{(l)}$ in $l^{th}$ layer are calculated as:
(15)
$\widehat{\mathbf{H}}_{i}^{(l)}=\sigma\left(\sum_{r\in\mathcal{R}_{dep}}\sum_{j\in\mathcal{N}_{i}^{r}}\frac{1}{c_{i,r}}\mathbf{\widehat{W}}_{r}^{(l)}\mathbf{H}_{j}^{(l)}+\mathbf{\widehat{W}}_{0}^{(l)}\mathbf{H}_{i}^{(l)}\right)$
where $\mathbf{\widehat{W}}_{r}^{(l)}$ and $\mathbf{\widehat{W}}_{0}^{(l)}$
denote two learnable weight matrices, and $\mathcal{N}_{i}^{r}$ denotes the
set of neighbor indices of node $i$ under relation $r\in\mathcal{R}_{dep}$.
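A small NumPy sketch of the R-GCN update of Eq. (15) follows. Representing edges as per-relation lists and using ReLU for $\sigma$ are assumptions; the paper does not fix the nonlinearity, and the actual model uses the R-GCN of Schlichtkrull et al. (2018).

```python
import numpy as np

def rgcn_layer(H, edges_by_rel, W_rel, W0):
    """Eq. (15): for each node i, sum messages W_r H_j over neighbors j
    under each dependency relation r, normalized by the neighbor count
    c_{i,r}, plus the self-loop term W_0 H_i; sigma = ReLU here."""
    n = H.shape[0]
    out = H @ W0.T                              # self-loop term W_0 H_i
    for r, edges in edges_by_rel.items():
        counts = np.zeros(n)
        for i, j in edges:                      # c_{i,r}: in-degree under r
            counts[i] += 1.0
        for i, j in edges:
            out[i] += (H[j] @ W_rel[r].T) / counts[i]
    return np.maximum(out, 0.0)                 # sigma
```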
Semantic Graph We compute the dense adjacency matrix based on semantic
similarity and randomly sample from the fully connected graph to construct the
semantic graph:
(16)
$\mathbf{\alpha}=\operatorname{LeakyReLU}\left(\mathbf{\widetilde{W}}\mathbf{H}^{(l)}\right)^{T}\operatorname{LeakyReLU}\left(\mathbf{\widetilde{W}}\mathbf{H}^{(l)}\right)$
where $\mathbf{\widetilde{W}}$ denotes a trainable weight matrix. Then we
compute a weighted average for aggregation of neighbor nodes $\mathcal{N}(i)$,
where the weights come from the normalized adjacency matrix
$\mathbf{\overline{\alpha}}$. We update the node representations of semantic
graph $\widetilde{\mathbf{H}}_{i}^{(l)}$ in $l^{th}$ layer, which are
calculated as follows:
(17)
$\overline{\mathbf{\alpha}}=\operatorname{softmax}\left(\mathbf{\alpha}\right)$
(18)
$\widetilde{\mathbf{H}}_{i}^{(l)}=\overline{\alpha}_{i,i}\mathbf{\widetilde{W}}\mathbf{H}_{i}^{(l)}+\sum_{j\in\mathcal{N}(i)}\overline{\alpha}_{i,j}\mathbf{\widetilde{W}}\mathbf{H}_{j}^{(l)}$
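The semantic-graph update of Eqs. (16)-(18) can be sketched as below; passing the sampled neighbor sets in explicitly, and a square projection matrix, are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def semantic_graph_update(H, W_t, neighbors):
    """Eqs. (16)-(18): a dense attention adjacency from the
    LeakyReLU-projected nodes, row-normalized, then a weighted
    aggregation over the self node and its sampled neighbors."""
    leaky = lambda x: np.where(x > 0, x, 0.01 * x)
    P = leaky(H @ W_t.T)                 # LeakyReLU(W~ H), one row per node
    alpha = P @ P.T                      # Eq. (16)
    alpha_bar = softmax(alpha, axis=-1)  # Eq. (17)
    WH = H @ W_t.T                       # the W~ H_j terms of Eq. (18)
    H_new = np.zeros_like(WH)
    for i, nbrs in enumerate(neighbors):
        H_new[i] = alpha_bar[i, i] * WH[i]
        for j in nbrs:
            H_new[i] += alpha_bar[i, j] * WH[j]
    return H_new
```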
Node-Wise Graph Weighted Fusion We design a graph weighted fusion module to
dynamically fuse two graphs according to the relative semantic and syntactic
importance of words in a sentence. The [CLS] vector, denote as
$\mathbf{e}^{cls}$, is often used for sentence-level tasks and contains
information about the entire sentence. We first calculate the bilinear
similarity between $\mathbf{e}^{cls}$ and each node of semantic and syntactic
graphs. Then we normalize the similarity vectors across two graphs to obtain
two sets of weights, which indicate semantic and syntactic importance
respectively. Finally, we fuse all nodes across the graphs based on the
weights:
(19)
$\mathbf{\widetilde{w}},\mathbf{\widehat{w}}=\operatorname{softmax}\left(\left\{\mathbf{e}^{cls}\mathbf{W}\widetilde{\mathbf{H}}^{(l)},\mathbf{e}^{cls}\mathbf{W}\widehat{\mathbf{H}}^{(l)}\right\}\right)$
(20)
$\mathbf{H}^{(l+1)}_{i}=\mathbf{\widetilde{w}}_{i}\cdot\widetilde{\mathbf{H}}^{(l)}_{i}+\mathbf{\widehat{w}}_{i}\cdot\widehat{\mathbf{H}}^{(l)}_{i}$
where $\mathbf{W}$ is a learnable weight matrix, and $\mathbf{\widetilde{w}}$ and
$\mathbf{\widehat{w}}$ denote the node importance weights of the semantic and
syntactic graphs, respectively. Then we map the node representations
$\mathbf{H}^{(l+1)}$ to the corresponding word representations $E^{g}$ using
mean-pooling:
(21)
$\mathbf{E}^{g}=\operatorname{mean}\left(\mathbf{H}^{(l+1)},\mathbf{\overline{E}}^{w}\right)$
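The node-wise fusion of Eqs. (19)-(20) amounts to normalizing, per node, the two bilinear [CLS] scores across the graphs and mixing the node vectors accordingly; the sketch below (returning the weights as well, for inspection) is under that reading of Eq. (19).

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def graph_weighted_fusion(e_cls, H_sem, H_syn, W):
    """Eqs. (19)-(20): bilinear scores of [CLS] against each node of the
    two graphs, normalized across the two graphs per node, then a
    node-wise weighted sum."""
    q = W.T @ e_cls                              # e_cls W as a query vector
    scores = np.stack([H_sem @ q, H_syn @ q])    # (2, n_nodes)
    w = softmax(scores, axis=0)                  # per-node weights, sum to 1
    fused = w[0][:, None] * H_sem + w[1][:, None] * H_syn
    return fused, w
```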
### 3.5. Trigger Sensor
We know that a particular relation usually occurs in conjunction with a
particular set of words, which we call relation triggers. They can help
explain why humans would extract a relation in the sentence and play an
essential role in relation extraction. We present a Trigger Sensor module that
senses and enhances the contextual trigger information without any trigger
annotations.
Relation triggers typically appear in local context between a pair of spans
$\left(s_{i},s_{j}\right)$, and some approaches encode local context directly
into the span-pair representation for relation classification. However, these
approaches do not consider the case where the triggers are outside the span-
pair, resulting in the model ignoring useful information from other contexts.
We design both a Local-Context Encoder and a Full-Context Trigger Sensor to
compute the local-context representation $\mathbf{g}_{local}$ and the full-
context trigger representation $\mathbf{g}_{trigger}$.
Local-Context Encoder We aggregate local-context information between spans of
interest using max-pooling. The local-context representation
$\mathbf{g}_{local}$ is calculated as:
(22)
$\mathbf{g}_{local}=\max\left(\mathbf{E}^{g}_{k},\mathbf{E}^{g}_{k+1},\cdots,\mathbf{E}^{g}_{h}\right)$
where $\mathbf{E}^{g}_{k},\mathbf{E}^{g}_{k+1},\cdots,\mathbf{E}^{g}_{h}$ are
the encodings of the words between the two spans $\left(s_{i},s_{j}\right)$.
Full-Context Trigger Sensor The full-context trigger sensor aims to sense and
enhance span-pair specific triggers. Given a pair of spans
$\left(s_{i},s_{j}\right)$, we use head span and tail span as queries
respectively and execute normal read process on the relation memory. After
obtaining two span-specific memory representations, we perform mean-pooling
across them to get the span-pair specific relation representation
$\mathbf{m}^{r}_{(ij)}$:
(23)
$m^{r}_{(ij)}=\operatorname{mean}\left(\operatorname{Read}_{norm}\left(\mathbf{E}^{s}_{i},\mathbf{M}^{\mathcal{R}}\right),\operatorname{Read}_{norm}\left(\mathbf{E}^{s}_{j},\mathbf{M}^{\mathcal{R}}\right)\right)$
We calculate the similarity between $\mathbf{m}^{r}_{(ij)}$ and each word
representation of a word sequence, and then weigh the word sequence to get the
full-context trigger representation $\mathbf{g}_{trigger}$.
(24)
$\mathbf{g}_{trigger}=\operatorname{softmax}\left({\mathbf{m}^{r}_{(ij)}(\mathbf{E}^{g})^{T}}\right)\mathbf{E}^{g}$
We incorporate the local-context representation $\mathbf{g}_{local}$ and the
full-context trigger representation $\mathbf{g}_{trigger}$ into the span-pair
encoding $\mathbf{E}^{r}_{ij}$ using $f_{r}$:
(25)
$\mathbf{E}^{r}_{ij}=f_{r}\left(\mathbf{E}^{s}_{i},\mathbf{E}^{s}_{j},\mathbf{g}_{local},\mathbf{g}_{trigger}\right)$
For $f_{r}$, we adopt the concatenation function.
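Eqs. (22)-(25) can be sketched as one function; for simplicity the word dimension and the relation-memory slot size are taken to be equal here, so $\mathbf{m}^{r}_{(ij)}$ can attend over $\mathbf{E}^{g}$ directly as in Eq. (24). That dimensional assumption and the argument layout are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def trigger_sensor(E_g, span_i, span_j, M_r, W, between):
    """Eqs. (22)-(25): local max-pooled context, a span-pair specific
    relation representation via normal reads on the relation memory,
    trigger attention over the words, and concatenation as f_r."""
    a, b = between
    g_local = E_g[a:b].max(axis=0)                  # Eq. (22)
    read = lambda q: softmax(q @ W @ M_r.T) @ M_r   # normal read process
    m_r = 0.5 * (read(span_i) + read(span_j))       # Eq. (23), mean-pooling
    attn = softmax(m_r @ E_g.T)                     # Eq. (24)
    g_trigger = attn @ E_g
    return np.concatenate([span_i, span_j, g_local, g_trigger])  # Eq. (25)
```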
Trigger Extraction Using the trigger sensor, we can also extract relation
triggers and provide a reasonable explanation for model predictions. Based on
the similarity of each word representation with the span-pair specific
relation representations $m^{r}_{(ij)}$, we rank the words. The top-ranked
words can be used as relation triggers to explain the model’s predictions. We
will show the trigger extraction ability of our model in the case study
section.
### 3.6. Memory-Aware Classifier
Representations of the entity and relation categories are stored in entity
memory and relation memory, respectively. Based on the bilinear similarity
between instance (span or span-pair) representation and categories
representations, we compute the probability of candidate span $s_{i}$ being an
entity $e$:
(26)
$p\left(s_{i}=e\right)=\frac{\exp\left({\mathbf{E}^{s}_{i}\mathbf{W}^{e}M^{\mathcal{\mathbf{E}}}_{e}}\right)}{\sum_{k\in\mathcal{E}}\exp\left({\mathbf{E}^{s}_{i}\mathbf{W}^{e}\mathbf{M}^{\mathcal{\mathbf{E}}}_{k}}\right)}$
and the probability of candidate span-pair $\left(s_{i},s_{j}\right)$ having a
relation $r$:
(27)
$p\left(r_{(ij)}=r\right)=\operatorname{sigmoid}\left({\mathbf{E}^{r}_{(ij)}\mathbf{W}^{r}\mathbf{M}^{\mathcal{R}}_{r}}\right)$
where $\mathbf{W}^{e}\in\mathbb{R}^{h_{s}\times h_{me}}$ and
$\mathbf{W}^{r}\in\mathbb{R}^{h_{r}\times h_{mr}}$ denote two learnable weight
matrices. Finally, we define a joint loss function for entity classification
and relation classification:
(28) $\mathcal{L}=\mathcal{L}^{s}+\mathcal{L}^{r}$
where $\mathcal{L}^{s}$ denotes the cross-entropy loss over entity
categories (including the None category), and $\mathcal{L}^{r}$ denotes the
binary cross-entropy loss over relation categories.
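The two classifiers of Eqs. (26)-(27) reduce to bilinear scoring against memory slots; a minimal sketch, with illustrative shapes:

```python
import numpy as np

def entity_probs(span_enc, W_e, M_e):
    """Eq. (26): softmax over bilinear similarities between a span
    encoding and the entity memory slots."""
    scores = span_enc @ W_e @ M_e.T      # one score per entity category
    e = np.exp(scores - scores.max())
    return e / e.sum()

def relation_prob(pair_enc, W_r, slot_r):
    """Eq. (27): per-relation sigmoid of the bilinear similarity between
    a span-pair encoding and one relation memory slot."""
    return 1.0 / (1.0 + np.exp(-(pair_enc @ W_r @ slot_r)))
```

The per-relation sigmoid matches the binary cross-entropy loss $\mathcal{L}^{r}$, while the entity softmax matches the cross-entropy loss $\mathcal{L}^{s}$.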
### 3.7. Two-Stage Training
At the start of training, since the memory is randomly initialized, the Memory Flow Attention and Trigger Sensor modules would introduce noise into the sequence encoding. This noise further corrupts the semantic information of the pre-trained BERT (Devlin et al., 2018) through gradient descent. We therefore divide model training into two stages. In the first stage, we aim to learn accurate category representations and store them in the corresponding memory slots: we train only the Memory-Aware Classifier and Graph Weighted Fusion modules and update the memory through the memory write process. In the second stage, we add the Memory Flow Attention and Trigger Sensor modules to the training procedure. Based on the more accurate category representations stored in the memory, we can strengthen the contextual task-related features and relation triggers through the memory read process.
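The two-stage schedule can be sketched as a simple module gate; the module names are ours, and whether the BERT encoder is also updated in the first stage is not stated, so we leave it out of the sketch:

```python
# Stage 1 trains only the classifier and graph fusion, so the memory
# write process fills the slots with meaningful category representations
# before the memory-reading modules (MFA, Trigger Sensor) are enabled.
STAGE1 = {"memory_aware_classifier", "graph_weighted_fusion"}
STAGE2 = STAGE1 | {"memory_flow_attention", "trigger_sensor"}

def trainable_modules(epoch, stage1_epochs=18):
    """Return the set of modules updated at a given epoch
    (18 first-stage epochs, per Section 4.4)."""
    return STAGE1 if epoch < stage1_epochs else STAGE2

print(sorted(trainable_modules(0)))
print(sorted(trainable_modules(20)))
```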
## 4\. Experiments
Dataset | Model | Entity P | Entity R | Entity F1 | Relation P | Relation R | Relation F1
---|---|---|---|---|---|---|---
SciERC | SciIE$\dagger$ | 67.20 | 61.50 | 64.20 | 47.60 | 33.50 | 39.30
 | DyGIE$\dagger$ | - | - | 65.20 | - | - | 41.60
 | DYGIE++$\dagger$ | - | - | 67.50 | - | - | 48.40
 | SpERT$\dagger$ (using SciBERT (Beltagy et al., 2019)) | 70.87 | 69.79 | 70.33 | 53.40 | 48.54 | 50.84
 | TriMF$\dagger$ (using SciBERT) | 70.18 ($\pm$0.65) | 70.17 ($\pm$0.94) | 70.17 ($\pm$0.56) | 52.63 ($\pm$1.24) | 52.32 ($\pm$1.73) | 52.44 ($\pm$0.40)
ACE05 | DyGIE$\dagger$ | - | - | 88.40 | - | - | 63.20
 | DYGIE++$\dagger$ | - | - | 88.60 | - | - | 63.40
 | TriMF$\dagger$ | 87.67 ($\pm$0.17) | 87.54 ($\pm$0.29) | 87.61 ($\pm$0.21) | 65.87 ($\pm$0.55) | 67.12 ($\pm$0.63) | 66.49 ($\pm$0.32)
 | Multi-turn QA$\ddagger$ | 84.70 | 84.90 | 84.80 | 64.80 | 56.20 | 60.20
 | MRC4ERE++$\ddagger$ | 85.90 | 85.20 | 85.50 | 62.00 | 62.20 | 62.10
 | TriMF$\ddagger$ | 87.67 ($\pm$0.17) | 87.54 ($\pm$0.29) | 87.61 ($\pm$0.21) | 62.19 ($\pm$0.52) | 63.37 ($\pm$0.52) | 62.77 ($\pm$0.22)
CoNLL04 | Multi-head + AT (Bekoulis et al., 2018a)$\ddagger$ | - | - | 83.90 | - | - | 62.04
 | Multi-turn QA$\ddagger$ | 89.00 | 86.60 | 87.80 | 69.20 | 68.20 | 68.90
 | SpERT$\ddagger$ | 88.25 | 89.64 | 88.94 | 73.04 | 70.00 | 71.47
 | MRC4ERE++$\ddagger$ | 89.30 | 88.50 | 88.90 | 72.20 | 71.50 | 71.90
 | TriMF$\ddagger$ | 90.26 ($\pm$0.62) | 90.34 ($\pm$0.60) | 90.30 ($\pm$0.24) | 73.01 ($\pm$0.21) | 71.63 ($\pm$0.26) | 72.35 ($\pm$0.23)
ADE | Multi-head + AT (Bekoulis et al., 2018a)$\ddagger$* | - | - | 86.73 | - | - | 75.52
 | SpERT$\ddagger$* | 88.99 | 89.59 | 89.28 | 77.77 | 79.96 | 78.84
 | TriMF$\ddagger$* | 89.50 | 91.29 | 90.38 | 74.22 | 83.43 | 80.66
Table 1. Precision, Recall, and F1 scores on the SciERC, ACE05, CoNLL04 and ADE datasets (* macro-average; $\dagger$ boundary evaluation; $\ddagger$ strict evaluation).
### 4.1. Datasets
We evaluate TriMF on the following four datasets:
* •
SciERC: The SciERC dataset (Luan et al., 2018) includes annotations of scientific entities, their relations, and coreference clusters for 500 scientific abstracts. The dataset defines 6 scientific entity types and 7 relation categories. We adopt the same data splits as in (Luan et al., 2018).
* •
ACE05: ACE05 was built upon ACE04, and is commonly used to benchmark NER and
RE methods. ACE05 defines 7 entity categories. For each pair of entities, it
defines 6 relation categories. We adopt the same data splits as in (Miwa and
Bansal, 2016).
* •
CoNLL04: The CoNLL04 dataset (Roth and Yih, 2004) consists of 1,441 sentences with annotated entities and relations extracted from news articles. It defines 4 entity categories and 5 relation categories. We adopt the same data splits as in (Gupta et al., 2016), which contain 910 training, 243 dev, and 288 test sentences.
* •
ADE: The Adverse Drug Events (ADE) dataset (Gurulingappa et al., 2012) consists of 4,272 sentences and 6,821 relations extracted from medical reports. These sentences describe adverse effects arising from drug use. The ADE dataset contains two entity categories and a single relation category.
### 4.2. Compared Methods
Our model is compared with current advanced joint entity and relation extraction models of three types: general parameter-sharing models (Multi-head + AT, SPTree, SpERT, SciIE), span-graph based models (DyGIE, DyGIE++), and reading-comprehension based models (Multi-turn QA, MRC4ERE++).
Multi-head + AT (Bekoulis et al., 2018a) treats relation extraction as a multi-head selection problem: each entity is paired with every other entity, and the model predicts which relations each pair holds. In addition, instead of a multi-class task with mutually exclusive categories, relation classification is treated as a set of independent binary tasks, which allows more than one relation to be predicted.
SPTree (Miwa and Bansal, 2016) shares encoder parameters between the entity recognition and relation extraction tasks, which strengthens the correlation between them. SPTree is the first model to adopt a neural network for joint extraction of entities and relations.
SpERT (Eberts and Ulges, 2019) is a simple and effective model for joint
entity and relation extraction. It uses BERT (Devlin et al., 2018) to encode a
sentence, and enumerates all spans in the sentence. Then it performs span
classification and span-pair classification to extract entities and relations.
SciIE (Luan et al., 2018) is a framework for extracting entities and relations
from the scientific literature. It reduces error propagation between tasks and
leverages cross-sentence relations through coreference links by introducing a
multi-task setup and a coreference disambiguation task.
DyGIE/DYGIE++ (Luan et al., 2019; Wadden et al., 2019) dynamically build a
span graph, and iteratively refine the span representations by propagating
coreference and relation type confidences through the constructed span graph.
Also, DyGIE++ takes event extraction into account.
Multi-turn QA (Li et al., 2019) casts joint entity and relation extraction as a multi-turn question answering task. Each entity and each relation is described by a question-and-answer template, so that entities and relations can be extracted by answering these templated questions.
MRC4ERE++ (Zhao et al., 2020) introduces a diverse question answering mechanism on top of Multi-turn QA, with two answer selection strategies designed to integrate different answers. Moreover, MRC4ERE++ predicts a subset of potential relations, filtering out irrelevant ones so that questions can be generated effectively.
### 4.3. Evaluation Metrics
We evaluate the models on both entity recognition and relation extraction. An entity is considered correct if its predicted span and entity label match the ground truth. For relation extraction, previous works have used different metrics; for ease of comparison, we report multiple evaluation metrics consistent with them. We define a strict evaluation, where a relation is considered correct only if its relation type and both related entities are correct, and a boundary evaluation, where entity type correctness is not required. We report strict relation F1 on CoNLL04 and ADE, boundary relation F1 on SciERC, and both on ACE05. All experiments report micro-F1 scores, except on the ADE dataset, where we report macro-F1.
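The two relation-correctness criteria can be sketched as a single predicate; the tuple layout and the example types are our own illustration:

```python
def relation_correct(pred, gold, strict=True):
    """pred/gold: (head_span, head_type, tail_span, tail_type, rel_type).
    Boundary evaluation checks only the spans and the relation type;
    strict evaluation additionally requires both entity types to match."""
    ph, pht, pt, ptt, pr = pred
    gh, ght, gt, gtt, gr = gold
    if pr != gr or ph != gh or pt != gt:
        return False
    return not strict or (pht == ght and ptt == gtt)

pred = ((0, 2), "ORG", (5, 6), "LOC", "Located_in")
gold = ((0, 2), "PER", (5, 6), "LOC", "Located_in")
print(relation_correct(pred, gold, strict=True))   # False: head type differs
print(relation_correct(pred, gold, strict=False))  # True under boundary eval
```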
### 4.4. Experiment Settings
In most experiments, we use BERT (Devlin et al., 2018), pre-trained on an English corpus, as the encoder. On the SciERC dataset, we replace BERT with SciBERT (Beltagy et al., 2019). We perform the four-level encoding with a subword encoding size $h=768$, a word encoding size $h_{w}=768$, a span encoding size $h_{s}=793$, and a span-pair encoding size $h_{r}=2354$. We set both the entity memory slot size $h_{me}$ and the relation memory slot size $h_{mr}$ to 768. We use a single graph neural network layer for both the semantic and syntactic graphs. We initialize the entity and relation memories from the normal distribution $\mathcal{N}(0.0,0.02)$. We use the Adam optimizer with a linear warmup-decay learning rate schedule (peak learning rate 5e-5), dropout with rate 0.5 before the entity and relation bilinear classifiers, a batch size of 8, span width embeddings of 25 dimensions, and a maximum span size of 10. Training is divided into two stages, with 18 epochs for the first stage and 12 epochs for the second. Our code will be available at https://github.com/tricktreat/trimf.
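The linear warmup-decay schedule can be sketched as follows; the warmup fraction is our assumption, since the settings specify only the schedule shape and the 5e-5 peak:

```python
def warmup_decay_lr(step, total_steps, peak_lr=5e-5, warmup_frac=0.1):
    # Linear warmup to the peak learning rate, then linear decay to zero.
    # warmup_frac is an assumed hyperparameter for illustration.
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(warmup_decay_lr(100, 1000))   # peak reached at the end of warmup
print(warmup_decay_lr(1000, 1000))  # decayed to zero at the final step
```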
### 4.5. Results and Analysis
Main Results We report the average results over 5 runs on SciERC, ACE05 and
CoNLL04 datasets. For ADE, we report metrics averaged across the 10 folds.
Table 1 illustrates the performance of the proposed method as well as baseline
models on SciERC, ACE05, CoNLL04 and ADE datasets. Our model consistently
outperforms the state-of-the-art models for both entity and relation
extraction on all datasets. Specifically, the relation F1 scores of our model
advance previous models by +3.2%, +4.9%, +0.6%, +2.3% on SciERC, ACE05,
CoNLL04 and ADE, respectively. We attribute the improvement to three factors. First, our model shares learned information between tasks through the memory module, enhancing task interaction in both directions (from NER to RE and from RE to NER). Second, the Trigger Sensor module enhances relation trigger information, which is essential for relation classification. Lastly, going a step beyond introducing structural information through syntactic graphs, we distinguish the semantic and syntactic importance of words and fuse the two information flows through a dynamic Graph Weighted Fusion module. We conduct ablation studies to further investigate the effectiveness of these modules.
### 4.6. Ablation Study
Effect of Different Modules To assess the contribution of each proposed module, we conduct ablation studies. As shown in Table 2, all modules contribute to the final performance. Specifically, removing the Trigger Sensor module has the most significant effect, causing the relation F1 score to drop from 52.44% to 51.23% on SciERC and from 62.77% to 61.60% on ACE05. Comparing the effects of Memory Flow Attention at the subword level and the word level on the two datasets, we find that the improvement from subword-level MFA is more significant. We thus believe that fine-grained semantic information is more effective for relation extraction. The contribution of the Syntactic-Semantic Graph Weighted Fusion module varies widely across datasets, yielding an improvement of 1.09% on ACE05 but only 0.61% on SciERC. This may be related to the differing importance of syntactic information for relation extraction across domains.
Method | Entity F1 | $\Delta$ | Relation F1 | $\Delta$
---|---|---|---|---
SciERC | | | |
TriMF | 70.17 | - | 52.44 | -
w/o Graph Weighted Fusion | 70.12 | -0.05 | 51.83 | -0.61
w/o Trigger Sensor | 70.19 | +0.02 | 51.23 | -1.21
w/o Subword-level MFA | 70.11 | -0.06 | 51.27 | -1.17
w/o Token-level MFA | 70.21 | +0.04 | 51.78 | -0.66
ACE05 | | | |
TriMF | 87.61 | - | 62.77 | -
w/o Graph Weighted Fusion | 87.55 | -0.06 | 61.68 | -1.09
w/o Trigger Sensor | 87.45 | -0.16 | 61.60 | -1.17
w/o Subword-level MFA | 87.09 | -0.52 | 61.68 | -1.09
w/o Token-level MFA | 87.42 | -0.19 | 62.02 | -0.75
Table 2. Effect of Different Modules
Effect of Interaction Between Two Subtasks There is a mutual dependency between the entity recognition and relation extraction tasks. Our framework models this relationship through the Multi-level Memory Flow Attention module. Depending on the memory that the attention mechanism relies on, it can be divided into Relation-specific MFA and Entity-specific MFA. The Relation-specific MFA module enhances relation-related information based on the relation memory, allowing the entity recognition task to utilize information already captured by the relation extraction task; Entity-specific MFA works analogously. To verify that the Memory Flow Attention module facilitates the interaction between entity recognition and relation extraction, we perform ablation studies, shown in Table 3. On ACE05 and SciERC, both Entity-specific and Relation-specific MFA bring significant performance improvements, with Relation-specific MFA improving results more. We believe the reason is that our model performs entity recognition first and relation extraction second: information from entity recognition is already used by relation extraction, but information from relation extraction is not otherwise fed back to entity recognition. Relation-specific MFA builds a bridge for bi-directional information flow between the two tasks. Furthermore, when we use both Entity-specific and Relation-specific MFA, the model achieves the best performance, indicating that MFA can enhance the bi-directional interaction between entity recognition and relation extraction.
Method | Entity F1 | $\Delta$ | Relation F1 | $\Delta$
---|---|---|---|---
SciERC | | | |
TriMF | 70.17 | - | 52.44 | -
w/o MFA | 70.04 | -0.13 | 50.78 | -1.66
w/o Relation MFA | 70.07 | -0.10 | 51.28 | -1.16
w/o Entity MFA | 70.17 | 0 | 51.84 | -0.60
ACE05 | | | |
TriMF | 87.61 | - | 62.77 | -
w/o MFA | 87.42 | -0.19 | 62.19 | -0.58
w/o Relation MFA | 87.37 | -0.24 | 62.06 | -0.71
w/o Entity MFA | 87.38 | -0.23 | 62.64 | -0.13
Table 3. Effect of Interaction between NER and RE
Effect of Different Graph Fusion Methods Our graph weighted fusion module employs a node-wise, attention-based weighted fusion, which enables a flexible combination of node representations according to each word's syntactic and semantic importance. To demonstrate its effectiveness, we compare against other node-wise fusion methods, including no fusion, max-fusion, mean-fusion and sum-fusion, as shown in Table 4. Comparing the two experiments that use only the semantic graph or only the syntactic graph, we find that the syntactic graph provides a greater improvement in model performance, probably because the initial node encodings of the syntactic graph already contain semantic information. Compared to max-fusion, mean-fusion, and sum-fusion, the node-wise weighted-fusion method brings larger improvements on the relation F1 scores of both SciERC and ACE05, which demonstrates the effectiveness of our method.
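A node-wise weighted fusion of this kind can be sketched as follows; the scoring vector `w` and the plain dot-product score are our assumptions, standing in for the attention mechanism of the Graph Weighted Fusion module:

```python
import numpy as np

def node_softmax(x):
    z = np.exp(x - x.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def weighted_fusion(sem_nodes, syn_nodes, w):
    # For every word, score its semantic-graph and syntactic-graph node
    # representations against a learned vector w, turn the two scores
    # into attention weights, and mix the representations node-wise.
    stacked = np.stack([sem_nodes, syn_nodes], axis=1)  # (n, 2, h)
    weights = node_softmax(stacked @ w)                 # (n, 2)
    return (weights[:, :, None] * stacked).sum(axis=1)  # (n, h)

n, h = 4, 6
rng = np.random.default_rng(1)
fused = weighted_fusion(rng.normal(size=(n, h)), rng.normal(size=(n, h)),
                        rng.normal(size=h))
print(fused.shape)  # (4, 6)
```

Because the weights come from a softmax, each fused node is a convex combination of its two graph views, unlike max-, mean-, or sum-fusion, which use fixed mixing rules.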
Method | Entity P | Entity R | Entity F1 | Relation P | Relation R | Relation F1
---|---|---|---|---|---|---
SciERC | | | | | |
No Graph | 69.87 | 70.33 | 70.10 | 52.56 | 49.59 | 51.03
Semantic Graph | 68.47 | 69.61 | 69.04 | 52.00 | 50.62 | 51.30
Syntactic Graph | 72.18 | 70.68 | 71.42 | 54.02 | 48.97 | 51.37
Mean-fusion | 69.77 | 69.02 | 69.39 | 53.56 | 49.39 | 51.38
Sum-fusion | 69.45 | 69.57 | 69.51 | 52.94 | 49.65 | 51.24
Max-fusion | 69.12 | 69.64 | 69.38 | 53.01 | 49.45 | 51.17
Weighted fusion | 70.18 | 70.17 | 70.17 | 52.63 | 52.32 | 52.44
ACE05 | | | | | |
No Graph | 87.24 | 87.18 | 87.21 | 60.11 | 61.83 | 60.96
Semantic Graph | 87.57 | 87.69 | 87.63 | 59.45 | 62.47 | 60.92
Syntactic Graph | 87.47 | 87.36 | 87.41 | 59.29 | 62.96 | 61.07
Mean-fusion | 87.32 | 87.78 | 87.55 | 59.74 | 62.90 | 61.28
Sum-fusion | 87.85 | 87.47 | 87.66 | 60.12 | 62.26 | 61.17
Max-fusion | 87.51 | 87.62 | 87.56 | 60.22 | 62.25 | 61.22
Weighted fusion | 87.67 | 87.54 | 87.61 | 62.19 | 63.37 | 62.77
Table 4. Effect of Different Graph Fusion Methods

Original Text | Relation | Top-5 Relation Triggers
---|---|---
Urutigoechea and the others were arrested Wednesday in the cities of Bayonee and Bonloc in southwestern France in Poitiers in west-central France. | (Bonloc, Located in, France) | southwestern, west-central, cities, of, in
Kleber Elias Gia Bustamante, accused by the police of being a member of the ”Red Sun” central committee, has been living clandestinely since his escape from the Garcia Moreno Prison, where he was held accused of assassinating the industrialist, Jose Antonio Briz Lopez. | (Kleber Elias Gia Bustamante, Kill, Jose Antonio Briz Lopez) | Prison, assassinating, held, of, accused
Table 5. Results of Trigger Words Extraction
Effect of Different Stage Divisions for Memory We explore the effect of different two-stage divisions on relation classification, as shown in Figure 4 (the x-axis is the number of epochs in the first stage; the total number of epochs is 30). If the model skips the first stage (x=0) or omits the second stage (x=30), performance degrades significantly. As the proportion of first-stage epochs increases, the model performs better, but beyond a certain point performance degrades again. We believe this is because the second stage then has too few epochs, so the memory written in the first stage is not utilized effectively. The two-stage training strategy is therefore effective, and a good balance between the two stages yields the best model performance.
Figure 4. Effect of Train Stage Division
Effect of Different Gradient Flows to Memory Our model primarily writes the memory in the Memory-Aware Classifier. In addition, the memory can be tuned in the MFA and Trigger Sensor modules through the backpropagation of gradients. These gradient flows are of three types: Trigger Sensor gradients, Subword-level MFA gradients and Word-level MFA gradients; we investigate their effects in Table 6. On the ACE05 dataset, blocking any of the gradient flows decreases relation F1 significantly, by 1.35%, 1.54%, and 0.92% respectively, which indicates that tuning the memory during the second stage is effective. On the SciERC dataset, however, there is no significant drop, and we believe the model has already learned accurate category representations in the first training stage.
Method | Entity F1 | $\Delta$ | Relation F1 | $\Delta$
---|---|---|---|---
SciERC | | | |
TriMF | 70.17 | - | 52.44 | -
w/o Trigger Sensor Grad. | 70.14 | -0.03 | 52.28 | -0.16
w/o Subword-level MFA Grad. | 70.23 | +0.08 | 52.03 | -0.41
w/o Word-level MFA Grad. | 70.12 | -0.05 | 52.14 | -0.30
ACE05 | | | |
TriMF | 87.61 | - | 62.77 | -
w/o Trigger Sensor Grad. | 87.55 | -0.06 | 61.42 | -1.35
w/o Subword-level MFA Grad. | 87.43 | -0.18 | 61.23 | -1.54
w/o Word-level MFA Grad. | 87.34 | -0.27 | 61.85 | -0.92
Table 6. Effect of Gradient Flow to Memory
Effect of Relation Filtering Threshold The precision and recall of relation classification depend on the predefined threshold, so we investigate the impact of the relation filtering threshold on relation F1. Figure 5 plots the relation F1 score on the SciERC and ACE05 test sets against the relation filtering threshold. The performance of our model is stable across choices of threshold: it achieves good results on relation classification except at the extreme thresholds of 0.0 or 1.0. Within a reasonable range, our model is therefore not sensitive to the choice of threshold.
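The filtering step itself is a simple cut on the per-category sigmoid probabilities from Eq. (27); in this sketch, the integer outputs stand in for relation category indices:

```python
def filter_relations(probs, threshold=0.5):
    # Keep every relation category whose predicted probability
    # exceeds the filtering threshold.
    return [i for i, p in enumerate(probs) if p > threshold]

print(filter_relations([0.9, 0.3, 0.55], threshold=0.5))  # [0, 2]
print(filter_relations([0.9, 0.3, 0.55], threshold=0.0))  # every category passes
```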
Figure 5. Effect of Relation Filtering Threshold
### 4.7. Case Study
Figure 6. Two case studies of relation memory flow attention during inference. The darker cells have higher attention weights.
The problem is not unusual in [[ Guernsey ]${}_{\text{H\\_Located-in}}$]${}_{\text{H\\_Located-in}}$, one of [ Britain ]${}_{\text{T\\_Located-in}}$ ’s [ Channel Islands ]${}_{\text{T\\_Located-in}}$ off the coast of [[ France ]]${}_{\text{T\\_Located-in}}$
---
… and former [[ CBS ]${}_{\text{T\\_Work-for}}$ News ] commentator [[ Eric Sevareid ]${}_{\text{H\\_Live-in}}$ ]${}_{\text{H\\_Work-for}}$ , who was born in [ Velva ]${}_{\text{T\\_Live-in}}$ , several miles southeast of [ Minot ].
---
Text of the statement issued by the [ Organization of the Oppressed on Earth ] claiming [[ U.S. ]${}_{\text{T\\_Live-in}}$]${}_{\text{T\\_Live-in}}$ Marine Lt. [[ William R. Higgins ]${}_{\text{H\\_Live-in}}$]${}_{\text{H\\_Live-in}}$ was hanged.
Table 7. Typical error examples. Red brackets indicate entities predicted by the model, blue brackets indicate true entities, and the labels in the lower right corner indicate the relation type between the corresponding entities and whether each is the head or the tail (H: head entity; T: tail entity).
Trigger Words Extraction With the Trigger Sensor module, our model can extract relation triggers. We rank the similarities of each word representation with the span-pair specific relation representation, which were computed in the Trigger Sensor. After filtering out entity surface words and stopwords, the top-$k$ words are picked as relation triggers and used to interpret the results of relation extraction. We show two cases in Table 5.
Memory Flow Attention Visualization We visualize the attention weights to give a straightforward picture of how the entity and relation memory flow attention enhances the interaction between entity recognition and relation extraction. It also enhances information about relation triggers in context, which to some extent explains the model's predictions. Figure 6 shows two cases of how attention weights from the relation memory flow help the model recognize entities and highlight relation triggers. Each example is split into two visualizations, with the top showing the original attention weights and the bottom showing the attention weights after masking the entities. In the top figure, the darker words belong to entities, for example ”Urutigoechea”, ”Bayonee”, ”Bonloc” in case 1 and ”Dallas”, ”Jack Ruby” in case 2, illustrating that relation memory flow attention can highlight relevant entity information. Consistent with (Han et al., 2020), our attention distribution also shows that entity names provide more valid information for relation classification than context does. To visualize the attention weights of contextual words more clearly, we mask all entities, normalize the weights of the remaining words, and then visualize them. As shown in the bottom figure, the relation memory flow pays more attention to words that indicate the type of relation, i.e., relation triggers: ”in”, ”southwestern”, ”west-central” in case 1 indicate the ”Located in” relation, and ”assassin”, ”murdering” in case 2 indicate the ”Kill” relation. This shows that our relation memory flow is able to highlight relation triggers, helping the model achieve better performance on relation extraction.
Error Cases In addition to visualizing Memory Flow Attention weights on true positives, we also analyze a number of false positives and false negatives. These error cases include relations requiring inference, ambiguous entity recognition, and long entity recognition, as shown in Table 7. In the first case, although our model recognizes the four location entities, it incorrectly extracts the relation ”(Guernsey, Located in, France)” and misses the correct one, ”(Guernsey, Located in, Channel Islands)”, because it does not infer the complex location relation among the four entities. Our model is prone to mistakes when classifying ambiguous entities, and false positives and false negatives often occur together: in the second row of Table 7, the model does not recognize ”CBS News” as an entity, but recognizes ”CBS”, which is not labeled in the test set. Furthermore, recognizing long entities is a challenge for our model because long entities are sparse in the dataset; in the third row of Table 7, the model fails to recognize the long entity ”Organization of the Oppressed on Earth”.
## 5\. Conclusion and Future Work
In this paper, we propose the Trigger-Sense Memory Flow Framework (TriMF) for joint entity and relation extraction. We use memory to boost the task-related information in a sentence through the Multi-level Memory Flow Attention module, which effectively exploits the mutual dependency between entity recognition and relation extraction and enhances their bi-directional interaction. Focusing on relation triggers, we also design a Trigger Sensor that senses and enhances triggers based on the memory; our model can extract relation triggers without any trigger annotations, which better assists relation extraction and provides an explanation of the predictions. Furthermore, we distinguish the semantic and syntactic importance of each word in a sentence and fuse the semantic and syntactic graphs dynamically with an attention mechanism. Experiments on the SciERC, ACE05, CoNLL04 and ADE datasets show that our proposed model TriMF achieves state-of-the-art performance.
In the future, we will improve our work along two directions. First, we plan to impose constraints on the representations of entity and relation categories written into the memory, since relations and entities substantively satisfy specific constraints at the ontology level. Second, to improve the model's ability to sense triggers, we plan to add weak supervision (e.g., word frequency, entity boundaries) to the Trigger Sensor module.
###### Acknowledgements.
This work is supported by the National Key Research and Development Project of
China (No. 2018AAA0101900), the Fundamental Research Funds for the Central
Universities, the Chinese Knowledge Center of Engineering Science and
Technology (CKCEST) and MOE Engineering Research Center of Digital Library.
## References
* Agichtein and Gravano (2000) Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In _Proceedings of the fifth ACM conference on Digital libraries_. 85–94.
* Aone et al. (1998) Chinatsu Aone, Lauren Halverson, Tom Hampton, and Mila Ramos-Santacruz. 1998. SRA: Description of the IE2 system used for MUC-7. In _Seventh Message Understanding Conference (MUC-7): Proceedings of a Conference Held in Fairfax, Virginia, April 29-May 1, 1998_.
* Batista et al. (2015) David S Batista, Bruno Martins, and Mário J Silva. 2015\. Semi-supervised bootstrapping of relationship extractors with distributional semantics. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_. 499–504.
* Bekoulis et al. (2018a) Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018a. Adversarial training for multi-context joint entity and relation extraction. _arXiv preprint arXiv:1808.06876_ (2018).
* Bekoulis et al. (2018b) Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018b. Joint entity recognition and relation extraction as a multi-head selection problem. _Expert Systems with Applications_ 114 (2018), 34–45.
* Beltagy et al. (2019) Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. _arXiv preprint arXiv:1903.10676_ (2019).
* Bui et al. (2011) Quoc-Chinh Bui, Sophia Katrenko, and Peter MA Sloot. 2011\. A hybrid approach to extract protein–protein interactions. _Bioinformatics_ 27, 2 (2011), 259–265.
* Bunescu and Mooney (2005) Razvan Bunescu and Raymond Mooney. 2005. A shortest path dependency kernel for relation extraction. In _Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing_. 724–731.
* Chan and Roth (2011) Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. 551–560.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_ (2018).
* Eberts and Ulges (2019) Markus Eberts and Adrian Ulges. 2019. Span-based Joint Entity and Relation Extraction with Transformer Pre-training. _arXiv preprint arXiv:1909.07755_ (2019).
* Fu et al. (2019) Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_. 1409–1418.
* Fundel et al. (2007) Katrin Fundel, Robert Küffner, and Ralf Zimmer. 2007\. RelEx-Relation extraction using dependency parse trees. _Bioinformatics_ 23, 3 (2007), 365–371.
* Gupta et al. (2016) Pankaj Gupta, Hinrich Schütze, and Bernt Andrassy. 2016\. Table filling multi-task recurrent neural network for joint entity and relation extraction. In _Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers_. 2537–2547.
* Gurulingappa et al. (2012) Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. 2012. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. _Journal of biomedical informatics_ 45, 5 (2012), 885–892.
* Han et al. (2020) Xu Han, Tianyu Gao, Yankai Lin, Hao Peng, Yaoliang Yang, Chaojun Xiao, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2020\. More Data, More Relations, More Context and More Openness: A Review and Outlook for Relation Extraction. _arXiv preprint arXiv:2004.03186_ (2020).
* Hearst (1992) Marti A Hearst. 1992\. Automatic acquisition of hyponyms from large text corpora. In _Coling 1992 volume 2: The 15th international conference on computational linguistics_.
* Jiang and Zhai (2007) Jing Jiang and ChengXiang Zhai. 2007. A systematic exploration of the feature space for relation extraction. In _Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference_. 113–120.
On the duals of the Fibonacci and Catalan-Fibonacci polynomials and Motzkin
paths
Paul Barry
School of Science
Waterford Institute of Technology
Ireland
<EMAIL_ADDRESS>
###### Abstract
We use the inversion of coefficient arrays to define dual polynomials to the
Fibonacci and Catalan-Fibonacci polynomials, and we explore the properties of
these new polynomials sequences. Many of the arrays involved are Riordan
arrays. Direct links to the counting of Motzkin paths by different statistics
emerge.
## 1 Preliminaries
The Fibonacci polynomials are the family of polynomials $F_{n}(y)$ with
generating function $F(x,y)=\frac{x}{1-yx-x^{2}}$ [5, 6, 8, 12]. We
immediately have that $F_{n}(1)=F_{n}$, the Fibonacci numbers A000045, which
explains the name of this family. We have
$\displaystyle F_{0}(y)$ $\displaystyle=0$ $\displaystyle F_{1}(y)$
$\displaystyle=1$ $\displaystyle F_{2}(y)$ $\displaystyle=y$ $\displaystyle
F_{3}(y)$ $\displaystyle=y^{2}+1$ $\displaystyle F_{4}(y)$
$\displaystyle=y^{3}+2y$ $\displaystyle\ldots$
By the _dual Fibonacci polynomials_ $\hat{F}_{n}(y)$ we shall mean the
polynomials whose generating function is given by the series reversion of
$F(x,y)$, where the reversion is taken with respect to $x$. To find this
generating function, we solve the equation
$\frac{u}{1-yu-u^{2}}=x$
to get the solution
$u(x)=\frac{\sqrt{1+2yx+(y^{2}+4)x^{2}}-yx-1}{2x}.$
We find that
$\displaystyle\hat{F}_{0}(y)$ $\displaystyle=0$ $\displaystyle\hat{F}_{1}(y)$
$\displaystyle=1$ $\displaystyle\hat{F}_{2}(y)$ $\displaystyle=-y$
$\displaystyle\hat{F}_{3}(y)$ $\displaystyle=y^{2}-1$
$\displaystyle\hat{F}_{4}(y)$ $\displaystyle=-y^{3}+3y$ $\displaystyle\ldots$
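These expansions are easy to reproduce with a computer algebra system. The following sympy sketch (illustrative only, not part of the derivation) checks the reversion and recovers the dual polynomials above; note the branch of the square root is chosen so that $u(0)=0$.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Reversion of F(x, y) = x/(1 - y*x - x^2) with respect to x,
# taking the root with u(0) = 0.
u = (sp.sqrt(1 + 2*y*x + (y**2 + 4)*x**2) - y*x - 1) / (2*x)

# Sanity check via series: F(u(x), y) = x + O(x^6).
F_of_u = u / (1 - y*u - u**2)
assert sp.expand(sp.series(F_of_u, x, 0, 6).removeO()) == x

# The coefficients of u(x) are the dual Fibonacci polynomials.
poly = sp.expand(sp.series(u, x, 0, 5).removeO())
duals = [poly.coeff(x, n) for n in range(1, 5)]
print(duals)  # [1, -y, y**2 - 1, -y**3 + 3*y]
```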
More insight is gained by characterizing the coefficient arrays of these
polynomials. It will be seen that many of the coefficient arrays we meet in
this note are Riordan arrays [2, 9] or are closely related to them. Many
examples of Riordan arrays are documented in the On-Line Encyclopedia of
Integer Sequences (OEIS) [10, 11]. Sequences in this database are referenced
by their $Axxxxxx$ numbers.
###### Lemma 1.
The coefficient array of the Fibonacci polynomial sequence
$F_{1}(y),F_{2}(y),F_{3}(y),\ldots$ is the Riordan array
$\left(\frac{1}{1-x^{2}},\frac{x}{1-x^{2}}\right)$.
###### Proof.
By the theory of Riordan arrays, the bivariate generating function of the
Riordan array $\left(\frac{1}{1-x^{2}},\frac{x}{1-x^{2}}\right)$ is given by
$\frac{\frac{1}{1-x^{2}}}{1-y\frac{x}{1-x^{2}}}=\frac{1}{1-yx-x^{2}}.$
∎
###### Corollary 2.
We have
$F_{n+1}(y)=\sum_{k=0}^{n}\binom{\frac{n+k}{2}}{k}\frac{1+(-1)^{n-k}}{2}y^{k}.$
###### Proof.
The $(n,k)$-th element of the Riordan array
$\left(\frac{1}{1-x^{2}},\frac{x}{1-x^{2}}\right)$ is given by
$t_{n,k}=[x^{n}]\frac{1}{1-x^{2}}\left(\frac{x}{1-x^{2}}\right)^{k}=\binom{\frac{n+k}{2}}{k}\frac{1+(-1)^{n-k}}{2}.$
∎
This coefficient array begins
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&1&0&0&0&0\\\ 1&0&1&0&0&0\\\
0&2&0&1&0&0\\\ 1&0&3&0&1&0\\\ 0&3&0&4&0&1\\\ \end{array}\right).$
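As a quick illustrative check (a Python sketch; the helper name `t` is ours), the closed form of Corollary 2 reproduces this triangle, and its row sums give $F_{n+1}(1)=F_{n+1}$, the Fibonacci numbers:

```python
from math import comb

def t(n, k):
    # (n,k)-th entry of (1/(1-x^2), x/(1-x^2)): zero unless n-k is even,
    # otherwise binom((n+k)/2, k).
    return comb((n + k) // 2, k) if (n - k) % 2 == 0 else 0

triangle = [[t(n, k) for k in range(6)] for n in range(6)]
print(triangle[3])                      # [0, 2, 0, 1, 0, 0]
print(triangle[5])                      # [0, 3, 0, 4, 0, 1]
print([sum(row) for row in triangle])   # [1, 1, 2, 3, 5, 8]
```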
The combinatorial meaning of the $(n,k)$-th element of this array is that it
counts the number of ways an $n\times 1$ board can be tiled with $2\times 1$
dominoes and exactly $k$ $1\times 1$ squares. The inversion of this array,
denoted by $\left(\frac{1}{1-x^{2}},\frac{x}{1-x^{2}}\right)^{!}$, begins
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&-1&0&0&0&0\\\ -1&0&1&0&0&0\\\
0&3&0&-1&0&0\\\ 2&0&-6&0&1&0\\\ 0&-10&0&10&0&-1\\\ \end{array}\right).$
The array $\left(\frac{1}{1-x^{2}},\frac{x}{1-x^{2}}\right)$ is an element of
the Bell subgroup of the group of Riordan arrays. We therefore have the
following [1].
###### Corollary 3.
The coefficient array of the dual Fibonacci polynomials
$\hat{F}_{1}(y),\hat{F}_{2}(y),\hat{F}_{3}(y),\ldots$ is given by the
exponential Riordan array
$\left[\frac{I_{1}(2ix)}{ix},-x\right].$
Here, $i=\sqrt{-1}$. The general element of this array is given by
$\hat{t}_{n,k}=\binom{n}{k}C_{\frac{n-k}{2}}(-1)^{\frac{n+k}{2}}\frac{1+(-1)^{n-k}}{2},$
where $C_{n}=\frac{1}{n+1}\binom{2n}{n}$ is the $n$-th Catalan number. Then
$\hat{F}_{n+1}(y)=\sum_{k=0}^{n}\hat{t}_{n,k}y^{k}.$
The corresponding matrix $\left[\frac{I_{1}(2x)}{x},x\right]$ with all
nonnegative elements is A097610 in the OEIS. This array counts the number of
Motzkin paths of length $n$ having $k$ horizontal steps. We can generalize
these results by considering the generating function $\frac{1}{1-yx-zx^{2}}$.
Expanding this along $x$ we have the following.
$\displaystyle[x^{n}]\frac{1}{1-yx-zx^{2}}$
$\displaystyle=[x^{n}](1-x(y+zx))^{-1}$
$\displaystyle=[x^{n}]\sum_{i=0}^{\infty}x^{i}(y+zx)^{i}$
$\displaystyle=[x^{n}]\sum_{i=0}^{\infty}x^{i}\sum_{j=0}^{i}\binom{i}{j}y^{j}z^{i-j}x^{i-j}$
$\displaystyle=\sum_{i=0}^{n}\binom{i}{n-i}y^{2i-n}z^{n-i}$
$\displaystyle=\sum_{i=0}^{n}\binom{n-i}{i}y^{n-2i}z^{i}.$
We then have
$F_{n+1}(y)=\sum_{i=0}^{n}\binom{i}{n-i}y^{2i-n}\quad\text{and}\quad
F_{n+1}(y)=\sum_{i=0}^{\lfloor\frac{n}{2}\rfloor}\binom{n-i}{i}y^{n-2i}.$
Thus we have a second and a third matrix associated with the Fibonacci
polynomials.
The second matrix is the lower-triangular invertible triangle
$\left(\binom{k}{n-k}\right)_{0\leq n,k\leq\infty}$, which corresponds to the
Riordan array $(1,x(1+x))$. This triangle begins
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&1&0&0&0&0\\\ 0&1&1&0&0&0\\\
0&0&2&1&0&0\\\ 0&0&1&3&1&0\\\ 0&0&0&3&4&1\\\ \end{array}\right).$
The generating function of this matrix is given by
$\frac{1}{1-yx(1+x)}=\frac{1}{1-yx-yx^{2}}.$
To get its inversion, we thus solve the equation
$\frac{u}{1-yu-yu^{2}}=x$
to get
$\frac{u}{x}=\frac{\sqrt{1+2yx+y(y+4)x^{2}}-yx-1}{2yx^{2}}.$
This expands to give the matrix $(1,x(1+x))^{!}$ that begins
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&-1&0&0&0&0\\\ 0&-1&1&0&0&0\\\
0&0&3&-1&0&0\\\ 0&0&2&-6&1&0\\\ 0&0&0&-10&10&-1\\\ \end{array}\right).$
The general element of this matrix is
$\tilde{t}_{n,k}=\frac{(-1)^{k}}{k+1}\binom{n}{k}\binom{k+1}{n-k+1}.$
The nonnegative matrix is A107131, which counts Motzkin paths of length $n$
with $k$ up steps, or $k$ horizontal steps. We let $\tilde{F}_{n}(y)$ be the
polynomials with
$\tilde{F}_{0}(y)=0,\tilde{F}_{1}(y)=1,\tilde{F}_{2}=-y,\tilde{F}_{3}(y)=y^{2}-y,\tilde{F}_{4}(y)=-y^{3}+3y^{2},\ldots,$
defined by the above matrix. We have the following result.
###### Proposition 4.
$\tilde{F}_{n+1}(y)=y^{n}\,_{2}F_{1}\left(\frac{1}{2}-\frac{n}{2},-\frac{n}{2};2;-\frac{4}{y}\right).$
We can express the dual polynomials $\hat{F}_{n}$ in terms of the matrix
$(\tilde{t}_{n,k})$ as follows.
###### Proposition 5.
We have
$\hat{F}_{n+1}(y)=\sum_{k=0}^{n}\tilde{t}_{n,k}y^{2k-n}.$
The third matrix associated with the Fibonacci polynomials is the matrix
$\left(\binom{n-k}{k}\right)$ (which is the one most usually associated with
the Fibonacci polynomials). This is the “stretched” Riordan array
$\left(\frac{1}{1-x},\frac{x^{2}}{1-x}\right)$, which begins
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\
1&2&0&0&0&0\\\ 1&3&1&0&0&0\\\ 1&4&3&0&0&0\\\ \end{array}\right).$
This matrix is A011973 in the OEIS. Its generating function is given by
$\frac{\frac{1}{1-x}}{1-y\frac{x^{2}}{1-x}}=\frac{1}{1-x-yx^{2}}.$
To find the inversion of this matrix, we solve the equation
$\frac{u}{1-u-yu^{2}}=x$
to get
$\frac{u}{x}=\frac{\sqrt{1+2x+(1+4y)x^{2}}-x-1}{2yx^{2}}$
as the generating function of the inversion. This expands to give the matrix
$\left(\tilde{\tilde{t}}_{n,k}\right)$ that begins
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ -1&0&0&0&0&0\\\ 1&-1&0&0&0&0\\\
-1&3&0&0&0&0\\\ 1&-6&2&0&0&0\\\ -1&10&-10&0&0&0\\\ \end{array}\right).$
This matrix is the coefficient array of the polynomials
$\tilde{\tilde{F}}_{n}(y)$ with
$\tilde{\tilde{F}}_{0}(y)=0,\tilde{\tilde{F}}_{1}(y)=1,\tilde{\tilde{F}}_{2}(y)=-1,\tilde{\tilde{F}}_{3}(y)=1-y,\tilde{\tilde{F}}_{4}(y)=3y-1,\tilde{\tilde{F}}_{5}(y)=2y^{2}-6y+1,\ldots.$
The general term of this matrix is
$\tilde{\tilde{t}}_{n,k}=\binom{n}{2k}C_{k}(-1)^{n-k}.$
The nonnegative matrix $\left(\binom{n}{2k}C_{k}\right)$ is A055151, which
counts the number of Motzkin paths of length $n$ with $k$ up steps. We can
express the dual Fibonacci polynomials $\hat{F}_{n}(y)$ in terms of this
matrix as follows.
###### Proposition 6.
We have
$\hat{F}_{n+1}(y)=\sum_{k=0}^{\lfloor\frac{n}{2}\rfloor}\tilde{\tilde{t}}_{n,k}y^{n-2k}=\sum_{k=0}^{\lfloor\frac{n}{2}\rfloor}\binom{n}{2k}C_{k}(-1)^{n-k}y^{n-2k}.$
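Proposition 6 yields the dual Fibonacci polynomials directly; a minimal Python sketch (helper names ours) recovers the coefficients listed earlier, keyed by the power of $y$:

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def dual_fib(n):
    # Coefficients {power of y: coefficient} of \hat F_{n+1}(y) from
    # Proposition 6: binom(n, 2k) * C_k * (-1)^(n-k) at y^(n-2k).
    return {n - 2 * k: comb(n, 2 * k) * catalan(k) * (-1) ** (n - k)
            for k in range(n // 2 + 1)}

print(dual_fib(2))  # {2: 1, 0: -1}   i.e.  y^2 - 1
print(dual_fib(3))  # {3: -1, 1: 3}   i.e.  -y^3 + 3y
```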
## 2 Catalan-Fibonacci polynomials and their duals
The Catalan-Fibonacci polynomials are obtained by scaling the Fibonacci
polynomials by the Catalan numbers. Thus we set $CF_{n}(y)=C_{n-1}F_{n}(y)$.
In order to explore this concept, we first look at the relevant generating
functions. We have the following result in this direction.
###### Proposition 7.
We have
$[x^{n+1}]\operatorname{Rev}\left(x(\sqrt{1-4bx^{2}}-ax)\right)=C_{n}\sum_{i=0}^{\lfloor\frac{n}{2}\rfloor}\binom{n-i}{i}a^{n-2i}b^{i}.$
###### Proof.
The proof uses Lagrange Inversion [4, 7]. We have
$\displaystyle[x^{n+1}]\operatorname{Rev}\left(x(\sqrt{1-4bx^{2}}-ax)\right)$
$\displaystyle=\frac{1}{n+1}[x^{n}]\left(\sqrt{1-4bx^{2}}-ax\right)^{-(n+1)}$
$\displaystyle=\frac{1}{n+1}[x^{n}]\sum_{j=0}^{\infty}\binom{-(n+1)}{j}(1-4bx^{2})^{\frac{j}{2}}(-ax)^{-(n+1)-j}$
$\displaystyle=\frac{1}{n+1}[x^{n}]\sum_{j=0}^{\infty}\binom{n+j}{j}(-1)^{j}\sum_{i=0}^{\frac{j}{2}}\binom{\frac{j}{2}}{i}(-4b)^{i}x^{2i}(-ax)^{-n-j-1}$
$\displaystyle=\frac{1}{n+1}\sum_{i\geq
0}\binom{\frac{2i-2n-1}{2}}{i}(-4b)^{i}\binom{2i-n-1}{2i-2n-1}(-a)^{n-2i}$
$\displaystyle=\frac{1}{n+1}\sum_{i\geq
0}\binom{-\left(\frac{2n-2i+1}{2}\right)}{i}(-4b)^{i}\binom{2i-n-1}{n}(-a)^{n-2i}$
$\displaystyle=\frac{1}{n+1}\sum_{i\geq
0}\binom{\frac{2n-2i+1}{2}+i-1}{i}(4b)^{i}\binom{-(n-2i+1)}{n}(-a)^{n-2i}$
$\displaystyle=\frac{1}{n+1}\sum_{i\geq
0}\binom{n-\frac{1}{2}}{i}(4b)^{i}\binom{n-2i+1+n-1}{n}(-1)^{n}(-a)^{n-2i}$
$\displaystyle=\frac{1}{n+1}\sum_{i\geq
0}\binom{n-\frac{1}{2}}{i}\binom{2n-2i}{n}4^{i}a^{n-2i}b^{i}$
$\displaystyle=\frac{1}{n+1}\sum_{i=0}^{\lfloor\frac{n}{2}\rfloor}\binom{2n}{n}\binom{n-i}{i}a^{n-2i}b^{i}$
$\displaystyle=C_{n}\sum_{i=0}^{\lfloor\frac{n}{2}\rfloor}\binom{n-i}{i}a^{n-2i}b^{i}.$
∎
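The identity of Proposition 7 can also be checked numerically for small $n$, using the closed form of the reversion derived just below in the text (a sympy sketch; the test values $a=2$, $b=3$ are arbitrary choices of ours):

```python
import sympy as sp
from math import comb

x = sp.symbols('x')
a, b = 2, 3  # arbitrary test values

# Closed form of Rev(x*(sqrt(1 - 4*b*x^2) - a*x)), obtained by solving
# u*(sqrt(1 - 4*b*u^2) - a*u) = x with u(0) = 0.
rev = sp.sqrt(1 - 2 * a * x - sp.sqrt(1 - 4 * a * x - 16 * b * x ** 2)) \
      / (sp.sqrt(2) * sp.sqrt(a ** 2 + 4 * b))

catalan = lambda n: comb(2 * n, n) // (n + 1)
rhs = lambda n: catalan(n) * sum(comb(n - i, i) * a ** (n - 2 * i) * b ** i
                                 for i in range(n // 2 + 1))

poly = sp.expand(sp.series(rev, x, 0, 7).removeO())
lhs = [sp.simplify(poly.coeff(x, n + 1)) for n in range(6)]
assert lhs == [rhs(n) for n in range(6)]
print("Proposition 7 verified for n = 0..5")
```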
###### Corollary 8.
The generating function of the Catalan-Fibonacci polynomial sequence
$C_{n}F_{n+1}(y)$ is given by
$\frac{1}{x}\operatorname{Rev}\left(x(\sqrt{1-4yx^{2}}-x)\right).$
In order to get a closed expression for
$\operatorname{Rev}\left(x(\sqrt{1-4bx^{2}}-ax)\right)$, we solve the equation
$u(\sqrt{1-4bu^{2}}-au)=x$
and we take the solution with $u(0)=0$. We find that
$\operatorname{Rev}\left(x(\sqrt{1-4bx^{2}}-ax)\right)=\frac{\sqrt{1-2ax-\sqrt{1-4ax-16bx^{2}}}}{\sqrt{2}\sqrt{a^{2}+4b}}.$
The following result is immediate.
###### Corollary 9.
The generating function of the Catalan-Fibonacci polynomials $CF_{n}(y)$ is
given by
$\frac{\sqrt{1-2x-\sqrt{1-4x-16yx^{2}}}}{\sqrt{2}\sqrt{1+4y}}.$
Regarded as the bivariate generating function in $x$ and $y$, this generating
function expands to give the matrix that begins
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&0&0&0&0&0\\\ 2&2&0&0&0&0\\\
5&10&0&0&0&0\\\ 14&42&14&0&0&0\\\ 42&168&126&0&0&0\\\ \end{array}\right).$
We define the _dual Catalan-Fibonacci polynomials_ $\hat{CF}_{n}(y)$ to be the
sequence of polynomials whose generating function is given by the series
reversion of that of the Catalan-Fibonacci polynomials. Thus we have that the
generating function of the dual Catalan-Fibonacci polynomials is given by
$x(\sqrt{1-4yx^{2}}-x).$
These polynomials therefore start
$0,1,-1,-2y,0,-2y^{2},0,-4y^{3},0,-10y^{4},0,\ldots.$
It is interesting to note the simple form of these polynomials, which are
defined essentially by the Catalan numbers, since we have
$(2,2,4,10,\ldots)=2(1,1,2,5,\ldots).$
In terms of the inversion of coefficient matrices, we have the following.
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&0&0&0&0&0\\\ 2&2&0&0&0&0\\\
5&10&0&0&0&0\\\ 14&42&14&0&0&0\\\ 42&168&126&0&0&0\\\
\end{array}\right)^{!}=\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\
-1&0&0&0&0&0\\\ 0&-2&0&0&0&0\\\ 0&0&0&0&0&0\\\ 0&0&-2&0&0&0\\\ 0&0&0&0&0&0\\\
\end{array}\right).$
###### Example 10.
The sequence $\hat{CF}_{n+1}(1)$ begins
$1,-1,-2,0,-2,0,-4,0,-10,0,-28,0,-84,0,-264,0,-858,0,\ldots.$
The Hankel transform of this sequence begins
$1,-3,14,-32,96,-208,544,-1152,2816,-5888,\ldots.$
This has generating function
$\frac{1-x+4x^{2}}{(1-2x)(1+2x)^{2}}.$
The sequence $\hat{CF}_{n+1}(-1)$ begins
$1,-1,2,0,-2,0,4,0,-10,0,28,0,-84,0,264,0,-858,0,\ldots.$
The Hankel transform of this sequence begins
$1,1,-10,-16,64,112,-352,-640,1792,3328,\ldots.$
and it has generating function
$\frac{1+x-2x^{2}-8x^{3}}{(1+4x^{2})^{2}}.$
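The Hankel transforms quoted above are the determinants $\det(a_{i+j})_{0\leq i,j\leq n}$; a short sympy sketch (helper name ours) reproduces the first values for $\hat{CF}_{n+1}(1)$:

```python
import sympy as sp

# First terms of the sequence \hat{CF}_{n+1}(1) as listed above.
seq = [1, -1, -2, 0, -2, 0, -4, 0, -10, 0, -28, 0, -84, 0, -264, 0, -858, 0]

def hankel(a, n):
    # n-th Hankel determinant: det of the (n+1) x (n+1) matrix (a_{i+j}).
    return sp.Matrix(n + 1, n + 1, lambda i, j: a[i + j]).det()

print([hankel(seq, n) for n in range(4)])  # [1, -3, 14, -32]
```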
## 3 The Catalan-Fibonacci matrix
We have
$CF_{n+1}=C_{n}\sum_{i=0}^{\lfloor\frac{n}{2}\rfloor}\binom{n-i}{i}a^{n-2i}b^{i}$,
now regarded as a polynomial in the two variables $a$ and $b$. The sequence $CF_{n+1}$ begins
$1,a,2(a^{2}+b),5a(a^{2}+2b),14(a^{4}+3a^{2}b+b^{2}),42a(a^{4}+4a^{2}b+3b^{2}),\ldots.$
In matrix terms, we can express this in two ways. We have
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ a&0&0&0&0&0\\\
2a^{2}&2&0&0&0&0\\\ 5a^{3}&10a&0&0&0&0\\\ 14a^{4}&42a^{2}&14&0&0&0\\\
42a^{5}&168a^{3}&126a&0&0&0\\\ \end{array}\right)\left(\begin{array}[]{c}1\\\
b\\\ b^{2}\\\ b^{3}\\\ b^{4}\\\ b^{5}\\\
\end{array}\right)=\left(\begin{array}[]{c}1\\\ a\\\ 2(a^{2}+b)\\\
5a(a^{2}+2b)\\\ 14(a^{4}+3a^{2}b+b^{2})\\\ 42a(a^{4}+4a^{2}b+3b^{2})\\\
\end{array}\right),$
and
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&1&0&0&0&0\\\ 2b&0&2&0&0&0\\\
0&10b&0&5&0&0\\\ 14b^{2}&0&42b&0&14&0\\\ 0&126b^{2}&0&168b&0&42\\\
\end{array}\right)\left(\begin{array}[]{c}1\\\ a\\\ a^{2}\\\ a^{3}\\\ a^{4}\\\
a^{5}\\\ \end{array}\right)=\left(\begin{array}[]{c}1\\\ a\\\ 2(a^{2}+b)\\\
5a(a^{2}+2b)\\\ 14(a^{4}+3a^{2}b+b^{2})\\\ 42a(a^{4}+4a^{2}b+3b^{2})\\\
\end{array}\right).$
We call the matrix for $b=1$ that begins
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&1&0&0&0&0\\\ 2&0&2&0&0&0\\\
0&10&0&5&0&0\\\ 14&0&42&0&14&0\\\ 0&126&0&168&0&42\\\ \end{array}\right)$
the _Catalan-Fibonacci matrix_. Its generating function is
$\frac{\sqrt{1-2yx-\sqrt{1-4yx-16x^{2}}}}{\sqrt{2(y+4)}}.$
Its row sums are the numbers $C_{n}F_{n+1}$, which gives the sequence A098614
in the OEIS. The inversion of the Catalan-Fibonacci matrix is the matrix that
begins
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&-1&0&0&0&0\\\ -2&0&0&0&0&0\\\
0&0&0&0&0&0\\\ -2&0&0&0&0&0\\\ 0&0&0&0&0&0\\\ \end{array}\right).$
Here, the first column is the sequence
$1,0,-2,0,-2,0,-4,0,-10,0,-28,0,\ldots.$
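The row-sum property is easy to verify with a short Python sketch (the entry formula $C_{n}\binom{(n+k)/2}{(n-k)/2}$ follows from the expansion of $CF_{n+1}$ with $b=1$; helper names are ours):

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def m(n, k):
    # Entry of the Catalan-Fibonacci matrix (case b = 1): zero unless n-k is
    # even, otherwise C_n * binom((n+k)/2, (n-k)/2).
    if (n - k) % 2:
        return 0
    return catalan(n) * comb((n + k) // 2, (n - k) // 2)

row_sums = [sum(m(n, k) for k in range(n + 1)) for n in range(6)]
fib = [1, 1, 2, 3, 5, 8]  # F_1 .. F_6
print(row_sums)                                 # [1, 1, 4, 15, 70, 336]
print([catalan(n) * fib[n] for n in range(6)])  # [1, 1, 4, 15, 70, 336]
```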
When $b=2$, we get the matrix that begins
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&1&0&0&0&0\\\ 4&0&2&0&0&0\\\
0&20&0&5&0&0\\\ 56&0&84&0&14&0\\\ 0&504&0&336&0&42\\\ \end{array}\right).$
We call this the _Catalan-Jacobsthal_ matrix. Its row sums are the product of
the Catalan numbers and the Jacobsthal numbers. This row sum sequence is
sequence A200375 in the OEIS.
## 4 The generating function $\frac{1}{\sqrt{1-4bx^{2}}-ax}$
To explore the reciprocal of the generating function $\sqrt{1-4bx^{2}}-ax$ we
consider the Riordan array
$\left(\frac{1}{\sqrt{1-4bx^{2}}},\frac{x}{\sqrt{1-4bx^{2}}}\right).$
By the fundamental theorem of Riordan arrays, we have
$\displaystyle\left(\frac{1}{\sqrt{1-4bx^{2}}},\frac{x}{\sqrt{1-4bx^{2}}}\right)\cdot\frac{1}{1-ax}$
$\displaystyle=\frac{1}{\sqrt{1-4bx^{2}}}\frac{1}{1-a\frac{x}{\sqrt{1-4bx^{2}}}}$
$\displaystyle=\frac{1}{\sqrt{1-4bx^{2}}-ax}.$
Equivalently, we have
$\frac{1}{\sqrt{1-4bx^{2}}-ax}=\left(\frac{1}{\sqrt{1-4bx^{2}}},\frac{x}{\sqrt{1-4bx^{2}}}\right)\cdot\frac{1}{1-ax}=\left(\frac{1}{\sqrt{1-4bx^{2}}},\frac{ax}{\sqrt{1-4bx^{2}}}\right)\cdot\frac{1}{1-x}.$
This gives us the following result.
###### Proposition 11.
The generating function $\frac{1}{\sqrt{1-4bx^{2}}-ax}$ is the generating
function of the row sums of the Riordan array
$\left(\frac{1}{\sqrt{1-4bx^{2}}},\frac{ax}{\sqrt{1-4bx^{2}}}\right)$.
The array
$\left(\frac{1}{\sqrt{1-4bx^{2}}},\frac{ax}{\sqrt{1-4bx^{2}}}\right)$ is thus
the coefficient array of the bivariate polynomials in $a$ and $b$ that begin
$1,a,a^{2}+2b,a^{3}+4ab,a^{4}+6a^{2}b+6b^{2},a^{5}+8a^{3}b+16ab^{2},a^{6}+10a^{4}b+30a^{2}b^{2}+20b^{3},\ldots.$
Specializing to the case $b=y$ and $a=1$, which is the case of the Catalan-
Fibonacci polynomials, we find that these “reciprocal” polynomials begin
$1,1,2y+1,4y+1,6y^{2}+6y+1,16y^{2}+8y+1,20y^{3}+30y^{2}+10y+1,\ldots.$
The Riordan array
$\left(\frac{1}{\sqrt{1-4x^{2}}},\frac{x}{\sqrt{1-4x^{2}}}\right)$ begins
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&1&0&0&0&0\\\ 2&0&1&0&0&0\\\
0&4&0&1&0&0\\\ 6&0&6&0&1&0\\\ 0&16&0&8&0&1\\\ \end{array}\right).$
This is A111959 in the OEIS. Since this is a Bell matrix, and since we have
$\operatorname{Rev}\left(\frac{x}{\sqrt{1-4x^{2}}}\right)=\frac{x}{\sqrt{1+4x^{2}}}$,
we deduce that its inversion is the exponential Riordan array
$\left[I_{0}(2ix),-x\right],$
which begins
$\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&-1&0&0&0&0\\\ -2&0&1&0&0&0\\\
0&6&0&-1&0&0\\\ 6&0&-12&0&1&0\\\ 0&-30&0&20&0&-1\\\ \end{array}\right).$
The corresponding nonnegative matrix $\left[I_{0}(2x),x\right]$ is A109187 in
the OEIS. Its elements count grand Motzkin paths of length $n$ with $k$ level
steps.
## 5 Conclusion
The Fibonacci polynomials are related to the number of ways we can tile an
$n\times 1$ rectangle by $2\times 1$ dominoes and $1\times 1$ squares [3]. In
this paper we have indicated that the dual Fibonacci and Catalan-Fibonacci
polynomials have interpretations in terms of Motzkin paths. From this viewpoint, for
instance, a Motzkin path of length $n$ with $k$ horizontal steps is “dual” to
a tiling of the $n\times 1$ board by dominoes and exactly $k$ $1\times 1$
squares. We have used the theory of triangle inversions, and particularly
Riordan array inversions, as the principal tool in this investigation.
## References
* [1] P. Barry, On the inversion of Riordan arrays, https://arxiv.org/abs/2101.06713.
* [2] P. Barry, _Riordan Arrays: a Primer_ , Logic Press, 2017.
* [3] A. T. Benjamin, _Proofs that really count: the art of combinatorial proof_ , Mathematical Association of America, 2003.
* [4] P. Henrici, An algebraic proof of the Lagrange-Bürmann formula, _J. Math. Anal. Appl._ , 8 (1964), 218–224.
* [5] V. E. Hoggatt and M. Bicknell, Roots of Fibonacci polynomials, _Fibonacci Quart._ , 11 (1973), 271–274.
* [6] V. E. Hoggatt and Calvin T. Long, Divisibility properties of generalized Fibonacci polynomials, _Fibonacci Quart._ , 12 (1974), 113–120.
* [7] D. Merlini, R. Sprugnoli, and M. C. Verri, Lagrange inversion: when and how, _Acta Appl. Math._ , 94 (2006), 233–249.
* [8] P. E. Ricci, Generalized Lucas polynomials and Fibonacci polynomials, _Riv. Math. Univ. Parma_ , 5 (1995), 137–146.
* [9] L. W. Shapiro, S. Getu, W-J. Woan, and L.C. Woodson, The Riordan group, _Discr. Appl. Math._ , 34 (1991), 229–239.
* [10] N. J. A. Sloane, _The On-Line Encyclopedia of Integer Sequences_. Published electronically at http://oeis.org, 2021.
* [11] N. J. A. Sloane, The On-Line Encyclopedia of Integer Sequences, _Notices Amer. Math. Soc._ , 50 (2003), 912–915.
* [12] Yi Yuan and W. Zhang, Some identities involving the Fibonacci Polynomials, _Fibonacci Quart._ , 40 (2002), 314–318.
2010 Mathematics Subject Classification: Primary 11B39; Secondary 11B83,
15B36, 11C20. _Keywords:_ Fibonacci polynomials, Catalan numbers, Catalan-
Fibonacci polynomials, Motzkin path, Riordan array, matrix inversion.
(Concerned with sequences A000045, A011973, A097610, A098614, A107131, A109187, A111959, and A200375.)
DESY 21-005
EPJC published version
CASCADE3
A Monte Carlo event generator based on TMDs
S. Baranov1, A. Bermudez Martinez2, L.I. Estevez Banos2, F. Guzman3, F.
Hautmann4,5, H. Jung2, A. Lelek4, J. Lidrych2, A. Lipatov6, M. Malyshev6, M.
Mendizabal2, S. Taheri Monfared2, A.M. van Kampen4, Q. Wang2,7, H. Yang2,7
1Lebedev Physics Institute, Russia
2DESY, Hamburg, Germany
3InSTEC, Universidad de La Habana, Cuba
4Elementary Particle Physics, University of Antwerp, Belgium
5RAL and University of Oxford, UK
6SINP, Moscow State University, Russia
7School of Physics, Peking University, China
###### Abstract
The Cascade3 Monte Carlo event generator based on Transverse Momentum
Dependent (TMD) parton densities is described. Hard processes which are
generated in collinear factorization with LO multileg or NLO parton level
generators are extended by adding transverse momenta to the initial partons
according to TMD densities and applying dedicated TMD parton showers and
hadronization. Processes with off-shell kinematics within
$k_{t}$-factorization, either internally implemented or from external packages
via LHE files, can be processed for parton showering and hadronization. The
initial state parton shower is tied to the TMD parton distribution, with all
parameters fixed by the TMD distribution.
## 1 Introduction
The simulation of processes for high energy hadron colliders has been improved
significantly in the past years by automation of next-to-leading order (NLO)
calculations and matching of the hard processes to parton shower Monte Carlo
event generators which also include a simulation of hadronization. Among those
automated tools are the MadGraph5_amc@nlo [1] generator based on the mc@nlo
[2, 3, 4, 5] method or the Powheg [6, 7] generator for the calculation of the
hard process. The results from these packages are then combined with either
the Herwig [8] or Pythia [9] packages for parton showering and hadronization.
Different jet multiplicities can be combined at the matrix element level and
then merged with special procedures, like the MLM [10] or CKKW [11] merging
for LO processes, the FxFx [12] or MiNLO method [13] for merging at NLO, among
others. While the approaches of matching and merging matrix element
calculations and parton showers are very successful, two ingredients important
for high energy collisions are not (fully) treated: the matrix elements are
calculated with collinear dynamics, so that a net transverse momentum of the
hard process arises only through the initial state parton showers, and the
special treatment of high energy (small $x$) effects is not included.
The Cascade Monte Carlo event generator, developed originally for small $x$
processes based on high-energy factorization [14] and the CCFM [15, 16, 17,
18] evolution equation, has been extended to cover the full kinematic range
(not only small $x$) by applying the Parton Branching (PB) method and the
corresponding PB Transverse Momentum Dependent (TMD) parton densities [19,
20]. The initial state evolution is fully described and determined by the TMD
density, as it was in the case of the CCFM gluon density, but now available
for all flavor species, including quarks, gluons and photons at small and
large $x$ and any scale $\mu$. For a general overview of TMD parton densities,
see Ref. [21].
With the advances in determination of PB TMDs [19, 20], it is natural to
develop a scheme, where the initial parton shower follows as close as possible
the TMD parton density and where either collinear (on-shell) or
$k_{t}$-dependent (off-shell) hard process calculations can be included at LO
or NLO. In order to be flexible and to use the latest developments in
automated matrix element calculations of hard process at higher order in the
strong coupling $\alpha_{s}$, events available in the Les Houches Event (LHE)
file format [22], which contains all the information of the hard process
including the color structure, can be further processed for parton shower and
hadronization in Cascade3.
In this report we describe the new developments in Cascade3 for a full PB-TMD
parton shower and the matching of TMD parton densities to collinear hard
process calculations. We also mention features of the small-$x$ mode of
Cascade3.
## 2 The hard process
The cross section for the scattering process of two hadrons $A$ and $B$ can be
written in collinear factorization as a convolution of the partonic cross
section of partons $a$ and $b$, $a+b\to X$, and the densities
$f_{a(b)}(x,\mu)$ of partons $a$ ($b$) inside the hadrons $A$ ($B$),
$\sigma(A+B\to Y)=\int dx_{a}\int
dx_{b}\,f_{a}(x_{a},\mu)\,f_{b}(x_{b},\mu)\,\sigma(a+b\to X)\,,$ (1)
where $x_{a}(x_{b})$ are the fractions of the longitudinal momenta of hadrons
$A,B$ carried by the partons $a(b)$, $\sigma(a+b\to X)$ is the partonic cross
section, and $\mu$ is the factorization scale of the process. The final state
$Y$ contains the partonic final state $X$ and the recoils from the parton
evolution and hadron remnants.
In Cascade3 we extend collinear factorization to include transverse momenta in
the initial state, either by adding a transverse momentum to an on-shell
process or by using off-shell processes directly, as described in detail in
Sections 2.1 and 2.2. TMD factorization is proven for semi-inclusive deep-
inelastic scattering, Drell-Yan production in hadron-hadron collisions and
$e^{+}e^{-}$ annihilation [23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
35]. In the high-energy limit (small-$x$) $k_{T}$-factorization has been
formulated also in hadronic collisions for processes like heavy flavor or
heavy boson (including Higgs) production [14, 36, 37, 38], with so-called
unintegrated parton distribution functions (uPDFs), see e.g. Refs. [39, 40,
41, 42, 43, 44, 45, 46, 47, 48, 49].
### 2.1 On-shell processes
The hard processes in collinear factorization (with on-shell initial partons,
without transverse momenta) can be calculated by standard automated methods
like MadGraph5_amc@nlo [1] for multileg processes at LO or NLO accuracy. The
matrix element processes are calculated with collinear parton densities (PDF),
as provided by LHAPDF [50].
We extend the factorization formula given in eq.(1) by replacing the collinear
parton densities $f(x,\mu)$ by TMD densities ${\cal A}(x,k_{t},\mu)$ with
$k_{t}$ being the transverse momentum of the interacting parton, and
integrating over the transverse momenta.
However, when the hard process is to be combined with a TMD parton density, as
described later, the integral over $k_{t}$ of the TMD density must agree with
the collinear ($k_{t}$-integrated) density; this feature is guaranteed by
construction for the PB-TMDs (also available as integrated PDFs in LHAPDF
format).
In a LO partonic calculation the TMD or the parton shower can be included
respecting energy momentum conservation, as described below. In an NLO
calculation based on the MC@NLO method [2, 3, 4, 5] the contribution from
collinear and soft partons is subtracted, as this is added later with the
parton shower. For the use with PB TMDs, the Herwig6 subtraction terms are
best suited as the angular ordering conditions coincide with those applied in
the PB-method. The PB TMDs play the same role as a parton shower does, in the
sense that a finite transverse momentum is created as a result of the parton
evolution [51, 52].
When transverse momenta of the initial partons from TMDs are to be included to
the hard scattering process, which was originally calculated under the
assumption of collinear initial partons, care has to be taken that energy and
momentum are still conserved. When the initial state partons have transverse
momenta, they also acquire virtual masses. The procedure adopted in Cascade3
is the following: for each initial parton, a transverse momentum is assigned
according to the TMD density, and the parton-parton system is boosted to its
center-of-mass frame and rotated such that only the longitudinal and energy
components are non-zero. The energy and longitudinal component of the initial
momenta $p_{a,b}$ are recalculated taking into account the virtual masses
$Q_{a}^{2}=k_{t,a}^{2}$ and $Q_{b}^{2}=k_{t,b}^{2}$ [53],
$E_{a,b}=\frac{1}{2\sqrt{\hat{s}}}\left(\hat{s}\pm(Q_{b}^{2}-Q_{a}^{2})\right)\,,$ (2)
$p_{z\;a,b}=\pm\frac{1}{2\sqrt{\hat{s}}}\sqrt{(\hat{s}+Q_{a}^{2}+Q_{b}^{2})^{2}-4Q_{a}^{2}Q_{b}^{2}}\,,$ (3)
where $\hat{s}=(p_{a}+p_{b})^{2}$, with $p_{a}$ ($p_{b}$) being the four-momenta of
the interacting partons $a$ and $b$. The partonic system is then rotated and
boosted back to the overall center-of-mass system of the colliding particles.
By this procedure, the parton-parton mass $\sqrt{\hat{s}}$ is exactly
conserved, while the rapidity of the partonic system is approximately
restored, depending on the transverse momenta.
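The recalculation of Eqs. (2)-(3) can be sketched in a few lines (illustrative Python; the function name and interface are assumptions, not the Cascade3 code):

```python
import math

def reassign_momenta(s_hat, kt_a, kt_b):
    """Recompute energy and longitudinal momentum of the two initial
    partons in their center-of-mass frame once they acquire the
    virtualities Q^2 = kt^2, following Eqs. (2)-(3)."""
    Qa2, Qb2 = kt_a ** 2, kt_b ** 2
    rs = math.sqrt(s_hat)
    E_a = (s_hat + (Qb2 - Qa2)) / (2.0 * rs)
    E_b = (s_hat - (Qb2 - Qa2)) / (2.0 * rs)
    pz = math.sqrt((s_hat + Qa2 + Qb2) ** 2 - 4.0 * Qa2 * Qb2) / (2.0 * rs)
    return (E_a, pz), (E_b, -pz)

# example numbers (assumptions): s_hat = 8100 GeV^2, kt_a = 3 GeV, kt_b = 5 GeV
(E_a, pz_a), (E_b, pz_b) = reassign_momenta(8100.0, 3.0, 5.0)
s_check = (E_a + E_b) ** 2 - (pz_a + pz_b) ** 2   # equals s_hat exactly
virt_a = E_a ** 2 - pz_a ** 2                     # equals -kt_a^2 (space-like)
```

By construction $(p_{a}+p_{b})^{2}=\hat{s}$ is conserved exactly, and each parton comes out space-like with virtuality $-k_{t}^{2}$.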
In Fig. 1 a comparison of the Drell-Yan (DY) mass, transverse momentum and rapidity distributions is shown for an NLO calculation of DY production in pp collisions at $\sqrt{s}=13$ TeV in the mass range $30<m_{DY}<2000$ GeV. The curve labelled NLO(LHE) is the MadGraph5_amc@nlo calculation including the subtraction terms; the curve NLO(LHE+TMD) is the prediction after the transverse momentum is included according to the procedure described above. In the $p_{T}$ spectrum one can clearly see the effect of including transverse momenta from the TMD distribution. The DY mass distribution is unchanged, and the rapidity distribution is almost exactly reproduced; only at large rapidities are small differences observed.
Figure 1: Distributions of Drell-Yan mass, transverse momentum and rapidity
for $pp\to DY+X$ at $\sqrt{s}=13$ TeV. The hard process is calculated with
MadGraph5_amc@nlo. NLO(LHE) is the prediction including subtraction terms,
NLO(LHE+TMD) includes transverse momenta of the interacting partons according
to the description in the text.
The transverse momenta $k_{t}$ are generated according to the TMD density ${\cal A}(x,k_{t},\mu)$, at the original longitudinal momentum fraction $x$ and the hard process scale $\mu$. In a LO calculation, the full range of $k_{t}$ is available, but in an NLO calculation via the MC@NLO method a shower scale defines the boundary between the parton shower and real emissions from the matrix element, limiting the transverse momentum $k_{t}$. Technically, the factorization scale $\mu$ is calculated within Cascade3 (see parameter `lheScale`), as it is not directly accessible from the LHE file, while the shower scale is given by `SCALUP`. Limiting the transverse momenta coming from the TMD distribution and the TMD shower to values smaller than the shower scale `SCALUP` guarantees that the overlap with real emissions from the matrix element is minimized, consistent with the subtraction of counterterms in the MC@NLO method.
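As an illustration of such a constrained draw (a sketch with an assumed toy density, not the Cascade3 implementation), a rejection sampler that never returns $k_{t}$ above `SCALUP` could look like:

```python
import math
import random

def sample_kt_below_scalup(tmd_weight, kt_max, scalup):
    """Rejection-sample a transverse momentum from a TMD weight
    (any callable bounded by 1), discarding trial values above the
    shower scale SCALUP so that the overlap with real matrix-element
    emissions is avoided."""
    upper = min(kt_max, scalup)
    while True:
        kt = random.uniform(0.0, upper)
        if random.random() < tmd_weight(kt):
            return kt

random.seed(7)
# toy weight falling off exponentially in kt (an assumption for illustration)
samples = [sample_kt_below_scalup(lambda kt: math.exp(-kt), 50.0, 10.0)
           for _ in range(1000)]
```

Every accepted $k_{t}$ lies below the shower scale by construction.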
The advantage of using TMDs for the complete process is that the kinematics are fixed, independently of an explicit simulation of the radiation history by the parton shower. For inclusive processes, for example inclusive Drell-Yan production, the details of the hadronic final state generated by a parton shower do not matter, and only the net effect of the transverse momentum distribution is essential. However, for processes that involve jets, the details of the parton shower also become important. The parton shower, as described below, follows very closely the transverse momentum distribution of the TMD and thus does not change any kinematic distribution after the transverse momenta of the initial partons are included.
All hard processes available in MadGraph5_amc@nlo can be used within Cascade3. The treatment of multijet merging is described in Section 8.
### 2.2 Off-shell processes
In regions of phase space where the longitudinal momentum fractions $x$ become very small, the transverse momentum of the partons cannot be neglected and has to be included already at the matrix element level, leading to so-called off-shell processes.
In off-shell processes a natural suppression at large $k_{t}$ [54] (with $k_{t}>\mu$) is obtained, as shown explicitly in Fig. 2, where the matrix element for $g^{*}g^{*}\to Q\bar{Q}$, with $Q$ a heavy quark, is considered. The process is integrated over the final-state phase space [55],
$\tilde{\sigma}(k_{t})=\int\frac{dx_{2}}{x_{2}}\,d\phi_{1,2}\,{\rm dLips}\,|ME|^{2}\,(1-x_{2})^{5}\;,$ (4)
where ${\rm dLips}$ is the Lorentz-invariant phase space of the final state, ${\rm ME}$ is the matrix element for the process, $\phi_{1,2}$ is the azimuthal angle between the two initial partons, and a simple scale- and $k_{t}$-independent gluon density $xG(x)=(1-x)^{5}$ is included to suppress large-$x$ contributions. In Fig. 2 we show $\tilde{\sigma}(k_{t})$ normalized to its on-shell value $\tilde{\sigma}(0)$ at $\sqrt{s}=13000$ GeV as a function of the transverse momentum $k_{t,2}$ of the incoming gluon, for different values of $x_{1}$, chosen such that the ratio $k^{2}_{t,1}/(x_{1}s)$ is kept constant.
Figure 2: The reduced cross section $\tilde{\sigma}(k_{t})/\tilde{\sigma}(0)$
as a function of the transverse momentum $k_{t,2}$ of the incoming gluon at
$\sqrt{s}=13000$ GeV. (Left) for different values of $k_{t,1}$ and $x_{1}$,
(right) for different heavy flavor masses and fixed values of $k_{t,1}$ and
$x_{1}$.
In Fig. 2 (left), predictions are shown for bottom quarks with mass $m=5$ GeV and different $k_{t,1}$; in Fig. 2 (right), a comparison is made for different heavy quark masses. With off-shell matrix elements, a suppression at large transverse momenta of the initial partons is obtained, depending on the heavy flavor mass and the transverse momentum. In a collinear approach, with implicit integration over the transverse momenta of the initial-state partons, the transverse momenta are limited by a theta function at the factorization scale, while off-shell matrix elements give a smooth transition to a high-$k_{t}$ tail.
When using off-shell processes, BFKL- or CCFM-type parton densities should be used to cover the full available phase space in transverse momentum, which can lead to $k_{t}$'s larger than the transverse momentum of any of the partons of the hard process [56]. Until now, only gluon densities obtained from CCFM [15, 16, 17, 18] or BFKL [57, 58, 59] are available, thus limiting the advantages of using off-shell matrix elements to gluon-induced processes.
Several processes with off-shell matrix elements are implemented in Cascade3 as listed in Tab. 1 and described in detail in [60]. Many more processes are accessible via the automated matrix element calculators for off-shell processes, KaTie [61] and Pegasus [62]; the events from the hard process are then read by the Cascade3 package via LHE files. For processes generated with KaTie or Pegasus no further corrections need to be performed, and the events can be passed directly to the showering procedure described in the next section.
Lepto(photo)production | process | IPRO | Reference
---|---|---|---
| $\gamma^{*}g^{*}\to q\bar{q}$ | 10 | [63]
| $\gamma^{*}g^{*}\to Q\bar{Q}$ | 11 | [63]
| $\gamma^{*}g^{*}\to J/\psi g$ | 2 | [64, 65, 66, 67]
Hadroproduction | | |
| $g^{*}g^{*}\to q\bar{q}$ | 10 | [63]
| $g^{*}g^{*}\to Q\bar{Q}$ | 11 | [63]
| $g^{*}g^{*}\to J/\psi g$ | 2 | [67]
| $g^{*}g^{*}\to\Upsilon g$ | 2 | [67]
| $g^{*}g^{*}\to\chi_{c}$ | 3 | [67]
| $g^{*}g^{*}\to\chi_{b}$ | 3 | [67]
| $g^{*}g^{*}\to J/\psi J/\psi$ | 21 | [68]
| $g^{*}g^{*}\to h^{0}$ | 102 | [38]
| $g^{*}g^{*}\to ZQ\bar{Q}$ | 504 | [69, 70]
| $g^{*}g^{*}\to Zq\bar{q}$ | 503 | [69, 70]
| $g^{*}g^{*}\to Wq_{i}Q_{j}$ | 514 | [69, 70]
| $g^{*}g^{*}\to Wq_{i}q_{j}$ | 513 | [69, 70]
| $qg^{*}\to Zq$ | 501 | [71]
| $qg^{*}\to Wq$ | 511 | [71]
| $qg^{*}\to qg$ | 10 | [72]
| $gg^{*}\to gg$ | 10 | [72]
Table 1: Processes included in Cascade3. $Q$ stands for heavy quarks, $q$ for
light quarks.
## 3 Initial State Parton Shower based on TMDs
The parton shower described here consistently follows the parton evolution of the TMDs. By this we mean that the splitting functions $P_{ab}$, the order of and scale in $\alpha_{\mathrm{s}}$, as well as the kinematic restrictions are identical in the parton shower and in the evolution of the parton densities (for NLO PB TMD densities, the NLO DGLAP splitting functions [73, 74] together with NLO $\alpha_{\mathrm{s}}$ are applied, while for the LO TMD densities the corresponding LO splitting functions [75, 76, 77] and LO $\alpha_{\mathrm{s}}$ are used).
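For reference, the LO DGLAP splitting functions mentioned above are standard; a textbook sketch (not the Cascade3 implementation, and restricted to the resolvable region $z<z_{M}$):

```python
# LO DGLAP splitting functions in the resolvable region z < z_M,
# as used at LO in both the PB evolution and the shower.
CF, CA, TR = 4.0 / 3.0, 3.0, 0.5

def P_qq(z):   # q -> q g
    return CF * (1.0 + z * z) / (1.0 - z)

def P_gg(z):   # g -> g g
    return 2.0 * CA * (z / (1.0 - z) + (1.0 - z) / z + z * (1.0 - z))

def P_qg(z):   # g -> q qbar
    return TR * (z * z + (1.0 - z) ** 2)

def P_gq(z):   # q -> g q
    return CF * (1.0 + (1.0 - z) ** 2) / z
```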
### 3.1 From PB TMD evolution to TMD Parton Shower
The PB method describes the TMD parton density as (cf. Eq. (2.43) in Ref. [19])
$x{\cal A}_{a}(x,k_{t},\mu)=\Delta_{a}(\mu)\,x{\cal A}_{a}(x,k_{t},\mu_{0})+\sum_{b}\int{{dq^{2}}\over{q^{2}}}{{d\phi}\over{2\pi}}\,{{\Delta_{a}(\mu)}\over{\Delta_{a}(q)}}\,\Theta(\mu-q)\,\Theta(q-\mu_{0})\int_{x}^{z_{M}}{dz}\;P_{ab}^{(R)}(\alpha_{\mathrm{s}}(f(z,q)),z)\;\frac{x}{z}{\cal A}_{b}\left(\frac{x}{z},k_{t}^{\prime},q\right)\;,$ (5)
with $z_{M}<1$ defining resolvable branchings, ${\bf k}$ (${\bf q}_{c}$) being
the transverse momentum vector of the propagating (emitted) parton,
respectively. The transverse momentum of the parton before branching is
defined as $k_{t}^{\prime}=|{\bf k}+(1-z){\bf q}|$ with ${\bf q}={\bf
q}_{c}/(1-z)$ being the rescaled transverse momentum vector of the emitted
parton (see Fig. 3, with the notation $k_{t}=|{\bf k}|$ and $q=|{\bf q}|$) and
$\phi$ being the azimuthal angle between ${\bf q}$ and ${\bf k}$. The argument
in $\alpha_{\mathrm{s}}$ is in general a function of the evolution scale $q$.
Higher order calculations indicate the transverse momentum of the emitted
parton as the preferred scale. The real emission branching probability is
denoted by $P_{ab}^{(R)}(\alpha_{\mathrm{s}}(f(z,q)),z)$ including
$\alpha_{\mathrm{s}}$ as described in Ref. [19] (in the following we omit
$\alpha_{\mathrm{s}}$ in the argument of $P_{ab}^{(R)}$ for easier reading).
The Sudakov form factor is given by:
$\Delta_{a}(z_{M},\mu,\mu_{0})=\exp\left(-\sum_{b}\int^{\mu^{2}}_{\mu^{2}_{0}}{{dq^{2}}\over q^{2}}\int_{0}^{z_{M}}dz\ z\ P_{ba}^{(R)}\right)\;.$ (6)
Dividing Eq.(5) by $\Delta_{a}(\mu^{2})$ and differentiating with respect to
${\mu^{2}}$ gives the differential form of the evolution equation describing
the probability for resolving a parton with transverse momentum ${\bf
k}^{\prime}$ and momentum fraction $x/z$ into a parton with momentum fraction
$x$ and emitting another parton during a small decrease of $\mu$,
${\mu^{2}}\frac{d}{d\mu^{2}}\left(\frac{x{\cal A}_{a}(x,k_{t},\mu)}{\Delta_{a}(\mu)}\right)=\sum_{b}\int_{x}^{z_{M}}{dz}{{d\phi}\over{2\pi}}\;P_{ab}^{(R)}\;\frac{x}{z}\frac{{\cal A}_{b}\left(\frac{x}{z},k_{t}^{\prime},\mu\right)}{\Delta_{a}(\mu)}\;.$ (7)
The normalized probability is then given by
$\frac{\Delta_{a}(\mu)}{x{\cal A}_{a}(x,k_{t},\mu)}\,d\left(\frac{x{\cal A}_{a}(x,k_{t},\mu)}{\Delta_{a}(\mu)}\right)=\sum_{b}{{d\mu^{2}}\over{\mu^{2}}}\int_{x}^{z_{M}}{dz}{{d\phi}\over{2\pi}}\;P_{ab}^{(R)}\;\frac{\frac{x}{z}{\cal A}_{b}\left(\frac{x}{z},k_{t}^{\prime},\mu\right)}{x{\cal A}_{a}(x,k_{t},\mu)}\;.$ (8)
This equation can be integrated between $\mu^{2}_{i-1}$ and $\mu^{2}$ to give the no-branching probability (Sudakov form factor) for the backward evolution, $\Delta_{bw}$ (footnote: in Eq.(3.1) ordering in $\mu$ is assumed; if angular ordering as in CCFM [15, 16, 17, 18] is applied, the ratio of parton densities changes to $[x^{\prime}{\cal A}_{b}(x^{\prime},k_{t}^{\prime},q^{\prime}/z)]/[x{\cal A}_{a}(x,k_{t},q^{\prime})]$, as discussed in [60]):
$\log\Delta_{bw}(x,k_{t},\mu,\mu_{i-1})=\log\left(\frac{\Delta_{a}(\mu)}{\Delta_{a}(\mu_{i-1})}\frac{x{\cal A}_{a}(x,k_{t},\mu_{i-1})}{x{\cal A}_{a}(x,k_{t},\mu)}\right)=-\sum_{b}\int_{\mu_{i-1}^{2}}^{\mu^{2}}{{dq^{\prime\,2}}\over{q^{\prime\,2}}}{{d\phi}\over{2\pi}}\int_{x}^{z_{M}}{dz}\;P_{ab}^{(R)}\;\frac{x^{\prime}{\cal A}_{b}\left(x^{\prime},k_{t}^{\prime},q^{\prime}\right)}{x{\cal A}_{a}(x,k_{t},q^{\prime})}\;,$
with $x^{\prime}=x/z$. This Sudakov form factor is very similar to the Sudakov
form factor in ordinary parton shower approaches, with the difference that for
the PB TMD shower the ratio of PB TMD densities $[x^{\prime}{\cal
A}_{b}\left(x^{\prime},k_{t}^{\prime},q^{\prime}\right)]/[x{\cal
A}_{a}(x,k_{t},q^{\prime})]$ is applied, which includes a dependence on
$k_{t}$.
In Eq.(3.1) a relation between the Sudakov form factor $\Delta_{a}$ used in the evolution equation and the Sudakov form factor $\Delta_{bw}$ used for the backward evolution of the parton shower is made explicit. A similar relation was also studied in Refs. [78, 79]. In Ref. [78] the $z_{M}$ limit was identified as a source of systematic uncertainty when using conventional showers with standard collinear pdfs; in the PB approach, the same $z_{M}$ limit is present in the parton evolution as well as in the PB shower. The PB approach thus allows a consistent formulation of the parton shower with the PB TMDs, as the same value of $z_{M}$ is used in both Sudakov form factors $\Delta_{a}$ and $\Delta_{bw}$.
The splitting functions $P_{ab}^{(R)}$ contain the coupling,
$P_{ab}(\alpha_{\mathrm{s}},z)=\sum^{\infty}_{n=1}\left(\frac{\alpha_{\mathrm{s}}(f(z,q))}{2\pi}\right)^{n}P_{ab}^{(n-1)}(z)\;,$ (10)
where the scale $f(z,q)$ in the coupling depends on the ordering condition as
discussed later (see Eq.(11)).
The advantage of using a PB TMD shower is that, as long as the parameters of the parton shower are set through the TMD distributions, the parton shower uncertainties can be recast as uncertainties of the TMDs, which in turn can be fitted to experimental data in a systematic, global manner.
### 3.2 Backward Evolution for initial state TMD Parton Shower
A backward evolution method, as is now common in Monte Carlo event generators, is applied for the initial-state parton shower, evolving from the large scale of the matrix-element process backwards down to the scale of the incoming hadron. However, in contrast to a conventional parton shower, which generates the transverse momenta of the initial-state partons during the backward evolution, the transverse momenta of the initial partons of the hard scattering process are here fixed by the TMD, and the parton shower does not change the kinematics. The transverse momenta during the backward cascade follow the behavior of the TMD. The hard scattering process is obtained as described in Section 2. The backward evolution of the initial-state parton shower follows very closely the description in Refs. [60, 80, 81], which is based on Ref. [53].
The starting value of the evolution scale $\mu$ is calculated from the hard
scattering process, as described in Section 2. In case of on-shell matrix
elements at NLO, the transverse momentum of the hardest parton in the parton
shower evolution is limited by the shower-scale, as described in Section 2.1.
Figure 3: Left: Schematic view of a parton branching process. Right: Branching
process $b\to a+c$.
Starting at the hard scale $\mu=\mu_{i}$, the parton shower algorithm searches for the next scale $\mu_{i-1}$ at which a resolvable branching occurs (see Fig. 3, left). This scale $\mu_{i-1}$ is selected from the Sudakov form factor $\Delta_{bw}$ as given in Eq.(3.1) (see also [60]). In the parton shower language, the selection of the next branching comes from solving $R=\Delta_{bw}(x,k_{t},\mu_{i},\mu_{i-1})$ for $\mu_{i-1}$, with $R$ a uniformly distributed random number, for given $x$ and $\mu_{i}$. However, solving the integrals in Eq.(3.1) numerically for every branching would be too time consuming; instead the veto algorithm [53, 82] is applied.
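The veto algorithm itself is generic; a minimal sketch with a toy branching density (illustrative Python, not the Cascade3 code):

```python
import random

def next_branching_scale(mu_i, mu_0, f, f_max):
    """Veto algorithm: sample the next branching scale mu < mu_i
    distributed according to exp(-int_mu^mu_i f(q) dq/q), given an
    overestimate f(q) <= f_max. Returns None if no resolvable
    branching occurs above the cutoff mu_0."""
    mu = mu_i
    while True:
        # trial scale from the analytically invertible overestimate f_max dq/q
        mu = mu * random.random() ** (1.0 / f_max)
        if mu < mu_0:
            return None          # evolution reached the cutoff
        # accept with probability f(mu)/f_max, otherwise veto and continue
        if random.random() < f(mu) / f_max:
            return mu

random.seed(42)
# toy constant density f(q) = 0.3 (an assumption): the no-branching
# probability down to mu_0 is then (mu_0/mu_i)^0.3
trials = [next_branching_scale(100.0, 1.0, lambda q: 0.3, 0.3)
          for _ in range(20000)]
frac_none = sum(t is None for t in trials) / len(trials)
```

With the constant toy density the fraction of events with no branching reproduces the analytic Sudakov factor within statistics.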
The splitting function $P_{ab}$, as well as the argument $f(z,q)$ in the calculation of $\alpha_{\mathrm{s}}$, is chosen exactly as in the evolution of the parton density. A parton shower treats “resolvable” branchings, defined via a cut $z<z_{M}$ in the splitting function to avoid the singular behavior of the $1/(1-z)$ terms; branchings with $z>z_{M}$ are regarded as “non-resolvable” and are treated similarly to virtual corrections: they are included in the Sudakov form factor $\Delta_{bw}$. The splitting variable $z_{i-1}$ is obtained from the splitting functions following standard methods (see Eq. (2.37) in [19]).
The calculation of the transverse momentum $k_{t}$ is sketched in Fig. 3 (right). In the case of angular ordering (where the scale $q$ of each branching is associated with the angle of the emission), the transverse momentum of the emitted parton is $q_{t,c}=(1-z)E_{b}\sin\Theta$, with $\Theta$ the angle of the emitted parton with respect to the beam direction, giving
${\bf q}_{c}^{2}=(1-z)^{2}q^{2}\;\;.$ (11)
Once the transverse momentum of the emitted parton ${\bf q}_{c}$ is known, the
transverse momentum of the propagating parton can be calculated from
${\bf k}^{\prime}={\bf k}+{\bf q}_{c}$ (12)
with a uniformly distributed azimuthal angle $\phi$ assumed for the vector
components of ${\bf k}$ and ${\bf q}_{c}$. The generation of the parton
momenta is performed in the center-of-mass frame of the collision (in contrast
to conventional parton showers, which are generated in different partonic
frames).
The whole procedure is iterated until a scale $\mu_{i-1}<q_{0}$ is reached, with $q_{0}$ being a cut-off parameter, which can be chosen to be the starting scale of the TMD evolution. It is advantageous to continue the parton shower evolution down to scales $q_{0}\sim\Lambda_{\rm QCD}\sim 0.3$ GeV.
The final transverse momentum of the propagating parton ${\bf k}$ is the sum
of all transverse momenta ${\bf q}_{c}$ (see Fig. 3 right):
${\bf k}={\bf k}_{0}-\sum_{c}{\bf q}_{c}\;,$ (13)
with ${\bf k}_{0}$ being the intrinsic transverse momentum.
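The bookkeeping of Eqs. (12)-(13) amounts to a vector sum with uniformly distributed azimuthal angles; a minimal sketch (function name and interface are assumptions):

```python
import math
import random

def propagate_kt(k0, emissions):
    """Accumulate the transverse momentum of the propagating parton,
    Eq. (13): k = k0 - sum_c q_c, with |q_c| = (1-z) q from angular
    ordering (Eq. (11)) and a uniformly distributed azimuthal angle."""
    kx, ky = k0
    for z, q in emissions:          # (splitting variable, branching scale)
        qc = (1.0 - z) * q
        phi = random.uniform(0.0, 2.0 * math.pi)
        kx -= qc * math.cos(phi)
        ky -= qc * math.sin(phi)
    return kx, ky

# a single emission shifts |k| by exactly (1-z) q, whatever the angle:
kx, ky = propagate_kt((0.0, 0.0), [(0.3, 2.0)])
```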
The PB TMD parton shower is selected with `PartonEvolution=2` (or `ICCF=2`).
### 3.3 CCFM parton evolution and parton shower
The CCFM parton evolution and the corresponding parton shower follow a similar approach to that described in the previous section, detailed also in Refs. [81, 80, 60, 83]. The main differences from the PB-TMD shower are the splitting function, with its non-Sudakov form factor $\Delta_{ns}$, and the allowed phase space for emission. The original CCFM splitting function $\tilde{P}_{g}(z,q,k_{t})$ for the branching $g\to gg$ is given by (finite terms are neglected, as they are not obtained in CCFM at the leading infrared accuracy, cf. p. 72 in [17])
$\tilde{P}_{g}(z,q,k_{t})=\frac{\bar{\alpha}_{s}(q(1-z))}{1-z}+\frac{\bar{\alpha}_{s}(k_{t})}{z}\Delta_{ns}(z,q,k_{t})\;,$ (14)
where the non-Sudakov form factor $\Delta_{ns}$ is defined as
$\log\Delta_{ns}=-\bar{\alpha}_{s}(k_{t})\int_{0}^{1}\frac{dz^{\prime}}{z^{\prime}}\int\frac{dq^{2}}{q^{2}}\Theta(k_{t}-q)\Theta(q-z^{\prime}q_{t})\;,$ (15)
with $q_{t}=\sqrt{{\bf q}_{t}^{2}}$ being the magnitude of the transverse
vector defined in Eq.(11) and $k_{t}$ the magnitude of the transverse vector
in Eq.(12).
The CCFM parton shower is selected with `ICCF=1` (or `PartonEvolution=1`). A one-loop (DGLAP-like) parton shower with $\Delta_{ns}=1$, one-loop $\alpha_{\mathrm{s}}$ and strict ordering in $q$ can be selected with `ICCF=0`.
## 4 The TMD parton densities
In previous versions of Cascade the TMD densities were part of the program. With the development of TMDlib [84, 85] there is now easy access to all available TMDs, including parton densities for photons (as well as Z, W and H densities, where available). These parton densities can be selected via `PartonDensity` with a value $>100000$; for example, the TMDs from the parton branching method [19, 20] are selected via `PartonDensity=102100` (`102200`) for PB-NLO-HERAI+II-2018-set1 (set2).
Note that the features of the TMD parton shower are only fully available for the PB-TMD sets, and the CCFM shower requires CCFM parton densities (such as [86]). The PB-TMD parton densities are determined in Ref. [87] from fits to HERA DIS $F_{2}$ measurements for $Q^{2}>3$ GeV$^{2}$, giving very good $\chi^{2}$ values. In Refs. [88, 89] the transverse momentum distributions of Drell-Yan pairs at low and high masses, obtained from PB-TMD densities, are compared with experimental measurements in a wide variety of kinematic regions, from low-energy fixed-target experiments to high-energy collider experiments. Good agreement is found between predictions and measurements without any tuning of nonperturbative parameters, which illustrates the validity of the approach over a broad kinematic range in energy and mass scales.
## 5 Final state parton showers
The final state parton shower uses the parton shower routine `PYSHOW` of Pythia. Leptons in the final state, coming for example from Drell-Yan decays, can radiate photons, which are also treated in the final state parton shower. Here the method of `PYADSH` of Pythia is applied, with the scale for the QED shower fixed at the virtuality of the decaying particle (for example the mass of the Z boson).
The default scale for the QCD final state shower is
$\mu^{2}=2\cdot(m_{1\;\perp}^{2}+m_{2\;\perp}^{2})$ (`ScaleTimeShower=1`),
with $m_{1(2)\;\perp}$ being the transverse mass of the hard parton 1(2).
Other choices are possible: $\mu^{2}=\hat{s}$ (`ScaleTimeShower=2`) and
$\mu^{2}=2\cdot(m_{1}^{2}+m_{2}^{2})$ (`ScaleTimeShower=3`). In addition a
scale factor can be applied: `ScaleFactorFinalShower`$\times\mu^{2}$ (default:
`ScaleFactorFinalShower=1`).
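The scale choices above can be summarized in a small helper (a sketch mirroring the `ScaleTimeShower` options, not the internal Fortran code):

```python
def final_shower_scale2(choice, m1t2=0.0, m2t2=0.0, m12=0.0, m22=0.0,
                        shat=0.0, scale_factor=1.0):
    """Return mu^2 for the final-state shower according to the
    ScaleTimeShower options; scale_factor mirrors
    ScaleFactorFinalShower."""
    if choice == 1:
        mu2 = 2.0 * (m1t2 + m2t2)   # transverse masses of the hard partons
    elif choice == 2:
        mu2 = shat
    elif choice == 3:
        mu2 = 2.0 * (m12 + m22)
    else:
        raise ValueError("unknown ScaleTimeShower choice")
    return scale_factor * mu2
```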
## 6 Hadronization
The hadronization (fragmentation of the partons into colorless systems) is done exclusively by Pythia. Hadronization is switched off with `Hadronization = 0` (or `NFRA = 0` in the older steering cards). All parameters of the hadronization model can be changed via the steering cards.
## 7 Uncertainties
Uncertainties of QCD calculations arise mainly from missing higher-order corrections, which are estimated by varying the factorization and renormalization scales up and down, typically by a factor of 2. The scale variations are performed when calculating the matrix elements and are stored as additional weights in the LHE file, which are then passed directly via Cascade3 to the HEPMC [90] output file for further processing.
The uncertainties coming from the PDFs can also be calculated as additional weight factors during the matrix element calculation. However, when using TMDs, additional uncertainties arise from the transverse momentum distribution of the TMD. The PB-TMDs come with both experimental and model uncertainties, as discussed in Ref. [87]. These uncertainties can be applied as additional weight factors with the parameter `Uncertainty_TMD=1`.
## 8 Multi-jet merging
Showered multijet LO matrix element calculations can be merged using the prescription discussed in Ref. [91]. The merging is controlled by three parameters: `Rclus`, `Etclus` and `Etaclmax`. Final-state partons with pseudorapidity $\eta<$ `Etaclmax` present in the event record after the shower step, but before hadronization, are passed to the merging machinery if `Imerge = 1`. Partons are clustered using the $k_t$-jet algorithm with radius parameter `Rclus` and matched to the PB-evolved matrix element partons if the distance between the parton and the jet is $R<1.5\times$ `Rclus`. The hardness of the reconstructed jets is controlled by their minimum transverse energy `Etclus` (the merging scale).
The number of light-flavor partons is defined by the `NqmaxMerge` parameter. Heavy-flavor partons and their corresponding radiation are not passed to the merging algorithm. All jet multiplicities are treated in exclusive mode, except for the highest multiplicity `MaxJetsMerge`, which is treated in inclusive mode.
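The matching criterion can be sketched as follows (illustrative Python; in Cascade3 the jets come from the $k_t$-jet clustering, here only the distance test is shown):

```python
import math

def parton_matches_jet(parton, jet, rclus):
    """Check whether a matrix-element parton (eta, phi) lies within
    1.5 * Rclus of a clustered jet (eta, phi), as used in the merging
    step to classify an event as matched."""
    deta = parton[0] - jet[0]
    dphi = abs(parton[1] - jet[1])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi   # wrap the azimuthal difference
    return math.hypot(deta, dphi) < 1.5 * rclus
```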
## 9 Program description
In Cascade3 all variables are declared as `Double Precision`. The source of Pythia 6.428 is included with Cascade3 to avoid difficulties in linking.
### 9.1 Random Numbers
Cascade3 uses the `RANLUX` random number generator with luxury level `LUX = 4`. The random number seed can be set via the environment variable `CASEED`; the default value is `CASEED=12345`.
### 9.2 Event Output
When `HEPMC` is included, generated events are written out in HEPMC [90] format for further processing. The environment variable `HEPMCOUT` specifies the output file name; by default it is set to `HEPMCOUT=/dev/null`.
The HEPMC events can be further processed, for example with Rivet [92].
### 9.3 Input parameters
The input parameters are steered via steering files. The new format is discussed in Section 9.3.1 and should be used when reading LHE files; the older format, which is appropriate for the internal off-shell processes, is discussed in Section 9.3.2.
#### 9.3.1 Input parameters - new format
Examples for steering files are under `$install_path/share/cascade/LHE`.
&CASCADE_input
NrEvents = -1 ! Nr of events to process
Process_Id = -1 ! Read LHE file
Hadronisation = 0 ! Hadronisation (on =1, off = 0)
SpaceShower = 1 ! Space-like Parton Shower
SpaceShowerOrderAlphas=2 ! Order alphas in Space Shower
TimeShower = 1 ! Time-like Parton Shower
ScaleTimeShower = 4 ! Scale choice for Time-like Shower
! 1: 2(m^2_1t+m^2_2t)
! 2: shat
! 3: 2(m^2_1+m^2_2)
! 4: 2*scalup (from lhe file)
!ScaleFactorFinalShower = 1. ! scale factor for Final State Parton Shower
PartonEvolution = 2 ! type of parton evolution in Space-like Shower
! 1: CCFM
! 2: full all flavor TMD evolution
! EnergyShareRemnant = 4 ! energy sharing in proton remnant
! 1: (a+1)(1-z)**a, <z>=1/(a+2)=1/3
! 2: (a+1)(1-z)**a, <z>=1/(a+2)=mq/(mq+mQ)
! 3: N/(z(1-1/z-c/(1-z))**2), c=(mq/mQ)**2
! 4: PYZDIS: KFL1=1
! Remnant = 0 ! =0 no remnant treatment
PartonDensity = 102200 ! use TMDlib: PB-TMDNLO-set2
! PartonDensity = 102100 ! use TMDlib: PB-TMDNLO-set1
! TMDDensityPath= ’./share’ ! Path to TMD density for internal files
Uncertainty_TMD = 0 ! calculate and store uncertainty TMD pdfs
lheInput=’MCatNLO-example.lhe’ ! LHE input file
lheHasOnShellPartons = 1 ! = 0 LHE file has off-shell parton configuration
lheReweightTMD = 0 ! Reweight with new TMD given in PartonDensity
lheScale = 2 ! Scale definition for TMD
! 0: use scalup
! 1: use shat
! 2: use 1/2 Sum pt^2 of final parton/particles
! 3: use shat for Born and 1/2 Sum pt^2 of final parton(particle)
! 4: use shat for Born and max pt of most forward/backward
! parton(particle)
lheNBornpart = 2 ! Nr of hard partons (particles) (Born process)
ScaleFactorMatchingScale = 2. ! Scale factor for matching scale when including TMDs
&End
&PYTHIA6_input
P6_Itune = 370 ! Retune of Perugia 2011 w CTEQ6L1 (Oct 2012)
! P6_MSTJ(41) = 1 ! (D = 2) type of branchings allowed in shower.
! 1: only QCD
! 2: QCD and photons off quarks and leptons
P6_MSTJ(45) = 4 ! Nr of flavors in final state shower: g->qqbar
P6_PMAS(4,1)= 1.6 ! charm mass
P6_PMAS(5,1)= 4.75 ! bottom mass
P6_MSTJ(48) = 1 ! (D=0), 0=no max. angle, 1=max angle def. in PARJ(85)
! P6_MSTU(111) = 1 ! = 0 : alpha_s is fixed, =1 first order; =2 2nd order;
! P6_PARU(112) = 0.2 ! lambda QCD
P6_MSTU(112)= 4 ! nr of flavours wrt lambda_QCD
P6_MSTU(113)= ! min nr of flavours for alphas
P6_MSTU(114)= 5 ! max nr of flavours for alphas
&End
#### 9.3.2 Input parameters - off-shell processes
Examples for steering files are under `$install_path/share/cascade/HERA` and
`$install_path/share/cascade/PP`.
* OLD STEERING FOR CASCADE
*
* number of events to be generated
*
NEVENT 100
*
* +++++++++++++++++ Kinematic parameters +++++++++++++++
*
’PBE1’ 1 0 -7000. ! Beam energy
’KBE1’ 1 0 2212 ! -11: positron, 22: photon 2212: proton
’IRE1’ 1 0 1 ! 0: beam 1 has no structure
* ! 1: beam 1 has structure
’PBE2’ 1 0 7000. ! Beam energy
’KBE2’ 1 0 2212 ! 11: electron, 22: photon 2212: proton
’IRE2’ 1 0 1 ! 0: beam 2 has no structure
* ! 1: beam 2 has structure
’NFLA’ 1 0 4 ! (D=5) nr of flavours used in str.fct
* +++++++++++++++ Hard subprocess selection ++++++++++++++++++
’IPRO’ 1 0 2 ! (D=1)
* ! 2: J/psi g
* ! 3: chi_c
’I23S’ 1 0 0 ! (D=0) select 2S or 3S state
’IPOL’ 1 0 0 ! (D=0) VM->ll (polarization study)
’IHFL’ 1 0 4 ! (D=4) produced flavour for IPRO=11
* ! 4: charm
* ! 5: bottom
’PTCU’ 1 0 1. ! (D=0) p_t **2 cut for process
* ++++++++++++ Parton shower and fragmentation ++++++++++++
’NFRA’ 1 0 1 ! (D=1) Fragmentation on=1 off=0
’IFPS’ 1 0 3 ! (D=3) Parton shower
* ! 0: off
* ! 1: initial state PS
* ! 2: final state PS
* ! 3: initial and final state PS
’IFIN’ 1 0 1 ! (D=1) scale switch for FPS
* ! 1: 2(m^2_1t+m^2_2t)
* ! 2: shat
* ! 3: 2(m^2_1+m^2_2)
’SCAF’ 1 0 1. ! (D=1) scale factor for FPS
’ITIM’ 1 0 0 ! 0: timelike partons may not shower
* ! 1: timelike partons may shower
’ICCF’ 1 0 1 ! (D=1) Evolution equation
* ! 0: DGLAP
* ! 1: CCFM
* ! 2: PB TMD evolution
* +++++++++++++ Structure functions and scales +++++++++++++
’IRAM’ 1 0 0 ! (D=0) Running of alpha_em(Q2)
* ! 0: fixed
* ! 1: running
’IRAS’ 1 0 1 ! (D=1) Running of alpha_s(MU2)
* ! 0: fixed alpha_s=0.3
* ! 1: running
’IQ2S’ 1 0 3 ! (D=1) Scale MU2 of alpha_s
* ! 1: MU2= 4*m**2 (only for heavy quarks)
* ! 2: MU2 = shat(only for heavy quarks)
* ! 3: MU2= 4*m**2 + pt**2
* ! 4: MU2 = Q2
* ! 5: MU2 = Q2 + pt**2
* ! 6: MU2 = k_t**2
’SCAL’ 1 0 1.0 ! scale factor for renormalisation scale
’SCAF’ 1 0 1.0 ! scale factor for factorisation scale
*’IGLU’ 1 0 1201 ! (D=1010)Unintegrated gluon density
* ! > 10000 use TMDlib (i.e. 101201 for JH-2013-set1)
* ! 1201: CCFM set JH-2013-set1 (1201 - 1213)
* ! 1301: CCFM set JH-2013-set2 (1301 - 1313)
* ! 1001: CCFM J2003 set 1
* ! 1002: CCFM J2003 set 2
* ! 1003: CCFM J2003 set 3
* ! 1010: CCFM set A0
* ! 1011: CCFM set A0+
* ! 1012: CCFM set A0-
* ! 1013: CCFM set A1
* ! 1020: CCFM set B0
* ! 1021: CCFM set B0+
* ! 1022: CCFM set B0-
* ! 1023: CCFM set B1
* ! 1: CCFM old set JS2001
* ! 2: derivative of collinear gluon (GRV)
* ! 3: Bluemlein
* ! 4: KMS
* ! 5: GBW (saturation model)
* ! 6: KMR
* ! 7: Ryskin,Shabelski
* ++++++++++++ BASES/SPRING Integration procedure ++++++++++++
’NCAL’ 1 0 50000 ! (D=20000) Nr of calls per iteration for bases
’ACC1’ 1 0 1.0 ! (D=1) relative prec.(%) for grid optimisation
’ACC2’ 1 0 0.5 ! (0.5) relative prec.(%) for integration
* ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
*’INTE’ 1 0 0 ! Interaction type (D=0)
* ! = 0 electromagnetic interaction
*’KT1 ’ 1 0 0.44 ! (D=0.0) intrinsic kt for beam 1
*’KT2 ’ 1 0 0.44 ! (D=0.0) intrinsic kt for beam 2
*’KTRE’ 1 0 0.35 ! (D=0.35) primordial kt when non-trivial
* ! target remnant is split into two particles
* Les Houches Accord Interface
’ILHA’ 1 0 0 ! (D=10) Les Houches Accord
* ! = 0 use internal CASCADE
* ! = 1 write event file
* ! = 10 call PYTHIA for final state PS and remnant frag
* path for updf files
* ’UPDF’ ’./share’
## 10 Program Installation
Cascade3 now follows the standard AUTOMAKE convention. To install the program,
do the following
1) Get the source from http://www.desy.de/~jung/cascade
tar xvfz cascade-XXXX.tar.gz
cd cascade-XXXX
2) Generate the Makefiles (do not use shared libraries)
./configure --disable-shared --prefix=install-path --with-lhapdf="lhapdflib_path"
--with-tmdlib="TMDlib_path" --with-hepmc="hepmc_path"
with (as example):
lhapdflib_path=/Users/jung/MCgenerators/lhapdf/6.2.1/local
TMDlib_path=/Users/jung/jung/cvs/TMDlib/TMDlib2/local
hepmc_path=/Users/jung/MCgenerators/hepmc/HepMC-2.06.09/local
3) Compile the binary
make
4) Install the executable and PDF files
make install
5) The executable is in bin
run it with:
export CASEED=1242425
export HEPMCOUT=outfile.hepmc
cd $install-path/bin
./cascade < $install-path/share/cascade/LHE/steering-DY-MCatNLO.txt
## Acknowledgments.
FG acknowledges the support and hospitality of DESY, Hamburg, where part of
this work started. FH acknowledges the hospitality and support of DESY,
Hamburg and of CERN, Theory Division while parts of this work were being done.
SB, ALi and MM are grateful to the DESY Directorate for the support in the
framework of Cooperation Agreement between MSU and DESY on phenomenology of
the LHC processes and TMD parton densities. MM was supported by a grant of the
foundation for the advancement of theoretical physics and mathematics “Basis”
20-1-3-11-1. STM thanks the Humboldt Foundation for the Georg Forster research
fellowship and gratefully acknowledges support from IPM. ALe acknowledges
funding by Research Foundation-Flanders (FWO) (application number: 1272421N).
QW and HY acknowledge the support by the Ministry of Science and Technology
under grant No. 2018YFA040390 and by the National Natural Science Foundation
of China under grant No. 11661141008.
## 11 Program Summary
Title of Program: Cascade3 3.1.0
Computer for which the program is designed and others on which it is operable:
any with standard Fortran 77 (gfortran)
Programming Language used: FORTRAN 77
High-speed storage required: No
Separate documentation available: No
Keywords: QCD, TMD parton distributions.
Method of solution: Since measurements involve complex cuts and multi-particle
final states, the ideal tool for any theoretical description of the data is a
Monte Carlo event generator which generates initial state parton showers
according to Transverse Momentum Dependent (TMD) parton densities, in a
backward evolution, which follows the evolution equation as used for the
determination of the TMD.
Restrictions on the complexity of the problem: Any LHE file (with on-shell or
off-shell initial state partons) can be processed.
Other Programs used: Pythia (version $>$ 6.4) for final state parton shower and
hadronization, Bases/Spring 5.1 for integration (both supplied with the
program package), and TMDlib as a library for TMD parton densities.
Download of the program: `http://www.desy.de/~jung/cascade`
Unusual features of the program: None
# The conjectures of Artin–Tate and Birch–Swinnerton-Dyer
Stephen Lichtenbaum Department of Mathematics, Brown University, Providence,
RI 02912<EMAIL_ADDRESS>, Niranjan Ramachandran Department
of Mathematics, University of Maryland, College Park, MD 20742 USA.
<EMAIL_ADDRESS>and Takashi Suzuki Department of Mathematics, Chuo
University, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan
<EMAIL_ADDRESS>
Abstract. We provide two proofs that the conjecture of Artin–Tate for a fibered surface is equivalent to the conjecture of Birch–Swinnerton-Dyer for the Jacobian of the generic fibre. As a byproduct, we obtain a new proof of a theorem of Geisser relating the orders of the Brauer group and the Tate–Shafarevich group.
Keywords. Birch–Swinnerton-Dyer conjecture; finite fields; zeta functions;
Tate conjecture
2020 Mathematics Subject Classification. 11G40, 14G10, 19F27
December 27, 2021. Received by the Editors on May 14, 2021. Accepted on January 29, 2022.
© by the author(s). This work is licensed under
http://creativecommons.org/licenses/by-sa/4.0/
###### Contents
1. Introduction and statement of results
2. Preparations
3. First proof of Theorem 1.1
4. Second proof of Theorem 1.1
## 1. Introduction and statement of results
Let $k=\mathbb F_{q}$ be a finite field of characteristic $p$, let $S$ be a
smooth projective (geometrically connected) curve over $T=\mathrm{Spec}\ k$,
and let $F=k(S)=\mathbb F_{q}(S)$ be the function field of $S$. Let $X$ be a
smooth proper surface over $T$ with a flat proper morphism $\pi:X\to S$ with
smooth geometrically connected generic fiber $X_{0}$ over $\mathrm{Spec}\ F$.
The Jacobian $J$ of $X_{0}$ is an Abelian variety over $F$.
Our first main result is a proof of the following statement conjectured by
Artin and Tate [Tat66, Conjecture (d)]:
###### Theorem 1.1.
The Artin–Tate conjecture for $X$ is equivalent to the Birch–Swinnerton-Dyer
conjecture for $J$.
Recall that these conjectures concern two (conjecturally finite) groups: the
Tate–Shafarevich group $\Sha(J/F)$ of $J$ and the Brauer group
$\textrm{Br}(X)$ of $X$. A result of Artin–Grothendieck [Gor79, Theorem 2.3]
[Gro68, §4] is that $\Sha(J/F)$ is finite if and only if $\textrm{Br}(X)$ is
finite.
Our second main result is a new proof of a beautiful result (2.18) of Geisser
[Gei20, Theorem 1.1] that relates the conjectural finite orders of $\Sha(J/F)$
and $\textrm{Br}(X)$; special cases of (2.18) are due to Milne–Gonzales-Aviles
[Mil81, GA03].
We actually provide two proofs of Theorem 1.1; while our first proof uses
Geisser’s result (2.18), the second (and very short) proof in §4, completely
due to the third-named author, does not.
### 1.1. History
Artin and Tate regarded Theorem 1.1 as easier to prove than the other
conjectures in [Tat66]. They proved Theorem 1.1 when $\pi$ is smooth and has a
section [Tat66, p. 427], using the equality
(1.1) $[\Sha(J/F)]=[\textrm{Br}(X)]$
between the orders of the groups $\Sha(J/F)$ and $\textrm{Br}(X)$ which
follows from Artin’s theorem [Tat66, Theorem 3.1], [Gor79, Theorem 2.3]: if
$\pi$ is generically smooth with connected fibers and admits a section, then
$\Sha(J/F)\cong\textrm{Br}(X)$. Gordon [Gor79, Theorem 6.1] used (1.1) to
prove Theorem 1.1 when $\pi$ is cohomologically flat with a section (see
[Gor79, Theorem 2.3]); there is another proof of this case, up to $p$-torsion,
due to Z. Yun [Yun15]. Building on Gordon [Gor79], Liu–Lorenzini–Raynaud
[LLR04] proved several new cases of Theorem 1.1 by eliminating the condition
of cohomological flatness of $\pi$; their proof [LLR04, Theorem 4.3] proceeds
by proving that Theorem 1.1 is equivalent to a precise relation generalizing
(1.1) between $[\textrm{Br}(X)]$ and $[\Sha(J/F)]$ which in their case had
been proved by Milne and Gonzales-Aviles [Mil81, GA03].
As Liu–Lorenzini–Raynaud (and Milne) point out [LLR05, Theorem 2], Theorem 1.1
follows by combining [Tat66, Gro68, Mil75, KT03]:
$AT(X)\xLeftrightarrow{\ \text{Artin--Tate--Milne}\ }\textrm{Br}(X)\
\textrm{finite}\xLeftrightarrow{\ \text{Artin--Grothendieck}\ }\Sha(J/F)\
\textrm{finite}\xLeftrightarrow{\ \text{Kato--Trihan}\ }BSD(J).$
In 2018, Geisser pointed out that a slight correction is necessary in the
relation [LLR04, Theorem 4.3] between $[\textrm{Br}(X)]$ and $[\Sha(J/F)]$;
Liu–Lorenzini–Raynaud [LLR18, Corrected Theorem 4.3] showed that Theorem 1.1
holds if and only if this slightly corrected version holds. This precise
relation (Theorem 2.11) was then proved by Geisser [Gei20, Theorem 1.1]
without using Theorem 1.1. Thus, combining [LLR18, Corrected Theorem 4.3] and
[Gei20, Theorem 1.1] gives the second known proof of Theorem 1.1. But this
proof relies heavily on the work of Gordon [Gor79] (known to have several
inaccuracies; see [LLR18, §3.3]), as can be seen from [LLR18, §3, (3.9)].
### 1.2. Our approach
Our first proof depends on [Gor79] only for the elementary result (2.9). As in
[Gor79, LLR04, LLR18], this proof also follows the strategy in [Tat66, §4]. We
use the localization sequence to record a short proof (similar to the ideas of
Hindry–Pacheco and Kahn in [Kah09, §§3.2–3.3]) of the Tate–Shioda relation
(Corollary 2.2). In turn, this gives a quick calculation (2.17) of the
discriminant $\Delta_{\mathrm{ar}}(\operatorname{NS}(X))$ of the height
pairing on the Néron–Severi group of $X$. The same calculation in [Gor79,
LLR18] requires a detailed analysis of various subgroups of
$\operatorname{NS}(X)$. A beautiful introduction to these results is [Ulm14];
see [Lic83, Lic05, GS20] for Weil-étale analogues.
The second proof (§4) of Theorem 1.1 uses only (2.5) and the Weil-étale
formulations of the two conjectures. In this proof, we do not compare each
term of the two special value formulas and entirely work in derived
categories.
### Notations
Throughout, $k=\mathbb F_{q}$ is a finite field of characteristic $p$ and
$T=\mathrm{Spec}\ k$; if $\bar{k}$ is an algebraic closure of $k$, let
$\bar{T}=\mathrm{Spec}\ \bar{k}$. The function
field of $S$ is $F=k(S)$. Let $X$ be a smooth proper surface over $T$ with a
flat proper morphism $\pi:X\to S$ with smooth geometrically connected generic
fiber $X_{0}$ over Spec $F$. The Jacobian $J$ of $X_{0}$ is an Abelian variety
over $F$.
### 1.3. The Artin–Tate conjecture
Let $k=\mathbb F_{q}$ and $F=k(S)$. For any scheme $V$ of finite type over
$T$, the zeta function $\zeta(V,s)$ is defined as
$\zeta(V,s)=\prod_{v\in V}\frac{1}{(1-q_{v}^{-s})};$
the product is over all closed points $v$ of $V$ and $q_{v}$ is the size of
the finite residue field $k(v)$ of $v$. If $V$ is smooth proper (geometrically
connected) of dimension $d$, then the zeta function $\zeta(V,s)$ factorizes as
$\zeta(V,s)=\frac{P_{1}(V,q^{-s})\cdots
P_{2d-1}(V,q^{-s})}{P_{0}(V,q^{-s})\cdots P_{2d}(V,q^{-s})},\quad
P_{0}=(1-q^{-s}),\quad P_{2d}=(1-q^{d-s}),$
where $P_{i}(V,t)\in\mathbb Z[t]$ is the characteristic polynomial of
Frobenius acting on the $\ell$-adic étale cohomology
$H^{i}(V\times_{T}\bar{T},\mathbb Q_{\ell})$ for any prime $\ell$ not dividing
$q$; by Grothendieck and Deligne, $P_{j}(V,t)$ is independent of $\ell$. One
has the factorization [Tat66, (4.1)] (the second equality uses Poincaré
duality)
(1.2) $\zeta(X,s)=\frac{P_{1}(X,q^{-s})\cdot P_{3}(X,q^{-s})}{(1-q^{-s})\cdot
P_{2}(X,q^{-s})\cdot(1-q^{2-s})}=\frac{P_{1}(X,q^{-s})\cdot
P_{1}(X,q^{1-s})}{(1-q^{-s})\cdot P_{2}(X,q^{-s})\cdot(1-q^{2-s})}.$
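As an illustrative aside (not part of the paper's argument, and with function names of our own choosing), the Euler-product definition of $\zeta(V,s)$ can be checked numerically in the simplest case $V=\mathbb P^{1}$ over $\mathbb F_{q}$, where $P_{1}$ is trivial and the factorization collapses to $\zeta(\mathbb P^{1},s)=1/\bigl((1-q^{-s})(1-q^{1-s})\bigr)$:

```python
# Sanity check (illustrative sketch): the Euler product over closed points
# of P^1 over F_q matches the closed form 1/((1-q^-s)(1-q^(1-s))).
# Counts of closed points of degree d follow from #P^1(F_{q^n}) = q^n + 1
# by Moebius inversion over Galois orbits.

def moebius(n):
    """Moebius function mu(n) via trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor => mu = 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def closed_points_P1(q, d):
    """Number of closed points of degree d on P^1 over F_q."""
    total = sum(moebius(d // e) * (q**e + 1)
                for e in range(1, d + 1) if d % e == 0)
    return total // d

def zeta_euler_product(q, s, max_degree=40):
    """Euler product truncated at closed points of degree <= max_degree."""
    value = 1.0
    for d in range(1, max_degree + 1):
        value *= (1.0 - float(q) ** (-d * s)) ** (-closed_points_P1(q, d))
    return value

q, s = 3, 2.0
closed_form = 1 / ((1 - q**(-s)) * (1 - q**(1 - s)))
print(abs(zeta_euler_product(q, s) - closed_form) < 1e-6)  # True
```

The truncated product converges quickly here because the degree-$d$ factors differ from $1$ by roughly $q^{-d}/d$ at $s=2$.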
Let $\rho(X)$ be the rank of the finitely generated Néron–Severi group
$\operatorname{NS}(X)$. The intersection $D\cdot E$ of divisors $D$ and $E$
provides a symmetric non-degenerate bilinear pairing on
$\operatorname{NS}(X)$; the height pairing $\langle D,E\rangle_{\mathrm{ar}}$
[LLR18, Remark 3.11] on $\operatorname{NS}(X)$ is related to the intersection
pairing as follows:
$\operatorname{NS}(X)\times\operatorname{NS}(X)\to\mathbb Q(\log q),\qquad
D,E\mapsto\langle D,E\rangle_{\mathrm{ar}}=(D\cdot E)\log q.$
Let $A$ be the reduced identity component
$\operatorname{Pic}^{\mathrm{red},0}_{X/k}$ of the Picard scheme
$\operatorname{Pic}_{X/k}$ of $X$. Let
(1.3) $\alpha(X)=\chi(X,\mathcal{O}_{X})-1+\dim(A).$
We write $[G]$ for the order of a finite group $G$.
###### Conjecture 1.2 (Artin–Tate [Tat66, Conjecture (C)]).
The Brauer group $\operatorname{Br}(X)$ is finite,
$\operatorname{ord}_{s=1}P_{2}(X,q^{-s})=\rho(X)$, and the special value
$P^{*}_{2}(X,q^{-1}):=\lim_{s\to 1}\frac{P_{2}(X,q^{-s})}{(s-1)^{\rho(X)}}$
of $P_{2}(X,t)$ at $t=1/q$ (this corresponds to $s=1$) satisfies
(1.4)
$P^{*}_{2}(X,q^{-1})=[\operatorname{Br}(X)]\cdot\Delta_{\mathrm{ar}}(\operatorname{NS}(X))\cdot
q^{-\alpha(X)}.$
Here $\Delta_{\mathrm{ar}}(\operatorname{NS}(X))$ is the discriminant (see
§1.4) of the height pairing on $\operatorname{NS}(X)$.
###### Remark.
The discriminant $\Delta_{\mathrm{ar}}(\operatorname{NS}(X))$ of the height
pairing on $\operatorname{NS}(X)$ is related to the discriminant
$\Delta(\operatorname{NS}(X))$ of the intersection pairing as follows:
$\Delta_{\mathrm{ar}}(\operatorname{NS}(X))=\Delta(\operatorname{NS}(X))\cdot(\log
q)^{\rho(X)}$.
### 1.4. Discriminants
For more details on the basic notions recalled next, see [Yun15, §2.8] and
[Blo87]. Let $N$ be a finitely generated Abelian group $N$ and let
$\psi:N\times N\to K$ be a symmetric bilinear form with values in any field
$K$ of characteristic zero. If $\psi:N/{\mathrm{tor}}\times
N/{\mathrm{tor}}\to K$ is non-degenerate, the discriminant $\Delta(N)$ is
defined as the determinant of the matrix $\psi(b_{i},b_{j})$ divided by
$(N:N^{\prime})^{2}$ where $N^{\prime}$ is the subgroup of finite index
generated by a maximal linearly independent subset $\\{b_{i}\\}$ of $N$. Note
that $\Delta(N)$ is independent of the choice of the subset $\\{b_{i}\\}$ and
the subgroup $N^{\prime}$ and incorporates the order of the torsion subgroup
of $N$. For us, $K=\mathbb Q$ or $\mathbb Q(\log q)$.
Given a short exact sequence $0\to N^{\prime}\to N\to N^{\prime\prime}\to 0$
which splits over $\mathbb Q$ as an orthogonal direct sum $N_{\mathbb Q}\cong
N^{\prime}_{\mathbb Q}\oplus N^{\prime\prime}_{\mathbb Q}$ with respect to a
definite pairing $\psi$ on $N$, one has the following standard relation
(1.5) $\Delta(N)=\Delta(N^{\prime})\cdot\Delta(N^{\prime\prime}).$
Given a map $f:C\to C^{\prime}$ of Abelian groups with finite kernel and
cokernel, the invariant $z(f)=\frac{[\textrm{Ker}(f)]}{[\textrm{Coker}(f)]}$
[Tat66] extends to the derived category $\mathcal{D}$ of complexes in Abelian
groups with bounded and finite homology: given any such complex $C_{\bullet}$,
the invariant
$z(C_{\bullet})=\prod_{i}[H_{i}(C_{\bullet})]^{(-1)^{i}}$
is an Euler characteristic; for any triangle $K\to L\to M\to K[1]$ in
$\mathcal{D}$, the following relation holds
(1.6) $z(K)\cdot z(M)=z(L).$
One recovers $z(f)$ by viewing $f:C\to C^{\prime}$ as a complex in degrees zero
and one. For any pairing $\psi:N\times N\to\mathbb Z$, the induced map $N\to
R\textrm{Hom}(N,\mathbb Z)$ recovers $\Delta(N)$ above:
$\Delta(N)=z(N\to R\textrm{Hom}(N,\mathbb Z))^{-1}.$
∎
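As a toy illustration of the multiplicativity (1.6) (our own example, not from the paper): for an injective endomorphism $f$ of $\mathbb Z^{n}$ given by an integer matrix $M$ with $\det M\neq 0$, one has $\operatorname{Ker}(f)=0$ and $[\operatorname{Coker}(f)]=|\det M|$, so $z(f)=1/|\det M|$; for a ladder of short exact sequences the triangle relation then reduces to multiplicativity of determinants.

```python
# Toy check of z(f) and the triangle relation z(K) z(M) = z(L).
from fractions import Fraction

def z_of(det_m):
    """z(f) = [Ker f]/[Coker f] = 1/|det M| for injective f: Z^n -> Z^n."""
    return Fraction(1, abs(det_m))

# f' = multiplication by 2 on Z, f'' = multiplication by 3 on Z, and
# f = diag(2, 3) on Z^2 fit into a ladder over 0 -> Z -> Z^2 -> Z -> 0;
# the triangle relation becomes det diag(2, 3) = 2 * 3.
print(z_of(2) * z_of(3) == z_of(2 * 3))  # True
```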
### 1.5. The Birch–Swinnerton-Dyer conjecture
For more details on the basic notions recalled next, see [GS20]. Let $J$ be
the Jacobian of $X_{0}$. Recall that the complete L-function [Ser70, Mil72],
[GS20, §4] of $J$ is defined as a product of local factors
(1.7) $L(J,s)=\prod_{v\in S}\frac{1}{L_{v}(J,q_{v}^{-s})}.$
For any closed point $v$ of $S$, the local factor $L_{v}(J,t)$ is the
characteristic polynomial of Frobenius on
(1.8) $H^{1}_{\mathrm{\acute{e}t}}(J\times F_{v}^{\mathrm{sep}},\mathbb
Q_{\ell})^{I_{v}},$
where $F_{v}$ is the complete local field corresponding to $v$ and $I_{v}$ is
the inertia group at $v$. By [GS20, Proposition 4.1], $L_{v}(J,t)$ has
coefficients in $\mathbb Z$ and is independent of $\ell$, for any prime $\ell$
distinct from the characteristic of $k$. Let $\Sha(J/F)$ be the
Tate–Shafarevich group of $J$ over Spec $F$ and let $r$ be the rank of the
finitely generated group $J(F)$. Let $\Delta_{\mathrm{NT}}(J(F))$ be the
discriminant of the Néron–Tate pairing [Tat66, p. 419], [KT03, §1.5] on
$J(F)$:
(1.9) $J(F)\times J(F)\to\mathbb Q(\log
q),\quad(\gamma,\kappa)\mapsto\langle\gamma,\kappa\rangle_{\mathrm{NT}}.$
Let $\mathcal{J}\to S$ be the Néron model of $J$; for any closed point $v\in
S$, define $c_{v}=[\Phi_{v}(k_{v})]$, where $\Phi_{v}$ is the group of
connected components of $\mathcal{J}_{v}$, and put $c(J)=\prod_{v\in
S}c_{v}$; this is a finite product as $c_{v}=1$ for all but finitely many $v$.
Let $\operatorname{Lie}\,\mathcal{J}$ be the locally free sheaf on $S$ defined
by the Lie algebra of $\mathcal{J}$. Recall the following conjecture (by
[GS20, Corollary 4.5], it is equivalent to the formulation in [Tat66]).
###### Conjecture 1.3 (Birch–Swinnerton-Dyer).
The group $\Sha(J/F)$ is finite, $\operatorname{ord}_{s=1}L(J,s)=r$, and the
special value
$L^{*}(J,1):=\lim_{s\to 1}\frac{L(J,s)}{(s-1)^{r}}$
satisfies
(1.10) $L^{*}(J,1)=[\Sha(J/F)]\cdot\Delta_{\mathrm{NT}}(J(F))\cdot c(J)\cdot
q^{\chi(S,\operatorname{Lie}\,\mathcal{J})}.$
The proof of Theorem 1.1, _i.e._ the equivalence of Conjectures 1.2 and 1.3,
naturally divides into four parts:
* •
$\operatorname{Br}(X)$ is finite if and only if $\Sha(J/F)$ is finite. This is
known [Gro68, (4.41), Corollaire (4.4)].
* •
Comparison of $\chi(S,\operatorname{Lie}\,\mathcal{J})$ and $\alpha(X)$ given
in (2.5). This is known [LLR04, p. 483]. For the convenience of the reader, we
recall it in §2.2.
* •
(Proposition 2.4) $\operatorname{ord}_{s=1}P_{2}(X,q^{-s})=\rho(X)$ if and
only if $\operatorname{ord}_{s=1}L(J,s)=r$.
* •
(§3) $P_{2}^{*}(X,q^{-1})$ satisfies (1.4) if and only if $L^{*}(J,1)$
satisfies (1.10).
The first two parts are not difficult and we provide elementary proofs of the
last two parts.
### Acknowledgements
This paper would not exist without the inspiration provided by [FS21, Gor79,
LLR18, Gei20, Yun15] in terms of both mathematical ideas and clear exposition.
We thank Professors Liu, Lorenzini and K. Sato for their valuable comments on
an earlier draft. We heartily thank the referee for a valuable and detailed
report.
## 2. Preparations
### 2.1. Elementary identities and known results
The Néron–Severi group $\operatorname{NS}(X)$ is the group of $k$-points of
the group scheme $\operatorname{NS}_{X/k}=\pi_{0}(\operatorname{Pic}_{X/k})$
of connected components of the Picard scheme $\operatorname{Pic}_{X/k}$ of
$X$. Let $A=\operatorname{Pic}^{\mathrm{red},0}_{X/k}$. The Leray spectral
sequence for the morphism $X\to\mathrm{Spec}\leavevmode\nobreak\ k$ and the
étale sheaf $\mathbb G_{m}$ provides the first of the two exact sequences
[BLR90, Proposition 4, p. 204] below:
$0\longrightarrow\operatorname{Pic}(k)\longrightarrow\operatorname{Pic}(X)\longrightarrow\operatorname{Pic}_{X/k}(k)\longrightarrow\operatorname{Br}(k)\quad\text{and}\quad
0\longrightarrow\operatorname{Pic}^{0}_{X/k}\longrightarrow\operatorname{Pic}_{X/k}\longrightarrow\pi_{0}(\operatorname{Pic}_{X/k})\longrightarrow
0.$
Since $\operatorname{Br}(k)=0$,
$H^{1}_{\mathrm{\acute{e}t}}(\mathrm{Spec}\
k,\operatorname{Pic}^{0}_{X/k})=H^{1}_{\mathrm{\acute{e}t}}(\mathrm{Spec}\
k,\operatorname{Pic}^{\mathrm{red},0}_{X/k})$ and
$H^{1}_{\mathrm{\acute{e}t}}(\mathrm{Spec}\ k,A)=0$ (Lang’s
theorem [Tat66, p. 209]), this provides
(2.1)
$\operatorname{Pic}_{X/k}(k)=\operatorname{Pic}(X)\quad\text{and}\quad\operatorname{NS}(X)=\operatorname{NS}_{X/k}(k)=\frac{\operatorname{Pic}(X)}{A(k)}.$
Let $P$ be the identity component of the Picard scheme
$\operatorname{Pic}_{S/k}$ of $S$. Let $B$ be the cokernel of the natural
injective map $\pi^{*}:P\to A$. So one has short exact sequences (using Lang’s
theorem [Tat66, p. 209] for the last sequence)
(2.2) $A=\operatorname{Pic}^{\mathrm{red},0}_{X/k},\quad
P=\operatorname{Pic}^{0}_{S/k},\quad 0\longrightarrow P\longrightarrow
A\longrightarrow B\longrightarrow 0,\quad\text{and}\quad 0\longrightarrow
P(k)\longrightarrow A(k)\longrightarrow B(k)\longrightarrow 0.$
It is known that [Tat66, p. 428]
(2.3) $P_{1}(S,q^{-s})=P_{1}(P,q^{-s}),\quad
P_{1}(X,q^{-s})=P_{1}(A,q^{-s}),\quad\text{and}\quad
P_{1}(A,q^{-s})=P_{1}(P,q^{-s})\cdot P_{1}(B,q^{-s}).$
For any Abelian variety $G$ of dimension $d$ over $k=\mathbb F_{q}$, it is
well known that [Tat66, p. 429, top line] (or [Gor79, 6.1.3])
(2.4) $P_{1}(G,1)=[G(k)]\quad\text{and}\quad P_{1}(G,q^{-1})=[G(k)]q^{-d}.$
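A worked instance of (2.4) for $d=1$ (an illustrative choice of curve, not taken from the paper): for an elliptic curve $E$ over $\mathbb F_{q}$ one has $P_{1}(E,t)=1-at+qt^{2}$ with $a=q+1-\#E(\mathbb F_{q})$, hence $P_{1}(E,1)=\#E(\mathbb F_{q})$ and $P_{1}(E,q^{-1})=\#E(\mathbb F_{q})\,q^{-1}$.

```python
# Verify P_1(G, 1) = [G(k)] and P_1(G, 1/q) = [G(k)] q^{-d} for d = 1,
# with G = E: y^2 = x^3 + x + 1 over F_5 (an illustrative example).
from fractions import Fraction

def count_points(q, a4, a6):
    """#E(F_q) for E: y^2 = x^3 + a4*x + a6 over a prime field F_q."""
    squares = {(y * y) % q for y in range(q)}
    affine = 0
    for x in range(q):
        rhs = (x**3 + a4 * x + a6) % q
        if rhs == 0:
            affine += 1          # single point with y = 0
        elif rhs in squares:
            affine += 2          # two square roots of rhs
    return affine + 1            # plus the point at infinity

q = 5
n = count_points(q, 1, 1)        # #E(F_5) = 9 for this curve
a = q + 1 - n                    # trace of Frobenius

def P1(t):
    return 1 - a * t + q * t * t

print(n, P1(1), P1(Fraction(1, q)))  # 9, 9, and 9/5 = n/q
```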
### 2.2. Comparison of $\chi(S,\operatorname{Lie}\,\mathcal{J})$ and
$\alpha(X)$
It is known [LLR04, p. 483] that
(2.5) $\chi(S,\operatorname{Lie}\,\mathcal{J})-\dim(B)=-\alpha(X).$
We include their proof here for the convenience of the reader. A special case
of this is due to Gordon [Gor79, Proposition 6.5]. The Leray spectral sequence
for $\pi$ and $\mathcal{O}_{X}$ provides $H^{0}(S,\mathcal{O}_{S})\cong
H^{0}(X,\mathcal{O}_{X})$,
$0\to H^{1}(S,\mathcal{O}_{S})\to H^{1}(X,\mathcal{O}_{X})\to
H^{0}(S,R^{1}\pi_{*}\mathcal{O}_{X})\to 0,\quad H^{2}(X,\mathcal{O}_{X})\cong
H^{1}(S,R^{1}\pi_{*}\mathcal{O}_{X}).$
This proves
$\chi(X,\mathcal{O}_{X})=\chi(S,\mathcal{O}_{S})-\chi(S,R^{1}\pi_{*}\mathcal{O}_{X})$.
Recall that $\mathcal{J}$ is the Néron model of the Jacobian $J$ of $X_{0}$.
As the kernel and cokernel of the natural map
$\phi:R^{1}\pi_{*}\mathcal{O}_{X}\to\operatorname{Lie}\,\mathcal{J}$ (obtained
by composing the maps $R^{1}\pi_{*}\mathcal{O}_{X}\to\operatorname{Lie}\,P$
[LLR04, Proposition 1.3 (b)] and
$\operatorname{Lie}\,P\to\operatorname{Lie}\,Q$ [LLR04, Theorem 3.1] with
$Q\xrightarrow{\sim}\mathcal{J}$ [LLR04, Facts 3.7 (a)]; this uses that $X$ is
regular, $\pi:X\to S$ is proper flat, and
$\pi_{*}\mathcal{O}_{X}=\mathcal{O}_{S}$) are torsion sheaves on $S$ of the
same length [LLR04, Theorem 4.2], we have [LLR04, p. 483]
(2.6)
$\chi(S,R^{1}\pi_{*}\mathcal{O}_{X})=\chi(S,\operatorname{Lie}\,\mathcal{J}).$
Thus,
$\displaystyle\alpha(X)$
$\displaystyle\overset{(1.3)}{=}\chi(X,\mathcal{O}_{X})-1+\dim(A)=\chi(S,\mathcal{O}_{S})-\chi(S,R^{1}\pi_{*}\mathcal{O}_{X})-1+\dim(A)$
$\displaystyle=1-\dim(P)-\chi(S,\operatorname{Lie}\,\mathcal{J})-1+\dim(A)=-\chi(S,\operatorname{Lie}\,\mathcal{J})+\dim(A)-\dim(P)$
$\displaystyle\overset{(2.2)}{=}-\chi(S,\operatorname{Lie}\,\mathcal{J})+\dim(B).$
### 2.3. The Tate–Shioda relation about the Néron–Severi group
The structure of $\operatorname{NS}(X)$ depends on the singular fibers of the
morphism $\pi:X\to S$.
#### 2.3.1. Singular fibers
Let $Z=\{v\in S \mid \pi^{-1}(v)=X_{v}\ \text{is not smooth}\}$. For any
$v\in S$, let $G_{v}$ be the set of irreducible components $\Theta_{i}$ of
$X_{v}$, let $m_{v}$ be the cardinality of $G_{v}$, and let
$m:=\sum_{v\in Z}(m_{v}-1)$; for any $i\in G_{v}$, let $r_{i}$ be the number
of irreducible components of $\Theta_{i}\times\overline{k(v)}$. Let $R_{v}$ be
the quotient
(2.7) $R_{v}=\frac{\mathbb Z^{G_{v}}}{\mathbb Z}$
of the free Abelian group generated by the irreducible components of $X_{v}$
by the subgroup generated by the cycle associated with $X_{v}=\pi^{-1}(v)$. If
$v\notin Z$, then $R_{v}$ is trivial.
Let $U=S-Z$; the map $X_{U}=\pi^{-1}(U)\to U$ is smooth. For any finite
$Z^{\prime}\subset S$ with $Z\subset Z^{\prime}$, we consider
$U^{\prime}=S-Z^{\prime}$ and $X_{U^{\prime}}=\pi^{-1}(U^{\prime})$. The
following proposition provides a description of
$\operatorname{NS}(X)\overset{(2.1)}{\cong}\operatorname{Pic}(X)/{A(k)}$.
###### Proposition 2.1.
1. (i)
The natural maps $\pi^{*}:\operatorname{Pic}(S)\to\operatorname{Pic}(X)$ and
$\pi^{*}:\operatorname{Pic}(U^{\prime})\to\operatorname{Pic}(X_{U^{\prime}})$
are injective.
2. (ii)
There is an exact sequence
(2.8) $0\longrightarrow\underset{v\in
Z}{\oplus}R_{v}\longrightarrow\frac{\operatorname{Pic}(X)}{\pi^{*}\operatorname{Pic}(S)}\longrightarrow\operatorname{Pic}(X_{0})\longrightarrow
0.$
###### Proof.
(i) From the Leray spectral sequence for $\pi:X\to S$ and the étale sheaf
$\mathbb G_{m}$ on $X$, we get the exact sequence
$0\longrightarrow H^{1}_{et}(S,\pi_{*}\mathbb G_{m})\longrightarrow
H^{1}_{et}(X,\mathbb G_{m})\longrightarrow H^{0}(S,R^{1}\pi_{*}\mathbb
G_{m})\longrightarrow\operatorname{Br}(S).$
Now $X_{0}$ being geometrically connected and smooth over $F$ implies [Mil81,
Remark 1.7a] that $\pi_{*}\mathbb G_{m}$ is the sheaf $\mathbb G_{m}$ on $S$.
This provides the injectivity of the first map. The same argument with
$U^{\prime}$ in place of $S$ provides the injectivity of the second.
(ii) The class group $\textrm{Cl}(Y)$ and the Picard group
$\operatorname{Pic}(Y)$ are isomorphic for regular schemes $Y$ such as $S$ and
$X$. The localization sequences for $X_{U^{\prime}}\subset X$ and
$U^{\prime}\subset S$ can be combined as
$0\longrightarrow\Gamma(S,\mathbb G_{m})\longrightarrow\Gamma(U^{\prime},\mathbb G_{m})\longrightarrow\underset{v\in Z^{\prime}}{\oplus}\mathbb Z\longrightarrow\operatorname{Pic}(S)\longrightarrow\operatorname{Pic}(U^{\prime})\longrightarrow 0$
$0\longrightarrow\Gamma(X,\mathbb G_{m})\longrightarrow\Gamma(X_{U^{\prime}},\mathbb G_{m})\longrightarrow\underset{v\in Z^{\prime}}{\oplus}\mathbb Z^{G_{v}}\longrightarrow\operatorname{Pic}(X)\longrightarrow\operatorname{Pic}(X_{U^{\prime}})\longrightarrow 0,$
where the vertical maps from the first row to the second are induced by
$\pi^{*}$, and the two maps on global units, $\Gamma(S,\mathbb
G_{m})\to\Gamma(X,\mathbb G_{m})$ and $\Gamma(U^{\prime},\mathbb
G_{m})\to\Gamma(X_{U^{\prime}},\mathbb G_{m})$, are isomorphisms.
Here $\Gamma({X},\mathbb G_{m})=H^{0}_{et}({X},\mathbb
G_{m})=H^{0}_{Zar}({X},\mathbb G_{m})$. The induced exact sequence on the
cokernels of the vertical maps is
$0\longrightarrow\underset{v\in
Z^{\prime}}{\oplus}R_{v}\longrightarrow\frac{\operatorname{Pic}(X)}{\pi^{*}\operatorname{Pic}(S)}\longrightarrow\frac{\operatorname{Pic}(X_{U^{\prime}})}{\pi^{*}\operatorname{Pic}(U^{\prime})}\longrightarrow
0.$
In particular, we get this sequence for $Z$ and $U$. By assumption, $X_{v}$ is
geometrically irreducible for any $v\notin Z$; so $R_{v}=0$ for any $v\notin
Z$. So this means that, for any $U^{\prime}=S-Z^{\prime}$ contained in $U$,
the induced maps
$\frac{\operatorname{Pic}(X_{U})}{\pi^{*}\operatorname{Pic}(U)}\longrightarrow\frac{\operatorname{Pic}(X_{U^{\prime}})}{\pi^{*}\operatorname{Pic}(U^{\prime})}$
are isomorphisms. Taking the limit over $Z^{\prime}$ gives us the exact
sequence in the proposition.∎
###### Corollary 2.2.
1. (i)
The Tate–Shioda relation [Tat66, (4.5)] $\rho(X)=2+r+m$ holds.
2. (ii)
One has an exact sequence
$0\longrightarrow
B(k)\longrightarrow\frac{\operatorname{Pic}(X)}{\pi^{*}\operatorname{Pic}(S)}\longrightarrow\frac{\operatorname{NS}(X)}{\pi^{*}\operatorname{NS}(S)}\longrightarrow
0.$
###### Proof.
(i) Since $r$ is the rank of $J(F)$, the rank of $\operatorname{Pic}(X_{0})$
is $r+1$. Since $\operatorname{Pic}(S)$ has rank one, $A(k)$ is finite and
$m=\sum_{v\in Z}(m_{v}-1)$, this follows from (2.1) and (2.8).
(ii) This follows from the commutative diagram with exact rows
$0\longrightarrow P(k)\stackrel{\pi^{*}}{\longrightarrow}A(k)\longrightarrow B(k)\longrightarrow 0,$
$0\longrightarrow\operatorname{Pic}(S)\stackrel{\pi^{*}}{\longrightarrow}\operatorname{Pic}(X)\longrightarrow\frac{\operatorname{Pic}(X)}{\pi^{*}\operatorname{Pic}(S)}\longrightarrow 0,$
$0\longrightarrow\operatorname{NS}(S)\stackrel{\pi^{*}}{\longrightarrow}\operatorname{NS}(X)\longrightarrow\frac{\operatorname{NS}(X)}{\pi^{*}\operatorname{NS}(S)}\longrightarrow 0,$
whose columns are the natural inclusion and quotient maps.
∎
### 2.4. Relating the order of vanishing at $s=1$ of $P_{2}(X,q^{-s})$ and
$L(J,s)$
By [Gor79, Proposition 3.3] (first stated on page 176 of [Gor79] with a typo
in the formula for $P_{2}$, corrected in its restatement on page 193; we only
need the part about $P_{2}$, and this is elementary), one has
(2.9) $\zeta(X_{v},s)=\frac{P_{1}(X_{v},q_{v}^{-s})}{(1-q_{v}^{-s})\cdot
P_{2}(X_{v},q_{v}^{-s})},\quad\text{and}\quad
P_{2}(X_{v},q_{v}^{-s})=\begin{cases}1-q_{v}^{1-s},&\text{for }v\notin Z,\\ \prod_{i\in G_{v}}(1-q_{v}^{r_{i}(1-s)}),&\text{for }v\in Z,\end{cases}$
see §2.3.1 for notation. Using
$Q_{2}(s)=\prod_{v\in
Z}\frac{P_{2}(X_{v},q_{v}^{-s})}{(1-q_{v}^{1-s})},\quad\zeta(S,s)=\frac{P_{1}(S,q^{-s})}{(1-q^{-s})\cdot(1-q^{1-s})},\quad\text{and}\quad
Q_{1}(s)=\prod_{v\in S}P_{1}(X_{v},q_{v}^{-s}),$
we can rewrite
$\zeta(X,s)=\prod_{v\in
S}\zeta(X_{v},s)=\frac{1}{Q_{2}(s)}\cdot\prod_{v\in
S}\frac{P_{1}(X_{v},q_{v}^{-s})}{(1-q_{v}^{-s})\cdot(1-q_{v}^{1-s})}=\frac{\zeta(S,s)\cdot\zeta(S,s-1)\cdot
Q_{1}(s)}{Q_{2}(s)}.$
The precise relation between $P_{2}(X,q^{-s})$ and $L(J,s)$ is given by
(2.11).
###### Proposition 2.3.
One has $\operatorname{ord}_{s=1}Q_{2}(s)=m$ and
(2.10) $Q_{2}^{*}(1)=\lim_{s\to
1}\frac{Q_{2}(s)}{(s-1)^{m}}=\prod_{v\in
Z}\left((\log q_{v})^{(m_{v}-1)}\cdot\prod_{i\in G_{v}}r_{i}\right),$
(2.11) $\frac{P_{2}(X,q^{-s})}{(1-q^{1-s})^{2}}=P_{1}(B,q^{-s})\cdot
P_{1}(B,q^{1-s})\cdot L(J,s)\cdot Q_{2}(s).$
###### Proof.
Observe that (2.10) is elementary: for any positive integer $r$, one has
$\lim_{s\to 1}\frac{(1-q_{v}^{r(1-s)})}{(s-1)}=\lim_{s\to
1}\frac{(1-q_{v}^{r(1-s)})}{(1-q_{v}^{1-s})}\cdot\frac{(1-q_{v}^{1-s})}{(s-1)}=\lim_{s\to
1}(1+q_{v}^{1-s}+\cdots+q_{v}^{(r-1)(1-s)})\cdot\log q_{v}=r\cdot\log q_{v}.$
For each $v\in Z$, this shows that
$\lim_{s\to 1}\frac{P_{2}(X_{v},q_{v}^{-s})}{(s-1)^{m_{v}}}=(\log
q_{v})^{m_{v}}\cdot\prod_{i\in G_{v}}r_{i}.$
Therefore, we obtain that
$\lim_{s\to 1}\frac{Q_{2}(s)}{(s-1)^{m}}=\prod_{v\in
Z}\lim_{s\to
1}\frac{\frac{P_{2}(X_{v},q_{v}^{-s})}{(1-q_{v}^{1-s})}}{(s-1)^{m_{v}-1}}=\prod_{v\in
Z}\lim_{s\to
1}\frac{\frac{P_{2}(X_{v},q_{v}^{-s})}{(s-1)^{m_{v}}}}{\frac{(1-q_{v}^{1-s})}{s-1}}=\prod_{v\in
Z}\left(\frac{(\log q_{v})^{m_{v}}\cdot\prod_{i\in
G_{v}}r_{i}}{\log q_{v}}\right).$
We now prove (2.11). Simplifying the identity
$\frac{P_{1}(X,q^{-s})\cdot
P_{1}(X,q^{1-s})}{(1-q^{-s})\cdot
P_{2}(X,q^{-s})\cdot(1-q^{2-s})}=\zeta(X,s)=\frac{P_{1}(S,q^{-s})}{(1-q^{-s})\cdot(1-q^{1-s})}\cdot\frac{P_{1}(S,q^{1-s})}{(1-q^{1-s})\cdot(1-q^{2-s})}\cdot\frac{Q_{1}(s)}{Q_{2}(s)}$
from (1.2) using (2.3), one obtains
$\frac{P_{1}(B,q^{-s})\cdot
P_{1}(B,q^{1-s})}{P_{2}(X,q^{-s})}=\frac{1}{(1-q^{1-s})}\cdot\frac{1}{(1-q^{1-s})}\cdot\frac{Q_{1}(s)}{Q_{2}(s)}.$
On reordering, this becomes
$\frac{P_{2}(X,q^{-s})}{(1-q^{1-s})^{2}}=\frac{P_{1}(B,q^{-s})\cdot
P_{1}(B,q^{1-s})\cdot Q_{2}(s)}{Q_{1}(s)}.$
Let $T_{\ell}J$ be the $\ell$-adic Tate module of the Jacobian $J$ of $X$. For
any $v\in S$, the Kummer sequence on $X$ and $J$ provides a
$\textrm{Gal}(F_{v}^{\mathrm{sep}}/{F_{v}})$-equivariant isomorphism
$H^{1}_{\mathrm{\acute{e}t}}(X\times_{S}F_{v}^{\mathrm{sep}},\mathbb
Z_{\ell}(1))\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}T_{\ell}J\stackrel{{\scriptstyle\sim}}{{\longleftarrow}}H^{1}_{\mathrm{\acute{e}t}}(J\times_{F}F_{v}^{\mathrm{sep}},\mathbb
Z_{\ell}(1)),$
as $J$ is a self-dual Abelian variety: this provides the isomorphisms
$H^{1}_{\mathrm{\acute{e}t}}(J\times_{F}F_{v}^{\mathrm{sep}},\mathbb
Q_{\ell})\cong
H^{1}_{\mathrm{\acute{e}t}}(X\times_{S}F_{v}^{\mathrm{sep}},\mathbb
Q_{\ell}),\quad
H^{1}_{\mathrm{\acute{e}t}}(J\times_{F}F_{v}^{\mathrm{sep}},\mathbb
Q_{\ell})^{I_{v}}\cong
H^{1}_{\mathrm{\acute{e}t}}(X\times_{S}F_{v}^{\mathrm{sep}},\mathbb
Q_{\ell})^{I_{v}}.$
From [Del80, Théorème 3.6.1, pp.213–214] (the arithmetic case is in [Blo87,
Lemma 1.2]), we obtain an isomorphism
$H^{1}_{\mathrm{\acute{e}t}}(X_{v}\times_{k(v)}\overline{k(v)},\mathbb
Q_{\ell})\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}H^{1}_{\mathrm{\acute{e}t}}(X\times_{S}F_{v}^{\mathrm{sep}},\mathbb
Q_{\ell})^{I_{v}}.$
The definition of $L_{v}(J,t)$ in (1.8) now implies that
$P_{1}(X_{v},q_{v}^{-s})=L_{v}(J,q_{v}^{-s})$ and hence $Q_{1}(s)\cdot
L(J,s)=1$.∎
###### Proposition 2.4.
1. (i)
$\operatorname{ord}_{s=1}P_{2}(X,q^{-s})=\rho(X)$ if and only if
$\operatorname{ord}_{s=1}L(J,s)=r$.
2. (ii)
One has
(2.12) $P_{2}^{*}(X,\frac{1}{q})=P_{1}(B,q^{-1})\cdot P_{1}(B,1)\cdot
L^{*}(J,1)\cdot Q_{2}^{*}(1)\cdot(\log
q)^{2}\overset{(2.4)}{=}\frac{[B(k)]^{2}}{q^{\dim(B)}}\cdot
L^{*}(J,1)\cdot Q_{2}^{*}(1)\cdot(\log q)^{2}.$
###### Proof.
As $P_{1}(B,q^{-s})\cdot P_{1}(B,q^{1-s})$ does not vanish at $s=1$ by (2.4),
it follows from (2.11) that
$\operatorname{ord}_{s=1}P_{2}(X,q^{-s})-2=\operatorname{ord}_{s=1}L(J,s)+\operatorname{ord}_{s=1}Q_{2}(s).$
Corollary 2.2 says $\rho(X)=r+m+2$; (i) follows as
$\operatorname{ord}_{s=1}Q_{2}(s)=m$.
For (ii), use (2.4) and (2.11). ∎
### 2.5. Pairings on $\operatorname{NS}(X)$
Our next task is to compute $\Delta(\operatorname{NS}(X))$.
###### Definition 2.5.
1. (i)
Let $\operatorname{Pic}^{0}(X_{0})$ be the kernel of the degree map
$\textrm{deg}:\operatorname{Pic}(X_{0})\to\mathbb Z$; the order $\delta$ of
its cokernel is, by definition, the index of $X_{0}$ over $F$.
2. (ii)
Let $\alpha$ be the order of the cokernel of the natural map
$\operatorname{Pic}^{0}(X_{0})\hookrightarrow J(F)$.
3. (iii)
Let $H$ (horizontal divisor on $X$) be the Zariski closure in $X$ of a divisor
$d$ on $X_{0}$, rational over $F$, of degree $\delta$.
4. (iv)
The (vertical) divisor $V$ on $X$ is $\pi^{-1}(s)$ for a divisor $s$ of degree
one on $S$. Such a divisor $s$ exists as $k$ is a finite field and so the
index of the curve $S$ over $k$ is one. Writing $s=\sum
a_{i}v_{i}$ as a sum of closed points $v_{i}$ on $S$ gives
$V=\sum a_{i}\pi^{-1}(v_{i})$. Note that $V$ generates
$\pi^{*}\operatorname{NS}(S)\subset\operatorname{NS}(X)$.
###### Remark.
The definitions show that the intersections of the divisor classes $H$ and $V$
in $\operatorname{NS}(X)$ are given by
(2.13) $H\cdot V=\delta=V\cdot H\quad\text{and}\quad V\cdot V=0.$
Also, since $\pi:X\to S$ is a flat map between smooth schemes, the map
$\pi^{*}:CH(S)\to CH(X)$ on Chow groups is compatible with intersection of
cycles. Since $V=\pi^{*}(s)$ and the intersection $s\cdot s=0$ in $CH(S)$, one
has $V\cdot V=0$.
Let $\operatorname{NS}(X)_{0}=(\pi^{*}\operatorname{NS}(S))^{\perp}$; as $V$
generates $\pi^{*}\operatorname{NS}(S)$, we see that
$\operatorname{NS}(X)_{0}$ is the subgroup of divisor classes $Y$ such that
$Y\cdot X_{v}=0$ for any fiber $\pi^{-1}(v)=X_{v}$ of $\pi$; let
$\operatorname{Pic}(X)_{0}$ be the inverse image of $\operatorname{NS}(X)_{0}$
under the projection
$\operatorname{Pic}(X)\to\operatorname{NS}(X)\cong\frac{\operatorname{Pic}(X)}{A(k)}$.
###### Lemma 2.6.
$\operatorname{NS}(X)_{0}$ is the subgroup of $\operatorname{NS}(X)$ generated
by divisor classes whose restriction to $X_{0}$ is trivial.
###### Proof.
We need to show that $\operatorname{NS}(X)_{0}$ is equal to
$K:=\textrm{Ker}(\operatorname{NS}(X)\to\operatorname{NS}(X_{0}))$. If $D$ is
a vertical divisor ($\pi(D)\subset S$ is finite), then $D$ is clearly in $K$;
by [Liu02, §9.1, Proposition 1.21], $D$ is in $\operatorname{NS}(X)_{0}$.
If $D$ has no vertical components, then $D\cdot V=\textrm{deg}(D_{0})$. To see
this, clearly we may assume $D$ is reduced and irreducible (integral) and so
flat over $S$. So $\mathcal{O}_{D}$ is locally free over $\mathcal{O}_{S}$ of
constant degree $n$ since $S$ is connected. But then $\textrm{deg}(D_{0})$ is
equal to $n$ as is the integer $D\cdot V$.∎
###### Lemma 2.7.
Let us denote
$R=\underset{v\in Z}{\oplus}R_{v}\quad\text{and}\quad E=B(k)\cap
R\subset\frac{\operatorname{Pic}(X)_{0}}{\pi^{*}\operatorname{Pic}(S)}.$
One has the exact sequences
$\displaystyle 0\longrightarrow
R\longrightarrow\frac{\operatorname{Pic}(X)_{0}}{\pi^{*}\operatorname{Pic}(S)}\longrightarrow\operatorname{Pic}^{0}(X_{0})\longrightarrow
0,\quad\text{and}$ (2.14) $\displaystyle
0\longrightarrow\frac{R}{E}\longrightarrow\frac{\operatorname{NS}(X)_{0}}{\pi^{*}\operatorname{NS}(S)}\longrightarrow\frac{\operatorname{Pic}^{0}(X_{0})}{B(k)/E}\longrightarrow
0.$
###### Proof.
Lemma 2.6 shows that
$R\subset\frac{\operatorname{Pic}(X)_{0}}{\pi^{*}\operatorname{Pic}(S)}$. As
$A(k)$ is the kernel of the map
$\operatorname{Pic}(X)\to\operatorname{NS}(X)$, it follows that
$A(k)\subset\operatorname{Pic}(X)_{0}$. Thus, $B(k)$ is a subgroup of
$\frac{\operatorname{Pic}(X)_{0}}{\pi^{*}\operatorname{Pic}(S)}$.
The first exact sequence follows from Lemma 2.6; the second one follows from
Corollary 2.2 (ii). ∎
###### Lemma 2.8.
One has the equality
$\Delta_{\mathrm{ar}}\left(\frac{\operatorname{NS}(X)_{0}}{\pi^{*}\operatorname{NS}(S)}\right)=[B(k)]^{2}\cdot\alpha^{2}\cdot\Delta_{\mathrm{NT}}(J(F))\cdot\prod_{v\in
Z}\Delta_{\mathrm{ar}}(R_{v}).$
###### Proof.
The exact sequence (2.7) splits orthogonally over $\mathbb Q$: for any divisor
$\gamma$ representing an element of $\operatorname{Pic}(X_{0})$, consider its
Zariski closure $\bar{\gamma}$ in $X$. Since the intersection pairing on
$R_{v}$ is negative-definite [Liu02, §9.1, Theorem 1.23], the linear map
$R_{v}\to\mathbb Z$ defined by $\beta\mapsto\beta\cdot\bar{\gamma}$ is
represented by a unique element
$\psi_{v}(\gamma)\in R_{v}\otimes\mathbb
Q\subset\frac{\operatorname{NS}(X)_{0}}{\pi^{*}\operatorname{NS}(S)}\otimes\mathbb
Q.$
Thus, the element
$\tilde{\gamma}:=\bar{\gamma}-\sum_{v\in Z}\psi_{v}(\gamma)$
is good in the sense of [Gor79, §5, p. 185]: by construction, the divisor
$\tilde{\gamma}$ on $X$ intersects every irreducible component of every fiber
of $\pi$ with multiplicity zero. Fix
$\gamma,\kappa\in\operatorname{Pic}^{0}(X_{0})$: viewing them as elements of
$J(F)$, one computes their Néron–Tate pairing (1.9); also, one can compute the
height pairing of $\tilde{\gamma}$ and $\tilde{\kappa}$ in
$\operatorname{NS}(X)$. These two are related by the identity [Tat66, p. 429]
[LLR18, Remark 3.11]
$\langle\gamma,\kappa\rangle_{\mathrm{NT}}=-\langle\tilde{\gamma},\tilde{\kappa}\rangle_{\mathrm{ar}}=-(\tilde{\gamma}\cdot\tilde{\kappa})\cdot\log
q.$
This says that
(2.15)
$\Delta_{\mathrm{ar}}\left(\operatorname{Pic}^{0}(X_{0})\right)=\Delta_{\mathrm{NT}}\left(\operatorname{Pic}^{0}(X_{0})\right).$
The map
$\operatorname{Pic}^{0}(X_{0})\otimes\mathbb
Q\to\frac{\operatorname{NS}(X)_{0}}{\pi^{*}\operatorname{NS}(S)}\otimes\mathbb
Q,\qquad\gamma\mapsto\tilde{\gamma}$
provides an orthogonal splitting of (2.7) (over $\mathbb Q$). So
$\displaystyle\Delta_{\mathrm{ar}}\left(\frac{\operatorname{NS}(X)_{0}}{\pi^{*}\operatorname{NS}(S)}\right)$
$\displaystyle\overset{(2.14)}{=}\Delta_{\mathrm{ar}}\left(\frac{\operatorname{Pic}^{0}(X_{0})}{B(k)/E}\right)\cdot\Delta_{\mathrm{ar}}\left(\frac{R}{E}\right)=\frac{[B(k)]^{2}}{e^{2}}\cdot\Delta_{\mathrm{ar}}\left({\operatorname{Pic}^{0}(X_{0})}\right)\cdot
e^{2}\,\Delta_{\mathrm{ar}}(R)$
$\displaystyle\overset{(2.15)}{=}[B(k)]^{2}\cdot\Delta_{\mathrm{NT}}\left({\operatorname{Pic}^{0}(X_{0})}\right)\cdot\Delta_{\mathrm{ar}}(R),$
where $e=[E]$ is the size of $E$. As
(2.16)
$\Delta_{\mathrm{NT}}(\operatorname{Pic}^{0}(X_{0}))=\alpha^{2}\cdot\Delta_{\mathrm{NT}}(J(F))\quad\text{and}\quad\Delta_{\mathrm{ar}}(R)=\prod_{v\in
Z}\Delta_{\mathrm{ar}}(R_{v}),$
this proves the lemma. ∎
With Lemma 2.8 at hand, we are almost ready to compute
$\Delta_{\mathrm{ar}}(\operatorname{NS}(X))$. As the intersection pairing on
$\operatorname{NS}(X)$ is not definite (Hodge index theorem), we cannot apply
(1.5). Instead, we use a variant of a lemma of Z. Yun [Yun15].
#### 2.5.1. A lemma of Yun
Given a non-degenerate symmetric bilinear pairing
$\Lambda\times\Lambda\to\mathbb Z$ on a finitely generated Abelian group
$\Lambda$, an isotropic subgroup $\Gamma$, and a subgroup $\Gamma^{\prime}$
containing $\Gamma$ and of finite index in $\Gamma^{\perp}$, let
$\Lambda_{0}=\frac{\Gamma^{\prime}}{\Gamma}$. We recall from §1.4 that
$\Delta(\Lambda)=z(D)^{-1}$ where $D:=\Lambda\to R\textrm{Hom}(\Lambda,\mathbb
Z)$ and $\Delta(\Lambda_{0})={z(D_{0})}^{-1}$ where $D_{0}:=\Lambda_{0}\to
R\textrm{Hom}(\Lambda_{0},\mathbb Z)$. Let $\Delta$ be the discriminant of the
induced non-degenerate pairing
$\Gamma\times\frac{\Lambda}{\Gamma^{\prime}}\to\mathbb Z$:
$\Delta=\frac{1}{z(C)}=\frac{1}{z(C^{\prime})},\quad C:=\Gamma\to
R\textrm{Hom}\left(\frac{\Lambda}{\Gamma^{\prime}},\mathbb
Z\right),\quad\text{and}\quad C^{\prime}:=\frac{\Lambda}{\Gamma^{\prime}}\to
R\textrm{Hom}(\Gamma,\mathbb Z).$
###### Lemma 2.9 (_cf._ [Yun15, Lemma 2.12]).
One has $\Delta(\Lambda)=\Delta(\Lambda_{0})\cdot\Delta^{2}$.
###### Proof.
Applying (1.6) to the map of triangles from
$\Gamma\longrightarrow\Lambda\longrightarrow\frac{\Lambda}{\Gamma}\longrightarrow\Gamma[1]$
to
$R\textrm{Hom}\left(\frac{\Lambda}{\Gamma^{\prime}},\mathbb Z\right)\longrightarrow R\textrm{Hom}(\Lambda,\mathbb Z)\longrightarrow R\textrm{Hom}(\Gamma^{\prime},\mathbb Z)\longrightarrow R\textrm{Hom}\left(\frac{\Lambda}{\Gamma^{\prime}},\mathbb Z\right)[1]$
and to the map of triangles from
$\frac{\Gamma^{\prime}}{\Gamma}\longrightarrow\frac{\Lambda}{\Gamma}\longrightarrow\frac{\Lambda}{\Gamma^{\prime}}\longrightarrow\frac{\Gamma^{\prime}}{\Gamma}[1]$
to
$R\textrm{Hom}\left(\frac{\Gamma^{\prime}}{\Gamma},\mathbb Z\right)\longrightarrow R\textrm{Hom}(\Gamma^{\prime},\mathbb Z)\longrightarrow R\textrm{Hom}(\Gamma,\mathbb Z)\longrightarrow R\textrm{Hom}\left(\frac{\Gamma^{\prime}}{\Gamma},\mathbb Z\right)[1]$
shows that $z(D)\cdot z(C)^{-1}=z(D_{0})\cdot z(C^{\prime})$. ∎
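Lemma 2.9 can be checked by hand on a small integral lattice. The following sketch uses invented test data (not from the text): $\Lambda=\mathbb Z^{3}$ with a hyperbolic plane on $(e_{1},e_{2})$ plus $\langle-2\rangle$ on $e_{3}$, $\Gamma=\mathbb Z e_{1}$ (isotropic), and $\Gamma^{\prime}=\Gamma^{\perp}$:

```python
# Pairing matrix on Lambda = Z^3: a hyperbolic plane on (e1, e2) plus <-2> on e3.
M = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, -2]]

def det3(A):
    """Determinant of a 3x3 integer matrix by cofactor expansion."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

disc_Lambda = abs(det3(M))   # discriminant of the full pairing: |det M| = 2

# Gamma = Z*e1 is isotropic (<e1, e1> = 0) and Gamma-perp = span(e1, e3);
# taking Gamma' = Gamma-perp gives Lambda_0 = Gamma'/Gamma = Z*e3 with pairing <-2>.
disc_Lambda0 = abs(M[2][2])  # discriminant of the induced pairing on Lambda_0

# The induced pairing Gamma x (Lambda/Gamma') -> Z sends (e1, e2) to 1, so Delta = 1.
Delta = abs(M[0][1])

assert disc_Lambda == disc_Lambda0 * Delta ** 2   # 2 == 2 * 1**2
```

The same bookkeeping with $\Gamma=\pi^{*}\operatorname{NS}(S)$ inside $\operatorname{NS}(X)$ is what Proposition 2.10 carries out.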
We can finally compute $\Delta_{\mathrm{ar}}(\operatorname{NS}(X))$.
###### Proposition 2.10.
The following relations hold
$\Delta_{\mathrm{ar}}(\operatorname{NS}(X))=\delta^{2}\cdot\Delta_{\mathrm{ar}}\left(\frac{\operatorname{NS}(X)_{0}}{\pi^{*}\operatorname{NS}(S)}\right)\cdot(\log
q)^{2}\quad\text{and}\quad\Delta(\operatorname{NS}(X))=\delta^{2}\cdot\Delta\left(\frac{\operatorname{NS}(X)_{0}}{\pi^{*}\operatorname{NS}(S)}\right).$
###### Proof.
Let $\mathbb
Z\cong\Gamma=\pi^{*}\operatorname{NS}(S)\subset\operatorname{NS}(X)=\Lambda$
with $\Gamma^{\prime}=\operatorname{NS}(X)_{0}$ and
$\Lambda_{0}=\frac{\operatorname{NS}(X)_{0}}{\pi^{*}\operatorname{NS}(S)}$. Lemma
2.6 implies that
$\frac{\Lambda}{\Gamma^{\prime}}=\frac{\operatorname{NS}(X)}{\operatorname{NS}(X)_{0}}\cong\mathbb
Z\quad\text{and}\quad
C=\Gamma\to\textrm{Hom}\left(\frac{\operatorname{NS}(X)}{\operatorname{NS}(X)_{0}},\mathbb
Z\right),$
with $C$ as in Lemma 2.9. Now (2.13) shows that $\pi^{*}\operatorname{NS}(S)$
is isotropic and $\Delta=\delta$. The result follows from Lemma 2.9. ∎
Combining the previous proposition with Lemma 2.8 provides the identity
(2.17)
$\Delta_{\mathrm{ar}}(\operatorname{NS}(X))=\delta^{2}\cdot[B(k)]^{2}\cdot\alpha^{2}\cdot\Delta_{\mathrm{NT}}(J(F))\cdot\prod_{v\in
Z}\Delta_{\mathrm{ar}}(R_{v})\cdot(\log q)^{2}.$
For $v\in S$, we put $\delta_{v}$ and $\delta^{\prime}_{v}$ for the (local)
index and period of $X\times F_{v}$ over the local field $F_{v}$.
###### Theorem 2.11.
[Gei20, Theorem 1.1] Assume that $\operatorname{Br}(X)$ is finite. The
following equality holds:
(2.18)
$[\operatorname{Br}(X)]\,\alpha^{2}\delta^{2}=[\Sha(J/F)]\prod_{v\in
S}\delta^{\prime}_{v}\delta_{v}.$
###### Remark 2.12.
Note that for $v\in U$, one has $\delta_{v}=1=\delta^{\prime}_{v}$ [LLR18, p.
603], [FS21, (74)] (for $\delta_{v}=1$), [Gro68, Proposition (4.1) (a)]
($\delta^{\prime}_{v}$ divides $\delta_{v}$); the basic reason is that if
$v\in U$, then $X_{v}$ has a rational divisor of degree one as $k(v)$ is
finite; this divisor lifts to a rational divisor of degree one on $X\times
F_{v}$ by smoothness of $X_{v}$. Also, $c_{v}=1$ [BLR90, Theorem 1, §9.5 p.
264]. So $c(J):=\prod_{v\in S}c_{v}$ satisfies
(2.19) $c(J)=\prod_{v\in Z}c_{v}.$
###### Lemma 2.13.
One has
(2.20) $c(J)\cdot Q_{2}^{*}(1)=\prod_{v\in
Z}\delta_{v}\cdot\delta^{\prime}_{v}\cdot\Delta_{\mathrm{ar}}(R_{v}).$
###### Proof.
By a result of Flach and Siebel [FS21, Lemma 17] (using Raynaud’s theorem
[Gor79, Theorem 5.2] in [BL99]), one has
$\Delta_{\mathrm{ar}}(R_{v})=\frac{c_{v}}{\delta_{v}\cdot\delta^{\prime}_{v}}\cdot(\log
q_{v})^{m_{v}-1}\cdot\prod_{i\in G_{v}}r_{i}.$
So we find that
$\displaystyle\prod_{v\in
Z}\delta_{v}\cdot\delta^{\prime}_{v}\cdot\Delta_{\mathrm{ar}}(R_{v})$
$\displaystyle=\prod_{v\in Z}\left({c_{v}}\cdot(\log
q_{v})^{m_{v}-1}\cdot\prod_{i\in
G_{v}}r_{i}\right)=\prod_{v\in
Z}{c_{v}}\cdot\prod_{v\in Z}\left((\log
q_{v})^{m_{v}-1}\cdot\prod_{i\in G_{v}}r_{i}\right)$
$\displaystyle\overset{(2.19)}{=}c(J)\cdot\prod_{v\in
Z}\left((\log
q_{v})^{m_{v}-1}\cdot\prod_{i\in
G_{v}}r_{i}\right)\overset{(2.10)}{=}c(J)\cdot Q_{2}^{*}(1).$
∎
## 3. First proof of Theorem 1.1
###### Proof of Theorem 1.1.
By (2.17) and (2.20), we have
$\Delta_{\mathrm{ar}}(\operatorname{NS}(X))=\frac{\alpha^{2}\,\delta^{2}}{\prod_{v\in
Z}\delta_{v}\cdot\delta^{\prime}_{v}}\cdot\Delta_{\mathrm{NT}}(J(F))\cdot
c(J)\cdot[B(k)]^{2}\cdot Q_{2}^{*}(1)\cdot(\log q)^{2}.$
From Theorem 2.11, we have
$[\operatorname{Br}(X)]\cdot\Delta_{\mathrm{ar}}(\operatorname{NS}(X))=[\Sha(J/F)]\cdot\Delta_{\mathrm{NT}}(J(F))\cdot
c(J)\cdot[B(k)]^{2}\cdot Q_{2}^{*}(1)\cdot(\log q)^{2}.$
Further with (2.5), we obtain
$[\operatorname{Br}(X)]\cdot\Delta_{\mathrm{ar}}(\operatorname{NS}(X))\cdot
q^{-\alpha(X)}=[\Sha(J/F)]\cdot\Delta_{\mathrm{NT}}(J(F))\cdot c(J)\cdot
q^{\chi(S,\operatorname{Lie}\,\mathcal{J})}\cdot[B(k)]^{2}\cdot
Q_{2}^{*}(1)\cdot q^{-\dim(B)}\cdot(\log q)^{2}.$
On the other hand, recall (2.12)
$P_{2}^{*}(X,\frac{1}{q})=L^{*}(J,1)\cdot[B(k)]^{2}\cdot Q_{2}^{*}(1)\cdot
q^{-\dim(B)}\cdot(\log q)^{2}.$
The ratio of the previous two equalities gives
$\frac{P_{2}^{*}(X,\frac{1}{q})}{[\operatorname{Br}(X)]\cdot\Delta_{\mathrm{ar}}(\operatorname{NS}(X))\cdot
q^{-\alpha(X)}}=\frac{L^{*}(J,1)}{[\Sha(J/F)]\cdot\Delta_{\mathrm{NT}}(J(F))\cdot
c(J)\cdot q^{\chi(S,\operatorname{Lie}\,\mathcal{J})}}.$
This equality implies Theorem 1.1. ∎
## 4. Second proof of Theorem 1.1
We give another, more direct proof of Theorem 1.1 using Weil-étale
cohomology. We refer the reader to [Lic05, Gei04, GS20] for basics about
Weil-étale cohomology over finite fields. Throughout this section, we assume
that $\operatorname{Br}(X)$ (and hence $\Sha(J/F)$) is finite.
### 4.1. Setup
Let $C\in D^{b}(T_{\mathrm{\acute{e}t}})$ be an object of the bounded derived
category of sheaves of Abelian groups on the small étale site
$T_{\mathrm{\acute{e}t}}$. Let $D\in D^{b}(\mathrm{FDVect}_{k})$ be an object
of the bounded derived category of finite-dimensional vector spaces over $k$.
Assume that the Weil-étale cohomology $H^{\ast}_{W}(T,C)$ is finitely
generated and the cohomology sheaf
$H^{\ast}(C\otimes^{L}\mathbb{Z}/l\mathbb{Z})$ is finite in all degrees for
all prime numbers $l\nmid q$. Let $e\colon H^{i}_{W}(T,C)\to H^{i+1}_{W}(T,C)$
be the map defined by cup product with the arithmetic Frobenius $\in
H^{1}_{W}(T,\mathbb{Z})$. It defines a complex
$\cdots\stackrel{{\scriptstyle
e}}{{\longrightarrow}}H^{i}_{W}(T,C)\stackrel{{\scriptstyle
e}}{{\longrightarrow}}H^{i+1}_{W}(T,C)\stackrel{{\scriptstyle
e}}{{\longrightarrow}}\cdots$
with finite cohomology. Set
$C_{\mathbb{Q}_{l}}=R\varprojlim_{n}(C\otimes^{L}\mathbb{Z}/l^{n}\mathbb{Z})\otimes_{\mathbb{Z}_{l}}\mathbb{Q}_{l}$,
whose cohomologies are finite-dimensional vector spaces over $\mathbb{Q}_{l}$
(by the finiteness of $H^{\ast}(C\otimes^{L}\mathbb{Z}/l\mathbb{Z})$) equipped
with an action of the geometric Frobenius $\varphi$ of $k$. Define
$\displaystyle Z(C,t)$ $\displaystyle=\prod_{i}\det(1-\varphi
t\,|\,H^{i}(C_{\mathbb{Q}_{l}}))^{(-1)^{i+1}},$ $\displaystyle\rho(C)$
$\displaystyle=\sum_{j}(-1)^{j+1}\cdot
j\cdot\operatorname{rank}H^{j}_{W}(T,C),$ $\displaystyle\chi_{W}(C)$
$\displaystyle=\chi(H^{\ast}_{W}(T,C),e),\quad\text{and}$
$\displaystyle\chi(D)$ $\displaystyle=\sum_{j}(-1)^{j}\dim
H^{j}(D).$
Assume that $Z(C,t)\in\mathbb{Q}(t)$ and is independent of $l$. Define
$Q(C,D)\in\mathbb{Q}_{>0}^{\times}\times(1-t)^{\mathbb{Z}}$ to be the leading
term of the $(1-t)$-adic expansion of the function
$\pm\frac{Z(C,t)(1-t)^{\rho(C)}}{\chi_{W}(C)q^{\chi(D)}}$
(the sign is the one that makes the coefficient positive). It is the defect of
a zeta value formula of the form
$\lim_{t\to 1}Z(C,t)(1-t)^{\rho(C)}=\pm\chi_{W}(C)q^{\chi(D)}.$
We mention $Q(C,D)$ only when $H^{\ast}_{W}(T,C)$ is finitely generated,
$H^{\ast}(C\otimes^{L}\mathbb{Z}/l\mathbb{Z})$ is finite and
$Z(C,t)\in\mathbb{Q}(t)$ is independent of $l$. These conditions are satisfied
for the cases of interest below. We have
$Q(C[1],D[1])=Q(C,D)^{-1}.$
If $(C,D)$, $(C^{\prime},D^{\prime})$ and
$(C^{\prime\prime},D^{\prime\prime})$ are pairs as above, and $C\to
C^{\prime}\to C^{\prime\prime}\to C[1]$ and $D\to D^{\prime}\to
D^{\prime\prime}\to D[1]$ are distinguished triangles, then
$Q(C^{\prime},D^{\prime})=Q(C,D)Q(C^{\prime\prime},D^{\prime\prime})$.
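For polynomial test data, the leading term of the $(1-t)$-adic expansion entering the definition of $Q(C,D)$ can be computed mechanically. A minimal sketch with an invented polynomial $P(t)=(1-t)^{2}(1+t)$ (not one of the $P_{i}$ of the paper):

```python
from math import comb

def poly_in_u(coeffs):
    """Rewrite P(t) = sum(c_k * t**k) in powers of u = 1 - t (i.e. t = 1 - u)."""
    out = [0] * len(coeffs)
    for k, c in enumerate(coeffs):
        for j in range(k + 1):           # (1-u)**k = sum_j C(k,j) * (-u)**j
            out[j] += c * comb(k, j) * (-1) ** j
    return out

def leading_term_at_1(coeffs):
    """Order of vanishing at t = 1 and the leading (1-t)-adic coefficient."""
    for j, c in enumerate(poly_in_u(coeffs)):
        if c != 0:
            return j, c
    raise ValueError("zero polynomial")

# P(t) = (1-t)**2 * (1+t) = 1 - t - t**2 + t**3 vanishes to order 2 at t = 1,
# with leading (1-t)-adic coefficient 2 (P = 2u**2 - u**3 in u = 1 - t).
assert leading_term_at_1([1, -1, -1, 1]) == (2, 2)
```

A rational function such as $Z(C,t)$ can be handled the same way by applying this to its numerator and denominator separately and subtracting the orders.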
### 4.2. Special cases
We give two special cases of the above constructions. First, let
$\pi_{X}\colon X_{\mathrm{\acute{e}t}}\to T_{\mathrm{\acute{e}t}}$ be the
structure morphism. Let $P_{2}^{\diamond}(X,1)(1-t)^{\rho(X)^{\prime}}$ be the
leading term of the $(1-t)$-adic expansion of $P_{2}(X,t/q)$.
###### Proposition 4.1.
Let $(C,D)=(R\pi_{X,\ast}\mathbb{G}_{m}[-1],R\Gamma(X,\mathcal{O}_{X}))$. Then
$H^{\ast}(C\otimes^{\mathrm{L}}\mathbb{Z}/l\mathbb{Z})$ is finite,
$H^{\ast}_{W}(T,C)$ is finitely generated, $Z(C,q^{-s})=\zeta(X,s+1)$ and
$Q(C,D)^{-1}=\frac{P_{2}^{\diamond}(X,1)\cdot(1-t)^{\rho(X)^{\prime}-\rho(X)}}{[\operatorname{Br}(X)]\cdot\Delta(\operatorname{NS}(X))\cdot
q^{-\alpha(X)}}.$
In particular, the statement $Q(C,D)=1$ is equivalent to Conjecture 1.2.
###### Proof.
We have $H^{\ast}_{W}(T,C)\cong H^{\ast}_{W}(X,\mathbb{G}_{m}[-1])\cong
H^{\ast}_{W}(X,\mathbb{Z}(1))$. The finiteness assumption on
$\operatorname{Br}(X)$ implies the Tate conjecture for divisors on $X$ and
hence the finite generation of $H^{\ast}_{W}(X,\mathbb{Z}(1))$ by [Gei04,
Theorems 8.4 and 9.3]. The object
$C\otimes^{\mathrm{L}}\mathbb{Z}/l\mathbb{Z}\cong
R\pi_{X,\ast}\mathbb{Z}/l\mathbb{Z}(1)\in D^{b}(T_{\mathrm{\acute{e}t}})$ is
constructible and hence its cohomologies are finite. We have
$H^{i}(C_{\mathbb{Q}_{l}})\cong R^{i}\pi_{X,\ast}\mathbb{Q}_{l}(1)$, which is
the vector space
$H_{\mathrm{\acute{e}t}}^{i}(X\times_{k}\bar{k},\mathbb{Q}_{l}(1))$ equipped
with the natural Frobenius action. It follows that $Z(C,q^{-s})=\zeta(X,s+1)$.
We calculate $Q(C,D)^{-1}$. By (1.2), (2.3) and (2.4), the leading term of the
$(1-t)$-adic expansion of $Z(C,t)$ is
(4.1) $-\frac{[A(k)]^{2}}{P_{2}^{\diamond}(X,1)\cdot(q-1)^{2}\cdot q^{\dim
A-1}\cdot(1-t)^{\rho(X)^{\prime}}}.$
By [Gei04, Theorems 7.5 and 9.1], we have
$\chi_{W}(C)=\prod_{i}[H_{W}^{i}(X,\mathbb{Z}(1))_{\mathrm{tor}}]^{(-1)^{i}}\cdot
R^{-1},$
where $R$ is the determinant of the pairing
$H_{W}^{2}(X,\mathbb{Z}(1))\times
H_{W}^{2}(X,\mathbb{Z}(1))\stackrel{{\scriptstyle\cup}}{{\longrightarrow}}H_{W}^{4}(X,\mathbb{Z}(2))\longrightarrow
H_{\mathrm{\acute{e}t}}^{4}(X\times_{k}\bar{k},\mathbb{Z}(2))\cong\mathrm{CH}^{2}(X\times_{k}\bar{k})\stackrel{{\scriptstyle\deg}}{{\longrightarrow}}\mathbb{Z}.$
We have $H_{W}^{n}(X,\mathbb{Z}(1))=0$ for $n>5$ by [Gei04, Theorem 7.3] and
for $n<1$ obviously. Also
$H_{W}^{1}(X,\mathbb{Z}(1))\cong k^{\times},\quad
H_{W}^{2}(X,\mathbb{Z}(1))\cong\operatorname{Pic}(X),\quad\text{and}\quad
H_{W}^{3}(X,\mathbb{Z}(1))_{\mathrm{tor}}\cong\operatorname{Br}(X)$
by [Gei04, Proposition 7.4 (c) and (d)]. By [Gei18, Remark 3.3], the group
$H_{W}^{i}(X,\mathbb{Z}(1))_{\mathrm{tor}}$ is Pontryagin dual to
$H_{W}^{6-i}(X,\mathbb{Z}(1))_{\mathrm{tor}}$ for any $i$. The above pairing
defining $R$ can be identified with the intersection pairing
$\operatorname{Pic}(X)\times\operatorname{Pic}(X)\to\mathbb{Z}$. Thus, with
(2.1), we have
(4.2)
$\chi_{W}(C)=\frac{[A(k)]^{2}}{[\operatorname{Br}(X)]\cdot\Delta(\operatorname{NS}(X))\cdot(q-1)^{2}}.$
Since the rank of $H_{W}^{i}(X,\mathbb{Z}(1))$ is $\rho(X)$ for $i=2,3$ and
zero otherwise by [Gei04, Proposition 7.4 (c) and (d)], we have
(4.3) $\rho(C)=\rho(X).$
Combining (1.3), (4.1), (4.2) and (4.3), we get the desired formula for
$Q(C,D)^{-1}$. ∎
Next, let $\pi_{S}\colon S_{\mathrm{\acute{e}t}}\to T_{\mathrm{\acute{e}t}}$
be the structure morphism. Let $L^{\diamond}(J,1)(1-q^{-s})^{r^{\prime}}$ be
the leading term of the $(1-q^{-s})$-adic expansion of $L(J,s+1)$. Let
$\Delta(J(F))$ be the discriminant of the pairing
$(\gamma,\kappa)\mapsto\langle\gamma,\kappa\rangle_{\mathrm{NT}}/\log q$ on
$J(F)$.
###### Proposition 4.2.
Let
$(C,D)=(R\pi_{S,\ast}\mathcal{J}[-1],R\Gamma(S,\operatorname{Lie}\,\mathcal{J}))$.
Then $H^{\ast}(C\otimes^{\mathrm{L}}\mathbb{Z}/l\mathbb{Z})$ is finite,
$H^{\ast}_{W}(T,C)$ is finitely generated, $Z(C,q^{-s})=L(J,s+1)$ and
$Q(C,D)=\frac{L^{\diamond}(J,1)\cdot(1-t)^{r^{\prime}-r}}{[\Sha(J/F)]\cdot\Delta(J(F))\cdot
c(J)\cdot q^{\chi(S,\operatorname{Lie}\,\mathcal{J})}}.$
In particular, the statement $Q(C,D)=1$ is equivalent to Conjecture 1.3.
###### Proof.
We have $H^{\ast}_{W}(T,C)\cong H_{W}^{\ast-1}(S,\mathcal{J})$. The finiteness
assumption of $\Sha(J/F)$ implies the finite generation of
$H_{W}^{\ast}(S,\mathcal{J})$ by [GS20, Proposition 6.4]. We have
$C\otimes^{\mathrm{L}}\mathbb{Z}/l\mathbb{Z}\cong
R\pi_{S,\ast}(\mathcal{J}\otimes^{L}\mathbb{Z}/l\mathbb{Z})[-1]$. By the
paragraph before the proof of [GS20, Proposition 9.2] and the first displayed
equation in the proof of [GS20, Proposition 9.2], we know that
$\mathcal{J}\otimes^{\mathrm{L}}\mathbb{Z}/l\mathbb{Z}\in
D^{b}(S_{\mathrm{\acute{e}t}})$ is constructible. Hence
$H^{\ast}(C\otimes^{\mathrm{L}}\mathbb{Z}/l\mathbb{Z})$ is finite. We also
have $H^{i}(C_{\mathbb{Q}_{l}})\cong R^{i}\pi_{S,\ast}V_{l}(\mathcal{J})$
(where $V_{l}$ denotes the $l$-adic Tate modules tensored with
$\mathbb{Q}_{l}$), which is the vector space
$H_{\mathrm{\acute{e}t}}^{i}(S\times_{k}\bar{k},V_{l}(\mathcal{J}))$ equipped
with the natural Frobenius action. Hence we have $Z(C,q^{-s})=L(J,s+1)$ by
[Sch82, Satz 1]. We have
$\chi_{W}(C)=[\Sha(J/F)]\cdot\Delta(J(F))\cdot c(J)$
by [GS20, Proposition 8.3]. By [GS20, Proposition 7.1], the rank of
$H_{W}^{i}(S,\mathcal{J})$ is $r$ for $i=0,1$ and zero otherwise. Hence
$\rho(C)=-r$. The formula for $Q(C,D)$ follows. ∎
### 4.3. Comparison
Now Theorem 1.1 follows from the following
###### Proposition 4.3.
One has
$Q(R\pi_{X,\ast}\mathbb{G}_{m}[-1],R\Gamma(X,\mathcal{O}_{X}))^{-1}=Q(R\pi_{S,\ast}\mathcal{J}[-1],R\Gamma(S,\operatorname{Lie}\,\mathcal{J})).$
###### Proof.
We have $R^{i}\pi_{\ast}\mathbb{G}_{m}=0$ over $S_{\mathrm{\acute{e}t}}$ for
all $i\geq 2$ by [Gro68, Corollaire (3.2)]. Hence we have a distinguished
triangle
$R\pi_{S,\ast}\mathbb{G}_{m}\longrightarrow
R\pi_{X,\ast}\mathbb{G}_{m}\longrightarrow
R\pi_{S,\ast}\operatorname{Pic}_{X/S}[-1]\longrightarrow
R\pi_{S,\ast}\mathbb{G}_{m}[1]$
in $D(T_{\mathrm{\acute{e}t}})$. (Here
$\operatorname{Pic}_{X/S}=R^{1}\pi_{\ast}\mathbb{G}_{m}$ is only an étale
sheaf; the fppf sheaf denoted by the same symbol is not an algebraic space in
general.) Similarly, we have a distinguished triangle
$R\Gamma(S,\mathcal{O}_{S})\longrightarrow
R\Gamma(X,\mathcal{O}_{X})\longrightarrow
R\Gamma(S,R^{1}\pi_{\ast}\mathcal{O}_{X})[-1]\longrightarrow
R\Gamma(S,\mathcal{O}_{S})[1].$
We have $Q(R\pi_{S,\ast}\mathbb{G}_{m}[-1],R\Gamma(S,\mathcal{O}_{S}))=1$ by
the class number formula ([Gei04, Theorems 9.1 and 9.3], or [Lic05, Theorems
5.4 and 7.4] and the functional equation). Therefore
(4.4)
$Q(R\pi_{X,\ast}\mathbb{G}_{m}[-1],R\Gamma(X,\mathcal{O}_{X}))^{-1}=Q(R\pi_{S,\ast}\operatorname{Pic}_{X/S}[-1],R\Gamma(S,R^{1}\pi_{\ast}\mathcal{O}_{X})).$
For a closed point $v\in S$, let
$\iota_{v}\colon\operatorname{Spec}k(v)\hookrightarrow S$ be the inclusion.
For any $i\in G_{v}$, let $k(v)_{i}$ be the algebraic closure of $k(v)$ in the
function field of the irreducible component of $X_{v}$ indexed by $i$. Let $\iota_{v,i}\colon\operatorname{Spec}k(v)_{i}\to S$
be the natural morphism. Set
$E=\bigoplus_{v\in Z}\frac{\bigoplus_{i\in
G_{v}}\iota_{v,i,\ast}\mathbb{Z}}{\iota_{v,\ast}\mathbb{Z}}.$
Let $j\colon\operatorname{Spec}F\hookrightarrow S$ be the inclusion. Then we
have a natural exact sequence
$0\longrightarrow E\longrightarrow\operatorname{Pic}_{X/S}\longrightarrow
j_{\ast}\operatorname{Pic}_{X_{0}/F}\longrightarrow 0$
over $S_{\mathrm{\acute{e}t}}$ by [Gro68, Equations (4.10 bis) and (4.21)]
(where the assumption [Gro68, Equation (4.13)] is satisfied since $k(v)$ is
finite and hence perfect for all closed $v\in S$). Therefore we have a
distinguished triangle
$R\pi_{S,\ast}E\longrightarrow
R\pi_{S,\ast}\operatorname{Pic}_{X/S}\longrightarrow
R\pi_{S,\ast}j_{\ast}\operatorname{Pic}_{X_{0}/F}\longrightarrow
R\pi_{S,\ast}E[1].$
Since $E$ is skyscraper, we have $Q(R\pi_{S,\ast}E,0)=1$ by [GS21, Theorem
3.1] (Step 3 of the proof is sufficient). Therefore
(4.5)
$Q(R\pi_{S,\ast}\operatorname{Pic}_{X/S}[-1],R\Gamma(S,R^{1}\pi_{\ast}\mathcal{O}_{X}))=Q(R\pi_{S,\ast}j_{\ast}\operatorname{Pic}_{X_{0}/F}[-1],R\Gamma(S,R^{1}\pi_{\ast}\mathcal{O}_{X})).$
Applying $j_{\ast}$ to the exact sequence
$0\longrightarrow
J\longrightarrow\operatorname{Pic}_{X_{0}/F}\longrightarrow\mathbb{Z}\longrightarrow
0$
over $\operatorname{Spec}F_{\mathrm{\acute{e}t}}$, we obtain an exact sequence
$0\longrightarrow\mathcal{J}\longrightarrow
j_{\ast}\operatorname{Pic}_{X_{0}/F}\longrightarrow\mathbb{Z}$
over $S_{\mathrm{\acute{e}t}}$. Let $I$ be the image of the last morphism, so
that we have an exact sequence
$0\longrightarrow\mathcal{J}\longrightarrow
j_{\ast}\operatorname{Pic}_{X_{0}/F}\longrightarrow I\longrightarrow 0.$
Then we have distinguished triangles
$\displaystyle R\pi_{S,\ast}\mathcal{J}\longrightarrow
R\pi_{S,\ast}j_{\ast}\operatorname{Pic}_{X_{0}/F}\longrightarrow
R\pi_{S,\ast}I\longrightarrow R\pi_{S,\ast}\mathcal{J}[1],\quad\text{and}$
$\displaystyle R\pi_{S,\ast}I\longrightarrow
R\pi_{S,\ast}\mathbb{Z}\longrightarrow
R\pi_{S,\ast}(\mathbb{Z}/I)\longrightarrow R\pi_{S,\ast}I[1].$
We have $Q(R\pi_{S,\ast}\mathbb{Z},0)=1$ again by the class number formula
([Gei04, Theorems 9.1 and 9.2] or [Lic05, Theorem 7.4]). Since $\mathbb{Z}/I$
is skyscraper with finite stalks, we have $Q(R\pi_{S,\ast}(\mathbb{Z}/I),0)=1$
by [GS21, Theorem 3.1] (Step 2 of the proof is sufficient). Therefore
(4.6)
$Q(R\pi_{S,\ast}j_{\ast}\operatorname{Pic}_{X_{0}/F}[-1],R\Gamma(S,R^{1}\pi_{\ast}\mathcal{O}_{X}))=Q(R\pi_{S,\ast}\mathcal{J}[-1],R\Gamma(S,R^{1}\pi_{\ast}\mathcal{O}_{X})).$
The complexes $R\Gamma(S,R^{1}\pi_{\ast}\mathcal{O}_{X})$ and
$R\Gamma(S,\operatorname{Lie}\,\mathcal{J})$ have the same Euler
characteristic by (2.15). Hence
(4.7)
$Q(R\pi_{S,\ast}\mathcal{J}[-1],R\Gamma(S,R^{1}\pi_{\ast}\mathcal{O}_{X}))=Q(R\pi_{S,\ast}\mathcal{J}[-1],R\Gamma(S,\operatorname{Lie}\,\mathcal{J})).$
Combining (4.4)–(4.7), we get the desired equality. ∎
### 4.4. A new proof of Geisser’s formula
The above proposition, combined with the results of the previous sections,
also gives a new proof of Theorem 2.11 as follows.
###### Proof of Theorem 2.11.
By Proposition 4.3, we have
$\frac{P_{2}^{\diamond}(X,1)}{[\operatorname{Br}(X)]\cdot\Delta(\operatorname{NS}(X))\cdot
q^{-\alpha(X)}}=\frac{L^{\diamond}(J,1)}{[\Sha(J/F)]\cdot\Delta(J(F))\cdot
c(J)\cdot q^{\chi(S,\operatorname{Lie}\,\mathcal{J})}}.$
By (2.12), we have
$P_{2}^{\diamond}(X,1)=L^{\diamond}(J,1)\cdot q^{-\dim B}\cdot[B(k)]^{2}\cdot
Q_{2}^{\diamond}(1),$
where $Q_{2}^{\diamond}(1)$ is the leading coefficient of the
$(1-q^{-s})$-adic expansion of $Q_{2}(s+1)$. By (2.17) and (2.20), we have
$\Delta(\operatorname{NS}(X))=\frac{\alpha^{2}\delta^{2}}{\prod_{v\in
Z}\delta_{v}^{\prime}\delta_{v}}\cdot\Delta(J(F))\cdot
c(J)\cdot[B(k)]^{2}\cdot Q_{2}^{\diamond}(1).$
By (2.5), we have
$q^{-\alpha(X)}=q^{\chi(S,\operatorname{Lie}\,\mathcal{J})}\cdot q^{-\dim B}.$
Taking a suitable alternating product of these four equalities, we obtain
(2.18). ∎
## References
* [Blo87] S. Bloch, _de Rham cohomology and conductors of curves_ , Duke Math. J. 54 (1987), no. 2, 295–308.
* [BL99] S. Bosch and Q. Liu, _Rational points of the group of components of a Néron model_ , Manuscr. Math. 98 (1999), no. 3, 275–293.
* [BLR90] S. Bosch, W. Lütkebohmert, and M. Raynaud, Néron models, Ergebnisse der Mathematik und ihrer Grenzgebiete (3), vol. 21, Springer-Verlag, Berlin, 1990.
* [Del80] P. Deligne, _La conjecture de Weil. II_ , Inst. Hautes Études Sci. Publ. Math. 52 (1980), 137–252.
* [FS21] M. Flach and D. Siebel, _Special values of the zeta function of an arithmetic surface_ , J. Inst. Math. Jussieu (2021), 1–49. Published online doi:10.1017/S1474748021000104
* [GA03] C. D. Gonzalez-Aviles, _Brauer groups and Tate-Shafarevich groups_ , J. Math. Sci. Univ. Tokyo 10 (2003), no. 2, 391–419.
* [Gei04] T. H. Geisser, _Weil-étale cohomology over finite fields_ , Math. Ann. 330 (2004), no. 4, 665–692.
* [Gei18] by same author, _Duality of integral étale motivic cohomology_ in: $K$-Theory—Proceedings of the International Colloquium (Mumbai, 2016), pp. 195–209, Hindustan Book Agency, New Delhi, 2018.
* [Gei20] by same author, _Comparing the Brauer group to the Tate-Shafarevich group_ , J. Inst. Math. Jussieu 19 (2020), no. 3, 965–970.
* [GS20] T. H. Geisser and T. Suzuki, _A Weil-étale version of the Birch and Swinnerton-Dyer formula over function fields_ , J. Number Theory 208 (2020), 367–389.
* [GS21] by same author, _Special values of L-functions of one-motives over function fields_ , preprint arXiv:2009.14504v2 (2021).
* [Gor79] W. J. Gordon, _Linking the conjectures of Artin–Tate and Birch–Swinnerton-Dyer_ , Compos. Math. 38 (1979), no. 2, 163–199.
* [Gro68] A. Grothendieck, _Le groupe de Brauer. III. Exemples et compléments_ , in: Dix exposés sur la cohomologie des schémas, Adv. Stud. Pure Math. 3, pp. 88–188, North-Holland, Amsterdam, 1968.
* [Kah09] B. Kahn, _Démonstration géométrique du théorème de Lang-Néron et formules de Shioda-Tate_ , in: Motives and algebraic cycles, Fields Inst. Commun. vol. 56, pp. 149–155, Amer. Math. Soc., Providence, RI, 2009.
* [KT03] K. Kato and F. Trihan, _On the conjectures of Birch and Swinnerton-Dyer in characteristic $p>0$_, Invent. Math. 153 (2003), no. 3, 537–592.
* [Lic83] S. Lichtenbaum, _Zeta functions of varieties over finite fields at $s=1$_, in: Arithmetic and geometry, Vol. I, Progr. Math. 35, pp. 173–194, Birkhäuser Boston, Boston, MA, 1983.
* [Lic05] by same author, _The Weil-étale topology on schemes over finite fields_ , Compos. Math. 141 (2005), no. 3, 689–702.
* [Liu02] Q. Liu, Algebraic geometry and arithmetic curves, Oxford Graduate Texts in Mathematics, vol. 6, Oxford University Press, Oxford, 2002.
* [LLR04] Q. Liu, D. Lorenzini, and M. Raynaud, _Néron models, Lie algebras, and reduction of curves of genus one_ , Invent. Math. 157 (2004) no. 3, 455–518.
* [LLR05] by same author, _On the Brauer group of a surface_ , Invent. Math. 159 (2005), no. 3, 673–676.
* [LLR18] by same author, Corrigendum to _Néron models, Lie algebras, and reduction of curves of genus one_ and _The Brauer group of a surface_ , Invent. Math. 214 (2018), no. 1, 593–604.
* [Mil72] J. S. Milne, _On the arithmetic of abelian varieties_ , Invent. Math. 17 (1972), 177–190.
* [Mil75] by same author, _On a conjecture of Artin and Tate_ , Ann. of Math. (2) 102 (1975), no. 3, 517–533.
* [Mil81] by same author, _Comparison of the Brauer group with the Tate-Šafarevič group_ , J. Fac. Sci. Univ. Tokyo Sect. IA Math. 28 (1981), no. 3, 735–743.
* [Sch82] P. Schneider, _Zur Vermutung von Birch und Swinnerton-Dyer über globalen Funktionenkörpern_ , Math. Ann. 260 (1982), no. 4, 495–510.
* [Ser70] J.-P. Serre, _Facteurs locaux des fonctions zêta des variétés algébriques (définitions et conjectures)_ , Théorie des nombres, séminaire Delange–Pisot–Poitou 11 (1970), no. 19. Available from Numdam.
* [Tat66] J. Tate, _On the conjectures of Birch and Swinnerton-Dyer and a geometric analog_ , in: _Dix exposés sur la cohomologie des schémas_ , Adv. Stud. Pure Math. 3, pp. 189–214, North-Holland, Amsterdam, 1968.
* [Ulm14] D. Ulmer, _Curves and Jacobians over function fields_ , in: Arithmetic geometry over global function fields, Adv. Courses Math. CRM Barcelona, pp. 283–337, Birkhäuser/Springer, Basel, 2014.
* [Yun15] Z. Yun, _The equivalence of Artin–Tate and Birch–Swinnerton-Dyer conjectures_ , notes (2015). Available from Seminar on BSD.
© 20XX IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or
lists, or reuse of any copyrighted component of this work in other works.
# A two-step explainable approach for COVID-19 computer-aided diagnosis from
chest x-ray images
###### Abstract
Early screening of patients is critical to enable an immediate and fast
response against the spread of COVID-19. The use of nasopharyngeal swabs has
been considered the most viable approach; however, the result is not immediate
or, in the case of fast exams, sufficiently accurate. Using Chest X-Ray (CXR)
imaging for early screening potentially provides a faster and more accurate
response; however, diagnosing COVID from CXRs is hard, so we must rely on deep
learning support, whose decision process is, on the other hand, a “black box”
and, for that reason, hard to trust.
We propose an explainable two-step diagnostic approach, where we first detect
known pathologies (anomalies) in the lungs, on top of which we diagnose the
illness. Our approach achieves promising performance in COVID detection,
compatible with expert human radiologists. All of our experiments have been
carried out bearing in mind that, especially for clinical applications,
explainability plays a major role for building trust in machine learning
algorithms.
Index Terms— Explainable AI, Chest X-ray, Deep Learning, Classification,
COVID-19
## 1 Introduction
Early COVID diagnosis is a key element for proper treatment of the patients
and prevention of the spread of the disease. Given the high tropism of
COVID-19 for respiratory airways and lung epithelium, identification of lung
involvement in infected patients can be relevant for treatment and monitoring
of the disease. Virus testing is currently considered the only specific method
of diagnosis. Nasopharyngeal swabs are easy to perform, affordable, and the
current standard in the diagnostic setting; their accuracy reported in the
literature is influenced by the severity of the disease and the time from
symptom onset, and reaches up to 73.3% [1]. Current position papers from radiological
societies (Fleischner Society, SIRM, RSNA) [2, 3, 4] do not recommend routine
use of imaging for COVID-19 diagnosis; however, it has been widely
demonstrated that, even at early stages of the disease, chest x-rays (CXR) can
show pathological findings.
Fig. 1: Comparison between standard approaches to COVID diagnosis and our two-
step approach.
In the last year, many works attempted to tackle this problem, proposing deep
learning-based strategies [5, 6, 7, 8, 9]. All of the proposed approaches
include some elements in common: i) the images collected during the pandemic
need to be augmented with non-COVID cases from publicly available datasets;
ii) some standard pre-processing is applied to the images, like lung
segmentation using U-Net [10] or similar models [5] or converting the pixels
of the CXR scan in Hounsfield units; iii) the deep learning model is trained
to the final diagnosis using state-of-the-art approaches for deep neural
networks. Despite some very optimistic results, the proposed approaches
exhibit significant limitations that deserve further analysis. For example,
augmenting COVID datasets with negative cases from publicly-available datasets
can inject a dangerous bias, where the trained model learns to discriminate
different data sources rather than actual radiological features related to the
disease [5]. These unwanted effects are difficult to spot when using a “black
box” model like deep learning ones, without control over the decision
process.
In this work we propose an explainable approach, mimicking the radiologists’
decision process. Towards this end, we break the COVID diagnosis problem into
two sub-problems. First, we train a model to detect anomalies in the lungs.
These anomalies are widely known and, following [11], comprise 14 objective
radiological observations which can be found in lungs. Then, on top of these,
we train a decision tree model, where the COVID diagnosis is explicit (Fig.
1). Mimicking the radiologist’s decision is more robust to biases and aims at
building trust for the physicians and patients towards the AI tool, which can
be useful for fast COVID diagnosis. Thanks to the collaboration with the
radiology units of Città della Salute e della Scienza di Torino (CDSS) and San
Luigi Hospital (SLG) in Turin, we collected the COvid Radiographic images
DAta-set for AI (CORDA), which includes both positive and negative COVID cases
as well as ground truth from human radiological reporting and currently
comprises almost 1000 CXRs.
## 2 Datasets
In this section we introduce the datasets that will be used for our proposed
approach.
Fig. 2: _CheXpert_ ’s radiological findings.
For our purposes we first need to detect some objective radiological findings
(we train a model on the _CheXpert_ dataset) and then, on top of those, we
train a model to elaborate the COVID diagnosis (using the _CORDA_ dataset).
CheXpert: this is a large dataset comprising about 224k CXRs. This dataset
consists of 14 different observations on the radiographic image: differently
from many other datasets which are focused on disease classification based on
clinical diagnosis, the main focus here is “chest radiograph interpretation”,
where anomalies are detected [12]. The learnable radiological findings are
summarized in Fig. 2.
CORDA: this dataset was created for this study by retrospectively selecting
chest x-rays performed at a dedicated Radiology Unit in CDSS and at SLG in all
patients with fever or respiratory symptoms (cough, shortness of breath,
dyspnea) that underwent a nasopharyngeal swab to rule out COVID-19 infection.
Patients’ average age is 61 years (range 17–97 years). It contains a total
of 898 CXRs and can be split by different collecting institution into two
similarly sized subgroups: CORDA-CDSS [5], which contains a total of 447 CXRs
from 386 patients, with 150 images coming from COVID-negative patients and 297
from positive ones, and CORDA-SLG, which contains the remaining 451 CXRs, with
129 COVID-positive and 322 COVID-negative images. Including data from
different hospitals at test time is crucial to double-check the generalization
capability of our model. The data collection is still in progress, with 5 other
hospitals in Italy willing to contribute at the time of writing. We plan to make
CORDA available for research purposes according to EU regulations as soon as
possible.
## 3 Radiological report
In this section we are going to describe our proposed method to extract
radiological findings from CXRs. For this task, we leverage the large scale
dataset _CheXpert_ , which contains annotation for different kinds of common
radiological findings that can be observed in CXR images (like opacity,
pleural effusion, cardiomegaly, etc.). Given the high heterogeneity and the
high cardinality of _CheXpert_ , its use is perfect for our purposes: in fact,
once the model is trained on this dataset, there is no need to fine-tune it
for the COVID diagnosis, since it will already extract objective radiological
findings.
CheXpert provides 14 different types of observations for each image in the
dataset. For each class, the labels have been generated from radiology reports
associated with the studies with NLP techniques, conforming to the Fleischner
Society’s recommended glossary [11], and marked as: negative (N), positive
(P), uncertain (U) or blank (N/A). Following the relationship among labels
illustrated in Fig. 2, as proposed by [12], we can identify 8 top-level
pathologies and 6 child ones.
### 3.1 Dealing with uncertainty
Table 1: Performance (AUC) for DenseNet-121 trained on CheXpert.

Method | Atelectasis | Cardiomegaly | Consolidation | Edema | Pleural Effusion
---|---|---|---|---|---
Baseline [12] | 0.79 | 0.81 | 0.90 | 0.91 | 0.92
U-label use | 0.81 | 0.80 | 0.92 | 0.94 | 0.93
In order to extract the radiological findings from CXRs, a deep learning model
is trained on the 14 observations. Towards this end, given the possibility of
having multiple findings in the same CXR, the weighted binary cross entropy
loss is used to train the model. Typically, weights are used to compensate
class unbalancing, giving higher importance to less-represented classes.
Within _CheXpert_ , however, we also need to tackle another issue: how to
treat the samples with the U label. To address this issue, multiple approaches
have been suggested by [12]. The most popular is to ignore all the uncertain
samples, excluding them from the training process and considering them as N/A.
We propose to include the U samples in the learning process, mapping them to
maximum uncertainty (probability $0.5$ to be P or N). Then, we balance P and N
outcomes for every radiological finding. Table 1 shows a performance
comparison between the standard approach as proposed by [12] and our proposal
(U-label use), for 5 salient radiological findings, using the same setting as
in [12]. We observe an overall improvement in the performance, which is
expected by the inclusion of the U-labeled examples. For all our experiments,
we will use models trained using the U labeled samples.
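As an illustration of the U-label handling described above, the following plain-Python sketch maps P/N/U/blank labels to soft targets (U becomes 0.5, blanks are excluded via a zero weight) inside a weighted binary cross-entropy. This is only a minimal sketch of the idea; the actual training uses framework tensors and additional per-class balancing weights not shown here.

```python
import math

def target_and_weight(label):
    """Map a CheXpert-style label to a (soft target, loss weight) pair:
    "P" -> (1.0, 1), "N" -> (0.0, 1), "U" -> (0.5, 1), None/blank -> weight 0."""
    if label is None:          # N/A: excluded from the loss
        return 0.0, 0.0
    if label == "U":           # uncertain: maximum-uncertainty soft target
        return 0.5, 1.0
    return (1.0 if label == "P" else 0.0), 1.0

def weighted_bce(probs, labels):
    """Weighted binary cross-entropy over one finding, with soft targets."""
    total, weight_sum = 0.0, 0.0
    for p, lab in zip(probs, labels):
        t, w = target_and_weight(lab)
        if w == 0.0:
            continue
        p = min(max(p, 1e-7), 1.0 - 1e-7)  # clamp for numerical stability
        total += -w * (t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
        weight_sum += w
    return total / weight_sum
```

Note that a prediction of exactly 0.5 on a U-labeled sample yields the irreducible loss log 2 for that term, which is the intended behavior: the model is not pushed toward either extreme on uncertain cases.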
## 4 COVID diagnosis
The second step of the proposed approach is building the model which can
actually provide a clinical diagnosis for COVID. We freeze the model obtained
from Sec. 3 and use its output as image features to train a new binary
classifier on the CORDA dataset. We test two different types of classifiers: a
decision tree (Tree) and a neural network-based classifier (FC).
The decision tree is trained on the probabilities output of the radiological
reports, using the state-of-the-art CART Algorithm implementation provided by
the Python scikit-learn [13] package. Besides the fully explainable decision
tree-based result, we also train a neural network classifier, comprising one
hidden layer of size 512 and the output layer. Despite working with the same
features as the decision tree, such an approach loses in explainability, but
potentially enhances the performance in terms of COVID diagnosis, as we will
see in Sec. 5.
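In the paper the tree itself is learned with scikit-learn's CART implementation; as a hedged illustration of what inference over the per-finding probabilities then looks like, here is a hand-written stand-in. The two features echo the learned tree of Fig. 3 (edema, enlarged cardiomediastinum), but the split thresholds below are purely hypothetical, not the learned ones.

```python
def covid_tree(findings):
    """Illustrative inference for a small decision tree over the per-finding
    probabilities produced by the first-stage model. Feature choice follows
    the spirit of Fig. 3; the thresholds are made up for illustration."""
    if findings["edema"] > 0.45:
        return "COVID-positive"
    if findings["enlarged_cardiomediastinum"] > 0.30:
        return "COVID-positive"
    return "COVID-negative"
```

The appeal of this classifier is that every prediction comes with a human-readable path of threshold comparisons, which is exactly the explainability property the two-step design is after.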
## 5 Results
Table 2: Results for COVID diagnosis.

Method | Backbone | Classifier | Pretrain dataset | Dataset | Sensitivity | Specificity | BA | AUC
---|---|---|---|---|---|---|---|---
Baseline [5] | ResNet-18 | FC | none | CORDA-CDSS | 0.56 | 0.58 | 0.57 | 0.59
ResNet-18 | FC | RSNA | CORDA-CDSS | 0.54 | 0.80 | 0.67 | 0.72
ResNet-18 | FC | ChestXRay | CORDA-CDSS | 0.54 | 0.58 | 0.56 | 0.67
Two-step | ResNet-18 | FC | CheXpert | CORDA-CDSS | 0.69 | 0.73 | 0.71 | 0.76
DenseNet-121 | FC | CheXpert | CORDA-CDSS | 0.72 | 0.78 | 0.75 | 0.81
DenseNet-121 | Tree | CheXpert | CORDA-CDSS | 0.77 | 0.60 | 0.68 | 0.70
Two-step | DenseNet-121 | FC | CheXpert | CORDA-SLG | 0.79 | 0.82 | 0.81 | 0.84
In this section we compare the COVID diagnosis generalization capability
through a direct deep learning-based approach (baseline) and our proposed two-
step diagnosis, where first we detect the radiological findings, and then we
discriminate patients affected by COVID using a decision tree-based diagnosis
(Tree) or a deep learning-based classifier from the radiological findings
(FC). The performance is tested on a subset of _patients_ not included in the
training / validation set. The assessed metrics are: balanced accuracy (BA),
sensitivity, specificity and area under the ROC curve (AUC). For all of the
methods we adopt a 70%-30% train-test split. For the deep learning-based
strategy, SGD is used with a learning rate $0.01$ and a weight decay of
$10^{-5}$.
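The assessed metrics are simple functions of the confusion counts; a minimal sketch follows. The example counts are invented so that the outputs match the 0.72 / 0.78 / 0.75 row of Table 2, and are not the actual test-set sizes.

```python
def diagnosis_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and balanced accuracy from confusion counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    balanced_accuracy = (sensitivity + specificity) / 2.0
    return sensitivity, specificity, balanced_accuracy

# e.g. 72 of 100 positives and 78 of 100 negatives classified correctly
sens, spec, ba = diagnosis_metrics(tp=72, fn=28, tn=78, fp=22)
```

Balanced accuracy is the appropriate headline number here because the CORDA subsets are class-imbalanced, so plain accuracy would overweight the majority class.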
Fig. 3: Decision Tree obtained for COVID-19 classification based on the
probabilities for the 14 classes of findings.
All of the experiments were run on NVIDIA Tesla T4 GPUs using PyTorch 1.4.
Table 2 compares the standard deep learning-based approach [5] to our two-step
diagnosis. Baseline results are obtained by pre-training the model on some
of the most used publicly-available datasets. We observe that the best
achievable performance is very low, with a BA of just 0.67. A key takeaway
is that trying to directly diagnose diseases such as COVID-19 from CXRs might
be currently infeasible, probably given the small dataset sizes and strong
selective bias in the datasets.
We can clearly see how the two-step method outperforms the direct diagnosis:
using the same network architecture (ResNet-18 as backbone and a fully-
connected classifier on top of it), we obtain a significant increase in all of
the assessed metrics. Even better results are achieved by using a DenseNet-121
as backbone and the fully-connected classifier.
Fig. 3 graphically shows the learned decision tree (whose performance is shown
in Table 2): this provides a very clear interpretation for the decision
process. From the clinical and radiological perspective, these data are
consistent with the COVID-19 CXR semiotics that radiologists are used to dealing
with. The edema feature, although unspecific, is strictly related to the
interstitial involvement that is typical of COVID-19 infections and it has
been largely reported in the recent literature [14]. Indeed, in recent
COVID-19 radiological papers, interstitial involvement has been reported as
ground glass opacity appearance [15]. However, this definition is more
pertinent to the CT imaging setting than to CXR; the “edema” feature can
be compatible, from the radiological perspective, with the interstitial opacity
of COVID-19 patients. Furthermore, the non-negligible role of cardiomegaly (or
more in general enlarged cardiomediastinum) in the decision tree can be
interesting from the clinical perspective. In fact, this can be read as an
additional proof that established cardiovascular disease can be a relevant
risk factor to develop COVID-19 [16]. Moreover, it may be consistent with the
hypotheses of a larger role of primary cardiovascular damage, observed in
preliminary data from autopsies of COVID-19 patients [17].
Fig. 4: Grad-CAM on COVID-positive samples.
Focusing on the deep learning-based approach (FC) we observe a boost in the
performance, achieving a BA of 0.75. However, this is the result of a trade-
off between interpretability and discriminative power. Using Grad-CAM [18] we
have hints on the area the model focused on to take the final diagnostic
decision. From Fig. 4 we observe that on COVID-positive images, the model
seems to mostly focus on the expected lung areas.
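For reference, the core Grad-CAM computation of [18] reduces to a few lines: each channel weight is the spatial average of that channel's gradients, and the heatmap is the ReLU of the weighted sum of activation maps. The sketch below operates on plain nested lists rather than framework tensors; in practice one captures the activations and gradients of the last convolutional layer with PyTorch hooks.

```python
def grad_cam(activations, gradients):
    """Grad-CAM heatmap from per-channel activation maps and their gradients
    (each given as an h x w nested list): heat = ReLU(sum_k alpha_k * A_k),
    where alpha_k is the global average pool of the k-th gradient map."""
    h, w = len(activations[0]), len(activations[0][0])
    heat = [[0.0] * w for _ in range(h)]
    for act, grad in zip(activations, gradients):
        alpha = sum(sum(row) for row in grad) / (h * w)  # global average pooling
        for i in range(h):
            for j in range(w):
                heat[i][j] += alpha * act[i][j]
    return [[max(0.0, v) for v in row] for row in heat]  # ReLU
```

Channels whose average gradient is negative (i.e. evidence against the predicted class) are suppressed by the final ReLU, which is why the resulting maps highlight only class-supporting regions such as the lung areas in Fig. 4.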
Finally, to further test the reliability of our approach, we used our strategy
also on CORDA-SLG (which are data coming from a different hospital structure),
reaching comparable and encouraging results.
## 6 Conclusions
One of the latest challenges for both the clinical and the AI community has
been applying deep learning in diagnosing COVID from CXRs. Recent works
suggested the possibility of successfully tackling this problem, despite the
currently small quantity of publicly available data. In this work we propose a
multi-step approach, close to the physicians’ diagnostic process, in which the
final diagnosis is based upon detected lung pathologies. We performed our
experiments on CORDA, a COVID-19 CXR dataset comprising approximately 1000
images. All of our experiments have been carried out bearing in mind that,
especially for clinical applications, explainability plays a major role for
building trust in machine learning algorithms, although better
interpretability can come at the cost of a lower prediction accuracy.
## References
* [1] Yang Yang, Minghui Yang, Chenguang Shen, Fuxiang Wang, Jing Yuan, Jinxiu Li, Mingxia Zhang, Zhaoqin Wang, Li Xing, Jinli Wei, et al., “Laboratory diagnosis and monitoring the viral shedding of 2019-ncov infections,” medRxiv, 2020.
* [2] “ACR recommendations for the use of chest radiography and computed tomography (CT) for suspected COVID-19 infection,” https://www.acr.org/.
* [3] Italian Radiology Society, “Utilizzo della Diagnostica per Immagini nei pazienti Covid 19,” https://www.sirm.org/.
* [4] Geoffrey D. Rubin, Christopher J. Ryerson, Linda B. Haramati, Nicola Sverzellati, et al., “The role of chest imaging in patient management during the covid-19 pandemic: A multinational consensus statement from the fleischner society,” RSNA Radiology, 2020.
* [5] Enzo Tartaglione, Carlo Alberto Barbano, Claudio Berzovini, Marco Calandri, and Marco Grangetto, “Unveiling covid-19 from chest x-ray with deep learning: A hurdles race with small data,” International Journal of Environmental Research and Public Health, vol. 17, no. 18, pp. 6933, Sep 2020.
* [6] Prabira Kumar Sethy and Santi Kumari Behera, “Detection of coronavirus disease (covid-19) based on deep features,” 2020\.
* [7] Ioannis D Apostolopoulos and Tzani Bessiana, “Covid-19: Automatic detection from x-ray images utilizing transfer learning with convolutional neural networks,” arXiv preprint arXiv:2003.11617, 2020.
* [8] Ali Narin, Ceren Kaya, and Ziynet Pamuk, “Automatic detection of coronavirus disease (covid-19) using x-ray images and deep convolutional neural networks,” arXiv preprint arXiv:2003.10849, 2020.
* [9] Linda Wang, Zhong Qiu Lin, and Alexander Wong, “Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images,” Scientific Reports, vol. 10, no. 1, pp. 1–12, 2020.
* [10] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234–241.
* [11] David M Hansell, Alexander A Bankier, Heber MacMahon, Theresa C McLoud, Nestor L Muller, and Jacques Remy, “Fleischner society: glossary of terms for thoracic imaging,” Radiology, vol. 246, no. 3, pp. 697–722, 2008.
* [12] Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al., “Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2019, vol. 33, pp. 590–597.
* [13] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011\.
* [14] Wei-jie Guan, Zheng-yi Ni, Yu Hu, Wen-hua Liang, Chun-quan Ou, Jian-xing He, Lei Liu, Hong Shan, Chun-liang Lei, David SC Hui, et al., “Clinical characteristics of coronavirus disease 2019 in china,” New England journal of medicine, vol. 382, no. 18, pp. 1708–1720, 2020.
* [15] Ho Yuen Frank Wong, Hiu Yin Sonia Lam, Ambrose Ho-Tung Fong, Siu Ting Leung, Thomas Wing-Yan Chin, Christine Shing Yen Lo, Macy Mei-Sze Lui, Jonan Chun Yin Lee, Keith Wan-Hang Chiu, Tom Chung, et al., “Frequency and distribution of chest radiographic findings in covid-19 positive patients,” Radiology, p. 201160, 2020.
* [16] ESC Guidance for the Diagnosis and Management of CV Disease during the COVID-19 Pandemic., 2020.
* [17] Dominic Wichmann, Jan-Peter Sperhake, Marc Lütgehetmann, Stefan Steurer, Carolin Edler, Axel Heinemann, Fabian Heinrich, Herbert Mushumba, Inga Kniep, Ann Sophie Schröder, et al., “Autopsy findings and venous thromboembolism in patients with covid-19: a prospective cohort study,” Annals of Internal Medicine, 2020.
* [18] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra, “Grad-cam: Visual explanations from deep networks via gradient-based localization,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 618–626.
# Lightweight Convolutional Neural Network with Gaussian-based Grasping
Representation for Robotic Grasping Detection
Hu Cao 1, Guang Chen 1,2∗, Zhijun Li3, Jianjie Lin 1,4, Alois Knoll 1 ∗Guang
Chen is the corresponding author of this workAuthors Affiliation: 1Chair of
Robotics, Artificial Intelligence and Real-time Systems, Technische
Universität München, München, Germany, 2Tongji University, Shanghai, China,
3University of Science and Technology of China, China, 4Fortiss Research
Institute, München, Germany
###### Abstract
The method of deep learning has achieved excellent results in improving the
performance of robotic grasping detection. However, the deep learning methods
used in general object detection are not suitable for robotic grasping
detection. Current modern object detectors are difficult to strike a balance
between high accuracy and fast inference speed. In this paper, we present an
efficient and robust fully convolutional neural network model to perform
robotic grasping pose estimation from n-channel input image of the real
grasping scene. The proposed network is a lightweight generative architecture
for grasping detection in one stage. Specifically, a grasping representation
based on a Gaussian kernel is introduced to encode training samples, which
embodies the principle of maximum grasping confidence at the central point.
Meanwhile, to extract multi-scale information and enhance the feature
discriminability, a receptive field block (RFB) is assembled to the bottleneck
of our grasping detection architecture. Besides, pixel attention and channel
attention are combined to automatically learn to focus on fusing context
information of varying shapes and sizes by suppressing the noise feature and
highlighting the grasping object feature. Extensive experiments on two public
grasping datasets, Cornell and Jacquard demonstrate the state-of-the-art
performance of our method in balancing accuracy and inference speed. The
network is an order of magnitude smaller than other excellent algorithms,
while achieving better performance with accuracy of 98.9$\%$ and 95.6$\%$ on
the Cornell and Jacquard datasets, respectively.
###### Index Terms:
Efficient Grasping Detection, Gaussian-based Grasping Representation,
Receptive Field Module, Multi-Dimension Attention Fusion, Fully Convolutional
Neural Network
## I Introduction
Intelligent robots are widely used in industrial manufacturing fields, such as
human-robot cooperation, robot assembly, and robot welding. The robots need an
effective automated manipulation system to complete the task of picking and
placing. Although grasping is a very simple action for humans, it is still a
challenging task for robots, which involves subsystems such as perception,
planning and execution. Grasping detection is a basic skill for robots to
perform grasping and manipulation tasks in the unstructured environments of the
real world. In order to improve the performance of robotic grasping, it is
necessary to develop a robust algorithm to predict the location and
orientation of the grasping objects.
Early grasping detection works are mainly based on traditional methods, such
as search algorithms. However, these algorithms cannot work effectively in
complex real-world scenarios [1]. In recent years, deep learning-based methods
have achieved excellent results in robotic grasping detection. Since a grasp
defined in two-dimensional image space can be projected into three-dimensional
space to guide the robot, a five-dimensional grasp configuration has been
proposed to represent the grasp rectangle [2]. Thanks to this reduction of the
problem's dimensionality, deep convolutional neural networks taking 2-D images
as input can learn to extract features more suitable for the task than
hand-engineered ones. Many works, such as [3, 4, 5, 6],
train the neural network to predict the grasping rectangle of objects, and
select the one with the highest grasp probability score from multiple grasp
candidate rectangles as the best grasp result. Some one or two-stage deep
learning methods [7, 8, 9] that have achieved great success in object
detection have been modified to perform grasping detection task. For example,
[10] refers to some key ideas of Faster RCNN [9] in the field of object
detection to carry out robotic grasping from the input RGB-D images. In
addition, other works, such as [5, 11], implemented high-precision grasp
detection on Cornell grasping dataset based on the one stage object detection
method [7, 8]. Although these object detection-based methods achieve better
accuracy in robotic grasping detection, their design based on horizontal
rectangular box is not suitable for angular grasp detection task, and most of
them have complex network structure, so it is difficult to achieve a good
balance in detection accuracy and speed. In [12, 13], the authors improve the
performance of grasping detection by demploying an oriented anchor box
mechanism to match the grasp rectangles. However, although these methods have
achieved some improvement in accuracy or speed, the size of network parameters
of their algorithms is still too large to be suitable for real-time
applications. To solve these problems mentioned above, a new grasping
representation is proposed by [14]. Different from previous works, which sample
grasp candidate rectangles, [14] applies a generative convolutional neural
network to directly regress grasp points, which simplifies the definition of
the grasping representation and achieves high real-time performance with a
lightweight architecture. Inspired by [14], the authors of [15, 16] borrow
ideas from vision segmentation algorithms to predict the robotic grasping pose
from extracted pixel-wise features. Recently, a residual structure was
introduced into the generative neural network model [17], achieving
state-of-the-art grasping detection accuracy on the Cornell and Jacquard
grasping datasets. However, all of these methods share a shortcoming: although
they take the location with the largest grasping score as the center point
coordinate, their training targets fail to emphasize that the grasping
probability should be largest at the center point.
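To make the five-dimensional grasp configuration of [2] concrete, here is a small sketch of the data structure it implies: a centre position, an orientation, and the gripper opening width and plate height. The field names and the helper method are our own illustrative choices; the representation itself only fixes the five quantities.

```python
from dataclasses import dataclass
import math

@dataclass
class GraspRectangle:
    """Five-dimensional grasp configuration (centre, orientation, size)."""
    x: float      # grasp centre, image x (pixels)
    y: float      # grasp centre, image y (pixels)
    theta: float  # orientation w.r.t. the horizontal axis (radians)
    w: float      # gripper opening width (pixels)
    h: float      # gripper plate height (pixels)

    def opening_axis(self):
        """Unit vector along the gripper-opening direction."""
        return (math.cos(self.theta), math.sin(self.theta))

g = GraspRectangle(x=120.0, y=80.0, theta=0.0, w=40.0, h=15.0)
```

Such a 2-D rectangle, together with depth information, can then be projected into 3-D space to command the gripper pose.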
In this work, we utilize a 2-D Gaussian kernel to encode training samples, emphasizing that the center point position carries the highest grasping confidence score. On the basis of this Gaussian-based grasping representation, we develop a
lightweight generative architecture for robotic grasping pose estimation.
Referring to the receptive field structure in the human visual system, we combine
the residual block and a receptive field block module in the bottleneck layer
to enhance the feature discriminability and robustness. In addition, in order
to reduce the information loss in the sampling process, we fuse low-level
features with depth features in the decoder process, and use a multi-
dimensional attention network composed of pixel attention network and channel
attention network to suppress redundant features and highlight meaningful
features in the fusion process. Extensive experiments demonstrate that our
algorithm achieves state-of-the-art performance in accuracy and inference
speed on the public grasping datasets Cornell and Jacquard with a small
network parameter size. Concretely, the main contributions of this paper are
as follows:
* •
We propose a Gaussian-based grasping representation, which reflects the maximum grasping score at the center point location and can significantly improve the
grasping detection accuracy.
* •
We develop a lightweight generative architecture which achieves high
detection accuracy and real-time running speed with small network parameters.
* •
A receptive field block module is embedded in the bottleneck of the network to
enhance its feature discriminability and robustness, and a multi-dimensional
attention fusion network is developed to suppress redundant features and
enhance target features in the fusion process.
* •
Evaluations on the public Cornell and Jacquard grasping datasets demonstrate that the proposed generative grasping detection algorithm achieves state-of-the-art performance in both speed and detection accuracy.
The rest of this paper is organized as follows: previous works related to grasp detection are reviewed in Section II. The robotic grasping system is introduced in Section III. A detailed description of the proposed grasping detection method is given in Section IV. Dataset analysis is presented in Section V. Experiments on the public grasping datasets, Cornell and Jacquard, are discussed in Section VI. Finally, we conclude our work in Section VII.
## II Related Work
For 2D planar robotic grasping where the grasp is constrained in one
direction, the methods can be divided into oriented rectangle-based grasp
representation methods and contact point-based grasp representation methods.
A comparison of the two grasp representations is presented in Fig. 1. We
will review the relevant works below.
Figure 1: A comparison between the methods of oriented rectangle-based grasp
representation and the methods of contact point-based grasp representation.
The top branch is the workflow of the model using the oriented rectangle as
grasp representation, and the bottom branch is the workflow of the model using
the contact point grasp representation.
### II-A Methods of oriented rectangle-based grasp representation
The goal of grasping detection is to find the appropriate grasp pose for the
robot through the visual information of the grasping object, so as to provide
reliable perception information for subsequent planning and control process,
and achieve a successful grasp. Grasping is a widely studied topic in the field of robotics, and the approaches used can be summarized as analytic methods and
empirical methods. The analytical methods use mathematical and physical models
in geometry, motion and dynamics to carry out the calculation for grasping
[18]. Their theoretical foundation is solid, but the deficiency lies in the fact that modeling the interaction between the robot manipulator and the grasping object in the real 3-dimensional world is very complex, and it is difficult to realize such a model with high precision. In contrast, empirical methods do not strictly rely on
real-world modeling methods, and some works utilize data information from
known objects to build models to predict the grasping pose of new objects [19,
20, 21]. A new grasp representation is proposed in [22], where a simplified
five-dimensional oriented rectangle grasp representation is used to replace
the seven-dimensional grasp pose consisting of 3D location, 3D orientation and
the opening and closing distance of the parallel-plate gripper. Based on the oriented rectangle grasp configuration, the deep learning approaches can be
successfully applied to the grasping detection task, which mainly include
classification-based methods, regression-based methods and detection-based
methods [23].
Classification-based Methods: An early deep learning-based robotic grasping detection method is presented in [2], where the authors achieve excellent results by
using a two-step cascaded structure with two deep networks. In [24], grasping
proposals are estimated by sampling grasping locations and adjacent image
patches. The grasp orientation is predicted by dividing the angle range into 18 discrete angles. Since grasping datasets are scarce, a large simulation database
called Dex-Net 2.0 is built in [25]. On the basis of Dex-Net 2.0, a Grasp-
Quality Convolutional Neural Network (GQ-CNN) is developed to classify the
potential grasps. Although the network is trained on synthetic data, the
proposed method still works well in the real world. Moreover, a
classification-based robotic grasping detection method with spatial
transformer network (STN) is proposed in [26]. The results of evaluating on the Cornell grasping dataset indicate that their multi-stage STN algorithm performs
well. The grasping detection method based on classification is a more direct
and reasonable method, many aspects of which are worth further study.
Regression-based Methods: Regression-based methods directly predict the grasp parameters of location and orientation by training a model. An early regression-based single-shot grasping detection approach is proposed in [3], in which the authors use AlexNet to extract features and achieve real-time performance by removing the process of searching potential grasps. Combining RGB
and depth data, a multi-modal fusion method is introduced in [27]. By fusing RGB and depth features, the proposed method directly regresses the grasp parameters and improves the grasping detection accuracy on the Cornell grasping dataset. Similar to [27], the authors of [28] use ResNet as the backbone to integrate RGB and depth information and further improve the performance of grasping detection. In addition, a grasping detection method based on ROI
(Region of Interest) is proposed in [21]. In this work, the authors regress
grasp pose on ROI features and achieve better performance in object
overlapping challenge scene. The regression-based method is effective, but its
disadvantage is that it is more inclined to learn the mean value of the
ground truth grasps.
Detection-based Methods: Many detection-based methods refer to some key ideas
from object detection, such as anchor box. Based on the prior knowledge of
these anchor boxes, the regression problem of grasping parameters is
simplified. In [29], vision and tactile sensing are fused to build a hybrid
architecture for robotic grasping. The authors use axis-aligned anchor boxes, and the grasp orientation is predicted by treating grasp angle estimation as a classification problem. The grasp angle estimation method used in [29] is extended in [10]. By transforming the angle estimation into a classification problem, the method of [10] achieves high grasping detection accuracy on the Cornell dataset based on Faster RCNN [9]. Different from the
horizontal anchor box used in object detection, the authors of [12] specially
design an oriented anchor box mechanism for grasping task and improve the
performance of the model by combining it with an end-to-end fully convolutional neural network. Moreover, [30] further extends the method of [12] and proposes a deep neural
network architecture that performs better on the Jacquard dataset.
### II-B Methods of contact point-based grasp representation
The grasping representation based on oriented rectangle is widely used in
robotic grasping detection task. However, in a real grasping task with a parallel-plate gripper, the gripper does not need this much information to perform the grasping action. A
new simplified contact point-based grasping representation is introduced in
[14], which consists of grasp quality, center point, oriented angle and grasp
width. Based on this grasping representation, GGCNN and GGCNN2 are developed
to predict the grasping pose, and their methods achieve excellent performance
in both detection accuracy and inference speed. Following [14], the grasping detection performance is further improved in [15] by a fully convolutional neural network that predicts in a pixel-wise manner. While both [14] and [15] take depth data as input, a generative residual convolutional neural network that takes n-channel images as input is proposed in [17] to generate grasps. Recently, the authors
of [16] take some ideas from image segmentation to perform three-finger
robotic grasping detection. Similar to [16], an orientation-attentive grasp synthesis (ORANGE) framework is developed in [31], which achieves better results on the Jacquard dataset based on the GGCNN and U-Net models. In this paper, we propose a Gaussian-based grasping representation to highlight the
importance of center point. We further develop a lightweight generative
architecture for robotic grasping detection, which performs well in inference
speed and accuracy on two public datasets, Cornell and Jacquard.
## III Robotic Grasping System
In this section, we give an overview of the robotic grasping system settings
and illustrate the principles of Gaussian-based grasping representation.
### III-A System Setting
A robotic grasping system usually consists of a robot arm, perception sensors,
grasping objects and a workspace. To complete the grasping task successfully, not only must the grasp pose of objects be obtained, but the planning and control subsystem is also involved. In the grasping detection part, we limit the manipulator to the normal direction of the workspace so that perception becomes a 2D problem. Through this setting, most grasping objects can be considered flat objects by placing them reasonably on the workbench. By avoiding the construction of 3D point cloud data, the whole grasping system reduces storage and computation costs and improves its operating efficiency. The grasp pose of flat objects can be treated
as a rectangle. Since the size of each parallel-plate gripper is fixed, we use a simplified grasping representation mentioned in section II-B to perform grasp
pose estimation.
### III-B Gaussian-based grasp representation
For given RGB images or depth information of different objects, the grasping
detection system should learn how to obtain the optimal grasp configuration
for subsequent tasks. Many works, such as [29, 10, 12], are based on a five-dimensional grasping representation to generate the grasp pose:
$g=\left\\{x,y,\theta,w,h\right\\}$ (1)
where $(x,y)$ are the coordinates of the center point, $\theta$ represents the orientation of the grasping rectangle, and the width and height of the grasping rectangle are denoted by $(w,h)$. Rectangular boxes are frequently used in object detection, but this form is not ideally suited to the grasping detection task. As the size of the gripper is usually a known variable, a simplified representation is introduced in [14] for high-precision, real-time robotic grasping. The new grasping representation for the 3-D pose is defined as:
$g=\left\\{\textbf{p},\varphi,w,q\right\\}$ (2)
where the center point location in Cartesian coordinates is $\textbf{p}=(x,y,z)$. $\varphi$ and $w$ are the rotation angle of the gripper around the $z$ axis and the opening and closing distance of the gripper, respectively. Since the five-dimensional grasping representation lacks a scale factor to evaluate the grasping quality, $q$ is added to the new representation as a measure of the probability of grasp success. In addition, the definition of the new grasping representation in 2-D space can be described as:
$\hat{g}=\left\\{\hat{p},\hat{\varphi},\hat{w},\hat{q}\right\\}$ (3)
where $\hat{p}=(u,v)$ represents the center point in the image coordinates, $\hat{\varphi}$ denotes the orientation in the camera frame, and $\hat{w}$ and $\hat{q}$ still represent the opening and closing distance of the gripper and the grasp quality, respectively. Given the calibration result of the grasping system, the grasp pose $\hat{g}$ can be converted to the world-coordinate pose $g$ by matrix operations:
$g=T_{RC}(T_{CI}(\hat{g}))$ (4)
where $T_{RC}$ and $T_{CI}$ represent the transforms from the camera frame to the world frame and from 2-D image space to the camera frame, respectively.
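The chain of transforms in Eq. 4 can be sketched in a few lines of NumPy. All matrix values below are illustrative placeholders (a simple pinhole back-projection at a known depth and an identity rotation), not calibration results from the paper:

```python
import numpy as np

# Assumed camera intrinsics for a pinhole model (focal lengths, principal point).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

def T_CI(uv, z):
    """Lift an image pixel (u, v) at depth z into the camera frame."""
    u, v = uv
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.array([x, y, z])

# Assumed extrinsics: rotation R and translation t of the camera w.r.t. the world.
R = np.eye(3)
t = np.array([0.1, 0.0, 0.5])

def T_RC(p_cam):
    """Map a camera-frame point into the world frame."""
    return R @ p_cam + t

# Eq. 4: g = T_RC(T_CI(g_hat)); the principal point maps onto the optical axis.
p_world = T_RC(T_CI((320, 240), z=0.4))
```

With real calibration data, `K`, `R` and `t` would come from the camera calibration procedure rather than being hand-written constants.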
Moreover, the grasp map in the image space is denoted as:
$\textbf{G}=\left\\{\Phi,W,Q\right\\}\in{\mathbb{R}^{3\times W\times H}}$ (5)
where each pixel in the grasp maps $\Phi,W,Q$ is filled with the corresponding $\hat{\varphi},\hat{w},\hat{q}$ values. In this way, the center point coordinates in the subsequent inference process can be found by searching for the pixel with the maximum grasp quality, $\hat{g}^{*}=\max_{Q}\textbf{G}$. In [14], the authors filled a rectangular area around the center point with the value 1, indicating the highest grasping quality, while all other pixels were set to 0. The model is trained with this encoding to learn the maximum grasp quality at the center point. However, because every pixel in the rectangular area then carries the best grasping quality, the importance of the center point is not highlighted, which introduces ambiguity into the model. In this work, we use a 2-D Gaussian kernel to regularize the grasping representation and indicate where the object center might exist, as shown in Fig. 2. The novel Gaussian-based grasping representation is denoted $g_{K}$, and the corresponding Gaussian-based grasp map is defined as:
$\displaystyle G_{K}=\left\\{\Phi,W,Q_{K}\right\\}\in{\mathbb{R}^{3\times W\times H}}$ (6)
where
$\displaystyle Q_{K}=K(x,y)=\exp\left(-\frac{(x-x_{0})^{2}}{2\sigma_{x}^{2}}-\frac{(y-y_{0})^{2}}{2\sigma_{y}^{2}}\right),\qquad\sigma_{x}=T_{x},\ \sigma_{y}=T_{y}$
In Eq. 6, the generated grasp quality map is determined by the center point location $(x_{0},y_{0})$, the parameters $\sigma_{x}$ and $\sigma_{y}$, and the corresponding scale factors $T_{x}$ and $T_{y}$. With this method, the peak of the
Gaussian distribution is the center coordinate of the grasp rectangle. In this
work, we will discuss the impact of parameter settings in more detail in
section VI-F.
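The Gaussian quality-map encoding of Eq. 6 can be written directly in NumPy. The map size, center point and sigma values below are illustrative, not the settings tuned in section VI-F:

```python
import numpy as np

def gaussian_quality_map(h, w, center, sigma_x, sigma_y):
    """Grasp-quality map Q_K from Eq. 6: a 2-D Gaussian that peaks with
    value 1 at the grasp rectangle's center point, replacing the uniformly
    filled rectangle used in earlier work."""
    x0, y0 = center
    ys, xs = np.mgrid[0:h, 0:w]  # pixel coordinate grids (row = y, col = x)
    return np.exp(-((xs - x0) ** 2) / (2 * sigma_x ** 2)
                  - ((ys - y0) ** 2) / (2 * sigma_y ** 2))

# Toy 8x8 map with the grasp center at (x0, y0) = (3, 4).
q = gaussian_quality_map(8, 8, center=(3, 4), sigma_x=2.0, sigma_y=2.0)
```

During inference, the center point is then recovered as the argmax over this map, which is unambiguous because only the true center carries quality 1.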
Figure 2: Gaussian-based grasp representation: The 2-D Gaussian kernel is
applied to the grasp quality map to highlight the max grasp quality of its
central point position. (a) the schematic diagram of grasp quality weight
distribution after 2-D Gaussian function deployment, and (b) the schematic
diagram of grasp representation. Figure 3: The structure of our lightweight
generative grasping detection algorithm. I and Conv denote the input data and convolution filter, respectively. The proposed method consists of the
downsampling block, the bottleneck layer, the multi-dimensional attention
fusion network and the upsampling block.
## IV Method
In this section, we introduce a lightweight generative architecture for
robotic grasping detection. Fig. 3 presents the structure of our grasping
detection model. The input data is transformed by downsampling block into
feature maps with smaller size, more channels and richer semantic information.
In the bottleneck, residual blocks and a multi-scale receptive field block module are combined to extract more discriminative and robust features. Meanwhile, a multi-dimensional attention fusion network consisting of a pixel attention sub-network and a channel attention sub-network is used to fuse
shallow and deep semantic features before upsampling, while suppressing
redundant features and enhancing the meaningful features during the fusion
process. Finally, based on the extracted features, four task-specific sub-
networks are added to predict grasp quality, angle (the form of $sin(2\theta)$
and $cos(2\theta)$), and width (the opening and closing distance of the
gripper) respectively. We will illustrate the details of each component of the
proposed grasping network.
### IV-A Basic Network Architecture
The proposed generative grasping architecture is composed of the downsampling
block, the bottleneck layer, the multi-dimensional attention fusion network
and the upsampling block, as shown in Fig. 3. A downsampling block consists of convolution layers with a kernel size of 3x3 and a maximum pooling layer with a kernel size of 2x2, which can be represented as Eq. 7.
$x_{d}=f_{maxpool}(f_{conv}^{n}(f_{conv}^{n-1}(...f_{conv}^{0}(I)...)))$ (7)
In this work, we use 2 down-sampling blocks and 2 convolutional layers in the
down-sampling process. Specifically, the first down-sampling block is composed
of 4 convolutional layers (n = 3) and 1 maximum pooling layer, and the second
down-sampling layer is composed of 2 convolutional layers (n = 1) and 1
maximum pooling layer. After the down-sampled data pass through 2
convolutional layers, they are fed into a bottleneck layer consisting of 3
residual blocks (k = 2) and 1 receptive fields block module (RFBM) to further
extract features. Since the RFBM is composed of multi-scale convolutional filters, it captures richer image details. More details about the RFBM will be
discussed in section IV-B. The output of the bottleneck can be formulated as
Eq. 8.
$x_{b}=f_{RFBM}(f_{res}^{k}(f_{res}^{k-1}(...f_{res}^{0}(f_{conv}^{1}(f_{conv}^{0}(x_{d})))...)))$
(8)
The output $x_{b}$ of the bottleneck is fed into multi-dimensional attention
fusion network (MDAFN) and up-sampling block. The multi-dimensional attention
fusion network, composed of pixel attention and channel attention subnetworks, can suppress noisy features and enhance effective features during the fusion of shallow and deep features. The MDAFN will be illustrated in more detail in section IV-C. In the upsampling block, the pixel shuffle layer [32] is
used to increase feature resolution with the scale factor set to 2. In this
work, the number of multi-dimensional attention fusion networks and upsampling
blocks are both 2, and the output can be expressed as Eq. 9.
$x_{u}=f_{pixshuffle}^{1}(f_{MDAFN}^{1}(f_{pixshuffle}^{0}(f_{MDAFN}^{0}(x_{b}))))$
(9)
The final network layer is composed of 4 task-specific convolutional filters with a kernel size of 3x3. The final outputs can be given as Eq. 10.
$\displaystyle g_{q}$ $\displaystyle=max_{q}(f_{conv}^{0}(x_{u})),$ (10)
$\displaystyle g_{cos(2\theta)}$ $\displaystyle=max_{q}(f_{conv}^{1}(x_{u})),$
$\displaystyle g_{sin(2\theta)}$ $\displaystyle=max_{q}(f_{conv}^{2}(x_{u})),$
$\displaystyle g_{w}$ $\displaystyle=max_{q}(f_{conv}^{3}(x_{u})),$
where the position of the center point is the pixel coordinate of the largest grasp quality $g_{q}$, the opening and closing distance of the gripper
is $g_{w}$, and the grasp angle can be computed by
$g_{angle}=arctan(\frac{g_{sin(2\theta)}}{g_{cos(2\theta)}})/2$.
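The pipeline of Eqs. 7–10 can be sketched in PyTorch. The channel widths below are illustrative, and the residual/RFBM bottleneck and MDAFN blocks are stubbed out for brevity, so this is a structural sketch rather than the paper's exact network:

```python
import torch
import torch.nn as nn

class GraspSketch(nn.Module):
    """Down-sampling (conv + 2x2 max-pool, Eq. 7), pixel-shuffle up-sampling
    (Eq. 9), and the four task-specific 3x3 heads of Eq. 10."""
    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(feat, feat * 4, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2))
        # Two pixel-shuffle steps with scale factor 2 restore the resolution;
        # each divides the channel count by 4.
        self.up = nn.Sequential(nn.PixelShuffle(2), nn.PixelShuffle(2))
        f = feat * 4 // 16
        self.q = nn.Conv2d(f, 1, 3, padding=1)        # grasp quality
        self.cos = nn.Conv2d(f, 1, 3, padding=1)      # cos(2*theta)
        self.sin = nn.Conv2d(f, 1, 3, padding=1)      # sin(2*theta)
        self.width = nn.Conv2d(f, 1, 3, padding=1)    # gripper opening width

    def forward(self, x):
        x = self.up(self.down(x))
        return self.q(x), self.cos(x), self.sin(x), self.width(x)

net = GraspSketch()
q, c, s, w = net(torch.zeros(1, 1, 300, 300))
# The grasp angle is recovered per pixel as arctan(sin/cos)/2.
angle = 0.5 * torch.atan2(s, c)
```

At inference time, the grasp center is the argmax pixel of `q`, and the angle and width are read off at that pixel.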
Figure 4: Receptive field block module.
### IV-B Multi-scale Receptive Fields Block Module
In neuroscience, researchers have found that there is an eccentricity function
in the human visual cortex that adjusts the size of the receptive field of
vision [33]. This mechanism can help to emphasize the importance of the area
near the center. In this work, we introduce a multi-scale receptive field
block (RFB) [34] to assemble the bottleneck layer of our grasping detection
architecture for improving the ability of extracting multi-scale information
and enhancing the feature discriminability. The receptive field block module is composed of multi-branch convolution layers with different kernels
corresponding to the receptive fields of different sizes. Moreover, the
dilated convolution layer is used to control the eccentricity, and the
features extracted by the branches of the different receptive fields are
recombined to form the final representation, as shown in Fig. 4. In each
branch, the convolutional layer with a specific kernel size is followed by a
dilated convolutional layer with a corresponding dilation rate, which uses a
combination of different kernel sizes (1x1, 3x3, 7x1, 1x7). The features
extracted from the four branches are concatenated and then added to the input
data to obtain the final multi-scale feature output.
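The branch structure described above can be sketched as follows. The pairing of kernel sizes with dilation rates and the per-branch channel widths are illustrative assumptions, not the exact configuration of [34]:

```python
import torch
import torch.nn as nn

class RFB(nn.Module):
    """Multi-branch receptive field block sketch: each branch pairs a
    specific kernel (1x1, 3x3, 7x1, 1x7) with a dilated 3x3 convolution;
    branch outputs are concatenated, reduced by a 1x1 convolution, and
    added back to the input (residual shortcut)."""
    def __init__(self, ch):
        super().__init__()
        def branch(k, pad, dilation):
            return nn.Sequential(
                nn.Conv2d(ch, ch, k, padding=pad), nn.ReLU(),
                # Dilated conv widens the receptive field ("eccentricity")
                # while keeping the spatial size unchanged.
                nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation))
        self.b1 = branch(1, 0, 1)             # 1x1 kernel branch
        self.b2 = branch(3, 1, 3)             # 3x3 kernel branch
        self.b3 = branch((7, 1), (3, 0), 5)   # 7x1 kernel branch
        self.b4 = branch((1, 7), (0, 3), 5)   # 1x7 kernel branch
        self.fuse = nn.Conv2d(4 * ch, ch, 1)

    def forward(self, x):
        y = torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)
        return x + self.fuse(y)

rfb = RFB(8)
out = rfb(torch.zeros(2, 8, 32, 32))
```

Every branch preserves the spatial resolution, so the concatenation and the residual addition are shape-compatible by construction.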
Figure 5: Multi-dimensional attention fusion network. The top branch is the
pixel-level attention subnetwork, and the bottom branch is the channel-level
attention subnetwork.
### IV-C Multi-dimensional Attention Fusion Network
When humans look at an image, we do not pay attention to everything in the image, but instead focus on what is interesting to us. The attention mechanism
in the visual system focuses limited attention on the important information,
thus saving resources and obtaining the most effective information quickly. In
the field of computer vision, some attention mechanisms with few parameters,
fast speed and excellent effect have been developed [35, 36, 37, 38]. In order
to perceive the grasping objects effectively from the complex background, a
multi-dimensional attention network composed of pixel attention subnetwork and
channel attention subnetwork is designed to suppress the noise feature and
highlight the object feature, as shown in Fig. 5. Specifically, the shallow
features and the deep features are concatenated together, and the fused
features are fed into a multi-dimensional attention network to automatically
learn the importance of the fused features at the pixel level and the channel
level. In the pixel attention subnetwork, the feature map F passes through a 3x3 convolution layer to generate an attention map. The attention map is then passed through a sigmoid to obtain the corresponding pixel-wise weight scores. Moreover, SENet [36] is used as the channel attention subnetwork, which obtains 1x1xC features through global average pooling, then uses two fully connected layers with the ReLU activation function to build the correlation between channels, and finally outputs the weight score of each feature channel through a sigmoid operation. Both the
pixel-wise and channel-wise weight maps are multiplied with the feature map F
to obtain a novel output with reduced noise and enhanced object information.
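The two attention branches and the fusion step can be sketched as below. The channel count and SE reduction ratio are illustrative, and the SE branch is written with 1x1 convolutions (an equivalent form of the fully connected squeeze-excite layers):

```python
import torch
import torch.nn as nn

class MDAFN(nn.Module):
    """Multi-dimensional attention fusion sketch: concatenated shallow and
    deep features are weighted by a pixel-attention branch (3x3 conv +
    sigmoid) and an SE-style channel-attention branch; both weight maps
    multiply the fused feature map to suppress redundant features."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.pixel = nn.Sequential(
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global average pool
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(), # squeeze
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())  # excite

    def forward(self, shallow, deep):
        f = torch.cat([shallow, deep], dim=1)             # feature fusion
        return f * self.pixel(f) * self.channel(f)

m = MDAFN(ch=8)
out = m(torch.zeros(1, 4, 16, 16), torch.zeros(1, 4, 16, 16))
```

The pixel weights (one per spatial location) and the channel weights (one per channel) broadcast over the fused tensor, so noisy positions and uninformative channels are both attenuated in a single multiplication.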
### IV-D Loss Function
For a dataset including grasping objects $O=\left\\{O_{1}...O_{n}\right\\}$,
input images $I=\left\\{I_{1}...I_{n}\right\\}$, and corresponding grasp
labels $L=\left\\{L_{1}...L_{n}\right\\}$, we propose a lightweight fully convolutional neural network to approximate the complex function
$F:I\longmapsto\hat{G}$, where $F$ represents a neural network model with
weighted parameters, $I$ is input image data, and $\hat{G}$ denotes grasp
prediction. We train our model to learn the mapping function $F$ by minimizing the error between the grasp prediction $\hat{G}$ and the corresponding
label $L$. In this work, we consider the grasp pose estimation as regression
problem, therefore the Smooth L1 loss is used as our regression loss function.
The loss function $L_{r}$ of our grasping detection model is defined as:
$\displaystyle
L_{r}(\hat{G},L)=\sum_{i}^{N}\sum_{m\in{\\{q,cos2\theta,sin2\theta,w\\}}}Smooth_{L1}(\hat{G}_{i}^{m}-L_{i}^{m})$
(11)
where $Smooth_{L1}$ is formulated as:
$Smooth_{L1}(x)=\begin{dcases}(\sigma x)^{2}/2,&\text{if}\>|x|\textless 1/\sigma^{2};\\\
|x|-0.5/\sigma^{2},&\text{otherwise}.\end{dcases}$
where $N$ is the number of grasp candidates. $q,w$ represent the grasp quality
and the opening and closing distance of the gripper, respectively, and
$(cos(2\theta),sin(2\theta))$ is the form of orientation angle. In
$Smooth_{L1}$ function, $\sigma$ is the hyperparameter that controls the smooth
area, and it is set to 1 in this work.
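The piecewise Smooth L1 above translates directly to code; the quadratic region is $|x|<1/\sigma^{2}$, which reduces to $|x|<1$ for the $\sigma=1$ used here:

```python
import numpy as np

def smooth_l1(x, sigma=1.0):
    """Smooth L1: quadratic near zero, linear in the tails, with sigma
    controlling the width of the quadratic region. The two pieces meet
    continuously at |x| = 1/sigma^2."""
    x = np.asarray(x, dtype=float)
    quad = (sigma * x) ** 2 / 2
    lin = np.abs(x) - 0.5 / sigma ** 2
    return np.where(np.abs(x) < 1.0 / sigma ** 2, quad, lin)
```

The quadratic piece keeps gradients small for small residuals, while the linear piece bounds the gradient magnitude for outliers, which stabilizes the regression of the grasp maps.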
Figure 6: Qualitative images from Cornell grasping dataset. Figure 7:
Qualitative images from Jacquard grasping dataset.
## V Dataset Analysis
Since deep learning became popular, large public datasets, such as
ImageNet, COCO, KITTI, etc, have been driving the progress of algorithms.
However, in the field of robotic grasping detection, the number of available
grasping datasets is insufficient. Dexnet, Cornell, and Jacquard are famous
common grasping datasets that serve as a platform to compare the performance
of state-of-the-art grasping detection algorithms. Tab. I presents a summary of these grasping datasets.
Dexnet Grasping Dataset: The Dexterity Network (Dex-Net) is a research
project established by the UC Berkeley Automation Lab that provides code, datasets, and algorithms for grasping tasks. At present, the project has released four
versions of the dataset, namely Dex-Net 1.0, Dex-Net 2.0, Dex-Net 3.0, and
Dex-Net 4.0. Dex-Net 1.0 is a synthetic dataset with over 10000 unique 3D
object models and 2.5 million corresponding grasp labels. Based on Dex-Net
1.0, thousands of 3D objects with arbitrary poses are used to generate more
than 6.7 million point clouds and grasps, which constitute the Dex-Net 2.0 dataset. Dex-Net 3.0 is built to study grasping with suction-based end effectors. Recently, an extension of the previous versions, Dex-Net 4.0, has been developed, which supports training for both parallel-jaw and suction grippers.
Since the Dex-Net dataset includes only synthetic point cloud data and no RGB information of the grasp objects, the experiments of this work are mainly carried out on the Cornell and Jacquard grasping datasets.
Cornell Grasping Dataset: The Cornell dataset, which is widely used as a
benchmark evaluation platform, was collected in the real world with the RGB-D
camera. Some example images are shown in Fig. 6. The dataset is composed of 885
images with a resolution of 640$\times$480 pixels of 240 different objects
with positive grasps (5110) and negative grasps (2909). RGB images and
corresponding point cloud data of each object with various poses are provided.
However, the scale of the Cornell dataset is small for training our convolutional neural network model. In this work, we use online data augmentation methods, including random cropping, zooming and rotation, to extend the dataset and avoid overfitting during training.
Jacquard Grasping Dataset: Jacquard is a large grasping dataset created
through simulation based on CAD models. Because no manual collection and
annotation is required, the Jacquard dataset is larger than the Cornell
dataset, containing 50k images of 11k objects and over 1 million grasp labels.
Fig. 7 presents some images from the Jacquard dataset. Furthermore, the
dataset also provides a standard simulation environment to perform simulated
grasp trials (SGTs) under a consistent condition for different algorithms. In
this work, we use SGTs as a benchmark to fairly compare the performance of
various algorithms in robot arm grasping. Since the Jacquard dataset is large enough, we do not apply any data augmentation methods to it.
TABLE I: Description of the public Grasping Datasets Dataset | Modality | Objects | Images | Grasps
---|---|---|---|---
Dexnet | Depth | 1500 | 6.7M | 6.7M
Cornell | RGB-D | 240 | 885 | 8019
Jacquard | RGB-D | 11K | 54K | 1.1M
## VI Experiment
To verify the generalization capability of the proposed lightweight generative
model, we conducted experiments on two public grasping datasets, Cornell and
Jacquard. Extensive experimental results indicate that our algorithm has high
inference speed while achieving high grasp detection accuracy, and the size of
network parameters is an order of magnitude smaller than most previous
excellent algorithms. In addition, we also explore the impact of different
network designs on algorithm performance and discuss the shortcomings of our
method.
TABLE II: Detection Accuracy (%) of Different Methods on Cornell Dataset Author | Method | Input Size | Accuracy(%) | Time (ms)
---|---|---|---|---
Image-Wise | Object-Wise
Jiang [22] | Fast Search | 227 $\times$ 227 | 60.5 | 58.3 | 5000
Lenz [2] | SAE | 227 $\times$ 227 | 73.9 | 75.6 | 1350
Karaoguz [39] | GRPN | - | 88.7 | - | 200
Chu [10] | Faster RCNN | 227 $\times$ 227 | 96.0 | 96.1 | 120
Zhang [27] | Multimodal Fusion | 224 $\times$ 224 | 88.9 | 88.2 | 117
Zhou [12] | FCGN | 320 $\times$ 320 | 97.7 | 96.6 | 117
Wang [40] | Two-stage, Closed Loop | - | 85.3 | - | 140
Redmon [3] | AlexNet, MultiGrasp | 224 $\times$ 224 | 88.0 | 87.1 | 76
Kumra [28] | ResNet-50 | 224 $\times$ 224 | 89.2 | 88.9 | 103
Kumra [17] | GR-ConvNet | 300$\times$ 300 | 97.7 | 96.8 | -
Asif [41] | GraspNet | 224 $\times$ 224 | 90.6 | 90.2 | 24
Guo [29] | ZF-Net, MultiGrasp | - | 93.2 | 89.1 | -
Park [11] | FCNN | 360$\times$ 360 | 96.6 | 95.4 | 20
Morrison [14] | GGCNN | 300$\times$ 300 | 73.0 | 69.0 | 3
Zhang [21] | ROI-GD | - | 93.6 | 93.5 | 40
Song [14] | Matching Strategy | 320$\times$ 320 | 96.2 | 95.6 | -
Wang [42] | GPWRG | 400$\times$ 400 | 94.4 | 91.0 | 8
Ours | Efficient Grasping-D | 300$\times$ 300 | 98.9 | 95.5 | 6
Efficient Grasping-RGB | 96.6 | 91.0 | 6
Efficient Grasping-RGB-D | 98.9 | 97.8 | 6
Figure 8: The detection results of grasping network on Cornell dataset. The
first three rows are the maps for grasp quality, angle and width representing
the opening and closing distance of the gripper. And, the last row is the best
grasp outputs for several objects. Figure 9: The detection results of grasping
network on Jacquard dataset. The first three rows are the maps for grasp
quality, angle and width (the opening and closing distance of the gripper). The last row is the best grasp outputs for several objects.
### VI-A Evaluation Metrics
Similar to many previous works, the metric used in this paper to evaluate our model on the Cornell and Jacquard datasets is the rectangle metric. Specifically, a predicted grasp is regarded as correct when it meets the following two conditions:
* •
Angle difference: the difference in orientation angle between the predicted grasp and the corresponding grasp label is less than $30^{\circ}$.
* •
Jaccard index: the Jaccard index of the predicted grasp and corresponding
grasp label is greater than 25%, which can be formulated as Eq. 12.
$J(g_{p},g_{l})=\frac{|g_{p}\cap g_{l}|}{|g_{p}\cup g_{l}|}$ (12)
where $g_{p}$ and $g_{l}$ denote the predicted grasp rectangle and the corresponding grasp label, respectively. $g_{p}\cap g_{l}$ represents the intersection of the predicted grasp and the corresponding grasp label, and their union is represented as $g_{p}\cup g_{l}$.
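The two conditions of the rectangle metric can be sketched as below. For simplicity this sketch computes the Jaccard index of axis-aligned boxes; the full metric intersects oriented rectangles, which requires polygon clipping:

```python
def iou_axis_aligned(a, b):
    """Jaccard index of Eq. 12 for axis-aligned rectangles (x1, y1, x2, y2).
    A simplified special case of the oriented-rectangle overlap."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def is_correct(pred_angle_deg, label_angle_deg, pred_rect, label_rect):
    """Rectangle metric: angle difference < 30 degrees AND Jaccard > 0.25."""
    d = abs(pred_angle_deg - label_angle_deg) % 180
    d = min(d, 180 - d)  # grasp angles are equivalent modulo 180 degrees
    return d < 30 and iou_axis_aligned(pred_rect, label_rect) > 0.25
```

Note the modulo-180 handling: a grasp at 10° and a label at 170° differ by only 20° once the antipodal symmetry of a parallel gripper is taken into account.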
### VI-B Data preprocessing
The experiments for this work are performed on the Cornell and Jacquard grasping datasets. Due to the small data size of Cornell, we conduct online data augmentation to train our network. Meanwhile, the Jacquard dataset has sufficient data, so we train the network directly on it without adopting any data augmentation method. The images of Cornell and Jacquard are resized to
300x300 to feed into the network. In addition, the data labels are encoded for
training. A 2D Gaussian kernel is used to encode each ground-truth positive
grasp so that the corresponding region satisfies the Gaussian distribution,
where the peak of the Gaussian distribution is the coordinate of the center
point. We also use $sin(2\theta)$ and $cos(2\theta)$ to encode the grasp angle, where $\theta\in[-\frac{\pi}{2},\frac{\pi}{2}]$. The resulting values range from -1 to 1. Using this method, ambiguity can be avoided in the angle learning process, which is beneficial to the convergence of the network. Similarly, the grasp width representing the opening and closing distance of the gripper is scaled to the range 0 to 1 during training.
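The angle encoding above can be sketched in a few lines; the key property is that antipodal grasps ($\theta$ and $\theta-\pi$) map to the same encoding, which removes the ambiguity:

```python
import numpy as np

def encode_angle(theta):
    """Encode a grasp angle as (cos(2*theta), sin(2*theta)); both values
    lie in [-1, 1], and the doubling makes theta and theta - pi identical."""
    return np.cos(2 * theta), np.sin(2 * theta)

def decode_angle(c, s):
    """Recover theta in [-pi/2, pi/2] from the two encoded maps."""
    return np.arctan2(s, c) / 2
```

Because `arctan2` returns values in $(-\pi,\pi]$, the decoded angle is automatically confined to $[-\frac{\pi}{2},\frac{\pi}{2}]$, matching the label range used for training.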
### VI-C Training Methodology
During training, we train our generative model end to end on an NVIDIA
GTX2080Ti GPU with 22GB memory. The grasping network is implemented in
PyTorch 1.2.0 with the cudnn-7.5 and cuda-10.0 packages. The popular Adam
optimizer is used to optimize the network via back propagation during
training. Furthermore, the initial learning rate is set to 0.001 and a batch
size of 8 is used in this work.
### VI-D Experiments on Cornell Grasping Dataset
Following previous works [10, 12, 13], the Cornell dataset is divided in two
different ways to validate the generalization ability of the model:
* •
Image-wise level: the images of the dataset are randomly divided, so the
images of each grasped object in the training set and test set differ. The
image-wise split is used to test the generalization ability of the network to
new grasp poses.
* •
Object-wise level: the object instances of the dataset are randomly divided,
and all images of the same object fall into the same set (training or test).
The object-wise split is used to validate the generalization ability of the
network to new objects not seen during training.
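The difference between the two splits can be sketched with a hypothetical helper (not the authors' code), where each sample is an `(image, object_id)` pair:

```python
import random

def split_image_wise(samples, test_frac=0.2, seed=0):
    """Image-wise split: shuffle all (image, object_id) samples and cut;
    images of the same object may land in both sets."""
    rng = random.Random(seed)
    s = list(samples)
    rng.shuffle(s)
    k = int(len(s) * (1 - test_frac))
    return s[:k], s[k:]

def split_object_wise(samples, test_frac=0.2, seed=0):
    """Object-wise split: shuffle object ids and send every image of an
    object to the same set, so test objects are unseen during training."""
    rng = random.Random(seed)
    ids = sorted({obj for _, obj in samples})
    rng.shuffle(ids)
    k = int(len(ids) * (1 - test_frac))
    train_ids = set(ids[:k])
    train = [s for s in samples if s[1] in train_ids]
    test = [s for s in samples if s[1] not in train_ids]
    return train, test
```

In the object-wise case the sets of object ids in train and test are disjoint by construction, which is exactly what makes it the harder generalization test.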
The comparison of the grasp detection accuracy of our model and other methods
on the Cornell dataset is presented in Table II. The experimental results
indicate that the proposed grasp detection algorithm achieves high accuracies
of 98.9$\%$ and 97.8$\%$ in the image-wise and object-wise splits with an
inference time of 6ms. Compared with other state-of-the-art algorithms, our
model maintains a better balance between accuracy and real-time performance.
By changing the mode of input data, we find that our generative grasp
detection architecture achieves excellent performance with depth input.
Moreover, the results in the object-wise split demonstrate that combining
depth data with RGB data, rich in color and texture information, gives the
model a more robust generalization ability to unseen objects. In Fig. 8, we
plot the grasp detection results for several objects. Only the grasp
candidate with the highest grasp quality is selected as the final output, and
this top-1 grasp is visualised in the last row. The first three rows are the
maps for grasp quality, angle, and width (the opening and closing distance of
the gripper). The figure shows that our algorithm provides reliable grasp
candidates for objects with different shapes and poses.
TABLE III: Detection Accuracy (%) of Different Methods on Jacquard Dataset

Author | Method | Accuracy($\%$)
---|---|---
Depierre [43] | Jacquard | 74.2
Morrison [14] | GG-CNN2 | 84
Zhou [12] | FCGN-RGD | 92.8
Zhang [21] | ROIGD-RGD | 93.6
Song [13] | Resnet-101-RGD | 93.2
Kumra [17] | GR-ConvNet-RGB-D | 94.6
Ours | Efficient Grasping-D | 95.6
Efficient Grasping-RGB | 91.6
Efficient Grasping-RGB-D | 93.6
### VI-E Experiments on Jacquard Grasping Dataset
Similar to the Cornell dataset, we train our network on the Jacquard dataset
to perform grasp pose estimation. The results are summarized in Table III.
Taking depth data as input, the proposed method obtains state-of-the-art
performance with a detection accuracy of 95.6$\%$, which exceeds the existing
methods and is the best result on the Jacquard dataset. The experimental
results in Table II and Table III demonstrate that our algorithm not only
achieves excellent performance on the Cornell dataset but also outperforms
other methods on the Jacquard dataset. Some detection examples are displayed
in Fig. 9. As with the Cornell dataset, the grasp quality, angle, width (the
opening and closing distance of the gripper), and the best detection results
on the Jacquard dataset are presented in the figure.
TABLE IV: The impact of different network settings on detection performance

\+ GGR | | ✓ | ✓ | ✓
---|---|---|---|---
\+ RFBM | ✓ | | ✓ | ✓
\+ MDAFN | ✓ | ✓ | | ✓
Accuracy ($\%$) | 97.8 | 94.4 | 96.6 | 98.9
Figure 10: The grasp detection accuracy when using different scale factors of
Gaussian kernel
### VI-F Ablation Study
To further explore the impact of different components on grasp pose learning,
we trained models with different network settings on the image-wise split of
the Cornell dataset with RGB-D input. The experimental results are summarized
in Table IV. The detection accuracies in Table IV show that the
Gaussian-based grasp representation (GGR), the receptive field block module
(RFBM), and the multi-dimensional attention fusion network (MDAFN) each bring
a performance improvement to the network, and all components combined yield
the best grasp detection performance. Moreover, we also study the impact of
different settings of the scale factor ($T$) on the model, as shown in
Fig. 10. In this work, the scale factors $T_{x}$ and $T_{y}$ mentioned in
Section III-B are set to $T_{x}=T_{y}=T$ with values ranging over
$\left\\{4,8,16,32,64\right\\}$. When $T=16$, the model reaches its best
detection accuracy of 97.8$\%$ on the object-wise split of the Cornell
dataset. During the experiments, we found that the scale factor should be
chosen according to the annotation density of a particular dataset, which
mitigates the instability of network learning caused by overlapping labels.
Figure 11: The detection results of multiple grasping objects. The first column is the grasp outputs of the corresponding RGB images for several objects. The last three columns are the maps for grasp quality, angle and width representing the opening and closing distance of the gripper.

TABLE V: Network size comparison of different methods

Author | Parameters (Approx.) | Time
---|---|---
Lenz [2] | - | 13.5s
Pinto and Gupta [24] | 60 million | -
Levine [44] | 1 million | 0.2-0.5s
Johns [45] | 60 million | -
Chu [10] | 216 million | 120ms
Morrison [14] | 66 k | 3ms
Ours | 4.67 million | 5ms
### VI-G Comparison of network parameter sizes
In Table V, we list comparisons of the network sizes used for grasp
prediction. Many works, such as [24, 44, 45, 10], contain millions of network
parameters. To improve the real-time performance of the grasping algorithm,
we developed a lightweight generative grasp detection architecture that
achieves high detection accuracy and fast running speed; its network size of
4.67M is an order of magnitude smaller than that of most other methods.
### VI-H Objects in clutter
To validate the generalization ability of the proposed model in cluttered
scenes, we use the model trained on the Cornell dataset to test in a more
realistic multi-object environment. The detection results are presented in
Fig. 11. Although the model is trained on a single-object dataset, it is
still able to effectively predict the grasp poses of multiple objects. In
such complex scenarios, the proposed model generalizes well enough to perform
grasp pose estimation for multiple objects simultaneously.
### VI-I Failure cases analysis
During the experiments, we found that although the proposed algorithm
achieves high detection accuracy, it still fails in some cases, as shown in
Fig. 12. For some objects in the Jacquard dataset with complex shapes, our
model does not work well. Furthermore, in cluttered scenes, smaller objects
among multiple objects are often missed by the model, and the detection
quality for large boxes is also poor. However, these shortcomings could be
mitigated by increasing the diversity of the training dataset.
Figure 12: Failed detection cases with single and multiple objects.
## VII Conclusion
In this paper, we proposed a Gaussian-based grasp representation that
highlights the maximum grasp quality at the center position. Based on this
representation, a lightweight generative architecture with a receptive field
block module and a multi-dimensional attention fusion network was developed
for grasp pose estimation. Experiments on two common public datasets, Cornell
and Jacquard, show that our model has a very fast inference speed while
achieving high detection accuracy, reaching 98.9$\%$ and 95.6$\%$ on the
Cornell and Jacquard datasets, respectively.
## References
* [1] F. T. Pokorny, Y. Bekiroglu, and D. Kragic, “Grasp moduli spaces and spherical harmonics,” in _2014 IEEE International Conference on Robotics and Automation (ICRA)_ , 2014, pp. 389–396.
* [2] I. Lenz, H. Lee, and A. Saxena, “Deep learning for detecting robotic grasps,” _The International Journal of Robotics Research_ , vol. 34, no. 4-5, pp. 705–724, 2015. [Online]. Available: https://doi.org/10.1177/0278364914549607
* [3] J. Redmon and A. Angelova, “Real-time grasp detection using convolutional neural networks,” in _Robotics and Automation (ICRA), 2015 IEEE International Conference on_. Seattle: IEEE, July 2015.
* [4] U. Asif, J. Tang, and S. Harrer, “Densely supervised grasp detector (DSGD),” _CoRR_ , vol. abs/1810.03962, 2018. [Online]. Available: http://arxiv.org/abs/1810.03962
* [5] G. Wu, W. Chen, H. Cheng, W. Zuo, D. Zhang, and J. You, “Multi-object grasping detection with hierarchical feature fusion,” _IEEE Access_ , vol. 7, pp. 43 884–43 894, 2019.
* [6] S. Kumra and C. Kanan, “Robotic grasp detection using deep convolutional neural networks,” _CoRR_ , vol. abs/1611.08036, 2016. [Online]. Available: http://arxiv.org/abs/1611.08036
* [7] J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” _CoRR_ , vol. abs/1612.08242, 2016. [Online]. Available: http://arxiv.org/abs/1612.08242
* [8] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” 2015, cite arxiv:1512.02325Comment: ECCV 2016. [Online]. Available: http://arxiv.org/abs/1512.02325
* [9] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in _Advances in Neural Information Processing Systems 28_ , C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, Eds. Curran Associates, Inc., 2015, pp. 91–99. [Online]. Available: http://papers.nips.cc/paper/5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks.pdf
* [10] F. Chu, R. Xu, and P. A. Vela, “Deep grasp: Detection and localization of grasps with deep neural networks,” _CoRR_ , vol. abs/1802.00520, 2018. [Online]. Available: http://arxiv.org/abs/1802.00520
* [11] Y. S. Dongwon Park, Y. S. Se Young Chun Dongwon Park, and S. Y. Chun, “Real-time, highly accurate robotic grasp detection using fully convolutional neural networks with high-resolution images,” _CoRR_ , vol. abs/1809.05828, 2018, withdrawn. [Online]. Available: http://arxiv.org/abs/1809.05828
* [12] X. Zhou, X. Lan, H. Zhang, Z. Tian, Y. Zhang, and N. Zheng, “Fully convolutional grasp detection network with oriented anchor box,” _CoRR_ , vol. abs/1803.02209, 2018. [Online]. Available: http://arxiv.org/abs/1803.02209
* [13] Y. Song, L. Gao, X. Li, and W. Shen, “A novel robotic grasp detection method based on region proposal networks,” _Robotics and Computer-Integrated Manufacturing_ , vol. 65, p. 101963, 2020. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0736584519308105
* [14] D. Morrison, P. Corke, and J. Leitner, “Learning robust, real-time, reactive robotic grasping,” _The International Journal of Robotics Research_ , vol. 39, no. 2-3, pp. 183–201, 2020. [Online]. Available: https://doi.org/10.1177/0278364919859066
* [15] S. Wang, X. Jiang, J. Zhao, X. Wang, W. Zhou, and Y. Liu, “Efficient fully convolution neural network for generating pixel wise robotic grasps with high resolution images,” _CoRR_ , vol. abs/1902.08950, 2019. [Online]. Available: http://arxiv.org/abs/1902.08950
* [16] D. Wang, “Sgdn: Segmentation-based grasp detection network for unsymmetrical three-finger gripper,” 2020.
* [17] S. Kumra, S. Joshi, and F. Sahin, “Antipodal robotic grasping using generative residual convolutional neural network,” 2019.
* [18] A. Bicchi and V. Kumar, “Robotic grasping and contact: a review,” in _Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065)_ , vol. 1, 2000, pp. 348–353 vol.1.
* [19] Y. Inagaki, R. Araki, T. Yamashita, and H. Fujiyoshi, “Detecting layered structures of partially occluded objects for bin picking,” in _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2019, pp. 5786–5791.
* [20] A. Gariépy, J.-C. Ruel, B. Chaib-draa, and P. Giguère, “Gq-stn: Optimizing one-shot grasp detection based on robustness classifier,” _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pp. 3996–4003, 2019.
* [21] H. Zhang, X. Lan, S. Bai, X. Zhou, Z. Tian, and N. Zheng, “Roi-based robotic grasp detection for object overlapping scenes,” in _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2019, pp. 4768–4775.
* [22] Yun Jiang, S. Moseson, and A. Saxena, “Efficient grasping from rgbd images: Learning using a new rectangle representation,” in _2011 IEEE International Conference on Robotics and Automation_ , 2011, pp. 3304–3311.
* [23] G. Du, K. Wang, and S. Lian, “Vision-based robotic grasping from object localization, pose estimation, grasp detection to motion planning: A review,” _CoRR_ , vol. abs/1905.06658, 2019. [Online]. Available: http://arxiv.org/abs/1905.06658
* [24] L. Pinto and A. Gupta, “Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours,” in _2016 IEEE International Conference on Robotics and Automation (ICRA)_ , 2016, pp. 3406–3413.
* [25] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg, “Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics,” _CoRR_ , vol. abs/1703.09312, 2017. [Online]. Available: http://arxiv.org/abs/1703.09312
* [26] D. Park and S. Y. Chun, “Classification based grasp detection using spatial transformer network,” _CoRR_ , vol. abs/1803.01356, 2018. [Online]. Available: http://arxiv.org/abs/1803.01356
* [27] Zhang, Qiang, Qu, Daokui, Xu, Fang, and Zou, Fengshan, “Robust robot grasp detection in multimodal fusion,” _MATEC Web Conf._ , vol. 139, p. 00060, 2017. [Online]. Available: https://doi.org/10.1051/matecconf/201713900060
* [28] S. Kumra and C. Kanan, “Robotic grasp detection using deep convolutional neural networks,” in _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2017, pp. 769–776.
* [29] D. Guo, F. Sun, H. Liu, T. Kong, B. Fang, and N. Xi, “A hybrid deep architecture for robotic grasp detection,” in _2017 IEEE International Conference on Robotics and Automation (ICRA)_ , 2017, pp. 1609–1614.
* [30] A. Depierre, E. Dellandréa, and L. Chen, “Optimizing correlated graspability score and grasp regression for better grasp prediction,” 2020.
* [31] N. Gkanatsios, G. Chalvatzaki, P. Maragos, and J. Peters, “Orientation attentive robot grasp synthesis,” 2020.
* [32] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” _CoRR_ , vol. abs/1609.05158, 2016. [Online]. Available: http://arxiv.org/abs/1609.05158
* [33] B. A. Wandell and J. Winawer, “Computational neuroimaging and population receptive fields,” _Trends in Cognitive Sciences_ , vol. 19, no. 6, pp. 349–357, 2015.
* [34] S. Liu, D. Huang, and a. Wang, “Receptive field block net for accurate and fast object detection,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , September 2018.
* [35] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2018.
* [36] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 7132–7141.
* [37] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “Cbam: Convolutional block attention module,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , September 2018.
* [38] X. Li, W. Wang, X. Hu, and J. Yang, “Selective kernel networks,” _CoRR_ , vol. abs/1903.06586, 2019. [Online]. Available: http://arxiv.org/abs/1903.06586
* [39] H. Karaoguz and P. Jensfelt, “Object detection approach for robot grasp detection,” in _2019 International Conference on Robotics and Automation (ICRA)_ , 2019, pp. 4953–4959.
* [40] Z. Wang, Z. Li, B. Wang, and H. Liu, “Robot grasp detection using multimodal deep convolutional neural networks,” _Advances in Mechanical Engineering_ , vol. 8, no. 9, p. 1687814016668077, 2016. [Online]. Available: https://doi.org/10.1177/1687814016668077
* [41] U. Asif, J. Tang, and S. Harrer, “Graspnet: An efficient convolutional neural network for real-time grasp detection for low-powered devices,” in _Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18_. International Joint Conferences on Artificial Intelligence Organization, 7 2018, pp. 4875–4882. [Online]. Available: https://doi.org/10.24963/ijcai.2018/677
* [42] S. Wang, X. Jiang, J. Zhao, X. Wang, W. Zhou, and Y. Liu, “Efficient fully convolution neural network for generating pixel wise robotic grasps with high resolution images,” _CoRR_ , vol. abs/1902.08950, 2019. [Online]. Available: http://arxiv.org/abs/1902.08950
* [43] A. Depierre, E. Dellandréa, and L. Chen, “Jacquard: A large scale dataset for robotic grasp detection,” _CoRR_ , vol. abs/1803.11469, 2018. [Online]. Available: http://arxiv.org/abs/1803.11469
* [44] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” _The International Journal of Robotics Research_ , vol. 37, no. 4-5, pp. 421–436, 2018. [Online]. Available: https://doi.org/10.1177/0278364917710318
* [45] E. Johns, S. Leutenegger, and A. J. Davison, “Deep learning a grasp function for grasping under gripper pose uncertainty,” in _2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2016, pp. 4461–4468.
# Universal Approximation Properties for ODENet and ResNet
Yuto Aizawa Division of Mathematical and Physical Sciences, Kanazawa
University<EMAIL_ADDRESS>Masato Kimura Faculty of Mathematics and
Physics, Kanazawa University<EMAIL_ADDRESS>
###### Abstract
We prove a universal approximation property (UAP) for a class of ODENet and a
class of ResNet, which are used in many deep learning algorithms. The UAP can
be stated as follows. Let $n$ and $m$ be the dimensions of the input and
output data, and assume $m\leq n$. Then we show that an ODENet of width $n+m$
with any non-polynomial continuous activation function can approximate any
continuous function on a compact subset of $\mathbb{R}^{n}$. We also show
that ResNet has the same property as the depth tends to infinity. Furthermore,
we derive explicitly the gradient of a loss function with respect to a certain
tuning variable. We use this to construct a learning algorithm for ODENet. To
demonstrate the usefulness of this algorithm, we apply it to a regression
problem, a binary classification, and a multinomial classification in MNIST.
Keywords: deep neural network, ODENet, ResNet, universal approximation
property
## 1 Introduction
Recent advances in neural networks have proven immensely successful for
regression analysis, image classification, time series modeling, and so on
[19]. Neural Networks are models of the human brain and vision [17, 7]. A
neural network performs regression analysis, image classification, and time
series modeling by performing a series of sequential operations, known as
layers. Each of these layers is composed of neurons that are connected to
neurons of other (typically, adjacent) layers. We consider a neural network
with $L+1$ layers, where the input layer is layer $0$, the output layer is
layer $L$, and the number of nodes in layer $l~{}(l=0,1,\ldots,L)$ is
$n_{l}\in\mathbb{N}$. Let $f^{(l)}:\mathbb{R}^{n_{l}}\to\mathbb{R}^{n_{l+1}}$
be the function of each layer. The output of each layer is, therefore, a
vector in $\mathbb{R}^{n_{l+1}}$. If the input data is
$\xi\in\mathbb{R}^{n_{0}}$, then, at each layer, we have
$\left\\{\begin{aligned} x^{(l+1)}&=f^{(l)}(x^{(l)}),&l=0,1,\ldots,L-1,\\\
x^{(0)}&=\xi.&\end{aligned}\right.$
The final output of the network then becomes $x^{(L)}$, and the network is
represented by $H=[\xi\mapsto x^{(L)}]$.
A neural network approaches the regression and classification problem in two
steps. Firstly, a priori observed and classified data is used to train the
network. Then, the trained network is used to predict the rest of the data.
Let $D\subset\mathbb{R}^{n_{0}}$ be the set of input data, and
$F:D\to\mathbb{R}^{n_{L}}$ be the target function. In the training step, the
training data $\\{(\xi^{(k)},F(\xi^{(k)}))\\}_{k=1}^{K}$ are available, where
$\\{\xi^{(k)}\\}_{k=1}^{K}\subset D$ are the inputs, and
$\\{F(\xi^{(k)})\\}_{k=1}^{K}\subset\mathbb{R}^{n_{L}}$ are the outputs. The
goal is to learn the neural network so that $H(\xi)$ approximates $F(\xi)$.
This is achieved by minimizing a loss function that represents a similarity
distance measure between the two quantities. In this paper, we consider the
loss function with the mean square error
$\frac{1}{K}\sum_{k=1}^{K}\left|H(\xi^{(k)})-F(\xi^{(k)})\right|^{2}.$
Finding the optimal functions
$f^{(l)}:\mathbb{R}^{n_{l}}\to\mathbb{R}^{n_{l+1}}$ out of all possible such
functions is challenging. In addition, this includes a risk of overfitting
because of the high number of available degrees of freedom. We restrict the
functions to the following form:
$f^{(l)}(x)=a^{(l)}\odot\mbox{\boldmath$\sigma$}(W^{(l)}x+b^{(l)}),$ (1.1)
where $W^{(l)}\in\mathbb{R}^{n_{l+1}\times n_{l}}$ is a weight matrix,
$b^{(l)}\in\mathbb{R}^{n_{l+1}}$ is a bias vector, and
$a^{(l)}\in\mathbb{R}^{n_{l+1}}$ is a weight vector for the output of each layer.
The operator $\odot$ denotes the Hadamard product (element-wise product) of
two vectors defined by (2.2). The function
$\mbox{\boldmath$\sigma$}:\mathbb{R}^{n_{l+1}}\to\mathbb{R}^{n_{l+1}}$ is
defined by
$\mbox{\boldmath$\sigma$}(x)=(\sigma(x_{1}),\sigma(x_{2}),\ldots,\sigma(x_{n_{l+1}}))^{\top}$,
where $\sigma:\mathbb{R}\to\mathbb{R}$ is called an activation function. For a
scalar $x\in\mathbb{R}$, the sigmoid function $\sigma(x)=(1+e^{-x})^{-1}$, the
hyperbolic tangent function $\sigma(x)=\tanh(x)$, the rectified linear unit
(ReLU) function $\sigma(x)=\max(0,x)$, and the linear function $\sigma(x)=x$,
and so on, are used as activation functions.
If we restrict the functions of the form (1.1), the goal is to learn
$W^{(l)},b^{(l)},a^{(l)}$ that approximates $F(\xi)$ in the training step. The
gradient method is used for training. Let $G_{W^{(l)}},G_{b^{(l)}}$ and
$G_{a^{(l)}}$ be the gradient of the loss function with respect to
$W^{(l)},b^{(l)}$ and $a^{(l)}$, respectively, and let $\tau>0$ be the
learning rate. Using the gradient method, the weights and biases are updated
as follows:
$W^{(l)}\leftarrow W^{(l)}-\tau G_{W^{(l)}},\quad b^{(l)}\leftarrow
b^{(l)}-\tau G_{b^{(l)}},\quad a^{(l)}\leftarrow a^{(l)}-\tau G_{a^{(l)}}.$
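As a minimal numerical illustration (not from the paper), the update rule above drives a scalar weight toward the minimizer of the mean square error; all names and values here are illustrative:

```python
import numpy as np

def sgd_step(params, grads, tau=0.1):
    """One plain gradient-descent update: p <- p - tau * G for each parameter."""
    return [p - tau * g for p, g in zip(params, grads)]

def mse_and_grad(w, xs, ys):
    """Loss (1/K) * sum_k |w * xi_k - F(xi_k)|^2 and its gradient in w."""
    r = w * xs - ys
    return np.mean(r ** 2), np.mean(2.0 * r * xs)

xs = np.array([1.0, 2.0, 3.0])
ys = 2.0 * xs            # target function F(xi) = 2 * xi
w = 0.0
for _ in range(200):
    loss, g = mse_and_grad(w, xs, ys)
    (w,) = sgd_step([w], [g], tau=0.05)
```

After the loop, `w` has converged to the true slope 2 and the loss is essentially zero; the stochastic variant mentioned next simply replaces the full-data gradient with a mini-batch estimate.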
Note that the stochastic gradient method [3] is now widely used. Error
backpropagation [18] is used to compute these gradients.
It is known that deep (convolutional) neural networks are of great importance
in image recognition [20, 22]. In [11], it was found through controlled
experiments that the increase of depth in networks actually improves its
performance and accuracy, in exchange, of course, for additional time
complexity. However, in the case that the depth is overly increased, the
accuracy might get stagnant or even degraded [11]. In addition, considering
deeper networks may impede the learning process which is due to the vanishing
or exploding of the gradient [2, 9]. Apparently, deeper neural networks are
more difficult to train. To address such an issue, the authors in [12]
recommended the used of residual learning to facilitate the training of
networks that are considerably deeper than those used previously. Such
networks are referred to as residual network or ResNet. Let $n$ and $m$ be the
dimensions of the input and output data. Let $N$ be the number of nodes of
each layers. A ResNet can be represented as
$\left\\{\begin{aligned}
x^{(l+1)}&=x^{(l)}+f^{(l)}(x^{(l)}),&l=0,1,\ldots,L-1,\\\
x^{(0)}&=Q\xi.&\end{aligned}\right.$ (1.2)
The final output of the network then becomes $H(\xi):=Px^{(L)}$, where
$P\in\mathbb{R}^{m\times N}$ and $Q\in\mathbb{R}^{N\times n}$. Moreover, the
function $f^{(l)}$ is learned from training data.
Transforming (1.2) into
$\left\\{\begin{aligned}
x^{(l+1)}&=x^{(l)}+hf^{(l)}(x^{(l)}),&l=0,1,\ldots,L-1,\\\
x^{(0)}&=Q\xi,&\end{aligned}\right.$ (1.3)
where $h$ is the step size of the layer, leads to the same form as the Euler
method, a method for finding numerical solutions to initial value problems
for ordinary differential equations. Indeed, putting
$x(t):=x^{(t)}$ and $f(t,x):=f^{(t)}(x)$, then the limit of (1.3) as $h$
approaches zero yields the following initial value problem of ordinary
differential equation
$\left\\{\begin{aligned} x^{\prime}(t)&=f(t,x(t)),&t\in(0,T],\\\
x(0)&=Q\xi.&\end{aligned}\right.$ (1.4)
We call the function $H=[D\ni\xi\mapsto Px(T)]$ an ODENet [5] associated with
the system of ordinary differential equations (1.4). Similar to ResNet, ODENet
can address the issue of vanishing and exploding gradients. In this paper, we
consider the ODENet given as follows:
$\left\\{\begin{aligned} x^{\prime}(t)&=\beta(t)x(t)+\gamma(t),&t\in(0,T],\\\
y^{\prime}(t)&=\alpha(t)\odot\mbox{\boldmath$\sigma$}(Ax(t)),&t\in(0,T],\\\
x(0)&=\xi,&\\\ y(0)&=0,&\end{aligned}\right.$ (1.5)
where $x$ is the function from $[0,T]$ to $\mathbb{R}^{n}$, $y$ is a function
from $[0,T]$ to $\mathbb{R}^{m}$, and $\xi\in D$ is the input data. Moreover,
$\alpha:[0,T]\to\mathbb{R}^{m},\beta:[0,T]\to\mathbb{R}^{n\times n}$, and
$\gamma:[0,T]\to\mathbb{R}^{n}$ are design parameters, and $A$ is an $m\times
n$ real matrix. The function
$\mbox{\boldmath$\sigma$}:\mathbb{R}^{m}\to\mathbb{R}^{m}$ is defined by (2.1)
and the operator $\odot$ denotes the Hadamard product.
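Discretizing (1.5) with the forward Euler method recovers a ResNet-like update, as in (1.3). A sketch with illustrative parameters (the design functions passed in are assumptions, not the paper's learned ones):

```python
import numpy as np

def odenet_forward_euler(xi, alpha, beta, gamma, A, sigma, T=1.0, L=1000):
    """Forward-Euler integration of the ODENet system (1.5):
        x'(t) = beta(t) x(t) + gamma(t),
        y'(t) = alpha(t) ⊙ sigma(A x(t)),
    with x(0) = xi, y(0) = 0. Each Euler step has the ResNet form
    x_{l+1} = x_l + h f_l(x_l); the output is y(T)."""
    h = T / L
    x = np.array(xi, dtype=float)
    y = np.zeros(A.shape[0])
    for l in range(L):
        t = l * h
        x_new = x + h * (beta(t) @ x + gamma(t))
        y = y + h * (alpha(t) * sigma(A @ x))  # elementwise * = Hadamard product
        x = x_new
    return y
```

For instance, with $\beta\equiv 0$ and $\gamma\equiv 0$ the state $x(t)$ stays at $\xi$, so the output reduces to $y(T)=T\,\alpha\odot\boldsymbol{\sigma}(A\xi)$, which the Euler scheme reproduces exactly.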
A neural network of arbitrary width and bounded depth has the universal
approximation property (UAP). The classical UAP states that continuous
functions on a compact subset of $\mathbb{R}^{n}$ can be approximated by a
linear combination of activation functions. The UAP for neural networks has
been shown by choosing as activation function a sigmoidal function
[6, 13, 4, 8], any bounded function that is not a polynomial [15], or any
function in the Lizorkin space, including the ReLU function [21]. The UAP for
neural networks and its proof for each activation function are presented in
Table 1.
Table 1: Activation function and classical universal approximation property of neural networks

References | Activation function | How to prove
---|---|---
Cybenko [6] | Continuous sigmoidal | Hahn-Banach theorem
Hornik et al. [13] | Monotonic sigmoidal | Stone-Weierstrass theorem
Carroll [4] | Continuous sigmoidal | Radon transform
Funahashi [8] | Monotonic sigmoidal | Fourier transform
Leshno [15] | Non-polynomial | Weierstrass theorem
Sonoda, Murata [21] | Lizorkin distribution | Ridgelet transform
Recently, some positive results have been established showing the UAP for
particular deep narrow networks. Hanin and Sellke [10] have shown that deep
narrow networks with the ReLU activation function have the UAP and require
only width $n+m$. Lin and Jegelka [16] have shown that a ResNet with the ReLU
activation function, arbitrary input dimension, width 1, and output dimension
1 has the UAP. For activation functions other than ReLU, Kidger and Lyons
[14] have shown that deep narrow networks with any non-polynomial continuous
activation function have the UAP and require only width $n+m+1$. A comparison
of the UAPs is shown in Table 2.
Table 2: A comparison of universal approximation properties

| Shallow wide NN | Deep narrow NN | ResNet
---|---|---|---
References | [15, 21] | [14] | [16]
Input dimension $n$ | $n,m$ : any | $n,m$ : any | $n$ : any
Output dimension $m$ | | | $m=1$
Activation function | Non-polynomial | Non-polynomial | ReLU
Depth $L$ | $L=3$ | $L\to\infty$ | $L\to\infty$
Width $N$ | $N\to\infty$ | $N=n+m+1$ | $N=1$

| ResNet | ODENet
---|---|---
References | Theorem 2.6 | Theorem 2.3
Input dimension $n$ | $n\geq m$ | $n\geq m$
Output dimension $m$ | |
Activation function | Non-polynomial | Non-polynomial
Depth $L$ | $L\to\infty$ | $(L=\infty)$
Width $N$ | $N=n+m$ | $N=n+m$
In this paper, we propose a function of the form (1.5) that can be learned
from training data in ODENet and ResNet. We show the conditions for the UAP
for this ODENet and ResNet. In Section 2, we show that the UAP holds for the
ODENet and ResNet associated with (1.5). In Section 3, we derive the gradient
of the loss function and a learning algorithm for this ODENet in consideration
followed by some numerical experiments in Section 4. Finally, we end the paper
with a conclusion in Section 5.
## 2 Universal Approximation Theorem for ODENet and ResNet
### 2.1 Definition of an activation function with universal approximation
property
Let $m$ and $n$ be natural numbers. Our main results, Theorem 2.3 and Theorem
2.6, show that any continuous function on a compact subset of $\mathbb{R}^{n}$
can be approximated using the ODENet and ResNet.
In this paper, the following notations are used:
$|x|:=\left(\sum_{i=1}^{n}|x_{i}|^{2}\right)^{\frac{1}{2}},\quad\|A\|:=\left(\sum_{i=1}^{m}\sum_{j=1}^{n}|a_{ij}|^{2}\right)^{\frac{1}{2}},$
for any $x=(x_{1},x_{2},\ldots,x_{n})^{\top}\in\mathbb{R}^{n}$ and
$A=(a_{ij})_{\begin{subarray}{c}i=1,\ldots,m\\\
j=1,\ldots,n\end{subarray}}\in\mathbb{R}^{m\times n}$. Also, we define
$\nabla_{x}^{\top}f:=\left(\frac{\partial f_{i}}{\partial
x_{j}}\right)_{\begin{subarray}{c}i=1,\ldots,m\\\
j=1,\ldots,n\end{subarray}},\quad\nabla_{x}f^{\top}:=\left(\nabla_{x}^{\top}f\right)^{\top}$
for any $f\in C^{1}(\mathbb{R}^{n};\mathbb{R}^{m})$. For a function
$\sigma:\mathbb{R}\to\mathbb{R}$, we define
$\mbox{\boldmath$\sigma$}:\mathbb{R}^{m}\to\mathbb{R}^{m}$ by
$\mbox{\boldmath$\sigma$}(x):=\left(\begin{array}[]{c}\sigma(x_{1})\\\
\sigma(x_{2})\\\ \vdots\\\ \sigma(x_{m})\end{array}\right)$ (2.1)
for $x=(x_{1},x_{2},\ldots,x_{m})^{\top}\in\mathbb{R}^{m}$. For
$a=(a_{1},a_{2},\ldots,a_{m})^{\top},b=(b_{1},b_{2},\ldots,b_{m})^{\top}\in\mathbb{R}^{m}$,
their Hadamard product is defined by
$a\odot b:=\left(\begin{array}[]{c}a_{1}b_{1}\\\ a_{2}b_{2}\\\ \vdots\\\
a_{m}b_{m}\end{array}\right)\in\mathbb{R}^{m}.$ (2.2)
###### Definition 2.1 (Universal approximation property for the activation
function $\sigma$).
Let $\sigma$ be a real-valued function on $\mathbb{R}$ and $D$ be a compact
subset of $\mathbb{R}^{n}$. Also, consider the set
$S:=\left\\{G:D\to\mathbb{R}\left|G(\xi)=\sum_{l=1}^{L}\alpha_{l}\sigma(\mbox{\boldmath$c$}_{l}\cdot\xi+d_{l}),L\in\mathbb{N},\alpha_{l},d_{l}\in\mathbb{R},\mbox{\boldmath$c$}_{l}\in\mathbb{R}^{n}\right.\right\\}.$
Suppose that $S$ is dense in $C(D)$. In other words, given $F\in C(D)$ and
$\eta>0$, there exists a function $G\in S$ such that
$|G(\xi)-F(\xi)|<\eta$
for any $\xi\in D$. Then, we say that $\sigma$ has a universal approximation
property (UAP) on $D$.
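A quick numerical check of this definition (an illustration, not part of the proof): draw the inner parameters $\mbox{\boldmath$c$}_{l}$, $d_{l}$ at random, take $\sigma=\tanh$ and $F(\xi)=\xi^{2}$ on $D=[-1,1]$, and fit the outer coefficients $\alpha_{l}$ by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target F on the compact set D = [-1, 1].
F = lambda x: x ** 2
xs = np.linspace(-1.0, 1.0, 200)

L = 50                              # number of terms in G
c = rng.normal(size=L) * 3.0        # random inner weights c_l
d = rng.normal(size=L) * 3.0        # random biases d_l
Phi = np.tanh(np.outer(xs, c) + d)  # feature matrix sigma(c_l * xi + d_l)

# The UAP guarantees coefficients with small uniform error exist;
# least squares simply finds good ones for this sample of D.
alpha, *_ = np.linalg.lstsq(Phi, F(xs), rcond=None)
err = np.max(np.abs(Phi @ alpha - F(xs)))
```

Even with random inner parameters, the resulting $G(\xi)=\sum_{l}\alpha_{l}\tanh(c_{l}\xi+d_{l})$ tracks $F$ uniformly on the sampled grid, matching the density claim of Definition 2.1.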
Some activation functions with the universal approximation property are
presented in Table 3.
Table 3: Examples of activation functions with the universal approximation property

| Activation function | $\sigma(x)$
---|---|---
Unbounded functions
| Truncated power function | $x_{+}^{k}:=\left\\{\begin{array}[]{ll}x^{k}&x>0\\\ 0&x\leq 0\end{array}\right.\quad k\in\mathbb{N}\cup\\{0\\}$
| ReLU function | $x_{+}$
| Softplus function | $\log(1+e^{x})$
Bounded but not integrable functions
| Unit step function | $x_{+}^{0}$
| (Standard) Sigmoidal function | $(1+e^{-x})^{-1}$
| Hyperbolic tangent function | $\tanh(x)$
Bump functions
| (Gaussian) Radial basis function | $\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^{2}}{2}\right)$
| Dirac’s $\delta$ function | $\delta(x)$
A non-polynomial activation function in a three-layer neural network has a
universal approximation property. Such a result was shown by Leshno [15] using
functional analysis and later by Sonoda and Murata [21] using the ridgelet
transform.
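A member of the set $S$ from Definition 2.1 is just a shallow ridge expansion; the snippet below is our illustrative sketch, with tanh standing in for any activation with the UAP:

```python
import numpy as np

def shallow_net(xi, alpha, C, d, sigma=np.tanh):
    # G(xi) = sum_{l=1}^{L} alpha_l * sigma(c_l . xi + d_l), an element of S.
    # alpha, d have shape (L,); C stacks the ridge directions c_l as rows.
    return float(np.sum(alpha * sigma(C @ xi + d)))

# Opposite coefficients on identical ridges cancel exactly:
alpha = np.array([1.0, -1.0])
C = np.array([[1.0, 0.0], [1.0, 0.0]])
d = np.zeros(2)
print(shallow_net(np.array([0.5, 2.0]), alpha, C, d))  # 0.0
```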
### 2.2 Main Theorem for ODENet
In this subsection, we show the universal approximation property for the
ODENet associated with the ODE system (2.3).
###### Definition 2.2 (ODENet).
Suppose that an $m\times n$ real matrix $A$ and a function
$\sigma:\mathbb{R}\to\mathbb{R}$ are given. We consider a system of ODEs
$\left\\{\begin{aligned} x^{\prime}(t)&=\beta(t)x(t)+\gamma(t),&t\in(0,T],\\\
y^{\prime}(t)&=\alpha(t)\odot\mbox{\boldmath$\sigma$}(Ax(t)),&t\in(0,T],\\\
x(0)&=\xi,&\\\ y(0)&=0,&\end{aligned}\right.$ (2.3)
where $x$ and $y$ are functions from $[0,T]$ to $\mathbb{R}^{n}$ and
$\mathbb{R}^{m}$, respectively; $\xi\in\mathbb{R}^{n}$ is the input data and
$y(T)\in\mathbb{R}^{m}$ is the final output. Moreover, the functions
$\alpha:[0,T]\to\mathbb{R}^{m}$, $\beta:[0,T]\to\mathbb{R}^{n\times n}$, and
$\gamma:[0,T]\to\mathbb{R}^{n}$ are design parameters. The function
$\mbox{\boldmath$\sigma$}:\mathbb{R}^{m}\to\mathbb{R}^{m}$ is defined by (2.1),
and the operator $\odot$ denotes the Hadamard product defined by (2.2). We
call $H=[\xi\mapsto y(T)]:\mathbb{R}^{n}\to\mathbb{R}^{m}$ an ODENet
associated with the ODE system (2.3).
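A forward pass of this ODENet can be sketched by integrating (2.3) with the explicit Euler method, the same discretization used later in Section 3.2. Passing the design parameters as callables and using tanh and $L$ steps are our illustrative choices, not the paper's:

```python
import numpy as np

def odenet_forward(xi, A, alpha, beta, gamma, T=1.0, L=100, sigma=np.tanh):
    # Explicit-Euler integration of the ODE system (2.3):
    #   x' = beta(t) x + gamma(t),  y' = alpha(t) (Hadamard) sigma(A x),
    #   x(0) = xi, y(0) = 0; returns y(T), the output of the ODENet.
    h = T / L
    x = np.array(xi, dtype=float)
    y = np.zeros(A.shape[0])
    for l in range(L):
        t = l * h
        x_next = x + h * (beta(t) @ x + gamma(t))
        y = y + h * (alpha(t) * sigma(A @ x))  # uses x at time t
        x = x_next
    return y

# With beta = gamma = 0, x stays at xi, so y(T) = T * alpha (Hadamard) sigma(A xi).
A = np.eye(2)
y = odenet_forward(np.array([1.0, 0.0]), A,
                   alpha=lambda t: np.ones(2),
                   beta=lambda t: np.zeros((2, 2)),
                   gamma=lambda t: np.zeros(2))
print(y)  # approximately [tanh(1), 0]
```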
For a compact subset $D\subset\mathbb{R}^{n}$, we define
$S(D):=\\{[\xi\mapsto y(T)]\in C(D;\mathbb{R}^{m})|\alpha\in
C^{\infty}([0,T];\mathbb{R}^{m}),\beta\in C^{\infty}([0,T];\mathbb{R}^{n\times
n}),\gamma\in C^{\infty}([0,T];\mathbb{R}^{n})\\}.$
We will assume that the activation function is locally Lipschitz continuous,
in other words,
$\forall R>0,~{}\exists
L_{R}>0~{}\mathrm{s.t.}\quad|\sigma(s_{1})-\sigma(s_{2})|\leq
L_{R}|s_{1}-s_{2}|\quad\mathrm{for}~{}s_{1},s_{2}\in[-R,R].$ (2.4)
###### Theorem 2.3 (UAP for ODENet).
Suppose that $m\leq n$ and $\mathrm{rank}(A)=m$. If
$\sigma:\mathbb{R}\to\mathbb{R}$ satisfies (2.4) and has UAP on a compact
subset $D\subset\mathbb{R}^{n}$, then $S(D)$ is dense in
$C(D;\mathbb{R}^{m})$. In other words, given $F\in C(D;\mathbb{R}^{m})$ and
$\eta>0$, there exists a function $H\in S(D)$ such that
$|H(\xi)-F(\xi)|<\eta,$
for any $\xi\in D$.
###### Corollary 2.4.
Let $1\leq p<\infty$. Then, $S(D)$ is dense in $L^{p}(D;\mathbb{R}^{m})$. In
other words, given $F\in L^{p}(D;\mathbb{R}^{m})$ and $\eta>0$, there exists a
function $H\in S(D)$ such that
$\|H-F\|_{L^{p}(D;\mathbb{R}^{m})}<\eta.$
### 2.3 Main Theorem for ResNet
In this subsection, we show that a universal approximation property also holds
for a ResNet with the system of difference equations (2.5).
###### Definition 2.5 (ResNet).
Suppose that an $m\times n$ real matrix $A$ and a function
$\sigma:\mathbb{R}\to\mathbb{R}$ are given. We consider a system of difference
equations
$\left\\{\begin{aligned}
x^{(l)}&=x^{(l-1)}+\beta^{(l)}x^{(l-1)}+\gamma^{(l)},&l=1,2,\ldots,L\\\
y^{(l)}&=y^{(l-1)}+\alpha^{(l)}\odot\mbox{\boldmath$\sigma$}(Ax^{(l)}),&l=1,2,\ldots,L\\\
x^{(0)}&=\xi,&\\\ y^{(0)}&=0,&\end{aligned}\right.$ (2.5)
where $x^{(l)}$ and $y^{(l)}$ are $n$- and $m$-dimensional real vectors for
all $l=0,1,\ldots,L$, respectively. Also, $\xi\in\mathbb{R}^{n}$ denotes the
input data while $y^{(L)}\in\mathbb{R}^{m}$ represents the final output.
Moreover, $\alpha^{(l)}\in\mathbb{R}^{m}$, $\beta^{(l)}\in\mathbb{R}^{n\times
n}$, and $\gamma^{(l)}\in\mathbb{R}^{n}~{}(l=1,2,\ldots,L)$ are design
parameters. The function
$\mbox{\boldmath$\sigma$}:\mathbb{R}^{m}\to\mathbb{R}^{m}$ is
defined by (2.1) and the operator $\odot$ denotes the Hadamard product defined
by (2.2). We call the function $H=[\xi\mapsto y^{(L)}]:D\to\mathbb{R}^{m}$ a
ResNet with a system of difference equations (2.5).
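The difference equations (2.5) translate directly into a forward pass; the sketch below is ours, with tanh as a placeholder activation:

```python
import numpy as np

def resnet_forward(xi, A, alphas, betas, gammas, sigma=np.tanh):
    # Forward pass of the ResNet (2.5): residual updates on x, running
    # accumulation on y; the parameter lists have length L.
    x = np.array(xi, dtype=float)
    y = np.zeros(A.shape[0])
    for al, be, ga in zip(alphas, betas, gammas):
        x = x + be @ x + ga        # x^(l) = x^(l-1) + beta^(l) x^(l-1) + gamma^(l)
        y = y + al * sigma(A @ x)  # y^(l) = y^(l-1) + alpha^(l) (Hadamard) sigma(A x^(l))
    return y

# One layer with beta = gamma = 0 gives y^(1) = alpha (Hadamard) sigma(A xi).
A = np.eye(2)
out = resnet_forward(np.array([1.0, 0.0]), A,
                     alphas=[np.ones(2)],
                     betas=[np.zeros((2, 2))],
                     gammas=[np.zeros(2)])
print(out)  # approximately [tanh(1), 0]
```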
For a compact subset $D\subset\mathbb{R}^{n}$, we define
$S_{\mathrm{res}}(D):=\\{[\xi\mapsto y^{(L)}]\in
C(D;\mathbb{R}^{m})|L\in\mathbb{N},\alpha^{(l)}\in\mathbb{R}^{m},\beta^{(l)}\in\mathbb{R}^{n\times
n},\gamma^{(l)}\in\mathbb{R}^{n}~{}(l=1,2,\ldots,L)\\}.$
###### Theorem 2.6 (UAP for ResNet).
Suppose that $m\leq n$ and $\mathrm{rank}(A)=m$. If
$\sigma:\mathbb{R}\to\mathbb{R}$ satisfies (2.4) and has UAP on a compact
subset $D\subset\mathbb{R}^{n}$, then $S_{\mathrm{res}}(D)$ is dense in
$C(D;\mathbb{R}^{m})$.
### 2.4 Some lemmas
We describe some lemmas used to prove Theorems 2.3 and 2.6.
###### Lemma 2.7.
Suppose that $m\leq n$. Let $\sigma$ be a function from $\mathbb{R}^{m}$ to
$\mathbb{R}^{m}$ defined by (2.1). For any $\alpha,d\in\mathbb{R}^{m}$ and
$C=(\mbox{\boldmath$c$}_{1},\mbox{\boldmath$c$}_{2},\ldots,\mbox{\boldmath$c$}_{m})^{\top}\in\mathbb{R}^{m\times
n}$ which has no zero rows (i.e. $\mbox{\boldmath$c$}_{l}\neq 0$ for
$l=1,2,\ldots,m$), there exist
$\tilde{\alpha}^{(l)},\tilde{d}^{(l)}\in\mathbb{R}^{m}$, and
$\tilde{C}^{(l)}\in\mathbb{R}^{m\times n}~{}(l=1,2,\ldots,m)$ such that
$\alpha\odot\mbox{\boldmath$\sigma$}(C\xi+d)=\sum_{l=1}^{m}\tilde{\alpha}^{(l)}\odot\mbox{\boldmath$\sigma$}(\tilde{C}^{(l)}\xi+\tilde{d}^{(l)}),$
for any $\xi\in\mathbb{R}^{n}$, and $\mathrm{rank}(\tilde{C}^{(l)})=m$, for
all $l=1,2,\ldots,m$. Moreover, if $m=n$, we can choose
$\tilde{C}^{(l)}\in\mathbb{R}^{n\times n}$ such that $\det\tilde{C}^{(l)}>0$,
for all $l=1,2,\ldots,n$.
###### Proof.
Let $m\leq n$. For all $l=1,2,\ldots,m$, there exists
$\tilde{C}^{(l)}=(\tilde{\mbox{\boldmath$c$}}_{1}^{(l)},\tilde{\mbox{\boldmath$c$}}_{2}^{(l)},\ldots,\tilde{\mbox{\boldmath$c$}}_{m}^{(l)})^{\top}\in\mathbb{R}^{m\times
n}$ such that $\tilde{\mbox{\boldmath$c$}}_{l}^{(l)}=\mbox{\boldmath$c$}_{l}$,
$\mathrm{rank}(\tilde{C}^{(l)})=m$. Then, we put
$\tilde{\alpha}_{k}^{(l)}:=\left\\{\begin{array}[]{ll}\alpha_{k},&\mathrm{if}~{}l=k,\\\
0,&\mathrm{if}~{}l\neq
k,\end{array}\right.\quad\tilde{d}_{k}^{(l)}:=\left\\{\begin{array}[]{ll}d_{k},&\mathrm{if}~{}l=k,\\\
0,&\mathrm{if}~{}l\neq k.\end{array}\right.$
Looking at the $k$-th component, we see that for any $\xi\in\mathbb{R}^{n}$,
we have
$\sum_{l=1}^{m}\tilde{\alpha}_{k}^{(l)}\sigma(\tilde{\mbox{\boldmath$c$}}_{k}^{(l)}\cdot\xi+\tilde{d}_{k}^{(l)})=\tilde{\alpha}_{k}^{(k)}\sigma(\tilde{\mbox{\boldmath$c$}}_{k}^{(k)}\cdot\xi+\tilde{d}_{k}^{(k)})=\alpha_{k}\sigma(\mbox{\boldmath$c$}_{k}\cdot\xi+d_{k}).$
Therefore,
$\sum_{l=1}^{m}\tilde{\alpha}^{(l)}\odot\mbox{\boldmath$\sigma$}(\tilde{C}^{(l)}\xi+\tilde{d}^{(l)})=\alpha\odot\mbox{\boldmath$\sigma$}(C\xi+d).$
Now, if $m=n$, then $\mathrm{rank}(\tilde{C}^{(l)})=n$, and so
$\det(\tilde{C}^{(l)})\neq 0$. Since only the $l$-th row of $\tilde{C}^{(l)}$
is prescribed, we can choose the remaining rows so that
$\det(\tilde{C}^{(l)})>0$. ∎
###### Lemma 2.8.
Suppose that $m\leq n$. Let $\sigma$ be a function from $\mathbb{R}^{m}$ to
$\mathbb{R}^{m}$. For any
$L\in\mathbb{N},\alpha^{(l)},d^{(l)}\in\mathbb{R}^{m},C^{(l)}\in\mathbb{R}^{m\times
n}~{}(l=1,2,\ldots,L)$, there exists
$L^{\prime}\in\mathbb{N},\tilde{\alpha}^{(l)},\tilde{d}^{(l)}\in\mathbb{R}^{m},\tilde{C}^{(l)}\in\mathbb{R}^{m\times
n}~{}(l=1,2,\ldots,L^{\prime})$ such that
$\frac{1}{L}\sum_{l=1}^{L}\alpha^{(l)}\odot\mbox{\boldmath$\sigma$}(C^{(l)}\xi+d^{(l)})=\frac{1}{L^{\prime}}\sum_{l=1}^{L^{\prime}}\tilde{\alpha}^{(l)}\odot\mbox{\boldmath$\sigma$}(\tilde{C}^{(l)}\xi+\tilde{d}^{(l)})$
for any $\xi\in\mathbb{R}^{n}$, and $\mathrm{rank}(\tilde{C}^{(l)})=m$, for
all $l=1,2,\ldots,L^{\prime}$. Moreover, if $m=n$, we can choose
$\tilde{C}^{(l)}\in\mathbb{R}^{m\times n}$ such that $\det\tilde{C}^{(l)}>0$,
for all $l=1,2,\ldots,L^{\prime}$.
###### Proof.
This follows by applying Lemma 2.7 to each of the $L$ summands: every term
expands into $m$ terms with rank-$m$ coefficient matrices, and relabeling the
resulting $L^{\prime}=Lm$ terms (with the coefficients rescaled by $m$ to
account for the prefactor $1/L^{\prime}$) yields the claim. ∎
###### Lemma 2.9.
Suppose that $m<n$. Let $A$ be an $m\times n$ real matrix satisfying
$\mathrm{rank}(A)=m$. Then, for any $C\in\mathbb{R}^{m\times n}$ satisfying
$\mathrm{rank}(C)=m$, there exists $P\in\mathbb{R}^{n\times n}$ such that
$C=AP,\quad\det P>0.$ (2.6)
In addition, if $m=n$ and $\mathrm{sgn}(\det C)=\mathrm{sgn}(\det A)$, there
exists $P\in\mathbb{R}^{n\times n}$ such that (2.6) holds.
###### Proof.
1. (i)
Suppose that $m<n$. Since $\mathrm{rank}(A)=\mathrm{rank}(C)=m$, there exist
$\bar{A},\bar{C}\in\mathbb{R}^{(n-m)\times n}$ such that
$\tilde{A}=\left(\begin{array}[]{c}A\\\
\bar{A}\end{array}\right),\quad\det\tilde{A}>0,\quad\tilde{C}=\left(\begin{array}[]{c}C\\\
\bar{C}\end{array}\right),\quad\det\tilde{C}>0.$
If we put $P:=\tilde{A}^{-1}\tilde{C}$, we get $\det P>0$, $C=AP$.
2. (ii)
Suppose that $m=n$. We put $P:=A^{-1}C$. Because $\mathrm{sgn}(\det
C)=\mathrm{sgn}(\det A)$, we have $\det P>0$, and so $C=AP$.
∎
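The square case (ii) of Lemma 2.9 is easy to check numerically; the matrices below are arbitrary examples of ours with positive determinants:

```python
import numpy as np

# Square case of Lemma 2.9: if sgn(det C) = sgn(det A), then P := A^{-1} C
# satisfies C = A P with det P > 0 (here det A = 2, det C = 3, det P = 1.5).
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0],
              [1.0, 3.0]])
P = np.linalg.solve(A, C)  # P = A^{-1} C without forming the inverse
print(np.allclose(A @ P, C), np.linalg.det(P) > 0)  # True True
```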
###### Lemma 2.10.
Let $p\in[0,\infty)$. Suppose that
$P(t)=P^{(l)}\in\mathbb{R}^{n\times n},\quad\det P^{(l)}>0,$
for $t_{l-1}\leq t<t_{l}$, and for all $l=1,2,\ldots,L$, where $t_{0}=0$ and
$t_{L}=T$. Then, there exists a real number $C>0$ such that, for any
$\varepsilon>0$, there exists $P^{\varepsilon}\in C([0,T];\mathbb{R}^{n\times
n})$ such that
$\|P^{\varepsilon}-P\|_{L^{p}(0,T;\mathbb{R}^{n\times
n})}<\varepsilon,\quad\det
P^{\varepsilon}(t)>0,\quad\mathrm{and}\quad\|P^{\varepsilon}(t)\|\leq C,$
for any $t\in[0,T]$.
###### Proof.
We define $\mathrm{GL}^{+}(n,\mathbb{R}):=\\{A\in\mathbb{R}^{n\times n}|\det
A>0\\}$. From [1, Chapter 9, p.239], $\mathrm{GL}^{+}(n,\mathbb{R})$ is path-
connected. For all $l=1,2,\ldots,L-1$, there exists $Q^{(l)}\in
C([0,1];\mathbb{R}^{n\times n})$ such that
$Q^{(l)}(0)=P^{(l)},\quad Q^{(l)}(1)=P^{(l+1)},\quad\mathrm{and}\quad\det
Q^{(l)}(s)>0,$
for any $s\in[0,1]$. For $\delta>0$, we put
$Q^{\delta}(t):=\left\\{\begin{array}[]{lll}P^{(1)},&-\infty<t<t_{1},&\\\
\displaystyle{Q^{(l)}\left(\frac{t-t_{l}}{\delta}\right)},&t_{l}\leq
t<t_{l}+\delta,&(l=1,2,\ldots,L-1),\\\ P^{(l)},&t_{l-1}+\delta\leq
t<t_{l},&(l=2,3,\ldots,L-1),\\\ P^{(L)},&t_{L-1}+\delta\leq
t<\infty.&\end{array}\right.$
Then, $Q^{\delta}$ is a continuous function from $\mathbb{R}$ to
$\mathbb{R}^{n\times n}$. There exists a $C_{0}>0$ such that $\det
Q^{\delta}(t)\geq C_{0}$, for any $t\in\mathbb{R}$. Let
$\\{\varphi_{\varepsilon}\\}_{\varepsilon>0}$ be a sequence of Friedrichs’
mollifiers in $\mathbb{R}$. We put
$P^{\varepsilon}(t):=(\varphi_{\varepsilon}*Q^{\delta})(t).$
Then, $P^{\varepsilon}\in C^{\infty}(\mathbb{R};\mathbb{R}^{n\times n})$.
Since
$\lim_{\varepsilon\to
0}\|P^{\varepsilon}-Q^{\delta}\|_{C([0,T];\mathbb{R}^{n\times n})}=0,$
there exists a number $\varepsilon_{0}>0$ such that, for any
$\varepsilon\leq\varepsilon_{0}$,
$\det P^{\varepsilon}(t)\geq\frac{C_{0}}{2}$
for all $t\in[0,T]$. Because $Q^{\delta}$ is bounded, there exists a number
$C>0$ such that $\|P^{\varepsilon}(t)\|\leq C$, for any $t\in[0,T]$. Now, we
note that
$\|P^{\varepsilon}-P\|_{L^{p}(0,T;\mathbb{R}^{n\times
n})}\leq\|P^{\varepsilon}-Q^{\delta}\|_{L^{p}(0,T;\mathbb{R}^{n\times
n})}+\|Q^{\delta}-P\|_{L^{p}(0,T;\mathbb{R}^{n\times n})}.$
The last summand is calculated as follows
$\displaystyle\|Q^{\delta}-P\|_{L^{p}(0,T;\mathbb{R}^{n\times n})}^{p}$
$\displaystyle=\int_{0}^{T}\|Q^{\delta}(t)-P(t)\|^{p}dt,$
$\displaystyle=\sum_{l=1}^{L-1}\int_{t_{l}}^{t_{l}+\delta}\left\|Q^{(l)}\left(\frac{t-t_{l}}{\delta}\right)-P^{(l+1)}\right\|^{p}dt,$
$\displaystyle=\delta\sum_{l=1}^{L-1}\int_{0}^{1}\|Q^{(l)}(s)-P^{(l+1)}\|^{p}ds.$
Hence, if $\delta\to 0$, then $\|Q^{\delta}-P\|_{L^{p}(0,T;\mathbb{R}^{n\times
n})}\to 0$. Therefore,
$\|P^{\varepsilon}-P\|_{L^{p}(0,T;\mathbb{R}^{n\times n})}<\varepsilon,$
for any $\varepsilon>0$. ∎
### 2.5 Proofs
In this subsection, we provide the proof of Theorem 2.3 and Theorem 2.6.
#### 2.5.1 Proof of Theorem 2.3
###### Proof.
Since $\mbox{\boldmath$\sigma$}\in C(\mathbb{R}^{m};\mathbb{R}^{m})$ is
defined by (2.1), where $\sigma\in C(\mathbb{R})$ has the UAP on $D$, given
$F\in C(D;\mathbb{R}^{m})$ and $\eta>0$ there exist a positive integer $L$,
$\mathbb{R}^{m}$-valued vectors $\alpha^{(l)}$ and $d^{(l)}$, and matrices
$C^{(l)}\in\mathbb{R}^{m\times n}$, for all $l=1,2,\ldots,L$, such that
$G(\xi)=\frac{T}{L}\sum_{l=1}^{L}\alpha^{(l)}\odot\mbox{\boldmath$\sigma$}(C^{(l)}\xi+d^{(l)}),$
$|G(\xi)-F(\xi)|<\frac{\eta}{2},$ (2.7)
for any $\xi\in D$. By Lemma 2.8, we may assume that $\mathrm{rank}(C^{(l)})=m$,
for $l=1,2,\ldots,L$. In addition, when $m=n$, we may assume that
$\mathrm{sgn}(\det A)=\mathrm{sgn}(\det C^{(l)})$. In view of Lemma 2.9, there exists a matrix
$P^{(l)}\in\mathbb{R}^{n\times n}$ such that $\det P^{(l)}>0$ and
$C^{(l)}=AP^{(l)}$, for each $l=1,2,\ldots,L$. We put
$q^{(l)}:=A^{\top}(AA^{\top})^{-1}d^{(l)}$ so that $d^{(l)}=Aq^{(l)}$. In
addition, we let
$\alpha(t):=\alpha^{(l)},\quad P(t):=P^{(l)},\quad
q(t):=q^{(l)},\quad\frac{l-1}{L}T\leq t<\frac{l}{L}T.$
Then, $\det P(t)>0$ for any $t\in[0,T]$ and
$G(\xi)=\frac{T}{L}\sum_{l=1}^{L}\alpha^{(l)}\odot\mbox{\boldmath$\sigma$}(AP^{(l)}\xi+Aq^{(l)})=\int_{0}^{T}\alpha(t)\odot\mbox{\boldmath$\sigma$}(A(P(t)\xi+q(t)))dt.$
Let $\\{\varphi_{\varepsilon}\\}_{\varepsilon>0}$ be a sequence of Friedrichs’
mollifiers. We put
$\alpha^{\varepsilon}(t):=(\varphi_{\varepsilon}*\alpha)(t)$ and
$q^{\varepsilon}(t):=(\varphi_{\varepsilon}*q)(t)$. Then,
$\alpha^{\varepsilon}\in C^{\infty}([0,T];\mathbb{R}^{m})$ and
$q^{\varepsilon}\in C^{\infty}([0,T];\mathbb{R}^{n})$. From Lemma 2.10, there
exists a real number $C>0$ such that, given $\eta>0$, there exists
$P^{\varepsilon}\in C^{\infty}([0,T];\mathbb{R}^{n\times n})$ from which we
have
$\|P^{\varepsilon}-P\|_{L^{1}(0,T;\mathbb{R}^{n\times n})}<\eta,\quad\det
P^{\varepsilon}(t)>0,\quad\|P^{\varepsilon}(t)\|\leq C,$
for any $t\in[0,T]$. If we put
$x^{\varepsilon}(t;\xi):=P^{\varepsilon}(t)\xi+q^{\varepsilon}(t),$ (2.8)
$y^{\varepsilon}(t;\xi):=\int_{0}^{T}\alpha^{\varepsilon}(s)\odot\mbox{\boldmath$\sigma$}(Ax^{\varepsilon}(s;\xi))ds,$
(2.9)
then
$y^{\varepsilon}(T;\xi)=\int_{0}^{T}\alpha^{\varepsilon}(t)\odot\mbox{\boldmath$\sigma$}(A(P^{\varepsilon}(t)\xi+q^{\varepsilon}(t)))dt.$
Hence, we have
$\displaystyle|y^{\varepsilon}(T;\xi)-G(\xi)|$ $\displaystyle\leq$
$\displaystyle\int_{0}^{T}\left|\alpha^{\varepsilon}(t)\odot\mbox{\boldmath$\sigma$}(A(P^{\varepsilon}(t)\xi+q^{\varepsilon}(t)))-\alpha(t)\odot\mbox{\boldmath$\sigma$}(A(P(t)\xi+q(t)))\right|dt,$
$\displaystyle\leq$
$\displaystyle\int_{0}^{T}|\alpha^{\varepsilon}(t)-\alpha(t)||\mbox{\boldmath$\sigma$}(A(P(t)\xi+q(t)))|dt,$
$\displaystyle+\int_{0}^{T}|\alpha^{\varepsilon}(t)||\mbox{\boldmath$\sigma$}(A(P^{\varepsilon}(t)\xi+q^{\varepsilon}(t)))-\mbox{\boldmath$\sigma$}(A(P(t)\xi+q(t)))|dt.$
Because $P$ and $q$ are piecewise constant functions, then they are bounded.
Since $\mbox{\boldmath$\sigma$}\in C(\mathbb{R}^{m};\mathbb{R}^{m})$, there
exists $M>0$ such that $|\mbox{\boldmath$\sigma$}(A(P(t)\xi+q(t)))|\leq M$,
for any $t\in[0,T]$. On the other hand, we have the estimate
$|\alpha^{\varepsilon}(t)|\leq\int_{\mathbb{R}}\varphi_{\varepsilon}(t-s)|\alpha(s)|ds\leq\|\alpha\|_{L^{\infty}(0,T;\mathbb{R}^{m})}\int_{\mathbb{R}}\varphi_{\varepsilon}(\tau)d\tau=\|\alpha\|_{L^{\infty}(0,T;\mathbb{R}^{m})}.$
Similarly, because
$\|q^{\varepsilon}\|_{L^{\infty}(0,T;\mathbb{R}^{n})}\leq\|q\|_{L^{\infty}(0,T;\mathbb{R}^{n})}$,
then $q^{\varepsilon}$ is bounded. Hence, there exists $R>0$ such that
$A(P^{\varepsilon}(t)\xi+q^{\varepsilon}(t))$, $A(P(t)\xi+q(t))\in[-R,R]^{m}$,
for any $t\in[0,T]$ and $\xi\in D$. By the local Lipschitz condition (2.4),
$\displaystyle|\mbox{\boldmath$\sigma$}(A(P^{\varepsilon}(t)\xi+q^{\varepsilon}(t)))-\mbox{\boldmath$\sigma$}(A(P(t)\xi+q(t)))|$
$\displaystyle\leq L_{R}\|A\|\left(\|P^{\varepsilon}(t)-P(t)\|(\max_{\xi\in
D}|\xi|)+|q^{\varepsilon}(t)-q(t)|\right).$
Therefore,
$\displaystyle|y^{\varepsilon}(T;\xi)-G(\xi)|\leq
M\|\alpha^{\varepsilon}-\alpha\|_{L^{1}(0,T;\mathbb{R}^{m})}$
$\displaystyle+L_{R}\|A\|\|\alpha\|_{L^{\infty}(0,T;\mathbb{R}^{m})}\left(\|P^{\varepsilon}-P\|_{L^{1}(0,T;\mathbb{R}^{n\times
n})}(\max_{\xi\in
D}|\xi|)+\|q^{\varepsilon}-q\|_{L^{1}(0,T;\mathbb{R}^{n})}\right).$
We know that there exists a number $\varepsilon>0$ such that
$|y^{\varepsilon}(T;\xi)-G(\xi)|<\frac{\eta}{2},$ (2.10)
for any $\xi\in D$. Thus, from (2.7) and (2.10),
$|y^{\varepsilon}(T;\xi)-F(\xi)|\leq|y^{\varepsilon}(T;\xi)-G(\xi)|+|G(\xi)-F(\xi)|<\eta,$
for any $\xi\in D$. For all $t\in[0,T]$, we know that $\det
P^{\varepsilon}(t)>0$, so $P^{\varepsilon}(t)$ is invertible. This allows us
to define
$\beta(t):=\left(\frac{d}{dt}P^{\varepsilon}(t)\right)\left(P^{\varepsilon}(t)\right)^{-1},\quad\gamma(t):=\frac{d}{dt}q^{\varepsilon}(t)-\beta(t)q^{\varepsilon}(t).$
This gives us
$\frac{d}{dt}P^{\varepsilon}(t)=\beta(t)P^{\varepsilon}(t),\quad\frac{d}{dt}q^{\varepsilon}(t)=\beta(t)q^{\varepsilon}(t)+\gamma(t).$
In view of (2.8) and (2.9),
$\frac{d}{dt}x^{\varepsilon}(t;\xi)=\frac{d}{dt}P^{\varepsilon}(t)\xi+\frac{d}{dt}q^{\varepsilon}(t)=\beta(t)P^{\varepsilon}(t)\xi+\beta(t)q^{\varepsilon}(t)+\gamma(t)=\beta(t)x^{\varepsilon}(t;\xi)+\gamma(t),$
$\frac{d}{dt}y^{\varepsilon}(t;\xi)=\alpha^{\varepsilon}(t)\odot\mbox{\boldmath$\sigma$}(Ax^{\varepsilon}(t;\xi)).$
Hence, $y^{\varepsilon}(T,\cdot)\in S(D)$. Therefore, given $F\in
C(D;\mathbb{R}^{m})$ and $\eta>0$, there exist some functions $\alpha\in
C^{\infty}([0,T];\mathbb{R}^{m})$, $\beta\in
C^{\infty}([0,T];\mathbb{R}^{n\times n})$, and $\gamma\in
C^{\infty}([0,T];\mathbb{R}^{n})$ such that
$|y(T;\xi)-F(\xi)|<\eta,$
for any $\xi\in D$. ∎
#### 2.5.2 Proof of Theorem 2.6
We now proceed with the proof of Theorem 2.6.
###### Proof.
Again, we start with the fact that $\mbox{\boldmath$\sigma$}\in
C(\mathbb{R}^{m};\mathbb{R}^{m})$ is defined by (2.1), where $\sigma\in
C(\mathbb{R})$ satisfies a UAP; that is, given $F\in C(D;\mathbb{R}^{m})$ and
$\eta>0$, there exist a positive integer $L$, $\mathbb{R}^{m}$-valued vectors
$\alpha^{(l)}$ and $d^{(l)}$, and matrices $C^{(l)}\in\mathbb{R}^{m\times n}$,
for all $l=1,2,\ldots,L$, such that
$G(\xi)=\sum_{l=1}^{L}\alpha^{(l)}\odot\mbox{\boldmath$\sigma$}(C^{(l)}\xi+d^{(l)}),$
$|G(\xi)-F(\xi)|<\eta,$
for any $\xi\in D$. By virtue of Lemma 2.8, we may assume that
$\mathrm{rank}(C^{(l)})=m$, for all $l=1,2,\ldots,L$. Moreover, if $m=n$, we may
assume that $\mathrm{sgn}(\det A)=\mathrm{sgn}(\det C^{(l)})$. On the other hand,
from Lemma 2.9, there exists $P^{(l)}\in\mathbb{R}^{n\times n}$ such that
$\det P^{(l)}>0$ and $C^{(l)}=AP^{(l)}$, for each $l=1,2,\ldots,L$. Putting
$q^{(l)}:=A^{\top}(AA^{\top})^{-1}d^{(l)}$, we get $d^{(l)}=Aq^{(l)}$, from
which we obtain
$G(\xi)=\sum_{l=1}^{L}\alpha^{(l)}\odot\mbox{\boldmath$\sigma$}(A(P^{(l)}\xi+q^{(l)})).$
Next, we define
$x^{(l)}:=P^{(l)}\xi+q^{(l)},\quad
y^{(l)}:=\sum_{i=1}^{l}\alpha^{(i)}\odot\mbox{\boldmath$\sigma$}(Ax^{(i)}),$
$\beta^{(l)}:=(P^{(l)}-P^{(l-1)})(P^{(l-1)})^{-1},\quad\gamma^{(l)}:=q^{(l)}-q^{(l-1)}-\beta^{(l)}q^{(l-1)},$
for all $l=1,2,\ldots,L$, and set $P^{(0)}:=I_{n}$, $q^{(0)}:=0$. Because
$P^{(l)}-P^{(l-1)}=\beta^{(l)}P^{(l-1)}$ and
$q^{(l)}-q^{(l-1)}=\beta^{(l)}q^{(l-1)}+\gamma^{(l)}$ hold true, then
$x^{(l)}-x^{(l-1)}=(P^{(l)}-P^{(l-1)})\xi+(q^{(l)}-q^{(l-1)})=\beta^{(l)}x^{(l-1)}+\gamma^{(l)},$
$y^{(L)}=\sum_{l=1}^{L}\alpha^{(l)}\odot\mbox{\boldmath$\sigma$}(A(P^{(l)}\xi+q^{(l)}))=G(\xi).$
Hence, $[\xi\mapsto y^{(L)}]\in S_{\mathrm{res}}(D)$. Therefore, given $F\in
C(D;\mathbb{R}^{m})$ and $\eta>0$, there exists
$L\in\mathbb{N},\alpha^{(l)}\in\mathbb{R}^{m},\beta^{(l)}\in\mathbb{R}^{n\times
n},\gamma^{(l)}\in\mathbb{R}^{n}~{}(l=1,2,\ldots,L)$ such that
$|y^{(L)}-F(\xi)|<\eta,$
for any $\xi\in D$. ∎
## 3 The gradient and learning algorithm
### 3.1 The gradient of loss function with respect to the design parameter
We consider the ODENet associated with the ODE system (2.3). We also consider
the approximation of $F\in C(D;\mathbb{R}^{m})$. Let $K\in\mathbb{N}$ be the
number of training data and $\\{(\xi^{(k)},F(\xi^{(k)}))\\}_{k=1}^{K}\subset
D\times\mathbb{R}^{m}$ be the training data. We divide the index set of the
training data into the following disjoint sets:
$\\{1,2,\ldots,K\\}=I_{1}\cup I_{2}\cup\cdots\cup
I_{M}~{}(\mathrm{disjoint})\quad(1\leq M\leq K)$
Let $x^{(k)}(t)$ and $y^{(k)}(t)$ be the solution to (2.3) with the initial
value $\xi^{(k)}$. For all $\mu=1,2,\ldots,M$, let
$\mbox{\boldmath$x$}=(x^{(k)})_{k\in I_{\mu}}$ and
$\mbox{\boldmath$y$}=(y^{(k)})_{k\in I_{\mu}}$. We define the loss function as
follows:
$e_{\mu}[\mbox{\boldmath$x$},\mbox{\boldmath$y$}]=\frac{1}{|I_{\mu}|}\sum_{k\in
I_{\mu}}\left|y^{(k)}(T)-F(\xi^{(k)})\right|^{2},$ (3.1)
$E=\frac{1}{K}\sum_{k=1}^{K}\left|y^{(k)}(T)-F(\xi^{(k)})\right|^{2}.$ (3.2)
We consider the learning for each batch using the gradient method. We fix
$\mu\in\\{1,2,\ldots,M\\}$. Let $\lambda^{(k)}:[0,T]\to\mathbb{R}^{n}$ be the
adjoint variable satisfying the following adjoint equation for any $k\in I_{\mu}$:
$\left\\{\begin{aligned}
\frac{d}{dt}\lambda^{(k)}(t)&=-(\beta(t))^{\top}\lambda^{(k)}(t)-\frac{1}{|I_{\mu}|}A^{\top}\left(\left(y^{(k)}(T)-F(\xi^{(k)})\right)\odot\alpha(t)\odot\mbox{\boldmath$\sigma$}^{\prime}(Ax^{(k)}(t))\right),\\\
\lambda^{(k)}(T)&=0.\end{aligned}\right.$ (3.3)
Then, the gradients $G[\alpha]^{(\mu)}\in
C([0,T];\mathbb{R}^{m})$, $G[\beta]^{(\mu)}\in C([0,T];\mathbb{R}^{n\times n})$,
and $G[\gamma]^{(\mu)}\in C([0,T];\mathbb{R}^{n})$ of the loss function (3.1)
at $\alpha\in C([0,T];\mathbb{R}^{m})$, $\beta\in C([0,T];\mathbb{R}^{n\times
n})$, and $\gamma\in C([0,T];\mathbb{R}^{n})$, with respect to the inner
products of $L^{2}(0,T;\mathbb{R}^{m})$, $L^{2}(0,T;\mathbb{R}^{n\times n})$,
and $L^{2}(0,T;\mathbb{R}^{n})$, can be represented as
$G[\alpha]^{(\mu)}(t)=\frac{1}{|I_{\mu}|}\sum_{k\in
I_{\mu}}\left(y^{(k)}(T)-F(\xi^{(k)})\right)\odot\mbox{\boldmath$\sigma$}(Ax^{(k)}(t)),$
$G[\beta]^{(\mu)}(t)=\sum_{k\in
I_{\mu}}\lambda^{(k)}(t)\left(x^{(k)}(t)\right)^{\top},\quad
G[\gamma]^{(\mu)}(t)=\sum_{k\in I_{\mu}}\lambda^{(k)}(t),$
respectively.
### 3.2 Learning algorithm
In this subsection, we describe the learning algorithm of an ODENet associated
with the ODE system (2.3). The differential equations (2.3) and the adjoint
equation (3.3) are solved using the explicit Euler method. Let $h$ be the size
of the time step. We define $L:=\lfloor T/h\rfloor$. By
discretizing the ordinary differential equations (2.3), we obtain
$\left\\{\begin{aligned}
\frac{x_{l+1}^{(k)}-x_{l}^{(k)}}{h}&=\beta_{l}x_{l}^{(k)}+\gamma_{l},&l=0,1,\ldots,L-1,\\\
\frac{y_{l+1}^{(k)}-y_{l}^{(k)}}{h}&=\alpha_{l}\odot\mbox{\boldmath$\sigma$}(Ax_{l}^{(k)}),&l=0,1,\ldots,L-1,\\\
x_{0}^{(k)}&=\xi^{(k)},&\\\ y_{0}^{(k)}&=0,&\end{aligned}\right.$
for any $k\in I_{\mu}$. Furthermore, by discretizing the adjoint equation
(3.3), we obtain
$\left\\{\begin{aligned}
\frac{\lambda_{l}^{(k)}-\lambda_{l-1}^{(k)}}{h}&=-\beta_{l}^{\top}\lambda_{l}^{(k)}-\frac{1}{|I_{\mu}|}A^{\top}\left(\left(y_{L}^{(k)}-F(\xi^{(k)})\right)\odot\alpha_{l}\odot\mbox{\boldmath$\sigma$}^{\prime}(Ax_{l}^{(k)})\right),\\\
\lambda_{L}^{(k)}&=0,\end{aligned}\right.$
with $l=L,L-1,\ldots,1$ for any $k\in I_{\mu}$. Here we put
$\alpha_{l}=\alpha(lh),\quad\beta_{l}=\beta(lh),\quad\gamma_{l}=\gamma(lh),$
for all $l=0,1,\ldots,L$.
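The discrete forward sweep, backward adjoint sweep, and gradient formulas above fit in a short routine. The sketch below is our hedged reading of the scheme (tanh and its derivative are placeholder choices), not the authors' reference implementation:

```python
import numpy as np

def batch_gradients(xis, Fs, A, alphas, betas, gammas, h, sigma=np.tanh,
                    dsigma=lambda s: 1.0 - np.tanh(s) ** 2):
    # Forward pass (explicit Euler for (2.3)), backward adjoint sweep for
    # (3.3), and the gradients G[alpha]_l, G[beta]_l, G[gamma]_l of
    # Section 3.1, for one batch I_mu.
    L = len(alphas) - 1                # parameters are indexed l = 0, ..., L
    m, n = A.shape
    B = len(xis)                       # |I_mu|
    Ga = [np.zeros(m) for _ in range(L + 1)]
    Gb = [np.zeros((n, n)) for _ in range(L + 1)]
    Gg = [np.zeros(n) for _ in range(L + 1)]
    for xi, Fxi in zip(xis, Fs):
        xs = [np.asarray(xi, dtype=float)]
        y = np.zeros(m)
        for l in range(L):             # forward Euler
            xs.append(xs[l] + h * (betas[l] @ xs[l] + gammas[l]))
            y = y + h * alphas[l] * sigma(A @ xs[l])
        r = y - Fxi                    # residual y_L^(k) - F(xi^(k))
        lam = [np.zeros(n) for _ in range(L + 1)]  # adjoint, lam[L] = 0
        for l in range(L, 0, -1):      # backward sweep of the adjoint scheme
            lam[l - 1] = lam[l] + h * (betas[l].T @ lam[l]
                + (1.0 / B) * (A.T @ (r * alphas[l] * dsigma(A @ xs[l]))))
        for l in range(L + 1):
            Ga[l] = Ga[l] + (1.0 / B) * r * sigma(A @ xs[l])
            Gb[l] = Gb[l] + np.outer(lam[l], xs[l])
            Gg[l] = Gg[l] + lam[l]
    return Ga, Gb, Gg
```

With all parameters initialized to zero, the state stays at $\xi$, the adjoint vanishes, and only $G[\alpha]$ is nonzero, matching the formulas above.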
We perform the optimization of the loss function (3.2) using a stochastic
gradient descent (SGD). We show the learning algorithm in Algorithm 1.
Algorithm 1 Stochastic gradient descent method for ODENet
1: Choose $\eta>0$ and $\tau>0$
2: Set $\nu=0$ and choose
$\alpha_{(0)}\in\prod_{l=0}^{L}\mathbb{R}^{m},\beta_{(0)}\in\prod_{l=0}^{L}\mathbb{R}^{n\times
n}$ and $\gamma_{(0)}\in\prod_{l=0}^{L}\mathbb{R}^{n}$
3: repeat
4: Divide the index set of the training data
$\\{(\xi^{(k)},F(\xi^{(k)}))\\}_{k=1}^{K}$ into the following disjoint sets
$\\{1,2,\ldots,K\\}=I_{1}\cup I_{2}\cup\cdots\cup
I_{M}~{}(\mathrm{disjoint}),\quad(1\leq M\leq K)$
5: Set $\alpha^{(1)}=\alpha_{(\nu)},\beta^{(1)}=\beta_{(\nu)}$ and
$\gamma^{(1)}=\gamma_{(\nu)}$
6: for $\mu=1,M$ do
7: Solve $\left\\{\begin{aligned}
\frac{x_{l+1}^{(k)}-x_{l}^{(k)}}{h}&=\beta_{l}x_{l}^{(k)}+\gamma_{l},&l=0,1,\ldots,L-1,\\\
\frac{y_{l+1}^{(k)}-y_{l}^{(k)}}{h}&=\alpha_{l}\odot\mbox{\boldmath$\sigma$}(Ax_{l}^{(k)}),&l=0,1,\ldots,L-1,\\\
x_{0}^{(k)}&=\xi^{(k)},&\\\ y_{0}^{(k)}&=0,&\end{aligned}\right.$ for any
$k\in I_{\mu}$
8: Solve $\left\\{\begin{aligned}
\frac{\lambda_{l}^{(k)}-\lambda_{l-1}^{(k)}}{h}&=-\beta_{l}^{\top}\lambda_{l}^{(k)}-\frac{1}{|I_{\mu}|}A^{\top}\left(\left(y_{L}^{(k)}-F(\xi^{(k)})\right)\odot\alpha_{l}\odot\mbox{\boldmath$\sigma$}^{\prime}(Ax_{l}^{(k)})\right),\\\
\lambda_{L}^{(k)}&=0,\end{aligned}\right.$ with $l=L,L-1,\ldots,1$ for any
$k\in I_{\mu}$
9: Compute the gradients $G[\alpha]_{l}^{(\mu)}=\frac{1}{|I_{\mu}|}\sum_{k\in
I_{\mu}}\left(y_{L}^{(k)}-F(\xi^{(k)})\right)\odot\mbox{\boldmath$\sigma$}(Ax_{l}^{(k)}),$
$G[\beta]_{l}^{(\mu)}=\sum_{k\in
I_{\mu}}\lambda_{l}^{(k)}(x_{l}^{(k)})^{\top},\quad
G[\gamma]_{l}^{(\mu)}=\sum_{k\in I_{\mu}}\lambda_{l}^{(k)}$
10: Set $\alpha_{l}^{(\mu+1)}=\alpha_{l}^{(\mu)}-\tau
G[\alpha]_{l}^{(\mu)},\quad\beta_{l}^{(\mu+1)}=\beta_{l}^{(\mu)}-\tau
G[\beta]_{l}^{(\mu)},$ $\gamma_{l}^{(\mu+1)}=\gamma_{l}^{(\mu)}-\tau
G[\gamma]_{l}^{(\mu)}$
11: end for
12: Set
$\alpha_{(\nu+1)}=(\alpha_{l}^{(M)})_{l=0}^{L},\beta_{(\nu+1)}=(\beta_{l}^{(M)})_{l=0}^{L}$
and $\gamma_{(\nu+1)}=(\gamma_{l}^{(M)})_{l=0}^{L}$
13: Shuffle the training data $\\{(\xi^{(k)},F(\xi^{(k)}))\\}_{k=1}^{K}$
randomly and set $\nu=\nu+1$
14: until
$\max(\|\alpha_{(\nu)}-\alpha_{(\nu-1)}\|,\|\beta_{(\nu)}-\beta_{(\nu-1)}\|,\|\gamma_{(\nu)}-\gamma_{(\nu-1)}\|)<\eta$
###### Remark.
In line 10 of Algorithm 1, we may instead use momentum SGD [18], in which the
update rule is replaced by the following expressions:
$\alpha_{l}^{(\mu+1)}:=\alpha_{l}^{(\mu)}-\tau
G[\alpha]_{l}^{(\mu)}+\tau_{1}(\alpha_{l}^{(\mu)}-\alpha_{l}^{(\mu-1)})$
$\beta_{l}^{(\mu+1)}:=\beta_{l}^{(\mu)}-\tau
G[\beta]_{l}^{(\mu)}+\tau_{1}(\beta_{l}^{(\mu)}-\beta_{l}^{(\mu-1)})$
$\gamma_{l}^{(\mu+1)}:=\gamma_{l}^{(\mu)}-\tau
G[\gamma]_{l}^{(\mu)}+\tau_{1}(\gamma_{l}^{(\mu)}-\gamma_{l}^{(\mu-1)})$
where $\tau$ is the learning rate and $\tau_{1}$ is the momentum rate.
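One momentum update is a one-liner; this sketch of ours applies the rule above to a single parameter array:

```python
import numpy as np

def momentum_step(theta, theta_prev, grad, tau, tau1):
    # Momentum-SGD update from the Remark:
    #   theta_new = theta - tau * grad + tau1 * (theta - theta_prev),
    # where tau is the learning rate and tau1 the momentum rate.
    return theta - tau * grad + tau1 * (theta - theta_prev)

theta = np.array([1.0])
new = momentum_step(theta, np.array([0.0]), np.array([2.0]), tau=0.1, tau1=0.9)
print(new)  # [1.7] = 1 - 0.1*2 + 0.9*(1 - 0)
```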
## 4 Numerical results
### 4.1 Sinusoidal Curve
We performed a numerical experiment on the regression problem for the
one-dimensional signal $F(\xi)=\sin 4\pi\xi$ defined for $\xi\in[0,1]$. Let the number of
training data be $K_{1}=1000$, and let the training data be
$\left\\{\left(\frac{k-1}{K_{1}},F\left(\frac{k-1}{K_{1}}\right)\right)\right\\}_{k=1}^{K_{1}}\subset[0,1]\times\mathbb{R},\quad
D_{1}:=\left\\{\frac{k-1}{K_{1}}\right\\}_{k=1}^{K_{1}}.$
We run Algorithm 1 until $\nu=10000$. We set the learning rate to $\tau=0.01$
and
$\alpha_{(0)}\equiv 0,\quad\beta_{(0)}\equiv 0,\quad\gamma_{(0)}\equiv 0.$
Let the number of validation data be $K_{2}=3333$. The signal sampled with
$\Delta\xi=1/K_{2}$ was used as the validation data. Let $D_{2}$ be the set of
input data used for the validation data. Fig. 1. shows the training data,
which is $F(\xi)=\sin 4\pi\xi$ sampled from $[0,1]$ with $\Delta\xi=1/K_{1}$.
Fig. 2. shows the result predicted using the validation data at $\nu=10000$.
The validation data is shown as a blue line, and the prediction on the
validation data is shown as an orange line. Fig. 3. shows the initial values
of the parameters $\alpha,\beta$ and $\gamma$. Fig. 4. shows the learning
results of each design parameter at $\nu=10000$. Fig. 5. shows the change in
the loss function during learning for the training data and the validation
data. Fig. 5. shows that the loss function can be decreased using Algorithm 1.
Fig. 2. suggests that the prediction is good. In addition, the learning
results of the parameters $\alpha,\beta$ and $\gamma$ are continuous functions.
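The training set described above can be generated in a few lines; only the sampling rule comes from the text, the rest is our sketch:

```python
import numpy as np

K1 = 1000
# D_1 = {(k - 1) / K1 : k = 1, ..., K1}, the grid of training inputs.
xi = np.array([(k - 1) / K1 for k in range(1, K1 + 1)])
targets = np.sin(4 * np.pi * xi)  # F(xi) = sin(4 pi xi)
print(xi[0], xi[-1], targets[0])  # 0.0 0.999 0.0
```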
Fig. 1: The training data which is $F(\xi)=\sin 4\pi\xi$ sampled from $[0,1]$
with $\Delta\xi=1/K_{1}$.
Fig. 2: The result predicted using validation data when $\nu=10000$.
Fig. 3: The initial values of design parameters $\alpha,\beta$ and $\gamma$.
Fig. 4: The learning results of design parameters $\alpha,\beta$ and $\gamma$
at $\nu=10000$.
Fig. 5: The change in the loss function during learning.
### 4.2 Binary classification
We performed numerical experiments on a binary classification problem for
2-dimensional input. We set $n=2$ and $m=1$. Let the number of the training
data be $K_{1}=10000$, and let
$D_{1}=\\{\xi^{(k)}|k=1,2,\ldots,K_{1}\\}\subset[0,1]^{2}$ be the set of
randomly generated points. Let
$\left\\{\left(\xi^{(k)},F(\xi^{(k)})\right)\right\\}_{k=1}^{K_{1}}\subset[0,1]^{2}\times\mathbb{R},$
$F(\xi)=\left\\{\begin{array}[]{ll}0,&\mathrm{if}~{}|\xi-(0.5,0.5)|<0.3,\\\
1,&\mathrm{if}~{}|\xi-(0.5,0.5)|\geq 0.3,\end{array}\right.$ (4.1)
be the training data. We run Algorithm 1 until $\nu=10000$. We set the
learning rate to $\tau=0.01$ and
$\alpha_{(0)}\equiv 0,\quad\beta_{(0)}\equiv 0,\quad\gamma_{(0)}\equiv 0.$
Let the number of validation data be $K_{2}=2500$. The pairs $(\xi,F(\xi))$
for points $\xi$ randomly generated on $[0,1]^{2}$ are used as the validation data.
Fig. 6. shows the training data, in which randomly generated $\xi\in D_{1}$ are
classified according to (4.1). Fig. 7. shows the prediction result on the
validation data at $\nu=10000$. The results that were successfully predicted
are shown in dark red and dark blue, and the results that were incorrectly
predicted are shown in light red and light blue. Fig. 8. shows the result of
predicting the validation data using the $k$-nearest neighbor algorithm with
$k=3$. Fig. 9. shows the result of predicting the validation data using a
multi-layer perceptron with $5000$ nodes. Fig. 10. shows the initial values of
the parameters $\alpha,\beta$ and $\gamma$. Fig. 11., 12. and 13. show the
learning results of each parameter at $\nu=10000$. Fig. 14. shows the change
of the loss function during learning for the training data and the validation
data. Fig. 15. shows the change of accuracy during learning. The accuracy is
defined as
as
$\mathrm{Accuracy}=\frac{\\#\\{\xi\in D_{i}\,|\,F(\xi)=\bar{y}(\xi)\\}}{K_{i}}\quad(i=1,2),$
$\bar{y}(\xi):=\left\\{\begin{array}[]{ll}0,&\mathrm{if}~{}y(T;\xi)<0.5,\\\
1,&\mathrm{if}~{}y(T;\xi)\geq 0.5.\end{array}\right.$
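The label function (4.1) and the accuracy just defined are straightforward to code; this is our sketch, with the threshold 0.5 taken from the definition of $\bar{y}$:

```python
import numpy as np

def F_label(xi):
    # Binary label (4.1): 0 inside the open disc of radius 0.3 centered
    # at (0.5, 0.5), and 1 outside it.
    return 0 if np.linalg.norm(np.asarray(xi) - 0.5) < 0.3 else 1

def accuracy(y_pred, labels):
    # Threshold the network outputs y(T; xi) at 0.5 to get y_bar, then
    # count the fraction of points whose y_bar matches the true label.
    y_bar = [0 if y < 0.5 else 1 for y in y_pred]
    return sum(int(a == b) for a, b in zip(y_bar, labels)) / len(labels)

print(F_label([0.5, 0.5]), F_label([0.9, 0.9]))  # 0 1
print(accuracy([0.2, 0.7], [0, 1]))              # 1.0
```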
Table 4 shows the value of the loss function and the accuracy of the
prediction of each method.
From Fig. 14. and 15., we observe that the loss function can be decreased and
the accuracy can be increased using Algorithm 1. Fig. 7. shows that some
points in the neighborhood of $|\xi-(0.5,0.5)|=0.3$ are wrongly predicted;
however, most points are predicted well. The results are similar when compared
with Fig. 8. and 9. In addition, the learning results of the parameters
$\alpha,\beta$, and $\gamma$ are continuous functions. From Table 4, the
$k$-nearest neighbor algorithm attains the smallest value of the loss function
among the three methods. We consider that this is because the output of ODENet
is $y(T;\xi)\in[0,1]$, while the output of the $k$-nearest neighbor algorithm
takes values in $\\{0,1\\}$. The prediction accuracies of both methods are
similar.
Fig. 6: The training data defined by (4.1).
Fig. 7: The result predicted using validation data when $\nu=10000$.
Fig. 8: The result of predicting the validation data using $k$-nearest
neighbor algorithm at $k=3$.
Fig. 9: The result of predicting the validation data using a multi-layer
perceptron with 5000 nodes.
Fig. 10: The initial values of design parameters $\alpha,\beta$ and $\gamma$.
Fig. 11: The learning result of design parameters $\alpha$ at $\nu=10000$.
Fig. 12: The learning result of design parameters $\beta$ at $\nu=10000$.
Fig. 13: The learning result of design parameters $\gamma$ at $\nu=10000$.
Fig. 14: The change of the loss function during learning.
Fig. 15: The change of accuracy during learning.
Table. 4: The prediction results of each method. Method | Loss | Accuracy
---|---|---
This paper (ODENet) | 0.02629 | 0.9592
$K$-nearest neighbor algorithm (K-NN) | 0.006000 | 0.9879
Multilayer perceptron (MLP) | 0.006273 | 0.9883
### 4.3 Multinomial classification in MNIST
We performed a numerical experiment on a classification problem using MNIST, a
dataset of handwritten digits. The input is a $28\times 28$ image and the
output is a one-hot vector of labels attached to the MNIST dataset. We set
$n=784$ and $m=10$. Let the number of training data be $K_{1}=43200$ and let
the batch size be $|I_{\mu}|=128$. We run Algorithm 1 until $\nu=1000$; here,
momentum SGD was used to update the design parameters. We set the learning
rate as $\tau=0.01$, the momentum rate as $0.9$, and
$\alpha_{(0)}\equiv 10^{-8},\quad\beta_{(0)}\equiv
10^{-8},\quad\gamma_{(0)}\equiv 10^{-8}.$
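The momentum SGD update with these settings can be sketched as follows (a generic sketch; the function and variable names are ours, not taken from the paper's implementation):

```python
import numpy as np

def momentum_sgd_step(omega, velocity, grad, tau=0.01, mu=0.9):
    """One momentum SGD update for a design parameter omega.

    tau is the learning rate and mu the momentum rate, matching the
    values quoted in the text (tau = 0.01, momentum 0.9)."""
    velocity = mu * velocity - tau * grad
    return omega + velocity, velocity

# Toy usage: minimize 0.5 * omega^2 (gradient = omega) starting from omega = 1.
omega, v = 1.0, 0.0
for _ in range(300):
    omega, v = momentum_sgd_step(omega, v, grad=omega)
print(abs(omega) < 1e-3)  # True: iterates converge toward the minimizer 0
```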
Let the number of validation data be $K_{2}=10800$. Fig. 16 shows the change
of the loss function during learning for each of the training data and
validation data. Fig. 17 shows the change of accuracy during learning. Using
the test data, the values of the loss function and accuracy are
$E=0.06432,\quad\mathrm{Accuracy}=0.9521,$
at $\nu=1000$, respectively.
Figs. 16 and 17 suggest that the loss function can be decreased and accuracy
can be increased using Algorithm 1 (with momentum SGD).
Fig. 16: The change of the loss function during learning.
Fig. 17: The change of accuracy during learning.
## 5 Conclusion
In this paper, we proposed ODENet and ResNet with special forms and showed
that they uniformly approximate an arbitrary continuous function on a compact
set. This result shows that ODENet and ResNet can represent a variety of
data. In addition, we showed the existence and continuity of the gradient of
the loss function in a general ODENet. We performed numerical experiments on
some data and confirmed that the gradient method reduces the loss function and
represents the data.
Our future work is to show that the design parameters converge to a global
minimizer of the loss function using a continuous gradient. We also plan to
show that ODENet with other forms, such as convolution, can represent
arbitrary data.
## 6 Acknowledgement
This work is partially supported by JSPS KAKENHI Grant Number JP20KK0058, and JST CREST
Grant Number JPMJCR2014, Japan.
## References
* [1] A. Baker. Matrix groups: An Introduction to Lie Group Theory. Springer Science & Business Media, 2003.
* [2] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
* [3] L. Bottou. Online learning and stochastic approximations. On-line learning in neural networks, 17(9):142, 1998.
* [4] S.M. Carroll and B.W. Dickinson. Construction of neural networks using the radon transform. In International Joint Conference on Neural Networks 1989, volume 1, pages 607–611. IEEE, 1989.
* [5] R.TQ Chen, Y. Rubanova, J. Bettencourt, and D.K. Duvenaud. Neural ordinary differential equations. In Advances in neural information processing systems, pages 6571–6583, 2018.
* [6] G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303–314, 1989.
* [7] K. Fukushima and S. Miyake. Neocognitron: A self-organizing neural network model for a mechanism of visual pattern recognition. In Competition and cooperation in neural nets, pages 267–285. Springer, 1982.
* [8] K.I. Funahashi. On the approximate realization of continuous mappings by neural networks. Neural networks, 2(3):183–192, 1989.
* [9] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
* [10] B. Hanin and M. Sellke. Approximating continuous functions by relu nets of minimal width. arXiv preprint arXiv:1710.11278, 2017.
* [11] K. He and J. Sun. Convolutional neural networks at constrained time cost. In CVPR, 2015.
* [12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [13] K. Hornik, M. Stinchcombe, H. White, et al. Multilayer feedforward networks are universal approximators. Neural networks, 2(5):359–366, 1989.
* [14] P. Kidger and T. Lyons. Universal approximation with deep narrow networks. In Conference on Learning Theory, pages 2306–2327. PMLR, 2020.
* [15] M. Leshno, V.Y. Lin, A. Pinkus, and S. Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural networks, 6(6):861–867, 1993.
* [16] H. Lin and S. Jegelka. Resnet with one-neuron hidden layers is a universal approximator. Advances in neural information processing systems, 31:6169–6178, 2018.
* [17] W.S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5:115–133, 1943.
* [18] D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning representations by back-propagating errors. nature, 323(6088):533–536, 1986.
* [19] J. Schmidhuber. Deep learning in neural networks: An overview. Neural networks, 61:85–117, 2015.
* [20] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
* [21] S. Sonoda and N. Murata. Neural network with unbounded activation functions is universal approximator. Applied and Computational Harmonic Analysis, 43(2):233–268, 2017.
* [22] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
## Appendix A Differentiability with respect to parameters of ODE
We discuss the differentiability with respect to the design parameters of
ordinary differential equations.
###### Theorem A.1.
Let $N$ and $r$ be natural numbers, and $T$ be a positive real number. We
define $X:=C^{1}([0,T];\mathbb{R}^{N})$ and $\Omega:=C([0,T];\mathbb{R}^{r})$.
We consider the initial value problem for the ordinary differential equation:
$\left\\{\begin{aligned} x^{\prime}(t)&=f(t,x(t),\omega(t)),&t\in(0,T],\\\
x(0)&=\xi,&\end{aligned}\right.$ (A.1)
where $x$ is a function from $[0,T]$ to $\mathbb{R}^{N}$, and $\xi\in D$ is
the initial value; $\omega\in\Omega$ is the design parameter; $f$ is a
continuously differentiable function from
$[0,T]\times\mathbb{R}^{N}\times\mathbb{R}^{r}$ to $\mathbb{R}^{N}$; there
exists $L>0$ such that
$|f(t,x_{1},\omega(t))-f(t,x_{2},\omega(t))|\leq L|x_{1}-x_{2}|$
for any $t\in[0,T],x_{1},x_{2}\in\mathbb{R}^{N}$, and $\omega\in\Omega$. Then,
the solution to (A.1) satisfies $[\Omega\ni\omega\mapsto x[\omega]]\in
C^{1}(\Omega;X)$. Furthermore, if we define
$y(t):=(\partial_{\omega}x[\omega]\eta)(t)$
for any $\eta\in\Omega$, the following relations
$\left\\{\begin{array}[]{ll}y^{\prime}(t)-\nabla_{x}^{\top}f(t,x[\omega](t),\omega(t))y(t)=\nabla_{\omega}^{\top}f(t,x[\omega](t),\omega(t))\eta(t),&t\in(0,T],\\\
y(0)=0,&\end{array}\right.$
are satisfied.
###### Proof.
Let $X_{0}$ be the set of continuous functions from $[0,T]$ to
$\mathbb{R}^{N}$. Because $f(t,\cdot,\omega(t))$ is Lipschitz continuous for
any $t\in[0,T]$ and $\omega\in\Omega$, there exists a unique solution
$x[\omega]\in X$ in (A.1). We define the map $J:X\times\Omega\to X$ as
$J(x,\omega)(t):=x(t)-\xi-\int_{0}^{t}f(s,x(s),\omega(s))ds.$
The map $J$ satisfies
$J(x,\omega)^{\prime}(t)=x^{\prime}(t)-f(t,x(t),\omega(t)).$
Since $f\in
C^{1}([0,T]\times\mathbb{R}^{N}\times\mathbb{R}^{r};\mathbb{R}^{N})$, $J\in
C(X\times\Omega;X)$.
Take an arbitrary $\omega\in\Omega$. For any $x\in X$, let
$f\circ x(t):=f(t,x(t),\omega(t)),\quad\nabla_{x}^{\top}f\circ
x(t):=\nabla_{x}^{\top}f(t,x(t),\omega(t)).$
We define the map $A(x):X\to X$ as
$(A(x)y)(t):=y(t)-\int_{0}^{t}(\nabla_{x}^{\top}f\circ x(s))y(s)ds.$
The map $A(x)$ satisfies
$(A(x)y)^{\prime}(t)=y^{\prime}(t)-(\nabla_{x}^{\top}f\circ x(t))y(t).$
$x$ and $\omega$ are bounded because they are continuous functions on a
compact interval. Because $f\in
C^{1}([0,T]\times\mathbb{R}^{N}\times\mathbb{R}^{r};\mathbb{R}^{N})$, there
exists $C>0$ such that $|\nabla_{x}^{\top}f\circ x(t)|\leq C$ for any
$t\in[0,T]$. From
$|(A(x)y)(t)|\leq\|y\|_{X_{0}}+CT\|y\|_{X_{0}},\quad|(A(x)y)^{\prime}(t)|\leq\|y^{\prime}\|_{X_{0}}+C\|y\|_{X_{0}},$
$\|A(x)y\|_{X}\leq\|y\|_{X}+C(T+1)\|y\|_{X}=(1+C(T+1))\|y\|_{X}$
is satisfied. Hence, $A(x)\in B(X,X)$. Let us fix $x_{0}\in X$. We take $x\in
X$ such that $x\to x_{0}$.
$\displaystyle|(A(x)y)(t)-(A(x_{0})y)(t)|$
$\displaystyle\leq\int_{0}^{t}|\nabla_{x}^{\top}f\circ
x(s)-\nabla_{x}^{\top}f\circ x_{0}(s)||y(s)|ds$ $\displaystyle\leq
T\|y\|_{X}\|\nabla_{x}^{\top}f\circ x-\nabla_{x}^{\top}f\circ
x_{0}\|_{C([0,T];\mathbb{R}^{N\times N})}$
$\displaystyle|(A(x)y)^{\prime}(t)-(A(x_{0})y)^{\prime}(t)|$
$\displaystyle\leq|\nabla_{x}^{\top}f\circ x(t)-\nabla_{x}^{\top}f\circ
x_{0}(t)||y(t)|$ $\displaystyle\leq\|y\|_{X}\|\nabla_{x}^{\top}f\circ
x-\nabla_{x}^{\top}f\circ x_{0}\|_{C([0,T];\mathbb{R}^{N\times N})}$
$\|A(x)y-A(x_{0})y\|_{X}\leq(T+1)\|y\|_{X}\|\nabla_{x}^{\top}f\circ
x-\nabla_{x}^{\top}f\circ x_{0}\|_{C([0,T];\mathbb{R}^{N\times N})}$
$\|A(x)-A(x_{0})\|_{B(X,X)}\leq(T+1)\|\nabla_{x}^{\top}f\circ
x-\nabla_{x}^{\top}f\circ x_{0}\|_{C([0,T];\mathbb{R}^{N\times N})}$
Hence, $A\in C(X;B(X,X))$.
$\displaystyle J(x+y,\omega)(t)-J(x,\omega)(t)-(A(x)y)(t)$
$\displaystyle=-\int_{0}^{t}(f\circ(x+y)(s)-f\circ
x(s)-(\nabla_{x}^{\top}f\circ x(s))y(s))ds$
$\|J(x+y,\omega)-J(x,\omega)-A(x)y\|_{X}\leq(T+1)\|f\circ(x+y)-f\circ
x-(\nabla_{x}^{\top}f\circ x)y\|_{X_{0}}$
From the Taylor expansion of $f$, we obtain
$f(t,x(t)+y(t),\omega(t))=f(t,x(t),\omega(t))+\int_{0}^{1}\nabla_{x}^{\top}f(t,x(t)+\zeta
y(t),\omega(t))y(t)d\zeta$
for any $t\in[0,T],x,y\in X$ and $\omega\in\Omega$. We obtain
$|f\circ(x+y)(t)-f\circ x(t)-(\nabla_{x}^{\top}f\circ
x(t))y(t)|\leq\int_{0}^{1}|\nabla_{x}^{\top}f\circ(x+\zeta
y)(t)-\nabla_{x}^{\top}f\circ x(t)||y(t)|d\zeta.$
For any $\epsilon>0$, there exists $\delta>0$ such that
$\|y\|_{X_{0}}<\delta,\zeta\in[0,1]~{}\Rightarrow~{}|\nabla_{x}^{\top}f\circ(x+\zeta
y)(t)-\nabla_{x}^{\top}f\circ x(t)|<\epsilon.$
We obtain
$|f\circ(x+y)(t)-f\circ x(t)-(\nabla_{x}^{\top}f\circ
x(t))y(t)|\leq\epsilon\|y\|_{X_{0}},$
$\|J(x+y,\omega)-J(x,\omega)-A(x)y\|_{X}\leq\epsilon(T+1)\|y\|_{X}.$
Hence,
$\partial_{x}J(x,\omega)y=A(x)y.$
From $\partial_{x}J(\cdot,\omega)\in C(X;B(X,X))$, $J(\cdot,\omega)\in
C^{1}(X;X)$.
By fixing $\omega_{0}\in\Omega$, there exists a solution $x_{0}\in X$ of (A.1)
such that
$x_{0}(t)=\xi+\int_{0}^{t}f(s,x_{0}(s),\omega_{0}(s))ds.$
That is,
$J(x_{0},\omega_{0})(t)=x_{0}(t)-\xi-\int_{0}^{t}f(s,x_{0}(s),\omega_{0}(s))ds=x_{0}(t)-x_{0}(t)=0$
is satisfied. If $y\in X$ satisfies
$(\partial_{x}J(x_{0},\omega_{0})y)(t)=g(t)$ for any $g\in X$, then
$\left\\{\begin{array}[]{ll}y^{\prime}(t)-\nabla_{x}^{\top}f(t,x_{0}(t),\omega_{0}(t))y(t)=g^{\prime}(t),&t\in(0,T],\\\
y(0)=g(0).&\end{array}\right.$
Because the solution to this ordinary differential equation exists uniquely,
there exists an inverse map $(\partial_{x}J(x_{0},\omega_{0}))^{-1}$ such that
$(\partial_{x}J(x_{0},\omega_{0}))^{-1}\in B(X,X)$.
From the implicit function theorem, for any $\omega\in\Omega$, there exists
$x[\omega]\in X$ such that $J(x[\omega],\omega)=0$. From $J\in
C^{1}(X\times\Omega;X)$, we obtain $[\omega\mapsto x[\omega]]\in
C^{1}(\Omega;X)$. We put
$y(t):=(\partial_{\omega}x[\omega]\eta)(t)$
for any $\eta\in\Omega$. From $J(x[\omega],\omega)=0$,
$(\partial_{x}J(x[\omega],\omega)y)(t)+(\partial_{\omega}J(x[\omega],\omega)\eta)(t)=0,$
$y(t)-\int_{0}^{t}\nabla_{x}^{\top}f(s,x[\omega](s),\omega(s))y(s)ds-\int_{0}^{t}\nabla_{\omega}^{\top}f(s,x[\omega](s),\omega(s))\eta(s)ds=0.$
Therefore, we obtain
$\left\\{\begin{array}[]{ll}y^{\prime}(t)-\nabla_{x}^{\top}f(t,x[\omega](t),\omega(t))y(t)=\nabla_{\omega}^{\top}f(t,x[\omega](t),\omega(t))\eta(t),&t\in(0,T],\\\
y(0)=0.&\end{array}\right.$
∎
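The variational equation of Theorem A.1 can be checked numerically. Below is a minimal sketch for the scalar case $f(t,x,\omega)=\omega x$, integrating both the state and the variational equation with forward Euler and comparing $y(T)$ against a finite difference of the solution map $\omega\mapsto x[\omega](T)$; all numerical choices here are illustrative.

```python
import numpy as np

# Scalar example f(t, x, w) = w(t) * x, so x'(t) = w(t) x(t), x(0) = xi.
# The variational equation of Theorem A.1 reads
#   y'(t) = w(t) y(t) + x(t) eta(t),  y(0) = 0.
T, n, xi = 1.0, 2000, 1.3
t = np.linspace(0.0, T, n + 1)
dt = T / n
w = np.sin(t)          # design parameter omega(t)
eta = np.cos(3 * t)    # perturbation direction eta(t)

def solve_x(w):
    x = np.empty(n + 1); x[0] = xi
    for k in range(n):                       # forward Euler for the state
        x[k + 1] = x[k] + dt * w[k] * x[k]
    return x

x = solve_x(w)
y = np.empty(n + 1); y[0] = 0.0
for k in range(n):                           # forward Euler for the variational eq.
    y[k + 1] = y[k] + dt * (w[k] * y[k] + x[k] * eta[k])

# Central finite difference of the (discretized) solution map along eta.
eps = 1e-6
fd = (solve_x(w + eps * eta)[-1] - solve_x(w - eps * eta)[-1]) / (2 * eps)
print(abs(y[-1] - fd))  # near zero: y(T) matches the finite difference
```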
## Appendix B General ODENet
In this section, we describe the general ODENet and the existence and
continuity of the gradient of loss function with respect to the design
parameter. Let $N$ and $r$ be natural numbers and $T$ be a positive real
number. Let the input data $D\subset\mathbb{R}^{n}$ be a compact set. We
define $X:=C^{1}([0,T];\mathbb{R}^{N})$ and $\Omega:=C([0,T];\mathbb{R}^{r})$.
We consider the ODENet with the following system of ordinary differential
equations.
$\left\\{\begin{aligned} x^{\prime}(t)&=f(t,x(t),\omega(t)),&t\in(0,T],\\\
x(0)&=Q\xi,&\end{aligned}\right.$ (B.1)
where $x$ is a function from $[0,T]$ to $\mathbb{R}^{N}$; $\xi\in D$ is the
input data; $Px(T)$ is the final output; $\omega\in\Omega$ is the design
parameter; $P$ and $Q$ are $m\times N$ and $N\times n$ real matrices; $f$ is a
continuously differentiable function from
$[0,T]\times\mathbb{R}^{N}\times\mathbb{R}^{r}$ to $\mathbb{R}^{N}$, and
$f(t,\cdot,\omega(t))$ is Lipschitz continuous for any $t\in[0,T]$ and
$\omega\in\Omega$. For an input data $\xi\in D$, we denote the output data as
$Px(T;\xi)$. We consider an approximation of $F\in C(D;\mathbb{R}^{m})$ using
ODENet with a system of ordinary differential equations (B.1). We define the
loss function as
$e[x]=\frac{1}{2}\left|Px(T;\xi)-F(\xi)\right|^{2}.$
We define the gradient of the loss function with respect to the design
parameter as follows:
###### Definition B.1.
Let $\Omega$ be a real Banach space. Assume that the inner product
$\left<\cdot,\cdot\right>$ is defined on $\Omega$. The functional
$\Phi:\Omega\to\mathbb{R}$ is a Fréchet differentiable at $\omega\in\Omega$.
The Fréchet derivative is denoted by $\partial\Phi[\omega]\in\Omega^{*}$. If
$G[\omega]\in\Omega$ exists such that
$\partial\Phi[\omega]\eta=\left<G[\omega],\eta\right>,$
for any $\eta\in\Omega$, we call $G[\omega]$ the gradient of $\Phi$ at
$\omega\in\Omega$ with respect to the inner product
$\left<\cdot,\cdot\right>$.
###### Remark.
If there exists a gradient $G[\omega]$ of the functional $\Phi$ at
$\omega\in\Omega$ with respect to the inner product
$\left<\cdot,\cdot\right>$, the algorithm to find the minimum value of $\Phi$
by
$\omega_{(\nu)}=\omega_{(\nu-1)}-\tau G[\omega_{(\nu-1)}]$
is called the steepest descent method.
###### Theorem B.2.
Given the design parameter $\omega\in\Omega$, let $x[\omega](t;\xi)$ be the
solution to (B.1) with the initial value $\xi\in D$. Let
$\lambda:[0,T]\to\mathbb{R}^{N}$ be the adjoint and satisfy the following
adjoint equation:
$\left\\{\begin{aligned}
\lambda^{\prime}(t)&=-\nabla_{x}f^{\top}\left(t,x[\omega](t;\xi),\omega(t)\right)\lambda(t),&t\in[0,T),\\\
\lambda(T)&=P^{\top}\left(Px[\omega](T;\xi)-F(\xi)\right).&\end{aligned}\right.$
We define the functional $\Phi:\Omega\to\mathbb{R}$ as
$\Phi[\omega]=e[x[\omega]]$. Then, there exists a gradient
$G[\omega]\in\Omega$ of $\Phi$ at $\omega\in\Omega$ with respect to the
$L^{2}(0,T;\mathbb{R}^{r})$ inner product such that
$\partial\Phi[\omega]\eta=\int_{0}^{T}G[\omega](t)\cdot\eta(t)dt,\quad
G[\omega](t)=\nabla_{\omega}f^{\top}\left(t,x[\omega](t;\xi),\omega(t)\right)\lambda(t),$
for any $\eta\in\Omega$.
###### Proof.
$e$ is a continuously differentiable function from $X$ to $\mathbb{R}$, and
the solution of (B.1) satisfies $[\omega\mapsto x[\omega]]\in C^{1}(\Omega;X)$
by Theorem A.1. Hence, $\Phi\in C^{1}(\Omega)$. For any $\eta\in\Omega$,
$\displaystyle\partial\Phi[\omega]\eta$
$\displaystyle=(Px[\omega](T;\xi)-F(\xi))\cdot
P(\partial_{\omega}x[\omega]\eta)(T),$
$\displaystyle=P^{\top}(Px[\omega](T;\xi)-F(\xi))\cdot(\partial_{\omega}x[\omega]\eta)(T).$
We put $y(t):=(\partial_{\omega}x[\omega]\eta)(t)$. From Theorem A.1, we
obtain
$\left\\{\begin{array}[]{ll}y^{\prime}(t)-\nabla_{x}^{\top}f\left(t,x[\omega](t,\xi),\omega(t)\right)y(t)=\nabla_{\omega}^{\top}f\left(t,x[\omega](t;\xi),\omega(t)\right)\eta(t),&t\in(0,T],\\\
y(0)=0.\end{array}\right.$
By assumption,
$\left\\{\begin{aligned}
\lambda^{\prime}(t)&=-\nabla_{x}f^{\top}\left(t,x[\omega](t;\xi),\omega(t)\right)\lambda(t),&t\in[0,T),\\\
\lambda(T)&=P^{\top}\left(Px[\omega](T;\xi)-F(\xi)\right).&\end{aligned}\right.$
is satisfied. We define
$g(t):=\nabla_{\omega}f^{\top}\left(t,x[\omega](t;\xi),\omega(t)\right)\lambda(t).$
Then, $g\in\Omega$. We calculate the $L^{2}(0,T;\mathbb{R}^{r})$
inner product of $g$ and $\eta$,
$\displaystyle\left<g,\eta\right>$
$\displaystyle=\int_{0}^{T}(\nabla_{\omega}f^{\top}(t,x[\omega](t;\xi),\omega(t))\lambda(t))\cdot\eta(t)dt,$
$\displaystyle=\int_{0}^{T}\lambda(t)\cdot(\nabla_{\omega}^{\top}f(t,x[\omega](t;\xi),\omega(t))\eta(t))dt,$
$\displaystyle=\int_{0}^{T}\lambda(t)\cdot(y^{\prime}(t)-\nabla_{x}^{\top}f(t,x[\omega](t;\xi),\omega(t))y(t))dt,$
$\displaystyle=\lambda(T)\cdot y(T)-\lambda(0)\cdot
y(0)-\int_{0}^{T}(\lambda^{\prime}(t)+\nabla_{x}f^{\top}(t,x[\omega](t;\xi),\omega(t))\lambda(t))\cdot
y(t)dt,$ $\displaystyle=P^{\top}(Px[\omega](T;\xi)-F(\xi))\cdot y(T),$
$\displaystyle=\partial\Phi[\omega]\eta.$
Therefore, there exists a gradient $G[\omega]\in\Omega$ of $\Phi$ at
$\omega\in\Omega$ with respect to the $L^{2}(0,T;\mathbb{R}^{r})$ inner
product such that
$G[\omega](t)=\nabla_{\omega}f^{\top}\left(t,x[\omega](t;\xi),\omega(t)\right)\lambda(t).$
∎
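A minimal numerical sketch of this adjoint-based gradient for a scalar instance of (B.1), with $P=Q=1$: the model, the forward-Euler scheme, and all numbers below are our illustrative choices, not from the paper.

```python
import numpy as np

# Scalar model x'(t) = w(t) x(t), x(0) = xi, loss Phi[w] = 0.5 (x(T) - target)^2.
# Per Theorem B.2 the adjoint satisfies lambda'(t) = -w(t) lambda(t),
# lambda(T) = x(T) - target, and the gradient is G[w](t) = x(t) lambda(t).
T, n, xi, target = 1.0, 4000, 1.0, 2.0
dt = T / n
t = np.linspace(0.0, T, n + 1)
w = 0.5 + 0.2 * np.sin(2 * np.pi * t)

def forward(w):
    x = np.empty(n + 1); x[0] = xi
    for k in range(n):
        x[k + 1] = x[k] + dt * w[k] * x[k]
    return x

def loss(w):
    return 0.5 * (forward(w)[-1] - target) ** 2

x = forward(w)
lam = np.empty(n + 1); lam[-1] = x[-1] - target
for k in range(n, 0, -1):            # integrate the adjoint backwards in time
    lam[k - 1] = lam[k] + dt * w[k - 1] * lam[k]
G = x * lam                          # G[w](t) = (d_w f)^T lambda = x(t) lambda(t)

# Check dPhi[w] eta = int_0^T G(t) eta(t) dt against a finite difference.
eta = 1.0 + np.cos(np.pi * t)
grad_adj = np.sum(G[:-1] * eta[:-1]) * dt
eps = 1e-6
fd = (loss(w + eps * eta) - loss(w - eps * eta)) / (2 * eps)
print(abs(grad_adj - fd))  # near zero: adjoint gradient matches a finite difference
```

One forward solve and one backward adjoint solve give the directional derivative for every $\eta$ at once, which is the point of the adjoint formulation.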
## Appendix C General ResNet
In this section, we describe the general ResNet and error backpropagation. We
consider a ResNet with the following system of difference equations
$\left\\{\begin{aligned}
x^{(l+1)}&=x^{(l)}+f^{(l)}(x^{(l)},\omega^{(l)}),&l=0,1,\ldots,L-1,\\\
x^{(0)}&=Q\xi,&\end{aligned}\right.$ (C.1)
where $x^{(l)}$ is an $N$-dimensional real vector for all $l=0,1,\ldots,L$;
$\xi\in D$ is the input data; $Px^{(L)}$ is the final output;
$\omega^{(l)}\in\mathbb{R}^{r_{l}}~{}(l=0,1,\ldots,L-1)$ are the design
parameters; $P$ and $Q$ are $m\times N$ and $N\times n$ real matrices;
$f^{(l)}$ is a continuously differentiable function from
$\mathbb{R}^{N}\times\mathbb{R}^{r_{l}}$ to $\mathbb{R}^{N}$ for all
$l=0,1,\ldots,L-1$. We consider an approximation of $F\in C(D;\mathbb{R}^{m})$
using ResNet with a system of difference equations (C.1). Let $K\in\mathbb{N}$
be the number of training data and
$\\{(\xi^{(k)},F(\xi^{(k)}))\\}_{k=1}^{K}\subset D\times\mathbb{R}^{m}$ be the
training data. We divide the label of the training data into the following
disjoint sets.
$\\{1,2,\ldots,K\\}=I_{1}\cup I_{2}\cup\cdots\cup
I_{M}~{}(\mathrm{disjoint}),\quad(1\leq M\leq K).$
Let $Px^{(L,k)}$ denote the final output for a given input data $\xi^{(k)}\in
D$. We set
$\mbox{\boldmath$\omega$}=(\omega^{(0)},\omega^{(1)},\ldots,\omega^{(L-1)})$.
We define the loss function for all $\mu=1,2,\ldots,M$ as follows:
$e_{\mu}(\mbox{\boldmath$\omega$})=\frac{1}{2|I_{\mu}|}\sum_{k\in
I_{\mu}}\left|Px^{(L,k)}-F(\xi^{(k)})\right|^{2},$ (C.2)
$E=\frac{1}{2K}\sum_{k=1}^{K}\left|Px^{(L,k)}-F(\xi^{(k)})\right|^{2}.$
We consider the learning for each label set using the gradient method. We find
the gradient of the loss function (C.2) with respect to the design parameter
$\omega^{(l)}\in\mathbb{R}^{r_{l}}$ for all $l=0,1,\ldots,L-1$ using error
backpropagation. Using the chain rule, we obtain
$\nabla_{\omega^{(l)}}e_{\mu}(\mbox{\boldmath$\omega$})=\sum_{k\in
I_{\mu}}\nabla_{\omega^{(l)}}{x^{(l+1,k)}}^{\top}\nabla_{x^{(l+1,k)}}e_{\mu}(\mbox{\boldmath$\omega$})$
for all $l=0,1,\ldots,L-1$. From (C.1),
$\nabla_{\omega^{(l)}}{x^{(l+1,k)}}^{\top}=\nabla_{\omega^{(l)}}{f^{(l)}}^{\top}(x^{(l,k)},\omega^{(l)}).$
We define
$\lambda^{(l,k)}:=\nabla_{x^{(l,k)}}e_{\mu}(\mbox{\boldmath$\omega$})$ for all
$l=0,1,\ldots,L$ and $k\in I_{\mu}$. We obtain
$\lambda^{(l,k)}=\nabla_{x^{(l,k)}}{x^{(l+1,k)}}^{\top}\nabla_{x^{(l+1,k)}}e_{\mu}(\mbox{\boldmath$\omega$})=\lambda^{(l+1,k)}+\nabla_{x^{(l,k)}}{f^{(l)}}^{\top}(x^{(l,k)},\omega^{(l)})\lambda^{(l+1,k)}.$
Also,
$\lambda^{(L,k)}=\nabla_{x^{(L,k)}}e_{\mu}(\mbox{\boldmath$\omega$})=\frac{1}{|I_{\mu}|}P^{\top}\left(Px^{(L,k)}-F(\xi^{(k)})\right).$
Therefore, we can find the gradient
$\nabla_{\omega^{(l)}}e_{\mu}(\mbox{\boldmath$\omega$})$ of the loss function
(C.2) with respect to the design parameters $\omega^{(l)}\in\mathbb{R}^{r_{l}}$ by
using the following equations
$\left\\{\begin{array}[]{lll}\displaystyle{\nabla_{\omega^{(l)}}e_{\mu}(\mbox{\boldmath$\omega$})=\sum_{k\in
I_{\mu}}\nabla_{\omega^{(l)}}{f^{(l)}}^{\top}(x^{(l,k)},\omega^{(l)})\lambda^{(l+1,k)}},&l=0,1,\ldots,L-1,&\\\
\displaystyle{\lambda^{(l,k)}=\lambda^{(l+1,k)}+\nabla_{x^{(l,k)}}{f^{(l)}}^{\top}(x^{(l,k)},\omega^{(l)})\lambda^{(l+1,k)}},&l=0,1,\ldots,L-1,&k\in
I_{\mu},\\\
\displaystyle{\lambda^{(L,k)}=\frac{1}{|I_{\mu}|}P^{\top}\left(Px^{(L,k)}-F(\xi^{(k)})\right)},&&k\in
I_{\mu}.\end{array}\right.$
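The backpropagation recursion above can be sketched in code. As an illustrative choice (not from the paper), take $f^{(l)}(x,\omega)=\tanh(\omega\odot x)$ componentwise, $N=n=m=2$, $P=Q=I$, and a single training datum ($|I_{\mu}|=1$):

```python
import numpy as np

# ResNet x^(l+1) = x^(l) + tanh(w^(l) * x^(l)) (componentwise), with
# loss 0.5 |x^(L) - F(xi)|^2 for one datum; backprop follows (C.1)-(C.2).
rng = np.random.default_rng(0)
L = 5
W = rng.normal(size=(L, 2))          # design parameters w^(l)
xi = np.array([0.3, -0.7])
target = np.array([0.1, 0.4])

def forward(W):
    xs = [xi]
    for l in range(L):
        xs.append(xs[-1] + np.tanh(W[l] * xs[-1]))
    return xs

def loss(W):
    return 0.5 * np.sum((forward(W)[-1] - target) ** 2)

xs = forward(W)
lam = xs[-1] - target                # lambda^(L) = P^T (P x^(L) - F(xi))
grads = np.empty_like(W)
for l in range(L - 1, -1, -1):
    s = 1.0 - np.tanh(W[l] * xs[l]) ** 2   # derivative of tanh (sech^2)
    grads[l] = xs[l] * s * lam             # nabla_w f^T lambda (diagonal case)
    lam = lam + W[l] * s * lam             # lambda^(l) = lambda^(l+1) + nabla_x f^T lambda^(l+1)

# Finite-difference check on one entry of the gradient.
eps = 1e-6
Wp = W.copy(); Wp[2, 1] += eps
Wm = W.copy(); Wm[2, 1] -= eps
fd = (loss(Wp) - loss(Wm)) / (2 * eps)
print(abs(grads[2, 1] - fd))  # near zero: backprop matches finite differences
```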
# Creating a Virtuous Cycle in Performance Testing at MongoDB
David Daly 0000-0001-9678-3721 MongoDB Inc<EMAIL_ADDRESS>
(2021)
###### Abstract.
It is important to detect changes in software performance during development
in order to avoid performance degrading from release to release and to avoid
costly delays at release time. Performance testing is part of the development
process at MongoDB, and integrated into our continuous integration system. We
describe a set of changes to that performance testing environment designed to
improve testing effectiveness. These changes help improve coverage, provide
faster and more accurate signaling for performance changes, and help us better
understand the state of performance. In addition to each component performing
better, we believe that we have created and exploited a virtuous cycle:
performance test improvements drive impact, which drives more use, which
drives further impact and investment in improvements. Overall, MongoDB is
getting faster and we avoid shipping major performance regressions to our
customers because of this infrastructure.
change point detection, performance, testing, continuous integration,
variability
Journal year: 2021. Copyright: rights retained. Conference: Proceedings of the
2021 ACM/SPEC International Conference on Performance Engineering (ICPE ’21),
April 19–23, 2021, Virtual Event, France. DOI: 10.1145/3427921.3450234. ISBN:
978-1-4503-8194-9/21/04. CCS concepts: General and reference → Performance;
Information systems → Database performance evaluation; Mathematics of
computing → Time series analysis.
## 1. Introduction
Over the last several years we have focused on improving our performance
testing infrastructure at MongoDB. The performance testing infrastructure is a
key component in ensuring the overall quality of the software we develop, run,
and support. It allows us to detect changes in performance as we develop the
software, enabling prompt isolation and resolution of regressions and bugs. It
keeps performance regressions from being included in the software we release
to customers. It also allows us to recognize, confirm, and lock in performance
improvements. As a business, performance testing impacts our top and bottom
lines: the more performant the server, the more our customers will use our
services; the more effective our performance testing infrastructure, the more
productive are our developers. Testing performance and detecting performance
changes is a hard problem in practice, as performance tests and test platforms
inherently contain some degree of noise. The use of change point detection
(Daly et al., 2020) was a large improvement in our ability to detect
performance changes in the presence of noise.
After putting our change point detection system into production, we explicitly
focused on four challenges: how to deal with the large number of results and
process all the changes; how to better deal with and isolate noise due to the
testbed system itself; how to easily compare the results from arbitrary test
runs; and how to capture and more flexibly handle more result types.
The first two are familiar challenges, having been an explicit focus of the
change point detection work, while the second two challenges became more
serious problems once we achieved a basic ability to process our existing
results.
The cumulative impact of these changes and our previous work has been to
enable a virtuous cycle for performance at MongoDB. As the system is used
more, we catch and address more performance changes, leading to us using the
system more.
The rest of this paper is organized as follows. In Section 2 we review our
previous work on which this paper builds. In Section 3 we discuss changes that
have happened naturally as we have used the system more, leading to more load
on the system. We then dive into four changes that we have tried in order to
improve our infrastructure: Section 4 for improving our processing of results,
Section 5 for handling more result types, Section 6 to address system noise,
and Section 7 to improve the comparison of arbitrary test runs. Those sections
are followed by a dive into the practical impact of all these changes in
Section 8, before reviewing future work, related work, and conclusions in
Sections 9, 10, and 12.
## 2. Review
We built our performance testing infrastructure to be completely automated,
and integrated with our continuous integration system Evergreen (noa,
[n.d.]c). From past experience we had concluded that it was essential to
automate the execution and analysis of our performance tests, and regularly
run those tests as our developers worked on the next release. Previously we
had done ad-hoc testing and manual testing at the end of the release cycle. In
both cases we were continually challenged by test results that would not
reproduce, as well as a huge diagnosis effort to identify which component and
changes to that component caused the performance changes. The combination of
those challenges led to a large effort late in each release cycle to try to
identify and fix performance regressions, often resulting in release delays or
performance regressions shipping to customers. Creating the infrastructure
(Ingo and Daly, 2020) to test performance in our CI system let us identify and
address regressions earlier, and made it much easier to isolate performance
changes.
Automation does not inherently make the tests reproducible, but it does make
it clearer that there is noise in the results. Further work went into lowering
the noise in the test results (Henrik Ingo and David Daly, 2019). That work
lowered, but did not eliminate the noise in the performance results. It was
still challenging to detect changes in performance. Originally we tested for
performance changes above some threshold (usually $10\%$), but this had a
number of problems, leading us to use change point detection (Daly et al.,
2020). Change point detection attempts to determine when there are statistical
changes in a time-series, which is precisely the problem we want to solve.
After the transition to change point detection, we had a system with
completely automated, low noise tests that we could successfully triage and
process.
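The idea behind change point detection, finding statistical changes in a time series, can be illustrated with a toy sketch. This is a deliberately simplified illustration (a Welch-style scan for a single mean shift), not the E-Divisive-based production algorithm the paper refers to; the data and thresholds are invented.

```python
import numpy as np

# A noisy synthetic performance series with a 5% throughput regression at index 60.
rng = np.random.default_rng(1)
series = np.concatenate([
    rng.normal(1000.0, 10.0, 60),   # ops/sec before the regression
    rng.normal(950.0, 10.0, 40),    # ops/sec after the regression
])

def best_split(x):
    """Pick the split maximizing the mean gap relative to segment noise."""
    best, where = -np.inf, None
    for k in range(5, len(x) - 5):              # avoid degenerate tiny segments
        a, b = x[:k], x[k:]
        pooled = np.sqrt(a.var() / len(a) + b.var() / len(b))
        score = abs(a.mean() - b.mean()) / pooled
        if score > best:
            best, where = score, k
    return where, best

k, score = best_split(series)
print(k)  # close to the true change point at index 60
```

Real series contain many change points and heavier noise, which is why a recursive, statistically principled method is used in production rather than a single-split scan like this.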
## 3. Organic Changes
There are a number of organic changes to our performance test environment that
have occurred over the last couple of years. These changes were not planned,
but they were still important changes. The performance testing system works,
detecting that the performance has changed and correctly identifying when
those performance changes occurred. The development engineers have seen that
it works and so they use the performance test infrastructure more. One key
aspect of that increase in use is that the development engineers have added
more tests. We have also added new test configurations to further increase
test coverage. Development engineers and performance engineers both add
performance tests and configurations.
Table 1 shows the number of system under test configurations, tasks
(collections of tests), tests, and number of raw results from running the
performance tests for any version of the software. The data covers the past
three years and is collected from tests run in September of each year. The
table specifically filters out canary results (canary tests are discussed in
Section 6) and anything that we would not actively triage. In some cases, the
line between configurations, tasks, tests, and results may be arbitrary, but
it is how our system is organized and users interact with each of those
levels.
| 2018 | 2019 | 2020
---|---|---|---
Number of Configurations | 8 | 17 | 24
Number of Tasks | 86 | 181 | 356
Number of Tests | 960 | 1849 | 3122
Number of Results | 2393 | 3865 | 5787
Table 1. The number of total possible test results we can create per source
code revision has increased significantly over the past two years. This is due
to increases in the number of tests and the number of configurations in which
we run those tests.
You can see the huge increase in every dimension. We run our change point
detection algorithm on the time-series for every result, and someone must
triage all of that data. The total number of results went up $50\%$ year over
year, and $142\%$ over two years.
Additionally, the development organization has grown leading to more commits
to our source repository. Overall the number of engineers working on our core
server has gone up approximately $30\%$ year over year for the past two years.
Table 2 shows the number of commits and commits per day over the last 3 years.
There has been a steady increase in commits, going up $18\%$ in the past
year and $27\%$ over the past two years. Each commit can potentially influence
performance. If you combine the increased commit velocity with the increase in
results per revision, you get a $76\%$ increase in total results year over
year, and an over $3x$ increase in total possible results to generate and
analyze over two years.
12 months ending | 2018-09-01 | 2019-09-01 | 2020-09-01
---|---|---|---
Commits | 4394 | 4702 | 5538
Commits per day | 12.0 | 12.9 | 15.2
Table 2. The number of commits per day to our source repository has been
increasing as the development organization has grown.
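The growth figures quoted above follow directly from the values in Tables 1 and 2; a quick check:

```python
# Results per revision (Table 1) and commits per day (Table 2).
results = {2018: 2393, 2019: 3865, 2020: 5787}
commits_per_day = {2018: 12.0, 2019: 12.9, 2020: 15.2}

print(round(100 * (results[2020] / results[2019] - 1)))   # 50: % year over year
print(round(100 * (results[2020] / results[2018] - 1)))   # 142: % over two years

# Total possible results per day = results per revision * commits per day.
total = {y: results[y] * commits_per_day[y] for y in results}
print(round(100 * (total[2020] / total[2019] - 1)))       # 76: % year over year
print(round(total[2020] / total[2018], 1))                # 3.1: over 3x in two years
```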
The net result of these changes (more commits + engineers using the system
more) is many more possible results that may introduce performance changes and
need to be isolated. During this time we have not increased the people
dedicated to processing these results. All the problems we needed to address
in the past are increased. Our processes to find and isolate changes need to
scale or they will break down under the weight of new results.
## 4. Better Processing of Performance Changes
In our previous paper (Daly et al., 2020) we described the role of “build
baron”: the “build baron” is a dedicated role to triage all performance
changes, producing JIRA tickets and assigning them to the appropriate teams to
address the changes. Originally the build baron role rotated through the
members of the team that built the performance infrastructure. On the positive
side, these people knew the system very well. However, that was balanced by
the feeling that the work was a distraction from their primary work. Build
baroning was a large transition from regular day to day work, and required
both rebuilding mental state when becoming build baron and when returning to
normal work. Everyone tried to dedicate the proper time to the work, but it is
easy to want to do a little bit more of the development work you had been
doing. Additionally, it’s likely that the skills for a build baron differ from
the skills of a software developer.
As such, we built a new team dedicated to build baroning. This new team
originally covered correctness build failures, but has since expanded to the
performance tests as well. The roles still rotate with the build baron team,
but the team is always doing triage (not triage and development). The team
members are better able to build up intuition and mental state about the
system, and can more easily get help from each other. Possibly more
importantly for members of this new team, triaging failures is their job, not
an interruption from their job. While we added this new team, we did not
allocate more people to doing the build baroning, rather we shifted who was
doing the work.
The dedicated team is also able to better articulate the challenges of build
baroning, and what changes would make them more productive. Over time the team
developed a set of heuristics to deal with all the change points they had to
process and shared knowledge. Part of this was adding filters to the existing
boards and new ways of looking at the data. Where feasible we reviewed these
heuristics and integrated them into the displays by default. Examples include
better filtering of canary workloads (recall we do not want to triage changes
in canaries, but rather rerun them) and sorting capabilities.
The impact of these changes shows up in our overall statistics, which are
discussed in Section 8. The summary is that they allowed us to evaluate more
tests and commits to find more changes, while also increasing the overall
quality of the generated tickets without any additional human time.
## 5\. Making the System More Descriptive
Our performance testing environment was originally designed for tests that
measured throughput, as throughput based tests are the easiest to create and
analyze (just run an operation in a loop for a period of time, possibly with
multiple threads). This assumption got built into the system. We knew it was a
limitation in our system and have been striving to get around it. We developed
conventions to add some latency results to our system, but they were
inelegant. Worse, the system largely assumed only one result per test. Ideally
we could measure
many performance results per test, such as throughput, median latency, tail
latencies, and resource utilizations. Before change point detection, we could
not add significantly more metrics since we could not keep up with the simpler
tests we already had. Now that we had change point detection, we wanted to be
able to track and process these additional metrics.
There were fundamentally two ways we could add these new metrics:
1. Have tests measure the metrics of interest and then compute and report the
relevant statistics to the results system.
2. Have tests measure the metrics of interest and report all of those results
to the results system.
In the second case the test would report the metric for every operation (much
more data) and let the results system calculate the statistics. After some
review, we decided we preferred case 2, but that we also had to support case 1.
We preferred the more data intensive case 2 because of what it enables. If we
run a test that executes $10$k operations, the system will report the latency
for each of those $10$k operations. First, having all the data allows us to
change and recompute the statistics in the future. For example, if we decide
we need the $99.99\%$ latency in addition to the existing statistics, we can
add it and recompute. If the test itself was computing the statistics we would
have to rerun the test. Additionally, it allows us to view performance over
test time, within a test and from the test’s perspective (client side). This
gives us a much more dynamic view of the performance of the system. We chose
our preferred case, and it was paired with work on our open-source performance
workload generation tool Genny (noa, 2020a). We created a new service called
Cedar (noa, [n.d.]e) to store the results and calculate the statistics, and a
tool called Poplar (noa, [n.d.]f) to help report the results from the testbed
to Cedar. Both are open source and part of our continuous integration system
ecosystem (noa, [n.d.]d).
While we chose the detailed case, we decided we also had to support the case
in which tests computed their own statistics. The reason for this was simple:
in addition to workloads written in Genny, we also run third party industry
standard benchmarks in our regression environment (e.g., YCSB (Cooper et al.,
2010; noa, 2020b)). Those tests already generate their own statistics, and it
is not reasonable to adapt each such workload to report the metrics for every
operation. The system we built handles both the case of getting all the raw
results and the case of receiving the pre-aggregated data.
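The dual path above can be sketched as follows. This is an illustration, not the actual Cedar schema: the field names (`op_latencies_ms`, `p50`, etc.) and the choice of percentiles are assumptions for the example.

```python
from statistics import quantiles

def normalize_result(result):
    """Reduce a test result to the percentile statistics the system tracks.

    Handles both reporting styles: raw per-operation latencies (case 2)
    and pre-aggregated statistics from third-party benchmarks (case 1).
    Field names here are illustrative, not the actual Cedar schema.
    """
    if "op_latencies_ms" in result:
        # Case 2: compute the statistics server-side from the raw data.
        # Keeping the raw list means new percentiles (e.g., p99.99) can
        # be added and recomputed later without rerunning the test.
        pct = quantiles(sorted(result["op_latencies_ms"]), n=100,
                        method="inclusive")
        return {"p50": pct[49], "p95": pct[94], "p99": pct[98]}
    # Case 1: trust the statistics the benchmark computed itself.
    return {k: result[k] for k in ("p50", "p95", "p99")}
```

The key design point is that case 2 keeps the raw data, so the statistics are a derived view that can always be extended or recomputed.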
The new system was just that, a new system. We needed to integrate it into our
production systems without breaking anything. The test result history is
important both to the result display as well as the change point analysis, so
we could not just turn off the old system and turn on the new. Instead we
needed to make the old system and the new work together in the UI. We also
needed to make it possible to handle the increase in information without
completely overwhelming the build baron team. (The results discussed in this
section are in addition to the increase in results discussed in Section 3.)
Figure 1. The existing build baron triage page is used by the build barons to
triage change points on the existing data.
Figure 2. The new build baron triage page is used by the build barons to
triage change points detected on the new, expanded metrics.
Figure 1 shows a snapshot of the existing build baron triage page and Figure 2
shows a snapshot of the new triage board. These pages are set up to enable the
build barons to triage detected change points, create JIRA tickets, and assign
those tickets to teams. We aggregate all change points for a given commit
revision into one line by default to simplify processing. Each group of change
points can be expanded to show all the impacted tests and configurations, as
is done for one group in Figure 2.
For now we have placed all the new data on a new tab called “Change Points -
Expanded Metrics”. Adding a new tab is not optimal, but it does allow us to
update and experiment with the new system with no fear of breaking our
existing system and the processing of the legacy “Change Points” tab.
Eventually we expect that the two tabs will merge together. The new tab has
the additional column “Measurement”. The argument in the field is a regular
expression allowing tight control and filtering for the build baron. For now,
the system is set up to display three such metrics ($50$th, $95$th, and $99$th
percentile latencies). We expect to add more metrics to be triaged, as well as
to migrate the legacy metrics to this page in the future. The page also shows
for each change the date the change was committed (Date) as well as the date
on which the change point was calculated (Calculated On). The first is useful
for understanding the development of the software, while the latter is useful
for insight into the change point detection process. A change point that has
been calculated recently is the result of more recent test executions. Both
dates replace the somewhat ambiguous “create time” on the original page.
We also display trend graphs for each test, showing the evolution for a
performance result over time, as the software is developed. The graphs are
included on the page summarizing results for each task. As in the case of the
triage page, we worried about overwhelming the users with additional results,
so we added a pull down enabling the user to select which metric to display.
Figure 3 shows a particularly interesting example of the value of these
additional metrics and graphs. We detected a small change in average
throughput, but further investigation showed a clearer change in the 90th
percentile latency, while there was no change in the median latency. This
information makes it easier to debug the issue, as it clearly is not the
common path that is slower, but rather something making a small fraction of
the operations significantly slower.
Figure 3. Three trend views of the same test, showing a performance
regression. All three show performance over time. The top graph shows
throughput, the middle shows median latency, and the bottom graph shows $90$th
percentile latency. The regression is visible on the throughput and $90$th
percentile latency graphs, but not for the median latency.
## 6\. Lowering the Performance Impact of System Issues
We run many of our performance tests in the Cloud and have done work to reduce
the noise and increase the reproducibility of that system (Henrik Ingo and
David Daly, 2019). Sometimes there are performance problems on the testbed
itself. We use “canary tests” to detect that: a “canary test” exercises the
testbed rather than the software under test. In normal operation we
expect the results for our canary tests not to change over time.
Canary tests run just like any other test, but treating them the same leads to
some challenges. First, anyone looking at the result needs to
know what is a canary test and what is not. We do not want server engineers
spending any time diagnosing canary test failures. At the same time, we also
do not want a server engineer diagnosing a performance change on a non-canary
test when a canary test has also failed. Ideally, we would discard that result
because it is suspect, and rerun those performance tests. If we were able to
completely discard every (or even most) case of significant noise due to the
system, it would make the job of the change point detection algorithm that
much easier.
We set out to lower the impact of system noise by leveraging the data from our
change point detection algorithm. We recognized that while changes in server
performance manifested as changes in the distribution on our performance test
results, system noise was different. The common problem was a bad run with
results that did not match recent history. This is a problem of finding
statistical outliers, not of finding change points. As NIST defines it, “An
outlier is an observation that appears to deviate markedly from other
observations in the sample.” (noa, [n.d.]a).
There are a number of existing outlier detection algorithms. We implemented
the Generalized ESD Test (GESD) (Rosner, 1983; noa, [n.d.]b) algorithm. The
code is included in our open source signal processing repository
(https://github.com/10gen/signal-processing). Specifically, we wanted to use
the outlier detection to detect outliers on canary tests. An outlier on a
canary test would indicate something strange happened on the testbed. We do
not want to use the data from such a run, and ideally we rerun those
experiments. When
an outlier is detected on a canary test, we would automatically suppress the
test results for that task and reschedule the task.
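The GESD procedure can be sketched as below. This is a simplified illustration, not the code in our signal-processing repository: it uses a normal approximation to the Student-t quantile, which is only reasonable for larger samples, whereas the exact test uses the t distribution itself.

```python
import math
from statistics import NormalDist, mean, stdev

def gesd_outliers(samples, max_outliers, alpha=0.05):
    """Flag up to max_outliers outliers with the Generalized ESD test
    (Rosner, 1983). Returns the indices of detected outliers.

    Sketch only: critical values use a normal approximation to the
    Student-t quantile; the exact test uses the t distribution.
    """
    data = list(samples)
    idx = list(range(len(data)))
    n = len(data)
    removed, test_stats = [], []
    for i in range(1, max_outliers + 1):
        if len(data) < 3:
            break
        m, s = mean(data), stdev(data)
        if s == 0:
            break
        # R_i: the most extreme point, in units of the sample stdev.
        j = max(range(len(data)), key=lambda k: abs(data[k] - m))
        r_i = abs(data[j] - m) / s
        # lambda_i: critical value for this step.
        p = 1 - alpha / (2 * (n - i + 1))
        t = NormalDist().inv_cdf(p)  # approximation of the t quantile
        lam_i = ((n - i) * t) / math.sqrt((n - i - 1 + t * t) * (n - i + 1))
        removed.append(idx[j])
        test_stats.append((r_i, lam_i))
        del data[j]
        del idx[j]
    # The outlier count is the largest i with R_i > lambda_i.
    n_out = 0
    for i, (r_i, lam_i) in enumerate(test_stats, start=1):
        if r_i > lam_i:
            n_out = i
    return removed[:n_out]
```

Applied to a canary history, a non-empty return value marks runs whose results should be suppressed and rescheduled.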
While reasonable in theory, we ran into some challenges. Figure 4 shows an
example of one such challenge: We had a short period of time in which the
underlying system got faster. This may have been a temporary configuration
change. Essentially every task that ran after that change was flagged as an
outlier and re-run. In fact, they were all run 3 or more times. This cost a
lot of money and (worse) slowed our ability to get results. Also, as it was a
real change in the underlying testbed performance, the results did not
noticeably change with any of the re-runs. In this case we spent a lot of
money for no improvement. We added a system to “mute” such changes, but it
required active intervention to avoid the worst cases. The testbed change
itself did not last long, but it was long enough to cause more outliers and
re-runs when the performance returned to normal.
Figure 4. Performance for one of our canary workloads over time. It shows a
real, if short lived, change in testbed performance, causing the outlier
detection based system to rerun many tests.
In other cases the system would rightly detect a transient change in testbed
performance, but the underlying issue lasted for some period of time. The
tests would immediately rerun, but still get the bad results. Only after
waiting some period of time would the performance return to normal on a rerun.
At the end of the day we disabled the system. It was not solving our problem,
but it was costing us money. We have kept the computation running, so we have
built up a wealth of data for when we come back to this area or decide to use
outlier detection for other challenges.
## 7\. Improved Comparison of Arbitrary Results
Our previous work on change point detection (Daly et al., 2020) only addressed
identifying when performance changed. It did nothing for comparing two
arbitrary builds to see if performance changed. There are two common cases in
which we want to compare arbitrary test runs:
1. Comparing performance from recent commits to the last stable release.
2. Comparing performance from a “patch” build (proposed change): does that
patch change performance?
In the first case we want to determine the net performance change over a
period of time. Very commonly this is how we check how our proposed release
compares to the previous release. We would like to know what is faster, what
is slower, and what is more or less stable now compared to then. There may be
multiple changes in performance for a given test across a release cycle.
Change point detection helps us understand each of those changes, but at the
end of the day we need to let our customers know what to expect if they switch
to the newer version. This check also provides a backstop to change point
detection to make sure nothing significant has slipped through the triage
process.
In the second case the engineer needs to know what impact their changes will
have on performance. We have tools to compare the results from two arbitrary
test executions, but they have no sense of the noise distribution for the
test. That makes it hard to tell which differences are “real” and which are
just artifacts of the noise of those particular runs. A common pattern to deal
with this is to compare all the data, sort by percentage change, and inspect
the tests with the largest changes. Invariably the largest reported changes are
due to noise, usually from tests that report a low absolute result value
(e.g., latency of something fast), leading to large percentage changes. An
advanced user may learn which tests to ignore over time, while a less
experienced user may either use brute force, or enlist an experienced user.
Neither solution is a good use of time.
The change point detection system does not directly improve our ability to
compare performance across releases; however, its results do enable smarter
comparisons. All of the data from the change point detection algorithms is
available in an internal database. That data includes the location of change
points, as well as sample mean and variances for periods between change
points. The sample mean averages out some of the noise, and the sample
variance gives us a sense of how much noise there is. We can use that data a
number of ways to improve the comparison. The simplest may be to compare means
instead of points, and use the variance data to understand how big the change
is relative to regular noise.
After a few iterations we had the following system:
* Select two revisions to compare.
* Query the database for all the raw results for each revision.
* For each result, query the database for the most recent change point before
the given revision. Save the sample mean and variance for the region after the
change point.
* Compute a number of new metrics based on those results.
The new computed values were:
* Ratio of the sample means
* Percentage change of the sample means
* Change in means in terms of standard deviation
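These three metrics can be sketched from the stored stable-region statistics. This is illustrative only: pooling the two variances for the deviation metric is one reasonable choice, not necessarily the exact formula used in production.

```python
from math import sqrt

def compare_regions(base_mean, base_var, new_mean, new_var):
    """Compute the three comparison metrics from the stable-region
    statistics stored by the change point pipeline (sketch).

    'deviation' expresses the shift in units of a pooled standard
    deviation; pooling is an assumption of this example.
    """
    ratio = new_mean / base_mean
    percent_change = 100.0 * (new_mean - base_mean) / base_mean
    pooled_sd = sqrt((base_var + new_var) / 2.0)
    deviation = ((new_mean - base_mean) / pooled_sd
                 if pooled_sd > 0 else float("inf"))
    return {"ratio": ratio, "percent_change": percent_change,
            "deviation": deviation}
```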
Note that there are better statistical tests we could use (see future work in
Sec 9). Comparing means and standard deviations is not technically correct for
determining the probability that a change is statistically significant.
However, it is both easy to do and proved useful for a prototype.
We exported the data as a CSV file and operated on it in a spreadsheet for a
first proof of concept. Our first instinct was to sort all the results by how
many standard deviations a change represented, however, that did not work
well. It turned out that some of our tests reported very low variances. The
top results ended up being very small changes in absolute terms, but huge
changes in terms of standard deviations. With that in mind, we shifted to a more
complex strategy: we filtered out all results that were less than a 2 standard
deviation change, and then sorted by percentage change. We felt comfortable
doing that since we did not need to catch every change for the current use,
only the most significant (in a business sense, not a statistical one)
changes. A change that was less than two standard deviations was unlikely to
be the performance change that the engineering organization had to know about.
Once we filtered on number of standard deviations and sorted on percentage
change, the signal greatly improved. The most important changes rose to the
top and were reviewed first.
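The filter-then-sort strategy amounts to a few lines; the row shape and field names here are hypothetical.

```python
def triage_order(results, min_sigma=2.0):
    """Order comparison results for review: drop anything under
    min_sigma standard deviations of change, then sort the rest by
    absolute percentage change, largest first. Each result is a dict
    with (hypothetical) 'test', 'percent_change', and 'deviation' keys.
    """
    significant = [r for r in results if abs(r["deviation"]) >= min_sigma]
    return sorted(significant, key=lambda r: abs(r["percent_change"]),
                  reverse=True)
```

Note how a noisy test with a huge percentage change but a sub-threshold deviation is dropped before sorting, which is exactly what pushed the real changes to the top.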
We regularly need the ability to compare two commits as part of a monthly
update on performance. Once a month we checkpoint the current status of
performance for the development branch against the previous month, and against
the last stable release. This gives us the big picture on the state of
performance, in addition to the detailed results from change point detection.
Figure 5 shows a spreadsheet we created using this process for a recent
monthly checkpoint on the state of performance. The figure shows the two
standard deviation filter applied (“Deviation” column), and then sorted on the
“Percent Change” column. This view enabled us to quickly review all the real
changes and avoid changes that were due to noisy tests. For example, the top
test is $250\%$ faster across the comparison. While we have shown performance
improvements in the figure, we review both improvements and regressions to get
a complete view of the state of performance.
Figure 5. Spreadsheet view of performance comparing performance of two
commits, taking advantage of the statistics generated by the change point
detection algorithm.
In practical terms, this proof of concept has lowered the cost of reviewing
the monthly build from multiple hours to somewhere between 30 and 60 minutes.
Additionally, all of that time is now productive time looking at real issues.
If there are more issues, it takes more time, and if there are fewer, it takes
less time. We expect to transition this view from the CSV and proof of concept
stage, into another page in our production system available to all engineers.
We also expect to implement more rigorous statistical tests.
## 8\. Impact
The combination of the changes described above has had noticeable impact on
our performance testing infrastructure and on our engineering organization.
The basic way we track a performance change is a JIRA ticket. We compiled
statistics from our JIRA tickets to quantify part of that impact. The
statistics are aligned with our release cycle, which is nominally a year long.
Release Cycle | 4.2.0 | 4.4.0
---|---|---
Total Tickets | 273 | 393
Resolved Tickets | 252 | 346
Percent Resolved | 92.3% | 88.0%
Resolved by Release | 205 | 330
Percent Resolved by Release | 75.1% | 84.0%
Release Duration (days) | 412 | 352
Tickets per Day | 0.66 | 1.12
Table 3. Statistics on performance related JIRA tickets over the previous two
release cycles.
We had considerably more performance related BF tickets in 4.4.0 than 4.2.0,
over a shorter release cycle. Tickets per day went from $0.66$ to $1.12$, a
$70\%$ increase. We had a large increase in tickets, but simultaneously
increased the percentage of tickets resolved by the release. Those are both
positive signs, especially since we spent the same amount of time triaging
those changes, but it is only truly positive if the ticket quality has stayed
the same or improved.
Release Cycle | 4.2.0 | 4.4.0
---|---|---
Code related | 28.57% | 43.06%
Test related | 8.73% | 7.80%
Configuration related | 0.00% | 0.58%
System related | 28.17% | 24.86%
Noise related | 7.94% | 6.94%
Duplicate ticket | 11.11% | 14.45%
Not labeled | 16.67% | 2.31%
Table 4. Breakdown of root causes for performance JIRA tickets.
Table 4 shows quality information about our performance tickets. We label
every ticket based on its cause. The best case is for the change to be code
related: that indicates that the ticket captures a performance change based on
changes in the code under test. These are tickets telling us something useful
about our software. There are many other causes for tickets however.
Performance changes can be created due to changes in the test (test related)
or the testbed configuration (configuration related), the system itself can
directly cause an error (system related), or noise in the system can create a
false alert (noise related). Sometimes we create multiple tickets which we
eventually determine are the same cause (duplicate ticket). Finally, some
tickets are not labeled at all because they do not have a clear cause.
The fraction of code related tickets has gone up, even as the ticket volume
has also gone up. We can conclude that we are generating more tickets, with
the same amount of time dedicated to triage, and the tickets are of higher
quality than last year. In other words, we are doing our job better than last
year. While we are happy with that improvement, we also recognize that less
than half our tickets are about changes in the software under test. We would
like to continue to drive that percentage higher.
Interestingly, the category with the largest drop is tickets that are not
labeled. This is due to us doing a better job of diagnosing tickets and making
them actionable; it is not that we were simply failing to label code related
tickets in the past. The number of duplicates is the only non-code related
category to go up noticeably. We attribute this to the increased load of
change points and tickets on the build barons.
Release Cycle | 4.2.0 | 4.4.0
---|---|---
Performance improvements | 21 | 40
Percentage of tickets that are improvements | 7.69% | 10.18%
Days per performance improvement | 19.62 | 8.80
Performance regressions | 15 | 13
Percentage of tickets that are regressions | 5.49% | 3.31%
Days per performance regression | 27.47 | 27.08
Table 5. Breakdown on the number and rate of performance JIRA tickets closed
as improvements and regressions over the past two release cycles.
The last measure of goodness is how many tickets were fixed (or not), and how
many things improved. Table 5 shows those statistics. Before discussing the
numbers we note that we count any net improvement as an improvement and any
net regression closed without fixing as a regression, regardless of its
practical significance. We had a comparable number of accepted regressions year
over year, while nearly doubling the number of improvements. So, even with the
large increase in tickets, we still only get a regression that is not fixed
about once a month, and we went from getting an improvement every 20 days to
one every 9 days.
Clearly our system is working better. We have more tickets and they are higher
quality. In addition to each component performing better, we believe that we
have enabled a virtuous cycle. Performance issues get diagnosed faster, making
them easier to fix, so more issues get fixed. Development engineers get used
to receiving performance tickets and know they are high quality and
operational. Since the system provides useful information, engineers are more
likely to look to fix their regressions and to add more performance tests. As
we add more performance tests, we are more likely to catch performance
changes. One last improvement is that with increased trust, engineers are more
likely to test their key changes before merging, so we can avoid some
performance regressions ever being committed to the development mainline.
## 9\. Future Work and Continuing Challenges
Our current performance testing system enables us to detect performance
changes during the development cycle, and to enable our developers to
understand the impact of their code changes. While we have made great
progress, there is still much that we would like to improve in the system. We
expect that everything (commits, tests, results, changes) will continue to
increase, putting more load on our system. Additionally, we are increasing our
release frequency to quarterly (Mat Keep and Dan Pasette, 2020), which will
further increase the load on the system. In the near term we are working to
improve the ability to compare arbitrary versions, building on the work
described in Section 7. This will involve both using better statistical tests,
such as Welch’s t-test (Welch, 1947) (assuming normality) or the Mann-Whitney
U-test (Mann and Whitney, 1947) in place of the simple variance-based
calculation, as well as building the view into our production system. This
will help us to compare performance between releases, as well as help
developers determine if their proposed changes impact performance.
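Welch's statistic itself is straightforward to compute from two samples. The sketch below computes only the statistic and the Welch-Satterthwaite degrees of freedom; a full implementation would also convert these to a p-value via the t distribution.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two samples with possibly unequal variances (sketch). A full
    test would look the statistic up in the t distribution with the
    returned degrees of freedom to obtain a p-value.
    """
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a) / na, variance(sample_b) / nb
    t = (mean(sample_a) - mean(sample_b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, df
```

Unlike the standard two-sample t-test, Welch's version does not assume the two samples share a variance, which suits test histories whose noise level changes between regions.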
There is still much we can do on the change point detection itself. In order
to simplify the implementation, all tests and configurations are treated as
separate and independent time series by the change point detection algorithm.
We think there is a large opportunity to consider correlations between tests
and configurations. It is very infrequent that one test and configuration
changes separately from all others. We should be able to exploit correlated
changes to better diagnose and describe real performance changes, and exclude
noise.
There is still too much noise in the system, including some cases of
particularly nasty noise. Two examples are tests that show bimodal
behavior and unrelated system noise. Some tests will return one of two
different results, and may stay with one of those results for a period of time
before reverting to the other (e.g., 5 test runs at $20$ followed by 4 test
runs at $10$). The change point detection algorithm has a very hard time with
bimodal behavior as it looks like a statistical change. Today, a human has to
filter these changes out. There are also cases of system noise that are real
performance changes due to compiler changes. Sometimes these are due to code
layout issues letting a critical code segment fit within or not fit within a
performance-critical hardware cache. These issues manifest as deterministic
changes in performance, but there is not much we can do about them except
filter them out by hand.
Ultimately, the goal of all of this work can be described as a multi-
dimensional optimization problem. We want to simultaneously:
* Maximize the useful signal on performance versus noise and distractions.
* Maximize the test and configuration coverage.
* Minimize the cost of performance testing.
* Minimize the time from creation of a performance change to its detection,
diagnosis, and fix (the limit of this is catching a regression before commit).
We have work to do on all of these points. Often, in the past, we have found
ourselves with bad options, which explicitly trade off one point for another.
We hope to develop techniques that improve one or more items above at the same
time, without hurting the others.
## 10\. Related Work
Related work has looked at testing performance in continuous integration
systems. Rehmann et al. (Rehmann et al., 2016) describe the system developed
for testing SAP HANA and stress the need for complete automation. The system
compared results to a user specified limit in order to determine a pass/fail
criterion. The authors also discuss challenges in reproducibility, isolation,
and getting developers to accept responsibility for issues.
Continuous integration tests need to be fast, but standard benchmarks require
extended periods of time to run. Laaber and Leitner (Laaber and Leitner, 2018)
looked at using microbenchmarks for performance testing in continuous
integration to deal with this problem. They found some, but not all
microbenchmarks are suitable for this purpose.
Once performance tests are included in a CI system, the next challenge is to
efficiently isolate the changes. Muhlbauer et al. (Muhlbauer et al., 2019)
describe sampling performance histories to build a Gaussian Process model of
those histories. The system decides which versions should be tested in order
to efficiently build up an accurate model of performance over time and to
isolate abrupt performance changes. The paper addresses a problem similar to
our previous work on detecting change points in test histories (Daly et al.,
2020), although our previous work assumes performance test results have a
constant mean value between change points.
Test result noise is an ongoing challenge. Several papers investigate both
sources of noise (Duplyakin et al., 2020; Maricq et al., 2018) and quantifying
the impact of that noise (Laaber et al., 2019). Duplyakin et al. (Duplyakin et
al., 2020) use change point detection to identify when the performance of the
nodes in a datacenter change. Their objective is to identify and isolate those
performance changes in order to keep them from impacting experiments run in
the datacenter. The paper by Maricq et al. (Maricq et al., 2018) includes a
number of practical suggestions to reduce performance variability. The
suggestions should be useful for anyone running performance benchmarks, and we
perform many of these suggestions in our system. They also show the lack of
statistical normality in their results, validating our design choice to not
assume normality. Finally, Laaber et al. (Laaber et al., 2019) compare the
variability of different microbenchmark tests across different clouds and
instance types on those clouds, demonstrating that different tests and
different instance types have wildly different performance variability.
Running benchmark test and control experiments on the same hardware can help
control the impact of that noise.
The related area of energy consumption testing shows similar issues with test
noise. Ournani et al. (Ournani et al., 2020) describe the impact of CPU
features (C-states, TurboBoost, core pinning) on energy variability. We have
observed similar impacts on performance variability from those factors in our
test environment (Henrik Ingo and David Daly, 2019). Other work looks at
extending the state of the art for change point detection in the presence of
outliers (Paul Fearnhead and Guillem Rigaill, 2019). Our system is sensitive
to outliers in the results as well. Our efforts on outlier detection would
have helped reduce the impact of outliers in our use case, had they been
successful.
Finally, there is ongoing work related to our ultimate goal of more
efficiently detecting changes while simultaneously increasing our overall
performance test coverage. Grano et al. (Grano et al., 2019) investigated
testing with fewer resources. While this work is focused on correctness
testing, the principles can be extended to performance testing. Multiple
papers (De Oliveira et al., 2017; Huang et al., 2014) try to identify which
software changes are most likely to have performance impact in order to
prioritize the testing of those changes. Huang et al. (Huang et al., 2014) use
code analysis of software changes to decide which changes are most likely to
impact which tests, while de Oliveira et al. (De Oliveira et al., 2017) use
many indicators (including static and dynamic data) to build a predictor of
the likelihood of a performance change in the tests based on a given software
change. Other work has focused on efficiently finding performance changes
across both versions and configurations (Mühlbauer et al., 2020) and is
specifically focused on minimizing test effort while enabling the testing of
potentially huge space of configuration options and software changes. We hope
to build on these efforts to improve the efficiency of our performance
testing.
## 11\. Acknowledgments
The work described in this paper was done by a large collection of people
within MongoDB. Key teams include the Decision Automation Group (including
David Bradford, Alexander Costas, and Jim O’Leary) who are collectively
responsible for all of our analysis code, the Server Tooling and Methods team
who own the testing infrastructure, the Evergreen team which built Cedar and
Poplar for the expanded metrics support, and of course our dedicated build
baron team who make the whole system work.
We would also like to thank Eoin Brazil for his feedback on drafts of this
paper.
## 12\. Conclusion
In this paper we have reviewed a number of recent changes to our performance
testing infrastructure at MongoDB. This builds on previous work we have done
to automate our performance testing environment, reduce the noise in the
environment (both actual noise and its impact), and make better use of the
results from our performance testing. This infrastructure is critical to our
software development processes in order to ensure the overall quality of the
software we develop.
We first reviewed the general increase in load on the infrastructure. Each
year we run more tests in more configurations while our developers commit more
changes to our source repository. Overall we had a more than $3x$ increase
over two years in the total possible number of test results to generate and
analyze.
Paired with the general increase in load, we focused on improving the
scalability of our ability to process those results and isolate performance
changes. We also added the ability to report more and more descriptive results
from tests, enabling saving information about every operation within a
performance test. This required new systems to store and process the results,
as well as new displays for triaging the results.
To better control system noise, we built a system to detect when the
performance of our testbeds had changed, and therefore when we should not
trust the results of our performance tests. While promising in theory, in practice this
did not work as well as we had hoped, and ultimately we disabled it.
Finally, we enabled better comparison of results between arbitrary commits.
This was a large open challenge for us. Building upon the change point
detection system we use to process our results, we were able to give a much
clearer view of the significant changes between arbitrary commits, making it
much easier to regularly check the current state of the development software
against the last stable release. We continue to both refine this comparison of
results and lift it into our production environment.
The cumulative impact of these changes and our previous work has been to
enable a virtuous cycle for performance at MongoDB. As the system is used
more, we catch and address more performance changes, leading us to use the
system more. This virtuous cycle directly increases the productivity of our
development engineers and leads to a more performant product.
## References
* (1)
* noa ([n.d.]a) [n.d.]a. 1.3.5.17. Detection of Outliers. https://www.itl.nist.gov/div898/handbook/eda/section3/eda35h.htm
* noa ([n.d.]b) [n.d.]b. 1.3.5.17.3. Generalized Extreme Studentized Deviate Test for Outliers. https://www.itl.nist.gov/div898/handbook/eda/section3/eda35h3.htm
* noa ([n.d.]c) [n.d.]c. Evergreen Continuous Integration: Why We Reinvented The Wheel. https://engineering.mongodb.com/post/evergreen-continuous-integration-why-we-reinvented-the-wheel
* noa ([n.d.]d) [n.d.]d. Evergreen Ecosystem. https://github.com/evergreen-ci/evergreen
* noa ([n.d.]e) [n.d.]e. Package cedar. https://godoc.org/github.com/evergreen-ci/cedar
* noa ([n.d.]f) [n.d.]f. Package poplar. https://godoc.org/github.com/evergreen-ci/poplar
* noa (2020a) 2020a. Genny workload generator. https://github.com/mongodb/genny
* noa (2020b) 2020b. YCSB. https://github.com/mongodb-labs/YCSB
* Cooper et al. (2010) Brian F. Cooper, Adam Silberstein, Erwin Tam, Raghu Ramakrishnan, and Russell Sears. 2010\. Benchmarking cloud serving systems with YCSB. In _Proceedings of the 1st ACM symposium on Cloud computing - SoCC ’10_. ACM Press, Indianapolis, Indiana, USA, 143. https://doi.org/10.1145/1807128.1807152
* Daly et al. (2020) David Daly, William Brown, Henrik Ingo, Jim O’Leary, and David Bradford. 2020. The Use of Change Point Detection to Identify Software Performance Regressions in a Continuous Integration System. In _Proceedings of the ACM/SPEC International Conference on Performance Engineering_ _(ICPE ’20)_. Association for Computing Machinery, Edmonton AB, Canada, 67–75. https://doi.org/10.1145/3358960.3375791
* De Oliveira et al. (2017) Augusto Born De Oliveira, Sebastian Fischmeister, Amer Diwan, Matthias Hauswirth, and Peter F. Sweeney. 2017. Perphecy: Performance Regression Test Selection Made Simple but Effective. In _2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)_. 103–113. https://doi.org/10.1109/ICST.2017.17
* Duplyakin et al. (2020) Dmitry Duplyakin, Alexandru Uta, Aleksander Maricq, and Robert Ricci. 2020. In Datacenter Performance, The Only Constant Is Change. In _2020 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID)_. 370–379. https://doi.org/10.1109/CCGrid49817.2020.00-56
* Grano et al. (2019) Giovanni Grano, Christoph Laaber, Annibale Panichella, and Sebastiano Panichella. 2019. Testing with Fewer Resources: An Adaptive Approach to Performance-Aware Test Case Generation. _IEEE Transactions on Software Engineering_ (2019), 1–1. https://doi.org/10.1109/TSE.2019.2946773 arXiv: 1907.08578.
* Henrik Ingo and David Daly (2019) Henrik Ingo and David Daly. 2019. Reducing variability in performance tests on EC2: Setup and Key Results. https://engineering.mongodb.com/post/reducing-variability-in-performance-tests-on-ec2-setup-and-key-results
* Huang et al. (2014) Peng Huang, Xiao Ma, Dongcai Shen, and Yuanyuan Zhou. 2014\. Performance regression testing target prioritization via performance risk analysis. In _Proceedings of the 36th International Conference on Software Engineering_ _(ICSE 2014)_. Association for Computing Machinery, Hyderabad, India, 60–71. https://doi.org/10.1145/2568225.2568232
* Ingo and Daly (2020) Henrik Ingo and David Daly. 2020. Automated system performance testing at MongoDB. In _Proceedings of the workshop on Testing Database Systems_ _(DBTest ’20)_. Association for Computing Machinery, New York, NY, USA, 1–6. https://doi.org/10.1145/3395032.3395323
* Laaber and Leitner (2018) Christoph Laaber and Philipp Leitner. 2018. An evaluation of open-source software microbenchmark suites for continuous performance assessment. In _Proceedings of the 15th International Conference on Mining Software Repositories_ _(MSR ’18)_. Association for Computing Machinery, Gothenburg, Sweden, 119–130. https://doi.org/10.1145/3196398.3196407
* Laaber et al. (2019) Christoph Laaber, Joel Scheuner, and Philipp Leitner. 2019\. Software microbenchmarking in the cloud. How bad is it really? _Empirical Software Engineering_ 24, 4 (Aug. 2019), 2469–2508. https://doi.org/10.1007/s10664-019-09681-1
* Mann and Whitney (1947) H. B. Mann and D. R. Whitney. 1947. On a Test of Whether one of Two Random Variables is Stochastically Larger than the Other. _Annals of Mathematical Statistics_ 18, 1 (March 1947), 50–60. https://doi.org/10.1214/aoms/1177730491 Publisher: Institute of Mathematical Statistics.
* Maricq et al. (2018) Aleksander Maricq, Dmitry Duplyakin, Ivo Jimenez, Carlos Maltzahn, Ryan Stutsman, and Robert Ricci. 2018\. Taming Performance Variability. In _13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18)_. 409–425. https://www.usenix.org/conference/osdi18/presentation/maricq
* Mat Keep and Dan Pasette (2020) Mat Keep and Dan Pasette. 2020. Accelerating Delivery with a New Quarterly Release Cycle, Starting with MongoDB 5.0 | MongoDB Blog. https://www.mongodb.com/blog/post/new-quarterly-releases-starting-with-mongodb-5-0
* Muhlbauer et al. (2019) Stefan Muhlbauer, Sven Apel, and Norbert Siegmund. 2019\. Accurate Modeling of Performance Histories for Evolving Software Systems. In _2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE)_. IEEE, San Diego, CA, USA, 640–652. https://doi.org/10.1109/ASE.2019.00065
* Mühlbauer et al. (2020) Stefan Mühlbauer, Sven Apel, and Norbert Siegmund. 2020\. Identifying Software Performance Changes Across Variants and Versions. In _2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)_. 12. https://doi.org/10.1145/3324884.3416573
* Ournani et al. (2020) Zakaria Ournani, Mohammed Chakib Belgaid, Romain Rouvoy, Pierre Rust, Joel Penhoat, and Lionel Seinturier. 2020. Taming Energy Consumption Variations In Systems Benchmarking. In _Proceedings of the ACM/SPEC International Conference on Performance Engineering_ _(ICPE ’20)_. Association for Computing Machinery, New York, NY, USA, 36–47. https://doi.org/10.1145/3358960.3379142
* Paul Fearnhead and Guillem Rigaill (2019) Paul Fearnhead and Guillem Rigaill. 2019. Changepoint Detection in the Presence of Outliers. _J. Amer. Statist. Assoc._ 114, 525 (2019), 169–183. https://doi.org/10.1080/01621459.2017.1385466
* Rehmann et al. (2016) Kim-Thomas Rehmann, Changyun Seo, Dongwon Hwang, Binh Than Truong, Alexander Boehm, and Dong Hun Lee. 2016. Performance Monitoring in SAP HANA’s Continuous Integration Process. _ACM SIGMETRICS Performance Evaluation Review_ 43, 4 (Feb. 2016), 43–52. https://doi.org/10.1145/2897356.2897362
* Rosner (1983) Bernard Rosner. 1983\. Percentage Points for a Generalized ESD Many-Outlier Procedure. _Technometrics_ 25, 2 (1983), 165–172. https://doi.org/10.2307/1268549 Publisher: [Taylor & Francis, Ltd., American Statistical Association, American Society for Quality].
* Welch (1947) B. L. Welch. 1947\. The Generalization of ’Student’s’ Problem When Several Different Population Variances are Involved. _Biometrika_ 34, 1-2 (Jan. 1947), 28–35. https://doi.org/10.1093/biomet/34.1-2.28 Publisher: Oxford Academic.
# Degenerated Liouvillians and Steady-State Reduced Density Matrices
Juzar Thingna Center for Theoretical Physics of Complex Systems, Institute
for Basic Science (IBS), Daejeon 34126, Republic of Korea. Basic Science
Program, University of Science and Technology, Daejeon 34113, Republic of
Korea. Daniel Manzano<EMAIL_ADDRESS>Universidad de Granada,
Departamento de Electromagnetismo y Física de la Materia and Instituto Carlos
I de Física Teórica y Computacional, Granada 18071, Spain.
###### Abstract
Symmetries in an open quantum system lead to a degenerated Liouvillian, which
physically implies the existence of multiple steady states. In such cases,
obtaining the initial-condition-independent steady states is highly nontrivial
since any linear combination of the _true_ asymptotic states, which may not
necessarily be a density matrix, is also a valid asymptote for the
Liouvillian. Thus, in this work we consider different approaches to obtain the
_true_ steady states of a degenerated Liouvillian. In the ideal scenario, when
the open system symmetry operators are known we show how these can be used to
obtain the invariant subspaces of the Liouvillian and hence the steady states.
We then discuss two other approaches that do not require any knowledge of the
symmetry operators. These could be a powerful tool to deal with quantum many-
body complex open systems. The first approach, which is based on Gram-Schmidt
orthonormalization of density matrices, allows us to obtain _all_ the steady
states, whereas the second one based on large deviations allows us to obtain
the non-degenerated maximum and minimum current-carrying states. We illustrate
our methods with the help of an open para-Benzene ring and examine interesting
scenarios such as the dynamical restoration of Hamiltonian symmetries in the
long-time limit and apply the method to study the eigenspacing statistics of
the nonequilibrium steady state.
> In 1976 Gorini, Kossakowski, Sudarshan, and Lindblad (GKSL) [1, 2]
> independently proposed a completely positive trace preserving master
> equation that governs the dynamics of a generic quantum system. Since then
> the equation has been a hallmark in the study of dissipative open quantum
> systems and has been used in a wide variety of applications. In recent
> years, due to the experimental advancements, engineering the bath properties
> and system-bath interaction has become possible. One immediate consequence
> is the existence of multiple steady states. In such cases, the dissipative
> Liouvillian becomes degenerated, having more than one invariant subspace. In
> general, finding the nonequilibrium steady states (NESS) is highly
> nontrivial and in this work we outline three methods to address this issue.
> Each method has its own benefits and drawbacks. Using a _para_ -Benzene ring
> as an open quantum system we elucidate the methods and find the existence of
> decoherence free subspaces or even dynamical restoration of Hamiltonian
> symmetries in the long time limit. Lastly, since our approach allows us to
> obtain the NESSs for a degenerated Liouvillian we use it to study the
> statistics of the ratio of consecutive eigenspacing $r$ of the NESS which
> shows $P(r)\rightarrow 0$ as $r\rightarrow 0$.
## I Introduction
Quantum master equations are an essential tool to study dissipative systems
and have been applied to a wide variety of model systems in quantum optics [3,
4, 5], thermodynamics [6, 7, 8, 9, 10], transport [11, 12, 13, 14], and
quantum information [15, 16]. The most general Markovian master equation that
preserves the properties of the density matrix (positivity, Hermiticity, and
trace) is the Lindblad (or Gorini-Kossakowski-Sudarshan-Lindblad, GKSL)
equation [2, 1, 17, 18]. This equation describes the dynamics of a system
under the effect of a Markovian environment. The fixed points of this dynamics
have also been broadly analysed. Evans proved [19] that bounded systems
present at least one fixed point, and that there can be more than one, leading
to degeneracy of the Liouvillian.
The study of degenerated master equations has been very active during the last
decade. The use of symmetries and degeneracy has been applied to reduce the
dimensionality of open quantum systems [20], to harness quantum transport
[21], to detect magnetic fields [22], and in error correction [23]. In the
timely field of quantum machine learning there are approaches to pattern
retrieval by the use of degenerated open quantum systems [24]. Furthermore,
the non-equilibrium properties of molecular systems have been addressed to
detect symmetries and multiple fixed points [25].
In the non-degenerated case the initial condition independent steady state of
a system can be obtained by numerically diagonalising the dissipative
Liouvillian. Unfortunately, the degenerated case is complicated because any
linear combination of fixed points is also a fixed point, and thus there is no
guarantee that the diagonalization algorithm will return the physical steady
states instead of their linear combinations. Thus, the problem of degenerated
Liouvillians becomes nontrivial and hard to analyse numerically, since the
initial condition dependence cannot be easily eliminated.
In this paper, we present a toolbox for the extraction of the physical steady-
states of degenerated open quantum systems in the Lindblad form. We present
three different methods: a block diagonalization, a Gram-Schmidt-inspired
orthonormalization, and a method based on large deviation theory. Each method
has its own strengths and weaknesses. To illustrate the presented methods we
apply them to a model of a ring driven out of equilibrium by two thermal
baths. We analytically calculate the steady-states, for a specific choice of
the parameters, by the block-diagonalization method. We discuss the
phenomenology of the open quantum system as a function of its bath parameters
and test the numerical methods. The minimal model allows us to analytically
discuss a plethora of interesting scenarios, e.g., we find the invariant
subspace of the Liouvillian can become degenerate if the bath is engineered to
only pump energy into the system. In other words, even though one expects a
single steady state corresponding to the invariant subspace we find multiple
steady states due to the dynamical degeneration of the invariant subspace. The
Gram-Schmidt-inspired method also allows us to explore the eigen-spacing
statistics of the nonequilibrium steady state (NESS) and understand the
signatures from the perspective of random matrix theory [26].
The paper is organized as follows: In Sec. II we discuss the main idea behind
degenerated Liouvillians and symmetries in open quantum systems. Sec. III is
dedicated to the general formulation of the three different methods to obtain
the steady states. Particularly, Sec. III.1 deals with the block
diagonalization approach in which the open system symmetry operators are
known. In Sec. III.2 we discuss the Gram-Schmidt-based orthonormalization
procedure that allows us to obtain all the steady states and Sec. III.3 is
dedicated to the large deviation theory based method which helps obtain the
non-degenerate states carrying minimum or maximum current. In Sec. IV we apply
our different methods to a _para_ -Benzene ring, discuss analytically solvable
cases, and study the eigen-spacing statistics of the NESS. Finally in Sec. V
we conclude and provide a future outlook.
## II Degenerated Liouvillians
In this section we present the basics of degenerated Liouvillians and set up
the notation that will be used in the paper. The main objects of this study are
mixed quantum states described by density matrices. If the Hilbert space of
the pure states of our system is ${\cal H}$, a mixed state is determined by a
matrix $\rho\in O({\cal H})$, with $O({\cal H})$ being the space of bounded
operators, that fulfils two properties:
$\displaystyle\text{Normalization:}\;\textrm{Tr}(\rho)=1$
$\displaystyle\text{Positivity:}\quad\rho\geq 0\quad\text{i.e.,}\quad\forall|{\psi}\rangle\in{\cal
H}\quad\langle{\psi}|\rho|{\psi}\rangle\geq 0.$ (1)
Any matrix fulfilling these two properties is considered a density matrix.
Another important concept we will use is orthogonality of density matrices.
Two density matrices $\rho_{i}$ and $\rho_{j}$ are considered orthogonal if
$\textrm{Tr}[\rho_{i}\rho_{j}]=0$.
In this work, we consider the dynamics of the system to be governed by the
GKSL equation (see Ref. [18] for an introduction),
$\displaystyle\frac{d\rho(t)}{dt}$ $\displaystyle=$
$\displaystyle-i\left[H,\rho(t)\right]+\sum_{i}\left(L_{i}\rho(t)L_{i}^{\dagger}-\frac{1}{2}\left\\{\rho(t),L_{i}^{\dagger}L_{i}\right\\}\right),$
(2) $\displaystyle\equiv$ $\displaystyle{\cal L}[\rho(t)],$
where $H$ is the Hamiltonian of our system of interest and $L_{i}$ are
bounded operators called “jump operators”. Throughout this work we will
set $\hbar=k_{B}=1$. The super-operator ${\cal L}$ is usually named the
Liouville operator of the system dynamics, or just the Liouvillian. If the
space of pure states ${\cal H}$ has dimension $N$, the operator space
$O({\cal H})$ has dimension $N^{2}$. As the Lindblad equation represents a map
of operators, the Liouvillian ${\cal L}$ may be represented by a matrix of
dimension $N^{2}\times N^{2}$.
For bounded systems, Evans’ theorem states that this equation has at least one
fixed point [19], meaning that there is at least one density matrix $\rho$
s.t.
$\textrm{Re}\\{{\cal L}[\rho]\\}=0.$ (3)
In most cases, there is at least one state s.t. ${\cal L}[\rho^{{\rm SS}}]=0$.
These are called steady-states and they do not evolve with time as
$d\rho^{{\rm SS}}/dt={\cal L}[\rho^{{\rm SS}}]=0$. Evans’ theorem, as stated
above, also includes the possibility of having pairs of states with zero real
part but non-zero imaginary part [27, 21]. These states are called stationary
coherences and they oscillate indefinitely.
The Liouvillian is a super-operator and hence to obtain its spectrum we need
to map it to a matrix. The mathematical tool to do so is called the Fock-
Liouville space (FLS). In the FLS, the density matrices are written as column
vectors using an arbitrary map for their elements. All maps produce equivalent
results, and hence any choice of map is equally valid. Once the density
matrix is mapped to a column vector the Liouvillian super-operator can be
written as a $N^{2}\times N^{2}$ general non-Hermitian matrix. It has both
right and left eigenvectors and steady states (fixed points) correspond to the
right eigenvectors with zero real eigenvalue.
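As an illustration, the FLS matrix of the Liouvillian in Eq. (2) can be assembled with Kronecker products. This is a minimal sketch in Python; the function name and the column-stacking convention $\mathrm{vec}(AXB)=(B^{T}\otimes A)\,\mathrm{vec}(X)$ are our choices, not taken from the text:

```python
import numpy as np

def liouvillian_matrix(H, jump_ops):
    """N^2 x N^2 matrix of the GKSL generator in the Fock-Liouville space.

    With column-stacking vectorization, vec(A X B) = (B^T kron A) vec(X):
      -i [H, rho]            -> -i (I kron H - H^T kron I)
      L rho L^dag            -> conj(L) kron L
      -{L^dag L, rho} / 2    -> -(I kron L^dag L + (L^dag L)^T kron I) / 2
    """
    N = H.shape[0]
    I = np.eye(N)
    M = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for L in jump_ops:
        LdL = L.conj().T @ L
        M += (np.kron(L.conj(), L)
              - 0.5 * np.kron(I, LdL)
              - 0.5 * np.kron(LdL.T, I))
    return M
```

The steady states are then the right eigenvectors of this matrix with (numerically) zero eigenvalue, e.g. obtained from `np.linalg.eig`.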
Evans’ theorem also gives the condition for obtaining a unique steady state
[19]. This happens iff the set of operators $\\{H,L_{i}\\}$ can generate the
entire algebra of the space of bounded operators under multiplication and
addition. In general, this condition is hard to prove for most systems (see
Ref. [28] for an example). However, when it is not fulfilled, there is more than one
steady state. This degeneracy in the Liouvillian may be related to the
presence of symmetries as we discuss in the next section.
Let’s suppose that we have a degenerated Liouvillian with $M$ zero eigenvalues
(we suppose there are no oscillating coherences). Each zero eigenvalue has an
associated right-eigenvector that can be obtained by diagonalizing the
Liouvillian expressed in the FLS. One could naively think that each of these
right eigenvectors corresponds to a steady-state density matrix, but this is
true only in very simple cases. In general, any linear combination of the
steady-state density matrices is a right eigenvector of the Liouvillian with
zero eigenvalue, but it is not necessarily a density matrix in the sense that
it may not be positive. Furthermore, it is also possible that the obtained
right eigenvectors do not form an orthogonal set 111Note that duality of bases
ensures the left and right eigenvectors form a biorthonormal set. This does
not ensure that the right eigenvectors are orthogonal amongst themselves.,
meaning that they do not belong to different invariant subspaces. Bearing
these issues in mind, in the next section we propose various approaches to
obtain the steady state density matrices which are independent of initial
conditions in each subspace of the Liouvillian.
## III Methods to obtain steady states
We present three methods to calculate the steady-states of degenerated
Liouvillians: the symmetry-decomposition, the orthonormalisation, and the large
deviation method. Each method has its own advantages. The symmetry-based one
can be applied analytically in many cases and it is numerically cheap, but it
requires full knowledge of the system’s symmetries. The orthonormalisation can
be applied with no previous knowledge about open system symmetries, but its
computational cost increases with the degree of degeneracy. Finally, the large
deviation method does not require previous knowledge about open system
symmetries and is computationally cheap, but it only gives the non-
degenerated maximum and minimum current carrying states.
### III.1 Diagonalisation by symmetry-decomposition
In this sub-section, we explain the relation between open system symmetry
operators and multiple steady-states. We then use the knowledge of the
symmetry operators and outline a procedure to obtain the steady states, some
of which could have zero trace (non-physical density matrices).
To simplify our discussion we focus on _strong_ open system symmetries in
which there exists a unitary operator $\pi$ s.t. [20, 21]
$[\pi,H]=[\pi,L_{i}]=0\quad\forall i.$ (4)
This implies that the generators of the dissipative system dynamics
$\\{H,L_{i}\\}$ and the symmetry operator $\pi$ can be diagonalised with a
common basis. Let us denote the eigenvalues of $\pi$ as
$v_{i}=e^{i\theta_{i}}$, with $i\in[1,n]$ and $n$ being the number of distinct
eigenvalues. Each eigenvalue can be degenerated and hence we introduce the
index $d_{i}$ that represents the dimension of the subspace corresponding to
eigenvalue $v_{i}$. The corresponding eigenvectors of the symmetry operator
$\pi$ are $|{v_{i}^{\alpha}}\rangle$, with $i\in[1,n]$ and $1\leq\alpha\leq
d_{i}$.
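The strong-symmetry condition of Eq. (4) is straightforward to verify numerically. The following sketch (the helper name is ours) simply tests the commutators:

```python
import numpy as np

def is_strong_symmetry(pi, H, jump_ops, tol=1e-10):
    """Check Eq. (4): pi must commute with H and with every jump operator."""
    comm = lambda A, B: A @ B - B @ A
    if not np.allclose(comm(pi, H), 0, atol=tol):
        return False
    return all(np.allclose(comm(pi, L), 0, atol=tol) for L in jump_ops)
```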
We define a super-operator $\Pi$ acting on the subspace of the bounded
operators of ${\cal H}$ as
$\Pi\left[x\right]\equiv\pi\cdot x\cdot\pi^{\dagger}.$ (5)
The spectrum of $\Pi$ is derived from that of $\pi$ as
$\Pi\left[|{v_{i}^{\alpha}}\rangle\\!\langle{v_{j}^{\beta}}|\right]=e^{i\left(\theta_{i}-\theta_{j}\right)}|{v_{i}^{\alpha}}\rangle\\!\langle{v_{j}^{\beta}}|.$
(6)
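In the Fock-Liouville representation the super-operator $\Pi$ also becomes an $N^{2}\times N^{2}$ matrix. A sketch under the column-stacking convention (the name is ours):

```python
import numpy as np

def symmetry_superoperator(pi):
    """FLS matrix of Pi[x] = pi x pi^dagger.

    With column-stacking vectorization, vec(pi x pi^dag) =
    (conj(pi) kron pi) vec(x), so its eigenvalues are the phase
    differences e^{i(theta_i - theta_j)} of Eq. (6).
    """
    return np.kron(pi.conj(), pi)
```

For example, for $\pi=\mathrm{diag}(1,i)$ the eigenvalues of $\Pi$ are $\{1,1,i,-i\}$.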
Thus, the Hilbert space ${\cal H}$ can be decomposed using the spectrum of
$\pi$,
${\cal H}=\bigoplus_{i=1}^{n}{\cal H}_{i},$ (7)
with ${\cal
H}_{i}=\text{span}\left\\{|{v_{i}^{\alpha}}\rangle,\alpha=1,...,d_{i}\right\\}$.
Similarly, the space of bounded operators ${\cal B}$ can be expanded in the
eigenspace of the super-operator $\Pi$ as
${\cal B}=\bigoplus_{i,j=1}^{n}{\cal B}_{i,j},$ (8)
with ${\cal
B}_{i,j}=\text{span}\left\\{|{v_{i}^{\alpha}}\rangle\\!\langle{v_{j}^{\beta}}|,\alpha=1,\cdots,d_{i};\,\beta=1,\cdots,d_{j}\right\\}$.
Using this decomposition, it is clear that these eigenspaces are invariant
under the effect of the Liouvillian ${\cal L}[{\cal B}_{i,j}]\subseteq{\cal
B}_{i,j}$. This implies that the Liouvillian can be block decomposed, using
the basis of $\Pi$, into $n^{2}$ invariant subspaces.
Normalized density matrices are only possible in the subspaces ${\cal
B}_{i,i}$, meaning that we have at least $n$ steady states. It is also
possible to find states having zero trace, belonging to the subspaces ${\cal
B}_{i,j}$ $(i\neq j)$ [22]. These states do not represent real density
matrices, but their linear combinations with the steady states can make a
physical difference. Note that we use the term “steady state” only for the
states with finite trace and corresponding to zero eigenvalue of the
Liouvillian. From the above description, it is also clear that steady states
corresponding to different subspaces are orthogonal to each other.
The knowledge of a strong symmetry operator $\pi$ gives us only a lower bound
of the number of steady states. It is always possible that some of the blocks
${\cal B}_{i,i}$ are further degenerated. This happens when there are $K>1$
strong symmetry operators, i.e., $\left\\{\pi^{(1)},\dots,\pi^{(K)}\right\\}$
each of them with $n^{(j)}$ ($j=1,\cdots,K$) different eigenvalues s.t. [30]
$[\pi^{(j)},H]=[\pi^{(j)},L_{i}]=[\pi^{(j)},\pi^{(l)}]=0\quad\forall(i,j,l).$
(9)
In this case we can perform the block-diagonalization of the Liouvillian using
the eigenbasis of $\pi^{(1)}$, obtaining
${\cal H}=\bigoplus_{i=1}^{n^{(1)}}{\cal H}_{i}.$ (10)
Then each block ${\cal H}_{i}$ can be further block diagonalised into a
maximum of $n^{(2)}$ blocks using the eigenbasis of $\pi^{(2)}$. This can be
repeated until all symmetry operators are used. Since applying each symmetry
operator does not always split the Liouvillian into exactly $n^{(i)}$ blocks,
it is impossible to predict the total number of steady
states. Thus, we can only bound the number of steady states $M$ as
$\text{max}\left[n^{(i)}\right]\leq M\leq\prod_{i=1}^{K}n^{(i)}$.
To summarise the above approach, we provide an algorithm to be applied
to a system having $K$ symmetry operators $\left\\{\pi^{(j)}\right\\}$
($j=1,\cdots,K$). Each symmetry operator $\pi^{(j)}$ has $n^{(j)}$
distinct eigenvalues with phases
$\left\\{\theta^{(j)}_{1},\theta^{(j)}_{2},\dots,\theta^{(j)}_{n^{(j)}}\right\\}$.
As the symmetry operators commute with each other we can define a common
eigenbasis of all of them. The eigenbasis can be defined by the eigenvectors
$\left\\{|{v_{\theta^{(1)}_{i_{1}},\theta^{(2)}_{i_{2}},\cdots,\theta^{(K)}_{i_{K}}}^{\alpha}}\rangle\right\\}$,
where $i_{j}\in[1,n^{(j)}]$, and $\alpha$ stands for the degeneracy of the
subspace determined by the eigenvalues
$\bm{\theta}_{\bm{i}}=\left\\{\theta^{(1)}_{i_{1}},\theta^{(2)}_{i_{2}},\cdots,\theta^{(K)}_{i_{K}}\right\\}$
where $\bm{i}=\\{i_{1},i_{2},\cdots,i_{K}\\}$ and each element $i_{j}$ of
$\bm{i}$ is associated with the same element $\theta^{(j)}$ of $\bm{\theta}$.
This means that each vector $|{v_{\bm{\theta}_{\bm{i}}}^{\alpha}}\rangle$ is
an eigenvector of each symmetry operator $\pi^{(j)}$, i.e.,
$\displaystyle\pi^{(j)}|{v_{\bm{\theta}_{\bm{i}}}^{\alpha}}\rangle=e^{i\theta_{i_{j}}^{(j)}}|{v_{\bm{\theta}_{\bm{i}}}^{\alpha}}\rangle.$
(11)
The eigenbasis of the corresponding super-operators $\Pi^{(j)}$ is naturally
given by the elements
$\left\\{|{v_{\bm{\theta}_{\bm{i}}}^{\alpha}}\rangle\langle{v_{\bm{\theta}_{\bm{i^{\prime}}}}^{\beta}}|\right\\}$.
The method to obtain the steady states of the degenerated Liouvillian, if we
know its symmetry operators, is then:
1. 1.
Find the common eigenbasis of all the symmetry operators
$\left\\{\pi^{(j)}\right\\}$.
2. 2.
Calculate the eigenvalues of the symmetry operators corresponding to the
elements of the basis, obtaining a classification of the form
$|{v_{\bm{\theta}_{\bm{i}}}^{\alpha}}\rangle$.
3. 3.
Order the elements of the basis by grouping all the vectors with the same
eigenvalues.
4. 4.
Change the Liouvillian to the new basis. A block-diagonal structure arises.
5. 5.
Diagonalise each block in the new basis. Any eigenvector with a zero
eigenvalue corresponds to a steady state. Note that the dimension of each
block is smaller than the dimension of the Liouvillian and, therefore, the
eigenvectors of the blocks do not represent density matrices by themselves.
6. 6.
Increase the dimension of the eigenvectors of each block by padding with
zeros to match the full dimension.
7. 7.
Change back to the original basis.
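For a single symmetry operator, steps 1-4 can be sketched as follows. This is an illustration with names of our choosing; for $K>1$ operators one would sort by the tuple of eigenvalue phases, and for a degenerated $\pi$ a unitary eigenbasis should be used:

```python
import numpy as np

def block_diagonalize(L_fls, pi):
    """Steps 1-4: rotate the FLS Liouvillian into the eigenbasis of
    Pi[x] = pi x pi^dag, grouping basis elements by Pi eigenvalue phase."""
    N = pi.shape[0]
    theta, V = np.linalg.eig(pi)                       # step 1
    # steps 2-3: |v_i><v_j| carries Pi eigenvalue theta_i * conj(theta_j);
    # column-stacked FLS index k corresponds to i = k % N, j = k // N
    phases = np.array([theta[k % N] * theta[k // N].conj()
                       for k in range(N * N)])
    order = np.argsort(np.round(np.angle(phases), 8))
    # step 4: basis change vec(V x V^dag), i.e. W = conj(V) kron V
    W = np.kron(V.conj(), V)[:, order]
    return np.linalg.inv(W) @ L_fls @ W, phases[order]
```

Steps 5-7 then amount to diagonalising each block separately and zero-padding its null eigenvectors back to dimension $N^{2}$.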
### III.2 Diagonalisation by orthonormalization
In the last sub-section we dealt with the ideal scenario in which all the
strong symmetry operators were known. In complex many-body open quantum
systems knowing all the strong symmetry operators is highly non-trivial and
the problem can become even more complicated if _weak_ symmetry [20] is
degenerating the Liouvillian. In this case, our starting point could be a set
of $M$ linearly independent right eigenvectors of the Liouvillian which
correspond to zero eigenvalue. One could naively expect that these operators
are indeed the density matrices corresponding to the fixed points of the
Liouvillian, but this is not the general case. In most cases, the
diagonalization algorithm will give us a set of operators that are neither
positive nor orthogonal to each other. Thus, in this sub-section we explain
our second method to reconstruct the density matrices from such a set. This
method was first presented in Ref. [22] and it does not require any pre-
requisite knowledge of the strong or weak symmetry operators.
Having this objective in mind the question we ask is: If we have a set of $M$
zero eigenvalue eigenvectors of ${\cal L}$ that are linearly independent
$\left\\{\tilde{\rho}_{i}\right\\}$, how can we reconstruct $M$ positive
density matrices $\left\\{\rho_{i}\right\\}$ with the following properties:
$\displaystyle{\cal L}[\rho_{i}]$ $\displaystyle=$ $\displaystyle
0\quad\forall i,$ (12) $\displaystyle\text{Tr}[\rho_{i}\rho_{j}]$
$\displaystyle=$ $\displaystyle 0\quad\forall i\neq j.$ (13)
We will address this problem by a two-step approach. First, we construct a set
of orthogonal matrices by applying an orthogonalisation process based on the
Gram-Schmidt algorithm. To begin, we form a set of Hermitian matrices
$\left\\{\rho^{H}_{i}\right\\}$ from the
original set,
$\rho^{H}_{i}=\tilde{\rho}_{i}+\tilde{\rho}_{i}^{\dagger}.$ (14)
Then we use these Hermitian matrices $\left\\{\rho^{H}_{i}\right\\}$ to
construct a set of orthogonal Hermitian matrices by applying
$\displaystyle\rho_{1}^{O}=\rho_{1}^{H},$
$\displaystyle\rho_{2}^{O}=\rho_{2}^{H}-\frac{\textrm{Tr}[\rho_{1}^{O}\,\rho_{2}^{H}]}{\textrm{Tr}[\rho_{1}^{O}\,\rho_{1}^{O}]}\rho_{1}^{O},$
$\displaystyle\rho_{3}^{O}=\rho_{3}^{H}-\frac{\textrm{Tr}[\rho_{1}^{O}\,\rho_{3}^{H}]}{\textrm{Tr}[\rho_{1}^{O}\,\rho_{1}^{O}]}\rho_{1}^{O}-\frac{\textrm{Tr}[\rho_{2}^{O}\,\rho_{3}^{H}]}{\textrm{Tr}[\rho_{2}^{O}\,\rho_{2}^{O}]}\rho_{2}^{O},$
$\displaystyle\vdots$
$\displaystyle\rho_{M}^{O}=\rho_{M}^{H}-\sum_{j=1}^{M-1}\frac{\textrm{Tr}[\rho_{j}^{O}\,\rho_{M}^{H}]}{\textrm{Tr}[\rho_{j}^{O}\,\rho_{j}^{O}]}\rho_{j}^{O}.$
(15)
This orthogonalisation process preserves Hermiticity, and it trivially follows
that the set $\{\rho^{O}_{i}\}$ fulfils the orthogonality relation
$\textrm{Tr}[\rho_{i}^{O}\rho_{j}^{O}]=0\quad\text{if}\quad i\neq j.$ (16)
This is a set of eigenmatrices of the Liouvillian with zero eigenvalue in
which every matrix is Hermitian and orthogonal to each other. The only
remaining issue is that these matrices may not be semi-positive definite,
meaning that they may have negative eigenvalues. To address this issue, we
first define the positivity functional, $P$, of a set of $M$ Hermitian
operators, $\left\\{A_{i}\right\\}_{i=1}^{M}$, of dimension $N$ (same as the
dimension of density matrices) as
$P\left[\left\{A_{i}\right\}\right]=\sum_{i=1}^{M}\sum_{j=1}^{N}\left(v_{j}^{A_{i}}-\left|v_{j}^{A_{i}}\right|\right),$
(17)
with $v_{j}^{A_{i}}$ being the $j$th eigenvalue of operator $A_{i}$. It is
clear that this measure is equal to zero iff all the matrices of the set
$\{A_{i}\}_{i=1}^{M}$ are semi-positive definite. As the matrices
$\{\rho^{O}_{i}\}_{i=1}^{M}$ are orthogonal and linear combinations of
positive matrices, we may find a unitary operator, $U$, that transforms this
set into a set of positive orthogonal zero-eigenvalue matrices
$\{\rho_{i}^{P}\}$. To do so, we first write the original set as
a column vector
a column vector
$|\rho^{O}\rangle\rangle\equiv\left(\begin{array}{c}\rho_{1}^{O}\\ \rho_{2}^{O}\\ \vdots\\ \rho_{M}^{O}\end{array}\right).$ (18)
As we want to preserve orthogonality, we need to apply a unitary operator to
the vector $|\rho^{O}\rangle\rangle$. This transformation can be described by
a set of $(M^{2}-M)/2$ Euler angles,
$\bm{\chi}=\left\\{\chi_{1},\,\chi_{2},\dots,\,\chi_{\frac{M^{2}-M}{2}}\right\\}$.
For a specific choice of the Euler angles we can define the new vector of
matrices $|\rho(\bm{\chi})\rangle\rangle=U(\bm{\chi})|\rho^{O}\rangle\rangle$,
corresponding to the set of matrices $\left\\{\rho_{i}(\bm{\chi})\right\\}$.
In order to find the choice of angles that performs the desired
transformation, we need to maximise the functional
$F\left[\left\{\rho_{i}(\bm{\chi})\right\}\right]=\sum_{i=1}^{M}\sum_{j=1}^{N}\left(v_{j}^{\rho_{i}(\bm{\chi})}-\left|v_{j}^{\rho_{i}(\bm{\chi})}\right|\right),$
(19)
with respect to the Euler angles. Thus, we obtain a set of orthogonal,
positive semi-definite right-eigenvector matrices with zero eigenvalue,
$\{\rho_{i}^{P}\}$. These matrices need not be normalised; normalisation
is easily achieved by transforming
$\rho_{i}=\rho_{i}^{P}/\textrm{Tr}[\rho_{i}^{P}]$ for all matrices with
$\textrm{Tr}[\rho_{i}^{P}]\neq 0$.
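For $M=2$ the rotation step reduces to a single Euler angle, so the maximisation of Eq. (19) can be done with a one-dimensional optimiser. The sketch below is our own toy illustration (not from the original work), using scipy: two positive orthogonal matrices are hidden behind a rotation of 0.7 rad, and the optimiser recovers the angle that restores positivity.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def F(mats):
    """Functional of Eq. (19): non-positive, and zero iff all matrices are PSD."""
    total = 0.0
    for m in mats:
        v = np.linalg.eigvalsh(m)
        total += float((v - np.abs(v)).sum())
    return total

def rotate(chi, a, b):
    """U(chi) acting on a pair of matrices: the M = 2 case with one Euler angle."""
    c, s = np.cos(chi), np.sin(chi)
    return c * a - s * b, s * a + c * b

# two positive orthogonal matrices, spoiled by a hidden rotation of 0.7 rad
P1, P2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
A, B = rotate(0.7, P1, P2)            # still orthogonal Hermitian, no longer positive

res = minimize_scalar(lambda chi: -F(rotate(chi, A, B)),
                      bounds=(-np.pi, np.pi), method='bounded')
rho1, rho2 = rotate(res.x, A, B)
print(res.x)  # recovered angle ≈ -0.7, undoing the hidden rotation; F ≈ 0 there
```

For larger $M$ the same idea applies with $(M^{2}-M)/2$ angles and a multivariate optimiser, which is where the computational cost grows.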
The above described method can be summarised as follows:

1. Obtain a set of Hermitian matrices $\{\rho^{H}_{i}\}$ by applying Eq. (14).

2. Construct a set of orthogonal matrices, $\{\rho^{O}_{i}\}$, by applying the
Gram-Schmidt method with the trace inner product.

3. Find the rotation angles,
$\bm{\chi}=\{\chi_{1},\,\chi_{2},\dots,\,\chi_{\frac{M^{2}-M}{2}}\}$,
by maximising the functional, Eq. (19).

4. Apply the rotation $U(\bm{\chi})$ to obtain the orthogonal, semi-positive
definite Hermitian set of matrices $\{\rho_{i}^{P}\}$.

5. Renormalise via $\rho_{i}=\rho_{i}^{P}/\textrm{Tr}[\rho_{i}^{P}]$ for all
matrices with $\textrm{Tr}[\rho_{i}^{P}]\neq 0$.
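Steps 1 and 2 of this recipe, together with the positivity functional of Eq. (17), can be sketched in a few lines of numpy. The two matrices below are toy stand-ins for the raw zero-eigenvalue eigenvectors a diagonaliser might return; they are our own illustrative choice.

```python
import numpy as np

def hermitize(rho_list):
    """Step 1: rho^H_i = rho_i + rho_i^dagger (Eq. 14)."""
    return [r + r.conj().T for r in rho_list]

def hs_inner(a, b):
    """Trace (Hilbert-Schmidt) inner product Tr[a b] for Hermitian a, b."""
    return np.trace(a @ b).real

def gram_schmidt(h_list):
    """Step 2: Gram-Schmidt orthogonalisation w.r.t. the trace inner product (Eq. 15)."""
    ortho = []
    for h in h_list:
        o = h.astype(complex).copy()
        for prev in ortho:
            o -= (hs_inner(prev, h) / hs_inner(prev, prev)) * prev
        ortho.append(o)
    return ortho

def positivity(mats):
    """Positivity functional P of Eq. (17): zero iff every matrix is PSD."""
    total = 0.0
    for m in mats:
        v = np.linalg.eigvalsh(m)
        total += float((v - np.abs(v)).sum())
    return total

# two linearly independent, non-orthogonal Hermitian toy matrices
rho1 = np.diag([0.7, 0.3]).astype(complex)
rho2 = np.array([[0.5, 0.2], [0.2, 0.5]], dtype=complex)
o1, o2 = gram_schmidt(hermitize([rho1, rho2]))
print(abs(hs_inner(o1, o2)) < 1e-12, positivity([o2]) < 0)  # True True
```

The output illustrates the point made in the text: the pair is now orthogonal, but `o2` has acquired a negative eigenvalue, which is exactly what the Euler-angle rotation of steps 3-4 is there to repair.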
### III.3 Diagonalisation by large deviations
In this sub-section we describe a method to obtain some of the steady states
by a single diagonalization of the Liouvillian, making it much simpler than
the previous methods. On the other hand, it can be applied only in some cases
and it allows us to obtain only some of the states. The method is based on the
study of the thermodynamic currents and it was first presented in Ref. [31]
(see Ref. [21] for a more detailed discussion). Here we focus only on the
description of this approach and its applicability.
We consider a system connected to several incoherent channels that allow the
exchange of quanta between the system and an environment. This allows us to
divide the super-operator ${\cal L}$ from Eq. (2) into three parts
${\cal L}={\cal L}_{-1}+{\cal L}_{0}+{\cal L}_{+1},$ (20)
where the subscripts indicate the number of excitations introduced/removed
from the system by the environment. Of course, there could be more exotic
environments that exchange more than one excitation but for the sake of
simplicity we will not consider this possibility. Next, we define the system
density matrix conditioned on a fixed number of excitations $Q$ as
$\rho_{Q}(t)\equiv\textrm{Tr}_{Q}[\rho(t)]$, where $\textrm{Tr}_{Q}$ is the
partial trace over the manifold containing $Q$ excitations. Thus, the evolution of
$\rho_{Q}(t)$ is governed by
$\frac{d\rho_{Q}(t)}{dt}={\cal L}_{-1}[\rho_{Q+1}(t)]+{\cal
L}_{0}[\rho_{Q}(t)]+{\cal L}_{+1}[\rho_{Q-1}(t)].$ (21)
This gives a hierarchy of equations that can be unravelled using the Laplace
transform
$\rho_{\lambda}(t)=\sum_{Q=-\infty}^{\infty}\rho_{Q}(t)e^{-\lambda Q},$ (22)
which when applied to Eq. (21) gives a set of independent equations
$\frac{d\rho_{\lambda}(t)}{dt}=e^{\lambda}{\cal L}_{-1}[\rho_{\lambda}(t)]+{\cal L}_{0}[\rho_{\lambda}(t)]+e^{-\lambda}{\cal L}_{+1}[\rho_{\lambda}(t)]\equiv{\cal L}_{\lambda}[\rho_{\lambda}(t)],$ (23)
where $\lambda$ is known as the counting field. For the Lindblad equation that
takes the form of Eq. (2), we have the correspondence
$\displaystyle{\cal L}_{-1}[\rho(t)]=L_{i}\rho(t)L_{i}^{\dagger},\qquad{\cal L}_{+1}[\rho(t)]=L_{j}\rho(t)L_{j}^{\dagger},$
$\displaystyle{\cal L}_{0}[\rho(t)]=-i\left[H,\rho(t)\right]+\sum_{k\neq i,j}L_{k}\rho(t)L_{k}^{\dagger}-\frac{1}{2}\sum_{k}\left\{L_{k}^{\dagger}L_{k},\rho(t)\right\},$ (24)
where the indices $i$ and $j$ stand for the incoherent channels that extract/inject
excitations in the system. The probability of finding the system in a state
with $Q$ excitations is $P_{Q}(t)=\textrm{Tr}[\rho_{Q}(t)]$, and
$Z_{\lambda}(t)\equiv\textrm{Tr}[\rho_{\lambda}(t)]=\sum_{Q=-\infty}^{\infty}P_{Q}(t)e^{-\lambda Q},$ (25)
is known as the generating function of the current probability distribution.
This generating function follows a large deviation principle and for long
times it scales as
$Z_{\lambda}(t)\sim e^{t\mu(\lambda)},$ (26)
where $\mu(\lambda)$ is called the current Large Deviation Function (LDF). It
can be calculated as the highest eigenvalue of the tilted super-operator
${\cal L}_{\lambda}$. As $Z_{\lambda}(t)$ is the moment generating function of
the current, the LDF $\mu(\lambda)$ corresponds to the cumulant generating
function of the current distribution. Therefore, the average current can be
calculated as
$\langle\dot{Q}\rangle=-\lim_{t\to\infty}\frac{1}{t}\left.\frac{\partial\ln Z_{\lambda}(t)}{\partial\lambda}\right|_{\lambda=0}=-\left.\frac{\partial\mu(\lambda)}{\partial\lambda}\right|_{\lambda=0}.$ (27)
If $\left|\lambda\right|\ll 1$ we can expand the LDF, using $\mu(0)=0$, as
$\left.\mu(\lambda)\right|_{\lambda\to 0}\sim\mu(0)+\lambda\left.\frac{\partial\mu(\lambda)}{\partial\lambda}\right|_{\lambda=0}=-\lambda\langle\dot{Q}\rangle.$ (28)
Therefore, if the Liouvillian is degenerated and the different steady states
have different average currents, the LDF $\mu(\lambda)$ will have a
non-analytic behaviour around $\lambda=0$ of the form
$\mu(\lambda)=\begin{cases}+\left|\lambda\right|\langle\dot{Q}\rangle_{\text{max}}&\text{for }\lambda\to 0^{-},\\ -\left|\lambda\right|\langle\dot{Q}\rangle_{\text{min}}&\text{for }\lambda\to 0^{+}.\end{cases}$ (29)
This allows us to calculate the steady states corresponding to the maximum and
minimum currents, as long as they are not degenerate. The method may be
summarised as follows:
1. Calculate the highest eigenvalue $\mu(\lambda)$ (and its corresponding
eigenvector $\rho_{\lambda}$) of the modified Liouvillian ${\cal L}_{\lambda}$.

2. Take the limits $\rho^{\prime}_{\text{min}}=\lim_{\lambda\to 0^{+}}\rho_{\lambda}$
and $\rho^{\prime}_{\text{max}}=\lim_{\lambda\to 0^{-}}\rho_{\lambda}$.

3. Renormalize, obtaining
$\rho_{\text{min}}=\rho^{\prime}_{\text{min}}/\textrm{Tr}[\rho^{\prime}_{\text{min}}]$
and
$\rho_{\text{max}}=\rho^{\prime}_{\text{max}}/\textrm{Tr}[\rho^{\prime}_{\text{max}}]$.
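As a minimal illustration (our own toy example, not from the original work), consider a two-level system with incoherent pumping at rate $\Gamma_{+}$ and decay at rate $\Gamma_{-}$. Building the tilted generator of Eq. (23) as a matrix and taking its largest real eigenvalue gives $\mu(\lambda)$, whose slope at $\lambda=0$ reproduces the known steady-state emission current $\Gamma_{+}\Gamma_{-}/(\Gamma_{+}+\Gamma_{-})$ (sign conventions for the counting field vary; here $e^{\lambda}$ weights emission events).

```python
import numpy as np

def sop(a, b):
    """Matrix of rho -> a @ rho @ b on row-major vec(rho): vec(a rho b) = (a kron b^T) vec(rho)."""
    return np.kron(a, b.T)

def tilted_liouvillian(lam, gp, gm):
    """Tilted generator, cf. Eq. (23), for a two-level system (H = 0) with
    pumping rate gp and decay rate gm; exp(lam) weights the extracting jump term."""
    sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_-: |g><e|
    sp = sm.conj().T                                  # sigma_+: |e><g|
    eye = np.eye(2)
    def dissipator(L, jump_weight=1.0):
        n = L.conj().T @ L
        return jump_weight * sop(L, L.conj().T) - 0.5 * (sop(n, eye) + sop(eye, n))
    return gp * dissipator(sp) + gm * dissipator(sm, jump_weight=np.exp(lam))

def mu(lam, gp, gm):
    """Current LDF: largest real part among the eigenvalues of the tilted generator."""
    return np.linalg.eigvals(tilted_liouvillian(lam, gp, gm)).real.max()

gp, gm = 0.3, 0.7
eps = 1e-5
current = (mu(eps, gp, gm) - mu(-eps, gp, gm)) / (2 * eps)   # d mu / d lambda at 0
print(current, gp * gm / (gp + gm))  # both ≈ 0.21
```

At $\lambda=0$ the tilted generator is the plain Liouvillian, so $\mu(0)=0$, and taking the numerical limits $\lambda\to 0^{\pm}$ of its top eigenvector implements steps 2-3 above.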
To summarise this section, we have introduced three different methods with
which we can obtain the steady states for an open quantum system with a
degenerated Liouvillian. The first method, described in Sec. III.1, is the most
general approach, but requires knowledge of symmetry operators, which are
usually difficult to obtain. The second approach (Sec. III.2) can be easily
implemented computationally and does not require any knowledge of the symmetry
operators. Although this seems most beneficial, as the degree of degeneracy
increases the computational cost grows substantially due to the optimisation
procedure needed to find the Euler angles. The final method (Sec. III.3) is the
easiest computationally, but is limited to a class of nonequilibrium systems
and can be used to obtain only a subset of the steady states.
## IV Example: _para_ -Benzene ring
Figure 1: Illustration of the para-Benzene-type system with 6 sites connected
to two incoherent baths (red and blue rectangles) at different temperatures
$T_{L}$ and $T_{R}$. The para-Benzene system exchanges energy with the left
$L$ and right $R$ baths due to the pumping rates $\Gamma^{+}$ and dumping
rates $\Gamma^{-}$. The tilde basis is the original site representation.
The methods presented can deal with a wide variety of scenarios, and in order
to illustrate them we use the example of a _para_ -Benzene ring connected to
two reservoirs, as illustrated in Fig. 1. We restrict ourselves to the single-excitation
picture and consider the Hilbert space to be spanned by the site basis
$\left\\{|{\tilde{i}}\rangle\right\\}_{i=1}^{6}$ plus a ground state
$|{\tilde{0}}\rangle$ to allow interactions with the reservoir. The system
Hamiltonian takes the form
$H=J\sum_{\tilde{n}=1}^{6}|{\tilde{n}}\rangle\!\langle{\widetilde{n+1}}|+{\rm H.c.},$ (30)
with $|\tilde{7}\rangle=|\tilde{1}\rangle$. The system is boundary driven by
two incoherent baths connected to sites $1$ and $4$. The baths exchange energy
and excitations with the system via the jump operators
$L_{1}=\sqrt{\Gamma_{L}^{+}}|{\tilde{1}}\rangle\!\langle{\tilde{0}}|,\quad L_{2}=\sqrt{\Gamma_{L}^{-}}|{\tilde{0}}\rangle\!\langle{\tilde{1}}|,\quad L_{3}=\sqrt{\Gamma_{R}^{+}}|{\tilde{4}}\rangle\!\langle{\tilde{0}}|,\quad L_{4}=\sqrt{\Gamma_{R}^{-}}|{\tilde{0}}\rangle\!\langle{\tilde{4}}|,$ (31)
where $\Gamma_{\rm x}^{+(-)}\geq 0$ are the pumping (dumping) rates for the
${\rm x}$th bath (${\rm x}=L$ or $R$). All properties of the baths are encoded
in these rates and we will not consider any specific form herein. For this
simple ring structure, there is only one open system symmetry operator given
by,
$\pi=\sum_{i=0,1,4}|{\tilde{i}}\rangle\\!\langle{\tilde{i}}|+|{\tilde{2}}\rangle\\!\langle{\tilde{6}}|+|{\tilde{6}}\rangle\\!\langle{\tilde{2}}|+|{\tilde{3}}\rangle\\!\langle{\tilde{5}}|+|{\tilde{5}}\rangle\\!\langle{\tilde{3}}|.$
(32)
The unitary operator $\pi$ has two eigenvalues $+1$ and $-1$ and the
transformation matrix ${\rm T}$ to change basis from the site representation
to the eigenvectors $|i\rangle$ of $\pi$ reads,
$\displaystyle{\rm T}\,|\tilde{i}\rangle=|i\rangle,$
$\displaystyle{\rm T}=\sum_{i=0,1,4}|{\tilde{i}}\rangle\!\langle{\tilde{i}}|+\frac{1}{\sqrt{2}}\sum_{i=2,3}|{\tilde{i}}\rangle\!\langle{\tilde{i}}|-\frac{1}{\sqrt{2}}\sum_{i=5,6}|{\tilde{i}}\rangle\!\langle{\tilde{i}}|+\frac{1}{\sqrt{2}}\left(|\tilde{2}\rangle\langle\tilde{6}|+|\tilde{3}\rangle\langle\tilde{5}|+\mathrm{H.c.}\right).$ (33)
The ground $|0\rangle$ and symmetric states $|i\rangle$ ($i=1,\cdots,4$) have
eigenvalue $+1$ whereas the anti-symmetric states $|i\rangle$ ($i=5,6$)
correspond to eigenvalue $-1$. The transformation matrix does not affect the
ground ($\tilde{0}$) and _edge_ sites ($\tilde{1}$ and $\tilde{4}$) which are
connected to the baths but only transforms the _bulk_ sites ($\tilde{2}$,
$\tilde{3}$, $\tilde{5}$, and $\tilde{6}$).
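A quick numpy check (with placeholder rates of our own choosing) confirms that $\pi$ of Eq. (32) squares to the identity, commutes with the Hamiltonian of Eq. (30), and leaves each jump operator of Eq. (31) invariant:

```python
import numpy as np

# 7-dimensional Hilbert space: ground state |0~> plus the six sites |1~>..|6~>
dim, J = 7, 1.0
ket = lambda i: np.eye(dim, dtype=complex)[:, [i]]

# ring Hamiltonian, Eq. (30), with periodic boundary |7~> = |1~>
H = np.zeros((dim, dim), dtype=complex)
for n in range(1, 7):
    H += J * ket(n) @ ket(1 if n == 6 else n + 1).conj().T
H = H + H.conj().T

# reflection operator pi, Eq. (32): fixes 0, 1, 4 and swaps 2<->6, 3<->5
pi = sum(ket(i) @ ket(i).conj().T for i in (0, 1, 4))
pi += ket(2) @ ket(6).conj().T + ket(6) @ ket(2).conj().T
pi += ket(3) @ ket(5).conj().T + ket(5) @ ket(3).conj().T

# jump operators of Eq. (31); the rates are illustrative placeholders
rates = [(0.1, 1, 0), (0.2, 0, 1), (0.3, 4, 0), (0.4, 0, 4)]
jumps = [np.sqrt(g) * ket(a) @ ket(b).conj().T for g, a, b in rates]

print(np.allclose(pi @ pi, np.eye(dim)),               # True: pi^2 = 1
      np.allclose(pi @ H, H @ pi),                     # True: [pi, H] = 0
      all(np.allclose(pi @ L @ pi, L) for L in jumps)) # True: pi L pi^dagger = L
```

Since $\pi$ is Hermitian and unitary, $\pi^{\dagger}=\pi$, so the last check is exactly the invariance of the jump operators under the symmetry.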
The system Hamiltonian in the transformed basis takes the form
$H=\sqrt{2}J\left(|{1}\rangle\\!\langle{2}|+|{3}\rangle\\!\langle{4}|\right)+J\left(|{2}\rangle\\!\langle{3}|+|{5}\rangle\\!\langle{6}|\right)+\mathrm{h.c.},$
(34)
which is block diagonal since the ground and symmetric subspace
($|0\rangle,\cdots,|4\rangle$) does not interact with the anti-symmetric one
($|5\rangle$ and $|6\rangle$). Since the transformation does not affect the
ground state and the edge sites, no entanglement is generated in the
jump operators, and they retain the same form as Eq. (31) with
$|\tilde{i}\rangle\rightarrow|i\rangle$.
Given the block diagonal form of the system Hamiltonian and the jump operators
confined to the ground and symmetric subspace we can split the system space
into the subspace of the ground state (${\cal H}_{g}$ with 1 state), symmetric
states (${\cal H}_{s}$ with 4 states), and anti-symmetric states (${\cal
H}_{a}$ with 2 states). Thus, the system Hamiltonian can be decomposed into a
$3\times 3$ block form,
$H=\left(\begin{array}{ccc}0&0&0\\ 0&H_{ss}&0\\ 0&0&H_{aa}\end{array}\right).$ (35)
In this representation the sum of the jump operators takes the form
$\sum_{i=1}^{4}L_{i}=\left(\begin{array}{ccc}0&L_{-}&0\\ L_{+}&0&0\\ 0&0&0\end{array}\right),$ (39)
with $L_{+}=L_{1}+L_{3}$ representing the net pumping operator and
$L_{-}=L_{2}+L_{4}$ being the net dumping operator. The Lindblad equation (2)
then separates out for each sub block and the resultant equations read
$\displaystyle\frac{d\rho_{gg}(t)}{dt}=-\frac{1}{2}\{N_{+},\rho_{gg}(t)\}+L_{-}\rho_{ss}(t)L_{-}^{\dagger},$
$\displaystyle\frac{d\rho_{ss}(t)}{dt}=-i[H_{ss},\rho_{ss}(t)]-\frac{1}{2}\{N_{-},\rho_{ss}(t)\}+L_{+}\rho_{gg}(t)L_{+}^{\dagger},$ (40)
$\displaystyle\frac{d\rho_{gs}(t)}{dt}=i\rho_{gs}(t)H_{ss}-\frac{1}{2}\rho_{gs}(t)N_{-}-\frac{1}{2}N_{+}\rho_{gs}(t),$
$\displaystyle\frac{d\rho_{ga}(t)}{dt}=i\rho_{ga}(t)H_{aa}-\frac{1}{2}N_{+}\rho_{ga}(t),$
$\displaystyle\frac{d\rho_{sa}(t)}{dt}=-i\left(H_{ss}\rho_{sa}(t)-\rho_{sa}(t)H_{aa}\right)-\frac{1}{2}N_{-}\rho_{sa}(t),$ (41)
$\displaystyle\frac{d\rho_{aa}(t)}{dt}=-i[H_{aa},\rho_{aa}(t)],$ (42)
with $N_{+}=L_{+}^{\dagger}L_{+}$ and $N_{-}=L_{-}^{\dagger}L_{-}$ being
positive operators and $\rho_{{\rm x},{\rm y}}(t)=\rho_{{\rm y},{\rm x}}^{\dagger}(t)$
(${\rm x},{\rm y}\in\{g,s,a\}$). For the cross-subspaces, i.e.,
$\rho_{{\rm x},{\rm y}}(t)$ with ${\rm x}\neq{\rm y}$, the reduced density
matrix decays exponentially, as can be seen from Eq. (41).
Thus in the steady state only the diagonal components of the reduced
density matrix survive and we now focus on the anti-symmetric subspace whose
evolution is described by Eq. (42). Clearly, this describes coherent evolution
and thus the anti-symmetric subspace is a decoherence free subspace. The
eigenvectors of $H_{aa}$ ($2\times 2$ matrix) can be easily obtained and are
given by,
$|{\psi_{1}}\rangle=\frac{1}{\sqrt{2}}\left(|{5}\rangle+|{6}\rangle\right),\qquad|{\psi_{2}}\rangle=\frac{1}{\sqrt{2}}\left(|{5}\rangle-|{6}\rangle\right).$ (43)
If we initialise our system in either of these states, its density matrix will
not evolve in time and hence, from the perspective of the general Lindblad
equation, both of these pure states are steady states. In other words, the dark states
$\rho^{{\rm DS}}_{1}=|{\psi_{1}}\rangle\!\langle{\psi_{1}}|\quad{\rm and}\quad\rho^{{\rm DS}}_{2}=|{\psi_{2}}\rangle\!\langle{\psi_{2}}|$ (44)
are zero-current-carrying steady states. The cross combinations of these
states display oscillating behaviour and are known as oscillating coherences
[27]; these traceless states take the form $\rho^{{\rm OC}}(t)=e^{-i2Jt}|{\psi_{1}}\rangle\!\langle{\psi_{2}}|+e^{i2Jt}|{\psi_{2}}\rangle\!\langle{\psi_{1}}|$.
The frequency of the oscillations is given by the difference of the
eigenvalues of $H_{aa}$. Thus, the existence of a decoherence free subspace
always gives us $L$ steady states, where $L$ is the dimension of the
decoherence free subspace, together with pairs of Liouvillian eigenvalues with
zero real part but finite imaginary part, known as oscillating coherences.
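One can confirm numerically that both dark states of Eq. (44) are annihilated by the full Liouvillian. The sketch below builds the Lindbladian in the site basis with illustrative rates of our own choosing; $|\psi_{1}\rangle$ is written in the site basis via the transformation of Eq. (33).

```python
import numpy as np

dim, J = 7, 1.0
ket = lambda i: np.eye(dim, dtype=complex)[:, [i]]

# ring Hamiltonian of Eq. (30)
H = np.zeros((dim, dim), dtype=complex)
for n in range(1, 7):
    H += J * ket(n) @ ket(1 if n == 6 else n + 1).conj().T
H = H + H.conj().T

# jump operators of Eq. (31) with arbitrary illustrative rates
rates = [(0.1, 1, 0), (0.2, 0, 1), (0.3, 4, 0), (0.4, 0, 4)]
jumps = [np.sqrt(g) * ket(a) @ ket(b).conj().T for g, a, b in rates]

def liouvillian(H, jumps):
    """Lindbladian on row-major vec(rho): vec(A rho B) = (A kron B^T) vec(rho)."""
    eye = np.eye(dim)
    Lmat = -1j * (np.kron(H, eye) - np.kron(eye, H.T))
    for Lk in jumps:
        n = Lk.conj().T @ Lk
        Lmat += np.kron(Lk, Lk.conj()) - 0.5 * (np.kron(n, eye) + np.kron(eye, n.T))
    return Lmat

# |psi_1> = (|5> + |6>)/sqrt(2) of Eq. (43), expressed in the site basis via Eq. (33)
psi1 = (ket(2) + ket(3) - ket(5) - ket(6)) / 2
rho_ds = psi1 @ psi1.conj().T
print(np.abs(liouvillian(H, jumps) @ rho_ds.flatten()).max())  # ~0: L[rho_DS] = 0
```

The dark state has no support on the sites coupled to the baths and is an eigenvector of $H$, which is why every term of the Liouvillian annihilates it.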
The reduced density matrices for the ground and symmetric subspaces obey
coupled first-order differential equations [see Eq. (40)] which, in general,
cannot be solved analytically. This setup has _three_ steady states: one from
the ground and symmetric subspace and two from the anti-symmetric subspace
described above. In specific scenarios, wherein the effect of the bath can be
simplified, we can obtain analytic solutions as described below.
### Equilibrium
We can simplify our problem by considering that the pumping (dumping) rates of
both baths are the same, i.e., $\Gamma^{+}_{L}=\Gamma^{+}_{R}=\Gamma$ and
$\Gamma^{-}_{L}=\Gamma^{-}_{R}=\gamma$. In this case, the equilibrium steady
state is given by,
$\rho^{{\rm EQ}}_{3}=\frac{\gamma}{\gamma+4\Gamma}|{0}\rangle\langle{0}|+\frac{\Gamma}{\gamma+4\Gamma}\sum_{i=1}^{4}|{i}\rangle\langle{i}|.$ (45)
If the baths were ideal sinks, $\Gamma=0$ (zero-temperature baths), or pumping
and dumping at the same rate, $\gamma=\Gamma$ (infinite-temperature baths), we
obtain the physically intuitive results of either being localised in the
ground state or all states being equally populated. Note that in the general
equilibrium scenario we do not obtain the canonical Gibbs state because the
jump operators in our Lindblad equation resonantly couple the ground state
$\tilde{0}$ to either site $\tilde{1}$ or $\tilde{4}$. Such a resonant
coupling does not allow the dissipator to mix _all_ the energy levels, which
is a crucial requirement for obtaining a Gibbs state at equilibrium.
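As a sanity check (ours, not part of the original derivation), Eq. (45) can be verified numerically by building the full Lindbladian in the site basis; the equal rates $\Gamma$, $\gamma$ below are illustrative.

```python
import numpy as np

dim, J = 7, 1.0
Gam, gam = 0.1, 0.3          # equal pumping / dumping rates for both baths
ket = lambda i: np.eye(dim, dtype=complex)[:, [i]]

# ring Hamiltonian of Eq. (30)
H = np.zeros((dim, dim), dtype=complex)
for n in range(1, 7):
    H += J * ket(n) @ ket(1 if n == 6 else n + 1).conj().T
H = H + H.conj().T

# jump operators of Eq. (31) with Gamma_L^+ = Gamma_R^+ and Gamma_L^- = Gamma_R^-
jumps = [np.sqrt(Gam) * ket(1) @ ket(0).conj().T,
         np.sqrt(gam) * ket(0) @ ket(1).conj().T,
         np.sqrt(Gam) * ket(4) @ ket(0).conj().T,
         np.sqrt(gam) * ket(0) @ ket(4).conj().T]

def liouvillian(H, jumps):
    """Lindbladian on row-major vec(rho): vec(A rho B) = (A kron B^T) vec(rho)."""
    eye = np.eye(dim)
    Lmat = -1j * (np.kron(H, eye) - np.kron(eye, H.T))
    for Lk in jumps:
        n = Lk.conj().T @ Lk
        Lmat += np.kron(Lk, Lk.conj()) - 0.5 * (np.kron(n, eye) + np.kron(eye, n.T))
    return Lmat

# symmetric states |2>, |3> of Eq. (33), written in the site basis
s2 = (ket(2) + ket(6)) / np.sqrt(2)
s3 = (ket(3) + ket(5)) / np.sqrt(2)
rho_eq = (gam * ket(0) @ ket(0).conj().T
          + Gam * (ket(1) @ ket(1).conj().T + ket(4) @ ket(4).conj().T
                   + s2 @ s2.conj().T + s3 @ s3.conj().T)) / (gam + 4 * Gam)

residual = np.abs(liouvillian(H, jumps) @ rho_eq.flatten()).max()
print(residual)  # ~0 (machine precision): Eq. (45) is indeed a steady state
```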
Figure 2: Populations as a function of time $t$ for the case of pure pumping
$L^{-}_{i}=0$. The system exhibits a dynamical decoherence free subspace due
to which we obtain multiple steady states even in the absence of strong or
weak open system symmetries. The symmetric subspace is invariant, and only in
the limit $t\rightarrow\infty$ does it become decoherence free. The ground
state population is
$\langle{\tilde{0}}|\rho(t)|{\tilde{0}}\rangle$, edge state population is
$\rho_{\text{edge}}(t)=\sum_{i=1,4}\langle{\tilde{i}}|\rho(t)|{\tilde{i}}\rangle$,
and bulk state population is
$\rho_{\text{bulk}}(t)=\sum_{i=2,3,5,6}\langle{\tilde{i}}|\rho(t)|{\tilde{i}}\rangle$.
All individual sites in the bulk or edge have the same populations due to the
open system symmetries and the difference in the bulk and edge site
populations is due to the symmetries in $H_{ss}$. The pumping rate for both
baths $\Gamma^{+}_{\rm x}=\Gamma=0.1$ and the hopping $J=1$.
### Ideal source
In another extreme scenario, when the baths are an ideal source such that
$\Gamma^{+}_{L}=\Gamma^{+}_{R}=\Gamma$ and $\Gamma^{-}_{L}=\Gamma^{-}_{R}=0$,
the dynamical equations of the ground and symmetric subspace [Eq. (40)] simplify to
$\displaystyle\frac{d\rho_{gg}(t)}{dt}=-2\Gamma\rho_{gg}(t),$ (46)
$\displaystyle\frac{d\rho_{ss}(t)}{dt}=-i[H_{ss},\rho_{ss}(t)]+\Gamma\rho_{gg}(t)\sum_{i,j=1,4}|{i}\rangle\langle{j}|.$ (47)
The equation for $\rho_{gg}(t)$ can be solved analytically giving an
exponentially decaying solution $\rho_{gg}(t)=\exp[-2\Gamma t]\rho_{gg}(0)$
with $\rho_{gg}(0)$ being the initial condition. In the long-time limit
$\rho_{gg}=0$, which is expected since the baths only pump excitations from
the ground state to the ring. In this long-time limit, it is clear from Eq.
(47) that $\rho_{ss}(t)$ obeys an oscillatory coherent evolution. Thus, in
this ideal source limit, we obtain more than three steady states (six in
particular): the anti-symmetric subspace is not affected by this analysis and
hence gives the two steady states as explained above, whereas the ground and
symmetric subspace now give _four_ (dimension of $H_{ss}$) steady states using
the same arguments we provided for the coherent evolution in the anti-
symmetric subspace analysis. Note here that the emergence of these extra
steady states is not due to the open system symmetries but because there was a
dynamical restoration of Hamiltonian symmetries in the long-time limit. Thus,
in general the existence of multiple steady states need not be rooted in open
system symmetries (as usually believed), but could arise due to the peculiar
properties of the baths.
We illustrate this evolution for the real-space populations in Fig. 2. The
ground state (black solid line) population decays exponentially as expected
and the populations of the edge
($\rho_{\text{edge}}(t)=\sum_{i=1,4}\langle{\tilde{i}}|\rho(t)|{\tilde{i}}\rangle$,
red solid line) and bulk
($\rho_{\text{bulk}}(t)=\sum_{i=2,3,5,6}\langle{\tilde{i}}|\rho(t)|{\tilde{i}}\rangle$,
blue solid line) sites oscillate indefinitely. The oscillations of the edge
and bulk are out of phase and the difference in their amplitudes is due to the
symmetries in $H_{ss}$, which has different weights for the connections
between the edges and the bulk sites [see Eq. (34)].
### Ideal sink and source
Next we turn our attention to systems in nonequilibrium. The simplest case
which yields analytic results is when one of the baths is an ideal sink
$\Gamma_{L}^{+}=0$, $\Gamma_{L}^{-}=\gamma$ whereas the other is an ideal
source $\Gamma_{R}^{+}=\Gamma$, $\Gamma_{R}^{-}=0$. Unlike the ideal source
scenario, in which the ground state gets depleted, leading to a dynamical
restoration of Hamiltonian symmetries, in this case the ideal sink repopulates
the ground state, ensuring that a current-carrying NESS exists. The ground
and symmetric subspace has only one NESS, given by
$\rho^{{\rm NESS}}_{3}=\frac{1}{1+\frac{4\Gamma}{\gamma}+\frac{9\gamma\Gamma}{16J^{2}}}\left\{|{0}\rangle\langle{0}|+\frac{\Gamma}{\gamma}|{1}\rangle\langle{1}|+\left(\frac{\Gamma}{\gamma}+\frac{\gamma\Gamma}{8J^{2}}\right)|{2}\rangle\langle{2}|+\left(\frac{\Gamma}{\gamma}+\frac{\gamma\Gamma}{4J^{2}}\right)|{3}\rangle\langle{3}|+\left(\frac{\Gamma}{\gamma}+\frac{3\gamma\Gamma}{16J^{2}}\right)|{4}\rangle\langle{4}|\right.$
$\left.-i\frac{\Gamma}{2\sqrt{2}J}\left(|{1}\rangle\langle{2}|+\sqrt{2}\,|{1}\rangle\langle{4}|+\frac{1}{\sqrt{2}}|{2}\rangle\langle{3}|-i\frac{4J}{\gamma}|{2}\rangle\langle{4}|+|{3}\rangle\langle{4}|+{\rm H.c.}\right)\right\}.$ (48)
The steady-state excitonic current for the ideal sink and source scenario reads
$I_{L}=\textrm{Tr}[L_{1}^{\dagger}L_{1}\rho^{{\rm NESS}}_{3}]-\textrm{Tr}[L_{2}^{\dagger}L_{2}\rho_{3}^{{\rm NESS}}]=\frac{\Gamma}{1+\frac{4\Gamma}{\gamma}+\frac{9\gamma\Gamma}{16J^{2}}}.$ (49)
Figure 3: Populations as a function of time $t$ for the general nonequilibrium
scenario. Solid lines are for the case of low temperature with $T_{L}=0.25$
and $T_{R}=0.5$, whereas dashed lines are for the high temperature regime with
$T_{L}=1$ and $T_{R}=2$. The individual edge and bulk sites (defined as in the
caption of Fig. 2) have the same populations due to open system
symmetries. At low temperatures, the edge and bulk populations are distinct
exhibiting the same symmetry governed by $H_{ss}$ (same as Fig. 2). At high
temperatures, the Hamiltonian symmetry is broken and the bulk and edge site
populations become equal after a short transient. The hopping is chosen to be
$J=1$ and the rates obey local-detailed balance, $\Gamma_{{\rm
x}}^{+}=\Gamma\omega_{0}n(T_{{\rm x}},\omega_{0})/2$ and $\Gamma_{{\rm
x}}^{-}=\Gamma\omega_{0}[1+n(T_{{\rm x}},\omega_{0})]/2$ with ${\rm x}=L,R$,
$n(T,\omega_{0})=[\exp[\omega_{0}/T]-1]^{-1}$ being the Bose-Einstein
distribution, $\omega_{0}=1$ being the system-bath resonant frequency, and
$\Gamma=0.1$ the system-bath coupling strength.
### General case
In the general nonequilibrium case it is not possible to solve the
differential equations exactly; hence we solve them numerically and
display the dynamics in Fig. 3. The solid lines in Fig. 3 are for the low
temperature regime in which we find that the edge (red lines) and bulk (blue
lines) state populations are different. The difference in the populations can
be attributed to the symmetries of the symmetric subspace Hamiltonian $H_{ss}$
(recall a similar behaviour was observed in the ideal-source case). At low
temperatures, the bath should not affect the system dramatically and thus the
Hamiltonian symmetries should be respected. On the other hand, at high
temperatures [Fig. 3 dashed lines] the dissipative baths completely alter the
system dynamics and hence in this case we do not see any signatures of the
$H_{ss}$ symmetries being preserved. In fact, at high temperatures the edge
and bulk populations become equal after a short transient, indicating an equal
distribution of the excitations among the edge and bulk sites.
### Eigenspacing statistics of NESS
There are several scenarios in which knowing the NESS for a degenerated
Liouvillian could be useful. In this subsection we focus on the timely example
of studying the eigenspacing statistics of the NESS as first proposed in Ref.
[26]. Recently, there has been a surge of interest in understanding the universal
properties of dissipative open quantum systems, mostly restricted to the
spectrum of a non-degenerated Liouvillian [32, 33, 34]. The idea is to observe
universal features based on statistical correlations between the eigenvalues
of the Liouvillian or the NESS. For closed Hamiltonian systems, there is a
deep connection between the quantum chaos conjecture [35, 36] and the
statistical correlations of the eigenvalues which is described by random
matrix theory [37]. However, for open quantum systems very little is known in
this direction.
Unlike closed systems, for a complex many-body open quantum system evaluating
the entire spectrum of the Liouvillian can be computationally expensive, since
its corresponding matrix dimension scales as $N^{2}\times N^{2}$ (recall that
$N$ is the dimension of the system Hilbert space). For degenerated
Liouvillians, most studies are restricted up to $N\approx 250$. On the other
hand, since the NESS is the eigenvector corresponding to the zero eigenvalue
of the Liouvillian it can be obtained for much larger systems (up to $N\approx
1000$ provided the Liouvillian is sparse) using variants of the Lanczos
algorithm. This reduces the computational cost of obtaining the NESS, but the
reduction is accompanied by a square-root reduction in the number of
eigenvalues per realization, which needs to be compensated by additional
sampling. In other words, the computational advantage of studying the
eigenspacing statistics of the NESS lies in being able to explore large system
Hilbert spaces and understand the scaling with $N$.
Although it is computationally advantageous to study the eigenspacing
statistics of the NESS to uncover universal features, doing so is highly
nontrivial if the Liouvillian is degenerated and the open system symmetries
(weak or strong) are unknown. Our approach based on orthonormalization
(Sec. III.2) is ideally suited for this case. To illustrate this idea, we
consider the same para-Benzene ring as before, but with jump operators
[cf. Eq. (31)] extended to all ground-symmetric states [$|i\rangle$ with
$i=1,\cdots,4$] and drawn randomly from the Ginibre unitary ensemble [38]. To
simulate a nonequilibrium situation we choose only two jump operators
$L_{\mathrm{x}}/\sqrt{\Gamma_{\mathrm{x}}}$ with $\mathrm{x}=1,2$, whose
distribution is given by
$\displaystyle
P(L_{\mathrm{x}})=\frac{1}{(2\pi)^{N^{2}}}\exp\left[-\frac{\textrm{Tr}[L_{\mathrm{x}}^{*}L_{\mathrm{x}}]}{2}\right],$
(50)
with $N=5$ for the case described above. This allows us to ensure that the
randomization process is non-pathological [39] and covers the manifold of all
jump operators within the ground-symmetric subsector uniformly. Moreover, the
full Liouvillian still has a block diagonal structure between ground-symmetric
and anti-symmetric subspaces with three steady states. We use then our
orthonormalization procedure outlined in Sec. III.2 and evaluate the
distribution of the ratio of consecutive eigenspacing [40],
$0\leq r_{n}=\frac{\text{min}\{s_{n},s_{n-1}\}}{\text{max}\{s_{n},s_{n-1}\}}\leq 1,$ (51)
with $s_{n}=\nu_{n+1}-\nu_{n}$ being the eigenspacing of the NESS
($\rho^{\text{NESS}}|\varphi_{n}\rangle=\nu_{n}|\varphi_{n}\rangle$). Since
the ratio is independent of the local density of states, it avoids the
complications of unfolding the spectrum; the resulting distribution is shown
in Fig. 4. The distribution shows $P(r)\rightarrow 0$ as $r\rightarrow 0$,
indicating level repulsion and/or spectral rigidity, which means that the
NESS is a thermalizing, highly nonintegrable state. The average $\langle
r\rangle\sim 0.463$ lies between the exact predictions for a Poisson and a
Gaussian orthogonal ensemble (GOE) [41]. The inset of Fig. 4 shows the
distribution of the eigenvalues of the Liouvillian, which are available in
this case. It should be noted that a more sophisticated form of sampling could
be chosen to obtain a perfect lemon structure [32, 42], but this does not turn
out to be a strict requirement, as indicated by the eigenspacing distribution
of the NESS.
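The ratio statistics of Eq. (51) are easy to reproduce on surrogate spectra. The sketch below is our own illustration: GOE matrices stand in for a correlated ("thermal") NESS spectrum and independent uniform levels for an uncorrelated (Poisson) one; the sample means approach the known reference values.

```python
import numpy as np

rng = np.random.default_rng(0)

def r_ratios(levels):
    """Ratios of consecutive eigenspacings, Eq. (51)."""
    s = np.diff(np.sort(levels))
    return np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])

# surrogate spectra: GOE-like (level repulsion) vs. independent uniform levels
N, samples = 50, 200
r_goe, r_poi = [], []
for _ in range(samples):
    A = rng.standard_normal((N, N))
    r_goe.extend(r_ratios(np.linalg.eigvalsh((A + A.T) / np.sqrt(2))))
    r_poi.extend(r_ratios(rng.uniform(size=N)))

print(np.mean(r_goe), np.mean(r_poi))
# reference values: GOE <r> ≈ 0.536, Poisson <r> = 2 ln 2 - 1 ≈ 0.386
```

A NESS mean of $\langle r\rangle\sim 0.463$, as quoted in the text, sits between these two limits.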
Overall, in this section we studied the para-Benzene ring in detail. Although
we dealt with the symmetry-decomposition based approach (Sec. III.1)
throughout this section, we would like to end with a few remarks on the other
two methods. In all cases, we found that the orthonormalization based approach
(Sec. III.2) yielded the same results as the symmetry based one. The
orthonormalization based approach was also able to treat the ideal-source case
and obtain all six steady states. In complex many-body systems, wherein the
symmetry operators are either not known or wherein bath-induced mechanisms
lead to additional steady states, the orthonormalization based approach is
perfectly suited. The large deviation based approach, although computationally
cheap, would fail in the equilibrium and ideal-source situations since the
currents for all steady states are zero. This method would also not allow us
to obtain the two dark states [Eq. (44)] from the anti-symmetric subspace,
since they both carry zero current. Finally, we studied the eigenspacing
distribution of the NESS using the orthonormalization based approach, which
gave us the expected result that the NESS is a highly non-integrable state.
Footnote 2: The perfect lemon is achieved when the coherent contribution
$-i[H,\rho]$ to the Liouvillian vanishes, which is not the case here.
Figure 4: The probability distribution $P(r)$; the inset shows the eigenvalues
of the Liouvillian $\Lambda$ confined to the ground-symmetric subspace. The
‘lemon’ shape [42] is distinct near the origin for the eigenvalues of the
Liouvillian.
The system Hamiltonian is chosen such that $J=1$ and the distribution is
obtained over $10^{7}$ samples. In the inset we plot the eigenvalues only for
$2500$ randomly chosen samples. The jump operators have rates $\Gamma_{L}=1$
and $\Gamma_{R}=2$.
## V Conclusions
In this paper we have presented several techniques to obtain the steady states
of a degenerate Lindblad Liouvillian. Each method comes with advantages and
disadvantages and, together, they form a useful toolbox for many different
problems. First, we have presented a method based on the use of symmetry
operators. This technique allows the analytical resolution of many systems,
but it requires the existence and knowledge of the open-system symmetry
operators. The second method, based on a Gram-Schmidt orthonormalisation, is
general but computationally expensive; its utility depends on the degree of
degeneracy and on the system dimension. Finally, we have presented a method
based on large deviations theory. It does not require any previous knowledge
about the system symmetries and it is also computationally cheap, as it only
requires the diagonalization of an operator of the same size as the
Liouvillian. On the other hand, it only gives the density matrices that
maximise or minimise a given flux.
These methods have been illustrated by a canonical example, a para-benzene
ring. This system can be analytically diagonalised and, in several specific
cases, it shows a rich phenomenology including dark states, oscillating
coherences, and steady states that are not a consequence of symmetries.
Finally, we have also studied the eigenspacing distribution of the NESS
obtained via the orthonormalization method. Since the system by construction
is a thermalizing open quantum system, the eigenspacing distribution satisfies
$P(r)\rightarrow 0$ as $r\rightarrow 0$.
There are still several open questions to be addressed in this field of
research. The para-benzene ring considered herein had only one NESS, whereas
the other steady states were pure. An interesting question is whether it is
possible to construct open quantum systems with more than one NESS, i.e., with
several steady states influenced by the reservoir. Would these states belong
to the same random matrix ensembles, and if they do not, what would be the
consequences for observables such as heat and particle currents?
Furthermore, the existence of trace-zero steady states has recently been
probed, but the consequences of these states have not been analysed so far.
How they affect the physical properties of the system, and how they can be
engineered and detected, remain open questions.
## Acknowledgments
J.T. acknowledges support by the Institute for Basic Science in Korea
(IBS-R024-Y2). D.M. acknowledges the Spanish Ministry and the Agencia Española
de Investigación (AEI) for financial support under grant FIS2017-84256-P
(FEDER funds).
## Data Availability
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## References
* Gorini, Kossakowski, and Sudarshan [1976] V. Gorini, A. Kossakowski, and E. Sudarshan, J. Math. Phys. 17, 821 (1976).
* Lindblad [1976] G. Lindblad, Commun. Math. Phys. 48, 119 (1976).
* Olmos, Lesanovsky, and Garrahan [2012] B. Olmos, I. Lesanovsky, and J. Garrahan, Phys. Rev. Lett. 109, 020403 (2012).
* Manzano and Kyoseva [2016] D. Manzano and E. Kyoseva, Sci. Rep. 6, 31161 (2016).
* Han _et al._ [2020] J. Han, D. Leykam, D. Angelakis, and J. Thingna, (2020), arXiv:2011.02663 .
* Thingna, Esposito, and Barra [2019] J. Thingna, M. Esposito, and F. Barra, Phys. Rev. E 99, 042142 (2019).
* Chiara _et al._ [2018] G. D. Chiara, G. Landi, A. Hewgill, B. Reid, A. Ferraro, A. Roncaglia, and M. Antezza, New J. Phys. 20, 113024 (2018).
* Liu, Segal, and Hanna [2019] J. Liu, D. Segal, and G. Hanna, J. Phys. Chem. C 123, 18303 (2019).
* Quach and Munro [2020] J. Q. Quach and W. J. Munro, Phys. Rev. App. 14, 024092 (2020).
* Tejero, Thingna, and Manzano [2020] A. Tejero, J. Thingna, and D. Manzano, (2020), arXiv:2012.08224 .
* Žnidarič, Žunkovič, and Prosen [2011] M. Žnidarič, B. Žunkovič, and T. Prosen, Phys. Rev. E 84, 051115 (2011).
* Thingna, García-Palacios, and Wang [2012] J. Thingna, J. García-Palacios, and J.-S. Wang, Phys. Rev. B 85, 195452 (2012).
* Asadian _et al._ [2013] A. Asadian, D. Manzano, M. Tiersch, and H. Briegel, Phys. Rev. E 87, 012109 (2013).
* Manzano, Chuang, and Cao [2016] D. Manzano, C. Chuang, and J. Cao, New J. Phys. 18, 043044 (2016).
* Hu, Xia, and Kais [2020] Z. Hu, R. Xia, and S. Kais, Sci. Rep. 10 (2020).
* Kraus _et al._ [2008] B. Kraus, H. Büchler, S. Diehl, A. Kantian, A. Micheli, and P. Zoller, Phys. Rev. A (2008).
* Breuer and Petruccione [2002] H. Breuer and F. Petruccione, _The theory of open quantum systems_ (Oxford University Press, 2002).
* Manzano [2020] D. Manzano, AIP Adv. 10, 025106 (2020).
* Evans and Hanche-Olsen [1979] D. Evans and H. Hanche-Olsen, J. Funct. Anal. 32, 207 (1979).
* Buča and Prosen [2012] B. Buča and T. Prosen, New J. Phys. 14, 073007 (2012).
* Manzano and Hurtado [2018] D. Manzano and P. Hurtado, Adv. Phys. 67, 1 (2018).
* Thingna, Manzano, and Cao [2020] J. Thingna, D. Manzano, and J. Cao, New J. Phys. 22, 083026 (2020).
* Lieu _et al._ [2020] S. Lieu, R. Belyansky, J. Young, R. Lundgren, V. Albert, and A. Gorshkov, (2020), arXiv:2008.02816 .
* Fiorelli _et al._ [2019] E. Fiorelli, P. Rotondo, M. Marcuzzi, J. Garrahan, and I. Lesanovsky, Phys. Rev. A 99, 032126 (2019).
* Thingna, Manzano, and Cao [2016] J. Thingna, D. Manzano, and J. Cao, Sci. Rep. 6, 28027 (2016).
* Prosen and Žnidarič [2013] T. Prosen and M. Žnidarič, Phys. Rev. Lett. 111, 124101 (2013).
* Albert and Jiang [2014] V. Albert and L. Jiang, Phys. Rev. A 89, 022118 (2014).
* Prosen [2012] T. Prosen, Phys. Scr. 86, 058511 (2012).
* Note [1] Note that duality of basis ensures the left and right eigenvectors form an orthonormal set. This does not ensure that the right eigenvectors are orthogonal amongst themselves.
* Zhang _et al._ [2020] Z. Zhang, J. Tindall, J. Mur-Petit, D. Jaksch, and B. Buca, J. Phys. A: Math. Theor. 53, 215304 (2020).
* Manzano and Hurtado [2014] D. Manzano and P. Hurtado, Phys. Rev. B 90, 125138 (2014).
* Denisov _et al._ [2019] S. Denisov, T. Laptyeva, W. Tarnowski, D. Chruściński, and K. Życzkowski, Phys. Rev. Lett. 123, 140403 (2019).
* Wang, Piazza, and Luitz [2020] K. Wang, F. Piazza, and D. J. Luitz, Phys. Rev. Lett. 124, 100604 (2020).
* Sá, Ribeiro, and Prosen [2020] L. Sá, P. Ribeiro, and T. Prosen, Phys. Rev. X 10, 021019 (2020).
* Berry and Tabor [1977] M. Berry and M. Tabor, Proc. R. Soc. Lond. A 356, 375 (1977).
* Berry [1981] M. Berry, Ann. Phys. 131, 163 (1981).
* Mehta [2004] M. Mehta, _Random Matrices_ (Elsevier, New York, 2004).
* Ginibre [1965] J. Ginibre, J. Math. Phys. 6, 440 (1965).
* Can [2019] T. Can, J. Phys. A: Math. Theor. 52, 485302 (2019), see simple dissipator herein.
* Oganesyan and Huse [2007] V. Oganesyan and D. A. Huse, Phys. Rev. B 75, 155111 (2007).
* Atas _et al._ [2013] Y. Y. Atas, E. Bogomolny, O. Giraud, and G. Roux, Phys. Rev. Lett. 110, 084101 (2013).
* Note [2] The perfect lemon is achieved when the coherent contribution $-i[H,\rho]$ to the Liouvillian vanishes which is not the case here.
# Mismatched Decoding Reliability
Function at Zero Rate
Marco Bondaschi, Albert Guillén i Fàbregas,
and Marco Dalai M. Bondaschi is with the School of Computer and Communication
Sciences, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne,
Switzerland (e-mail: marco.bondaschi@epfl.ch).A. Guillén i Fàbregas is with
the Department of Engineering, University of Cambridge, Cambridge CB2 1PZ,
U.K., and with the Department of Information and Communication Technologies,
Universitat Pompeu Fabra, Barcelona 08018, Spain (e-mail: guillen@ieee.org).M.
Dalai is with the Department of Information Engineering at the University of
Brescia, Via Branze 38 I-25123 Brescia, Italy (e-mail:
marco.dalai@unibs.it).This work was supported in part by the European Research
Council under Grant 725411.This research was partially supported by Italian
Ministry of Education under Grant PRIN 2015 D72F16000790001.This work was
presented in part at the 2021 IEEE International Symposium on Information
Theory, Melbourne, Australia, Jul. 2021.Copyright (c) 2021 IEEE. Personal use
of this material is permitted. However, permission to use this material for
any other purposes must be obtained from the IEEE by sending a request to
<EMAIL_ADDRESS>
###### Abstract
We derive an upper bound on the reliability function of mismatched decoding
for zero-rate codes. The bound is based on a result by Komlós that shows the
existence of a subcode with certain symmetry properties. The bound is shown to
coincide with the expurgated exponent at rate zero for a broad family of
channel-decoding metric pairs.
###### Index Terms:
Error exponents, mismatched decoding.
## I Introduction
Consider a discrete memoryless channel with finite input alphabet
$\mathcal{X}$ and output alphabet $\mathcal{Y}$, and with transition
probabilities $W(y|x)$. For a message set $\mathcal{M}=\\{1,2,\ldots,M\\}$ and
blocklength $n$, an encoder is a function
$\mathcal{C}:\mathcal{M}\to\mathcal{X}^{n}$ that assigns to each message $m$ a
corresponding codeword $\boldsymbol{x}_{m}=(x_{m,1},x_{m,2},\ldots,x_{m,n})$.
The _rate of transmission_ is defined as
$R\triangleq\frac{\log M}{n}\,.$ (1)
When message $m$ is sent, an output sequence
$\boldsymbol{y}=(y_{1},y_{2},\ldots,y_{n})$ is received with probability
$W^{n}(\boldsymbol{y}|\boldsymbol{x}_{m})=\prod_{i=1}^{n}W(y_{i}|x_{m,i})\,.$
(2)
A decoder is a function $\mathcal{C}^{-1}:\mathcal{Y}^{n}\to\mathcal{M}$ whose
task is to map each possible output sequence to a message in $\mathcal{M}$,
ideally the message that was originally sent. In this paper, we consider a
decoder that follows the rule
$\mathcal{C}^{-1}(\boldsymbol{y})\in\\{m\in\mathcal{M}:q^{n}(\boldsymbol{x}_{m},\boldsymbol{y})\geq
q^{n}(\boldsymbol{x}_{m^{\prime}},\boldsymbol{y})\,\,\forall
m^{\prime}\in\mathcal{M}\\}\,,$ (3)
where
$q^{n}(\boldsymbol{x}_{m},\boldsymbol{y})=\prod_{i=1}^{n}q(x_{m,i},y_{i})$ (4)
for a fixed function $q:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}^{+}$ called
_decoding metric_. We assume for now that when there is a tie, that is, when
for a certain output sequence $\boldsymbol{y}$ the maximal
$q^{n}(\boldsymbol{x}_{m},\boldsymbol{y})$ is attained by more than one
message, the decoder selects one of them with an arbitrary rule. However, as
we will discuss in more detail later on, most of the results in this work are
valid only for a decoder that breaks ties equiprobably among the messages that
maximize $q^{n}(\boldsymbol{x}_{m},\boldsymbol{y})$. We will distinguish this
case from the general one when necessary.
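As a side illustration (a sketch over an assumed toy metric, not part of the paper), the decoding rule (3) with equiprobable tie-breaking can be written as follows:

```python
# Sketch: the mismatched decoder of (3)-(4), which picks the message
# maximizing the product metric q^n and breaks ties equiprobably.
# The metric q and the codewords below are toy assumptions.
import math
import random

def decode(codewords, y, q, rng):
    """Return the decoded message index for output sequence y."""
    # work in the log domain to avoid underflow for long blocks
    scores = [sum(math.log(q[x_i][y_i]) for x_i, y_i in zip(x, y))
              for x in codewords]
    best = max(scores)
    ties = [m for m, sc in enumerate(scores) if math.isclose(sc, best)]
    return rng.choice(ties)  # equiprobable tie-breaking

# toy binary example: X = Y = {0, 1}, mismatched metric q(x, y)
q = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
codewords = [(0, 0, 0, 0), (1, 1, 1, 1)]
rng = random.Random(0)
print(decode(codewords, (0, 0, 1, 0), q, rng))  # decodes to message 0
```

With only two antipodal codewords there are no ties here; the `rng.choice` branch only matters when several messages attain the maximal metric.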
When $q(x,y)=W(y|x)$, the decoder is the maximum likelihood decoder, achieving
the lowest probability of error. Instead, when the decoding metric $q(x,y)\neq
W(y|x)$, the decoder is, in general, said to be mismatched [1, 2]. The
mismatched decoding problem encompasses a number of important problems such as
channel uncertainty, fading channels, reduced-complexity decoding, bit-
interleaved coded modulation, optical communications, and zero-error and zero-
undetected error capacity. See [3] for a recent survey of the information
theoretic foundations of mismatched decoding.
When message $m$ is sent, a decoding error occurs if
$\mathcal{C}^{-1}(\boldsymbol{y})\neq m$. The probability of this event is
$P_{e,m}^{(n)}\triangleq\sum_{\boldsymbol{y}\in\mathcal{Y}^{c}_{m}}W^{n}(\boldsymbol{y}|\boldsymbol{x}_{m})\,,$
(5)
where $\mathcal{Y}_{m}\subset\mathcal{Y}^{n}$ is the subset of output
sequences that are decoded to $m$ and $\mathcal{Y}^{c}_{m}$ denotes its
complement. The average probability of error of the code is
$P_{e}^{(n)}\triangleq\frac{1}{M}\sum_{m=1}^{M}P_{e,m}^{(n)}\,.$ (6)
For fixed $R$, $n$ and decoding metric $q$, let $P_{e}^{q}(R,n)$ be the
smallest probability of error over all codes with rate at least $R$ and
blocklength $n$, when $q$ is used as the decoding metric.
The mismatched reliability function is defined as
$E^{q}(R)\triangleq\limsup_{n\to\infty}-\frac{\log P_{e}^{q}(R,n)}{n}$ (7)
and represents the asymptotic exponent with which the probability of error
goes to zero for a given channel and decoding metric, when an optimal code
with blocklength $n$ and rate at least $R$ is used. The supremum of the
information rates $R$ for which the error probability tends to zero is called
_mismatched capacity_.
In general, there is no single-letter expression for the mismatched capacity.
A number of achievable rate and error exponent results based on random coding
are available in the literature [1, 2, 4, 5, 6]. In terms of upper bounds on
the mismatched capacity or on the reliability function, there are fewer
results in the literature. Recently, single-letter upper bounds on the
mismatched capacity improving over the Shannon capacity were proposed in [7,
8]. A sphere-packing upper bound on the mismatched reliability function was
recently derived in [9], yielding an improved upper bound on the mismatched
capacity.
In this paper, we study the problem of finding an upper bound on the
mismatched reliability function of any given discrete memoryless channel and
decoding metric, when the rate tends to $0$, that is, we are interested in
upper-bounding $E^{q}(0^{+})$. In the following, we focus only on decoding
metrics that are meaningful for our problem. In particular, we restrict our
attention to decoding metrics such that
$W(y|x)>0\implies q(x,y)>0$ (8)
for all $x\in\mathcal{X}$ and $y\in\mathcal{Y}$. In fact, channels with a
decoding metric that does not meet this condition for some input $x$ have a
mismatched capacity equal to $0$ if that input is used [1], and so they are of
little interest.
As we already mentioned, several lower bounds on the mismatched reliability
function exist. The one that is most relevant for this work is a
generalization of Gallager’s classical expurgated bound to the case of
mismatched decoding obtained by Scarlett _et al._ [10]; when the rate
approaches zero, their bound takes the form
$\displaystyle E^{q}(0^{+})$ $\displaystyle\geq$
$\displaystyle\max_{Q\in\mathcal{P}(\mathcal{X})}\sup_{s\geq
0}-\\!\\!\sum_{a\in\mathcal{X}}\sum_{b\in\mathcal{X}}Q(a)Q(b)\log\sum_{y\in\mathcal{Y}}W(y|a)\left(\frac{q(b,y)}{q(a,y)}\right)^{\\!\\!s}.$
(9)
In the following, we derive an upper bound on $E^{q}(0^{+})$ for a wide class
of channels and decoding metrics, under the assumption that ties are broken
equiprobably. Such an upper bound will turn out to be equal to the lower bound
(9); therefore, for such a class of channels, the bound (9) is tight.
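The zero-rate expurgated bound (9) is straightforward to evaluate numerically. The sketch below does so for an assumed toy binary channel and metric, with $Q$ fixed to uniform and the supremum over $s\geq 0$ replaced by a coarse grid search (so it illustrates the inner expression of (9), not the full maximization over $Q$):

```python
# Sketch: evaluating the inner expression of the zero-rate expurgated
# bound (9) on a toy channel/metric (assumed values), uniform Q, grid on s.
import math

W = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # channel W(y|x)
q = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}  # mismatched metric q(x,y)
X, Y = [0, 1], [0, 1]
Q = {0: 0.5, 1: 0.5}  # input distribution (fixed here; maximized in (9))

def exponent(s):
    total = 0.0
    for a in X:
        for b in X:
            inner = sum(W[a][y] * (q[b][y] / q[a][y]) ** s for y in Y)
            total -= Q[a] * Q[b] * math.log(inner)
    return total

E0 = max(exponent(s) for s in [k / 100 for k in range(500)])
print(f"expurgated exponent at R -> 0 (uniform Q): {E0:.4f}")
```

Note that $\mathtt{exponent}(0)=0$, since every inner sum reduces to $\sum_{y}W(y|a)=1$; the bound is therefore nontrivial only for $s>0$.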
In order to prove our bound, in Section II we study conditions that channels
and decoding metrics must satisfy in order to have a finite reliability
function at rate $R=0^{+}$. Then, in Section III we derive a lower bound on
the mismatched probability of error for two codewords, and in Section IV we
prove the tightness of (9) using the lower bound of Section III and a
probabilistic result by Komlós (obtained using Ramsey theory) on the existence
of a subset of random variables from a larger set that (asymptotically) have
pairwise symmetric distributions. The application of these ideas in coding
theory originated in works of Blinovsky; see for example [15, 16, 17]. See
also [19] for a recent revisitation of the maximum likelihood case.
## II Mismatched zero-error capacity
In the following we assume that condition (8) is satisfied. It is also useful
to restrict our attention only to channels and decoding metrics such that at
all rates $R>0$ the minimum probability of error is strictly positive; if this
is not the case, then at $R=0^{+}$ the reliability function is infinite and no
finite upper bound is possible. Thus, we introduce a new quantity, the
mismatched zero-error capacity $C^{q}_{0}$ for a channel $W(y|x)$ and a
decoding metric $q(x,y)$, defined as the supremum of the rates $R$ for which
there exist codes with probability of error exactly equal to zero. If
$C_{0}^{q}$ is positive for some channel and decoding metric, then the
reliability function at $R=0^{+}$ is infinite. Hence, we would like to
restrict our attention to channels and decoding metrics with $C_{0}^{q}=0$.
We now proceed to analyze more closely the conditions for a positive
mismatched zero-error capacity. Notice that $C_{0}^{q}$ is positive if and
only if there exist two codewords $\boldsymbol{x}_{1}$ and
$\boldsymbol{x}_{2}$ (of arbitrary blocklength) such that for all output
sequences $\boldsymbol{y}$:
1. 1.
either $W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})=0$ or
$W^{n}(\boldsymbol{y}|\boldsymbol{x}_{2})=0$;
2. 2.
$W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})>0\implies
q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})\geq
q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})$
$W^{n}(\boldsymbol{y}|\boldsymbol{x}_{2})>0\implies
q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})\geq
q^{n}(\boldsymbol{x}_{1},\boldsymbol{y}).$
Condition 1 states that each possible output sequence can be obtained only
from one of the two codewords; condition 2 states that each sequence is always
decoded correctly. Notice that up to now we are still assuming that ties can
be decoded with an arbitrary rule. That is why Condition 2 admits cases such
that
$q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})=q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})$:
there always exists a tie-breaking rule that decodes each output sequence
correctly in these cases (the one that associates to each $\boldsymbol{y}$ the
only $\boldsymbol{x}$ with $W^{n}(\boldsymbol{y}|\boldsymbol{x})>0$).
As we stated earlier on, our upper bound on $E^{q}(0^{+})$ only holds for a
decoder that breaks ties equiprobably. Therefore, even if the definition of
mismatched zero-error capacity given above is the most general, since it
admits any decoding strategy for breaking ties, it is nonetheless meaningful
to us to introduce a second definition of mismatched zero-error capacity, that
we denote by $\bar{C}_{0}^{q}$, that is the supremum of the rates $R$ for
which there exist codes with probability of error exactly equal to zero, given
that ties are broken equiprobably. Since this decoding strategy is not
necessarily the best one, that is, the one that achieves the minimum
probability of error, it follows that, in general, $\bar{C}_{0}^{q}\leq
C_{0}^{q}$.
As for the conditions stated above, the only difference, when
$\bar{C}_{0}^{q}$ is considered instead of $C_{0}^{q}$, is that condition 2
becomes:
1. 2b)
$W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})>0\implies
q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})>q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})$
$W^{n}(\boldsymbol{y}|\boldsymbol{x}_{2})>0\implies
q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})>q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})$
since in this case ties are not allowed, given that, with ties broken
equiprobably, there is always a positive probability of decoding the output
sequence incorrectly.
Finally, when the chosen decoding metric is the maximum-likelihood one, that
is, $q(x,y)=W(y|x)$, the two zero-error capacities introduced above are equal
and coincide with the classical zero-error capacity $C_{0}$; also, since the
maximum-likelihood decoding metric is the one that minimizes the probability
of error, we have that, for any decoding metric, $\bar{C}_{0}^{q}\leq
C_{0}^{q}\leq C_{0}$.
The next objective of this section is to find conditions for the mismatched
zero-error capacity to be zero that depend only on the single-letter channel
probabilities $W(y|x)$ and decoding metric $q(x,y)$. This can be done using
the same tools that we will need to study $E^{q}(0^{+})$. Therefore, we
introduce now a real-valued function that will be useful to both ends. For any
two sequences $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$ in
$\mathcal{X}^{n}$, we define
$\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)\triangleq-\log\sum_{\boldsymbol{y}\in\hat{\mathcal{Y}}^{n}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}}W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})\bigg{(}\frac{q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})}{q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})}\bigg{)}^{s}\,,$
(10)
where
$\hat{\mathcal{Y}}^{n}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}\triangleq\big{\\{}\boldsymbol{y}\in\mathcal{Y}^{n}:q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})>0\big{\\}}\,.$
(11)
When $n=1$, (10) becomes, for any $a,b\in\mathcal{X}$,
$\mu_{a,b}(s)\triangleq-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|a)\bigg{(}\frac{q(b,y)}{q(a,y)}\bigg{)}^{s}\,,$
(12)
with
$\hat{\mathcal{Y}}_{a,b}\triangleq\big{\\{}y\in\mathcal{Y}:q(a,y)q(b,y)>0\big{\\}}.$
(13)
An additional quantity that will be useful to us is the limit of the
derivative of $\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ when
$s\to\infty$; for this reason, we introduce a compact symbol for it:
$\displaystyle\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}\triangleq\lim_{s\to\infty}\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$
(14)
$\displaystyle\mu^{\prime}_{a,b}\triangleq\lim_{s\to\infty}\mu^{\prime}_{a,b}(s)$
(15)
and we set by definition
$\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}=+\infty$ if
$\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=+\infty$, and the same for
$\mu^{\prime}_{a,b}$. (Throughout the paper we use the convention
$\frac{\cdot}{0}=+\infty$.)
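The single-letter quantity (12) and the limiting slope (15) are easy to probe numerically. The sketch below (toy channel and metric, assumed values) evaluates $\mu_{a,b}(s)$ and estimates $\mu^{\prime}_{a,b}$ by a finite difference at large $s$, where Lemma 1 below identifies the limit in closed form:

```python
# Sketch: the single-letter function mu_{a,b}(s) of (12) over the support
# set (13), for a toy channel/metric (assumed values).
import math

W = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
q = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
Y = [0, 1]

def mu(a, b, s):
    support = [y for y in Y if q[a][y] * q[b][y] > 0]  # the set (13)
    return -math.log(sum(W[a][y] * (q[b][y] / q[a][y]) ** s
                         for y in support))

# mu_{a,a}(s) = 0 for every s, and the limiting slope mu'_{a,b} can be
# estimated by a unit finite difference at large s:
print(mu(0, 0, 2.0))                        # essentially 0
slope = mu(0, 1, 100.0) - mu(0, 1, 99.0)
# for this toy pair the minimizing y is y = 1, so the slope approaches
# log q(0,1)/q(1,1):
print(slope, math.log(q[0][1] / q[1][1]))
```

For this example the finite difference agrees with $\min_{y:W(y|0)>0}\log\frac{q(0,y)}{q(1,y)}=\log\frac{0.2}{0.7}$ to machine precision, since the largest-base exponential dominates already at moderate $s$.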
###### Lemma 1.
The following properties hold for all $a,b\in\mathcal{X}$ and all sequences
$\boldsymbol{x}_{1},\boldsymbol{x}_{2}$ in $\mathcal{X}^{n}$.
1. 1.
Let $P_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}$ be the joint type of
$\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$. Then,
$\frac{1}{n}\,\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=\sum_{a\in\mathcal{X}}\sum_{b\in\mathcal{X}}P_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(a,b)\mu_{a,b}(s)\,.$
(16)
2. 2.
$\mu_{a,b}(s)$ and $\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ are
concave.
3. 3.
$\mu_{a,a}(s)=0\,.$ (17)
4. 4.
$\mu_{a,b}^{\prime}=\min_{y:W(y|a)>0}\log\frac{q(a,y)}{q(b,y)}\,.$ (18)
###### Proof.
To prove property 1, notice that
$\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ is additive, in the sense
that it can be rewritten coordinate-by-coordinate as
$\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=\sum_{c=1}^{n}\mu_{c}(s)\,,$
(19)
where
$\mu_{c}(s)\triangleq-\log\sum_{y\in\hat{\mathcal{Y}}_{c}}W(y|x_{1,c})\bigg{(}\frac{q(x_{2,c},y)}{q(x_{1,c},y)}\bigg{)}^{s}$
(20)
and
$\hat{\mathcal{Y}}_{c}\triangleq\big{\\{}y\in\mathcal{Y}:q(x_{1,c},y)q(x_{2,c},y)>0\big{\\}}\,.$
(21)
Each term of the sum in (19) only depends on the pair of input symbols
$(x_{1,c},x_{2,c})$. Since every pair $(a,b)\in\mathcal{X}^{2}$ appears in
$nP_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(a,b)$ coordinates, grouping
together the equal terms in (19) leads to (16).
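The decomposition (16) can be checked directly on short sequences by computing the $n$-letter quantity (10) by brute force over $\mathcal{Y}^{n}$ and comparing it with the joint-type sum (toy channel and metric, assumed values):

```python
# Sketch: numerical check of the joint-type decomposition (16), comparing
# the n-letter mu of (10) computed by brute force over Y^n with the
# single-letter sum weighted by the joint type. Toy channel/metric assumed.
import math
from collections import Counter
from itertools import product

W = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
q = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
Y = [0, 1]

def mu1(a, b, s):  # eq. (12)
    return -math.log(sum(W[a][y] * (q[b][y] / q[a][y]) ** s
                         for y in Y if q[a][y] * q[b][y] > 0))

def mu_n(x1, x2, s):  # eq. (10), brute force over all output sequences
    total = 0.0
    for y in product(Y, repeat=len(x1)):
        q1 = math.prod(q[a][yi] for a, yi in zip(x1, y))
        q2 = math.prod(q[b][yi] for b, yi in zip(x2, y))
        if q1 * q2 > 0:  # restrict to the support set (11)
            wn = math.prod(W[a][yi] for a, yi in zip(x1, y))
            total += wn * (q2 / q1) ** s
    return -math.log(total)

x1, x2, s = (0, 0, 1, 0), (1, 0, 1, 1), 0.7
n = len(x1)
joint_type = Counter(zip(x1, x2))  # counts n * P_{x1,x2}(a, b)
rhs = sum(c / n * mu1(a, b, s) for (a, b), c in joint_type.items())
assert math.isclose(mu_n(x1, x2, s) / n, rhs)
print("decomposition (16) verified on the toy example")
```

The agreement follows because the support set (11) is itself a product set, so the sum over $\boldsymbol{y}$ factorizes coordinate-by-coordinate exactly as in (19).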
Property 2 can be proved by computing the first and second derivatives of
$\mu_{a,b}(s)$, that is
$\mu_{a,b}^{\prime}(s)=-\frac{\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|a)\big{(}\frac{q(b,y)}{q(a,y)}\big{)}^{s}\log\frac{q(b,y)}{q(a,y)}}{\sum_{\bar{y}\in\hat{\mathcal{Y}}_{a,b}}W(\bar{y}|a)\big{(}\frac{q(b,\bar{y})}{q(a,\bar{y})}\big{)}^{s}}$
(22)
and
$\mu_{a,b}^{\prime\prime}(s)=-\frac{\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|a)\big{(}\frac{q(b,y)}{q(a,y)}\big{)}^{s}\big{(}\log\frac{q(b,y)}{q(a,y)}\big{)}^{2}}{\sum_{\bar{y}\in\hat{\mathcal{Y}}_{a,b}}W(\bar{y}|a)\big{(}\frac{q(b,\bar{y})}{q(a,\bar{y})}\big{)}^{s}}\\\
+\big{[}\mu_{a,b}^{\prime}(s)\big{]}^{2}.$ (23)
Now, for any $s\geq 0$, these two quantities can be seen respectively as the
expected value and the variance (with a minus sign) of a random variable; in
fact, define the following probability distribution over the set of sequences
$\hat{\mathcal{Y}}_{a,b}$,
$Q_{s}(y)\triangleq\frac{W(y|a)\big{(}\frac{q(b,y)}{q(a,y)}\big{)}^{s}}{\sum_{\bar{y}\in\hat{\mathcal{Y}}_{a,b}}W(\bar{y}|a)\big{(}\frac{q(b,\bar{y})}{q(a,\bar{y})}\big{)}^{s}}\,,$
(24)
and define the random variable
$D(y)\triangleq-\log\frac{q(b,y)}{q(a,y)}\,.$ (25)
Then, we can rewrite the two derivatives as
$\mu_{a,b}^{\prime}(s)=\mathbb{E}_{Q_{s}}[D]=\sum_{y\in\hat{\mathcal{Y}}_{a,b}}Q_{s}(y)D(y)$
(26) $\displaystyle\mu_{a,b}^{\prime\prime}(s)$
$\displaystyle=-\text{Var}_{Q_{s}}[D]=-\mathbb{E}_{Q_{s}}\big{[}D^{2}\big{]}+\big{(}\mathbb{E}_{Q_{s}}[D]\big{)}^{2}$
(27)
$\displaystyle=-\sum_{y\in\hat{\mathcal{Y}}_{a,b}}Q_{s}(y)D^{2}(y)+\big{[}\mu_{a,b}^{\prime}(s)\big{]}^{2}.$
(28)
Since the variance of a random variable is always non-negative, it follows
that $\mu_{a,b}^{\prime\prime}(s)\leq 0$ and therefore that $\mu_{a,b}(s)$ is
concave. By property 1, $\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ is
also concave, since it can be rewritten as a (weighted) sum of concave
functions.
Property 3 follows from the fact that, from definition (12),
$\mu_{a,a}(s)=-\log\sum_{y\in\hat{\mathcal{Y}}_{a,a}}W(y|a)=0\,,$ (29)
where the last equality follows from (8), since
$\big{\\{}y\in\mathcal{Y}:W(y|a)>0\big{\\}}\subset\big{\\{}y\in\mathcal{Y}:q(a,y)>0\big{\\}}=\hat{\mathcal{Y}}_{a,a}\,.$
(30)
To prove property 4, first notice that $\mu_{a,b}(s)=+\infty$ if and only if
$\big{\\{}y\in\mathcal{Y}:W(y|a)q(b,y)>0\big{\\}}=\varnothing\,.$ (31)
In such a case, the right-hand side of (18) equals $+\infty$, in accordance
with our definition. If instead the set on the left-hand side of (31) is
not empty, then the property follows directly by taking the limit
$s\to+\infty$ of the right-hand side of (22), since both numerator and
denominator are dominated by the exponentials with the largest base, which is
equal to
$\max_{y:W(y|a)>0}\frac{q(b,y)}{q(a,y)}\,.\qed$
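Property 2 can also be confirmed numerically: since $\mu^{\prime\prime}_{a,b}(s)=-\mathrm{Var}_{Q_{s}}[D]\leq 0$, a second finite difference of $\mu_{a,b}(s)$ must be non-positive at every $s$. A sketch with the same kind of toy channel and metric (assumed values):

```python
# Sketch: numerical check of the concavity of mu_{a,b}(s) (property 2 of
# Lemma 1) via second finite differences, on a toy channel/metric.
import math

W = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
q = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
Y = [0, 1]

def mu(a, b, s):
    ys = [y for y in Y if q[a][y] * q[b][y] > 0]
    return -math.log(sum(W[a][y] * (q[b][y] / q[a][y]) ** s for y in ys))

h = 1e-3
for s in [0.1, 0.5, 1.0, 2.0, 5.0]:
    d2 = (mu(0, 1, s + h) - 2 * mu(0, 1, s) + mu(0, 1, s - h)) / h**2
    # the exact value is -Var_{Q_s}[D] <= 0; allow a tiny numerical slack
    assert d2 <= 1e-6, "mu should be concave"
print("second differences non-positive: concavity confirmed numerically")
```

The second difference approximates $-\mathrm{Var}_{Q_{s}}[D]$, which vanishes only when the log-metric ratio $D(y)$ is constant on the support.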
We are now ready to prove the following theorem on the mismatched zero-error
capacities $C_{0}^{q}$ and $\bar{C}_{0}^{q}$.
###### Theorem 1.
For any given discrete memoryless channel $W(y|x)$ and decoding metric
$q(x,y)$:
1. 1.
$C_{0}^{q}=0$ if and only if
$\min_{y:W(y|a)>0}\frac{q(a,y)}{q(b,y)}\leq\max_{y:W(y|b)>0}\frac{q(a,y)}{q(b,y)}$
(32)
for all $a,b\in\mathcal{X}$, and for all $a,b\in\mathcal{X}$ such that
$\min_{y:W(y|a)>0}\frac{q(a,y)}{q(b,y)}=\max_{y:W(y|b)>0}\frac{q(a,y)}{q(b,y)}$
(33)
there exists some $y\in\mathcal{Y}$ such that
$W(y|a)W(y|b)>0\,.$ (34)
2. 2.
$\bar{C}_{0}^{q}=0$ if and only if
$\min_{y:W(y|a)>0}\frac{q(a,y)}{q(b,y)}\leq\max_{y:W(y|b)>0}\frac{q(a,y)}{q(b,y)}$
(35)
for all $a,b\in\mathcal{X}$.
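Condition (35) of part 2 is purely single-letter, so it can be checked mechanically for any given channel and metric. A sketch on a toy pair (assumed values; this illustrates the criterion, not a claim about any particular channel of interest):

```python
# Sketch: checking the single-letter condition (35) of Theorem 1, i.e. the
# criterion for the mismatched zero-error capacity \bar{C}_0^q to vanish.
# Channel W and metric q below are toy assumptions.
W = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
q = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
X, Y = [0, 1], [0, 1]

def bar_C0_is_zero(W, q):
    for a in X:
        for b in X:
            lhs = min(q[a][y] / q[b][y] for y in Y if W[a][y] > 0)
            rhs = max(q[a][y] / q[b][y] for y in Y if W[b][y] > 0)
            if lhs > rhs:
                return False  # condition (35) violated for this (a, b)
    return True

print(bar_C0_is_zero(W, q))  # True for this fully-connected toy channel
```

For this example every output is reachable from both inputs, so the classical $C_{0}=0$ as well, consistent with $\bar{C}_{0}^{q}\leq C_{0}^{q}\leq C_{0}$.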
###### Corollary 1.
$\bar{C}_{0}^{q}=0\implies\max_{Q\in\mathcal{P}(\mathcal{X})}\sup_{s\geq
0}\sum_{a}\sum_{b}Q(a)Q(b)\mu_{a,b}(s)<+\infty\,.$ (36)
###### Proof.
We first show that the quantities defined in (10) and (14) — and consequently
also (12) and (15) — satisfy the following properties, for every
$\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$:
$\displaystyle\lim_{s\to+\infty}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=+\infty$
$\displaystyle\iff\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}^{\prime}>0$ (37)
$\displaystyle\lim_{s\to+\infty}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)\in[0,+\infty)$
$\displaystyle\iff\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}^{\prime}=0$ (38)
$\displaystyle\lim_{s\to+\infty}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=-\infty$
$\displaystyle\iff\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}^{\prime}<0\,.$
(39)
In fact, consider the function
$\displaystyle f_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$
$\displaystyle\triangleq e^{-\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)}$
$\displaystyle=\sum_{\boldsymbol{y}\in\hat{\mathcal{Y}}^{n}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}}W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})\bigg{(}\frac{q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})}{q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})}\bigg{)}^{s}.$
(40)
Since $f_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ is the sum of non-
negative quantities, when $s\to\infty$ only three alternatives are possible:
$f_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ tends to infinity,
$f_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ tends to a finite positive
number, or $f_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ tends to zero. In
the first case, $f_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)\to\infty$ (and
consequently $\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)\to-\infty$) if
and only if at least one non-zero term of the sum goes to infinity, which in
turn happens if and only if
$\max_{\boldsymbol{y}:W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})>0}\log\frac{q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})}{q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})}=-\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}>0\,.$
(41)
In the second case, $f_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ tends to a
finite positive real number if and only if at least one term of the sum tends
to a finite positive number and all the other terms tend to zero, which
happens if and only if
$\max_{\boldsymbol{y}:W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})>0}\log\frac{q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})}{q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})}=-\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}=0\,;$
(42)
in such a case, the limit is
$\lim_{s\to\infty}f_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=\sum_{\mathrm{some}\
\boldsymbol{y}}W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})\,,$ (43)
which is strictly positive and at most $1$, consequently
$\lim_{s\to\infty}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=-\log\sum_{\mathrm{some}\
\boldsymbol{y}}W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})$ (44)
is finite and greater than or equal to $0$. Finally,
$f_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ tends to zero (and consequently
$\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)\to\infty$) if and only if all
terms of the sum tend to zero, which happens if and only if
$\max_{\boldsymbol{y}:W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})>0}\log\frac{q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})}{q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})}=-\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}<0\,.$
(45)
Notice that the same properties hold also for $\mu_{a,b}(s)$ and
$\mu^{\prime}_{a,b}$ for any $a,b\in\mathcal{X}$, since one can choose
$\boldsymbol{x}_{1}=a$ and $\boldsymbol{x}_{2}=b$.
Next, we analyze more closely the properties that a pair of codewords
$\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$ must have in order to satisfy
conditions 1 and 2 above for a positive $C_{0}^{q}$. Condition 1 is satisfied
if and only if, in at least one coordinate of the pair of codewords, there is
a pair of input symbols $(a,b)$ such that $W(y|a)W(y|b)=0$ for all $y$; that
is, the joint type $P_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}$ of the two
codewords must have $P_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(a,b)>0$ for
that pair of input symbols. This condition can be satisfied only if there
actually exists a pair of symbols $(a,b)$ such that $W(y|a)W(y|b)=0$ for all
$y$. Thus, a precondition for $C_{0}^{q}>0$ is that
$\mathcal{A}\triangleq\big{\\{}(a,b)\in\mathcal{X}^{2}:W(y|a)W(y|b)=0\text{
for all }y\in\mathcal{Y}\big{\\}}\neq\varnothing.$ (46)
Notice that this is also the condition for the classical $C_{0}$ to be
positive, which is of course a necessary condition to have $C_{0}^{q}>0$,
since, as we already pointed out, we have in general $C_{0}^{q}\leq C_{0}$.
Instead, thanks to (18), condition 2 is satisfied if and only if both
$\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}^{\prime}\geq
0\qquad\text{and}\qquad\mu_{\boldsymbol{x}_{2},\boldsymbol{x}_{1}}^{\prime}\geq
0\,.$ (47)
Hence, using (16), there exists a pair of codewords satisfying condition 2 if
and only if there exists a joint type $P$ such that both
$\sum_{a}\sum_{b}P(a,b)\mu_{a,b}^{\prime}\geq
0\quad\text{and}\quad\sum_{a}\sum_{b}P(a,b)\mu_{b,a}^{\prime}\geq 0\,.$ (48)
Any pair of codewords with a joint type $P$ satisfying (48) satisfies
Condition 2 for a positive $C_{0}^{q}$. Now, condition (48) is true if and
only if
$\sup_{P\in\mathcal{P}(\mathcal{X}^{2})}\\!\\!\min\Big{\\{}\sum_{a}\sum_{b}P(a,b)\mu_{a,b}^{\prime}\,,\,\sum_{a}\sum_{b}P(a,b)\mu_{b,a}^{\prime}\Big{\\}}>0$
(49)
or
$\min\Big{\\{}\sum_{a}\sum_{b}P(a,b)\mu_{a,b}^{\prime}\,,\,\sum_{a}\sum_{b}P(a,b)\mu_{b,a}^{\prime}\Big{\\}}=0$
(50)
for some $P\in\mathcal{P}_{\mathbb{Q}}(\mathcal{X}^{2})$, where
$\mathcal{P}_{\mathbb{Q}}(\mathcal{X}^{2})$ denotes the set of probability
vectors over $\mathcal{X}^{2}$ with rational entries. This supremum can be
computed easily. Notice first that the minimum of two linear functions is
concave. Then, since the minimum of the two functions is invariant with
respect to the transformation $P(a,b)\leftrightarrow P(b,a)$, its maximum is
always attained (also) by a joint distribution such that $P(a,b)=P(b,a)$ for
all $a,b$. In such a case, the two functions are both equal to
$\sum_{a\leq b}P(a,b)(\mu_{a,b}^{\prime}+\mu_{b,a}^{\prime})$ (51)
and this quantity is maximized when all the weight is given to the largest
term. Notice also that the $P$ achieving this maximum has rational entries and
so it belongs to $\mathcal{P}_{\mathbb{Q}}(\mathcal{X}^{2})$. Hence, thanks to
(18), conditions (49) and (50) become
$\max_{a,b}\bigg{(}\min_{y:W(y|a)>0}\log\frac{q(a,y)}{q(b,y)}+\min_{y:W(y|b)>0}\log\frac{q(b,y)}{q(a,y)}\bigg{)}\geq
0\,.$ (52)
Thus, if (52) is true, then we can find at least one joint type $P$ that
satisfies (48), and with it a set of pairs of codewords that satisfy Condition
2 for $C_{0}^{q}>0$. However, we have no guarantees that there exists a pair
of codewords in this set that satisfies also Condition 1. For this to be true,
it is necessary that a pair of codewords in the set has a joint type with
$P(a,b)>0$ for some $(a,b)\in\mathcal{A}$. We now investigate this issue. If
the maximum in (52) is strictly positive, then, thanks to the fact that the
quantity in (51) is linear in $P$, in the neighborhood of the
joint type achieving the maximum, there exists a (symmetric) joint type
$\hat{P}$ that has $\hat{P}(a,b)>0$ for a pair of symbols
$(a,b)\in\mathcal{A}$, and that, when put into (51), still returns a positive
value. Hence, the two codewords with that joint type satisfy both conditions 1
and 2, and $C_{0}^{q}$ is positive. If, instead, the maximum in (52) is
exactly zero, then, a joint type satisfying also condition 1 exists only if
one of the joint types achieving the maximum already has a positive entry
corresponding to a pair of symbols $(a,b)\in\mathcal{A}$, that is, there
exists a pair of symbols $(a,b)$ such that
$\min_{y:W(y|a)>0}\log\frac{q(a,y)}{q(b,y)}+\min_{y:W(y|b)>0}\log\frac{q(b,y)}{q(a,y)}=0$
(53)
and for all $y$, $W(y|a)W(y|b)=0$.
To summarize, $C_{0}^{q}>0$ if and only if $\mathcal{A}\neq\varnothing$ and
either
$\max_{a,b}\bigg{(}\min_{y:W(y|a)>0}\log\frac{q(a,y)}{q(b,y)}+\min_{y:W(y|b)>0}\log\frac{q(b,y)}{q(a,y)}\bigg{)}>0$
(54)
or there exists a pair $(a,b)\in\mathcal{A}$ such that
$\min_{y:W(y|a)>0}\log\frac{q(a,y)}{q(b,y)}+\min_{y:W(y|b)>0}\log\frac{q(b,y)}{q(a,y)}=0\,.$
(55)
The complementary conditions give the first part of the theorem.
The second part is a bit more straightforward. Condition 1 remains identical;
as for condition 2b, condition (47) is replaced by
$\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}^{\prime}>0\qquad\text{and}\qquad\mu_{\boldsymbol{x}_{2},\boldsymbol{x}_{1}}^{\prime}>0\,.$
(56)
Following the same steps as before, we get that $\bar{C}_{0}^{q}>0$ if and
only if $\mathcal{A}\neq\varnothing$ and
$\max_{a,b}\bigg{(}\min_{y:W(y|a)>0}\log\frac{q(a,y)}{q(b,y)}+\min_{y:W(y|b)>0}\log\frac{q(b,y)}{q(a,y)}\bigg{)}>0\,.$
(57)
The complementary conditions give the second part of the theorem.
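Before moving to the corollary, note that the positivity test just derived is directly computable. The sketch below is a minimal illustration (the dictionary-of-dicts encoding of $W$ and $q$ is our own choice, not the paper's): it checks $\mathcal{A}\neq\varnothing$ and the strict inequality (57), assuming, as in (8), that $W(y|x)>0$ implies $q(x,y)>0$, so the sum of the two minima never takes the indeterminate form $\infty-\infty$.

```python
import math
import itertools

def log_ratio(num, den):
    # log(num/den) with the convention log(x/0) = +inf for x > 0; under
    # assumption (8) the numerator is positive wherever the minima below
    # are evaluated, so -inf never occurs
    if num > 0 and den == 0:
        return math.inf
    return math.log(num / den)

def bar_c0q_positive(W, q, X, Y):
    """Test for bar{C}_0^q > 0: the set A of (46) must be nonempty and
    the maximum in (57) must be strictly positive."""
    # A: ordered pairs of inputs whose output supports are disjoint
    A = [(a, b) for a, b in itertools.permutations(X, 2)
         if all(W[a][y] * W[b][y] == 0 for y in Y)]
    if not A:
        return False
    best = max(
        min(log_ratio(q[a][y], q[b][y]) for y in Y if W[a][y] > 0)
        + min(log_ratio(q[b][y], q[a][y]) for y in Y if W[b][y] > 0)
        for a, b in itertools.permutations(X, 2))
    return best > 0

# matched metric on a noiseless binary channel: the test succeeds,
# consistent with the positive zero-error capacity of this channel
noiseless = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.0, 1: 1.0}}
assert bar_c0q_positive(noiseless, noiseless, [0, 1], [0, 1])
```

For a binary symmetric channel with the matched metric the test returns `False`, since no pair of inputs has disjoint output supports and hence $\mathcal{A}=\varnothing$.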
Finally, regarding the corollary, the implication follows from the fact that
$\displaystyle\max_{Q\in\mathcal{P}(\mathcal{X})}$ $\displaystyle\sup_{s\geq
0}\sum_{a}\sum_{b}Q(a)Q(b)\mu_{a,b}(s)$
$\displaystyle=\frac{1}{2}\max_{Q\in\mathcal{P}(\mathcal{X})}\sup_{s\geq
0}\sum_{a}\sum_{b}Q(a)Q(b)\big{(}\mu_{a,b}(s)+\mu_{b,a}(s)\big{)}$ (58)
$\displaystyle\leq\frac{1}{2}\max_{Q\in\mathcal{P}(\mathcal{X})}\sum_{a}\sum_{b}Q(a)Q(b)\sup_{s\geq
0}\big{(}\mu_{a,b}(s)+\mu_{b,a}(s)\big{)}\,,$ (59)
where the equality follows from the fact that
$\sum_{a}\sum_{b}Q(a)Q(b)\mu_{a,b}(s)=\sum_{a}\sum_{b}Q(a)Q(b)\mu_{b,a}(s)\,.$
(60)
The quantity in (59) is finite if $\bar{C}_{0}^{q}=0$, since inequality (35)
can be rewritten as
$\mu^{\prime}_{a,b}+\mu^{\prime}_{b,a}\leq 0\,,$ (61)
which by (37) is equivalent to
$\lim_{s\to+\infty}\big{(}\mu_{a,b}(s)+\mu_{b,a}(s)\big{)}<+\infty\,,$ (62)
which in turn implies that
$\sup_{s\geq 0}\big{(}\mu_{a,b}(s)+\mu_{b,a}(s)\big{)}<+\infty$ (63)
since $\mu_{a,b}(s)+\mu_{b,a}(s)$ is concave. ∎
## III Lower bound on the probability of error
We now proceed to derive a lower bound on the minimum probability of error of
any discrete memoryless channel and mismatched metric, under the assumption
that $\bar{C}_{0}^{q}=0$ and that ties are decoded equiprobably. In order to
achieve this, we first derive a lower bound on the probability of error of
codes with two codewords, and then we generalize the result to codes with an
arbitrary number of codewords.
Following (5), the probabilities of error for the two messages 1 and 2 satisfy
$\displaystyle P_{e,1}^{(n)}$
$\displaystyle\geq\sum_{\boldsymbol{y}\notin\mathcal{Y}_{1}^{n}}W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})$
(64) $\displaystyle P_{e,2}^{(n)}$
$\displaystyle\geq\sum_{\boldsymbol{y}\notin\mathcal{Y}_{2}^{n}}W^{n}(\boldsymbol{y}|\boldsymbol{x}_{2})\,,$
(65)
where
$\displaystyle\mathcal{Y}_{1}^{n}$
$\displaystyle=\\{\boldsymbol{y}\in\mathcal{Y}^{n}:q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})\geq
q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})\\}$ (66)
$\displaystyle\mathcal{Y}_{2}^{n}$
$\displaystyle=\\{\boldsymbol{y}\in\mathcal{Y}^{n}:q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})\leq
q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})\\}.$ (67)
Notice that the lower bounds are due to the fact that we consider all
sequences that are tied between the two messages as correctly decoded. Also,
we can restrict our attention only to sequences such that
$q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})\,q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})>0$,
that is, we can substitute $\mathcal{Y}^{n}$ with the set
$\hat{\mathcal{Y}}^{n}=\\{\boldsymbol{y}\in\mathcal{Y}^{n}:q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})\,q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})>0\\}\,.$
(68)
In fact, thanks to the condition in (8), if
$q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})$ and
$q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})$ are both zero for some sequence
$\boldsymbol{y}$, then also $W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})$ and
$W^{n}(\boldsymbol{y}|\boldsymbol{x}_{2})$ are zero, and the sequence
contributes neither to $P_{e,1}$ nor to $P_{e,2}$. If instead only one of the
two is zero, for example $q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})$, then
$q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})<q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})$
and the sequence would only contribute to $P_{e,1}$; however, by (8) we have
$W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})=0$, and so also its contribution to
$P_{e,1}$ is zero.
We now introduce some tools from the method of types developed by Csiszár and
Körner [11]. We define the _conditional type_ of the sequence $\boldsymbol{y}$
given the codewords $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$ for any
$a,b\in\mathcal{X}$ and $y\in\mathcal{Y}$ as
$V_{\boldsymbol{y}}(y|a,b)\triangleq\frac{P_{\boldsymbol{x}_{1},\boldsymbol{x}_{2},\boldsymbol{y}}(a,b,y)}{P_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(a,b)}\,,$
(69)
where $P_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}$ and
$P_{\boldsymbol{x}_{1},\boldsymbol{x}_{2},\boldsymbol{y}}$ are the joint types
of the pair $(\boldsymbol{x}_{1},\boldsymbol{x}_{2})$ and the triple
$(\boldsymbol{x}_{1},\boldsymbol{x}_{2},\boldsymbol{y})$ respectively. In
order to lighten the notation, from now on we let
$P_{1,2}(a,b)=P_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(a,b)$. We also define
the _conditional Kullback-Leibler divergence_ as
$D(V\|Z|P_{1,2})\triangleq\sum_{a\in\mathcal{X}}\sum_{b\in\mathcal{X}}P_{1,2}(a,b)D\big{(}V(\cdot\,|a,b)\|Z(\cdot\,|a,b)\big{)}$
(70)
for any two conditional distributions $V,Z:\mathcal{X}^{2}\to\mathcal{Y}$.
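These definitions translate directly into code. The short sketch below is illustrative only (the sequences and function names are our own): it computes joint types, the conditional type of (69), and the conditional divergence of (70).

```python
import math
from collections import Counter

def joint_type(*seqs):
    # empirical joint type of equal-length sequences, e.g. P_{x1,x2}
    n = len(seqs[0])
    return {k: v / n for k, v in Counter(zip(*seqs)).items()}

def conditional_type(x1, x2, y):
    # V_y(y|a,b) = P_{x1,x2,y}(a,b,y) / P_{x1,x2}(a,b), as in (69)
    P3 = joint_type(x1, x2, y)
    P2 = joint_type(x1, x2)
    return {(a, b, c): p / P2[(a, b)] for (a, b, c), p in P3.items()}

def cond_kl(V, Z, P2):
    # conditional Kullback-Leibler divergence D(V||Z|P_{1,2}) of (70);
    # entries absent from V are treated as zero (0 log 0 = 0)
    return sum(P2[(a, b)] * v * math.log(v / Z[(a, b, c)])
               for (a, b, c), v in V.items() if v > 0)

x1, x2, y = (0, 0, 1, 1), (0, 1, 0, 1), (0, 0, 1, 1)
V = conditional_type(x1, x2, y)
P2 = joint_type(x1, x2)
assert cond_kl(V, V, P2) == 0.0   # D(V||V|P) = 0, as for any divergence
```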
Now, all sequences $\boldsymbol{y}\in\mathcal{Y}^{n}$ with the same
conditional type $V_{\boldsymbol{y}}$ also have the same values for
$W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})$,
$W^{n}(\boldsymbol{y}|\boldsymbol{x}_{2})$,
$q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})$ and
$q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})$, so they all give the same
contribution to the probabilities of error in (64) and (65). Hence, we can
group them together and reformulate the probability of error as a function of
conditional types instead of sequences:
$\displaystyle P_{e,1}^{(n)}$
$\displaystyle=\sum_{V\notin\mathcal{V}_{1}^{n}}W^{n}(V|\boldsymbol{x}_{1})$
(71) $\displaystyle P_{e,2}^{(n)}$
$\displaystyle=\sum_{V\notin\mathcal{V}_{2}^{n}}W^{n}(V|\boldsymbol{x}_{2})\,,$
(72)
where
$\displaystyle\mathcal{V}_{1}^{n}$
$\displaystyle=\\{V\in\mathcal{V}^{n}(\boldsymbol{x}_{1},\boldsymbol{x}_{2}):q^{n}(\boldsymbol{x}_{1},V)\geq
q^{n}(\boldsymbol{x}_{2},V)\\}$ (73)
$\displaystyle=\Big{\\{}V\in\mathcal{V}^{n}(\boldsymbol{x}_{1},\boldsymbol{x}_{2}):$
$\displaystyle\hskip
20.00003pt\sum_{a,b}P_{1,2}(a,b)\sum_{y}V(y|a,b)\log\frac{q(a,y)}{q(b,y)}\geq
0\Big{\\}}\,,$ (74)
$\mathcal{V}_{2}^{n}=\Big{\\{}V\in\mathcal{V}^{n}(\boldsymbol{x}_{1},\boldsymbol{x}_{2}):\sum_{a,b}P_{1,2}(a,b)\sum_{y}V(y|a,b)\log\frac{q(a,y)}{q(b,y)}\leq
0\Big{\\}}\,,$ (75)
and $\mathcal{V}^{n}(\boldsymbol{x}_{1},\boldsymbol{x}_{2})$ is the set of all
conditional types given $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$.
Furthermore, if we define the two conditional distributions
$\displaystyle W_{1}(y|a,b)$ $\displaystyle=W(y|a)\quad\text{for all}\quad
b\in\mathcal{X}$ (76) $\displaystyle W_{2}(y|a,b)$
$\displaystyle=W(y|b)\quad\text{for all}\quad a\in\mathcal{X}$ (77)
then from classical results of the method of types (see [11]) we can derive
the following lemma.
###### Lemma 2.
For any conditional type
$V\in\mathcal{V}^{n}(\boldsymbol{x}_{1},\boldsymbol{x}_{2})$ we have
$W^{n}(V|\boldsymbol{x}_{m})\geq\frac{1}{(n+1)^{|\mathcal{X}|^{2}|\mathcal{Y}|}}e^{-nD(V\|W_{m}|P_{1,2})}$
(78)
for any $m\in\mathcal{M}=\\{1,2\\}$.
Hence, we can lower bound the probabilities of error in (71) and (72) as
$P_{e,m}^{(n)}\geq\sum_{V\notin\mathcal{V}^{n}_{m}}e^{-n[D(V\|W_{m}|P_{1,2})+\delta_{1}(n)]}\,,$
(79)
where
$\delta_{1}(n)=|\mathcal{X}|^{2}|\mathcal{Y}|\frac{\log(n+1)}{n}\,.$ (80)
Also, using (10), it can be verified by substitution that
$\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)-s\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=nD(V_{s}\|W_{1}|P_{1,2})$
(81)
for the conditional distribution
$V_{s}(y|a,b)=\frac{W(y|a)\big{(}\frac{q(b,y)}{q(a,y)}\big{)}^{s}}{\sum_{\bar{y}\in\hat{\mathcal{Y}}}W(\bar{y}|a)\big{(}\frac{q(b,\bar{y})}{q(a,\bar{y})}\big{)}^{s}}\,.$
(82)
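As a sanity check of (81), one can verify its single-letter analogue numerically: the derivative of $\mu_{a,b}(s)$ equals the expectation of $\log\frac{q(a,y)}{q(b,y)}$ under the tilted distribution $V_{s}$ of (82). The channel and metric values below are hypothetical, chosen only for this check.

```python
import math

# hypothetical two-output channel and a deliberately mismatched metric
W = {'a': [0.8, 0.2], 'b': [0.3, 0.7]}
q = {'a': [0.9, 0.1], 'b': [0.4, 0.6]}

def mu(s, a='a', b='b'):
    # single-letter analogue of (10):
    # mu_{a,b}(s) = -log sum_y W(y|a) (q(b,y)/q(a,y))^s
    return -math.log(sum(W[a][y] * (q[b][y] / q[a][y]) ** s
                         for y in (0, 1)))

def V_s(s, a='a', b='b'):
    # tilted conditional distribution of (82)
    z = sum(W[a][y] * (q[b][y] / q[a][y]) ** s for y in (0, 1))
    return [W[a][y] * (q[b][y] / q[a][y]) ** s / z for y in (0, 1)]

s = 0.5
deriv = (mu(s + 1e-6) - mu(s - 1e-6)) / 2e-6    # numerical mu'(s)
expect = sum(V_s(s)[y] * math.log(q['a'][y] / q['b'][y]) for y in (0, 1))
assert abs(deriv - expect) < 1e-5               # the identity holds
```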
We also need the following lemma about approximating a probability
distribution with a type.
###### Lemma 3 (Shannon [13]).
For any distribution $Q\in\mathcal{P}(\mathcal{Y})$, for any $n\in\mathbb{N}$,
there exists a type $\hat{Q}\in\mathcal{T}^{n}(\mathcal{Y})$ such that
$\lvert\hat{Q}(y)-Q(y)\rvert\leq\frac{1}{n}\quad\text{for all}\quad
y\in\mathcal{Y}$ (83)
and $\hat{Q}(y)=0$ if $Q(y)=0$.
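Lemma 3 is constructive. One standard way to build such a type, sketched below with largest-remainder rounding (our choice of construction; Shannon's argument is not reproduced here), floors each mass to a multiple of $1/n$ and then redistributes the leftover units.

```python
from fractions import Fraction

def approximate_type(Q, n):
    # floor each positive mass to a multiple of 1/n; zeros stay zero
    floors = {y: int(n * p) for y, p in Q.items() if p > 0}
    deficit = n - sum(floors.values())
    # hand the remaining 'deficit' units of 1/n to the symbols with the
    # largest fractional parts; each entry moves by at most 1/n
    order = sorted(floors, key=lambda y: n * Q[y] - floors[y], reverse=True)
    for y in order[:deficit]:
        floors[y] += 1
    return {y: Fraction(floors.get(y, 0), n) for y in Q}

Q = {'a': 0.5, 'b': 0.3, 'c': 0.2, 'd': 0.0}
Qh = approximate_type(Q, 7)
assert sum(Qh.values()) == 1 and Qh['d'] == 0       # a type, zeros kept
assert max(abs(float(Qh[y]) - Q[y]) for y in Q) <= 1 / 7 + 1e-9
```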
We can now prove the following theorem on the probability of error.
###### Theorem 2.
For $n$ large enough, the probabilities of error $P_{e,1}^{(n)}$ and
$P_{e,2}^{(n)}$ are lower-bounded by
$P_{e,1}^{(n)}\geq
e^{-\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)+s\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)-\delta(n)}$
(84)
for every $s$ such that
$\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)<0$, and
$P_{e,2}^{(n)}\geq
e^{-\mu_{\boldsymbol{x}_{2},\boldsymbol{x}_{1}}(s)+s\mu^{\prime}_{\boldsymbol{x}_{2},\boldsymbol{x}_{1}}(s)-\delta(n)}$
(85)
for every $s$ such that
$\mu^{\prime}_{\boldsymbol{x}_{2},\boldsymbol{x}_{1}}(s)<0$, where
$\delta(n)=\lvert\mathcal{X}\rvert^{2}\lvert\mathcal{Y}\rvert\bigg{(}1+2\log(n+1)+\log\frac{1}{W_{\min}}\bigg{)}$
(86)
and $W_{\min}=\min_{x,y}W(y|x)$, where the minimum is over all
$x\in\mathcal{X}$ and $y\in\mathcal{Y}$ such that $W(y|x)>0$.
###### Proof.
We prove the bound for $P_{e,1}^{(n)}$; the bound for $P_{e,2}^{(n)}$ follows
similarly. Notice that we can rewrite
$\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=\sum_{a,b}P_{1,2}(a,b)\sum_{y}V_{s}(y|a,b)\log\frac{q(a,y)}{q(b,y)}$
(87)
for $V_{s}$ as defined in (82). Hence, for every $s$ such that
$\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)<0$ we have
$\sum_{a,b}P_{1,2}(a,b)\sum_{y}V_{s}(y|a,b)\log\frac{q(a,y)}{q(b,y)}<0\,.$
(88)
Intuitively, for $n$ large enough we can approximate $V_{s}$ with a
conditional type $\hat{V}_{s}$ that still satisfies (88). In fact, thanks to
Lemma 3 we have
$\displaystyle\bigg{\lvert}\sum_{a,b}$ $\displaystyle
P_{1,2}(a,b)\sum_{y}[\hat{V}_{s}(y|a,b)-V_{s}(y|a,b)]\log\frac{q(a,y)}{q(b,y)}\bigg{\rvert}$
$\displaystyle\leq\sum_{a,b}P_{1,2}(a,b)\sum_{y}\lvert\hat{V}_{s}(y|a,b)-V_{s}(y|a,b)\rvert\bigg{\lvert}\\!\log\frac{q(a,y)}{q(b,y)}\bigg{\rvert}$
(89)
$\displaystyle\leq\sum_{a,b}P_{1,2}(a,b)\sum_{y}\frac{1}{nP_{1,2}(a,b)}\bigg{\lvert}\\!\log\frac{q(a,y)}{q(b,y)}\bigg{\rvert}$
(90)
$\displaystyle\leq\frac{1}{n}\sum_{a,b,y}\bigg{\lvert}\\!\log\frac{q(a,y)}{q(b,y)}\bigg{\rvert}\,,$
(91)
which goes to $0$ as $n\to\infty$. Hence, for $n$ large enough, $\hat{V}_{s}$
still satisfies the strict inequality (88), so it does not belong to
$\mathcal{V}_{1}^{n}$, and we can lower bound (79) with
$\displaystyle P_{e,1}^{(n)}$
$\displaystyle\geq\sum_{V\notin\mathcal{V}^{n}_{1}}e^{-n[D(V\|W_{1}|P_{1,2})+\delta_{1}(n)]}$
(92) $\displaystyle\geq
e^{-n[D(\hat{V}_{s}\|W_{1}|P_{1,2})+\delta_{1}(n)]}\,.$ (93)
Again, since $\hat{V}_{s}$ and $V_{s}$ are close to each other, also
$D(\hat{V}_{s}\|W_{1}|P_{1,2})$ and $D(V_{s}\|W_{1}|P_{1,2})$ are close to
each other. As a matter of fact, we can write
$\displaystyle\lvert D($
$\displaystyle\hat{V}_{s}\|W_{1}|P_{1,2})-D(V_{s}\|W_{1}|P_{1,2})\rvert$
$\displaystyle\leq\sum_{a,b}P_{1,2}(a,b)\sum_{y}\bigg{\lvert}\hat{V}_{s}(y|a,b)\log\frac{\hat{V}_{s}(y|a,b)}{W(y|a)}$
$\displaystyle\hskip 90.00014pt-
V_{s}(y|a,b)\log\frac{V_{s}(y|a,b)}{W(y|a)}\bigg{\rvert}\,.$ (94)
Now for each $a$, $b$ and $y$ there are two possibilities: if $V_{s}(y|a,b)>0$
and $\hat{V}_{s}(y|a,b)=0$, then thanks to Lemma 3 we have
$\displaystyle\bigg{\lvert}\hat{V}_{s}(y|a,b)\log\frac{\hat{V}_{s}(y|a,b)}{W(y|a)}$
$\displaystyle-V_{s}(y|a,b)\log\frac{V_{s}(y|a,b)}{W(y|a)}\bigg{\rvert}$
$\displaystyle=V_{s}(y|a,b)\bigg{\lvert}\log\frac{V_{s}(y|a,b)}{W(y|a)}\bigg{\rvert}$
(95)
$\displaystyle\leq\frac{1}{nP_{1,2}(a,b)}\bigg{\lvert}\log\frac{V_{s}(y|a,b)}{W(y|a)}\bigg{\rvert}\,,$
(96)
where the term in absolute value is finite and independent of $n$. If instead
both $\hat{V}_{s}(y|a,b)$ and $V_{s}(y|a,b)$ are positive, then we can apply
Lagrange’s mean value theorem to the function
$g(x)=x\log\frac{x}{W(y|a)}\,,$ (97)
whose derivative is
$g^{\prime}(x)=\log\frac{x}{W(y|a)}+1\,,$ (98)
to get
$\displaystyle\bigg{\lvert}\hat{V}_{s}($ $\displaystyle
y|a,b)\log\frac{\hat{V}_{s}(y|a,b)}{W(y|a)}-V_{s}(y|a,b)\log\frac{V_{s}(y|a,b)}{W(y|a)}\bigg{\rvert}$
$\displaystyle\leq\lvert\hat{V}_{s}(y|a,b)-V_{s}(y|a,b)\rvert\bigg{\lvert}\log\frac{\bar{V}(y|a,b)}{W(y|a)}+1\bigg{\rvert}$
(99) $\displaystyle\leq\frac{1}{nP_{1,2}(a,b)}\big{(}1+\lvert\log
W(y|a)\rvert+\lvert\log\bar{V}(y|a,b)\rvert\big{)}$ (100)
for some $\bar{V}(y|a,b)\in\big{(}V_{s}(y|a,b),\hat{V}_{s}(y|a,b)\big{)}$ (the
interval endpoints might be inverted).
Notice that both $\hat{V}_{s}(y|a,b)$ and $\bar{V}(y|a,b)$ depend implicitly
on $n$. In order to make this dependence explicit, we study two cases. If
$V_{s}(y|a,b)<\hat{V}_{s}(y|a,b)$, then
$V_{s}(y|a,b)<\bar{V}(y|a,b)$ and
$\displaystyle\bigg{\lvert}\hat{V}_{s}$
$\displaystyle(y|a,b)\log\frac{\hat{V}_{s}(y|a,b)}{W(y|a)}-V_{s}(y|a,b)\log\frac{V_{s}(y|a,b)}{W(y|a)}\bigg{\rvert}$
$\displaystyle\leq\frac{1}{nP_{1,2}(a,b)}\big{(}1+\lvert\log
W(y|a)\rvert+\lvert\log V_{s}(y|a,b)\rvert\big{)}\,.$ (101)
If instead $\hat{V}_{s}(y|a,b)<V_{s}(y|a,b)$, then
$\frac{1}{n}\leq\hat{V}_{s}(y|a,b)<\bar{V}(y|a,b)$ and
$\displaystyle\bigg{\lvert}\hat{V}_{s}(y|a,b)$
$\displaystyle\log\frac{\hat{V}_{s}(y|a,b)}{W(y|a)}-V_{s}(y|a,b)\log\frac{V_{s}(y|a,b)}{W(y|a)}\bigg{\rvert}$
$\displaystyle\leq\frac{1}{nP_{1,2}(a,b)}\big{(}1+\log n+\lvert\log
W(y|a)\rvert\big{)}\,.$ (102)
Putting it all together, for $n$ large enough we have
$\displaystyle\lvert D(\hat{V}_{s}\|$ $\displaystyle
W_{1}|P_{1,2})-D(V_{s}\|W_{1}|P_{1,2})\rvert$
$\displaystyle\leq\frac{1}{n}\sum_{a,b,y}\big{(}1+\log n+\lvert\log
W(y|a)\rvert\big{)}$ (103)
$\displaystyle\leq\frac{\lvert\mathcal{X}\rvert^{2}\lvert\mathcal{Y}\rvert}{n}\bigg{(}1+\log
n+\log\frac{1}{W_{\min}}\bigg{)}\triangleq\delta_{2}(n)$ (104)
that again goes to $0$ as $n\to\infty$. Hence, equations (81), (93) and (104)
lead to
$\displaystyle P_{e,1}^{(n)}$ $\displaystyle\geq
e^{-n[D(\hat{V}_{s}\|W_{1}|P_{1,2})+\delta_{1}(n)]}$ (105) $\displaystyle\geq
e^{-nD(V_{s}\|W_{1}|P_{1,2})-n\delta_{1}(n)-n\delta_{2}(n)}$ (106)
$\displaystyle\geq
e^{-\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)+s\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)-\delta(n)}\,,$
(107)
which concludes the proof. ∎
Notice that Theorem 2 holds for arbitrary tie-breaking rules. The following
corollary, instead, holds only under the assumption that ties are broken
equiprobably (or in the case where ties are always counted as errors).
###### Corollary 2.
If ties are broken equiprobably, then:
$\displaystyle P_{e,1}^{(n)}$ $\displaystyle\geq\exp\Big{\\{}-\sup_{s\geq
0}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)-\delta(n)\Big{\\}}$ (108)
$\displaystyle P_{e,2}^{(n)}$ $\displaystyle\geq\exp\Big{\\{}-\sup_{s\geq
0}\mu_{\boldsymbol{x}_{2},\boldsymbol{x}_{1}}(s)-\delta(n)\Big{\\}}\,.$ (109)
###### Proof.
We prove again only the bound for $P_{e,1}^{(n)}$. There are three
possibilities:
1. 1.
$\lim_{s\to\infty}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=+\infty$;
2. 2.
$\lim_{s\to\infty}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)\in(-\infty,+\infty)$;
3. 3.
$\lim_{s\to\infty}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=-\infty$.
In the first case, we have $\sup_{s\geq
0}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=+\infty$ and the bound simply
becomes $P_{e,1}^{(n)}\geq 0$, which is trivial.
In the second case, due to the concavity of
$\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$, we have
$\sup_{s\geq
0}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)=\lim_{s\to\infty}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)\,.$
(110)
Since the limit is a finite real number, from the definition of
$\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ in (10) we can deduce that
for all sequences $\boldsymbol{y}\in\hat{\mathcal{Y}}^{n}$ such that
$W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})>0$, that is, for all sequences
$\boldsymbol{y}$ that can possibly contribute to $P_{e,1}^{(n)}$, we must have
$\frac{q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})}{q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})}\leq
1$, or equivalently, $q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})\leq
q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})$. Since all sequences such that
$q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})<q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})$
do not contribute to $P_{e,1}^{(n)}$, this means that all sequences that
appear in the sum (64) are those that satisfy
$q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})=q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})$.
Hence, in this case we can write
$P_{e,1}^{(n)}=\sum_{\boldsymbol{y}\in\hat{\mathcal{Y}}^{n}_{\rm
t}}W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})\,,$ (111)
where
$\hat{\mathcal{Y}}^{n}_{\rm
t}=\\{\boldsymbol{y}\in\hat{\mathcal{Y}}^{n}:q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})=q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})\\}\,.$
(112)
But in this particular case we also have
$\displaystyle\lim_{s\to\infty}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$
$\displaystyle=\lim_{s\to\infty}-\log\sum_{\boldsymbol{y}\in\hat{\mathcal{Y}}^{n}}W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})\bigg{(}\frac{q^{n}(\boldsymbol{x}_{2},\boldsymbol{y})}{q^{n}(\boldsymbol{x}_{1},\boldsymbol{y})}\bigg{)}^{s}$
(113) $\displaystyle=-\log\sum_{\boldsymbol{y}\in\hat{\mathcal{Y}}^{n}_{\rm
t}}W^{n}(\boldsymbol{y}|\boldsymbol{x}_{1})=-\log P_{e,1}^{(n)}\,,$ (114)
or equivalently,
$P_{e,1}^{(n)}=\exp\Big{\\{}-\lim_{s\to\infty}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)\Big{\\}}=\exp\Big{\\{}-\sup_{s\geq
0}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)\Big{\\}}\,.$ (115)
In the third case, let
$\hat{s}=\operatorname*{arg\,max}_{s\in\mathbb{R}}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$.
If $\hat{s}\geq 0$, then thanks to the continuity of
$\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ and its derivative, we can
apply Theorem 2 for $s\to\hat{s}$, so that
$\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)\to\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(\hat{s})=\sup_{s\geq
0}\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)$ and
$\mu^{\prime}_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s)\to 0$. If instead
$\hat{s}<0$, we can apply Theorem 2 with $s=0$. ∎
Corollary 2 implies that
$P_{e}^{(n)}\geq\exp\big{\\{}-nD_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}^{(n)}+o(n)\big{\\}}\,,$
(116)
where
$D_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}^{(n)}=\min\bigg{\\{}\sup_{s\geq
0}\frac{1}{n}\,\mu_{\boldsymbol{x}_{1},\boldsymbol{x}_{2}}(s),\sup_{s\geq
0}\frac{1}{n}\,\mu_{\boldsymbol{x}_{2},\boldsymbol{x}_{1}}(s)\bigg{\\}}\,.$
(117)
Finally, notice that if we consider a code with more than two codewords, say
$M$, then there is at least one message $m$ such that
$P_{e,m}^{(n)}\geq\exp\big{\\{}-nD_{\min}(\mathcal{C})+o(n)\big{\\}}\,,$ (118)
where
$D_{\min}(\mathcal{C})\triangleq\min_{m\neq
m^{\prime}\in\mathcal{C}}D_{\boldsymbol{x}_{m},\boldsymbol{x}_{m^{\prime}}}^{(n)}\,,$
(119)
and therefore, for the whole code, the average probability of error is lower-
bounded by
$P_{e}^{(n)}\geq\frac{P_{e,m}^{(n)}}{M}\geq\exp\big{\\{}-n\big{(}D_{\min}(\mathcal{C})+R+o(1)\big{)}\big{\\}}\,.$
(120)
## IV Upper bound on the reliability function
Equation (120) shows that the problem of upper-bounding the exponent of the
probability of error reduces to upper-bounding $D_{\min}(\mathcal{C})$.
Specifically, for any rate $R>0$, the number of codewords $M$ of every code of
rate $R$ goes to infinity when the blocklength $n$ goes to infinity; hence, if
we can find an upper bound on $D_{\min}(\mathcal{C})$ which is valid for all
codes whose size $M$ tends to infinity, then thanks to (120), that bound will
also be a valid bound on
$E(0^{+})=\lim_{R\to 0}\limsup_{n\to\infty}-\frac{\log P_{e}(R,n)}{n}\,.$
(121)
Before going into the formal details, we give a brief outline of the proof of
our bound and the intuition behind it. First of all, the minimum distance
$D_{\min}(\mathcal{C})$ of any code $\mathcal{C}$ can be upper-bounded by the
minimum distance of any subcode extracted from $\mathcal{C}$. Furthermore, the
minimum distance $D_{\min}(\mathcal{C})$ is upper-bounded by the average
distance $D_{\boldsymbol{x}_{m},\boldsymbol{x}_{m^{\prime}}}^{(n)}$ over all
pairs of codewords in $\mathcal{C}$. Therefore, one natural way to upper bound
the minimum distance of a code is to upper bound the average distance over a
carefully selected subcode, that is,
$\displaystyle D_{\min}(\mathcal{C})$ $\displaystyle\leq
D_{\min}(\mathcal{\hat{C}})\leq\frac{1}{\hat{M}(\hat{M}-1)}\sum_{m\neq
m^{\prime}\in\hat{\mathcal{C}}}\\!\\!D_{\boldsymbol{x}_{m},\boldsymbol{x}_{m^{\prime}}}^{(n)}$
(122) $\displaystyle=\frac{1}{\hat{M}(\hat{M}-1)}$ $\displaystyle\sum_{m\neq
m^{\prime}\in\hat{\mathcal{C}}}\min\bigg{\\{}\sup_{s\geq
0}\frac{1}{n}\,\mu_{\boldsymbol{x}_{m},\boldsymbol{x}_{m^{\prime}}}(s),\sup_{s\geq
0}\frac{1}{n}\,\mu_{\boldsymbol{x}_{m^{\prime}},\boldsymbol{x}_{m}}(s)\bigg{\\}}$
(123)
for any $\hat{\mathcal{C}}\subset\mathcal{C}$ with
$|\hat{\mathcal{C}}|=\hat{M}$.
The choice of the subcode $\hat{\mathcal{C}}$ is crucial, since, in general,
the average in (123) may be too difficult to evaluate, for two reasons: for
two generic codewords $\boldsymbol{x}_{m}$ and $\boldsymbol{x}_{m^{\prime}}$,
the functions $\mu_{\boldsymbol{x}_{m},\boldsymbol{x}_{m^{\prime}}}(s)$ and
$\mu_{\boldsymbol{x}_{m^{\prime}},\boldsymbol{x}_{m}}(s)$ can be very
different from each other, and also, different pairs of codewords have in
general very different values of $s$ at which the functions $\mu(s)$ attain
their supremum. Luckily, we are able to overcome both these difficulties
thanks to the following result, which is essentially due to Komlós [14] and
was first employed in a coding setting by Blinovsky [15] for the maximum-
likelihood case. We first state the following fundamental lemma. In order to
lighten the notation, from now on, in all subscripts, a pair of codewords
$\boldsymbol{x}_{m},\boldsymbol{x}_{m^{\prime}}$ will be denoted just by
$m,m^{\prime}$.
###### Lemma 4 (Komlós [14]).
Consider a code $\mathcal{C}$ with $M$ codewords. If for each pair
$(a,b)\in\mathcal{X}^{2}$ there exists a number $r_{a,b}$ such that for all
$m<m^{\prime}$,
$\big{\lvert}\,P_{m,m^{\prime}}(a,b)-r_{a,b}\,\big{\rvert}\leq\delta\,,$ (124)
then, for all $m\neq m^{\prime}$ and $(a,b)\in\mathcal{X}^{2}$,
$\big{\lvert}\,P_{m,m^{\prime}}(a,b)-P_{m,m^{\prime}}(b,a)\,\big{\rvert}\leq\frac{6}{\sqrt{M}}+4\sqrt{\delta}+4\delta\,.$
(125)
Then, using Ramsey’s theorem on the edge coloring of graphs (see for example
[18]), the following result can be proved.
###### Theorem 3.
For any positive integers $t$ and $\hat{M}$, there exists a positive integer
$M_{0}(\hat{M},t)$ such that from any code $\mathcal{C}$ with
$M>M_{0}(\hat{M},t)$ codewords, a subcode
$\hat{\mathcal{C}}\subset\mathcal{C}$ with $\hat{M}$ codewords can be
extracted such that for any $m\neq m^{\prime}$ and
$\bar{m}\neq\bar{m}^{\prime}$ (not necessarily different from $m$ and
$m^{\prime}$) in $\hat{\mathcal{C}}$, and any $(a,b)\in\mathcal{X}^{2}$,
$\big{\lvert}\,P_{m,m^{\prime}}(a,b)-P_{\bar{m},\bar{m}^{\prime}}(a,b)\,\big{\rvert}\leq\Delta(\hat{M},t)\,,$
(126)
where
$\Delta(\hat{M},t)\triangleq\frac{6}{\sqrt{\hat{M}}}+2\sqrt{\frac{2}{t}}+\frac{3}{t}\,.$
(127)
A proof of the two previous results in the more general list-decoding setting
can be found in [19]. Komlós’ result shows that for any positive integer
$\hat{M}$, any code with an appropriately large number of codewords contains a
subcode of size $\hat{M}$, whose codewords satisfy certain symmetry
properties, namely:
1. (i)
all pairs of codewords have approximately the same joint type;
2. (ii)
the joint types are also approximately symmetrical, that is, $P(a,b)\simeq
P(b,a)$ for all $a,b$.
Thanks to property (16), the fact that all pairs of codewords have similar
joint types implies that they also have similar $\mu(s)$, while the fact that
these types are close to symmetrical implies that
$\mu_{\boldsymbol{x}_{m},\boldsymbol{x}_{m^{\prime}}}(s)$ and
$\mu_{\boldsymbol{x}_{m^{\prime}},\boldsymbol{x}_{m}}(s)$ are close to each
other. However, technical problems arise due to the presence of the suprema in
(123), since even if the joint types are close to each other, the suprema of
the functions $\mu(s)$ might be very different if they are approached as
$s\to\infty$. This restricts our study to a (still very wide) class of
channels and decoding metrics for which we can guarantee that at least one supremum
in the definition of each
$D_{\boldsymbol{x}_{m},\boldsymbol{x}_{m^{\prime}}}^{(n)}$ is attained at an
$s$ no larger than a known fixed value. The class is the following.
###### Definition 1.
A discrete memoryless channel $W(y|x)$ and a decoding metric $q(x,y)$ form a
_balanced pair_ if $\bar{C}_{0}^{q}=0$ and for every pair
$(a,b)\in\mathcal{X}^{2}$ belonging to the set
$\mathcal{B}\triangleq\left\\{(a,b)\in\mathcal{X}^{2}:\min_{y:W(y|a)>0}\frac{q(a,y)}{q(b,y)}=\max_{y:W(y|b)>0}\frac{q(a,y)}{q(b,y)}\right\\}$
(128)
there exists a constant $B(a,b)$ such that
$\frac{q(a,y)}{q(b,y)}=B(a,b)$ (129)
for all $y\in\hat{\mathcal{Y}}_{a,b}$ such that $W(y|a)+W(y|b)>0$.
Notice that all channels and decoding metrics such that $\bar{C}_{0}^{q}=0$
and
$W(y|x)>0\iff q(x,y)>0$ (130)
are balanced pairs, and indeed represent a very important special case. To see
this, consider a channel-metric pair satisfying (130); for any
$(a,b)\in\mathcal{B}$, we can partition the set of possible output symbols in
$\hat{\mathcal{Y}}_{a,b}$ into three subsets:
$\displaystyle\mathcal{S}_{a}$
$\displaystyle=\\{y:W(y|a)>0\quad\text{and}\quad W(y|b)=0\\}$ (131)
$\displaystyle\mathcal{S}_{b}$
$\displaystyle=\\{y:W(y|a)=0\quad\text{and}\quad W(y|b)>0\\}$ (132)
$\displaystyle\mathcal{S}_{ab}$
$\displaystyle=\\{y:W(y|a)>0\quad\text{and}\quad W(y|b)>0\\}\,.$ (133)
For all $y\in\mathcal{S}_{a}$, $q(a,y)>0$ and $q(b,y)=0$, therefore
$q(a,y)/q(b,y)=+\infty$. Similarly, $q(a,y)/q(b,y)=0$ for all
$y\in\mathcal{S}_{b}$. Hence, we have that
$\displaystyle\min_{y:W(y|a)>0}\frac{q(a,y)}{q(b,y)}$
$\displaystyle=\min_{y:W(y|a)W(y|b)>0}\frac{q(a,y)}{q(b,y)}$ (134)
$\displaystyle\max_{y:W(y|b)>0}\frac{q(a,y)}{q(b,y)}$
$\displaystyle=\max_{y:W(y|a)W(y|b)>0}\frac{q(a,y)}{q(b,y)}\,,$ (135)
and since $(a,b)\in\mathcal{B}$, these two quantities must be equal, that is,
$\min_{y:W(y|a)W(y|b)>0}\frac{q(a,y)}{q(b,y)}=\max_{y:W(y|a)W(y|b)>0}\frac{q(a,y)}{q(b,y)}\,,$
(136)
which means that the ratio $q(a,y)/q(b,y)$ must be constant over all
$y\in\hat{\mathcal{Y}}_{a,b}$. This proves that the channel-metric pair is
indeed balanced. Furthermore, for this particular subclass,
$C_{0}=0\iff C_{0}^{q}=0\iff\bar{C}_{0}^{q}=0\,,$ (137)
where $C_{0}$ is the classical zero-error capacity. An example of an
unbalanced channel-decoding metric pair is the following.
###### Example 1.
Consider the three-input typewriter channel with
$\mathcal{X}=\mathcal{Y}=\\{0,1,2\\}$ and crossover probabilities
$W(1|0)=W(2|1)=W(0|2)=\varepsilon$, with $0<\varepsilon<2-\sqrt{2}$.
Furthermore, consider a decoding metric such that $q(a,y)=W(y|a)$ for all $a$
and $y$ with the exception of $q(1,0)=q(1,2)=\frac{\varepsilon}{2}$ (Fig. 1).
Figure 1: Unbalanced channel-decoding metric pair of Example 1.
As one can check, this channel-decoding metric satisfies the condition for
$\bar{C}_{0}^{q}=0$ in Theorem 1, since for all $a,b\in\mathcal{X}$,
$\min_{y:W(y|a)>0}\frac{q(a,y)}{q(b,y)}=\max_{y:W(y|b)>0}\frac{q(a,y)}{q(b,y)}\,.$
(138)
Notice that for this channel also the classical zero-error capacity $C_{0}$ is
zero. However, this channel-decoding metric pair does not satisfy the second
condition in Definition 1 for a balanced pair, since
$\frac{q(0,0)}{q(1,0)}=\frac{2(1-\varepsilon)}{\varepsilon}\neq\frac{q(0,1)}{q(1,1)}=\frac{\varepsilon}{1-\varepsilon}\,.$
(139)
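These computations are easy to verify numerically. The sketch below (with the arbitrary admissible value $\varepsilon=0.3$) rebuilds the channel and metric of Example 1 and checks that the pair $(0,1)$ satisfies the min-max equality of (128) while the ratio $q(0,y)/q(1,y)$ is not constant, exactly as (139) shows.

```python
import math

eps = 0.3                                   # any 0 < eps < 2 - sqrt(2)
X = [0, 1, 2]
W = {(a, y): 0.0 for a in X for y in X}
for a in X:
    W[(a, a)] = 1 - eps
W[(0, 1)] = W[(1, 2)] = W[(2, 0)] = eps     # typewriter crossovers
q = dict(W)
q[(1, 0)] = q[(1, 2)] = eps / 2             # the two mismatched entries

def r(a, b, y):
    # metric ratio q(a,y)/q(b,y), with the convention x/0 = +inf for x > 0
    return math.inf if q[(b, y)] == 0 else q[(a, y)] / q[(b, y)]

# the pair (0,1) meets the min-max equality defining the set B in (128)
lo = min(r(0, 1, y) for y in X if W[(0, y)] > 0)
hi = max(r(0, 1, y) for y in X if W[(1, y)] > 0)
assert math.isclose(lo, hi)                 # both equal eps/(1-eps)

# ...yet the ratio is not constant, so the pair is not balanced, cf. (139)
assert not math.isclose(r(0, 1, 0), r(0, 1, 1))
```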
One last lemma that will be useful in bounding the average in (123) is the
following, which is a standard trick employed, for example, in the derivation
of the Plotkin bound.
###### Lemma 5.
For any code with $\hat{M}$ codewords of blocklength $n$, for any
$a,b\in\mathcal{X}$, with $a\neq b$,
$\sum_{m\neq
m^{\prime}}P_{m,m^{\prime}}(a,b)=\frac{1}{n}\sum_{c=1}^{n}\hat{M}_{c}(a)\hat{M}_{c}(b)\,,$
(140)
where $\hat{M}_{c}(a)$ is the number of times the symbol $a$ occurs in the
coordinate $c$ in all the codewords.
###### Proof.
Imagine the code as an $\hat{M}\times n$ matrix, with each codeword as a row.
Then, $\sum_{m\neq m^{\prime}}nP_{m,m^{\prime}}(a,b)$ is the number of times
the pair $(a,b)$ can be found by selecting any two entries of the matrix
belonging to the same column. The same computation can be performed column by
column: for a generic column $c$, that number is simply the number of times
$a$ occurs in that column, multiplied by the number of times $b$ occurs.
Finally, summing over all columns returns the same number as the first
computation, thus proving the lemma. ∎
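The double counting argument can be checked directly on a small example. The sketch below (code size, blocklength, and alphabet are arbitrary choices) builds a random code, interprets $P_{m,m^{\prime}}(a,b)$ as the joint type of the codeword pair, and evaluates both sides of (140) exactly.

```python
import random
from fractions import Fraction as F

random.seed(0)
# hypothetical small code: M_hat codewords of blocklength n over {0,1,2}
M_hat, n, X = 6, 10, [0, 1, 2]
code = [[random.choice(X) for _ in range(n)] for _ in range(M_hat)]

def P(m, mp, a, b):
    # joint type: fraction of coordinates where codeword m carries a and mp carries b
    return F(sum(1 for c in range(n) if code[m][c] == a and code[mp][c] == b), n)

a, b = 0, 1  # any pair with a != b
lhs = sum(P(m, mp, a, b) for m in range(M_hat) for mp in range(M_hat) if m != mp)

Mc = lambda c, s: sum(1 for m in range(M_hat) if code[m][c] == s)
rhs = F(sum(Mc(c, a) * Mc(c, b) for c in range(n)), n)

assert lhs == rhs   # the double counting identity (140)
```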
We are ready to proceed and prove the upper bound on the reliability function
at $R=0^{+}$ for any balanced channel-metric pair. In particular, we will show
that for this class, the $D_{m,m^{\prime}}^{(n)}$ are all close to each other
for all pairs of codewords in the symmetric subcode
$\hat{\mathcal{C}}\subset\mathcal{C}$, whose existence is guaranteed by
Theorem 3. In order to show this, first of all, for any concave function
$f(s)$, let
$\mathcal{S}\triangleq\Big\{0\leq s\leq+\infty:f(s)=\sup_{s\geq
0}f(s)\Big\}$ (141)
(here $f(+\infty)$ means $\lim_{s\to+\infty}f(s)$; if
$\lim_{s\to+\infty}f(s)=+\infty$, then $\mathcal{S}=\{+\infty\}$, since
$f(s)$ is concave)
and define
$\operatorname*{arg\,sup}_{s\geq 0}f(s)\triangleq\inf\mathcal{S}\,.$ (142)
In the following lemma we prove that in the case of balanced pairs, for all
pairs of codewords, the concave functions
$\mu_{m,m^{\prime}}(s)+\mu_{m^{\prime},m}(s)$ achieve their suprema at an $s$
in a bounded interval, determined only by the channel and the decoding metric.
###### Lemma 6.
For any balanced pair, for any pair of codewords $m,m^{\prime}$,
$\operatorname*{arg\,sup}_{s\geq
0}\big{(}\mu_{m,m^{\prime}}(s)+\mu_{m^{\prime},m}(s)\big{)}\in[\,0,\,\hat{s}\,]\,,$
(143)
where
$\hat{s}\triangleq\max_{a,b}\Big\{\operatorname*{arg\,sup}_{s\geq
0}\big{(}\mu_{a,b}(s)+\mu_{b,a}(s)\big{)}\Big\}<+\infty\,.$ (144)
###### Proof.
We first show that $\hat{s}$ is finite. We already pointed out in the proof of
Theorem 1 that equation (35) can be rewritten as
$\mu^{\prime}_{a,b}+\mu^{\prime}_{b,a}\leq 0\,.$ (145)
For the $(a,b)\in\mathcal{X}^{2}$ such that
$\mu^{\prime}_{a,b}+\mu^{\prime}_{b,a}<0$, we have that
$\lim_{s\to+\infty}\mu_{a,b}(s)+\mu_{b,a}(s)=-\infty\,,$ (146)
which in turn implies that there exists a finite $\hat{s}_{a,b}\geq 0$ such
that
$\mu_{a,b}(\hat{s}_{a,b})+\mu_{b,a}(\hat{s}_{a,b})=\max_{s\geq
0}\big{(}\mu_{a,b}(s)+\mu_{b,a}(s)\big{)}$ (147)
since $\mu_{a,b}(s)+\mu_{b,a}(s)$ is concave. For the
$(a,b)\in\mathcal{X}^{2}$ such that $\mu^{\prime}_{a,b}+\mu^{\prime}_{b,a}=0$,
instead, equation (129) implies that
$\displaystyle\mu_{a,b}(s)$
$\displaystyle=-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|a)\bigg{(}\frac{q(b,y)}{q(a,y)}\bigg{)}^{s}$
(148) $\displaystyle=-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|a)B(a,b)^{-s}$
(149) $\displaystyle=s\log
B(a,b)-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|a)\,,$ (150)
which is a straight line. Furthermore, since ${B(b,a)=1/B(a,b)}$, we have that
$\displaystyle\mu_{a,b}$ $\displaystyle(s)+\mu_{b,a}(s)$ $\displaystyle=s\log
B(a,b)B(b,a)-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|a)$
$\displaystyle\hskip 110.00017pt-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|b)$
(151)
$\displaystyle=-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|a)-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|b)\,,$
(152)
which is a constant. Hence,
$\sup_{s\geq
0}\big{(}\mu_{a,b}(s)+\mu_{b,a}(s)\big{)}=\mu_{a,b}(0)+\mu_{b,a}(0)$ (153)
and we can set $\hat{s}_{a,b}=0$ for these $(a,b)$. Thus, the $\hat{s}$
defined by (144) can be rewritten as
$\hat{s}=\max_{a,b}\,\hat{s}_{a,b}\,,$ (154)
which is finite, since all $\hat{s}_{a,b}$ are finite.
Equation (143) follows from the fact that
$\mu^{\prime}_{m,m^{\prime}}(\hat{s})+\mu^{\prime}_{m^{\prime},m}(\hat{s})=n\sum_{a}\sum_{b}P_{m,m^{\prime}}(a,b)\big{(}\mu^{\prime}_{a,b}(\hat{s})+\mu^{\prime}_{b,a}(\hat{s})\big{)}\leq
0\,,$ (155)
where the equality is due to (16), while the inequality is due to the fact
that for all $(a,b)$,
$\mu^{\prime}_{a,b}(\hat{s})+\mu^{\prime}_{b,a}(\hat{s})\leq\mu^{\prime}_{a,b}(\hat{s}_{a,b})+\mu^{\prime}_{b,a}(\hat{s}_{a,b})\leq
0\,,$ (156)
since $\hat{s}_{a,b}\leq\hat{s}$ and $\mu_{a,b}(s)+\mu_{b,a}(s)$ is concave. ∎
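For a concrete illustration of Lemma 6, $\hat{s}$ in (144) can be found by a grid search over $s$ for a simple matched pair. The sketch below uses a binary symmetric channel with the maximum likelihood metric and crossover $\varepsilon=0.1$ (all arbitrary illustrative choices); by the $s\leftrightarrow 1-s$ symmetry of $\mu_{a,b}(s)+\mu_{b,a}(s)$ in this case, the maximizer is $s=1/2$.

```python
import math

eps = 0.1  # arbitrary BSC crossover probability for illustration
W = [[1 - eps, eps], [eps, 1 - eps]]
q = W  # matched (maximum likelihood) metric

def mu(a, b, s):
    # mu_{a,b}(s) = -log sum_y W(y|a) (q(b,y)/q(a,y))^s over the positive terms
    return -math.log(sum(W[a][y] * (q[b][y] / q[a][y]) ** s
                         for y in range(2) if W[a][y] > 0 and q[b][y] > 0))

def arg_sup(f, grid):
    # smallest grid point achieving the maximum, mirroring definition (142)
    best = max(f(s) for s in grid)
    return min(s for s in grid if f(s) == best)

grid = [i / 1000 for i in range(3001)]  # s in [0, 3]
s_hat = max(arg_sup(lambda s: mu(a, b, s) + mu(b, a, s), grid)
            for a in range(2) for b in range(2))
```

For $a=b$ the sum $\mu_{a,a}(s)+\mu_{a,a}(s)$ is identically zero, so those pairs contribute $\hat{s}_{a,a}=0$; the off-diagonal pairs give $\hat{s}=1/2$.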
The previous lemma and the symmetry properties of the codewords in
$\hat{\mathcal{C}}$ lead to the following fundamental result, which shows that
the $D_{m,m^{\prime}}^{(n)}$ are close to each other for all pairs of
codewords in $\hat{\mathcal{C}}$. This fact is what will make the computation
of the average in (123) possible.
###### Lemma 7.
For any balanced pair, for any pair of codewords
$m,m^{\prime}\in\hat{\mathcal{C}}$, let
$\bar{s}_{m,m^{\prime}}\triangleq\min\Big\{\operatorname*{arg\,sup}_{s\geq
0}\mu_{m,m^{\prime}}(s),\operatorname*{arg\,sup}_{s\geq
0}\mu_{m^{\prime},m}(s)\Big\}\,.$ (157)
Then, $0\leq\bar{s}_{m,m^{\prime}}\leq\hat{s}$, with $\hat{s}$ defined by
(144), and
$D_{m,m^{\prime}}^{(n)}\leq\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})+K\Delta(\hat{M},t)$
(158)
with $\Delta(\hat{M},t)$ as defined by (127), and
$K\triangleq\max_{0\leq
s\leq\hat{s}}\sum_{a}\sum_{b}\big{\lvert}\mu_{a,b}(s)\big{\rvert}\,.$ (159)
Furthermore, for any other pair of codewords
$\bar{m},\bar{m}^{\prime}\in\hat{\mathcal{C}}$,
$\bigg{\lvert}\frac{1}{n}\,\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})-\frac{1}{n}\,\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{m,m^{\prime}})\bigg{\rvert}\leq
4K\Delta(\hat{M},t)\,.$ (160)
###### Proof.
To prove that $\bar{s}_{m,m^{\prime}}\leq\hat{s}$, notice that from equation
(143) we get
$\mu^{\prime}_{m,m^{\prime}}(\hat{s})+\mu^{\prime}_{m^{\prime},m}(\hat{s})\leq
0\,,$ (161)
which is possible only if
$\mu^{\prime}_{m,m^{\prime}}(\hat{s})\leq
0\quad\text{or}\quad\mu^{\prime}_{m^{\prime},m}(\hat{s})\leq 0\,,$ (162)
which in turn implies that
$\operatorname*{arg\,sup}_{s\geq
0}\mu_{m,m^{\prime}}(s)\leq\hat{s}\quad\text{or}\quad\operatorname*{arg\,sup}_{s\geq
0}\mu_{m^{\prime},m}(s)\leq\hat{s}\,.$ (163)
This proves that
$\bar{s}_{m,m^{\prime}}\triangleq\min\Big\{\operatorname*{arg\,sup}_{s\geq
0}\mu_{m,m^{\prime}}(s),\operatorname*{arg\,sup}_{s\geq
0}\mu_{m^{\prime},m}(s)\Big\}\leq\hat{s}\,.$ (164)
Next, definition (157) implies that
$\sup_{s\geq
0}\mu_{m,m^{\prime}}(s)=\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})$ (165)
or
$\sup_{s\geq
0}\mu_{m^{\prime},m}(s)=\mu_{m^{\prime},m}(\bar{s}_{m,m^{\prime}}).$ (166)
Hence, we have that, thanks to (117),
$D_{m,m^{\prime}}^{(n)}\leq\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})\quad\text{or}\quad
D_{m,m^{\prime}}^{(n)}\leq\frac{1}{n}\,\mu_{m^{\prime},m}(\bar{s}_{m,m^{\prime}})\,.$
(167)
In the first case, equation (158) follows immediately; in the second case, we
have that
$\displaystyle\bigg{\lvert}\frac{1}{n}\,$
$\displaystyle\mu_{m^{\prime},m}(\bar{s}_{m,m^{\prime}})-\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})\bigg{\rvert}$
$\displaystyle=\Big{\lvert}\sum_{a}\sum_{b}\big{(}P_{m,m^{\prime}}(a,b)-P_{m,m^{\prime}}(b,a)\big{)}\mu_{a,b}(\bar{s}_{m,m^{\prime}})\Big{\rvert}$
(168)
$\displaystyle\leq\sum_{a}\sum_{b}\big{\lvert}P_{m,m^{\prime}}(a,b)-P_{m^{\prime},m}(a,b)\big{\rvert}\big{\lvert}\mu_{a,b}(\bar{s}_{m,m^{\prime}})\big{\rvert}$
(169)
$\displaystyle\leq\Delta(\hat{M},t)\sum_{a}\sum_{b}\big{\lvert}\mu_{a,b}(\bar{s}_{m,m^{\prime}})\big{\rvert}$
(170) $\displaystyle\leq\Delta(\hat{M},t)\max_{0\leq
s\leq\hat{s}}\sum_{a}\sum_{b}\big{\lvert}\mu_{a,b}(s)\big{\rvert}$ (171)
$\displaystyle=K\Delta(\hat{M},t)$ (172)
and therefore,
$D_{m,m^{\prime}}^{(n)}\leq\frac{1}{n}\,\mu_{m^{\prime},m}(\bar{s}_{m,m^{\prime}})\leq\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})+K\Delta(\hat{M},t)\,.$
(173)
Finally, in order to prove (160), first notice that
$\displaystyle\bigg{\lvert}\frac{1}{n}\,$
$\displaystyle\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})-\frac{1}{n}\,\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{m,m^{\prime}})\bigg{\rvert}$
$\displaystyle=\frac{1}{n}\big{\lvert}\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})-\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})$
$\displaystyle\hskip
60.00009pt+\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})-\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{m,m^{\prime}})\big{\rvert}$
(174)
$\displaystyle\leq\frac{1}{n}\big{\lvert}\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})-\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})\big{\rvert}$
$\displaystyle\hskip
45.00006pt+\frac{1}{n}\big{\lvert}\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})-\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{m,m^{\prime}})\big{\rvert}\,.$
(175)
The second absolute value can be bounded as follows:
$\displaystyle\frac{1}{n}\big{\lvert}$
$\displaystyle\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})-\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{m,m^{\prime}})\big{\rvert}$
$\displaystyle=\Big{\lvert}\sum_{a}\sum_{b}\big{(}P_{m,m^{\prime}}(a,b)-P_{\bar{m},\bar{m}^{\prime}}(a,b)\big{)}\mu_{a,b}(\bar{s}_{m,m^{\prime}})\Big{\rvert}$
(176)
$\displaystyle\leq\sum_{a}\sum_{b}\big{\lvert}P_{m,m^{\prime}}(a,b)-P_{\bar{m},\bar{m}^{\prime}}(a,b)\big{\rvert}\big{\lvert}\mu_{a,b}(\bar{s}_{m,m^{\prime}})\big{\rvert}$
(177)
$\displaystyle\leq\Delta(\hat{M},t)\sum_{a}\sum_{b}\big{\lvert}\mu_{a,b}(\bar{s}_{m,m^{\prime}})\big{\rvert}$
(178) $\displaystyle\leq K\Delta(\hat{M},t)\,,$ (179)
which also holds for every $0\leq s\leq\hat{s}$. The first absolute value,
instead, can be bounded in the following way. Suppose that
$\frac{1}{n}\,\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})\geq\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})\,;$
(180)
the other case can be proved in the same way. Then, we can write that
$\frac{1}{n}\big{\lvert}\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})-\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})\big{\rvert}\\\
=\frac{1}{n}\,\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})-\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})\,.$
(181)
Furthermore, thanks to (165) and (166), we have two alternatives. If
$\sup_{s\geq
0}\mu_{m,m^{\prime}}(s)=\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})\,,$
then
$\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})\geq\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})$
(182)
and we can bound (181) by
$\displaystyle\frac{1}{n}\,\mu_{\bar{m},\bar{m}^{\prime}}$
$\displaystyle(\bar{s}_{\bar{m},\bar{m}^{\prime}})-\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})$
$\displaystyle\leq\frac{1}{n}\,\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})-\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})$
(183) $\displaystyle\leq K\Delta(\hat{M},t)$ (184)
as in (179). If instead
$\sup_{s\geq
0}\mu_{m^{\prime},m}(s)=\mu_{m^{\prime},m}(\bar{s}_{m,m^{\prime}})\,,$
then
$\displaystyle\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})$
$\displaystyle\geq\frac{1}{n}\,\mu_{m^{\prime},m}(\bar{s}_{m,m^{\prime}})-K\Delta(\hat{M},t)$
(185)
$\displaystyle\geq\frac{1}{n}\,\mu_{m^{\prime},m}(\bar{s}_{\bar{m},\bar{m}^{\prime}})-K\Delta(\hat{M},t)$
(186)
$\displaystyle\geq\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})-2K\Delta(\hat{M},t)$
(187)
using (172) twice. Hence, we can bound (181) by
$\displaystyle\frac{1}{n}\,$
$\displaystyle\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})-\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})$
$\displaystyle\leq\frac{1}{n}\,\mu_{\bar{m},\bar{m}^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})-\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{\bar{m},\bar{m}^{\prime}})+2K\Delta(\hat{M},t)$
(188) $\displaystyle\leq 3K\Delta(\hat{M},t)\,,$ (189)
again as in (179). Putting this and (179) into (175) leads to (160). ∎
Finally, thanks to this lemma, we can prove our upper bound on the reliability
function at $R=0^{+}$, which coincides with the lower bound (I).
###### Theorem 4.
For any balanced pair,
$\displaystyle E^{q}(0^{+})=$
$\displaystyle\max_{Q\in\mathcal{P}(\mathcal{X})}\sup_{s\geq
0}-\!\!\sum_{a\in\mathcal{X}}\sum_{b\in\mathcal{X}}Q(a)Q(b)\log\!\!\sum_{y\in\hat{\mathcal{Y}}_{a,b}}\!\!W(y|a)\biggl(\frac{q(b,y)}{q(a,y)}\biggr)^{\!\!s}.$
(190)
###### Proof.
We already pointed out that for any subcode of $\mathcal{C}$, and in
particular for the subcode $\hat{\mathcal{C}}$ of Theorem 3, we have
$D_{\min}(\mathcal{C})\leq
D_{\min}(\hat{\mathcal{C}})\leq\frac{1}{\hat{M}(\hat{M}-1)}\sum_{m\neq
m^{\prime}}D_{m,m^{\prime}}^{(n)}$ (191)
with $m,m^{\prime}\in\hat{\mathcal{C}}$. Then, we can bound the average as
follows, similarly to what Shannon, Gallager and Berlekamp did in the maximum
likelihood setting for the particular class of pairwise reversible channels
[20]. Fix any pair of codewords
$\hat{m}\neq\hat{m}^{\prime}\in\hat{\mathcal{C}}$. Then,
$\displaystyle D_{\min}$ $\displaystyle(\hat{\mathcal{C}})$
$\displaystyle\leq\frac{1}{\hat{M}(\hat{M}-1)}\sum_{m\neq
m^{\prime}}D_{m,m^{\prime}}^{(n)}$ (192)
$\displaystyle\leq\frac{1}{\hat{M}(\hat{M}-1)}\sum_{m\neq
m^{\prime}}\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{m,m^{\prime}})+K\Delta(\hat{M},t)$
(193) $\displaystyle\leq\frac{1}{\hat{M}(\hat{M}-1)}\sum_{m\neq
m^{\prime}}\frac{1}{n}\,\mu_{m,m^{\prime}}(\bar{s}_{\hat{m},\hat{m}^{\prime}})+5K\Delta(\hat{M},t)$
(194) $\displaystyle=\frac{1}{\hat{M}(\hat{M}-1)}\sum_{a}\sum_{b}\sum_{m\neq
m^{\prime}}P_{m,m^{\prime}}(a,b)\mu_{a,b}(\bar{s}_{\hat{m},\hat{m}^{\prime}})$
$\displaystyle\hskip 135.0002pt+5K\Delta(\hat{M},t)$ (195)
$\displaystyle=\frac{1}{n}\frac{1}{\hat{M}(\hat{M}-1)}\sum_{c=1}^{n}\sum_{a}\sum_{b}\hat{M}_{c}(a)\hat{M}_{c}(b)\mu_{a,b}(\bar{s}_{\hat{m},\hat{m}^{\prime}})$
$\displaystyle\hskip 135.0002pt+5K\Delta(\hat{M},t)$ (196)
$\displaystyle=\frac{1}{n}\frac{\hat{M}}{\hat{M}-1}\sum_{c=1}^{n}\sum_{a}\sum_{b}\frac{\hat{M}_{c}(a)}{\hat{M}}\frac{\hat{M}_{c}(b)}{\hat{M}}\mu_{a,b}(\bar{s}_{\hat{m},\hat{m}^{\prime}})$
$\displaystyle\hskip 135.0002pt+5K\Delta(\hat{M},t)$ (197)
$\displaystyle\leq\frac{\hat{M}}{\hat{M}-1}\max_{Q\in\mathcal{P}(\mathcal{X})}\sum_{a}\sum_{b}Q(a)Q(b)\mu_{a,b}(\bar{s}_{\hat{m},\hat{m}^{\prime}})$
$\displaystyle\hskip 150.00023pt+5K\Delta(\hat{M},t)$ (198)
$\displaystyle\leq\frac{\hat{M}}{\hat{M}-1}\sup_{s\geq
0}\max_{Q\in\mathcal{P}(\mathcal{X})}\sum_{a}\sum_{b}Q(a)Q(b)\mu_{a,b}(s)$
$\displaystyle\hskip 150.00023pt+5K\Delta(\hat{M},t)\,,$ (199)
where (193) is due to (158), (194) is due to (160), (196) is due to Lemma 5,
and (198) is due to the fact that for every $c$,
$\bigg\{\frac{\hat{M}_{c}(a)}{\hat{M}},\quad a\in\mathcal{X}\bigg\}$
is a probability distribution over $\mathcal{X}$. As we already underlined,
these steps are possible because all pairs of codewords in
$\hat{\mathcal{C}}$ have joint types that are both symmetric and close to
each other; combined with the fact that for balanced pairs we can restrict
attention to $s$ in a known bounded interval, this implies that all the
$D_{m,m^{\prime}}^{(n)}$ that appear in the average (191) are close to
each other. Then, letting $M\to\infty$ (so that we may also let
$\hat{M}\to\infty$, by Theorem 3) and $t\to\infty$ we obtain, thanks to the
fact that $\Delta(\hat{M},t)\to 0$ in (199),
$D_{\min}(\mathcal{C})\leq\sup_{s\geq
0}\max_{Q\in\mathcal{P}(\mathcal{X})}\sum_{a}\sum_{b}Q(a)Q(b)\mu_{a,b}(s)\,,$
(200)
which is independent of the code $\mathcal{C}$. Finally, thanks to equation
(120), since we let $R\to 0$ _after_ $n\to\infty$, we obtain an upper bound on
the reliability function at $R=0^{+}$:
$\displaystyle E^{q}(0^{+})$ $\displaystyle\leq$
$\displaystyle\max_{Q\in\mathcal{P}(\mathcal{X})}\sup_{s\geq
0}-\sum_{a}\sum_{b}Q(a)Q(b)\log\!\!\sum_{y\in\hat{\mathcal{Y}}_{a,b}}\!\!W(y|a)\biggl(\frac{q(b,y)}{q(a,y)}\biggr)^{\!\!s},$
(201)
which equals the expurgated lower bound given by (I), proving the theorem. ∎
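To make the expression concrete, (190) can be evaluated numerically for a simple matched pair. The sketch below uses a binary symmetric channel with the maximum likelihood metric and crossover $\varepsilon=0.1$ (arbitrary illustrative choices); by symmetry the uniform $Q$ is optimal, so the maximization over $Q$ is omitted, and a grid search over $s$ recovers the closed form $-\tfrac{1}{2}\log\big(2\sqrt{\varepsilon(1-\varepsilon)}\big)$, attained at $s=1/2$.

```python
import math

eps = 0.1  # arbitrary BSC crossover probability
W = [[1 - eps, eps], [eps, 1 - eps]]
q = W  # matched metric; by symmetry the optimal Q is uniform

def objective(Q, s):
    # the inner expression of (190) for a given input distribution Q and s >= 0
    total = 0.0
    for a in range(2):
        for b in range(2):
            inner = sum(W[a][y] * (q[b][y] / q[a][y]) ** s
                        for y in range(2) if W[a][y] > 0 and q[b][y] > 0)
            total += Q[a] * Q[b] * math.log(inner)
    return -total

Q = [0.5, 0.5]
E = max(objective(Q, i / 1000) for i in range(3001))  # sup over s on a grid

# closed form for the matched BSC: the sup is at s = 1/2
closed = -0.5 * math.log(2 * math.sqrt(eps * (1 - eps)))
assert abs(E - closed) < 1e-6
```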
If the channel and decoding metric are not a balanced pair, our method fails
in that for some pair $(a,b)\in\mathcal{X}^{2}$ belonging to the set
$\mathcal{B}$ defined in (128), the function $\mu_{a,b}(s)+\mu_{b,a}(s)$ is
concave and has a horizontal asymptote at $s\to+\infty$, but it is not a
straight line; because of this, a finite $\hat{s}$ as in Lemma 6 cannot be
determined. A partial solution to this problem is to upper-bound these
functions by their horizontal asymptote. This strategy leads to a similar
upper bound as the one for balanced pairs; however, in this case, the bound is
larger than the expurgated bound at $R=0^{+}$. To obtain this bound, define
for any pair $(a,b)\in\mathcal{B}$,
$A(a,b)\triangleq\min_{y:W(y|a)>0}\frac{q(a,y)}{q(b,y)}=\max_{y:W(y|b)>0}\frac{q(a,y)}{q(b,y)}\,,$
(202)
and let
$\hat{\mathcal{Y}}_{a,b}^{A}\triangleq\bigg\{y\in\hat{\mathcal{Y}}_{a,b}:\frac{q(a,y)}{q(b,y)}=A(a,b)\bigg\}\,.$
(203)
Then, we can upper-bound $\mu_{a,b}(s)$ and $\mu_{b,a}(s)$ as follows.
$\displaystyle\mu_{a,b}(s)$
$\displaystyle\triangleq-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|a)\bigg{(}\frac{q(b,y)}{q(a,y)}\bigg{)}^{s}$
(204)
$\displaystyle\leq-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}^{A}}W(y|a)A(a,b)^{-s}$
(205) $\displaystyle=s\log
A(a,b)-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}^{A}}W(y|a)$ (206)
and in the same way,
$\mu_{b,a}(s)\leq-s\log
A(a,b)-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}^{A}}W(y|b).$ (207)
Now, if we define
$\displaystyle\hat{\mu}_{a,b}(s)\triangleq s\log
A(a,b)-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}^{A}}W(y|a)$ (208)
$\displaystyle\hat{\mu}_{b,a}(s)\triangleq-s\log
A(a,b)-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}^{A}}W(y|b)$ (209)
we have $\mu_{a,b}(s)\leq\hat{\mu}_{a,b}(s)$ and
$\mu_{b,a}(s)\leq\hat{\mu}_{b,a}(s)$, and
$\hat{\mu}_{a,b}(s)+\hat{\mu}_{b,a}(s)=-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}^{A}}W(y|a)-\log\sum_{y\in\hat{\mathcal{Y}}_{a,b}^{A}}W(y|b)$
(210)
which is constant. Finally, if we set
$\hat{\mu}_{a,b}(s)\triangleq\mu_{a,b}(s)$ for all pairs
$(a,b)\in\mathcal{B}^{c}$, one can readily check that Lemma 6, Lemma 7 and
Theorem 4 (the upper bound part) still hold for any discrete memoryless
channel and decoding metric if the $\mu_{a,b}(s)$ are replaced with
$\hat{\mu}_{a,b}(s)$. Hence, for a generic pair of channel and decoding
metric, the following theorem can be proved.
###### Theorem 5.
For any discrete memoryless channel and decoding metric with
${\bar{C}_{0}^{q}=0}$,
$E^{q}(0^{+})\leq\max_{Q\in\mathcal{P}(\mathcal{X})}\sup_{s\geq
0}\sum_{a}\sum_{b}Q(a)Q(b)\hat{\mu}_{a,b}(s)\triangleq
E^{q}_{\mathrm{up}}(0^{+}).$ (211)
In such a case, the maximum distance between the expurgated lower bound and
our upper bound on $E^{q}(0^{+})$ can be estimated as follows:
$\displaystyle\big{\lvert}E^{q}_{\text{up}}$
$\displaystyle(0^{+})-E^{q}_{\text{ex}}(0^{+})\big{\rvert}$ (212)
$\displaystyle\leq\frac{1}{2}\max_{Q\in\mathcal{P}(\mathcal{X})}\sup_{s\geq
0}\sum_{a}\sum_{b}Q(a)Q(b)$ $\displaystyle\hskip
35.00005pt\big{\lvert}\hat{\mu}_{a,b}(s)+\hat{\mu}_{b,a}(s)-\mu_{a,b}(s)-\mu_{b,a}(s)\big{\rvert}$
(213) $\displaystyle=\frac{1}{2}\max_{Q\in\mathcal{P}(\mathcal{X})}\sup_{s\geq
0}\sum_{(a,b)\in\mathcal{B}}Q(a)Q(b)$ $\displaystyle\hskip
35.00005pt\big{\lvert}\hat{\mu}_{a,b}(s)+\hat{\mu}_{b,a}(s)-\mu_{a,b}(s)-\mu_{b,a}(s)\big{\rvert}$
(214)
$\displaystyle=\frac{1}{2}\max_{Q\in\mathcal{P}(\mathcal{X})}\sum_{(a,b)\in\mathcal{B}}Q(a)Q(b)$
$\displaystyle\hskip
35.00005pt\big{\lvert}\hat{\mu}_{a,b}(0)+\hat{\mu}_{b,a}(0)-\mu_{a,b}(0)-\mu_{b,a}(0)\big{\rvert}$
(215)
$\displaystyle\leq\frac{1}{2}\max_{(a,b)\in\mathcal{B}}\big{\lvert}\hat{\mu}_{a,b}(0)+\hat{\mu}_{b,a}(0)-\mu_{a,b}(0)-\mu_{b,a}(0)\big{\rvert}$
(216)
$\displaystyle=\frac{1}{2}\max_{(a,b)\in\mathcal{B}}\Bigg{(}\log\frac{\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|a)}{\sum_{y\in\hat{\mathcal{Y}}_{a,b}^{A}}W(y|a)}$
$\displaystyle\hskip
95.00014pt+\log\frac{\sum_{y\in\hat{\mathcal{Y}}_{a,b}}W(y|b)}{\sum_{y\in\hat{\mathcal{Y}}_{a,b}^{A}}W(y|b)}\Bigg{)}.$
(217)
Notice that for balanced pairs, definitions (129) and (203) show that the sets
$\hat{\mathcal{Y}}_{a,b}$ and $\hat{\mathcal{Y}}_{a,b}^{A}$ are equal, and
therefore the quantity in (217) is zero, as expected.
###### Example 2.
Consider the non-balanced channel-decoding metric pair of Example 1. For the
pair of inputs $(0,1)$ one has
$\hat{\mathcal{Y}}_{a,b}=\{0,1\}\neq\hat{\mathcal{Y}}_{a,b}^{A}=\{1\}$.
Therefore, the upper bound on the gap
$\big{\lvert}E^{q}_{\text{up}}(0^{+})-E^{q}_{\text{ex}}(0^{+})\big{\rvert}$ in
equation (217) for this channel-decoding pair is equal to
$\big{\lvert}E^{q}_{\text{up}}(0^{+})-E^{q}_{\text{ex}}(0^{+})\big{\rvert}\leq\frac{1}{2}\bigg{(}\log\frac{1}{\varepsilon}+\log\frac{1-\varepsilon}{1-\varepsilon}\bigg{)}=\frac{1}{2}\log\frac{1}{\varepsilon}.$
(218)
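The computation in Example 2 can be verified numerically. The sketch below rebuilds the channel and metric of Example 1 (with the arbitrary choice $\varepsilon=0.3$, and assuming $\hat{\mathcal{Y}}_{a,b}$ collects the outputs where both metrics are positive) and evaluates the gap term of (217) for the pair $(0,1)$, recovering $\tfrac{1}{2}\log\tfrac{1}{\varepsilon}$ as in (218).

```python
import math

eps = 0.3  # arbitrary choice in (0, 2 - sqrt(2))
# W[a][y]: typewriter channel of Example 1; q is W with q(1,0) = q(1,2) = eps/2
W = [[1 - eps, eps, 0.0], [0.0, 1 - eps, eps], [eps, 0.0, 1 - eps]]
q = [row[:] for row in W]
q[1][0] = q[1][2] = eps / 2

a, b = 0, 1
Y_ab = [y for y in range(3) if q[a][y] > 0 and q[b][y] > 0]    # assumed def of Y_hat_{a,b}
A = max(q[a][y] / q[b][y] for y in range(3) if W[b][y] > 0)    # definition (202)
Y_A = [y for y in Y_ab if math.isclose(q[a][y] / q[b][y], A)]  # definition (203)

# the (a,b) term inside the max of (217)
gap = 0.5 * (math.log(sum(W[a][y] for y in Y_ab) / sum(W[a][y] for y in Y_A))
             + math.log(sum(W[b][y] for y in Y_ab) / sum(W[b][y] for y in Y_A)))

assert Y_ab == [0, 1] and Y_A == [1]
assert math.isclose(gap, 0.5 * math.log(1 / eps))   # matches (218)
```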
###### Remark.
Converse bounds for codes at rate $R=0$ can often be used to also deduce
bounds at $R>0$ through appropriate code coverings. Our bound, like most
bounds on zero-rate codes, is based on the Plotkin double counting trick of
Lemma 5, which is used in (196). In the same way as the Plotkin bound on the
minimum Hamming distance of codes at $R=0$ can be extended to $R>0$ to deduce
the Singleton bound and the Elias bound, our result can also be applied to
derive bounds at $R>0$. For maximum likelihood decoding, the idea was
initially presented by Blahut, although with a technical error in the proof
which can however be corrected by means of the Ramsey-theoretic procedure also
used here (see [19]). A similar extension can be derived for mismatched
decoding. We do not expand on this point here since it would in large part
repeat the discussion in [19] while at the same time requiring a significant
technical digression on constant composition codes and the maximization over
$s$, which would take us far from the main focus of this paper.
## References
* [1] I. Csiszár and P. Narayan, “Channel capacity for a given decoding metric,” IEEE Trans. Inf. Theory, vol. 41, no. 1, pp. 35–43, 1995.
* [2] N. Merhav, G. Kaplan, A. Lapidoth, and S. Shamai, “On information rates for mismatched decoders,” IEEE Trans. Inf. Theory, vol. 40, no. 6, pp. 1953–1967, Nov. 1994.
* [3] J. Scarlett, A. Guillén i Fàbregas, A. Somekh-Baruch and A. Martinez, “Information-theoretic foundations of mismatched decoding,” Found. Trends Commun. Inf. Theory, vol. 17, no. 2–3, pp. 149–401, 2020.
* [4] T. R. M. Fischer, “Some remarks on the role of inaccuracy in Shannon’s theory of information transmission,” Trans. 8th Prague Conf. Inf. Theory, pp. 211–226, 1978.
* [5] G. Kaplan and S. Shamai, “Information rates and error exponents of compound channels with application to antipodal signaling in a fading environment,” Arch. Elek. Uber., vol. 47, no. 4, pp. 228–239, 1993.
* [6] A. Somekh-Baruch, J. Scarlett, and A. Guillén i Fàbregas, “Generalized random Gilbert-Varshamov codes,” IEEE Trans. Inf. Theory, vol. 65, no. 6, pp. 3452–3469, 2019.
* [7] E. Asadi Kangarshahi and A. Guillén i Fàbregas, “A single-letter upper bound to the mismatch capacity,” IEEE Trans. Inf. Theory, vol. 67, no. 4, pp. 2013–2033, 2021.
* [8] A. Somekh-Baruch, “A single-letter upper bound on the mismatch capacity via a multicasting approach,” Proc. 2020 IEEE Inf. Theory Workshop, 2021. Available online: https://arxiv.org/pdf/2007.14322.pdf
* [9] E. Asadi Kangarshahi and A. Guillén i Fàbregas, “A sphere-packing exponent for mismatched decoding,” Proc. 2021 IEEE Int. Symp. Inf. Theory, to appear.
* [10] J. Scarlett, L. Peng, N. Merhav, A. Martinez and A. Guillén i Fàbregas, “Expurgated random-coding ensembles: exponents, refinements, and connections,” IEEE Trans. Inf. Theory, vol. 60, no. 8, pp. 4449–4462, 2014.
* [11] I. Csiszár and J. Körner, Information theory: coding theorems for discrete memoryless systems. Cambridge University Press, 2011.
* [12] T. M. Cover and J. A. Thomas, Elements of information theory. John Wiley & Sons, 2012.
* [13] C. E. Shannon, “The zero error capacity of a noisy channel,” IEEE Trans. Inf. Theory, vol. 2, no. 3, pp. 8–19, 1956.
* [14] J. Komlós, “A strange pigeon-hole principle,” Order, vol. 7, no. 2, pp. 107–113, 1990.
* [15] V. M. Blinovsky, “New approach to estimation of the decoding error probability,” Prob. Inf. Trans., vol. 38, no. 1, pp. 16–19, 2002.
* [16] V. M. Blinovsky, “Code bounds for multiple packings over a nonbinary finite alphabet,” Prob. Inf. Trans., vol. 41, no. 1, pp. 23–32, 2005.
* [17] R. Ahlswede and V. Blinovsky, “Multiple packing in sum-type metric spaces,” Discrete Applied Mathematics, vol. 156, no. 9, pp. 1469–1477, 2008.
* [18] R. Diestel, Graph Theory, Springer-Verlag Berlin Heidelberg, 2017.
* [19] M. Bondaschi and M. Dalai, “A revisitation of low-rate bounds on the reliability function of discrete memoryless channels for list decoding,” IEEE Trans. Inf. Theory, submitted.
* [20] C. E. Shannon, R. G. Gallager, and E. R. Berlekamp, “Lower bounds to error probability for coding on discrete memoryless channels. II,” Information and Control, vol. 10, no. 5, pp. 522–552, 1967.
Marco Bondaschi is a PhD student in the Laboratory for Information in
Networked Systems at École Polytechnique Fédérale de Lausanne (EPFL),
Switzerland. He received the Bachelor’s degree in Electronics Engineering and
the Master’s degree in Communication Sciences and Multimedia from the
University of Brescia in 2017 and 2019 respectively. His main research
interests are in information theory and learning theory.
---
Albert Guillén i Fàbregas (S–01, M–05, SM–09, F–22) received the
Telecommunications Engineering Degree and the Electronics Engineering Degree
from Universitat Politècnica de Catalunya and Politecnico di Torino,
respectively in 1999, and the Ph.D. in Communication Systems from École
Polytechnique Fédérale de Lausanne (EPFL) in 2004. In 2020, he returned to a
full-time faculty position at the Department of Engineering, University of
Cambridge, where he had been a full-time faculty and Fellow of Trinity Hall
from 2007 to 2012. Since 2011 he has been an ICREA Research Professor at
Universitat Pompeu Fabra (currently on leave), where he is now an adjunct
researcher. He has held appointments at the New Jersey Institute of
Technology, Telecom Italia, European Space Agency (ESA), Institut Eurécom,
University of South Australia, Universitat Pompeu Fabra, University of
Cambridge, as well as visiting appointments at EPFL, École Nationale des
Télécommunications (Paris), Universitat Pompeu Fabra, University of South
Australia, Centrum Wiskunde & Informatica and Texas A&M University in Qatar.
His specific research interests are in the areas of information theory,
communication theory, coding theory, statistical inference. Dr. Guillén i
Fàbregas is a Member of the Young Academy of Europe, and received the Starting
and Consolidator Grants from the European Research Council, the Young Authors
Award of the 2004 European Signal Processing Conference, the 2004 Best
Doctoral Thesis Award from the Spanish Institution of Telecommunications
Engineers, and a Research Fellowship of the Spanish Government to join ESA.
Since 2013 he has been an Editor of the Foundations and Trends in
Communications and Information Theory, Now Publishers and was an Associate
Editor of the IEEE Transactions on Information Theory (2013–2020) and IEEE
Transactions on Wireless Communications (2007–2011).
---
Marco Dalai (S’05-A’06-M’11-SM’17) received the degree in Electronic
Engineering (cum laude) and the PhD in Information Engineering in 2003 and
2007 respectively from the University of Brescia Italy, where he is now an
associate professor with the Department of Information Engineering. He is a
member of the IEEE Information Theory Society, recipient of the 2014 IEEE
Information Theory Society Paper Award and currently an Associate Editor of
the IEEE Transactions on Information Theory.
---
# AirWare: Utilizing Embedded Audio and Infrared Signals for In-Air Hand-
Gesture Recognition
Nibhrat Lohia, Raunak Mundada, Eric C. Larson
Nibhrat Lohia and Raunak Mundada are Alumni of the Department of Statistical
Science, Southern Methodist University, Dallas, TX, 75206.
E-mail:<EMAIL_ADDRESS>
Eric C. Larson is a Professor in the Department of Computer Science, Southern
Methodist University, Dallas, TX, 75206.
Manuscript received July, 2018.
###### Abstract
We introduce AirWare, an in-air hand-gesture recognition system that uses the
already embedded speaker and microphone in most electronic devices, together
with embedded infrared proximity sensors. Gestures identified by AirWare are
performed in the air above a touchscreen or a mobile phone. AirWare utilizes
convolutional neural networks to classify a large vocabulary of hand gestures
using multi-modal audio Doppler signatures and infrared (IR) sensor
information. As opposed to other systems which use high frequency Doppler
radars or depth cameras to uniquely identify in-air gestures, AirWare does not
require any external sensors. In our analysis, we use openly available APIs to
interface with the Samsung Galaxy S5 audio and proximity sensors for data
collection. We find that AirWare is not reliable enough for a deployable
interaction system when trying to classify a gesture set of 21 gestures, with
an average true positive rate of only 50.5% per gesture. To improve
performance, we train AirWare to identify subsets of the 21-gesture
vocabulary based on possible usage scenarios. We find that AirWare can
identify three gesture sets with average true positive rate greater than 80%
using 4–7 gestures per set, which comprises a vocabulary of 16 unique in-air
gestures.
###### Index Terms:
Convolutional Neural Networks, Deep Learning, Doppler, Gesture Recognition
## 1 Introduction
Communicating through hand gestures is ubiquitous across cultures.
Incorporating hand gestures into machine interaction has proven difficult
because one must reliably detect the gestures and infer their meaning
[lee2010search]. Even so, with the increasing variety of devices that can
interact with humans in more natural ways, in-air hand gesture recognition
systems have grown in popularity. Major technology companies like Google
[Soli], Microsoft [gupta2012soundwave], Amazon, and HP have released devices
that recognize some basic in-air hand gestures. To achieve reliability, these
devices employ specialized sensors or vision systems, like the Microsoft
Kinect, which increase cost and reduce the potential ubiquity of the device.
This is problematic because in-air hand gestures for interacting with a
machine are often desired only in niche scenarios where touch is inappropriate
or difficult. This is especially true for mobile devices (1) when the device
is small and touch is harder to use without occluding screen content (such as
a watch) or (2) in situational impairments, like wearing gloves or cooking,
when hands get dirty and touching a smart-phone or tablet is not desired
[dumas2009multimodal, wobbrock2006future]. There are also scenarios where in-
air gestures may add to the user experience, such as gaming or productivity
applications.
In this study, we investigate methods for detecting and classifying in-air
gestures using commodity sensors on a smartphone. More specifically, we
present AirWare, a system that fuses the information from the on-board
infrared (IR) proximity sensor of a Samsung Galaxy S5 with the Doppler shifts
detected by the microphone. Like previous work [aumi2013doplink,
chen2014airlink, gupta2012soundwave, sun2013spartacus] we play an inaudible
tone from the speakers and record from the microphone continuously. Using
signal processing and machine learning, we fuse the parameters of the IR
proximity sensor with the Doppler features to predict a large vocabulary of
different in-air gestures.
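A minimal sketch of the Doppler front end described above (all numbers are illustrative assumptions, not the paper's parameters): an inaudible pilot tone is emitted, the microphone signal contains a frequency-shifted reflection from the moving hand, and the asymmetry of spectral energy around the pilot reveals the direction of motion.

```python
import math

fs, f0 = 44100, 18000.0               # sample rate and pilot tone (assumed values)
n = fs // 2                           # half-second analysis window
v, c = 0.5, 343.0                     # assumed hand speed (m/s) and speed of sound
shift = 2 * v / c * f0                # two-way Doppler shift of the reflection (~52 Hz)

# synthetic microphone signal: pilot plus a weaker reflection from an approaching hand
mic = [math.sin(2 * math.pi * f0 * k / fs)
       + 0.2 * math.sin(2 * math.pi * (f0 + shift) * k / fs) for k in range(n)]

def power(f):
    # single-bin DFT magnitude at frequency f (a tiny stand-in for an FFT front end)
    re = sum(x * math.cos(2 * math.pi * f * k / fs) for k, x in enumerate(mic))
    im = sum(x * math.sin(2 * math.pi * f * k / fs) for k, x in enumerate(mic))
    return re * re + im * im

# simple feature: spectral energy just above vs. just below the pilot
above = power(f0 + shift)
below = power(f0 - shift)
assert above > below    # motion toward the device shows up above the pilot
```

In a real system this feature would be computed per frame and fed, together with the IR proximity readings, into the classifier.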
Our approach differs from previous work in that we (1) combine complementary
sensors that are already embedded in the mobile phone and (2) attempt to
classify a relatively large vocabulary of different gestures. Most previous
works only cover a few basic interactions (like panning), which does not
provide a rich interaction modality.
To inform the design and evaluate AirWare, we conducted a user study with 13
participants that performed each gesture several times (load balanced in terms
of presentation order). We show that, on average, AirWare can recognize the
full gesture vocabulary with only 50.47% average true positive rate per
gesture per user on 21 gestures. We conclude that the full 21 gesture
vocabulary is not accurate enough to support user interaction. However, using
various reduced vocabularies of 4–7 gestures, AirWare can achieve average true
positive rates greater than 75% for each reduced vocabulary. In total, the
reduced gesture sets comprise 16 unique gestures. We enumerate our
contributions as follows:
1.
We investigate the performance of fusing Doppler gesture sensing techniques
with the embedded IR proximity sensor using various machine learning
algorithms. We also compare the performance of a number of convolutional
neural network architectures.
2.
A human subjects evaluation: we validate the technology in a user study with
13 participants.
3.
We compare two different methods for collecting gesture data. The first
requires the IR sensor to be activated and the second is a free-form system.
We conclude that the free-form system creates variability in the way gestures
are performed such that the machine learning algorithms cannot readily
identify the gestures. Therefore, the AirWare system requires users to be
instructed on how to perform each gesture.
4.
We investigate personalized calibration to boost the recognition true positive
rate of the classifier, comparing it against an out-of-the-box gesture
recognition system, and show that user calibration improves the performance of
AirWare. We also investigate the amount of training data required to calibrate
the AirWare system, concluding that 2-3 examples per gesture are needed to
properly calibrate the system.
5.
We investigate a number of reduced gesture vocabularies that tailor to
different application use cases, showing average true positive rates greater
than 80% among subsets of gestures. Combining the different subsets, we
conclude that AirWare can support about 16 gestures in total. We also conclude
that some gesture combinations cannot be supported, such as simultaneously
identifying pans and flicks.
## 2 Related Work
Dating back to 1980, in-air gesture sensing was achieved using commodity
cameras with a high degree of success [rubine1991automatic]. The RGB image of
a user-facing camera was used to detect and follow hand movements
[hilliges2009interactions, rautaray2015vision]. Even so, privacy concerns and
the requirements of processing video (battery life, lag time) limited the
impact of the technology [hinckley2003synchronous, locken2012user,
song2014air, suarez2012hand]. To mitigate these concerns, researchers have
been innovating in how they sense hand motions. In Samsung’s Galaxy S4 and S5
smartphones, a dedicated infrared proximity sensor is used to detect hand
motions above the phone, sensing velocity, angle, and relative distance of
hand movements. The estimation is coarse, but allows for recognition of a
number of panning gestures. We note that the IR proximity sensor is not unique
to Samsung smartphones, but is used by a number of different manufacturers.
However, these manufacturers typically do not provide access to the sensor
via a developer API. With this in mind, the AirWare methodology could be
applied to these phones in the future, once the sensors become accessible.
gupta2012soundwave used an inaudible tone played on speakers and sensed the
Doppler reflections to determine when a user moved their hands toward or away
from an interface. aumi2013doplink, bannis2014adding, sun2013spartacus, and
chen2014airlink extended this work to detect pointing and flick gestures
toward an array of objects (including smartphones) using the Doppler effect.
These previous works typically recognize 2–4 gestures and many employ more
than one set of speaker and microphone. In contrast, AirWare attempts to
classify 21 gestures and various subsets ranging from 4–7 gestures per set.
AirWare is able to classify such a large vocabulary of different gestures
because the IR and audio Doppler combination provides complementary sensing
information without any external sensors.
There have also been a number of innovative solutions that use infrared
illuminating pendants [starner2000gesture], magnetic sensors [chen2013utrack],
side mounted light sensors [butler2008sidesight], and even the unmodified GSM
antenna radiation [zhao2014sideswipe]. However, AirWare is more ambitious in
the vocabulary size of gestures we attempt to classify, as well as unique in
terms of the fused sensor outputs investigated. Moreover, AirWare does not use
external sensors, but instead employs already embedded sensors from the mobile
phone.
raj2012ultrasonic review the HCI uses of Doppler sensing from ultrasonic range
finders. Although this requires an extra sensor, it uses many of the same
techniques as audio Doppler sensing. By sending a set of “pings” into the
environment, the proximity of the hand (or any object) can be ascertained with
high accuracy. AirWare shares some similarity in sensing techniques as we also
employ a proximity sensor. Even so, AirWare uses the embedded Galaxy S5
proximity sensor, which is considerably less precise than the external
ultrasonic sensor employed by previous works.
butler2008sidesight produce IR sensor boards attached to a mobile device that
succeed at identifying single- and multi-finger gestures adjacent to a device;
however, this work does not address in-air gestures, use built-in hardware,
combine modalities, or address differentiating a large gesture vocabulary.
Figure 1: Progression of the AirWare interface used in our data collection.
kim2016hand use micro-Doppler signatures with a convolutional neural network.
These Doppler signatures are measured by continuous wave Doppler radar at
5.8GHz (rather than the audio Doppler signal employed by AirWare). Their work
encourages us to utilize convolutional neural networks to recognize in-air
gestures. Their system classifies a set of 10 gestures with a five-fold cross
validation accuracy of 85.6%. For a reduced gesture set, the accuracy
increases to 93.1%. Although the system performs well, hand motions of the
gestures are controlled. For example, swiping left to right was a quick snap
that involved the wrist and all five fingers. However, for swiping right to
left, the wrist was no longer stationary and moved with only three fingers
involved. AirWare, however, does not impose such restrictions on the users
motions and, instead, relies on the IR proximity sensor measurement along with
Doppler signatures to classify the gestures. Moreover, AirWare employs
convolutional networks on multiple sensor sources using embedded sensors,
rather than an external, high frequency RF radar system.
Similar to kim2016hand, kim2016human use deep convolutional neural networks with
continuous wave Doppler radars (i.e., using RF signals) for detection of
humans and, to some degree, hand gestures. It is important to note that Kim
uses specialized high frequency radar equipment. However, AirWare employs low-
frequency audio Doppler from commodity hardware. Moreover, we employ infrared
proximity sensors to create additional information that Doppler shifts do not
capture, such as the occurrence of movements transverse to the sensing
apparatus. raj2012ultrasonic used a similar high-frequency device setup to
classify a set of basic gestures. The results from these works were promising
but fall into the same category of adding external sensors for recognition.
Most have used ultra-high-frequency sonars for collecting Doppler signatures,
in some cases with specific spatial arrangements, thus causing greater
frequency shifts that are relatively easier to classify.
## 3 Theory of Operation
In this section, we outline the different properties of each sensing modality:
Doppler and IR proximity. We also posit an argument for why combining these
modalities is inherently complementary. These sections also summarize the ways
in which we access and pre-process each signal.
Figure 2: Average true positive rate for different short time Fourier
transform parameters. A grid search reveals that a window size of 4096, 50%
overlap, and use of $\pm 16$ bins from $f_{0}$ performs the best.
### 3.1 Audio Doppler from Speaker and Microphone
Audio Doppler sensing for gesture recognition is discussed in detail in a
number of papers [gupta2012soundwave, aumi2013doplink]. Our method generally
follows that of other papers. We play an 18 kHz sine wave from the speakers of
the mobile phone, while continuously sampling from the microphone at 48 kHz.
This means that a constant 18 kHz sine wave will be sampled from the
microphone. When an object moves toward or away from a stationary phone, the
microphone can detect Doppler frequency reflections. These manifest as
additional reflections, added to the 18 kHz sine wave. The change in frequency
is given by:
$\Delta f=\frac{f_{0}v}{c}\cos(\theta)$ (1)
where $c$ is the speed of sound, $v$ is the velocity of the object, $\theta$
is the angle between the motion of the object and the microphone, and $f_{0}$
is the frequency of the sine wave played from the speakers. When analyzing the
signal, faster motions toward the microphone result in more pronounced
frequency increases. Movement away from the microphone results in frequency
decreases. The angle between the object movement and the microphone is a key
factor. Movement transverse to the microphone results in no frequency shift.
Movement directly toward or away from the microphone maximizes the possible
frequency change. As such, the frequency reflections from different hand
gestures at different angles will manifest differently in the Doppler signal.
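Equation (1) can be sketched directly in code. The hand speed and angle below are illustrative values, not measurements from the paper:

```python
# Doppler shift of a reflected tone, following Eq. (1).
import math

def doppler_shift(f0_hz: float, v_ms: float, theta_rad: float,
                  c_ms: float = 343.0) -> float:
    """Frequency change for an object moving at v_ms, at angle
    theta_rad relative to the microphone axis."""
    return f0_hz * v_ms / c_ms * math.cos(theta_rad)

# A hand moving 0.5 m/s straight toward the mic shifts an 18 kHz tone by ~26 Hz.
shift = doppler_shift(18_000, 0.5, 0.0)
# Transverse motion (90 degrees) produces essentially no shift.
no_shift = doppler_shift(18_000, 0.5, math.pi / 2)
```

This also makes the angle sensitivity discussed above concrete: the same velocity at different incidence angles yields different shifts.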
More than just frequency, the surface area of the object determines the
magnitude changes of the frequency reflections. This means it is possible to
detect the difference between waving a finger at the microphone versus waving
a hand at the same velocity. To capture these changes in the frequency over
time, like previous approaches, we use the short time Fourier transform
(STFT). Specifically, we use a sampling rate of 48 kHz. We use a Hamming
window to reduce spectral leakage. Other parameters of the STFT need to be
chosen to trade off resolution in time and resolution in frequency (i.e., a
classic signal processing problem). It was unclear what trade-off between time
and frequency to employ in order to capture the motion of the hand and
fingers. For instance, choosing a large window size would increase our
frequency resolution (i.e., our ability to discern Doppler shifts), but would
also reduce our time resolution (i.e., our ability to observe quick
movements). As such, we decided to grid search different STFT parameters. Each
different configuration resulted in slightly different frequency profiles that
could be used as features in our machine learning algorithms. We investigated
18 different configurations based upon the following combinations of
parameters:
* •
window size (and FFT size) of 1024, 2048, and 4096 samples
* •
overlap between windows of 25%, 50%, and 75%
* •
number of frequency bins above and below $f_{0}$ to include as features, 8 or
16
The average true positive rate per gesture per user of the different
configurations is shown in Figure 2. Many configurations result in similar
performance, but the best configuration was found to be: window size of 4096
points, 50% overlap, and 16 bins above and below $f_{0}$. More details about
the machine learning and cross validation techniques are discussed later. We
save the STFT for three seconds of time data (discarding the initial startup
windows). An example of the STFT with the best found configuration can be seen
in Figure 3.
Figure 3: Zoomed spectrogram of three different gestures. Top of plots show
the output of the IR sensor angle and velocity.
To normalize and control dynamic range, we take the decibel magnitude of the
STFT. The implementation of the STFT grid search and feature extraction
techniques have been made open source and are available at [opensourceRaunak].
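The best-performing STFT configuration above can be sketched as follows; the exact band-extraction details (epsilon floor, carrier-bin removal by index) are our simplifying assumptions around the description in the text:

```python
# STFT feature extraction: 4096-sample Hamming windows with 50% overlap
# at 48 kHz, keeping the 16 bins above and below the 18 kHz carrier
# (the carrier bin itself is dropped), in decibel magnitude.
import numpy as np
from scipy.signal import stft

FS, F0, NFFT = 48_000, 18_000, 4096

def doppler_features(audio: np.ndarray) -> np.ndarray:
    f, t, S = stft(audio, fs=FS, window="hamming",
                   nperseg=NFFT, noverlap=NFFT // 2)
    k0 = int(round(F0 / (FS / NFFT)))          # carrier bin (1536 here)
    band = np.abs(S[k0 - 16:k0 + 17, :])       # 16 bins either side + carrier
    band = np.delete(band, 16, axis=0)         # drop the constant carrier bin
    return 20.0 * np.log10(band + 1e-12)       # dB magnitude

# Three seconds of a pure 18 kHz tone leaves the side bins near the floor.
tone = np.sin(2 * np.pi * F0 * np.arange(3 * FS) / FS)
feats = doppler_features(tone)                 # shape: (32, n_frames)
```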
### 3.2 IR Sensing
The infrared proximity sensor for the Samsung Galaxy S5 is a set of four
infrared sensors surrounding an infrared LED. Infrared light reflects back
towards the four sensors when an object, like the hand, is above any of the
sensors. By detecting which sensors are activated first and in which order,
the sensor can infer what angle an object is moving laterally. The time
difference in which each of the sensors is excited determines the approximate
velocity of the object as it enters or exits the sensor area. When coupled
with Doppler sensing, these two sensing modalities have a number of
complementary features.
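A hypothetical illustration of the principle: four photodiodes around a central LED yield a coarse lateral direction and speed from the order and timing of their activations. The sensor geometry and the two-point estimate below are our own simplification, not Samsung's implementation:

```python
# Infer coarse lateral motion from IR sensor activation order and timing.
import math

# Four sensors 5 mm from the LED (east, north, west, south), in metres.
SENSORS = {"E": (0.005, 0.0), "N": (0.0, 0.005),
           "W": (-0.005, 0.0), "S": (0.0, -0.005)}

def estimate_motion(activations):
    """activations: list of (sensor_name, time_s) in activation order.
    Returns (angle_deg, speed_m_s) of the inferred lateral motion."""
    (n1, t1), (n2, t2) = activations[0], activations[-1]
    x1, y1 = SENSORS[n1]
    x2, y2 = SENSORS[n2]
    dx, dy, dt = x2 - x1, y2 - y1, t2 - t1
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    speed = math.hypot(dx, dy) / dt if dt > 0 else 0.0
    return angle, speed

# A hand sweeping west-to-east across the sensors in 20 ms.
angle, speed = estimate_motion([("W", 0.000), ("E", 0.020)])
```

Note how a motion straight down onto the sensor activates all photodiodes nearly simultaneously, which is why such gestures register with zero angle and velocity, as described in Section 4.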
### 3.3 Complementary Sensors
While Doppler sensing can provide rich information about the direction of
motion towards or away from the microphone, about the relative velocity of
movement, and the relative surface area of the object, it is blind to absolute
trajectory. That is, movement right and left can look identical because they
lie in the same plane. Moreover, perpendicular movement to the microphone may
not cause any Doppler shifts and different combinations of velocity and
incidence angle can manifest similarly in the Doppler signal. For a number of
in-air gestures, this directionality and velocity are critical to
understanding the gesture. Fortunately, these lateral movements are exactly
what the IR sensor is designed to detect, which adds a complementary source of
information. However, the use of an IR proximity sensor is not a panacea. The
infrared sensor is often blind to gestures that occur at a distance from the
sensor and it is often unable to distinguish motions that have a “straight-on”
trajectory. In this way, the sensing modalities of infrared and Doppler are
quite complementary as outlined below:
Doppler Sensing:
* •
Sensitive to motion towards and away from the microphone but agnostic to
lateral motions. Angle of motion results in different reflected frequencies.
* •
Sensitive to the surface area of the object. Larger objects result in larger
magnitude reflections.
* •
Sensitive to the overall velocity towards or away from the microphone. Higher
velocity movements result in different reflected frequencies.
* •
Sensitive to motions at both far and near distances from phone.
IR Proximity Sensor:
* •
Sensitive to motion lateral to sensor and, to some degree, motions toward the
sensor
* •
Can discern angle of lateral motions but not when motion is directed towards
sensor
* •
Sensitive to the overall velocity of lateral motions
* •
Only sensitive to motions that occur relatively close to the sensor
Given these properties, it is easier to understand why the combination of
sensors reveals complementary details about in-air hand motions. While the IR
sensor can detect panning motions, it cannot distinguish if the motion comes
from fingers or the palm of the hand. While the Doppler signal shows velocity
of the hand as it passes by the microphone, it cannot discern if the motion is
from left to right or from top to bottom and vice-versa. These types of
differences are imperative to understand a large vocabulary of in-air hand
gestures. We investigate several methods to combine the IR sensor stream and
the STFT through traditional machine learning algorithms and convolutional
neural networks.
## 4 Spectrogram Processing
In this stage, our aim is to process and combine the features from the STFT
and the IR sensors so that they can be analyzed by a machine learning
algorithm. We start by finding the magnitude of the generated 18 kHz tone
across the entire STFT, $M_{0\mathrm{dB}}(t)$, where $t$ denotes the frame
number $(0,1,\ldots,99)$. We then isolate a band of magnitudes around the
frequency in a range of frequency bins above and below $f_{0}$. The number of
bins above and below the $f_{0}$ is a grid-searched parameter in our analysis.
We found that using 16 bins above and below the $f_{0}$ is sufficient for
classification. These ranges are shown in Figure 3. During this feature
extraction, we also eliminate the magnitude of $f_{0}$ from the spectrogram,
as the value is relatively constant in magnitude and therefore has limited
predictive capability as a feature.
After processing the STFT, we process the features extracted from the Samsung
Galaxy S5’s IR sensor. The sensor interface uses a “push” style API where the
application subscribes to notifications when the sensor is activated. The
notification includes the speed (a normalized value between 0 and 100) and
angle of any detected movements. The angle is an average of the entering and
exit angles. However, gestures that do not move laterally across the sensor
typically register as having zero velocity and zero degree angle because the
sensor cannot validly estimate the movement (but detects that an object is
close to the sensor). Each time we are notified of a movement, we log the
event and time-stamp when the event occurred.
### 4.1 Segmentation
When a user performs a gesture, it may or may not activate the IR sensor. We
performed two rounds of data collection. The first did not require that users
activate the IR sensor with the gesture and the second did require that the IR
sensor be activated. The segmentation procedure differs slightly between these
two scenarios. When we required the user to activate the IR sensor,
segmentation was straightforward: we buffer the audio signal 1.25 seconds
before and after the IR activation. When we did not require the IR sensor to
activate, we buffered 1.25 seconds before and after any “event of interest.”
We define this event to be when either the IR sensor is activated or when the
magnitude of frequency bins directly greater than and less than $f_{0}$
increase by 10 dB. Intuitively, this occurs when there is enough motion to
cause reflections of the Doppler audio signal. We also note that, when not
requiring the IR sensor to be activated, we expect an increased number of
false positives because any motion might trigger the segmentation algorithm.
On the positive side, requiring the IR sensor to be activated by the gesture
is an effective means of reducing false positives. On the negative side, it
requires users to adapt their gestures so that they always trigger the sensor
at the top of the phone. This limitation is discussed in more depth in the
next section.
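The free-form segmentation rule can be sketched as follows. The running-median baseline and the zero-padding at segment edges are our simplifying assumptions around the 10 dB rise and 1.25 s buffer described above:

```python
# "Event of interest" segmentation: cut a window 1.25 s before and after
# either an IR activation or a >= 10 dB jump in the spectrogram bins
# directly adjacent to the carrier.
import numpy as np

BUFFER_S = 1.25

def find_event_frame(side_band_db: np.ndarray, rise_db: float = 10.0):
    """side_band_db: (2, n_frames) dB magnitudes of the bins directly
    above and below f0. Returns the first frame whose level rises
    rise_db above the running median of earlier frames, or None."""
    level = side_band_db.mean(axis=0)
    for i in range(1, len(level)):
        if level[i] - np.median(level[:i]) >= rise_db:
            return i
    return None

def segment(audio: np.ndarray, fs: int, event_sample: int) -> np.ndarray:
    """Cut BUFFER_S seconds on each side of the event, zero-padding at
    the edges so every segment has the same length."""
    half = int(BUFFER_S * fs)
    lo, hi = event_sample - half, event_sample + half
    pad_lo, pad_hi = max(0, -lo), max(0, hi - len(audio))
    seg = audio[max(0, lo):min(len(audio), hi)]
    return np.pad(seg, (pad_lo, pad_hi))

quiet = np.full((2, 50), -60.0)
quiet[:, 30:] = -45.0                       # a 15 dB rise at frame 30
frame = find_event_frame(quiet)
seg = segment(np.zeros(48_000 * 3), 48_000, event_sample=10_000)
```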
Once all data is collected for all users, we employ normalization of each of
the IR features (angle and velocity) and of the entire spectrogram magnitudes
such that all features are zero mean and unit standard deviation.
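This normalization step amounts to per-feature z-scoring across the whole collection; guarding constant features against division by zero is our own addition:

```python
# Standardize each feature column to zero mean and unit variance.
import numpy as np

def zscore(features: np.ndarray) -> np.ndarray:
    """features: (n_examples, n_features)."""
    mu = features.mean(axis=0)
    sd = features.std(axis=0)
    # Constant columns stay at zero rather than producing NaN.
    return (features - mu) / np.where(sd > 0, sd, 1.0)

X = np.array([[0.0, 10.0], [2.0, 10.0], [4.0, 10.0]])
Xn = zscore(X)
```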
## 5 Experimental Methodology
In our pilot tests, we asked participants to perform each of the 21 gestures
in the way “that made the most sense to them.” In this way, we sought to
collect more realistic data where participants could be trained simply from a
textual prompt of what the gesture was, without explicit training or
demonstration. Therefore, we thought the gesture would be more intuitive to
the user (since they exhibit their internalization of the gesture, rather than
mimicking a gesture they were shown). However, this data was never
classifiable at a rate more than chance. We abandoned the idea that a large
gesture vocabulary could be collected without explicitly demonstrating the
gestures to participants. Based on our experience in the pilot study, we
decided to update our methodology to include showing videos of the gestures
being performed. Participants were then asked to perform the gesture to
demonstrate their understanding. Therefore, all participants were shown how to
perform the gestures and participants demonstrated their understanding to the
researchers before data collection started. Practically, this also means that
new users of AirWare would also need to go through the same instructional
videos to learn how to perform the gestures in the vocabulary. We see this as
a necessary limitation of the AirWare system: without an instructional phase,
there is too much variability among the gestures performed to detect them
reliably.
Because our pilot study had uncovered that gesture consistency might be
problematic, we conducted a user study in two phases. We chose two phases
because it was unclear how to define what a “proper” gesture consisted of.
Each phase differed in what the data collection application judged to be a
properly performed gesture. In the first phase, we collected gesture data from
the participants for every gesture in our vocabulary regardless of whether the
IR sensor was activated. That is, the user performed a gesture based upon
their memory of how the gesture was performed from the instructional videos.
In the second phase, we only informed the participant that a gesture was
performed successfully when the IR proximity sensor was activated. That is,
they were asked to repeat the gesture until they learned how to perform the
gesture while also activating the sensor at the top of the phone screen. In
this way, users needed to manipulate the way they performed the gesture such
that they understood where the proximity sensor was physically located on the
phone and how to activate it with each gesture.
Different users participated in each phase to protect against crossover
effects. That is, no user participated in both phases of the data collection.
We show later on that requiring the IR sensor to be activated greatly
increases the ability of the machine learning algorithm to correctly identify
the gesture. Practically, this means that the AirWare system will almost
certainly require an “instructional application” that trains users to perform
the gestures, and then verifies that the user understands how to perform the
gesture such that the IR sensor is activated. While this is an additional
limitation of the system because it imposes constraints on the gestures, such
an instructional application would likely be required no matter what, as
learning to perform 21 gestures for any user without some instruction can be
considered a daunting task.
In the first phase, 8 participants were recruited from university classes (age
range: 19-30, Male: 60%). During a session, participants were introduced to
the AirWare data collection mobile application and a demonstration of all the
gestures were given via the video recording. Participants were then instructed
to show the researcher each gesture. They weren’t told about the sensor
locations on the phone. The ambient environment was relatively quiet and
without many acoustic disturbances. Participants were instructed to hold the
smartphone in one hand and perform gestures “above” the phone with the other
hand.
In the second phase, 13 participants were recruited (age range: 19-30, Male:
66%). Participants were similarly introduced to the data collection
application but were also instructed about the location of IR proximity sensor
on the phone (as described). The user interface showed the participant
whenever the IR sensor detected a movement through an animated label on the
application. The gesture data was registered only when the IR sensor detected
a movement; otherwise, the interface prompted the user to repeat the gesture.
On average, users had some initial trouble learning how to manipulate the
sensor for some gestures such as “tap” but were quickly able to alter their
strategy to tap towards the top of the phone (where the IR sensor was
located). All users were able to successfully activate the IR sensor after two
or three trials per gesture.
Figure 4: Examples of the different gestures predicted in the AirWare
vocabulary.
### 5.1 Gesture Vocabulary
Participants performed 21 different gestures as instructed on the screen of
the phone via a custom data collection app. A gesture name would appear on the
screen and the user would perform the in-air gesture (Figure 4). All sensor
data was saved locally on the phone for later processing. Users went through
each gesture one time as practice (practice data was not used in analysis) and
then were presented with a random permutation of the gestures. For
participants in the second phase, the practice session lasted as long as was
needed for the subject to learn how to activate the IR sensor. In all, each
participant performed between 5 and 10 iterations of each gesture. The
different number of gestures per participant is an artifact of the way the
gestures were randomly presented. We let participants perform gestures for 45
minutes and then ended the session. On average, each participant performed
about 250 gestures. Note that the smartphone was used for data collection
only. Subsequent analysis was performed offline.
The initial gestures chosen were based upon an informal review of other in-air
gesture sensing systems. After the initial pilot study, we refined the
gestures based upon informal discussions about what gestures were most
intuitive to perform. The final gesture set consisted of:
* •
Flick left/right/up/down (quick hand movement from wrist)
* •
Pan left/right/up/down (hand flat, movement from elbow)
* •
Slice left/right (a fast “sword” motion diagonally across phone)
* •
Zoom in/out (whole hand)
* •
Whip (motion towards phone like holding a whip)
* •
Snap (similar to whip but snapping while moving toward the phone)
* •
Magic wand (slow waving of the fingers towards the palm)
* •
Click, double click (with finger, but not touching screen)
* •
Tap, double tap (with full hand)
* •
Circle (circular motion above the phone, hand flat)
* •
Erase (moving hand back and forth)
In our pilot studies, more gestures were included in the vocabulary. However,
some gestures which users felt were awkward or unintuitive were removed. These
included gestures such as hand wobble, finger wave in/out, and push/pull
gestures.
## 6 Machine Learning Description
To create, train and validate machine learning algorithms we use a combination
of packages in Python. Specifically, we use the “scikit-learn” library
[pedregosa2011scikit] and Keras [chollet2015keras] with the TensorFlow
[tensorflow2015-whitepaper] back-end. We chose to investigate several
different machine learning baselines and also several different convolutional
neural network architectures. It was unclear what neural network architecture
and parameters of the architecture would be optimal, so we chose to train
several variants and perform hyper-parameter tuning for each architecture.
### 6.1 Baseline Models - Traditional ML algorithms
For baseline comparison, we investigated several traditional machine learning
algorithms including multi-layer perceptrons, linear support vector machines,
and random forests. The Doppler information is pre-processed using Principal
Component Analysis (PCA) to reduce its dimensionality. The IR information is
averaged across time steps for each gesture. For each model, the hyper-
parameters, as well as the number of principal components for the Doppler
information, are selected via a randomized grid-search strategy. Based on
our grid searching results, hyper-parameters for each model are as follows:
Random Forest:
* •
Number of trees/estimators: 1000
* •
Bootstrap: True
* •
Node split criterion: Gini-index
* •
Maximum Features: $\sqrt{N}$
* •
Number of Principal Components: 100
Support Vector Machines:
* •
Kernel: Linear kernel
* •
Penalty parameter: 10
* •
Number of Principal Components: 100
Multi-layer Perceptron:
* •
Hidden Layer 1 unit size: 500
* •
Hidden Layer 2 unit size: 250
* •
Early Stopping: True
* •
Gradient Solver: Stochastic Gradient Descent
* •
Activation Unit: Tanh
* •
L2 Regularization: 0.01
* •
Number of Principal Components: 100
We choose these algorithms because they span a wide range of properties,
including decision-boundary capabilities ranging from linear to arbitrary.
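The random-forest baseline with the grid-searched settings listed above can be sketched as follows. The data here is synthetic and the dimensions (32 frequency bins by 100 frames, two time-averaged IR features) are inferred from earlier sections:

```python
# PCA to 100 components on the Doppler features, time-averaged IR
# features appended, then a 1000-tree random forest (Gini, sqrt(N)
# max features, bootstrap on), as reported in the grid search.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, doppler_dim, ir_dim = 120, 3200, 2        # 32 bins x 100 frames; angle, velocity
X_doppler = rng.normal(size=(n, doppler_dim))
X_ir = rng.normal(size=(n, ir_dim))          # already averaged over time
y = rng.integers(0, 21, size=n)              # 21 gesture classes

pca = PCA(n_components=100).fit(X_doppler)
X = np.hstack([pca.transform(X_doppler), X_ir])

clf = RandomForestClassifier(n_estimators=1000, criterion="gini",
                             max_features="sqrt", bootstrap=True,
                             random_state=0).fit(X, y)
probs = clf.predict_proba(X)                 # (120, n_classes_seen)
```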
### 6.2 Convolutional Network Architectures
In addition to these baseline machine learning models, we chose to investigate
four different convolutional neural network architectures. The differences
between each model come from the number of convolutional and dense layers
employed, as well as the type of convolution employed (one dimensional versus
two dimensional). The first three models all employed one-dimensional
convolutions. This makes the processing more similar to approaches used in
natural language processing than in image processing. In text processing, one-
dimensional convolutional filters are typically convolved with the word
embedding matrix over a sequence [severyn2015learning]. The fourth model
employed two dimensional convolutional filters on the input spectrogram, which
is more common in image processing.
Each architecture follows from the basic diagram shown in Figure 5. In this
architecture, the spectrogram and IR signals are processed separately, through
similar convolutional branches of the network. They are then concatenated and
passed to dense hidden layers. The difference among the three models that
employ one-dimensional convolutions is the depth of the convolutional and
dense layers. The simplest architecture employs two convolutional
layers and two dense layers, followed by an output layer. Another model
employs three convolutional layers, and another model employs three dense
layers. In all models, every convolutional layer is followed by a max pool
layer. $L_{2}$ regularization is used in all convolutional layers to minimize
over-fitting. Rectified Linear Unit (ReLU) activations are used everywhere
except the final layer to speed up the training and avoid unstable gradients.
Finally, a softmax layer at the output classifies the gestures into the 21
different classes. To more clearly reference each of the four
models, we refer to each model by number. Note that all models use two
convolutional layers to analyze the IR proximity sensor, but a different
number of layers for the spectrogram branch of the network:
* •
Model 1: 1D convolutional filters, two convolutional spectrogram layers, two
dense layers
* •
Model 2: 1D convolutional filters, two convolutional spectrogram layers, four
dense layers
* •
Model 3: 1D convolutional filters, three convolutional spectrogram layers,
four dense layers
* •
Model 4: 2D convolutional filters, two convolutional spectrogram layers, two
dense layers
Figure 5: Diagram of the convolutional neural networks investigated. We vary
the number of layers in the convolutional layers and number of dense layers as
part of our analysis.
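Model 1 from the list above can be sketched in Keras as follows. The filter counts, kernel sizes, and dense unit sizes are placeholders drawn from the tuned search ranges in Section 6.3, not the final tuned values:

```python
# Two-branch network: 1-D convolutions over the spectrogram, the fixed
# two-filter IR branch, concatenation, two dense layers, 21-way softmax.
import tensorflow as tf
from tensorflow.keras import layers, regularizers, Model

l2 = regularizers.l2(1e-3)

spec_in = layers.Input(shape=(100, 32))          # frames x frequency bins
x = layers.Conv1D(32, 3, activation="relu", kernel_regularizer=l2)(spec_in)
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(32, 3, activation="relu", kernel_regularizer=l2)(x)
x = layers.MaxPooling1D(2)(x)
x = layers.Flatten()(x)

ir_in = layers.Input(shape=(100, 2))             # IR angle, velocity per frame
y = layers.Conv1D(2, 2, activation="relu", kernel_regularizer=l2)(ir_in)
y = layers.MaxPooling1D(2)(y)
y = layers.Conv1D(2, 2, activation="relu", kernel_regularizer=l2)(y)
y = layers.MaxPooling1D(2)(y)
y = layers.Flatten()(y)

z = layers.Concatenate()([x, y])
z = layers.Dense(128, activation="relu")(z)
z = layers.Dense(64, activation="relu")(z)
out = layers.Dense(21, activation="softmax")(z)

model = Model([spec_in, ir_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

The deeper variants (Models 2 and 3) add dense or convolutional layers to the spectrogram branch; Model 4 swaps the spectrogram branch for 2-D convolutions over the spectrogram image.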
When training the network, we apply random perturbations to the input
spectrogram and IR sequences to help avoid over fitting and increase
generalization performance (i.e., data expansion). We randomly shift the data
temporally up to 10%. That is, we shift the entire spectrogram sequence
forward or backward in time randomly by up to 10%. The sequences are 2.5
seconds in duration, so this means that the spectrogram and/or the IR
stream might shift by 250 ms. This is applied to the IR data and the
spectrogram separately (resulting in different random time shifts). This helps
with generalization performance because the Samsung Gesture API is somewhat
inconsistent in the timings for when it provides the push notification that
the IR sensor has been activated. Therefore this data expansion mirrors the
actual use case well.
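The data-expansion step can be sketched as below: each stream is shifted forward or backward in time by up to 10% of its length, independently, with zeros filling the vacated frames (the zero-fill is our assumption):

```python
# Random temporal shift of a (frames x features) sequence by up to 10%.
import numpy as np

def random_time_shift(seq: np.ndarray, rng, max_frac: float = 0.10):
    n = seq.shape[0]
    shift = int(rng.integers(-int(max_frac * n), int(max_frac * n) + 1))
    out = np.zeros_like(seq)
    if shift >= 0:
        out[shift:] = seq[:n - shift]
    else:
        out[:n + shift] = seq[-shift:]
    return out

rng = np.random.default_rng(1)
spec = np.ones((100, 32))
ir = np.ones((100, 2))
spec_aug = random_time_shift(spec, rng)   # shifted independently, so the
ir_aug = random_time_shift(ir, rng)       # two streams may desync slightly
```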
### 6.3 Hyper-parameter Tuning
Based on the works of Bengio [bengio2012practical] and Bergstra et al.
[bergstra2011algorithms] we chose to use the Tree-structured Parzen Estimator
approach for hyper parameter tuning. These works established this estimation
approach to be superior for tuning hyper-parameters compared to randomized
search. During tuning, we only vary the number of convolutional filters and
kernel size for the spectrogram branch of the network because the signal size
is relatively more complex than the IR sensor stream. The filters applied to
the IR signal are held constant at 2 one dimensional filters with kernel
length of 2. The following parameters were tuned:
* •
L2 Regularization: Normal Distribution with mean 0.001 and s.d. 0.0001
* •
Learning Rate ($10^{-X}$): [-6, -5, -4, -3, -2, -1, 0]
* •
Number of convolution filters: [8, 16, 32, 64]
* •
Kernel Size: [2, 3, 5]
* •
Dropout: Uniform Distribution [0, 0.99]
* •
Number of hidden layer units: [32, 64, 128, 256, 512]
* •
Kernel weight initializers: [He normal and uniform distributions], [Glorot
normal and uniform distributions], [LeCun normal and uniform distributions]
Each of the four models underwent hyper-parameter tuning. When comparing the
different architectures, we use the best set of hyper-parameters found for
each architecture.
## 7 Results and Summary
We divide our results into three overarching sections: comparisons between
segmentations that require IR activation versus not requiring IR activation,
classification with the full gesture set, and classification with multiple
subsets of the gesture vocabulary.
### 7.1 IR Activation Segmentation Comparison
In this section, we compare the predictive ability of gestures collected
requiring that the IR sensor be activated versus not requiring the IR sensor
to activate to segment gestures. Recall that these data sets are collected
using separate experiments and different users. In each scenario, we train the
models using leave-one-subject-out cross validation. That is, no subject’s
data is simultaneously used for training and testing. For our evaluation
metric, we choose the average true positive rate per gesture. Because class
imbalance exists, accuracy is not a good indicator of performance as classes
that occur less often will receive less weight in the evaluation. Moreover,
binary scores like recall and precision are harder to interpret when micro or
macro averaged. Per-class true positive rate, alternatively, captures how well
we perform for each gesture. For this analysis, we choose to use the random
forest baseline model, as it is the best performing baseline model (discussed
later). Table I describes the per-class true positive rate for requiring
versus not requiring the IR sensor to be activated for the random forest
model. The main conclusion we draw from Table I is that requiring the IR
sensor to activate does increase the performance of the AirWare algorithm.
Moreover, there are other advantages to requiring that the sensor be
activated, such as reducing false positives and avoiding needless computation.
This is because the audio Doppler signal is likely to result in a number of
false positives from movement by the user and near the user; whereas the IR
sensor is relatively robust to these types of noise. However, requiring that
the IR sensor be activated also requires users to manipulate the way they
perform in-air gestures to activate the sensor. In this way, the AirWare
system will likely require some instruction to the users for how to reliably
perform different gestures. Thus, in the remainder of our analysis we only use
the gesture set that requires IR activation to segment gestures.
TABLE I: Average true positive rate per class for differing IR sensor activation.

| IR Sensor Activation | Per-Class TPR Average | STD Error |
|---|---|---|
| Required, N=13 | 38.92% | 0.01 |
| Not Required, N=8 | 13.71% | 0.02 |
| Majority | 5.34% | N/A |
| Chance | 4.76% | N/A |
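The evaluation metric used throughout, average per-class true positive rate, can be computed as in this sketch (the label arrays are hypothetical, and this is not the authors' code):

```python
import numpy as np

def avg_per_class_tpr(y_true, y_pred, n_classes):
    """Average true positive rate (recall) per class.
    Robust to class imbalance: every class contributes equally."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    rates = []
    for c in range(n_classes):
        mask = y_true == c
        if mask.any():
            rates.append(np.mean(y_pred[mask] == c))
    return float(np.mean(rates))
```

Unlike plain accuracy, rare gestures are weighted the same as frequent ones, which is why this metric is preferred under class imbalance.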
### 7.2 IR Activation and Doppler Signatures
Figure 6: Average true positive rate per gesture per user comparison for using
only IR activation, only Doppler signatures and combined sensor information to
classify the 21 class gesture set.
In this section, we analyze the performance of using IR sensor data only,
Doppler data only, and the combined sensors. With this analysis we seek to
understand how advantageous it is to combine the sensors. To investigate this
research question, our network is modified to use only the IR activation
information or only the Doppler signature information. From Figure 5 this
corresponds to only using one of the input branches in the network. The
performance of the model trained using individual sensor information is
compared with the performance of the model trained using the combined sensor
information. The cross validation strategy used in all cases is leave-one-
subject-out, wherein we train the model on $N-1$ users’ data and test the
model on the $N$th user data. If we only look at the best performing models
from Figure 6, we see that we are able to achieve an average true positive
rate of 35% per class with a standard error of 0.02% when only the IR sensor
information is used. Note that models 2 and 3 are identical when only the
IR signal branch is used. In comparison, using Doppler-only sensor information
results in a best performing model with an average true positive rate of 13%
and a standard error of 0.03%. From Figure 6, we can see that the performance with
only IR information is better than performance of using Doppler only
regardless of the machine learning model employed. However, when we combine
the information from the two modalities, the performance improves for all
convolutional neural network models and the random forest model. Thus, we
conclude that combining the two sensing modalities is advantageous for in-air
gesture recognition, resulting in a performance increase of about 10% average
true positive rate per gesture. The improvements are statistically significant
based upon a two-tailed T-test ($p<0.01$). In all analyses in the remainder of
the paper, we use the combined Doppler and IR sensor modalities as features
for the machine learning models.
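The significance test mentioned above is, in essence, a paired two-tailed t-test over matched per-user scores. A sketch using SciPy follows; the score values below are made up for illustration:

```python
from scipy.stats import ttest_rel

# Hypothetical per-user true positive rates for the same users under the
# combined-sensor and Doppler-only models (matched pairs, one per user).
combined = [0.45, 0.50, 0.48, 0.52, 0.47]
doppler_only = [0.35, 0.38, 0.40, 0.41, 0.36]

# ttest_rel performs a two-tailed paired t-test by default.
stat, p = ttest_rel(combined, doppler_only)
print(f"t = {stat:.2f}, p = {p:.4f}")
```

Because the same users appear under both conditions, a paired test is more powerful here than an unpaired comparison of the two score lists.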
Figure 7: Overview of three different cross-validation strategies investigated.
### 7.3 Full 21-Gesture Vocabulary
We now investigate the performance of the baseline machine learning models as
well as different convolutional neural network architectures described in the
previous section using different cross-validation strategies. We analyze the
performance of the classifiers through the following cross validation
strategies, each with its own practical implications. Also, an overview of
each cross validation strategy is shown in Figure 7.
Figure 8: Average true positive rate per gesture per user comparison after
combining the IR activation and Doppler signatures for all baseline and
convolutional neural network models.
#### 7.3.1 Leave One User Out
We explore the performance of our classifiers using the ‘leave one subject out’
cross-validation strategy. The strategy used in this case is to train the
model on data from $N-1$ users and test it on the $N$th user, as described in
Figure 7. This approach analyzes whether we can classify the gestures
successfully without requiring the system to be calibrated. This implies that
for practical implementations, we can directly use a pre-trained, out-of-the-
box classifier to classify the gestures. Through this strategy, we try to
generalize the learning of our classifier across different types of users.
This is the ideal scenario for a gesture system, requiring no user input or
calibration before use. As can be seen from Figure 8, the average true positive
rate per gesture per user ranges from approximately 19% to 46% for the different
classifiers. Our best performing model in this case is ‘Model 3’, which is a
deeper network in terms of the number of convolutional layers and dense
layers.
Breaking down the performance of Model 3 by user (Figure 9), we see that our
network performs well, except for users 1, 3, and 4. We are able to achieve an
average true positive rate per gesture per user of 45.19% with a standard
error of 0.02%. We conclude that the performance of the leave-one-user-out
model is not sufficient for a gesture recognition system. Therefore, we
explored other cross validation strategies that assume a calibration phase is
employed.
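Leave-one-subject-out cross validation maps directly onto scikit-learn's `LeaveOneGroupOut`. The following is a minimal sketch with synthetic data; the array shapes and the random-forest stand-in are assumptions, not the paper's models:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestClassifier

np.random.seed(0)
# Hypothetical data: rows are gesture instances, `users` maps rows to subjects.
X = np.random.rand(60, 8)
y = np.random.randint(0, 3, size=60)
users = np.repeat(np.arange(6), 10)   # 6 users, 10 instances each

logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups=users):
    # Train on 5 users, test on the held-out user.
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print(f"mean accuracy across held-out users: {np.mean(scores):.2f}")
```

One split is produced per user, so no subject's data ever appears in both the training and testing sets of the same fold.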
#### 7.3.2 Personalized Model for Each User
In this analysis, we analyze the performance of our classifier by calibrating
the model to each user. For each user, we perform a 5-iteration 60% training
and 40% testing stratified, shuffled split, as described in Figure 7. The test
size is chosen to ensure that at least 2 instances of each class are
present in the test data [pedregosa2011scikit]. In effect, we train 13
independent models using only data collected from a specific individual for
training and testing. We would like to see whether, despite the variation in
gesture performance within a single user, the classes can be predicted successfully. This
cross validation mirrors a use case where users would need to provide example
gestures to the system as a calibration phase. This is less ideal in terms of
practical usage but may be necessary to increase performance. From Figure 8,
we see that, on average, the performance deteriorates for all the models as
compared to leave one subject out. In this case, Random Forest performs
as well as ‘Model 3’ at approximately 24% average true positive
rate per gesture per user. From Figure 9, we can see that none of the users
benefit from a fully personalized model when compared with ‘leave one subject
out’ performance. We are able to achieve an average true positive rate of
23.68% per class with a standard error of 0.03% across users for the 21-class
gesture set. It is unclear, however, if the personalized models do not perform
consistently because the training data is limited. Convolutional networks tend
to require large amounts of training data to perform well, so it is possible
that the decrease in performance is due to a significantly smaller training
set. This motivates us to combine our two cross validation strategies in order
to increase the amount of training data, but also employ a personalized
calibration procedure.
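The per-user split described above corresponds to scikit-learn's `StratifiedShuffleSplit` with five iterations and a 60/40 split. A sketch on hypothetical single-user data:

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

np.random.seed(0)
# Hypothetical single-user data: 10 examples each of 3 gesture classes.
X_user = np.random.rand(30, 8)
y_user = np.repeat(np.arange(3), 10)

sss = StratifiedShuffleSplit(n_splits=5, train_size=0.6, test_size=0.4,
                             random_state=0)
fold_sizes = []
for train_idx, test_idx in sss.split(X_user, y_user):
    # A per-user model would be trained on train_idx, evaluated on test_idx.
    fold_sizes.append((len(train_idx), len(test_idx)))
```

Stratification keeps the class proportions equal in the 60% training and 40% testing portions on every iteration.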
Figure 9: Average true positive rate per gesture per user comparison for the
best performing architecture, Model 3. Results are shown for all employed
cross validation strategies.
#### 7.3.3 User Calibrated Model
In this cross validation strategy, we combine knowledge from the previous two
strategies to test the performance of the model. We first split the data by
user, placing $N-1$ users’ data in the training set. From the $N^{th}$ user
data, we perform a 5-fold 60%-40% stratified shuffle split, as done for the
personalized model. We then combine the training data from the $N-1$ users
with the 60% split of training data from the $N^{th}$ user and use the
remaining 40% of data from the $N^{th}$ user as a testing set, as shown in
Figure 7. The model performance for each user improves significantly for all
users with this training strategy. Thus, the model learns from other users as
well as the test user to classify the gestures of the test user. Note that
this training strategy, like the fully personalized model, assumes that a
calibration procedure will occur for each user of AirWare. From Figure 9, we
see that model 3 is the best performing amongst all the models. We are able to
achieve an average true positive rate of 50.45% per class per user with a
standard error of 0.03% across users for the 21-class gesture set.
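This user-calibrated strategy can be sketched as follows; the data shapes in the test usage are hypothetical and the helper name is invented. Each split trains on all other users' data plus 60% of the held-out user's data:

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

def user_calibrated_splits(X, y, users, test_user, n_splits=5, seed=0):
    """Yield (train_idx, test_idx) pairs: the training set is every other
    user's data plus a stratified 60% of the test user's data; the remaining
    40% of the test user's data is held out for testing."""
    others = np.where(users != test_user)[0]
    own = np.where(users == test_user)[0]
    sss = StratifiedShuffleSplit(n_splits=n_splits, train_size=0.6,
                                 test_size=0.4, random_state=seed)
    for cal_idx, test_idx in sss.split(X[own], y[own]):
        yield np.concatenate([others, own[cal_idx]]), own[test_idx]
```

The model therefore sees both population-level variation (from the other users) and user-specific calibration examples.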
If we investigate the most common confusions, we see that click is
misclassified as double-click 26% of the time, pan down is misclassified as
flick down (18%), and pan right is misclassified as flick right (21%),
suggesting a close similarity between their Doppler signatures and IR
activations. The best performing gestures are erase, pans, and snap which have
average true positive rates of 89%, 72%, and 72%, respectively.
Finally, we also wish to understand roughly how much training data is required
before the performance of the different models begins to saturate. That is,
about how many calibration examples are required before the performance
plateaus? To investigate this question we look at the training curves for
‘Model 3’ and ‘Random Forest’ since these are the best performing models.
Figure 10: Training Curve for Model 3 and Random Forest using ‘User-
calibrated’ cross-validation strategy
Figure 10 shows the performance of ‘Model 3’ and ‘Random Forest’ as we
gradually increase the percentage of calibration data from 10% to 50%. We
increase the training size in increments of 10% and evaluate the models using
the user's remaining data not used in calibration. As we can see in Figure 10, both
‘Model 3’ and the ‘Random Forest’ model gradually increase performance as more
user-specific calibration data is added. Moreover, both models begin to
saturate between 30% and 50% of training data used from the user. If we assume
saturation is achieved at 50%, this corresponds to the system needing 2–3
examples of each gesture from the user during calibration.
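A training-curve evaluation like the one in Figure 10 can be sketched as below, with synthetic data and a random forest standing in for the paper's models:

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.ensemble import RandomForestClassifier

np.random.seed(0)
# Hypothetical single-user data: 20 examples each of 3 gesture classes.
X_user = np.random.rand(60, 8)
y_user = np.repeat(np.arange(3), 20)

curve = []
for frac in [0.1, 0.2, 0.3, 0.4, 0.5]:
    # test_size defaults to the complement of train_size.
    sss = StratifiedShuffleSplit(n_splits=3, train_size=frac, random_state=0)
    scores = []
    for cal_idx, test_idx in sss.split(X_user, y_user):
        clf = RandomForestClassifier(n_estimators=30, random_state=0)
        clf.fit(X_user[cal_idx], y_user[cal_idx])
        scores.append(clf.score(X_user[test_idx], y_user[test_idx]))
    curve.append((frac, float(np.mean(scores))))
```

Plotting `curve` (mean score versus calibration fraction) reveals where the performance plateaus, i.e., how many calibration examples are actually needed.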
From the above results, we can clearly see that fusing the two different
sensing modalities allows increased performance as compared to only using the
individual sensor information. However, the overall performance of 50.45% is
still dramatically less than what would be needed for a practical gesture
recognition system. We conclude that we cannot support the full 21 gesture
vocabulary at a given time. However, it may be possible to select subsets of
the gestures from the full vocabulary. Thus, we explore what simplifications
to the vocabulary can be made to increase the per gesture true positive rate
to a point of usability.
### 7.4 Using Feature Subsets
We now seek to understand if the performance of the classifier can be improved
by reducing the simultaneous number of gestures that a classifier must
distinguish for a given application. In this scenario, we wish to divide the
gestures into smaller subsets based upon what combinations of gestures are
most appropriate for different categories of applications. We assume that the
application in question somehow instructs the user of what gestures are
currently supported. If a user were to perform an unsupported gesture, the
system would misinterpret that gesture. Sub-setting the gesture set, we
enumerate three different categories: Generic, Mapping, and Gaming. Each gesture
set comprises 4 to 7 gestures. Together, these categories include 16 distinct
gestures. Thus, the vocabulary is large, but managed by never having more than
7 gestures available at a time. We test the performance of the model for these
reduced gesture sets using ‘user calibrated’ strategy discussed above. We also
employ the most accurate architecture as selected through previous analysis
and parameters remain the same as from our previous hyper-parameter tuning.
Confusion matrices are generated in the same manner as previously discussed. A
summary of the different reduced sets overall and per user appears in Figure
11. As shown, there are a number of users for whom the system works well
and a number of individuals for whom it does not always achieve a high true
positive rate. In particular, users 1, 3, and 4 have reduced recognition rates compared
to other users. Upon review of the data, these also corresponded to users that
did not perform many practice trials while learning the gestures. These users
only practiced the gestures one time compared to other participants performing
gestures multiple times before they reported that they were ready to start the
experiment. As such, these participants may have rushed through the learning
of the gestures or not taken the experiment as seriously as others.
Figure 11: Performance comparison across users for different reduced gesture sets.
Figure 12: Aggregate confusion matrix for user-calibrated performance of the reduced gesture sets.
The Generic set comprises a total of 7 gestures: double-tap, flicks
up/down/left/right, snap and erase. This set is most likely to be used by
interfaces that require generic up/down/left/right interactions as well as
some selection and undo interactions such as when interacting with a web
browser. With these 7 gestures, AirWare is able to achieve an average true
positive rate of 82.4% per gesture across users with a standard error of 0.02%
(Figure 11). Inherently, the double tap and snap gestures are performed in a
similar fashion, which accounts for most confusions. From Figure 12, we can
see the model confuses between these two gestures often. Similarly, we can see
that flick up and flick right are confused by the model. This is likely the
result of these gestures being performed quickly or sloppily by
participants—resulting in a number of ‘diagonal’ motions rather than
explicitly downward or leftward motions. Because of this, it might be
warranted to limit flick interactions to only left/right or only up/down.
Furthermore eliminating the snap gesture would help increase the true positive
rate of the double tap gesture.
The Mapping set is focused on more immersive applications such as maps, where
zooming and panning are core requirements. It consists of zoom in/away, pan
up/down/left/right, and erase (a total of 7 gestures). We are able to achieve
an average true positive rate of 82.9% with a standard error of 0.02%. (See
Figure 11.) From Figure 12, we can see that the model confuses pan down
with pan left or pan right the most. This is a similar phenomenon to
users performing flicks at diagonal angles rather than directly lateral to the
phone. Even so, most pans are classified accurately.
The Gaming set consists of snap, slice left/right, and whip (a total of 4
gestures), focusing on specialized applications like gaming. We are able to
achieve an average true positive rate of 86.5% per gesture across users with a
standard error of 0.03%. (See Figure 11.) From Figure 12, we can see that the
model confuses snap and whip the most, but nearly all gestures are
classified accurately.
The reduced gesture sets can be combined in different scenarios and for
different applications to achieve a gesture vocabulary of 16 unique in-air
gestures with about 80% or better average true positive rate per gesture. Many
gestures surpass 90% true positive rate. The reduced gesture sets presented
here are only an example of possible subsets. Depending on the needs of the
application, a number of reduced sets could be deployed by the AirWare system
for different usage scenarios to support an even larger vocabulary. With this
level of performance, we believe AirWare could be used in modern smartphones
for a number of application scenarios. However, the full 21 gesture vocabulary
is not accurate enough to be deployed by a gesture recognition system.
## 8 Discussions and Future Work
Despite the good performance of AirWare, there are limitations in our study
that we wish to specifically mention. First, our evaluation does not
incorporate any specific user interface task. That is, the gestures performed
by the participants were in random order and did not have an action task
associated with them. It is possible that when users employ these gestures
for specific tasks, the way they perform them may differ from our user study.
Also, since the gestures were recorded in a relatively controlled setting,
performance may degrade in real-life environments with external acoustic
disturbances or when the phone is held in different positions. Even so,
previous Doppler-based research has shown the method to be robust to many
acoustic environments [gupta2012soundwave].
We have shown that user specific calibration significantly boosts
classification performance. However, this limits the scalability of our
approach because it requires users to provide calibration examples. Once
calibration examples are collected, the architecture must be retrained. This
retraining is limited by the current computing power of smart-phones.
Considering the complexity of the algorithm, calibration would require cloud
support to be realistic. Even so, once trained, the neural network
architecture, short-time Fourier transform, and IR values are computationally
efficient and easy to compute from the smart-phone in real-time. The battery
considerations for such an implementation are not too much of a concern
because gestures only need to be processed when the IR sensor is activated.
This helps to further reduce the computational cost of the AirWare approach.
Even so, because we require the IR sensor to be activated, users may feel like
some gestures in the vocabulary are awkward or unintuitive to perform. This
limitation would also require that users are instructed on how to perform each
of the gestures and then the system would need to verify that the user could
complete the gestures reliably. While a limitation, we foresee this process as
something that could be incorporated into the calibration phase of the system.
We would also like to point out the problems with the Samsung S5 gesture
sensing API. Samsung has deprecated support for the device and access to the
sensor output is limited. Moreover, there is no way to access the raw sensor
values without rooting the phone. This limits the potential of our current
approach to pervade the current market, but does not limit the research
contribution. This deprecation did affect our user study. Because the sensor
API was deprecated, many of the angle and velocity measures were flagged
“unknown.” We removed those incomplete records from our dataset but the
reliability of the sensor reading is called into question. As such, our
results might represent a lower bound of performance and may be further
increased with more reliable sensor readings or more expressive IR sensor
data.
Finally, we have not investigated user adoption of our vocabulary set, nor have
we investigated the impact of our large gesture vocabulary. We leave these
limitations to future work.
## 9 Conclusions
In conclusion, we presented AirWare, a technology that fuses the output of an
embedded smart-phone microphone and proximity sensor to recognize a gesture
set of 21 in-air hand gestures with 50.45% average true positive rate per
gesture. While we show that combining two different sensor information streams
can significantly increase performance, we conclude that the full 21 gesture
vocabulary cannot be reliably classified for use in a deployed gesture
recognition system. However, AirWare can achieve a reliable true positive rate
per gesture for a number of reduced-vocabulary gesture sets. In particular,
AirWare achieves true positive rates greater than 80%
for the Generic, Mapping, and Gaming gesture sets. Using these gesture sets,
AirWare can reliably classify a vocabulary of 16 unique gestures, with 4–7
gestures supported at any given time.
## Acknowledgments
The authors would like to thank students Rowdy Howell and Arya McCarthy for
their contributions in developing the AirWare mobile data collection
application.
Nibhrat Lohia completed his Masters in Statistics from the Dedman College of
Humanities and Sciences, Southern Methodist University and is currently
working as a Data Scientist in Copart, Inc. His research interests lie in the
application areas of deep neural nets and machine learning in general.
---
Raunak Mundada graduated with a Masters degree in Applied statistics and data
analysis from Southern Methodist University. His research interest lies in
machine learning and applied statistics. He is currently working as a
Statistician at GM Financial.
---
Eric C. Larson is an Assistant Professor in the department of Computer Science
and Engineering in the Bobby B. Lyle School of Engineering, Southern Methodist
University. His main research interests are in machine learning, sensing, and
signal/image processing for ubiquitous computing applications. He received his
Ph.D. in 2013 from the University of Washington. He received his B.S. and M.S.
in Electrical Engineering in 2006 and 2008, respectively, at Oklahoma State
University.
---
# Assessing the Impact: Does an Improvement to a Revenue Management System
Lead to an Improved Revenue?
Greta Laage111Corresponding author. IVADO Labs and École Polytechnique de
Montréal, Montréal, Canada<EMAIL_ADDRESS>Emma Frejinger 222IVADO
Labs, Canada Research Chair and DIRO, Université de Montréal, Montréal, Canada
Andrea Lodi333IVADO Labs, CERC and École Polytechnique de Montréal, Montréal,
Canada Guillaume Rabusseau444IVADO Labs, Mila, Canada CIFAR AI Chair and DIRO,
Université de Montréal, Montréal, Canada
###### Abstract
Airlines and other industries have been making use of sophisticated Revenue
Management Systems to maximize revenue for decades. While improving the
different components of these systems has been the focus of numerous studies,
estimating the impact of such improvements on the revenue has been overlooked
in the literature despite its practical importance. Indeed, quantifying the
benefit of a change in a system serves as support for investment decisions.
This is a challenging problem as it corresponds to the difference between the
generated value and the value that would have been generated keeping the
system as before. The latter is not observable. Moreover, the expected impact
can be small in relative value.
In this paper, we cast the problem as counterfactual prediction of unobserved
revenue. The impact on revenue is then the difference between the observed and
the estimated revenue. The originality of this work lies in the innovative
application of econometric methods proposed for macroeconomic applications to
a new problem setting. Broadly applicable, the approach benefits from only
requiring revenue data observed for origin-destination pairs in the network of
the airline at each day, before and after a change in the system is applied.
We report results using real large-scale data from Air Canada. We compare a
deep neural network counterfactual predictions model with econometric models.
They achieve respectively 1% and 1.1% of error on the counterfactual revenue
predictions, and allow to accurately estimate small impacts (in the order of
2%).
##### Keywords
Data analytics, Decision support systems, Performance evaluation, Revenue
Management, Airline, Counterfactual Predictions, Synthetic Controls.
## 1 Introduction
Airlines have been making use of sophisticated Revenue Management Systems
(RMSs) to maximize revenue for decades. Through interacting prediction and
optimization components, such systems handle demand bookings, cancellations
and no-shows, as well as the optimization of seat allocations and overbooking
levels. Improvements to existing systems are made by the airlines and solution
providers in an iterative fashion, aligned with the advancement of the state-
of-the-art where studies typically focus on one or a few components at a time
(Talluri and Van Ryzin, 2005). The development and maintenance of RMSs require
large investments. In practice, incremental improvements are therefore often
assessed in a proof of concept (PoC) prior to full deployment. The purpose is
then to assess the performance over a given period of time and limited to
certain markets, for example, a subset of the origin-destination pairs offered
for the movement of passengers on the airline’s network. We focus on a crucial
question in this context: _Does the improvement to the RMS lead to a
significant improvement in revenue?_ This question is difficult to answer
because the value of interest is not directly observable. Indeed, it is the
difference between the value generated during the PoC and _the value that
would have been generated_ keeping business as usual. Moreover, the magnitude
of the improvement can be small in a relative measure (for example, 1-3%)
while still representing important business value. Small relative values can
be challenging to detect with statistical confidence.
Considering the wealth of studies aiming to improve RMSs, it is surprising
that the literature focused on assessing quantitatively the impact of such
improvements is scarce. We identify two categories of studies in the
literature: First, those assessing the impact in a simplified setting
leveraging simulation (Weatherford and Belobaba, 2002; Fiig et al., 2019).
These studies provide valuable information but are subject to the usual
drawback of simulated environments. Namely, the results are valid assuming
that the simulation behaves as the real system. This is typically not true for
a number of reasons, for instance, assumptions on demand can be inaccurate and
in reality there can be a human in the loop adjusting the system. Statistical
field experiments do not have this drawback as they can be used to assess
impacts in a real setting. Studies focusing on field experiments constitute
our second category. There are, however, few applications in revenue
management (Lopez Mateos et al., 2021; Koushik et al., 2012; Pekgün et al.,
2013) and even less focus on the airline industry (Cohen et al., 2019). Each
application presents its specific set of challenges. Our work can be seen as a
field experiment whose aim is to assess whether a PoC is a success with
respect to a given success criterion.
In practice, airlines often take a pragmatic approach and compare the value
generated during a PoC to a simple baseline: either the revenue generated at
the same time of the previous year, or the revenue generated by another market
with similar behavior as the impacted market. This approach has the advantage
of being simple. However, finding an adequate market is difficult, and the
historical variation between the generated revenue and the baseline can exceed
the magnitude of the impact that we aim to measure. In this case, the answer
to the question of interest would be inconclusive.
We propose casting the problem as counterfactual prediction of the revenue
without changing the RMS, and we compare it to the observed revenue generated
during the PoC. Before providing background on counterfactual prediction
models, we introduce some related vocabulary in the context of our
application. Consider a sample of _units_ and observations of _outcomes_ for
all units over a given time period. In our case, an example of a unit is an
origin-destination (OD) pair and the observed outcome is the associated daily
revenue. Units of interest are called _treated units_ and the other
(untreated) units are referred to as _control units_. In our case, the
_treatment_ is a change to the RMS and it only impacts the treated units (in
our example a subset of the ODs in the network). The goal is to estimate the
_untreated outcomes_ of _treated units_ defined as a function of the outcome
of the control units. In other words, the goal is to estimate what would have
been the revenue for the treated OD pairs without the change to the RMS. We
use the observed revenue of the untreated ODs for this purpose.
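The core idea, expressing the treated OD's counterfactual revenue as a combination of control ODs fit on pre-treatment data, can be illustrated on synthetic numbers. All values below are simulated; real OD revenue series would replace the random matrices:

```python
import numpy as np

np.random.seed(0)
T_pre, T_post, n_controls = 200, 60, 10

# Simulated daily revenues: the treated OD is a fixed combination of controls.
controls = np.random.rand(T_pre + T_post, n_controls) * 1000
true_w = np.random.dirichlet(np.ones(n_controls))
treated = controls @ true_w + np.random.normal(0, 5, T_pre + T_post)
treated[T_pre:] *= 1.02   # inject a 2% treatment effect after the change

# Fit weights on the pre-treatment period only (unconstrained least squares,
# in the spirit of Doudchenko and Imbens' vertical regression).
w, *_ = np.linalg.lstsq(controls[:T_pre], treated[:T_pre], rcond=None)
counterfactual = controls[T_pre:] @ w

lift = treated[T_pre:].sum() / counterfactual.sum() - 1
print(f"estimated treatment effect: {lift:.1%}")
```

The estimated lift is the gap between observed treated revenue and the predicted untreated outcome; synthetic-control variants would additionally constrain the weights (e.g., non-negative, summing to one).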
##### Brief background on counterfactual prediction models
Doudchenko and Imbens (2016) and Athey et al. (2018) review different
approaches for imputing missing outcomes which include the three we consider
for our application: (i) synthetic controls (Abadie and Gardeazabal, 2003;
Abadie et al., 2010) (ii) difference-in-differences (Ashenfelter and Card,
1985; Card, 1990; Card and Krueger, 1994; Athey and Imbens, 2006) and (iii)
matrix completion methods (Mazumder et al., 2010; Candès and Recht, 2009;
Candès and Plan, 2010). Doudchenko and Imbens (2016) propose a general
framework for difference-in-differences and synthetic controls where the
counterfactual outcome for the treated unit is defined as a linear combination
of the outcomes of the control units. Methods (i) and (ii) differ by the
constraints applied to the parameters of the linear combination. Those models
assume that the estimated patterns across units are stable before and after
the treatment while models from the unconfoundedness literature (Imbens and
Rubin, 2015; Rosenbaum and Rubin, 1983) estimate patterns from before
treatment to after treatment that are assumed stable across units. Athey et
al. (2018) qualify the former as vertical regression and the latter as
horizontal regression. Amjad et al. (2018) propose a robust version of
synthetic controls based on de-noising the matrix of observed outcomes. Poulos
(2017) proposes an alternative to linear regression methods, namely a non-
linear recurrent neural network. Athey et al. (2018) propose a general
framework for counterfactual prediction models under matrix completion
methods, where the incomplete matrix is the one of observed outcomes without
treatment for all units at all time periods and the missing data patterns are
not random. They draw on the literature on factor models and interactive fixed
effects (Bai, 2003; Bai and Ng, 2002) where the untreated outcome is defined
as the sum of a linear combination of covariates, that is, a low rank matrix
and an unobserved noise component.
Studies in this literature focus mainly on macroeconomic applications, for
example, estimating the economic impact of the 1990 German reunification on
West Germany (Abadie et al., 2015), the effect of a state tobacco control
program on per capita cigarette sales (Abadie et al., 2010) and the effect of
a conflict on per capita GDP (Abadie and Gardeazabal, 2003).
In comparison, our application exhibits some distinguishing features. First,
the number of treated units can be large since airlines may want to estimate
the impact on a representative subset of the network. Often there are
hundreds, if not thousands of ODs in the network. Second, the number of
control units is potentially large but the network structure leads to
potential spillover effects that need to be taken into account. Third, even if
the number of treated units can be large, the expected treatment effect is
typically small. In addition, airline networks are affected by other factors,
such as weather and seasonality. Their impact on the outcome needs to be
disentangled from that of the treatment.
##### Contributions
This paper offers three main contributions. First, we formally introduce the
problem and provide a comprehensive overview of existing counterfactual
prediction models that can be used to address it. Second, based on real data
from Air Canada, we provide an extensive computational study showing that the
counterfactual prediction accuracy is high when predicting revenue. We focus
on a setting with multiple treated units and a large set of controls. We
present a non-linear deep learning model to estimate the missing outcomes that
takes as input the outcome of control units as well as time-specific features.
The deep learning model achieves less than 1% error for the aggregated
counterfactual predictions over the treatment period. Third, we present a
simulation study of treatment effects showing that we can accurately estimate
the effect even when it is relatively small.
##### Paper Organization
The remainder of the paper is structured as follows. Next we present a
thorough description of the problem. We describe in Section 3 the different
counterfactual prediction models. In Section 4, we describe our experimental
setting and the results of an extensive computational study. Finally, we
provide some concluding remarks in Section 5.
## 2 Problem Description
In this section, we provide a formal description of the problem and follow
closely the notation from Doudchenko and Imbens (2016) and Athey et al.
(2018).
We are in a panel data setting with $N$ units covering time periods indexed by
$t=1,\ldots,T$. A subset of units is exposed to a binary treatment during a
subset of periods. We observe the realized outcome for each unit at each
period. In our application, a unit is an OD pair and the realized outcome is
the booking issue date revenue at time $t$, that is, the total revenue yielded
at time $t$ from bookings made at $t$. The methodology described in this paper
is able to handle various types of treatments, assuming it is applied to a
subset of units. The set of treated units receive the treatment and the set of
control units are not subject to any treatment. The treatment effect is the
difference between the observed outcome under treatment and the outcome
without treatment. The latter is unobserved and we focus on estimating the
missing outcomes of the treated units during the treatment period.
We denote $T_{0}$ the time when the treatment starts and split the complete
observation period into a pre-treatment period $t=1,\ldots,T_{0}$ and a
treatment period $t=T_{0}+1,\ldots,T$. We denote $T_{1}=T-T_{0}$ the length of
the treatment period. Furthermore, we partition the set of units into treated
$i=1,\ldots,N^{\text{t}}$ and control units $i=N^{\text{t}}+1,\ldots,N$, where
the number of control units is $N^{\text{c}}=N-N^{\text{t}}$.
In the pre-treatment period, both control units and treated units are
untreated. In the treatment period, only the control units are untreated and,
importantly, we assume that they are unaffected by the treatment. The set of
treated pairs $(i,t)$ is
$\mathcal{M}=\\{(i,t):i=1,\ldots,N^{\text{t}},\ t=T_{0}+1,\ldots,T\\},$ (1)
and the set of untreated pairs $(i,t)$ is
$\mathcal{O}=\\{(i,t):i=1,\ldots,N^{\text{t}},\ t=1,\ldots,T_{0}\\}\cup\\{(i,t):i=N^{\text{t}}+1,\ldots,N,\ t=1,\ldots,T\\}.$ (2)
Moreover, the treatment status is denoted by $W_{it}$ and is defined as
$W_{it}=\left\\{\begin{array}[]{@{}ll@{}}1&\text{ if }(i,t)\in\mathcal{M}\\\
0&\text{ if }(i,t)\in\mathcal{O}.\end{array}\right.$ (3)
For each unit $i$ in period $t$, we observe the treatment status $W_{it}$ and
the realized outcome $Y_{it}^{\text{obs}}=Y_{it}(W_{it})$. Our objective is to
estimate $\hat{Y}_{it}(0)\leavevmode\nobreak\ \forall(i,t)\in\mathcal{M}$.
Counterfactual prediction models define the latter as a mapping of the outcome
of the control units.
The observation matrix, denoted by $\mathbf{Y}^{\text{obs}}$ is a $N\times T$
matrix whose components are the observed outcomes for all units at all
periods. The first $N^{\text{t}}$ rows correspond to the outcomes for the
treated units and the first $T_{0}$ columns to the pre-treatment period. The
matrix $\mathbf{Y}^{\text{obs}}$ hence has a block structure,
$\mathbf{Y}^{\text{obs}}=\begin{pmatrix}\mathbf{Y}_{\text{pre}}^{\text{obs,t}}&\mathbf{Y}_{\text{post}}^{\text{obs,t}}\\\
\mathbf{Y}_{\text{pre}}^{\text{obs,c}}&\mathbf{Y}_{\text{post}}^{\text{obs,c}}\end{pmatrix},$
where $\mathbf{Y}_{\text{pre}}^{\text{obs,c}}$ (respectively
$\mathbf{Y}_{\text{pre}}^{\text{obs,t}}$) represents the $N^{\text{c}}\times
T_{0}$ (resp. $N^{\text{t}}\times T_{0}$) matrix of observed outcomes for the
control units (resp. treated units) before treatment. Similarly,
$\mathbf{Y}_{\text{post}}^{\text{obs,c}}$ (respectively
$\mathbf{Y}_{\text{post}}^{\text{obs,t}}$) represents the $N^{\text{c}}\times
T_{1}$ (resp. $N^{\text{t}}\times T_{1}$) matrix of observed outcomes for the
control units (resp. treated units) during the treatment.
Synthetic control methods have been developed to estimate the average causal
effect of a treatment (Abadie and Gardeazabal, 2003). Our focus is slightly
different as we aim at estimating the total treatment effect during the
treatment period $T_{0}+1,\ldots,T$,
$\tau=\sum_{i=1}^{N^{\text{t}}}\sum_{t=T_{0}+1}^{T}\left(Y_{it}(1)-Y_{it}(0)\right).$ (4)
We denote by $\hat{\tau}$ the estimated treatment effect,
$\hat{\tau}=\sum_{i=1}^{N^{\text{t}}}\sum_{t=T_{0}+1}^{T}\left(Y_{it}^{\text{obs}}-\hat{Y}_{it}(0)\right).$
(5)
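To make the setup concrete, the sketch below partitions a toy observation matrix into the four blocks defined above and evaluates the estimator (5); all sizes and values are illustrative, and the zero counterfactual predictions stand in for a fitted model's output.

```python
import numpy as np

# Toy panel (illustrative sizes): treated rows first, pre-treatment columns first.
rng = np.random.default_rng(0)
N, N_t, T, T0 = 5, 2, 8, 6          # units, treated units, periods, treatment start
Y_obs = rng.normal(size=(N, T))

Y_pre_t = Y_obs[:N_t, :T0]          # N_t x T0 block
Y_post_t = Y_obs[:N_t, T0:]         # N_t x T1 block
Y_pre_c = Y_obs[N_t:, :T0]          # N_c x T0 block
Y_post_c = Y_obs[N_t:, T0:]         # N_c x T1 block

# Estimated total treatment effect (eq. 5), with placeholder zero
# counterfactual predictions Y_hat_0 in place of a fitted model's output.
Y_hat_0 = np.zeros_like(Y_post_t)
tau_hat = np.sum(Y_post_t - Y_hat_0)
```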
## 3 Counterfactual Prediction Models
In this section, we describe counterfactual prediction models from the
literature that can be used to estimate the missing outcomes
$Y_{it}(0)\leavevmode\nobreak\ \forall(i,t)\in\mathcal{M}$. Namely, grouped
under synthetic control methods (Section 3.1), we describe the constrained
regressions in Doudchenko and Imbens (2016) which include difference-in-
differences and synthetic controls from Abadie et al. (2010). In Section 3.2,
we delineate the robust synthetic control estimator from Amjad et al. (2018)
followed by the matrix completion with nuclear norm minimization from Athey et
al. (2018) in Section 3.3. Note that we present all of the above with a
single treated unit, i.e., $N^{\text{t}}=1$. This is consistent with our
application as we either consider the units independently, or we sum the
outcome of all treated units to form a single one. Finally, in Section 3.4, we
propose a feed-forward neural network architecture that either considers a
single treated unit or several to relax the independence assumption.
### 3.1 Synthetic Control Methods
Doudchenko and Imbens (2016) propose the following linear structure for
estimating the unobserved $Y_{it}(0)$, $(i,t)\in\mathcal{M}$, arguing that
several methods from the literature share this structure. More precisely, it
is a linear combination of the control units,
$Y_{it}(0)=\mu+\sum_{j=N^{\text{t}}+1}^{N}\omega_{j}Y_{jt}^{\text{obs}}+e_{it}\quad\forall(i,t)\in\mathcal{M},$
(6)
where $\mu$ is the intercept,
$\bm{\omega}=(\omega_{N^{\text{t}}+1},\ldots,\omega_{N})^{\top}$ a vector of
$N^{\text{c}}$ parameters and $e_{it}$ an error term.
Synthetic control methods differ in the way the parameters of the linear
combination are chosen depending on specific constraints and the observed
outcomes $\mathbf{Y}_{\text{pre}}^{\text{obs,t}}$,
$\mathbf{Y}_{\text{pre}}^{\text{obs,c}}$ and
$\mathbf{Y}_{\text{post}}^{\text{obs,c}}$. We write it as an optimization
problem with an objective function minimizing the sum of least squares
$\min_{\mu,\bm{\omega}}\left\|\mathbf{Y}_{\text{pre}}^{\text{obs,t}}-\mu\mathbf{1}_{T_{0}}^{\top}-\bm{\omega}^{\top}\mathbf{Y}_{\text{pre}}^{\text{obs,c}}\right\|^{2},$
(7)
potentially subject to one or several of the following constraints
$\displaystyle\quad\mu=0$ (8)
$\displaystyle\sum_{j=N^{\text{t}}+1}^{N}\omega_{j}=1$ (9)
$\displaystyle\omega_{j}\geq 0,\quad j=N^{\text{t}}+1,\ldots,N$ (10)
$\displaystyle\omega_{j}=\bar{\omega},\quad j=N^{\text{t}}+1,\ldots,N.$ (11)
In the objective (7), $\mathbf{1}_{T_{0}}$ denotes a $T_{0}$ vector of ones.
Constraint (8) enforces no intercept and (9) constrains the sum of the weights
to equal one. Constraints (10) impose non-negative weights. Finally,
constraints (11) force all the weights to be equal to a constant. If $T_{0}\gg
N$, Doudchenko and Imbens (2016) argue that the parameters $\mu$ and
$\bm{\omega}$ can be estimated by least squares without any of the
constraints (8)-(11), and a unique solution $(\mu,\bm{\omega})$ can be found.
As we further detail in Section 4, this is the case in our application. We hence
ignore all the constraints and estimate the parameters by least squares.
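In this unconstrained regime, fitting (7) reduces to ordinary least squares with an intercept column. A minimal numpy sketch on toy data (all values are illustrative, with the treated unit built exactly from the controls so the fit is recoverable):

```python
import numpy as np

# Unconstrained least-squares estimation of (mu, omega) from objective (7),
# well posed when T0 >> N^c. Toy data; values are illustrative.
rng = np.random.default_rng(1)
T0, N_c = 60, 4
Y_pre_c = rng.normal(size=(N_c, T0))            # control outcomes, pre-treatment
true_w = np.array([0.5, 0.2, 0.2, 0.1])
y_pre_t = 1.0 + true_w @ Y_pre_c                # treated unit built from controls

A = np.column_stack([np.ones(T0), Y_pre_c.T])   # intercept column + control outcomes
coef, *_ = np.linalg.lstsq(A, y_pre_t, rcond=None)
mu_hat, w_hat = coef[0], coef[1:]
```

Since the toy treated outcome is an exact linear function of the controls, the fit recovers the intercept and weights exactly.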
#### 3.1.1 Difference-in-Differences
The Difference-In-Differences (DID) methods (Ashenfelter and Card, 1985; Card,
1990; Card and Krueger, 1994; Meyer et al., 1995; Angrist and Krueger, 1999;
Bertrand et al., 2004; Angrist and Pischke, 2008; Athey and Imbens, 2006)
consist in solving
$\displaystyle(\textit{DID})\quad\min_{\mu,\bm{\omega}}$
$\displaystyle\left\|\mathbf{Y}_{\text{pre}}^{\text{obs,t}}-\mu\mathbf{1}_{T_{0}}^{\top}-\bm{\omega}^{\top}\mathbf{Y}_{\text{pre}}^{\text{obs,c}}\right\|^{2}$
(7) s.t. (9), (10), (11).
With one treated unit and $N^{\text{c}}=N-1$ control units, solving
$(\textit{DID})$ leads to the following parameters and counterfactual
predictions:
$\displaystyle\hat{\omega}_{j}^{\text{DID}}=\frac{1}{N-1},\quad j=2,\ldots,N$
(12)
$\displaystyle\hat{\mu}^{\text{DID}}=\frac{1}{T_{0}}\sum_{t=1}^{T_{0}}Y_{1t}-\frac{1}{(N-1)T_{0}}\sum_{t=1}^{T_{0}}\sum_{j=2}^{N}Y_{jt}$
(13)
$\displaystyle\hat{Y}_{1t}^{\text{DID}}(0)=\hat{\mu}^{\text{DID}}+\sum_{j=2}^{N}\hat{\omega}_{j}^{\text{DID}}Y_{jt}.$
(14)
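The closed forms (12)-(14) can be evaluated directly; the sketch below does so on toy data, with unit 0 playing the role of the single treated unit (all numbers are illustrative).

```python
import numpy as np

# Closed-form DID estimator (eqs. 12-14) on toy data.
rng = np.random.default_rng(2)
N, T0, T = 6, 10, 14
Y = rng.normal(size=(N, T))                        # row 0 is the treated unit

w_did = np.full(N - 1, 1.0 / (N - 1))              # eq. (12): equal weights
mu_did = Y[0, :T0].mean() - Y[1:, :T0].mean()      # eq. (13): pre-treatment gap
Y_hat_0 = mu_did + w_did @ Y[1:, T0:]              # eq. (14): counterfactuals
```

Each counterfactual is thus the control average at time $t$ shifted by the pre-treatment gap between the treated unit and the control average.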
#### 3.1.2 Abadie-Diamond-Hainmueller Synthetic Control Method
Introduced in Abadie and Gardeazabal (2003) and Abadie et al. (2010), the
synthetic control approach consists in solving
$\displaystyle\textit{(SC)}\quad\min_{\mu,\bm{\omega}}$
$\displaystyle\left\|\mathbf{Y}_{\text{pre}}^{\text{obs,t}}-\mu\mathbf{1}_{T_{0}}^{\top}-\bm{\omega}^{\top}\mathbf{Y}_{\text{pre}}^{\text{obs,c}}\right\|^{2}$
(7) s.t. (8), (9), (10).
Constraints (8), (9) and (10) enforce that the treated unit is defined as a
convex combination of the control units with no intercept.
The (SC) model struggles in the presence of non-negligible noise and
missing data in the observation matrix $\mathbf{Y}^{\text{obs}}$.
Moreover, it was originally designed for a small number of control units and
relies on deep domain knowledge to identify the controls.
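Since the feasible set of (SC) is the probability simplex, a simple Frank-Wolfe loop solves it. The sketch below is an illustrative solver on toy data, not the implementation of Abadie et al. (2010); the toy treated unit is an exact convex combination of the controls.

```python
import numpy as np

# Illustrative Frank-Wolfe solver for (SC): minimize the pre-treatment fit
# over the simplex (constraints 8-10: no intercept, non-negative weights
# summing to one).
def fit_sc(y_pre_t, Y_pre_c, iters=2000):
    N_c = Y_pre_c.shape[0]
    w = np.full(N_c, 1.0 / N_c)                          # feasible starting point
    for k in range(iters):
        grad = 2.0 * Y_pre_c @ (Y_pre_c.T @ w - y_pre_t)
        vertex = np.zeros(N_c)
        vertex[np.argmin(grad)] = 1.0                    # best simplex vertex
        w += (2.0 / (k + 2.0)) * (vertex - w)            # standard FW step size
    return w

rng = np.random.default_rng(3)
Y_pre_c = rng.normal(size=(3, 40))
y_pre_t = 0.7 * Y_pre_c[0] + 0.3 * Y_pre_c[1]            # toy convex combination
w_hat = fit_sc(y_pre_t, Y_pre_c)
```

Because each update is a convex combination of feasible points, the iterates stay on the simplex by construction.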
#### 3.1.3 Constrained Regressions
The estimator proposed by Doudchenko and Imbens (2016) consists in solving
$\textit{(CR-EN)}\quad\min_{\mu,\bm{\omega}}\left\|\mathbf{Y}_{\text{pre}}^{\text{obs,t}}-\mu\mathbf{1}_{T_{0}}^{\top}-\bm{\omega}^{\top}\mathbf{Y}_{\text{pre}}^{\text{obs,c}}\right\|_{2}^{2}+\lambda^{\text{CR}}\left(\frac{1-\alpha^{\text{CR}}}{2}||\bm{\omega}||_{2}^{2}+\alpha^{\text{CR}}||\bm{\omega}||_{1}\right),$
(15)
while possibly imposing a subset of the constraints (8)-(11).
The second term of the objective function (15) is an elastic-net
regularization combining a Ridge term, which keeps the weights small, and a
Lasso term, which drives some weights to exactly zero. It requires two
parameters, $\alpha^{\text{CR}}$ and $\lambda^{\text{CR}}$. To estimate their
values, the authors propose a cross-validation procedure in which each control
unit is in turn considered as a treated unit while the remaining control units
retain their role as controls and are used to estimate its counterfactual
outcome. The chosen parameters minimize the mean squared error (MSE) between
the estimates and the ground truth (real data) over the $N^{\text{c}}$
validation sets.
The chosen subset of constraints depends on the application and the ratio of
the number of time periods over the number of control units. In our
experimental setting, we have a large number of pre-treatment periods, i.e.,
$T_{0}\gg N^{\text{c}}$ and we focus on solving (CR-EN) without constraints.
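The unconstrained (CR-EN) objective can be minimized with a standard proximal-gradient (ISTA) loop. The sketch below is an illustrative solver, not the authors' implementation; handling the unpenalized intercept by centering is an assumption of the sketch, and the toy data are made up.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# ISTA sketch of (CR-EN): squared loss plus the elastic-net penalty of
# eq. (15), intercept handled by centering.
def fit_cr_en(y, Y_c, lam=1.0, alpha=0.5, iters=2000):
    row_means = Y_c.mean(axis=1)
    X = (Y_c - row_means[:, None]).T                 # T0 x N_c centered design
    y_c = y - y.mean()
    L = 2.0 * np.linalg.norm(X, 2) ** 2 + lam * (1 - alpha)  # Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = 2.0 * X.T @ (X @ w - y_c) + lam * (1 - alpha) * w
        w = soft_threshold(w - grad / L, lam * alpha / L)    # prox of the L1 term
    mu = y.mean() - w @ row_means
    return mu, w

rng = np.random.default_rng(5)
Y_c = rng.normal(size=(5, 80))
y = 2.0 + Y_c[0]                                     # toy treated outcome
mu_hat, w_hat = fit_cr_en(y, Y_c)
```

On this toy problem the Lasso term zeroes out the irrelevant controls while the weight on the true control stays close to one.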
### 3.2 Robust Synthetic Control
To overcome the challenges of (SC) described in Section 3.1.2, Amjad et al.
(2018) propose the Robust Synthetic Control algorithm. It consists of two
steps: the first de-noises the data and the second learns a linear
relationship between the treated and control units in the de-noised setting.
The intuition behind the first step is that the observation
matrix contains both the valuable information and the noise. The noise can be
discarded when the observation matrix is approximated by a low rank matrix,
estimated with singular value thresholding (Chatterjee et al., 2015). Only the
singular values associated with valuable information are kept. The authors
posit that for all units without treatment,
$Y_{it}(0)=M_{it}+\epsilon_{it},\quad i=1,\ldots,N,\quad t=1,\ldots,T,$ (16)
where $M_{it}$ is the mean and $\epsilon_{it}$ is a zero-mean noise
independent across all $(i,t)$ (recall that for $(i,t)\in\mathcal{O}$,
$Y_{it}(0)=Y_{it}^{\text{obs}}$). A key assumption is that a set of weights
$\\{\beta_{N^{\text{t}}+1},\ldots,\beta_{N}\\}$ exist such that
$M_{it}=\sum_{j=N^{\text{t}}+1}^{N}\beta_{j}M_{jt},\quad
i=1,\ldots,N^{\text{t}},\quad t=1,\ldots,T.$ (17)
Before treatment, for $t\leq T_{0}$, we observe $Y_{it}(0)$ for all treated
and control units, that is, $M_{it}$ with noise. The latent
matrix of size $N\times T$ is denoted $\mathbf{M}$. We follow the notation in
Section 2: $\mathbf{M}^{\text{c}}$ is the latent matrix of control units and
$\mathbf{M}_{\text{pre}}^{\text{c}}$ the latent matrix of the control units in
the pre-treatment period. We denote $\hat{\mathbf{M}}^{\text{c}}$ the estimate
of $\mathbf{M}^{\text{c}}$ and $\hat{\mathbf{M}}_{\text{pre}}^{\text{c}}$ the
estimate of $\mathbf{M}_{\text{pre}}^{\text{c}}$. With one treated unit, $i=1$
designates the treated unit and the objective is to estimate
$\hat{\mathbf{M}}^{\text{t}}$, the latent vector of size $T$ of the treated
unit. The two-step algorithm is described in Algorithm 1. It takes two
hyperparameters: the singular value threshold $\gamma$ and the regularization
coefficient $\eta$.
Algorithm 1 Robust Synthetic Control (Amjad et al., 2018)
1:Input: $\gamma$, $\eta$
2:Step 1: De-noising the data with singular value thresholding
3: Singular value decomposition of $\mathbf{Y}^{\text{obs,c}}$: $\mathbf{Y}^{\text{obs,c}}=\sum_{i=2}^{N}s_{i}u_{i}v_{i}^{\top}$
4: Select the set of singular values above $\gamma$: $S=\\{i:s_{i}\geq\gamma\\}$
5: Estimator $\hat{\mathbf{M}}^{\text{c}}=\frac{1}{\hat{p}}\sum_{i\in S}s_{i}u_{i}v_{i}^{\top}$, where $\hat{p}$ is the fraction of observed data
6:Step 2: Learning the linear relationship between controls and treated units
7: $\hat{\bm{\beta}}(\eta)=\arg\min_{\mathbf{b}\in\mathbb{R}^{N-1}}\left\|\mathbf{Y}_{\text{pre}}^{\text{obs,t}}-\hat{\mathbf{M}}_{\text{pre}}^{\text{c}\top}\mathbf{b}\right\|^{2}+\eta||\mathbf{b}||_{2}^{2}$
8: Counterfactual means for the treated unit: $\hat{\mathbf{M}}^{\text{t}}=\hat{\mathbf{M}}^{\text{c}\top}\hat{\bm{\beta}}(\eta)$
9:Return $\hat{\bm{\beta}}(\eta)$ in closed form:
$\hat{\bm{\beta}}(\eta)=\left(\hat{\mathbf{M}}_{\text{pre}}^{\text{c}}\hat{\mathbf{M}}_{\text{pre}}^{\text{c}\top}+\eta\mathbf{I}\right)^{-1}\hat{\mathbf{M}}_{\text{pre}}^{\text{c}}\mathbf{Y}_{\text{pre}}^{\text{obs,t}\top}$ (18)
Amjad et al. (2018) prove that the first step of the algorithm, which de-
noises the data, yields a consistent estimator of the latent matrix. Hence,
the estimate $\hat{\mathbf{M}}^{\text{c}}$ obtained with Algorithm 1 is a good
estimate of $\mathbf{M}^{\text{c}}$ when the latter is low rank.
The threshold parameter $\gamma$ trades off the bias and the variance of the
estimator; its value can be estimated with cross-validation. The
regularization parameter $\eta\geq 0$ controls the model complexity. To
select its value, the authors recommend a forward-chaining strategy, which
preserves the temporal structure of the pre-treatment data: for each candidate
$\eta$ and each $t$ in the pre-treatment period, train on periods
$1,\ldots,t-1$ and validate on period $t$, then select the value of $\eta$
that minimizes the MSE averaged over all validation points.
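The two steps of Algorithm 1 can be sketched in a few lines of numpy. The toy data, the noiseless fully observed setting ($\hat{p}=1$) and the hyper-parameter values below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Sketch of Algorithm 1: hard-threshold the singular values of the control
# block, then ridge-regress the treated unit on the de-noised controls using
# the closed form (18).
def robust_sc(y_pre_t, Y_obs_c, T0, gamma=0.0, eta=1e-6, p_hat=1.0):
    U, s, Vt = np.linalg.svd(Y_obs_c, full_matrices=False)
    keep = s >= gamma                                   # step 1: de-noising
    M_c = (U[:, keep] * s[keep]) @ Vt[keep] / p_hat
    M_pre = M_c[:, :T0]
    beta = np.linalg.solve(M_pre @ M_pre.T + eta * np.eye(M_pre.shape[0]),
                           M_pre @ y_pre_t)             # step 2: eq. (18)
    return M_c.T @ beta                                 # counterfactual means

rng = np.random.default_rng(6)
Y_obs_c = rng.normal(size=(4, 3)) @ rng.normal(size=(3, 50))   # rank-3 controls
T0 = 40
y_pre_t = (0.3 * Y_obs_c[0] + 0.7 * Y_obs_c[2])[:T0]           # toy treated unit
M_t_hat = robust_sc(y_pre_t, Y_obs_c, T0)
```

On this noiseless low-rank toy example the fitted counterfactual matches the true combination over the full horizon, including the post-treatment columns.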
### 3.3 Matrix Completion with Nuclear Norm Minimization
Athey et al. (2018) propose an approach inspired by matrix completion methods.
They posit a model similar to (16),
$Y_{it}(0)=L_{it}+\varepsilon_{it},\quad i=1,\ldots,N,\quad t=1,\ldots,T,$
(19)
where $\varepsilon_{it}$ is an error term. This means that during the
pre-treatment period, we observe $L_{it}$ with some noise. The objective is to
estimate the $N\times T$ matrix $\mathbf{L}$. Athey et al. (2018) assume that
the matrix $\mathbf{L}$ is low rank and hence can be estimated with a matrix
completion technique. The estimated counterfactual outcomes of treated units
without treatment $\hat{Y}_{it}(0),(i,t)\in\mathcal{M}$ is given by the
estimate ${\hat{L}_{it}},(i,t)\in\mathcal{M}$.
We use the following notation from Athey et al. (2018) to introduce their
estimator. For any matrix $\mathbf{A}$ of size $N\times T$ with missing
entries $\mathcal{M}$ and observed entries $\mathcal{O}$,
$P_{\mathcal{O}}(\mathbf{A})$ designates the matrix with values of
$\mathbf{A}$, where the missing values are replaced by 0 and
$P_{\mathcal{O}}^{\perp}(\mathbf{A})$ the one where the observed values are
replaced by 0.
They propose the following estimator of $\mathbf{L}$ from Mazumder et al.
(2010), for a fixed value of $\lambda^{\text{mc}}$, the regularization
parameter:
$\hat{\mathbf{L}}=\arg\min_{\mathbf{L}}\left\\{\frac{1}{|\mathcal{O}|}||P_{\mathcal{O}}(\mathbf{Y}^{\text{obs}}-\mathbf{L})||_{F}^{2}+\lambda^{\text{mc}}||\mathbf{L}||_{*}\right\\},$
(20)
where $||\mathbf{L}||_{F}$ is the Frobenius norm defined by
$||\mathbf{L}||_{F}=\left(\sum_{i}\sigma_{i}(\mathbf{L})^{2}\right)^{1/2}=\left(\sum_{i=1}^{N}\sum_{t=1}^{T}L_{it}^{2}\right)^{1/2}$
(21)
with $\sigma_{i}$ the singular values and $||\mathbf{L}||_{*}$ is the nuclear
norm such that $||\mathbf{L}||_{*}=\sum_{i}\sigma_{i}(\mathbf{L})$. The first
term of the objective function (20) is the distance between the latent matrix
and the observed matrix. The second term is a regularization term encouraging
$\mathbf{L}$ to be low rank.
Athey et al. (2018) show that their proposed method and synthetic control
approaches are matrix completion methods based on matrix factorization. They
rely on the same objective function, which contains the Frobenius norm of the
difference between the unobserved and the observed matrices. Unlike synthetic
controls, which impose different sets of restrictions on the factors, they
only use regularization.
Athey et al. (2018) use the convex optimization program SOFT-IMPUTE from
Mazumder et al. (2010) described in Algorithm 2 to estimate the matrix
$\mathbf{L}$. With the singular value decomposition
$\mathbf{L}=\mathbf{S\mathbf{\Sigma}R}^{\top}$, the matrix shrinkage operator
is defined by
$\text{shrink}_{\lambda^{\text{mc}}}(\mathbf{L})=\mathbf{S\tilde{\mathbf{\Sigma}}R}^{\top}$,
where $\tilde{\mathbf{\Sigma}}$ is equal to $\mathbf{\Sigma}$ with the $i$-th
singular value replaced by
$\max(\sigma_{i}(\mathbf{L})-\lambda^{\text{mc}},0)$.
Algorithm 2 SOFT-IMPUTE (Mazumder et al., 2010) for Matrix Completion with
Nuclear Norm Minimization (Athey et al., 2018)
1:Initialization:
$\mathbf{L}_{1}(\lambda^{\text{mc}},\mathcal{O})=\mathbf{P}_{\mathcal{O}}(\mathbf{Y}^{\text{obs}})$
2:for $k=1$ until
$\\{\mathbf{L}_{k}(\lambda^{\text{mc}},\mathcal{O})\\}_{k\geq 1}$ converges do
3:
$\mathbf{L}_{k+1}(\lambda^{\text{mc}},\mathcal{O})=\text{shrink}_{\frac{\lambda^{\text{mc}}|\mathcal{O}|}{2}}\left(\mathbf{P}_{\mathcal{O}}(\mathbf{Y}^{\text{obs}})+\mathbf{P}_{\mathcal{O}}^{\perp}\left(\mathbf{L}_{k}(\lambda^{\text{mc}},\mathcal{O})\right)\right)$
4:end for
5:$\hat{\mathbf{L}}(\lambda^{\text{mc}},\mathcal{O})=\lim_{k\to\infty}\mathbf{L}_{k}(\lambda^{\text{mc}},\mathcal{O})$
The value of $\lambda^{\text{mc}}$ can be selected via cross-validation as
follows. Form $K$ subsets of the observed data with the same proportion of
observed entries as in the original observation matrix. For each candidate
value $\lambda_{j}^{\text{mc}}$, compute the associated estimator
$\hat{\mathbf{L}}(\lambda_{j}^{\text{mc}},\mathcal{O}_{k})$ and the MSE on the
held-out data outside $\mathcal{O}_{k}$, and select the value of
$\lambda^{\text{mc}}$ that minimizes the MSE. To speed up convergence, the
authors recommend using
$\hat{\mathbf{L}}(\lambda_{j}^{\text{mc}},\mathcal{O}_{k})$ as initialization
for $\hat{\mathbf{L}}(\lambda_{j+1}^{\text{mc}},\mathcal{O}_{k})$ for each $j$
and $k$.
### 3.4 Feed-forward Neural Network
In this section, we propose a deep learning model to estimate the missing
outcomes and detail the training of the model. We consider two possible
configurations: (i) when there is one treated unit and (ii) when there are
multiple dependent treated units. In (i), the output layer of the model has
one neuron. In (ii), the output layer contains $N^{\text{t}}$ neurons. The
model learns the dependencies between treated units and predicts
simultaneously the revenue for all of them.
We define the counterfactual outcomes of the treated units as a non-linear
function $g$ of the outcomes of the control units with parameters
$\theta^{\text{ffnn}}$ and matrix of covariates $\mathbf{X}$
$\mathbf{Y}^{\text{t}}(0)=g\left(\mathbf{Y}^{\text{obs,c}},\mathbf{X},\theta^{\text{ffnn}}\right).$
(22)
In the following subsections, we use terminology from the deep learning
literature (Goodfellow et al., 2016) but keep the notations described in
Section 2. We define $g$ to be a feed-forward neural network (FFNN)
architecture. We describe next the architecture in detail along with the
training procedure.
#### 3.4.1 Architecture
Barron (1994) shows that multilayer perceptrons (MLPs), also called FFNNs, are
considerably more efficient than linear basis function approximators at
approximating smooth functions: when the number of inputs $I$ grows, the
required complexity of an MLP only grows as $\mathcal{O}(I)$, while the
complexity of a linear basis function approximator grows exponentially for a
given degree of accuracy. When
$N^{\text{t}}>1$, the architecture is multivariate, i.e., the output layer has
multiple neurons. It allows parameter sharing between outputs and thus
considers the treated units as dependent.
Since historical observations collected prior to the beginning of the
treatment period are untreated, the counterfactual prediction problem can be
cast as a supervised learning problem on the data prior to treatment. The
features are the observed outcomes of the control units and the targets are
the outcomes of the treated units. The pre-treatment period is used to train
and validate the neural network and the treatment period forms the test set.
This is a somewhat unusual configuration for supervised learning, where the
ground truth is normally also known on the test set and used to evaluate the
ability to generalize. To overcome this difficulty, we describe in Section
3.4.2 a sequential validation procedure that aims at mimicking the standard
decomposition of the dataset into training, validation and test sets.
We present in Figure 1 the model architecture. We use two input layers to
differentiate features. Input Layer 1 takes external features, and Input Layer
2 takes the lagged outcomes of the control units. As an illustration,
consider the prediction at day $t$. The day $t$ is associated, for instance,
with a day of the week $\textit{dow}_{t}$, a week of the year
$\textit{woy}_{t}$ and a month $m_{t}$; the inputs at Input Layer 1 could then
be $\textit{dow}_{t},\textit{woy}_{t},m_{t}$. Lagged features of control units
are $Y_{it^{\prime}},i=N^{\text{t}}+1,\ldots,N$ and
$t^{\prime}=t,t-1,\ldots,t-l,$ where $l$ is the number of lags considered.
They are fed into Input Layer 2. The output layer outputs $N^{\text{t}}$
values, one for each treated unit.
Figure 1: FFNN architecture with Fully Connected (FC) layers: Input Layers 1
and 2 each feed an FC layer, whose outputs are concatenated into hidden FC
layers followed by the output layer.
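A plain-numpy forward pass mirroring this two-branch architecture is sketched below. All layer sizes and the random parameters are illustrative assumptions; the actual model is trained with a deep learning framework.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Forward pass of the two-input architecture of Figure 1: two input branches,
# concatenated hidden FC layers, one output per treated unit.
def ffnn_forward(x_time, x_lags, params):
    W1, b1, W2, b2, Wh, bh, Wo, bo = params
    h1 = relu(x_time @ W1 + b1)      # Input Layer 1 branch: time features
    h2 = relu(x_lags @ W2 + b2)      # Input Layer 2 branch: lagged control outcomes
    h = relu(np.concatenate([h1, h2], axis=1) @ Wh + bh)   # hidden FC layers
    return h @ Wo + bo               # output layer: N_t predictions per sample

rng = np.random.default_rng(7)
d_time, d_lags, ctx, hid, N_t, batch = 3, 20, 8, 16, 30, 5  # illustrative sizes
params = (rng.normal(size=(d_time, ctx)) * 0.1, np.zeros(ctx),
          rng.normal(size=(d_lags, ctx)) * 0.1, np.zeros(ctx),
          rng.normal(size=(2 * ctx, hid)) * 0.1, np.zeros(hid),
          rng.normal(size=(hid, N_t)) * 0.1, np.zeros(N_t))
y_hat = ffnn_forward(rng.normal(size=(batch, d_time)),
                     rng.normal(size=(batch, d_lags)), params)
```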
#### 3.4.2 Sequential Validation Procedure and Selection of Hyper-parameters
In standard supervised learning problems, the data is split into training,
validation and test datasets, where the validation dataset is used for hyper-
parameters search. Table 1 lists the hyper-parameters of our architecture and
learning algorithm. For each potential set of hyper-parameters $\Theta$, the
model is trained on the training data and we estimate the parameters
$\theta^{\text{ffnn}}$. We compute the MSE between the predictions and the
truth on the validation dataset. We select the set $\Theta$ which minimizes
the MSE.
Name | Description
---|---
Hidden size | Size of the hidden layers
Hidden layers | Number of hidden layers after the concatenation of the dense layers from Input Layer 1 and Input Layer 2
Context size | Size of the hidden FC layer after Input Layer 1
Batch size | Batch size for the stochastic gradient descent
Dropout | Unique dropout rate determining the proportion of neurons randomly set to zero for each output of the FC layers
Learning rate | Learning rate for the stochastic gradient descent
Historical lags | Number of days prior to the predicted date considered for the control unit outcomes
Epochs | Number of epochs (iterations over the training dataset) required to train the model
Table 1: Description of the hyper-parameters for the FFNN architecture.
One challenge of our problem is the strong temporal aspect of the data. While
this is not a time series problem, for a given test period we train the model
on the most recently observed data, which makes the validation step for
selecting hyper-parameters difficult. To overcome this challenge, we split
the pre-treatment periods chronologically into two parts:
$\mathcal{T}_{\text{train}}$ and $\mathcal{T}_{\text{valid}}$. We train the
model on $\mathcal{T}_{\text{train}}$ with the backpropagation algorithm using
Early Stopping, a form of regularization to avoid overfitting that consists in
stopping the training when the error on the validation set increases. We
select $\Theta$ on $\mathcal{T}_{\text{valid}}$ and store $\hat{e}$, the
number of epochs it took to train the model. As a final step, we train the
model with hyper-parameters $\Theta$ for $\hat{e}$ epochs on
$\mathcal{T}_{\text{train}}$ and $\mathcal{T}_{\text{valid}}$, which gives an
estimate $\hat{\theta}^{\text{ffnn}}$. Then, we compute the counterfactual
predictions as
$\hat{\mathbf{Y}}_{t}^{\text{t}}(0)=\hat{g}(\mathbf{Y}^{\text{obs,c}},\mathbf{X},\hat{\theta}^{\text{ffnn}})$
for $t=T_{0}+1,\ldots,T$.
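The procedure above can be sketched as a small skeleton: a chronological split, one (MSE, epochs) result per hyper-parameter set, and selection of the best set. Here `train_model` is a hypothetical stand-in for backpropagation with Early Stopping, and the 80/20 split fraction is an illustrative choice, not a value from the paper.

```python
# Chronological split of the pre-treatment periods t = 1, ..., T0.
def chronological_split(T0, valid_frac=0.2):
    cut = int(T0 * (1 - valid_frac))                 # illustrative 80/20 split
    return list(range(1, cut + 1)), list(range(cut + 1, T0 + 1))

def select_hyperparameters(candidates, train_model, T0):
    t_train, t_valid = chronological_split(T0)
    # train_model returns (validation MSE, number of epochs e-hat used)
    results = [(train_model(theta, t_train, t_valid), theta) for theta in candidates]
    (best_mse, best_epochs), best_theta = min(results, key=lambda r: r[0][0])
    # final step: re-train on t_train + t_valid for best_epochs epochs
    return best_theta, best_epochs

# Example with a stub training function (hypothetical stand-in).
stub_train = lambda theta, tr, va: (abs(theta - 2), 10 * theta)
best_theta, best_epochs = select_hyperparameters([1, 2, 3], stub_train, T0=100)
```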
#### 3.4.3 Training Details
We present here some modeling and training tricks we used to achieve the best
performance with the FFNN.
##### Data Augmentation
Data augmentation is a well-known technique to improve the performance of
neural networks and prevent overfitting. It is often used for computer vision
tasks such as image classification (Shorten and Khoshgoftaar, 2019) and
consists in augmenting the dataset with simple operations such as rotation,
translation or symmetry. We perform one type of data augmentation, a
homothety, which scales each (inputs, outputs) pair. Let $a$ denote the
maximum homothety coefficient, typically an integer between 1 and 4. For each
batch in the stochastic gradient descent algorithm, we multiply each sample,
inputs and outputs, by a random number uniformly distributed between $1/a$
and $a$.
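This augmentation step can be sketched in a few lines; the toy batch of ones is illustrative.

```python
import numpy as np

# Homothety augmentation: scale each (inputs, outputs) sample of a batch by a
# factor drawn uniformly between 1/a and a, as described above.
def augment_batch(X, Y, a, rng):
    scale = rng.uniform(1.0 / a, a, size=(X.shape[0], 1))  # one factor per sample
    return X * scale, Y * scale

rng = np.random.default_rng(8)
X = np.ones((4, 3))      # toy batch of inputs
Y = np.ones((4, 2))      # toy batch of outputs
X_aug, Y_aug = augment_batch(X, Y, a=4, rng=rng)
```

Scaling inputs and outputs by the same factor preserves the linear relationship between them within each sample.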
##### Ensemble Learning
Ensemble learning relies on the intuition that averaging several good models
can perform better than the single best model (Sagi and Rokach, 2018). We use
a specific case of ensemble learning where the ensemble consists of the 15
models with the lowest MSE on the validation set from the hyper-parameter
search. For each model
$k=1,\ldots,15$, we store the set of hyper-parameters $\Theta_{k}$ and the
number of training epochs $\hat{e}_{k}$. We train each model on the pre-
treatment period to estimate $\hat{\theta}_{k}^{\text{ffnn}}$. We compute the
counterfactuals
$\hat{\mathbf{Y}}_{t}^{\text{t}k}(0)=\hat{g}_{k}(\mathbf{Y}^{\text{obs,c}},\mathbf{X},\hat{\theta}_{k}^{\text{ffnn}})$
and the predicted outcome is
$\hat{\mathbf{Y}}_{t}^{\text{t}}(0)=\frac{1}{15}\sum_{k=1}^{15}\hat{\mathbf{Y}}_{t}^{\text{t}k}(0)$
for $t=T_{0}+1,\ldots,T$.
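The averaging step can be sketched as follows, with three stub models standing in for the fitted networks $\hat{g}_{k}$ (the paper uses the 15 best).

```python
import numpy as np

# Ensemble prediction sketch: average the counterfactual predictions of the
# K best models.
def ensemble_predict(models, X):
    predictions = np.stack([m(X) for m in models])   # K stacked prediction arrays
    return predictions.mean(axis=0)

# Hypothetical stand-ins for fitted models g_k (note the k=k default argument
# binding k at definition time).
models = [lambda X, k=k: X.sum(axis=1) + k for k in range(3)]
X = np.ones((4, 2))
y_ens = ensemble_predict(models, X)
```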
## 4 Application
This work was part of a large project with a major North American airline, Air
Canada, operating a worldwide network. The objective of the overall project
was to improve the accuracy of the demand forecasts of multiple ODs in the
network. In this work, the new demand forecasting algorithm acts as the
treatment. The details of the treatment are not part of this paper, but they
drove some of the decisions, especially regarding the selection of the treated
and control units. The units correspond to the different ODs in the network
and the outcome of interest is the revenue. In this paper, we present a
computational study of a simulated treatment effect (the ground-truth impact
is known). This was part of the validation work done prior to the proof of
concept (PoC). Due to the uncertainty regarding the required duration of the
treatment period, we planned for a period of 6 months in our validation study.
For the sake of completeness, we also analyze the results for shorter
treatment periods. Unfortunately, the Covid-19 pandemic hit the airline
industry during the PoC. It drastically changed the revenue and the operated
flights, making it impossible to assess the impact of the demand forecasts.
In the next section, we first provide details of our experimental setting.
Next, in Section 4.2, we present the prediction performances of the models. In
Section 4.3, we report results from a simulation study designed to estimate
the revenue impact.
### 4.1 Experimental Setup and Data
##### Treatment Effect Definition
There are two ways of considering the daily revenue yielded from bookings: by
flight date or by booking issue date. The former is the total revenue at day
$t$ from bookings for flights departing at $t$, while the latter is the total
revenue at day $t$ from bookings made at $t$, for all possible departure dates
for the flight booked. For our study, we consider the issue date revenue as it
allows for a better estimation of the treatment effect. Indeed, as soon as the
treatment starts at day $T_{0}+1$, all bookings are affected and thus the
issue date revenue is affected. Hence, $Y_{it}(0)$ designates the untreated
issue date revenue of OD $i$ at day $t$. The treatment period is 6 months,
i.e., $T_{1}=181$ days. The drawback of the flight date revenue is that only a
subset of the flights is completely affected by the treatment, hence leading
to an underestimation of the treatment effect. Only flights whose booking
period starts at $T_{0}+1$ (or after) and for which the treatment period lasts
for the full duration of the booking period, approximately a year, are
completely affected.
##### Selection of Treated Units
The selection of the treated ODs was the result of discussions with the
airline. The objective was to have a sample of ODs representative of the
North-American market, while satisfying constraints related to the demand
managers in charge of those ODs. We select 15 non-directional treated ODs,
i.e., 30 directional treated ODs ($N^{\text{t}}=30$). For instance, if
Montreal-Boston was treated, then Boston-Montreal would be treated as well.
The selected 30 ODs represent approximately 7% of the airline’s yearly
revenue.
##### Selection of Control Units
The selection of control units depends on the treated units. Indeed, a change
of the demand forecasts for an OD affects the RMS, which defines the booking
limits. Due to the network effect and the potential leg-sharing among ODs,
this would in turn affect the demand for other ODs. With the objective of
selecting control units that are _unaffected_ by the treatment, we use the
following restrictions:
* •
Geographic rule: for each treated OD, we consider two perimeters centered
around the origin and the destination airports, respectively. We exclude all
other OD pairs where either the origin or the destination is in one of the
perimeters.
* •
Revenue ratio rule: for all ODs operated by the airline in the network,
different from the treated ODs, we discard the ones where at least 5% of the
itineraries have a leg identical to one of the treated ODs. This is because
new pricing of OD pairs can affect the pricing of related itineraries, which
in turn affects the demand.
* •
Sparse OD rule: we exclude seasonal ODs, i.e., those that operate only at
certain times of the year. Moreover, we exclude all OD pairs that have no
revenue on more than 85% of points in our dataset.
From the remaining set of ODs, we select the 40 most correlated ODs for each
treated OD. The correlation is estimated with the Pearson correlation
coefficient. These rules led to $N^{\text{c}}=317$ control units. We note that
this selection is somewhat different from the literature, due to the network
aspect of the airline operations and the abundance of potential control units.
In Abadie et al. (2010), for instance, only a few controls are selected based
on two conditions: (i) they have similar characteristics as the treated units
and (ii) they are not affected by the treatment. The geographic restriction
and the revenue ratio rule correspond to condition (ii). The sparse OD rule
partially ensures condition (i), as the treated ODs are frequent ODs in the
airline’s network. Considering a large number of controls has the advantage of
potentially leveraging the ability of deep-learning models to capture the
relevant information from a large set of features.
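The final correlation-based filtering step can be sketched as follows. The function name and the synthetic data are illustrative; the paper selects $k=40$ controls per treated OD from the pool that survives the three exclusion rules above.

```python
import numpy as np

def select_controls(treated: np.ndarray, candidates: np.ndarray,
                    k: int = 40) -> np.ndarray:
    """Pick the k candidate control series most correlated (Pearson) with a
    treated unit's pre-treatment revenue series.

    treated:    shape (T,)   daily revenue of one treated OD
    candidates: shape (N, T) daily revenue of the candidate control ODs
    Returns the indices of the k selected controls.
    """
    # np.corrcoef stacks `treated` as row 0; [0, 1:] gives its correlation
    # with each candidate series.
    corr = np.corrcoef(treated, candidates)[0, 1:]
    return np.argsort(-corr)[:k]

rng = np.random.default_rng(0)
treated = rng.normal(size=100)
candidates = rng.normal(size=(50, 100))
candidates[7] = treated + rng.normal(scale=0.1, size=100)  # a close control
idx = select_controls(treated, candidates, k=5)
# candidate 7 should rank first given its high correlation with the treated OD
```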
We ran several experiments with a larger set of control units, given that the
geographic rule, the revenue ratio rule and the sparse OD rule were respected.
In the following, we report results for the set of controls described above,
as they provided the best performance.
##### Models and Estimators
We compare the performance of the models and estimators detailed in Section 3:
* •
DID: Difference-in-Differences
* •
SC: Abadie-Diamond-Hainmueller Synthetic Controls
* •
CR-EN: Constrained Regressions with elastic-net regularization
* •
CR: CR-EN model with $\lambda^{\text{CR}}=0$ and $\alpha^{\text{CR}}=0$
* •
RSC: Robust Synthetic Controls
* •
MCNNM: Matrix Completion with Nuclear Norm Minimization
* •
FFNN: Feed-Forward Neural Network with Ensemble Learning. The external
features of the FFNN are the day of the week and the week of the year. We
compute a circular encoding of these two features using their polar
coordinates, which ensures that day 6 and day 0 (respectively, week 52 and
week 1 of the next year) are as close as day 0 and day 1 (respectively, week 1
and week 2).
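The circular encoding can be sketched as follows; the function name and the day-0 convention are illustrative, not taken from the paper.

```python
import numpy as np

def circular_encoding(value: np.ndarray, period: int) -> np.ndarray:
    """Map a periodic integer feature (e.g. day of week) onto the unit circle.

    Adjacent values end up equally spaced, so the last value of the cycle
    (day 6, week 52) is as close to the first (day 0, week 1) as any other
    pair of consecutive values.
    """
    angle = 2 * np.pi * value / period
    return np.stack([np.sin(angle), np.cos(angle)], axis=-1)

days = np.arange(7)               # day 0, ..., day 6
enc = circular_encoding(days, 7)  # shape (7, 2)

# Distance between day 6 and day 0 equals the distance between any two
# consecutive days, which a plain integer encoding does not achieve.
d_60 = np.linalg.norm(enc[6] - enc[0])
d_01 = np.linalg.norm(enc[0] - enc[1])
```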
We started the analysis by investigating the approach often used in practice,
which consists of comparing year-over-year revenue. The counterfactual
revenue is the revenue obtained in the same period of the previous year. We
ruled out this approach due to its poor performance, both in terms of accuracy
and variance. We provide details in Section 4.2.2, where we discuss the
results.
##### Data
The observed untreated daily issue date revenue covers the period from January
2013 to February 2020 for all control and treated units. This represents
907,405 data points. To test the performance of the different models, we
select random periods of 6 months and predict the revenue values of the 30
treated ODs. In the literature, most studies use a random assignment of the
pseudo-treated unit instead of a random assignment of treated periods. In our
application, switching control units to treated units is challenging as the
control set is specific to the treated units. Hence our choice of random
assignment of periods. We refer to those periods as pseudo-treated as we are
interested in retrieving the observed values. To overcome the challenges
described in Section 3.4.2, we select random periods late in the dataset,
between November 2018 and February 2020.
##### Two Scenarios for the Target Variables
We consider two scenarios for the target variables: In the first – referred to
as $S1$ – we aggregate the 30 treated units to a single one. In the second –
referred to as $S2$ – we predict the counterfactual revenue for each treated
unit separately. For both scenarios, our interest concerns the total revenue
$Y_{t}=\sum_{i\in N^{\text{t}}}Y_{it}$. In the following, we provide more
details.
In $S1$, we aggregate the outcomes of the treated units to form one treated
unit, even though the treatment is applied to each unit individually. The
missing outcomes, i.e., the new target variables, are the values of
$Y_{t}^{\text{agg}}$, where
$\textit{(S1)}\quad Y_{t}^{\text{agg}}=\sum_{i=1}^{N^{\text{t}}}Y_{it}.$ (23)
The models DID, SC, CR and CR-EN are in fact regressions on $Y_{t}^{\text{agg}}$
with the control unit outcomes as covariates. For the models RSC and MCNNM, we
replace in the observation matrix $\mathbf{Y}^{\text{obs}}$ the $N^{\text{t}}$
rows of the treated units revenue with the values of $Y_{t}^{\text{agg}}$, for
$t=1,\ldots,T$. All models estimate $\hat{Y}_{t}^{\text{agg}}$, for
$t=1,\ldots,T$, and $\hat{Y}_{t}=\hat{Y}_{t}^{\text{agg}}$.
In $S2$, we predict the counterfactual revenue for each treated OD. For models
SC, DID, CR, CR-EN, MCNNM and RSC, this amounts to considering each treated
unit as independent from the others and we estimate a model on each treated
unit. For FFNN, we relax the independence assumption so that the model can
learn the dependencies and predict the revenue for each treated unit
simultaneously. We have an estimate of the revenue for each pair (unit, day)
in the pseudo-treatment period. Then, we estimate the total revenue at each
day as the sum of the per-unit estimates, namely
$\textit{(S2)}\quad\hat{Y}_{t}=\sum_{i\in N^{\text{t}}}\hat{Y}_{it}.$ (24)
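The two scenarios differ only in where the aggregation happens; a minimal sketch with stand-in per-unit predictions (any of the models above could supply them):

```python
import numpy as np

rng = np.random.default_rng(0)
n_treated, T = 3, 10
Y = rng.uniform(1.0, 2.0, size=(n_treated, T))   # per-unit daily revenue

# S1 (Eq. 23): aggregate the treated units first, fit one model on the
# aggregated series, and use its prediction directly as Y_hat_t.
y_agg = Y.sum(axis=0)

# S2 (Eq. 24): predict each treated unit separately (stand-in predictions
# here), then sum the per-unit predictions at each day.
Y_hat_units = Y + rng.normal(0.0, 0.01, size=Y.shape)
y_hat_s2 = Y_hat_units.sum(axis=0)
```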
##### Performance Metrics
We assess performance by analyzing standard Absolute Percentage Error (APE)
and Root Mean Squared Error (RMSE). In addition, the bias of the
counterfactual prediction model is an important metric as it, in turn, leads
to a biased estimate of the impact. In our application, the observable outcome
is the issue date net revenue from the bookings whose magnitude over a 6-month
treatment period is measured in millions. A pseudo-period $p$ has a length
$T_{1p}$ and we report for each $p$ the percentage estimate of the total error
$\text{tPE}_{p}=\frac{\sum_{t=1}^{T_{1p}}\hat{Y}_{t}-\sum_{t=1}^{T_{1p}}Y_{t}}{\sum_{t=1}^{T_{1p}}Y_{t}}\times 100.$ (25)
This metric gives insight into whether the model tends to
overestimate or underestimate the total revenue, which will be of use when
estimating the revenue impact. We also report $\text{tAPE}_{p}$, the absolute
value of $\text{tPE}_{p}$ for a period $p$:
$\text{tAPE}_{p}=\frac{|\sum_{t=1}^{T_{1p}}\hat{Y}_{t}-\sum_{t=1}^{T_{1p}}Y_{t}|}{\sum_{t=1}^{T_{1p}}Y_{t}}\times 100.$ (26)
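Both period-level metrics follow directly from the daily series; a minimal sketch:

```python
import numpy as np

def total_percentage_error(y_hat: np.ndarray, y: np.ndarray) -> float:
    """tPE (Eq. 25): signed percentage error on the total revenue of a
    pseudo-treatment period; positive values mean overestimation."""
    return (y_hat.sum() - y.sum()) / y.sum() * 100

def total_absolute_percentage_error(y_hat: np.ndarray, y: np.ndarray) -> float:
    """tAPE (Eq. 26): absolute value of tPE."""
    return abs(total_percentage_error(y_hat, y))

y = np.array([100.0, 120.0, 80.0])       # observed daily revenue
y_hat = np.array([105.0, 118.0, 83.0])   # counterfactual predictions
tpe = total_percentage_error(y_hat, y)   # (306 - 300) / 300 * 100 = 2.0
```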
We present the results of $S1$ and $S2$ in the following. For confidentiality
reasons, we only report relative numbers in the remainder of the paper with
the focus of comparing the different models.
### 4.2 Prediction Performance
In this section, we start by analyzing the performance related to predicting
daily revenue, followed by an analysis of total predicted revenue in Section
4.2.2.
#### 4.2.1 Daily Predicted Revenue
We assess the performances of the models at each day $t$ of a pseudo-treatment
period, i.e., the prediction error on $\hat{Y}_{t}$ at each day $t$. We
compute the errors for each $t$ and report the values averaged over each
pseudo-treatment period $p$, namely
$\text{MAPE}_{p}=\frac{1}{T_{1}}\sum_{t=1}^{T_{1}}\frac{|\hat{Y}_{t}-Y_{t}|}{Y_{t}},\quad\text{RMSE}_{p}=\sqrt{\frac{1}{T_{1}}\sum_{t=1}^{T_{1}}(\hat{Y}_{t}-Y_{t})^{2}}.$ (27)
For confidentiality reasons, we report a scaled version of $\text{RMSE}_{p}$
for each $p$, which we refer to as $\text{RMSE}_{p}^{\text{s}}$. We use the
average daily revenue of the first year of data as a scaling factor.
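The daily metrics of Eq. (27), including the scaled RMSE, can be sketched as follows; the scaling factor is an input, which the paper takes as the average daily revenue of the first year of data.

```python
import numpy as np

def mape(y_hat: np.ndarray, y: np.ndarray) -> float:
    """Daily MAPE over a pseudo-treatment period (Eq. 27)."""
    return float(np.mean(np.abs(y_hat - y) / y))

def scaled_rmse(y_hat: np.ndarray, y: np.ndarray, scale: float) -> float:
    """Daily RMSE (Eq. 27) divided by a scaling factor, e.g. the average
    daily revenue of the first year of data."""
    return float(np.sqrt(np.mean((y_hat - y) ** 2)) / scale)

y = np.array([100.0, 200.0])
y_hat = np.array([110.0, 190.0])
m = mape(y_hat, y)                      # (0.10 + 0.05) / 2 = 0.075
r = scaled_rmse(y_hat, y, scale=100.0)  # sqrt(100) / 100 = 0.1
```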
Figures 2 and 3 present $\text{MAPE}_{p}$ and $\text{RMSE}_{p}^{\text{s}}$ for
$p=1,\ldots,15$, where the upper graph of each figure shows results for $S1$
and the lower the results for $S2$, respectively. We note that the performance
is stable across pseudo-treatment periods for all models. The values of
$\text{MAPE}_{p}$ at each period $p$ for the SC, RSC and CR models are below
5%, while for FFNN this is only the case in $S2$. This is important, as the
impact we wish to measure is smaller than this order of magnitude.
(a) $S1$ (one model for a single aggregated unit)
(b) $S2$ (one model per treated unit)
Figure 2: Values of daily error, $\text{MAPE}_{p}$, in each pseudo-treatment
period (note that the y-axis has a different scale in the two graphs).
(a) $S1$ (one model for a single aggregated unit)
(b) $S2$ (one model per treated unit)
Figure 3: Values of daily $\text{RMSE}^{\text{s}}$ in each pseudo-treatment
period (note that the y-axis has a different scale in the two graphs).
Table 2 reports the values of the metrics averaged over all pseudo-treatment
periods for settings $S1$ and $S2$, i.e.,
$\text{MAPE}=\frac{1}{15}\sum_{p=1}^{15}\text{MAPE}_{p}$ and
$\text{RMSE}^{\text{s}}=\frac{1}{15}\sum_{p=1}^{15}\text{RMSE}_{p}^{\text{s}}$.
The results show that the best performance for both metrics and in both
scenarios is achieved by the CR model. On average, it reaches a MAPE of 3.4%
and an $\text{RMSE}^{\text{s}}$ of 6.0, outperforming the CR-EN model. This is
because we have $T\gg N$, hence there are enough data to estimate the
coefficients without regularization.
| Model | MAPE ($S1$) | $\text{RMSE}^{\text{s}}$ ($S1$) | MAPE ($S2$) | $\text{RMSE}^{\text{s}}$ ($S2$) |
|---|---|---|---|---|
| CR | 3.4% | 6.0 | 3.4% | 6.0 |
| CR-EN | 8.6% | 15.0 | 8.6% | 15.0 |
| DID | 38.3% | 61.4 | 25.9% | 39.2 |
| FFNN | 5.8% | 9.4 | 4.6% | 7.5 |
| MCNNM | 44.2% | 70.0 | 7.8% | 14.3 |
| RSC | 3.6% | 6.5 | 3.6% | 6.5 |
| SC | 3.6% | 6.5 | 4.6% | 8.3 |
Table 2: Average of the daily MAPE and $\text{RMSE}^{\text{s}}$ over all
pseudo-treatment periods.
The DID and MCNNM models perform poorly in $S1$. This is due to the
difference in magnitude between the treated unit and the control units. In
$S2$, the performance improves because we build one model per treated unit.
Each treated unit is then closer to the controls in terms of magnitude. Due to
the equal-weights constraint (11), the DID model is not flexible enough and its
performance does not reach that of the other models.
The FFNN model improves the MAPE by 1.2 points from $S1$ to $S2$. The neural
network models the dependencies between the treated ODs and gains accuracy by
estimating the revenue of each treated OD.
The advantage of $S2$ is that we predict separately the outcome for each unit
at each day. In addition to computing the error between $\hat{Y}_{t}$ and
$Y_{t}$ for each pseudo-treatment period, we can also compute the error
between $\hat{Y}_{it}$ and $Y_{it}$, for $i=1,\ldots,N^{\text{t}}$, and
$t=1,\ldots,T_{1}$, namely
$\text{MAPE}^{\text{od}}_{i}=\frac{1}{T_{1}}\sum_{t=1}^{T_{1}}\frac{|\hat{Y}_{it}-Y_{it}|}{Y_{it}},\quad\text{MAPE}^{\text{od}}=\frac{1}{N^{\text{t}}}\sum_{i=1}^{N^{\text{t}}}\text{MAPE}^{\text{od}}_{i}.$ (28)
Figure 4 presents the values of $\text{MAPE}^{\text{od}}$ for each pseudo-
treatment period, and Table 3 reports the average value of
$\text{MAPE}^{\text{od}}$ over all pseudo-treatment periods. The results are
consistent across periods. The SC method reaches the best accuracy, with an
average error of 13.1% on the daily revenue of a single treated OD. The
FFNN model has a similar performance, with 13.3% error on average. We
conclude that estimating the counterfactual revenue of a single OD is difficult
and that we gain significant accuracy by aggregating over the treated ODs. In the
remainder of the paper, we only consider models CR, CR-EN, FFNN, RSC and SC as
they perform best.
Figure 4: $\text{MAPE}^{\text{od}}$ for each pseudo-treatment period in $S2$.
| Model | $\text{MAPE}^{\text{od}}$ |
|---|---|
| CR | 13.8% |
| CR-EN | 16.0% |
| DID | 35.6% |
| FFNN | 13.3% |
| MCNNM | 16.2% |
| RSC | 13.6% |
| SC | $\mathbf{13.1}$% |
Table 3: $\text{MAPE}^{\text{od}}$ averaged over all pseudo-treatment periods
in $S2$.
#### 4.2.2 Total Predicted Revenue
In this section, we analyze the models’ performance over a complete pseudo-
treatment period. We first consider a pseudo-treatment period of 6 months, and
we then analyze the effect of a reduced length.
Figure 5 presents the value of $\text{tAPE}_{p}$ defined in (26) for pseudo-
treatment periods $p=1,\ldots,15$. The upper graph shows the results for $S1$
and the lower the results for $S2$, respectively. To illustrate the order of
magnitude of treatment impacts, we depict the 1% and 2% thresholds as dashed
lines. We note that the FFNN and CR-EN models have a higher variance than the
SC, CR and RSC methods, which stay below 3% error at each period. Moreover,
the FFNN model is stable across all periods in $S2$.
(a) $S1$ (one model for a single aggregated unit)
(b) $S2$ (one model per treated unit)
Figure 5: Values of $\text{tAPE}_{p}$ for each pseudo-treatment period.
Table 4 reports the values of
$\text{tAPE}=\frac{1}{15}\sum_{p=1}^{15}\text{tAPE}_{p}$ for each model. All
models are able to predict the total 6-month counterfactual revenue with less
than 3.5% error on average, in both settings. For $S1$, the CR method reaches
the best performance, with 1.1% error on average; for $S2$, the best is the
FFNN model with 1.0% average error.
| Model | tAPE ($S1$) | tAPE ($S2$) |
|---|---|---|
| CR | 1.1% | 1.1% |
| CR-EN | 2.5% | 2.5% |
| FFNN | 3.3% | 1.0% |
| RSC | 1.2% | 1.2% |
| SC | 1.6% | 3.3% |
Table 4: Average tAPE over all pseudo-treatment periods.
We present in Figure 6 the values of $\text{tPE}_{p}$ defined in (25) at each
period $p=1,\ldots,15$. It shows that for $S1$, the FFNN model systematically
overestimates the total counterfactual revenue while the SC, CR-EN and RSC
methods systematically underestimate it. For $S2$, we observe the same behavior
for the SC, CR-EN and RSC models, while the FFNN and CR methods either
underestimate or overestimate the counterfactual revenue.
(a) $S1$ (one model for a single aggregated unit)
(b) $S2$ (one model per treated unit)
Figure 6: Values of $\text{tPE}_{p}$ for each pseudo-treatment period
$p=1,\ldots,15$.
##### Length of the Treatment Period
We now turn our attention to the effect of the treatment period duration on
performance. For this purpose, we study the variations of
$\text{tAPE}_{p}$ for different values of $T_{1}$ for the pseudo-treatment
periods $p=1,\ldots,15$. We analyze the results for each period but for
illustration purposes we focus only on the second one. We report the values
for all the other periods in Appendix Length of Treatment Period (the general
observations we describe here remain valid).
Figure 7 presents the variations of $\text{tAPE}_{2}$ against the length
$T_{1}$ for the different models. The upper graph shows the results for $S1$
and the lower one the results for $S2$, respectively. The black lines (solid
and dashed) represent the 1%, 2% and 3% thresholds. In $S1$, the values of
$\text{tAPE}_{2}$ for FFNN are below 3% after 30 days. After 30 and 39 days,
respectively, the $\text{tAPE}_{2}$ values for CR and SC are between 1% and 2%.
The values of $\text{tAPE}_{2}$ are below 1% after 68 days for CR-EN and after
43 days for RSC. In $S2$, $\text{tAPE}_{2}$ for FFNN is below 2% after 52 days
and below 1% after 84 days. For CR and CR-EN, it is below 2% after 10 and 18
days, respectively, and below 1% after 44 days for RSC. Hence, the results
show that the treatment period can be shorter than six months, as the models
are accurate after only a few weeks.
(a) $S1$ (one model for a single aggregated unit)
(b) $S2$ (one model per treated unit)
Figure 7: Values of $\text{tAPE}_{2}$ varying with the length of the treatment
period $T_{1}$.
The CR, RSC and FFNN models achieve high accuracy, with errors below 1.2%,
on the problem of predicting the counterfactual total revenue. This is
compelling since we are interested in detecting a small treatment impact. As
anticipated in Section 4.1, we also considered simpler approaches that are
common practice, such as comparing to year-over-year revenue. In this case, the
counterfactual revenue is defined as the revenue generated during the same
period of the previous year. It performed poorly, with a tAPE between 7%
and 10% at each pseudo-treatment period, and is therefore not accurate enough
to detect small impacts.
In the following section, we present a validation study where we simulate
small impacts and assess our ability to estimate them with counterfactual
prediction models.
### 4.3 Validation: Revenue Impact Estimate for Known Ground Truth
We consider a pseudo-treatment period of 6 months and the setting $S2$. In
this case, models FFNN, CR and RSC provide accurate estimations of the
counterfactual total revenue with respectively 1%, 1.1% and 1.2% of error on
average over the pseudo-treatment periods. We restrict the analysis that
follows to those models. We proceed in two steps. First, we simulate a
treatment by applying multiplicative noise with a mean above one to the revenue
of the treated units at each day of each pseudo-treatment period. We denote by
$\tilde{Y}_{t}^{\text{obs}}$ the new treated value,
$\tilde{Y}_{t}^{\text{obs}}=Y_{t}(0)\times\epsilon,\quad\epsilon\sim\text{Lognormal}(\mu_{\epsilon},\sigma_{\epsilon}^{2}),$
where $\sigma_{\epsilon}^{2}=0.0005$. We simulate several treatment impacts that
differ by the value of $\mu_{\epsilon}$. Second, we compute the impact
estimate with (5) from the counterfactual predictions and compare it to the
actual treatment applied in the first step. We present the results for one
pseudo-treatment period, $p=2$.
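The simulation step can be sketched as follows. The untreated revenue series is a stand-in, and the final line is an illustrative relative-impact estimate in the spirit of (5), computed here against the exact counterfactual:

```python
import numpy as np

rng = np.random.default_rng(42)
T1 = 181                                   # 6-month treatment period
y0 = rng.uniform(0.8, 1.2, size=T1)        # stand-in untreated revenue Y_t(0)

# Multiplicative lognormal noise: mu_eps controls the simulated impact.
mu_eps, sigma2_eps = 0.02, 0.0005
eps = rng.lognormal(mean=mu_eps, sigma=np.sqrt(sigma2_eps), size=T1)
y_treated = y0 * eps                       # simulated treated revenue

# With the exact counterfactual (y0), the estimated relative impact recovers
# roughly 100 * (exp(mu_eps + sigma2_eps / 2) - 1), i.e. about 2%.
tau_hat = (y_treated.sum() - y0.sum()) / y0.sum() * 100
```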
Table 5 reports the values of the estimated impact for different values of
$\mu_{\epsilon}$. The first row shows the values for the true counterfactuals;
it is used as a reference, as it is the exact simulated impact. The results
show that the RSC and CR models overestimate the impact while the FFNN model
underestimates it. This is because the former underestimate their
counterfactual predictions while the latter overestimates them. Due to the high
accuracy of the counterfactual predictions, both the underestimation and the
overestimation are nevertheless small. We can detect impacts larger than the
accuracy of the counterfactual prediction models, and the simulation shows that
the estimates are close to the actual impact.
| Counterfactuals | $\mu_{\epsilon}=0.01$ | $\mu_{\epsilon}=0.02$ | $\mu_{\epsilon}=0.03$ | $\mu_{\epsilon}=0.05$ |
|---|---|---|---|---|
| Ground truth | 1.0% | 2.0% | 3.0% | 5.1% |
| RSC | 1.7% | 2.6% | 3.7% | 5.7% |
| CR | 1.5% | 2.5% | 3.5% | 5.6% |
| FFNN | 0.6% | 1.6% | 2.6% | 4.7% |
Table 5: Estimation of the revenue impact $\hat{\tau}$ of simulated treatment.
Figure 8 presents the daily revenue on a subset of the treatment period. The
estimation of the daily revenue impact is the difference between the simulated
revenue (solid and dashed black lines) and the counterfactual predictions
(colored lines). This figure reveals that even though the accuracy of the
daily predictions is not as good as on the complete treatment period, we can
still detect even a small daily impact.
Figure 8: Daily revenue and predictions for a subset of the pseudo-treatment
period 2. The labels in the y-axis are hidden for confidentiality reasons.
##### Prediction Intervals
It is clear that prediction intervals for the estimated revenue impact are of
high importance. However, it is far from trivial to compute them for most of
the counterfactual prediction models in our setting. Under some assumptions,
the CR model in setting $S1$ constitutes the exception. More precisely, if the
residuals satisfy conditions (i) independent and identically distributed and
(ii) normally distributed, then we can derive a prediction interval for the
sum of the daily predicted revenue. For the simulated impacts reported in
Table 5, we obtain 99% prediction intervals with widths of 2.2%. It means that
we can detect an impact of 2% or more with high probability.
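Under the stated i.i.d.-normal assumption, a prediction interval for the sum of the daily predictions can be sketched as follows; the residual series, the revenue scale and the omission of parameter-estimation variance are simplifying assumptions.

```python
import numpy as np

def sum_prediction_interval(residuals, y_hat_sum, horizon, z=2.576):
    """Prediction interval for the sum of `horizon` daily predictions under
    i.i.d. normal residuals: the variance of the sum is horizon * sigma^2,
    so the half-width grows like sqrt(horizon). z = 2.576 gives a 99% level.
    Parameter-estimation uncertainty is ignored for simplicity."""
    sigma = np.std(residuals, ddof=1)
    half_width = z * sigma * np.sqrt(horizon)
    return y_hat_sum - half_width, y_hat_sum + half_width

rng = np.random.default_rng(1)
residuals = rng.normal(0.0, 0.05, size=1500)   # pre-treatment fit residuals
lo, hi = sum_prediction_interval(residuals, y_hat_sum=181.0, horizon=181)
```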
Cattaneo et al. (2020) develop prediction intervals for the SC model that
account for two distinct sources of randomness: the construction of the
weights $\mathbf{\omega}$ and the unobservable stochastic error in the
treatment period. Moreover, Zhu and Laptev (2017) build prediction intervals
for neural network predictions that consider three sources of randomness:
model uncertainty, model misspecification and data generation process
uncertainty. Both studies focus on computing prediction intervals for _each_
prediction. We face an additional issue as we need a prediction interval for
the sum of the predictions. As evidenced by these two studies, computing
accurate prediction intervals is a challenging topic on its own and we
therefore leave it for future research.
## 5 Conclusion
Revenue management systems are crucial to the profitability of airlines and
other industries. Due to their importance, solution providers and airlines
invest in the improvement of the different system components. In this context,
it is important to estimate the impact on an outcome such as revenue after a
proof of concept. We addressed this problem using counterfactual prediction
models.
In this paper, we assumed that an airline applies a treatment (a change to the
system) on a set of ODs during a limited time period. We aimed to estimate the
total impact over all of the treated units and over the treatment period. We
proceeded in two steps. First, we computed counterfactual predictions of the
ODs’ outcome, that is, the outcome if no treatment had been applied. Then, we
estimated the impact as the difference between the observed revenue under
treatment and the counterfactual predictions.
We compared the performance of several counterfactual prediction models and a
deep-learning model in two different settings. In the first one, we predicted
the aggregated outcome of the treated units while in the second one, we
predicted the outcome of each treated unit and aggregated the predictions. We
showed that synthetic control methods and the deep-learning model reached
competitive accuracy on the counterfactual predictions, which in turn allows us
to accurately estimate the revenue impact. The deep-learning model reached the
lowest error of 1% in the second setting, leveraging the dependency between
treated units. The best counterfactual prediction model, which in the second
setting assumes treated units are independent, reached 1.1% error in both
settings. We showed that we can reduce the length of a treatment period and
preserve this level of accuracy. This can be useful as it potentially reduces
the cost of proofs of concept.
We believe that the methodology is broadly applicable to decision support
systems and not limited to revenue management (e.g., a software upgrade or a
new marketing policy). It can assess the impact of a proof of concept under
the following fairly mild assumptions: (i) the units under consideration
(e.g., origin-destination pairs, markets, sites or products) can be divided
into two subsets, one affected by the treatment and one that is unaffected;
(ii) time can be divided into two (not necessarily consecutive) periods, a
pre-treatment period and a treatment period; and (iii) the outcome of interest
(any objective function value, for example, revenue, cost or market share) can
be measured for each unit.
Finally, we will dedicate future research to devising prediction intervals for
the sum of the counterfactual predictions, which in turn will lead to a
prediction interval for the estimated impact.
## Acknowledgements
We are grateful for the invaluable support from the whole Crystal AI team who
built the demand forecasting solution. The team included personnel from both
Air Canada and IVADO Labs. In particular, we would like to thank Richard
Cleaz-Savoyen and the Revenue Management team for their careful reading and
comments that helped improve the manuscript. We also thank Florian Soudan from
IVADO Labs and Pedro Garcia Fontova from Air Canada for their help and advice
in training the neural network models. We would especially like to thank
William Hamilton from IVADO Labs, who contributed ideas and was involved in
the analysis of the results. Maxime Cohen provided valuable comments that
helped us improve the manuscript. We express our gratitude to Peter Wilson (Air
Canada), who gave valuable business insights guiding the selection of control
units. The project was partially funded by Scale AI. Finally, the first author
would like to thank Louise Laage and the third author would like to thank Luca
Nunziata and Marco Musumeci for many stimulating discussions on counterfactual
prediction and synthetic control.
## References
* Abadie and Gardeazabal (2003) Abadie, A. and Gardeazabal, J. The economic costs of conflict: A case study of the Basque Country. _American Economic Review_ , 93(1):113–132, 2003.
* Abadie et al. (2010) Abadie, A., Diamond, A., and Hainmueller, J. Synthetic control methods for comparative case studies: Estimating the effect of California’s tobacco control program. _Journal of the American Statistical Association_ , 105(490):493–505, 2010.
* Abadie et al. (2015) Abadie, A., Diamond, A., and Hainmueller, J. Comparative Politics and the Synthetic Control Method. _American Journal of Political Science_ , 59(2):495–510, 2015.
* Amjad et al. (2018) Amjad, M., Shah, D., and Shen, D. Robust synthetic control. _The Journal of Machine Learning Research_ , 19(1):802–852, 2018.
* Angrist and Krueger (1999) Angrist, J. D. and Krueger, A. B. Empirical strategies in labor economics. In _Handbook of Labor Economics_ , volume 3, 1277–1366. Elsevier, 1999.
* Angrist and Pischke (2008) Angrist, J. D. and Pischke, J.-S. _Mostly harmless econometrics: An empiricist’s companion_. Princeton University Press, 2008.
* Ashenfelter and Card (1985) Ashenfelter, O. and Card, D. Using the longitudinal structure of earnings to estimate the effect of training programs. _The Review of Economics and Statistics_ , 67(4):648–660, 1985.
* Athey and Imbens (2006) Athey, S. and Imbens, G. W. Identification and inference in nonlinear difference-in-differences models. _Econometrica_ , 74(2):431–497, 2006.
* Athey et al. (2018) Athey, S., Bayati, M., Doudchenko, N., Imbens, G., and Khosravi, K. Matrix Completion Methods for Causal Panel Data Models, 2018.
* Bai (2003) Bai, J. Inferential theory for factor models of large dimensions. _Econometrica_ , 71(1):135–171, 2003.
* Bai and Ng (2002) Bai, J. and Ng, S. Determining the number of factors in approximate factor models. _Econometrica_ , 70(1):191–221, 2002.
* Barron (1994) Barron, A. R. Approximation and estimation bounds for artificial neural networks. _Machine learning_ , 14(1):115–133, 1994.
* Bertrand et al. (2004) Bertrand, M., Duflo, E., and Mullainathan, S. How much should we trust differences-in-differences estimates? _The Quarterly Journal of Economics_ , 119(1):249–275, 2004.
* Candès and Plan (2010) Candès, E. J. and Plan, Y. Matrix completion with noise. _Proceedings of the IEEE_ , 98(6):925–936, 2010.
* Candès and Recht (2009) Candès, E. J. and Recht, B. Exact matrix completion via convex optimization. _Foundations of Computational Mathematics_ , 9(6):717, 2009.
* Card (1990) Card, D. The impact of the Mariel boatlift on the Miami labor market. _Industrial and Labor Relations Review_ , 43(2):245–257, 1990.
* Card and Krueger (1994) Card, D. and Krueger, A. B. Minimum wages and employment: A case study of the fast-food industry in New Jersey and Pennsylvania. _American Economic Review_ , 84(4):772–793, 1994.
* Cattaneo et al. (2020) Cattaneo, M. D., Feng, Y., and Titiunik, R. Prediction intervals for synthetic control methods. _arXiv:1912.07120_ , 2020.
* Chatterjee et al. (2015) Chatterjee, S. et al. Matrix estimation by universal singular value thresholding. _The Annals of Statistics_ , 43(1):177–214, 2015.
* Cohen et al. (2019) Cohen, M., Jacquillat, A., and Serpa, J. A field experiment on airline lead-in fares. Technical report, Working Paper, 2019.
* Doudchenko and Imbens (2016) Doudchenko, N. and Imbens, G. W. Balancing, regression, difference-in-differences and synthetic control methods: A synthesis. Technical report, National Bureau of Economic Research, 2016.
* Fiig et al. (2019) Fiig, T., Weatherford, L. R., and Wittman, M. D. Can demand forecast accuracy be linked to airline revenue? _Journal of Revenue and Pricing Management_ , 18(4):291–305, 2019.
* Goodfellow et al. (2016) Goodfellow, I., Bengio, Y., and Courville, A. _Deep Learning_. MIT Press, 2016.
* Imbens and Rubin (2015) Imbens, G. W. and Rubin, D. B. _Causal inference in statistics, social, and biomedical sciences_. Cambridge University Press, 2015.
* Koushik et al. (2012) Koushik, D., Higbie, J. A., and Eister, C. Retail Price Optimization at InterContinental Hotels Group. _INFORMS Journal on Applied Analytics_ , 42(1):45–57, 2012.
* Lopez Mateos et al. (2021) Lopez Mateos, D., Cohen, M. C., and Pyron, N. Field Experiments for Testing Revenue Strategies in the Hospitality Industry. _Cornell Hospitality Quarterly_ , 1–10, 2021.
* Mazumder et al. (2010) Mazumder, R., Hastie, T., and Tibshirani, R. Spectral regularization algorithms for learning large incomplete matrices. _Journal of Machine Learning Research_ , 11:2287–2322, 2010.
* Meyer et al. (1995) Meyer, B. D., Viscusi, W. K., and Durbin, D. L. Workers’ compensation and injury duration: evidence from a natural experiment. _The American Economic Review_ , 85(3):322–340, 1995.
* Pekgün et al. (2013) Pekgün, P., Menich, R. P., Acharya, S., Finch, P. G., Deschamps, F., Mallery, K., Sistine, J. V., Christianson, K., and Fuller, J. Carlson Rezidor Hotel Group Maximizes Revenue Through Improved Demand Management and Price Optimization. _INFORMS Journal on Applied Analytics_ , 43(1):21–36, 2013.
* Poulos (2017) Poulos, J. RNN-based counterfactual time-series prediction. _arXiv preprint arXiv:1712.03553_ , 2017.
* Rosenbaum and Rubin (1983) Rosenbaum, P. R. and Rubin, D. B. The central role of the propensity score in observational studies for causal effects. _Biometrika_ , 70(1):41–55, 1983.
* Sagi and Rokach (2018) Sagi, O. and Rokach, L. Ensemble learning: A survey. _Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery_ , 8(4), 2018.
* Shorten and Khoshgoftaar (2019) Shorten, C. and Khoshgoftaar, T. M. A survey on image data augmentation for deep learning. _Journal of Big Data_ , 6(1):60, 2019.
* Talluri and Van Ryzin (2005) Talluri, K. T. and Van Ryzin, G. J. _The Theory and Practice of Revenue Management_. Springer Science & Business Media, 2005.
* Weatherford and Belobaba (2002) Weatherford, L. and Belobaba, P. Revenue impacts of fare input and demand forecast accuracy in airline yield management. _Journal of the Operational Research Society_ , 53(8):811–821, 2002.
* Zhu and Laptev (2017) Zhu, L. and Laptev, N. Deep and confident prediction for time series at Uber. In _2017 IEEE International Conference on Data Mining Workshops (ICDMW)_ , 103–110, 2017.
## Appendix
### Length of Treatment Period
We present here the results of the analysis of the length of the treatment period for all pseudo-treatment periods.
Figures 9–22: Values of tAPE varying with the length of the treatment period for pseudo-treatment periods 1 and 3–15, respectively. Each figure contains panels (a) $S1$ and (b) $S2$.
# Learning From Revisions:
Quality Assessment of Claims in Argumentation at Scale
Gabriella Skitalinskaya Jonas Klaff
Department of Computer Science
University of Bremen
Bremen, Germany
{gabski<EMAIL_ADDRESS>
& Henning Wachsmuth
Department of Computer Science
Paderborn University
Paderborn, Germany
<EMAIL_ADDRESS>
###### Abstract
Assessing the quality of arguments and of the claims the arguments are
composed of has become a key task in computational argumentation. However,
even if different claims share the same stance on the same topic, their
assessment depends on the prior perception and weighting of the different
aspects of the topic being discussed. This renders it difficult to learn
topic-independent quality indicators. In this paper, we study claim quality
assessment irrespective of discussed aspects by comparing different revisions
of the same claim. We compile a large-scale corpus with over 377k claim
revision pairs of various types from kialo.com, covering diverse topics from
politics, ethics, entertainment, and others. We then propose two tasks: (a)
assessing which claim of a revision pair is better, and (b) ranking all
versions of a claim by quality. Our first experiments with embedding-based
logistic regression and transformer-based neural networks show promising
results, suggesting that learned indicators generalize well across topics. In
a detailed error analysis, we give insights into what quality dimensions of
claims can be assessed reliably. We provide the data and scripts needed to
reproduce all results (data and code: https://github.com/GabriellaSky/claimrev).
## 1 Introduction
Assessing argument quality is as important as it is questionable in nature. On
the one hand, identifying the good and the bad claims and reasons for arguing
on a given topic is key to convincingly support or attack a stance in debating
technologies Rinott et al. (2015), argument search Ajjour et al. (2019), and
similar. On the other hand, argument quality can be considered on different
granularity levels and from diverse perspectives, many of which are inherently
subjective Wachsmuth et al. (2017a); they depend on the prior beliefs and
stance on a topic as well as on the personal weighting of different aspects of
the topic Kock (2007).
Claim before Revision | Claim after Revision | Type
---|---|---
Dogs can help disabled people function better. | Dogs can help disabled people to navigate the world better. | Claim Clarification
African American soldiers joined unionists to fight for their freedom. | Black soldiers joined unionists to fight for their freedom. | Typo / Grammar Correction
Elections insure the independence of the judiciary. | Elections ensure the independence of the judiciary. | Typo / Grammar Correction
Israel has a track record of selling US arms to third countries without authorization. | Israel has a track record of selling US arms to third countries without authorization (https://www.jstor.org/stable/1149008?seq=1#page_scan_tab_contents). | Corrected / Added links
Table 1: Four examples of claims from Kialo before and after revision, along
with the type of revision performed.
Existing research largely ignores this limitation, by focusing on learning to
predict argument quality based on subjective assessments of human annotators
(see Section 2 for examples). In contrast, Habernal and Gurevych (2016)
control for topic and stance to compare the convincingness of arguments.
Wachsmuth et al. (2017b) abstract from an argument’s text, assessing its
relevance only structurally. Lukin et al. (2017) and El Baff et al. (2020)
focus on personality-specific and ideology-specific quality perception,
respectively, whereas Toledo et al. (2019a) asked annotators to disregard
their own stance in judging length-restricted arguments. However, none of
these approaches controls for the concrete aspects of a topic that the
arguments claim and reason about. This renders it difficult to learn what
makes an argument and its building blocks good or bad in general.
In this paper, we study quality in argumentation irrespective of the discussed
topics, aspects, and stances by assessing different revisions of the basic
building blocks of arguments, i.e., claims. Such revisions are found in large
quantities on online debate platforms such as kialo.com, where users post
claims, other users suggest revisions to improve claim quality (in terms of
clarity, grammaticality, grounding, etc.), and moderators approve or
disapprove them. By comparing the quality of different revisions of the same
instance, we argue that we can learn general quality characteristics of
argumentative text and, to a wide extent, abstract from prior perceptions and
weightings.
To address the proposed problem, we present a new large-scale corpus,
consisting of 124k unique claims from kialo.com spanning a diverse range of
topics related to politics, ethics, and several others (Section 3). Using
distant supervision, we derive a total number of 377k claim revision pairs
from the platform, each reflecting a quality improvement, often, with a
specified revision type. Four examples are shown in Table 1. To the best of
our knowledge, this is the first corpus to target quality assessment based on
claim revisions. In a manual annotation study, we provide support for our
underlying hypothesis that a revision improves a claim in most cases, and we
test how much the revision types correlate with known argument quality
dimensions.
Given the corpus, we study two tasks: (a) how to compare revisions of a claim
by quality and (b) how to rank a set of claim revisions. As initial approaches
to the first task, we select in Section 4 a “traditional” logistic regression
model based on word embeddings as well as transformer-based neural networks
Vaswani et al. (2017), such as BERT Devlin et al. (2019) and SBERT Reimers and
Gurevych (2019). For the ranking task, we consider the Bradley-Terry-Luce
model Bradley and Terry (1952); Luce (2012) and SVMRank Joachims (2006). They
achieve promising results, indicating that the compiled corpus allows learning
topic-independent characteristics associated with the quality of claims
(Section 5). To understand what claim quality improvements can be assessed
reliably, we then carry out a detailed error analysis for different revision
types and numbers of revisions.
The main contributions of our work are: (1) A new corpus for topic-independent
claim quality assessment, with distantly supervised quality improvement labels
of claim revision pairs, (2) initial promising approaches to the tasks of
claim quality classification and ranking, and (3) insights into what works
well in claim quality assessment and what remains to be solved.
## 2 Related Work
In recent years, there has been increasing research on the quality of arguments and of the claims and reasoning they are composed of. Wachsmuth et al. (2017a) describe argumentation quality as a multidimensional concept that can be considered from logical, rhetorical, and dialectical perspectives. To achieve a common understanding, the authors suggest a unified framework with 15 quality dimensions, which together give a holistic quality evaluation at a certain abstraction level. They point out that several dimensions may be perceived differently depending on the target audience. In recent follow-up work, Wachsmuth and Werner (2020) examined how well each dimension can be assessed based on plain text only.
Most existing quality assessment approaches target a single dimension. On
mixed-topic student essays, Persing and Ng (2013) learn to score the clarity
of an argument’s thesis, Persing and Ng (2015) do the same for argument
strength, and Stab and Gurevych (2017) classify whether an argument’s premises
sufficiently support its conclusion. All these are trained on pointwise
quality annotations in the form of scores or binary judgments. Gretz et al.
(2019) provide a corpus with crowdsourced quality annotations for 30,497
arguments, the largest to date for pointwise argument quality. The authors
studied how their annotations correlate with the 15 dimensions from the
framework of Wachsmuth et al. (2017a), finding that only global relevance and
effectiveness are captured. Similarly, Lauscher et al. (2020) built a new
corpus based on the framework to then exploit interactions between the
dimensions in a neural approach. We present a small related annotation study
for our dataset below. However, we follow Habernal and Gurevych (2016) in that
we cast argument quality assessment as a relation classification problem,
where the goal is to identify the better among a pair of instances.
In particular, Habernal and Gurevych (2016) created a dataset with argument
convincingness pairs on 32 topics. To mitigate annotator bias, the arguments
in a pair always have the same stance on the same topic. The more convincing
argument is then predicted using a feature-rich SVM and a simple bidirectional
LSTM. Other approaches to the same task map passage representations to real-
valued scores using Gaussian Process Preference Learning Simpson and Gurevych
(2018) or represent arguments by the sum of their token embeddings Potash et
al. (2017), later extended by a Feed Forward Neural Network Potash et al.
(2019). Recently, Gleize et al. (2019) employed a Siamese neural network to
rank arguments by the convincingness of evidence. In our experiments below, we
take on some of these ideas, but also explore the impact of transformer-based
methods such as BERT Devlin et al. (2019), which have been shown to predict
argument quality well Gretz et al. (2019).
Potash et al. (2017) observed that longer arguments tend to be judged better
in existing corpora, a phenomenon we will also check for below. Toledo et al.
(2019b) prevent such bias in their corpora for both pointwise and pairwise
quality, by restricting the length of arguments to 8–36 words. The authors
define quality as the level of preference for an argument over other arguments
with the same stance, asking annotators to disregard their own stance. For a
more objective assessment of argument relevance, Wachsmuth et al. (2017b)
abstract from content, ranking arguments only based on structural relations,
but they employ majority human assessments for evaluation. Lukin et al. (2017)
take a different approach, including knowledge about the personality of the
reader into the assessment, and El Baff et al. (2020) study the impact of
argumentative texts on people depending on their political ideology.
As can be seen, several approaches aim to control for length, stance,
audience, or similar. However, all of them still compare argumentative texts
with different content and meaning in terms of the aspects of topics being
discussed. In this work, we assess quality based on different revisions of the
same text. In this setting, the quality is primarily focused on how a text is
formulated, which will help to better understand what influences argument
quality in general, irrespective of the topic. To be able to do so, we refer
to online debate portals.
Debate portals give users the opportunity to discuss their views on a wide
range of topics. Existing research has used the rich argumentative content and
structure of different portals for argument mining, including createdebate.com
Habernal and Gurevych (2015), idebate.org Al-Khatib et al. (2016), and others.
Also, large-scale debate portal datasets form the basis of applications such
as argument search engines Ajjour et al. (2019). Unlike these works, we
exploit debate portals for studying quality. Tan et al. (2016) predicted
argument persuasiveness in the discussion forum ChangeMyView from ground-truth
labels given by opinion posters, and Wei et al. (2016) used user upvotes and
downvotes for the same purpose. Here, we resort to kialo.com, where users
cannot only state argumentative claims and vote on the impact of claims
submitted by others, but they can also help improve claims by suggesting
revisions, which are approved or disapproved by moderators. While Durmus et
al. (2019) assessed quality based on the impact value of claims from
kialo.com, we derive information on quality from the revision history of
claims.
The only work we are aware of that analyzes revision quality of argumentative
texts is the study of Afrin and Litman (2018). From the corpus of Zhang et al.
(2017) containing 60 student essays with three draft versions each, 940
sentence writing revision pairs were annotated for whether the revision
improves essay quality or not. The authors then trained a random forest
classifier for automatic revision quality classification. In contrast, instead
of sentences, we shift our focus to claims. Moreover, our dataset is orders of
magnitude larger and includes notably longer revision chains, which enables
deeper analyses and more reliable prediction of revision quality using data-
intensive methods.
## 3 Data
Here, we present our corpus created based on claim revision histories
collected from kialo.com.
### 3.1 A New Corpus based on Kialo
Kialo is a typical example of an online debate portal for collaborative
argumentative discussions, where participants jointly develop complex pro/con
debates on a variety of topics. The scope ranges from general topics
(religion, fair trade, etc.) to very specific ones, for instance, on
particular policy-making (e.g., whether wealthy countries should provide
citizens with a universal basic income). Each debate consists of a set of
claims and is associated with a list of related pre-defined generic
categories, such as politics, ethics, education, and entertainment.
What differentiates Kialo from other portals is that it allows editing claims
and tracking changes made in a discussion. All users can help improve existing
claims by suggesting edits, which are then accepted or rejected by the
moderator team of the debate. As every suggested change is discussed by the
community, this collaborative process should lead to a continuous improvement
of claim quality and a diverse set of claims for each topic.
As a result of the editing process, claims in a debate have a version history
in the format of claim pairs, forming a chain where one claim is the successor
of another and is considered to be of higher quality (examples found in Table
1). In addition, claim pairs may have a revision type label assigned to them via a non-mandatory, free-form text field in which moderators explain the reason for the revision.
#### Base Corpus
To compile the corpus, we scraped all 1628 debates found on Kialo until June 26th, 2020, related to over 1120 categories. They contain 124,312 unique claims along with their revision histories, which comprise 210,222 pairwise relations. The average number of revisions per claim is 1.7, and the maximum length of a revision chain is 36. 74% of all pairs have a revision type.
Overall, there are 8105 unique revision type labels in the corpus. 92% of
labeled claim pairs refer to three types only: Claim Clarification,
Typo/Grammar Correction, and Corrected/Added Links. An overview of the
distribution of revision labels is given in Table 2. We refer to the resulting
corpus as ClaimRevBASE.
Corpus | Type of Instances | Instances
---|---|---
ClaimRevBASE | Total claim pairs | 210 222
| Claim Clarification | 63 729
| Typo/Grammar Correction | 59 690
| Corrected/Added Links | 17 882
| Changed Meaning of Claim | 1 178
| Misc | 10 464
| None | 57 279
ClaimRevEXT | Total claim pairs | 377 659
| Revision distance 1 | 77 217
| Revision distance 2 | 27 819
| Revision distance 3 | 10 753
| Revision distance 4 | 4 460
| Revision distance 5 | 2 055
| Revision distance 6+ | 2 008
Both Corpora | Claim revision chains | 124 312
Table 2: Statistics of the two provided corpus versions. ClaimRevBASE: Number
of claim pairs in total and of each revision type. ClaimRevEXT: Number of
claim pairs in total and of each revision distance. The bottom line shows the
number of unique revision chains in the corpora.
Data pre-processing included removing all claim pairs from debates carried out in languages other than English. Also, we considered claims with fewer than four characters as uninformative and left them out. As we seek to compare different versions of the same claim, claim version pairs with a general change of meaning do not satisfy this description. Thus, we removed such pairs from the corpus, too (inspecting the data revealed that such pairs were mostly generated due to debate restructuring). For this, we assessed the cosine similarity of a given claim pair using spacy.io and removed a pair if the score was lower than the threshold of 0.8.
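This filtering step can be sketched directly over any dense claim embeddings. The paper uses spacy.io document vectors; the function names and the toy vectors below are ours, chosen only to illustrate the thresholding:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def keep_pair(vec_before, vec_after, threshold=0.8):
    """Keep a revision pair only if the two versions are similar enough,
    i.e. the revision presumably did not change the claim's meaning."""
    return cosine_similarity(vec_before, vec_after) >= threshold
```

In practice the vectors would come from the same embedding model for both claim versions, so that the 0.8 threshold is comparable across pairs.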
#### Extended Corpus
To increase the diversity of data available for training models, without actually collecting new data, we applied data augmentation. ClaimRevBASE consists of consecutive claim version pairs, i.e., if a claim $v$ has four versions, it is represented by three pairs: $(v_{1},v_{2})$, $(v_{2},v_{3})$, and $(v_{3},v_{4})$, where $v_{1}$ is the original claim and $v_{4}$ is the latest version. We extend this data by adding all pairs between non-consecutive versions that are inferable transitively. Considering the previous example, this means we add $(v_{1},v_{3})$, $(v_{1},v_{4})$, and $(v_{2},v_{4})$. This is based on our hypothesis that every claim version is of higher quality than its predecessors, which we come back to below. Figure 1 illustrates the data augmentation. We call the augmented corpus ClaimRevEXT.
Figure 1: Visual representation of relations between revisions. Solid and
dashed lines denote original and inferred non-consecutive relations
respectively.
For this corpus, we introduce the concept of revision distance, by which we
mean the number of revisions between two versions. For example, the distance
between $v_{1}$ and $v_{2}$ would be 1, whereas the distance between $v_{1}$
and $v_{3}$ would be 2. The distribution of the revision distances across
ClaimRevEXT is summarized in Table 2.
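The transitive augmentation and the revision-distance bookkeeping described above can be sketched in a few lines; the function name and tuple layout are our own:

```python
from itertools import combinations

def augment_chain(chain):
    """Expand a revision chain [v1, ..., vn] into all (earlier, later)
    pairs, each annotated with its revision distance. The later version
    is assumed to be the better one."""
    pairs = []
    for (i, earlier), (j, later) in combinations(enumerate(chain), 2):
        pairs.append((earlier, later, j - i))  # j - i is the revision distance
    return pairs
```

A chain of four versions yields the three consecutive pairs of ClaimRevBASE plus three inferred pairs, matching the example in the text.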
The number of claim pairs of the 20 most frequent categories in both corpus
versions are presented in Figure 2. We will restrict our view to the topics in
these categories in our experiments.
Figure 2: Number of claim revision pairs in each debate category of the two
provided versions of our corpus (ClaimRevBASE, ClaimRevEXT).
### 3.2 Data Consistency on Kialo
While collaborative content creation enables leveraging the wisdom of large
groups of individuals toward solving problems, it also poses challenges in
terms of quality control, because it relies on varying perceptions of quality,
backgrounds, expertise, and personal objectives of the moderators. To assess
the consistency of the distantly-supervised corpus annotations, we carried out
two annotation studies on samples of our corpus.
#### Consistency of Relative Quality
In this study, we aimed to capture the general perception of claim quality on
a meta-level, by deriving a data-driven quality assessment based on the
revision histories. This was based on our hypothesis that every claim version
is better than its predecessor. To test the validity of this hypothesis, two
authors of this paper annotated whether a revision increases, decreases, or
does not affect the overall claim quality. For this purpose, we randomly
sampled 315 claim revision pairs, found in the supplementary material.
The results clearly support our hypothesis, showing an increase in quality in
292 (93%) of the annotated cases at a Cohen’s $\kappa$ agreement of 0.75,
while 8 (3%) of the revisions had no effect on quality and only 6 (2%) led to
a decrease. On the remaining 2%, the annotators did not reach an agreement.
#### Consistency of Revision Type Labels
Our second annotation study focused on the reliability of the revision type
labels. We restricted our view to the top three revision labels, which cover
96% of all revisions. We randomly sampled 140–150 claim pairs per revision type, 440 in total. For each claim pair, the same annotators as above
provided a label for the revision type from the following set: Claim
Clarification, Typo/Grammar Correction, Corrected/Added Links, and Other.
Comparing the results to the original labels in the corpus revealed that the
annotators strongly agreed with the labels, namely, with Cohen’s $\kappa$ of
0.82 and 0.76 respectively. The level of agreement between the annotators was
even higher ($\kappa$ = 0.84). In further analysis, we observed that most
confusion happened between the revision types Typo/Grammar correction and
Claim Clarification. This may be due to the non-strict nature of the revision
type labels, which leaves space for different interpretations on a case-by-case basis. Still, we conclude that the revision type labels seem reliable in
general.
### 3.3 Quality Dimensions on Kialo
To explore the relationship between the revision types on Kialo and argument
quality in general, we conducted a third annotation study. In particular, for
each of the 315 claim pairs from Section 3.2, one of the authors of this paper provided, for each of the 15 quality dimensions defined by Wachsmuth et al. (2017a), a label indicating whether the revision improved that dimension or not. It should be
noted that the annotators reached an agreement on the revision type for all
these pairs.
| Clarification | Grammar | Links
---|---|---|---
Cogency | -0.31 | -0.31 | 0.65
Local Acceptability | 0.38 | -0.20 | -0.19
Local Relevance | 0.44 | -0.25 | -0.22
Local Sufficiency | -0.28 | -0.33 | 0.62
Effectiveness | 0.02 | -0.35 | 0.34
Credibility | 0.06 | -0.16 | 0.10
Emotional Appeal | 0.00 | 0.00 | 0.00
Clarity | -0.16 | 0.35 | -0.18
Appropriateness | 0.01 | 0.02 | -0.04
Arrangement | 0.00 | 0.00 | 0.00
Reasonableness | 0.07 | -0.04 | -0.04
Global Acceptability | 0.37 | 0.42 | -0.82
Global Relevance | 0.02 | -0.43 | 0.42
Global Sufficiency | 0.00 | 0.00 | 0.00
Overall | -0.05 | 0.00 | 0.05
Pairs with revision type | 120 | 100 | 95
Table 3: Pearson’s $r$ correlation in our annotation study between increases
in the 15 quality dimensions of Wachsmuth et al. (2017a) and the main revision
types: Claim Clarification, Typo/Grammar Correction, Corrected/Added Links.
Moderate and high correlations are shown in bold ($r\geq 0.3$).
Table 3 shows Pearson’s $r$ correlation for each quality dimension for
the three main revision types. We observe a strong correlation between the
revision type Corrected/Added Links and the logical quality dimensions Cogency
(0.65) and Local Sufficiency (0.62), which matches the main purpose of such
revisions: to add supporting information to a claim. The high negative
correlation of this revision type with Global Acceptability (-0.82) indicates
that improvements regarding the dimension in question are more prominent in
other types. Complementarily, Claim Clarification mainly improves the other
logical dimensions (Local Acceptability 0.38, Local Relevance 0.44), matching
the intuition that a clarification helps to ensure a correct understanding of
the meaning. Typo/Grammar corrections, finally, rather seem to support an
acceptable linguistic shape, improving Clarity (0.35) and Global Acceptability
(0.42).
Finding only low correlations for many rhetorical dimensions (credibility,
emotional appeal, etc.) as well as for overall quality, we conclude that the
revisions on Kialo seem to target primarily the general form a well-phrased
claim should have.
## 4 Approaches
To study the two proposed tasks, claim quality classification and claim
quality ranking, on the given corpus, we consider the following approaches.
### 4.1 Claim Quality Classification
We cast this task as a pairwise classification task, where the objective is to
compare two versions of the same claim and determine which one is better. To
solve this task, we compare four methods:
#### Length
To check whether there is a bias towards longer claims in the data, we use a
trivial method which assumes that claims with more characters are better.
#### S-BOW
As a “traditional” method, we employ the siamese bag-of-words embedding
(S-BOW) as described by Potash et al. (2017). We concatenate two bag-of-words
matrices, each representing a claim version from a pair, and input the
concatenated matrix to a logistic regression. We also test whether information
on length improves S-BOW.
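A minimal sketch of the S-BOW idea with scikit-learn, using two toy pairs from Table 1 (the exact feature setup of Potash et al. may differ; labels here mark whether the second claim is the revision):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy revision pairs (claim_a, claim_b, label); label 1 iff claim_b is better.
pairs = [
    ("Dogs can help disabled people function better.",
     "Dogs can help disabled people to navigate the world better.", 1),
    ("Elections ensure the independence of the judiciary.",
     "Elections insure the independence of the judiciary.", 0),
]

# Fit one shared vocabulary over all claim versions.
vectorizer = CountVectorizer()
vectorizer.fit([a for a, b, _ in pairs] + [b for a, b, _ in pairs])

def featurize(a, b):
    # Concatenate the two bag-of-words vectors of a pair ("siamese" input).
    va = vectorizer.transform([a]).toarray()[0]
    vb = vectorizer.transform([b]).toarray()[0]
    return np.concatenate([va, vb])

X = np.array([featurize(a, b) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = LogisticRegression().fit(X, y)
```

Adding a length feature to each concatenated vector is a one-line extension of `featurize`.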
#### BERT
We select the BERT model, as it has become the standard neural baseline. BERT
is a pre-trained deep bidirectional transformer language model Devlin et al.
(2019). For our experiments we use the pre-trained version bert-base-cased, as implemented in the huggingface library (https://huggingface.co/transformers/pretrained_models.html). We fine-tune the model for two epochs using the Adam optimizer with learning rate 1e-5; we chose the number of epochs empirically, picking the best learning rate out of {5e-7, 5e-6, 1e-5, 2e-5, 3e-5}.
#### SBERT
We also use Sentence-BERT (SBERT) to learn to represent each claim version as a sentence embedding Reimers and Gurevych (2019), as opposed to the token-level embeddings of standard BERT models. We fine-tune SBERT based on bert-base-cased using a siamese network structure, as implemented in the sentence-transformers library (https://www.sbert.net/). We set the number of epochs to one, as recommended by the authors Reimers and Gurevych (2019), and we use a batch size of 16, the Adam optimizer with learning rate 1e-5, and a linear learning rate warm-up over 10% of the training data. Our default pooling strategy is MEAN.
| $v_{1}$ | $v_{2}$ | $v_{3}$
---|---|---|---
$v_{1}$ | 0 | 0.018 | 0.002
$v_{2}$ | 0.982 | 0 | 0.428
$v_{3}$ | 0.998 | 0.572 | 0
Table 4: Example of a pairwise score matrix for ranking of three claim
revisions, $v_{1}$–$v_{3}$, given the following pairwise scores:
$(v_{1},v_{2})=(0.018,0.982)$, $(v_{2},v_{3})=(0.428,0.572)$, and
$(v_{1},v_{3})=(0.002,0.998)$.
### 4.2 Claim Quality Ranking
In contrast to the previous task, we cast this problem as a sequence-pair
regression task. After obtaining all pairwise scores using S-BOW, BERT, and
SBERT respectively, we map the pairwise labels to real-valued scores and rank
them using the following models, once for each method.
#### BTL
| | Test set: ClaimRevBASE | | Test set: ClaimRevEXT
---|---|---|---|---
| | Random-Split | Cross-Category | | Random-Split | Cross-Category
Model | | Accuracy | MCC | Accuracy | MCC | | Accuracy | MCC | Accuracy | MCC
Length | | 61.3 / 61.3 | 0.23 / 0.23 | 60.7 / 60.7 | 0.21 / 0.21 | | 60.8 / 60.8 | 0.22 / 0.22 | 60.0 / 60.0 | 0.20 / 0.20
SBOW | | 62.0 / 62.6 | 0.24 / 0.25 | 61.4 / 61.4 | 0.23 / 0.23 | | 64.9 / 65.4 | 0.30 / 0.31 | 63.9 / 64.1 | 0.28 / 0.28
SBOW + Length | | 65.1 / 65.5 | 0.30 / 0.31 | 64.8 / 64.4 | 0.29 / 0.29 | | 67.1 / 67.5 | 0.34 / 0.35 | 66.1 / 66.2 | 0.32 / 0.32
BERT | | 75.5 / 75.2 | 0.51 / 0.51 | 75.1 / 74.1 | 0.51 / 0.49 | | 76.4 / 76.5 | 0.53 / 0.53 | 76.2 / 75.4 | 0.53 / 0.51
SBERT | | 76.2 / 76.2 | 0.53 / 0.52 | 75.5 / 75.4 | 0.51 / 0.51 | | 77.4 / 77.7 | 0.55 / 0.55 | 76.8 / 76.8 | 0.54 / 0.54
Random baseline | | 50.0 / 50.0 | 0.00 / 0.00 | 50.0 / 50.0 | 0.00 / 0.00 | | 50.0 / 50.0 | 0.00 / 0.00 | 50.0 / 50.0 | 0.00 / 0.00
Single claim baseline | | 57.7 / 58.1 | 0.17 / 0.17 | 57.7 / 57.3 | 0.17 / 0.16 | | 58.8 / 59.8 | 0.20 / 0.20 | 58.9 / 58.9 | 0.20 / 0.20
Table 5: Claim quality classification results: Accuracy and Matthews
Correlation Coefficient (MCC) for all tested approaches in the random-split
and the cross-category setting on the two corpus versions. The first value in
each value pair is obtained by a model trained on ClaimRevBASE, the second by
a model trained on ClaimRevEXT. All improvements from one row to the next are
significant at $p<$ 0.001 according to a two-sided Student’s $t$-test.
For mapping, we use the well-established Bradley-Terry-Luce (BTL) model
Bradley and Terry (1952); Luce (2012), in which items are ranked according to
the probability that a given item beats an item chosen randomly. We feed the
BTL model a pairwise-comparison matrix for all revisions related to a claim,
generated as follows: each entry in a row holds the probability that the row’s revision is better than the respective other revision. All diagonal values are set to zero. Table 4 illustrates an example for a set of three claim revisions.
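Using the matrix from Table 4, the ranking criterion can be sketched as follows. This is a simplified reading of the BTL criterion (rank by the average probability of beating a randomly chosen other revision) rather than a full maximum-likelihood fit; the function name is ours:

```python
import numpy as np

def rank_by_win_probability(matrix):
    """Rank revisions by their average probability of beating a randomly
    chosen other revision. Row i, column j holds P(revision i beats j);
    diagonal entries are zero."""
    m = np.asarray(matrix, dtype=float)
    n = m.shape[0]
    scores = m.sum(axis=1) / (n - 1)  # average win probability per revision
    order = np.argsort(-scores)       # indices of revisions, best first
    return scores, order

# Pairwise score matrix from Table 4 (rows/columns: v1, v2, v3).
M = [[0.0,   0.018, 0.002],
     [0.982, 0.0,   0.428],
     [0.998, 0.572, 0.0]]
scores, order = rank_by_win_probability(M)
```

On this matrix the ranking comes out $v_{3} > v_{2} > v_{1}$, i.e., the latest revision wins.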
#### SVMRank
Additionally, we employ SVMRank Joachims (2006), which views the ranking
problem as a pairwise classification task. First, we change the input data,
provided as a ranked list, into a set of ordered pairs, where the (binary)
class label for every pair is the order in which the elements of the pair
should be ranked. Then, SVMRank learns by minimizing the error of the order
relation when comparing all possible combinations of candidate pairs. Given the nature of the algorithm, we cannot work with token embeddings obtained from BERT directly. Thus, we utilize one of the most commonly used approaches to transform token embeddings into a sentence embedding: extracting the special [CLS] token vector Reimers and Gurevych (2019); May et al. (2019). In our experiments we select a linear kernel for the SVM and use PySVMRank (https://github.com/ds4dm/PySVMRank), a Python API to the SVMrank library written in C (www.cs.cornell.edu/people/tj/svm_light/svm_rank.html).
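The pairwise transformation that SVMRank relies on can be sketched as follows; PySVMRank handles this internally, so the function below is only an illustration with our own naming and label convention:

```python
from itertools import combinations

def to_ordered_pairs(ranked_list):
    """Turn a list ranked from worst to best into (x_i, x_j, label)
    training pairs: label 1 means the second element should be ranked
    above the first, label 0 the opposite."""
    pairs = []
    for (_, lower), (_, higher) in combinations(enumerate(ranked_list), 2):
        pairs.append((lower, higher, 1))  # later entry is the better revision
        pairs.append((higher, lower, 0))  # mirrored pair with flipped label
    return pairs
```

The learner then minimizes misordered pairs, which is exactly the classification view of ranking described above.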
## 5 Experiments and Discussion
We now present empirical experiments with the approaches from Section 4. The
goal is to evaluate how hard it is to compare and rank the claim revisions in
our corpus from Section 3 by quality.
### 5.1 Experimental Setup
We carry out experiments in two settings. The first considers creating random
splits over revision histories, ensuring that all versions of the same claim
are in a single split in order to avoid data leakage. We assign 80% of the
revision histories to the training set and the remaining 20% to the test set.
A drawback of this setup is that it is not clear how well models generalize to
unseen debate categories. In the second setting, we therefore evaluate the
methods also in a cross-category setup using a leave-one-category-out
paradigm, which ensures that all claims from the same debate category are
confined to a single split. We split the data in this way to evaluate if our
models learn independent features that are applicable across the diverse set
of categories. To assess the effect of adding augmented data, we evaluate all
models on both ClaimRevBASE and ClaimRevEXT.
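A minimal sketch of the leakage-free random split over revision histories (illustrative helper; only the 80/20 ratio comes from the text):

```python
import random

def split_by_history(histories, train_frac=0.8, seed=0):
    """Split revision histories (each a list of versions of one claim) so
    that all versions of the same claim land in exactly one split,
    avoiding data leakage between training and test sets."""
    rng = random.Random(seed)
    idx = list(range(len(histories)))
    rng.shuffle(idx)
    cut = int(train_frac * len(idx))
    train = [histories[i] for i in idx[:cut]]
    test = [histories[i] for i in idx[cut:]]
    return train, test
```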
| Random-Split | Cross-Category
---|---|---
Model | $r$ | $\rho$ | Top-1 | NDCG | MRR | $r$ | $\rho$ | Top-1 | NDCG | MRR
BTL + SBOW+L | 0.38 | 0.37 | 0.62 | 0.94 | 0.79 | 0.36 | 0.35 | 0.60 | 0.94 | 0.78
BTL + BERT | 0.60 | 0.59 | 0.74 | 0.96 | 0.86 | 0.58 | 0.57 | 0.72 | 0.96 | 0.85
BTL + SBERT | 0.63 | 0.62 | 0.77 | 0.97 | 0.87 | 0.62 | 0.61 | 0.75 | 0.97 | 0.86
SVMRank + SBOW+L | 0.18 | 0.18 | 0.50 | 0.93 | 0.73 | 0.24 | 0.23 | 0.52 | 0.93 | 0.75
SVMRank + BERT CLS | 0.50 | 0.49 | 0.67 | 0.95 | 0.84 | 0.51 | 0.51 | 0.67 | 0.96 | 0.84
SVMRank + SBERT | 0.70 | 0.70 | 0.79 | 0.97 | 0.90 | 0.73 | 0.72 | 0.80 | 0.98 | 0.91
Random baseline | 0.00 | 0.00 | 0.42 | 0.91 | 0.68 | 0.00 | 0.00 | 0.42 | 0.91 | 0.67
Table 6: Claim quality ranking results: Pearson’s $r$ and Spearman’s $\rho$
correlation, as well as top-1 accuracy, NDCG, and MRR for all tested
approaches in the random-split and the cross-category setting on ClaimRevEXT.
In all cases, SVMRank + SBERT is significantly better than all others at
$p<$ 0.001 according to a two-sided Student’s $t$-test.
For quality classification, we report accuracy and the Matthews correlation
coefficient Matthews (1975). We report the mean results over five runs in the
random setting and the mean results across all test categories in the cross-
category setting. To ensure balanced class labels, we create one false claim
pair for each true claim pair by shuffling the order of the claims:
$(v_{1},v_{2},true)\rightarrow(v_{2},v_{1},false)$, where the label denotes
whether the second claim in the pair is of higher quality. We report results
obtained by models trained on ClaimRevBASE and ClaimRevEXT as score pairs in
Table 5.
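The label-balancing step follows directly from the mapping $(v_{1},v_{2},true)\rightarrow(v_{2},v_{1},false)$; a sketch with an illustrative helper name:

```python
def balance_labels(true_pairs):
    """Create one false instance per true instance by swapping claim order:
    (v1, v2, True) -> (v2, v1, False), where the label denotes whether the
    second claim in the pair is of higher quality."""
    out = []
    for v1, v2 in true_pairs:
        out.append((v1, v2, True))
        out.append((v2, v1, False))
    return out
```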
To measure ranking performance, we calculate Pearson’s $r$ and Spearman’s
$\rho$ correlation, as well as NDCG and MRR. We also compute the Top-1
accuracy, i.e. the proportion of claim sets, where the latest version has been
ranked best. We average the results on each claim set across the test set for
each metric. Afterwards we average the results across five runs or across all
categories, depending on the chosen setting.
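As a sketch of the per-claim-set Top-1 and MRR computation, treating the latest version as the gold best as described above (illustrative code, not the paper's):

```python
def top1_and_mrr(claim_sets):
    """Top-1 accuracy and MRR, averaged over claim sets.

    claim_sets: list of predicted score lists, each ordered from the
    oldest to the latest version; the latest version (last entry) is
    taken as the gold-best revision.
    """
    top1, rr = [], []
    for scores in claim_sets:
        # rank of the latest version under the predicted scores (1 = best)
        rank = 1 + sum(1 for s in scores[:-1] if s > scores[-1])
        top1.append(1.0 if rank == 1 else 0.0)
        rr.append(1.0 / rank)
    n = len(claim_sets)
    return sum(top1) / n, sum(rr) / n
```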
### 5.2 Claim Quality Classification
The results in Table 5 show that a claim’s length is a weak indicator of
quality (up to 61.3 accuracy). An intuitive explanation is that, even though
claims with more information may be better, it is also important to keep them
readable and concise.
Despite SBOW’s good performance on predicting convincingness Potash et al.
(2017), the claim quality in our corpus cannot be captured by a model of such
simplicity (maximum accuracy of 65.4). We point out that adding other
linguistic features (for example, part-of-speech tags or sentiment scores) may
further improve SBOW. As an example, we equip SBOW with length features and
observe a significant improvement (up to 67.5).
As for the transformer-based methods, we see that BERT and SBERT consistently
outperform SBOW in all settings on both corpus versions, with SBERT’s accuracy
of up to 77.7 being best.777Additionally, we have experimented with an
adversarial training algorithm, ELECTRA Clark et al. (2020), and obtained
results slightly better than BERT, yet inferior to SBERT. We omit these
results here, since they did not provide any further notable insights.
A comparison of the performance of the methods depending on the corpus used
for training in Table 5 shows the effect of augmenting the original Kialo
data. In most cases, the results obtained by models trained on ClaimRevEXT are
comparable to (slightly higher or lower than) the results obtained by models
trained on ClaimRevBASE. This means that adding relations between non-consecutive claim
versions does not improve the reliability of methods. Given that the
performance scores obtained on the ClaimRevEXT test set are evidently higher
than on the ClaimRevBASE test set, we can conclude that the augmented cases
are easier to classify and the cumulative difference in quality is more
evident.
We can also see in Table 5 that the trained models are able to generalize
across categories; the accuracy and MCC scores in the random split and cross-
category settings for each method are very similar, with only a slight drop in
the cross-category setting. This indicates that the nature of the revisions is
relatively consistent among all categories, yet reveals the existence of some
category-dependent features.
To find out whether BERT really captures the relative revision quality and not
only lexical features present in the original claim, we introduced a Single
claim baseline, analogous to the hypothesis-only baseline in natural language
inference Poliak et al. (2018). It can be seen that the accuracy and MCC
scores are low across all settings (maximum accuracy of 59.8), which indicates
that BERT indeed mostly captures the relative revision quality.
### 5.3 Claim Quality Ranking
Table 6 lists the results of our ranking experiments, which show patterns
similar to the results achieved in the classification task.
We can observe similar patterns in both of the selected ranking approaches:
SBERT consistently outperforms all other considered approaches across all
settings (up to 0.73 and 0.72 in Pearson’s $r$ and Spearman’s $\rho$,
respectively). BERT and SBERT outperform SBOW, indicating that transformer-
based methods are more capable of capturing the relative quality of revisions.
While BTL + BERT obtains results comparable to BTL + SBERT, we find that using
the CLS-vector as a sentence embedding representation leads to lower results.
We point out, though, that using other sentence embeddings and/or pooling
strategies (for example, averaged BERT embeddings) may further improve
results.
Similar to the results of the classification task, we observe only a slight
performance drop in the cross-category setting when using BTL for ranking, yet
an increase when using SVMRank, again emphasizing the topic-independent nature
of claim quality in our corpus.
Task | Label | Accuracy | Instances
---|---|---|---
Type | Claim Clarification | 69.7 | 12 856
| Typo/Grammar Correction | 83.6 | 12 125
| Corrected/Added Links | 89.3 | 3 660
| Changed Meaning of Claim | 57.3 | 232
| Misc | 67.2 | 2 130
| None | 78.3 | 45 842
Distance | Revision distance 1 | 76.2 | 42 341
| Revision distance 2 | 79.6 | 17 478
| Revision distance 3 | 80.6 | 8 023
| Revision distance 4 | 81.0 | 3 979
| Revision distance 5 | 79.5 | 2 103
| Revision distance 6+ | 74.9 | 2 921
| All | 77.7 | 76 845
Table 7: Accuracy of the best model, SBERT, on each single revision type and
distance in ClaimRevEXT, along with the number of instances per case.
### 5.4 Error Analysis
To further explore the capabilities and limitations of the best model, SBERT,
we analyzed its performance on each revision type and distance.
As the upper part of Table 7 shows, SBERT is highly capable of assessing
revisions related to the correction and addition of links and supporting
information. This revision type also obtained the highest correlations between
quality dimensions and type of revision (see Table 3), which indicates that
the patterns of changes performed within this type are more consistent. In
contrast, we observe that the model fails to address revisions related to the
changed meaning of a claim. On the one hand, this may be due to the fact that
such examples are underrepresented in the data. On the other hand, the
consideration of such examples in the selected tasks is questionable, since
changing the meaning of a claim is usually considered the creation of a new
claim rather than a new version of it.
An insight from the lower part of Table 7 is that the accuracy of predictions
increases from revision distance 1 to 4. We obtain better results when
comparing non-consecutive claims than when comparing claim pairs with distance
of 1. An intuitive explanation is that, since each single revision should
ideally improve the quality of a claim, the more revisions a claim undergoes,
the more evident the quality improvement should be. For distances $\geq 5$,
the accuracy starts to decrease again, but this may be due to the limited
number of such cases.
## 6 Conclusion and Future Work
In this paper, we have proposed a new way of assessing quality in
argumentation by considering different revisions of the same claim. This
allows us to focus on characteristics of quality regardless of the discussed
topics, aspects, and stances in argumentation. We provide a new corpus of web
claims, which is the first large-scale corpus to target quality assessment and
revision processes on a claim level. We have carried out initial experiments
on this corpus using traditional and transformer-based models, yielding
promising results but also pointing to limitations. In a detailed analysis we
have studied different kinds of claim revisions and provided insights into the
aspects of a claim that influence the users’ perception of quality. Such
insights could help improve writing support in educational settings, or
identify the best claims for debating technologies and argument search.
We seek to encourage further research on how to help online debate platforms
automate the process of quality control and design automatic quality
assessment systems. Such systems can be used to indicate if the suggested
revisions increase the quality of an argument or recommend the type of
revision needed. We leave it for future work to investigate whether the
learned concepts of quality are transferable to content from other
collaborative online platforms (such as idebate.org or Wikipedia), or to data
from other domains, such as student essays and forum discussions.
## Acknowledgments
We thank Andreas Breiter for feedback on early drafts, and the anonymous
reviewers for their helpful comments. This work was partially funded by the
Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under
project number 374666841, SFB 1342.
## References
* Afrin and Litman (2018) Tazin Afrin and Diane Litman. 2018. Annotation and classification of sentence-level revision improvement. In _Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications_ , pages 240–246, New Orleans, Louisiana. Association for Computational Linguistics.
* Ajjour et al. (2019) Yamen Ajjour, Henning Wachsmuth, Johannes Kiesel, Martin Potthast, Matthias Hagen, and Benno Stein. 2019. Data acquisition for argument search: The args.me corpus. In _KI 2019: Advances in Artificial Intelligence - 42nd German Conference on AI, Kassel, Germany, September 23-26, 2019, Proceedings_ , pages 48–59.
* Al-Khatib et al. (2016) Khalid Al-Khatib, Henning Wachsmuth, Matthias Hagen, Jonas Köhler, and Benno Stein. 2016. Cross-domain mining of argumentative text through distant supervision. In _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 1395–1404. Association for Computational Linguistics.
* Bradley and Terry (1952) Ralph Allan Bradley and Milton E. Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. _Biometrika_ , 39(3/4):324–345.
* Clark et al. (2020) Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. In _International Conference on Learning Representations_.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Durmus et al. (2019) Esin Durmus, Faisal Ladhak, and Claire Cardie. 2019. The role of pragmatic and discourse context in determining argument impact. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 5668–5678, Hong Kong, China. Association for Computational Linguistics.
* El Baff et al. (2020) Roxanne El Baff, Henning Wachsmuth, Khalid Al Khatib, and Benno Stein. 2020. Analyzing the persuasive effect of style in news editorial argumentation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 3154–3160, Online. Association for Computational Linguistics.
* Gleize et al. (2019) Martin Gleize, Eyal Shnarch, Leshem Choshen, Lena Dankin, Guy Moshkowich, Ranit Aharonov, and Noam Slonim. 2019. Are you convinced? choosing the more convincing evidence with a Siamese network. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 967–976, Florence, Italy. Association for Computational Linguistics.
* Gretz et al. (2019) Shai Gretz, Roni Friedman, Edo Cohen-Karlik, Assaf Toledo, Dan Lahav, Ranit Aharonov, and Noam Slonim. 2019. A large-scale dataset for argument quality ranking: Construction and analysis. _arXiv preprint arXiv:1911.11408_.
* Habernal and Gurevych (2015) Ivan Habernal and Iryna Gurevych. 2015. Exploiting debate portals for semi-supervised argumentation mining in user-generated web discourse. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 2127–2137. Association for Computational Linguistics.
* Habernal and Gurevych (2016) Ivan Habernal and Iryna Gurevych. 2016. Which argument is more convincing? analyzing and predicting convincingness of web arguments using bidirectional LSTM. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1589–1599, Berlin, Germany. Association for Computational Linguistics.
* Joachims (2006) Thorsten Joachims. 2006. Training linear svms in linear time. In _Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , KDD ’06, page 217–226, New York, NY, USA. Association for Computing Machinery.
* Kock (2007) Christian Kock. 2007. Dialectical obligations in political debate. _Informal Logic_ , 27(3):233–247.
* Lauscher et al. (2020) Anne Lauscher, Lily Ng, Courtney Napoles, and Joel Tetreault. 2020. Rhetoric, logic, and dialectic: Advancing theory-based argument quality assessment in natural language processing. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 4563–4574, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Luce (2012) R Duncan Luce. 2012. _Individual choice behavior: A theoretical analysis_. Courier Corporation.
* Lukin et al. (2017) Stephanie Lukin, Pranav Anand, Marilyn Walker, and Steve Whittaker. 2017. Argument Strength is in the Eye of the Beholder: Audience Effects in Persuasion. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 742–753. Association for Computational Linguistics.
  * Matthews (1975) B.W. Matthews. 1975. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. _Biochimica et Biophysica Acta (BBA) - Protein Structure_ , 405(2):442–451.
  * May et al. (2019) Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 622–628, Minneapolis, Minnesota. Association for Computational Linguistics.
* Persing and Ng (2013) Isaac Persing and Vincent Ng. 2013. Modeling thesis clarity in student essays. In _Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 260–269, Sofia, Bulgaria. Association for Computational Linguistics.
* Persing and Ng (2015) Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 543–552, Beijing, China. Association for Computational Linguistics.
* Poliak et al. (2018) Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In _Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics_ , pages 180–191, New Orleans, Louisiana. Association for Computational Linguistics.
* Potash et al. (2017) Peter Potash, Robin Bhattacharya, and Anna Rumshisky. 2017. Length, interchangeability, and external knowledge: Observations from predicting argument convincingness. In _Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 342–351, Taipei, Taiwan. Asian Federation of Natural Language Processing.
* Potash et al. (2019) Peter Potash, Adam Ferguson, and Timothy J. Hazen. 2019. Ranking passages for argument convincingness. In _Proceedings of the 6th Workshop on Argument Mining_ , pages 146–155, Florence, Italy. Association for Computational Linguistics.
* Reimers and Gurevych (2019) Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
* Rinott et al. (2015) Ruty Rinott, Lena Dankin, Carlos Alzate Perez, Mitesh M. Khapra, Ehud Aharoni, and Noam Slonim. 2015. Show me your evidence - an automatic method for context dependent evidence detection. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 440–450, Lisbon, Portugal. Association for Computational Linguistics.
* Simpson and Gurevych (2018) Edwin Simpson and Iryna Gurevych. 2018. Finding convincing arguments using scalable Bayesian preference learning. _Transactions of the Association for Computational Linguistics_ , 6:357–371.
* Stab and Gurevych (2017) Christian Stab and Iryna Gurevych. 2017. Recognizing insufficiently supported arguments in argumentative essays. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 980–990, Valencia, Spain. Association for Computational Linguistics.
  * Tan et al. (2016) Chenhao Tan, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In _Proceedings of the 25th International Conference on World Wide Web_ , WWW ’16, page 613–624, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
* Toledo et al. (2019a) Assaf Toledo, Shai Gretz, Edo Cohen-Karlik, Roni Friedman, Elad Venezian, Dan Lahav, Michal Jacovi, Ranit Aharonov, and Noam Slonim. 2019a. Automatic argument quality assessment - New datasets and methods. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 5625–5635. Association for Computational Linguistics.
* Toledo et al. (2019b) Assaf Toledo, Shai Gretz, Edo Cohen-Karlik, Roni Friedman, Elad Venezian, Dan Lahav, Michal Jacovi, Ranit Aharonov, and Noam Slonim. 2019b. Automatic argument quality assessment - new datasets and methods. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 5625–5635, Hong Kong, China. Association for Computational Linguistics.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, _Advances in Neural Information Processing Systems 30_ , pages 5998–6008. Curran Associates, Inc.
* Wachsmuth et al. (2017a) Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Alberdingk Thijm, Graeme Hirst, and Benno Stein. 2017a. Computational argumentation quality assessment in natural language. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 176–187, Valencia, Spain. Association for Computational Linguistics.
* Wachsmuth et al. (2017b) Henning Wachsmuth, Benno Stein, and Yamen Ajjour. 2017b. “PageRank” for argument relevance. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 1117–1127, Valencia, Spain. Association for Computational Linguistics.
* Wachsmuth and Werner (2020) Henning Wachsmuth and Till Werner. 2020. Intrinsic quality assessment of arguments. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 6739–6745, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Wei et al. (2016) Zhongyu Wei, Yang Liu, and Yi Li. 2016. Is this post persuasive? ranking argumentative comments in online forum. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 195–200, Berlin, Germany. Association for Computational Linguistics.
* Zhang et al. (2017) Fan Zhang, Homa B. Hashemi, Rebecca Hwa, and Diane Litman. 2017. A corpus of annotated revisions for studying argumentative writing. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1568–1578, Vancouver, Canada. Association for Computational Linguistics.
# Self-similar solutions to the Hesse flow
Shun Maeta Department of Mathematics, Shimane University, Nishikawatsu 1060
Matsue, 690-8504, Japan<EMAIL_ADDRESS>or<EMAIL_ADDRESS>
###### Abstract.
We define a Hesse soliton, that is, a self-similar solution to the Hesse flow
on Hessian manifolds. On information geometry, the $e$-connection and the
$m$-connection are important, which do not coincide with the Levi-Civita one.
Therefore, it is interesting to consider a Hessian manifold with a flat
connection which does not coincide with the Levi-Civita one. We call it a
proper Hessian manifold. In this paper, we show that any compact proper Hesse
soliton is expanding and any non-trivial compact gradient Hesse soliton is
proper. Furthermore, we show that the dual space of a Hesse-Einstein manifold
can be understood as a Hesse soliton.
###### Key words and phrases:
Hesse flow; Hesse solitons; Hessian manifolds; Information geometry
###### 2010 Mathematics Subject Classification:
53B12, 53E99, 35C06, 62B11
The author is partially supported by the Grant-in-Aid for Young Scientists,
No.19K14534, Japan Society for the Promotion of Science.
## 1\. Introduction
Machine learning may be one of the most powerful tools for the human race.
Information geometry, introduced by S. Amari, is regarded as a basic and
important mathematical foundation in this field. In particular, exponential
families and mixture families of probability distributions are important in
information geometry. They have the dual flat structure, and hence,
interestingly, they have a Hessian structure.
Information geometry is built on the basis of differential geometry. The
geometric flow is one of the most powerful tools in the theory of differential
geometry; in particular, the Ricci flow is powerful. In fact, as is well
known, the Poincaré conjecture was solved by means of the Ricci flow. The
Ricci flow is defined as follows by using the Ricci tensor ${\rm Ric}(g(t))$
(cf. [6]):
$\frac{\partial}{\partial t}g(t)=-2{\rm Ric}(g(t)),$
where $g(t)$ is the time-dependent Riemannian metric on a Riemannian manifold
$(M,g(t))$. Similarly, Hessian manifolds can be deeply understood by
considering a geometric flow for the second Koszul form $\beta$. In fact,
$\beta$ plays a role in a Hessian manifold (i.e., a manifold with a Hessian
structure) similar to that of the Ricci tensor. The corresponding flow is
called the Hesse flow (or the Hesse-Koszul flow), defined by M. Mirghafouri
and F. Malek (cf. [7], see also [9]):
$\frac{\partial}{\partial t}g(t)=2\beta(g(t)).$
They proved its short-time existence, global existence, and uniqueness. S.
Puechmorel and T. D. Tô [9] studied convergence theorems for it on compact
Hessian manifolds.
A self-similar solution, i.e., a soliton, plays an important and fundamental
role in the study of a geometric flow. In fact, the Ricci soliton, which is
the self-similar solution to the Ricci flow, plays an important role in the
proofs of the Poincaré conjecture and the geometrization conjecture.
Therefore, in this paper, we define the self-similar solution to the Hesse
flow and study it. In particular, we show that the expanding case is the
important one for compact Hesse solitons. We also show that a non-trivial
gradient Hesse soliton is interesting from the point of view of information
geometry. In particular, under the assumption that the second Koszul form
coincides with its dual, any compact gradient Hesse soliton must be
Hesse-Einstein. Hesse-Einstein manifolds can be regarded as an analogue of
Einstein manifolds in Riemannian geometry. Furthermore, we show that the dual
space of a Hesse-Einstein manifold can be understood as a Hesse soliton.
## 2\. Preliminary
In this section, we set up terminology and define some notions which are
related to Hessian manifolds and information geometry.
### 2.1. Riemannian geometry
Let $(M,g)$ be an $n$-dimensional Riemannian manifold. As is well known, the
Levi-Civita connection $\nabla:TM\times C^{\infty}(TM)\rightarrow
C^{\infty}(TM)$ is the unique connection on $TM$, which is compatible with the
metric and is torsion free:
$Xg(Y,Z)=g(\nabla_{X}Y,Z)+g(Y,\nabla_{X}Z),$ $\nabla_{X}Y-\nabla_{Y}X=[X,Y].$
The Riemannian curvature tensor is defined by
$Rm(X,Y)Z=\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X,Y]}Z,$
for any vector fields $X,Y,Z\in\mathfrak{X}(M).$ We use the notation
${R^{i}}_{jkl}$ defined by
$Rm\left(\frac{\partial}{\partial x_{k}},\frac{\partial}{\partial
x_{l}}\right)\frac{\partial}{\partial
x_{j}}=\sum_{i=1}^{n}{R^{i}}_{jkl}\frac{\partial}{\partial x_{i}},$
and $R_{ijkl}=g_{ip}{R^{p}}_{jkl}$. The Ricci and scalar curvatures are
defined by $R_{jk}={R^{s}}_{jsk}$ and $R=g^{jk}R_{jk}$.
### 2.2. Hessian manifolds
Let $M$ be an $n$-dimensional smooth manifold. A connection $D$ is said to be
flat if it is torsion free and its curvature tensor $Rm^{D}$ vanishes, that
is,
$D_{X}Y-D_{Y}X=[X,Y],$
and
$Rm^{D}(X,Y)Z:=D_{X}D_{Y}Z-D_{Y}D_{X}Z-D_{[X,Y]}Z=0.$
###### Definition 2.1.
A Riemannian metric $g$ on a flat manifold $(M,D)$ is called a Hessian metric
if $g$ can be locally expressed by
$g=Dd\varphi,$
for some smooth function $\varphi$, that is,
$g_{ij}=\frac{\partial^{2}\varphi}{\partial x_{i}\partial x_{j}},$
for an affine coordinate system. $(D,g)$ is called a Hessian structure.
$(M,D,g)$ is called a Hessian manifold.
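A minimal one-dimensional example, not taken from the source but easily checked against the definition:

```latex
% On M = \mathbb{R} with the standard flat connection D and affine coordinate x,
% take \varphi(x) = e^{x}. Then
g = Dd\varphi = \varphi''(x)\,dx^{2} = e^{x}\,dx^{2},
% which is positive definite, so (\mathbb{R}, D, e^{x}dx^{2}) is a Hessian
% manifold. The Levi-Civita connection \nabla of g has Christoffel symbol
{\Gamma^{1}}_{11} = \tfrac{1}{2}\,g^{11}\,\partial_{x}g_{11}
                  = \tfrac{1}{2}\,e^{-x}\cdot e^{x} = \tfrac{1}{2} \neq 0,
% while all connection coefficients of D vanish in the affine coordinate,
% so D \neq \nabla for this Hessian structure.
```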
Let $D^{\prime}=2\nabla-D$, then $D^{\prime}$ is also a flat connection and
$(D^{\prime},g)$ is a Hessian structure. $D^{\prime}$ is called the dual
connection and $(D^{\prime},g)$ is called the dual Hessian structure of
$(D,g)$.
The flat connection $D$ and the dual one $D^{\prime}$ satisfy that
$Xg(Y,Z)=g(D_{X}Y,Z)+g(Y,D^{\prime}_{X}Z).$
###### Definition 2.2.
Let $v_{g}$ be the volume form of $g$ and $X$ be a vector field on $M$. The
first and second Koszul forms $\alpha$ and $\beta$ of $(M,D)$ are defined by
(2.1) $D_{X}v_{g}=\alpha(X)v_{g},$ (2.2) $\beta=D\alpha.$
We denote by $\gamma$ the difference tensor of $\nabla$ and $D$:
$\gamma_{X}Y=\nabla_{X}Y-D_{X}Y.$
Here we remark that since $D_{\partial_{i}}\partial_{j}=0$, the components
${\gamma^{i}}_{jk}$ of $\gamma$ with respect to affine coordinate systems
coincide with the Christoffel symbols ${\Gamma^{i}}_{jk}$ of $\nabla$, where
$\partial_{i}=\frac{\partial}{\partial x_{i}}$.
A tensor field $H$ of type $(1,3)$ defined by the covariant differential
$H=D\gamma$
of $\gamma$ is said to be the Hessian curvature tensor for $(D,g)$. The
components ${H^{i}}_{jkl}$ of $H$ are given by
${H^{i}}_{jkl}=\frac{\partial{\gamma^{i}}_{jl}}{\partial x_{k}}.$
###### Proposition 2.3 (Proposition 3.4 in [11]).
On a Hessian manifold, the following holds:
$(1)$ $\alpha(X)={\rm Trace}\,\gamma_{X}$.
$(2)$ $\alpha_{i}={\gamma^{r}}_{ri}$.
$(3)$ $\beta_{ij}={H^{r}}_{rij}={H_{ijr}}^{r}$.
The second Koszul form $\beta$ plays a similar role to that of the Ricci
tensor and one can define Hesse-Einstein manifolds:
###### Definition 2.4.
Let $(M,D,g)$ be a Hessian manifold. If the following holds
$\beta=\lambda g,$
then $(M,D,g)$ is called a Hesse-Einstein manifold.
###### Proposition 2.5 (Proposition 2.3 in [11]).
The curvature tensor of a Hessian manifold $(M,D)$ is given as follows.
(2.3)
${R^{i}}_{jkl}={\gamma^{i}}_{lr}{\gamma^{r}}_{jk}-{\gamma^{i}}_{kr}{\gamma^{r}}_{jl}.$
This implies the following.
(2.4) $\displaystyle
R_{jk}={R^{s}}_{jsk}={\gamma^{s}}_{kr}{\gamma^{r}}_{js}-\alpha_{r}{\gamma^{r}}_{jk},$
and
(2.5) $\displaystyle R=|{\gamma}|^{2}-|\alpha|^{2}.$
We can use the following notations without confusion: For example,
$\gamma_{ijk}\gamma_{ist}={\gamma^{i}}_{jk}\gamma_{ist},$
$H_{rrij}={H^{r}}_{rij}$, $\alpha_{r}\gamma_{rij}=\alpha^{r}\gamma_{rij}$,
etc.
### 2.3. Information geometry
Hessian manifolds play an important role in information geometry: For
$\Omega_{n}=\\{1,2,\cdots,n\\}$, let
$S_{n-1}:=\left\\{p:\Omega_{n}\rightarrow\mathbb{R}_{+};\sum_{\omega\in\Omega_{n}}p(\omega)=1\right\\}$
be the set of all probability distributions on $\Omega_{n},$ where
$\mathbb{R}_{+}=\\{x\in\mathbb{R};x>0\\}$. As is well known, one can regard it
as a manifold (see for example [5]). The metric $g^{F}$ on $S_{n-1}$ defined
by $g^{F}_{p}(X,Y)=\sum_{\omega=1}^{n}p(\omega)(X\log p(\omega))(Y\log
p(\omega))$ is called the Fisher information metric. For each
$\alpha\in\mathbb{R}$, $\nabla^{(\alpha)}$ is determined by
$g^{F}_{p}(\nabla^{(\alpha)}_{X}Y,Z)=g^{F}_{p}(\nabla_{X}Y,Z)-\frac{\alpha}{2}\sum_{\omega=1}^{n}p(\omega)(X\log
p(\omega))(Y\log p(\omega))(Z\log p(\omega)),$
where $\nabla$ is the Levi-Civita connection compatible with $g^{F}.$
$\nabla^{(\alpha)}$ is called the $\alpha$-connection.
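A short worked computation for the simplest case $n=2$, added for illustration:

```latex
% On S_{1} = \{ (p, 1-p) : 0 < p < 1 \}, use the coordinate p and
% X = \partial/\partial p. Then X\log p(1) = 1/p and X\log p(2) = -1/(1-p), so
g^{F}_{p}(X,X) = p\cdot\frac{1}{p^{2}} + (1-p)\cdot\frac{1}{(1-p)^{2}}
               = \frac{1}{p} + \frac{1}{1-p} = \frac{1}{p(1-p)}.
```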
Chentsov (cf. [3]) showed that an extremely natural invariance requirement on
$S_{n-1}$ determines a metric and a connection on $S_{n-1}$: the metric is the
Fisher information metric and the connection is the $\alpha$-connection. This
means that in information geometry, the $\alpha$-connection is the most
natural connection.
The $\alpha$-connection $\nabla^{(\alpha)}$ satisfies that
$Xg^{F}(Y,Z)=g^{F}(\nabla^{(\alpha)}_{X}Y,Z)+g^{F}(Y,\nabla^{(-\alpha)}_{X}Z).$
The most important case is $\alpha=1$. It is known that for
$(g^{F},\nabla^{(1)},\nabla^{(-1)})$, $S_{n-1}$ is a dually flat manifold and
$g^{F}$ can be written as $g^{F}_{ij}=\partial_{i}\partial_{j}\varphi$ in an
affine coordinate system (cf. [1]). Hence, $(S_{n-1},\nabla^{(1)},g^{F})$ is a
Hessian manifold with $\nabla^{(1)}\not=\nabla.$
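This fact can be checked numerically: in the $(1)$-affine (natural) coordinates of the multinomial family, the Fisher information metric coincides with the coordinate Hessian of the log-partition function, which plays the role of the potential $\varphi$. The following sketch is illustrative only; the trinomial case $n=3$, the sample point and the finite-difference scheme are arbitrary choices:

```python
import numpy as np

def psi(theta):
    # log-partition function: the Hessian potential in natural coordinates
    return np.log(1.0 + np.exp(theta).sum())

def probs(theta):
    e = np.exp(theta)
    z = 1.0 + e.sum()
    return np.array([e[0] / z, e[1] / z, 1.0 / z])

def fisher_metric(theta):
    # g^F_ij = sum_w p(w) (d_i log p(w)) (d_j log p(w)),  i, j = 1, 2
    p = probs(theta)
    d = np.array([[(1.0 if i == w else 0.0) - p[i] for w in range(3)]
                  for i in range(2)])          # d_i log p(w) = delta_iw - p_i
    return np.einsum('w,iw,jw->ij', p, d, d)

def hessian_fd(f, x, h=1e-4):
    # second-order central finite-difference Hessian
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

theta = np.array([0.3, -0.7])
G = fisher_metric(theta)
H = hessian_fd(psi, theta)
print(np.max(np.abs(G - H)))  # close to zero, up to finite-difference error
```

Both computations recover $g^{F}_{ij}=p_{i}\delta_{ij}-p_{i}p_{j}$, the covariance of the multinomial distribution.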
Therefore, to apply Hessian geometry to information geometry, it is important
to consider a Hessian manifold $(M,D,g)$ with $D\not=\nabla.$ From this, we
define the following:
###### Definition 2.6.
Let $(M,D)$ be a Hessian manifold. If $D\not=\nabla$, $M$ is called a proper
Hessian manifold.
## 3\. Hesse solitons
Let $(M,D,g)$ be a Hessian manifold. In this section, we consider self-similar
solutions to the Hesse flow $\partial_{t}g=2\beta$. We first consider the
Hesse flow from the point of view of the Laplacian on Hessian manifolds,
which was studied by H. Shima [10].
###### Definition 3.1 ([10]).
Let $\mathcal{A}^{p,q}$ be the tensor product
$(\overset{p}{\wedge}TM)\otimes(\overset{q}{\wedge}T^{*}M)$. Let $v_{g}$ be
the volume element determined by $g$. We identify $v_{g}$ with $v_{g}\otimes
1\in\mathcal{A}^{n,0}$ and set $\overline{v}_{g}=1\otimes
v_{g}\in\mathcal{A}^{0,n}$. For any vector field $X$, we define interior
product operators by
$i(X):\mathcal{A}^{p,q}\rightarrow\mathcal{A}^{p-1,q},~{}~{}i(X)\omega=\omega(X,\cdots;\cdots),$
$\overline{i}(X):\mathcal{A}^{p,q}\rightarrow\mathcal{A}^{p,q-1},~{}~{}\overline{i}(X)\omega=\omega(\cdots;X,\cdots).$
A coboundary operator
$\partial:\mathcal{A}^{p,q}\rightarrow\mathcal{A}^{p+1,q}$ is defined by
$\partial=e(dx_{i})D_{\frac{\partial}{\partial x_{i}}},$
where $e$ is an exterior product operator defined by
$e(\omega):\eta\in\mathcal{A}^{p,q}\rightarrow\omega\wedge\eta\in\mathcal{A}^{p+r,q+s},$
for $\omega\in\mathcal{A}^{r,s}.$
The adjoint operator of $\partial$ is denoted by
$\delta=(-1)^{p}\star^{-1}\partial\star$ on the space $\mathcal{A}^{p,q}$,
where $\star$ is the star operator
$\star:\mathcal{A}^{p,q}\rightarrow\mathcal{A}^{n-p,n-q}$ defined by
$\displaystyle(\star\omega)$
$\displaystyle(X_{1},\cdots,X_{n-p};Y_{1},\cdots,Y_{n-q})v_{g}\wedge\overline{v}_{g}$
$\displaystyle=$
$\displaystyle~{}\omega\wedge\overline{i}(X_{1})g\wedge\cdots\wedge\overline{i}(X_{n-p})g\wedge
i(Y_{1})g\wedge\cdots\wedge i(Y_{n-q})g.$
Then, we can define the Laplacian on a Hessian manifold as follows:
$\Delta=\partial\delta+\delta\partial.$
By the above definition, Shima [10] showed that
$\Delta g=\beta.$
Therefore, interestingly, the Hesse flow can be written as
$\frac{\partial}{\partial t}g(t)=2\Delta g(t).$
In the remainder of this section, we consider a self-similar solution to the
Hesse flow:
$g(t)=\sigma(t)\psi^{*}(t)g(0),$
where $g(0)$ is the Hessian metric $g$,
$\sigma(t):\mathbb{R}\rightarrow\mathbb{R}_{+}$ is a smooth function and
$\psi(t):M\rightarrow M$ is a 1-parameter family of diffeomorphisms. By
differentiating, we have
(3.1)
$2\beta(g(t))=\frac{d}{dt}\sigma(t)\psi^{*}(t)g(0)+\sigma(t)\psi^{*}(t)(\mathcal{L}_{X}g(0)),$
where $\mathcal{L}_{X}$ denotes the Lie derivative by the time dependent
vector field $X$ such that $X(\psi(t)(p))=\frac{d}{dt}(\psi(t)(p))$ for any
$p\in M$.
###### Claim 1.
$\beta(cg)=\beta(g)$ for any positive constant $c\in\mathbb{R}$.
###### Proof.
Let $v_{g}$ and $v_{cg}$ be the volume forms of $g$ and $cg$, respectively.
Assume that $\alpha^{c}$ is the first Koszul form for $cg$. By definition
(2.1), we have
$D_{X}v_{cg}=\alpha^{c}(X)v_{cg}.$
From this and the definition of the volume form (note that
$v_{cg}=c^{n/2}v_{g}$ with $c^{n/2}$ constant), we obtain
$D_{X}v_{g}=\alpha^{c}(X)v_{g}.$
Therefore, we have
$\alpha^{c}=\alpha.$
From this and the definition of the second Koszul form (2.2),
$\beta(cg)=D\alpha^{c}=D\alpha=\beta(g).$
∎
By (3.1) and Claim 1, one has
(3.2) $2\beta(g(t))=\frac{d}{dt}\sigma(t)g(t)+\mathcal{L}_{Y}g(t),$
where $Y(t)=\sigma(t)X(t)$. Therefore, we define self-similar solutions to the
Hesse flow as follows:
###### Definition 3.2.
Let $(M,D,g=Dd\varphi)$ be a Hessian manifold. If there exist a vector field
$X$ and $\lambda\in\mathbb{R}$, such that
(3.3) $\beta-\frac{1}{2}\mathcal{L}_{X}g=\lambda g,$
then, $M$ is called a Hesse soliton. If $\lambda>0,\lambda=0,\lambda<0$, then
the Hesse soliton is called expanding, steady or shrinking, respectively. If
there exists a smooth function $f$ on $M$ such that $X={\rm grad}\,f$, that
is,
(3.4) $\beta-\nabla\nabla f=\lambda g,$
then the Hesse soliton $(M,D,g,f)$ is called a gradient Hesse soliton, where
$\nabla\nabla f$ is the Hessian of $f$. $f$ is called a potential function.
Hesse-Einstein manifolds are trivial solutions of Hesse solitons. Therefore,
if a Hesse soliton is Hesse-Einstein, then it is called trivial. If a Hesse
soliton is a proper Hessian manifold, then it is called a proper Hesse
soliton.
## 4\. Existence and non-existence theorems for
Hesse solitons
In this section, we show some existence and non-existence theorems for Hesse
solitons.
###### Theorem 4.1.
$(1)$ There exist no compact shrinking Hesse solitons.
$(2)$ Any compact steady Hesse soliton is non-proper and trivial.
Unlike in case (1), we remark that in case (2) there do exist non-proper,
trivial steady Hesse solitons; we will return to this point later.
We first show the following lemma.
###### Lemma 4.2.
On any Hessian manifold, the following formula holds.
(4.1) $\displaystyle\frac{1}{2}\Delta R=$
$\displaystyle~{}~{}\nabla_{i}\nabla_{j}\alpha_{k}\gamma_{ijk}-\nabla_{r}\nabla_{r}\alpha_{i}\alpha_{i}+|\nabla\gamma|^{2}+|{\rm
Rm}|^{2}+|{\rm Ric}|^{2}$
$\displaystyle+R_{ij}\beta_{ij}-R_{ij}\nabla_{i}\alpha_{j}.$
###### Proof.
Since
$\gamma_{ijk}=\frac{1}{2}\frac{\partial g_{ij}}{\partial
x_{k}}~{}~{}~{}\text{and}~{}~{}~{}g_{ij}=\partial_{i}\partial_{j}\varphi,$
we have
$\nabla_{i}\gamma_{jkl}=\frac{1}{2}\partial_{i}\partial_{j}\partial_{k}\partial_{l}\varphi-(\gamma_{rij}\gamma_{rkl}+\gamma_{rik}\gamma_{rjl}+\gamma_{ril}\gamma_{rjk}).$
This means that $\nabla_{i}\gamma_{jkl}$ is symmetric with respect to
$i,j,k,l$.
A direct computation shows that
$\displaystyle\nabla_{r}\nabla_{r}\gamma_{ijk}=$
$\displaystyle~{}\nabla_{r}\nabla_{i}\gamma_{rjk}$ $\displaystyle=$
$\displaystyle~{}\nabla_{i}\nabla_{r}\gamma_{rjk}+R_{rirp}\gamma_{pjk}+R_{rijp}\gamma_{rpk}+R_{rikp}\gamma_{rjp}$
$\displaystyle=$
$\displaystyle~{}\nabla_{i}\nabla_{r}\gamma_{rjk}+(\gamma_{rpt}\gamma_{tir}-\alpha_{t}\gamma_{tip})\gamma_{pjk}$
$\displaystyle+(\gamma_{rpt}\gamma_{tij}-\gamma_{rjt}\gamma_{tip})\gamma_{rpk}+(\gamma_{rpt}\gamma_{tik}-\gamma_{rkt}\gamma_{tip})\gamma_{rjp}$
$\displaystyle=$
$\displaystyle~{}\nabla_{i}\nabla_{j}\alpha_{k}+\gamma_{rpt}(\gamma_{tir}\gamma_{pjk}+\gamma_{tij}\gamma_{rpk}+\gamma_{tik}\gamma_{rjp})$
$\displaystyle-\alpha_{t}\gamma_{tip}\gamma_{pjk}-\gamma_{rjt}\gamma_{tip}\gamma_{rpk}-\gamma_{rkt}\gamma_{tip}\gamma_{rjp},$
where the first and third equalities follow from the symmetric property of
$\nabla_{i}\gamma_{jkl}$ with respect to $i,j,k,l$, and the second one follows
from the Ricci identity. From this, one has
(4.2) $\displaystyle\frac{1}{2}\Delta|\gamma|^{2}=$
$\displaystyle~{}\nabla_{r}\nabla_{r}\gamma_{ijk}\gamma_{ijk}+|\nabla\gamma|^{2}$
$\displaystyle=$
$\displaystyle~{}\\{\nabla_{i}\nabla_{j}\alpha_{k}+\gamma_{rpt}(\gamma_{tir}\gamma_{pjk}+\gamma_{tij}\gamma_{rpk}+\gamma_{tik}\gamma_{rjp})$
$\displaystyle-\alpha_{t}\gamma_{tip}\gamma_{pjk}-\gamma_{rjt}\gamma_{tip}\gamma_{rpk}-\gamma_{rkt}\gamma_{tip}\gamma_{rjp}\\}\gamma_{ijk}+|\nabla\gamma|^{2}.$
Substituting
$\displaystyle|{\rm Rm}|^{2}=$ $\displaystyle~{}R_{ijkl}R_{ijkl}$
$\displaystyle=$
$\displaystyle~{}\gamma_{rpt}\gamma_{tir}\gamma_{pjk}\gamma_{ijk}+\gamma_{rpt}\gamma_{tij}\gamma_{rpk}\gamma_{ijk}-\gamma_{rjt}\gamma_{tip}\gamma_{rpk}\gamma_{ijk}-\gamma_{rkt}\gamma_{tip}\gamma_{rjp}\gamma_{ijk},$
into (4.2), we have
$\displaystyle\frac{1}{2}\Delta|\gamma|^{2}=$
$\displaystyle~{}\nabla_{i}\nabla_{j}\alpha_{k}\gamma_{ijk}+|\nabla\gamma|^{2}+|{\rm
Rm}|^{2}$
$\displaystyle+\gamma_{rpt}\gamma_{tik}\gamma_{rjp}\gamma_{ijk}-\alpha_{t}\gamma_{tip}\gamma_{pjk}\gamma_{ijk}.$
From this, (2.4),
$\displaystyle|{\rm
Ric}|^{2}=\gamma_{skr}\gamma_{rjs}\gamma_{kip}\gamma_{ijp}-\gamma_{skr}\gamma_{rjs}\alpha_{i}\gamma_{ijk}-\alpha_{r}\gamma_{rjk}\gamma_{ikp}\gamma_{pji}+\alpha_{r}\alpha_{i}\gamma_{rjk}\gamma_{ijk},$
and
$\nabla_{i}\alpha_{j}=\beta_{ij}-\gamma_{rij}\alpha_{r},$
one has
$\displaystyle\frac{1}{2}\Delta|\gamma|^{2}=$
$\displaystyle~{}\nabla_{i}\nabla_{j}\alpha_{k}\gamma_{ijk}+|\nabla\gamma|^{2}+|{\rm
Rm}|^{2}+|{\rm
Ric}|^{2}+\gamma_{ijk}\gamma_{rjk}\alpha_{p}\gamma_{ipr}-\alpha_{i}\alpha_{r}\gamma_{ijk}\gamma_{rjk}$
$\displaystyle=$
$\displaystyle~{}~{}\nabla_{i}\nabla_{j}\alpha_{k}\gamma_{ijk}+|\nabla\gamma|^{2}+|{\rm
Rm}|^{2}+|{\rm Ric}|^{2}+R_{ij}\beta_{ij}-R_{ij}\nabla_{i}\alpha_{j}.$
Therefore, we have
$\displaystyle\frac{1}{2}\Delta R=$
$\displaystyle~{}~{}\nabla_{i}\nabla_{j}\alpha_{k}\gamma_{ijk}-\nabla_{r}\nabla_{r}\alpha_{i}\alpha_{i}+|\nabla\gamma|^{2}+|{\rm
Rm}|^{2}+|{\rm Ric}|^{2}$
$\displaystyle+R_{ij}\beta_{ij}-R_{ij}\nabla_{i}\alpha_{j}.$
∎
By using the above lemma, one can show Theorem 4.1.
###### Proof of Theorem 4.1.
By taking the trace of (3.3), we have
$\beta_{ii}-{\rm div}\,X=\lambda n.$
From this, one has
$\displaystyle\lambda\,n\,\mathrm{Vol}(M,g)=$
$\displaystyle~{}\int_{M}(\beta_{ii}-{\rm div}\,X)v_{g}$ $\displaystyle=$
$\displaystyle~{}\int_{M}(\nabla_{i}\alpha_{i}+|\alpha|^{2}-{\rm div}\,X)v_{g}$
$\displaystyle=$ $\displaystyle~{}\int_{M}|\alpha|^{2}v_{g}\geq 0,$
where the last equality follows from Stokes’ theorem. Hence, one has
$\lambda\geq 0$. Therefore, there exist no compact shrinking Hesse solitons.
If $\lambda=0$, then one has $\alpha=0$. Furthermore, by Lemma 4.2, we have
$\displaystyle\frac{1}{2}\Delta R=$ $\displaystyle~{}|\nabla\gamma|^{2}+|{\rm
Rm}|^{2}+|{\rm Ric}|^{2}.$
By Green’s formula, one has
$\int_{M}\left(|{\rm Ric}|^{2}+|{\rm Rm}|^{2}+|\nabla\gamma|^{2}\right)v_{g}=0.$
Therefore, $M$ is flat, in particular $R=0.$ From this and (2.5), we have
$\gamma=0$, that is, $D=\nabla.$ Furthermore, we also have $\beta=0$.
Therefore, it is also trivial. ∎
As mentioned above, we consider the properness of steady Hesse solitons. The
equation of steady Hesse solitons is
$\beta-\frac{1}{2}\mathcal{L}_{X}g=0.$
From the proof of (2) of Theorem 4.1, any compact steady Hesse soliton is flat
and non-proper. Hence, any compact steady Hesse soliton satisfies
$\mathcal{L}_{X}g=0.$
One can construct many examples of steady Hesse solitons by taking the vector
field $X$ to be a Killing vector field:
###### Proposition 4.3.
Let $(M,g,X)$ be a non-proper Hessian manifold with a Killing vector field $X$
$($and the Levi-Civita connection $\nabla)$. Then, $\beta$ and
$\mathcal{L}_{X}g$ vanish, and therefore, $(M,g,X)$ is a steady Hesse
soliton.
We remark that if $\alpha=\alpha^{\prime}$, then $D=\nabla$ on compact Hessian
manifolds. In fact, since the volume form is parallel,
$\alpha^{\prime}(X)v_{g}=D^{\prime}_{X}v_{g}=(2\nabla_{X}-D_{X})v_{g}=-D_{X}v_{g}=-\alpha(X)v_{g}$,
that is, $\alpha=-\alpha^{\prime}$, thus one has $\alpha=\alpha^{\prime}=0.$
By Lemma 4.2 and Green’s formula, one has
$\int_{M}|{\rm Rm}|^{2}+|{\rm Ric}|^{2}+|\nabla\gamma|^{2}v_{g}=0.$
From this and (2.5), we have $\gamma=0$, that is, $D=\nabla.$ On a complete
Hessian manifold, it is well known that E. Calabi obtained the same conclusion
(cf. [2]), that is, any complete Hessian manifold with $\alpha=0$ satisfies
$D=\nabla.$
From the above arguments, we consider a more general problem. Obviously, if a
Hessian manifold $M$ is non-proper, then $\beta=\beta^{\prime}(=0)$. In
particular, $\nabla\alpha=0.$ In fact, since $\alpha=-\alpha^{\prime}$, one
has
$\beta^{\prime}=D^{\prime}\alpha^{\prime}=-D^{\prime}\alpha=-(2\nabla-D)\alpha=\beta-2\nabla\alpha.$
Therefore, $\beta-\beta^{\prime}=2\nabla\alpha.$
However, the converse is not true, that is, even if $\beta=\beta^{\prime}$,
$M$ might not satisfy $D=\nabla$. In fact, the following example
satisfies $\beta=\beta^{\prime}=\frac{n}{2}g$, but $D\not=\nabla.$ This means
that there exist proper Hesse-Einstein manifolds.
###### Example.
Let
$\Omega=\left\{x\in\mathbb{R}^{n};x_{n}>\sqrt{\displaystyle\sum_{i=1}^{n-1}(x_{i})^{2}}\right\}~{}~{}~{}\text{and}~{}~{}~{}\varphi=-\log\left(x_{n}^{2}-\left(\displaystyle\sum_{i=1}^{n-1}(x_{i})^{2}\right)\right).$
Then, $(\Omega,D,g=Dd\varphi)$ is a Hessian structure on $\Omega$.
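The claim $\beta=\frac{n}{2}g$ can be verified numerically. In the affine coordinates, $v_{g}=\sqrt{\det g}\,dx^{1}\wedge\cdots\wedge dx^{n}$, so (2.1) gives $\alpha_{i}=\partial_{i}\log\sqrt{\det g}$ and hence $\beta_{ij}=\frac{1}{2}\partial_{i}\partial_{j}\log\det g$. The sketch below (the dimension $n=3$, the sample point and the step size are illustrative choices) computes $\beta$ by finite differences and compares it with $\frac{n}{2}g$:

```python
import numpy as np

def metric(x):
    # g = Dd(phi) for phi = -log q, q = x_n^2 - sum_{i<n} x_i^2, in closed form:
    # g = -Hess(q)/q + (dq)(dq)^T / q^2
    n = len(x)
    q = x[-1] ** 2 - np.sum(x[:-1] ** 2)
    dq = -2.0 * x
    dq[-1] = 2.0 * x[-1]
    hess_q = np.diag([-2.0] * (n - 1) + [2.0])
    return -hess_q / q + np.outer(dq, dq) / q ** 2

def koszul_beta(x, h=1e-4):
    # beta_ij = (1/2) d_i d_j log det g in the affine coordinates
    n = len(x)
    f = lambda y: 0.5 * np.linalg.slogdet(metric(y))[1]
    B = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            B[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return B

x = np.array([0.1, 0.2, 1.0])   # a point of Omega: x_3 > sqrt(x_1^2 + x_2^2)
n = len(x)
g = metric(x)
beta = koszul_beta(x)
print(np.max(np.abs(beta - (n / 2) * g)))  # close to zero (finite-difference error)
```

For this potential one finds $\det g=2^{n}/q^{n}$, so $\alpha=\frac{n}{2}d\varphi$ is exact and $\beta=\frac{n}{2}Dd\varphi=\frac{n}{2}g$.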
From the above argument, it is interesting to consider the following problem:
“Does there exist a non-trivial Hesse soliton with $\beta=\beta^{\prime}$
(that is, $\nabla\alpha=0$)?”
We first consider a complete Einstein Hessian manifold with a non-negative
Einstein constant $\lambda$, that is, a Hessian manifold with ${\rm
Ric}=\lambda g$ with $\lambda\geq 0.$
###### Proposition 4.4.
Any complete Einstein Hessian manifold with a non-negative Einstein constant
$\lambda$ and $\beta=\beta^{\prime}$ is flat and $\nabla\gamma=0.$
###### Proof.
By the assumption,
$0=\nabla_{i}\alpha_{j}=\beta_{ij}-{\gamma^{r}}_{ij}\alpha_{r}=\beta_{ij}-g^{rs}\gamma_{rij}\alpha_{s}.$
From this, the terms of (4.1) involving $\nabla\alpha$ vanish, and ${\rm Ric}=\lambda g$ gives $R_{ij}\beta_{ij}=\lambda\beta_{ii}$, so
$\displaystyle\frac{1}{2}\Delta R=$ $\displaystyle~{}|\nabla\gamma|^{2}+|{\rm
Rm}|^{2}+|{\rm Ric}|^{2}+\lambda\beta_{ii}$ $\displaystyle=$
$\displaystyle~{}|\nabla\gamma|^{2}+|{\rm Rm}|^{2}+|{\rm
Ric}|^{2}+\lambda|\alpha|^{2}$ $\displaystyle\geq$
$\displaystyle\frac{1}{n}R^{2},$
where the last inequality follows from the Schwarz inequality. Since the Ricci
curvature is non-negative, by the Omori-Yau maximum principle (cf. [8], [12]),
$R=0.$ Hence $M$ is flat and $\nabla\gamma=0.$ ∎
###### Lemma 4.5.
Any Hesse soliton with $\beta=\beta^{\prime}$ satisfies that ${\rm div}\,X$ is
constant.
###### Proof.
Since
$0=\nabla_{i}\alpha_{j}=\beta_{ij}-{\gamma^{r}}_{ij}\alpha_{r}=\beta_{ij}-g^{rs}\gamma_{rij}\alpha_{s},$
we have
$\beta_{ii}=|\alpha|^{2}.$
Hence, one has
$\nabla_{k}\beta_{ii}=0.$
By the equation of Hesse solitons (3.3),
$0=\nabla_{k}\beta_{ii}=\nabla_{k}({\rm div}\,X+n\lambda)=\nabla_{k}{\rm
div}\,X,$
which implies that ${\rm div}\,X$ is constant. ∎
By Lemma 4.5, one can show the following:
###### Proposition 4.6.
If compact Hesse solitons satisfy $\beta=\beta^{\prime}$, then ${\rm
div}\,X=0$.
###### Proof.
By Lemma 4.5, ${\rm div}\,X$ is constant, say $C$. By Stokes’ theorem,
$0=\int_{M}{\rm div}\,Xv_{g}=C\,\mathrm{Vol}(M).$
Thus, $C=0$, that is, ${\rm div}\,X=0$. ∎
In particular, if $M$ is gradient, by the standard maximum principle, one can
obtain the following.
###### Corollary 4.7.
Any compact gradient Hesse soliton with $\beta=\beta^{\prime}$ is trivial.
A similar result for complete Hesse solitons can be obtained.
###### Proposition 4.8.
Any complete gradient Hesse soliton with $\beta=\beta^{\prime}$ and non-
negative Ricci curvature is trivial.
###### Proof.
By Lemma 4.5,
$\displaystyle\Delta|\nabla f|^{2}=$ $\displaystyle~{}2|\nabla\nabla
f|^{2}+2{\rm Ric}(\nabla f,\nabla f)+2g(\nabla f,\nabla\Delta f)$
$\displaystyle=$ $\displaystyle~{}2|\nabla\nabla f|^{2}+2{\rm Ric}(\nabla
f,\nabla f)\geq 0.$
Hence, the Omori-Yau maximum principle shows that $|\nabla f|^{2}$ is constant,
say $C$. Assume that $C>0$. Then $0=\Delta|\nabla f|^{2}\geq 2|\nabla\nabla
f|^{2}$, so $\nabla\nabla f=0$ and hence $\Delta f=0.$ From this,
$\displaystyle\Delta e^{f}=~{}|\nabla f|^{2}e^{f}+\Delta fe^{f}=~{}|\nabla
f|^{2}e^{f}>0.$
By the Omori-Yau maximum principle again, $e^{f}$ is constant, that is, $f$ is
constant, which is a contradiction.
∎
By Corollary 4.7, it is interesting to consider non-trivial gradient Hesse
solitons from the point of view of information geometry.
###### Corollary 4.9.
Any compact non-trivial gradient Hesse soliton is proper.
###### Proof.
By Corollary 4.7, $\nabla\alpha\not=0$ at some point $p\in M$, i.e., on some
open set $\Omega\ni p$. Hence, $\nabla\gamma\not=0$ on $\Omega$. In fact, if
$\nabla\gamma=0$ at $q\in\Omega$, then we have $\nabla\alpha=0$ at $q$, which
is a contradiction.
Therefore, $\gamma\not=0$ on some set $\tilde{\Omega}$ of $M$, which means
that the soliton is proper.
∎
By the same argument, one can show the following.
###### Corollary 4.10.
Any complete non-trivial gradient Hesse soliton with non-negative Ricci
curvature is proper.
## 5\. Dual Hessian structure
In this section, we consider the dual space of a Hessian manifold $(M,D,g)$
and show that one can understand the dual space of a Hesse-Einstein manifold
as a Hesse soliton.
###### Theorem 5.1.
Let $(M,D,g)$ be a Hesse soliton,
$\beta-\frac{1}{2}\mathcal{L}_{X}g=\lambda g,$
then the dual space $(M,D^{\prime},g)$ is also a Hesse soliton which satisfies
that
$\beta^{\prime}-\frac{1}{2}\mathcal{L}_{(X-2\alpha^{\sharp})}g=\lambda g,$
where $\sharp$ is the musical isomorphism $\sharp:T^{*}M\rightarrow TM$,
$g(\alpha^{\sharp},X)=\alpha(X),$
for any vector field $X$ on $M$.
###### Proof.
By the definition of the musical isomorphism $\sharp$,
$\displaystyle(\nabla\alpha)(Y,Z)=$ $\displaystyle~{}(\nabla_{Y}\alpha)(Z)$
$\displaystyle=$ $\displaystyle~{}Y\alpha(Z)-\alpha(\nabla_{Y}Z)$
$\displaystyle=$
$\displaystyle~{}Yg(\alpha^{\sharp},Z)-g(\alpha^{\sharp},\nabla_{Y}Z)$
$\displaystyle=$
$\displaystyle~{}g(\nabla_{Y}\alpha^{\sharp},Z)+g(\alpha^{\sharp},\nabla_{Y}Z)-g(\alpha^{\sharp},\nabla_{Y}Z)$
$\displaystyle=$ $\displaystyle~{}g(\nabla_{Y}\alpha^{\sharp},Z).$
Since $\beta$ and $\beta^{\prime}$ are symmetric 2-forms,
$\nabla\alpha=\frac{1}{2}(\beta-\beta^{\prime})$ is also a symmetric 2-form.
Thus,
$\displaystyle 2(\nabla\alpha)(Y,Z)=$
$\displaystyle~{}(\nabla\alpha)(Y,Z)+(\nabla\alpha)(Z,Y)$ $\displaystyle=$
$\displaystyle~{}g(\nabla_{Y}\alpha^{\sharp},Z)+g(\nabla_{Z}\alpha^{\sharp},Y)$
$\displaystyle=$ $\displaystyle~{}\mathcal{L}_{\alpha^{\sharp}}g(Y,Z).$
Since $2\nabla\alpha=\beta-\beta^{\prime}$ and $(M,D,g)$ is a Hesse soliton,
$\beta-\frac{1}{2}\mathcal{L}_{X}g=\lambda g,$
one has
$\displaystyle\beta^{\prime}(Y,Z)=$
$\displaystyle~{}\beta(Y,Z)-2(\nabla\alpha)(Y,Z)$ $\displaystyle=$
$\displaystyle~{}\frac{1}{2}\mathcal{L}_{X}g(Y,Z)+\lambda
g(Y,Z)-\mathcal{L}_{\alpha^{\sharp}}g(Y,Z)$ $\displaystyle=$
$\displaystyle~{}\frac{1}{2}\mathcal{L}_{(X-2\alpha^{\sharp})}g(Y,Z)+\lambda
g(Y,Z).$
∎
We consider gradient Hesse solitons. If the first Koszul form $\alpha$ is
exact, that is, $\alpha=dF$ for some smooth function $F$ on $M$, then the
Hesse soliton of the dual space is also gradient.
###### Corollary 5.2.
Let $(M,D,g,f)$ be a gradient Hesse soliton, such that the first Koszul form
is exact, that is, $\alpha=dF$ for some smooth function $F$ on $M$. Then the
dual space $(M,D^{\prime},g)$ is also a gradient Hesse soliton with the
potential function $f-2F.$
###### Proof.
Since $\alpha=dF$, we have
$g(\alpha^{\sharp},Y)=\alpha(Y)=dF(Y)=YF=g(\nabla F,Y),$
for any vector field $Y$ on $M$. Thus, we have
$\alpha^{\sharp}=\nabla F.$
By Theorem 5.1, the proof is complete. ∎
One can understand the dual space of Hesse-Einstein manifolds as Hesse
solitons.
###### Corollary 5.3.
Let $(M,D,g)$ be a Hesse-Einstein manifold,
$\beta=\lambda g,$
then the dual space is a Hesse soliton $(M,D^{\prime},g,-2\alpha^{\sharp})$,
that is, it satisfies
$\beta^{\prime}-\frac{1}{2}\mathcal{L}_{(-2\alpha^{\sharp})}g=\lambda g.$
## References
* [1] S. Amari, Information Geometry and Its Applications, Applied Mathematical Sciences, 194. Springer, [Tokyo], (2016).
* [2] E. Calabi, Improper affine hyperspheres of convex type and a generalization of a theorem by K. Jörgens, Mich. Math. J. 5, (1958) 105-126.
* [3] N. N. Chentsov, Statistical Decision Rules and Optimal Inference, American Mathematical Society, Providence, RI (1982).
* [4] B. Chow, S.-C. Chu, D. Glickenstein, C. Guenther, J. Isenberg, T. Ivey, D. Knopf, P. Lu, F. Luo and L. Ni, The Ricci Flow: Techniques and Applications, Part I, Mathematical Surveys and Monographs 135, Amer. Math. Soc., (2007).
* [5] A. Fujiwara, Foundations of information geometry, Makinoshoten, (2015).
* [6] R. Hamilton, Three-manifolds with positive Ricci curvature, J. Differential Geom. (1982) 17, 255–306.
* [7] M. Mirghafouri, F. Malek, Long-time existence of a geometric flow on closed Hessian manifolds, J. Geom. Phys. 119 (2017) 54-65.
* [8] H. Omori, Isometric immersions of Riemannian manifolds, J. Math. Soc. Japan. (1967) 19, 205-214.
* [9] S. Puechmorel and T. D. Tô, Convergence of the Hesse-Koszul flow on compact Hessian manifolds, arXiv:2001.02940 [math.DG].
* [10] H. Shima, Vanishing theorems for compact Hessian manifolds, Ann. Inst. Fourier, Grenoble, 36-3 (1986), 183-205.
* [11] H. Shima, The geometry of Hessian structures, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, (2007).
* [12] S.-T. Yau, Harmonic functions on complete Riemannian manifolds, Comm. Pure Appl. Math. 28, (1975), 201–228.
The emergence of visual semantics through communication games
Daniela Mihai, Jonathon Hare
Vision Learning and Control, Electronics and Computer
Science, University of Southampton.
Keywords: emergent communication, feature learning, visual system
Abstract
The emergence of communication systems between agents which learn to play
referential signalling games with realistic images has attracted a lot of
attention recently. The majority of work has focused on using fixed,
pretrained image feature extraction networks which potentially bias the
information the agents learn to communicate. In this work, we consider a
signalling game setting in which a ‘sender’ agent must communicate
information about an image to a ‘receiver’ who must select the correct image
from many distractors. We investigate the effect of the feature extractor’s
weights and of the task being solved on the visual semantics learned by the
models. We first demonstrate to what extent the use of pretrained feature
extraction networks inductively biases the visual semantics conveyed by the
emergent communication channel and quantify the visual semantics that are induced.
We then go on to explore ways in which inductive biases can be introduced to
encourage the emergence of semantically meaningful communication without the
need for any form of supervised pretraining of the visual feature extractor.
We impose various augmentations to the input images and additional tasks in
the game with the aim to induce visual representations which capture
conceptual properties of images. Through our experiments, we demonstrate that
communication systems which capture visual semantics can be learned in a
completely self-supervised manner by playing the right types of game. Our work
bridges a gap between emergent communication research and self-supervised
feature learning.
## 1 Introduction
Deep-agent emergent language research aims to develop agents that can
cooperate with others, including humans. To achieve this goal, these agents
necessarily communicate with particular protocols through communication
channels. In emergent-communication research, the communication protocols are
learned by the agents, and researchers often investigate how these protocols
compare to natural human languages. In this paper, we study the emergence of
visual semantics in such learned communication protocols, in the context of
referential signalling games (D. K. Lewis, 1969). Although previous research has
looked into how pre-linguistic conditions, such as the input representation
(either symbolic or raw pixel input) (Lazaridou et al., 2018), affect the nature
of the communication protocol, we highlight some features of the referential game
that can improve the semantics, and hence push it towards a more natural form,
and away from a pure image-hashing solution that could naïvely solve the game
perfectly. We then explore the effects of linking language learning with
feature learning in a completely self-supervised setting where no information
on the objects present in a scene is provided to the model at any point. We
thus seek to build a bridge between recent research in self-supervised feature
learning with recent advances in self-supervised game play with emergent
communication channels.
The idea that agents might learn language by playing visually grounded games
has a long history (Cangelosi & Parisi, 2002; Steels, 2012). Research in this
space has recently had something of a resurgence with the introduction of a
number of models that simulate the play of referential games (D. K. Lewis, 1969)
using realistic visual inputs (Lazaridou et al., 2017; Havrylov & Titov, 2017;
Lee et al., 2017). On one hand, these works have shown that the agents can learn to
successfully communicate to play these games; however, on the other hand,
there has been much discussion as to whether the agents are really learning a
communication system grounded in what humans would consider to be the
semantics of visual scenes. Bouchacourt and Baroni (2018) highlight this issue
in the context of a pair of games designed by Lazaridou et al. (2017) which
involved the sender and receiver agents being presented with pairs of images. They show
that the internal representations of the agents are perfectly aligned, which
allows them to successfully play the game but does not enforce capturing
conceptual properties. Moreover, when the same game is played with images made
up of random noise, the agents still succeed at communicating, which suggests
that they agree on and rely on incomprehensible low-level properties of the
input which drift away from human-interpretable properties. This finding
should perhaps not be so surprising; it is clear to see that one easy way for
agents to successfully play these visual communication games would be by
developing schemes which create hash-codes from the visual content at very low
levels (perhaps even at the pixel level).
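To make this degenerate strategy concrete, the following toy sketch (purely illustrative; the image sizes and hashing scheme are arbitrary choices, not part of any of the cited models) shows how a ‘language’ of raw pixel hashes solves the referential game perfectly while conveying nothing a human would recognise as visual semantics:

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)
batch = rng.integers(0, 256, size=(16, 8, 8), dtype=np.uint8)  # toy 'images'
target = 3

def pixel_hash(img):
    # the sender's 'message': a digest of the raw pixel bytes
    return hashlib.sha256(img.tobytes()).hexdigest()[:8]

message = pixel_hash(batch[target])
# the receiver 'understands' the message by hashing every candidate
guess = next(i for i, img in enumerate(batch) if pixel_hash(img) == message)
print(guess == target)  # prints True: perfect play, zero semantics
```

Such a protocol succeeds equally well on random noise images, exactly the failure mode reported above.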
Havrylov and Titov (2017) explored a different, and potentially harder, game than
that proposed by Lazaridou et al. (2017). In their game (see Section 3 for full
details), the sender sees the target image and the receiver sees a batch of
images formed of a number of distractor images plus the target one. The sender
agent is then allowed to send a variable-length message, up to a maximum
length, from a fixed vocabulary to the receiver. The latter then needs to use
that message to identify the target. As opposed to Lazaridou et al. (2017)’s game
in which both agents see only a pair of images, this setting requires the
message to include information that will allow the receiver to pick the target
image from a batch of 128 images. In their work, they show some qualitative
examples in which it does appear that the generated language does in some way
convey the visual semantics of the scene (in terms of ‘objectness’ —
correlations between the sequences of tokens of the learnt language and
objects, as perceived by humans, known to exist within the images). There are
however many open questions from this analysis; one of the key questions is to
what extent the ImageNet-pretrained VGG-16 CNN (Simonyan & Zisserman, 2015) used
in the model is affecting the language protocol that emerges.
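The structure of one round of this type of game can be sketched as follows (heavily simplified: the ‘message’ below is a continuous feature vector standing in for the discrete, Gumbel-softmax-relaxed token sequence actually used by Havrylov and Titov (2017); the dimensions, noise level and dot-product scoring are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def play_round(message, receiver_feats, target_idx):
    # the receiver scores every candidate image against the message;
    # training minimises the cross-entropy of the target over the batch
    logits = receiver_feats @ message
    shifted = logits - logits.max()               # numerically stable softmax
    log_probs = shifted - np.log(np.exp(shifted).sum())
    loss = -log_probs[target_idx]
    success = int(np.argmax(logits)) == target_idx
    return loss, success

B, d = 128, 64                  # target + 127 distractors, feature dimension
receiver_feats = rng.normal(size=(B, d))
target_idx = 5
# stand-in 'sender': the target's own features plus a little noise
message = receiver_feats[target_idx] + 0.1 * rng.normal(size=d)

loss, success = play_round(message, receiver_feats, target_idx)
print(success)
```

The batch size of 128 here mirrors the game described above; the cross-entropy over the batch is what makes messages that merely distinguish the target from 127 distractors sufficient to win.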
In this work, we explore visual semantics in the context of Havrylov and Titov
(2017)’s game by carefully controlling the visual feature extractor that is
used and augmenting the game play in different ways. We seek to explore what
factors encourage the emergent language to convey visual semantics rather than
falling back to a communication system that just learns hashes of the input
images. More concretely, we:
* •
Study the effect of different weights in the CNN used to generate the features
(pretrained on ImageNet and frozen as in the original work, randomly
initialised and frozen, and, learned end-to-end in the model). We find that
models with a feature extractor pretrained in a supervised way capture the
most semantic content in the emergent protocol.
* •
Investigate the effect of augmentations that make the game harder by changing
the image given to the sender (adding noise and/or random rotations), but not
the receiver. Overall, adding noise seems to only make the game slightly
harder as the communication success drops, while rotation improves the visual
semantics metrics.
* •
Explore the effect of independently augmenting the images given to the sender
and the receiver (random cropping and resize to the original image size,
random rotations and colour distortion), so they do not see the exact same
image. We show that it is possible to get a fully learned model that captures
similar amounts of semantic notions as a model with a pretrained feature
extractor.
* •
Extend the game to include a secondary task (guessing the rotation of the
sender’s input) in order to assess whether having agents perform more diverse
tasks might lead to stronger visual semantics emerging. We find that without a
complex sequence of data augmentation transforms and any supervision, a more
meaningful communication protocol can emerge between agents that solve
multiple tasks.
* •
Analyse the effect of pretraining the feature extractor network in a self-
supervised framework before engaging in the multi-task game. We show that
solving such a self-supervised task helps ground the emergent protocol without
any human supervision and is even more beneficial for the semantic content
captured by a fully learned model.
We draw attention to the fact that other than in the cases where we use
pretrained feature extractors, our simulations are completely self-supervised,
and there is no explicit signal of what a human would understand as the
‘visual semantics’ given to the models at any point. If our models are to
communicate visual semantics through their communication protocols, then they
must learn how to extract features that provide suitable information on those
semantics from raw image pixel data.
The remainder of this paper is structured as follows: Section 2 looks at
related work, which necessarily covers a wide range of topics. Section 3
describes our baseline game and model, building upon Havrylov and Titov (2017).
Section 4 describes and discusses a range of investigations that explore what
can make the emergent communication protocol convey more semantically
meaningful information. Finally, Section 5 concludes by summarising our
findings and makes suggestions for ways in which these could be taken further
forward in the future.
## 2 Related Work
In this section, we cover the background literature relevant to our work: the
emergence of semantic concepts in artificial communication protocols, without
previously embedded knowledge from pretrained features. As our work seeks to
bridge research in disparate areas, our discussion necessarily crosses a broad
range of topics from the ‘meaning of images’ to emergent communication through
game play to feature learning, whilst at the same time considering neural
architectures that allow us to build models that can be trained. We first
discuss the way humans perceive real-world scenes and what it is that one
comprehends as visual semantics. We then proceed and give an overview of the
history of multi-agent cooperation games which led to the research field of
emergent communication. We look at recent advances that allow us to train
emergent communication models parameterised by neural networks using gradient-
based methods, and end by looking at recent advances in feature learning.
### 2.1 What do humans perceive as ‘visual semantics’?
When presented with a natural image, humans are capable of answering questions
about any objects or creatures, and about any relationships between them
(Biederman, 2017). In this work, we focus on the first question, the _what?_ ,
i.e. the object category (or the list of categories). Research on the way
humans perceive real-world scenes such as Biederman (1972) talk about the
importance of meaningful and coherent context in perceptual recognition of
objects. Their study compares the accuracy of identifying a single object in a
real-world jumbled scene versus in a coherent scene. On the other hand,
theories such as that of Henderson and Hollingworth (1999) support the idea that
object identification is independent of global scene context. A slightly more
recent psychophysical study (Fei-Fei ., 2007) shows that humans, in a single
glance of a natural image, are capable of recognising and categorising
individual objects in the scene and distinguishing between environments,
whilst also perceiving more complex features such as activities performed or
social interactions.
Despite the debate between these two and many other models of scene and object
perception, it is clear that the notion of ‘objects’ is important in how a
scene is understood by a human. Throughout this work we consider an object-
based description of natural images (aligned with what humans would consider
to be objects or object categories) to be suitable for the measurement of
semantics captured by an emergent communication protocol. Our specific
measures are detailed in Section 3.3.
### 2.2 Emergent Communication
#### Background.
The emergence of language in multi-agent settings has traditionally been
studied in the language evolution literature which is concerned with the
evolution of communication protocols from scratch (Steels, 1997; Nowak and
Krakauer, 1999). These early works survey mathematical models and software
simulations with artificial agents to explore how various aspects of language
have begun and continue to evolve. One key finding of Nowak and Krakauer (1999) is
that signal-object associations are only possible when the information
transfer is beneficial for both parties involved, and hence that _cooperation_
is a vital prerequisite for language evolution. Our work is inspired by the
renewed interest in the field of emergent communication which uses
contemporary deep learning methods to train agents on referential
communication games (Baroni, 2020; Chaabouni et al., 2019; Li and Bowling,
2019; Lazaridou et al., 2018; Cao et al., 2018; Evtimova et al., 2017;
Havrylov and Titov, 2017; Lazaridou et al., 2017; Lee et al., 2017; Mordatch
and Abbeel, 2017; Sukhbaatar et al., 2016). These works all build toward the
long-standing goal of having specialised agents that can interact with each
other and with humans to cooperatively solve tasks, and hence assist them in
daily life, for instance with everyday chores.
#### Protolanguage and Properties.
Recent work by Baroni (2020) highlights some of the priorities in current
emergent language research and sketches the characteristics of a useful
_protolanguage_ for deep agents. It draws on the idea from linguistics that
human language has gone through several stages before reaching the full-blown
form it has today, and it had to start from a limited set of simple
constructions (Bickerton, 2014). By providing a realistic scenario of a daily
interaction between humans and deep agents, Baroni (2020) emphasises that a
useful protolanguage first needs to use words in order to categorise
perceptual input, then allow the creation of new words as new concepts are
encountered, and only afterwards deal with predication structures (i.e.
between object-denoting words and property- or action-denoting words). The focus of our
work is on the categorisation phase as we explore whether it is possible for
deep agents to develop a language which captures visual concepts whilst
simultaneously learning features from natural images in a completely self-
supervised way.
In the referential game setting used in our work, the protolanguage is formed
of variable-length sequences of discrete tokens, which are chosen from a
predefined, fixed vocabulary. The learned protocol is not grounded in any way,
such that the messages are not forced to be similar to those of natural
language. As described in Section 2.1, we believe it is a reasonable
assumption that if the game were to be played by human agents they would
capture the object’s category and its properties that help distinguish the
target from the distractor images.
### 2.3 Games
Lewis’s classic signalling games (D. K. Lewis, 1969) have been extensively
studied for language evolution purposes (Steels, 1997; Nowak and Krakauer,
1999), but also in game theory under the name ‘cheap talk’ games. These games
are coordination problems in which agents must choose one of several
alternative actions, but in which their decisions are influenced by their
expectations of
other agents’ actions. Similar to Lewis’s games, image reference games are
coordination problems between multiple agents that require a _limited_
communication channel through which information can be exchanged for solving a
cooperative task. The task usually requires one agent transmitting information
about an image, and a second agent guessing the correct image from several
others based on the received message (Lazaridou et al., 2017, 2018; Havrylov
and Titov, 2017). Other examples of cooperative tasks which require
communication between multiple agents include: language translation (Lee et
al., 2017), logic riddles (Foerster et al., 2016), simple dialogue (Das et
al., 2017) and negotiation (M. Lewis et al., 2017).
One of the goals in emergent communication research is for the developed
_protolanguage_ to receive no, or as little as possible, human supervision.
However, reaching coordination between agents solving a cooperative task,
while developing a human-friendly communication protocol has been shown to be
extremely difficult (Lowe et al., 2019; Chaabouni et al., 2019; Kottur et
al., 2017). In these games, the emergent language has no prior meaning
(neither semantics nor syntax) and it converges to develop these by learning
to solve the task through many trials or attempts. Lee et al. (2019) propose
a translation task (i.e. encoding a source-language sequence and decoding it
into a target language) via a third pivot language. They show that auxiliary
constraints on this pivot language help to best retain the original syntax
and semantics. Other approaches (Havrylov and Titov, 2017; Lazaridou et al.,
2017; Lee et al., 2017) directly force the agents to imitate natural language
by using pretrained visual vectors, which already encode information about
objects. Lowe et al. (2020), on the other hand, discuss the benefits of
combining expert-knowledge supervision and self-play, with the end goal of
making human-in-the-loop language-learning algorithms more efficient.
Our work builds upon Havrylov and Titov (2017)’s referential game (which we
describe in more detail in Section 3), but in contrast to the original game,
in which the feature extractor was pretrained on an object-classification
task, we also learn the feature extractor. Therefore, the extracted features
are not grounded in natural language. We take inspiration from all the
mentioned papers and investigate to what extent the communication protocol
can be encouraged to capture semantics and learn a useful feature extractor in
a completely self-supervised way by just solving the predetermined task.
### 2.4 Differentiable neural models of representation
The works discussed in the previous two subsections predominantly utilise
models that communicate with sequences of discrete tokens. Particularly in
recent work, the token-producing and token-consuming parts of the models are
typically built from neural architectures such as recurrent neural networks,
for example LSTMs. One of the biggest challenges with these models
is that the production of discrete tokens necessarily involves a sampling step
in which the next token is drawn from a categorical distribution, which is
itself parameterised by a neural network. Such a sampling operation is non-
differentiable, and thus, until recently, the only way to learn such models
was by using reinforcement learning, and in particular unbiased but
high-variance Monte Carlo estimation methods such as the REINFORCE algorithm
(Williams, 1992) and its variants.
Over the last six years there has been much interest in neural-probabilistic
latent variable models, perhaps most epitomised by Kingma and Welling
(2014)’s Variational Autoencoder (VAE). The VAE is an autoencoder that models
its latent space not as continuous fixed-length vectors, but as multivariate
normal distributions. The decoder part of the VAE, however, only takes a
single sample of the distribution as input. Although they contain a
stochastic operation in the middle of the network (sampling
$\bm{y}\sim\mathcal{N}(\bm{\mu},\bm{\Sigma})$), VAEs can be trained with
gradient descent using what has become popularly known as the
reparameterisation trick since the publication of the VAE model (Kingma and
Welling, 2014), although the idea itself is much older (Williams, 1992).
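In code, the reparameterisation trick amounts to rewriting the sample as a deterministic function of the distribution parameters plus independent standard-normal noise, so gradients with respect to $\bm{\mu}$ and $\bm{\sigma}$ flow through the affine transform. A minimal, framework-free sketch:

```python
import random

def reparameterise(mu, sigma, eps=None):
    """Sample y ~ N(mu, sigma^2) elementwise as a deterministic function
    of (mu, sigma) plus independent noise eps ~ N(0, 1). In an autodiff
    framework, gradients w.r.t. mu and sigma pass through the affine map
    y = mu + sigma * eps; the stochasticity is isolated in eps."""
    if eps is None:
        eps = [random.gauss(0.0, 1.0) for _ in mu]
    return [m + s * e for m, s, e in zip(mu, sigma, eps)]
```

Passing a fixed `eps` makes the sampling reproducible, which is also how one would unit-test such a layer.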
The reparameterisation trick only applies directly when the distribution can
be factored into a function that is continuous and differentiable almost
everywhere. In 2017 this limitation was addressed independently by two papers
(Maddison et al., 2017; Jang et al., 2017) that introduced what we now know
as the Gumbel-Softmax estimator, which is a reparameterisation that allows us
to sample from a categorical distribution
($t\sim\operatorname{Cat}(p_{1},\dots,p_{K})\,;\,\sum_{i}p_{i}=1$) given its
logits $\bm{x}$.
One way to utilise this is to use the Gumbel-Softmax approximation during
training and replace it with the hard max at test time; however, this can
often lead to problems because the model can learn to exploit information
leaked through the continuous variables during training. A final trick, the
straight-through operator, can be used to circumvent this problem (Jang et
al., 2017). Combining the Gumbel-Softmax trick with the straight-through
$\operatorname{argmax}$ results in the Straight-Through Gumbel-Softmax
(ST-GS), which gives discrete samples with a usable gradient. The straight-
through operator is biased but has low variance; in practice, it works very
well and is better than the high-variance unbiased estimates one could get
through REINFORCE (Havrylov and Titov, 2017). In short, this trick allows us
to train neural network models that incorporate fully discrete sampling
operations using gradient-based methods in a fully end-to-end fashion.
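A minimal, framework-free sketch of the ST-GS forward pass follows; the straight-through backward pass is only noted in a comment, since it requires an autodiff framework:

```python
import math
import random

def gumbel_softmax_st(logits, tau=1.0, u=None):
    """Forward pass of Straight-Through Gumbel-Softmax over raw logits.
    Returns (hard, soft): `hard` is the one-hot sample used in the forward
    pass; `soft` is the relaxed sample whose gradient the backward pass
    would use."""
    # Gumbel(0, 1) noise via the inverse CDF: g = -log(-log(u)), u ~ U(0, 1)
    if u is None:
        u = [random.random() for _ in logits]
    g = [-math.log(-math.log(ui)) for ui in u]
    # Temperature-scaled softmax over the perturbed logits (numerically stable)
    z = [(l + gi) / tau for l, gi in zip(logits, g)]
    m = max(z)
    exp_z = [math.exp(zi - m) for zi in z]
    s = sum(exp_z)
    soft = [e / s for e in exp_z]
    # Hard one-hot sample via argmax of the relaxed sample
    k = max(range(len(soft)), key=soft.__getitem__)
    hard = [1.0 if i == k else 0.0 for i in range(len(soft))]
    # Straight-through: an autodiff framework would return
    # hard + (soft - soft.detach()), so the forward value is `hard`
    # while gradients flow through `soft`.
    return hard, soft
```

As `tau` is lowered, the relaxed sample approaches the one-hot sample; the bias of the straight-through estimator shrinks while the gradient variance grows.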
To conclude this subsection we would like to highlight that autoencoders,
variational autoencoders and many of the models used for exploring emergent
communication with referential games are all inherently linked. All of these
models attempt to compress raw data into a small number of latent variables,
and thereby capture salient information, whilst discarding information which
is not relevant to the task at hand. The only thing that is different in these
models is the choice of how the latent variables are modelled. In particular,
the central part of the model by Havrylov and Titov (2017) that we build upon
in this paper (see Section 3) is essentially an autoencoder where the latent
variable is a variable-length sequence of categorical variables (the loss
used is not one of reconstruction; however, it certainly strongly encourages
the receiving agent to reconstruct the feature vector produced by the sender
agent); this is in many ways similar to the variational autoencoder variant
demonstrated by Jang et al. (2017), which used fixed-length sequences of
Bernoulli or categorical variables.
### 2.5 Feature Learning
Among a variety of unsupervised approaches for feature representation
learning, the self-supervised learning framework is one of the most successful
as it uses pretext tasks such as image inpainting (Pathak et al., 2016),
predicting image patch locations (Doersch et al., 2015) and predicting image
rotations (Gidaris et al., 2018). Such pretext tasks allow the target
objective to be computed without supervision while requiring high-level image
understanding. As a result, high-level semantics are captured in the visual
representations, which can then be used to solve tasks such as visual
referential games. Kolesnikov et al. (2019) provide an extensive overview of
self-supervised methods for feature learning.
Recently, some of the most successful self-supervised algorithms for visual
representation learning are using the idea of contrasting positive pairs
against negative pairs. Hénaff et al. (2019) tackle the task of
representation learning with an unsupervised objective, Contrastive
Predictive Coding (van den Oord et al., 2018), which extracts stable
structure from still images. Similarly, Ji et al. (2018) present a clustering
objective that maximises the mutual information between class assignments for
pairs of images. They learn a neural network classifier from scratch which
directly outputs semantic labels, rather than high-dimensional
representations that need external processing to be used for semantic
clustering. Despite the recent surge of interest, Chen et al. (2020) have
shown through the strength of their approach that self-supervised learning
still remains undervalued. They propose a simple framework, SimCLR,
for contrastive visual representation learning. SimCLR learns meaningful
representations by maximising similarity between differently augmented views
of the same image in the latent space. One of the main contributions of this
work is that it outlines the critical role of data augmentations in defining
effective tasks to learn useful representations. We will also explore this
framework in some of our experiments detailed in Section 4.2.
Our attempt at encouraging the emergence of semantics in the learned
communication protocol is most similar to previous works which combine
multiple pretext tasks into a single self-supervision task (Chen et al.,
2019; Doersch and Zisserman, 2017). Multi-task learning (MTL) rests on the
hypothesis that people often apply knowledge learned from previous tasks to
learn a new one. Similarly, when multiple tasks are learned in parallel using
a shared representation, knowledge from one task can benefit the other tasks
(Caruana, 1997). MTL has proved itself useful in language modelling for
models such as BERT (Devlin et al., 2018), which obtains state-of-the-art
results on eleven natural language processing tasks. More recently, Radford
et al. (2019) combine MTL and language-model pretraining and propose MT-DNN,
a model for learning representations across multiple natural language
understanding tasks. In this
work, we are also interested in the effect of solving multiple tasks on the
semantics captured in the communication protocol.
## 3 Baseline Experimental Setup
In this section we provide the details of our experimental setup; we start
from Havrylov and Titov (2017)’s image reference game. The objective of the game
is for the Sender agent to communicate information about an image it has been
given to allow the Receiver agent to correctly pick the image from a set
containing many (127 in all experiments) distractor images.
### 3.1 Model Architecture
Figure 1: Havrylov and Titov (2017)’s game setup and model architecture.
Havrylov and Titov (2017)’s model and game are illustrated in Figure 1. The
Sender agent utilises an LSTM to generate a sequence of tokens given a hidden
state initialised with visual information and a Start of Sequence (SoS)
token. To ensure that only a sequence of discrete tokens is transmitted, the
output token logits produced by the LSTM cell at each timestep are sampled
with the Straight-Through Gumbel-Softmax operator (ST-GS). (Havrylov and
Titov (2017) experimented with ST-GS, the relaxed Gumbel-Softmax and
REINFORCE in their work; however, we focus our attention on ST-GS here.) The
Receiver agent uses an LSTM to decode the sequence of tokens produced by the
Sender, from which the output is projected into a space that allows the
Receiver’s image vectors to be compared using the inner product. Havrylov and
Titov (2017) use a fixed VGG16 CNN pretrained on ImageNet to extract image
features in both agents. The model is trained using a hinge-loss objective to
maximise the score for the correct image whilst simultaneously forcing the
distractor images to have low scores. The Sender can generate messages up to
a given maximum length; shorter codes are generated by the use of an End of
Sequence (EoS) token. Although not mentioned in the original paper, we found
that the insertion of a BatchNorm layer in the Sender between the CNN and
LSTM, and after the LSTM in the Receiver, was critical for the learnability
of the model and the reproduction of the original experimental results.
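The hinge-loss objective can be written in a couple of lines. The sketch below is a minimal version over inner-product scores; the margin value of 1.0 is our assumption, since the original paper does not state it:

```python
def hinge_loss(target_score, distractor_scores, margin=1.0):
    """Hinge objective for one game: push the target image's score above
    every distractor's score by at least `margin`. Distractors already
    below the margin contribute zero loss."""
    return sum(max(0.0, margin - target_score + s) for s in distractor_scores)
```

With 127 distractors per game, the loss is zero only once the target outscores all of them by the margin, which is what drives the Receiver's ranking behaviour used in the metrics of Section 3.3.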
### 3.2 Training Details
Our experiments use the model described above, with some modifications under
different experimental settings. In all cases, we perform experiments using
the CIFAR-10 dataset rather than the COCO dataset used in the original work,
because replicating the original results requires multiple GPUs and
considerable training time. (We found that about 32GB of RAM spread across
four RTX-2080Ti GPUs was required, with the Sender, Receiver and feature
extractor each being placed on a different GPU and the loss being computed on
the fourth. Each epoch of 74624 games (for each batch of 128 images we played
the 128 possible games by taking each image in turn as the target) took
around 7 minutes to complete, and the convergence of the communication rate
to a steady level took at least 70 epochs.) In light of the smaller
resolution images and lower diversity of class information, we choose a word
embedding dimension of 64, a hidden state dimension of 128, and a total
vocabulary size of 100 (including the EoS token). We also limit the maximum
message length to 5 tokens. The training data is augmented using colour
jitter ($p_{bri}=0.1,p_{con}=0.1,p_{sat}=0.1,p_{hue}=0.1$), random grayscale
transformation ($p=0.1$), and random horizontal flipping ($p=0.5$), so there
is a very low probability of the model seeing exactly the same image more
than once during training. The batch size is set to 128, allowing the
Receiver to see features from the target image plus 127 distractors. Most
simulations converge, or only improve slowly, after about 60 epochs; however,
for consistency, all results are reported on models trained to 200 epochs,
where convergence was observed to be guaranteed for well-initialised models.
(Certain model configurations were more sensitive to initialisation; this is
discussed further in the next section.)
### 3.3 Metrics
Our key objective is to measure how much visual semantic information is being
captured by the emergent language. If humans were to play this game, it is
clear, as discussed in Section 2.1, that a sensible strategy would be to
describe the target image by its semantic content (e.g. “a yellow car front-
on” in the case of the example in Figure 1). It is also reasonable to assume
in the absence of strong knowledge about the make-up of the dataset (for
example, that the colour yellow is relatively rare) that a semantic
description of the object in the image (a “car”) should have a strong part to
play in the communicated message if visual semantics are captured. Work such
as Hare et al. (2006) considers the semantic gap between object/class labels
and the full semantics and significance of the image. However, in the case of
the CIFAR-10 dataset, in which most images have a single subject,
“objectness” can be considered a reasonable measure of semantics.
With this in mind, we can measure to what extent the communicated messages
capture the object by looking at how the target class places in the ranked
list of images produced by the Receiver. More specifically, in the top-5
ranked images guessed by the Receiver, we can calculate the number of times
the target object category appears, and across all the images we can compute
the average of the ranks of the images with the matching category. In the
former case, if the model captures more semantic information, the number will
increase; in the latter, the mean-rank decreases if the model captures more
semantic information. A model which is successful at communicating and
performs almost ideal hashing would have an expected top-5 number of the
target class approaching 1.0 and expected average rank of 60, whilst a model
that completely captures the “objectness” (and still guesses the correct
image) would have an expected top-5 target class count of 5 and expected mean
rank of 7.35. In addition to these metrics for measuring visual semantics, we
also measure top-1 and top-5 communication success rate (receiver guesses
correctly in the top-1 and top-5 positions) and the message length for each
trial. On average across all games, there are 13.7 images with the correct
object category in each game (on the basis that the images are uniformly drawn
without replacement from across the 10 classes and the correct image and its
class are drawn from within this). If the message transmitted only contained
information about the object class, then the communication success, when
considering the top-1 and top-5 choices of the Receiver, would be on average
0.07, and 0.36 respectively. Since we observe that throughout the experiments
there is a significant trade-off between the semantics measures and the top-1
communication rate, we consider top-5 rate a better indication of the capacity
of the model to succeed at the task while learning notions of semantics. If
the communication rate in top-5 is higher than the average, it means that the
message must contain additional information about the correct image, beyond
the type of object. However, we do not easily have the tools to find out what
that extra information might be; it could be visual semantics such as
attributes of the object, but it could also be some robust hashing scheme.
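The baseline values quoted above follow from simple counting, which the following illustrative snippet reproduces (using the batch size of 128 and 10 classes from our setup):

```python
def class_only_baselines(batch_size=128, num_classes=10, k=5):
    """Expected statistics when the message conveys only the object class.
    Besides the target, the 127 distractors are uniform over 10 classes,
    so a game contains 1 + 127/10 = 13.7 same-class images on average.
    A receiver that can only guess among these succeeds with probability
    1/13.7 (top-1) and 5/13.7 (top-5)."""
    n_same = 1 + (batch_size - 1) / num_classes
    top1 = 1 / n_same
    topk = k / n_same
    # If the protocol fully captured 'objectness', the same-class images
    # would occupy the top ranks 1..n_same, giving mean rank (1 + n_same)/2.
    mean_rank_objectness = (1 + n_same) / 2
    return n_same, top1, topk, mean_rank_objectness
```

Running this gives 13.7 same-class images per game, class-only communication rates of roughly 0.07 (top-1) and 0.36 (top-5), and an expected mean rank of 7.35 under full objectness, matching the figures stated in the text.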
## 4 Experiments, Results and Discussion
This section describes a number of experiments and investigations into the
factors that influence the emergence of visual semantics in the baseline
experimental setup described in the previous section, as well as extended
versions of that baseline model. We start by exploring to what extent using a
pretrained feature extractor influences what the agents learn to communicate
and then look at different ways in which semantically meaningful communication
can be encouraged without any form of supervision (including supervised
pretraining).
### 4.1 The effect of different weights in the visual feature extractor
Generating and communicating hash codes is very clearly an optimal (if
decidedly un-human-like) way to play the image guessing game successfully. In
Havrylov and Titov (2017)’s original work there was qualitative evidence that
this did not happen when the model was trained, and that visual semantics
were captured. An important first question is: to what extent is this caused
by the pretrained feature extractor?
We attempt to answer this question by exploring three different model
variants: the original model with the CNN fixed and initialised with ImageNet
weights; the CNN fixed, but initialised randomly; and, the CNN initialised
randomly, but allowed to update its weights during training. Results from
these experiments are summarised in Table 1. The first observation relates to
the visual-semantics measures; it is clear (and unsurprising) that the
pretrained model captures the most semantics of all the models. It is also
reasonable that we observe less semantic alignment with the end-to-end model;
without external biases, this model should be expected to move towards a
hashing solution. It is perhaps somewhat surprising that the end-to-end model
and the random model have a similar communication success rate; however, it
is already known that a randomly initialised CNN can provide reasonable
features (Saxe et al., 2011). During training, the Sender and Receiver
convergence had particularly low variance with both the end-to-end and random
models, allowing the agents to quickly evolve a successful strategy. This is
in contrast to the pretrained model, which had markedly higher variance, as
can be seen from the plots in Figure 2.
Figure 2: The game-play and semantic performance over the training epochs of
the three model variants using a pretrained, random or fully learned feature
extractor CNN. The loss plot shows that the learned and random models
converge much faster than the pretrained one and have lower variance,
allowing the agents to evolve a successful game strategy.
One might question whether the end-to-end model would be handicapped because
it had more weights to learn in the same number of epochs (200 for all
models); however, as the results show, the end-to-end model has the best
performance. We also investigated whether the models required more training
time, but training all the models for 1000 epochs yielded only a $2\%$
improvement in communication rate across the board.
Table 1: The effect of different weights in the feature extractor CNN. Measures are averaged across 7 runs of the game for each model on the CIFAR-10 validation set. Communication rate values in brackets are standard deviations across games, which show the sensitivity to different model initialisations and training runs. The message length standard deviation is measured across each game and averaged across the 7 runs, and shows how much variance there is in transmitted message length.

| Feature extractor | Comm. rate | Message length | Top-5 comm. rate | #Target class in top-5 | Target class avg. rank |
|---|---|---|---|---|---|
| Pretrained & fixed | 0.90 ($\pm$0.02) | 4.93 ($\pm$0.34) | 1 | 1.86 | 46.25 |
| Random & frozen | 0.93 ($\pm$0.03) | 4.90 ($\pm$0.39) | 1 | 1.69 | 51.65 |
| Learned end-end | 0.94 ($\pm$0.02) | 4.90 ($\pm$0.39) | 1 | 1.5 | 57.14 |
Table 2: The effect of different weights in the feature extractor CNN when the model is augmented by adding noise and/or random rotations to the Sender agent’s input images, and when independently augmenting both agents’ input images following the SimCLR framework (Chen et al., 2020). Measures as per Table 1.

| Feature extractor | Comm. rate | Message length | Top-5 comm. rate | #Target class in top-5 | Target class avg. rank |
|---|---|---|---|---|---|
| _Sender images augmented with Gaussian noise:_ | | | | | |
| Pretrained & fixed | 0.89 ($\pm$0.02) | 4.93 ($\pm$0.33) | 0.99 | 1.86 | 46.39 |
| Random & frozen | 0.94 ($\pm$0.01) | 4.90 ($\pm$0.38) | 1 | 1.66 | 52.45 |
| Learned end-end | 0.94 ($\pm$0.02) | 4.92 ($\pm$0.33) | 1 | 1.51 | 57.33 |
| _Sender images augmented with random rotations:_ | | | | | |
| Pretrained & fixed | 0.8 ($\pm$0.05) | 4.94 ($\pm$0.32) | 0.99 | 2.03 | 42.9 |
| Random & frozen | 0.80 ($\pm$0.12) | 4.87 ($\pm$0.45) | 0.98 | 1.7 | 51.43 |
| Learned end-end | 0.92 ($\pm$0.04) | 4.92 ($\pm$0.32) | 1 | 1.59 | 55.8 |
| _Sender images augmented with Gaussian noise and random rotations:_ | | | | | |
| Pretrained & fixed | 0.76 ($\pm$0.02) | 4.92 ($\pm$0.38) | 0.98 | 2.01 | 42.85 |
| Random & frozen | 0.67 ($\pm$0.26) | 4.77 ($\pm$0.57) | 0.92 | 1.62 | 51.37 |
| Learned end-end | 0.90 ($\pm$0.06) | 4.94 ($\pm$0.29) | 1 | 1.58 | 55.8 |
| _Sender & Receiver images independently augmented (SimCLR-like):_ | | | | | |
| Pretrained & fixed | 0.48 ($\pm$0.03) | 4.90 ($\pm$0.41) | 0.86 | 2.14 | 38.08 |
| Random & fixed | 0.42 ($\pm$0.10) | 4.92 ($\pm$0.33) | 0.85 | 1.68 | 47.94 |
| Learned end-end | 0.72 ($\pm$0.05) | 4.91 ($\pm$0.39) | 0.98 | 2.00 | 42.37 |
### 4.2 Making the game harder with augmentation
We next investigate the behaviour of the same three model variants while
playing a slightly more difficult game. The input image to the Sender is
randomly transformed, and thus will not be pixel-identical with any of those
seen by the Receiver. For the model to communicate well it must either capture
the semantics or learn to generate highly-robust hash codes.
#### Noise and Rotation.
We start by utilising transformations made from random noise and random
rotations. The added noise is generated from a normal distribution with mean
0 and variance 0.1, and the rotations applied to the input images are
randomly chosen from
$\left\{0^{\degree},90^{\degree},180^{\degree},270^{\degree}\right\}$.
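The Sender-side augmentation can be sketched in pure Python on a 2D grid of pixel values (grayscale for simplicity; a real implementation would operate on image tensors). Noise has mean 0 and variance 0.1, so the standard deviation is $\sqrt{0.1}$, and the rotation is drawn uniformly from $\{0^{\degree},90^{\degree},180^{\degree},270^{\degree}\}$ as indicated by the paper’s figures:

```python
import random

def rot90(img):
    """Rotate a square 2D grid (list of rows) by 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img, noise_std=0.1 ** 0.5, rng=random):
    """Sender-side augmentation sketch: a rotation of k * 90 degrees with
    k drawn uniformly from {0, 1, 2, 3}, followed by additive Gaussian
    noise with mean 0 and variance 0.1 per pixel."""
    k = rng.randrange(4)  # number of clockwise quarter turns
    for _ in range(k):
        img = rot90(img)
    return [[px + rng.gauss(0.0, noise_std) for px in row] for row in img]
```

Because the Receiver sees the untransformed image, the Sender’s message must either be robust to these transformations or encode content that survives them.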
The first part of Table 2 shows the effect of adding either noise or
rotations, or both. In general, noise alone changes the communication success
rate only slightly. More interestingly, for randomly rotated Sender images
the augmentation tends to increase the visual semantics captured by all the
models, although this is most noticeable in the pretrained variant. At the
same time, the communication success rate of the pretrained model drops; it
is an open question as to whether this could be resolved by sending a longer
message. Finally, the models augmented with both noise and rotations do not
show any improvement over the rotation-only game in terms of the semantics
measures. As one might guess, noise only makes the game harder, a fact which
is reflected in the slight drop in communication success, but it does not
explicitly encourage semantics.
#### More complex transformations.
We continue by adding a more complex composition of data augmentations to the
game. Chen et al. (2020) have recently shown that combinations of multiple
data augmentation operations play a critical role in contrastive
self-supervised learning algorithms and improve the quality of the learned
representations. We implement their transformation setup in our game, with
Sender and Receiver having differently augmented views of the same image. We
follow the combination proposed by Chen et al. for the CIFAR-10 experiment,
which consists of sequentially applying random cropping (with flip and resize
to the original image size) and random colour distortions. (The details of
the data augmentations are provided in the appendix of Chen et al. (2020) and
available at https://github.com/google-research/simclr.) We test whether this
combination improves the learned representations in a self-supervised
framework such as ours, which, however, does not use a contrastive loss in
the latent space, but the aforementioned hinge-loss objective (see Section
3.1). It is also worth noting that we continue using a VGG16 feature
extractor, as opposed to the ResNet (He et al., 2016) variants used by Chen
et al. (2020). The game is played as described in Section 3, but this time
each image is randomly transformed twice, giving two completely independent
views of the same example, and hence making the game objective harder than
with the noise and rotation transformations. (In the noise and rotation case
only the Sender’s image was transformed. It is conceivable in that case that
the Sender might learn to de-noise or un-rotate the features in order to
establish a communication protocol. If images are transformed on both sides
of the model, the agents will not have an easy way of learning a ‘correct’
inverse transform.)
The lower part of Table 2 shows the results of the newly-augmented game for
the different configurations of feature extractors used previously (pretrained
with ImageNet and fixed; random and fixed; and, learned end-to-end). The
results show that, indeed, by extending the augmentations and composing them
randomly and independently for Sender and Receiver, the communication task
becomes harder, hence the communication success is lower than in the previous
experiments. However, as Chen . (2020)’s results have also shown, the quality
of the representations improves considerably, especially for the model
‘Learned end-end’, and this is reflected in the improvement of our measures
for the amount of semantic information captured in the learned communication
protocol. Specifically, the number of times the target class appears in top-5
predictions increases by almost half a point for the pretrained and learned
model, and the average rank of the target class lowers (over 10 units for the
learned model) which indicates that the protocol captures more content
information and is less susceptible to only hashing the images. Using this
approach, the learned model achieves the highest communication success while
also getting semantic results close to the model with an ImageNet pretrained
feature extractor.
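As a rough sketch of the kind of computation behind these two measures (a hypothetical helper, simplified relative to the exact protocol of Section 3.3): given per-game class scores decoded from the messages, we count how often the target class lands in the top-5 and average its rank.

```python
import numpy as np

def objectness_measures(class_scores, targets, k=5):
    # class_scores: (n_games, n_classes) scores; targets: (n_games,) true class ids.
    order = np.argsort(-class_scores, axis=1)             # best class first
    ranks = np.argmax(order == targets[:, None], axis=1)  # 0 = top-ranked
    return int((ranks < k).sum()), float(ranks.mean())

scores = np.array([[0.1, 0.9, 0.0],
                   [0.5, 0.2, 0.3]])
in_top_k, avg_rank = objectness_measures(scores, np.array([1, 2]))
```

A lower average rank and a higher top-5 count both indicate that more of the target class is recoverable from the protocol.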
It is particularly interesting to observe that by the relative simplicity of
applying the same transformations to the images as Chen et al. (2020), we
encourage semantic alignment in a completely different model architecture and
loss function. This suggests that the value of Chen et al. (2020)'s proposal
for contrastive learning lies more in the choice of features than in the
particular contrastive loss methodology.
### 4.3 Making the game harder with multiple objectives
[Figure 3 architecture diagram: Sender and Receiver agents, each built on a VGG16 relu 7 feature extractor with BatchNorm and projection layers; message symbols $w_{1},\dots,w_{5}$ produced via embeddings and ST-GS sampling; a random rotation $\theta\in\{0^{\degree},90^{\degree},180^{\degree},270^{\degree}\}$ applied to the input; and an MLP rotation-prediction head.]
Figure 3: Extended game with the Receiver also required to guess the
orientation of the Sender’s image.
[Figure 4 architecture diagram: as Figure 3, but with the rotation-prediction MLP attached to the Sender.]
Figure 4: Extended game with the Sender augmented with an additional loss
based on predicting the orientation of the input image.
The experimental results with the model setups shown in Tables 1 and 2 clearly
show that the fully-learned models always collapse towards game-play solutions
which are not aligned with human notions of visual semantics. Conversely,
the use of a network that was pretrained in a supervised fashion to classify
real-world images has a positive effect on the ability of the communication
system to capture visual semantics. On the other hand, using a different
experimental setup involving a complex set of independent transformations of
the images given to the sender and receiver helps the learned model acquire
and use more of the visual-semantic information, similar to the pretrained
model. However, this improvement comes at the cost of reducing the
communication success rate as the game becomes much harder when using the
proposed augmentations.
We continue by exploring whether it might be possible for a communication protocol
with notions of visual semantics to emerge directly from pure self-supervised
game-play. In order to achieve this, we propose that the agents should not
only learn to play the referential game, but they should also be able to play
other games (or solve other tasks). In our initial experiments we formulate a
setup where the agents not only have to play the augmented version of the game
described in Section 4.2 (with both noise and rotations randomly applied to
the image given to the Sender, but not the Receiver), but also one of the
agents has to guess the rotation of the image given to the Sender as shown in
Figures 3 and 4.
This choice of the additional task is motivated by Gidaris et al. (2018), who showed
that a self-supervised rotation prediction task could lead to good features
for transfer learning, on the premise that in order to predict rotation the
model needed to recognise the object. The rotation prediction network consists
of three linear layers with Batch Normalisation before the activation
functions. The first two layers use ReLU activations, and the final layer uses
a Softmax to predict the probability of the four possible rotation classes.
Except for the final layer, each layer outputs 200-dimensional vectors. Cross-
Entropy is used as the loss function for the rotation prediction task
($\mathcal{L}_{rotation}$). All other model parameters and the game-loss
definition match those described in Section 3.
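A minimal NumPy sketch of this rotation-prediction head follows; the 4096-dimensional input (VGG16 relu 7 features) is our assumption, and BatchNorm is reduced to training-mode batch statistics without learnable scale and shift.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # training-mode batch statistics; learnable scale/shift omitted for brevity
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def rotation_head(features, params):
    h = features
    for w, b in params[:-1]:           # first two layers: Linear -> BN -> ReLU
        h = np.maximum(batch_norm(h @ w + b), 0.0)
    w, b = params[-1]                  # final layer: Linear -> BN -> Softmax
    return softmax(batch_norm(h @ w + b))

rng = np.random.default_rng(0)
dims = [4096, 200, 200, 4]             # 4096-d feature input is an assumption
params = [(rng.normal(scale=0.05, size=(i, o)), np.zeros(o))
          for i, o in zip(dims[:-1], dims[1:])]
probs = rotation_head(rng.normal(size=(8, 4096)), params)  # probabilities over 4 rotations
```

The four output units correspond to the rotation classes {0°, 90°, 180°, 270°}, matched against the true class with a Cross-Entropy loss.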
Table 3: End-to-End learned models with an additional rotation prediction task. Measures as per Table 1, except for the inclusion of the accuracy of rotation prediction. Model | Comm. | Top-5 | #Target | Target | Rot.
---|---|---|---|---|---
| rate | comm. | class | class | acc.
| | rate | in top-5 | avg. rank |
Receiver-Predicts (Fig. 3) | 0.58 | 0.96 | 1.85 | 48.75 | 0.80
Sender-Predicts (Fig. 4) | 0.72 | 0.98 | 2.05 | 42.89 | 0.83
Results of these experiments are shown in Table 3. We ran a series of
experiments to find optimal weightings for the two losses such that the models
succeed at the communication task while also acquiring notions of visual
semantics. Both experiments presented, with the Sender-predicts model (Figure
4) and the Receiver-predicts model (Figure 3), used a weighted addition
$0.5\cdot\mathcal{L}_{rotation}+\mathcal{L}_{game}$, where
$\mathcal{L}_{game}$ refers to the original hinge-loss objective for the game
proposed by Havrylov and Titov (2017). For the latter model we also tried
using an additive loss with learned weights (following Kendall et al. (2018));
however, this created a model with good game-play performance but an inability
to predict rotation (and poor semantic representation ability).
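The fixed weighted combination is simply the following (a sketch; the weight value is the one stated in the text):

```python
def total_loss(game_loss, rotation_loss, w_rot=0.5):
    # L = L_game + 0.5 * L_rotation, the fixed weighting used in both experiments
    return game_loss + w_rot * rotation_loss

loss = total_loss(game_loss=1.0, rotation_loss=2.0)
```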
Training these models is harder than the original Sender-Receiver model
because the gradients pull the visual feature extractor in different
directions; the game achieves good performance when the features behave like
hash codes, whereas the rotation prediction task requires much more structured
features. This conflict means that it is difficult to train the models such
that they can solve both tasks. Clearly, developing better optimisation
strategies for these multi-game models is an important direction for future
work.
Whilst there is still a way to go to achieve the best levels of game-play
performance shown in Tables 1 and 2, it is clear that these fully self-
supervised, end-to-end trained models can learn a communication system that
both plays the game(s) and diverges from a hashing solution towards something
that better captures semantics. The lower game-play performance might, however,
just be a trade-off one has to live with when encouraging semantics with a
fixed maximum message length; this is discussed further at the end of the
following subsection.
### 4.4 Playing games with self-supervised pretraining
Having observed that a completely learned model, with the right augmentations
or instructed to solve multiple tasks which enforce notions of ‘objectness’,
can already acquire some visual semantics, we end by exploring the effect of
combining these two approaches: the multi-task game described in Section 4.3
with the previously mentioned self-supervised SimCLR framework (Chen et al., 2020).
The goal of this is to test whether a pretrained feature extractor, also
trained on a task which does not require human intervention, can further
improve the meaning of the communication protocol, pushing it towards a more
human-like version. This set of experiments was performed with the Sender-
Predicts model described in Section 4.3. We employ independent augmentations
for the Sender and Receiver agents that match those detailed in the second
half of Section 4.2. To some extent, this resembles Lowe et al. (2020)'s
Supervised Self-Play approach, in which self-play in a multi-agent
communication game and expert knowledge are interleaved. In our case, however,
the VGG16 feature extractor network was pretrained with Chen et al. (2020)'s
framework in a completely self-supervised way.
Table 4: The effect of interleaving self-supervision and multi-agent game-play. The game setup has two tasks: the Sender predicting rotation as per Table 3, under various augmentations (original transforms, or SimCLR transforms applied independently).

Feature Extractor | Comm. rate | Top-5 comm. rate | #Target class in top-5 | Target class avg. rank | Rot. acc.
---|---|---|---|---|---
Sender & Receiver images augmented with the original transforms:
Learned end-end | 0.72 | 0.98 | 2.05 | 42.89 | 0.83
Pretrained SS end-end | 0.84 | 0.99 | 2.19 | 40.19 | 0.79
Pretrained SS & fixed | 0.80 | 0.99 | 2.23 | 39.72 | 0.70
Sender & Receiver images augmented with SimCLR transforms:
Learned end-end | 0.53 | 0.92 | 2.22 | 37.16 | 0.80
Pretrained SS end-end | 0.49 | 0.89 | 2.18 | 38.74 | 0.79
Pretrained SS & fixed | 0.42 | 0.85 | 2.14 | 39.57 | 0.78
The results of the multi-objective game played with the Sender-predicts model,
in the initial setup and with the modified SimCLR transforms, are presented in
Table 4. We compare the different type of weights in the feature extractor
again: learned end-to-end, pretrained in a self-supervised way and fixed, or
allowed to change during the game-play. For the games which only start with a
self-supervised pretrained VGG16, we chose to fix the weights of the feature
extractor for the first 5 epochs before allowing any updates. This was based
on empirical results which showed that it helped to stabilise the LSTM and
Gumbel-softmax parts of the models before allowing the gradients to flow
through the pretrained feature extractor. We hypothesise that this is due
to the risk of bad initialisation in the LSTMs, which can cause the models to
fail to converge at the communication task. This observation generalises
across all the experiments in this work, as all the models with a
fixed feature extractor appear to be slightly more unstable than those with
learned ones, in contrast to fully learned models, which always converged (see
Figure 2).
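The warm-up schedule can be sketched framework-agnostically as below (in PyTorch one would toggle `requires_grad` on the feature extractor's parameters); the dict-based parameter representation is purely illustrative.

```python
def set_feature_extractor_trainable(params, epoch, warmup_epochs=5):
    # Keep the pretrained feature extractor frozen for the first warmup_epochs
    # so the LSTM / Gumbel-softmax parts stabilise before gradients flow through it.
    trainable = epoch >= warmup_epochs
    for p in params:
        p["requires_grad"] = trainable
    return trainable

extractor_params = [{"requires_grad": True}, {"requires_grad": True}]
frozen_phase = set_feature_extractor_trainable(extractor_params, epoch=0)
```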
As the results show, the model which best captures visual semantics is the one
learned end-to-end using the SimCLR transforms. It is again obvious that
between the two setups, the second makes the game significantly harder as the
agents are now also required to extract and encode information about the
object orientation, on top of seeing independently augmented input images.
This is reflected in the drop of the top-1 communication success, although
this does not hold for the top-5 rate. If the semantics improve, it implicitly
means that more of the object category is captured in the learned language
which diverges from a hashing protocol. As previously mentioned in Section
3.3, if the model only transmitted information about the object, the top-5
communication rate would be on average 0.36. Since this metric is
significantly higher, it implies that the message must contain additional
information, beyond the type of object. This could be visual semantics such as
attributes of the object, but it could also just be a more robust hashing
scheme based on pixel or low-level feature values.
Another interesting observation is that using a self-supervised pretrained
feature extractor, in the original setup, helps improve communication success
and the semantics measures at the same time. This finding confirms that self-
supervised pretraining in this type of game can be as beneficial as, or even
better than, the supervised pretraining on ImageNet used in a less complex
variant of the game (see Table 2).
## 5 Conclusions and Future Work
In this paper, we have explored different factors that influence the human
interpretability of a communication protocol that emerges from a pair of
agents learning to play a referential signalling game with natural images. We
first quantify the effect that using a pretrained visual feature extractor has
on the ability of the language to capture visual semantics. We empirically
show that using pretrained feature extractor weights from a supervised task
inductively biases the emergent communication channel to become more
semantically aligned, whilst both random-fixed and learned feature extractors
have less semantic alignment, but better game-play ability due to their
ability to learn hashing schemes that robustly identify particular images
using very low-level information.
We then perform an analysis of the effect that different forms of data
augmentation and transformation have on the agents’ ability to communicate
object-type-related information. Adding zero-mean Gaussian noise to
the sender's image does not improve the semantic alignment of
messages, but perhaps mildly improves the robustness of
the hashing scheme learned by the models. The addition of rotation to the
sender’s image results in a mild improvement in the semantic alignment,
although in the case of the models with fixed feature extractors this is at
the cost of game-play success rate. More complex combinations of data
transforms, applied independently to the sender's and receiver's images, are
demonstrated to give a sizeable boost to the visual semantic alignment for the
model learned in an end-to-end fashion.
We then demonstrate that it is possible to formulate a multiple-game setting
in which the emergent language is more semantically grounded, without the
need for any outside supervision. We note that these models represent difficult
multi-task learning problems, and that the next steps in this direction would
benefit from full consideration of multi-task learning approaches which deal
with multiple conflicting objectives (e.g. Sener & Koltun, 2018; Kendall et
al., 2018).
Finally, we have shown that pretraining the visual feature extractor on a
self-supervised task, such as that of Chen et al. (2020), can further improve the
quality of the semantics notions captured by a fully learned model. One way of
looking at self-supervised pretraining is to consider it as self-play of a
different game, before engaging in the main communication task/game. From this
point of view, further work in the area of emergent communication should
explore other combinations of self-supervised tasks. Creating environments in
which agents have to solve multiple tasks, concurrently or sequentially, while
using the correct type of data augmentations seems to balance the trade-off
between performing the task well and developing a communication protocol
interpretable by humans. As Lowe et al. (2020) have also shown, interleaving
supervision and self-play can benefit multi-agent tasks while reducing the
amount of necessary human intervention.
Clearly there are many research directions that lead on from the points we
have highlighted above. We, however, would draw attention to perhaps the two
most important ones: better disentanglement and measurement of semantics; and
more investigations into the role of self-play with multiple tasks.
If emergent communication channels are to be truly equatable to the way that
humans communicate performing similar tasks, then we need to build models that
more clearly disentangle different aspects of the semantics of the visual
scenes they describe. Although throughout the paper we have used ‘objectness’
as an initial measure of semanticity, we would be the first to admit how crude
this is. We have highlighted in the discussion of results, that when a model
has both high semantics (using our objectness measures) and high game-play
success rates we do not know what kind of information is being conveyed, in
addition to information about the object, to allow the model to successfully
play the game; it could be information about semantically meaningful object
attributes (or even other objects in the scene), or it could just be some form
of robust hash code describing the pixels. The reality of current models is
that it’s probably somewhere in between, but it is clear that what is needed
is a better-formalised strategy to distinguish between the two possibilities.
We suspect that to achieve this we require a much more nuanced dataset with
very fine-grained labels of objects and their attributes. This would then
ultimately allow the challenge of disentangling meaningful semantic attribute
values in the communication protocol to be addressed.
Our experimental results show that pretraining, which can be seen as a
form of self-play, can clearly benefit a model. Building upon these results, we
would like to encourage further research in the emergent communication area to
consider self-supervision as additional games which can be combined with the
communication task as a way of encouraging human-interpretability of emergent
communication protocols. Such a direction seems entirely natural given what is
known and has been observed about how human infants learn.
## References
* Baroni, M. (2020). Rat big, cat eaten! Ideas for a useful deep-agent protolanguage. arXiv preprint arXiv:2003.11922.
* Bickerton, D. (2014). More than nature needs. Harvard University Press.
* Biederman, I. (1972). Perceiving real-world scenes. Science, 177(4043), 77–80.
* Biederman, I. (2017). On the semantics of a glance at a scene. In Perceptual organization (pp. 213–253). Routledge.
* Bouchacourt, D. & Baroni, M. (2018). How agents see things: On visual representations in an emergent language game. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 981–985).
* Cangelosi, A. & Parisi, D. (2002). Simulating the evolution of language. Springer-Verlag New York, Inc.
* Cao, K., Lazaridou, A., Lanctot, M., Leibo, J.Z., Tuyls, K. & Clark, S. (2018). Emergent communication through negotiation. In International Conference on Learning Representations. https://openreview.net/forum?id=Hk6WhagRW
* Caruana, R. (1997). Multitask learning. Machine Learning, 28(1), 41–75.
* Chaabouni, R., Kharitonov, E., Dupoux, E. & Baroni, M. (2019). Anti-efficient encoding in emergent communication. CoRR, abs/1905.12561. http://arxiv.org/abs/1905.12561
* Chen, T., Kornblith, S., Norouzi, M. & Hinton, G. (2020). A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709.
* Chen, T., Zhai, X., Ritter, M., Lucic, M. & Houlsby, N. (2019). Self-supervised GANs via auxiliary rotation loss. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 12154–12163).
* Das, A., Kottur, S., Moura, J.M., Lee, S. & Batra, D. (2017). Learning cooperative visual dialog agents with deep reinforcement learning. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2951–2960).
* Devlin, J., Chang, M.W., Lee, K. & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
* Doersch, C., Gupta, A. & Efros, A.A. (2015). Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1422–1430).
* Doersch, C. & Zisserman, A. (2017). Multi-task self-supervised visual learning. In The IEEE International Conference on Computer Vision (ICCV).
* Evtimova, K., Drozdov, A., Kiela, D. & Cho, K. (2017). Emergent communication in a multi-modal, multi-step referential game. arXiv preprint arXiv:1705.10369.
* Fei-Fei, L., Iyer, A., Koch, C. & Perona, P. (2007). What do we perceive in a glance of a real-world scene? Journal of Vision, 7(1), 10.
* Foerster, J.N., Assael, Y.M., de Freitas, N. & Whiteson, S. (2016). Learning to communicate with deep multi-agent reinforcement learning. CoRR, abs/1605.06676. http://arxiv.org/abs/1605.06676
* Gidaris, S., Singh, P. & Komodakis, N. (2018). Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations.
* Hare, J.S., Lewis, P.H., Enser, P.G. & Sandom, C.J. (2006). Mind the gap: another look at the problem of the semantic gap in image retrieval. In Multimedia Content Analysis, Management, and Retrieval 2006 (Vol. 6073, p. 607309).
* Havrylov, S. & Titov, I. (2017). Emergence of language with multi-agent games: Learning to communicate with sequences of symbols. In Advances in Neural Information Processing Systems 30 (pp. 2149–2159). Curran Associates, Inc.
* He, K., Zhang, X., Ren, S. & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770–778).
* Hénaff, O.J., Razavi, A., Doersch, C., Eslami, S. & van den Oord, A. (2019). Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272.
* Henderson, J.M. & Hollingworth, A. (1999). High-level scene perception. Annual Review of Psychology, 50(1), 243–271.
* Jang, E., Gu, S. & Poole, B. (2017). Categorical reparameterization with Gumbel-softmax. In 5th International Conference on Learning Representations (ICLR 2017).
* Ji, X., Henriques, J.F. & Vedaldi, A. (2018). Invariant information distillation for unsupervised image segmentation and clustering. CoRR, abs/1807.06653. http://arxiv.org/abs/1807.06653
* Kendall, A., Gal, Y. & Cipolla, R. (2018). Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
* Kingma, D.P. & Welling, M. (2014). Auto-encoding variational Bayes. In ICLR.
* Kolesnikov, A., Zhai, X. & Beyer, L. (2019). Revisiting self-supervised visual representation learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
* Kottur, S., Moura, J.M.F., Lee, S. & Batra, D. (2017). Natural language does not emerge 'naturally' in multi-agent dialog. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 2962–2967).
* Lazaridou, A., Hermann, K.M., Tuyls, K. & Clark, S. (2018). Emergence of linguistic communication from referential games with symbolic and pixel input. In International Conference on Learning Representations. https://openreview.net/forum?id=HJGv1Z-AW
* Lazaridou, A., Peysakhovich, A. & Baroni, M. (2017). Multi-agent cooperation and the emergence of (natural) language. In International Conference on Learning Representations.
* Lee, J., Cho, K. & Kiela, D. (2019). Countering language drift via visual grounding. arXiv preprint arXiv:1909.04499.
* Lee, J., Cho, K., Weston, J. & Kiela, D. (2017). Emergent translation in multi-agent communication. arXiv preprint arXiv:1710.06922.
* Lewis, D.K. (1969). Convention: A philosophical study. Wiley-Blackwell.
* Lewis, M., Yarats, D., Dauphin, Y.N., Parikh, D. & Batra, D. (2017). Deal or no deal? End-to-end learning for negotiation dialogues. arXiv preprint arXiv:1706.05125.
* Li, F. & Bowling, M. (2019). Ease-of-teaching and language structure from emergent communication. In Advances in Neural Information Processing Systems (pp. 15825–15835).
* Lowe, R., Foerster, J., Boureau, Y.L., Pineau, J. & Dauphin, Y. (2019). On the pitfalls of measuring emergent communication. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (pp. 693–701).
* Lowe, R., Gupta, A., Foerster, J., Kiela, D. & Pineau, J. (2020). On the interaction between supervision and self-play in emergent communication. In International Conference on Learning Representations. https://openreview.net/forum?id=rJxGLlBtwH
* Maddison, C.J., Mnih, A. & Teh, Y.W. (2017). The Concrete distribution: A continuous relaxation of discrete random variables. In 5th International Conference on Learning Representations (ICLR 2017).
* Mordatch, I. & Abbeel, P. (2017). Emergence of grounded compositional language in multi-agent populations. CoRR, abs/1703.04908. http://arxiv.org/abs/1703.04908
* Nowak, M.A. & Krakauer, D.C. (1999). The evolution of language. Proceedings of the National Academy of Sciences, 96(14), 8028–8033.
* Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T. & Efros, A.A. (2016). Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2536–2544).
* Radford, A., Wu, J., Child, R., Luan, D., Amodei, D. & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.
* Saxe, A.M., Koh, P.W., Chen, Z., Bhand, M., Suresh, B. & Ng, A.Y. (2011). On random weights and unsupervised feature learning. In Proceedings of the 28th International Conference on Machine Learning (pp. 1089–1096). Omnipress.
* Sener, O. & Koltun, V. (2018). Multi-task learning as multi-objective optimization. In Advances in Neural Information Processing Systems 31 (pp. 527–538). Curran Associates, Inc.
* Simonyan, K. & Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations.
* Steels, L. (1997). The synthetic modeling of language origins. Evolution of Communication, 1(1), 1–34.
* Steels, L. (2012). Experiments in cultural language evolution (Vol. 3). John Benjamins Publishing.
* Sukhbaatar, S., Szlam, A. & Fergus, R. (2016). Learning multiagent communication with backpropagation. CoRR, abs/1605.07736. http://arxiv.org/abs/1605.07736
* van den Oord, A., Li, Y. & Vinyals, O. (2018). Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
* Williams, R.J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3–4), 229–256. https://doi.org/10.1007/BF00992696
|
# Consistent specification testing under spatial dependence

Acknowledgments: We thank
the editor, co-editor and three referees for insightful comments that improved
the paper. We are grateful to Swati Chandna, Miguel Delgado, Emmanuel Guerre,
Fernando López Hernandéz, Hon Ho Kwok, Arthur Lewbel, Daisuke Murakami, Ryo
Okui and Amol Sasane for helpful comments, and audiences at YEAP 2018
(Shanghai University of Finance and Economics), NYU Shanghai, Carlos III
Madrid, SEW 2018 (Dijon), Aarhus University, SEA 2018 (Vienna), EcoSta 2018
(Hong Kong), Hong Kong University, AFES 2018 (Cotonou), ESEM 2018 (Cologne),
CFE 2018 (Pisa), University of York, Penn State, Michigan State, University of
Michigan, Texas A&M, 1st Southampton Workshop on Econometrics and Statistics
and MEG 2019 (Columbus). We also thank Xifeng Wen from the Experiment and Data
Center of Antai College of Economics and Management (SJTU) for expert
computing assistance.
Abhimanyu Gupta, Department of Economics, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, UK. E-mail: <EMAIL_ADDRESS>. Supported by ESRC grant ES/R006032/1.
Xi Qu, Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai, China, 200052. E-mail: <EMAIL_ADDRESS>. Supported by the National Natural Science Foundation of China, Project Nos. 72222007, 71973097 and 72031006.
###### Abstract
We propose a series-based nonparametric specification test for a regression
function when data are spatially dependent, the ‘space’ being of a general
economic or social nature. Dependence can be parametric, parametric with
increasing dimension, semiparametric or any combination thereof, thus covering
a vast variety of settings. These include spatial error models of varying
types and levels of complexity. Under a new smooth spatial dependence
condition, our test statistic is asymptotically standard normal. To prove the
latter property, we establish a central limit theorem for quadratic forms in
linear processes in an increasing dimension setting. Finite sample performance
is investigated in a simulation study, with a bootstrap method also justified
and illustrated. Empirical examples illustrate the test with real-world data.
Keywords: Specification testing, nonparametric regression, spatial dependence,
cross-sectional dependence
JEL Classification: C21, C55
## 1 Introduction
Models for spatial dependence have recently become the subject of vigorous
research. This burgeoning interest has roots in the needs of practitioners who
frequently have access to data sets featuring inter-connected cross-sectional
units. Motivated by these practical concerns, we propose a specification test
for a regression function in a general setup that covers a vast variety of
commonly employed spatial dependence models and permits the complexity of
dependence to increase with sample size. Our test is consistent, in the sense
that a parametric specification is tested with asymptotically unit power
against a nonparametric alternative. The ‘spatial’ models that we study are
not restricted in any way to be geographic in nature, indeed ‘space’ can be a
very general economic or social space. Our empirical examples feature conflict
alliances and technology externalities as examples of ‘spatial dependence’,
for instance.
Specification testing is an important problem, and this is reflected in a huge
literature studying consistent tests. Much of this is based on independent,
and often also identically distributed, data. However, data frequently exhibit
dependence and consequently a branch of the literature has also examined
specification tests under time series dependence. Our interest centers on
dependence across a ‘space’, which differs quite fundamentally from dependence
in a time series context. Time series are naturally ordered and locations of
the observations can be observed, or at least the process generating these
locations may be modelled. One can imagine extending concepts of time series dependence to settings where the data are observed in a geographic space and dependence is treated as a decreasing function of the distance between observations. Indeed, much work has been done to extend notions of time
series dependence in this type of setting, see e.g. Jenish and Prucha (2009,
2012).
However, in a huge variety of economics and social science applications agents
influence each other in ways that do not conform to such a setting. For
example, farmers affect the demand of farmers in the same village but not in
different villages, as in Case (1991). Likewise, price competition among firms
exhibits spatial features (Pinkse et al. (2002)), input-output relations lead
to complementarities between sectors (Conley and Dupor (2003)), co-author
connections form among scientists (Oettl (2012), Mohnen (2022)), R&D
spillovers occur through technology and product market spaces (Bloom et al.
(2013)), networks form due to allegiances in conflicts (König et al. (2017))
and overlapping bank portfolios lead to correlated lending decisions (Gupta et
al. (2021)). Such examples cannot be studied by simply extending results
developed for time series and illustrate the growing need for suitable
methods.
A very popular model for general spatial dependence is the spatial
autoregressive (SAR) class, due to Cliff and Ord (1973). The key feature of
SAR models, and various generalizations such as SARMA (SAR moving average) and
matrix exponential spatial specifications (MESS, due to LeSage and Pace
(2007)), is the presence of one or more spatial weight matrices whose elements
characterize the links between agents. As noted above, these links may form
for a variety of reasons, so the ‘spatial’ terminology represents a very
general notion of space, such as social or economic space. Key papers on the
estimation of SAR models and their variants include Kelejian and Prucha (1998)
and Lee (2004), but research on various aspects of these is active and
ongoing, see e.g. Robinson and Rossi (2015); Hillier and Martellosio (2018a,
b); Kuersteiner and Prucha (2020); Han et al. (2021); Hahn et al. (2020).
Unlike work focusing on independent or time series data, a general drawback of spatially oriented research has been the lack of a unified theory.
Typically, individual papers have studied specific special cases of various
spatial specifications. A strand of the literature has introduced the notion
of a cross-sectional linear-process to help address this problem, and we
follow this approach. This representation can accommodate SAR models in the
error term (so-called spatial error models (SEM)) as a special case, as well
as variants like SARMA and MESS, whence its generality is apparent. The
linear-process structure shares some similarities with that familiar from the
time series literature (see e.g. Hannan (1970)). Indeed, time series versions
may be regarded as very special cases but, as stressed before, the features of
spatial dependence must be taken into account in the general formulation. Such
a representation was introduced by Robinson (2011) and further examined in
other situations by Robinson and Thawornkaiwong (2012) (partially linear
regression), Delgado and Robinson (2015) (non-nested correlation testing), Lee
and Robinson (2016) (series estimation of nonparametric regression) and
Hidalgo and Schafgans (2017) (cross-sectionally dependent panels).
In this paper, we propose a test statistic similar to that of Hong and White
(1995), based on estimating the nonparametric specification via series
approximations. Assuming an independent and identically distributed sample,
their statistic is based on the sample covariance between the residual from
the parametric model and the discrepancy between the parametric and
nonparametric fitted values. Allowing additionally for spatial dependence
through the form of a linear process as discussed above, our statistic is
shown to be asymptotically standard normal, consistent and possessing
nontrivial power against local alternatives of a certain type. To prove
asymptotic normality, we present a new central limit theorem (CLT) for
quadratic forms in linear processes in an increasing dimension setting that
may be of independent interest. A CLT for quadratic forms under time series
dependence in the context of series estimation can be found in Gao and Anh
(2000), and our result can be viewed as complementary to this. The setting of
Su and Qu (2017) is a very special case of our framework. There has been
recent interest in specification testing for spatial models, see for example
Sun (2020) for a kernel-based model specification test and Lee et al. (2020)
for a consistent omnibus test. We contribute to this literature by studying a
linear process based increasing parameter dimension framework.
Our linear process framework permits spatial dependence to be parametric,
parametric with increasing dimension, semiparametric or any combination
thereof, thus covering a vast variety of settings. A class of models of great
empirical interest are ‘higher-order’ SAR models in the outcome variables, but
with spatial dependence structure also in the errors. We initially present the
familiar nonparametric regression to clarify the exposition, and then cover
this class as the main model of interest. Our theory covers as special cases
SAR, SMA, SARMA, MESS models for the error term. These specifications may be
of any fixed spatial order, but our theory also covers the case where they are
of increasing order.
Thus we permit a more complex model of spatial dependence as more data become
available, which encourages a more flexible approach to modelling such
dependence as stressed by Gupta and Robinson (2015, 2018) in a higher-order
SAR context, Huber (1973), Portnoy (1984, 1985) and Anatolyev (2012) in a
regression context and Koenker and Machado (1999) for the generalized method
of moments setting, amongst others. This literature focuses on a sequence of
true models, rather than a sequence of models approximating an infinite true
model. Our paper also takes the same approach. On the other hand, in the
spatial setting, Gupta (2018a) considers increasing lag models as
approximations to an infinite lag model with lattice data and also suggests
criteria for choice of lag length.
Our framework is also extended to the situation where spatial dependence
occurs through nonparametric functions of raw distances (these may be
exogenous economic or social distances, say), as in Pinkse et al. (2002). This
allows for greater flexibility in modelling spatial weights as the
practitioner only has to choose an exogenous economic distance measure and
allow the data to determine the functional form. It also adds a degree of
robustness to the theory by avoiding potential parametric misspecification.
The case of geographical data is also covered, for example the important
classes of Matérn and Wendland (see e.g. Gneiting (2002)) covariance
functions. Finally, we introduce a new notion of smooth spatial dependence
that provides more primitive, and checkable, conditions for certain properties
than extant ones in the literature.
To illustrate the performance of the test in finite samples, we present Monte
Carlo simulations that exhibit satisfactory small sample properties. The test
is demonstrated in three empirical examples, including two based on recently
published work on social networks: Bloom et al. (2013) (R&D spillovers in
innovation), König et al. (2017) (conflict alliances during the Congolese
civil war). Another example studies cross-country spillovers in economic
growth. Our test rejects the null hypothesis of a linear regression in some of these examples but not in others, illustrating its ability to distinguish well between null and alternative models.
The next section introduces our basic setup using a nonparametric regression
with no SAR structure in responses. We treat this abstraction as a base case,
and Section 3 discusses estimation and defines the test statistic, while
Section 4 introduces assumptions and the key asymptotic results of the paper.
Section 5 examines the most commonly employed higher-order SAR models, while
Section 6 deals with nonparametric spatial error structures. Nonparametric
specification tests are often criticized for poor finite sample performance
when using the asymptotic critical values. In Section 7 we present a bootstrap
version of our testing procedure. Sections 8 and 9 contain a study of finite
sample performance and the empirical examples respectively, while Section 10
concludes. Proofs are contained in appendices, including a supplementary
online appendix which also contains additional simulation results.
For the convenience of the reader, we collect some frequently used notation
here. First, we introduce three notational conventions for any parameter $\nu$
for the rest of the paper: $\nu\in\mathbb{R}^{d_{\nu}}$, $\nu_{0}$ denotes the
true value of $\nu$ and for any scalar, vector or matrix valued function
$f(\nu)$, we denote $f\equiv f(\nu_{0})$. Let $\overline{\varphi}(\cdot)$
(respectively $\underline{\varphi}(\cdot)$) denote the largest (respectively
smallest) eigenvalue of a generic square nonnegative definite matrix argument.
For a generic matrix $A$, denote
$\left\|A\right\|=\left[\overline{\varphi}(A^{\prime}A)\right]^{1/2}$, i.e.
the spectral norm of $A$ which reduces to the Euclidean norm if $A$ is a
vector. $\left\|A\right\|_{R}$ denotes the maximum absolute row sum norm of a
generic matrix $A$ while
$\left\|A\right\|_{F}=\left[tr(AA^{\prime})\right]^{1/2}$, the Frobenius norm.
Throughout the paper $|\cdot|$ is absolute value when applied to a scalar and
determinant when applied to a matrix. Denote by $c$ ($C$) generic positive constants, independent of any quantities that tend to infinity, and arbitrarily small (large).
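For illustration only (not part of the formal development), the three matrix norms just introduced can be computed directly; a minimal numpy sketch, with an arbitrary example matrix of our own choosing:

```python
import numpy as np

# Illustrative 2x2 matrix (our own example).
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# Spectral norm ||A||: square root of the largest eigenvalue of A'A.
spectral = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))

# Maximum absolute row sum norm ||A||_R.
row_sum = np.max(np.sum(np.abs(A), axis=1))

# Frobenius norm ||A||_F = sqrt(tr(A A')).
frobenius = np.sqrt(np.trace(A @ A.T))

print(spectral, row_sum, frobenius)
```

Note that the spectral norm never exceeds the Frobenius norm, a fact used repeatedly when bounding quadratic forms.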
## 2 Setup
To illustrate our approach, we first consider the nonparametric regression
$y_{i}=\theta_{0}\left(x_{i}\right)+u_{i},i=1,\ldots,n,$ (2.1)
where $\theta_{0}(\cdot)$ is an unknown function and $x_{i}$ is a vector of
strictly exogenous explanatory variables with support
$\mathcal{X}\subset\mathbb{R}^{k}$. Spatial dependence is explicitly modeled
via the error term $u_{i}$, which we assume is generated by:
$u_{i}=\sum_{s=1}^{\infty}b_{is}\varepsilon_{s},$ (2.2)
where $\varepsilon_{s}$ are independent random variables, with zero mean and
identical variance $\sigma_{0}^{2}$. Further conditions on the
$\varepsilon_{s}$ will be assumed later. The linear process coefficients
$b_{is}$ can depend on $n$, as may the covariates $x_{i}$. This is generally
the case with spatial models and implies that asymptotic theory ought to be
developed for triangular arrays. There are a number of reasons to permit
dependence on sample size. The $b_{is}$ can depend on spatial weight matrices,
which are usually normalized for both stability and identification purposes.
Such normalizations, e.g. row-standardization or division by spectral norm,
may be $n$-dependent. Furthermore, $x_{i}$ often includes underlying
covariates of ‘neighbors’ defined by spatial weight matrices. For instance,
for some $n\times 1$ covariate vector $z$ and exogenous spatial weight matrix
$W\equiv W_{n}$, a component of $x_{i}$ can be $e_{i}^{\prime}Wz$, where
$e_{i}$ has unity in the $i$-th position and zeros elsewhere, which depends on
$n$. Thus, subsequently, any spatial weight matrices will also be allowed to
depend on $n$. Finally, treating triangular arrays permits re-labelling of
quantities that is often required when dealing with spatial data, due to the
lack of natural ordering, see e.g. Robinson (2011). We suppress explicit
reference to this $n$-dependence of various quantities for brevity, although
mention will be made of this at times to remind the reader of this feature.
Now, assume the existence of a $d_{\gamma}\times 1$ vector $\gamma_{0}$ such
that $b_{is}=b_{is}(\gamma_{0})$, possibly with $d_{\gamma}\rightarrow\infty$
as $n\rightarrow\infty$, for all $i=1,\ldots,n$ and $s\geq 1$. Let $u$ be the
$n\times 1$ vector with typical element $u_{i}$, $\varepsilon$ be the infinite
dimensional vector with typical element $\varepsilon_{s},$ and $B$ be an
infinite dimensional matrix (Cooke, 1950) with typical element $b_{is}.$ In
matrix form,
$u=B\varepsilon\text{ and
}\mathcal{E}\left(uu^{\prime}\right)=\sigma_{0}^{2}BB^{\prime}=\sigma_{0}^{2}\Sigma\equiv\sigma_{0}^{2}\Sigma\left(\gamma_{0}\right).$
(2.3)
We assume that $\gamma_{0}\in\Gamma$, where $\Gamma$ is a compact subset of
$\mathbb{R}^{d_{\gamma}}$. With $d_{\gamma}$ diverging, ensuring $\Gamma$ has
bounded volume requires some care, see Gupta and Robinson (2018). For a known
function $f(\cdot)$, our aim is to test
$H_{0}:P[\theta_{0}\left(x_{i}\right)=f(x_{i},\alpha_{0})]=1,\text{ for some
}\alpha_{0}\in\mathcal{A}\subset\mathbb{R}^{d_{\alpha}},$ (2.4)
against the global alternative $H_{1}:P\left[\theta_{0}\left(x_{i}\right)\neq
f(x_{i},\alpha)\right]>0,\text{ for all }\alpha\in\mathcal{A}$.
We now nest commonly used models for spatial dependence in (2.3). Introduce a
set of $n\times n$ spatial weight (equivalently network adjacency) matrices
$W_{j}$, $j=1,\ldots,m_{1}+m_{2}$. Each $W_{j}$ can be thought of as
representing dependence through a particular space. Now, consider models of
the form $\Sigma(\gamma)=A^{-1}(\gamma)A^{\prime-1}(\gamma)$. For example,
with $\xi$ denoting a vector of iid disturbances with variance
$\sigma_{0}^{2}$, the model with SARMA$(m_{1},m_{2})$ errors is
$u=\sum_{j=1}^{m_{1}}\gamma_{j}W_{j}u+\sum_{j=m_{1}+1}^{m_{1}+m_{2}}\gamma_{j}W_{j}\xi+\xi$,
with
$A(\gamma)=\left(I_{n}+\sum_{j=m_{1}+1}^{m_{1}+m_{2}}\gamma_{j}W_{j}\right)^{-1}\left(I_{n}-\sum_{j=1}^{m_{1}}\gamma_{j}W_{j}\right)$,
assuming conditions that guarantee the existence of the inverse. Such
conditions can be found in the literature, see e.g. Lee and Liu (2010) and
Gupta and Robinson (2018). The SEM model is obtained by setting $m_{2}=0$
while the model with SMA errors has $m_{1}=0$. The model with MESS$(m)$ errors
(LeSage and Pace (2007), Debarsy et al. (2015)) is
$u=\exp\left(\sum_{j=1}^{m}\gamma_{j}W_{j}\right)\xi,A(\gamma)=\exp\left(-\sum_{j=1}^{m}\gamma_{j}W_{j}\right).$
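To make the mapping $\gamma\mapsto\Sigma(\gamma)$ concrete, here is a minimal numerical sketch (our own code; the 3-unit weight matrix is purely illustrative) of the first-order SEM case, where $u=\gamma Wu+\xi$, $A(\gamma)=I_{n}-\gamma W$ and $\Sigma(\gamma)=A^{-1}(\gamma)A^{\prime-1}(\gamma)$:

```python
import numpy as np

def sem_sigma(gamma, W):
    """Covariance structure Sigma(gamma) = A^{-1} A'^{-1} for the
    first-order spatial error model u = gamma * W u + xi."""
    n = W.shape[0]
    A = np.eye(n) - gamma * W
    A_inv = np.linalg.inv(A)
    return A_inv @ A_inv.T

# Row-standardized weight matrix for 3 units on a line (illustrative).
W = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

Sigma = sem_sigma(0.4, W)
print(Sigma)
```

With $|\gamma|<1$ and a row-standardized $W$, $I_{n}-\gamma W$ is invertible, and $\Sigma(\gamma)$ is symmetric positive definite, as required of a covariance matrix; $\gamma=0$ recovers the iid case $\Sigma=I_{n}$.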
In some cases the space under consideration is geographic, i.e. the data may be
observed at irregular points in Euclidean space. Making the identification
$u_{i}\equiv U\left(t_{i}\right)$, $t_{i}\in\mathbb{R}^{d}$ for some $d>1$,
and assuming covariance stationarity, $U(t)$ is said to follow an isotropic
model if, for some function $\delta$ on $\mathbb{R}$, the covariance at lag
$s$ is $r(s)=\mathcal{E}\left[U(t)U(t+s)\right]=\delta(\|s\|)$. An important
class of parametric isotropic models is that of Matérn (1986), which can be
parameterized in several ways, see e.g. Stein (1999). Denoting by $\Gamma_{f}$
the Gamma function and by $\mathcal{K}_{\gamma_{1}}$ the modified Bessel
function of the second kind (Gradshteyn and Ryzhik (1994)), take
$\delta(\left\|s\right\|,\gamma)=\left(2^{\gamma_{1}-1}\Gamma_{f}(\gamma_{1})\right)^{-1}\left(\gamma_{2}^{-1}\sqrt{2\gamma_{1}}\left\|s\right\|\right)^{\gamma_{1}}\mathcal{K}_{\gamma_{1}}\left(\gamma_{2}^{-1}\sqrt{2\gamma_{1}}\left\|s\right\|\right),$
with $\gamma_{1},\gamma_{2}>0$ and $d_{\gamma}=2$. With $d_{\gamma}=3$,
another model takes
$\delta(\left\|s\right\|,\gamma)=\gamma_{1}\exp\left(-\left\|s/\gamma_{2}\right\|^{\gamma_{3}}\right)$,
see e.g. De Oliveira et al. (1997), Stein (1999). Fuentes (2007) considers
this model with $\gamma_{3}=1$, as well as a specific parameterization of the
Matérn covariance function.
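The Matérn family displayed above is straightforward to evaluate numerically; a sketch (our own code, using the parameterization given above with $\gamma=(\gamma_{1},\gamma_{2})$, written here as `g1`, `g2`):

```python
import numpy as np
from scipy.special import gamma as gamma_fn, kv  # kv: modified Bessel, second kind

def matern(dist, g1, g2):
    """Matern covariance delta(||s||, gamma) with smoothness g1 > 0 and
    range g2 > 0, normalized so that delta -> 1 as ||s|| -> 0."""
    dist = np.atleast_1d(np.asarray(dist, dtype=float))
    x = np.sqrt(2.0 * g1) * dist / g2
    out = np.ones_like(x)                     # limiting value at ||s|| = 0
    pos = x > 0
    out[pos] = (x[pos] ** g1) * kv(g1, x[pos]) / (2.0 ** (g1 - 1.0) * gamma_fn(g1))
    return out

d = np.linspace(0.0, 3.0, 7)
print(matern(d, 1.5, 1.0))
```

A useful sanity check: for $\gamma_{1}=1/2$ the Matérn covariance reduces to the exponential covariance $\exp(-\left\|s\right\|/\gamma_{2})$, which is the Fuentes (2007) case mentioned above.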
## 3 Test statistic
We estimate $\theta_{0}(\cdot)$ via a series approximation. Certain technical
conditions are needed to allow for $\mathcal{X}$ to have unbounded support. To
this end, for a function $g(x)$ on $\mathcal{X}$, define a weighted sup-norm
(see e.g. Chen et al. (2005), Chen (2007), Lee and Robinson (2016)) by
$\left\|g\right\|_{w}=\sup_{x\in\mathcal{X}}\left|g(x)\right|\left(1+\left\|x\right\|^{2}\right)^{-w/2},\text{
for some }w>0$. Assume that there exists a sequence of functions
$\psi_{i}:=\psi\left(x_{i}\right):\mathbb{R}^{k}\mapsto\mathbb{R}^{p}$, where
$p\rightarrow\infty$ as $n\rightarrow\infty$, and a $p\times 1$ vector of
coefficients $\beta_{0}$ such that
$\theta_{0}\left(x_{i}\right)=\psi_{i}^{\prime}\beta_{0}+e\left(x_{i}\right),$
(3.1)
where $e(\cdot)$ satisfies:
###### Assumption R.1.
There exists a constant $\mu>0$ such that
$\left\|e\right\|_{w_{x}}=O\left(p^{-\mu}\right),$ as $p\rightarrow\infty$,
where $w_{x}\geq 0$ is the largest value such that
$\sup_{i=1,\ldots,n}\mathcal{E}\left\|x_{i}\right\|^{w_{x}}<\infty$, for all
$n$.
By Lemma 1 in Appendix B of Lee and Robinson (2016), this assumption implies
that
$\sup_{i=1,\ldots,n}\mathcal{E}\left(e^{2}\left(x_{i}\right)\right)=O\left(p^{-2\mu}\right).$
(3.2)
Due to the large number of assumptions in the paper, sometimes with changes
reflecting only the various setups we consider, we prefix assumptions with R
in this section and the next, to signify ‘regression’. In Section 5 the prefix
is SAR, for ‘spatial autoregression’, while in Section 6 we use NPN, for
‘nonparametric network’.
Let
$y=(y_{1},\ldots,y_{n})^{\prime},{\theta_{0}}=(\theta_{0}\left(x_{1}\right),\ldots,\theta_{0}\left(x_{n}\right))^{\prime},\Psi=(\psi_{1},\ldots,\psi_{n})^{\prime}$.
We will estimate $\gamma_{0}$ using a quasi maximum likelihood estimator
(QMLE) based on a Gaussian likelihood, although Gaussianity is nowhere
assumed. For any admissible values $\beta$, $\sigma^{2}$ and $\gamma$, the
(multiplied by $2/n$) negative quasi log likelihood function based on using
the approximation (3.1) is
${L}(\beta,\sigma^{2},\gamma)=\ln\left(2\pi\sigma^{2}\right)+\frac{1}{n}\ln\left|\Sigma\left(\gamma\right)\right|+\frac{1}{n\sigma^{2}}(y-\Psi\beta)^{\prime}\Sigma\left(\gamma\right)^{-1}(y-\Psi\beta),$
(3.3)
which is minimised with respect to $\beta$ and $\sigma^{2}$ by
$\displaystyle\bar{\beta}\left(\gamma\right)$ $\displaystyle=$
$\displaystyle\left(\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}y,$
(3.4) $\displaystyle\bar{\sigma}^{2}\left(\gamma\right)$ $\displaystyle=$
$\displaystyle{n^{-1}}y^{\prime}E(\gamma)^{\prime}M(\gamma)E(\gamma)y,$ (3.5)
where
$M(\gamma)=I_{n}-E(\gamma)\Psi\left(\Psi^{\prime}\Sigma(\gamma)^{-1}\Psi\right)^{-1}\Psi^{\prime}E(\gamma)^{\prime}$
and $E(\gamma)$ is the $n\times n$ symmetric matrix such that
$E(\gamma)E(\gamma)^{\prime}=\Sigma(\gamma)^{-1}$. The use of the approximate
likelihood relies on the negligibility of $e(\cdot)$, which in turn permits
the replacement of $\theta_{0}(\cdot)$ by $\psi^{\prime}\beta_{0}$ with
asymptotically negligible cost. Thus the concentrated likelihood function is
$\mathcal{L}(\gamma)=\ln(2\pi)+\ln\bar{\sigma}^{2}(\gamma)+\frac{1}{n}\ln\left|\Sigma\left(\gamma\right)\right|.$
(3.6)
We define the QMLE of $\gamma_{0}$ as $\widehat{\gamma}=\text{arg
min}_{\gamma\in\Gamma}\mathcal{L}(\gamma)$ and the QMLEs of $\beta_{0}$ and
$\sigma_{0}^{2}$ as $\widehat{\beta}=\bar{\beta}\left(\widehat{\gamma}\right)$
and $\widehat{\sigma}^{2}=\bar{\sigma}^{2}\left(\widehat{\gamma}\right)$. At a
given $x_{1},\ldots,x_{n}$, the series estimate of $\theta_{0}$ is defined as
$\widehat{\theta}=\left(\hat{\theta}(x_{1}),\ldots,\hat{\theta}(x_{n})\right)^{\prime}=\left(\psi(x_{1})^{\prime}\widehat{\beta},\ldots,\psi(x_{n})^{\prime}\widehat{\beta}\right)^{\prime}.$
(3.7)
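For fixed $\gamma$, (3.4) and (3.5) are simply GLS formulas. A minimal sketch of computing $\bar{\beta}(\gamma)$ and $\bar{\sigma}^{2}(\gamma)$ given $\Sigma(\gamma)$ (our own helper names; $\bar{\sigma}^{2}$ is computed in its equivalent residual quadratic-form representation):

```python
import numpy as np

def profiled_beta_sigma2(y, Psi, Sigma):
    """Profiled QMLE pieces for fixed gamma: the GLS coefficient
    beta_bar = (Psi' Sigma^{-1} Psi)^{-1} Psi' Sigma^{-1} y and
    sigma2_bar = (1/n) (y - Psi beta_bar)' Sigma^{-1} (y - Psi beta_bar)."""
    n = len(y)
    Sigma_inv = np.linalg.inv(Sigma)
    beta = np.linalg.solve(Psi.T @ Sigma_inv @ Psi, Psi.T @ Sigma_inv @ y)
    resid = y - Psi @ beta
    sigma2 = resid @ Sigma_inv @ resid / n
    return beta, sigma2

# Illustrative data: n = 50, p = 3 basis functions, iid errors.
rng = np.random.default_rng(0)
n, p = 50, 3
Psi = rng.standard_normal((n, p))
y = Psi @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)
beta, sigma2 = profiled_beta_sigma2(y, Psi, np.eye(n))
print(beta, sigma2)
```

With $\Sigma=I_{n}$ the GLS coefficient coincides with OLS, while a non-trivial $\Sigma(\gamma)$ reweights the observations according to the estimated dependence.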
Let $\widehat{\alpha}_{n}\equiv\widehat{\alpha}$ denote an estimator
consistent for $\alpha_{0}$ under $H_{0}$, for example the (nonlinear) least
squares estimator. Note that $\widehat{\alpha}$ is consistent only under
$H_{0}$, so we introduce a general probability limit of $\widehat{\alpha}$, as
in Hong and White (1995).
###### Assumption R.2.
There exists a deterministic sequence $\alpha_{n}^{*}\equiv\alpha^{*}$ such
that $\widehat{\alpha}-\alpha^{*}=O_{p}\left(1/\sqrt{n}\right)$.
Examples of estimators that satisfy this assumption include (nonlinear) least
squares, generalized method of moments estimators or adaptive efficient
weighted least squares (Stinchcombe and White, 1998).
Following Hong and White (1995), define the regression error $u_{i}\equiv
y_{i}-f(x_{i},\alpha^{\ast})$ and the specification error
$v_{i}\equiv\theta_{0}(x_{i})-f(x_{i},\alpha^{\ast})$. Our test statistic is
based on a scaled and centered version of
$\widehat{m}_{n}=\widehat{\sigma}^{-2}\widehat{{v}}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{{u}}/n=\widehat{\sigma}^{-2}\left(\widehat{{\theta}}-{f}\left(x,\widehat{\alpha}\right)\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(y-{f}\left(x,\widehat{\alpha}\right)\right)/n$,
where
$f(x,\alpha)=\left(f\left(x_{1},\alpha\right),\ldots,f\left(x_{n},\alpha\right)\right)^{\prime}$.
Precisely, it is defined as
$\mathscr{T}_{n}=\frac{n\widehat{m}_{n}-p}{\sqrt{2p}}.$ (3.8)
The motivation for such a centering and scaling stems from the fact that, for
fixed $p$, $n\widehat{m}_{n}$ has an asymptotic $\chi^{2}_{p}$ distribution.
Such a distribution has mean $p$ and variance $2p$, and it is a well-known
fact that
$\left(\chi^{2}_{p}-p\right)/{\sqrt{2p}}\overset{d}{\longrightarrow}N(0,1),\text{
as }p\rightarrow\infty$. This motivates our use of (3.8) and explains why we
aspire to establish a standard normal distribution under the null hypothesis.
Intuitively, the test statistic is based on the sample covariance between the
residual from the parametric model and the discrepancy between the parametric
and nonparametric fitted values, as in Hong and White (1995).
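A compact sketch of assembling $\mathscr{T}_{n}$ from (3.8) (our own variable names; for brevity $\Sigma(\widehat{\gamma})$ is taken to be $I_{n}$, i.e. no spatial dependence, and the null is the linear specification $f(x,\alpha)=\alpha_{0}+\alpha_{1}x$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 6

x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)   # data generated under H0

# Parametric fit under the linear null, and a series fit with p power basis terms.
X = np.column_stack([np.ones(n), x])
alpha, *_ = np.linalg.lstsq(X, y, rcond=None)
Psi = np.vander(x, p, increasing=True)
beta, *_ = np.linalg.lstsq(Psi, y, rcond=None)

u_hat = y - X @ alpha                  # parametric residual
v_hat = Psi @ beta - X @ alpha         # parametric vs. series fitted gap
sigma2_hat = np.mean(u_hat ** 2)

m_hat = (v_hat @ u_hat) / (n * sigma2_hat)   # Sigma_hat = I_n here
T_n = (n * m_hat - p) / np.sqrt(2.0 * p)
print(T_n)
```

Because the linear null is nested in the power basis, $n\widehat{m}_{n}$ here is a nonnegative quadratic form (a difference of projections applied to $y$), so moderate values of $\mathscr{T}_{n}$ are expected under $H_{0}$ and large positive values signal misspecification.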
Hong and White (1995) also note that, due to the nonparametric nature of the
problem, such a statistic vanishes faster than the parametric
($n^{\frac{1}{2}}$) rate, thus a $n^{\frac{1}{2}}$-normalization leads to
degeneracy of the test. A proper normalization as in (3.8) will yield a non-
degenerate limiting distribution. As Hong and White (1995) noted, our test is
one-sided. This is because asymptotically negative values of our test
statistic can occur only under the null, while under the alternative it tends
to a positive, increasing number. Thus, we reject the null if our test
statistic is on the right tail.
## 4 Asymptotic theory
### 4.1 Consistency of $\widehat{\gamma}$
We first provide conditions under which our estimator $\widehat{\gamma}$ of
$\gamma_{0}$ is consistent. Such a property is necessary for the results that
follow. The following assumption is a rather standard type of asymptotic
boundedness and full-rank condition on $\Sigma(\gamma)$.
###### Assumption R.3.
$\varlimsup_{n\rightarrow\infty}\sup_{\gamma\in\Gamma}\bar{\varphi}\left(\Sigma(\gamma)\right)<\infty\text{
and
}\varliminf_{n\rightarrow\infty}\inf_{\gamma\in\Gamma}\underline{\varphi}\left(\Sigma(\gamma)\right)>0.$
###### Assumption R.4.
The $u_{i},i=1,\ldots,n,$ satisfy the representation (2.2). The
$\varepsilon_{s}$, $s\geq 1$, have zero mean, finite third and fourth moments
$\mu_{3}$ and $\mu_{4}$ respectively and, denoting by $\sigma_{ij}(\gamma)$
the $(i,j)$-th element of $\Sigma(\gamma)$ and defining
$b_{is}^{\ast}={b_{is}}/{\sigma_{ii}^{\frac{1}{2}}},\;i=1,\ldots,n,\;n\geq
1,s\geq 1,$ we have
$\underset{n\rightarrow\infty}{\overline{\lim}}\sup_{i=1,\ldots,n}\sum_{s=1}^{\infty}\left|b_{is}^{\ast}\right|+\sup_{s\geq
1}\underset{n\rightarrow\infty}{\overline{\lim}}\sum_{i=1}^{n}\left|b_{is}^{\ast}\right|<\infty.$
(4.1)
By Assumption R.3, $\sigma_{ii}$ is bounded and bounded away from zero, so the
normalization of the $b_{is}$ in Assumption R.4 is well defined. The
summability conditions in (4.1) are typical conditions on linear process
coefficients that are needed to control dependence; for instance in the case
of stationary time series $b^{*}_{is}=b^{*}_{i-s}$. The infinite linear
process assumed in (2.2) is further discussed by Robinson (2011), who
introduced it, and also by Delgado and Robinson (2015). These assumptions
imply an increasing-domain asymptotic setup and preclude infill asymptotics.
Because we often need to consider the difference between values of the matrix-
valued function $\Sigma(\cdot)$ at distinct points, it is useful to introduce
an appropriate concept of ‘smoothness’. This concept has been employed before
in economics, see e.g. Chen (2007), and is defined below.
###### Definition 1.
Let $\left(X,\left\|\cdot\right\|_{X}\right)$ and
$\left(Y,\left\|\cdot\right\|_{Y}\right)$ be Banach spaces, $\mathscr{L}(X,Y)$
be the Banach space of linear continuous maps from $X$ to $Y$ with norm
$\left\|T\right\|_{\mathscr{L}(X,Y)}=\sup_{\left\|x\right\|_{X}\leq
1}\left\|T(x)\right\|_{Y}$ and $U$ be an open subset of $X$. A map
$F:U\rightarrow Y$ is said to be Fréchet-differentiable at $u\in U$ if there
exists $L\in\mathscr{L}(X,Y)$ such that
$\lim_{\left\|h\right\|_{X}\rightarrow
0}\frac{F(u+h)-F(u)-L(h)}{\left\|h\right\|_{X}}=0.$ (4.2)
$L$ is called the Fréchet-derivative of $F$ at $u$. The map $F$ is said to be
Fréchet-differentiable on $U$ if it is Fréchet-differentiable for all $u\in
U$.
The above definition extends the notion of a derivative familiar from real analysis to functional spaces and allows us to verify high-level assumptions that the past literature has imposed. To the best of our knowledge,
this is the first use of such a concept in the literature on spatial/network
models. Denote by $\mathcal{M}^{n\times n}$ the set of real, symmetric and
positive semi-definite $n\times n$ matrices. Let $\Gamma^{o}$ be an open
subset of $\Gamma$ and consider the Banach spaces
$\left(\Gamma,\left\|\cdot\right\|_{g}\right)$ and $\left(\mathcal{M}^{n\times
n},\left\|\cdot\right\|\right)$, where $\left\|\cdot\right\|_{g}$ is a generic
$\ell_{p}$ norm, $p\geq 1$. The following assumption ensures that
$\Sigma(\cdot)$ is a ‘smooth’ function, in the sense of Fréchet-smoothness.
###### Assumption R.5.
The map $\Sigma:\Gamma^{o}\rightarrow\mathcal{M}^{n\times n}$ is Fréchet-
differentiable on $\Gamma^{o}$ with Fréchet-derivative denoted
$D\Sigma\in\mathscr{L}\left(\Gamma^{o},\mathcal{M}^{n\times n}\right)$.
Furthermore, the map $D\Sigma$ satisfies
$\sup_{\gamma\in\Gamma^{o}}\left\|D\Sigma(\gamma)\right\|_{\mathscr{L}\left(\Gamma^{o},\mathcal{M}^{n\times
n}\right)}\leq C.$ (4.3)
Assumption R.5 is a functional smoothness condition on spatial dependence. It
has the advantage of being checkable for a variety of commonly employed
models. For example, a first-order SEM has
$\Sigma(\gamma)=A^{-1}(\gamma)A^{\prime-1}(\gamma)$ with $A=I_{n}-\gamma W$.
Corollary CS.1 in the supplementary appendix shows
$\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)=\gamma^{\dagger}A^{-1}(\gamma)\left(G^{\prime}(\gamma)+G(\gamma)\right)A^{\prime-1}(\gamma)$,
at a given point $\gamma\in\Gamma^{o}$, where $G(\gamma)=WA^{-1}(\gamma)$.
Then, taking
$\left\|W\right\|+\sup_{\gamma\in\Gamma}\left\|A^{-1}(\gamma)\right\|<C$ (4.4)
yields Assumption R.5. Condition (4.4) limits the extent of spatial dependence
and is very standard in the spatial literature; see e.g. Lee (2004) and
numerous subsequent papers employing similar conditions.
Fréchet derivatives for higher-order SAR, SMA, SARMA and MESS error structures
are computed in supplementary appendix S.D, in Lemmas LS.5-LS.6 and
Corollaries CS.1-CS.2. Strictly speaking, Gateaux differentiability might
suffice for the type of results that we target. We opt for Fréchet
differentiability because this derivative map is linear and continuous or,
equivalently, a bounded linear operator, a property that makes Assumption R.5
more reasonable.
The following proposition is very useful in ‘linearizing’ perturbations in the
$\Sigma(\cdot)$.
###### Proposition 4.1.
If Assumption R.5 holds, then for any $\gamma_{1},\gamma_{2}\in\Gamma^{o}$,
$\left\|\Sigma\left(\gamma_{1}\right)-\Sigma\left(\gamma_{2}\right)\right\|\leq
C\left\|\gamma_{1}-\gamma_{2}\right\|.$ (4.5)
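A one-line sketch of why (4.5) holds, assuming the segment joining $\gamma_{1}$ and $\gamma_{2}$ lies in $\Gamma^{o}$: by the mean value inequality for Fréchet-differentiable maps,
$\left\|\Sigma\left(\gamma_{1}\right)-\Sigma\left(\gamma_{2}\right)\right\|\leq\sup_{0\leq t\leq 1}\left\|D\Sigma\left(\gamma_{2}+t\left(\gamma_{1}-\gamma_{2}\right)\right)\right\|_{\mathscr{L}\left(\Gamma^{o},\mathcal{M}^{n\times n}\right)}\left\|\gamma_{1}-\gamma_{2}\right\|\leq C\left\|\gamma_{1}-\gamma_{2}\right\|,$
where the final bound is (4.3).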
To illustrate how the concept of Fréchet-differentiability allows us to check
high-level assumptions extant in the literature, a consequence of Proposition
4.1 is the following corollary, a version of which appears as an assumption in
Delgado and Robinson (2015).
###### Corollary 4.1.
For any $\gamma^{*}\in\Gamma^{o}$ and any $\eta>0$,
$\underset{n\rightarrow\infty}{\overline{\lim}}\sup_{\gamma\in\left\\{\gamma:\left\|\gamma-\gamma^{*}\right\|<\eta\right\\}\cap\Gamma^{o}}\left\|\Sigma(\gamma)-\Sigma\left(\gamma^{*}\right)\right\|<C\eta.$
(4.6)
We now introduce regularity conditions needed to establish the consistency of
$\hat{\gamma}$. Define
$\sigma^{2}\left(\gamma\right)=n^{-1}\sigma^{2}tr\left(\Sigma(\gamma)^{-1}\Sigma\right)=n^{-1}\sigma^{2}\left\|E(\gamma)E^{-1}\right\|_{F}^{2},$
which is nonnegative by definition and bounded by Assumption R.3, with the
matrix $E(\gamma)$ defined after (3.5).
###### Assumption R.6.
$c\leq\sigma^{2}\left(\gamma\right)\leq C$ for all $\gamma\in\Gamma$.
###### Assumption R.7.
$\gamma_{0}\in\Gamma$ and, for any $\eta>0$,
$\varliminf_{n\rightarrow\infty}\inf_{\gamma\in\overline{\mathcal{N}}^{\gamma}(\eta)}\frac{n^{-1}tr\left(\Sigma(\gamma)^{-1}\Sigma\right)}{\left|\Sigma(\gamma)^{-1}\Sigma\right|^{1/n}}>1,$
(4.7)
where
$\overline{\mathcal{N}}^{\gamma}(\eta)=\Gamma\setminus\mathcal{N}^{\gamma}(\eta)$
and
$\mathcal{N}^{\gamma}(\eta)=\left\\{\gamma:\left\|\gamma-\gamma_{0}\right\|<\eta\right\\}\cap\Gamma$.
###### Assumption R.8.
$\left\\{\underline{\varphi}\left(n^{-1}\Psi^{\prime}\Psi\right)\right\\}^{-1}+\overline{\varphi}\left(n^{-1}\Psi^{\prime}\Psi\right)=O_{p}(1)$.
Assumption R.6 is a boundedness condition originally considered in Gupta and
Robinson (2018), while Assumptions R.7 and R.8 are identification conditions.
Indeed, Assumption R.7 requires that $\Sigma(\gamma)$ be identifiable in a
small neighborhood around $\gamma_{0}$. This is apparent on noticing that the
ratio in (4.7) is at least one by the inequality between arithmetic and
geometric means, and equals one when $\Sigma(\gamma)=\Sigma$. Similar
assumptions arise frequently in related literature, see e.g. Lee (2004),
Delgado and Robinson (2015). Assumption R.8 is a typical asymptotic
boundedness and non-multicollinearity condition, see e.g. Newey (1997) and
much other literature on series estimation. Primitive conditions for this
assumption to hold require the convergence (in matrix norm) of
$n^{-1}\Psi^{\prime}\Psi$ to its expectation, and this entails restrictions on
the extent of spatial dependence in the $x_{i}$; see Assumption A.4 and the
proof of Theorem 1 in Lee and Robinson (2016).
By Assumption R.3, R.8 implies
$\sup_{\gamma\in\Gamma}\left\\{\underline{\varphi}\left(n^{-1}\Psi^{\prime}\Sigma(\gamma)^{-1}\Psi\right)\right\\}^{-1}=O_{p}(1)$.
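The arithmetic–geometric mean argument behind Assumption R.7 is easy to verify numerically. The snippet below uses the first-order SEM map $\Sigma(\gamma)=A^{-1}(\gamma)A^{\prime-1}(\gamma)$ with a toy row-normalised weight matrix (all concrete choices are illustrative):

```python
import numpy as np

def amgm_ratio(gamma, gamma0, W):
    """Arithmetic over geometric mean of the eigenvalues of
    Sigma(gamma)^{-1} Sigma(gamma_0), i.e. the ratio in (4.7)."""
    n = W.shape[0]
    sig = lambda g: np.linalg.inv((np.eye(n) - g * W).T @ (np.eye(n) - g * W))
    eig = np.linalg.eigvals(np.linalg.solve(sig(gamma), sig(gamma0))).real
    return eig.mean() / np.exp(np.log(eig).mean())

# Toy weight matrix: two nearest neighbours on a line, row-normalised.
n = 8
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)

# Equals 1 only when Sigma(gamma) = Sigma(gamma_0); exceeds 1 otherwise.
r_null, r_alt = amgm_ratio(0.3, 0.3, W), amgm_ratio(0.6, 0.3, W)
```

The ratio equals one at $\gamma=\gamma_{0}$ and strictly exceeds one away from it, which is exactly the separation Assumption R.7 requires outside a shrinking neighborhood.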
###### Theorem 4.1.
Under either $H_{0}$ or $H_{1}$, Assumptions R.1-R.8 and
$p^{-1}+\left(d_{\gamma}+p\right)/n\rightarrow 0$ as $n\rightarrow\infty$,
$\left\|\left(\widehat{\gamma},\hat{\sigma}^{2}\right)-\left(\gamma_{0},\sigma_{0}^{2}\right)\right\|\overset{p}{\longrightarrow}0.$
### 4.2 Asymptotic properties of the test statistic
Write $\Sigma_{j}(\gamma)=\partial\Sigma(\gamma)/\partial\gamma_{j}$,
$j=1,\ldots,d_{\gamma}$, the matrix differentiated element-wise. While
Assumption R.5 guarantees that these partial derivatives exist, the next
assumption imposes a uniform bound on their spectral norms.
###### Assumption R.9.
$\varlimsup_{n\rightarrow\infty}\sup_{j=1,\ldots,d_{\gamma}}\left\|\Sigma_{j}(\gamma)\right\|<C$.
We will later consider the sequence of local alternatives
$H_{\ell n}\equiv
H_{\ell}:f(x_{i},\alpha_{n}^{\ast})=\theta_{0}(x_{i})+(p^{1/4}/n^{1/2})h(x_{i}),a.s.,$
(4.8)
where $h$ is square integrable on the support $\mathcal{X}$ of the $x_{i}$.
Under the null $H_{0}$, we have $h(x_{i})=0$, a.s.
###### Assumption R.10.
For each $n\in\mathbb{N}$ and $i=1,\ldots,n$, the function
$f:\mathcal{X}\times\mathcal{A}\rightarrow\mathbb{R}$ is such that
$f\left(x_{i},\alpha\right)$ is measurable for each $\alpha\in\mathcal{A}$,
$f\left(x_{i},\cdot\right)$ is a.s. continuous on $\mathcal{A}$, with
$\sup_{\alpha\in\mathcal{A}}f^{2}\left(x_{i},\alpha\right)\leq
D_{n}\left(x_{i}\right)$, where $\sup_{n\in\mathbb{N}}D_{n}\left(x_{i}\right)$
is integrable and $\sup_{\alpha\in\mathcal{A}}\left\|\partial
f\left(x_{i},\alpha\right)/\partial\alpha\right\|^{2}\leq
D_{n}\left(x_{i}\right)$,
$\sup_{\alpha\in\mathcal{A}}\left\|\partial^{2}f\left(x_{i},\alpha\right)/\partial\alpha\partial\alpha^{\prime}\right\|\leq
D_{n}\left(x_{i}\right)$, all holding a.s.
Define the infinite-dimensional matrix
$\mathscr{V}=B^{\prime}\Sigma^{-1}\Psi\left(\Psi^{\prime}\Sigma^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma^{-1}B$,
which is symmetric, idempotent and has rank $p$. We now show that our test
statistic is approximated by a quadratic form in $\varepsilon$, weighted by
$\mathscr{V}$.
###### Theorem 4.2.
Under Assumptions R.1-R.10,
$p^{-1}+p\left(p+d_{\gamma}^{2}\right)/n+\sqrt{n}/p^{\mu+1/4}\rightarrow 0$,
as $n\rightarrow\infty$, and $H_{0}$,
$\mathscr{T}_{n}-{\left(\sigma_{0}^{-2}\varepsilon^{\prime}\mathscr{V}\varepsilon-p\right)}/{\sqrt{2p}}=o_{p}(1).$
###### Assumption R.11.
$\underset{n\rightarrow\infty}{\overline{\lim}}\left\|\Sigma^{-1}\right\|_{R}<\infty.$
Because $\left\|\Sigma^{-1}\right\|\leq\left\|\Sigma^{-1}\right\|_{R}$, this
restriction on spatial dependence is somewhat stronger than a restriction on
spectral norm but is typically imposed for central limit theorems in this type
of setting, cf. Lee (2004), Delgado and Robinson (2015), Gupta and Robinson
(2018). The next assumption is needed in our proofs to check a Lyapunov
condition. A typical approach would be to assume moments of order
$4+\epsilon$, for some $\epsilon>0$. Due to the linear process structure under
consideration, taking $\epsilon=4$ makes the proof tractable; see for example
Delgado and Robinson (2015).
###### Assumption R.12.
The $\varepsilon_{s}$, $s\geq 1$, have finite eighth moment.
The next assumption is strong if the basis functions $\psi_{ij}(\cdot)$ are
polynomials, requiring all moments to exist in that case.
###### Assumption R.13.
$\mathcal{E}\left|\psi_{ij}\left(x\right)\right|<C$, $i=1,\ldots,n$ and
$j=1,\ldots,p$.
The next theorem establishes the asymptotic normality of the approximating
quadratic form introduced above.
###### Theorem 4.3.
Under Assumptions R.3, R.4, R.8, R.11-R.13 and $p^{-1}+p^{3}/n\rightarrow 0$,
as $n\rightarrow\infty$,
${\left(\sigma_{0}^{-2}\varepsilon^{\prime}\mathscr{V}\varepsilon-p\right)}/{\sqrt{2p}}\overset{d}{\longrightarrow}N(0,1).$
This is a new type of CLT, combining a linear process framework with an
increasing-dimension element. A linear-quadratic form in iid
disturbances is treated by Kelejian and Prucha (2001), while a quadratic form
in a linear process framework is treated by Delgado and Robinson (2015).
However, both results are established in a parametric framework, entailing no
increasing-dimension aspect of the type we face with $p\rightarrow\infty$.
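A quick Monte Carlo illustrates the normal approximation in Theorem 4.3: for a symmetric idempotent rank-$p$ weight matrix and iid standard normal $\varepsilon$ (so $\sigma_{0}^{2}=1$), the quadratic form is a $\chi^{2}_{p}$ variate, and its centred, scaled version is close to $N(0,1)$. The dimensions and the projection construction below are illustrative, not the paper's $\mathscr{V}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 400, 25, 2000
# Orthonormal columns Q give a symmetric idempotent V = QQ' of rank p,
# mimicking the weight matrix of the quadratic form (sigma_0^2 = 1).
Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
# eps' V eps = ||Q' eps||^2, so the projection need not be formed explicitly.
stats = np.array([((Q.T @ rng.standard_normal(n))**2).sum() for _ in range(reps)])
stats = (stats - p) / np.sqrt(2 * p)   # (eps' V eps - p) / sqrt(2p)
```

The sample mean and variance of `stats` are close to 0 and 1, consistent with the limiting $N(0,1)$ law.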
Next, we summarize the properties of our test statistic in a theorem that
records its asymptotic normality under the null, consistency and ability to
detect local alternatives at $p^{1/4}/n^{1/2}$ rate. This rate has been found
also by De Jong and Bierens (1994) and Gupta (2018b). Introduce the quantity
$\varkappa=\left({\sqrt{2}\sigma_{0}^{2}}\right)^{-1}\operatorname{plim}_{n\rightarrow\infty}{n^{-1}h^{\prime}\Sigma^{-1}h}$,
where $h=\left(h\left(x_{1}\right),\ldots,h\left(x_{n}\right)\right)^{\prime}$
and $h\left(x_{i}\right)$ is from (4.8).
###### Theorem 4.4.
Under the conditions of Theorems 4.2 and 4.3, (1)
$\mathscr{T}_{n}\overset{d}{\rightarrow}N(0,1)$ under $H_{0}$, (2)
$\mathscr{T}_{n}$ is a consistent test statistic, (3)
$\mathscr{T}_{n}\overset{d}{\rightarrow}N\left(\varkappa,1\right)$ under local
alternatives $H_{\ell}$.
## 5 Models with SAR structure in responses
We now introduce the SAR model
$y_{i}=\sum_{j=1}^{d_{\lambda}}\lambda_{0j}w_{i,j}^{\prime}y+\theta_{0}\left(x_{i}\right)+u_{i},i=1,\ldots,n,$
(5.1)
where $W_{j}$, $j=1,\ldots,d_{\lambda}$, are known spatial weight matrices
with $i$-th rows denoted $w_{i,j}^{\prime}$, as discussed earlier, and
$\lambda_{0j}$ are unknown parameters measuring the strength of spatial
dependence. We take $d_{\lambda}$ to be fixed for convenience of exposition.
The error structure remains the same as in (2.2). Here spatial dependence
arises not only in errors but also responses. For example, this corresponds to
a situation where agents in a network influence each other both in their
observed and unobserved actions. Note that the error term $u_{i}$ can be
generated by the same $W_{j}$, or different ones.
While the model in (5.1) is new in the literature, some related ones are
discussed here. Models such as (5.1) but without dependence in the error
structure are considered by Su and Jin (2010) and Gupta and Robinson (2015,
2018), but the former consider only $d_{\lambda}=1$ and the latter only
parametric $\theta_{0}(\cdot)$. Linear $\theta_{0}(\cdot)$ and $d_{\lambda}>1$
are permitted by Lee and Liu (2010), but the dependence structure in errors
differs from what we allow in (5.1). Using the same setup as Su and Jin (2010)
and independent disturbances, a specification test for the linearity of
$\theta_{0}(\cdot)$ is proposed by Su and Qu (2017). In comparison, our model
is much more general and our test can handle more general parametric null
hypotheses. We thank a referee for pointing out that (5.1) is a particular
case of Sun (2016) when $u_{i}$ are iid and of Malikov and Sun (2017) when
$d_{\lambda}=1$.
Denoting $S(\lambda)=I_{n}-\sum_{j=1}^{d_{\lambda}}\lambda_{j}W_{j}$, the
negative quasi log likelihood function (multiplied by $2/n$) based on
Gaussianity and conditional on covariates is
$\displaystyle
L(\beta,\sigma^{2},\phi)=\log{(2\pi\sigma^{2})}-\frac{2}{n}\log{\left|{S\left(\lambda\right)}\right|}+\frac{1}{n}\log{\left|{\Sigma\left(\gamma\right)}\right|}$
$\displaystyle+\frac{1}{\sigma^{2}{n}}\left(S\left(\lambda\right)y-\Psi\beta\right)^{\prime}\Sigma(\gamma)^{-1}\left(S\left(\lambda\right)y-\Psi\beta\right),$
(5.2)
at any admissible point
$\left(\beta^{\prime},\phi^{\prime},\sigma^{2}\right)^{\prime}$ with
$\phi=\left(\lambda^{\prime},\gamma^{\prime}\right)^{\prime}$, for nonsingular
$S(\lambda)$ and $\Sigma(\gamma)$. For given
$\phi=\left(\lambda^{\prime},\gamma^{\prime}\right)^{\prime}$, (5.2) is
minimised with respect to $\beta$ and $\sigma^{2}$ by
$\displaystyle\bar{\beta}\left(\phi\right)$ $\displaystyle=$
$\displaystyle\left(\Psi^{\prime}\Sigma(\gamma)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma(\gamma)^{-1}S\left(\lambda\right)y,$
(5.3) $\displaystyle\bar{\sigma}^{2}\left(\phi\right)$ $\displaystyle=$
$\displaystyle{n^{-1}}y^{\prime}S^{\prime}\left(\lambda\right)E(\gamma)^{\prime}M(\gamma)E(\gamma)S\left(\lambda\right)y.$
(5.4)
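The profiling steps in (5.2)–(5.4) are straightforward to code. Below is a minimal sketch for $d_{\lambda}=1$ with first-order SEM errors, so that $\Sigma(\gamma)^{-1}=A^{\prime}(\gamma)A(\gamma)$ with $A=I_{n}-\gamma W_{2}$; it is an illustration under these assumptions, not the paper's general implementation:

```python
import numpy as np

def concentrated_objective(lam, gamma, y, Psi, W1, W2):
    """Concentrated objective (5.5): profile out beta via (5.3) and
    sigma^2 via (5.4), for one spatial lag and first-order SEM errors."""
    n = len(y)
    S = np.eye(n) - lam * W1
    A = np.eye(n) - gamma * W2
    Sig_inv = A.T @ A                       # Sigma(gamma)^{-1}
    Sy = S @ y
    # GLS coefficient (5.3) and profiled variance (5.4)
    beta = np.linalg.solve(Psi.T @ Sig_inv @ Psi, Psi.T @ Sig_inv @ Sy)
    resid = Sy - Psi @ beta
    sigma2 = resid @ Sig_inv @ resid / n
    _, ld_S = np.linalg.slogdet(S)
    _, ld_A = np.linalg.slogdet(A)
    # n^{-1} log|S'^{-1} Sigma(gamma) S^{-1}| = -2 (log|S| + log|A|) / n
    return np.log(sigma2) - 2 * (ld_S + ld_A) / n
```

The QMLE $\widehat{\phi}$ then minimises this objective over $\Phi$, e.g. by grid search or a numerical optimiser. With $W_{1}=W_{2}=0$ the objective reduces to the log of the OLS residual variance, a useful check.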
The QMLE of $\phi_{0}$ is
$\widehat{\phi}=\operatorname*{arg\,min}_{\phi\in\Phi}\mathcal{L}\left(\phi\right)$,
where
$\mathcal{L}\left(\phi\right)=\log\bar{\sigma}^{2}\left(\phi\right)+{n^{-1}}\log\left|S^{\prime-1}\left(\lambda\right)\Sigma(\gamma)S^{-1}\left(\lambda\right)\right|,$
(5.5)
and $\Phi=\Lambda\times\Gamma$ is taken to be a compact subset of
$\mathbb{R}^{d_{\lambda}+d_{\gamma}}$. The QMLEs of $\beta_{0}$ and
$\sigma_{0}^{2}$ are defined as
$\bar{\beta}\left(\widehat{\phi}\right)\equiv\widehat{\beta}$ and
$\bar{\sigma}^{2}\left(\widehat{\phi}\right)\equiv\widehat{\sigma}^{2}$
respectively. The following assumption controls spatial dependence and is
discussed below equation (4.4).
###### Assumption SAR.1.
$\max_{j=1,\ldots,d_{\lambda}}\left\|W_{j}\right\|+\left\|S^{-1}\right\|<C$.
Writing $T(\lambda)=S(\lambda)S^{-1}$ and
$\phi=\left(\lambda^{\prime},\gamma^{\prime}\right)^{\prime}$, define the
quantity
$\sigma^{2}\left(\phi\right)=n^{-1}\sigma_{0}^{2}tr\left(T^{\prime}(\lambda)\Sigma(\gamma)^{-1}T(\lambda)\Sigma\right)=n^{-1}\sigma_{0}^{2}\left\|E(\gamma)T(\lambda)E^{-1}\right\|_{F}^{2},$
which is nonnegative by definition and bounded by Assumptions R.3 and SAR.1.
The assumptions below directly extend Assumptions R.6 and R.7 to the present
setup.
###### Assumption SAR.2.
$c\leq\sigma^{2}\left(\phi\right)\leq C$, for all $\phi\in\Phi$.
###### Assumption SAR.3.
$\phi_{0}\in\Phi$ and, for any $\eta>0$,
$\varliminf_{n\rightarrow\infty}\inf_{\phi\in\overline{\mathcal{N}}^{\phi}(\eta)}\frac{n^{-1}tr\left(T^{\prime}(\lambda)\Sigma(\gamma)^{-1}T(\lambda)\Sigma\right)}{\left|T^{\prime}(\lambda)\Sigma(\gamma)^{-1}T(\lambda)\Sigma\right|^{1/n}}>1,$
(5.6)
where
$\overline{\mathcal{N}}^{\phi}(\eta)=\Phi\setminus\mathcal{N}^{\phi}(\eta)$
and
$\mathcal{N}^{\phi}(\eta)=\left\\{\phi:\left\|\phi-\phi_{0}\right\|<\eta\right\\}\cap\Phi$.
We now introduce an identification condition that is required in the setup of
this section.
###### Assumption SAR.4.
$\beta_{0}\neq 0$ and for any $\eta>0$,
${P\left(\varliminf_{n\rightarrow\infty}\inf_{\left(\lambda^{\prime},\gamma^{\prime}\right)^{\prime}\in\Lambda\times\overline{\mathcal{N}}^{\gamma}(\eta)}n^{-1}\beta_{0}^{\prime}\Psi^{\prime}T^{\prime}(\lambda)E(\gamma)^{\prime}M\left(\gamma\right)E(\gamma)T(\lambda)\Psi\beta_{0}/\left\|\beta_{0}\right\|^{2}>0\right)=1.}$
(5.7)
Upon performing minimization with respect to $\beta$, the event inside the
probability in (5.7) is equivalent to the event
${\varliminf_{n\rightarrow\infty}\min_{\beta\in\mathbb{R}^{p}}\inf_{\left(\lambda^{\prime},\gamma^{\prime}\right)^{\prime}\in\Lambda\times\overline{\mathcal{N}}^{\gamma}(\eta)}n^{-1}\left(\Psi\beta-T(\lambda)\Psi\beta_{0}\right)^{\prime}\Sigma(\gamma)^{-1}\left(\Psi\beta-T(\lambda)\Psi\beta_{0}\right)/\left\|\beta_{0}\right\|^{2}>0,}$
which is analogous to the identification condition for the nonlinear
regression model with a parametric linear factor in Robinson (1972), weighted
by the inverse of the error covariance matrix. This reduces the condition to a
scalar form of a rank condition, making the identifying nature of the
assumption transparent. A similar identifying assumption is used by Gupta and
Robinson (2018).
###### Theorem 5.1.
Under either $H_{0}$ or $H_{1}$, Assumptions R.1-R.5, R.8, SAR.1-SAR.4 and
$p^{-1}+\left(d_{\gamma}+p\right)/n\rightarrow 0,\text{ as
}n\rightarrow\infty,$
$\left\|\left(\widehat{\phi},\widehat{\sigma}^{2}\right)-\left(\phi_{0},{\sigma_{0}}^{2}\right)\right\|\overset{p}{\longrightarrow}0$
as $n\rightarrow\infty$.
The test statistic $\mathscr{T}_{n}$ can be constructed as before but with the
null residuals redefined to incorporate the spatially lagged terms, i.e.
$\hat{u}=S(\hat{\lambda})y-f(x,\hat{\alpha})$. Then we have the following
theorem.
###### Theorem 5.2.
Under Assumptions R.1-R.5, R.8-R.10, SAR.1-SAR.4,
$p^{-1}+p\left(p+d_{\gamma}^{2}\right)/n+\sqrt{n}/p^{\mu+1/4}+d^{2}_{\gamma}/p\rightarrow
0,\text{ as }n\rightarrow\infty,$
and $H_{0}$,
$\mathscr{T}_{n}-{\left(\sigma_{0}^{-2}\varepsilon^{\prime}\mathscr{V}\varepsilon-p\right)}/{\sqrt{2p}}=o_{p}(1).$
###### Theorem 5.3.
Under the conditions of Theorems 4.3, 5.1 and 5.2, (1)
$\mathscr{T}_{n}\overset{d}{\rightarrow}N(0,1)$ under $H_{0}$, (2)
$\mathscr{T}_{n}$ is a consistent test statistic, (3)
$\mathscr{T}_{n}\overset{d}{\rightarrow}N\left(\varkappa,1\right)$ under local
alternatives $H_{\ell}$.
## 6 Nonparametric spatial weights
In this section we are motivated by settings where spatial dependence occurs
through nonparametric functions of raw distances (geographic, social,
economic, or any other type of distance), as is the case in Pinkse et
al. (2002), for example. In their setup, $d_{ij}$ is a raw distance
between units $i$ and $j$ and the corresponding element of the spatial weight
matrix is given by $w_{ij}=\zeta_{0}\left(d_{ij}\right)$, where
$\zeta_{0}(\cdot)$ is an unknown nonparametric function. Pinkse et al. (2002)
use such a setup in a SAR model like (5.1), but with a linear regression
function. In contrast, in keeping with the focus of this paper we instead
model dependence in the errors in this manner. Our formulation is rather
general, covering, for example, a specification like
$w_{ij}=f\left(\gamma_{0},\zeta_{0}\left(d_{ij}\right)\right)$, with
$f(\cdot)$ a _known_ function, $\gamma_{0}$ an _unknown_ parameter of possibly
increasing dimension, and $\zeta_{0}(\cdot)$ an _unknown_ nonparametric
function. For the sake of simplicity, we do not permit the $x_{i}$ in this
section to be generated by such nonparametric weight matrices although they
can be generated from other, known weight matrices.
Let $\Xi$ be a compact space of functions, on which we will specify more
conditions later. For notational simplicity we abstract away from the SAR
dependence in the responses. Thus we consider (2.1), but with
$u_{i}=\sum_{s=1}^{\infty}b_{is}\left(\gamma_{0},\zeta_{0}\left({z_{i}}\right)\right)\varepsilon_{s},$
(6.1)
where
$\zeta_{0}(\cdot)=\left(\zeta_{01}(\cdot),\ldots,\zeta_{0d_{\zeta}}(\cdot)\right)^{\prime}$
is a fixed-dimensional vector of real-valued nonparametric functions with
$\zeta_{0\ell}\in\Xi$ for each $\ell=1,\ldots,d_{\zeta}$, and ${z}_{i}$ a
fixed-dimensional vector of data, independent of the $\varepsilon_{s}$, $s\geq
1$, with support $\mathcal{Z}$. One can also take $z_{i}$ to be a fixed
distance measure. We base our estimation on approximating each
$\zeta_{0\ell}({z_{i}})$, $\ell=1,\ldots,d_{\zeta}$, with the series
representation $\delta_{0\ell}^{\prime}\varphi_{\ell}({z_{i}})$, where
$\varphi_{\ell}\left({z_{i}}\right)\equiv\varphi_{\ell}$ is an $r_{\ell}\times
1$ ($r_{\ell}\rightarrow\infty$ as $n\rightarrow\infty$) vector of basis
functions with typical function $\varphi_{\ell k}$, $k=1,\ldots,r_{\ell}$. The
set of linear combinations $\delta_{\ell}^{\prime}\varphi_{\ell}({z_{i}})$
forms the sequence of sieve spaces $\Phi_{r_{\ell}}\subset\Xi$ as
$r_{\ell}\rightarrow\infty$, for any $\ell=1,\ldots,d_{\zeta}$, and
$\zeta_{0\ell}\left({z}\right)=\delta_{0\ell}^{\prime}\varphi_{\ell}+\nu_{\ell},$
(6.2)
with the following restriction on the function space $\Xi$:
###### Assumption NPN.1.
For some scalars $\kappa_{\ell}>0$,
$\left\|\nu_{\ell}\right\|_{w_{z}}=O\left(r_{\ell}^{-\kappa_{\ell}}\right),$
as $r_{\ell}\rightarrow\infty$, $\ell=1,\ldots,d_{\zeta}$, where $w_{z}\geq 0$
is the largest value such that
$\sup_{z\in\mathcal{Z}}\mathcal{E}\left\|z\right\|^{w_{z}}<\infty$.
Just as Assumption R.1 implied (3.2), by Lemma 1 of Lee and Robinson (2016),
we obtain
$\sup_{z\in\mathcal{Z}}\mathcal{E}\left(\nu^{2}_{\ell}\right)=O\left(r_{\ell}^{-2\kappa_{\ell}}\right),\ell=1,\ldots,d_{\zeta}.$
(6.3)
Thus we now have an infinite-dimensional nuisance parameter $\zeta_{0}(\cdot)$
and increasing-dimensional nuisance parameter $\gamma$. Writing
$\sum_{\ell=1}^{d_{\zeta}}r_{\ell}=r$ and
$\tau=(\gamma^{\prime},\delta^{\prime}_{1},\ldots,\delta^{\prime}_{d_{\zeta}})^{\prime}$,
which has increasing dimension $d_{\tau}=d_{\gamma}+r$, define
$\varsigma(r)=\sup_{z\in\mathcal{Z};\ell=1,\ldots,d_{\zeta}}\left\|\varphi_{\ell}\right\|.$
Write $\Sigma(\tau)$ for the covariance matrix of the $n\times 1$ vector of
$u_{i}$ in (6.1), with $\delta_{\ell}^{\prime}\varphi_{\ell}$ replacing each
admissible function $\zeta_{\ell}(\cdot)$. This is analogous to the definition
of $\Sigma(\gamma)$ in earlier sections, and indeed after conditioning on $z$
it can be treated in a similar way because $d_{\gamma}\rightarrow\infty$ was
already permitted. For example, suppose that $u=(I_{n}-W)^{-1}\varepsilon$,
where $\left\|W\right\|<1$ and the elements satisfy
$w_{ij}=\zeta_{0}\left(d_{ij}\right)$, $i,j=1,\ldots,n$, for some fixed
distances $d_{ij}$ and unknown function $\zeta_{0}(\cdot)$, see e.g. Pinkse
(1999). Approximating $\zeta_{0}(z)=\tau_{0}^{\prime}\varphi(z)+\nu$, for some
$r\times 1$ basis function vector $\varphi(z)$ and approximation error $\nu$,
we define $W(\tau)$ as the $n\times n$ matrix with elements
$w_{ij}(\tau)=\tau^{\prime}\varphi\left(d_{ij}\right)$, and set
$\Sigma(\tau)=\text{var}\left((I_{n}-W(\tau))^{-1}\varepsilon\right)=\sigma_{0}^{2}(I_{n}-W(\tau))^{-1}(I_{n}-W^{\prime}(\tau))^{-1}$.
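This running example can be made concrete: approximate a weight function $\zeta_{0}$ on pairwise distances by a power-series sieve and form $W(\tau)$ element-wise. All specific choices below, including the decay function playing the role of $\zeta_{0}$, are hypothetical illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 4
coords = rng.uniform(0, 1, size=(n, 2))
D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

zeta0 = lambda d: np.exp(-5.0 * d) / n       # hypothetical decay-with-distance weight
off = ~np.eye(n, dtype=bool)                 # off-diagonal mask

# Sieve coefficients tau: least-squares fit of zeta0 on the observed distances
Phi = np.column_stack([D[off]**k for k in range(r)])
tau, *_ = np.linalg.lstsq(Phi, zeta0(D[off]), rcond=None)

# W(tau) with w_ij(tau) = tau' varphi(d_ij); zero diagonal by convention
W_tau = sum(tau[k] * D**k for k in range(r))
np.fill_diagonal(W_tau, 0.0)

# Implied covariance Sigma(tau) for u = (I - W(tau))^{-1} eps, sigma_0^2 = 1
B = np.linalg.inv(np.eye(n) - W_tau)
Sigma_tau = B @ B.T
```

For this smooth $\zeta_{0}$, a low-order sieve already tracks the true weights closely, and $W(\tau)$ inherits the contraction property needed for $(I_{n}-W(\tau))^{-1}$ to exist.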
For any admissible values $\beta$, $\sigma^{2}$ and $\tau$, the redefined
(multiplied by $2/n$) negative quasi log likelihood function based on using
the approximations (3.1) and (6.2) is
${L}(\beta,\sigma^{2},\tau)=\ln\left(2\pi\sigma^{2}\right)+\frac{1}{n}\ln\left|\Sigma\left(\tau\right)\right|+\frac{1}{n\sigma^{2}}(y-\Psi\beta)^{\prime}\Sigma\left(\tau\right)^{-1}(y-\Psi\beta),$
(6.4)
which is minimised with respect to $\beta$ and $\sigma^{2}$ by
$\displaystyle\bar{\beta}\left(\tau\right)$ $\displaystyle=$
$\displaystyle\left(\Psi^{\prime}\Sigma\left(\tau\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\tau\right)^{-1}y,$
(6.5) $\displaystyle\bar{\sigma}^{2}\left(\tau\right)$ $\displaystyle=$
$\displaystyle{n^{-1}}y^{\prime}E(\tau)^{\prime}M(\tau)E(\tau)y,$ (6.6)
where
$M(\tau)=I_{n}-E(\tau)\Psi\left(\Psi^{\prime}\Sigma(\tau)^{-1}\Psi\right)^{-1}\Psi^{\prime}E(\tau)^{\prime}$
and $E(\tau)$ is the $n\times n$ symmetric matrix such that
$E(\tau)E(\tau)^{\prime}=\Sigma(\tau)^{-1}$. Thus the concentrated likelihood
function is
$\mathcal{L}(\tau)=\ln(2\pi)+\ln\bar{\sigma}^{2}(\tau)+\frac{1}{n}\ln\left|\Sigma\left(\tau\right)\right|.$
(6.7)
Again, for compact $\Gamma$ and sieve coefficient space $\Delta$, the QMLE of
$\tau_{0}$ is
$\widehat{\tau}=\operatorname*{arg\,min}_{\tau\in\Gamma\times\Delta}\mathcal{L}(\tau)$ and the QMLEs of $\beta$
and $\sigma^{2}$ are $\widehat{\beta}=\bar{\beta}\left(\widehat{\tau}\right)$
and $\widehat{\sigma}^{2}=\bar{\sigma}^{2}\left(\widehat{\tau}\right)$. The
series estimate of $\theta_{0}$ is defined as in (3.7). Define also the
product Banach space $\mathcal{T}=\Gamma\times\Xi^{d_{\zeta}}$ with norm
$\left\|\left(\gamma^{\prime},\zeta^{\prime}\right)^{\prime}\right\|_{\mathcal{T}_{w}}=\left\|\gamma\right\|+\sum_{\ell=1}^{d_{\zeta}}\left\|\zeta_{\ell}\right\|_{w}$,
and consider the map $\Sigma:\mathcal{T}^{o}\rightarrow\mathcal{M}^{n\times
n}$, where $\mathcal{T}^{o}$ is an open subset of $\mathcal{T}$.
###### Assumption NPN.2.
The map $\Sigma:\mathcal{T}^{o}\rightarrow\mathcal{M}^{n\times n}$ is Fréchet-
differentiable on $\mathcal{T}^{o}$ with Fréchet-derivative denoted
$D\Sigma\in\mathscr{L}\left(\mathcal{T}^{o},\mathcal{M}^{n\times n}\right)$.
Furthermore, conditional on ${z}$, the map $D\Sigma$ satisfies
$\sup_{t\in\mathcal{T}^{o}}\left\|D\Sigma(t)\right\|_{\mathscr{L}\left(\mathcal{T}^{o},\mathcal{M}^{n\times
n}\right)}\leq C,$ (6.8)
on its domain $\mathcal{T}^{o}$.
This assumption can be checked in a similar way to how we checked Assumption
R.5, where a diverging dimension for the argument was already permitted.
###### Proposition 6.1.
If Assumption NPN.2 holds, then for any $t_{1},t_{2}\in\mathcal{T}^{o}$,
conditional on $z$,
$\left\|\Sigma\left(t_{1}\right)-\Sigma\left(t_{2}\right)\right\|\leq
C\varsigma(r)\left\|t_{1}-t_{2}\right\|.$ (6.9)
###### Corollary 6.1.
For any $t^{*}\in\mathcal{T}^{o}$ and any $\eta>0$, conditional on $z$,
$\underset{n\rightarrow\infty}{\overline{\lim}}\sup_{t\in\left\\{t:\left\|t-t^{*}\right\|<\eta\right\\}\cap\mathcal{T}^{o}}\left\|\Sigma(t)-\Sigma\left(t^{*}\right)\right\|<C\varsigma(r)\eta.$
(6.10)
###### Assumption NPN.3.
$c\leq\sigma^{2}\left(\tau\right)\leq C$ for $\tau\in\Gamma\times\Delta$,
conditional on $z$.
Denote $\Sigma\left(\tau_{0}\right)=\Sigma_{0}$. Note that this is not the
true covariance matrix, which is
$\Sigma\equiv\Sigma\left(\gamma_{0},\zeta_{0}\right)$.
###### Assumption NPN.4.
$\tau_{0}\in\Gamma\times\Delta$ and, for any $\eta>0$, conditional on $z$,
$\varliminf_{n\rightarrow\infty}\inf_{\tau\in\overline{\mathcal{N}}^{\tau}(\eta)}\frac{n^{-1}tr\left(\Sigma(\tau)^{-1}\Sigma_{0}\right)}{\left|\Sigma(\tau)^{-1}\Sigma_{0}\right|^{1/n}}>1,$
(6.11)
where
$\overline{\mathcal{N}}^{\tau}(\eta)=(\Gamma\times\Delta)\setminus\mathcal{N}^{\tau}(\eta)$
and
$\mathcal{N}^{\tau}(\eta)=\left\\{\tau:\left\|\tau-\tau_{0}\right\|<\eta\right\\}\cap(\Gamma\times\Delta)$.
###### Remark 1.
Expressing the identification condition in Assumption NPN.4 in terms of $\tau$
implies that identification is guaranteed via the sieve spaces
$\Phi_{r_{\ell}}$, $\ell=1,\ldots,d_{\zeta}$. This approach is common in the
sieve estimation literature, see e.g. Chen (2007), p. 5589, Condition 3.1.
###### Theorem 6.1.
Under either $H_{0}$ or $H_{1}$, Assumptions R.1-R.4 (with R.3 and R.4 holding
for $t\in\mathcal{T}$ rather than $\gamma\in\Gamma$), R.8, NPN.1-NPN.4 and
$p^{-1}+\left(\min_{\ell=1,\ldots,d_{\zeta}}r_{\ell}\right)^{-1}+\left(d_{\gamma}+p+\max_{\ell=1,\ldots,d_{\zeta}}r_{\ell}\right)/n\rightarrow
0$ as $n\rightarrow\infty$,
$\left\|\left(\widehat{\tau},\hat{\sigma}^{2}\right)-\left(\tau_{0},\sigma^{2}_{0}\right)\right\|\overset{p}{\longrightarrow}0.$
###### Theorem 6.2.
Under the conditions of Theorems 4.2 and 6.1, but with $\tau$ and
$\mathcal{T}$ replacing $\gamma$ and $\Gamma$ in assumptions prefixed with R
and $p\rightarrow\infty$,
$\left(\min_{\ell=1,\ldots,d_{\zeta}}r_{\ell}\right)^{-1}+\frac{p^{2}}{n}+\frac{\sqrt{n}}{p^{\mu+1/4}}+p^{1/2}\varsigma(r)\left(\frac{d_{\gamma}+\displaystyle\max_{\ell=1,\ldots,d_{\zeta}}r_{\ell}}{\sqrt{n}}+\sqrt{\sum_{\ell=1}^{d_{\zeta}}r_{\ell}^{-2\kappa_{\ell}}}\right)\rightarrow
0,$
as $n\rightarrow\infty$, and $H_{0}$,
$\mathscr{T}_{n}-\left({\sigma_{0}^{-2}\varepsilon^{\prime}\mathscr{V}\varepsilon-p}\right)/{\sqrt{2p}}=o_{p}(1).$
###### Theorem 6.3.
Let the conditions of Theorems 4.3 and 6.2 hold, but with $\tau$ and
$\mathcal{T}$ replacing $\gamma$ and $\Gamma$ in assumptions prefixed with R.
Then (1) $\mathscr{T}_{n}\overset{d}{\rightarrow}N(0,1)$ under $H_{0}$, (2)
$\mathscr{T}_{n}$ is a consistent test statistic, (3)
$\mathscr{T}_{n}\overset{d}{\rightarrow}N\left(\varkappa,1\right)$ under local
alternatives $H_{\ell}$.
## 7 Fixed-regressor residual-based bootstrap test
The performance of nonparametric tests based on asymptotic distributions often
leaves something to be desired in finite samples. An alternative approach is
to use the bootstrap approximation. In this section, we propose a bootstrap
version of our test, focusing on the setting of Section 5. In our simulations
and empirical studies, we consider test statistics based on both
$\widehat{m}_{n}=\widehat{\sigma}^{-2}\widehat{v}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{u}/n$
and
$\widetilde{m}_{n}=\widehat{\sigma}^{-2}(\widehat{u}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{u}-\widehat{\eta}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{\eta})/n$,
where $\widehat{\eta}=S(\hat{\lambda})y-\widehat{\theta}$, i.e., the residual
from nonparametric estimation, $\hat{u}=S(\hat{\lambda})y-f(x,\hat{\alpha})$,
and $\hat{v}=\hat{\theta}-f(x,\hat{\alpha})$. Analogous to the definition of
$\mathscr{T}_{n}$, define the statistic
$\mathscr{T}_{n}^{a}={\left(n\widetilde{m}_{n}-p\right)}/{\sqrt{2p}}.$ In the
case of no spatial autoregressive term, and under the power series,
$\mathscr{T}_{n}^{a}$ and $\mathscr{T}_{n}$ are numerically identical, as was
observed by Hong and White (1995). However, in the SARSE setting a difference
arises due to the spatial structure in the response $y$. We show that
$\mathscr{T}_{n}^{a}-\mathscr{T}_{n}=o_{p}(1)$ under the null or local
alternatives in Theorem TS.1 in the online supplementary appendix.
The bootstrap versions of the test statistics $\mathscr{T}_{n}$ and
$\mathscr{T}_{n}^{a}$ are
$\displaystyle\mathscr{T}_{n}^{*}$ $\displaystyle=$
$\displaystyle\frac{n\widehat{m}_{n}^{\ast}-p}{\sqrt{2p}}=\frac{\widehat{\sigma}^{\ast-2}\widehat{{v}}^{\ast\prime}\Sigma\left(\widehat{\gamma}^{\ast}\right)^{-1}\widehat{{u}}^{\ast}-p}{\sqrt{2p}}$
$\displaystyle\mathscr{T}_{n}^{a\ast}$ $\displaystyle=$
$\displaystyle\frac{n\widetilde{m}_{n}^{\ast}-p}{\sqrt{2p}}=\frac{\widehat{\sigma}^{\ast-2}(\widehat{{u}}^{\ast\prime}\Sigma\left(\widehat{\gamma}^{\ast}\right)^{-1}\widehat{{u}}^{\ast}-\widehat{\eta}^{\ast\prime}\Sigma\left(\widehat{\gamma}^{\ast}\right)^{-1}\widehat{\eta}^{\ast})-p}{\sqrt{2p}},$
respectively, where $\widehat{{u}}^{\ast}$ is the bootstrap residual vector
under the null, $\widehat{\eta}^{*}$ is the bootstrap residual vector under
the alternative,
$\widehat{{v}}^{\ast}=\widehat{\theta}^{\ast}(x)-f(x,\widehat{\alpha}^{\ast})$,
and $\left(\widehat{\gamma}^{\ast},\widehat{\lambda}^{\ast},\widehat{\sigma}^{\ast
2},\widehat{\theta}^{\ast},\widehat{\alpha}^{\ast}\right)$ is the estimator
using the bootstrap sample. We elaborate on the bootstrap statistics using the
SARARMA($m_{1}$,$m_{2},m_{3}$) model as an example:
$y=\sum_{k=1}^{m_{1}}\lambda_{k}W_{1k}y+\theta(x)+u\text{,
}u=\sum_{l=1}^{m_{2}}\gamma_{2l}W_{2l}u+\sum_{l=1}^{m_{3}}\gamma_{3l}W_{3l}\xi+\xi.$
Following Jin and Lee (2015), we first subtract the empirical mean of the
residual vector from
$\widehat{{\xi}}=\left(\sum_{l=1}^{m_{3}}\widehat{\gamma}_{3l}W_{3l}+I_{n}\right)^{-1}\left(I_{n}-\sum_{l=1}^{m_{2}}\widehat{\gamma}_{2l}W_{2l}\right)\left(y-\sum_{k=1}^{m_{1}}\widehat{\lambda}_{k}W_{1k}y-\widehat{{\theta}}_{n}\right)$
to obtain
$\widetilde{{\xi}}=(I_{n}-\frac{1}{n}l_{n}l_{n}^{\prime})\widehat{{\xi}}$.
Next, we sample randomly with replacement $n$ times from the elements of
$\widetilde{{\xi}}$ to obtain a vector ${\xi}^{\ast}$. After this,
we generate the bootstrap sample $y^{\ast}$ by treating
$\widehat{{f}}=f(x,\widehat{\alpha})$, $\hat{\lambda}$ and $\widehat{\gamma}$
as the true parameters:
$y^{\ast}=\left(I_{n}-\sum_{k=1}^{m_{1}}\widehat{\lambda}_{k}W_{1k}\right)^{-1}\left(\widehat{{f}}+\left(I_{n}-\sum_{l=1}^{m_{2}}\widehat{\gamma}_{2l}W_{2l}\right)^{-1}\left(\sum_{l=1}^{m_{3}}\widehat{\gamma}_{3l}W_{3l}+I_{n}\right){\xi}^{\ast}\right).$
We estimate the model based on the bootstrap sample $y^{\ast}$ using QMLE to
obtain the estimator
$\widehat{{\theta}}^{\ast}=\psi^{\prime}\widehat{\beta}^{\ast},$
$\widehat{\lambda}^{\ast}$, and $\widehat{\gamma}^{\ast}$ under the
alternative hypothesis and $\widehat{\alpha}^{\ast}$ under the null hypothesis
of $\theta(x)=f(x,\alpha_{0}).$ Then,
$\widehat{{\eta}}^{\ast}=y^{\ast}-\sum_{k=1}^{m_{1}}\widehat{\lambda}_{k}^{\ast}W_{1k}y^{\ast}-\widehat{{\theta}}^{\ast}$,
$\widehat{{u}}^{\ast}=y^{\ast}-\sum_{k=1}^{m_{1}}\widehat{\lambda}_{k}^{\ast}W_{1k}y^{\ast}-f(x,\widehat{\alpha}^{\ast}).$
This procedure is repeated $B$ times to obtain the sequence
$\left\\{\mathscr{T}_{nj}^{\ast}\right\\}_{j=1}^{B}$. We reject the null when
$p^{\ast}=B^{-1}\sum_{j=1}^{B}\mathbf{1}(\mathscr{T}_{n}<\mathscr{T}_{nj}^{\ast})$
is smaller than the given level of significance. An identical procedure holds
for the test based on $\mathscr{T}_{n}^{a\ast}.$ The asymptotic validity of
the bootstrap method can be shown as in Theorem 4 of Su and Qu (2017) and
Lemma 2 in Jin and Lee (2015), and detailed analysis can be found in the
supplementary appendix, see proof of Theorem TS.1.
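The resampling loop itself is generic. Below is a schematic sketch in which `regenerate` and `statistic` are hypothetical placeholders for the model-specific steps described above (rebuilding $y^{\ast}$ from the fixed fitted parameters, and recomputing $\mathscr{T}_{n}$ on the bootstrap sample):

```python
import numpy as np

def residual_bootstrap_pvalue(T_n, xi_hat, regenerate, statistic, B=199, seed=0):
    """Fixed-regressor residual bootstrap p-value, following the steps in
    the text: demean xi_hat, resample with replacement, rebuild the bootstrap
    sample via `regenerate` (holding fitted parameters fixed), recompute the
    statistic, and return p* = B^{-1} sum_j 1(T_n < T*_j)."""
    rng = np.random.default_rng(seed)
    xi_tilde = xi_hat - xi_hat.mean()        # (I_n - l_n l_n'/n) xi_hat
    T_star = np.empty(B)
    for j in range(B):
        xi_star = rng.choice(xi_tilde, size=len(xi_tilde), replace=True)
        T_star[j] = statistic(regenerate(xi_star))
    return np.mean(T_n < T_star)
```

The null is rejected when the returned $p^{\ast}$ falls below the nominal level. Separating the resampling shell from `regenerate`/`statistic` lets the same loop serve both $\mathscr{T}_{n}^{\ast}$ and $\mathscr{T}_{n}^{a\ast}$.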
## 8 Finite sample performance
### 8.1 Parametric error spatial structure
Taking $n=60,100,200$, we choose two specifications to generate $y$ from the
SARARMA($m_{1}$,$m_{2},m_{3}$) models:
$\displaystyle\text{SARARMA(}0\text{,1,0): }$ $\displaystyle
y=\theta(x)+u,\text{ }u=\gamma_{2}W_{2}u+\xi$
$\displaystyle\text{SARARMA(}1\text{,0,1): }$ $\displaystyle
y=\lambda_{1}W_{1}y+\theta(x)+u,\text{ }u=\gamma_{3}W_{3}\xi+\xi,$
where $\xi$ is $N(0,I_{n})$. The DGP of $\theta(x)$ is
$\theta(x_{i})=x_{i}^{\prime}\alpha+cp^{1/4}n^{-1/2}\sin(x_{i}^{\prime}\alpha),$
where $x_{i}^{\prime}\alpha=1+x_{1i}+x_{2i}$, with $x_{1i}=(z_{i}+z_{1i})/2$,
$x_{2i}=(z_{i}+z_{2i})/2$. We choose two settings: compactly supported
regressors where $z_{i},z_{1i}$ and $z_{2i}$ are i.i.d., $U[0,2\pi]$ and
unboundedly supported regressors where $z_{i},z_{1i}$ and $z_{2i}$ are i.i.d.
$N(0,1).$ We report the compact support setting in the main text, while the
results for unbounded support are reported in the online supplement.
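The null and local-alternative designs above can be generated directly ($\alpha=(1,1,1)^{\prime}$ as in the text; the seed is arbitrary):

```python
import numpy as np

def dgp_theta(x1, x2, c, p, n):
    """Regression function in the Monte Carlo design: linear under the null
    (c = 0), with a local sin deviation of order p^{1/4} n^{-1/2} otherwise."""
    index = 1.0 + x1 + x2                    # x_i' alpha with alpha = (1,1,1)'
    return index + c * p**0.25 / n**0.5 * np.sin(index)

rng = np.random.default_rng(2)
n, p, c = 100, 10, 3
z, z1, z2 = (rng.uniform(0, 2 * np.pi, n) for _ in range(3))
x1, x2 = (z + z1) / 2, (z + z2) / 2          # compactly supported regressors
theta = dgp_theta(x1, x2, c, p, n)
```

The common component $z_{i}$ in $x_{1i}$ and $x_{2i}$ induces correlation between the two regressors; switching to $N(0,1)$ draws gives the unbounded-support design.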
We use three series bases for our experiments: power (polynomial) series of
third and fourth order ($p=10,p=15$), trigonometric series
$trig_{1}=\left(1,\sin\left(x_{1}\right),\sin\left(x_{1}/2\right),\sin\left(x_{2}\right),\sin\left(x_{2}/2\right),\cos\left(x_{1}\right),\cos\left(x_{1}/2\right),\cos\left(x_{2}\right),\cos\left(x_{2}/2\right)\right)^{\prime}$
and
$trig_{2}=\left(trig_{1}^{\prime},\sin\left(x_{1}^{2}\right),\cos\left(x_{1}^{2}\right),\sin\left(x_{2}^{2}\right),\cos\left(x_{2}^{2}\right)\right)^{\prime}$,
and the B-spline bases of fourth and seventh order ($p=9,p=14$). We also set
$\gamma_{2}=0.3$, $\lambda_{1}=0.3$ and $\gamma_{3}=0.4$; the values $c=0,3,6$
index the null hypothesis ($c=0$) and the local alternatives. The spatial weight
matrices are generated using LeSage’s code make_neighborsw from
http://www.spatial-econometrics.com/, where the row-normalized sparse matrices
are generated by choosing a specific number of the closest locations from
randomly generated coordinates and we set the number of neighbors to be
$n/20$. We employ 100 bootstrap replications in each of 500 Monte Carlo
replications except for the SARARMA(1,0,1) design with $n=200$, where we set
50 bootstrap replications in view of the computation time. We report the
rejection frequencies of tests based on bootstrap critical values in the main
text, while tests based on asymptotic critical values are reported in the
online supplement.
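For concreteness, the $trig_{1}$ basis above can be evaluated directly (`trig1_basis` is our own, illustrative name):

```python
import numpy as np

def trig1_basis(x1, x2):
    """Evaluate the 9-term trigonometric basis trig_1 at (x1, x2), one row per observation."""
    ones = np.ones_like(x1)
    return np.column_stack([ones,
                            np.sin(x1), np.sin(x1 / 2), np.sin(x2), np.sin(x2 / 2),
                            np.cos(x1), np.cos(x1 / 2), np.cos(x2), np.cos(x2 / 2)])
```

Appending $\sin(x_{1}^{2}),\cos(x_{1}^{2}),\sin(x_{2}^{2}),\cos(x_{2}^{2})$ as extra columns gives $trig_{2}$.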
Tables 1-4 report the empirical rejection frequencies using the bootstrap test
statistics $\mathscr{T}_{n}^{\ast}$ (Tables 1, 3) and
$\mathscr{T}_{n}^{a\ast}$ (Tables 2, 4), when nominal levels are given by 1%,
5% and 10%. To see how the choice of $p$ and the basis functions affect small
sample outcomes, we report two sets of results for each basis function family:
the first row for each value of $c$ is from the smaller $p$ ($p=9$ or $10$),
while the second row is from the larger $p$ ($p=14$ or $15$). We summarize
some important findings. First, we see that for most DGPs, our bootstrap test
is closer to the nominal level than the asymptotic test (reported in the
online supplement) although the sizes of both types of tests improve generally
as the sample size increases. Second, both bootstrap and asymptotic tests are
powerful in detecting any deviations from linearity in the local alternatives.
The patterns are similar across all cases: the bootstrap generally affords
better size control, albeit not always.
All three types of bases give qualitatively similar results, but we note that
$\mathscr{T}_{n}^{\ast}=\mathscr{T}_{n}^{a\ast}$ when using polynomial series
under the SARARMA(0,1,0) model, as observed in Hong and White (1995). When
using trigonometric and B-spline series, tests based on these two statistics
give slightly different rejection rates. However, under the SARARMA(1,0,1)
model, all series give quantitatively different results, as illustrated in
Tables 3 and 4. When using B-spline bases, $p=14$ does not perform well
compared to $p=9$. In the other cases, both choices of $p$ work well.
### 8.2 Nonparametric error spatial structure
Now we examine finite sample performance in the setting of Section 6. The
three DGPs of $\theta(x)$ (corresponding to $c=0,3,6$) are the same as in the
parametric setting, but we generate the $n\times n$ matrix $W^{*}$ as
$w^{*}_{ij}=\Phi(-d_{ij})I(c_{ij}<0.05)$ if $i\neq j$, and $w^{*}_{ii}=0$,
where $\Phi(\cdot)$ is the standard normal cdf, $d_{ij}$ are i.i.d. $U[-3,3]$,
and $c_{ij}$ are i.i.d. $U[0,1]$. From this construction, we ensure that $W^{*}$ is
sparse with no more than $5\%$ elements being nonzero. Then, $y$ is generated
from $y=\theta(x)+u,\text{ }u=Wu+\xi,$ where $\xi\sim N(0,I_{n})$ and
$W=W^{*}/{1.2\overline{\varphi}\left(W^{*}\right)}$, ensuring the existence of
$(I-W)^{-1}$. In estimation, we know the distance $d_{ij}$ and the indicator
$I(c_{ij}<0.05)$, but we do not know the functional form of $w_{ij}$, so we
approximate elements in $W$ by
$\widehat{w}_{ij}=\sum_{l=0}^{r}a_{l}d_{ij}^{l}I(c_{ij}<0.05)\text{ if }i\neq
j\text{; }\widehat{w}_{ii}=0.$
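The construction of $W^{*}$ and its rescaling can be sketched as follows, interpreting $\overline{\varphi}(W^{*})$ as the spectral radius (an assumption on our part; `make_sparse_weights` is a hypothetical helper):

```python
import numpy as np
from math import erf, sqrt

def std_normal_cdf(x):
    """Phi(x) via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def make_sparse_weights(n, rng=None):
    """w*_ij = Phi(-d_ij) 1(c_ij < 0.05) off the diagonal, then scale by 1.2x the spectral radius."""
    rng = np.random.default_rng(rng)
    d = rng.uniform(-3.0, 3.0, size=(n, n))
    c = rng.uniform(0.0, 1.0, size=(n, n))
    Wstar = np.vectorize(std_normal_cdf)(-d) * (c < 0.05)
    np.fill_diagonal(Wstar, 0.0)
    rho = np.max(np.abs(np.linalg.eigvals(Wstar)))  # spectral radius
    return Wstar / (1.2 * rho) if rho > 0 else Wstar
```

The rescaling keeps the spectral radius below one, so $(I-W)^{-1}$ exists.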
Table 5 reports the rejection rates using 500 Monte Carlo simulations at the 5%
asymptotic level 1.645 using polynomial bases with $r=2,3,4,5$ and
$p=10,15,20$. We take $n=150,300,500,600,700$, larger sample sizes than
earlier because two nonparametric functions must be estimated in this spatial
setting. The two largest bandwidth choices ($r=5,p=20$) are employed only for
the largest sample size, $n=700$. We observe a clear pattern of rejection rates
approaching the theoretical level as the sample size increases. Power improves
as $c$ increases for all designs and is non-trivial in all cases, even for $c=3$.
Sizes are acceptable for $n=500$, particularly when $p=15$. Size performance
improves further at $n=600$, indicating asymptotic stability. Note that with
two diverging bandwidths ($p$ and $r$), we expect sizes to improve in a
diagonal pattern going from top left corner to bottom right corner in Table 5.
This is indeed the case. For $n=700$, we observe that the pairs
$(r,p)=(5,15),(5,20)$ deliver acceptable sizes.
## 9 Empirical applications
In this section, we illustrate the specification test presented in previous
sections using several empirical examples.
### 9.1 Conflict alliances
This example is based on a study of how a network of military alliances and
enmities affects the intensity of a conflict, conducted by König et al.
(2017). They stress that understanding the role of informal networks of
military alliances and enmities is important not only for predicting outcomes,
but also for designing and implementing policies to contain or put an end to
violence. König et al. (2017) obtain a closed-form characterization of the
Nash equilibrium and perform an empirical analysis using data on the Second
Congo War, a conflict that involves many groups in a complex network of
informal alliances and rivalries.
To study the fighting effort of each group the authors use a panel data model
with individual fixed effects, where key regressors include total fighting
effort of allies and enemies. They further correct the potential spatial
correlation in the error term by using a spatial heteroskedasticity and
autocorrelation robust standard error. We use their data and the main
structure of the specification and build a cross-sectional SAR(2) model with
two weight matrices, $W^{A}$ ($W^{A}_{ij}=1$ if group $i$ and $j$ are allies,
and $W^{A}_{ij}=0$ otherwise) and $W^{E}$ ($W^{E}_{ij}=1$ if group $i$ and $j$
are enemies, and $W^{E}_{ij}=0$ otherwise):
$y=\lambda_{1}W^{A}y+\lambda_{2}W^{E}y+\mathbf{1}_{n}\beta_{0}+X\beta+u,$
where $y$ is a vector of fighting efforts of each group and $X$ includes the
current rainfall, rainfall from the last year, and their squares. (Following
the analysis in the original paper, we do not row normalize, because the
economic content of the weight matrices is defined by the total fights of
allies or enemies.) To consider the spatial correlation in the error term, we
consider both the Error SARMA(1,0) and Error SARMA(0,1) structures. For these,
we employ a spatial weight matrix $W^{d}$, based on the inverse distance
between group locations and set to be 0 after 150 km, following König et al.
(2017). The idea is that geographical spatial correlation dies out as groups
become further apart. We also report results using a nonparametric estimator
of the spatial weights, as described in Section 6 and studied in simulations
in Section 8. For the nonparametric estimator we take $r=2$.
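The binary alliance and enmity matrices can be built from lists of group pairs; a trivial sketch (`pair_weights` is our own helper name, and, as in the paper, no row normalization is applied):

```python
import numpy as np

def pair_weights(n, pairs):
    """Binary weight matrix: W_ij = W_ji = 1 if (i, j) is a listed pair (allies or enemies)."""
    W = np.zeros((n, n))
    for i, j in pairs:
        W[i, j] = W[j, i] = 1.0
    return W
```

For example, `pair_weights(79, ally_pairs)` would give $W^{A}$ and `pair_weights(79, enemy_pairs)` would give $W^{E}$, where the pair lists are assumed to come from the original dataset.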
In the original dataset, there are 80 groups, but groups 62 and 63 have the
same variables and the same locations, so we drop one group and end up with a
sample of 79 groups. We use data from 1998 as an example and further use the
pooled data from all years as a robustness check. $H_{0}$ stands for
restricted model where the linear functional form of the regression is
imposed, while $H_{1}$ stands for the unrestricted model where we use basis
functions comprising power series with $p=10$. In all our specifications,
the test statistics are negative, so we cannot reject the null hypothesis that
the model is correctly specified. As Table 6 indicates, this failure to reject
the null persists when we use pooled data from 13 years, yielding 1027
observations. Thus we conclude that a linear specification is not
inappropriate for this setting. One possible reason is that the original
regression, though linear, has already included the squared terms of the
rainfall as regressors. This finding is robust to using the bootstrap tests of
Section 7, which generally yield smaller p-values but unchanged conclusions.
### 9.2 Innovation spillovers
This example is based on the study of the impact of R&D on growth from Bloom
et al. (2013). They develop a general framework incorporating two types of
spillovers: a positive effect from technology (knowledge) spillovers and a
negative ‘business stealing’ effect from product market rivals. They implement
this model using panel data on U.S. firms.
We consider the Productivity Equation in Bloom et al. (2013):
$\ln y=\varphi_{1}\ln(R\&D)+\varphi_{2}\ln(Sptec)+\varphi_{3}\ln(Spsic)+\varphi_{4}X+error,$ (9.1)
where $y$ is a vector of sales, $R\&D$ is a vector of R&D stocks, and
regressors in $X$ include the log of capital ($Capital$), log of labor
($Labor$), $R\&D$, a dummy for missing values in $R\&D$, a price index, and
two spillover terms constructed as the log of $W_{SIC}R\&D$ ($Spsic$) and the
log of $W_{TEC}R\&D$ ($Sptec$), where $W_{SIC}$ measures the product market
proximity and $W_{TEC}$ measures the technological proximity. Specifically,
they define
$W_{SIC,ij}={S_{i}S_{j}^{\prime}}/{(S_{i}S_{i}^{\prime})^{1/2}(S_{j}S_{j}^{\prime})^{1/2}},W_{TEC,ij}={T_{i}T_{j}^{\prime}}/{(T_{i}T_{i}^{\prime})^{1/2}(T_{j}T_{j}^{\prime})^{1/2}},$
where $S_{i}=(S_{i1},S_{i2},\ldots,S_{i597})^{\prime}$, with $S_{ik}$ being
the share of patents of firm $i$ in the four digit industry $k$ and
$T_{i}=(T_{i1},T_{i2},\ldots,T_{i426})^{\prime}$, with $T_{i\tau}$ being the
share of patents of firm $i$ in technology class $\tau$. Focusing on a cross-
sectional analysis, we use observations from the year 2000 and obtain a sample
size of 577. Both weight matrices are row normalized.
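The two proximity matrices are cosine similarities of the patent-share vectors stacked row-wise in a matrix `S`; a sketch, assuming the diagonal is zeroed before row normalization (a convention for spatial weights, not stated explicitly in the source):

```python
import numpy as np

def proximity_weights(S, row_normalize=True):
    """Cosine-similarity weights w_ij = S_i.S_j / (|S_i||S_j|), zero diagonal."""
    U = S / np.linalg.norm(S, axis=1, keepdims=True)   # unit-norm share vectors
    W = U @ U.T
    np.fill_diagonal(W, 0.0)
    if row_normalize:
        rs = W.sum(axis=1, keepdims=True)
        W = np.divide(W, rs, out=np.zeros_like(W), where=rs > 0)
    return W
```

Feeding in the industry shares $S_{i}$ yields $W_{SIC}$, and the technology-class shares $T_{i}$ yield $W_{TEC}$.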
The column FE of Table 7 is from Table 5 of Bloom et al. (2013) based on their
panel fixed effects estimation and we use it as a baseline for comparison.
This table reports results for SARARMA(0,1,0) models using $W_{SIC}$ and
$W_{TEC}$ separately. We use both $W_{SIC}$ and $W_{TEC}$ simultaneously in
SARARMA(0,2,0) and Error MESS(2) models, reported in Table 8.
In all of these specifications, the test statistics are larger than 1.645, so
we reject the null hypothesis of the linear specification. This rejection also
persists with the bootstrap tests, although their p-values are larger than the
asymptotic ones. However, we can say even more, as our estimation also sheds
light on spatial effects in the disturbances in (9.1). As before $H_{0}$
imposes linear functional form of the regressors, while $H_{1}$ uses the
nonparametric series estimate employing power series with $p=10$. Regardless
of the specification of the regression function, the disturbances suggest a
strong spatial effect as the coefficients on $W_{TEC}$ and $W_{SIC}$ are large
in magnitude.
### 9.3 Economic growth
The final example is based on the study of economic growth rate in Ertur and
Koch (2007). Knowledge accumulated in one area might depend on knowledge
accumulated in other areas, especially in its neighborhoods, implying the
possible existence of spatial spillover effects. These questions are of
interest to economists as well as regional scientists. For example,
Autant-Bernard and LeSage (2011) examine spatial spillovers associated with
research expenditures for French regions, while Ho et al. (2013) examine the
international spillover of economic growth through bilateral trade amongst
OECD countries, Cuaresma and Feldkircher (2013) study spatially correlated
growth spillovers in the income convergence process of Europe, and Evans and
Kim (2014) study the spatial dynamics of growth and convergence in Korean
regional incomes.
In this section, we want to test the linear SAR model specification in Ertur
and Koch (2007). Their dataset covers a sample of 91 countries over the period
1960-1995, originally from Heston et al. (2002), obtained from the Penn World
Tables (PWT version 6.1). The variables in use include per worker income in
1960 ($y60$) and 1995 ($y95$), average rate of growth between 1960 and 1995
$(gy)$, average investment rate of this period ($s$), and average rate of
growth of working-age population ($n_{p}$).
Ertur and Koch (2007) consider the model
$y=\lambda Wy+X\beta+WX\theta+\varepsilon,$ (9.2)
where the dependent variable is log real income per worker $\ln(y95)$,
elements of the explanatory variable $X=(x_{1}^{\prime},x_{2}^{\prime})$
include log investment rate $\ln(s)=x_{1}$ and log physical capital effective
rate of depreciation $\ln(n_{p}+0.05)=x_{2}$, with corresponding subscripted
coefficients $\beta_{1},\beta_{2},\theta_{1},\theta_{2}$. A restricted
regression based on the joint constraints $\beta_{1}=-\beta_{2}$ and
$\theta_{1}=-\theta_{2}$ (these constraints are implied by economic theory) is
also considered in Ertur and Koch (2007). The model (9.2) has regressors
$(X,WX)$ and iid errors, so the test derived in Section 5 can be directly
applied here. Denoting by $d_{ij}$ the great-circle distance between the
capital cities of countries $i$ and $j$, one construction of $W$ takes
$w_{ij}=d_{ij}^{-2}$ while the other takes $w_{ij}=e^{-2d_{ij}}$, following
Ertur and Koch (2007).
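The two distance-based weight constructions can be sketched as follows, using the haversine formula for the great-circle distance (the paper does not state which great-circle formula it uses, so this is an assumption; both helper names are our own):

```python
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2, radius=6371.0):
    """Haversine great-circle distance (km) between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon2 - lon1)
    a = np.sin((p2 - p1) / 2.0)**2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2.0)**2
    return 2.0 * radius * np.arcsin(np.sqrt(a))

def distance_weights(d, kind="inverse_sq"):
    """w_ij = d_ij^{-2} or exp(-2 d_ij) from a pairwise distance matrix, zero diagonal."""
    with np.errstate(divide="ignore"):
        W = d**-2.0 if kind == "inverse_sq" else np.exp(-2.0 * d)
    np.fill_diagonal(W, 0.0)
    return W
```

Applied to the matrix of capital-to-capital distances, `kind="inverse_sq"` gives $w_{ij}=d_{ij}^{-2}$ and `kind="exp"` gives $w_{ij}=e^{-2d_{ij}}$.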
Table 9 presents the estimation and testing results based on using linear and
quadratic power series basis functions with $p=10$ and a sample size of
$n=91$. We impose additive structure in our estimation to at least alleviate
the curse of dimensionality, always a concern in nonparametric estimation. We
also use only linear and quadratic basis functions to reduce the number of
terms for series estimation.
We cannot reject linearity of the regression function for the unrestricted
model. On the other hand, linearity is rejected for the restricted model,
which is the preferred specification of Ertur and Koch (2007), with
$w_{ij}=e^{-2d_{ij}}$. Thus, not only can we conclude that the specification
of the model is under suspicion, but we can also infer that this is due to the
constraints from economic theory. The findings are supported by the bootstrap
tests of Section 7.
Section 7.
## 10 Conclusion
This paper justifies a specification test for the regression function in a
model where data are spatially dependent. The test is based on a nonparametric
series approximation and is consistent. The paper also allows for some
robustness in error spatial dependence by permitting this to be a
nonparametric function of an underlying economic distance. On the other hand,
our Section 5 imposes correct specification of the spatial weight matrices
$W_{j}$ in the SAR context, while Sun (2020) allows these to be nonparametric
functions as well. Thus our work acts as a complement to existing results in
the literature and future work might combine both aspects.
$\mathscr{T}_{n}^{\ast}$ | SARARMA(0,1,0)
---|---
| | PS | | | | Trig | | | | B-s |
$n=60$ | 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.1
${\small c=0}$ | ${\small 0.008}$ | ${\small 0.032}$ | ${\small 0.084}$ | | ${\small 0.004}$ | ${\small 0.04}$ | ${\small 0.092}$ | | ${\small 0.006}$ | ${\small 0.048}$ | ${\small 0.104}$
| ${\small 0.004}$ | ${\small 0.038}$ | ${\small 0.096}$ | | ${\small 0.004}$ | ${\small 0.038}$ | ${\small 0.094}$ | | ${\small 0.006}$ | ${\small 0.034}$ | ${\small 0.098}$
${\small c=3}$ | ${\small 0.036}$ | ${\small 0.154}$ | ${\small 0.296}$ | | ${\small 0.092}$ | ${\small 0.276}$ | ${\small 0.39}$ | | ${\small 0.098}$ | ${\small 0.292}$ | ${\small 0.470}$
| ${\small 0.154}$ | ${\small 0.414}$ | ${\small 0.62}$ | | ${\small 0.056}$ | ${\small 0.22}$ | ${\small 0.374}$ | | ${\small 0.036}$ | ${\small 0.150}$ | ${\small 0.292}$
${\small c=6}$ | ${\small 0.22}$ | ${\small 0.544}$ | ${\small 0.748}$ | | ${\small 0.454}$ | ${\small 0.794}$ | ${\small 0.908}$ | | ${\small 0.432}$ | ${\small 0.814}$ | ${\small 0.938}$
| ${\small 0.844}$ | ${\small 0.992}$ | ${\small 1}$ | | ${\small 0.314}$ | ${\small 0.714}$ | ${\small 0.872}$ | | ${\small 0.174}$ | ${\small 0.542}$ | ${\small 0.732}$
${\small n=100}$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.006}$ | ${\small 0.044}$ | ${\small 0.098}$ | | ${\small 0.002}$ | ${\small 0.04}$ | ${\small 0.09}$ | | ${\small 0.008}$ | ${\small 0.038}$ | ${\small 0.110}$
| ${\small 0.012}$ | ${\small 0.046}$ | ${\small 0.096}$ | | ${\small 0.006}$ | ${\small 0.036}$ | ${\small 0.102}$ | | ${\small 0.01}$ | ${\small 0.056}$ | ${\small 0.108}$
${\small c=3}$ | ${\small 0.294}$ | ${\small 0.578}$ | ${\small 0.72}$ | | ${\small 0.214}$ | ${\small 0.508}$ | ${\small 0.626}$ | | ${\small 0.272}$ | ${\small 0.572}$ | ${\small 0.712}$
| ${\small 0.37}$ | ${\small 0.662}$ | ${\small 0.824}$ | | ${\small 0.194}$ | ${\small 0.45}$ | ${\small 0.632}$ | | ${\small 0.188}$ | ${\small 0.46}$ | ${\small 0.63}$
${\small c=6}$ | ${\small 0.95}$ | ${\small 0.99}$ | ${\small 0.996}$ | | ${\small 0.902}$ | ${\small 0.99}$ | ${\small 0.998}$ | | ${\small 0.922}$ | ${\small 0.994}$ | ${\small 1}$
| ${\small 0.992}$ | ${\small 0.998}$ | ${\small 1}$ | | ${\small 0.856}$ | ${\small 0.988}$ | ${\small 1}$ | | ${\small 0.852}$ | ${\small 0.98}$ | ${\small 0.998}$
$\small n=200$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.006}$ | ${\small 0.038}$ | ${\small 0.104}$ | | ${\small 0.008}$ | ${\small 0.042}$ | ${\small 0.112}$ | | ${\small 0.024}$ | ${\small 0.074}$ | ${\small 0.132}$
| ${\small 0.006}$ | ${\small 0.048}$ | ${\small 0.088}$ | | ${\small 0.016}$ | ${\small 0.038}$ | ${\small 0.082}$ | | ${\small 0.022}$ | ${\small 0.074}$ | ${\small 0.144}$
${\small c=3}$ | ${\small 0.178}$ | ${\small 0.402}$ | ${\small 0.55}$ | | ${\small 0.162}$ | ${\small 0.374}$ | ${\small 0.532}$ | | ${\small 0.314}$ | ${\small 0.516}$ | ${\small 0.654}$
| ${\small 0.282}$ | ${\small 0.56}$ | ${\small 0.694}$ | | ${\small 0.136}$ | ${\small 0.346}$ | ${\small 0.468}$ | | ${\small 0.19}$ | ${\small 0.37}$ | ${\small 0.542}$
${\small c=6}$ | ${\small 0.846}$ | ${\small 0.968}$ | ${\small 0.984}$ | | ${\small 0.796}$ | ${\small 0.95}$ | ${\small 0.98}$ | | ${\small 0.89}$ | ${\small 0.976}$ | ${\small 0.986}$
| ${\small 0.982}$ | ${\small 0.998}$ | ${\small 1}$ | | ${\small 0.776}$ | ${\small 0.934}$ | ${\small 0.974}$ | | ${\small 0.852}$ | ${\small 0.946}$ | ${\small 0.982}$
Table 1: Rejection probabilities of SARARMA(0,1,0) using bootstrap test $\mathscr{T}_{n}^{\ast}$ at 1, 5, 10% levels, power series (PS), trigonometric (Trig) and B-spline (B-s) bases.
$\mathscr{T}_{n}^{a\ast}$ | SARARMA(0,1,0)
---|---
| | PS | | | | Trig | | | | B-s |
$n=60$ | 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.1
${\small c=0}$ | ${\small 0.008}$ | ${\small 0.032}$ | ${\small 0.084}$ | | ${\small 0.004}$ | ${\small 0.04}$ | ${\small 0.092}$ | | ${\small 0.01}$ | ${\small 0.07}$ | ${\small 0.132}$
| ${\small 0.004}$ | ${\small 0.038}$ | ${\small 0.096}$ | | ${\small 0.004}$ | ${\small 0.038}$ | ${\small 0.094}$ | | ${\small 0.004}$ | ${\small 0.038}$ | ${\small 0.096}$
${\small c=3}$ | ${\small 0.036}$ | ${\small 0.154}$ | ${\small 0.296}$ | | ${\small 0.09}$ | ${\small 0.274}$ | ${\small 0.384}$ | | ${\small 0.164}$ | ${\small 0.376}$ | ${\small 0.558}$
| ${\small 0.154}$ | ${\small 0.414}$ | ${\small 0.62}$ | | ${\small 0.056}$ | ${\small 0.22}$ | ${\small 0.376}$ | | ${\small 0.036}$ | ${\small 0.152}$ | ${\small 0.288}$
${\small c=6}$ | ${\small 0.22}$ | ${\small 0.544}$ | ${\small 0.748}$ | | ${\small 0.444}$ | ${\small 0.794}$ | ${\small 0.906}$ | | ${\small 0.56}$ | ${\small 0.892}$ | ${\small 0.956}$
| ${\small 0.844}$ | ${\small 0.992}$ | ${\small 1}$ | | ${\small 0.312}$ | ${\small 0.714}$ | ${\small 0.87}$ | | ${\small 0.174}$ | ${\small 0.532}$ | ${\small 0.732}$
${\small n=100}$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.006}$ | ${\small 0.044}$ | ${\small 0.098}$ | | ${\small 0.004}$ | ${\small 0.038}$ | ${\small 0.092}$ | | ${\small 0.012}$ | ${\small 0.048}$ | ${\small 0.112}$
| ${\small 0.012}$ | ${\small 0.046}$ | ${\small 0.096}$ | | ${\small 0.006}$ | ${\small 0.036}$ | ${\small 0.106}$ | | ${\small 0.01}$ | ${\small 0.056}$ | ${\small 0.106}$
${\small c=3}$ | ${\small 0.294}$ | ${\small 0.578}$ | ${\small 0.72}$ | | ${\small 0.214}$ | ${\small 0.504}$ | ${\small 0.63}$ | | ${\small 0.28}$ | ${\small 0.564}$ | ${\small 0.72}$
| ${\small 0.37}$ | ${\small 0.662}$ | ${\small 0.824}$ | | ${\small 0.194}$ | ${\small 0.45}$ | ${\small 0.632}$ | | ${\small 0.196}$ | ${\small 0.466}$ | ${\small 0.64}$
${\small c=6}$ | ${\small 0.95}$ | ${\small 0.99}$ | ${\small 0.996}$ | | ${\small 0.900}$ | ${\small 0.99}$ | ${\small 0.998}$ | | ${\small 0.932}$ | ${\small 0.992}$ | ${\small 1}$
| ${\small 0.992}$ | ${\small 0.998}$ | ${\small 1}$ | | ${\small 0.856}$ | ${\small 0.988}$ | ${\small 1}$ | | ${\small 0.86}$ | ${\small 0.984}$ | ${\small 0.998}$
$\small n=200$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.006}$ | ${\small 0.038}$ | ${\small 0.104}$ | | ${\small 0.012}$ | ${\small 0.046}$ | ${\small 0.114}$ | | ${\small 0.014}$ | ${\small 0.048}$ | ${\small 0.132}$
| ${\small 0.006}$ | ${\small 0.048}$ | ${\small 0.088}$ | | ${\small 0.016}$ | ${\small 0.042}$ | ${\small 0.08}$ | | ${\small 0.022}$ | ${\small 0.07}$ | ${\small 0.14}$
${\small c=3}$ | ${\small 0.178}$ | ${\small 0.402}$ | ${\small 0.55}$ | | ${\small 0.162}$ | ${\small 0.38}$ | ${\small 0.524}$ | | ${\small 0.282}$ | ${\small 0.476}$ | ${\small 0.608}$
| ${\small 0.282}$ | ${\small 0.56}$ | ${\small 0.694}$ | | ${\small 0.134}$ | ${\small 0.35}$ | ${\small 0.466}$ | | ${\small 0.198}$ | ${\small 0.37}$ | ${\small 0.514}$
${\small c=6}$ | ${\small 0.846}$ | ${\small 0.968}$ | ${\small 0.984}$ | | ${\small 0.802}$ | ${\small 0.952}$ | ${\small 0.978}$ | | ${\small 0.848}$ | ${\small 0.95}$ | ${\small 0.982}$
| ${\small 0.982}$ | ${\small 0.998}$ | ${\small 1}$ | | ${\small 0.774}$ | ${\small 0.934}$ | ${\small 0.972}$ | | ${\small 0.84}$ | ${\small 0.932}$ | ${\small 0.97}$
Table 2: Rejection probabilities of SARARMA(0,1,0) using bootstrap test $\mathscr{T}_{n}^{a\ast}$ at 1, 5, 10% levels, power series (PS), trigonometric (Trig) and B-spline (B-s) bases.
$\mathscr{T}_{n}^{\ast}$ | SARARMA(1,0,1)
---|---
| | PS | | | | Trig | | | | B-s |
$n=60$ | 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.1
${\small c=0}$ | ${\small 0.006}$ | ${\small 0.054}$ | ${\small 0.08}$ | | ${\small 0.012}$ | ${\small 0.062}$ | ${\small 0.106}$ | | ${\small 0.016}$ | ${\small 0.044}$ | ${\small 0.086}$
| ${\small 0.016}$ | ${\small 0.062}$ | ${\small 0.118}$ | | ${\small 0.026}$ | ${\small 0.09}$ | ${\small 0.138}$ | | ${\small 0.016}$ | ${\small 0.048}$ | ${\small 0.088}$
${\small c=3}$ | ${\small 0.08}$ | ${\small 0.264}$ | ${\small 0.402}$ | | ${\small 0.082}$ | ${\small 0.256}$ | ${\small 0.406}$ | | ${\small 0.08}$ | ${\small 0.288}$ | ${\small 0.475}$
| ${\small 0.132}$ | ${\small 0.41}$ | ${\small 0.578}$ | | ${\small 0.096}$ | ${\small 0.222}$ | ${\small 0.354}$ | | ${\small 0.048}$ | ${\small 0.192}$ | ${\small 0.282}$
${\small c=6}$ | ${\small 0.266}$ | ${\small 0.588}$ | ${\small 0.748}$ | | ${\small 0.266}$ | ${\small 0.616}$ | ${\small 0.782}$ | | ${\small 0.218}$ | ${\small 0.604}$ | ${\small 0.772}$
| ${\small 0.444}$ | ${\small 0.804}$ | ${\small 0.894}$ | | ${\small 0.204}$ | ${\small 0.474}$ | ${\small 0.658}$ | | ${\small 0.198}$ | ${\small 0.496}$ | ${\small 0.612}$
${\small n=100}$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.006}$ | ${\small 0.054}$ | ${\small 0.116}$ | | ${\small 0.012}$ | ${\small 0.046}$ | ${\small 0.114}$ | | ${\small 0.014}$ | ${\small 0.042}$ | ${\small 0.09}$
| ${\small 0.02}$ | ${\small 0.056}$ | ${\small 0.112}$ | | ${\small 0.012}$ | ${\small 0.044}$ | ${\small 0.088}$ | | ${\small 0.034}$ | ${\small 0.058}$ | ${\small 0.118}$
${\small c=3}$ | ${\small 0.134}$ | ${\small 0.366}$ | ${\small 0.496}$ | | ${\small 0.132}$ | ${\small 0.346}$ | ${\small 0.514}$ | | ${\small 0.162}$ | ${\small 0.46}$ | ${\small 0.59}$
| ${\small 0.222}$ | ${\small 0.556}$ | ${\small 0.732}$ | | ${\small 0.242}$ | ${\small 0.542}$ | ${\small 0.698}$ | | ${\small 0.08}$ | ${\small 0.234}$ | ${\small 0.372}$
${\small c=6}$ | ${\small 0.566}$ | ${\small 0.832}$ | ${\small 0.916}$ | | ${\small 0.59}$ | ${\small 0.888}$ | ${\small 0.96}$ | | ${\small 0.548}$ | ${\small 0.898}$ | ${\small 0.952}$
| ${\small 0.732}$ | ${\small 0.964}$ | ${\small 0.986}$ | | ${\small 0.476}$ | ${\small 0.846}$ | ${\small 0.918}$ | | ${\small 0.432}$ | ${\small 0.796}$ | ${\small 0.874}$
${\small n=200}$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.04}$ | ${\small 0.086}$ | ${\small 0.11}$ | | ${\small 0.026}$ | ${\small 0.076}$ | ${\small 0.108}$ | | ${\small 0.02}$ | ${\small 0.06}$ | ${\small 0.09}$
| ${\small 0.03}$ | ${\small 0.078}$ | ${\small 0.114}$ | | ${\small 0.032}$ | ${\small 0.074}$ | ${\small 0.118}$ | | ${\small 0.038}$ | ${\small 0.086}$ | ${\small 0.112}$
${\small c=3}$ | ${\small 0.186}$ | ${\small 0.4}$ | ${\small 0.524}$ | | ${\small 0.242}$ | ${\small 0.432}$ | ${\small 0.526}$ | | ${\small 0.29}$ | ${\small 0.516}$ | ${\small 0.626}$
| ${\small 0.402}$ | ${\small 0.636}$ | ${\small 0.754}$ | | ${\small 0.244}$ | ${\small 0.42}$ | ${\small 0.542}$ | | ${\small 0.184}$ | ${\small 0.36}$ | ${\small 0.458}$
${\small c=6}$ | ${\small 0.718}$ | ${\small 0.904}$ | ${\small 0.962}$ | | ${\small 0.78}$ | ${\small 0.942}$ | ${\small 0.982}$ | | ${\small 0.73}$ | ${\small 0.948}$ | ${\small 0.978}$
| ${\small 0.872}$ | ${\small 0.98}$ | ${\small 0.998}$ | | ${\small 0.794}$ | ${\small 0.948}$ | ${\small 0.98}$ | | ${\small 0.772}$ | ${\small 0.914}$ | ${\small 0.94}$
Table 3: Rejection probabilities of SARARMA(1,0,1) using bootstrap test $\mathscr{T}_{n}^{\ast}$ at 1, 5, 10% levels, power series (PS), trigonometric (Trig) and B-spline (B-s) bases.
$\mathscr{T}_{n}^{a\ast}$ | SARARMA(1,0,1)
---|---
| | PS | | | | Trig | | | | B-s |
$n=60$ | 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.1
${\small c=0}$ | ${\small 0.006}$ | ${\small 0.052}$ | ${\small 0.084}$ | | ${\small 0.014}$ | ${\small 0.064}$ | ${\small 0.096}$ | | ${\small 0.012}$ | ${\small 0.044}$ | ${\small 0.104}$
| ${\small 0.012}$ | ${\small 0.068}$ | ${\small 0.114}$ | | ${\small 0.024}$ | ${\small 0.088}$ | ${\small 0.13}$ | | ${\small 0.018}$ | ${\small 0.038}$ | ${\small 0.068}$
${\small c=3}$ | ${\small 0.092}$ | ${\small 0.27}$ | ${\small 0.396}$ | | ${\small 0.08}$ | ${\small 0.25}$ | ${\small 0.406}$ | | ${\small 0.118}$ | ${\small 0.382}$ | ${\small 0.56}$
| ${\small 0.164}$ | ${\small 0.408}$ | ${\small 0.596}$ | | ${\small 0.102}$ | ${\small 0.242}$ | ${\small 0.37}$ | | ${\small 0.046}$ | ${\small 0.15}$ | ${\small 0.23}$
${\small c=6}$ | ${\small 0.268}$ | ${\small 0.596}$ | ${\small 0.752}$ | | ${\small 0.248}$ | ${\small 0.61}$ | ${\small 0.792}$ | | ${\small 0.23}$ | ${\small 0.56}$ | ${\small 0.808}$
| ${\small 0.518}$ | ${\small 0.824}$ | ${\small 0.9}$ | | ${\small 0.206}$ | ${\small 0.484}$ | ${\small 0.658}$ | | ${\small 0.176}$ | ${\small 0.43}$ | ${\small 0.56}$
${\small n=100}$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.008}$ | ${\small 0.058}$ | ${\small 0.122}$ | | ${\small 0.01}$ | ${\small 0.046}$ | ${\small 0.116}$ | | ${\small 0.004}$ | ${\small 0.04}$ | ${\small 0.82}$
| ${\small 0.024}$ | ${\small 0.062}$ | ${\small 0.118}$ | | ${\small 0.014}$ | ${\small 0.044}$ | ${\small 0.096}$ | | ${\small 0.028}$ | ${\small 0.056}$ | ${\small 0.074}$
${\small c=3}$ | ${\small 0.14}$ | ${\small 0.36}$ | ${\small 0.494}$ | | ${\small 0.122}$ | ${\small 0.354}$ | ${\small 0.52}$ | | ${\small 0.186}$ | ${\small 0.4}$ | ${\small 0.524}$
| ${\small 0.252}$ | ${\small 0.566}$ | ${\small 0.73}$ | | ${\small 0.272}$ | ${\small 0.568}$ | ${\small 0.696}$ | | ${\small 0.04}$ | ${\small 0.148}$ | ${\small 0.214}$
${\small c=6}$ | ${\small 0.536}$ | ${\small 0.818}$ | ${\small 0.914}$ | | ${\small 0.554}$ | ${\small 0.884}$ | ${\small 0.948}$ | | ${\small 0.58}$ | ${\small 0.914}$ | ${\small 0.95}$
| ${\small 0.786}$ | ${\small 0.958}$ | ${\small 0.974}$ | | ${\small 0.478}$ | ${\small 0.834}$ | ${\small 0.916}$ | | ${\small 0.328}$ | ${\small 0.586}$ | ${\small 0.678}$
${\small n=200}$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.04}$ | ${\small 0.08}$ | ${\small 0.116}$ | | ${\small 0.03}$ | ${\small 0.076}$ | ${\small 0.102}$ | | ${\small 0.016}$ | ${\small 0.036}$ | ${\small 0.072}$
| ${\small 0.026}$ | ${\small 0.064}$ | ${\small 0.108}$ | | ${\small 0.028}$ | ${\small 0.06}$ | ${\small 0.122}$ | | ${\small 0.008}$ | ${\small 0.014}$ | ${\small 0.02}$
${\small c=3}$ | ${\small 0.176}$ | ${\small 0.382}$ | ${\small 0.516}$ | | ${\small 0.22}$ | ${\small 0.438}$ | ${\small 0.526}$ | | ${\small 0.262}$ | ${\small 0.45}$ | ${\small 0.55}$
| ${\small 0.41}$ | ${\small 0.632}$ | ${\small 0.738}$ | | ${\small 0.256}$ | ${\small 0.428}$ | ${\small 0.538}$ | | ${\small 0.06}$ | ${\small 0.124}$ | ${\small 0.164}$
${\small c=6}$ | ${\small 0.704}$ | ${\small 0.894}$ | ${\small 0.948}$ | | ${\small 0.746}$ | ${\small 0.934}$ | ${\small 0.976}$ | | ${\small 0.69}$ | ${\small 0.916}$ | ${\small 0.974}$
| ${\small 0.914}$ | ${\small 0.986}$ | ${\small 0.996}$ | | ${\small 0.776}$ | ${\small 0.93}$ | ${\small 0.976}$ | | ${\small 0.482}$ | ${\small 0.612}$ | ${\small 0.66}$
Table 4: Rejection probabilities of SARARMA(1,0,1) using bootstrap test $\mathscr{T}_{n}^{a\ast}$ at 1, 5, 10% levels, power series (PS), trigonometric (Trig) and B-spline (B-s) bases.
| $r=2$ | $r=3$ | $r=4$ | $r=5$
---|---|---|---|---
$n=150$ | $p=10$ | $p=15$ | $p=10$ | $p=15$ | $p=10$ | $p=15$ | $p=10$ | $p=15$ | $p=20$
$c=0$ | 0.0860 | 0.2020 | 0.1180 | 0.2060 | 0.1420 | 0.2240 | | |
$c=3$ | 0.3320 | 0.6340 | 0.3700 | 0.6380 | 0.3760 | 0.6700 | | |
$c=6$ | 0.9060 | 0.9920 | 0.9180 | 0.9940 | 0.9220 | 0.9960 | | |
$n=300$ | | | | | | | | |
$c=0$ | 0.0820 | 0.0960 | 0.0880 | 0.1080 | 0.1060 | 0.1100 | | |
$c=3$ | 0.2680 | 0.5980 | 0.2600 | 0.6120 | 0.2780 | 0.6220 | | |
$c=6$ | 0.8140 | 0.9980 | 0.8160 | 0.9980 | 0.8220 | 0.9980 | | |
$n=500$ | | | | | | | | |
$c=0$ | 0.0280 | 0.0420 | 0.0260 | 0.0400 | 0.0360 | 0.0480 | | |
$c=3$ | 0.2320 | 0.6660 | 0.2400 | 0.6620 | 0.2460 | 0.6680 | | |
$c=6$ | 0.8920 | 1 | 0.9040 | 1 | 0.9000 | 1 | | |
$n=600$ | | | | | | | | |
$c=0$ | 0.0320 | 0.0500 | 0.0340 | 0.0540 | 0.0360 | 0.0540 | | |
$c=3$ | 0.3140 | 0.6480 | 0.3080 | 0.6280 | 0.3120 | 0.6460 | | |
$c=6$ | 0.9220 | 1 | 0.9180 | 1 | 0.9180 | 1 | | |
$n=700$ | | | | | | | | |
$c=0$ | 0.0260 | 0.0300 | 0.0280 | 0.0380 | 0.0280 | 0.0380 | 0.0280 | 0.0420 | 0.0580
$c=3$ | 0.2420 | 0.5540 | 0.2400 | 0.5480 | 0.2520 | 0.5500 | 0.2420 | 0.5600 | 0.6920
$c=6$ | 0.9580 | 0.9980 | 0.9560 | 0.9980 | 0.9600 | 0.9980 | 0.9500 | 0.9980 | 1
Table 5: Rejection probabilities of $\mathscr{T}_{n}$ at 5% asymptotic level, nonparametric spatial error structure.
| 1998 | Pooled
---|---|---
| $H_{0}$ | p-value | $H_{1}$ | p-value | $H_{0}$ | p-value | $H_{1}$ |
| SARARMA(2,1,0)
$W^{A}y$ | -0.005 | $<$0.001 | -0.003 | $<$0.001 | 0.013 | $<$0.001 | 0.013 | $<$0.001
$W^{E}y$ | 0.130 | $<$0.001 | 0.129 | $<$0.001 | 0.121 | $<$0.001 | 0.121 | $<$0.001
$W^{d}$ | -0.159 | 0.281 | -0.225 | $<$0.001 | -0.086 | 0.033 | -0.086 | 0.033
$\mathscr{T}_{n}$ | | | -1.921 | 0.973 | | | -2.531 | 0.994
$\mathscr{T}_{n}^{\ast}$ | | | | 0.840 | | | | 0.940
$\mathscr{T}_{n}^{a}$ | | | -1.918 | 0.972 | | | -2.547 | 0.995
$\mathscr{T}_{n}^{a\ast}$ | | | | 0.870 | | | | 0.730
| SARARMA(2,0,1)
$W^{A}y$ | 0.001 | $<$0.01 | 0.011 | $<$0.01 | 0.013 | $<$0.01 | 0.013 | $<$0.01
$W^{E}y$ | 0.127 | $<$0.01 | 0.122 | $<$0.01 | 0.121 | $<$0.01 | 0.121 | $<$0.01
$W^{d}$ | -0.153 | $<$0.01 | -0.050 | $<$0.01 | -0.086 | $<$0.01 | -0.086 | 0.025
$\mathscr{T}_{n}$ | | | -1.763 | 0.961 | | | -2.421 | 0.992
$\mathscr{T}_{n}^{\ast}$ | | | | 0.900 | | | | 0.990
$\mathscr{T}_{n}^{a}$ | | | -2.349 | 0.991 | | | -2.423 | 0.992
$\mathscr{T}_{n}^{a\ast}$ | | | | 0.850 | | | | 0.790
| Nonparametric
$W^{A}y$ | -0.052 | $<$0.001 | -0.011 | $<$0.001 | 0.033 | $<$0.001 | 0.033 | $<$0.001
$W^{E}y$ | 0.149 | $<$0.001 | 0.133 | $<$0.001 | 0.110 | $<$0.001 | 0.109 | $<$0.001
$W^{d}$ | | | | | | | |
$\mathscr{T}_{n}$ | | | -1.294 | 0.902 | | | -2.314 | 0.990
$\mathscr{T}_{n}^{\ast}$ | | | | 0.830 | | | | 0.850
$\mathscr{T}_{n}^{a}$ | | | -1.898 | 0.971 | | | -2.530 | 0.994
$\mathscr{T}_{n}^{a\ast}$ | | | | 0.660 | | | | 0.910
Table 6: The estimates and test statistics for the conflict data. ∗ denotes
the bootstrap p-value.
Variables | FE | SARARMA(0,1,0), $W_{TEC}$
---|---|---
| | p-value | $H_{0}$ | p-value | $H_{1}$ | p-value
$\ln(Spsic)$ | -0.005 | 0.649 | 0.007 | 0.574 | 0.015 | 0.166
$\ln(Sptec)$ | 0.191 | $<$0.001 | 0.006 | 0.850 | -0.001 | 0.998
$\ln(Lab.)$ | 0.636 | $<$0.001 | 0.572 | $<$0.001 | |
$\ln(Cap.)$ | 0.154 | $<$0.001 | 0.336 | $<$0.001 | |
$\ln(R\&D)$ | 0.043 | $<$0.001 | 0.081 | $<$0.001 | |
$W_{TEC}$ | | | 0.835 | $<$0.001 | 0.829 | $<$0.001
$\mathscr{T}_{n}$ | | | | | 15.528 | $<$0.001
$\mathscr{T}_{n}^{*}$ | | | | | | 0.050
Variables | SARARMA(0,1,0), $W_{SIC}$
---|---
| $H_{0}$ | p-value | $H_{1}$ | p-value
$\ln(Spsic)$ | 0.008 | 0.620 | 0.017 | 0.193
$\ln(Sptec)$ | 0.039 | 0.157 | 0.020 | 0.336
$\ln(Lab.)$ | 0.571 | $<$0.001 | |
$\ln(Cap.)$ | 0.318 | $<$0.001 | |
$\ln(R\&D)$ | 0.082 | $<$0.001 | |
$W_{SIC}$ | 0.722 | $<$0.001 | 0.724 | $<$0.001
$\mathscr{T}_{n}$ | | | 10.451 | $<$0.001
$\mathscr{T}_{n}^{*}$ | | | | $<$0.001
Table 7: The estimates and test statistics for the R&D data, SARARMA(0,1,0). ∗
denotes the bootstrap p-value. The price index and a dummy variable for
missing values in R&D are included, but we report only the coefficients
appearing in Bloom et al. (2013).
Variables | SARARMA(0,2,0)
---|---
| ${\small H}_{0}$ | p-value | ${\small H}_{1}$ | p-value
$\ln{\small(Spsic)}$ | 0.009 | 0.587 | 0.018 | 0.170
$\ln{\small(Sptec)}$ | 0.044 | 0.112 | 0.026 | 0.236
$\ln{\small(Lab.)}$ | 0.573 | $<$0.001 | |
$\ln{\small(Cap.)}$ | 0.315 | $<$0.001 | |
$\ln{\small(R\&D)}$ | 0.082 | $<$0.001 | |
${\small W}_{SIC}$ | 0.696 | $<$0.001 | 0.693 | $<$0.001
${\small W}_{TEC}$ | 0.157 | 0.092 | 0.164 | 0.079
$\mathscr{T}_{n}$ | | | 10.485 | $<$0.001
$\mathscr{T}_{n}^{*}$ | | | | 0.060
Variables | SARARMA(0,0,2)
---|---
| ${\small H}_{0}$ | p-value | ${\small H}_{1}$ | p-value
$\ln{\small(Spsic)}$ | -0.0002 | 0.991 | 0.013 | 0.266
$\ln{\small(Sptec)}$ | 0.033 | 0.200 | 0.017 | 0.434
$\ln{\small(Lab.)}$ | 0.565 | $<$0.01 | |
$\ln{\small(Cap.)}$ | 0.334 | $<$0.01 | |
$\ln{\small(R\&D)}$ | 0.076 | $<$0.01 | |
${\small W}_{SIC}$ | 0.624 | $<$0.01 | 0.728 | $<$0.001
${\small W}_{TEC}$ | 0.312 | 0.123 | 0.321 | 0.112
$\mathscr{T}_{n}$ | | | 15.144 | $<$0.001
$\mathscr{T}_{n}^{*}$ | | | | 0.020
Variables | Error MESS(2)
---|---
| ${\small H}_{0}$ | p-value | ${\small H}_{1}$ | p-value
$\ln{\small(Spsic)}$ | 0.002 | 0.788 | 0.014 | 0.040
$\ln{\small(Sptec)}$ | 0.045 | 0.025 | 0.027 | 0.088
$\ln{\small(Lab.)}$ | 0.569 | $<$0.001 | |
$\ln{\small(Cap.)}$ | 0.323 | $<$0.001 | |
$\ln{\small(R\&D)}$ | 0.077 | $<$0.001 | |
${\small W}_{SIC}$ | 0.775 | $<$0.001 | 0.836 | $<$0.001
${\small W}_{TEC}$ | 0.338 | 0.010 | 0.380 | 0.004
$\mathscr{T}_{n}$ | | | 12.776 | $<$0.001
$\mathscr{T}_{n}^{*}$ | | | | 0.050
Table 8: The estimates and test statistics for the R&D data, SARARMA(0,2,0) and Error MESS(2). ∗ denotes the bootstrap p-value. The price index and a dummy variable for missing values in R&D are included, but we report only the coefficients appearing in Bloom et al. (2013).
Variable | $w_{ij}^{\ast}=d_{ij}^{-2}$ for $i\neq j$ | $w_{ij}^{\ast}=e^{-2d_{ij}}$ for $i\neq j$
---|---|---
| estimate | p-value | estimate | p-value
Constant | $1.0711$ | $0.608$ | $0.5989$ | $0.798$
$\ln(s)$ | $0.8256$ | $<0.001$ | $0.7938$ | $<0.001$
$\ln(n_{p}+0.05)$ | $-1.4984$ | $0.008$ | $-1.4512$ | $0.009$
$W\ln(s)$ | $-0.3159$ | $0.075$ | $-0.3595$ | $0.020$
$W\ln(n_{p}+0.05)$ | $0.5633$ | $0.498$ | $0.1283$ | $0.856$
$Wy$ | $0.7360$ | $<0.001$ | $0.6510$ | $<0.001$
$\mathscr{T}_{n}$ | $-1.88$ | 0.970 | $-2.08$ | 0.981
$\mathscr{T}_{n}^{*}$ | | 0.850 | | 0.900
$\mathscr{T}_{n}^{a}$ | $-1.90$ | 0.971 | $-2.05$ | 0.980
$\mathscr{T}_{n}^{a*}$ | | 0.820 | | 0.810
Restricted regression | | | |
Constant | $2.1411$ | $<0.001$ | $2.9890$ | $<0.001$
$\ln(s)-\ln(n+0.05)$ | $0.8426$ | $<0.001$ | $0.8195$ | $<0.001$
$W[\ln(s)-\ln(n_{p}+0.05)]$ | $-0.2675$ | $0.122$ | $-0.2589$ | $0.098$
$W\ln(y)$ | $0.7320$ | $<0.001$ | $0.6380$ | $<0.001$
$\mathscr{T}_{n}$ | $0.30$ | 0.382 | $4.04$ | $<0.001$
$\mathscr{T}_{n}^{*}$ | | 0.500 | | $<0.001$
$\mathscr{T}_{n}^{a}$ | $0.10$ | 0.460 | $4.50$ | $<0.001$
$\mathscr{T}_{n}^{a*}$ | | 0.560 | | $0.040$
Table 9: The estimates and test statistics of the linear SAR model for the
growth data. ∗ denotes the bootstrap p-value.
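The starred bootstrap p-values reported in the tables above come from recomputing the test statistic on resampled data under the null. A minimal residual-bootstrap sketch in Python; the function name, the right-tailed convention, and the toy t-ratio statistic are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def bootstrap_pvalue(stat_fn, resid, fitted, observed_stat, B=199, seed=0):
    """Right-tailed bootstrap p-value: resample centred residuals i.i.d.,
    rebuild the outcome under H0, and recompute the statistic."""
    rng = np.random.default_rng(seed)
    exceed = 0
    for _ in range(B):
        y_star = fitted + rng.choice(resid, size=resid.size, replace=True)
        if stat_fn(y_star) >= observed_stat:
            exceed += 1
    return (1 + exceed) / (1 + B)

# Toy illustration (hypothetical data): H0 says the mean is zero,
# and the statistic is the usual t-ratio.
rng = np.random.default_rng(2)
y = rng.standard_normal(200)
t_ratio = lambda v: np.sqrt(v.size) * v.mean() / v.std(ddof=1)
p_val = bootstrap_pvalue(t_ratio, resid=y - y.mean(),
                         fitted=np.zeros_like(y), observed_stat=t_ratio(y))
```

The `(1 + exceed) / (1 + B)` convention keeps the p-value strictly positive, which is standard for Monte Carlo tests with a finite number of replications.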
Appendix
## Appendix A Proofs of theorems and propositions
###### Proof of Proposition 4.1.
Because the map $\Sigma:\Gamma^{o}\rightarrow\mathcal{M}^{n\times n}$ is
Fréchet-differentiable on $\Gamma^{o}$, it is also Gâteaux-differentiable and
the two derivative maps coincide. Thus by Theorem 1.8 of Ambrosetti and Prodi
(1995),
$\left\|\Sigma\left(\gamma_{1}\right)-\Sigma\left(\gamma_{2}\right)\right\|\leq\sup_{\gamma\in\ell\left[\gamma_{1},\gamma_{2}\right]}\left\|D\Sigma(\gamma)\right\|\left\|\gamma_{1}-\gamma_{2}\right\|,$
where
$\ell\left[\gamma_{1},\gamma_{2}\right]=\left\\{t\gamma_{1}+(1-t)\gamma_{2}:t\in[0,1]\right\\}$.
The claim now follows by (4.3) in Assumption 8. ∎
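As a sanity check, the mean-value bound above can be verified numerically for a concrete one-parameter family. A minimal sketch, assuming a hypothetical exponential-in-distance specification $\Sigma(\gamma)_{ij}=e^{-\gamma d_{ij}}$ chosen purely for illustration (it is not the paper's $\Sigma$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
pts = rng.uniform(size=(n, 2))
# Pairwise distances of n random locations on the unit square.
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)

def sigma(g):
    # Hypothetical covariance family Sigma(gamma)_{ij} = exp(-gamma * d_{ij}).
    return np.exp(-g * D)

def d_sigma(g):
    # Its derivative in gamma: elementwise -d_{ij} * exp(-gamma * d_{ij}).
    return -D * np.exp(-g * D)

g1, g2 = 0.5, 1.5
lhs = np.linalg.norm(sigma(g1) - sigma(g2), 2)          # spectral norm
sup_deriv = max(np.linalg.norm(d_sigma(t * g1 + (1 - t) * g2), 2)
                for t in np.linspace(0.0, 1.0, 51))     # sup over the segment
rhs = sup_deriv * abs(g1 - g2)
assert lhs <= rhs + 1e-9  # the mean-value inequality holds
```

For this family the derivative norm is monotone on the segment, so the grid maximum attains the supremum and the displayed inequality holds exactly up to rounding.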
###### Proof of Theorem 4.1.
This is a particular case of the proof of Theorem 5.1 with $\lambda=0$, and so
$S(\lambda)=I_{n}$. ∎
###### Proof of Theorem 4.2.
In the supplementary appendix. ∎
###### Proof of Theorem 4.3.
We would like to establish the asymptotic unit normality of
$\frac{\sigma_{0}^{-2}\varepsilon^{\prime}\mathscr{V}\varepsilon-p}{\sqrt{2p}}.$
(A.1)
Writing $q=\sqrt{2p}$, the ratio in (A.1) has zero mean and variance equal to
one, and may be written as $w=\sum_{s=1}^{\infty}w_{s}$, where
$w_{s}=\sigma_{0}^{-2}q^{-1}v_{ss}\left(\varepsilon_{s}^{2}-\sigma_{0}^{2}\right)+2\sigma_{0}^{-2}q^{-1}\mathbf{1}(s\geq
2)\varepsilon_{s}\sum_{t<s}v_{st}\varepsilon_{t},$ with $v_{st}$ the typical
element of $\mathscr{V}$, $s,t=1,2,\ldots$. We first show that
$w_{\ast}\overset{p}{\longrightarrow}0,$ (A.2)
where $w_{\ast}=w-w_{S}$, $w_{S}=\sum_{s=1}^{S}w_{s}$ and $S=S_{n}$ is a
positive integer sequence that is increasing in $n$. All expectations in the
sequel are taken conditional on $X$. By Chebyshev’s inequality, proving
$\mathcal{E}w_{\ast}^{2}\overset{p}{\rightarrow}0$ (A.3)
is sufficient to establish (A.2). Notice that $\mathcal{E}w_{s}^{2}\leq
Cq^{-2}v_{ss}^{2}+Cq^{-2}\mathbf{1}(s\geq 2)\sum_{t<s}v_{st}^{2}\leq
Cq^{-2}\sum_{t\leq s}v_{st}^{2},$ so that, writing
$\mathscr{M}=\Sigma^{-{1}}\Psi[\Psi^{\prime}\Sigma^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma^{-{1}}$,
$\displaystyle\sum_{s=S+1}^{\infty}\mathcal{E}w_{s}^{2}\leq
Cq^{-2}\sum_{s=S+1}^{\infty}\sum_{t\leq s}v_{st}^{2}\leq
Cq^{-2}\sum_{s=S+1}^{\infty}b_{s}^{\prime}\mathscr{M}\sum_{t\leq
s}b_{t}b_{t}^{\prime}\mathscr{M}b_{s}$ (A.4) $\displaystyle\leq$
$\displaystyle
Cq^{-2}\left\|\Sigma\right\|\sum_{s=S+1}^{\infty}b_{s}^{\prime}\mathscr{M}^{2}b_{s}\leq
Cq^{-2}\sum_{s=S+1}^{\infty}\sum_{i,j,k=1}^{n}b_{is}b_{ks}m_{ij}m_{kj}$
$\displaystyle\leq$ $\displaystyle
Cq^{-2}\sum_{s=S+1}^{\infty}\sum_{i,k=1}^{n}\left|b_{is}^{\ast}\right|\left|b_{ks}^{\ast}\right|\sum_{j=1}^{n}\left(m_{kj}^{2}+m_{ij}^{2}\right),$
where $m_{ij}$ is the $(i,j)$-th element of $\mathscr{M}$ and we have used the
inequality $|ab|\leq\left(a^{2}+b^{2}\right)/2$ in the last step. Now, denote
by $h_{i}^{\prime}$ the $i$-th row of the $n\times p$ matrix
$\Sigma^{-1}\Psi$. Denoting the elements of $\Sigma^{-1}$ by
$\Sigma^{-1}_{ij}$ and $\psi_{jl}=\psi\left(x_{jl}\right)$, $h_{i}$ has
entries $h_{il}=\sum_{j=1}^{n}\Sigma^{-1}_{ij}\psi_{jl}$, $l=1,\ldots,p$. We
have
$\left|h_{il}\right|=O_{p}\left(\sum_{j=1}^{n}\left|\Sigma^{-1}_{ij}\right|\right)=O_{p}\left(\left\|\Sigma^{-1}\right\|_{R}\right)=O_{p}(1)$,
uniformly, by Assumptions R.11 and R.13. Thus, we have
$\left\|h_{i}\right\|=O_{p}\left(\sqrt{p}\right)$, uniformly in $i$. As a
result,
$\left|m_{ij}\right|=n^{-1}\left|h_{i}^{\prime}\left(n^{-1}\Psi^{\prime}\Sigma^{-1}\Psi\right)^{-1}h_{j}\right|=O_{p}\left(n^{-1}\left\|h_{i}\right\|\left\|h_{j}\right\|\right)=O_{p}\left(pn^{-1}\right),$
(A.5)
uniformly in $i,j$, by Assumption R.11. Similarly, note that
$\displaystyle\sum_{j=1}^{n}m_{ij}^{2}$ $\displaystyle=$ $\displaystyle
n^{-1}h_{i}^{\prime}\left(n^{-1}\Psi^{\prime}\Sigma^{-1}\Psi\right)^{-1}\left(n^{-1}\Psi^{\prime}\Sigma^{-2}\Psi\right)\left(n^{-1}\Psi^{\prime}\Sigma^{-1}\Psi\right)^{-1}h_{i}$
(A.6) $\displaystyle\leq$ $\displaystyle
n^{-1}\left\|h_{i}\right\|^{2}\left\|\left(n^{-1}\Psi^{\prime}\Sigma^{-1}\Psi\right)^{-1}\right\|^{2}\left\|n^{-1}\Psi^{\prime}\Sigma^{-2}\Psi\right\|$
$\displaystyle=$ $\displaystyle
O_{p}\left(pn^{-2}\left\|\Psi\right\|^{2}\left\|\Sigma^{-1}\right\|^{2}\right)=O_{p}\left(pn^{-1}\right),$
uniformly in $i$. Thus (A.4) is
$O_{p}\left(q^{-2}pn^{-1}\sum_{i=1}^{n}\sum_{s=S+1}^{\infty}\left|b_{is}^{\ast}\right|\sum_{k=1}^{n}\left|b_{ks}^{\ast}\right|\right)=O_{p}\left(q^{-2}p\sup_{i=1,\ldots,n}\sum_{s=S+1}^{\infty}\left|b_{is}^{\ast}\right|\right),$
(A.7)
by Assumption R.4. By the same assumption, there exists $S_{in}$ such that
$\sum_{s=S_{in}+1}^{\infty}\left|b_{is}^{\ast}\right|\leq\epsilon_{n}$ for any
decreasing sequence $\epsilon_{n}\rightarrow 0$ as $n\rightarrow\infty$.
Choosing $S=\max_{i=1,\ldots,n}S_{in}$ in $w_{S}$, we deduce that (A.7) is
$O_{p}\left(q^{-2}p\epsilon_{n}\right)=O_{p}\left(\epsilon_{n}\right)=o_{p}(1)$,
proving (A.3). Thus we need only focus on $w_{S}$, and seek to establish that
$w_{S}\longrightarrow_{d}N(0,1),\text{ as }n\rightarrow\infty.$ (A.8)
From Scott (1973), (A.8) follows if
$\sum_{s=1}^{S}\mathcal{E}w_{s}^{4}\overset{p}{\longrightarrow}0,\text{ as
}n\rightarrow\infty,$ (A.9)
and
$\sum_{s=1}^{S}\left[\mathcal{E}\left(w_{s}^{2}\left.{}\right|\varepsilon_{t},t<s\right)-\mathcal{E}\left(w_{s}^{2}\right)\right]\overset{p}{\longrightarrow}0,\text{
as }n\rightarrow\infty.$ (A.10)
We show (A.9) first. Evaluating the expectation and using (A.6) yields
$\displaystyle\mathcal{E}w_{s}^{4}$ $\displaystyle\leq$ $\displaystyle
Cq^{-4}v_{ss}^{4}+Cq^{-4}\sum_{t<s}v_{st}^{4}\leq Cq^{-4}\left(\sum_{t\leq
s}v_{st}^{2}\right)^{2}\leq Cq^{-4}\left(b_{s}^{\prime}\mathscr{M}\sum_{t\leq
s}b_{t}b_{t}^{\prime}\mathscr{M}b_{s}\right)^{2}$ $\displaystyle\leq$
$\displaystyle
Cq^{-4}\left(b_{s}^{\prime}\mathscr{M}^{2}b_{s}\right)^{2}=Cq^{-4}\sum_{i,j,k=1}^{n}b_{is}b_{ks}m_{ij}m_{kj}\leq
Cq^{-4}\sum_{i,k=1}^{n}\left|b^{*}_{is}\right|\left|b^{*}_{ks}\right|\sum_{j=1}^{n}\left(m^{2}_{ij}+m^{2}_{kj}\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}pn^{-1}\left(\sum_{i=1}^{n}\left|b^{*}_{is}\right|\right)^{2}\right),$
whence
$\displaystyle\sum_{s=1}^{S}\mathcal{E}w_{s}^{4}$ $\displaystyle=$
$\displaystyle
O_{p}\left(q^{-4}pn^{-1}\sum_{s=1}^{S}\left(\sum_{i=1}^{n}\left|b^{*}_{is}\right|\right)^{2}\right)=O_{p}\left(q^{-4}pn^{-1}\sum_{s=1}^{S}\left(\sum_{i=1}^{n}\left|b_{is}^{\ast}\right|\right)\right)=O_{p}\left(q^{-4}p\right),$
by Assumption R.4. Thus (A.9) is established. Notice that
$\mathcal{E}\left(\left.w_{s}^{2}\right|\varepsilon_{t},t<s\right)$ equals
$4q^{-2}\sigma_{0}^{-4}\left\\{\left(\mu_{4}-\sigma_{0}^{4}\right)v_{ss}^{2}+2\mu_{3}\mathbf{1}(s\geq
2)\sum_{t<s}v_{st}v_{ss}\varepsilon_{t}\right\\}+4q^{-2}\sigma_{0}^{-2}\mathbf{1}(s\geq
2)\left(\sum_{t<s}v_{st}\varepsilon_{t}\right)^{2},$
and
$\mathcal{E}w_{s}^{2}=4q^{-2}\sigma_{0}^{-4}\left(\mu_{4}-\sigma_{0}^{4}\right)v_{ss}^{2}+4q^{-2}\mathbf{1}(s\geq
2)\sum_{t<s}v_{st}^{2},$ so that (A.10) is bounded by a constant times
$q^{-2}\sum_{s=2}^{S}\sum_{t<s}v_{st}v_{ss}\varepsilon_{t}+q^{-2}\sum_{s=2}^{S}\left\\{\left(\sum_{t<s}v_{st}\varepsilon_{t}\right)^{2}-\sigma_{0}^{2}\sum_{t<s}v_{st}^{2}\right\\}.$
(A.11)
By transforming the range of summation, the square of the first term in (A.11)
has expectation bounded by
$Cq^{-4}\mathcal{E}\left(\sum_{t=1}^{S-1}\sum_{s=t+1}^{S}v_{st}v_{ss}\varepsilon_{t}\right)^{2}\leq
Cq^{-4}\sum_{t=1}^{S-1}\left(\sum_{s=t+1}^{S}v_{st}v_{ss}\right)^{2},$ (A.12)
where the square of the factor in parentheses on the RHS of (A.12) is
$\displaystyle\sum_{s,r=t+1}^{S}b_{s}^{\prime}\mathscr{M}b_{s}b_{s}^{\prime}\mathscr{M}b_{t}b_{r}^{\prime}\mathscr{M}b_{r}b_{r}^{\prime}\mathscr{M}b_{t}\leq\sum_{s,r=t+1}^{S}\left|b_{s}^{\prime}\mathscr{M}b_{s}b_{r}^{\prime}\mathscr{M}b_{r}\right|\left|b_{s}^{\prime}\mathscr{M}b_{t}\right|\left|b_{r}^{\prime}\mathscr{M}b_{t}\right|$
$\displaystyle\leq$ $\displaystyle
C\sum_{s,r=t+1}^{S}\sum_{i,j,k,l=1}^{n}\left|b_{is}\right|\left|m_{ij}\right|\left|b_{js}\right|\left|b_{kr}\right|\left|m_{kl}\right|\left|b_{lr}\right|\left|b_{s}^{\prime}\mathscr{M}b_{t}\right|\left|b_{r}^{\prime}\mathscr{M}b_{t}\right|$
$\displaystyle\leq$ $\displaystyle
C\left(\sup_{i,j}\left|m_{ij}\right|\right)^{2}\left(\sup_{s\geq
1}\sum_{i=1}^{n}\left|b_{is}^{\ast}\right|\right)^{4}\sum_{s,r=t+1}^{S}\left|b_{s}^{\prime}\mathscr{M}b_{t}\right|\left|b_{r}^{\prime}\mathscr{M}b_{t}\right|$
$\displaystyle=$ $\displaystyle
O_{p}\left(p^{2}n^{-2}\left(\sum_{s=t+1}^{S}\left|b_{t}^{\prime}\mathscr{M}b_{s}\right|\right)^{2}\right)=O_{p}\left(p^{2}n^{-2}\left(\sum_{s=t+1}^{S}\sum_{i,j=1}^{n}\left|b_{it}^{\ast}\right|\left|m_{ij}\right|\left|b_{js}^{\ast}\right|\right)^{2}\right),$
where we used Assumptions R.4 and (A.5). Now Assumptions R.4, R.11 and (A.5)
imply that
$\displaystyle\sum_{s=t+1}^{S}\sum_{i,j=1}^{n}\left|b_{it}^{\ast}\right|\left|m_{ij}\right|\left|b_{js}^{\ast}\right|$
$\displaystyle=$ $\displaystyle
O_{p}\left(\sup_{i,j}\left|m_{ij}\right|\sup_{t}\sum_{i=1}^{n}\left|b_{it}^{\ast}\right|\sum_{j=1}^{n}\sum_{s=t+1}^{S}\left|b_{js}^{\ast}\right|\right)=O_{p}\left(p\sup_{t}\sum_{i=1}^{n}\left|b_{it}^{\ast}\right|\right),$
so (A.12) is
$O_{p}\left(q^{-4}p^{4}n^{-2}\sup_{t}\left(\sum_{i=1}^{n}\left|b_{it}^{\ast}\right|\right)\left(\sum_{i=1}^{n}\left(\sum_{t=1}^{S-1}\left|b_{it}^{\ast}\right|\right)\right)\right)$.
By Assumption R.4 the latter is $O_{p}\left(q^{-4}p^{4}n^{-1}\right)$ and
therefore the square of the first term in (A.11) is
$O_{p}\left(p^{2}n^{-1}\right)$, which is negligible.
Once again transforming the summation range and using the inequality
$|a+b|^{2}\leq C\left(a^{2}+b^{2}\right)$, we can bound the square of the
second term in (A.11) by a constant times
$\left(\sum_{t=1}^{S-1}\sum_{s=t+1}^{S}v_{st}^{2}\left(\varepsilon_{t}^{2}-\sigma_{0}^{2}\right)\right)^{2}+\left(2\sum_{t=1}^{S-1}\sum_{r=1}^{t-1}\sum_{s=t+1}^{S}v_{st}v_{sr}\varepsilon_{t}\varepsilon_{r}\right)^{2}.$
(A.13)
Using Assumption R.4, the expectations of the two terms in (A.13) are bounded
by a constant times $\alpha_{1}$ and a constant times $\alpha_{2}$,
respectively, where
$\alpha_{1}=\sum_{t=1}^{S-1}\left(\sum_{s=t+1}^{S}v_{st}^{2}\right)^{2},\alpha_{2}=\sum_{t=1}^{S-1}\sum_{r=1}^{t-1}\left(\sum_{s=t+1}^{S}v_{st}v_{sr}\right)^{2}.$
Thus (A.13) is $O_{p}\left(\alpha_{1}+\alpha_{2}\right)$. Now by (A.5),
Assumptions R.4, R.11 and elementary inequalities $\alpha_{2}$ is bounded by
$\displaystyle\sum_{t=1}^{S-1}\sum_{r=1}^{t-1}\sum_{s=t+1}^{S}\sum_{u=t+1}^{S}b_{s}^{\prime}\mathscr{M}b_{t}b_{s}^{\prime}\mathscr{M}b_{r}b_{u}^{\prime}\mathscr{M}b_{t}b_{u}^{\prime}\mathscr{M}b_{r}$
$\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}\sum_{s,r,t,u=1}^{S}\sum_{i,j=1}^{n}\left|b_{ir}^{\ast}\right|\left|m_{ij}\right|\left|b_{js}^{\ast}\right|\sum_{i,j=1}^{n}\left|b_{ir}^{\ast}\right|\left|m_{ij}\right|\left|b_{ju}^{\ast}\right|\sum_{i,j=1}^{n}\left|b_{it}^{\ast}\right|\left|m_{ij}\right|\left|b_{js}^{\ast}\right|\sum_{i,j=1}^{n}\left|b_{it}^{\ast}\right|\left|m_{ij}\right|\left|b_{ju}^{\ast}\right|\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}pn^{-1}\sum_{s,r,t=1}^{S}\left(\sum_{i,j=1}^{n}\left|b_{ir}^{\ast}\right|\left|m_{ij}\right|\left|b_{js}^{\ast}\right|\right)\left(\sum_{i,j=1}^{n}\left|b_{ir}^{\ast}\right|\left|m_{ij}\right|\sum_{u=1}^{S}\left|b_{ju}^{\ast}\right|\right)\right.$
$\displaystyle\times$
$\displaystyle\left.\sum_{i,j=1}^{n}\left|b_{it}^{\ast}\right|\left|m_{ij}\right|\left|b_{js}^{\ast}\right|\sum_{i=1}^{n}\left|b_{it}^{\ast}\right|\sup_{u}\sum_{j=1}^{n}\left|b_{ju}^{\ast}\right|\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}p^{2}n^{-2}\sum_{s,r=1}^{S}\left(\sum_{i,j=1}^{n}\left|b_{ir}^{\ast}\right|\left|m_{ij}\right|\left|b_{js}^{\ast}\right|\right)\sum_{i=1}^{n}\left|b_{ir}^{\ast}\right|\sum_{j=1}^{n}\left(\sum_{u=1}^{S}\left|b_{ju}^{\ast}\right|\right)\left(\sum_{i,j=1}^{n}\sum_{t=1}^{S}\left|b_{it}^{\ast}\right|\left|m_{ij}\right|\left|b_{js}^{\ast}\right|\right)\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}p^{2}n^{-1}\sum_{i,j=1}^{n}\left(\sum_{r=1}^{S}\left|b_{ir}^{\ast}\right|\right)\left|m_{ij}\right|\left(\sum_{s=1}^{S}\left|b_{js}^{\ast}\right|\right)\left(\sup_{j}\sum_{i=1}^{n}\left|m_{ij}\right|\right)\sum_{j=1}^{n}\left|b_{js}^{\ast}\right|\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}p^{2}n^{-1}\sup_{k}\sum_{i,j=1}^{n}\left|m_{ij}\right|\sum_{i=1}^{n}\left|m_{ik}\right|\right)=O_{p}\left(q^{-4}p^{2}n^{-1}\sup_{k}\sum_{i,j,\ell=1}^{n}\left|m_{ij}\right|\left|m_{\ell
k}\right|\right)$ $\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}p^{2}n^{-1}\sup_{k}\sum_{i,j,\ell=1}^{n}\left(m_{ij}^{2}+m_{\ell
k}^{2}\right)\right)=O_{p}\left(q^{-4}p^{2}n^{-1}\sum_{i,j,\ell=1}^{n}\left(m_{ij}^{2}+m_{\ell
j}^{2}\right)\right)$ $\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}p^{2}n^{-1}\sum_{i,j=1}^{n}m_{ij}^{2}\right)=O_{p}\left(q^{-4}p^{2}\sup_{j}\sum_{i=1}^{n}m_{ij}^{2}\right)=O_{p}\left(pn^{-1}\right),$
where we used (A.6) in the last step. A similar use of the conditions of the
theorem and (A.5) implies that $\alpha_{1}$ is
$\displaystyle
O_{p}\left(q^{-4}\sum_{t=1}^{S-1}\left\\{\sum_{s=t+1}^{S}\left(\sum_{i,j=1}^{n}\left|m_{ij}\right|\left|b_{jt}^{\ast}\right|\left|b_{is}^{\ast}\right|\right)^{2}\right\\}^{2}\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}\left(\sup_{i,j}\left|m_{ij}\right|\right)^{4}\sum_{t=1}^{S-1}\left\\{\sum_{s=t+1}^{S}\left(\sum_{i=1}^{n}\left|b_{is}^{\ast}\right|\sum_{j=1}^{n}\left|b_{jt}^{\ast}\right|\right)^{2}\right\\}^{2}\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}p^{4}n^{-4}\sum_{t=1}^{S-1}\left\\{\sum_{s=t+1}^{S}\left(\sum_{i=1}^{n}\left|b_{is}^{\ast}\right|\right)^{2}\left(\sum_{j=1}^{n}\left|b_{jt}^{\ast}\right|\right)^{2}\right\\}^{2}\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}p^{4}n^{-4}\sum_{t=1}^{S-1}\left(\sum_{s=t+1}^{S}\left(\sum_{i=1}^{n}\left|b_{is}^{\ast}\right|\right)^{2}\right)^{2}\left(\sum_{j=1}^{n}\left|b_{jt}^{\ast}\right|\right)^{4}\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}p^{4}n^{-4}\left(\sum_{t=1}^{S-1}\sum_{j=1}^{n}\left|b_{jt}^{\ast}\right|\right)\left(\sum_{s=t+1}^{S}\sum_{i=1}^{n}\left|b_{is}^{\ast}\right|\right)^{2}\sup_{s}\left(\sum_{i=1}^{n}\left|b_{is}^{\ast}\right|\right)^{2}\sup_{t}\left(\sum_{j=1}^{n}\left|b_{jt}^{\ast}\right|\right)^{3}\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(q^{-4}p^{4}n^{-1}\right)=O_{p}\left(p^{2}n^{-1}\right)$
proving (A.10), as $p^{2}/n\rightarrow 0$ by the conditions of the theorem. ∎
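The limit just established is easy to visualise by simulation. A minimal Monte Carlo sketch, assuming Gaussian errors and taking a rank-$p$ orthogonal projection as a simple stand-in for the matrix $\mathscr{V}$ (the exact $\mathscr{V}$ in the proof depends on $\Sigma$ and $\Psi$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma2, reps = 400, 40, 2.0, 2000

# Rank-p orthogonal projection as an illustrative stand-in for script-V.
Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
V = Q @ Q.T

stats = np.empty(reps)
for r in range(reps):
    eps = rng.normal(scale=np.sqrt(sigma2), size=n)
    # Standardised quadratic form, as in (A.1).
    stats[r] = (eps @ V @ eps / sigma2 - p) / np.sqrt(2 * p)

# With Gaussian errors the quadratic form is chi-squared with p degrees of
# freedom, so the standardised statistic has mean 0 and variance 1 and is
# close to N(0,1) for moderate p.
```

For this Gaussian special case the standardisation is exact; the theorem extends the normal limit to the general error distributions of Assumption R.4.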
###### Proof of Theorem 4.4.
In the supplementary appendix. ∎
###### Proof of Theorem 5.1.
Due to the similarity with proofs in Delgado and Robinson (2015) and Gupta and
Robinson (2018), the details are in the supplementary appendix. ∎
###### Proof of Theorem 5.2.
Let $\theta^{\ast}$ denote the solution of
$\min_{\theta}\mathcal{E}\left(y_{i}-\sum_{j=1}^{d_{\lambda}}\lambda_{j}w_{i,j}^{\prime}y-\theta(x_{i})\right)^{2}$.
Put $\theta_{i}^{\ast}=\theta^{\ast}(x_{i})$, $\theta_{0i}=\theta_{0}(x_{i})$,
$\widehat{\theta}_{i}=\psi_{i}^{\prime}\widehat{\beta}$ ,
$\widehat{f}_{i}=f(x_{i},\widehat{\alpha})$,
$f_{i}^{\ast}=f(x_{i},\alpha^{\ast})$. Then
$\widehat{u}_{i}=y_{i}-\sum_{j=1}^{d_{\lambda}}\widehat{\lambda}_{j}w_{i,j}^{\prime}y-f(x_{i},\widehat{\alpha})=u_{i}+\theta_{0i}+\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})w_{i,j}^{\prime}y-\widehat{f}_{i}$.
Proceeding as in the proof of Theorem 4.2, we obtain
$n\widehat{m}_{n}=\widehat{\sigma}^{-2}u^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}u+\widehat{\sigma}^{-2}\sum_{j=1}^{7}A_{j}$.
Thus, compared to the test statistic with no spatial lag, cf. the proof of
Theorem 4.2, we have the additional terms
$\displaystyle A_{5}$ $\displaystyle=$
$\displaystyle\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})y^{\prime}W_{j}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})W_{j}y,$
$\displaystyle A_{6}$ $\displaystyle=$
$\displaystyle\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})y^{\prime}W_{j}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}(u+\theta_{0}-\widehat{f}),$
$\displaystyle A_{7}$ $\displaystyle=$
$\displaystyle\left(\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(u\mathbf{+}e\right)-e+\theta_{0}-\widehat{f}\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})W_{j}y.$
We now show that $A_{\ell}=o_{p}(\sqrt{p}),\ell>4$, so the leading term in
$n\widehat{m}_{n}$ is the same as before. First
$\left\|y\right\|=O_{p}(\sqrt{n})$ from
$y=(I_{n}-\sum_{j=1}^{d_{\lambda}}\lambda_{j_{0}}W_{j})^{-1}\left(\theta_{0}+u\right)$.
Then, with
$\left\|\lambda_{{}_{0}}-\widehat{\lambda}\right\|=O_{p}\left(\sqrt{d_{\gamma}/n}\right)$
by Lemma LS.2, we have
$\displaystyle\left|A_{5}\right|$ $\displaystyle\leq$
$\displaystyle\left\|\lambda_{{}_{0}}-\widehat{\lambda}\right\|^{2}\sum_{j=1}^{d_{\lambda}}\left\|W_{j}\right\|^{2}\sup_{\gamma,j}\left\|\Sigma\left(\gamma\right)^{-1}\frac{1}{n}\Psi\left(\frac{1}{n}\Psi^{\prime}\Sigma\left({\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\right\|\left\|y\right\|^{2}$
$\displaystyle=$ $\displaystyle
O_{p}\left(d_{\gamma}/n\right)O_{p}(1)O_{p}(n)=O_{p}\left(d_{\gamma}\right)=o_{p}(\sqrt{p}).$
Uniformly in $\gamma$ and $j$,
$\displaystyle\mathcal{E}\left(u^{\prime}S^{-1\prime}W_{j}^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}u\right)$
$\displaystyle=$
$\displaystyle\mathcal{E}tr\left(\left(\frac{1}{n}\Psi^{\prime}\Sigma\left({\gamma}\right)^{-1}\Psi\right)^{-1}\frac{1}{n}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Sigma
S^{-1\prime}W_{j}^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi\right)=O_{p}(p)$
and
$\displaystyle\mathcal{E}\left(\theta_{0}^{\prime}S^{-1\prime}W_{j}^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}u\right)^{2}$
$\displaystyle=$ $\displaystyle
O_{p}\left(\left\|S^{-1}\right\|^{2}\sup_{\gamma}\left\|\Sigma\left(\gamma\right)^{-1}\right\|^{4}\left\|\frac{1}{n}\Psi\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\right\|^{2}\sup_{j}\left\|W_{j}\right\|^{2}\left\|\Sigma\right\|\left\|\theta_{0}\right\|^{2}\right)=O_{p}(n).$
Similarly,
$\theta_{0}^{\prime}S^{-1\prime}W_{j}^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}W_{j}\theta_{0}=O_{p}(n),$
uniformly. Therefore,
$\displaystyle\left|\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})y^{\prime}W_{j}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}u\right|$
$\displaystyle=$
$\displaystyle\left|\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})\left(\theta_{0}+u\right)^{\prime}S^{-1\prime}W_{j}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}u\right|$
$\displaystyle\leq$ $\displaystyle
d_{\lambda}\left\|\lambda_{{}_{0}}-\widehat{\lambda}\right\|\sup_{\gamma,j}\left|\theta_{0}^{\prime}S^{-1\prime}W_{j}^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}u\right|$
$\displaystyle+d_{\lambda}\left\|\lambda_{{}_{0}}-\widehat{\lambda}\right\|\sup_{\gamma,j}\left|u^{\prime}S^{-1\prime}W_{j}^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}u\right|$
$\displaystyle=$ $\displaystyle
O_{p}\left(\sqrt{d_{\gamma}/n}\right)O_{p}(\sqrt{n})+O_{p}\left(\sqrt{d_{\gamma}/n}\right)O_{p}(p)=O_{p}\left(\sqrt{d_{\gamma}}\right)=o_{p}\left(\sqrt{p}\right),$
and
$\displaystyle\left|\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})y^{\prime}W_{j}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}(\theta_{0}-\widehat{f})\right|$
$\displaystyle\leq$ $\displaystyle
d_{\lambda}\left\|\lambda_{{}_{0}}-\widehat{\lambda}\right\|\left\|y\right\|\sup_{j}\left\|W_{j}\right\|\sup_{\gamma}\left\|\frac{1}{n}\Psi\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi\right)^{-1}\Psi\right\|\sup_{\gamma}\left\|\Sigma\left(\gamma\right)^{-1}\right\|^{2}\left\|\theta_{0}-\widehat{f}\right\|$
$\displaystyle=$ $\displaystyle
O_{p}\left(\sqrt{d_{\gamma}/n}\right)O_{p}\left(\sqrt{n}\right)O_{p}\left(p^{1/4}\right)=O_{p}\left(\sqrt{d_{\gamma}}p^{1/4}\right)=o_{p}(\sqrt{p}),$
so that $A_{6}=o_{p}(\sqrt{p})$. Finally,
$\displaystyle\left|\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})y^{\prime}W_{j}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}e\right|$
$\displaystyle\leq$ $\displaystyle
d_{\lambda}\left\|\lambda_{{}_{0}}-\widehat{\lambda}\right\|\left\|y\right\|\sup_{j}\left\|W_{j}\right\|\sup_{\gamma}\left\|\frac{1}{n}\Psi\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi\right)^{-1}\Psi\right\|\sup_{\gamma}\left\|\Sigma\left(\gamma\right)^{-1}\right\|^{2}\left\|e\right\|$
$\displaystyle=$ $\displaystyle
O_{p}\left(\sqrt{d_{\gamma}/n}\right)O_{p}\left(\sqrt{n}\right)O_{p}\left(p^{-\mu}\sqrt{n}\right)=O_{p}\left(\sqrt{d_{\gamma}}p^{-\mu}\sqrt{n}\right)=o_{p}(\sqrt{p}),$
and
$\displaystyle\left|(e+\theta_{0}-\widehat{f})^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})W_{j}y\right|$
$\displaystyle\leq$ $\displaystyle
d_{\lambda}\left\|\lambda_{{}_{0}}-\widehat{\lambda}\right\|\left(\left\|e\right\|+\left\|\theta_{0}-\widehat{f}\right\|\right)\sup_{\gamma}\left\|\Sigma\left(\gamma\right)^{-1}\right\|\sup_{j}\left\|W_{j}\right\|\left\|y\right\|$
$\displaystyle=$ $\displaystyle
O_{p}\left(\sqrt{d_{\gamma}/n}\right)O_{p}\left(p^{-\mu}\sqrt{n}+p^{1/4}\right)O_{p}\left(\sqrt{n}\right)=O_{p}\left(\sqrt{d_{\gamma}}p^{-\mu}\sqrt{n}+\sqrt{d_{\gamma}}p^{1/4}\right)=o_{p}(\sqrt{p}),$
implying that $A_{7}=o_{p}(\sqrt{p}).$ ∎
###### Proof of Theorem 5.3.
Omitted as it is similar to the proof of Theorem 4.4. ∎
###### Proof of Proposition 6.1.
Because the map $\Sigma:\mathcal{T}^{o}\rightarrow\mathcal{M}^{n\times n}$ is
Fréchet-differentiable on $\mathcal{T}^{o}$, it is also Gâteaux-differentiable
and the two derivative maps coincide. Thus by Theorem 1.8 of Ambrosetti and
Prodi (1995),
$\left\|\Sigma(t_{1})-\Sigma(t_{2})\right\|\leq\sup_{t\in\mathcal{T}^{o}}\left\|D\Sigma(t)\right\|_{\mathscr{L}\left(\mathcal{T}^{o},\mathcal{M}^{n\times
n}\right)}\left(\left\|\gamma_{1}-\gamma_{2}\right\|+\sum_{\ell=1}^{d_{\zeta}}\left\|\left(\delta_{\ell
1}-\delta_{\ell 2}\right)^{\prime}\varphi_{\ell}\right\|_{w}\right),$ (A.14)
where
$\displaystyle\sum_{\ell=1}^{d_{\zeta}}\left\|\left(\delta_{\ell
1}-\delta_{\ell 2}\right)^{\prime}\varphi_{\ell}\right\|_{w}$ $\displaystyle=$
$\displaystyle\sum_{\ell=1}^{d_{\zeta}}\sup_{z\in\mathcal{Z}}\left|\left(\delta_{\ell
1}-\delta_{\ell
2}\right)^{\prime}\varphi_{\ell}\right|\left(1+\left\|z\right\|^{2}\right)^{-w/2}$
$\displaystyle\leq$ $\displaystyle\sum_{\ell=1}^{d_{\zeta}}\left\|\delta_{\ell
1}-\delta_{\ell
2}\right\|\sup_{z\in\mathcal{Z}}\left\|\varphi_{\ell}\right\|\left(1+\left\|z\right\|^{2}\right)^{-w/2}$
$\displaystyle\leq$ $\displaystyle
C\varsigma(r)\sum_{\ell=1}^{d_{\zeta}}\left\|\delta_{\ell 1}-\delta_{\ell
2}\right\|\leq C\varsigma(r)\left\|t_{1}-t_{2}\right\|.$
The claim now follows by (6.8) in Assumption NPN.2, because
$\left\|\gamma_{1}-\gamma_{2}\right\|\leq
C\varsigma(r)\left\|t_{1}-t_{2}\right\|$ for some suitably chosen $C$. ∎
###### Proof of Theorem 6.1.
The proof is omitted as it is entirely analogous to that of Theorem 5.1, with
the exception of one difference when proving equicontinuity. In the setting of
Section 6, we obtain via Proposition 6.1 that
$\left\|\Sigma(\tau)-\Sigma\left(\tau^{*}\right)\right\|=O_{p}\left(\varepsilon\right)$,
the $\varsigma(r)$ factor being omitted because only finitely many
neighborhoods contribute due to compactness of $\mathcal{T}$. ∎
###### Proof of Theorem 6.2.
Writing
$\widehat{\delta}(z)=\left(\widehat{\delta}_{1}^{\prime}\varphi_{1}(z),\ldots,\widehat{\delta}_{d_{\zeta}}^{\prime}\varphi_{d_{\zeta}}(z)\right)^{\prime}$
and taking
$t_{1}=\left(\widehat{\gamma}^{\prime},\widehat{\delta}(z)^{\prime}\right)^{\prime}$
and $t_{2}=\left(\gamma_{0}^{\prime},\zeta_{0}(z)^{\prime}\right)^{\prime}$ in
Proposition 6.1 implies (we suppress the argument $z$)
$\displaystyle\left\|\Sigma\left(\widehat{\tau}\right)-\Sigma\right\|=O_{p}\left(\varsigma(r)\left(\left\|\widehat{\gamma}-\gamma_{0}\right\|+\left\|\widehat{\delta}-\zeta_{0}\right\|\right)\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(\varsigma(r)\left(\left\|\widehat{\tau}-\tau_{0}\right\|+\left\|\nu\right\|\right)\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(\varsigma(r)\max\left\\{\sqrt{d_{\tau}/n},\sqrt{\sum_{\ell=1}^{d_{\zeta}}r_{\ell}^{-2\kappa_{\ell}}}\right\\}\right),$
uniformly on $\mathcal{Z}$. Thus we have
$\left\|\Sigma\left(\widehat{\tau}\right)^{-1}-\Sigma^{-1}\right\|\leq\left\|\Sigma\left(\widehat{\tau}\right)^{-1}\right\|\left\|\Sigma\left(\widehat{\tau}\right)-\Sigma\right\|\left\|\Sigma^{-1}\right\|=O_{p}\left(\varsigma(r)\max\left\\{\sqrt{d_{\tau}/n},\sqrt{\sum_{\ell=1}^{d_{\zeta}}r_{\ell}^{-2\kappa_{\ell}}}\right\\}\right).$
Similarly,
$\displaystyle\left\|\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\Psi\right)^{-1}-\left(\frac{1}{n}\Psi^{\prime}\Sigma^{-1}\Psi\right)^{-1}\right\|$
$\displaystyle\leq$
$\displaystyle\left\|\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\Psi\right)^{-1}\right\|\left\|\frac{1}{n}\Psi^{\prime}\left(\Sigma\left(\widehat{\tau}\right)^{-1}-\Sigma^{-1}\right)\Psi\right\|\left\|\left(\frac{1}{n}\Psi^{\prime}\Sigma^{-1}\Psi\right)^{-1}\right\|$
$\displaystyle=$ $\displaystyle
O_{p}\left(\left\|\Sigma\left(\widehat{\tau}\right)^{-1}-\Sigma^{-1}\right\|\right)=O_{p}\left(\varsigma(r)\max\left\\{\sqrt{d_{\tau}/n},\sqrt{\sum_{\ell=1}^{d_{\zeta}}r_{\ell}^{-2\kappa_{\ell}}}\right\\}\right).$
As in the proof of Theorem 4.2,
$n\widehat{m}_{n}=\widehat{\sigma}^{-2}{u}^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}{u}+\widehat{\sigma}^{-2}\sum_{k=1}^{4}A_{k},$
where $\gamma$ in the parametric setting is changed to $\tau$ in this
nonparametric setting. Then, by the MVT,
$\displaystyle\left|u^{\prime}\left(\Sigma\left(\widehat{\tau}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}-\Sigma^{-1}\Psi[\Psi^{\prime}\Sigma^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma^{-1}\right)u\right|$
$\displaystyle\leq$ $\displaystyle
2\left(\sup_{t}\left\|\frac{1}{\sqrt{n}}u^{\prime}\Sigma\left(t\right)^{-1}\Psi\right\|\left\|\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(t\right)^{-1}\Psi\right)^{-1}\right\|\right)\sum_{j=1}^{d_{\tau}}\left\|\frac{1}{\sqrt{n}}\Psi^{\prime}\left(\Sigma\left(\widetilde{\tau}\right)^{-1}\Sigma_{j}\left(\widetilde{\tau}\right)\Sigma\left(\widetilde{\tau}\right)^{-1}\right)u\right\|$
$\displaystyle\times$
$\displaystyle\left|\widetilde{\tau}_{j}-\tau_{j0}\right|+2\sup_{t}\left\|\frac{1}{\sqrt{n}}u^{\prime}\Sigma\left(t\right)^{-1}\Psi\right\|\left\|\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(t\right)^{-1}\Psi\right)^{-1}\right\|\left\|\frac{1}{\sqrt{n}}\Psi^{\prime}\left(\Sigma_{0}-\Sigma\right)u\right\|$
$\displaystyle+\left\|\frac{1}{\sqrt{n}}u^{\prime}\Sigma^{-1}\Psi\right\|^{2}\left\|\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\Psi\right)^{-1}-\left(\frac{1}{n}\Psi^{\prime}\Sigma^{-1}\Psi\right)^{-1}\right\|$
$\displaystyle=$ $\displaystyle
O_{p}(\sqrt{p})O_{p}(d_{\tau}\sqrt{p}\varsigma(r)/\sqrt{n})+O_{p}(\sqrt{p})O_{p}\left(\sqrt{p}\varsigma(r)\sqrt{\sum_{\ell=1}^{d_{\zeta}}r_{\ell}^{-2\kappa_{\ell}}}\right)$
$\displaystyle+$ $\displaystyle
O_{p}(p)O_{p}\left(\varsigma(r)\max\left\\{\sqrt{d_{\tau}/n},\sqrt{\sum_{\ell=1}^{d_{\zeta}}r_{\ell}^{-2\kappa_{\ell}}}\right\\}\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(p\varsigma(r)\max\left\\{d_{\tau}/\sqrt{n},\sqrt{\sum_{\ell=1}^{d_{\zeta}}r_{\ell}^{-2\kappa_{\ell}}}\right\\}\right)=o_{p}(\sqrt{p}),$
where the last equality holds under the conditions of the theorem. Next, it
remains to show $A_{k}=o_{p}(p^{1/2}),k=1,\ldots,4$. The order of $A_{k}$,
$k\leq 3$, is the same as the parametric case:
$\displaystyle\left|A_{1}\right|$ $\displaystyle=$
$\displaystyle\left|{u}^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\left({\theta}_{0}-\widehat{{f}}\right)\right|\leq\sup_{\alpha,t}\sum_{j=1}^{d_{\alpha}}\left\|u^{\prime}\Sigma\left(t\right)^{-1}\frac{\partial{f}(x,{\alpha})}{\partial\alpha_{j}}\right\|\left|\alpha_{j}^{\ast}-\widetilde{\alpha}_{j}\right|+\frac{p^{1/4}}{n^{1/2}}\sup_{t}\left\|u^{\prime}\Sigma\left(t\right)^{-1}h\right\|$
$\displaystyle=$ $\displaystyle
O_{p}(\sqrt{n})O_{p}(\frac{1}{\sqrt{n}})+O(\frac{p^{1/4}}{n^{1/2}})O_{p}(\sqrt{n})=O_{p}(p^{1/4})=o_{p}(p^{1/2}),$
$\displaystyle|A_{2}|$ $\displaystyle=$
$\displaystyle\left|(u\mathbf{+}\theta_{0}-\widehat{f})^{\prime}\left(\Sigma\left(\widehat{\tau}\right)^{-1}-\Sigma\left(\widehat{\tau}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\right)e\right|$
$\displaystyle\leq$
$\displaystyle\sup_{t}|u^{\prime}\Sigma\left(t\right)^{-1}e|+\sup_{t}\left|u^{\prime}\Sigma\left(t\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(t\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(t\right)^{-1}e\right|$
$\displaystyle+\left\|{\theta}_{0}-\widehat{{f}}\right\|\sup_{t}\left(\left\|\Sigma\left(t\right)^{-1}\right\|+\left\|\Sigma\left(t\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(t\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(t\right)^{-1}\right\|\right)\left\|e\right\|$
$\displaystyle=$ $\displaystyle
O_{p}(p^{-\mu}n^{1/2})+O_{p}(p^{-\mu+1/4}n^{1/2})=O_{p}(p^{-\mu+1/4}n^{1/2})=o_{p}(\sqrt{p}),$
$\displaystyle\left|A_{3}\right|$ $\displaystyle=$
$\displaystyle\left|{u}^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}({\theta}_{0}-\widehat{{f}})\right|$
$\displaystyle\leq$
$\displaystyle\sup_{\alpha,t}\sum_{j=1}^{d_{\alpha}}\left\|u^{\prime}\Sigma\left(t\right)^{-1}\Psi\left(\Psi^{\prime}\Sigma\left(t\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(t\right)^{-1}\frac{\partial{f}(x,{\alpha})}{\partial\alpha_{j}}\right\|\left|\alpha_{j}^{\ast}-\widetilde{\alpha}_{j}\right|$
$\displaystyle+\frac{p^{1/4}}{n^{1/2}}\sup_{t}\left\|u^{\prime}\Sigma\left(t\right)^{-1}\Psi\left(\Psi^{\prime}\Sigma\left(t\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(t\right)^{-1}h\right\|$
$\displaystyle=$ $\displaystyle
O_{p}(1)+O_{p}(p^{1/4})=O_{p}(p^{1/4})=o_{p}(p^{1/2}).$
However, $A_{4}$ has a different order. Under $H_{\ell}$,
$\displaystyle A_{4}$ $\displaystyle=$
$\displaystyle\left({\theta}_{0}-\widehat{{f}}\right)^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\left({\theta}_{0}-\widehat{{f}}\right)$
$\displaystyle=$
$\displaystyle\left({\theta}_{0}-\widehat{{f}}\right)^{\prime}\Sigma_{0}^{-1}\left({\theta}_{0}-\widehat{{f}}\right)+\left({\theta}_{0}-\widehat{{f}}\right)^{\prime}\left(\Sigma\left(\widehat{\tau}\right)^{-1}-\Sigma^{-1}\right)\left({\theta}_{0}-\widehat{{f}}\right)$
$\displaystyle=$
$\displaystyle\frac{p^{1/2}}{n}h^{\prime}\Sigma_{0}^{-1}h+o_{p}(1)+O_{p}\left(p^{1/2}\right)O_{p}\left(\varsigma(r)\max\left\\{\sqrt{d_{\tau}/n},\sqrt{\sum_{\ell=1}^{d_{\zeta}}r_{\ell}^{-2\kappa_{\ell}}}\right\\}\right)$
$\displaystyle=$
$\displaystyle\frac{p^{1/2}}{n}h^{\prime}\Sigma_{0}^{-1}h+o_{p}(\sqrt{p}),$
where the last equality holds under the conditions of the theorem. Combining
these together, we have
$n\widehat{m}_{n}=\widehat{\sigma}^{-2}\widehat{{v}}^{\prime}\Sigma\left(\widehat{\tau}\right)^{-1}\widehat{{u}}={\sigma_{0}^{-2}}\varepsilon^{\prime}\mathscr{V}\varepsilon+\left({p^{1/2}}/{n}\right){h}^{\prime}\Sigma_{0}^{-1}{h}+o_{p}(\sqrt{p}),$
under $H_{\ell}$ and the same expression holds with $h=0$ under $H_{0}$. ∎
###### Proof of Theorem 6.3.
Omitted as it is similar to the proof of Theorem 4.4. ∎
Supplementary online appendix to ‘Consistent specification testing under
spatial dependence’
Abhimanyu Gupta and Xi Qu
## Appendix S.A Additional simulation results: Unboundedly supported
regressors and asymptotic critical values
This section provides additional simulation results using the same design as
in Section 8 of the main body of the paper. Recall that the paper reports only
bootstrap results for the compactly supported regressors case. Here we include
results using asymptotic critical values for both the compactly and unboundedly
supported regressor cases, as well as bootstrap results for the latter,
focusing on the SARARMA(0,1,0) model. The results are in Tables OT.1-OT.4 and
our findings match those in the main text, with the bootstrap typically
offering better size control.
## Appendix S.B Proofs of Theorems 4.2 and 4.4
###### Proof of Theorem 4.2.
From Corollary 4.1 and Lemma LS.2,
$\left\|\Sigma\left(\widehat{\gamma}\right)-\Sigma\right\|=O_{p}\left(\left\|\widehat{\gamma}-\gamma_{0}\right\|\right)=O_{p}\left(\sqrt{d_{\gamma}/n}\right)$,
so we have, from Assumption R.3,
$\left\|\Sigma\left(\widehat{\gamma}\right)^{-1}-\Sigma^{-1}\right\|\leq\left\|\Sigma\left(\widehat{\gamma}\right)^{-1}\right\|\left\|\Sigma\left(\widehat{\gamma}\right)-\Sigma\right\|\left\|\Sigma^{-1}\right\|=O_{p}\left(\left\|\widehat{\gamma}-\gamma_{0}\right\|\right)=O_{p}\left(\sqrt{d_{\gamma}/n}\right).$
(S.B.1)
Similarly,
$\displaystyle\left\|\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}-\left(\frac{1}{n}\Psi^{\prime}\Sigma^{-1}\Psi\right)^{-1}\right\|$
$\displaystyle\leq$
$\displaystyle\left\|\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\right\|\left\|\frac{1}{n}\Psi^{\prime}\left(\Sigma\left(\widehat{\gamma}\right)^{-1}-\Sigma^{-1}\right)\Psi\right\|\left\|\left(\frac{1}{n}\Psi^{\prime}\Sigma^{-1}\Psi\right)^{-1}\right\|$
$\displaystyle\leq$
$\displaystyle\sup_{\gamma\in\Gamma}\left\|\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi\right)^{-1}\right\|\left\|\Sigma\left(\widehat{\gamma}\right)^{-1}-\Sigma^{-1}\right\|\left\|\frac{1}{\sqrt{n}}\Psi\right\|^{2}=O_{p}\left(\left\|\widehat{\gamma}-\gamma_{0}\right\|\right)=O_{p}\left(\sqrt{d_{\gamma}/n}\right).$
By Assumption R.2, we have $\widehat{\alpha}-\alpha^{\ast}=O_{p}(1/\sqrt{n})$.
Let $\theta^{\ast}(x)=\psi(x)^{\prime}\beta^{\ast}$, where
$\beta^{\ast}=\operatorname*{arg\,min}_{\beta}\mathcal{E}\left[y_{i}-\psi(x_{i})^{\prime}\beta\right]^{2}$,
and set $\theta_{ni}=\theta(x_{i})$, $\theta_{0i}=\theta_{0}(x_{i})$,
$\widehat{\theta}_{i}=\psi_{i}^{\prime}\widehat{\beta}$,
$\widehat{f}_{i}=f(x_{i},\widehat{\alpha})$,
$f_{i}^{\ast}=f(x_{i},\alpha^{\ast})$. Then
$\widehat{u}_{i}=y_{i}-f(x_{i},\widehat{\alpha})=u_{i}+\theta_{0i}-\widehat{f}_{i}$.
Let
${\theta_{0}}=(\theta_{0}\left(x_{1}\right),\ldots,\theta_{0}\left(x_{n}\right))^{\prime}$
as before, with similar component-wise notation for the $n$-dimensional
vectors ${\theta^{\ast}}$, $\widehat{f}$, and $u$. As the approximation error
is ${e}={\theta}_{0}-{\theta}^{\ast}={\theta}_{0}-\Psi\beta^{\ast}$,
$\displaystyle\widehat{{\theta}}-{\theta}^{\ast}$ $\displaystyle=$
$\displaystyle\Psi(\widehat{\beta}-\beta^{\ast})=\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}({u+\theta}_{0}-\Psi\beta^{\ast})$
$\displaystyle=$
$\displaystyle\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}({u+e}),$
so that
$\displaystyle n\widehat{m}_{n}$ $\displaystyle=$
$\displaystyle\widehat{\sigma}^{-2}\widehat{{v}}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{{u}}=\widehat{\sigma}^{-2}\left(\widehat{{\theta}}-\widehat{f}\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(y-\widehat{f}\right)$
$\displaystyle=$
$\displaystyle\widehat{\sigma}^{-2}\left(\widehat{{\theta}}-{\theta}^{\ast}+{\theta}^{\ast}-{\theta}_{0}+{\theta}_{0}-\widehat{{f}}\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left({u+\theta}_{0}-\widehat{{f}}\right)$
$\displaystyle=$
$\displaystyle\widehat{\sigma}^{-2}\left[\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}({u+e}{)}-{e}+{\theta}_{0}-\widehat{{f}}\right]^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left({u+\theta}_{0}-\widehat{{f}}\right)$
$\displaystyle=$
$\displaystyle\widehat{\sigma}^{-2}{u}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}{u}+\widehat{\sigma}^{-2}{{u}^{\prime}}\Sigma\left(\widehat{\gamma}\right)^{-1}{\left({\theta}_{0}-\widehat{{f}}\right)}$
$\displaystyle{-}$
$\displaystyle\widehat{\sigma}^{-2}\left({u+\theta}_{0}-\widehat{{f}}\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(I-\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\right){e}$
$\displaystyle+$
$\displaystyle\widehat{\sigma}^{-2}\left({\theta}_{0}-\widehat{{f}}\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}{u}$
$\displaystyle+$
$\displaystyle\widehat{\sigma}^{-2}({\theta}_{0}-\widehat{{f}})^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}({\theta}_{0}-\widehat{{f}})$
$\displaystyle=$
$\displaystyle\widehat{\sigma}^{-2}{u}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}u+\widehat{\sigma}^{-2}\left(A_{1}+A_{2}+A_{3}+A_{4}\right),$
say. First, for any vector $g$ comprising conditioned random variables,
$\mathcal{E}\left[\left(u^{\prime}\Sigma(\gamma)^{-1}{g}\right)^{2}\right]=g^{\prime}\Sigma(\gamma)^{-1}\Sigma\Sigma(\gamma)^{-1}{g}\leq\sup_{\gamma\in\Gamma}\left\|\Sigma(\gamma)^{-1}\right\|^{2}\left\|\Sigma\right\|\left\|g\right\|^{2}=O_{p}\left(\left\|g\right\|^{2}\right),$
uniformly in $\gamma\in\Gamma$, where the expectation is taken conditional on
$g$. Similarly,
$\displaystyle\mathcal{E}\left[\left(u^{\prime}\Sigma(\gamma)^{-1}\Psi\left(\Psi^{\prime}\Sigma(\gamma)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma(\gamma)^{-1}{g}\right)^{2}\right]$
$\displaystyle=$ $\displaystyle
g^{\prime}\Sigma(\gamma)^{-1}\Psi\left(\Psi^{\prime}\Sigma(\gamma)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma(\gamma)^{-1}\Sigma\Sigma(\gamma)^{-1}\Psi\left(\Psi^{\prime}\Sigma(\gamma)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma(\gamma)^{-1}{g}$
$\displaystyle\leq$
$\displaystyle\sup_{\gamma\in\Gamma}\left\|\Sigma(\gamma)^{-1}\right\|^{4}\left\|\Sigma\right\|\left\|\frac{1}{n}\Psi\left(\frac{1}{n}\Psi^{\prime}\Sigma(\gamma)^{-1}\Psi\right)^{-1}\Psi^{\prime}\right\|^{2}\left\|g\right\|^{2}=O_{p}\left(\left\|g\right\|^{2}\right),$
uniformly and, for any $j=1$, …, $d_{\gamma}$,
$\displaystyle\mathcal{E}\left[\left(u^{\prime}\Sigma(\gamma)^{-1}\Sigma_{j}\left(\gamma\right)\Sigma\left(\gamma\right)^{-1}{g}\right)^{2}\right]$
$\displaystyle=$ $\displaystyle
g^{\prime}\Sigma(\gamma)^{-1}\Sigma_{j}\left(\gamma\right)\Sigma\left(\gamma\right)^{-1}\Sigma\Sigma(\gamma)^{-1}\Sigma_{j}\left(\gamma\right)\Sigma\left(\gamma\right)^{-1}{g}$
$\displaystyle\leq$
$\displaystyle\sup_{\gamma\in\Gamma}\left\|\Sigma(\gamma)^{-1}\right\|^{4}\left\|\Sigma_{j}\left(\gamma\right)\right\|^{2}\left\|\Sigma\right\|\left\|g\right\|^{2}=O_{p}\left(\left\|g\right\|^{2}\right).$
Let $\Psi_{k}$ be the $k$-th column of $\Psi$, $k=1,\ldots,p$. Then, we have
$\left\|\Psi_{k}/\sqrt{n}\right\|=O_{p}(1)$ and for any $\gamma\in\Gamma$,
$\displaystyle\mathcal{E}\left\|\frac{1}{\sqrt{n}}u^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi\right\|^{2}$
$\displaystyle\leq$
$\displaystyle{\sum_{k=1}^{p}\mathcal{E}\left(u^{\prime}\Sigma\left(\gamma\right)^{-1}\frac{1}{\sqrt{n}}\Psi_{k}\right)^{2}}=O_{p}\left({p}\right),$
$\displaystyle\mathcal{E}\left\|\frac{1}{\sqrt{n}}u^{\prime}\Sigma\left(\gamma\right)^{-1}\Sigma_{j}\left(\gamma\right)\Sigma\left(\gamma\right)^{-1}\Psi\right\|^{2}$
$\displaystyle\leq$
$\displaystyle{\sum_{k=1}^{p}\mathcal{E}\left(u^{\prime}\Sigma\left(\gamma\right)^{-1}\Sigma_{j}\left(\gamma\right)\Sigma\left(\gamma\right)^{-1}\frac{1}{\sqrt{n}}\Psi_{k}\right)^{2}}=O_{p}\left({p}\right).$
Therefore, by Chebyshev’s inequality,
$\sup_{\gamma\in\Gamma}\left\|\frac{1}{\sqrt{n}}u^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi\right\|=O_{p}(\sqrt{p})\text{
\ \ and \ \
}\sup_{\gamma\in\Gamma}\left\|\frac{1}{\sqrt{n}}u^{\prime}\Sigma\left(\gamma\right)^{-1}\Sigma_{j}\left(\gamma\right)\Sigma\left(\gamma\right)^{-1}\Psi\right\|=O_{p}(\sqrt{p}).$
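As a numerical aside (not part of the proof), the $O_{p}(p)$ rate behind the first Chebyshev bound can be checked exactly in a synthetic Gaussian toy model: with $\mathrm{Var}(u)=\Sigma$, the conditional second moment at $\gamma=\gamma_{0}$ equals $tr(\Psi^{\prime}\Sigma^{-1}\Psi)/n$, which is bounded by $p\left\|\Sigma^{-1}\right\|\max_{k}\left\|\Psi_{k}/\sqrt{n}\right\|^{2}$. All matrices below are illustrative, not objects from the paper.

```python
import numpy as np

# Synthetic check: with Var(u) = Sigma, the exact conditional second moment
#   E || n^{-1/2} u' Sigma^{-1} Psi ||^2 = tr(Psi' Sigma^{-1} Psi) / n
# is bounded by p * ||Sigma^{-1}|| * max_k ||Psi_k / sqrt(n)||^2,
# which is the O_p(p) rate used in the text.
rng = np.random.default_rng(0)
n, p = 200, 10

A = rng.standard_normal((n, n))
Sigma = A @ A.T / n + np.eye(n)          # symmetric positive definite covariance
Psi = rng.standard_normal((n, p))        # basis / regressor matrix

Sigma_inv = np.linalg.inv(Sigma)
exact = np.trace(Psi.T @ Sigma_inv @ Psi) / n
bound = p * np.linalg.norm(Sigma_inv, 2) * max(
    np.sum(Psi[:, k] ** 2) / n for k in range(p)
)
print(exact <= bound)  # → True (the moment bound holds exactly)
```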
By the decomposition
$\displaystyle
u^{\prime}\left(\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}-\Sigma^{-1}\Psi[\Psi^{\prime}\Sigma^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma^{-1}\right)u$
$\displaystyle=$ $\displaystyle
u^{\prime}\left(\Sigma\left(\widehat{\gamma}\right)^{-1}+\Sigma^{-1}\right)\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\left(\sum_{i=1}^{n}e_{in}e_{in}^{\prime}\right)\left(\Sigma\left(\widehat{\gamma}\right)^{-1}-\Sigma^{-1}\right)u$
$\displaystyle+u^{\prime}\Sigma^{-1}\Psi\left([\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}-[\Psi^{\prime}\Sigma^{-1}\Psi]^{-1}\right)\Psi^{\prime}\Sigma^{-1}u$
$\displaystyle=$ $\displaystyle
u^{\prime}\left(\Sigma\left(\widehat{\gamma}\right)^{-1}+\Sigma^{-1}\right)\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\left(\sum_{i=1}^{n}e_{in}e_{in}^{\prime}\right)\sum_{j=1}^{d_{\gamma}}\left(\Sigma\left(\widetilde{\gamma}\right)^{-1}\Sigma_{j}\left(\widetilde{\gamma}\right)\Sigma\left(\widetilde{\gamma}\right)^{-1}\right)$
$\displaystyle\times$ $\displaystyle
u(\widetilde{\gamma}_{j}-\gamma_{j0})+u^{\prime}\Sigma^{-1}\Psi\left([\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}-[\Psi^{\prime}\Sigma^{-1}\Psi]^{-1}\right)\Psi^{\prime}\Sigma^{-1}u,$
where $e_{in}$ is an $n\times 1$ vector with $i$-th entry one and zeros
elsewhere, so $\sum_{i=1}^{n}e_{in}e_{in}^{\prime}=I_{n}$, and
$\displaystyle
e_{in}^{\prime}\left(\Sigma\left(\widehat{\gamma}\right)^{-1}-\Sigma^{-1}\right)u$
$\displaystyle=$
$\displaystyle\sum_{j=1}^{d_{\gamma}}e_{in}^{\prime}\left(\Sigma\left(\widetilde{\gamma}\right)^{-1}\Sigma_{j}\left(\widetilde{\gamma}\right)\Sigma\left(\widetilde{\gamma}\right)^{-1}\right)u(\widetilde{\gamma}_{j}-\gamma_{j0})$
$\displaystyle=$ $\displaystyle
e_{in}^{\prime}\sum_{j=1}^{d_{\gamma}}\left(\Sigma\left(\widetilde{\gamma}\right)^{-1}\Sigma_{j}\left(\widetilde{\gamma}\right)\Sigma\left(\widetilde{\gamma}\right)^{-1}\right)u(\widetilde{\gamma}_{j}-\gamma_{j0}),
where $\widetilde{\gamma}$ is a value between $\widehat{\gamma}$ and
$\gamma_{0}$ due to the mean value theorem. We have
$\displaystyle\left|u^{\prime}\left(\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}-\Sigma^{-1}\Psi[\Psi^{\prime}\Sigma^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma^{-1}\right)u\right|$
$\displaystyle\leq$ $\displaystyle
2\sup_{\gamma\in\Gamma}\left\|\frac{1}{\sqrt{n}}u^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi\right\|\left\|\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi\right)^{-1}\right\|\sum_{j=1}^{d_{\gamma}}\left\|\frac{1}{\sqrt{n}}\Psi^{\prime}\left(\Sigma\left(\gamma\right)^{-1}\Sigma_{j}\left(\gamma\right)\Sigma\left(\gamma\right)^{-1}\right)u\right\|$
$\displaystyle\times$
$\displaystyle\left|\widetilde{\gamma}_{j}-\gamma_{j0}\right|+\left\|\frac{1}{\sqrt{n}}u^{\prime}\Sigma^{-1}\Psi\right\|^{2}\left\|\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}-\left(\frac{1}{n}\Psi^{\prime}\Sigma^{-1}\Psi\right)^{-1}\right\|$
$\displaystyle=$ $\displaystyle
O_{p}(\sqrt{p})O_{p}(d_{\gamma}\sqrt{p}/\sqrt{n})+O_{p}(p)O_{p}(\sqrt{d_{\gamma}}/\sqrt{n})=O_{p}(d_{\gamma}p/\sqrt{n})=o_{p}(\sqrt{p}),$
where the last equality holds under the conditions of the theorem.
It remains to show that
$A_{i}=o_{p}\left({p^{1/2}}\right),i=1,\ldots,4.$ (S.B.2)
It is convenient to perform the calculations under $H_{\ell}$, which covers
$H_{0}$ as a particular case. Using the mean value theorem and either $H_{0}$
or $H_{\ell}$, we can express
${\theta}_{0i}-\widehat{{f}}_{i}={f}_{i}^{\ast}-\widehat{{f}}_{i}-(p^{1/4}/n^{1/2}){h_{i}}=\sum_{j=1}^{d_{\alpha}}\frac{\partial{f}(x_{i},\widetilde{\alpha})}{\partial\alpha_{j}}(\alpha_{j}^{\ast}-\widetilde{\alpha}_{j})-\frac{p^{1/4}}{n^{1/2}}{h_{i},}$
(S.B.3)
where $\widetilde{\alpha}_{j}$ is a value between $\alpha_{j}^{\ast}$ and
$\widehat{\alpha}_{j}$. Then, for any $j=1,\ldots,d_{\alpha}$,
$\left|\alpha_{j}^{\ast}-\widetilde{\alpha}_{j}\right|{=}O_{p}(1/\sqrt{n})$.
Based on
$\sup_{\gamma\in\Gamma}\left|u^{\prime}\Sigma(\gamma)^{-1}\Psi\left(\Psi^{\prime}\Sigma(\gamma)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma(\gamma)^{-1}{g}\right|{=}O_{p}\left(\left\|g\right\|\right)\text{
and
}\sup_{\gamma\in\Gamma}\left|u^{\prime}\Sigma(\gamma)^{-1}g\right|=O_{p}\left(\left\|g\right\|\right)$
for any $\gamma\in\Gamma$ and any conditioned vector $g$, if we take
$g={\partial{f}(x,{\alpha})}/{\partial\alpha_{j}}$ or $g=h$, then both satisfy
$O_{p}\left(\left\|g\right\|\right)=O_{p}\left(\sqrt{n}\right)$ and it follows
that
$\displaystyle\left|A_{1}\right|$ $\displaystyle=$
$\displaystyle\left|{u}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left({\theta}_{0}-\widehat{{f}}\right)\right|\leq\sup_{\gamma,\alpha}\sum_{j=1}^{d_{\alpha}}\left\|u^{\prime}\Sigma(\gamma)^{-1}\frac{\partial{f}(x,{\alpha})}{\partial\alpha_{j}}\right\|\left|\alpha_{j}^{\ast}-\widetilde{\alpha}_{j}\right|+\frac{p^{1/4}}{n^{1/2}}\sup_{\gamma}\left\|u^{\prime}\Sigma(\gamma)^{-1}h\right\|$
$\displaystyle=$ $\displaystyle
O_{p}(\sqrt{n})O_{p}\left(\frac{1}{\sqrt{n}}\right)+O\left(\frac{p^{1/4}}{n^{1/2}}\right)O_{p}(\sqrt{n})=O_{p}(p^{1/4})=o_{p}(p^{1/2}).$
Similarly,
$\displaystyle\left|A_{3}\right|$ $\displaystyle=$
$\displaystyle\left|{u}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}({\theta}_{0}-\widehat{{f}})\right|$
$\displaystyle\leq$
$\displaystyle\sup_{\gamma,\alpha}\sum_{j=1}^{d_{\alpha}}\left\|u^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\frac{\partial{f}(x,{\alpha})}{\partial\alpha_{j}}\right\|\left|\alpha_{j}^{\ast}-\widetilde{\alpha}_{j}\right|$
$\displaystyle+\frac{p^{1/4}}{n^{1/2}}\sup_{\gamma}\left\|u^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}h\right\|$
$\displaystyle=$ $\displaystyle
O_{p}(1)+O_{p}(p^{1/4})=O_{p}(p^{1/4})=o_{p}(p^{1/2}).$
Also, by Assumptions R.2 and R.10, we have
$\left\|{\theta}_{0}-\widehat{{f}}\right\|\leq\sup_{\alpha}\sum_{j=1}^{d_{\alpha}}\left\|\frac{\partial{f}(x,{\alpha})}{\partial\alpha_{j}}\right\|\left|\alpha_{j}^{\ast}-\widetilde{\alpha}_{j}\right|+\left\|h\right\|\frac{p^{1/4}}{n^{1/2}}=O_{p}(p^{1/4}).$
(S.B.4)
By (3.2), we have $\left\|e\right\|=O(p^{-\mu}n^{1/2})$ and
$\displaystyle|A_{2}|$ $\displaystyle=$
$\displaystyle\left|(u\mathbf{+}\theta_{0}-\widehat{f})^{\prime}\left(\Sigma\left(\widehat{\gamma}\right)^{-1}-\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\right)e\right|$
$\displaystyle\leq$
$\displaystyle\sup_{\gamma}|u^{\prime}\Sigma\left(\gamma\right)^{-1}e|+\sup_{\gamma}\left|u^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}e\right|$
$\displaystyle+\left\|{\theta}_{0}-\widehat{{f}}\right\|\sup_{\gamma}\left(\left\|\Sigma\left(\gamma\right)^{-1}\right\|+\left\|\Sigma\left(\gamma\right)^{-1}\Psi[\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\right\|\right)\left\|e\right\|$
$\displaystyle=$ $\displaystyle
O_{p}(p^{-\mu}n^{1/2})+O_{p}(p^{-\mu+1/4}n^{1/2})=O_{p}(p^{-\mu+1/4}n^{1/2})=o_{p}(\sqrt{p}),
where the last equality holds under the conditions of the theorem. Finally,
under $H_{\ell}$,
$\displaystyle A_{4}$ $\displaystyle=$
$\displaystyle\left({\theta}_{0}-\widehat{{f}}\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left({\theta}_{0}-\widehat{{f}}\right)$
$\displaystyle=$
$\displaystyle\left({\theta}_{0}-\widehat{{f}}\right)^{\prime}\Sigma^{-1}\left({\theta}_{0}-\widehat{{f}}\right)+\left({\theta}_{0}-\widehat{{f}}\right)^{\prime}\left(\Sigma\left(\widehat{\gamma}\right)^{-1}-\Sigma^{-1}\right)\left({\theta}_{0}-\widehat{{f}}\right)$
$\displaystyle=$
$\displaystyle\frac{p^{1/2}}{n}h^{\prime}\Sigma^{-1}h+o_{p}(1)+O_{p}\left(p^{1/2}d_{\gamma}^{1/2}/n^{1/2}\right)=\frac{p^{1/2}}{n}h^{\prime}\Sigma^{-1}h+o_{p}(\sqrt{p}).$
Combining these together, we have
$n\widehat{m}_{n}=\widehat{\sigma}^{-2}\widehat{{v}}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{{u}}=\frac{1}{\sigma_{0}^{2}}\varepsilon^{\prime}\mathscr{V}\varepsilon{+}\frac{p^{1/2}}{n}{h}^{\prime}\Sigma^{-1}{h}+o_{p}(\sqrt{p}),$
under $H_{\ell}$ and the same expression holds with $h=0$ under $H_{0}$.
∎
###### Proof of Theorem 4.4.
(1) Follows from Theorems 4.2 and 4.3. (2) Following reasoning analogous to
the proofs of Theorems 4.2 and 4.3, it can be shown that under $H_{1}$,
$\widehat{m}_{n}=n^{-1}{\sigma}^{*-2}(\theta_{0}-f^{\ast})^{\prime}\Sigma\left(\gamma^{\ast}\right)^{-1}(\theta_{0}-f^{\ast})+o_{p}(1).$
Then,
$\mathscr{T}_{n}=\left(n\widehat{m}_{n}-p\right)/{\sqrt{2p}}=\left({n}/{\sqrt{p}}\right){(\theta_{0}-f^{\ast})^{\prime}\Sigma\left(\gamma^{\ast}\right)^{-1}(\theta_{0}-f^{\ast})}/\left({\sqrt{2}n\sigma^{\ast
2}}\right)+o_{p}\left({n}/{\sqrt{p}}\right)$
and for any nonstochastic sequence $\\{C_{n}\\}$, $C_{n}=o(n/p^{1/2})$,
$P(\mathscr{T}_{n}>C_{n})\rightarrow 1,$ so that consistency follows. (3)
Follows from Theorems 4.2 and 4.3. ∎
## Appendix S.C Proof of Theorem 5.1
###### Proof.
We prove the result under $H_{1}$, which is the more challenging case as it
involves nonparametric estimation. The proof under $H_{0}$ is similar. We will
show $\widehat{\phi}\overset{p}{\rightarrow}\phi_{0}$, whence
$\widehat{\beta}\overset{p}{\rightarrow}\beta_{0}$ and
$\widehat{\sigma}^{2}\overset{p}{\rightarrow}\sigma^{2}_{0}$ follow from (5.3)
and (5.4) respectively. First note that
$\mathcal{L}\left(\phi\right)-\mathcal{L}=\log\overline{\sigma}^{2}\left(\phi\right)/\overline{\sigma}^{2}-n^{-1}\log\left|T^{\prime}(\lambda)\Sigma(\gamma)^{-1}T(\lambda)\Sigma\right|=\log\overline{\sigma}^{2}\left(\phi\right)/\sigma^{2}\left(\phi\right)-\log\overline{\sigma}^{2}/\sigma_{0}^{2}+\log
r(\phi),$ (S.C.1)
where recall that
$\sigma^{2}\left(\phi\right)=n^{-1}\sigma_{0}^{2}tr\left(T^{\prime}(\lambda)\Sigma(\gamma)^{-1}T(\lambda)\Sigma\right),\text{
}\overline{\sigma}^{2}=\overline{\sigma}^{2}\left(\phi_{0}\right)=n^{-1}u^{\prime}E^{\prime}MEu,$
using (5.4) and also
$r(\phi)=n^{-1}tr\left(T^{\prime}(\lambda)\Sigma(\gamma)^{-1}T(\lambda)\Sigma\right)/\left|T^{\prime}(\lambda)\Sigma(\gamma)^{-1}T(\lambda)\Sigma\right|^{1/n}$.
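Note that $r(\phi)$ is a (normalized trace)/(normalized determinant) ratio; whenever the matrix inside is similar to a positive definite matrix, the AM-GM inequality applied to its eigenvalues gives $r(\phi)\geq 1$, i.e. $\log r(\phi)\geq 0$. A minimal synthetic illustration (the matrix here is arbitrary, not the one from the proof):

```python
import numpy as np

# For a positive definite matrix A with eigenvalues l_1,...,l_n, AM-GM gives
#   tr(A)/n >= |A|^{1/n},
# i.e. the trace/determinant ratio r is at least 1, so log r >= 0.
rng = np.random.default_rng(2)
n = 6
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)                  # symmetric positive definite matrix

r = (np.trace(A) / n) / np.linalg.det(A) ** (1 / n)
print(r >= 1.0)  # → True
```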
We have
$\overline{\sigma}^{2}\left(\phi\right)=n^{-1}\left\\{S^{-1}\left(\Psi\beta_{0}+u\right)\right\\}^{\prime}S^{\prime}(\lambda)E(\gamma)^{\prime}M\left(\gamma\right)E(\gamma)S(\lambda)S^{-1}\left(\Psi\beta_{0}+u\right)=c_{1}\left(\phi\right)+c_{2}\left(\phi\right)+c_{3}\left(\phi\right)$,
where
$\displaystyle c_{1}\left(\phi\right)$ $\displaystyle=$ $\displaystyle
n^{-1}\beta_{0}^{\prime}\Psi^{\prime}T^{\prime}(\lambda)E(\gamma)^{\prime}M\left(\gamma\right)E(\gamma)T(\lambda)\Psi\beta_{0},$
$\displaystyle\text{\ }c_{2}\left(\phi\right)$ $\displaystyle=$ $\displaystyle
n^{-1}\sigma_{0}^{2}tr\left(T^{\prime}(\lambda)E(\gamma)^{\prime}M\left(\gamma\right)E(\gamma)T(\lambda)\Sigma\right),$
$\displaystyle c_{3}\left(\phi\right)$ $\displaystyle=$ $\displaystyle
n^{-1}tr\left(T^{\prime}(\lambda)E(\gamma)^{\prime}M\left(\gamma\right)E(\gamma)T(\lambda)\left(uu^{\prime}-\sigma_{0}^{2}\Sigma\right)\right)$
$\displaystyle+$ $\displaystyle
2n^{-1}\beta_{0}^{\prime}\Psi^{\prime}T^{\prime}(\lambda)E(\gamma)^{\prime}M\left(\gamma\right)E(\gamma)T(\lambda)u.$
Note that in the particular cases of Theorems 4.1 and 6.1, where
$T(\lambda)=I_{n}$, the $c_{1}$ term vanishes because
$M\left(\gamma\right)E(\gamma)\Psi=0$ and $M\left(\tau\right)E(\tau)\Psi=0$.
Proceeding with the current, more general proof,
$\displaystyle\log\frac{\overline{\sigma}^{2}\left(\phi\right)}{\sigma^{2}\left(\phi\right)}$
$\displaystyle=$
$\displaystyle\log\frac{\overline{\sigma}^{2}\left(\phi\right)}{\left(c_{1}\left(\phi\right)+c_{2}\left(\phi\right)\right)}+\log\frac{c_{1}\left(\phi\right)+c_{2}\left(\phi\right)}{\sigma^{2}\left(\phi\right)}$
$\displaystyle=$
$\displaystyle\log\left(1+\frac{c_{3}\left(\phi\right)}{c_{1}\left(\phi\right)+c_{2}\left(\phi\right)}\right)+\log\left(1+\frac{c_{1}\left(\phi\right)-f\left(\phi\right)}{\sigma^{2}\left(\phi\right)}\right),$
where
$f\left(\phi\right)=n^{-1}\sigma_{0}^{2}tr\left(E^{\prime-1}T^{\prime}(\lambda)E(\gamma)^{\prime}\left(I_{n}-M\left(\gamma\right)\right)E(\gamma)T(\lambda)E^{-1}\right).$
Then (S.C.1) implies
$\displaystyle
P\left(\left\|\widehat{\phi}-\phi_{0}\right\|\in\overline{\mathcal{N}}^{\;\phi}\left(\eta\right)\right)$
$\displaystyle=$ $\displaystyle P\left(\inf_{\phi\in\text{
}\overline{\mathcal{N}}^{\;\phi}\left(\eta\right)}\mathcal{L}\left(\phi\right)-\mathcal{L}\leq
0\right)$ $\displaystyle\leq$ $\displaystyle
P\left(\log\left(1+\underset{\phi\in\text{
}\overline{\mathcal{N}}^{\;\phi}\left(\eta\right)}{\sup}\left|\frac{c_{3}\left(\phi\right)}{c_{1}\left(\phi\right)+c_{2}\left(\phi\right)}\right|\right)+\left|\log\left(\overline{\sigma}^{2}/\sigma_{0}^{2}\right)\right|\right.$
$\displaystyle\left.\geq\inf_{\phi\in\text{
}\overline{\mathcal{N}}^{\;\phi}\left(\eta\right)}\left(\log\left(1+\frac{c_{1}\left(\phi\right)-f\left(\phi\right)}{\sigma^{2}\left(\phi\right)}\right)+\log
r(\phi)\right)\right),$
where recall that
$\overline{\mathcal{N}}^{\;\phi}\left(\eta\right)=\Phi\backslash\mathcal{N}^{\phi}\left(\eta\right),$
$\mathcal{N}^{\phi}\left(\eta\right)=\left\\{\phi:\left\|\phi-\phi_{0}\right\|<\eta\right\\}\cap\Phi.$
Because $\overline{\sigma}^{2}/\sigma_{0}^{2}\overset{p}{\rightarrow}1,$ the
property $\log\left(1+x\right)=x+o\left(x\right)$ as $x\rightarrow 0$ implies
that it is sufficient to show that
$\displaystyle\underset{\phi\in\text{
}\overline{\mathcal{N}}^{\;\phi}\left(\eta\right)}{\sup}\left|\frac{c_{3}\left(\phi\right)}{c_{1}\left(\phi\right)+c_{2}\left(\phi\right)}\right|$
$\displaystyle\overset{p}{\longrightarrow}$ $\displaystyle\text{ }0,$ (S.C.2)
$\displaystyle\underset{\phi\in\text{
}\overline{\mathcal{N}}^{\;\phi}\left(\eta\right)}{\sup}\left|\frac{f\left(\phi\right)}{\sigma^{2}\left(\phi\right)}\right|$
$\displaystyle\overset{p}{\longrightarrow}$ $\displaystyle\text{ }0,$ (S.C.3)
$\displaystyle P\left(\inf_{\phi\in\text{
}\overline{\mathcal{N}}^{\;\phi}\left(\eta\right)}\left\\{\frac{c_{1}\left(\phi\right)}{\sigma^{2}\left(\phi\right)}+\log
r(\phi)\right\\}>0\right)$ $\displaystyle\longrightarrow$ $\displaystyle\text{
}1.$ (S.C.4)
Because
$\overline{\mathcal{N}}^{\;\phi}\left(\eta\right)\subseteq\left\\{\Lambda\times\overline{\mathcal{N}}^{\;\gamma}\left(\eta/2\right)\right\\}\cup\left\\{\overline{\mathcal{N}}^{\;\lambda}\left(\eta/2\right)\times\Gamma\right\\}$,
we have
$\displaystyle P\left(\inf_{\phi\in\text{
}\overline{\mathcal{N}}^{\;\phi}\left(\eta\right)}\left\\{\frac{c_{1}\left(\phi\right)}{\sigma^{2}\left(\phi\right)}+\log
r(\phi)\right\\}>0\right)$ $\displaystyle\geq$ $\displaystyle
P\left(\min\left\\{\underset{\Lambda\times\overline{\mathcal{N}}^{\;\gamma}\left(\eta/2\right)}{\inf}\frac{c_{1}\left(\phi\right)}{\sigma^{2}\left(\phi\right)},\underset{\overline{\mathcal{N}}^{\;\lambda}\left(\eta/2\right)}{\inf}\log
r(\phi)\right\\}>0\right)$ $\displaystyle\geq$ $\displaystyle
P\left(\min\left\\{\underset{\Lambda\times\overline{\mathcal{N}}^{\;\gamma}\left(\eta/2\right)}{\inf}\frac{c_{1}\left(\phi\right)}{C},\underset{\overline{\mathcal{N}}^{\lambda}\left(\eta/2\right)}{\inf}\log
r(\phi)\right\\}>0\right),$
from Assumption SAR.2, whence Assumptions SAR.3 and SAR.4 imply (S.C.4). Again
using Assumption SAR.2, uniformly in $\phi$,
$\left|f\left(\phi\right)/\sigma^{2}\left(\phi\right)\right|=O_{p}\left(\left|f\left(\phi\right)\right|\right)$
and
$\displaystyle\left|f\left(\phi\right)\right|$ $\displaystyle=$ $\displaystyle
O_{p}\left(tr\left(E^{\prime-1}T^{\prime}(\lambda)\Sigma(\gamma)^{-1}\Psi\left(\Psi^{\prime}\Sigma(\gamma)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma(\gamma)^{-1}T(\lambda)E^{-1}\right)/n\right)$
(S.C.5) $\displaystyle=$ $\displaystyle
O_{p}\left(tr\left(E^{\prime-1}T^{\prime}(\lambda)\Sigma(\gamma)^{-1}\Psi\Psi^{\prime}\Sigma(\gamma)^{-1}T(\lambda)E^{-1}\right)/n^{2}\right)=O_{p}\left(\left\|\Psi^{\prime}\Sigma(\gamma)^{-1}T(\lambda)E^{-1}/n\right\|_{F}^{2}\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(\left\|\Psi/n\right\|_{F}^{2}\overline{\varphi}^{2}\left(\Sigma(\gamma)^{-1}\right)\left\|T(\lambda)\right\|^{2}\left\|E^{-1}\right\|^{2}\right)=O_{p}\left(\left\|\Psi/n\right\|_{F}^{2}\left\|T(\lambda)\right\|^{2}\overline{\varphi}\left(\Sigma\right)/\underline{\varphi}^{2}\left(\Sigma(\gamma)\right)\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(\left\|T(\lambda)\right\|^{2}/n\right),$
where we have twice made use of the inequality
$\left\|AB\right\|_{F}\leq\left\|A\right\|_{F}\left\|B\right\|$ (S.C.6)
for generic multiplication-compatible matrices $A$ and $B$. (S.C.3) now
follows by Assumption SAR.1 and compactness of $\Lambda$ because
$T(\lambda)=I_{n}+\sum_{j=1}^{d_{\lambda}}\left(\lambda_{0j}-\lambda_{j}\right)G_{j}$.
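Inequality (S.C.6) itself is elementary ($\left\|\cdot\right\|_{F}$ the Frobenius norm, $\left\|\cdot\right\|$ the spectral norm); a quick numerical sanity check on arbitrary synthetic matrices, purely for illustration:

```python
import numpy as np

# Check of inequality (S.C.6): ||AB||_F <= ||A||_F * ||B||,
# with ||.||_F the Frobenius norm and ||B|| the spectral norm.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 7))
B = rng.standard_normal((7, 4))

lhs = np.linalg.norm(A @ B, "fro")
rhs = np.linalg.norm(A, "fro") * np.linalg.norm(B, 2)
print(lhs <= rhs)  # → True
```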
Finally consider (S.C.2). We first prove pointwise convergence. For any fixed
$\phi\in\overline{\mathcal{N}}^{\;\phi}\left(\eta\right)$ and large enough
$n$, Assumptions SAR.2 and SAR.4 imply
$\displaystyle\left\\{c_{1}\left(\phi\right)\right\\}^{-1}$ $\displaystyle=$
$\displaystyle O_{p}\left(\left\|\beta_{0}\right\|^{-2}\right)=O_{p}(1)$
(S.C.7) $\displaystyle\left\\{c_{2}\left(\phi\right)\right\\}^{-1}$
$\displaystyle=$ $\displaystyle O_{p}(1),$ (S.C.8)
because
$\left\\{n^{-1}\sigma_{0}^{2}tr\left(T^{\prime}(\lambda)\Sigma(\gamma)^{-1}T(\lambda)E^{-1}\right)\right\\}^{-1}=O_{p}(1)$
and, proceeding as in the bound for $\left|f(\phi)\right|$,
$tr\left(E^{\prime-1}T^{\prime}(\lambda)E(\gamma)^{\prime}\left(I-M\left(\gamma\right)\right)E(\gamma)T(\lambda)E^{-1}\right)=O_{p}\left(\left\|T(\lambda)\right\|^{2}/n\right)=O_{p}\left(1/n\right)$.
In fact it is worth noting for the equicontinuity argument presented later
that Assumptions SAR.2 and SAR.4 actually imply that (S.C.7) and (S.C.8) hold
uniformly over $\overline{\mathcal{N}}^{\phi}(\eta)$, a property not needed
for the present pointwise arguments. Thus
$c_{3}\left(\phi\right)/\left(c_{1}\left(\phi\right)+c_{2}\left(\phi\right)\right)=O_{p}\left(\left|c_{3}\left(\phi\right)\right|\right)$
where, writing
$\mathfrak{B}(\phi)=T^{\prime}(\lambda)E(\gamma)^{\prime}M\left(\gamma\right)E(\gamma)T(\lambda)$
with typical element $\mathfrak{b}_{rs}(\phi)$, $r,s=1,\ldots,n$,
$c_{3}\left(\phi\right)$ has mean $0$ and variance
$O_{p}\left(\frac{\left\|\mathfrak{B}(\phi)\Sigma\right\|_{F}^{2}}{n^{2}}+\frac{\sum_{r,s,t,v=1}^{n}\mathfrak{b}_{rs}(\phi)\mathfrak{b}_{tv}(\phi)\kappa_{rstv}}{n^{2}}+\frac{\left\|\beta_{0}^{\prime}\Psi^{\prime}\mathfrak{B}(\phi)E^{-1}\right\|^{2}}{n^{2}}\right),$
(S.C.9)
with $\kappa_{rstv}$ denoting the fourth cumulant of
$u_{r},u_{s},u_{t},u_{v}$, $r,s,t,v=1,\ldots,n$. Under the linear process
assumed in Assumption R.4 it is known that
$\sum_{r,s,t,v=1}^{n}\kappa^{2}_{rstv}=O(n).$ (S.C.10)
Using (S.C.6) and Assumptions SAR.1 and R.3, the first term in parentheses in
(S.C.9) is
$\displaystyle
O_{p}\left(\left\|\mathfrak{B}(\phi)\right\|_{F}^{2}\overline{\varphi}^{2}\left(\Sigma\right)/n^{2}\right)$
$\displaystyle=$ $\displaystyle
O_{p}\left(\left\|T(\lambda)\right\|_{F}^{2}\left\|E(\gamma)\right\|^{4}\left\|M(\gamma)\right\|^{2}\left\|T(\lambda)\right\|^{2}/n^{2}\right)$
(S.C.11) $\displaystyle=$ $\displaystyle
O_{p}\left(\left\|T(\lambda)\right\|^{4}/\left(n\underline{\varphi}^{2}\left(\Sigma(\gamma)\right)\right)\right)=O_{p}\left(\left\|T(\lambda)\right\|^{4}/n\right),$
while the second is similarly
$O_{p}\left\\{\left(\left\|\mathfrak{B}(\phi)\right\|_{F}^{2}/n\right)\left(\sum_{r,s,t,v=1}^{n}\kappa^{2}_{rstv}/n^{2}\right)^{\frac{1}{2}}\right\\}=o_{p}\left(\left\|T(\lambda)\right\|^{4}\right),$
(S.C.12)
using (S.C.10). Finally, the third term in parentheses in (S.C.9) is
$O_{p}\left(\left\|\mathfrak{B}(\phi)\right\|^{2}/n\right)=O_{p}\left(\left\|T(\lambda)\right\|^{4}/n\right).$
(S.C.13)
By compactness of $\Lambda$ and Assumption SAR.1, (S.C.11), (S.C.12) and
(S.C.13) are negligible, thus pointwise convergence is established.
Uniform convergence will follow from an equicontinuity argument. First, for
arbitrary $\varepsilon>0$ we can find points
$\phi_{*}=\left(\lambda^{\prime}_{*},\gamma^{\prime}_{*}\right)^{\prime}$,
possibly infinitely many, such that the neighborhoods
$\left\|\phi-\phi_{*}\right\|<\varepsilon$ form an open cover of
$\overline{\mathcal{N}}^{\phi}(\eta)$. Since $\Phi$ is compact any open cover
has a finite subcover and thus we may in fact choose finitely many
$\phi_{*}=\left(\lambda^{\prime}_{*},\gamma^{\prime}_{*}\right)^{\prime}$,
whence it suffices to prove
$\underset{\left\|\phi-\phi_{{}_{\ast}}\right\|<\varepsilon}{\sup}\left|\frac{c_{3}\left(\phi\right)}{c_{1}\left(\phi\right)+c_{2}\left(\phi\right)}-\frac{c_{3}\left(\phi_{\ast}\right)}{c_{1}\left(\phi_{\ast}\right)+c_{2}\left(\phi_{\ast}\right)}\right|\overset{p}{\longrightarrow}0.$
Proceeding as in Gupta and Robinson (2018), we denote the two components of
$c_{3}\left(\phi\right)$ by $c_{31}\left(\phi\right),$
$c_{32}\left(\phi\right),$ and are left with establishing the negligibility of
$\displaystyle\frac{\left|c_{31}\left(\phi\right)-c_{31}\left(\phi_{\ast}\right)\right|}{c_{2}\left(\phi\right)}+\frac{\left|c_{32}\left(\phi\right)-c_{32}\left(\phi_{\ast}\right)\right|}{c_{1}\left(\phi\right)}+\frac{\left|c_{3}\left(\phi_{\ast}\right)\right|}{c_{1}\left(\phi\right)c_{1}\left(\phi_{\ast}\right)}\left|c_{1}\left(\phi_{\ast}\right)-c_{1}\left(\phi\right)\right|$
(S.C.14) $\displaystyle+$
$\displaystyle\frac{\left|c_{3}\left(\phi_{\ast}\right)\right|}{c_{2}\left(\phi\right)c_{2}\left(\phi_{\ast}\right)}\left|c_{2}\left(\phi_{\ast}\right)-c_{2}\left(\phi\right)\right|,$
uniformly on $\left\|\phi-\phi_{\ast}\right\|<\varepsilon$. Because (S.C.7)
and (S.C.8) hold uniformly over $\overline{\mathcal{N}}^{\phi}(\eta)$, we
first consider only
the numerators in the first two terms in (S.C.14). As in the proof of Theorem
1 of Delgado and Robinson (2015), (S.C.6) implies that
$\mathcal{E}\left(\sup_{\left\|\phi-\phi_{{}_{\ast}}\right\|<\varepsilon}\left|c_{31}\left(\phi\right)-c_{31}\left(\phi_{\ast}\right)\right|\right)$
is bounded by
$n^{-1}\left(\mathcal{E}\left\|u\right\|^{2}+\sigma_{0}^{2}tr\Sigma\right)\sup_{\left\|\phi-\phi_{{}_{\ast}}\right\|<\varepsilon}\left\|\mathfrak{B}(\phi)-\mathfrak{B}(\phi_{*})\right\|=O_{p}\left(\sup_{\left\|\phi-\phi_{{}_{\ast}}\right\|<\varepsilon}\left\|\mathfrak{B}(\phi)-\mathfrak{B}(\phi_{*})\right\|\right),$
because $\mathcal{E}\left\|u\right\|^{2}=O(n)$ and $tr\Sigma=O(n)$.
$\mathfrak{B}(\phi)-\mathfrak{B}(\phi_{*})$ can be written as
$\displaystyle\left(T(\lambda)-T(\lambda_{*})\right)^{\prime}E(\gamma)^{\prime}M(\gamma)E(\gamma)T(\lambda)+T(\lambda_{*})^{\prime}E(\gamma_{*})^{\prime}M(\gamma_{*})E(\gamma_{*})\left(T(\lambda)-T(\lambda_{*})\right)$
(S.C.15) $\displaystyle+$ $\displaystyle
T^{\prime}(\lambda_{*})\left(E(\gamma)^{\prime}M(\gamma)E(\gamma)-E(\gamma_{*})^{\prime}M(\gamma_{*})E(\gamma_{*})\right)T(\lambda),$
which, by the triangle inequality, has spectral norm bounded by
$\displaystyle\left\|T(\lambda)-T(\lambda_{*})\right\|\left(\left\|E(\gamma)\right\|^{2}\left\|T(\lambda)\right\|+\left\|E(\gamma_{*})\right\|^{2}\left\|T(\lambda_{*})\right\|\right)$
$\displaystyle+$
$\displaystyle\left\|T(\lambda_{*})\right\|\left\|E(\gamma)^{\prime}M(\gamma)E(\gamma)-E(\gamma_{*})^{\prime}M(\gamma_{*})E(\gamma_{*})\right\|\left\|T(\lambda)\right\|$
(S.C.16) $\displaystyle=$ $\displaystyle
O_{p}\left(\left\|T(\lambda)-T(\lambda_{*})\right\|+\left\|E(\gamma)^{\prime}M(\gamma)E(\gamma)-E(\gamma_{*})^{\prime}M(\gamma_{*})E(\gamma_{*})\right\|\right).$
By Assumption SAR.1 the first term in parentheses on the right side of
(S.C.16) is bounded uniformly on
$\left\|\phi-\phi_{*}\right\|<\varepsilon$ by
$\sum_{j=1}^{d_{\lambda}}\left|\lambda_{j}-\lambda_{*j}\right|\left\|G_{j}\right\|\leq\max_{j=1,\ldots,d_{\lambda}}\left\|G_{j}\right\|\left\|\lambda-\lambda_{*}\right\|=O_{p}(\varepsilon),$
(S.C.17)
while because
$E(\gamma)^{\prime}M(\gamma)E(\gamma)=n^{-1}\Sigma(\gamma)^{-1}\Psi\left(n^{-1}\Psi^{\prime}\Sigma(\gamma)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma(\gamma)^{-1}$
for any $\gamma\in\Gamma$, the second one can be decomposed into terms with
bounds typified by
$\displaystyle
n^{-1}\left\|\Sigma(\gamma)^{-1}-\Sigma(\gamma_{*})^{-1}\right\|\left\|\Psi\right\|^{2}\left\|\left(n^{-1}\Psi^{\prime}\Sigma(\gamma)^{-1}\Psi\right)^{-1}\right\|\left\|\Sigma(\gamma)^{-1}\right\|^{2}$
$\displaystyle\leq$ $\displaystyle
n^{-1}\left\|\Sigma(\gamma)-\Sigma(\gamma_{*})\right\|\left\|\Psi\right\|^{2}\left\|\left(n^{-1}\Psi^{\prime}\Sigma(\gamma)^{-1}\Psi\right)^{-1}\right\|\left\|\Sigma(\gamma)^{-1}\right\|^{3}\left\|\Sigma(\gamma_{*})^{-1}\right\|$
$\displaystyle=$ $\displaystyle
O_{p}\left(\left\|\Sigma(\gamma)-\Sigma(\gamma_{*})\right\|\right)=O_{p}(\varepsilon),$
uniformly on $\left\|\phi-\phi_{*}\right\|<\varepsilon$, by Assumptions R.3
and R.8, Proposition 4.1 and the inequality
$\left\|A\right\|\leq\left\|A\right\|_{F}$ for a generic matrix $A$, so that
$\sup_{\left\|\phi-\phi_{*}\right\|<\varepsilon}\left\|\mathfrak{B}(\phi)-\mathfrak{B}(\phi_{*})\right\|=O_{p}(\varepsilon).$
(S.C.18)
Thus equicontinuity of the first term in (S.C.14) follows because
$\varepsilon$ is arbitrary. The equicontinuity of the second term in (S.C.14)
follows in much the same way. Indeed
$\sup_{\left\|\phi-\phi_{*}\right\|<\varepsilon}c_{32}\left(\phi\right)-c_{32}\left(\phi_{\ast}\right)=2n^{-1}\beta_{0}^{\prime}\Psi^{\prime}\sup_{\left\|\phi-\phi_{*}\right\|<\varepsilon}\left(\mathfrak{B}(\phi)-\mathfrak{B}(\phi_{*})\right)u=O_{p}\left(\sup_{\left\|\phi-\phi_{*}\right\|<\varepsilon}\left\|\mathfrak{B}(\phi)-\mathfrak{B}(\phi_{*})\right\|\right)=O_{p}(\varepsilon)$,
using earlier arguments and (S.C.18). Because $c_{1}(\phi)$ is bounded and
bounded away from zero in probability (see S.C.7) for sufficiently large $n$
and all $\phi\in\overline{\mathcal{N}}^{\phi}(\eta)$, the third term in
(S.C.14) may be bounded by
$\left(\left|c_{3}(\phi_{*})\right|/c_{1}(\phi_{*})\right)\left(1+c_{1}(\phi_{*})/c_{1}(\phi)\right)\overset{p}{\longrightarrow}0,$
convergence being uniform on $\left\|\phi-\phi_{*}\right\|<\varepsilon$ by
pointwise convergence of $c_{3}(\phi)/\left(c_{1}(\phi)+c_{2}(\phi)\right)$,
cf. Gupta and Robinson (2018). The uniform convergence to zero of the fourth
term in (S.C.14) follows in identical fashion, because $c_{2}(\phi)$ is
bounded and bounded away from zero (see (S.C.8)) in probability for
sufficiently large $n$ and all $\phi\in\overline{\mathcal{N}}^{\phi}(\eta)$.
This concludes the proof. ∎
## Appendix S.D Lemmas
###### Lemma LS.1.
Under the conditions of Theorem 4.1,
$c_{1}(\gamma)=n^{-1}\beta^{\prime}\Psi^{\prime}E^{\prime}(\gamma)M(\gamma)E(\gamma)\Psi\beta+o_{p}(1).$
###### Proof.
First,
$c_{1}(\gamma)=n^{-1}\beta^{\prime}\Psi^{\prime}E^{\prime}(\gamma)M(\gamma)E(\gamma)\Psi\beta+c_{12}(\gamma)+c_{13}(\gamma),$
with
$c_{12}(\gamma)=2n^{-1}{e}^{\prime}E^{\prime}(\gamma)M(\gamma)E(\gamma)\Psi\beta$
and
$c_{13}(\gamma)=n^{-1}{e}^{\prime}E^{\prime}(\gamma)M(\gamma)E(\gamma){e}$. It
is readily seen that $c_{12}(\gamma)$ and $c_{13}(\gamma)$ are negligible. ∎
###### Lemma LS.2.
Under the conditions of Theorem 4.2 or Theorem 5.2,
$\left\|\widehat{\gamma}-\gamma_{0}\right\|=O_{p}\left(\sqrt{d_{\gamma}/n}\right).$
###### Proof.
We show the details for the setting of Theorem 4.2 and omit the details for
the setting of Theorem 5.2. Write $l=\partial
L(\beta_{0},\gamma_{0})/\partial\gamma$. By Robinson (1988), we have
$\left\|\widehat{\gamma}-\gamma_{0}\right\|=O_{p}\left(\left\|l\right\|\right)$.
Now $l=\left(l_{1},\ldots,l_{d_{\gamma}}\right)^{\prime}$, with
$l_{j}=n^{-1}tr\left(\Sigma^{-1}\Sigma_{j}\right)-n^{-1}\sigma_{0}^{-2}u^{\prime}\Sigma^{-1}\Sigma_{j}\Sigma^{-1}u$.
Next,
$\mathcal{E}\left\|l\right\|^{2}=\sum_{j=1}^{d_{\gamma}}\mathcal{E}\left(l_{j}^{2}\right)$
and
$\mathcal{E}\left(l_{j}^{2}\right)=\frac{1}{n^{2}\sigma_{0}^{4}}var\left(u^{\prime}\Sigma^{-1}\Sigma_{j}\Sigma^{-1}u\right)=\frac{1}{n^{2}\sigma_{0}^{4}}var\left(\varepsilon^{\prime}B^{\prime}\Sigma^{-1}\Sigma_{j}\Sigma^{-1}B\varepsilon\right)=\frac{1}{n^{2}\sigma_{0}^{4}}var\left(\varepsilon^{\prime}D_{j}\varepsilon\right),$
(S.D.1)
say. But, writing $d_{j,st}$ for a typical element of the infinite dimensional
matrix $D_{j}$, we have
$var\left(\varepsilon^{\prime}D_{j}\varepsilon\right)=\left(\mu_{4}-3\sigma_{0}^{4}\right)\sum_{s=1}^{\infty}d_{j,ss}^{2}+2\sigma_{0}^{4}tr\left(D_{j}^{2}\right)=\left(\mu_{4}-3\sigma_{0}^{4}\right)\sum_{s=1}^{\infty}d_{j,ss}^{2}+2\sigma_{0}^{4}\sum_{s,t=1}^{\infty}d_{j,st}^{2}.$
(S.D.2)
Next, by Assumptions R.4, R.3 and R.9
$\sum_{s=1}^{\infty}d_{j,ss}^{2}=\sum_{s=1}^{\infty}\left(b_{s}^{\prime}\Sigma^{-1}\Sigma_{j}\Sigma^{-1}b_{s}\right)^{2}\leq\left(\sum_{s=1}^{\infty}\left\|b_{s}\right\|^{2}\right)\left\|\Sigma^{-1}\right\|^{2}\left\|\Sigma_{j}\right\|=O\left(\sum_{j=1}^{n}\sum_{s=1}^{\infty}b^{*2}_{js}\right)=O(n).$
(S.D.3)
Similarly,
$\sum_{s,t=1}^{\infty}d_{j,st}^{2}=\sum_{s=1}^{\infty}b_{s}^{\prime}\Sigma^{-1}\Sigma_{j}\Sigma^{-1}\left(\sum_{t=1}^{\infty}b_{t}b_{t}^{\prime}\right)\Sigma^{-1}\Sigma_{j}\Sigma^{-1}b_{s}=\sum_{s=1}^{\infty}b_{s}^{\prime}\Sigma^{-1}\Sigma_{j}\Sigma^{-1}\Sigma_{j}\Sigma^{-1}b_{s}=O(n).$
(S.D.4)
Using (S.D.3) and (S.D.4) in (S.D.2) implies that
$\mathcal{E}\left(l_{j}^{2}\right)=O\left(n^{-1}\right)$, by (S.D.1). Thus we
have $\mathcal{E}\left\|l\right\|^{2}=O\left(d_{\gamma}/n\right)$, and thus
$\left\|l\right\|=O_{p}\left(\sqrt{d_{\gamma}/n}\right)$, by Markov’s
inequality, proving the lemma. ∎
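The variance identity (S.D.2) can be checked numerically. The sketch below is illustrative only, not part of the proof: it uses Gaussian errors, for which $\mu_{4}=3\sigma_{0}^{4}$ so the cumulant term vanishes and $var\left(\varepsilon^{\prime}D_{j}\varepsilon\right)=2\sigma_{0}^{4}tr\left(D_{j}^{2}\right)$, and it replaces the infinite-dimensional $D_{j}$ by a toy finite symmetric matrix `D`.

```python
import numpy as np

# Monte Carlo sketch of the variance identity (S.D.2):
# for symmetric D and i.i.d. eps with variance sigma0^2 and fourth moment mu4,
#   var(eps' D eps) = (mu4 - 3*sigma0^4) * sum_s d_ss^2 + 2*sigma0^4 * tr(D^2).
# With Gaussian errors mu4 = 3*sigma0^4, so only the trace term remains.
rng = np.random.default_rng(1)
n, reps, sigma0 = 6, 200_000, 1.0
M = rng.standard_normal((n, n))
D = (M + M.T) / 2                            # toy symmetric D

eps = rng.normal(0.0, sigma0, size=(reps, n))
q = np.einsum("ri,ij,rj->r", eps, D, eps)    # eps' D eps, one value per draw
mc_var = q.var()
theory = 2 * sigma0**4 * np.trace(D @ D)
assert abs(mc_var - theory) / theory < 0.05  # agree within Monte Carlo error
```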
###### Lemma LS.3.
Under the conditions of Theorem 4.3,
$\mathcal{E}\left({\sigma_{0}^{-2}}\varepsilon^{\prime}\mathscr{V}\varepsilon\right)=p$
and
$Var\left({\sigma_{0}^{-2}}\varepsilon^{\prime}\mathscr{V}\varepsilon\right)/2p\rightarrow
1$.
###### Proof.
As
$\mathcal{E}\left({\sigma_{0}^{-2}}\varepsilon^{\prime}\mathscr{V}\varepsilon\right)=tr\left(\mathcal{E}[B^{\prime}\Sigma^{-1}\Psi(\Psi^{\prime}\Sigma^{-1}\Psi)^{-1}\Psi^{\prime}\Sigma^{-1}B]\right)=p,$
and
$Var\left(\frac{1}{\sigma_{0}^{2}}\varepsilon^{\prime}\mathscr{V}\varepsilon\right)=\left(\frac{\mu_{4}}{\sigma_{0}^{4}}-3\right)\sum_{s=1}^{\infty}\mathcal{E}(v_{ss}^{2})+\mathcal{E}[tr(\mathscr{V}\mathscr{V}^{\prime})+tr(\mathscr{V}^{2})]=\left(\frac{\mu_{4}}{\sigma_{0}^{4}}-3\right)\sum_{s=1}^{\infty}v_{ss}^{2}+2p,$
(S.D.5)
it suffices to show that
$(2p)^{-1}\sum_{s=1}^{\infty}v_{ss}^{2}\overset{p}{\rightarrow}0.$ (S.D.6)
Because $v_{ss}=b_{s}^{\prime}\mathscr{M}b_{s}$, we have
$v_{ss}^{2}=\left(\sum_{i,j=1}^{n}b_{is}b_{js}m_{ij}\right)^{2}$. Thus, using
Assumption R.4 and (A.5), we have
$\displaystyle\sum_{s=1}^{\infty}v_{ss}^{2}$ $\displaystyle\leq$
$\displaystyle\left(\sup_{i,j}\left|m_{ij}\right|\right)^{2}\sum_{s=1}^{\infty}\left(\sum_{i,j=1}^{n}\left|b^{*}_{is}\right|\left|b^{*}_{js}\right|\right)^{2}=O_{p}\left(p^{2}n^{-2}\left(\sup_{s}\sum_{i=1}^{n}\left|b^{*}_{is}\right|\right)^{3}\sum_{i=1}^{n}\sum_{s=1}^{\infty}\left|b^{*}_{is}\right|\right)$
(S.D.7) $\displaystyle=$ $\displaystyle O_{p}\left(p^{2}n^{-1}\right),$
establishing (S.D.6) because $p^{2}/n\rightarrow 0$. ∎
###### Lemma LS.4.
Under the conditions of Theorem 6.2,
$\left\|\widehat{\tau}-\tau_{0}\right\|=O_{p}\left(\sqrt{d_{\tau}/n}\right).$
###### Proof.
The proof is similar to that of Lemma LS.2 and is omitted. ∎
Denote $H(\gamma)=I_{n}+\sum_{j=m_{1}+1}^{m_{1}+m_{2}}\gamma_{j}W_{j}$ and
$K(\gamma)=I_{n}-\sum_{j=1}^{m_{1}}\gamma_{j}W_{j}$. Let
$G_{j}(\gamma)=W_{j}K^{-1}(\gamma)$, $j=1,\ldots,m_{1}$,
$T_{j}=H^{-1}(\gamma)W_{j}$, $j=m_{1}+1,\ldots,m_{1}+m_{2}$ and, for a generic
matrix $A$, denote $\overline{A}=A+A^{\prime}$. Our final conditions may
differ according to whether the $W_{j}$ are of general form or have ‘single
nonzero diagonal block structure’, see e.g. Gupta and Robinson (2015). To
define these, denote by $V$ an $n\times n$ block diagonal matrix with $i$-th
block $V_{i}$, an $s_{i}\times s_{i}$ matrix, where
$\sum_{i=1}^{m_{1}+m_{2}}s_{i}=n$, and for $i=1,...,m_{1}+m_{2}$ obtain
$W_{i}$ from $V$ by replacing each $V_{j}$, $j\neq i$, by a matrix of zeros.
Thus $V=\sum_{i=1}^{m_{1}+m_{2}}W_{i}$.
###### Lemma LS.5.
For the spatial error model with SARMA$(p,q)$ errors, if
$\sup_{\gamma\in\Gamma^{o}}\left(\left\|K^{-1}(\gamma)\right\|+\left\|K^{\prime-1}(\gamma)\right\|+\left\|H^{-1}(\gamma)\right\|+\left\|H^{\prime-1}(\gamma)\right\|\right)+\max_{j=1,\ldots,m_{1}+m_{2}}\left\|W_{j}\right\|<C,$
(S.D.8)
then
$\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)=A^{-1}(\gamma)\left(\sum_{j=1}^{m_{1}}\gamma^{\dagger}_{j}\overline{H^{-1}(\gamma)G_{j}(\gamma)}+\sum_{j=m_{1}+1}^{m_{1}+m_{2}}\gamma^{\dagger}_{j}\overline{T_{j}(\gamma)}\right)A^{\prime-1}(\gamma).$
###### Proof.
We first show that $D\Sigma\in\mathscr{L}\left(\Gamma^{o},\mathcal{M}^{n\times
n}\right)$. Clearly, $D\Sigma$ is a linear map and (S.D.8) implies
$\left\|\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)\right\|\leq
C\left\|\gamma^{\dagger}\right\|_{1},$
in the general case and
$\left\|\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)\right\|\leq
C\max_{j=1,\ldots,m_{1}+m_{2}}\left|\gamma_{j}^{\dagger}\right|,$
in the ‘single nonzero diagonal block’ case. Thus $D\Sigma$ is a bounded
linear operator between two normed linear spaces, i.e. it is a continuous
linear operator.
With $A(\gamma)=H^{-1}(\gamma)K(\gamma)$, we now show that
$\frac{\left\|A^{-1}\left(\gamma+\gamma^{\dagger}\right)A^{\prime-1}\left(\gamma+\gamma^{\dagger}\right)-A^{-1}\left(\gamma\right)A^{\prime-1}\left(\gamma\right)-\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)\right\|}{\left\|\gamma^{\dagger}\right\|_{g}}\rightarrow
0,\text{ as }\left\|\gamma^{\dagger}\right\|_{g}\rightarrow 0,$ (S.D.9)
where $\left\|\cdot\right\|_{g}$ is either the 1-norm or the max norm on
$\Gamma$. First, note that
$\displaystyle
A^{-1}\left(\gamma+\gamma^{\dagger}\right)A^{\prime-1}\left(\gamma+\gamma^{\dagger}\right)-A^{-1}(\gamma)A^{\prime-1}(\gamma)$
(S.D.10) $\displaystyle=$ $\displaystyle
A^{-1}\left(\gamma+\gamma^{\dagger}\right)\left(A^{-1}\left(\gamma+\gamma^{\dagger}\right)-A^{-1}(\gamma)\right)^{\prime}+\left(A^{-1}\left(\gamma+\gamma^{\dagger}\right)-A^{-1}(\gamma)\right)A^{-1}(\gamma)$
$\displaystyle=$
$\displaystyle-A^{-1}\left(\gamma+\gamma^{\dagger}\right)A^{\prime-1}\left(\gamma+\gamma^{\dagger}\right)\left(A\left(\gamma+\gamma^{\dagger}\right)-A(\gamma)\right)^{\prime}A^{\prime-1}(\gamma)$
$\displaystyle-$ $\displaystyle
A^{-1}\left(\gamma+\gamma^{\dagger}\right)\left(A\left(\gamma+\gamma^{\dagger}\right)-A(\gamma)\right)A^{-1}(\gamma)A^{\prime-1}(\gamma).$
Next,
$\displaystyle A\left(\gamma+\gamma^{\dagger}\right)-A(\gamma)$
$\displaystyle=$ $\displaystyle
H^{-1}\left(\gamma+\gamma^{\dagger}\right)K\left(\gamma+\gamma^{\dagger}\right)-H^{-1}\left(\gamma\right)K\left(\gamma\right)$
$\displaystyle=$ $\displaystyle
H^{-1}\left(\gamma+\gamma^{\dagger}\right)\left(K\left(\gamma+\gamma^{\dagger}\right)-K(\gamma)\right)$
$\displaystyle+$ $\displaystyle
H^{-1}\left(\gamma+\gamma^{\dagger}\right)\left(H\left(\gamma\right)-H\left(\gamma+\gamma^{\dagger}\right)\right)H^{-1}\left(\gamma\right)K\left(\gamma\right)$
(S.D.11) $\displaystyle=$
$\displaystyle-H^{-1}\left(\gamma+\gamma^{\dagger}\right)\left(\sum_{j=1}^{m_{1}}\gamma_{j}^{\dagger}W_{j}+\sum_{j=m_{1}+1}^{m_{1}+m_{2}}\gamma_{j}^{\dagger}W_{j}H^{-1}(\gamma)K(\gamma)\right).$
Substituting (S.D.11) in (S.D.10) implies that
$A^{-1}\left(\gamma+\gamma^{\dagger}\right)A^{\prime-1}\left(\gamma+\gamma^{\dagger}\right)-A^{-1}(\gamma)A^{\prime-1}(\gamma)=\Delta_{1}\left(\gamma,\gamma^{\dagger}\right)+\Delta_{2}\left(\gamma,\gamma^{\dagger}\right)=\Delta\left(\gamma,\gamma^{\dagger}\right),$
(S.D.12)
say, where
$\displaystyle\Delta_{1}\left(\gamma,\gamma^{\dagger}\right)$ $\displaystyle=$
$\displaystyle
A^{-1}\left(\gamma+\gamma^{\dagger}\right)A^{\prime-1}\left(\gamma+\gamma^{\dagger}\right)\left(\sum_{j=1}^{m_{1}}\gamma_{j}^{\dagger}W^{\prime}_{j}+K^{\prime}(\gamma)H^{\prime-1}(\gamma)\sum_{j=m_{1}+1}^{m_{1}+m_{2}}\gamma_{j}^{\dagger}W^{\prime}_{j}\right)$
$\displaystyle\times$ $\displaystyle
H^{\prime-1}\left(\gamma+\gamma^{\dagger}\right)A^{\prime-1}(\gamma),$
$\displaystyle\Delta_{2}\left(\gamma,\gamma^{\dagger}\right)$ $\displaystyle=$
$\displaystyle
A^{-1}\left(\gamma+\gamma^{\dagger}\right)H^{-1}\left(\gamma+\gamma^{\dagger}\right)\left(\sum_{j=1}^{m_{1}}\gamma_{j}^{\dagger}W_{j}+\sum_{j=m_{1}+1}^{m_{1}+m_{2}}\gamma_{j}^{\dagger}W_{j}H^{-1}(\gamma)K(\gamma)\right)$
$\displaystyle\times$ $\displaystyle A^{-1}(\gamma)A^{\prime-1}(\gamma).$
From the definitions above and recalling that
$A(\gamma)=H^{-1}(\gamma)K(\gamma)$, we can write
$\Delta\left(\gamma,\gamma^{\dagger}\right)=A^{-1}\left(\gamma+\gamma^{\dagger}\right)\Upsilon\left(\gamma,\gamma^{\dagger}\right)A^{\prime-1}\left(\gamma\right),$
(S.D.13)
with
$\displaystyle\Upsilon\left(\gamma,\gamma^{\dagger}\right)$ $\displaystyle=$
$\displaystyle\sum_{j=1}^{m_{1}}\gamma_{j}^{\dagger}G^{\prime}_{j}\left(\gamma+\gamma^{\dagger}\right)H^{\prime-1}\left(\gamma+\gamma^{\dagger}\right)+A^{\prime-1}\left(\gamma+\gamma^{\dagger}\right)A^{\prime}(\gamma)\sum_{j=m_{1}+1}^{m_{1}+m_{2}}\gamma_{j}^{\dagger}T^{\prime}_{j}\left(\gamma+\gamma^{\dagger}\right)$
$\displaystyle+$
$\displaystyle\sum_{j=1}^{m_{1}}\gamma_{j}^{\dagger}H^{-1}\left(\gamma+\gamma^{\dagger}\right)G_{j}\left(\gamma\right)+\sum_{j=m_{1}+1}^{m_{1}+m_{2}}\gamma_{j}^{\dagger}T_{j}\left(\gamma+\gamma^{\dagger}\right).$
Then (S.D.12) implies that
$\displaystyle
A^{-1}\left(\gamma+\gamma^{\dagger}\right)A^{\prime-1}\left(\gamma+\gamma^{\dagger}\right)-A^{-1}(\gamma)A^{\prime-1}(\gamma)-\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)$
(S.D.14) $\displaystyle=$ $\displaystyle
A^{-1}\left(\gamma+\gamma^{\dagger}\right)A^{\prime-1}\left(\gamma+\gamma^{\dagger}\right)-A^{-1}(\gamma)A^{\prime-1}(\gamma)-\Delta\left(\gamma,\gamma^{\dagger}\right)-\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)+\Delta\left(\gamma,\gamma^{\dagger}\right)$
$\displaystyle=$
$\displaystyle\Delta\left(\gamma,\gamma^{\dagger}\right)-\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right),$
so to prove (S.D.9) it is sufficient to show that
$\frac{\left\|\Delta\left(\gamma,\gamma^{\dagger}\right)-\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)\right\|}{\left\|\gamma^{\dagger}\right\|_{g}}\rightarrow
0\text{ as }\left\|\gamma^{\dagger}\right\|_{g}\rightarrow 0.$ (S.D.15)
The numerator in (S.D.15) can be written as
$\sum_{i=1}^{7}\Pi_{i}\left(\gamma,\gamma^{\dagger}\right)A^{\prime-1}(\gamma)$
by adding, subtracting and grouping terms, where (omitting the argument
$\left(\gamma,\gamma^{\dagger}\right)$)
$\displaystyle\Pi_{1}$ $\displaystyle=$ $\displaystyle
A^{-1}\left(\gamma+\gamma^{\dagger}\right)\sum_{j=1}^{m_{1}}\gamma_{j}^{\dagger}G^{\prime}_{j}\left(\gamma+\gamma^{\dagger}\right)H^{\prime-1}(\gamma)\left(H(\gamma)-H\left(\gamma+\gamma^{\dagger}\right)\right)^{\prime}H^{\prime-1}\left(\gamma+\gamma^{\dagger}\right),$
$\displaystyle\Pi_{2}$ $\displaystyle=$ $\displaystyle
A^{-1}\left(\gamma+\gamma^{\dagger}\right)\sum_{j=1}^{m_{1}}\gamma_{j}^{\dagger}H^{-1}\left(\gamma+\gamma^{\dagger}\right)\left(H(\gamma)-H\left(\gamma+\gamma^{\dagger}\right)\right)H^{-1}(\gamma)G_{j}\left(\gamma\right),$
$\displaystyle\Pi_{3}$ $\displaystyle=$ $\displaystyle
A^{-1}\left(\gamma+\gamma^{\dagger}\right)\sum_{j=m_{1}+1}^{m_{1}+m_{2}}\gamma_{j}^{\dagger}\left(A^{-1}\left(\gamma+\gamma^{\dagger}\right)-A^{-1}\left(\gamma\right)\right)T^{\prime}_{j}\left(\gamma+\gamma^{\dagger}\right),$
$\displaystyle\Pi_{4}$ $\displaystyle=$
$\displaystyle\left(A^{-1}\left(\gamma+\gamma^{\dagger}\right)-A^{-1}\left(\gamma\right)\right)\sum_{j=m_{1}+1}^{m_{1}+m_{2}}\gamma_{j}^{\dagger}\overline{T_{j}\left(\gamma+\gamma^{\dagger}\right)},$
$\displaystyle\Pi_{5}$ $\displaystyle=$ $\displaystyle
A^{-1}(\gamma)\sum_{j=m_{1}+1}^{m_{1}+m_{2}}\gamma_{j}^{\dagger}\overline{H^{-1}\left(\gamma+\gamma^{\dagger}\right)\left(H(\gamma)-H\left(\gamma+\gamma^{\dagger}\right)\right)H^{-1}(\gamma)W_{j}},$
$\displaystyle\Pi_{6}$ $\displaystyle=$
$\displaystyle\Delta\left(\gamma,\gamma^{\dagger}\right)\sum_{j=1}^{m_{1}}\gamma_{j}^{\dagger}W_{j}^{\prime}H^{\prime-1}(\gamma),$
$\displaystyle\Pi_{7}$ $\displaystyle=$
$\displaystyle\left(A^{-1}\left(\gamma+\gamma^{\dagger}\right)-A^{-1}\left(\gamma\right)\right)\sum_{j=1}^{m_{1}}\gamma_{j}^{\dagger}H^{-1}(\gamma)G_{j}(\gamma).$
By (S.D.8), (S.D.13) and replication of earlier techniques, we have
$\max_{i=1,\ldots,7}\sup_{\gamma\in\Gamma^{o}}\left\|\Pi_{i}\left(\gamma,\gamma^{\dagger}\right)A^{\prime-1}(\gamma)\right\|\leq
C\left\|\gamma^{\dagger}\right\|^{2}_{g},$ (S.D.16)
where the norm used on the RHS of (S.D.16) depends on whether we are
considering the general case or the ‘single nonzero diagonal block’ case. Thus
$\frac{\left\|\Delta\left(\gamma,\gamma^{\dagger}\right)-\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)\right\|}{\left\|\gamma^{\dagger}\right\|_{g}}\leq
C\left\|\gamma^{\dagger}\right\|_{g}\rightarrow 0\text{ as
}\left\|\gamma^{\dagger}\right\|_{g}\rightarrow 0,$
proving (S.D.15) and thus (S.D.9). ∎
###### Corollary CS.1.
For the spatial error model with SAR$(m_{1})$ errors,
$\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)=K^{-1}(\gamma)\sum_{j=1}^{m_{1}}\gamma^{\dagger}_{j}\overline{G_{j}(\gamma)}K^{\prime-1}(\gamma).$
###### Proof.
Taking $m_{2}=0$ in Lemma LS.5, the elements involving sums from $m_{1}+1$ to
$m_{1}+m_{2}$ do not arise and $H(\gamma)=I_{n}$, proving the claim. ∎
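The closed form in Corollary CS.1 can be verified against a finite-difference approximation. The sketch below is illustrative, not part of the proof: it takes $m_{1}=1$ and uses a toy row-normalized weight matrix `W`.

```python
import numpy as np

# Finite-difference check of Corollary CS.1 for SAR(1) errors, m1 = 1:
#   Sigma(gamma) = K^{-1}(gamma) K'^{-1}(gamma),  K(gamma) = I - gamma*W,
#   (D Sigma(gamma))(g) = K^{-1} * g*(G + G') * K'^{-1},  G = W K^{-1}.
rng = np.random.default_rng(2)
n = 5
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)       # row-normalize, toy weight matrix

def Sigma(gamma):
    Kinv = np.linalg.inv(np.eye(n) - gamma * W)
    return Kinv @ Kinv.T

gamma, g, h = 0.3, 1.0, 1e-6
Kinv = np.linalg.inv(np.eye(n) - gamma * W)
G = W @ Kinv
deriv = Kinv @ (g * (G + G.T)) @ Kinv.T          # closed form from CS.1
fd = (Sigma(gamma + h * g) - Sigma(gamma)) / h   # finite difference
assert np.max(np.abs(deriv - fd)) < 1e-4
```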
###### Corollary CS.2.
For the spatial error model with SMA$(m_{2})$ errors,
$\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)=H(\gamma)\sum_{j=1}^{m_{2}}\gamma^{\dagger}_{j}\overline{T_{j}(\gamma)}H^{\prime}(\gamma).$
###### Proof.
Taking $m_{1}=0$ in Lemma LS.5, the elements involving sums from $1$ to
$m_{1}$ do not arise and $K(\gamma)=I_{n}$, proving the claim. ∎
###### Lemma LS.6.
For the spatial error model with MESS$(m_{1})$ errors, if
$\max_{j=1,\ldots,m_{1}}\left(\left\|W_{j}\right\|+\left\|W^{\prime}_{j}\right\|\right)<1,$
(S.D.17)
then
$\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)=\exp\left(\sum_{j=1}^{m_{1}}\gamma_{j}\left(W_{j}+W_{j}^{\prime}\right)\right)\sum_{j=1}^{m_{1}}\gamma_{j}^{\dagger}\left(W_{j}+W_{j}^{\prime}\right).$
###### Proof.
Clearly $D\Sigma\in\mathscr{L}\left(\Gamma^{o},\mathcal{M}^{n\times
n}\right)$. Next,
$\displaystyle\left\|A^{-1}\left(\gamma+\gamma^{\dagger}\right)A^{\prime-1}\left(\gamma+\gamma^{\dagger}\right)-A^{-1}(\gamma)A^{\prime-1}(\gamma)-\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)\right\|$
(S.D.18) $\displaystyle=$
$\displaystyle\left\|\exp\left(\sum_{j=1}^{m_{1}}\left(\gamma_{j}+\gamma^{\dagger}_{j}\right)\left(W_{j}+W_{j}^{\prime}\right)\right)-\exp\left(\sum_{j=1}^{m_{1}}\gamma_{j}\left(W_{j}+W_{j}^{\prime}\right)\right)-\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)\right\|$
$\displaystyle=$
$\displaystyle\left\|\exp\left(\sum_{j=1}^{m_{1}}\gamma_{j}\left(W_{j}+W_{j}^{\prime}\right)\right)\left(\exp\left(\sum_{j=1}^{m_{1}}\gamma^{\dagger}_{j}\left(W_{j}+W_{j}^{\prime}\right)\right)-I_{n}-\sum_{j=1}^{m_{1}}\gamma^{\dagger}_{j}\left(W_{j}+W_{j}^{\prime}\right)\right)\right\|$
$\displaystyle\leq$
$\displaystyle\left\|\exp\left(\sum_{j=1}^{m_{1}}\gamma_{j}\left(W_{j}+W_{j}^{\prime}\right)\right)\right\|\left\|\exp\left(\sum_{j=1}^{m_{1}}\gamma^{\dagger}_{j}\left(W_{j}+W_{j}^{\prime}\right)\right)-I_{n}-\sum_{j=1}^{m_{1}}\gamma^{\dagger}_{j}\left(W_{j}+W_{j}^{\prime}\right)\right\|$
$\displaystyle\leq$ $\displaystyle
C\left\|I_{n}+\sum_{j=1}^{m_{1}}\gamma^{\dagger}_{j}\left(W_{j}+W_{j}^{\prime}\right)+\sum_{k=2}^{\infty}\left\\{\sum_{j=1}^{m_{1}}\gamma^{\dagger}_{j}\left(W_{j}+W_{j}^{\prime}\right)\right\\}^{k}-I_{n}-\sum_{j=1}^{m_{1}}\gamma^{\dagger}_{j}\left(W_{j}+W_{j}^{\prime}\right)\right\|$
$\displaystyle\leq$ $\displaystyle
C\left\|\sum_{k=2}^{\infty}\left\\{\sum_{j=1}^{m_{1}}\gamma^{\dagger}_{j}\left(W_{j}+W_{j}^{\prime}\right)\right\\}^{k}\right\|\leq
C\sum_{k=2}^{\infty}\left(\sum_{j=1}^{m_{1}}\left|\gamma^{\dagger}_{j}\right|\left\|W_{j}+W_{j}^{\prime}\right\|\right)^{k}$
$\displaystyle\leq$ $\displaystyle
C\sum_{k=2}^{\infty}\left\|\gamma^{\dagger}\right\|^{k}_{g},$
by (S.D.17), without loss of generality, with the norm used in (S.D.18) again
depending on whether we are in the general or the ‘single nonzero diagonal
block’ case. Thus
$\frac{\left\|A^{-1}\left(\gamma+\gamma^{\dagger}\right)A^{\prime-1}\left(\gamma+\gamma^{\dagger}\right)-A^{-1}(\gamma)A^{\prime-1}(\gamma)-\left(D\Sigma(\gamma)\right)\left(\gamma^{\dagger}\right)\right\|}{\left\|\gamma^{\dagger}\right\|_{g}}\leq
C\sum_{k=2}^{\infty}\left\|\gamma^{\dagger}\right\|^{k-1}_{g}\rightarrow 0,$
as $\left\|\gamma^{\dagger}\right\|_{g}\rightarrow 0$, proving the claim. ∎
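Similarly, the derivative in Lemma LS.6 can be checked by finite differences in the commuting case. The sketch below is illustrative only: it takes $m_{1}=1$ with a symmetric toy `W` (so $W$ and $W^{\prime}$ trivially commute) and computes the symmetric-matrix exponential by eigendecomposition in pure numpy.

```python
import numpy as np

# Finite-difference check of Lemma LS.6 with m1 = 1 and symmetric W:
#   Sigma(gamma) = exp(gamma*(W + W')) and
#   (D Sigma(gamma))(g) = exp(gamma*(W + W')) * g*(W + W').
def expm_sym(M):
    # Matrix exponential of a symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

rng = np.random.default_rng(3)
n = 5
B = rng.random((n, n))
W = 0.1 * (B + B.T)                     # symmetric toy W, small norm
S = W + W.T                             # here S = 2W

gamma, g, h = 0.4, 1.0, 1e-6
deriv = expm_sym(gamma * S) @ (g * S)   # closed form from LS.6
fd = (expm_sym((gamma + h * g) * S) - expm_sym(gamma * S)) / h
assert np.max(np.abs(deriv - fd)) < 1e-4
```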
###### Theorem TS.1.
Under the conditions of Theorem 4.4 or 5.3,
$\mathscr{T}_{n}-\mathscr{T}_{n}^{a}=o_{p}(1)$ as $n\rightarrow\infty$.
###### Proof.
It suffices to show that
$n\widetilde{m}_{n}=n\widehat{m}_{n}+o_{p}(\sqrt{p})$. As
$\widehat{\eta}=y-\widehat{{\theta}},$ $\widehat{u}=y-\widehat{f}$, and
$\widehat{v}=\widehat{\theta}-\widehat{f}$, we have
$\widehat{u}=\widehat{\eta}+\widehat{v}$ and
$\displaystyle n\widetilde{m}_{n}$ $\displaystyle=$
$\displaystyle\widehat{\sigma}^{-2}\left(\widehat{u}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{u}-\widehat{\eta}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{\eta}\right)=\widehat{\sigma}^{-2}\left(2\widehat{u}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{v}-\widehat{v}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{v}\right)$
(S.D.19) $\displaystyle=$ $\displaystyle
2n\widehat{m}_{n}-\widehat{\sigma}^{-2}\left[\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(u+e\right)-e+\theta_{0}-\widehat{f}\right]^{\prime}$
$\displaystyle\Sigma\left(\widehat{\gamma}\right)^{-1}\left[\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(u+e\right)-e+\theta_{0}-\widehat{f}\right]$
$\displaystyle=$ $\displaystyle
2n\widehat{m}_{n}-\widehat{\sigma}^{-2}u^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}u-\widehat{\sigma}^{-2}\left(\theta_{0}-\widehat{f}\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(\theta_{0}-\widehat{f}\right)$
$\displaystyle+\widehat{\sigma}^{-2}\left(2(\theta_{0}-\widehat{f})-e\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(I-\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\right)e$
$\displaystyle-2\widehat{\sigma}^{-2}\left(\theta_{0}-\widehat{f}\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}u$
$\displaystyle=$ $\displaystyle
2n\widehat{m}_{n}-\left(n\widehat{m}_{n}-\widehat{\sigma}^{-2}\left(A_{1}+A_{2}+A_{3}+A_{4}\right)\right)-\widehat{\sigma}^{-2}A_{4}$
$\displaystyle+\widehat{\sigma}^{-2}\left(2(\theta_{0}-\widehat{f})-e\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(I-\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\right)e-2\widehat{\sigma}^{-2}A_{3}$
$\displaystyle=$ $\displaystyle
n\widehat{m}_{n}+\widehat{\sigma}^{-2}\left(A_{1}+A_{2}-A_{3}\right)$
$\displaystyle+\widehat{\sigma}^{-2}\left(2(\theta_{0}-\widehat{f})-e\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(I-\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\right)e.$
In the proof of Theorem 4.2, we have shown that
$\left|\left({\theta}_{0}-\widehat{{f}}\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(I-\Psi[\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi]^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\right)e\right|=o_{p}(\sqrt{p})$
in the process of proving $|A_{2}|=o_{p}(\sqrt{p})$. Along with
$\displaystyle\left|e^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(I-\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\right)e\right|$
$\displaystyle\leq$
$\displaystyle\left|e^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}e\right|+\left|e^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}e\right|$
$\displaystyle\leq$
$\displaystyle\left\|e\right\|^{2}\sup_{\gamma\in\Gamma}\left\|\Sigma\left(\gamma\right)^{-1}\right\|+\left\|e\right\|^{2}\sup_{\gamma\in\Gamma}\left\|\Sigma\left(\gamma\right)^{-1}\right\|^{2}\left\|\frac{1}{n}\Psi\left(\frac{1}{n}\Psi^{\prime}\Sigma\left(\gamma\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\right\|$
$\displaystyle=$ $\displaystyle
O_{p}\left(\left\|e\right\|^{2}\right)=O_{p}\left(p^{-2\mu}n\right)=o_{p}(\sqrt{p}),$
we complete the proof that $n\widetilde{m}_{n}=n\widehat{m}_{n}+o_{p}(\sqrt{p})$.
In the SAR setting of Section 5,
$\displaystyle n\widetilde{m}_{n}$ $\displaystyle=$
$\displaystyle\widehat{\sigma}^{-2}\left(\widehat{u}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{u}-\widehat{\eta}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{\eta}\right)=\widehat{\sigma}^{-2}\left(2\widehat{u}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{v}-\widehat{v}^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\widehat{v}\right)$
$\displaystyle=$ $\displaystyle
2n\widehat{m}_{n}-\widehat{\sigma}^{-2}\left[\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(u+e+\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})W_{j}y\right)-e+\theta_{0}-\widehat{f}\right]^{\prime}$
$\displaystyle\Sigma\left(\widehat{\gamma}\right)^{-1}\left[\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(u+e+\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})W_{j}y\right)-e+\theta_{0}-\widehat{f}\right].$
Compared to the expression in (S.D.19), we have the additional terms
$-\widehat{\sigma}^{-2}\left(\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})W_{j}y\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})W_{j}y$
and
$-2\widehat{\sigma}^{-2}\left(\sum_{j=1}^{d_{\lambda}}(\lambda_{j_{0}}-\widehat{\lambda}_{j})W_{j}y\right)^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\left(\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\Psi\right)^{-1}\Psi^{\prime}\Sigma\left(\widehat{\gamma}\right)^{-1}\left(u+\theta_{0}-\widehat{f}\right).$
Both terms are $o_{p}(\sqrt{p})$ from the orders of $A_{5}$ and $A_{6}$ in the
proof of Theorem 5.2. Hence, in the SAR setting,
$n\widetilde{m}_{n}=n\widehat{m}_{n}+o_{p}(\sqrt{p})$ also holds.
We now present similar calculations that justify the validity of our bootstrap
test for the SARARMA($m_{1},m_{2},m_{3}$) model. The bootstrapped test
statistic is constructed with
$n\widehat{m}_{n}^{\ast}=\widehat{{v}}^{\ast\prime}\Sigma\left(\widehat{\gamma}^{\ast}\right)^{-1}\widehat{{u}}^{\ast}=(\widehat{{\theta}}_{n}^{\ast}-{f}(x,\widehat{\alpha}_{n}^{\ast}))^{\prime}\Sigma\left(\widehat{\gamma}^{\ast}\right)^{-1}\left((I_{n}-\sum_{k=1}^{m_{1}}\widehat{\lambda}_{k}^{\ast}W_{1k})y^{\ast}-{f}(x,\widehat{\alpha}_{n}^{\ast})\right).$
Let $J_{n}=(I_{n}-\frac{1}{n}l_{n}l_{n}^{\prime})$. As
$y=S(\lambda)^{-1}(\theta(x)+R(\gamma)\xi)$, we have
$\displaystyle\widetilde{\mathbf{\xi}}$ $\displaystyle=$ $\displaystyle
J_{n}\widehat{\mathbf{\xi}}$ $\displaystyle=$ $\displaystyle
J_{n}\left(\left(\sum_{l=1}^{m_{3}}\gamma_{3l}W_{3l}+I_{n}\right)^{-1}+\left(\sum_{l=1}^{m_{3}}\gamma_{3l}W_{3l}+I_{n}\right)^{-1}\sum_{l=1}^{m_{3}}(\gamma_{3l}-\widehat{\gamma}_{3l})W_{3l}\left(\sum_{l=1}^{m_{3}}\widehat{\gamma}_{3l}W_{3l}+I_{n}\right)^{-1}\right)$
$\displaystyle\times\left(I_{n}-\sum_{l=1}^{m_{2}}\gamma_{2l}W_{2l}+\sum_{l=1}^{m_{2}}(\gamma_{2l}-\widehat{\gamma}_{2l})W_{2l}\right)\left(S(\lambda)y-\theta(x)+\sum_{k=1}^{m_{1}}(\lambda_{k}-\widehat{\lambda}_{k})W_{1k}y+\theta(x)-\psi^{\prime}\widehat{\beta}\right)$
$\displaystyle=$
$\displaystyle\xi-\frac{1}{n}l_{n}l_{n}^{\prime}\xi+J_{n}\left(\sum_{l=1}^{m_{3}}\gamma_{3l}W_{3l}+I_{n}\right)^{-1}\left(I_{n}-\sum_{l=1}^{m_{2}}\gamma_{2l}W_{2l}\right)\left(\sum_{k=1}^{m_{1}}(\lambda_{k}-\widehat{\lambda}_{k})W_{1k}y+\theta(x)-\psi^{\prime}\widehat{\beta}\right)$
$\displaystyle+J_{n}\left(\sum_{l=1}^{m_{3}}\gamma_{3l}W_{3l}+I_{n}\right)^{-1}\sum_{l=1}^{m_{2}}(\gamma_{2l}-\widehat{\gamma}_{2l})W_{2l}\left(S(\lambda)y-\theta(x)+\sum_{k=1}^{m_{1}}(\lambda_{k}-\widehat{\lambda}_{k})W_{1k}y+\theta(x)-\psi^{\prime}\widehat{\beta}\right)$
$\displaystyle+J_{n}\left(\sum_{l=1}^{m_{3}}\gamma_{3l}W_{3l}+I_{n}\right)^{-1}\sum_{l=1}^{m_{3}}(\gamma_{3l}-\widehat{\gamma}_{3l})W_{3l}\left(\sum_{l=1}^{m_{3}}\widehat{\gamma}_{3l}W_{3l}+I_{n}\right)^{-1}$
$\displaystyle\times\left(I_{n}-\sum_{l=1}^{m_{2}}\gamma_{2l}W_{2l}+\sum_{l=1}^{m_{2}}(\gamma_{2l}-\widehat{\gamma}_{2l})W_{2l}\right)\left(S(\lambda)y-\theta(x)+\sum_{k=1}^{m_{1}}(\lambda_{k}-\widehat{\lambda}_{k})W_{1k}y+\theta(x)-\psi^{\prime}\widehat{\beta}\right),$
which can be written as
$\widetilde{\mathbf{\xi}}=\xi+\sum_{j=1}^{r}\zeta_{1n,j}p_{nj}+\sum_{j=1}^{s}\zeta_{2n,j}Q_{nj}\xi,$
where $p_{nj}$ is an $n$-dimensional vector with bounded elements,
$Q_{nj}=[q_{nj,i}]$ is an $n\times n$ matrix with bounded row and column sum
norms, and the $\zeta_{1n,j}$'s and $\zeta_{2n,j}$'s are equal to
$l_{n}^{\prime}\xi/n$, elements of $\lambda_{k}-\widehat{\lambda}_{k}$,
$\gamma_{2l}-\widehat{\gamma}_{2l}$, $\theta(x)-\psi^{\prime}\widehat{\beta}$,
or their products. This differs from the proof of Lemma 2 in Jin and Lee
(2015) in the term $\theta(x)-\psi^{\prime}\widehat{\beta}$ and the
potentially increasing order of $d_{\gamma}$. Then
$\zeta_{1n,j}=O_{p}(\sqrt{p^{1/2}/n}\vee\sqrt{d_{\gamma}/n})$ and
$\zeta_{2n,j}=O_{p}(\sqrt{p^{1/2}/n}\vee\sqrt{d_{\gamma}/n})$, instead of
$O_{p}(\sqrt{1/n})$ as in Jin and Lee (2015). Based on this result, the
assumptions in Theorem 4 of Su and Qu (2017) hold, so the validity of our
bootstrap test follows directly.
∎
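For readers implementing the test, the following is a minimal NumPy sketch of the quadratic form $n\widehat{m}_{n}=\widehat{v}^{\prime}\Sigma(\widehat{\gamma})^{-1}\widehat{u}$ on which all of the statistics above are built. Everything here is a synthetic stand-in: the weight matrix, the value of $\widehat{\gamma}$, and the residual vectors are illustrative assumptions, not the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Hypothetical row-normalized spatial weight matrix W (illustrative only;
# in practice W encodes the data's spatial structure).
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)

# Stand-in for Sigma(gamma_hat): an SAR-type error covariance
# (I - gamma*W)^{-1} (I - gamma*W)^{-T} at a hypothetical gamma_hat.
gamma_hat = 0.3
S = np.eye(n) - gamma_hat * W
S_inv = np.linalg.inv(S)
Sigma = S_inv @ S_inv.T

# Synthetic stand-ins for the residual vectors v_hat and u_hat.
v_hat = rng.standard_normal(n)
u_hat = rng.standard_normal(n)

# n * m_hat_n = v_hat' Sigma(gamma_hat)^{-1} u_hat; solve() avoids forming
# the explicit inverse of Sigma.
n_m_hat = v_hat @ np.linalg.solve(Sigma, u_hat)
print(n_m_hat)
```

In the bootstrap version, the same quadratic form is simply re-evaluated at the resampled quantities $\widehat{v}^{\ast}$, $\widehat{u}^{\ast}$, and $\Sigma(\widehat{\gamma}^{\ast})^{-1}$.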
| | PS | | | | Trig | | | | B-s |
---|---|---|---|---|---|---|---|---|---|---|---
| 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.10
$n=60$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.01}$ | ${\small 0.032}$ | ${\small 0.05}$ | | ${\small 0.01}$ | ${\small 0.028}$ | ${\small 0.054}$ | | ${\small 0.02}$ | ${\small 0.042}$ | ${\small 0.064}$
| ${\small 0.036}$ | ${\small 0.084}$ | ${\small 0.122}$ | | ${\small 0.02}$ | ${\small 0.056}$ | ${\small 0.084}$ | | ${\small 0.044}$ | ${\small 0.008}$ | ${\small 0.11}$
${\small c=3}$ | ${\small 0.07}$ | ${\small 0.156}$ | ${\small 0.194}$ | | ${\small 0.166}$ | ${\small 0.248}$ | ${\small 0.296}$ | | ${\small 0.208}$ | ${\small 0.302}$ | ${\small 0.372}$
| ${\small 0.454}$ | ${\small 0.58}$ | ${\small 0.658}$ | | ${\small 0.172}$ | ${\small 0.29}$ | ${\small 0.358}$ | | ${\small 0.166}$ | ${\small 0.274}$ | ${\small 0.346}$
${\small c=6}$ | ${\small 0.37}$ | ${\small 0.532}$ | ${\small 0.644}$ | | ${\small 0.688}$ | ${\small 0.806}$ | ${\small 0.854}$ | | ${\small 0.688}$ | ${\small 0.82}$ | ${\small 0.884}$
| ${\small 0.998}$ | ${\small 1}$ | ${\small 1}$ | | ${\small 0.676}$ | ${\small 0.822}$ | ${\small 0.866}$ | | ${\small 0.576}$ | ${\small 0.726}$ | ${\small 0.81}$
$n=100$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.008}$ | ${\small 0.03}$ | ${\small 0.044}$ | | ${\small 0.006}$ | ${\small 0.012}$ | ${\small 0.028}$ | | ${\small 0.016}$ | ${\small 0.028}$ | ${\small 0.042}$
| ${\small 0.022}$ | ${\small 0.052}$ | ${\small 0.068}$ | | ${\small 0.004}$ | ${\small 0.028}$ | ${\small 0.05}$ | | ${\small 0.018}$ | ${\small 0.048}$ | ${\small 0.062}$
${\small c=3}$ | ${\small 0.352}$ | ${\small 0.478}$ | ${\small 0.574}$ | | ${\small 0.27}$ | ${\small 0.39}$ | ${\small 0.484}$ | | ${\small 0.376}$ | ${\small 0.518}$ | ${\small 0.614}$
| ${\small 0.54}$ | ${\small 0.666}$ | ${\small 0.744}$ | | ${\small 0.288}$ | ${\small 0.412}$ | ${\small 0.508}$ | | ${\small 0.316}$ | ${\small 0.462}$ | ${\small 0.544}$
${\small c=6}$ | ${\small 0.984}$ | ${\small 0.99}$ | ${\small 0.99}$ | | ${\small 0.956}$ | ${\small 0.986}$ | ${\small 0.992}$ | | ${\small 0.98}$ | ${\small 0.992}$ | ${\small 0.994}$
| ${\small 0.998}$ | ${\small 0.998}$ | ${\small 0.998}$ | | ${\small 0.948}$ | ${\small 0.99}$ | ${\small 0.992}$ | | ${\small 0.956}$ | ${\small 0.99}$ | ${\small 0.996}$
$n=200$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.002}$ | ${\small 0.016}$ | ${\small 0.034}$ | | ${\small 0.002}$ | ${\small 0.014}$ | ${\small 0.034}$ | | ${\small 0.038}$ | ${\small 0.074}$ | ${\small 0.102}$
| ${\small 0.008}$ | ${\small 0.026}$ | ${\small 0.048}$ | | ${\small 0.012}$ | ${\small 0.028}$ | ${\small 0.036}$ | | ${\small 0.01}$ | ${\small 0.036}$ | ${\small 0.074}$
${\small c=3}$ | ${\small 0.176}$ | ${\small 0.29}$ | ${\small 0.356}$ | | ${\small 0.164}$ | ${\small 0.256}$ | ${\small 0.312}$ | | ${\small 0.388}$ | ${\small 0.354}$ | ${\small 0.606}$
| ${\small 0.34}$ | ${\small 0.496}$ | ${\small 0.582}$ | | ${\small 0.144}$ | ${\small 0.274}$ | ${\small 0.356}$ | | ${\small 0.168}$ | ${\small 0.282}$ | ${\small 0.376}$
${\small c=6}$ | ${\small 0.888}$ | ${\small 0.942}$ | ${\small 0.96}$ | | ${\small 0.818}$ | ${\small 0.898}$ | ${\small 0.934}$ | | ${\small 0.944}$ | ${\small 0.974}$ | ${\small 0.986}$
| ${\small 0.99}$ | ${\small 0.998}$ | ${\small 1}$ | | ${\small 0.816}$ | ${\small 0.904}$ | ${\small 0.944}$ | | ${\small 0.862}$ | ${\small 0.932}$ | ${\small 0.954}$
Table OT.1: Rejection probabilities of SARARMA(0,1,0) using asymptotic test
${\mathscr{T}_{n}}$ at 1, 5, 10% levels, power series (PS), trigonometric
(Trig) and B-spline (B-s) bases. Compactly supported regressors.
| | PS | | | | Trig | | | | B-s |
---|---|---|---|---|---|---|---|---|---|---|---
| 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.10
$n=60$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.01}$ | ${\small 0.032}$ | ${\small 0.05}$ | | ${\small 0.01}$ | ${\small 0.028}$ | ${\small 0.054}$ | | ${\small 0.06}$ | ${\small 0.01}$ | ${\small 0.016}$
| ${\small 0.036}$ | ${\small 0.084}$ | ${\small 0.122}$ | | ${\small 0.02}$ | ${\small 0.056}$ | ${\small 0.084}$ | | ${\small 0.044}$ | ${\small 0.008}$ | ${\small 0.116}$
${\small c=3}$ | ${\small 0.07}$ | ${\small 0.156}$ | ${\small 0.194}$ | | ${\small 0.16}$ | ${\small 0.252}$ | ${\small 0.292}$ | | ${\small 0.09}$ | ${\small 0.138}$ | ${\small 0.186}$
| ${\small 0.454}$ | ${\small 0.58}$ | ${\small 0.658}$ | | ${\small 0.174}$ | ${\small 0.29}$ | ${\small 0.358}$ | | ${\small 0.166}$ | ${\small 0.272}$ | ${\small 0.34}$
${\small c=6}$ | ${\small 0.37}$ | ${\small 0.532}$ | ${\small 0.644}$ | | ${\small 0.682}$ | ${\small 0.798}$ | ${\small 0.85}$ | | ${\small 0.514}$ | ${\small 0.644}$ | ${\small 0.714}$
| ${\small 0.998}$ | ${\small 1}$ | ${\small 1}$ | | ${\small 0.676}$ | ${\small 0.822}$ | ${\small 0.866}$ | | ${\small 0.572}$ | ${\small 0.714}$ | ${\small 0.8}$
$n=100$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.008}$ | ${\small 0.03}$ | ${\small 0.044}$ | | ${\small 0.006}$ | ${\small 0.012}$ | ${\small 0.026}$ | | ${\small 0}$ | ${\small 0.004}$ | ${\small 0.006}$
| ${\small 0.022}$ | ${\small 0.052}$ | ${\small 0.068}$ | | ${\small 0.006}$ | ${\small 0.028}$ | ${\small 0.05}$ | | ${\small 0.018}$ | ${\small 0.05}$ | ${\small 0.062}$
${\small c=3}$ | ${\small 0.352}$ | ${\small 0.478}$ | ${\small 0.574}$ | | ${\small 0.268}$ | ${\small 0.396}$ | ${\small 0.486}$ | | ${\small 0.158}$ | ${\small 0.23}$ | ${\small 0.288}$
| ${\small 0.54}$ | ${\small 0.666}$ | ${\small 0.744}$ | | ${\small 0.288}$ | ${\small 0.412}$ | ${\small 0.508}$ | | ${\small 0.322}$ | ${\small 0.466}$ | ${\small 0.55}$
${\small c=6}$ | ${\small 0.984}$ | ${\small 0.99}$ | ${\small 0.99}$ | | ${\small 0.958}$ | ${\small 0.986}$ | ${\small 0.992}$ | | ${\small 0.918}$ | ${\small 0.97}$ | ${\small 0.98}$
| ${\small 0.998}$ | ${\small 0.998}$ | ${\small 0.998}$ | | ${\small 0.952}$ | ${\small 0.99}$ | ${\small 0.992}$ | | ${\small 0.96}$ | ${\small 0.99}$ | ${\small 0.998}$
$n=200$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.002}$ | ${\small 0.016}$ | ${\small 0.034}$ | | ${\small 0.002}$ | ${\small 0.018}$ | ${\small 0.038}$ | | ${\small 0}$ | ${\small 0}$ | ${\small 0}$
| ${\small 0.008}$ | ${\small 0.026}$ | ${\small 0.048}$ | | ${\small 0.012}$ | ${\small 0.028}$ | ${\small 0.032}$ | | ${\small 0.01}$ | ${\small 0.036}$ | ${\small 0.064}$
${\small c=3}$ | ${\small 0.176}$ | ${\small 0.29}$ | ${\small 0.356}$ | | ${\small 0.156}$ | ${\small 0.258}$ | ${\small 0.312}$ | | ${\small 0.022}$ | ${\small 0.03}$ | ${\small 0.044}$
| ${\small 0.34}$ | ${\small 0.496}$ | ${\small 0.582}$ | | ${\small 0.144}$ | ${\small 0.272}$ | ${\small 0.352}$ | | ${\small 0.154}$ | ${\small 0.266}$ | ${\small 0.352}$
${\small c=6}$ | ${\small 0.888}$ | ${\small 0.942}$ | ${\small 0.96}$ | | ${\small 0.816}$ | ${\small 0.908}$ | ${\small 0.936}$ | | ${\small 0.43}$ | ${\small 0.522}$ | ${\small 0.554}$
| ${\small 0.99}$ | ${\small 0.998}$ | ${\small 1}$ | | ${\small 0.816}$ | ${\small 0.904}$ | ${\small 0.944}$ | | ${\small 0.856}$ | ${\small 0.924}$ | ${\small 0.944}$
Table OT.2: Rejection probabilities of SARARMA(0,1,0) using asymptotic test
${\mathscr{T}_{n}}^{a}$ at 1, 5, 10% levels, power series (PS), trigonometric
(Trig) and B-spline (B-s) bases. Compactly supported regressors.
| | PS | ${\mathscr{T}_{n}}=\mathscr{T}_{n}^{a}$ | | | Trig | $\mathscr{T}_{n}$ | | | Trig | $\mathscr{T}_{n}^{a}$
---|---|---|---|---|---|---|---|---|---|---|---
| 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.10
$n=60$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.02}$ | ${\small 0.05}$ | ${\small 0.072}$ | | ${\small 0.016}$ | ${\small 0.038}$ | ${\small 0.052}$ | | ${\small 0.016}$ | ${\small 0.038}$ | ${\small 0.052}$
| ${\small 0.038}$ | ${\small 0.082}$ | ${\small 0.11}$ | | ${\small 0.038}$ | ${\small 0.06}$ | ${\small 0.08}$ | | ${\small 0.038}$ | ${\small 0.06}$ | ${\small 0.08}$
${\small c=3}$ | ${\small 0.106}$ | ${\small 0.158}$ | ${\small 0.224}$ | | ${\small 0.062}$ | ${\small 0.11}$ | ${\small 0.146}$ | | ${\small 0.062}$ | ${\small 0.11}$ | ${\small 0.146}$
| ${\small 0.152}$ | ${\small 0.25}$ | ${\small 0.31}$ | | ${\small 0.09}$ | ${\small 0.158}$ | ${\small 0.204}$ | | ${\small 0.09}$ | ${\small 0.158}$ | ${\small 0.204}$
${\small c=6}$ | ${\small 0.552}$ | ${\small 0.686}$ | ${\small 0.73}$ | | ${\small 0.234}$ | ${\small 0.352}$ | ${\small 0.482}$ | | ${\small 0.236}$ | ${\small 0.354}$ | ${\small 0.43}$
| ${\small 0.634}$ | ${\small 0.774}$ | ${\small 0.82}$ | | ${\small 0.404}$ | ${\small 0.542}$ | ${\small 0.642}$ | | ${\small 0.404}$ | ${\small 0.542}$ | ${\small 0.642}$
$n=100$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.008}$ | ${\small 0.024}$ | ${\small 0.036}$ | | ${\small 0.002}$ | ${\small 0.018}$ | ${\small 0.036}$ | | ${\small 0.002}$ | ${\small 0.018}$ | ${\small 0.036}$
| ${\small 0.024}$ | ${\small 0.05}$ | ${\small 0.068}$ | | ${\small 0.012}$ | ${\small 0.026}$ | ${\small 0.052}$ | | ${\small 0.012}$ | ${\small 0.026}$ | ${\small 0.052}$
${\small c=3}$ | ${\small 0.162}$ | ${\small 0.262}$ | ${\small 0.342}$ | | ${\small 0.142}$ | ${\small 0.22}$ | ${\small 0.286}$ | | ${\small 0.142}$ | ${\small 0.22}$ | ${\small 0.286}$
| ${\small 0.216}$ | ${\small 0.332}$ | ${\small 0.408}$ | | ${\small 0.164}$ | ${\small 0.274}$ | ${\small 0.35}$ | | ${\small 0.164}$ | ${\small 0.274}$ | ${\small 0.35}$
${\small c=6}$ | ${\small 0.824}$ | ${\small 0.894}$ | ${\small 0.926}$ | | ${\small 0.79}$ | ${\small 0.868}$ | ${\small 0.892}$ | | ${\small 0.79}$ | ${\small 0.866}$ | ${\small 0.894}$
| ${\small 0.888}$ | ${\small 0.944}$ | ${\small 0.952}$ | | ${\small 0.862}$ | ${\small 0.896}$ | ${\small 0.928}$ | | ${\small 0.862}$ | ${\small 0.896}$ | ${\small 0.928}$
$n=200$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.006}$ | ${\small 0.018}$ | ${\small 0.032}$ | | ${\small 0.008}$ | ${\small 0.022}$ | ${\small 0.032}$ | | ${\small 0.008}$ | ${\small 0.022}$ | ${\small 0.032}$
| ${\small 0.012}$ | ${\small 0.032}$ | ${\small 0.068}$ | | ${\small 0.01}$ | ${\small 0.026}$ | ${\small 0.046}$ | | ${\small 0.01}$ | ${\small 0.026}$ | ${\small 0.046}$
${\small c=3}$ | ${\small 0.096}$ | ${\small 0.182}$ | ${\small 0.258}$ | | ${\small 0.076}$ | ${\small 0.152}$ | ${\small 0.212}$ | | ${\small 0.078}$ | ${\small 0.15}$ | ${\small 0.208}$
| ${\small 0.126}$ | ${\small 0.24}$ | ${\small 0.33}$ | | ${\small 0.098}$ | ${\small 0.184}$ | ${\small 0.26}$ | | ${\small 0.098}$ | ${\small 0.184}$ | ${\small 0.26}$
${\small c=6}$ | ${\small 0.754}$ | ${\small 0.858}$ | ${\small 0.892}$ | | ${\small 0.596}$ | ${\small 0.728}$ | ${\small 0.794}$ | | ${\small 0.596}$ | ${\small 0.724}$ | ${\small 0.79}$
| ${\small 0.84}$ | ${\small 0.918}$ | ${\small 0.944}$ | | ${\small 0.684}$ | ${\small 0.794}$ | ${\small 0.866}$ | | ${\small 0.684}$ | ${\small 0.792}$ | ${\small 0.866}$
Table OT.3: Rejection probabilities of SARARMA(0,1,0) using asymptotic tests
${\mathscr{T}_{n}},{\mathscr{T}_{n}}^{a}$ at 1, 5, 10% levels, power series
(PS) and trigonometric (Trig) bases. Unboundedly supported regressors.
| | PS | ${\mathscr{T}_{n}}^{\ast}=\mathscr{T}_{n}^{a\ast}$ | | | Trig | $\mathscr{T}_{n}^{\ast}$ | | | Trig | $\mathscr{T}_{n}^{a\ast}$
---|---|---|---|---|---|---|---|---|---|---|---
| 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.10 | | 0.01 | 0.05 | 0.10
$n=60$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.008}$ | ${\small 0.058}$ | ${\small 0.108}$ | | ${\small 0.01}$ | ${\small 0.046}$ | ${\small 0.124}$ | | ${\small 0.01}$ | ${\small 0.046}$ | ${\small 0.124}$
| ${\small 0.008}$ | ${\small 0.042}$ | ${\small 0.094}$ | | ${\small 0.006}$ | ${\small 0.044}$ | ${\small 0.102}$ | | ${\small 0.006}$ | ${\small 0.044}$ | ${\small 0.102}$
${\small c=3}$ | ${\small 0.052}$ | ${\small 0.17}$ | ${\small 0.318}$ | | ${\small 0.036}$ | ${\small 0.14}$ | ${\small 0.21}$ | | ${\small 0.036}$ | ${\small 0.14}$ | ${\small 0.21}$
| ${\small 0.034}$ | ${\small 0.16}$ | ${\small 0.184}$ | | ${\small 0.034}$ | ${\small 0.132}$ | ${\small 0.234}$ | | ${\small 0.034}$ | ${\small 0.132}$ | ${\small 0.234}$
${\small c=6}$ | ${\small 0.35}$ | ${\small 0.67}$ | ${\small 0.808}$ | | ${\small 0.16}$ | ${\small 0.392}$ | ${\small 0.556}$ | | ${\small 0.16}$ | ${\small 0.392}$ | ${\small 0.558}$
| ${\small 0.262}$ | ${\small 0.656}$ | ${\small 0.794}$ | | ${\small 0.204}$ | ${\small 0.468}$ | ${\small 0.66}$ | | ${\small 0.204}$ | ${\small 0.468}$ | ${\small 0.66}$
$n=100$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.006}$ | ${\small 0.05}$ | ${\small 0.102}$ | | ${\small 0.006}$ | ${\small 0.05}$ | ${\small 0.11}$ | | ${\small 0.004}$ | ${\small 0.05}$ | ${\small 0.112}$
| ${\small 0.012}$ | ${\small 0.054}$ | ${\small 0.128}$ | | ${\small 0.004}$ | ${\small 0.044}$ | ${\small 0.112}$ | | ${\small 0.004}$ | ${\small 0.044}$ | ${\small 0.112}$
${\small c=3}$ | ${\small 0.13}$ | ${\small 0.342}$ | ${\small 0.516}$ | | ${\small 0.128}$ | ${\small 0.324}$ | ${\small 0.488}$ | | ${\small 0.126}$ | ${\small 0.32}$ | ${\small 0.488}$
| ${\small 0.122}$ | ${\small 0.326}$ | ${\small 0.498}$ | | ${\small 0.114}$ | ${\small 0.298}$ | ${\small 0.474}$ | | ${\small 0.114}$ | ${\small 0.298}$ | ${\small 0.474}$
${\small c=6}$ | ${\small 0.766}$ | ${\small 0.932}$ | ${\small 0.974}$ | | ${\small 0.728}$ | ${\small 0.92}$ | ${\small 0.974}$ | | ${\small 0.728}$ | ${\small 0.92}$ | ${\small 0.972}$
| ${\small 0.774}$ | ${\small 0.934}$ | ${\small 0.968}$ | | ${\small 0.732}$ | ${\small 0.898}$ | ${\small 0.952}$ | | ${\small 0.732}$ | ${\small 0.898}$ | ${\small 0.952}$
$n=200$ | | | | | | | | | | |
${\small c=0}$ | ${\small 0.03}$ | ${\small 0.056}$ | ${\small 0.088}$ | | ${\small 0.028}$ | ${\small 0.06}$ | ${\small 0.098}$ | | ${\small 0.028}$ | ${\small 0.06}$ | ${\small 0.098}$
| ${\small 0.028}$ | ${\small 0.084}$ | ${\small 0.128}$ | | ${\small 0.022}$ | ${\small 0.068}$ | ${\small 0.118}$ | | ${\small 0.022}$ | ${\small 0.068}$ | ${\small 0.118}$
${\small c=3}$ | ${\small 0.17}$ | ${\small 0.346}$ | ${\small 0.49}$ | | ${\small 0.132}$ | ${\small 0.286}$ | ${\small 0.384}$ | | ${\small 0.13}$ | ${\small 0.288}$ | ${\small 0.38}$
| ${\small 0.178}$ | ${\small 0.34}$ | ${\small 0.488}$ | | ${\small 0.128}$ | ${\small 0.274}$ | ${\small 0.416}$ | | ${\small 0.128}$ | ${\small 0.274}$ | ${\small 0.416}$
${\small c=6}$ | ${\small 0.794}$ | ${\small 0.92}$ | ${\small 0.966}$ | | ${\small 0.682}$ | ${\small 0.866}$ | ${\small 0.93}$ | | ${\small 0.678}$ | ${\small 0.864}$ | ${\small 0.93}$
| ${\small 0.84}$ | ${\small 0.936}$ | ${\small 0.976}$ | | ${\small 0.698}$ | ${\small 0.888}$ | ${\small 0.93}$ | | ${\small 0.698}$ | ${\small 0.888}$ | ${\small 0.93}$
Table OT.4: Rejection probabilities of SARARMA(0,1,0) using bootstrap tests
${\mathscr{T}_{n}}^{\ast},{\mathscr{T}_{n}}^{a\ast}$ at 1, 5, 10% levels,
power series (PS) and trigonometric (Trig) bases. Unboundedly supported
regressors.
## References
* Ambrosetti and Prodi (1995) Ambrosetti, A. and G. Prodi (1995). A Primer of Nonlinear Analysis. Cambridge University Press.
* Anatolyev (2012) Anatolyev, S. (2012). Inference in regression models with many regressors. Journal of Econometrics 170, 368–382.
* Autant-Bernard and LeSage (2011) Autant-Bernard, C. and J. P. LeSage (2011). Quantifying knowledge spillovers using spatial autoregressive models. Journal of Regional Science 51, 471–496.
* Bloom et al. (2013) Bloom, N., M. Schankerman, and J. van Reenen (2013). Identifying technology spillovers and product market rivalry. Econometrica 81, 1347–1393.
* Case (1991) Case, A. C. (1991). Spatial patterns in household demand. Econometrica 59, 953–965.
* Chen (2007) Chen, X. (2007). Large sample sieve estimation of semi-nonparametric models, Volume 6B, Chapter 76, pp. 5549–5632. North Holland.
* Chen et al. (2005) Chen, X., H. Hong, and E. Tamer (2005). Measurement error models with auxiliary data. Review of Economic Studies 72, 343–366.
* Cliff and Ord (1973) Cliff, A. D. and J. K. Ord (1973). Spatial Autocorrelation. London: Pion.
* Conley and Dupor (2003) Conley, T. G. and B. Dupor (2003). A spatial analysis of sectoral complementarity. Journal of Political Economy 111, 311–352.
* Cooke (1950) Cooke, R. G. (1950). Infinite Matrices & Sequence Spaces. Macmillan and Company, London.
* De Jong and Bierens (1994) De Jong, R. M. and H. J. Bierens (1994). On the limit behavior of a chi-square type test if the number of conditional moments tested approaches infinity. Econometric Theory 10, 70–90.
* De Oliveira et al. (1997) De Oliveira, V., B. Kedem, and D. A. Short (1997). Bayesian prediction of transformed Gaussian random fields. Journal of the American Statistical Association 92, 1422–1433.
* Debarsy et al. (2015) Debarsy, N., F. Jin, and L. F. Lee (2015). Large sample properties of the matrix exponential spatial specification with an application to FDI. Journal of Econometrics 188, 1–21.
* Delgado and Robinson (2015) Delgado, M. and P. M. Robinson (2015). Non-nested testing of spatial correlation. Journal of Econometrics 187, 385–401.
* Ertur and Koch (2007) Ertur, C. and W. Koch (2007). Growth, technological interdependence and spatial externalities: theory and evidence. Journal of Applied Econometrics 22, 1033–1062.
* Evans and Kim (2014) Evans, P. and J. U. Kim (2014). The spatial dynamics of growth and convergence in Korean regional incomes. Applied Economics Letters 21, 1139–1143.
* Fuentes (2007) Fuentes, M. (2007). Approximate likelihood for large irregularly spaced spatial data. Journal of the American Statistical Association 102, 321–331.
* Gao and Anh (2000) Gao, J. and V. Anh (2000). A central limit theorem for a random quadratic form of strictly stationary processes. Statistics and Probability Letters 49, 69–79.
* Gneiting (2002) Gneiting, T. (2002). Nonseparable, stationary covariance functions for space-time data. Journal of the American Statistical Association 97, 590–600.
* Gradshteyn and Ryzhik (1994) Gradshteyn, I. S. and I. M. Ryzhik (1994). Table of Integrals, Series and Products (5th ed.). Academic Press, London.
* Gupta (2018a) Gupta, A. (2018a). Autoregressive spatial spectral estimates. Journal of Econometrics 203, 80–95.
* Gupta (2018b) Gupta, A. (2018b). Nonparametric specification testing via the trinity of tests. Journal of Econometrics 203, 169–185.
* Gupta et al. (2021) Gupta, A., S. Kokas, and A. Michaelides (2021). Credit market spillovers in a financial network. Working paper.
* Gupta and Robinson (2015) Gupta, A. and P. M. Robinson (2015). Inference on higher-order spatial autoregressive models with increasingly many parameters. Journal of Econometrics 186, 19–31.
* Gupta and Robinson (2018) Gupta, A. and P. M. Robinson (2018). Pseudo maximum likelihood estimation of spatial autoregressive models with increasing dimension. Journal of Econometrics 202, 92–107.
* Hahn et al. (2020) Hahn, J., G. Kuersteiner, and M. Mazzocco (2020). Joint time-series and cross-section limit theory under mixingale assumptions. Econometric Theory, first published online 11 August 2020. doi:10.1017/S0266466620000316, 17pp.
* Han et al. (2021) Han, X., L.-f. Lee, and X. Xu (2021). Large sample properties of Bayesian estimation of spatial econometric models. Econometric Theory 37, 708–746.
* Hannan (1970) Hannan, E. J. (1970). Multiple Time Series. John Wiley & Sons.
* Heston et al. (2002) Heston, A., R. Summers, and B. Aten (2002). Penn World Tables Version 6.1. Downloadable dataset, Center for International Comparisons at the University of Pennsylvania.
* Hidalgo and Schafgans (2017) Hidalgo, J. and M. Schafgans (2017). Inference and testing breaks in large dynamic panels with strong cross sectional dependence. Journal of Econometrics 196, 259–274.
* Hillier and Martellosio (2018a) Hillier, G. and F. Martellosio (2018a). Exact and higher-order properties of the MLE in spatial autoregressive models, with applications to inference. Journal of Econometrics 205, 402–422.
* Hillier and Martellosio (2018b) Hillier, G. and F. Martellosio (2018b). Exact likelihood inference in group interaction network models. Econometric Theory 34, 383–415.
* Ho et al. (2013) Ho, C.-Y., W. Wang, and J. Yu (2013). Growth spillover through trade: A spatial dynamic panel data approach. Economics Letters 120, 450–453.
* Hong and White (1995) Hong, Y. and H. White (1995). Consistent specification testing via nonparametric series regression. Econometrica 63, 1133–1159.
* Huber (1973) Huber, P. J. (1973). Robust regression: Asymptotics, conjectures and Monte Carlo. The Annals of Statistics 1, 799–821.
* Jenish and Prucha (2009) Jenish, N. and I. R. Prucha (2009). Central limit theorems and uniform laws of large numbers for arrays of random fields. Journal of Econometrics 150, 86–98.
* Jenish and Prucha (2012) Jenish, N. and I. R. Prucha (2012). On spatial processes and asymptotic inference under near-epoch dependence. Journal of Econometrics 170, 178 – 190.
* Jin and Lee (2015) Jin, F. and L. F. Lee (2015). On the bootstrap for Moran’s i test for spatial dependence. Journal of Econometrics 184, 295–314.
* Kelejian and Prucha (1998) Kelejian, H. H. and I. R. Prucha (1998). A generalized spatial two-stage least squares procedure for estimating a spatial autoregressive model with autoregressive disturbances. Journal of Real Estate Finance and Economics 17, 99–121.
* Kelejian and Prucha (2001) Kelejian, H. H. and I. R. Prucha (2001). On the asymptotic distribution of the Moran $I$ test statistic with applications. Journal of Econometrics 104, 219–257.
* Koenker and Machado (1999) Koenker, R. and J. A. F. Machado (1999). GMM inference when the number of moment conditions is large. Journal of Econometrics 93, 327–344.
* König et al. (2017) König, M. D., D. Rohner, M. Thoenig, and F. Zilibotti (2017). Networks in conflict: Theory and evidence from the Great War of Africa. Econometrica 85, 1093–1132.
* Kuersteiner and Prucha (2020) Kuersteiner, G. M. and I. R. Prucha (2020). Dynamic spatial panel models: Networks, common shocks, and sequential exogeneity. Econometrica 88, 2109–2146.
* Lee et al. (2020) Lee, J., P. C. B. Phillips, and F. Rossi (2020). Consistent misspecification testing in spatial autoregressive models. Cowles Foundation Discussion Paper no. 2256.
* Lee and Robinson (2016) Lee, J. and P. M. Robinson (2016). Series estimation under cross-sectional dependence. Journal of Econometrics 190, 1–17.
* Lee (2004) Lee, L. F. (2004). Asymptotic distributions of quasi-maximum likelihood estimators for spatial autoregressive models. Econometrica 72, 1899–1925.
* Lee and Liu (2010) Lee, L. F. and X. Liu (2010). Efficient GMM estimation of high order spatial autoregressive models with autoregressive disturbances. Econometric Theory 26, 187–230.
* LeSage and Pace (2007) LeSage, J. P. and R. Pace (2007). A matrix exponential spatial specification. Journal of Econometrics 140, 190–214.
* Malikov and Sun (2017) Malikov, E. and Y. Sun (2017). Semiparametric estimation and testing of smooth coefficient spatial autoregressive models. Journal of Econometrics 199, 12–34.
* Matérn (1986) Matérn, B. (1986). Spatial Variation. Almaenna Foerlaget, Stockholm.
* Mohnen (2022) Mohnen, M. (2022). Stars and brokers: Peer effects among medical scientists. Management Science 68, 2377–3174.
* Newey (1997) Newey, W. K. (1997). Convergence rates and asymptotic normality for series estimators. Journal of Econometrics 79, 147–168.
* Oettl (2012) Oettl, A. (2012). Reconceptualizing stars: scientist helpfulness and peer performance. Management Science 58, 1122–1140.
* Pinkse (1999) Pinkse, J. (1999). Asymptotic properties of Moran and related tests and testing for spatial correlation in probit models. Mimeo: Department of Economics, University of British Columbia and University College London.
* Pinkse et al. (2002) Pinkse, J., M. E. Slade, and C. Brett (2002). Spatial price competition: A semiparametric approach. Econometrica 70, 1111–1153.
* Portnoy (1984) Portnoy, S. (1984). Asymptotic behavior of ${M}$-estimators of $p$ regression parameters when $p^{2}/n$ is large. I. Consistency. The Annals of Statistics 12, 1298–1309.
* Portnoy (1985) Portnoy, S. (1985). Asymptotic behavior of ${M}$-estimators of $p$ regression parameters when $p^{2}/n$ is large; II. Normal approximation. The Annals of Statistics 13, 1403–1417.
* Robinson (1972) Robinson, P. M. (1972). Non-linear regression for multiple time-series. Journal of Applied Probability 9, 758–768.
* Robinson (1988) Robinson, P. M. (1988). The stochastic difference between econometric statistics. Econometrica 56, 531–548.
* Robinson (2011) Robinson, P. M. (2011). Asymptotic theory for nonparametric regression with spatial data. Journal of Econometrics 165, 5–19.
* Robinson and Rossi (2015) Robinson, P. M. and F. Rossi (2015). Refined tests for spatial correlation. Econometric Theory 31, 1249–1280.
* Robinson and Thawornkaiwong (2012) Robinson, P. M. and S. Thawornkaiwong (2012). Statistical inference on regression with spatial dependence. Journal of Econometrics 167, 521–542.
* Scott (1973) Scott, D. J. (1973). Central limit theorems for martingales and for processes with stationary increments using a Skorokhod representation approach. Advances in Applied Probability 5, 119–137.
* Stein (1999) Stein, M. (1999). Interpolation of Spatial Data. Springer-Verlag, New York.
* Stinchcombe and White (1998) Stinchcombe, M. B. and H. White (1998). Consistent specification testing with nuisance parameters present only under the alternative. Econometric Theory 14, 295–325.
* Su and Jin (2010) Su, L. and S. Jin (2010). Profile quasi-maximum likelihood estimation of partially linear spatial autoregressive models. Journal of Econometrics 157, 18–33.
* Su and Qu (2017) Su, L. and X. Qu (2017). Specification test for spatial autoregressive models. Journal of Business & Economic Statistics 35, 572–584.
* Sun (2016) Sun, Y. (2016). Functional-coefficient spatial autoregressive models with nonparametric spatial weights. Journal of Econometrics 195, 134–153.
* Sun (2020) Sun, Y. (2020). The LLN and CLT for U-statistics under cross-sectional dependence. Journal of Nonparametric Statistics 32, 201–224.
# VConstruct: Filling Gaps in Chl-a Data Using a Variational Autoencoder
Matthew Ehrler
Department of Computer Science
University of Victoria
Victoria B.C
<EMAIL_ADDRESS>
Neil Ernst
Department of Computer Science
University of Victoria
Victoria B.C
<EMAIL_ADDRESS>
###### Abstract
Remote sensing of Chlorophyll-a is vital in monitoring climate change.
Chlorophyll-a measurements give us an idea of the algae concentrations in the
ocean, which lets us monitor ocean health. However, a common problem is that
the satellites used to gather the data are commonly obstructed by clouds and
other artifacts. This means that time series data from satellites can suffer
from spatial data loss. There are a number of algorithms that are able to
reconstruct the missing parts of these images to varying degrees of accuracy,
with Data INterpolating Empirical Orthogonal Functions (DINEOF) being the
current standard. However, DINEOF is slow, suffers from accuracy loss in
temporally homogeneous waters, is reliant on temporal data, and is only able to
generate a single potential reconstruction. We propose a machine learning
approach to reconstruction of Chlorophyll-a data using a Variational
Autoencoder (VAE). Our accuracy results to date are competitive with but
slightly less accurate than DINEOF. We show the benefits of our method
including vastly decreased computation time and ability to generate multiple
potential reconstructions. Lastly, we outline our planned improvements and
future work.
## 1 Introduction
Phytoplankton and ocean colour are considered “Essential Climate Variables”
for measuring and predicting climate systems and ocean health [3]. Measuring
phytoplankton and ocean colour is cost effective on a global scale as well as
relevant to climate models. Chlorophyll-a (Chl-a) is a commonly used metric to
estimate phytoplankton levels (measured in units of ${mg}/m^{3}$) and can be
derived from ocean colour [1]. Additionally, Chl-a can also be used to detect
harmful algae blooms which can be fatal to marine life [11]. As climate change
progresses, harmful algae blooms will increase in frequency: an increase of 2°C
in sea temperature will double the window of opportunity for harmful algae
blooms in the Puget Sound [9]. Several different satellites provide Chl-a
measurements, but the Sentinel-3 mission
(https://sentinel.esa.int/web/sentinel/missions/sentinel-3) will be the focus
of this paper.
One of the biggest problems faced when using these measurements is the loss of
spatial data due to clouds, sunglint or various other factors which can affect
the atmospheric correction process [11]. Various algorithms exist to
reconstruct the missing data, with the most effective being those based on
extracting Empirical Orthogonal Functions (EOFs) from the data [12]. The most
accurate and commonly used of these algorithms is Data INterpolating Empirical
Orthogonal Functions (DINEOF), which iteratively calculates EOFs based on the
input data [5, 12]. DINEOF is fairly slow [12] and performs poorly in more
temporally homogeneous waters such as a river mouth [5].
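The iterative idea behind this family of EOF-based methods can be sketched in a few lines. The following toy NumPy implementation is not the DINEOF software itself (and omits its cross-validated choice of the number of EOFs); it simply fills missing entries by repeatedly taking a truncated SVD of the space-time matrix:

```python
import numpy as np

def dineof_fill(X, mask, k=2, n_iter=200):
    """Toy DINEOF-style gap filling: iteratively reconstruct missing entries
    (mask == True) from a rank-k truncated SVD of the data matrix."""
    X = X.copy()
    X[mask] = X[~mask].mean()             # initial guess: mean of observed data
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :k] * s[:k]) @ Vt[:k]
        X[mask] = X_low[mask]             # update only the missing entries
    return X

# toy demo: exactly rank-2 "space x time" data with ~10% of entries missing
rng = np.random.default_rng(0)
A, B = rng.normal(size=(30, 2)), rng.normal(size=(2, 40))
X_true = A @ B
mask = rng.random(X_true.shape) < 0.10
X_obs = X_true.copy()
X_obs[mask] = np.nan                      # "clouds"
filled = dineof_fill(X_obs, mask, k=2)
```

On this noiseless low-rank toy the missing entries converge to the true values; real Chl-a fields are noisy and much larger, which is part of why the full algorithm is slow.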
Machine learning has also been successful in reconstructing Chl-a data. Park
et al. use a tree-based model to reconstruct algae in polar regions [10]. This
method is effective but requires domain knowledge to properly tune, which
makes it much less generalizable, and therefore less effective, than DINEOF,
as DINEOF works with no a priori knowledge. DINCAE is a very new machine
learning approach to reconstructing data [2], which has also been shown to
work on Chl-a [4]. DINCAE is accurate, but shares a drawback with DINEOF in
that it can only generate a single possible reconstruction. Being able to
generate multiple potential reconstructions lets us select a better one based
on data that cannot be easily or quickly incorporated into the model. For
example, if we had Chl-a concentrations manually measured from the missing
areas, we could generate reconstructions until we find one that better matches
the measured values. This would be much faster than changing DINCAE or DINEOF
to incorporate the new data.
The approach we outline in this paper is based on the Variational Autoencoder
(VAE) from Kingma and Welling [8], as well as Attribute2Image’s improvements
in making generated images less random [13]. The dimensionality reduction in a
VAE is somewhat similar to the Singular Value Decomposition (SVD) used in
DINEOF. The potential to leverage performance improvements by running VAEs on
quantum annealing machines was another motivation [7].
In this paper we apply a model similar to Attribute2Image as well as Ivanov et
al.’s inpainting model to Chl-a data from the Salish Sea area surrounding
Vancouver Island [13, 6]. We compare it to the industry standard DINEOF using
experiments modeled after Hilborn et al.’s experiments [5]. This area was
chosen as it contains both areas of high and low temporal homogeneity in terms
of algae concentrations, which was determined by Hilborn et al. to be something
DINEOF is sensitive to [5].
## 2 Method
### 2.1 Dataset and Preprocessing
The dataset we use comes from the Algae Explorer Project
(https://algaeexplorer.ca/) and consists of 1566 images taken daily
from 2016-04-25 to 2020-09-30. For our experiments we use a 250x250 pixel
slice from each day to create a 1566x250x250 dataset.
We then preprocess the data for DINEOF using a similar process to Hilborn et
al. [5]. The data for VConstruct uses a similar process to Han et al. [4].
We select five days for testing at random from all days that have very low
cloud cover. This allows us to add artificial clouds and measure accuracy by
comparing to the original complete image.
### 2.2 DINEOF Testing
As we are using different satellite data than Hilborn et al., we cannot
compare directly to their results and need to devise a similar experiment [5].
Since DINEOF is not “trained” like ML models, we cannot do conventional
testing with a withheld dataset. For our experiment we use the five testing
images selected in preprocessing, and then overlay artificial clouds on these
images to create our testing set, which is then inserted back into the full
set of images. Samples are shown in the Appendix, Figs. 2 and 3. After running
DINEOF we then compare these reconstructions with the known full image and
report accuracy.
This scheme slightly biases the experiment towards DINEOF as DINEOF has access
to the cloudy testing images when generating EOFs where VConstruct does not.
This is unfortunately unavoidable but the effect seems minimal.
### 2.3 VConstruct Model
The VConstruct model is based on the Variational Autoencoder [8]; it consists
of an encoder, a decoder, and an attribute network. All network layers are
fully connected layers with ReLU activation functions.
The encoder and decoder layers function exactly like they do in a conventional
VAE. The encoder network compresses an image down to a lower dimensional
latent space and learns a distribution it can later sample from during testing
when the complete image is unknown. The decoder takes the output of the
encoder network, or random sample from the learnt distribution, and attempts
to reconstruct the original image. We use Kullback–Leibler divergence and
Reconstruction loss for our loss function.
The attribute network is based on the work of Yan et al. and Ivanov et al.
[6, 13]. The network extracts an attribute vector from a cloudy image which
represents what is “known” about the cloudy image. This attribute vector then
influences the previously random image generation of the decoder network so
that it generates a potential reconstruction.
These three networks make up the training configuration of VConstruct and can
be seen in Fig. 1. When testing we cannot use the Encoder network as we do not
know the complete image, so the network is replaced with a random sample from
the distribution learnt in training. The parts that switch out are indicated
by the dashed lines.
Figure 1: Training configuration for VConstruct. The attribute network maps
the flattened cloudy image (62500x1, from 250x250) through fully connected
layers of 1024, 512, and 128 units. In the training configuration, the encoder
network maps the flattened complete image through layers of 1024, 512, and 256
units; the attribute and latent vectors are concatenated (384x1) and passed
through the decoder network (512, 1024, and 62500 units, with skip
connections) to produce the reconstructed 250x250 image. In the testing
configuration (shown with dashed lines), the encoder output is replaced by a
256x1 random sample.
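A minimal NumPy sketch of the two data paths and the VAE loss follows. It assumes untrained random weights, uses toy dimensions in place of the 62500/256/128 of Fig. 1, and omits the skip connections and training loop; it is an illustration of the architecture, not the actual VConstruct implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class MLP:
    """Stack of fully connected ReLU layers (weights random, i.e. untrained)."""
    def __init__(self, sizes):
        self.W = [rng.normal(0, 1 / np.sqrt(m), (m, n))
                  for m, n in zip(sizes[:-1], sizes[1:])]
    def __call__(self, x):
        for W in self.W:
            x = relu(x @ W)
        return x

# toy dimensions; the paper uses D=62500 (250x250), Z=256, A=128
D, Z, A = 400, 32, 16
attribute_net = MLP([D, 64, A])        # extracts what is "known" from the cloudy image
encoder       = MLP([D, 64, 2 * Z])    # outputs mean and log-variance
decoder       = MLP([Z + A, 64, D])    # reconstructs the image

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps the latent sample differentiable in training
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def vae_loss(x, x_hat, mu, logvar):
    recon = np.mean((x - x_hat) ** 2)                        # reconstruction loss
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))  # KL divergence
    return recon + kl

def forward(cloudy_img, complete_img=None):
    a = attribute_net(cloudy_img)
    if complete_img is not None:       # training configuration: encoder path
        stats = encoder(complete_img)
        mu, logvar = stats[:Z], stats[Z:]
        z = reparameterize(mu, logvar)
    else:                              # testing configuration: random sample
        z = rng.standard_normal(Z)
    return decoder(np.concatenate([z, a]))

recon = forward(rng.random(D))         # reconstruct from a cloudy image alone
```

The testing path makes the resampling property concrete: calling `forward` again with the same cloudy image draws a new latent sample and yields a different plausible reconstruction.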
### 2.4 VConstruct Testing
We train VConstruct by using all of the complete images marked in
preprocessing (minus the five testing images which are withheld) with
artificial clouds overlaid. The model is trained for 150 epochs. After
training we use the five testing images, randomly selected in preprocessing,
with the same artificial cloud mask as DINEOF and calculate the same metrics.
## 3 Results and Discussion
Table 1 presents the results of reconstructing the five randomly selected
testing days. We show results for an area off the coast of Victoria and an
area by the mouth of the Fraser River. RMSE (root mean squared error) and
$R^{2}$ (coefficient of determination) are reported. The Fraser River mouth is an
area of high temporal homogeneity, which is identified by Hilborn et al. as a
problem area for DINEOF [5]. The actual reconstructed images can be found in
the appendix.
Table 1: Testing results. Last row reflects overall mean performance.

| Victoria RMSE (DINEOF) | Victoria RMSE (VConstruct) | Fraser RMSE (DINEOF) | Fraser RMSE (VConstruct) | Victoria $R^{2}$ (DINEOF) | Victoria $R^{2}$ (VConstruct) | Fraser $R^{2}$ (DINEOF) | Fraser $R^{2}$ (VConstruct) |
|---|---|---|---|---|---|---|---|
| .104 | .125 | .183 | .152 | .247 | -.089 | .759 | .834 |
| .093 | .096 | .209 | .234 | .667 | .646 | .788 | .736 |
| .078 | .08 | .131 | .119 | .569 | .552 | .797 | .833 |
| .071 | .086 | .154 | .193 | .736 | .614 | .789 | .688 |
| .067 | .068 | .164 | .176 | .499 | .472 | .898 | .883 |
| .0826 | .091 | .1684 | .1748 | .544 | .439 | .806 | .791 |
For the Victoria Coast, VConstruct matches DINEOF’s RMSE and $R^{2}$ on 3/5
days, but DINEOF has a better average score. For the Fraser River Mouth,
VConstruct outperforms DINEOF on 2/5 tests and nearly matches its average
score, particularly in $R^{2}$.
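For reference, the two reported metrics can be computed as below (a minimal sketch). Note that $R^{2}$ computed this way, as a coefficient of determination, can be negative, as for one Victoria Coast day in Table 1, whenever a reconstruction fits the data worse than simply predicting the mean:

```python
import numpy as np

def rmse(y_true, y_pred):
    # root mean squared error, in the data's units (mg/m^3 for Chl-a)
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    # coefficient of determination; negative when the prediction fits worse
    # than simply predicting the mean of the observed values
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Both metrics are evaluated only on the artificially clouded pixels, comparing each reconstruction against the known complete image.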
### 3.1 Other Benefits of VConstruct
VConstruct also provides a few benefits unrelated to accuracy, the first being
computation time. VConstruct is parallelized and runs on a GPU. Once trained
VConstruct is able to reconstruct in roughly 10 milliseconds as opposed to the
10 minutes it took for DINEOF on the testing computer. This decrease in
computation time allows researchers to reconstruct much larger datasets, which
was an important concern raised by the oceanographer we consulted for this
project.
VConstruct also has a few advantages that apply to DINCAE (the recent Chl-a
approach from [2]) as well as DINEOF. Currently VConstruct is fully atemporal,
meaning that we do not need data from a previous time period to perform
reconstructions. This is significant as it allows us to reconstruct data even
if nothing is known about previous time periods.
Since VConstruct is based on a VAE, we can resample the random distribution
to provide different possible images. From an oceanographic perspective, this
allows us to generate new possible reconstructions, which is useful when
subsequently collected field-truthed data from a missing area invalidates the
initial reconstruction. For example, the dataset we are using is field-truthed
using HPLC-derived Chl-a measurements from provincial ferries. Since
reconstruction only takes a few milliseconds, we could generate and test
thousands of possible images in the same time it takes for DINEOF to run.
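That search-by-resampling idea can be sketched generically. Here `decode`, the pixel indices, and the measured values are all hypothetical stand-ins: `decode` plays the role of VConstruct's decoder conditioned on a cloudy image's attribute vector, and the measurements stand in for ferry-based HPLC values:

```python
import numpy as np

rng = np.random.default_rng(1)

def best_reconstruction(decode, idx, measured, n_samples=1000):
    """Resample the latent distribution, decode each sample, and keep the
    reconstruction that best matches sparse field-truthed measurements
    at pixel indices `idx`."""
    best_img, best_err = None, np.inf
    for _ in range(n_samples):
        z = rng.standard_normal(8)
        img = decode(z)
        err = float(np.mean((img[idx] - measured) ** 2))
        if err < best_err:
            best_img, best_err = img, err
    return best_img, best_err

# toy stand-in decoder: a fixed random linear map from latent to "image"
W = rng.normal(size=(100, 8))
decode = lambda z: W @ z
idx = np.array([3, 17, 42])               # hypothetical ferry-track pixels
measured = np.array([0.5, -0.2, 1.1])     # hypothetical measured values
img, err = best_reconstruction(decode, idx, measured)
```

Because each decode is milliseconds, screening a thousand candidates remains far cheaper than a single DINEOF run.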
### 3.2 Future Work
We evaluated the approach using two specific test areas. Expanding the
training set by using data from other areas in the Salish Sea is important,
because different oceanographic areas have different factors affecting Chl-a
concentrations. The Salish Sea comprises waters including the Puget Sound, the
Strait of Georgia, and the Strait of Juan de Fuca in the US Pacific
Northwest/Western Canada. We plan on making the accuracy testing more rigorous
in the next
iteration. We also plan on testing the effects of adding temporality to the
input data. We initially chose to pursue atemporality as data is very commonly
missing. However, temporal data is likely to improve accuracy when available.
Lastly, VConstruct uses fully connected layers for simplicity, but DINCAE has
shown success using convolutional layers, so this will be tested in the future.
## 4 Conclusion
We have shown that VConstruct and machine learning in general can be used to
reconstruct remotely sensed measurements of Chl-a, which is important in
oceanographic climate change research. Even though VConstruct does not match
or beat DINEOF in every accuracy test, we feel we have shown its potential for
highly accurate reconstructions, particularly in areas of high homogeneity
where DINEOF performs poorly. We also show VConstruct’s other potential
benefits, including better computation time as well as its ability to generate
a high number of different potential reconstructions. Remote sensing is an
important part of monitoring the climate and climate change, but is limited by
cloud cover and other factors which result in data loss. These factors make
data reconstruction an important part of climate change research.
## 5 Acknowledgements
Special thanks to Yvonne Coady, Maycira Costa, Derek Jacoby, and Christian
Marchese for their input and feedback.
## References
* [1] S. Alvain, C. Moulin, Y. Dandonneau, and F. M. Bréon, “Remote sensing of phytoplankton groups in case 1 waters from global SeaWiFS imagery,” _Deep Sea Research Part I: Oceanographic Research Papers_ , vol. 52, no. 11, pp. 1989–2004, Nov. 2005. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0967063705001536
* [2] A. Barth, A. Alvera-Azcárate, M. Licer, and J.-M. Beckers, “DINCAE 1.0: a convolutional neural network with error estimates to reconstruct sea surface temperature satellite observations,” _Geoscientific Model Development_ , vol. 13, no. 3, pp. 1609–1622, Mar. 2020. [Online]. Available: https://gmd.copernicus.org/articles/13/1609/2020/
* [3] S. Bojinski and M. M. Verstraete, “The Concept of Essential Climate Variables in Support of Climate Research, Applications, and Policy,” _ResearchGate_ , 2014. [Online]. Available: https://www.researchgate.net/publication/271271716_The_Concept_of_Essential_Climate_Variables_in_Support_of_Climate_Research_Applications_and_Policy
* [4] Z. Han, Y. He, G. Liu, and W. Perrie, “Application of DINCAE to Reconstruct the Gaps in Chlorophyll-a Satellite Observations in the South China Sea and West Philippine Sea,” _Remote Sensing_ , vol. 12, no. 3, p. 480, Jan. 2020. [Online]. Available: https://www.mdpi.com/2072-4292/12/3/480
* [5] A. Hilborn and M. Costa, “Applications of DINEOF to Satellite-Derived Chlorophyll-a from a Productive Coastal Region,” _Remote Sensing_ , vol. 10, no. 9, p. 1449, Sep. 2018. [Online]. Available: https://www.mdpi.com/2072-4292/10/9/1449
* [6] O. Ivanov, M. Figurnov, and D. Vetrov, “Variational autoencoder with arbitrary conditioning.”
* [7] A. Khoshaman, W. Vinci, B. Denis, E. Andriyash, H. Sadeghi, and M. H. Amin, “Quantum variational autoencoder.”
* [8] D. P. Kingma and M. Welling, “Auto-Encoding Variational Bayes,” _arXiv:1312.6114 [cs, stat]_ , May 2014, arXiv: 1312.6114. [Online]. Available: http://arxiv.org/abs/1312.6114
* [9] S. K. Moore, V. L. Trainer, N. J. Mantua, M. S. Parker, E. A. Laws, L. C. Backer, and L. E. Fleming, “Impacts of climate variability and future climate change on harmful algal blooms and human health,” in _Environmental Health_ , vol. 7, no. S2. Springer, 2008, p. S4.
* [10] J. Park, J.-H. Kim, H.-c. Kim, B.-K. Kim, D. Bae, Y.-H. Jo, N. Jo, and S. H. Lee, “Reconstruction of Ocean Color Data Using Machine Learning Techniques in Polar Regions: Focusing on Off Cape Hallett, Ross Sea,” _Remote Sensing_ , vol. 11, no. 11, p. 1366, Jan. 2019. [Online]. Available: https://www.mdpi.com/2072-4292/11/11/1366
* [11] D. Sirjacobs, A. Alvera-Azcárate, A. Barth, G. Lacroix, Y. Park, B. Nechad, K. Ruddick, and J.-M. Beckers, “Cloud filling of ocean colour and sea surface temperature remote sensing products over the Southern North Sea by the Data Interpolating Empirical Orthogonal Functions methodology,” _Journal of Sea Research_ , vol. 65, no. 1, pp. 114–130, Jan. 2011. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1385110110001036
* [12] M. H. Taylor, M. Losch, M. Wenzel, and J. Schröter, “On the Sensitivity of Field Reconstruction and Prediction Using Empirical Orthogonal Functions Derived from Gappy Data,” _Journal of Climate_ , vol. 26, no. 22, pp. 9194–9205, Nov. 2013. [Online]. Available: https://journals.ametsoc.org/jcli/article/26/22/9194/34073/On-the-Sensitivity-of-Field-Reconstruction-and
* [13] X. Yan, J. Yang, K. Sohn, and H. Lee, “Attribute2Image: Conditional Image Generation from Visual Attributes,” _arXiv:1512.00570 [cs]_ , Oct. 2016, arXiv: 1512.00570. [Online]. Available: http://arxiv.org/abs/1512.00570
## Appendix A Reconstruction Test Images
Figure 2: Test results, Victoria Coast.
Figure 3: Test results, Fraser River Mouth.
|
# Discrete Choice Analysis with Machine Learning Capabilities
Youssef M. Aboutaleb†, Mazen Danaf†, Yifei Xie†, Moshe E. Ben-Akiva†
† Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge,
MA 02139, USA
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
(…; …)
###### Abstract
This paper discusses capabilities that are essential to models applied in
policy analysis settings and the limitations of direct applications of off-
the-shelf machine learning methodologies to such settings. Traditional
econometric methodologies for building discrete choice models for policy
analysis involve combining data with modeling assumptions guided by subject-
matter considerations. Such considerations are typically most useful in
specifying the systematic component of random utility discrete choice models
but are typically of limited aid in determining the form of the random
component. We identify an area where machine learning paradigms can be
leveraged, namely in specifying and systematically selecting the best
specification of the random component of the utility equations. We review two
recent novel applications where mixed-integer optimization and cross-
validation are used to algorithmically select optimal specifications for the
random utility components of nested logit and logit mixture models subject to
interpretability constraints.
###### keywords:
Discrete Choice, Machine Learning, Policy Analysis, Algorithmic Model
Selection
## 1 Introduction
Machine learning techniques are increasing our capacity to discover complex
nonlinear patterns in high-dimensional data; see Bishop (2006) and Hastie et
al. (2009). The impressive predictive powers of machine learning have found
useful applications in many fields. It is natural to reflect on whether and
how these techniques can be applied to advance the field of discrete choice
analysis.
Traditional machine learning techniques are built for prediction problems.
Prediction, an associational (or correlational) concept, can be addressed
through sophisticated data fitting techniques (Pearl, 2000). Discrete choice
models, on the other hand, are typically deployed in policy analysis settings
(Manski, 2013). Policy analysis demands answers to questions that can only be
resolved by establishing a sense of causation. To draw conclusions requires
that data be combined with sufficient domain knowledge assumptions.
Algorithms for systematic data-driven model selection and the ideas of cross-
validation and regularization are prominent in machine learning methodologies.
Such notions and methods hold clear appeal over the sometimes arbitrary
specification decisions made in traditional econometric models; see Athey
(2018).
The goal of this paper is two-fold. The first is to clearly lay out the main
capabilities required of (discrete choice) models developed for policy
analysis and demonstrate some of the inadequacies of direct applications of
off-the-shelf machine learning techniques to such settings. The second goal is
to describe a framework where machine learning capabilities can be used to
enhance the predictive powers of traditional discrete choice models without
compromising their interpretability or suitability for policy analysis. We
present two applications of this approach, namely in automating the
specification of the random component of the utility equations in nested logit
(Aboutaleb, 2019) and mixed logit (Aboutaleb et al., 2021).
#### Organization of this paper
* •
Section 2 introduces three levels of questions of interest in a typical policy
analysis setting. A primer on supervised machine learning is presented along
with a reflection on the core methodological differences between theory-driven
econometric models such as discrete choice models and the data-fitting
methodologies of machine learning.
* •
Section 3 presents, in detail, typical capabilities required of models used
for policy analysis and demonstrates the inadequacy of off-the-shelf
supervised machine learning.
* •
Section 4 reviews recent attempts in the literature to apply machine learning
techniques to discrete choice analysis.
* •
Section 5 identifies appealing capabilities of machine learning and presents
the incorporation of such capabilities to the nested logit and logit mixture
models.
* •
Section 6 summarizes the main conclusions and take-aways of this paper.
## 2 Background
#### The inference problem
Consider a population of interest whose members are characterized by features
(covariates) x in an input space $\mathcal{X}$ and outcome (response) $y$ in
an output space $\mathcal{Y}$ with some joint probability distribution
$\mathbb{P}(\textbf{x},y)$ which is assumed to exist but is not necessarily
known a priori. The classical inference problem of interest is to infer
outcome $y$ as a function of features x. This generally entails learning (some
function of) the conditional probability distribution
$\mathbb{P}(y|\mathbf{x})$.
The conditional distribution provides the researcher with a model of the
population under study. Three questions could be asked of this model:
* Q1
What is the distribution of $y$ conditional on some observed value of
$\textbf{x}_{obs}$?
* Q2
What is the distribution of $y$ conditional on an extrapolated value
$\textbf{x}_{ext}$ off the support of $\mathbb{P}(\textbf{x})$?
* Q3
What is the distribution of $y$ given an intervention that sets the value of x
to $\textbf{x}_{int}$?
It will become clear through this paper that off-the-shelf supervised machine
learning, unguided by theory, can only reliably address the first question,
which is a prediction question, while policy analysis applications are
typically also concerned with the second and third questions.
#### Supervised Machine Learning
The paradigm of supervised machine learning is that of learning to predict by
example. Given an i.i.d sample of input/output pairs
$\mathcal{D}=\\{(\textbf{x}_{i},y_{i})\\}_{i=1}^{n}$, called training data,
the problem of supervised learning is that of finding a well-fitting function
$\hat{f}:\mathcal{X}\rightarrow\mathcal{Y}$. The fitted function $\hat{f}$ is
said to generalize well if $\hat{f}(\textbf{x})$ is a good estimate of $y$ on
data pairs $(\textbf{x},y)$ drawn according to $\mathbb{P}(\textbf{x},y)$ and
not limited to the specific pairs in the training sample $\mathcal{D}$.
This paradigm requires specifying a loss function
$\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow[0,\infty)$ for measuring the
quality of predictions. This provides an objective measure for choosing $f$.
The risk of a function $f$ is defined as the expected loss over the
distribution of values the data pairs can take:
${R}(f)=\int_{\mathcal{X}\times\mathcal{Y}}\ell(y,f(\textbf{x}))\mathbb{P}(d\textbf{x},dy)$
(2.1)
The squared loss $\ell(y,f(\textbf{x}))=(y-f(\textbf{x}))^{2}$ is typical for
prediction tasks, where $\mathcal{Y}=\mathbb{R}$, and the logistic loss
$\ell(y,f(\textbf{x}))=\log(1+\exp(-yf(\textbf{x})))$ is typical for
classification tasks, where $\mathcal{Y}=\\{-1,1\\}$.
The problem of supervised learning is then to solve:
$\min_{f}R(f)$ (2.2)
given only $\mathcal{D}$. An exact solution for general functions $f$ and
losses $\ell$ is clearly not possible. Another complication is that the joint
distribution over which the expectation is taken is not known a priori. A path
for tractability then is to restrict functions $f$ to some hypothesis space
$\mathcal{H}$, for example the linear functions $f(\textbf{x})=\beta^{T}\textbf{x}$,
and to replace the expected risk by the empirical risk calculated from the
data:
$\hat{R}(f)=\frac{1}{n}\sum_{i=1}^{n}\ell(y_{i},f(\textbf{x}_{i}))$ (2.3)
The learning problem is then approximated by minimizing the empirical risk
over a restricted hypothesis space $\mathcal{H}$:
$\min_{f\subset\mathcal{H}}\hat{R}(f)$ (2.4)
Since the sampled data $\mathcal{D}$ are random and in practice the
measurement pairs are noisy, if the hypothesis space is large relative to the
sample size, it can happen that the empirical risk is not a good approximation
to the expected risk. A typical behavior is that $f$ fits to noise in the
observed sample and
$\min_{f\subset\mathcal{H}}\hat{R}(f)\ll\min_{f}R(f)$ (2.5)
This phenomenon is known as over-fitting. A way of mitigating this is to
consider a regularizer $G:\mathcal{H}\rightarrow[0,\infty)$ that penalizes
complexity in $f$. The objective function in (2.4) is replaced by
$\min_{f\subset\mathcal{H}}\hat{R}(f)+\lambda G(f)$ (2.6)
for $\lambda>0$ (Rosasco and Poggio, 2017).
The parameter $\lambda$ is determined through a procedure known as cross-
validation. The main idea is that since the empirical risk evaluated on the
training data (the training loss) is not a good approximation to the true
risk, a random sample is held out from $\mathcal{D}$ and used to approximate
the true risk at solutions to (2.6) for various values of $\lambda$. The
evaluation of the empirical risk approximation (2.3) on this independent hold-
out sample is called the validation loss. The optimal amount of regularization
$\lambda$ is chosen so that the validation loss is minimized. Such a $\lambda$
balances the complexity and ‘generalizability’ of the function $f$ for the
given learning task. There is a trade-off: overly complex functions $f$ tend
to fit to noise and generalize poorly. The right amount of penalty applied to
the complexity gives a best-fitting generalizable model (Hastie et al., 2001).
Cross-validation is a method to determine this right amount of penalty. There
are other approaches to preventing overfitting; these include sequentially
combining models (‘boosting’) and averaging models estimated on bootstrap
subsamples of the data (‘bagging’) (Athey, 2018).
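As a concrete instance of regularized empirical risk minimization (2.6) with hold-out cross-validation, the sketch below assumes the linear hypothesis space, squared loss, and the ridge penalty $G(f)=\lVert\beta\rVert^{2}$; the data are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam):
    # minimize empirical risk (squared loss) plus lam * ||beta||^2
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def val_loss(lam, X_tr, y_tr, X_val, y_val):
    # the hold-out (validation) loss approximates the true risk for this lam
    beta = ridge_fit(X_tr, y_tr, lam)
    return float(np.mean((y_val - X_val @ beta) ** 2))

# synthetic data: a few true effects, many irrelevant features, noisy outcomes
n, d = 60, 20
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[:3] = [2.0, -1.0, 0.5]
y = X @ beta_true + rng.normal(scale=1.0, size=n)
X_tr, y_tr, X_val, y_val = X[:40], y[:40], X[40:], y[40:]

# cross-validation: pick the lambda with the smallest validation loss
lams = [0.0, 0.01, 0.1, 1.0, 10.0, 100.0]
best_lam = min(lams, key=lambda l: val_loss(l, X_tr, y_tr, X_val, y_val))
```

Note that the selected coefficients are deliberately biased toward zero; as discussed above, this bias is the price paid for lower prediction variance.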
The model does not need to learn the entire conditional distribution to make a
prediction. The conditional mean or quantile is usually sufficient, depending
on the choice of the loss function $\ell$. Indeed, the solution to (2.2) for
the squared loss can be shown to be simply the conditional expectation over
the conditional distribution (Hastie et al., 2001):
$f(\textbf{x})=\mathbb{E}(y|\mathbf{x})$ (2.7)
This is also known as the regression function.
Depending on the use-case and the sample size, the hypothesis space
$\mathcal{H}$ can be adapted to accommodate general functions with severe non-
linearities. Two of the common possibilities include:
1.
$f(\textbf{x})=\beta^{T}\phi(\textbf{x})$
2.
$f(\textbf{x})=\phi(\beta^{T}\textbf{x})$
where $\phi(.)$ is a non-linear function. Note that the latter choice can be
iterated,
$f(\textbf{x})=\phi(\beta_{L}^{T}\phi(\beta_{L-1}^{T}\ldots\phi(\beta_{1}^{T}\textbf{x})))$,
to arrive at a basic multi-layer neural network (Rosasco and Poggio, 2017).
In addition to the choice of hypothesis space $\mathcal{H}$, there are two
main modeling assumptions:
1.
The data are drawn independently.
2.
The data are identically distributed, i.e., there exists a fixed underlying
distribution.
The appeal of supervised machine learning is in its ability to perform well on
prediction tasks by fitting complicated and generalizable functional forms to
discover sophisticated patterns in the data with little specification or input
from the user. The success of supervised machine learning, however, hinges on
some form of biased estimation. The bias is a direct result of
regularization, which trades off parameter unbiasedness for lower prediction
variance (Breiman et al., 2001).
The i.i.d assumption holds so long as prediction is limited to features drawn
according to the same fixed joint distribution that generated the data used in
the training procedure. Supervised machine learning models are therefore
excellent candidates for answering questions of the type:
> Q1 What is the distribution of $y$ conditional on some observed value of
> $\textbf{x}_{obs}$?
absent any interpretability considerations.
#### Discrete choice models and the econometric approach
Discrete choice models deal with inference problems where the output space is
discrete or categorical $\mathcal{Y}=\\{1,2,3,...,T\\}$ for $T\in\mathbb{N}$.
The researcher observes choices made by a population of decision makers. Under
the widely adopted random utility maximization framework (McFadden, 1981), each
decision maker ranks the alternatives in the choice set in order of preference
as represented by a utility function. Each alternative is characterized by a
utility and is chosen if and only if its utility exceeds the utility of all
other alternatives Ben-Akiva and Lerman (1985). Each utility equation includes
a random error term, because it is not possible to model every aspect of an
alternative or the decision maker in the utility equation.
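If one additionally assumes that the random error terms are i.i.d. Gumbel, random utility maximization yields the familiar multinomial logit choice probabilities. A minimal sketch follows; the alternatives and utility values are illustrative only:

```python
import numpy as np

def logit_probs(V):
    """Multinomial logit choice probabilities from systematic utilities V:
    P(i) = exp(V_i) / sum_j exp(V_j), the result of random utility
    maximization with i.i.d. Gumbel error terms."""
    V = np.asarray(V, dtype=float)
    e = np.exp(V - V.max())        # subtract the max for numerical stability
    return e / e.sum()

# illustrative systematic utilities for three travel modes: car, bus, rail
V = [1.2, 0.4, 0.9]
P = logit_probs(V)
```

Only the systematic utilities are specified by the analyst; the distributional assumption on the error term determines the mapping from utilities to probabilities, which is precisely the random component whose specification the later sections propose to select algorithmically.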
In contrast to the supervised machine learning approach, the traditional
econometric approach to the inference problem is a more theory-driven process.
This involves building a structural model for
$\mathbb{P}(y|\mathbf{x})$–combining data with subject-matter assumptions and
knowledge of the sampling process through which the data was obtained. These
assumptions guide the specification of the systematic component of the utility
equations and the handling of potential selection bias or endogeneity. Under
this paradigm, transparent models with a strong theoretical base are the
ideal.
It is understood that there may well be countless influences, non-
linearities, missing attributes, and heterogeneities that are unaccounted for
in the systematic part. A stochastic or random component will also need to be
incorporated to account for such aspects. A few alternative model
specifications are estimated on the full dataset and statistical theory is
used to determine goodness of fit. A main consideration of model specification
and estimation is the recovery of unbiased, or at least consistent, estimates
of the policy parameters of interest.
The parameters of the estimated models carry clear subject-matter
interpretations. These are subjected to sanity checks (the signs and relative
magnitudes for example) and a determination is made as to whether the
systematic or random specifications need to be modified. Often, a number of
revisions are required before the model is deemed fit-for-use from a policy
analysis perspective.
In essence, econometric model building is an effort to create causal models
(Angrist and Pischke, 2008; Greene, 2003), with the understanding that
identifying causality from observational data is at best somewhat tentative
and must rest on assumptions founded on subject-matter knowledge and on the
sampling process (Manski, 2009). From this standpoint,
empirical fit is not the only consideration for model choice.
## 3 Models for Analysis
Discrete choice policy analysis aims to predict behaviour in counterfactual
settings where the attributes of alternatives or the characteristics of
current decision makers change, new alternatives become available, existing
ones become unavailable, or new decision makers arise (Manski, 2013). Policy
analysis settings present hypothetical what-if questions such as “what will
happen if we raise transit fares?”. Answering such questions requires
models that can infer consequences of actions, i.e. models that capture a
sense of causation. Different causal mechanisms have different implications
for policy.
We motivate this discussion by quoting an example from Manski (2009):
> Suppose that you observe the almost simultaneous movements of a person and
> of his image in a mirror. Does the mirror image induce the person’s
> movements, does the image reflect the person’s movements, or do the person
> and image move together in response to a common external stimulus? Data
> alone can not answer this question. Even if you were able to observe
> innumerable instances in which persons and their mirror images move
> together, you would not be able to logically deduce the process at work. To
> reach a conclusion requires that you understand something of optics and of
> human behavior.
A model that only captures correlations or associations will correctly predict
that an image appears whenever a person moves in front of a mirror. However,
it cannot correctly infer the effect on the person’s movements of an
intervention, say the shattering of that mirror. “No image, therefore no
motion!” the correlational model falsely concludes with high confidence. The
hypothetical what-if questions of policy analysis are of exactly this kind
(although typically of a more constructive type).
Supervised learning models are only optimized directly to capture
correlations: a function $f$ is chosen so that the risk (2) is minimized over
the joint distribution. As the reflection problem shows, addressing what-if
extrapolative or interventional questions based on observational data requires
that these observations be combined with assumptions on the underlying
generating mechanisms Manski (1993):
> Data + Assumptions $\rightarrow{}$ Conclusions
The only other resolution is to initiate a new sampling process to collect
experimental data, which is impractical in many policy settings Manski (2009).
The next sections discuss features that are essential to models deployed in
policy analysis settings. We argue that these models must provide meaningful
extrapolations (Section 3.1), answers to interventions (Section 3.2), and
must be interpretable (Section 3.3).
### 3.1 Extrapolation: Theory as a Substitute for Data
Consider the demand $y$ of a commodity or service modelled as function of its
price $x$ shown in Figure 1. The goal is to determine how the demand will
respond to changes in price, perhaps due to a proposed tax. Typically, the
range of values over which prices were observed is limited: prices just do not
change enough. The task is to build a model relating demand to price, an
estimate of $\mathbb{P}(y|x)$, and to use this model to extrapolate
values of $y$ for values of $x$ outside the range of historically observed
prices.
The supervised machine learning paradigm is one of maximizing fit. A model
will be chosen to capture the non-linear trend in the observed data, perhaps
the second-order polynomial shown in red in Figure 1. This model, chosen to
maximize empirical fit, is perfectly suitable for studying how demand changes
across price points within the locality of historically observed prices.
Extrapolations outside that range, without sufficient assumptions, are hard to
justify, as we will make precise shortly.
An econometric approach to this problem will start with a theory– that demand
for a product responds negatively to increases in its price. The negative
estimated slope of the simple linear model used, the blue line in Figure 1,
confirms the researcher’s a priori expectations. Extrapolations from this
model rest on a theory, which is needed most when making predictions outside
the range of observed values. To quote Hal Varian Varian (1993):
> Naive empiricism can only predict what has happened in the past. It is the
> theory—the underlying model—that allows us to extrapolate.
Figure 1: The shape of an empirically fitted model is only governed by the
cloud of training data points. Without meaningful restrictions, extrapolations
off the training range are hard to justify.
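The contrast in Figure 1 is easy to reproduce. The sketch below uses synthetic data and illustrative parameter values (not the figure's actual data): it fits both an unrestricted polynomial and a theory-restricted linear model to demand observed over a narrow price range, and compares them inside and outside that range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demand data: prices observed only over a narrow range.
prices = rng.uniform(8.0, 12.0, 50)
demand = 100.0 - 4.0 * prices + rng.normal(0.0, 2.0, 50)

# Empirical-fit model: second-order polynomial, no shape restriction.
poly = np.polynomial.Polynomial.fit(prices, demand, deg=2)

# Theory-based model: linear, encoding "demand falls with price".
lin = np.polynomial.Polynomial.fit(prices, demand, deg=1)

# Within the observed range the two models agree closely...
print(poly(10.0), lin(10.0))

# ...but nothing in the empirical risk pins down the polynomial's
# shape off the support, so extrapolations there can differ arbitrarily.
print(poly(25.0), lin(25.0))
```

The estimated slope of the linear model can be checked against the a priori expectation that it is negative before the model is trusted for extrapolation.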
Model specifications that maximize fit as the only consideration are not
enough to provide meaningful extrapolations. To make this argument more
precise consider the general inference setting described in Section 1, and
suppose we seek to answer the second question identified:
> Q2 What is the distribution of $y$ conditional on an extrapolated value
> $\textbf{x}_{ext}$ off the support of $\mathbb{P}(\textbf{x})$?
The only way to infer $\mathbb{P}(y|\textbf{x}=\textbf{x}_{ext})$ at
$\textbf{x}_{ext}$ outside the support of $\mathbb{P}(\textbf{x})$ is to
impose assumptions enabling one to deduce
$\mathbb{P}(y|\textbf{x}=\textbf{x}_{ext})$ from $\mathbb{P}(y|\textbf{x})$.
For concreteness, consider the conditional mean $\mathbb{E}[y|\textbf{x}]$ and
look at the two possible ways of its estimation: nonparametric and parametric.
Smoothness regularity assumptions such as continuity or differentiability that
enable the nonparametric estimation of $\mathbb{E}[y|\textbf{x}]$ from finite
samples imply that $\mathbb{E}[y|\textbf{x}=\textbf{x}_{1}]$ is near
$\mathbb{E}[y|\textbf{x}=\textbf{x}_{2}]$ when $\textbf{x}_{1}$ is near
$\textbf{x}_{2}$. This assumption restricts the behaviour of
$\mathbb{E}[y|\textbf{x}]$ locally. Let $\textbf{x}_{0}$ be the point on the
support of $\mathbb{P}(\textbf{x})$ nearest to $\textbf{x}_{ext}$. It is not
clear whether the distance between $\textbf{x}_{0}$ and $\textbf{x}_{ext}$
should be interpreted as small enough to be governed by these local
restrictions. Extrapolation therefore requires an assumption that restricts
the behaviour of $\mathbb{E}[y|\textbf{x}]$ globally. This enables the
deduction of $\mathbb{E}[y|\textbf{x}=\textbf{x}_{ext}]$ from knowledge of
$\mathbb{E}[y|\textbf{x}]$ at values of x that are not necessarily near
$\textbf{x}_{ext}$ Manski (2009).
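This distinction can be made concrete with a small illustrative sketch (not from the source): a kernel smoother, which rests only on local smoothness assumptions, has nothing to say far off the support of $\mathbb{P}(\textbf{x})$, whereas a global linear restriction determines $\mathbb{E}[y|\textbf{x}]$ everywhere.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.5):
    """Local (nonparametric) estimate of E[y|x]: a kernel-weighted
    average, meaningful only where training data lie within a few
    bandwidths of the query point."""
    w = np.exp(-0.5 * ((x_train - x_query) / bandwidth) ** 2)
    if w.sum() < 1e-8:  # no nearby data: local assumptions give no answer
        return float("nan")
    return float(w @ y_train / w.sum())

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200)          # support of P(x) is [0, 1]
y = 2.0 * x + rng.normal(0.0, 0.1, 200)

print(nadaraya_watson(x, y, 0.5))       # on-support query: well defined
print(nadaraya_watson(x, y, 10.0))      # far off support: nan

# A global restriction (here: linearity) determines E[y|x] everywhere.
slope, intercept = np.polyfit(x, y, 1)
print(slope * 10.0 + intercept)
```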
Recall from Section 1 that a parametric estimation of
$\mathbb{E}[y|\textbf{x}]$ is obtained by minimizing the squared loss
empirical risk over a restricted class of functions $f\in\mathcal{H}$.
Values of x outside the support of $\mathbb{P}(\textbf{x})$ have no bearing on
the value of the empirical risk and therefore have no bearing on the shape of
the fitted function outside the support of $\mathbb{P}(\textbf{x})$. In other
words, without sufficient restrictions on $\mathcal{H}$, extrapolations off
the support are arbitrary.
Global restrictions make assumptions about how the conditional distribution
varies with x. These restrictions are chosen by the researcher in line with a
priori subject-matter expectations on that relationship. Consider again the
conditional mean $\mathbb{E}[y|\textbf{x}]$. The common linear regression
assumption is to restrict $\mathbb{E}[y|\textbf{x}]$ to be linear. Other
possible assumptions include restricting $\mathbb{E}[y|\textbf{x}]$ to be
convex or monotone increasing (in all or some of the covariates x). These and
other restrictions enable meaningful extrapolations off the support of
$\mathbb{P}(\textbf{x})$.
From this perspective, the primary function of theory is to justify the
imposition of global assumptions that enable extrapolation.
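As one concrete instance of such a global restriction, a monotone regression can be fitted with the classic pool-adjacent-violators algorithm. The sketch below is illustrative (the section does not prescribe an algorithm); it computes the least-squares monotone non-decreasing fit to responses at sorted covariate values.

```python
import numpy as np

def isotonic_fit(y):
    """Pool-adjacent-violators: least-squares fit of a monotone
    non-decreasing sequence to y (covariate values assumed sorted)."""
    merged = []  # stack of [block mean, block size]
    for v in np.asarray(y, dtype=float):
        merged.append([v, 1.0])
        # Merge adjacent blocks whenever monotonicity is violated.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, w2 = merged.pop()
            m1, w1 = merged.pop()
            merged.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    return np.array([m for m, w in merged for _ in range(int(w))])

# The violating pair (3, 2) is pooled to its mean, 2.5:
print(isotonic_fit([1.0, 3.0, 2.0, 4.0]))
```

Under such a monotonicity restriction, the fitted conditional mean is pinned down globally, so predictions beyond the observed covariate range inherit the restriction rather than being arbitrary.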
### 3.2 Intervention: Structural Assumptions Specify Invariant Aspects
Suppose variables $x$ and $y$ are observed to be strongly positively
correlated as in Figure 2. Does $x$ cause $y$? Is it the other way around? Or
is there, perhaps, a confounding variable $u$ that causes both $x$ and $y$?
Observational data alone can never answer this question even if the researcher
had access to innumerable observations of pairs $(x,y)$. Yet the underlying
data generating process needs to be uncovered before the researcher is able to
answer interventional questions. Ignoring this step will lead to misleading
conclusions.
Figure 2: Any number of data generating mechanisms may be consistent with
available empirical evidence. The three alternative models on the right
produce the same joint distribution of $x$ and $y$. Each model, however, has
different implications on how the value of one variable will change in
response to an interventional policy changing the value of the other variable.
This presents an identification problem. Observational data must be combined
with structural assumptions, motivated by subject-matter knowledge of $x$ and
$y$, for a resolution.
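The point made in Figure 2 can be simulated directly. In the sketch below (illustrative parameter values, chosen so the two mechanisms have matching first and second moments), "x causes y" and "y causes x" produce the same joint distribution, yet the intervention do(x := 2) has entirely different consequences under each.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Mechanism (a): x causes y.   y = x + noise
x_a = rng.normal(0.0, 1.0, n)
y_a = x_a + rng.normal(0.0, 1.0, n)

# Mechanism (b): y causes x.   x = y/2 + noise (moments matched to (a))
y_b = rng.normal(0.0, np.sqrt(2.0), n)
x_b = 0.5 * y_b + rng.normal(0.0, np.sqrt(0.5), n)

# Observationally indistinguishable: same covariance structure.
print(np.cov(x_a, y_a))   # approx [[1, 1], [1, 2]]
print(np.cov(x_b, y_b))   # approx [[1, 1], [1, 2]]

# Intervention do(x := 2): force x to 2 and re-run each mechanism.
# (a): y is downstream of x, so y responds.
y_a_int = 2.0 + rng.normal(0.0, 1.0, n)
# (b): y is upstream of x, so setting x leaves y untouched.
y_b_int = rng.normal(0.0, np.sqrt(2.0), n)

print(y_a_int.mean(), y_b_int.mean())   # approx 2.0 vs approx 0.0
```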
An excellent example is provided in Athey (2018). Suppose the researcher has
access to observational data of hotel room prices and their occupancy rates.
Since hotels tend to raise their prices during peak season, occupancy rates
are observed to be positively correlated with room prices. Without making any
structural assumptions, these data can only answer prediction questions of the
first type, for example, an agency seeking to estimate hotel occupancies from
published room rates. What if instead we ask the model for the impact of a
proposed luxury tax on occupancy rates? The model will suggest that raising
room prices leads to higher occupancy rates! This is an instance of the
logical fallacy cum hoc ergo propter hoc (with this, therefore because of
this).
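A simulation in the spirit of this example (the season variable, price premium, and true price effect below are all invented for illustration) exhibits the fallacy and its resolution: the naive regression slope of occupancy on price is positive, while conditioning on the confounding season recovers the true negative price effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Confounder: seasonal demand (1 = peak season).
season = rng.binomial(1, 0.5, n)

# Hotels raise prices in peak season; occupancy is also high then.
price = 100.0 + 80.0 * season + rng.normal(0.0, 10.0, n)
occupancy = (0.5 + 0.3 * season
             - 0.001 * (price - 100.0)      # true (negative) price effect
             + rng.normal(0.0, 0.05, n))

# Naive regression of occupancy on price: positive slope (cum hoc fallacy).
slope_naive = np.polyfit(price, occupancy, 1)[0]

# Conditioning on the confounder recovers the negative effect.
off_peak = season == 0
slope_within = np.polyfit(price[off_peak], occupancy[off_peak], 1)[0]

print(slope_naive > 0, slope_within < 0)
```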
What went wrong? Evaluating the effect of interventional policies breaks the
assumption of a fixed data generating process that underpins supervised
machine learning. Structural assumptions that encode a sense of causality are
therefore needed Brockman (2019):
> With regard to causal reasoning, we find that you can do very little with
> any form of model-blind curve fitting, or any statistical inference, no
> matter how sophisticated the fitting process is.
Supervised machine learning models, which only learn to capture correlations,
cannot answer interventional questions, which require, in addition to data,
strong structural assumptions. Prediction tasks are well managed by these
models only under conditions similar to those of the training data
$\mathcal{D}$. Recall that one of the assumptions of supervised machine
learning is that the data $\mathcal{D}$ are identically distributed according
to some fixed joint distribution. The problem of answering interventional
questions is that of making predictions under situations that are changing:
the assumption of a fixed joint distribution is not necessarily valid in the
“mutilated” world.
Answering questions of the third type:
> Q3 What is the distribution of $y$ given an intervention that sets the value
> of x to $\textbf{x}_{int}$?
requires combining data with sufficiently strong assumptions on the nature of
the modeled world. Nothing in the specification of a joint distribution
function $\mathbb{P}(x,y)$ identifies how it is to change in response to
interventions. This information must be provided by causal assumptions which
identify relationships that remain invariant when external conditions change
Pearl (2000).
### 3.3 Interpretability: Amenability to Scrutiny is a Prerequisite to
Credibility
The ultimate goal of analysis is to uncover insights on the behavior of a
population under study–connecting observed data to reality, and to use those
insights in forecasting and planning. Any model is only a simplification of
reality. It will include the salient features of a relationship of interest
and will often require a number of sufficient maintained assumptions to meet
the demands of policy analysis as discussed in earlier sections. The
requirement that the model be used in answering ambitious introspective policy
questions sets the bar high. For a model’s recommendations to have credibility
it must withstand scrutiny. This includes justifications for any assumptions
made and an understanding of why the model’s output is what it is.
Trust that the model’s results are sensible must first be established before
the model is applied to policy analysis. A model’s interpretability is its
gateway to establishing trust. Interpretable models are amenable to scrutiny–
a prerequisite to credibility.
Transparent models are the gold standard in interpretability. Transparency
entails a full understanding of the model’s mechanisms and assumptions. Each
of the model’s parameters admits intuitive subject-matter explanations. A
wrong parameter sign, such as a positive coefficient for cost in a demand
model, could be a strong cue that the model may be misspecified. The
researcher knows what is wrong and what to fix. Such an understanding confers
a “certificate of credibility” to the model–a guarantee, in essence, that
while the model’s predictions may be imprecise, the results are ‘in the right
direction’. With such credibility, trust is established and the model is
suitable for policy analysis.
Black box models are much harder, if not impossible, to fully scrutinize. The
parameters of such models are not identifiable and do not carry subject-matter
interpretations. It is sometimes still possible to query these models and
extract information in a post hoc analysis Lipton (2016). A major problem
remains. When the output does not conform to a priori expectations and the
results are counter intuitive, the parameters provide no clues as to what went
wrong and what should be fixed. It is not clear whether the problem is in the
training, in the method, or because things have changed in the environment
Pearl (2019).
## 4 Direct Machine Learning Applications to Discrete Choice
This section surveys efforts in the literature of applying machine learning
paradigms and techniques to models of discrete choice.
#### Direct comparisons of fit
Several studies in the literature compare the predictive accuracy of machine
learning models such as neural nets and support vector machines to classical
discrete choice models (such as flat and nested logit models) in various
applications including travel mode choice Zhang and Xie (2008) Omrani (2015)
Hagenauer and Helbich (2017), airline itinerary choice Lhéritier et al.
(2019), and car ownership Paredes et al. (2017). The unanimous conclusion that
machine learning models provide a better fit is hardly a surprise. The
usability of these models for policy analysis is suspect as we have
demonstrated in the previous section.
#### Post hoc analysis of black box models
A few studies consider the application of non-transparent models to discrete
choice settings and rely on post hoc analysis of output for insight. van
Cranenburgh and Kouwenhoven (2019) used a neural network to estimate the value
of time distribution using stated choice experiments with a faster/more
expensive alternative and a slower/cheaper alternative. The authors claim that
this method can uncover the distribution of value of time and its moments
without making strong assumptions on the shape of the distribution or the
error terms, while incorporating covariates and accounting for panel effects.
Wang and Zhao (2018) propose an empirical method to extract valuable
econometric information from neural networks, such as choice probabilities,
elasticities, and marginal rates of substitution. Their results show that when
economic information is aggregated over the population or ensembled over
models, the analysis can reveal roughly S-shaped choice probability curves,
and result in a reasonable median value-of-time. The authors admit, however,
that at the disaggregate level, some of the results are counter-intuitive
(such as positive cost and travel time effects on the choice probabilities,
and infinite value of time).
#### Algorithms for big data
A number of researchers studied the use of specific optimization algorithms
that are traditionally used to train machine learning models to facilitate the
estimation of discrete choice models over large datasets.
Lederrey et al. introduced an algorithm called Window Moving Average -
Adaptive Batch Size, inspired by stochastic gradient descent, and used it to
estimate multinomial and nested logit models. The improvement in likelihood is
evaluated at each step using smoothing techniques, and the batch size is
increased when the improvement is too low.
In the context of logit mixture models, Braun and McAuliffe (2010) proposed a
variational inference method for estimating models with random coefficients.
Variational procedures were developed for empirical Bayes and fully Bayesian
inference, by solving a sequence of unconstrained convex optimization
problems. After comparing their estimators to those obtained from the standard
MCMC - Hierarchical Bayes method Allenby (1997) Allenby and Rossi (1998) Train
(2009) on real and synthetic data, the authors concluded that variational
methods achieve accuracy competitive with MCMC at a small fraction of the
computational cost. The same conclusions are reached by several studies,
including Bansal et al. (2019), Depraetere and Vandebroek (2017), and Tan
(2017). Krueger et al. (2019) extended this estimator to account for inter-
and intra-consumer heterogeneity; however, they noted that the results were
noticeably less accurate than those obtained from MCMC, mainly because of the
restrictive mean-field assumption of variational Bayes.
#### Hybrid machine learning and discrete choice models
Sifringer et al. (2018) introduced the Learning Multinomial Logit model, where
the utility specification of a traditional multinomial logit is augmented with
a non-linear representation arising from a neural net. The rationale behind
this method was to divide the systematic part of the utility specification
into an interpretable part (where the variables are chosen by the modeler),
and a black-box part that aims at discovering a good utility specification
from available data. This method relies on the fact that multinomial logit can
be represented as a convolutional neural network with a single layer and
linear activation functions.
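The equivalence this method relies on is straightforward to write down. In the sketch below (illustrative attribute values and coefficients), the linear-in-parameters utilities are exactly a single linear layer, and the logit choice probabilities are the softmax of those utilities:

```python
import numpy as np

def mnl_probs(X, beta):
    """Multinomial logit choice probabilities.
    X:    (n_alternatives, n_features) attribute matrix for one choice set.
    beta: (n_features,) taste parameters (the single 'layer' of weights)."""
    v = X @ beta        # systematic utilities = one linear layer
    v -= v.max()        # shift for numerical stability (probs unchanged)
    e = np.exp(v)       # softmax over the alternatives
    return e / e.sum()

# Two alternatives described by (cost, time); negative tastes expected.
X = np.array([[2.0, 30.0],    # alternative 1: cheap, slow
              [5.0, 15.0]])   # alternative 2: pricey, fast
beta = np.array([-0.4, -0.05])
p = mnl_probs(X, beta)
print(p)   # probabilities sum to one
```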
#### Machine learning to inform model specification
Bentz and Merunka (2000) showed that a feedforward neural network with softmax
output units and shared weights can be viewed as a generalization of the
multinomial logit model (MNL). The authors also indicated that the main
difference between the two methods lies in the ability of neural nets to model
non-linear preferences, without a priori assumptions on the utility function.
The authors concluded that if the fitted function is not too complex, it is
possible to detect and identify some low-order non-linear effects from the
neural nets by projecting the function on subsets of the input space, and to
use the results to obtain a better specification for the MNL.
van Cranenburgh and Alwosheel (2019) developed a neural net based approach to
investigate decision rule heterogeneity among travelers. The neural nets were
trained to recognize the choice patterns of four distinct decision rules:
Random Utility Maximization, Random Regret Minimization, Lexicographic, and
Random. This method was applied to a Stated Choice experiment on commuters’
value of time, and cross-validation was used to compare the results against
those obtained from traditional discrete choice analysis methods. The authors
concluded that neural nets can successfully recover decision rule
heterogeneity.
## 5 Discrete Choice Models with Machine Learning Capabilities
How can machine learning paradigms be leveraged to advance the field of
discrete choice? Our motivation for applications of machine learning to
discrete choice is directed both by its limitations– that without
incorporating strong structural assumptions and addressing issues of
interpretability, machine learning cannot be used for answering the
extrapolative and interventional questions of policy analysis– and its
strengths: machine learning provides flexibility in model specification, and
systematic methods for model selection.
So far, we have established that:
1. Fully data-driven methodologies need to be tempered with structural
assumptions with respect to policy variables of interest.
2. Imposing meaningful subject-matter global restrictions on the hypothesis
space $\mathcal{H}$ allows for meaningful extrapolations.
3. Structural assumptions are needed to establish causality from observational
data.
4. Scrutiny, at least with respect to the policy variables, is required to
assess the model’s fitness for use.
Domain knowledge typically informs such assumptions and restrictions and
guides assessments of model suitability. Such knowledge is most applicable in
specifying the systematic component of random utility discrete choice models
and least applicable in determining the specification of the random component.
This identifies an area where machine learning paradigms can be leveraged,
namely in specifying and systematically selecting the best random utility
specification.
The systematic component is specified with a priori expectations on the signs
and relative magnitudes of the parameters Ben-Akiva and Lerman (1985). For
example, since additional travel cost and time represent added disutility in
travel demand, the parameters corresponding to cost and time are expected to
be negative in a linear specification of the model. The value of travel time,
calculated as the relative magnitude of these parameters, is commonly used to
assess model specification.
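With illustrative (not estimated) coefficient values, this check amounts to a one-line computation:

```python
# Linear-in-parameters utility: V = beta_cost * cost + beta_time * time,
# with both coefficients expected a priori to be negative.
beta_cost = -0.40   # utility per dollar (illustrative value)
beta_time = -0.05   # utility per minute (illustrative value)

# Value of travel time: the marginal rate of substitution of time for cost.
vot_per_hour = (beta_time / beta_cost) * 60.0   # dollars per hour
print(vot_per_hour)  # 7.5
```

A wrong sign on either coefficient, or an implausible implied value of time, signals a misspecified model.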
#### Where domain knowledge does not help
While subject-matter knowledge informs the specification of the systematic
utility equations, specifying random aspects of the model can be more
challenging. For concreteness, we consider two examples: nested logit and
logit mixture models.
First consider the problem of specifying the nesting structure in nested logit
models. Researchers often use their understanding of the choice situation under
study to group ‘similar’ alternatives into nests. Alternatives grouped in the
same nest share a common error term accounting for shared similarities not
directly included in the systematic component. However, a priori expectations
about the optimal nesting structure are sometimes misguided. The correlations
in the error components depend largely on the variables entering the
systematic part of the utility. If the systematic utility equations account
for most of the correlation between two similar alternatives, then grouping
these alternatives under the same nest does not necessarily improve over flat
logit. The researcher typically tests and reports two or three alternative
nesting structure specifications for robustness. A comprehensive test of all
possible structures is impractical for many modeling situations.
In logit mixture models, the parameters in the systematic utility equations
are treated as random variables– usually normally distributed with mean and
covariance to be estimated from the data. Off-diagonal elements in the
covariance matrix indicate that a decision maker’s preferences for one
attribute are related to their preferences for another attribute Hess and
Train (2017). The researcher has some leeway in specifying which of these
off-diagonal elements to estimate and which to constrain to zero. In practice,
these models are typically estimated under two extreme assumptions: either a
full or a diagonal covariance matrix James (2018). A full covariance matrix
implies correlations between all the distributed parameters, while a diagonal
matrix implies that these parameters are uncorrelated. Ignoring correlations
between parameters can distort the estimated distribution of ratios of
coefficients, representing the values of willingness-to-pay (WTP) and marginal
rates of substitution Hess et al. (2017). In practice, it is usually difficult
for researchers to hypothesize which subsets of variables are correlated.
The following sections present machine learning data driven methodologies for
algorithmically selecting the random specification of the utility components
of nested logit (Section 5.1) and logit mixture models (Section 5.2) subject
to interpretability considerations. The optimal random specification is
determined using optimization techniques, regularization, and out-of-sample
validation. The econometric tradition of specifying the systematic component
the utility remains. The models remain transparent, and the parameters can be
used to estimate trade-offs, willingness to pay values, and elasticities as
before.
### 5.1 Learning Structure in Nested Logit Models
Nested logit is a popular modeling tool in econometrics and transportation
science when one wants to model the choice that an individual makes from a set
of mutually exclusive alternatives McFadden (1981) Ben-Akiva and Lerman
(1985). The nested logit model provides a flexible modeling structure by
allowing for correlations between the random components of the alternatives in
the choice set.
In specifying a nested logit model, the researcher hypothesizes a nesting
structure over the choice set and proceeds to estimate the model parameters
(the coefficients in the utility equations that determine the relative
attractiveness of choices to the decision maker). Each nest is associated with
a scale parameter (which is also estimated) that quantifies the degree of
intra-nest correlation Ben-Akiva and Lerman (1985). The nesting structure
determines how the alternatives are correlated, and the scales determine by
what amount they are correlated.
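A minimal sketch of these quantities (illustrative utilities, nests, and scales; the normalization of the root scale to one is an assumption of the example) computes two-level nested logit choice probabilities from a nesting structure and nest scales:

```python
import numpy as np

def nested_logit_probs(utilities, nests, mu):
    """Two-level nested logit choice probabilities.
    utilities: dict alternative -> systematic utility V_i
    nests:     dict nest -> list of alternatives in that nest
    mu:        dict nest -> nest scale; mu >= 1 keeps the model consistent
               with utility maximization when the root scale is one."""
    # Inclusive value (logsum) of each nest, rescaled to the root level.
    logsums = {m: np.log(sum(np.exp(mu[m] * utilities[a]) for a in alts)) / mu[m]
               for m, alts in nests.items()}
    denom = sum(np.exp(ls) for ls in logsums.values())
    probs = {}
    for m, alts in nests.items():
        p_nest = np.exp(logsums[m]) / denom            # marginal nest choice
        within = sum(np.exp(mu[m] * utilities[a]) for a in alts)
        for a in alts:                                  # choice within nest
            probs[a] = p_nest * np.exp(mu[m] * utilities[a]) / within
    return probs

utils = {"car": 0.0, "bus": -0.4, "rail": -0.3}
nests = {"auto": ["car"], "transit": ["bus", "rail"]}
mu = {"auto": 1.0, "transit": 2.0}   # transit alternatives are correlated
p = nested_logit_probs(utils, nests, mu)
print(p)
```

Raising a nest's scale above one increases the substitution between the alternatives inside that nest relative to alternatives outside it.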
The large feasible set of possible nesting structures presents a significant
modeling challenge in deciding which nesting structure best reflects the
underlying choice behavior of the population. The current modus operandi is to
use domain knowledge to substantially reduce the feasible set to a small set
of candidate structures. This is done at the risk of potentially excluding
some ostensibly non-intuitive structures which might actually provide a better
description of the choice behaviour of the population under study Koppelman
and Bhat (2006). This is the core motivation of Aboutaleb (2019) for taking a
more holistic view of nested logit model estimation, i.e., one that optimizes
over structure as well as parameters.
Aboutaleb (2019) formulates and solves the nested logit structure learning
problem as a mixed-integer nonlinear programming (MINLP) problem– which
entails optimizing not only over the parameters of the model but also over all
valid nest structures. In other words, rather than assuming a nesting
structure a priori, the goal is to reveal this structure from the data. To
ensure that the learned tree is consistent with utility maximization, the
MINLP is constrained so that the scales increase with increasing nesting
level. The author penalizes complexity in two ways: the number of nests and
the nesting level. The optimal model complexity is chosen through a cross-
validation procedure.
In advocating for a data-driven approach for specifying a nested logit
structure, we are in no way diminishing the role of the modeler or the
importance of domain-specific knowledge in specifying and designing good
discrete choice models. Recall that the utility of an alternative to the
decision makers under study is given by a sum of a systematic component and a
random component. It is the modeler’s purview to correctly specify the
systematic part of the utility equation. Specifying the random part, however,
is a tricky business and the optimal structure may be counter-intuitive. In
fact, the optimal error structure is not independent of the specification of
the systematic part. If all aspects of the choice behavior that account for
correlation between choices can be fully captured in the systematic part, no
nesting is needed.
### 5.2 Sparse Covariance Estimation in Logit Mixture Models
Logit mixtures permit the modeling of taste heterogeneity by allowing the
model parameters to be randomly distributed across the population under study
Train (2009). The modeler’s task is to specify the systematic part of the
utility equations, as well as the mixing distributions of the distributed
parameters and any assumptions on the structure of the covariance matrix.
Researchers typically specify either a full or diagonal covariance matrix.
Keane and Wasi (2013) compared different specifications with full, diagonal,
and restricted covariance matrices and concluded that a full covariance matrix
might not be needed in some cases. They concluded that different
specifications fit best on different datasets, which means that researchers
cannot know, without testing, which restrictions to impose.
As the number of combinations of all possible covariance matrix specifications
grows super-exponentially with the number of distributed parameters, it is not
practically feasible for the modeler to comprehensively compare all possible
specifications of the covariance matrix in order to determine an optimal
specification to use.
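To put numbers on this growth: grouping $K$ distributed parameters into correlated blocks is a set partition of $K$ items, so the number of block-diagonal covariance structures is the $K$-th Bell number, which grows super-exponentially. A short stdlib-only sketch tabulates them via the Bell triangle:

```python
def bell_numbers(n):
    """First n+1 Bell numbers, computed with the Bell triangle.
    bell(k) counts the set partitions of k items, i.e. the number of
    block-diagonal covariance structures over k distributed parameters."""
    row = [1]
    bells = [1]
    for _ in range(n):
        new_row = [row[-1]]       # each row starts with the previous row's end
        for v in row:
            new_row.append(new_row[-1] + v)
        row = new_row
        bells.append(row[0])
    return bells

print(bell_numbers(10))
# [1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975]
```

Even with only ten distributed parameters there are already over a hundred thousand candidate block structures, which is why an exhaustive comparison is impractical.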
Sparse specifications of the covariance matrix are desirable since the number
of covariance elements grows quadratically with the number of distributed
parameters. Consequently, sparser models provide efficiency gains in the
estimation process compared to estimating a full covariance matrix.
Aboutaleb et al. (2021) presents the Mixed-integer Sparse Covariance (MISC)
algorithm which uses a mixed-integer program to find an optimal block diagonal
covariance matrix structure for any desired sparsity level using Markov Chain
Monte Carlo (MCMC) posterior draws from the full covariance matrix. The
optimal sparsity level of the covariance matrix is determined using out-of-
sample validation. Furthermore, unlike Bayesian Lasso-based penalties in the
statistics literature, the method in Aboutaleb et al. (2021) does not penalize
the non-zero covariances. This is a desirable feature, since penalizing the
non-zero covariances may lead to underestimating the heterogeneity in the
population under study (the covariance estimates will be biased towards zero).
## 6 Concluding Remarks
Supervised machine learning methods emphasize empirical fit as the objective,
predictive success being the only criterion as opposed to issues of
interpretation or establishing causality. This imposes an intrinsic limitation
to the application of such models to policy analysis. Prediction is indeed
important from several perspectives. From a policy analysis standpoint,
however, the success of a model is best judged from its ability to predict in
new contexts. We have established the following:
1. Machine learning and other empirical models that only maximize fit are
excellent candidates for prediction problems where interpretability is not a
primary consideration and the prediction is localized to situations directly
similar to the training environment.
2. Discrete choice models seek to answer “what-if” extrapolative and
interventional questions that cannot be fully resolved from observational data
alone. Instead, data must be combined with domain-knowledge assumptions.
3. Efforts to combine aspects of machine learning with discrete choice methods
must not come at the cost of interpretability.
4. Machine learning concepts such as regularization and cross-validation have
merit in providing a systematic and principled model selection mechanism.
5. We presented an implementation of such algorithmic model selection
techniques applied to two of the most common discrete choice models: the
nested logit and the logit mixture model.
We reviewed recent machine learning inspired methodologies for algorithmically
selecting the random specifications in nested logit and logit mixtures that
maximize fit subject to interpretability considerations. The econometric
tradition of specifying the systematic component of the utility remains. The
models remain transparent, and the parameters can be used to estimate trade-
offs, willingness to pay values, and elasticities as before. We have simply
automated what the modeler would ideally like to have done: compare all
possible nesting tree (or covariance structure) specifications that “make
sense” and choose the best one based on likelihood ratio or some other
statistical test.
# Filtered formal groups, Cartier duality, and derived algebraic geometry
Tasos Moulinos
###### Abstract
We develop a notion of formal groups in the filtered setting and describe a
duality relating these to a specified class of filtered Hopf algebras. We then
study a deformation to the normal cone construction in the setting of derived
algebraic geometry. Applied to the unit section of a formal group
$\widehat{\mathbb{G}}$ this provides a $\mathbb{G}_{m}$-equivariant
degeneration of $\widehat{\mathbb{G}}$ to its tangent Lie algebra. We prove a
unicity result on complete filtrations, which, in particular, identifies the
resulting filtration on the coordinate algebra of this deformation with the
adic filtration on the coordinate algebra of $\widehat{\mathbb{G}}$. We use
this in a special case, together with the aforementioned notion of Cartier
duality, to recover the filtration on the filtered circle of [MRT19]. Finally,
we investigate some properties of $\widehat{\mathbb{G}}$-Hochschild homology,
set out in loc. cit., and describe “lifts” of these invariants to the setting
of spectral algebraic geometry.
## 1 Introduction
The starting point of this work arises from the construction in [MRT19] of the
_filtered circle_ , an object of algebro-geometric nature, capturing the
$k$-linear homotopy type of $S^{1}$, the topological circle. This construction
is motivated by the schematization problem due to Grothendieck, which asks, in
its most general form, for a purely algebraic description of the
$\mathbb{Z}$-linear homotopy type of an arbitrary topological space $X$.
In the process of doing this, the authors realized that there was an
inextricable link between this construction, and the theory of formal groups
and Cartier duality, as set out in [Car62]. We briefly review the
relationship. The filtered circle is obtained as the classifying stack
$B\mathbb{H}$ where $\mathbb{H}$ is a $\mathbb{G}_{m}$-equivariant family of
group schemes parametrized by the affine line, $\mathbb{A}^{1}$. This family
of schemes interpolates between two affine group schemes, $\mathsf{Fix}$ and
$\mathsf{Ker}$; these can be traced to the work of [SS01] where they are shown
to arise via Cartier duality from the formal multiplicative and formal
additive groups, $\widehat{\mathbb{G}_{m}}$ and $\widehat{\mathbb{G}_{a}}$
respectively. The filtered circle $S^{1}_{fil}$ is then obtained as
$B\mathbb{H}$, the classifying stack over $\mathbb{A}^{1}/\mathbb{G}_{m}$ of
$\mathbb{H}$. By taking the derived mapping space out of $S^{1}_{fil}$ in
$\mathbb{A}^{1}/\mathbb{G}_{m}$-parametrized derived stacks, one recovers
precisely Hochschild homology together with a functorial filtration.
There is no reason to stop at $\widehat{\mathbb{G}_{m}}$ or
$\widehat{\mathbb{G}_{a}}$ however. In loc. cit., the authors proposed, given
an arbitrary $1$-dimensional formal group $\widehat{\mathbb{G}}$, the
following generalized notion of Hochschild homology of simplicial commutative
rings:
$\operatorname{HH}^{\widehat{\mathbb{G}}}(-):\operatorname{sCAlg}_{k}\to\operatorname{sCAlg}_{k},\,\,\,\,\,A\mapsto\operatorname{HH}^{\widehat{\mathbb{G}}}(A):=R\Gamma(\operatorname{Map}_{\operatorname{dStk}_{k}}(B\widehat{\mathbb{G}}^{\vee},{\operatorname{Spec}}A)).$
The right hand side is the derived mapping space out of
$B\widehat{\mathbb{G}}^{\vee}$, the classifying stack of the Cartier dual of
$\widehat{\mathbb{G}}$. For $\widehat{\mathbb{G}}=\widehat{\mathbb{G}_{m}}$
one recovers Hochschild homology, via a natural equivalence of derived schemes
$\operatorname{Map}(B\mathsf{Fix},X)\to\operatorname{Map}(S^{1},X)$
and for $\widehat{\mathbb{G}}=\widehat{\mathbb{G}_{a}}$ one recovers the
derived de Rham algebra (cf. [TV11]) via an equivalence
$\operatorname{Map}(B\mathsf{Ker},X)\simeq\mathbb{T}_{X|k}[-1]={\operatorname{Spec}}(\operatorname{Sym}(\mathbb{L}_{X|k}[1]))$
with the shifted (negative) tangent bundle. One may now ask the following
natural questions: if one replaces $\widehat{\mathbb{G}_{m}}$ with an
arbitrary formal group $\widehat{\mathbb{G}}$, does one obtain a similar
degeneration? Is there a sense in which such a degeneration is canonical?
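For concreteness, the two formal group laws in play can be manipulated directly as truncated power series. The following sketch (an illustration, not taken from the paper) encodes polynomials in $x,y$ modulo total degree $\geq 6$ with exact rational coefficients, and checks that $\log(1+x)$ carries the multiplicative law $F_{m}(x,y)=x+y+xy$ to the additive law over $\mathbb{Q}$:

```python
from fractions import Fraction

N = 6  # truncate away all terms of total degree >= N

def mul(p, q):
    # product of bivariate polynomials {(i, j): coeff}, truncated
    out = {}
    for (a, b), c in p.items():
        for (d, e), f in q.items():
            if a + d + b + e < N:
                key = (a + d, b + e)
                out[key] = out.get(key, Fraction(0)) + c * f
    return {k: v for k, v in out.items() if v}

def add(p, q):
    out = dict(p)
    for k, v in q.items():
        out[k] = out.get(k, Fraction(0)) + v
    return {k: v for k, v in out.items() if v}

def scale(c, p):
    return {k: c * v for k, v in p.items() if c * v}

X = {(1, 0): Fraction(1)}
Y = {(0, 1): Fraction(1)}

F_m = add(add(X, Y), mul(X, Y))  # multiplicative law: x + y + xy
F_a = add(X, Y)                  # additive law: x + y

def log_series(p):
    """log(1 + p), truncated: sum_{n>=1} (-1)^(n+1) p^n / n."""
    out, power = {}, {(0, 0): Fraction(1)}
    for n in range(1, N):
        power = mul(power, p)
        out = add(out, scale(Fraction((-1) ** (n + 1), n), power))
    return out

# log(1+x) is an isomorphism G_m-hat -> G_a-hat over Q:
# log(1 + F_m(x, y)) = log(1+x) + log(1+y), since 1 + F_m = (1+x)(1+y).
lhs = log_series(F_m)
rhs = add(log_series(X), log_series(Y))
print(lhs == rhs)  # True
```

Over $\mathbb{F}_{p}$ no such logarithm exists, which is exactly why the two formal groups, and their Cartier duals $\mathsf{Fix}$ and $\mathsf{Ker}$, genuinely differ there.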
The overarching aim of this paper is to address some of these questions by
further systematizing some of the above ideas, particularly using further
ideas from spectral and derived algebraic geometry.
### 1.1 Filtered formal groups
The first main undertaking of this paper is to introduce a notion of _filtered
formal group_. For now, we give the following rough definition, postponing the
full definition to Section 4:
###### Definition 1.1 (cf. Definition 4.29).
A _filtered formal group_ is an abelian cogroup object $A$ in the category of
complete filtered algebras
$\operatorname{CAlg}(\widehat{\operatorname{Fil}}_{R})$ which are discrete at
the level of underlying algebras.
Heuristically, these give rise to stacks
$\widehat{\mathbb{G}}\to\mathbb{A}^{1}/\mathbb{G}_{m},$
for which the pullback $\pi^{*}(\widehat{\mathbb{G}})$ along the smooth
atlas $\pi:\mathbb{A}^{1}\to\mathbb{A}^{1}/\mathbb{G}_{m}$ is a formal group
over $\mathbb{A}^{1}$ in the classical sense.
From the outset we restrict to a full subcategory of complete filtered
algebras, for which there exists a well-behaved duality theory. Our setup is
inspired by the framework of [Lur18] and the notion of smooth coalgebra
therein. Namely, we restrict to complete filtered algebras that arise as the
duals of _smooth filtered coalgebras_ (cf. Definition 4.14). The abelian
cogroup structure on a complete filtered algebra $A$ then corresponds to the
structure of an abelian group object on the corresponding coalgebra. As
everything in sight is discrete, hence $1$-categorical (cf. Remark 3.3) this
is precisely the data of a comonoid in smooth coalgebras, i.e. a filtered Hopf
algebra. Inspired by the classical Cartier duality correspondence over a field
between formal groups and affine group schemes, we refer to this as filtered
Cartier duality.
###### Remark 1.2.
We acknowledge that the phrase “Cartier duality” has a variety of different
meanings throughout the literature (e.g. duality between finite group schemes,
$p$-divisible groups, etc.) For us, this will always mean a contravariant
correspondence between (certain full subcategories of) formal groups and
affine group schemes, originally observed by Cartier over a field in [Car62].
###### Remark 1.3.
In this paper we are concerned with filtered formal groups
$\widehat{\mathbb{G}}\to\mathbb{A}^{1}/\mathbb{G}_{m}$ whose “fiber over
${\operatorname{Spec}}k\to\mathbb{A}^{1}/\mathbb{G}_{m}$” recovers a classical
(discrete) formal group. We conjecture that the duality theory of Section 4
holds true in the filtered, spectral setting. Nevertheless, as this takes us
away from our main applications, we have stayed away from this level of
generality.
As it turns out, the notion of a complete filtered algebra, and hence
ultimately that of a filtered formal group, is of a rigid nature. To this
effect, we demonstrate the following unicity result on complete filtered
algebras $A_{n}$ with a specified associated graded:
###### Theorem 1.4.
Let $A$ be a commutative ring which is complete with respect to the $I$-adic
topology induced by some ideal $I\subset A$. Let
$A_{n}\in\operatorname{CAlg}(\widehat{\operatorname{Fil}}_{k})$ be a
(discrete) complete filtered algebra with underlying object $A$. Suppose there
is an inclusion
$A_{1}\to I$
of $A$-modules inducing an equivalence
$\operatorname{gr}(A_{n})\simeq\operatorname{gr}(F_{I}^{*}(A))=\operatorname{Sym}_{gr}(I/I^{2})$
of graded objects, where $I/I^{2}$ is of pure weight $1$. Then
$A_{n}=F_{I}^{*}A$, namely the filtration in question is the $I$-adic
filtration.
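The graded condition in this theorem can be checked by hand in the simplest case. The sketch below (illustrative only, for polynomial rings) verifies that for $A=\mathbb{Q}[x_{1},\dots,x_{k}]$ with $I=(x_{1},\dots,x_{k})$, the weight-$n$ piece $I^{n}/I^{n+1}$ of the $I$-adic filtration has the dimension of $\operatorname{Sym}^{n}(I/I^{2})$, where $\dim I/I^{2}=k$:

```python
from itertools import product
from math import comb

def dim_gr_piece(n, k):
    """dim of I^n / I^(n+1) for A = Q[x_1..x_k], I = (x_1, ..., x_k):
    a basis is given by monomials of total degree exactly n, counted
    here by brute-force enumeration of exponent vectors."""
    return sum(1 for e in product(range(n + 1), repeat=k) if sum(e) == n)

# gr(F_I^* A) = Sym(I/I^2) with I/I^2 of dimension k in pure weight 1,
# so the weight-n piece should have dimension C(n + k - 1, k - 1).
for k in (1, 2, 3):
    for n in range(6):
        assert dim_gr_piece(n, k) == comb(n + k - 1, k - 1)
print("graded pieces of the I-adic filtration match Sym^n(I/I^2)")
```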
Hence, if $A$ is an augmented algebra, there can only be one (multiplicative)
filtration on $A$ satisfying the conditions of Theorem 1.4, namely the
$I$-adic filtration. We will observe that the comultiplication on the
coordinate algebra of a formal group preserves this filtration, so that the
formal group structure lifts uniquely as well.
### 1.2 Deformation to the normal cone
Our next order of business is to study a deformation to the normal cone
construction in the setting of derived algebraic geometry. In essence this
takes a closed immersion $\mathcal{X}\to\mathcal{Y}$ of classical schemes and
gives a $\mathbb{G}_{m}$ equivariant family of formal schemes over
$\mathbb{A}^{1}$, generically equivalent to the formal completion
$\widehat{\mathcal{Y}_{\mathcal{X}}}$ which degenerate to the normal bundle of
$N_{\mathcal{X}|\mathcal{Y}}$ formally completed at the identity section. When
applied to a formal group $\widehat{\mathbb{G}}$ produces a
$\mathbb{G}_{m}$-equivariant $1$-parameter family of formal groups over the
affine line.
###### Theorem 1.5.
Let $f:{\operatorname{Spec}}(k)\to\widehat{\mathbb{G}}$ be the unit section of
a formal group $\widehat{\mathbb{G}}$. Then there exists a stack
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})\to\mathbb{A}^{1}/\mathbb{G}_{m}$
such that there is a map
${\operatorname{Spec}}k\times\mathbb{A}^{1}/\mathbb{G}_{m}\to
Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})$
whose fiber over $1\in\mathbb{A}^{1}/\mathbb{G}_{m}$ is
${\operatorname{Spec}}k\to\widehat{\mathbb{G}}$
and whose fiber over $0\in\mathbb{A}^{1}/\mathbb{G}_{m}$ is
${\operatorname{Spec}}k\to\widehat{T_{\widehat{\mathbb{G}}|k}}\simeq\widehat{\mathbb{G}_{a}},$
the formal completion of the tangent Lie algebra of $\widehat{\mathbb{G}}$.
We would like to point out that the constructions occur in the derived
setting, but the outcome is a degeneration between formal groups, which
belongs to the realm of classical geometry. One may then apply the
aforementioned _filtered Cartier duality_ to this construction to obtain a
group scheme
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})^{\vee}$ over
$\mathbb{A}^{1}/\mathbb{G}_{m}$, thereby equipping the cohomology of the
(classical) Cartier dual $\widehat{\mathbb{G}}^{\vee}$ with a canonical
filtration.
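In the special case $\widehat{\mathbb{G}}=\widehat{\mathbb{G}_{m}}$ the degeneration can be written down by hand: rescaling the coordinate gives the family $F_{t}(x,y)=t^{-1}F_{m}(tx,ty)=x+y+txy$, which is the multiplicative formal group law at $t=1$ and the additive law (the tangent Lie algebra) at $t=0$. The sketch below (a hand-made illustration of this rescaling, not the paper's construction) checks, with exact polynomial arithmetic, that $F_{t}$ is a formal group law for every $t$:

```python
from fractions import Fraction

def mul(p, q):
    # exact product of polynomials in x, y, z stored as {(i, j, k): coeff}
    out = {}
    for (a, b, c), u in p.items():
        for (d, e, f), v in q.items():
            key = (a + d, b + e, c + f)
            out[key] = out.get(key, Fraction(0)) + u * v
    return {k: v for k, v in out.items() if v}

def add(p, q):
    out = dict(p)
    for k, v in q.items():
        out[k] = out.get(k, Fraction(0)) + v
    return {k: v for k, v in out.items() if v}

def scale(c, p):
    return {k: c * v for k, v in p.items() if c * v}

X = {(1, 0, 0): Fraction(1)}
Y = {(0, 1, 0): Fraction(1)}
Z = {(0, 0, 1): Fraction(1)}

def F(p, q, t):
    """The G_m-equivariant family F_t(x, y) = x + y + t*x*y."""
    return add(add(p, q), scale(t, mul(p, q)))

# t = 1 recovers the multiplicative law, t = 0 its tangent (additive) law:
assert F(X, Y, Fraction(1)) == add(add(X, Y), mul(X, Y))
assert F(X, Y, Fraction(0)) == add(X, Y)

# F_t is a formal group law for every t (associativity):
for t in (Fraction(0), Fraction(1), Fraction(1, 2)):
    assert F(F(X, Y, t), Z, t) == F(X, F(Y, Z, t), t)
print("F_t interpolates between G_m-hat (t=1) and G_a-hat (t=0)")
```

Expanding both sides of the associativity check gives $x+y+z+t(xy+xz+yz)+t^{2}xyz$, which is visibly symmetric in all three variables.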
By [Mou19, Proposition 7.3],
$\mathcal{O}(Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}))$
acquires the structure of a complete filtered algebra. We have the following
characterization of the resulting filtration on
$\mathcal{O}(Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}))$,
relating the deformation to the normal cone construction with the $I$-adic
filtration of Theorem 1.4.
###### Corollary 1.6.
Let $\widehat{\mathbb{G}}$ be a formal group over $k$. Then there exists a
unique filtered formal group with $\mathcal{O}(\widehat{\mathbb{G}})$ as its
underlying object. In particular, there is an equivalence
$\mathcal{O}(Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}))\simeq
F^{*}_{ad}A$
of abelian cogroup objects in
$\operatorname{CAlg}(\widehat{\operatorname{Fil}}_{k})$.
Hence, the deformation to the normal cone construction applied to a formal
group $\widehat{\mathbb{G}}$ produces a _filtered formal group_.
Next, we specialize to the case of the formal multiplicative group
$\widehat{\mathbb{G}_{m}}$. By applying Theorem 1.5 to the unit section
${\operatorname{Spec}}k\to\widehat{\mathbb{G}_{m}}$, we recover the filtration
on the group scheme
$\mathsf{Fix}:=\operatorname{Ker}(F-1:\mathbb{W}(-)\to\mathbb{W}(-))$
of Frobenius fixed points on the Witt vector scheme and show that this
filtration arises via Cartier duality precisely from a certain
$\mathbb{G}_{m}$-equivariant family of formal groups over $\mathbb{A}^{1}$. As
a consequence, the formal group defined is precisely an instance of the
deformation to the normal cone of the unit section
${\operatorname{Spec}}k\to\widehat{\mathbb{G}_{m}}$.
###### Theorem 1.7.
Let $\mathbb{H}\to\mathbb{A}^{1}/\mathbb{G}_{m}$ be the filtered group scheme
of [MRT19]. This arises as the Cartier dual
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}_{m})^{\vee}$ of the
deformation to the normal cone of the unit section
${\operatorname{Spec}}k\to\widehat{\mathbb{G}_{m}}$. Namely, there exists an
equivalence of group schemes over $\mathbb{A}^{1}/\mathbb{G}_{m}$
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}_{m})^{\vee}\to\mathbb{H}.$
Putting this together with Corollary 1.6, we obtain the following curious
characterization of the HKR filtration on Hochschild homology studied in
[MRT19]:
###### Corollary 1.8.
The HKR filtration on Hochschild homology is functorially induced, by way of
filtered Cartier duality, from the $I$-adic filtration on
$\mathcal{O}(\widehat{\mathbb{G}}_{m})\simeq k[[t]]$.
### 1.3 Filtration on $\widehat{\mathbb{G}}$-Hochschild homology
One may of course apply the deformation to the normal cone construction to an
arbitrary formal group of height $n$ over any base commutative ring. As a
consequence, one obtains a canonical filtration on the aforementioned
$\widehat{\mathbb{G}}$-Hochschild homology:
###### Corollary 1.9.
(cf. 7.3) Let $\widehat{\mathbb{G}}$ be an arbitrary formal group. The functor
$\operatorname{HH}^{\widehat{\mathbb{G}}}(-):\operatorname{sCAlg}_{R}\to{\operatorname{Mod}}_{R}$
admits a refinement to the $\infty$-category of filtered $R$-modules
$\widetilde{\operatorname{HH}^{\widehat{\mathbb{G}}}(-)}:\operatorname{sCAlg}_{R}\to{\operatorname{Mod}}_{R}^{filt},$
such that
$\operatorname{HH}^{\widehat{\mathbb{G}}}(-)\simeq\operatorname{colim}_{(\mathbb{Z},\leq)}\widetilde{\operatorname{HH}^{\widehat{\mathbb{G}}}(-)}.$
In other words, $\operatorname{HH}^{\widehat{\mathbb{G}}}(A)$ admits an
exhaustive filtration for any formal group $\widehat{\mathbb{G}}$ and
simplicial commutative algebra $A$.
### 1.4 A family of group schemes over the sphere
We now shift our attention over to the topological context. In [Lur18], Lurie
defines a notion of formal groups intrinsic to the setting of spectral
algebraic geometry. We explore a weak notion of Cartier duality in this setup,
between formal groups over an $E_{\infty}$-ring and affine group schemes,
interpreted as group like commutative monoids in the category of spectral
schemes. Leveraging this notion of Cartier duality, we demonstrate the
existence a family of spectral group schemes for each height $n$. Since
Cartier duality is compatible with base-change, one rather easily sees that
these spectral schemes provide lifts of various affine group schemes.
###### Theorem 1.10.
Let $\widehat{\mathbb{G}}$ be a height $n$ formal group over
${\operatorname{Spec}}k$, for $k$ a finite field. Let
$\mathsf{Fix}_{\widehat{\mathbb{G}}}:=\widehat{\mathbb{G}}^{\vee}$ be its
Cartier dual affine group scheme. Then there exists a functorial lift
$\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}}\to{\operatorname{Spec}}R^{un}_{\widehat{\mathbb{G}}}$
giving the following Cartesian square of affine spectral schemes:
$\begin{array}{ccc}\mathsf{Fix}_{\widehat{\mathbb{G}}}&\xrightarrow{\phi^{\prime}}&\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}}\\ {\scriptstyle p^{\prime}}\downarrow&&\downarrow{\scriptstyle\phi}\\ {\operatorname{Spec}}(\mathbb{F}_{p})&\xrightarrow{p}&{\operatorname{Spec}}(R^{un}_{\widehat{\mathbb{G}}})\end{array}$
Moreover, $\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}}$ will be a group-like
commutative monoid object in the $\infty$-category of spectral stacks
$sStk_{R^{un}_{\widehat{\mathbb{G}}}}$ over $R^{un}_{\widehat{\mathbb{G}}}$.
The spectral group scheme of the theorem arises as the weak Cartier dual of
the universal deformation of the formal group $\widehat{\mathbb{G}}$; this
naturally lives over the _spectral deformation ring_
$R^{un}_{\widehat{\mathbb{G}}_{0}}$. This $E_{\infty}$-ring, studied in
[Lur18], corepresents the formal moduli problem sending a complete
(noetherian) $E_{\infty}$-ring $A$ to the space of deformations of
$\widehat{\mathbb{G}}_{0}$ to $A$, and is a spectral enhancement of the
classical deformation rings of Lubin and Tate. A key such example arises from
the restriction to $\mathbb{F}_{p}$ of the subgroup scheme $\mathsf{Fix}$ of
fixed points on the Witt vector scheme, in height one.
### 1.5 Liftings of $\widehat{\mathbb{G}}$-twisted Hochschild homology
Finally, we study an $E_{\infty}$ (as opposed to simplicial commutative)
variant of $\widehat{\mathbb{G}}$-Hochschild homology. For an $E_{\infty}$
$k$-algebra, this will be defined in an analogous manner to
$\operatorname{HH}^{\widehat{\mathbb{G}}}(A)$ (see Definition 9.1). We
conjecture that for a simplicial commutative algebra $A$ with underlying
$E_{\infty}$-algebra, denoted by $\theta(A)$, this recovers the underlying
$E_{\infty}$ algebra of the simplicial commutative algebra
$HH^{\widehat{\mathbb{G}}}(A)$. In the case of the formal multiplicative group
$\widehat{\mathbb{G}_{m}}$, we verify this to be true, so that one recovers
Hochschild homology.
These theories now admit lifts to the associated spectral deformation rings:
###### Theorem 1.11.
Let $\widehat{\mathbb{G}}$ be a height $n$ formal group over a finite field
$k$ of characteristic $p$, and let $R^{un}_{\widehat{\mathbb{G}}}$ be the
associated spectral deformation $E_{\infty}$ ring. Then there exists a functor
$\operatorname{THH}^{\widehat{\mathbb{G}}}:\operatorname{CAlg}_{R^{un}_{\widehat{\mathbb{G}}}}\to\operatorname{CAlg}_{R^{un}_{\widehat{\mathbb{G}}}}$
defined as
$\operatorname{THH}^{\widehat{\mathbb{G}}}(A):=R\Gamma(\operatorname{Map}(B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}},{\operatorname{Spec}}A),\mathcal{O})$
This lifts $\widehat{\mathbb{G}}$-Hochschild homology in the sense that if $A$
is a $k$-algebra for which there exists an
$R^{un}_{\widehat{\mathbb{G}}}$-algebra lift $\widetilde{A}$ with
$\widetilde{A}\otimes_{R^{un}_{\widehat{\mathbb{G}}}}k\simeq A$
then there is a canonical equivalence, cf. Theorem 9.7,
$\operatorname{THH}^{\widehat{\mathbb{G}}}(\widetilde{A})\otimes_{R^{un}_{\widehat{\mathbb{G}}}}k\simeq\operatorname{HH}^{\widehat{\mathbb{G}}}(A)$
We tie the various threads of this work together in the speculative final
section where we discuss the question of lifting the filtration on
$\operatorname{HH}^{\widehat{\mathbb{G}}}(-)$, defined in section 7 as a
consequence of the degeneration of $\widehat{\mathbb{G}}$ over
$\mathbb{A}^{1}/\mathbb{G}_{m}$, to a filtration on the topological lift
$\operatorname{THH}^{\widehat{\mathbb{G}}}(-)$.
### 1.6 Future work
Working over a ring of integers $\mathcal{O}_{K}$ in a local field extension
$K\supset\mathbb{Q}_{p}$, one obtains a formal group, known as the
_Lubin-Tate formal group_. This is canonically associated to a choice of
uniformizer $\pi\in\mathcal{O}_{K}$. In future work, we investigate analogues
of the construction of $\mathbb{H}$ in [MRT19], which will be related by
Cartier duality to this Lubin-Tate formal group. By the results of this paper,
these filtered group schemes will have a canonical degeneration arising from
the deformation to the normal cone construction of the Cartier dual formal
groups.
In another vein, we expect the study of these spectral lifts
$\operatorname{THH}^{\widehat{\mathbb{G}}}(-)$ to be an interesting direction.
For example, there is the question of filtrations, and to what extent they
lift to $\operatorname{THH}^{\widehat{\mathbb{G}}}(-)$. One could try to base-
change this along the map to the orientation classifier
$R^{un}_{\widehat{\mathbb{G}}}\to R^{or}_{\widehat{\mathbb{G}}},$
cf. [Lur18]. Roughly, this is a complex orientable $E_{\infty}$ ring with the
universal property that it classifies oriented deformations of the spectral
formal group $\widehat{\mathbb{G}}^{un}$; these are oriented in that they
coincide with the formal group corresponding to a complex orientation on the
underlying $E_{\infty}$ algebra of coefficients. For example, one obtains
$p$-complete $K$-theory in height one. It is conceivable that questions about
filtrations and the like would be more tractable over this ring.
Outline We begin in section 2 with a short overview of the perspective on
formal groups which we adopt. In section 3 we describe some preliminaries from
derived algebraic geometry. In section 4, we construct the deformation to the
normal cone and apply it to the case of the unit section of a formal group. In
section 5 we apply this construction to the formal multiplicative group
$\widehat{\mathbb{G}_{m}}$ and relate the resulting degeneration of formal
groups to constructions in [MRT19]. In section 6, we study resulting
filtrations on the associated $\widehat{\mathbb{G}}$-Hochschild homologies. We
begin section 7 with a brief overview of the ideas which we borrow from
[Lur18] in the context of formal groups in spectral algebraic geometry, and
describe a family of spectral group schemes arising in this setting that
correspond to height $n$ formal groups over characteristic $p$ finite fields.
In section 8, we study lifts $\operatorname{THH}^{\widehat{\mathbb{G}}}(-)$ of
$\widehat{\mathbb{G}}$-Hochschild homology to the sphere, with a key input the
group schemes of the previous section. Finally, we end with a short
speculative discussion in section 9 about potential filtrations on
$\operatorname{THH}^{\widehat{\mathbb{G}}}(-)$.
Conventions We often work over the $p$-local integers $\mathbb{Z}_{(p)}$, and
so we typically use $k$ to denote a fixed commutative
$\mathbb{Z}_{(p)}$-algebra. If we use the notation $R$ for a ring or ring
spectrum, then we are not necessarily working $p$-locally. In another vein, we
work freely in the setting of $\infty$-categories and higher algebra from
[Lur]. We would also like to point out that our use of the notation
${\operatorname{Spec}}(-)$ depends on the setting; in particular when working
with spectral schemes, ${\operatorname{Spec}}(A)$ denotes the spectral scheme
corresponding to the $E_{\infty}$-algebra $A$. Finally, we will always be
working in the commutative setting, so we implicitly assume all relevant
algebras, coalgebras, formal groups, etc. are (co)commutative.
Acknowledgements. I would like to thank Marco Robalo and Bertrand Toën for
their collaboration in [MRT19] which led to many of the ideas presented in
this work. I would also like to thank Bertrand Toën for various helpful
conversations and ideas which have made their way into this paper. This work
is supported by the grant NEDAG ERC-2016-ADG-741501.
## 2 Basic notions from derived algebraic geometry
In this section we review some of the relevant concepts that we shall use from
the setting of derived algebraic geometry. We recall that there are two
variants, one whose affine objects are connective $E_{\infty}$-rings, and one
where the affine objects are simplicial commutative rings. We review parallel
constructions from both simultaneously, as we will switch between both
settings.
Let
$\mathcal{C}=\\{\operatorname{CAlg}^{\operatorname{cn}}_{R},\operatorname{sCAlg}_{R}\\}$
denote either the $\infty$-category of connective $R$-algebras or the
$\infty$-category of simplicial commutative $R$-algebras. Recall that the latter
can be characterised as the completion via sifted colimits of the category of
(discrete) free $R$-algebras. Over a commutative ring $R$, there exists a
functor
$\theta:\operatorname{sCAlg}_{R}\to\operatorname{CAlg}^{\operatorname{cn}}_{R},$
which takes the underlying connective $E_{\infty}$-algebra of a simplicial
commutative algebra. This preserves limits and colimits so is in fact monadic
and comonadic.
In any case one may define a derived stack via its functor of points, as an
object of the $\infty$-category $\operatorname{Fun}(\mathcal{C},\mathcal{S})$
satisfying hyperdescent with respect to a suitable topology on
$\mathcal{C}^{op}$, e.g. the étale topology. Throughout the sequel we
distinguish the context we work in by letting $\operatorname{dStk}_{R}$
denote the $\infty$-category of derived stacks and let
$\operatorname{sStk}_{R}$ denote the $\infty$-category of “spectral stacks”.
In either case, one obtains an $\infty$-topos, which is Cartesian closed, so
that it makes sense to talk about internal mapping objects: given any two
$X,Y\in\operatorname{Fun}(\mathcal{C},\mathcal{S})$, one forms the mapping
stack $\operatorname{Map}_{\mathcal{C}}(X,Y)$. In various cases of interest,
if the source and/or target is suitably representable by a derived scheme or a
derived Artin stack, then this is the case for
$\operatorname{Map}_{\mathcal{C}}(X,Y)$ as well.
There is a certain type of base-change result that we will use, cf. [HLP14,
Proposition A.1.5] and [Lur16, Proposition 9.1.5.7].
###### Proposition 2.1.
Let $f:\mathcal{X}\to{\operatorname{Spec}}R$ be a geometric stack over
${\operatorname{Spec}}R$. Assume that one of the following two conditions holds:
*
$\mathcal{X}$ is a derived scheme
*
The morphism $f$ is of finite cohomological dimension over
${\operatorname{Spec}}R$, so that the global sections functor sends
$\operatorname{QCoh}(\mathcal{X})_{\geq
0}\to({{\operatorname{Mod}}_{R}})_{\geq-n}$ for some positive integer $n$.
Then, for $g:{\operatorname{Spec}}R^{\prime}\to{\operatorname{Spec}}R$, the
following diagram of stable $\infty$-categories
$\begin{array}{ccc}{\operatorname{Mod}}_{R}&\xrightarrow{f^{*}}&\operatorname{QCoh}(\mathcal{X})\\\ {\scriptstyle g^{*}}\downarrow&&\downarrow{\scriptstyle g^{\prime*}}\\\ {\operatorname{Mod}}_{R^{\prime}}&\xrightarrow{f^{\prime*}}&\operatorname{QCoh}(\mathcal{X}_{R^{\prime}})\end{array}$
is right adjointable, and so, the Beck-Chevalley natural transformation of
functors $g^{*}f_{*}\simeq
f^{\prime}_{*}g^{\prime*}:\operatorname{QCoh}(\mathcal{X})\to{\operatorname{Mod}}_{R^{\prime}}$
is an equivalence.
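As a sanity check (not part of the cited statements), in the affine case the proposition reduces to a familiar base-change formula.

```latex
% Affine sanity check: if X = Spec B is affine, then QCoh(X) = Mod_B and
% f_* is restriction of scalars along R -> B. For M in Mod_B the
% Beck-Chevalley map becomes the associativity equivalence for the
% (derived) tensor product:
g^{*}f_{*}M \;=\; M\otimes_{R}R^{\prime}
\;\xrightarrow{\ \simeq\ }\;
M\otimes_{B}\bigl(B\otimes_{R}R^{\prime}\bigr) \;=\; f^{\prime}_{*}g^{\prime*}M.
```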
### 2.1 Formal algebraic geometry and derived formal descent
In this paper, we shall often find ourselves in the setting of formal
algebraic geometry and formal schemes. Hence we recall some basic notions in
this setting. We end this subsection with a notion of formal descent which is
intrinsic to the derived setting. This phenomenon will be exploited in Section
5.
An (underived) _formal affine scheme_ corresponds to the following piece of
data:
###### Definition 2.2.
We define an adic $R$-algebra to be an $R$-algebra $A$ together with an ideal
$I\subset A$, which endows $A$ with the $I$-adic topology.
###### Construction 2.3.
Let $A$ be an adic commutative ring having a finitely generated ideal of
definition $I\subseteq\pi_{0}A$. Then there exists a tower $...\to A_{3}\to
A_{2}\to A_{1}$ with the properties that
1. 1.
each of the maps $A_{i+1}\to A_{i}$ is a surjection with nilpotent kernel.
2. 2.
the canonical map
$\operatorname{colim}\operatorname{Map}_{\operatorname{CAlg}}(A_{n},B)\to\operatorname{Map}_{\operatorname{CAlg}}(A,B)$
induces an equivalence of the left hand side with the summand of
$\operatorname{Map}_{\operatorname{CAlg}}(A,B)$ consisting of maps $\phi:A\to
B$ annihilating some power of the ideal $I$.
3. 3.
Each of the rings $A_{i}$ is almost perfect when regarded as an
$A$-module.
One now defines $\operatorname{Spf}A$ to be the filtered colimit
$\operatorname{colim}_{i}{\operatorname{Spec}}A_{i}$
in the category of locally ringed spaces. In fact, it is the left Kan
extension of the ${\operatorname{Spec}}(-)$ functor along the inclusion
$\operatorname{CAlg}\to\operatorname{CAlg}^{ad}$.
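A standard illustration of this construction, included here for concreteness:

```latex
% Take A = Z_p with finitely generated ideal of definition I = (p).
% One may take the tower of Construction 2.3 to be the truncations
\cdots \longrightarrow \mathbb{Z}/p^{3} \longrightarrow \mathbb{Z}/p^{2} \longrightarrow \mathbb{Z}/p,
% each map a surjection with nilpotent kernel (p^{i})/(p^{i+1}). Then
\operatorname{Spf}\mathbb{Z}_{p} \;=\; \operatorname{colim}_{i}\,{\operatorname{Spec}}\,\mathbb{Z}/p^{i},
% whose functor of points sends B to the set of ring maps Z_p -> B
% annihilating some power of p, i.e. the p-adically continuous maps.
```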
###### Definition 2.4.
A formal scheme over $R$ is a functor
$X:\operatorname{CAlg}_{R}^{0}\to\operatorname{Set}$
which is Zariski locally of the above form. A (commutative) formal group is an
abelian group object in the category of formal schemes. By Remark 3.3, this
consists of the data of a formal scheme $\widehat{\mathbb{G}}$ whose functor
of points takes values in abelian groups.
There is a rather surprising descent statement one can make in the setting of
derived algebraic geometry. For this we first recall the notion of formal
completion. We remark that in this section we are always working in the
locally Noetherian context.
###### Definition 2.5.
Let $f:X\to Y$ be a closed immersion of schemes. We define the formal
completion to be the following stack $\widehat{Y}_{X}$ whose functor of points
is given by
$\widehat{Y}_{X}(R)=Y(R)\times_{Y(R_{red})}X(R_{red})$
where $R_{red}$ denotes the reduced ring $(\pi_{0}R)_{red}$.
Although defined in this way as a stack, this is actually representable by an
object in the category of formal schemes, commonly referred to as the formal
completion of $Y$ along $X$.
We form the nerve $N(f)_{\bullet}$ of the map $f:X\to Y$, which we recall is a
simplicial object that in degree $n$ is the $(n+1)$-fold fiber product
$N(f)_{n}=X\times_{Y}X\times_{Y}\cdots\times_{Y}X$
The augmentation map of this simplicial object naturally factors through the
formal completion (by the universal property the formal completion satisfies).
We borrow the following key proposition from [Toë14]:
###### Theorem 2.6.
The augmentation morphism $N(f)_{\bullet}\to\widehat{Y}_{X}$ displays
$\widehat{Y}_{X}$ as the colimit of the diagram $N(f)_{\bullet}$ in the
category of derived schemes: this gives an equivalence
$\operatorname{Map}_{dStk}(\widehat{Y}_{X},Z)\simeq\lim_{n\in\Delta}\operatorname{Map}_{dSch}(N(f)_{n},Z)$
for any derived scheme $Z$.
###### Remark 2.7.
At its core, this is a consequence of [Car08, Theorem 4.4] on derived
completions in stable homotopy, which gives a model for the completion of an
$A$-module spectrum along a map of ring spectra $f:A\to B$ as the
totalization of a cosimplicial diagram of spectra obtained via a co-nerve
construction.
### 2.2 Tangent and Normal bundles
Let $X$ be a derived stack, and $E\in\operatorname{Perf}(X)$ a perfect complex
of Tor amplitude concentrated in degrees $[0,n]$. Then we have the following
notion, cf. [Toë14, Section 3]:
###### Definition 2.8.
We define the linear stack associated to $E$ to be the functor $\mathbb{V}(E)$
sending
$(u:{\operatorname{Spec}}A\to
X)\mapsto\operatorname{Map}_{{\operatorname{Mod}}_{A}}(u^{*}(E),A)$
###### Example 2.9.
Let $\mathcal{O}[n]\in\operatorname{Perf}(X)$ be a shift of the structure
sheaf. Then $\mathbb{V}(\mathcal{O}[n])$ is simply $K(\mathbb{G}_{a},-n)$. For
a general perfect complex $E$, this $\mathbb{V}(E)$ may be obtained by taking
various twisted forms and finite limits of these $K(\mathbb{G}_{a},-n)$.
###### Definition 2.10.
Let $f:X\to Y$ be a map of derived stacks. We define the normal bundle stack
to be $\mathbb{V}(T_{X|Y}[1])$. This will be a derived stack over $X$; if $f$
is a closed immersion of classical schemes then this will be representable by
a derived scheme.
###### Example 2.11.
Let $i:{\operatorname{Spec}}k\to\widehat{\mathbb{G}}$ be the unit section of a
formal group. This is a lci closed immersion, hence the cotangent complex is
concentrated in (homological) degree $1$; thus the tangent complex is just $k$
in degree $-1$. It follows that the normal bundle
$\mathbb{V}(T_{k|\widehat{\mathbb{G}}}[1])$ is just
$K(\mathbb{G}_{a},0)=\mathbb{G}_{a}$, the additive group. In fact we may
identify the normal bundle with the tangent Lie algebra of
$\widehat{\mathbb{G}}$.
## 3 Formal groups and Cartier duality
In this section we review some ideas pertaining to the theory of (commutative)
formal groups which will be used throughout this paper. In particular we
carefully review the notion of Cartier duality as introduced by Cartier in
[Car62], and also described in [Haz78, Section 37].
There are several perspectives one may adopt when studying formal groups. In
general, one may think of them as an abelian group object in the category of
formal schemes or representable formal moduli problems. In this paper we will
be focusing on the somewhat restricted setting of formal groups which arise
from certain types of Hopf algebras. In this setting one has a particularly
well behaved duality theory which we shall exploit. Furthermore it is this
structure which has been generalized by Lurie in [Lur18] to the setting of
spectral algebraic geometry.
### 3.1 Abelian group objects
We start off with the notions of abelian group and commutative monoid objects
in an arbitrary $\infty$-category and review their distinction.
###### Definition 3.1.
Let $\mathcal{C}$ be an $\infty$-category which admits finite limits. A
commutative monoid object is a functor
$M:\operatorname{Fin}_{*}\to\mathcal{C}$ with the property that for each $n$,
the natural maps $M(\langle n\rangle)\to M(\langle 1\rangle)$ induced by the
Segal maps $\rho^{i}:\langle n\rangle\to\langle 1\rangle$ induce an
equivalence $M(\langle n\rangle)\simeq M(\langle 1\rangle)^{n}$ in
$\mathcal{C}$.
In addition, a commutative monoid $M$ is grouplike if for every object
$C\in\mathcal{C}$, the commutative monoid $\pi_{0}\operatorname{Map}(C,M)$ is
an abelian group.
We now define the somewhat contrasting notion of abelian group object. This
will be part of the relevant structure on a formal group in the spectral
setting.
###### Definition 3.2.
Let $\mathcal{C}$ be an $\infty$-category. Then the $\infty$-category of
abelian objects of $\mathcal{C}$, $\operatorname{Ab}(\mathcal{C})$ is defined
to be
$\operatorname{Fun}^{\times}(\operatorname{Lat}^{op},\mathcal{C}),$
the category of product preserving functors from the category
$\operatorname{Lat}$ of finitely generated free abelian groups (lattices) into
$\mathcal{C}$.
###### Remark 3.3.
Let $\mathcal{C}$ be a small category. Then an abelian group object $A$ is
such that its representable presheaf $h_{A}$ takes values in abelian groups.
Furthermore, in this setting, the two notions of abelian groups and grouplike
commutative monoid objects coincide.
### 3.2 Formal groups and Cartier duality over a field
Before setting the stage for the various manifestations of Cartier duality to
appear we say a few things about Hopf algebras, as they are central to this
work. We begin with a brief discussion of what happens over a field $k$.
###### Definition 3.4.
For us, a (commutative, cocommutative) Hopf algebra $H$ over $k$ is an
abelian group object in the category of coalgebras over $k$.
Unpacking the definition, and using the fact that the category of coalgebras is
equipped with a Cartesian monoidal structure (it is the opposite of a
category of commutative algebra objects), we see that this is just another way
of identifying bialgebras $H$ equipped with an antipode map
$i:H\to H;$
this arises from the “abelian group structure” on the underlying coalgebra.
###### Construction 3.5.
Let $H$ be a Hopf algebra. Then one may define a functor
$\operatorname{coSpec}(H):\operatorname{CAlg}\to\operatorname{Ab},\,\,\,\,\,R\mapsto\operatorname{Gplike}(H\otimes_{k}R)=\\{x|\Delta(x)=x\otimes
x\\},$
assigning to a commutative ring $R$ the set of grouplike elements of
$R\otimes_{k}H$. The Hopf algebra structure on $H$ endows these sets with an
abelian group structure, which is what makes the above an abelian group
object-valued functor. In fact, this will be a formal scheme and there will be
an equivalence
$\operatorname{coSpec}(H)\simeq\operatorname{Spf}(H^{\vee})$
where $H^{\vee}$, the linear dual of $H$, is a $k$-algebra, complete with
respect to the $I$-adic topology induced by an ideal of definition $I\subset
H^{\vee}$. Hence we arrive at our first interpretation of formal groups; these
correspond precisely to Hopf algebras.
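As an illustration of Construction 3.5, consider the standard example of the formal multiplicative group (stated here without proof):

```latex
% Let H be the Hopf algebra whose linear dual is H^{v} = k[[t]], where the
% comultiplication on the adic algebra side is determined by
\Delta(t) \;=\; t\otimes 1 + 1\otimes t + t\otimes t.
% A grouplike element of H \otimes_k R corresponds to a continuous map
% k[[t]] -> R, i.e. to a nilpotent element x of R, so that
\operatorname{coSpec}(H)(R) \;\simeq\; \operatorname{Spf}(k[[t]])(R) \;=\; 1+\operatorname{Nil}(R),
% with group law (1+x)(1+y) = 1 + (x + y + xy) mirroring \Delta(t).
% This is the formal multiplicative group \widehat{\mathbb{G}}_{m}.
```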
###### Construction 3.6.
Let us unpack the previous construction from an algebraic vantage point. Over
a field $k$, there is an equivalence
$\operatorname{cCAlg}_{k}\simeq\operatorname{Ind}(\operatorname{cCAlg}^{fd}_{k})$
where $\operatorname{cCAlg}^{fd}_{k}$ denotes the category of coalgebras whose
underlying vector space is finite dimensional. By standard duality, there is
an equivalence between
$\operatorname{Ind}(\operatorname{cCAlg}^{fd}_{k})\simeq\operatorname{Pro}({\operatorname{CAlg}^{fd}_{k}})$
where we remark that
$\operatorname{cCAlg}^{fd}_{k}\simeq({\operatorname{CAlg}^{fd}_{k}})^{op}$.
This may then be promoted to a duality between abelian group/cogroup objects:
$\mathsf{Hopf}_{k}:=\operatorname{Ab}(\operatorname{cCAlg}_{k})\simeq\operatorname{coAb}(\operatorname{Pro}({\operatorname{CAlg}^{fd}_{k}}))$
(3.7)
###### Remark 3.8.
The interchange of display (3.7) is precisely the underlying idea of Cartier
duality of formal groups and affine group schemes. Recall that Hopf algebras
correspond contravariantly via the ${\operatorname{Spec}}(-)$ functor to
affine group schemes. Hence one has
$AffGp_{k}^{op}\simeq\mathsf{Hopf}_{k}\simeq\operatorname{FG}_{k},$
where the left hand side denotes the category of affine group schemes over
$k$. The equivalence on the right is given by the functor
$\operatorname{coSpec}(-)$ described above. We remark that in this setting,
the category of Hopf algebras over the field $k$ is actually abelian, hence
the categories of formal groups and affine group schemes are themselves
abelian.
### 3.3 Formal groups and Cartier duality over a commutative ring
Over a general commutative ring $R$, the duality theory between formal groups
and affine group schemes isn’t quite as simple to describe. In practice, one
restricts to certain subcategories on both sides, which then fit under the
Ind-Pro duality framework of Construction 3.6. This will be achieved by
imposing a condition on the underlying coalgebra of the Hopf algebras at hand.
###### Remark 3.9.
We study coalgebras following the conventions of [Lur18, Section 1.1]. In
particular, if $C$ is a coalgebra over $R$, we always require that the
underlying $R$-module of $C$ is flat. This is done as in [Lur18], to ensure
that $C$ remains a coalgebra in the setting of higher algebra. Furthermore, we
implicitly assume that all coalgebras appearing in this text are
(co)commutative.
To an arbitrary coalgebra, one may functorially associate a presheaf on the
category of affine schemes given by the cospectrum functor
$\operatorname{coSpec}:\operatorname{cCAlg}_{R}\to\operatorname{Fun}(\operatorname{CAlg}_{R},\operatorname{Set}).$
###### Definition 3.10.
Let $C$ be a coalgebra. We define $\operatorname{coSpec}(C)$ to be the functor
$\operatorname{coSpec}(C):\operatorname{CAlg}_{R}\to\operatorname{Set}$
sending $A\mapsto\operatorname{Gplike}(C\otimes_{R}A)=\\{x|\Delta(x)=x\otimes
x\\}$.
The $\operatorname{coSpec}(-)$ functor is fully faithful when restricted to a
certain class of coalgebras. We borrow the following definition from [Lur18].
See also [Str99] for a related notion of _coalgebra with good basis_.
###### Definition 3.11.
Fix $R$ and let $C$ be a (co-commutative) coalgebra over $R$. We say $C$ is
_smooth_ if its underlying $R$-module is flat, and if it is isomorphic to the
divided power coalgebra
$\Gamma^{*}_{R}(M):=\bigoplus_{n\geq 0}\Gamma^{n}_{R}(M)$
for some projective $R$-module $M$. Here, $\Gamma^{n}_{R}(M)$ denotes the
invariants for the action of the symmetric group $\Sigma_{n}$ on $M^{\otimes
n}$.
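For concreteness, the rank one case of this definition works out as follows (a standard computation):

```latex
% Take M = R, free of rank one. Then C = \Gamma^{*}_{R}(R) is free as an
% R-module on basis elements x^{[n]}, n >= 0, with comultiplication and counit
\Delta\bigl(x^{[n]}\bigr) \;=\; \sum_{i+j=n} x^{[i]}\otimes x^{[j]},
\qquad \varepsilon\bigl(x^{[n]}\bigr)=\delta_{n,0}.
% Writing t for the dual basis vector of x^{[1]}, the dual basis vector of
% x^{[n]} is t^{n}, and the linear dual is the power series algebra
C^{\vee} \;\cong\; R[[t]]
% with its (t)-adic topology; thus coSpec(C) = Spf R[[t]], the formal affine line.
```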
Given an arbitrary coalgebra $C$ over $R$, the linear dual
$C^{\vee}=\operatorname{Map}(C,R)$ acquires a canonical $R$-algebra structure.
In general $C$ cannot be recovered from $C^{\vee}$. However, in the smooth
case, the dual $C^{\vee}$ acquires the additional structure of a topology on
$\pi_{0}$, giving it the structure of an adic $R$-algebra. This allows us to
recover $C$, via the following proposition, cf. [Lur18, Theorem 1.3.15]:
###### Proposition 3.12.
Let $C,D\in\operatorname{cCAlg}^{sm}_{R}$ be smooth coalgebras. Then
$R$-linear duality induces a homotopy equivalence
$\operatorname{Map}_{\operatorname{cCAlg}_{R}}(C,D)\simeq\operatorname{Map}^{\operatorname{cont}}_{\operatorname{CAlg}_{R}}(C^{\vee},D^{\vee}).$
###### Remark 3.13.
One can go further and characterize intrinsically all adic $R$-algebras that
arise as duals of smooth coalgebras. These will be equivalent to
$\widehat{\operatorname{Sym}^{*}(M)}$, the completion along the augmentation
ideal $\operatorname{Sym}^{\geq 1}(M)$ for some $M$ a projective $R$-module of
finite type.
###### Remark 3.14.
Fix $C$ a smooth coalgebra. There is always a canonical map of stacks
$\operatorname{coSpec}(C)\to{\operatorname{Spec}}(A)$ where $A=C^{\vee}$, but
it is typically not an equivalence. The condition that $C$ is smooth
guarantees precisely that there is an induced equivalence
$\operatorname{coSpec}(C)\to\operatorname{Spf}(A)\subseteq{\operatorname{Spec}}A$,
where $\operatorname{Spf}(A)$ denotes the formal spectrum of the adic
$R$-algebra $A$. In particular $\operatorname{coSpec}(C)$ is a formal scheme in
the sense of [Lur16, Chapter 8].
###### Proposition 3.15 (Lurie).
Let $R$ be a commutative ring. Then the construction
$C\mapsto\operatorname{coSpec}(C)$ induces a fully faithful embedding of
$\infty$-categories
$\operatorname{cCAlg}^{sm}_{R}\to\operatorname{Fun}(\operatorname{CAlg}^{0}_{R},\mathcal{S})$
Moreover this commutes with finite products and base-change.
###### Proof.
This essentially follows from the fact that a smooth coalgebra can be
recovered from its adic $E_{\infty}$-algebra. ∎
###### Construction 3.16.
As a consequence of the fact that the $\operatorname{coSpec}(-)$ functor
preserves finite products, this can be upgraded to a fully faithful embedding
of abelian group objects in smooth coalgebras
$\operatorname{Ab}(\operatorname{cCAlg}^{sm}_{R})\to\operatorname{Ab}(f\operatorname{Sch}_{R})$
into formal groups. Unless otherwise mentioned we will focus on formal groups
of this form. Hence, we use the notation $\operatorname{FG}_{R}$ to denote the
category of coalgebraic formal groups over $R$. We refer to this equivalence
as Cartier duality.
We would like to interpret the above correspondence geometrically. Let
$AffGrp^{b}_{R}$ be the subcategory of affine group schemes, corresponding via
the ${\operatorname{Spec}}(-)$ functor to the category $\mathsf{Hopf}^{sm}$,
which we use to denote the category of Hopf algebras whose underlying
coalgebra is smooth. Meanwhile, a cogroup object $\widehat{H}$ in the category
of adic algebras corepresents a functor
$F:\operatorname{CAlg}^{ad}\to Grp,\,\,\,R\mapsto
Hom_{\operatorname{CAlg}^{ad}}(\widehat{H},R),$
where the group structure arises from the cogroup structure on $\widehat{H}$.
Essentially by definition, this is exactly the data of a formal group, so we
may identify the category of formal groups with the category
$\operatorname{coAb}(\operatorname{CAlg}^{ad})$.
We have identified the categories in question as those of affine group schemes
and formal groups respectively; one can further conclude that these dualities
are representable by certain distinguished objects in these categories.
###### Proposition 3.17.
cf. [Haz78, Propositions 37.3.6, 37.3.11] There exist natural bijections
${\operatorname{Hom}}_{\mathsf{Hopf}^{sm}}(A[t,t^{-1}],C)\cong{\operatorname{Hom}}_{\operatorname{CAlg}^{ad}}(D(C),A)$
${\operatorname{Hom}}_{\operatorname{CoAb}({\operatorname{CAlg}_{B}^{ad}})}(B[[T]],A)\cong{\operatorname{Hom}}_{\operatorname{CAlg}}(D^{T}(A),B).$
Here, for a coalgebra $C$, $D(C)$ denotes the linear dual, and for a
topological algebra $A$, $D^{T}(A)=\operatorname{Map}_{cont}(A,R)$ denotes the
_continuous dual_.
One can put this all together to see that there are duality functors which are
moreover represented by the multiplicative group and the formal multiplicative
group respectively.
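To unwind the first bijection (a sketch, with no claim to completeness): the source $A[t,t^{-1}]$ is the Laurent Hopf algebra in which $t$ is grouplike.

```latex
% A Hopf algebra map A[t,t^{-1}] -> C is determined by the image c of t,
% which must be an invertible grouplike element:
\Delta(c) \;=\; c\otimes c, \qquad \varepsilon(c)=1.
% Grouplike elements of C correspond, by duality, to continuous algebra
% maps D(C) -> A, which recovers the right hand side and exhibits
% \mathbb{G}_{m} = Spec A[t,t^{-1}] as the representing object.
```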
One has the following expected base-change property:
###### Proposition 3.18.
Let $\widehat{\mathbb{G}}$ be a formal group over ${\operatorname{Spec}}R$,
and let $f:R\to S$ be a map of commutative rings. Let
$\widehat{\mathbb{G}}_{S}$ denote the formal group over
${\operatorname{Spec}}S$ obtained by base change. Then there is a natural
isomorphism
$D^{T}(\widehat{\mathbb{G}}|_{S})\simeq D^{T}(\widehat{\mathbb{G}})_{S}$
of affine group schemes over ${\operatorname{Spec}}S$.
## 4 Filtered formal groups
We define here a notion of a filtered formal group, along with Cartier duality
for these. We discuss here only (“underived”) formal groups over discrete
commutative rings but we conjecture that these notions generalize to the case
where $R$ is a connective $E_{\infty}$ ring.
### 4.1 Filtrations and $\mathbb{A}^{1}/\mathbb{G}_{m}$
We first recall a few preliminaries about filtered objects.
###### Definition 4.1.
Let $R$ be an $E_{\infty}$-ring. We set
$\operatorname{Fil}_{R}:=\operatorname{Fun}(\mathbb{Z}^{op},{\operatorname{Mod}}_{R}),$
where $\mathbb{Z}$ is viewed as a category with morphisms given by the partial
ordering and refer to this as the $\infty$-category of filtered $R$-modules.
###### Remark 4.2.
The $\infty$-category $\operatorname{Fil}_{R}$ is symmetric monoidal with
respect to the Day convolution product.
###### Definition 4.3.
There exist functors
$\operatorname{Und}:\operatorname{Fil}_{R}\to{\operatorname{Mod}}_{R}\,\,\,\,\,\,\,\,\,\operatorname{gr}:\operatorname{Fil}_{R}\to\operatorname{Gr}_{R},$
such that to a filtered $R$-module $M$, one associates its underlying object
$\operatorname{Und}(M)=\operatorname{colim}_{n\to-\infty}M_{n}$ and
$\operatorname{gr}(M)=\oplus_{n}\operatorname{cofib}(M_{n+1}\to M_{n})$
respectively.
###### Example 4.4.
Let $A$ be a commutative ring, and $I\subset A$ be an ideal of $A$. We define
a filtration $F^{*}_{I}(A)$ with
$F^{n}_{I}(A)=\begin{cases}A,&n\leq 0\\\ I^{n},&n\geq 1\end{cases}$
This is the _I-adic_ filtration on $A$.
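For instance, taking $A=\mathbb{Z}$ and $I=(p)$ gives the $p$-adic filtration:

```latex
% The p-adic filtration on the integers:
F^{*}_{(p)}(\mathbb{Z}):\;\cdots=\mathbb{Z}=\mathbb{Z}\supset(p)\supset(p^{2})\supset(p^{3})\supset\cdots
% Its associated graded is a polynomial ring on the class of p,
\operatorname{gr}\bigl(F^{*}_{(p)}\mathbb{Z}\bigr)\;\cong\;\mathbb{F}_{p}[\bar{p}],
\qquad \bar{p}\in(p)/(p^{2}),
% and the underlying object of the completion (in the sense of Definition
% 4.5) is lim_n Z/p^n = Z_p.
```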
###### Definition 4.5.
There exists a notion of completeness in the setting of filtrations. We say a
filtered $R$-module $M$ is complete if
$\lim_{n}M_{n}\simeq 0$
Alternatively, $M$ is complete if $\lim M_{-\infty}/M_{n}\simeq
M_{-\infty}=\operatorname{Und}(M)$. We denote the $\infty$-category of
filtered modules which are complete by $\widehat{\operatorname{Fil}}_{R}$.
This will be a localization of $\operatorname{Fil}_{R}$ and will
come equipped with a completed symmetric monoidal structure, such that the
_completion_ functor
$\widehat{(-)}:\operatorname{Fil}_{R}\to\widehat{\operatorname{Fil}}_{R}$
is symmetric monoidal.
###### Construction 4.6.
The category of filtered $R$-modules, as a $R$-linear stable $\infty$-category
can be equipped with several different $t$-structures. We will occasionally
work with the _neutral_ t-structure on $\operatorname{Fil}_{R}$, defined so
that $F^{*}(M)\in(\operatorname{Fil}_{R})_{\geq 0}$ if
$F^{n}(M)\in({\operatorname{Mod}}_{R})_{\geq 0}$ for all
$n\in\mathbb{Z}$. Similarly, $F^{*}(M)\in(\operatorname{Fil}_{R})_{\leq 0}$ if
$F^{n}(M)\in({\operatorname{Mod}}_{R})_{\leq 0}$ for all
$n\in\mathbb{Z}$.
We remark that the standard $t$-structure on ${\operatorname{Mod}}_{R}$ is
compatible with sequential colimits (cf. [Lur, Definition 1.2.2.12]). This has
the consequence that if $F^{*}(M)\in\operatorname{Fil}_{R}^{\heartsuit}$ then
$\operatorname{colim}_{n\to-\infty}F^{n}(M)=\operatorname{Und}(F^{*}(M))\in{\operatorname{Mod}}_{R}^{\heartsuit}.$
We occasionally refer to filtered $R$-modules which are in the heart of this
$t$-structure as discrete.
We now briefly recall the description of filtered objects in terms of quasi-
coherent sheaves over the stack $\mathbb{A}^{1}/\mathbb{G}_{m}$. This quotient
stack may be defined as the quotient of
$\mathbb{A}^{1}={\operatorname{Spec}}(R[t])$ by the canonical
$\mathbb{G}_{m}={\operatorname{Spec}}(R[t,t^{-1}])$ action, induced by the
inclusion $\mathbb{G}_{m}\hookrightarrow\mathbb{A}^{1}$ of monoid schemes.
This comes equipped with two distinguished points
$1:{\operatorname{Spec}}k\cong\mathbb{G}_{m}/\mathbb{G}_{m}\to\mathbb{A}^{1}/\mathbb{G}_{m}$
$0:B\mathbb{G}_{m}={\operatorname{Spec}}k/\mathbb{G}_{m}\to\mathbb{A}^{1}/\mathbb{G}_{m}$
which we often refer to in this work as the generic and special/closed point
respectively. We remark that the quotient map
$\pi:\mathbb{A}^{1}\to\mathbb{A}^{1}/\mathbb{G}_{m}$ is a smooth (and hence
fppf) atlas for $\mathbb{A}^{1}/\mathbb{G}_{m}$, making
$\mathbb{A}^{1}/\mathbb{G}_{m}$ into an Artin stack.
###### Theorem 4.7.
There exists a symmetric monoidal equivalence
$\operatorname{Fil}_{R}\to\operatorname{QCoh}(\mathbb{A}^{1}/\mathbb{G}_{m})$
Furthermore, under this equivalence, one may identify the underlying object
and associated graded functors with pullbacks along $1$ and $0$ respectively.
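A concrete way to implement this equivalence is the Rees construction; the following sketch uses the grading conventions of the next subsection ($t$ placed in weight $-1$).

```latex
% To a filtered R-module F^{*}(M) one associates its Rees module
\operatorname{Rees}(F^{*}M)\;=\;\bigoplus_{n\in\mathbb{Z}}F^{n}(M)\,t^{-n},
% a graded R[t]-module in which t acts through the structure maps
% F^{n+1}(M) -> F^{n}(M), i.e. a quasi-coherent sheaf on A^1/G_m.
% Inverting t (pullback along the generic point) recovers
% colim_n F^{n}(M) = Und(M); setting t = 0 (pullback along the closed
% point) recovers the associated graded \bigoplus_n F^{n}/F^{n+1} = gr(M).
```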
### 4.2 Formal algebraic geometry over $\mathbb{A}^{1}/\mathbb{G}_{m}$
We propose in this section the rough heuristic that an affine formal scheme
over $\mathbb{A}^{1}/\mathbb{G}_{m}$ should be interpreted as none other than
a complete filtered algebra. We justify this by showing that a complete
filtered algebra, viewed as a quasi-coherent sheaf over
$\mathbb{A}^{1}/\mathbb{G}_{m}$, satisfies a form of completeness directly
related to the standard notion of $t$-adic completeness for an $R[t]$-algebra
$A$. This may then be pulled back along the atlas
$\mathbb{A}^{1}\to\mathbb{A}^{1}/\mathbb{G}_{m}$ to an adic commutative
$R[t]$-algebra, which is complete with respect to multiplication by $t$.
###### Construction 4.8.
Recall, e.g. from [Lur15], that there is an equivalence
$\operatorname{Fil}_{R}\simeq{\operatorname{Mod}}_{R[t]}(\operatorname{Gr}_{R}),$
where $R[t]$ is given the grading such that $t$ sits in weight $-1$. More
precisely it is the graded $E_{\infty}$ algebra given by
$R[t]_{n}=\begin{cases}R,&n\leq 0,\\\ 0,&n>0.\end{cases}$
One has a map
$R[t]\to\underline{\operatorname{Map}}_{gr}(X,X)$
in $\operatorname{Gr}_{R}$ making $X\in\operatorname{Gr}_{R}$ into an
$R[t]$-module. There is an equivalence of $E_{1}$-algebras
$R[t]\simeq\operatorname{Free}_{E_{1}}(R(1))$ making $R[t]$ expressible as the
free $E_{1}$-algebra on $R(1)$. Unpacking all this, we obtain a map
$R\to\underline{\operatorname{Map}}_{gr}(X,X)\otimes R(-1)$
which precisely singles out the structure maps of the filtration on $X$.
###### Definition 4.9.
We say a graded $R[t]$-module
$X\in{\operatorname{Mod}}_{R[t]}(\operatorname{Gr}_{R})$ is $(t)$-complete if
and only if the limit of the following sequence of multiplication by $t$
$...\xrightarrow{t}X\otimes R(n+1)\xrightarrow{t}X\otimes
R(n)\xrightarrow{t}X\otimes R(n-1)\xrightarrow{t}...$
vanishes, where the product here is the Day convolution symmetric monoidal
structure on $\operatorname{Gr}_{R}$.
It is immediately clear that the above agrees with the notion of completeness
in the sense of Definition 4.5. Namely $X\in\operatorname{Fil}_{R}$ is
complete if it is complete in the sense of Definition 4.9 when viewed as an
object of ${\operatorname{Mod}}_{R[t]}(\operatorname{Gr}_{R})$.
We would like to use this observation to show that completeness may further be
checked after “forgetting the grading”, i.e upon pullback of the associated
quasi-coherent sheaf on $\mathbb{A}^{1}/\mathbb{G}_{m}$ along
$\pi:\mathbb{A}^{1}\to\mathbb{A}^{1}/\mathbb{G}_{m}$. First, recall the
relevant (unfiltered/ ungraded) classical notion of $t$-completeness:
###### Definition 4.10.
Fix $R[t]$, the polynomial algebra in one generator (with no additional
structure of a grading). An $R[t]$-module $M$ is $t$-complete if the limit of
the tower
$\cdots\xrightarrow{t}M\xrightarrow{t}M\xrightarrow{t}M$
vanishes. By [Lur16, 8.2.4.15], there is an equivalence
$\operatorname{QCoh}(\widehat{\mathbb{A}^{1}})\simeq{\operatorname{Mod}}^{\operatorname{Cpl}(t)}$
where the right hand side denotes $t$-complete $R[t]$-modules and the left
hand side denotes the $R$-linear $\infty$-category of quasi-coherent sheaves
on the formal completion of the affine line
$\widehat{\mathbb{A}^{1}}=\operatorname{Spf}R[[t]]$.
Now we use this to show that completeness can be tested upon pullback to
$\mathbb{A}^{1}$.
###### Proposition 4.11.
Let
$X\in\operatorname{Fil}_{R}\simeq\operatorname{QCoh}(\mathbb{A}^{1}/\mathbb{G}_{m})$
be a filtered $R$-module. Then $X$ is complete as a filtered object if and
only if its pullback $\pi^{*}(X)\in\operatorname{QCoh}(\mathbb{A}^{1})$ is
complete as an $R[t]$-module.
###### Proof.
By the above discussion, we express completeness as the property that
$\lim(...\xrightarrow{t}X\otimes R(n)\xrightarrow{t}X\otimes
R(n-1)\xrightarrow{t}...)$
vanishes in the $\infty$-category
$\operatorname{Gr}_{R}\simeq\operatorname{Fun}(\mathbb{Z},{\operatorname{Mod}}_{R})$
of graded $R$-modules, where $\mathbb{Z}$ is viewed as a discrete
$E_{\infty}$-space. We would like to show that the limit vanishes upon
applying
$\bigoplus:\operatorname{Gr}_{R}\to{\operatorname{Mod}}_{R}\,\,\,\,\,(X)_{n}\mapsto\bigoplus_{n}X_{n}$
By [Mou19, Proposition 4.2] this functor will preserve the limit, as it
satisfies the equivalent conditions for the comonadic Barr-Beck theorem, so
that the limit vanishes in ${\operatorname{Mod}}_{R}$. Conversely, suppose $X$
is a filtered $R$-module which has the property that
$\bigoplus_{n\in\mathbb{Z}}X_{n}$ is complete as an $R[t]$-module. This means
that the limit along multiplication by $t$ in ${\operatorname{Mod}}_{R}$
vanishes. However, we may apply [Mou19, Proposition 4.2] again to see that
this limit is actually created in $\operatorname{Gr}_{R}$, and moreover the
functor $\bigoplus$ preserves this limit. In particular, this means that
$\lim(...\xrightarrow{t}X\otimes R(n)\xrightarrow{t}X\otimes
R(n-1)\xrightarrow{t}...)$ vanishes in $\operatorname{Gr}_{R}$, as we wanted
to show. ∎
###### Remark 4.12.
We see therefore that if $A$ is a complete filtered algebra over $R$, then it
gives rise to a commutative algebra
$\pi^{*}(A)\in\operatorname{QCoh}(\mathbb{A}^{1})\simeq{\operatorname{Mod}}_{R[t]}$,
which can be endowed with the $(t)$-adic topology, with respect to which it is
complete. By [Lur16, Propositions 8.1.2.1, 8.1.5.1], algebras of this form
embed fully faithfully into $\operatorname{sStk}_{R[t]}$, the
$\infty$-category of spectral stacks over $\mathbb{A}^{1}_{R}$, with essential
image being precisely the _formal affine schemes_ over $\mathbb{A}^{1}_{R}$.
### 4.3 Filtered Cartier duality
We adopt the approach to formal groups in [Lur18], described above where they
are in particular smooth coalgebras $C$ with
$C=\bigoplus_{i\geq 0}\Gamma^{i}(M)$
where $M$ is a (discrete) projective module of finite type. Here, $\Gamma^{n}$
for each $n$ denotes the $n$th divided power functor, which for a dualizable
module $M$ can be alternatively defined as
$\Gamma^{n}(M):=\operatorname{Sym}^{n}(M^{\vee})^{\vee},$
that is to say, as the dual of the symmetric powers functor.
###### Construction 4.13.
By the results of [BM19, Rak20],
these can be extended to the $\infty$-categories ${\operatorname{Mod}}_{R}$,
$\operatorname{Gr}({\operatorname{Mod}}_{R})$,
$\operatorname{Fil}({\operatorname{Mod}}_{R})$ of $R$-modules, graded
$R$-modules and filtered $R$-modules, respectively. These are referred to as
the _derived symmetric powers_.
In particular, the $n$th (derived) divided power functors
$\Gamma_{gr}^{n}:\operatorname{Gr}_{R}\to\operatorname{Gr}_{R}\,\,\,\,\,\,\Gamma_{fil}^{n}:\operatorname{Fil}_{R}\to\operatorname{Fil}_{R}$
make sense in the graded and filtered contexts as well.
###### Definition 4.14.
Let $M$ be a filtered $R$-module whose underlying object is a discrete
projective $R$-module of finite type such that $\operatorname{gr}(M)$ is
concentrated in non-positive weights. A smooth filtered coalgebra is a
coalgebra of the form
$C=\bigoplus_{n\geq 0}\Gamma_{fil}^{n}(M)$
Note that this acquires a canonical coalgebra structure, as in [Lur18,
Construction 1.1.11]. Indeed if we apply $\Gamma^{*}$ to $M\oplus M$, we
obtain compatible maps
$\Gamma^{n^{\prime}+n^{\prime\prime}}(M\oplus
M)\to\Gamma^{n^{\prime}}(M)\otimes\Gamma^{n^{\prime\prime}}(M)$
where this is to be interpreted in terms of the Day convolution product. As in
the unfiltered case in [Lur18, Construction 1.1.11], these assemble to give
equivalences
$\Gamma^{*}(M\oplus M)\simeq\Gamma^{*}(M)\otimes\Gamma^{*}(M)$
Via the diagonal map $M\to M\oplus M$ (recall
$\operatorname{Fil}({\operatorname{Mod}}_{R})$ is stable), this gives rise to
a map
$\Delta:\Gamma^{*}(M)\to\Gamma^{*}(M\oplus
M)\simeq\Gamma^{*}(M)\otimes\Gamma^{*}(M)$
which one can verify exhibits $\Gamma^{*}(M)$ as a coalgebra in the category
of filtered $R$-modules.
###### Proposition 4.15.
Let $M$ be a dualizable filtered $R$-module. Then the formation of divided
powers is compatible with the associated graded and underlying object
functors.
###### Proof.
Let $\operatorname{Und}:\operatorname{Fil}_{R}\to{\operatorname{Mod}}_{R}$ and
$\operatorname{gr}:\operatorname{Fil}_{R}\to\operatorname{Gr}_{R}$ denote the
underlying object and associated graded functors respectively. Each of these
functors commute with colimits and are symmetric monoidal. Thus, we are
reduced to showing that each of these functors commutes with the divided power
functor
$\Gamma_{fil}^{n}(M)=\operatorname{Sym}^{n}(M^{\vee})^{\vee}$
The statement now follows from the fact that $\operatorname{Und}$ and
$\operatorname{gr}$, being symmetric monoidal, commute with dualizable objects
and that they commute with $\operatorname{Sym}^{n}$, which follows from the
discussion in [Rak20, 4.2.25]. ∎
###### Definition 4.16.
The category of smooth filtered coalgebras
$\operatorname{cCAlg}(\operatorname{Fil}_{R})^{sm}$ is the full subcategory of
filtered coalgebras generated by objects of this form. Namely,
$C\in\operatorname{cCAlg}(\operatorname{Fil}_{R})^{sm}$ if there exists a
filtered module $M$ which is dualizable, discrete and zero in positive degrees
for which
$C\simeq\bigoplus_{n\geq 0}\Gamma^{n}_{fil}(M)$
###### Remark 4.17.
The filtered module $M$ in the above definition is of the form
$\cdots\supset M_{-2}\supset M_{-1}\supset M_{0}\supset 0\supset\cdots$
which is eventually constant.
We now give the first definition of a filtered formal group:
###### Definition 4.18.
A filtered formal group is an abelian group object in the category of smooth filtered coalgebras. That is to say, it is a product-preserving functor
$F:Lat^{op}\to\operatorname{cCAlg}(\operatorname{Fil}_{R})^{sm}$
###### Construction 4.19.
Let $M\in\operatorname{Fil}_{R}$ be a filtered $R$-module. We denote the
(weak) dual by $\underline{Map}_{Fil}(M,R)$. Note that if $M$ has a
commutative coalgebra structure, then this acquires the structure of a
commutative algebra.
###### Example 4.20.
Let $C=\oplus\Gamma_{fil}^{n}(M)$. Then one has an equivalence
$C^{\vee}\simeq(\bigoplus\Gamma^{n}(M))^{\vee}\simeq\prod_{n}\operatorname{Sym}^{n}(M^{\vee})$
This is a complete filtered algebra.
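As a sanity check (our example): if the underlying module of $M$ is free of rank one with dual generator $x$, the displayed product is a power series ring,

```latex
C^{\vee}\simeq\prod_{n\geq 0}\operatorname{Sym}^{n}(M^{\vee})
\simeq\prod_{n\geq 0}k\cdot x^{n}\simeq k[[x]],
```

and the completeness of the filtration corresponds to the $x$-adic completeness of $k[[x]]$.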
###### Proposition 4.21.
Let $C$ be a filtered smooth coalgebra, and let $C^{\vee}$ denote its
(filtered) dual. Then at the level of the underlying object there is an
equivalence
$\operatorname{Und}C^{\vee}\simeq\prod_{n}\operatorname{Sym}^{n}(N)$
for some projective module $N$ of finite type.
###### Proof.
We unpack what the weak dual functor does on the $n$th filtering degree of a
filtered $R$-module. If $M\in\operatorname{Fil}_{R}$, then this may be
described as
$M^{\vee}_{n}=\underline{Map}_{Fil}(M,R)_{n}\simeq\operatorname{fib}(M_{\infty}^{\vee}\to
M^{\vee}_{1-n})$
where $M^{\vee}_{\infty}$ is the dual of the underlying $R$-module. Now let
$M=C$ be a smooth coalgebra, so that
$C=\bigoplus\Gamma^{n}(N)$
for $N$ as in Definition 4.16. Then $\Gamma^{n}(N)$ for each $n$ will be
concentrated in negative filtering degrees so that $C_{1-n}^{\vee}\simeq 0$
for all $n$ where $C_{n}$ is nontrivial. Hence we have the following
description for the underlying object of $C^{\vee}$:
$\operatorname{Und}(C^{\vee})\simeq\operatorname{colim}_{n}\operatorname{fib}(C^{\vee}_{\infty}\to
C^{\vee}_{1-n})\simeq\operatorname{fib}\operatorname{colim}_{n}(C_{\infty}^{\vee}\to
C_{1-n}^{\vee})=\operatorname{colim}_{n}C^{\vee}_{\infty}.$
In particular, since $C^{\vee}_{1-n}$ eventually vanishes, we obtain the colimit of the constant diagram associated to $C^{\vee}_{\infty}$. Hence
$\operatorname{Und}(C^{\vee})\simeq\operatorname{Und}(C)^{\vee}\simeq\prod_{m\geq
0}\operatorname{Sym}_{R}^{m}(N)$
This shows in particular that weak duality of these smooth filtered coalgebras commutes with the underlying object functor. ∎
###### Remark 4.22.
The above proposition justifies Definition 4.16 of smooth filtered coalgebras which we propose. In general it is not clear that weak duality commutes with the underlying object functor (although this of course holds true for dualizable objects).
###### Proposition 4.23.
The assignment
$\operatorname{cCAlg}^{sm}(\operatorname{Fil}_{R})\to\operatorname{CAlg}(\widehat{\operatorname{Fil}_{R}})$
given by
$C\mapsto C^{\vee}=\operatorname{Map}(C,R)$
is fully faithful.
###### Proof.
Let $D$ and $C$ be two arbitrary smooth coalgebras. We would like to display
an equivalence of mapping spaces
$\operatorname{Map}_{\operatorname{cCAlg}^{sm}(\operatorname{Fil}_{R})}(D,C)\simeq\operatorname{Map}_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_{R}})}(C^{\vee},D^{\vee});$
(4.24)
Each of $C$ and $D$ may be written as a colimit, internally to filtered
objects,
$C\simeq\operatorname{colim}C_{k},\,\,\,\,D\simeq\operatorname{colim}D_{m}$
where
$C_{k}=\bigoplus_{0\leq i\leq
k}\Gamma^{i}(M);\,\,\,\,\,\,D_{m}=\bigoplus_{0\leq i\leq m}\Gamma^{i}(N).$
Hence the map (4.24) may be rewritten as a limit of maps of the form
$\operatorname{Map}_{\operatorname{cCAlg}^{sm}(\operatorname{Fil}_{R})}(D_{m},C)\to\operatorname{Map}_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_{R}})}(C^{\vee},D_{m}^{\vee})$
(4.25)
The left side of this may now be rewritten as
$\operatorname{Map}_{\operatorname{cCAlg}^{sm}(\operatorname{Fil}_{R})}(D_{m},\operatorname{colim}_{k}C_{k})$
Now, the object $D_{m}$ will be compact by inspection (in fact, its underlying
object is just a compact projective $k$-module) so that the above mapping
space is equivalent to
$\operatorname{colim}_{k}\operatorname{Map}_{\operatorname{cCAlg}^{sm}(\operatorname{Fil}_{R})}(D_{m},C_{k})$
We would now like to make a similar type of identification on the right hand
side of the map (4.25). For this note that as a complete filtered algebra,
$C^{\vee}\simeq\lim_{k}C_{k}^{\vee}$. Note that there is a canonical map
$\operatorname{colim}_{k}\operatorname{Map}(C_{k}^{\vee},D_{m})\to\operatorname{Map}(\lim
C_{k}^{\vee},D_{m})$
By Lemma 4.26 this is an equivalence. Each term $C_{k}^{\vee}$, as a filtered object, is zero in sufficiently high positive filtration degrees. As limits in
filtered objects are created object-wise, one sees that the essential image of
the above map consists of morphisms
$\lim_{k}C_{k}^{\vee}\to C_{j}^{\vee}\to D_{m}$
which factor through some $C_{j}^{\vee}$. Since $D_{m}$ is itself of this form, every map factors through some $C_{j}^{\vee}$. Hence we obtain the
desired decomposition on the right hand side of (4.25). It follows that the
morphism of mapping spaces (4.24) decomposes into maps
$\operatorname{Map}(D_{m},C_{k})\to\operatorname{Map}(C_{k}^{\vee},D_{m}^{\vee}).$
These are equivalences because $D_{m}$ and $C_{k}$ are dualizable for every $m,k$, and the duality functor $(-)^{\vee}$ gives rise to an anti-equivalence
between commutative algebra and commutative coalgebra objects whose underlying
objects are dualizable. Assembling this all together we conclude that (4.24)
is an equivalence. ∎
###### Lemma 4.26.
The canonical map of spaces
$\operatorname{colim}\operatorname{Map}_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_{R}})}(C_{k}^{\vee},D_{m})\to\operatorname{Map}_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_{R}})}(\lim_{k}C_{k}^{\vee},D_{m})$
induced by the projection maps $\pi_{k}:\lim_{k}C_{k}^{\vee}\to C_{k}^{\vee}$ is an
equivalence.
###### Proof.
Fix an index $k$. We claim that the following is a pullback square of spaces:
$\begin{array}{ccc}
\operatorname{Map}_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_{R}})}(C_{k}^{\vee},D_{m})&\xrightarrow{\;\operatorname{Und}\;}&\operatorname{Map}_{\operatorname{CAlg}}(C_{k}^{\vee},D_{m})\\
\big\downarrow{\scriptstyle\pi_{k}^{*}}&&\big\downarrow{\scriptstyle\pi_{k}^{*}}\\
\operatorname{Map}_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_{R}})}(\lim_{k}C_{k}^{\vee},D_{m})&\xrightarrow{\;\operatorname{Und}\;}&\operatorname{Map}_{\operatorname{CAlg}}(\lim_{k}C_{k}^{\vee},D_{m})
\end{array}$
(4.27)
Note first that even though $\operatorname{Und}(-)$ does not generally
preserve limits, it will preserve these particular limits by Proposition 4.21.
To prove the claim, we see that the pullback
$\operatorname{Map}_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_{R}})}(\lim_{k}C_{k}^{\vee},D_{m})\times_{\operatorname{Map}_{\operatorname{CAlg}}(\lim_{k}C_{k}^{\vee},D_{m})}\operatorname{Map}_{\operatorname{CAlg}}(C_{k}^{\vee},D_{m})$
parametrizes, up to higher coherent homotopy, ordered pairs $(f,g_{k})$ with
$f:\lim C_{k}^{\vee}\to D_{m}$
a map of filtered algebras and
$g_{k}:C_{k}^{\vee}\to D_{m}$
a map at the level of underlying algebras, such that there is a factorization
of the underlying map
$\operatorname{Und}(f)\simeq\pi_{k}^{*}(g_{k})=g_{k}\circ\pi_{k}$
along the map $\pi_{k}:\lim_{k}C_{k}^{\vee}\to C_{k}^{\vee}$. Recall that
$\pi_{k}$ is also the underlying map of a morphism of filtered objects; since
the composition $\operatorname{Und}(f)=g_{k}\circ\pi_{k}$ respects the
filtration this means that $g_{k}$ itself must respect the filtration as well.
This in particular gives rise to an inverse
$\operatorname{Map}_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_{R}})}(\lim_{k}C_{k}^{\vee},D_{m})\times_{\operatorname{Map}_{\operatorname{CAlg}}(\lim_{k}C_{k}^{\vee},D_{m})}\operatorname{Map}_{\operatorname{CAlg}}(C_{k}^{\vee},D_{m})\to\operatorname{Map}_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_{R}})}(C_{k}^{\vee},D_{m})$
of the canonical map
$\operatorname{Map}_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_{R}})}(C_{k}^{\vee},D_{m})\to\operatorname{Map}_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_{R}})}(\lim_{k}C_{k}^{\vee},D_{m})\times_{\operatorname{Map}_{\operatorname{CAlg}}(\lim_{k}C_{k}^{\vee},D_{m})}\operatorname{Map}_{\operatorname{CAlg}}(C_{k}^{\vee},D_{m})$
induced by the universal property of the pullback, which proves the claim. Now let $P_{k}$ denote the fiber of the left vertical map of (4.27). One sees that the fiber of the map in the statement of the lemma, taken over any point of its target, is $\operatorname{colim}P_{k}$. We would like to show that
this is contractible. By the claim, this is equivalent to
$\operatorname{colim}P^{und}_{k}$, where $P^{und}_{k}$ for each $k$ is the
fiber of the right hand vertical map of 4.27. By [Lur16], this is
contractible. We will be done upon showing that the essential image of the map in the statement is all of $\operatorname{Map}_{\operatorname{CAlg}}(\lim_{k}C_{k}^{\vee},D_{m})$. To this
end we see that the essential image consists of maps
$\lim_{k}C_{k}^{\vee}\to C_{j}^{\vee}\to D_{m}$
which factor through some $C_{j}^{\vee}$. However, since the underlying
algebra of $D_{m}$ is nilpotent, every map factors through such a
$C_{j}^{\vee}$. ∎
###### Remark 4.28.
We remark that this is ultimately an example of the standard duality between
ind and pro objects of an $\infty$-category $\mathcal{C}$. Indeed, one has a
duality between algebras and coalgebras in $\operatorname{Fil}_{k}$ whose
underlying objects are dualizable. The equivalence of Proposition 4.23 is an
equivalence between certain full subcategories of
$\operatorname{Ind}(\operatorname{cCAlg}^{\omega,fil})$ and
$\operatorname{Pro}(\operatorname{CAlg}^{\omega,fil})$.
###### Definition 4.29.
Let $\mathcal{D}$ denote the essential image of the duality functor of Proposition 4.23. We define the category of (commutative) cogroup objects $\operatorname{coAb}(\mathcal{D})$ to be the category of abelian group objects of the opposite category (i.e., of the category of smooth filtered coalgebras). As $(-)^{\vee}$ is an anti-equivalence of $\infty$-categories, it sends Cartesian products on $\operatorname{cCAlg}(\operatorname{Fil}_{R})^{sm}$ to coCartesian products on $\mathcal{D}$. Hence, this functor sends group objects to cogroup objects. We refer to an object $C\in\operatorname{coAb}(\mathcal{D})$ as a _filtered formal group_.
###### Remark 4.30.
If $C^{\vee}$ is discrete (which is the setting we are primarily concerned
with for the moment) then a commutative cogroup structure on $C$ is none other
than a (co)commutative comonoid structure on $C^{\vee}$, making it into a
bialgebra in complete filtered $R$-modules.
###### Construction 4.31 (Cartier duality).
Let
$(-)^{\vee}:\operatorname{cCAlg}(\operatorname{Fil}_{R})^{sm}\to\mathcal{D}$
be the equivalence of Proposition 4.23. This may now be promoted to an
equivalence
$(-)^{\vee}:\operatorname{Ab}(\operatorname{cCAlg}(\operatorname{Fil}_{R})^{sm})\to\operatorname{CoAb}(\mathcal{D})$
We refer to the correspondence which is implemented by this equivalence as
_filtered Cartier duality_.
###### Remark 4.32.
We explain our usage of the term _filtered Cartier duality_. As we saw in Section 3.2, classical Cartier duality gives rise to an (anti)-equivalence between formal groups and affine group schemes, at least in the most well-behaved situation over a field. An abelian group object in smooth filtered coalgebras will be none other than a filtered Hopf algebra. This is due to the fact that we ultimately still restrict to a $1$-categorical setting where Remark 3.3 applies, so abelian group objects agree with grouplike commutative monoids. Out of this, therefore, one may extract a relative affine group scheme over $\mathbb{A}^{1}/\mathbb{G}_{m}$. Hence, Construction 4.31 may be viewed as a correspondence between filtered formal groups and a full subcategory of relatively affine group schemes over $\mathbb{A}^{1}/\mathbb{G}_{m}$.
Next we prove a unicity result on complete filtered algebra structures with underlying object a commutative ring $A$ and specified associated graded (cf. Theorem 1.4).
###### Proposition 4.33.
Let $A$ be a commutative ring which is complete with respect to the $I$-adic
topology induced by some ideal $I\subset A$. Let
$A_{n}\in\operatorname{CAlg}(\widehat{\operatorname{Fil}}_{R})$ be a
(discrete) complete filtered algebra with underlying object $A$. Suppose there
is an inclusion
$A_{1}\to I$
of $A$-modules inducing an equivalence
$\operatorname{gr}(A_{n})\simeq\operatorname{gr}(F_{I}^{*}(A))=\operatorname{Sym}_{gr}(I/I^{2})$
of graded objects, where $I/I^{2}$ is of pure weight $1$. Then
$A_{n}=F_{I}^{*}A$, namely the filtration in question is the $I$-adic
filtration.
###### Proof.
Let $A_{n}$ be a complete filtered algebra with these properties. The map
$A_{1}\to I$
in the hypothesis extends by multiplicativity to a map
$A_{n}\to F_{I}^{*}(A).$
In degree $2$, for example, since $A_{2}\to A_{1}$ exhibits $A_{2}$ as the fiber of the map
$A_{1}\to I/I^{2}$, there is an induced $A$-module map
$A_{2}\to I^{2}$
fitting into the left hand column of the following diagram:
$\begin{array}{ccccc}
A_{2}&\longrightarrow&A_{1}&\longrightarrow&I/I^{2}\\
\big\downarrow&&\big\downarrow&&\big\|\\
I^{2}&\longrightarrow&I&\longrightarrow&I/I^{2}
\end{array}$
By assumption, one obtains an isomorphism of graded objects
$\operatorname{gr}(A_{n})\cong\operatorname{gr}(F_{I}^{*}(A))$
after passing to the associated graded of this map. Since both filtered
objects are complete, and since the associated graded functor when restricted
to complete objects is conservative, we deduce that the map
$A_{n}\to F_{I}^{*}(A)$
is an equivalence of filtered algebras. In particular, this implies that the
inclusion $A_{1}\to I$ is surjective at the level of discrete modules, so that
$A_{1}=I$. We claim that this is enough to deduce that $A_{n}$ is the $I$-adic
filtration, up to equality. For this, we need to show that there is an
equality $A_{n}=I^{n}$ for every positive integer $n$ and that the structure
maps $A_{n+1}\to A_{n}$ of the filtration are simply the inclusions. Indeed,
in each degree, we now have equivalences
$A_{n}\simeq I^{n}$
of $A$-modules, which moreover admit monomorphisms into $A$. The category of
such objects is a poset category, and so any isomorphic objects are equal;
hence we conclude $A_{n}=I^{n}$ for all $n$. ∎
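As an illustration (our example, under the hypotheses of the proposition): let $A=k[[t]]$ and $I=(t)$. Any complete filtration $A_{*}$ on $A$ with $A_{1}\subseteq(t)$ and associated graded $\operatorname{Sym}_{gr}(I/I^{2})=k[\bar{t}]$, with $\bar{t}$ of weight $1$, is forced to be the $t$-adic one:

```latex
A_{n}=(t^{n}),\qquad
\operatorname{gr}^{n}(A_{*})\cong(t^{n})/(t^{n+1})\cong k\cdot\bar{t}^{\,n}.
```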
###### Remark 4.34.
In particular, we may choose $A_{n}\in\mathcal{D}$, the image of the duality
functor from smooth filtered coalgebras. In this case,
$I=\operatorname{Sym}^{\geq 1}(M)$, the augmentation ideal of
$\widehat{\operatorname{Sym}}(M)$ for $M$ some projective module of finite
type.
Now let $\widehat{\mathbb{G}}$ be a formal group over ${\operatorname{Spec}}k$, and let $\mathcal{O}(\widehat{\mathbb{G}})$ be its complete adic algebra of functions. This
acquires a comultiplication
$\mathcal{O}(\widehat{\mathbb{G}})\to\mathcal{O}(\widehat{\mathbb{G}})\widehat{\otimes}\mathcal{O}(\widehat{\mathbb{G}})$
and counit
$\epsilon:\mathcal{O}(\widehat{\mathbb{G}})\to R$
making $\mathcal{O}(\widehat{\mathbb{G}})$ into an abelian cogroup object in
$\mathcal{D}$. By Proposition 4.33, at the level of underlying $k$-algebras,
there is a uniquely determined complete filtered algebra $F_{ad}^{*}A$ such
that
$\operatorname{colim}_{n\to-\infty}F^{n}_{ad}A\simeq\mathcal{O}(\widehat{\mathbb{G}})$
We show that this inherits the cogroup structure as well:
###### Corollary 4.35.
The comultiplication
$\Delta:\mathcal{O}(\widehat{\mathbb{G}})\to\mathcal{O}(\widehat{\mathbb{G}})\widehat{\otimes}\mathcal{O}(\widehat{\mathbb{G}})$
can be promoted to a map of filtered complete algebras. Thus, there is a unique filtered formal group, i.e. an abelian cogroup object in the category $\mathcal{D}$, with associated graded free on a filtered module concentrated in weight one and with underlying object $\mathcal{O}(\widehat{\mathbb{G}})$, which refines the comultiplication on $\mathcal{O}(\widehat{\mathbb{G}})$.
###### Proof.
We need to show that the comultiplication
$\Delta:\mathcal{O}(\widehat{\mathbb{G}})\to\mathcal{O}(\widehat{\mathbb{G}})\widehat{\otimes}\mathcal{O}(\widehat{\mathbb{G}})$
preserves the adic filtration. Let us assume first that the formal group is $1$-dimensional and oriented, so that $\mathcal{O}(\widehat{\mathbb{G}})\simeq R[[x]]$. We remark that every formal group is locally oriented. In this case, the formal group law is given in coordinates by the power series
$f(x_{1},x_{2})=x_{1}+x_{2}+\sum_{i,j\geq 1}a_{i,j}x_{1}^{i}x_{2}^{j}$
with suitable $a_{i,j}$. In particular, $\Delta$ carries $I=(x)$, the ideal defining the filtration, into $I^{\otimes 2}=(x_{1},x_{2})$, the ideal defining the filtration on
$\mathcal{O}(\widehat{\mathbb{G}})\widehat{\otimes}\mathcal{O}(\widehat{\mathbb{G}})\cong
R[[x_{1},x_{2}]]$. Note that this is itself the $(x_{1},x_{2})$-adic
filtration on $R[[x_{1},x_{2}]]$. By multiplicativity, $\Delta(I^{n})\subset(I^{\otimes 2})^{n}$ for all $n$. This shows that $\Delta$ preserves the filtration, giving $F^{*}_{I}A$ a unique coalgebra structure compatible with the formal group structure on $\widehat{\mathbb{G}}$. The same argument works in higher dimensions. ∎
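To make the filtration-preservation concrete, here is our illustration with the multiplicative formal group, $f(x_{1},x_{2})=x_{1}+x_{2}+x_{1}x_{2}$:

```latex
\Delta(x)=x_{1}+x_{2}+x_{1}x_{2}\in(x_{1},x_{2}),
\qquad
\Delta(x^{n})=(x_{1}+x_{2}+x_{1}x_{2})^{n}\in(x_{1},x_{2})^{n},
```

so $\Delta$ carries the $(x)$-adic filtration on $R[[x]]$ into the $(x_{1},x_{2})$-adic filtration on $R[[x_{1},x_{2}]]$.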
## 5 The deformation of a formal group
### 5.1 Deformation to the normal cone
To a pointed formal moduli problem (such as a formal group) one may associate an equivariant family over $\mathbb{A}^{1}$ whose fiber over $\lambda\neq 0$ recovers the original object. We will use this construction in the sequel to produce
filtrations on the associated Hochschild homology theories. The author would
like to thank Bertrand Toën for the idea behind this construction, and in fact
related constructions appear in [Toë20]. A variant of this construction in the
characteristic zero setting also appears in [GR17, Chapter IV.5]. We would
also like to point out [KR18].
The construction pertains to more than just formal groups. Indeed, let $\mathcal{X}\to\mathcal{Y}$ be a closed immersion of locally Noetherian schemes.
We construct a filtration on $\widehat{\mathcal{Y}_{\mathcal{X}}}$, the formal
completion of $\mathcal{Y}$ along $\mathcal{X}$, with associated graded the
shifted tangent complex $T_{\mathcal{X}|\mathcal{Y}}[1]$.
###### Proposition 5.1.
There exists a filtered stack $S_{fil}^{0}\to\mathbb{A}^{1}/\mathbb{G}_{m}$,
whose underlying object is the constant stack
$S^{0}={\operatorname{Spec}}k\sqcup{\operatorname{Spec}}k$ and whose
associated graded is ${\operatorname{Spec}}(k[\epsilon]/(\epsilon^{2}))$.
###### Proof.
Morally, one should think of this as a family of two points degenerating into each other over the special fiber. For a more rigorous construction, one may
begin with the nerve of the unit map of commutative algebra objects in the
$\infty$-category $\operatorname{QCoh}(\mathbb{A}^{1}/\mathbb{G}_{m})$:
${}\mathcal{O}_{\mathbb{A}^{1}/\mathbb{G}_{m}}\to
0_{*}(\mathcal{O}_{B\mathbb{G}_{m}}),$ (5.2)
where $0:B\mathbb{G}_{m}\to\mathbb{A}^{1}/\mathbb{G}_{m}$ is the closed point.
This gives rise to a groupoid object (cf. [Lur09, Section 6.1.2]) in the
$\infty$-category
$\operatorname{CAlg}(\operatorname{QCoh}(\mathbb{A}^{1}/\mathbb{G}_{m}))$.
We now give a more explicit description of this groupoid object. The structure
sheaf $\mathcal{O}_{\mathbb{A}^{1}/\mathbb{G}_{m}}$ may be identified with the
graded polynomial algebra $k[t]$, where $t$ is of weight $1$. In degree $1$
one obtains the following fiber product
$\mathcal{O}_{\mathbb{A}^{1}/\mathbb{G}_{m}}\times_{0_{*}(\mathcal{O}_{B\mathbb{G}_{m}})}\mathcal{O}_{\mathbb{A}^{1}/\mathbb{G}_{m}}$
(5.3)
which may be thought of as the graded algebra
$k[t_{1},t_{2}]/\bigl((t_{1}+t_{2})(t_{1}-t_{2})\bigr)$
viewed as an algebra over $k[t]$. If we apply the ${\operatorname{Spec}}$
functor relative to $\mathbb{A}^{1}/\mathbb{G}_{m}$, we obtain the scheme
corresponding to the union of the diagonal and antidiagonal in the plane. The
pullback of this fiber product to ${\operatorname{Mod}}_{k}$ is
$k\times_{1^{*}0_{*}(\mathcal{O}_{B\mathbb{G}_{m}})}k\simeq
k\times_{0}k=k\oplus k$
The pullback to $\operatorname{QCoh}(B\mathbb{G}_{m})$ is
$k[\epsilon]/\epsilon^{2}$, the trivial square-zero extension of $k$ by $k$.
To see this we pull back the fiber product (5.3) to
$\operatorname{QCoh}(B\mathbb{G}_{m})$, which gives the following homotopy
cartesian square
$\begin{array}{ccc}
k[\epsilon]/(\epsilon^{2})&\longrightarrow&k\\
\big\downarrow&&\big\downarrow\\
k&\longrightarrow&k\oplus k[1]
\end{array}$
in this category. Hence, we may define
$S^{0}_{fil}:={\operatorname{Spec}}_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\mathcal{O}_{\mathbb{A}^{1}/\mathbb{G}_{m}}\times_{0_{*}(\mathcal{O}_{B\mathbb{G}_{m}})}\mathcal{O}_{\mathbb{A}^{1}/\mathbb{G}_{m}})$
as the relative spectrum (over $\mathbb{A}^{1}/\mathbb{G}_{m}$). ∎
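Concretely, one can check the two fibers in coordinates. This is our computation, under the assumptions that the $k[t]$-algebra structure is $t\mapsto t_{1}$ and that $2$ is invertible in $k$:

```latex
% Fiber over t=1: two distinct points.
k[t_{1},t_{2}]/\bigl((t_{1}+t_{2})(t_{1}-t_{2}),\,t_{1}-1\bigr)
\cong k[t_{2}]/\bigl((1+t_{2})(1-t_{2})\bigr)\cong k\times k,
% Fiber over t=0: the two points collide to the dual numbers.
\qquad
k[t_{1},t_{2}]/\bigl((t_{1}+t_{2})(t_{1}-t_{2}),\,t_{1}\bigr)
\cong k[t_{2}]/(t_{2}^{2}).
```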
By construction, this admits a map
$S^{0}_{fil}\to\mathbb{A}^{1}/\mathbb{G}_{m}$
making it into a filtered stack, with generic fiber and special fiber
described in the above proposition. We remark that we may think of
$S^{0}_{fil}$ as the degree $1$ part of a _cogroupoid object_
$S^{0,\bullet}_{fil}$ in the $\infty$-category of (derived) schemes over
$\mathbb{A}^{1}/\mathbb{G}_{m}$; indeed we may apply ${\operatorname{Spec}}(-)$ to the entire Čech nerve of the map (5.2). We can then take mapping spaces out of this
cogroupoid to obtain a groupoid object.
Now let $\mathcal{X}\to\mathcal{Y}$ be as above. We will focus our attention
on the following derived mapping stack, defined in the category
$dStk_{\mathcal{Y}\times\mathbb{A}^{1}/\mathbb{G}_{m}}$ of derived stacks over
$\mathcal{Y}\times\mathbb{A}^{1}/\mathbb{G}_{m}$:
$\operatorname{Map}_{\mathcal{Y}\times\mathbb{A}^{1}/\mathbb{G}_{m}}(S_{fil}^{0},\mathcal{X}\times\mathbb{A}^{1}/\mathbb{G}_{m})$
By composing with the projection map
$\mathcal{Y}\times\mathbb{A}^{1}/\mathbb{G}_{m}\to\mathbb{A}^{1}/\mathbb{G}_{m}$,
we obtain a map
$\operatorname{Map}_{\mathcal{Y}\times\mathbb{A}^{1}/\mathbb{G}_{m}}(S_{fil}^{0},\mathcal{X})\to\mathbb{A}^{1}/\mathbb{G}_{m}$
allowing us to view this as a filtered stack. The next proposition identifies
its fiber over $1\in\mathbb{A}^{1}/\mathbb{G}_{m}$:
###### Proposition 5.4.
There is an equivalence
$1^{*}(\operatorname{Map}(S_{fil}^{0},\mathcal{X}))\simeq\mathcal{X}\times_{\mathcal{Y}}\mathcal{X},$
###### Proof.
By formal properties of base change of mapping objects of $\infty$-topoi,
there is an equivalence
$1^{*}(\operatorname{Map}(S_{fil}^{0},\mathcal{X}))\simeq\operatorname{Map}_{\mathcal{Y}}(1^{*}S_{fil}^{0},1^{*}(\mathcal{X}\times\mathbb{A}^{1}/\mathbb{G}_{m}))$
The right hand side is the mapping object out of a disjoint sum of final objects, and is therefore directly seen to be equivalent to $\mathcal{X}\times_{\mathcal{Y}}\mathcal{X}$. ∎
Next we identify the fiber over the “closed point”
$0:B\mathbb{G}_{m}\to\mathbb{A}^{1}/\mathbb{G}_{m}$.
###### Proposition 5.5.
There is an equivalence of stacks
$0^{*}(\operatorname{Map}(S_{fil}^{0},\mathcal{X}))\simeq
T_{\mathcal{X}|\mathcal{Y}},$
where $T_{\mathcal{X}|\mathcal{Y}}$ denotes the relative tangent bundle of
$\mathcal{X}\to\mathcal{Y}$.
###### Proof.
We base change along the map
${\operatorname{Spec}}k\to B\mathbb{G}_{m}\to\mathbb{A}^{1}/\mathbb{G}_{m}.$
Invoking again the standard properties of base change of mapping objects we
obtain the equivalence
$0^{*}(\operatorname{Map}(S_{fil}^{0},\mathcal{X}))\simeq\operatorname{Map}_{\mathcal{Y}}(0^{*}S_{fil}^{0},0^{*}(\mathcal{X}\times\mathbb{A}^{1}/\mathbb{G}_{m})).$
By construction, we may identify $0^{*}S_{fil}^{0}$ with
${\operatorname{Spec}}(k[\epsilon]/\epsilon^{2})$. Of course, this means that
the right hand side of the above display is precisely the relative tangent
complex $T_{\mathcal{X}|\mathcal{Y}}$. ∎
To summarize, we have constructed a cogroupoid object in the category of
schemes over $\mathbb{A}^{1}/\mathbb{G}_{m}$, whose piece in cosimplicial
degree $1$ is $S^{0}_{fil}$, and formed the derived mapping stack
$\operatorname{Map}_{\mathcal{Y}\times\mathbb{A}^{1}/\mathbb{G}_{m}}(S_{fil}^{0},\mathcal{X}\times\mathbb{A}^{1}/\mathbb{G}_{m}),$
which will in turn be the degree one piece of a groupoid object in derived
schemes over $\mathbb{A}^{1}/\mathbb{G}_{m}$.
###### Construction 5.6.
Let
$\mathcal{M}_{\bullet}:=\operatorname{Map}_{\mathcal{Y}\times\mathbb{A}^{1}/\mathbb{G}_{m}}(S_{fil}^{0,\bullet},\mathcal{X}\times\mathbb{A}^{1}/\mathbb{G}_{m})$.
Note that we can interpret the degeneracy map
$\mathcal{X}\times\mathbb{A}^{1}/\mathbb{G}_{m}\to\operatorname{Map}_{\mathcal{Y}\times\mathbb{A}^{1}/\mathbb{G}_{m}}(S_{fil}^{0},\mathcal{X}\times\mathbb{A}^{1}/\mathbb{G}_{m})$
as the “inclusion of the constant maps”. We reiterate that this is a groupoid
object in the $\infty$-category of derived schemes over
$\mathbb{A}^{1}/\mathbb{G}_{m}$. We let
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\mathcal{X}/\mathcal{Y}):=\operatorname{colim}_{\Delta}\mathcal{M}_{\bullet}$
denote the colimit of this groupoid object. Note that the colimit is taken in
the $\infty$-category of derived schemes over $\mathbb{A}^{1}/\mathbb{G}_{m}$
(as opposed to all of derived stacks).
By construction, $Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\mathcal{X}/\mathcal{Y})$ is a derived scheme over $\mathbb{A}^{1}/\mathbb{G}_{m}$. The following proposition identifies its “generic fiber” with the formal completion $\widehat{\mathcal{Y}_{\mathcal{X}}}$ of $\mathcal{Y}$ along $\mathcal{X}$.
###### Proposition 5.7.
There is an equivalence
$1^{*}Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\mathcal{X}/\mathcal{Y})\simeq\widehat{\mathcal{Y}_{\mathcal{X}}}$
###### Proof.
As pullback commutes with colimits, this amounts to identifying the delooping
in the category of derived schemes over $\mathcal{Y}$. Note again that all
objects are schemes and not stacks so that this statement makes sense. By the
above identifications, delooping the above groupoid corresponds to taking the
colimit of the nerve $N(f)$ of the map $f:\mathcal{X}\to\mathcal{Y}$, a closed
immersion. Hence, it amounts to proving that
$\operatorname{colim}_{\Delta^{op}}N(f)\simeq\widehat{\mathcal{Y}_{\mathcal{X}}}$
This is precisely the content of Theorem 2.6. ∎
A consequence of the above proposition is that the resulting object is pointed
by $\mathcal{X}$ in the sense that there is a well defined map
$\mathcal{X}\to\widehat{\mathcal{Y}_{\mathcal{X}}}$, arising from the
structure map in the associated colimit diagram. This map is none other than
the “inclusion” of $\mathcal{X}$ into its formal thickening.
Our next order of business, somewhat predictably at this point, is to identify the fiber over $B\mathbb{G}_{m}$ of $Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\mathcal{X}/\mathcal{Y})$ with the normal bundle of $\mathcal{X}$ in $\mathcal{Y}$.
###### Proposition 5.8.
There is an equivalence
$0^{*}Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\mathcal{X}/\mathcal{Y})\simeq\widehat{\mathbb{V}(T_{\mathcal{X}|\mathcal{Y}}[1])}=:\widehat{N_{\mathcal{X}|\mathcal{Y}}}$
in the $\infty$-category of derived schemes over $B\mathbb{G}_{m}$.
###### Proof.
As in the proof of the previous proposition, it amounts to understanding the
pull-back along ${\operatorname{Spec}}k\to
B\mathbb{G}_{m}\to\mathbb{A}^{1}/\mathbb{G}_{m}$ of the groupoid object
$\mathcal{M}_{\bullet}$. This is given by
$\mathcal{X}\leftleftarrows T_{\mathcal{X}|\mathcal{Y}}\cdots$
where we abuse notation and identify $T_{\mathcal{X}|\mathcal{Y}}$ with
$\mathbb{V}(T_{\mathcal{X}|\mathcal{Y}})$. Note that
$T_{\mathcal{X}|\mathcal{Y}}\simeq\Omega_{\mathcal{X}}(T_{\mathcal{X}|\mathcal{Y}}[1])$
and so, we may identify the above colimit diagram with the simplicial nerve
$N(f)$ of the unit section $\mathcal{X}\to
T_{\mathcal{X}|\mathcal{Y}}[1]\simeq N_{\mathcal{X}|\mathcal{Y}}$. The result
now follows from another application of Theorem 2.6. ∎
The following statement summarizes the above discussion:
###### Theorem 5.9.
Let $f:\mathcal{X}\to\mathcal{Y}$ be a closed immersion of schemes. Then there
exists a filtered stack
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\mathcal{X}/\mathcal{Y})\to\mathbb{A}^{1}/\mathbb{G}_{m}$
(making it into a relative scheme over $\mathbb{A}^{1}/\mathbb{G}_{m}$) with
the property that there exists a map
$\mathcal{X}\times\mathbb{A}^{1}/\mathbb{G}_{m}\to
Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\mathcal{X}/\mathcal{Y})$
whose fiber over $1\in\mathbb{A}^{1}/\mathbb{G}_{m}$ is
$\mathcal{X}\to\widehat{\mathcal{Y}_{\mathcal{X}}}$
and whose fiber over $0\in\mathbb{A}^{1}/\mathbb{G}_{m}$ is
$\mathcal{X}\to\widehat{N_{\mathcal{X}|\mathcal{Y}}},$
the formal completion of the normal bundle of $\mathcal{X}$ in $\mathcal{Y}$ along its unit section.
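In the affine case this recovers the classical deformation to the normal cone via the Rees construction; the following is our paraphrase of that standard picture, not a claim taken from the text. For a closed immersion ${\operatorname{Spec}}(A/I)\to{\operatorname{Spec}}A$ one forms the extended Rees algebra:

```latex
R(A,I)=\bigoplus_{n\in\mathbb{Z}}I^{n}t^{-n}\subset A[t,t^{-1}],
\qquad I^{n}:=A\ \ \text{for}\ n\leq 0.
% The fiber at t=1 is R/(t-1) \cong A, while the fiber at t=0 is
% R/(t) \cong \operatorname{gr}_{I}(A)=\bigoplus_{n\geq 0} I^{n}/I^{n+1},
% the functions on the normal cone; the \mathbb{G}_{m}-action encodes the grading.
```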
### 5.2 Deformation of a formal group to its normal cone
Fix a (classical) formal group $\widehat{\mathbb{G}}$. We now apply the above
construction to the unit section of the formal group,
$\iota:{\operatorname{Spec}}k\to\widehat{\mathbb{G}}$. Note that
$\widehat{\mathbb{G}}$ is already formally complete along $\iota$. We set
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}):=Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}({\operatorname{Spec}}k/\widehat{\mathbb{G}})$
This will be a relative scheme over $\mathbb{A}^{1}/\mathbb{G}_{m}$.
###### Proposition 5.10.
Let ${\operatorname{Spec}}k\to\widehat{\mathbb{G}}$ be the unit section of a
formal group. Then, the stack
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})$ of Construction
5.6 is a filtered formal group.
###### Proof.
We will show that there exists a filtered dualizable (and discrete) $R$-module
$M$ for which
$\mathcal{O}(Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}))\simeq\Gamma^{*}_{fil}(M)^{\vee}\simeq\widehat{\operatorname{Sym}_{fil}^{*}}(M^{\vee}).$
As was shown above, there is an equivalence
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})_{1}\simeq\widehat{\mathbb{G}}$
where the left hand side denotes the pullback along ${\operatorname{Spec}}k\to\mathbb{A}^{1}/\mathbb{G}_{m}$; hence we conclude that the underlying object of $\mathcal{O}(Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}({\operatorname{Spec}}k/\widehat{\mathbb{G}}))$ is of the form $k[[t]]\simeq\widehat{\operatorname{Sym}^{*}}(M)$ for $M$ a free $k$-module of rank $1$. We now identify the associated graded of the
filtered algebra corresponding to
$\mathcal{O}(Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}))$. For
this, we use the equivalence
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})_{0}\simeq\widehat{T_{\mathbb{G}|k}}$
of stacks over $B\mathbb{G}_{m}$. We note that the right hand side may indeed
be viewed as a stack over $B\mathbb{G}_{m}$, arising from the weight $-1$
action of $\mathbb{G}_{m}$ by homothety on the fibers. This is the
$\mathbb{G}_{m}$ action which will be compatible with the grading on the dual
numbers $k[\epsilon]$ (which appears in Proposition 5.1) such that $\epsilon$
is of weight one. In particular, since $\widehat{\mathbb{G}}$ is a one
dimensional formal group, it follows that the associated graded is none other
than
$\operatorname{Sym}_{gr}^{*}(M(1))$
the graded symmetric algebra on the graded $k$-module $M(1)$ which is $M$
concentrated in weight $1$. Putting this all together we see that at the level
of filtered objects, there is an equivalence
$\mathcal{O}(Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}))\simeq\widehat{\operatorname{Sym}_{fil}}(M^{f}(1)),$
where $M^{f}(1)$ is the filtered $k$-module
$M^{f}(1)_{n}=\begin{cases}0,&n>1\\M,&n\leq 1\end{cases}$
Recall the deformation to the normal cone will be equipped with a “unit” map
$\mathbb{A}^{1}/\mathbb{G}_{m}\to
Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}).$
By passing to functions, we deduce from this map that the degree $1$ piece of
the filtration on
$\mathcal{O}(Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}))$ is a
submodule of the augmentation ideal of $\widehat{\operatorname{Sym}(M)}$.
Thus, the conditions of Proposition 4.33 are satisfied here, so we conclude
that this filtration is none other than the adic filtration of
$\widehat{\operatorname{Sym}(M)}$ with respect to the augmentation ideal.
Finally by Corollary 4.35, this acquires a canonical abelian cogroup structure
which is a filtered enhancement of that of $\widehat{\mathbb{G}}$, making
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})$ into a filtered
formal group. ∎
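As an illustration (our own worked example, not spelled out in the text), take $\widehat{\mathbb{G}}=\widehat{\mathbb{G}_{m}}$, so that $M\simeq k$ has rank one; the proposition then asserts

```latex
% Illustrative special case: the deformation to the normal cone of G_m-hat.
\mathcal{O}\bigl(Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}_{m}})\bigr)
  \simeq \widehat{\operatorname{Sym}_{fil}}\bigl(k^{f}(1)\bigr),
\qquad
F^{i} = (t)^{i} \subset k[[t]],
\qquad
\operatorname{gr}^{\ast} \simeq \operatorname{Sym}_{gr}^{\ast}\bigl(k(1)\bigr) \cong k[t].
```

That is, the underlying object is the coordinate ring $k[[t]]$ of $\widehat{\mathbb{G}_{m}}$ with its adic filtration, and the associated graded is the coordinate ring of the tangent space $\widehat{\mathbb{G}_{a}}$.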
Now we combine this construction with the
$\mathbb{A}^{1}/\mathbb{G}_{m}$-parametrized Cartier duality of Section 4.
###### Corollary 5.11.
Let $\widehat{\mathbb{G}}$ be a formal group over ${\operatorname{Spec}}k$,
and let $\widehat{\mathbb{G}}^{\vee}$ denote its Cartier dual. Then the
cohomology $R\Gamma(\widehat{\mathbb{G}}^{\vee},\mathcal{O})$ acquires a
canonical filtration.
###### Proof.
By Construction 4.31, the coordinate algebra
$\mathcal{O}(Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}))$
corresponds via duality to an abelian group object in smooth filtered
coalgebras. As we are in the discrete setting, this is equivalent to the
structure of a grouplike commutative monoid in this category. In particular,
this is a filtered Hopf algebra object, so it determines a group stack
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})^{\vee}$ over
$\mathbb{A}^{1}/\mathbb{G}_{m}$. ∎
## 6 The deformation to the normal cone of $\widehat{\mathbb{G}_{m}}$
By the above, given any formal group $\widehat{\mathbb{G}}$, one may define a
filtration on its Cartier dual
$\widehat{\mathbb{G}}^{\vee}=\operatorname{Map}(\widehat{\mathbb{G}},\widehat{\mathbb{G}_{m}})$
in the sense of [Mou19]. In the case of the formal multiplicative group, this
gives a filtration on its Cartier dual $D(\widehat{\mathbb{G}_{m}})=\mathsf{Fix}$. In
[MRT19], the authors defined a canonical filtration on this affine group
scheme (defined over a $\mathbb{Z}_{(p)}$-algebra $k$) given by a certain
interpolation between the kernel of, and fixed points of, the Frobenius on the Witt
vector scheme. We would like to compare the filtration on
$\operatorname{Map}(\widehat{\mathbb{G}_{m}},\widehat{\mathbb{G}_{m}})$ with
this construction.
###### Corollary 6.1.
The filtration defined on $\mathsf{Fix}$ is Cartier dual to the $(x)$-adic
filtration on
$\mathcal{O}(\widehat{\mathbb{G}_{m}})\simeq k[[x]].$
Furthermore, this filtration corresponds to the deformation to the normal cone
construction
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}_{m}})$ applied to
$\widehat{\mathbb{G}_{m}}$.
###### Proof.
Let
$\mathcal{G}_{t}={\operatorname{Spec}}k[X,1/(1+tX)].$
This is an affine group scheme; by varying the parameter $t$, one sees that
it is naturally defined over $\mathbb{A}^{1}$. If $t$ is invertible, then
this is equivalent to $\mathbb{G}_{m}$; if $t=0$, this is just the
additive group ${\mathbb{G}_{a}}$. If we take the formal completion of this at
the unit section, we obtain a formal group $\widehat{\mathcal{G}_{t}}$, with
corresponding formal group law
$F(X,Y)=X+Y+tXY$ (6.2)
which we may think of as a formal group over $\mathbb{A}^{1}$. In [SS01] the
authors describe the Cartier dual of the resulting formal group, for every
$t\in k$, as the group scheme
$\ker(F-t^{p-1}{\operatorname{id}}:\mathbb{W}_{p}\to\mathbb{W}_{p}),$
where $F$ now denotes the Frobenius on the $p$-typical Witt vector scheme.
These of course assemble, by way of the natural $\mathbb{G}_{m}$ action on the
Witt vector scheme $\mathbb{W}$, to give the filtered group scheme
$\mathbb{H}\to\mathbb{A}^{1}/\mathbb{G}_{m}$ of [MRT19], whose classifying
stack is the filtered circle. The algebra of functions
$\mathcal{O}(\mathbb{H})$ acquires a comultiplication; by
results of [Mou19], we may think of this as a filtered Hopf algebra.
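The interpolation property of the family $\mathcal{G}_{t}$ can be checked by elementary algebra: $F(X,Y)=X+Y+tXY$ is a commutative, associative, unital law for every $t$, and the substitution $u=1+tX$ linearizes it to multiplication. A minimal sketch in exact rational arithmetic (the function name `F` and the point-sampling check are ours, not from [SS01] or [MRT19]):

```python
from fractions import Fraction
import random

def F(a, b, t):
    # the group law F(X, Y) = X + Y + t*X*Y of the family G_t
    return a + b + t * a * b

random.seed(0)
for _ in range(100):
    x, y, z, t = (Fraction(random.randint(-9, 9), random.randint(1, 9))
                  for _ in range(4))
    # unit, commutativity, associativity
    assert F(x, Fraction(0), t) == x
    assert F(x, y, t) == F(y, x, t)
    assert F(F(x, y, t), z, t) == F(x, F(y, z, t), t)
    # u = 1 + tX linearizes F to multiplication: 1 + t*F(x,y) = (1+tx)(1+ty)
    assert 1 + t * F(x, y, t) == (1 + t * x) * (1 + t * y)
    # at t = 0 the law degenerates to the additive group
    assert F(x, y, Fraction(0)) == x + y
```

Since these are polynomial identities of bounded degree, exact checks at random rational points verify them; symbolically, associativity follows from $1+tF(X,Y)=(1+tX)(1+tY)$.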
Let us identify this filtered Hopf algebra (which, by abuse of notation, we
still refer to as $\mathcal{O}(\mathbb{H})$) a bit further. After passing to
underlying objects, it is the divided power coalgebra
$\bigoplus\Gamma^{n}(k)$. The algebra structure on this comes from the
multiplication on $\widehat{\mathbb{G}_{m}}$, via Cartier duality. On the
graded side, we have the coordinate algebra of $\mathsf{Ker}$, which by
[Dri20, Lemma 3.2.6], is none other than the free divided power algebra
$k\langle x\rangle\cong k[x,\frac{x^{2}}{2!},...].$
One gives this the grading where each $\frac{x^{n}}{n!}$ is of pure weight
$-n$. The underlying graded smooth coalgebra is
$\bigoplus_{n}\Gamma^{n}_{gr}(k(-1))$
We deduce by weight reasons that there is an equivalence of filtered
coalgebras
$\mathcal{O}(\mathbb{H})\simeq\bigoplus_{n}\Gamma^{n}(k^{f}(-1)),$
where $k^{f}(-1)$ is trivial in filtering degrees $n>1$ and equal to $k$
otherwise.
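The integrality underlying the free divided power algebra can be checked directly: the products $\gamma_{m}\gamma_{n}=\binom{m+n}{m}\gamma_{m+n}$ of the generators $\gamma_{j}=x^{j}/j!$ have integer structure constants, which is what allows $k\langle x\rangle$ to be defined over $\mathbb{Z}_{(p)}$ despite the factorials. A minimal sketch (the helper name `structure_constant` is ours):

```python
from math import comb, factorial

def structure_constant(m: int, n: int) -> int:
    # In k<x> = k[x, x^2/2!, ...], write gamma_j = x^j / j!.  Then
    #   gamma_m * gamma_n = ((m+n)! / (m! n!)) * gamma_{m+n},
    # and the coefficient is the integer binomial C(m+n, m).
    c = factorial(m + n) // (factorial(m) * factorial(n))
    assert c == comb(m + n, m)  # the constant is a binomial coefficient
    return c

# Integrality of all structure constants is what lets the divided power
# algebra be defined over Z (hence over Z_(p)), despite the n!'s.
assert all(
    factorial(m + n) % (factorial(m) * factorial(n)) == 0
    for m in range(10) for n in range(10)
)
print(structure_constant(2, 3))  # gamma_2 * gamma_3 = 10 * gamma_5
```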
The consequence of the analysis of the above paragraph is that the Hopf
algebra structure on $\mathcal{O}(\mathbb{H})$ corresponds to the data of an
abelian group object in smooth filtered coalgebras, cf. section 4. In
particular, it corresponds to an abelian cogroup structure on the dual,
$\widehat{\operatorname{Sym}_{fil}^{*}(k^{f}(1))}$. This is a complete
filtered algebra satisfying the conditions of Proposition 4.33 and thus
coincides with the adic filtration on $k[[x]]$. The corresponding filtered
coalgebra structure is the unique one commensurate with the adic filtration,
since by Corollary 4.35, the comultiplication preserves the adic filtration.
Thus, there exists a unique filtered formal group which recovers
$\widehat{\mathbb{G}_{m}}$ and $\widehat{\mathbb{G}_{a}}$ upon taking
underlying objects and associated gradeds respectively. In the setting of the
filtered Cartier duality of Section 4, this must then be dual to the specified
abelian group object structure on $\mathcal{O}(\mathbb{H})$.
Finally we relate this to the deformation to the normal cone construction
applied to $\widehat{\mathbb{G}_{m}}$, which also outputs a filtered formal
group. Indeed by the reasoning of Proposition 5.10, this filtered formal group
is itself given by the adic filtration on $k[[t]]$ together with the filtered
coalgebra structure uniquely determined by the group structure on
$\widehat{\mathbb{G}_{m}}$. ∎
## 7 $\widehat{\mathbb{G}}$-Hochschild homology
As an application of the above deformation to the normal cone constructions
associated to a formal group, we develop somewhat further the following
proposal of [MRT19] described in the introduction.
###### Construction 7.1.
Let $k$ be a $\mathbb{Z}_{(p)}$-algebra. Let $\widehat{\mathbb{G}}$ be a
formal group over $k$. Its Cartier dual $\widehat{\mathbb{G}}^{\vee}$ is an
affine commutative group scheme. We let $B\widehat{\mathbb{G}}^{\vee}$ denote
the classifying stack of the group scheme $\widehat{\mathbb{G}}^{\vee}$. Let
$X={\operatorname{Spec}}A$ be an affine derived scheme, corresponding to a
simplicial commutative ring $A$. One forms the derived mapping stack
$\operatorname{Map}_{dStk_{k}}(B\widehat{\mathbb{G}}^{\vee},X).$
If $\widehat{\mathbb{G}}=\widehat{\mathbb{G}_{m}}$, then by the affinization
techniques of [Toë06, MRT19], one recovers, at the level of global sections,
$R\Gamma(\operatorname{Map}_{dStk}(B\widehat{\mathbb{G}_{m}}^{\vee},X),\mathcal{O})\simeq\operatorname{HH}(A),$
the Hochschild homology of $A$.
Following this example, one can make the following definition (cf. [MRT19,
Section 6.3]).
###### Definition 7.2.
Let $\widehat{\mathbb{G}}$ be a formal group over $k$. Let
$\operatorname{HH}^{\widehat{\mathbb{G}}}:\operatorname{sCAlg}_{k}\to{\operatorname{Mod}}_{k}$
be the functor defined by
$\operatorname{HH}^{\widehat{\mathbb{G}}}(A):=R\Gamma(\operatorname{Map}_{dStk}(B\widehat{\mathbb{G}}^{\vee},{\operatorname{Spec}}A),\mathcal{O})$
As was shown in Section 5.2, given a formal group $\widehat{\mathbb{G}}$ over
a commutative ring $R$, one can apply the deformation to the normal cone
construction to obtain a formal group
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})$
over $\mathbb{A}^{1}/\mathbb{G}_{m}$. By applying
$\mathbb{A}^{1}/\mathbb{G}_{m}$-parametrized Cartier duality, one obtains a
group scheme over $\mathbb{A}^{1}/\mathbb{G}_{m}$.
###### Theorem 7.3.
Let $\widehat{\mathbb{G}}$ be an arbitrary formal group. The functor
$\operatorname{HH}^{\widehat{\mathbb{G}}}(-):\operatorname{sCAlg}_{R}\to{\operatorname{Mod}}_{R}$
admits a refinement to the $\infty$-category of filtered $R$-modules
$\widetilde{\operatorname{HH}^{\widehat{\mathbb{G}}}(-)}:\operatorname{sCAlg}_{R}\to{\operatorname{Mod}}_{R}^{filt},$
such that
$\operatorname{HH}^{\widehat{\mathbb{G}}}(-)\simeq\operatorname{colim}_{(\mathbb{Z},\leq)}\widetilde{\operatorname{HH}^{\widehat{\mathbb{G}}}(-)}$
###### Proof.
Let $Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})^{\vee}$ be the
Cartier dual of the deformation to the normal cone
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})$. Form the mapping
stack
$\operatorname{Map}_{dStk_{/\mathbb{A}^{1}/\mathbb{G}_{m}}}(BDef_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})^{\vee},X\times\mathbb{A}^{1}/\mathbb{G}_{m}).$
This base-changes along the map
$1:{\operatorname{Spec}}k\to\mathbb{A}^{1}/\mathbb{G}_{m}$
to the mapping stack
$\operatorname{Map}_{dStk_{k}}(B\widehat{\mathbb{G}}^{\vee},X),$
which gives the desired geometric refinement. The stack
$\operatorname{Map}_{dStk_{/\mathbb{A}^{1}/\mathbb{G}_{m}}}(BDef_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})^{\vee},X\times\mathbb{A}^{1}/\mathbb{G}_{m})$
is a derived scheme relative to the base $\mathbb{A}^{1}/\mathbb{G}_{m}$.
Indeed, it is nilcomplete, infinitesimally cohesive and admits an obstruction
theory by the arguments of [TV08, Section 2.2.6.3]. Finally its truncation is
the relative scheme $t_{0}X\times\mathbb{A}^{1}/\mathbb{G}_{m}$ over
$\mathbb{A}^{1}/\mathbb{G}_{m}$; this follows from the identification
$t_{0}\operatorname{Map}(B\widehat{\mathbb{G}}^{\vee},X)\simeq
t_{0}\operatorname{Map}(B\widehat{\mathbb{G}}^{\vee},t_{0}X)$
and from the fact that there are no nonconstant (nonderived) maps $BG\to
t_{0}X$ for $G$ a group scheme.
Hence we conclude by the criteria of [TV08, Theorem C.0.9] that this is a
relative affine derived scheme. By Proposition 2.1, we conclude that the
mapping stack above,
$\mathcal{L}_{fil}^{\widehat{\mathbb{G}}}(X)\to\mathbb{A}^{1}/\mathbb{G}_{m},$
is of finite cohomological dimension and so
$\widetilde{\operatorname{HH}^{\widehat{\mathbb{G}}}(A)}$ defines an
exhaustive filtration on $\operatorname{HH}^{\widehat{\mathbb{G}}}(A)$. ∎
###### Remark 7.4.
In characteristic zero, all one-dimensional formal groups are equivalent to
the additive formal group $\widehat{\mathbb{G}_{a}}$, via an equivalence with
its tangent Lie algebra. In particular the above filtration splits
canonically, and one obtains an equivalence of derived schemes
$\operatorname{Map}_{dStk}(B\widehat{\mathbb{G}}^{\vee},X)\simeq\mathbb{T}_{X|R}[-1]$
In positive or mixed characteristic this is of course not true. However, one
can view all these theories as deformations along the map
$B\mathbb{G}_{m}\to\mathbb{A}^{1}/\mathbb{G}_{m}$ of the de Rham algebra
$DR(A)=\operatorname{Sym}(\mathbb{L}_{A|k}[1]).$
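To make the characteristic-zero splitting concrete, here is a standard illustrative computation (ours, not carried out in the text): for $A=k[x]$ with $\operatorname{char}k=0$, one has $\mathbb{L}_{A|k}\simeq\Omega^{1}_{A|k}\simeq A\,dx$, so

```latex
% Illustrative HKR-type computation for A = k[x] in characteristic zero.
\operatorname{HH}^{\widehat{\mathbb{G}}}(k[x])
  \simeq \mathcal{O}\bigl(\mathbb{T}_{{\operatorname{Spec}}k[x]\,|\,k}[-1]\bigr)
  \simeq \operatorname{Sym}_{k[x]}\bigl(\Omega^{1}_{k[x]|k}[1]\bigr)
  \simeq k[x] \oplus k[x]\cdot dx\,[1],
```

recovering the Hochschild–Kostant–Rosenberg description of $\operatorname{HH}(k[x]/k)$, independently of the chosen one-dimensional formal group $\widehat{\mathbb{G}}$.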
## 8 Liftings to spectral deformation rings
In this section we lift the above discussion to the setting of spectral
algebraic geometry over various ring spectra that parametrize _deformations_
of formal groups. These are defined in [Lur18] in the context of elliptic
cohomology. As we will be switching gears now and working in this
setting, we will spend some time recalling and slightly clarifying some of the
ideas in [Lur18]. Namely, we introduce a correspondence between formal groups
over $E_{\infty}$-rings, and spectral affine group schemes, and show it to be
compatible with Cartier duality in the classical setting. We stress that the
necessary ingredients already appear in [Lur18].
### 8.1 Formal groups over the sphere
We recall various aspects of the treatment of formal groups in the setting of
spectra and spectral algebraic geometry. The definition is based on the notion
of smooth coalgebra studied in Section 3.
###### Definition 8.1.
Fix an arbitrary $E_{\infty}$-ring $R$, and let $C$ be a coalgebra over $R$.
Recall that this means that
$C\in\operatorname{CAlg}({\operatorname{Mod}}_{R}^{op})^{op}$. Then $C$ is
smooth if it is flat as an $R$-module, and if $\pi_{0}C$ is smooth as a
coalgebra over $\pi_{0}(R)$, as in Definition 3.11.
Given an arbitrary coalgebra $C$ over $R$, the linear dual
$C^{\vee}=\operatorname{Map}(C,R)$ acquires a canonical $E_{\infty}$-algebra
structure. In general $C$ cannot be recovered from $C^{\vee}$. However, in the
smooth case, the dual $C$ acquires the additional structure of a topology on
$\pi_{0}$ giving it the structure of an adic $E_{\infty}$ algebra. This allows
us to recover $C$, via the following proposition, c.f. [Lur18, Theorem
1.3.15]:
###### Proposition 8.2.
Let $C,D\in\operatorname{cCAlg}^{sm}_{R}$ be smooth coalgebras. Then
$R$-linear duality induces a homotopy equivalence
$\operatorname{Map}_{\operatorname{cCAlg}_{R}}(C,D)\simeq\operatorname{Map}^{\operatorname{cont}}_{\operatorname{CAlg}_{R}}(C^{\vee},D^{\vee}).$
###### Remark 8.3.
One can go further and characterize intrinsically all adic $E_{\infty}$
algebras that arise as duals of smooth coalgebras. These (locally) have
underlying homotopy groups a formal power series ring.
###### Construction 8.4.
Given a coalgebra $C\in\operatorname{cCAlg}_{R}$, one may define a functor
$\operatorname{coSpec}(C):\operatorname{CAlg}_{R}^{cn}\to\mathcal{S};$
this associates, to a connective $R$-algebra $A$, the space of grouplike
elements:
$\operatorname{GLike}(A\otimes_{R}C)=\operatorname{Map}_{\operatorname{cCAlg}_{A}}(A,A\otimes_{R}C).$
###### Remark 8.5.
Fix $C$ a smooth coalgebra. There is always a canonical map of stacks
$\operatorname{coSpec}(C)\to{\operatorname{Spec}}(A)$ where $A=C^{\vee}$, but
it is typically not an equivalence. The condition that $C$ is smooth
guarantees precisely that there is an induced equivalence
$\operatorname{coSpec}(C)\to\operatorname{Spf}(A)\subseteq{\operatorname{Spec}}A$,
where $\operatorname{Spf}(A)$ denotes the formal spectrum of the adic
$E_{\infty}$ algebra $A$. In particular $\operatorname{coSpec}(C)$ is a formal
scheme in the sense of [Lur16, Chapter 8].
One has the following proposition, to be compared with Proposition 3.15:
###### Proposition 8.6 (Lurie).
Let $R$ be an $E_{\infty}$-ring. Then the construction
$C\mapsto\operatorname{coSpec}(C)$ induces a fully faithful embedding of
$\infty$-categories
$\operatorname{cCAlg}^{sm}_{R}\to\operatorname{Fun}(\operatorname{CAlg}^{cn}_{R},\mathcal{S}).$
This facilitates the following definition of a formal group in the setting of
spectral algebraic geometry.
###### Definition 8.7.
A functor $X:\operatorname{CAlg}^{cn}_{R}\to\mathcal{S}$ is a formal
hyperplane if it is in the essential image of the $\operatorname{coSpec}$
functor. We now define a formal group to be an abelian group object in formal
hyperplanes, namely an object of $\operatorname{Ab}(\operatorname{HypPlane})$.
As is evident from the thread of the above construction, one may define a
formal group to be a certain type of Hopf algebra, but in a somewhat strict
sense. Namely, we can define a formal group to be an object of
$\operatorname{Ab}(\operatorname{cCAlg}^{sm})$, i.e. an abelian group object
in the $\infty$-category of smooth coalgebras.
###### Remark 8.8.
The monoidal structure on $\operatorname{cCAlg}_{R}$ induced by the underlying
smash product of $R$-modules is Cartesian; in particular it is given by the
product in this $\infty$-category. Hence, a “commutative monoid object” in the
category of $R$-coalgebras will be a coalgebra which is additionally equipped
with an $E_{\infty}$-algebra structure. In particular, such objects will be
bialgebras.
###### Construction 8.9.
Let $\widehat{\mathbb{G}}$ be a formal group over an $E_{\infty}$-algebra $R$.
Let $\mathcal{H}$ be a strict Hopf algebra in the above sense,
for which
$\operatorname{coSpec}\mathcal{H}=\widehat{\mathbb{G}}.$
Let
$U:\operatorname{Ab}(\operatorname{cCAlg}_{R})\to\operatorname{CMon}(\operatorname{cCAlg}_{R})$
be the forgetful functor from abelian group objects to commutative monoids.
Since the monoidal structure on $\operatorname{cCAlg}_{R}$ is cartesian, the
structure of a commutative monoid in $\operatorname{cCAlg}_{R}$ is that of a
commutative algebra on the underlying $R$-module, and so we may view such an
object as a bialgebra in ${\operatorname{Mod}}_{R}$. Finally, we apply
${\operatorname{Spec}}(-)$ (the spectral version) to this bialgebra to obtain
a group object in the category of spectral schemes. This is what we refer to
as the _Cartier dual_ $\widehat{\mathbb{G}}^{\vee}$ of $\widehat{\mathbb{G}}$.
###### Remark 8.10.
The above just makes precise, for a strict Hopf algebra $\mathcal{H}$ (i.e.
an abelian group object), the association
$\operatorname{Spf}(\mathcal{H}^{\vee})\simeq\operatorname{coSpec}(\mathcal{H})\mapsto{\operatorname{Spec}}(\mathcal{H}).$
Unlike the $1$-categorical setting studied so far, there is no equivalence
underlying this, as passing from abelian group objects to commutative
monoid objects loses information; hence this is not a duality in the precise
sense. In particular, it is not clear how to obtain a spectral formal group
from a grouplike commutative monoid in schemes, even if the underlying
coalgebra is smooth.
###### Proposition 8.11.
Let $R\to R^{\prime}$ be a morphism of $E_{\infty}$-rings and let
$\widehat{\mathbb{G}}$ be a formal group over ${\operatorname{Spec}}R$, and
$\widehat{\mathbb{G}}_{R^{\prime}}$ its extension to $R^{\prime}$. Then
Cartier duality satisfies base-change, so that there is an equivalence
$D(\widehat{\mathbb{G}}_{R^{\prime}})\simeq D(\widehat{\mathbb{G}})|_{R^{\prime}}$
###### Proof.
Let $\widehat{\mathbb{G}}=\operatorname{Spf}(A)$ be a formal group
corresponding to the adic $E_{\infty}$ ring $A$. Then the Cartier dual is
given by ${\operatorname{Spec}}(\mathcal{H})$ for $\mathcal{H}=A^{\vee}$, the
linear dual of $A$ which is a smooth coalgebra. The linear duality functor
$(-)^{\vee}=\operatorname{Map}_{R}(-,R)$ (see for example [Lur18, Remark 1.3.5])
commutes with base change and is an equivalence between smooth coalgebras
and their duals. Moreover it preserves finite products and so can be upgraded
to a functor between abelian group objects. ∎
### 8.2 Deformations of formal groups
Let us recall the definition of a deformation of a formal group. These are all
standard notions.
###### Definition 8.12.
Let $\widehat{\mathbb{G}_{0}}$ be a formal group defined over a finite field
$k$ of characteristic $p$. Let $A$ be a complete local Noetherian ring
equipped with a ring homomorphism $\rho:A\to k$ inducing an isomorphism
$A/\mathfrak{m}\cong k$. A deformation of $\widehat{\mathbb{G}_{0}}$ along
$\rho$ is a pair $(\widehat{\mathbb{G}},\alpha)$ where $\widehat{\mathbb{G}}$
is a formal group over $A$ and
$\alpha:\widehat{\mathbb{G}_{0}}\to\widehat{\mathbb{G}}|_{k}$ is an
isomorphism of formal groups over $k$.
The data $(\widehat{\mathbb{G}},\alpha)$ can be organized into a category
$\operatorname{Def}_{\widehat{\mathbb{G}_{0}}}(A)$. The following classic
theorem due to Lubin and Tate asserts that there exists a universal
deformation, in the sense that there is a ring which corepresents the functor
$A\mapsto\operatorname{Def}_{\widehat{\mathbb{G}_{0}}}(A)$.
###### Theorem 8.13 (Lubin-Tate).
Let $k$ be a perfect field of characteristic $p$ and let
$\widehat{\mathbb{G}_{0}}$ be a one dimensional formal group of height
$n<\infty$ over $k$. Then there exists a complete local Noetherian ring
$R^{cl}_{\widehat{\mathbb{G}}}$, a ring homomorphism
$\rho:R^{cl}_{\widehat{\mathbb{G}}}\to k$
inducing an isomorphism $R^{cl}_{\widehat{\mathbb{G}}}/\mathfrak{m}\cong k$,
and a deformation $(\widehat{\mathbb{G}},\alpha)$ along $\rho$ with the
following universal property: for any other complete local ring $A$ with an
isomorphism $A/\mathfrak{m}\cong k$, extension of scalars induces an
equivalence
$\operatorname{Hom}_{k}(R^{cl}_{\widehat{\mathbb{G}}},A)\simeq\operatorname{Def}_{\widehat{\mathbb{G}_{0}}}(A,\rho)$
(here, we regard the right hand side as a category with only identity
morphisms).
For the purposes of this text, we can interpret the above as saying that every
formal group over a complete local ring $A$ with residue field $k$ can be
obtained from the universal formal group over $A_{0}:=R^{cl}_{\widehat{\mathbb{G}}}$
by base change along the map $A_{0}\to A$. We let $\mathbb{G}^{un}$ denote the
universal formal group
over this ring.
###### Remark 8.14.
As a consequence of the classification of formal groups due to Lazard, one has
a description
$A_{0}\cong W(k)[[v_{1},...,v_{n-1}]],$
where the map $\rho:W(k)[[v_{1},...,v_{n-1}]]\to k$ has kernel the maximal
ideal $\mathfrak{m}=(p,v_{1},...,v_{n-1})$.
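For instance (a standard height-one example, ours): take $k=\mathbb{F}_{p}$ and $\widehat{\mathbb{G}_{0}}=\widehat{\mathbb{G}_{m}}$, which has height $n=1$, so there are no variables $v_{i}$ and

```latex
% Height-one example: the Lubin-Tate ring of G_m-hat over F_p.
A_{0} \cong W(\mathbb{F}_{p}) \cong \mathbb{Z}_{p},
\qquad
\mathfrak{m} = (p),
```

with universal deformation the multiplicative formal group $\widehat{\mathbb{G}_{m}}$ over $\mathbb{Z}_{p}$: every deformation to a complete local ring with residue field $\mathbb{F}_{p}$ arises from it by base change.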
### 8.3 Deformations over the sphere
As it turns out, the ring $A_{0}$ has the special property that it can be
lifted to a $K(n)$-local $E_{\infty}$-ring spectrum. To motivate the discussion, we
restate a classic theorem attributed to Goerss, Hopkins and Miller. We first
set some notation.
###### Definition 8.15.
Let $\mathcal{FG}$ denote the category with
* •
objects being pairs $(k,\widehat{\mathbb{G}})$ where $k$ is a perfect field of
characteristic $p$, and $\widehat{\mathbb{G}}$ is a formal group over $k$
* •
A morphism from $(k,\widehat{\mathbb{G}})$ to
$(k^{\prime},\widehat{\mathbb{G}}^{\prime})$ is a pair $(f,\alpha)$ where
$f:k\to k^{\prime}$ is a ring homomorphism, and
$\alpha:\widehat{\mathbb{G}}\otimes_{k}k^{\prime}\cong\widehat{\mathbb{G}}^{\prime}$ is an
isomorphism of formal groups over $k^{\prime}$
###### Theorem 8.16 (Goerss-Hopkins-Miller).
Let $k$ be a perfect field of characteristic $p>0$, and let
$\widehat{\mathbb{G}_{0}}$ be a formal group of height $n<\infty$ over $k$.
Then there is a functor
$E:\mathcal{FG}\to\operatorname{CAlg},\,\,\,\,\,(k,\widehat{\mathbb{G}})\mapsto
E_{k,\widehat{\mathbb{G}}}$
such that for every $(k,\widehat{\mathbb{G}})$, the following holds
1. 1.
$E_{k,\widehat{\mathbb{G}}}$ is even periodic and complex orientable.
2. 2.
the corresponding formal group over $\pi_{0}E_{k,\widehat{\mathbb{G}}}$ is the
universal deformation of $(k,\widehat{\mathbb{G}})$. In particular,
$\pi_{0}E_{k,\widehat{\mathbb{G}}}\cong
A_{0}\cong\mathbb{W}(k)[[v_{1},...,v_{n-1}]]$
If we set $(k,\widehat{\mathbb{G}})=(\mathbb{F}_{p^{n}},\Gamma)$, where
$\Gamma$ is the $p$-typical formal group of height $n$, we denote
$E_{n}:=E_{\mathbb{F}_{p^{n}},\Gamma};$
this is the $n$th _Morava E-theory_.
###### Remark 8.17.
The original approach to this uses Goerss-Hopkins obstruction theory. A modern
account due to Lurie can be found in [Lur18, Chapter 5].
As it turns out, this ring can be thought of as parametrizing oriented
deformations of the formal group $\widehat{\mathbb{G}}$. This terminology,
introduced in [Lur18], roughly means that the formal group in question is
equivalent to the Quillen formal group arising from the complex orientation on
the base ring. However, there exists an $E_{\infty}$-algebra parametrizing
_unoriented deformations_ of the formal group over $k$.
###### Theorem 8.18 (Lurie).
Let $k$ be a perfect field of characteristic $p$, and let
$\widehat{\mathbb{G}}$ be a formal group of height $n$ over $k$. There exists
a morphism of connective $E_{\infty}$-rings
$\rho:R^{un}_{\widehat{\mathbb{G}}}\to k$
and a deformation of $\widehat{\mathbb{G}}$ along $\rho$ with the following
properties
1. 1.
$R^{un}_{\widehat{\mathbb{G}}}$ is Noetherian, there is an induced surjection
$\epsilon:\pi_{0}R^{un}_{\widehat{\mathbb{G}}}\to k$ and
$R^{un}_{\widehat{\mathbb{G}}}$ is complete with respect to the ideal
$\ker(\epsilon)$.
2. 2.
Let $A$ be a Noetherian $E_{\infty}$-ring for which the underlying ring
homomorphism $\epsilon:\pi_{0}(A)\to k$ is surjective and $A$ is complete with
respect to the ideal $\ker(\epsilon)$. Then extension of scalars induces an
equivalence
$\operatorname{Map}_{\operatorname{CAlg}_{/k}}(R^{un}_{\widehat{\mathbb{G}}},A)\simeq\operatorname{Def}_{\widehat{\mathbb{G}}}(A)$
###### Remark 8.19.
We can interpret this theorem as saying that the ring
$R^{un}_{\widehat{\mathbb{G}}_{0}}$ corepresents the spectral formal moduli
problem classifying deformations of $\widehat{\mathbb{G}}_{0}$. Of course this
then means that there exists a universal deformation (this is non-classical)
over $R^{un}_{\widehat{\mathbb{G}}_{0}}$ which base-changes to any other
deformation of $\widehat{\mathbb{G}}_{0}$.
###### Remark 8.20.
This is actually proven in the setting of _$p$ -divisible groups_ over more
general algebras over $k$. However, the formal group in question is the
identity component of a $p$-divisible group over $k$; moreover, any
deformation of the formal group will arise as the identity component of a
deformation of the corresponding $p$-divisible group (cf. [Lur18, Example
3.0.5]).
Now fix an arbitrary formal group $\widehat{\mathbb{G}}$ of height $n$ over a
finite field, and take its Cartier dual
$\mathsf{Fix}_{\widehat{\mathbb{G}}}:=\widehat{\mathbb{G}}^{\vee}$. From
Construction 8.9, we see that this is an affine group scheme over
${\operatorname{Spec}}k$.
###### Theorem 8.21.
There exists a spectral scheme $\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}}$
defined over the $E_{\infty}$ ring $R^{un}_{\widehat{\mathbb{G}}}$, which
lifts $\mathsf{Fix}_{\widehat{\mathbb{G}}}$, giving rise to the following
Cartesian diagram of spectral schemes:
$\begin{array}{ccc}\mathsf{Fix}_{\widehat{\mathbb{G}}}&\xrightarrow{\phi^{\prime}}&\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}}\\{\scriptstyle p^{\prime}}\downarrow&&\downarrow{\scriptstyle\phi}\\{\operatorname{Spec}}(\mathbb{F}_{p})&\xrightarrow{p}&{\operatorname{Spec}}(R^{un}_{\widehat{\mathbb{G}}})\end{array}$
###### Proof.
By Theorem 8.18 above, given a formal group $\widehat{\mathbb{G}}$ over a
perfect field, the functor associating to an augmented ring $A\to k$ the
groupoid of deformations $\operatorname{Def}(A)$ is corepresented by the
spectral (unoriented) deformation ring $R^{un}_{\widehat{\mathbb{G}}}$. Hence
we obtain a map
$R^{un}_{\widehat{\mathbb{G}}}\to\mathbb{F}_{p}$
of $E_{\infty}$-algebras over $k$. Over
${\operatorname{Spec}}(R^{un}_{\widehat{\mathbb{G}}})$, one has the universal
deformation $\widehat{\mathbb{G}}_{un}$. This base-changes along the above map
to $\widehat{\mathbb{G}}$. By definition, this formal group is of the form
$\operatorname{coSpec}(\mathcal{H})$ for some
$\mathcal{H}\in\operatorname{Ab}(\operatorname{cCAlg}^{sm}_{{R^{un}_{\widehat{\mathbb{G}}}}})$.
Let
$U:\operatorname{Ab}(\operatorname{cCAlg}^{sm}_{{R^{un}_{\widehat{\mathbb{G}}}}})\to\operatorname{CMon}^{gp}(\operatorname{cCAlg}^{sm}_{{R^{un}_{\widehat{\mathbb{G}}}}})$
be the forgetful functor from abelian group objects to grouplike commutative
monoid objects. We recall that the symmetric monoidal structure on
cocommutative coalgebras is the cartesian one. Hence, grouplike commutative
monoids will have the structure of $E_{\infty}$-algebras in the symmetric
monoidal $\infty$-category of $R^{un}_{\widehat{\mathbb{G}}}$-modules. In
particular we obtain a commutative and cocommutative bialgebra, so we can take
${\operatorname{Spec}}(\mathcal{H})$; this will be a grouplike commutative
monoid object in the category of affine spectral schemes over
${\operatorname{Spec}}(R^{un}_{\widehat{\mathbb{G}}})$. Since Cartier duality
commutes with base change (cf. Proposition 8.11), we conclude that
${\operatorname{Spec}}(\mathcal{H})$ base-changes to
$\mathsf{Fix}_{\widehat{\mathbb{G}}}$ under the map
$R^{un}_{\widehat{\mathbb{G}}}\to\mathbb{F}_{p}$. ∎
###### Example 8.22.
As a motivating example, let $\widehat{\mathbb{G}}=\widehat{\mathbb{G}_{m}}$,
the formal multiplicative group over $\mathbb{F}_{p}$. As described in _loc.
cit._, this formal group is Cartier dual to
$\operatorname{Fix}\subset\mathbb{W}_{p}$, the Frobenius fixed point subgroup
scheme of the Witt vectors $\mathbb{W}_{p}(-)$. This lifts to
$R^{un}_{\widehat{\mathbb{G}_{m}}}$, which in this case is none other than the
$p$-complete sphere spectrum $\mathbb{S}^{\wedge}_{p}$. In fact, this object
lifts to the sphere itself, by the discussion in [Lur18, Section 1.6]. Hence
we obtain an abelian group object in the category
$\operatorname{cCAlg}_{\mathbb{S}^{\wedge}_{p}}$ of smooth coalgebras over the
$p$-complete sphere. Taking the image of this along the forgetful functor
$\operatorname{Ab}(\operatorname{cCAlg}_{\mathbb{S}^{\wedge}_{p}})\to\operatorname{CMon}(\operatorname{cCAlg}_{\mathbb{S}^{\wedge}_{p}})$
we obtain a grouplike commutative monoid $\mathcal{H}$ in
$\operatorname{cCAlg}_{\mathbb{S}^{\wedge}_{p}}$, namely a bialgebra in
$p$-complete spectra. We set
${\operatorname{Spec}}\mathcal{H}=\mathsf{Fix}^{un}_{\widehat{\mathbb{G}_{m}}}$.
Then base changing $\mathsf{Fix}^{un}_{\widehat{\mathbb{G}_{m}}}$ along the map
$\mathbb{S}^{\wedge}_{p}\to\tau_{\leq 0}\mathbb{S}^{\wedge}_{p}\simeq\mathbb{Z}_{p}\to\mathbb{F}_{p}$
recovers precisely the affine group scheme
$\mathsf{Fix}_{\widehat{\mathbb{G}_{m}}}$, by compatibility of Cartier
duality with base change.
One may even go further and base-change to the orientation classifier (cf.
[Lur18, Chapter 6])
$\mathbb{S}^{\wedge}_{p}\simeq R^{un}_{\widehat{\mathbb{G}_{m}}}\to
R^{or}_{\widehat{\mathbb{G}_{m}}}\simeq E_{1}$
and recover height one Morava $E$-theory, a complex orientable spectrum.
Moreover, in height one, Morava $E$-theory is the $p$-complete complex
$K$-theory spectrum $KU^{\wedge}_{p}$. Applying the above procedure, one
obtains the Hopf algebra corresponding to
$C_{*}(\mathbb{C}P^{\infty},KU^{\wedge}_{p})$
whose algebra structure is induced by the abelian group structure on
$\mathbb{C}P^{\infty}$. We now take the spectrum of this bialgebra; note that
this is to be done in the nonconnective sense (see [Lur16]) as
$KU^{\wedge}_{p}$ is nonconnective. In any case, one obtains an affine
nonconnective spectral group scheme
${\operatorname{Spec}}(C_{*}(\mathbb{C}P^{\infty},KU^{\wedge}_{p}))$
which arises via base change along
${\operatorname{Spec}}KU^{\wedge}_{p}\to{\operatorname{Spec}}R^{un}_{\widehat{\mathbb{G}}_{m}}$
We summarize this discussion with the following diagram of pullback squares in
the $\infty$-category of nonconnective spectral schemes:
$\begin{array}{ccccc}\mathsf{Fix}&\xrightarrow{\phi^{\prime}}&{\operatorname{Spec}}(T)&\longleftarrow&{\operatorname{Spec}}(C_{*}(\mathbb{C}P^{\infty},KU^{\wedge}_{p}))\\{\scriptstyle p^{\prime}}\downarrow&&\downarrow{\scriptstyle\phi}&&\downarrow\\{\operatorname{Spec}}(\mathbb{F}_{p})&\xrightarrow{p}&{\operatorname{Spec}}(R^{un}_{\widehat{\mathbb{G}}})&\longleftarrow&{\operatorname{Spec}}(KU^{\wedge}_{p})\end{array}$
Note that the map $\mathbb{S}^{\wedge}_{p}\to KU^{\wedge}_{p}$ factors as
$\mathbb{S}^{\wedge}_{p}\to ku^{\wedge}_{p}\to KU^{\wedge}_{p}$
through $p$-complete connective complex $K$-theory, so these lifts exist
there as well.
## 9 Lifts of $\widehat{\mathbb{G}}$-Hochschild homology to the sphere
Let $\widehat{\mathbb{G}}$ be a height $n$ formal group over a perfect field
$k$. We study a variant of $\widehat{\mathbb{G}}$-Hochschild homology which is
more adapted to the tools of spectral algebraic geometry. Roughly speaking, we
take mapping stacks in the setting of spectral algebraic geometry over $k$,
instead of derived algebraic geometry.
###### Definition 9.1.
Let $\widehat{\mathbb{G}}$ be a formal group. We define the _$E_{\infty}$
-$\widehat{\mathbb{G}}$ Hochschild homology_ to be the functor defined by
$\operatorname{HH}^{\widehat{\mathbb{G}}}_{E_{\infty}}:\operatorname{CAlg}_{k}^{cn}\to\operatorname{CAlg}_{k}^{cn},\,\,\,\,\,\operatorname{HH}^{\widehat{\mathbb{G}}}_{E_{\infty}}(A)=\operatorname{Map}_{sStk_{k}}(B\widehat{\mathbb{G}}^{\vee},{\operatorname{Spec}}A),$
where $\operatorname{Map}_{sStk_{k}}(-,-)$ denotes the internal mapping object
of the $\infty$-topos $sStk_{k}$.
It is not clear how the two notions of $\widehat{\mathbb{G}}$-Hochschild
homology compare.
###### Conjecture 9.2.
Let $\widehat{\mathbb{G}}$ be a formal group and $A$ a simplicial commutative
$k$-algebra. Then there exists a natural equivalence
$\theta(\operatorname{HH}^{\widehat{\mathbb{G}}}(A))\to\operatorname{HH}_{E_{\infty}}^{\widehat{\mathbb{G}}}(\theta(A))$
In other words, the underlying $E_{\infty}$-algebra of the
$\widehat{\mathbb{G}}$-Hochschild homology coincides with the
$E_{\infty}$-$\widehat{\mathbb{G}}$-Hochschild homology of $A$, viewed as an
$E_{\infty}$-algebra.
At least when $\widehat{\mathbb{G}}=\widehat{\mathbb{G}_{m}}$, we know that
this is true. In particular, this also recovers Hochschild homology (relative
to the base ring $k$).
###### Proposition 9.3.
There is a natural equivalence
$\operatorname{HH}(A/k)\simeq\operatorname{HH}^{\widehat{\mathbb{G}_{m}}}_{E_{\infty}}(A)$
of $E_{\infty}$ algebra spectra over $k$.
###### Proof.
This is a modification of the argument of [MRT19]. We have the (underived)
stack $\mathsf{Fix}\simeq\widehat{\mathbb{G}_{m}}^{\vee}$ and in particular a
map
$S^{1}\to B\mathsf{Fix}\simeq B\widehat{\mathbb{G}_{m}}^{\vee}.$
By Kan extension, this can also be interpreted as a map of spectral stacks.
This further induces a map between the mapping stacks
$\operatorname{Map}_{sStk_{k}}(S^{1},X)\to\operatorname{Map}_{sStk_{k}}(B\widehat{\mathbb{G}_{m}}^{\vee},X)$
Recall that every (connective) $E_{\infty}$ $k$-algebra may be expressed as a
colimit of free algebras, and every free algebra may in turn be expressed
as a colimit of copies of the free algebra on one generator $k\\{t\\}$. This follows from
[Lur, Corollary 7.1.4.17], where it is shown that $\operatorname{Free}(k)$ is
a compact projective generator for $\operatorname{CAlg}_{k}$. Hence, it is
enough to test the above equivalence in the case where
$X=\mathbb{A}_{sm}^{1}$; this is the “smooth” affine line, i.e.
$\mathbb{A}_{sm}^{1}={\operatorname{Spec}}k\\{t\\}$, the spectrum of the free
$E_{\infty}$ $k$-algebra on one generator. For this we check that there is an
equivalence on functors of points
$B\mapsto\operatorname{Map}(B\widehat{\mathbb{G}_{m}}^{\vee}\times{\operatorname{Spec}}B,\mathbb{A}^{1})\simeq\operatorname{Map}(S^{1}\times{\operatorname{Spec}}B,\mathbb{A}^{1})$
for each $B\in\operatorname{CAlg}^{\operatorname{cn}}$. Each side may be
computed as $\Omega^{\infty}(\pi_{*}\mathcal{O})$, where $\pi:BG\times{\operatorname{Spec}}B\to{\operatorname{Spec}}k$ denotes the structural morphism (where
$G\in\\{\mathbb{Z},\widehat{\mathbb{G}_{m}}^{\vee}\\}$). The result now follows
from the following two facts:
* •
there is an equivalence of global sections
$C^{*}(B\mathsf{Fix},\mathcal{O})\simeq k^{S^{1}}$ [MRT19, Proposition 3.3.2].
* •
$B\mathsf{Fix}$ is of finite cohomological dimension, cf. [MRT19, Proposition
3.3.7],
as we now obtain an equivalence on $B$-points
$\Omega^{\infty}(\pi_{*}(B\widehat{\mathbb{G}_{m}}^{\vee}\times{\operatorname{Spec}}B))\simeq\Omega^{\infty}(\pi_{*}(B\widehat{\mathbb{G}_{m}}^{\vee})\otimes_{k}B)\simeq\Omega^{\infty}(\pi_{*}(S^{1})\otimes_{k}B)\simeq\Omega^{\infty}(\pi_{*}(S^{1}\times{\operatorname{Spec}}B)).$
Note that the second equivalence follows from the finite cohomological
dimension of $B\widehat{\mathbb{G}_{m}}^{\vee}$. Applying global sections
$R\Gamma(-,\mathcal{O})$ to this equivalence gives the desired equivalence of
$E_{\infty}$-algebra spectra. ∎
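As a sanity check on this proposition (a classical computation recorded here only for orientation, not part of the argument), take $A=k[x]$. The mapping stack $\operatorname{Map}(S^{1},\mathbb{A}^{1})$ is the derived loop space of the affine line, whose global sections compute the self-intersection of the diagonal, and one recovers the familiar answer
$\operatorname{HH}(k[x]/k)\simeq k[x]\otimes_{k[x]\otimes_{k}k[x]}k[x],\qquad\pi_{*}\operatorname{HH}(k[x]/k)\simeq k[x]\otimes_{k}\Lambda_{k}(dx),$
with the class $dx$ in degree $1$, in accordance with the HKR theorem.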
We show that $\widehat{\mathbb{G}}$-Hochschild homology possesses additional
structure which is already seen at the level of ordinary Hochschild homology.
Recall that for an $E_{\infty}$ ring $R$, its topological Hochschild homology
may be expressed as the tensor with the circle:
$\operatorname{THH}(R)\simeq S^{1}\otimes_{\mathbb{S}}R.$
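Unwinding this description (a standard reformulation, not specific to the present setting): writing $S^{1}$ as the pushout $\ast\sqcup_{S^{0}}\ast$ and using that tensoring with a space commutes with colimits, the circle tensor is identified with a relative tensor product, the cyclic bar construction:
$\operatorname{THH}(R)\simeq S^{1}\otimes_{\mathbb{S}}R\simeq R\otimes_{R\otimes_{\mathbb{S}}R}R.$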
Thus, upon applying the ${\operatorname{Spec}}(-)$ functor to pass to the
$\infty$-category of spectral schemes, this becomes a cotensor over $S^{1}$.
In fact this coincides with the internal mapping object
$\operatorname{Map}(S^{1},X)$, where $X={\operatorname{Spec}}R$. Furthermore,
one has the following base change property of topological Hochschild
homology: for a map $R\to S$ of $E_{\infty}$ rings, there is a natural
equivalence:
$\operatorname{THH}(A/R)\otimes_{R}S\simeq\operatorname{THH}(A\otimes_{R}S/S)$
In particular if $R$ is a commutative ring over $\mathbb{F}_{p}$ which admits
a lift $\widetilde{R}$ over the sphere spectrum, then one has an equivalence
$\operatorname{THH}(\tilde{R})\otimes_{\mathbb{S}}\mathbb{F}_{p}\simeq\operatorname{HH}(R/\mathbb{F}_{p})$
This can be interpreted geometrically as an equivalence of spectral schemes
$\operatorname{Map}(S^{1},{\operatorname{Spec}}(\tilde{R}))\times{\operatorname{Spec}}\mathbb{F}_{p}\simeq\operatorname{Map}(S^{1},{\operatorname{Spec}}(R))$
over ${\operatorname{Spec}}\mathbb{F}_{p}$. We show that such a geometric
lifting occurs in many instances in the setting of
$\widehat{\mathbb{G}}$-Hochschild homology.
###### Construction 9.4.
Let $\widehat{\mathbb{G}}$ be a height $n$ formal group over $\mathbb{F}_{p}$
and let $R$ be a commutative $\mathbb{F}_{p}$-algebra. Let
$\widehat{\mathbb{G}}_{un}$ denote the universal deformation of
$\widehat{\mathbb{G}}$, which is a formal group over the spectral deformation
ring $R_{\widehat{\mathbb{G}}}^{un}$. As in section 8.3, we let
$\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}}$ denote its Cartier dual over this
$E_{\infty}$-ring.
###### Theorem 9.5.
Let $\widehat{\mathbb{G}}$ be a height $n$ formal group over $\mathbb{F}_{p}$
and let $X$ be an $\mathbb{F}_{p}$ scheme. Suppose there exists a lift
$\tilde{X}$ over the spectral deformation ring
$R^{un}_{\widehat{\mathbb{G}}}$. Then there exists a homotopy pullback square
of spectral algebraic stacks
$\begin{array}{ccc}\operatorname{Map}(B\mathsf{Fix}_{\widehat{\mathbb{G}}},X)&\xrightarrow{\ \phi^{\prime}\ }&\operatorname{Map}(B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}},\tilde{X})\\ \downarrow{\scriptstyle p^{\prime}}&&\downarrow{\scriptstyle\phi}\\ {\operatorname{Spec}}(\mathbb{F}_{p})&\xrightarrow{\ p\ }&{\operatorname{Spec}}(R^{un}_{\widehat{\mathbb{G}}})\end{array}$
displaying
$\operatorname{Map}(B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}},\tilde{X})$ as a
lift of $\operatorname{Map}(B\mathsf{Fix}_{\widehat{\mathbb{G}}},X)$.
###### Proof.
Given a map $p:X\to Y$ of spectral schemes, there is an induced morphism of
$\infty$-topoi
$p^{*}:\operatorname{Shv}^{\acute{e}t}_{Y}\to\operatorname{Shv}^{\acute{e}t}_{X}$
This pullback functor is symmetric monoidal, and moreover behaves well with
respect to the internal mapping objects. Now let
$X={\operatorname{Spec}}\mathbb{F}_{p}$ and
$Y={\operatorname{Spec}}R_{\widehat{\mathbb{G}}}^{un}$, and let $p$ be the map
induced by the universal property of the spectral deformation ring
$R_{\widehat{\mathbb{G}}}^{un}$. In this particular case, this means there will be an equivalence
$p^{*}\operatorname{Map}(B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}},\tilde{X})\simeq\operatorname{Map}(p^{*}B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}},p^{*}\tilde{X})\simeq\operatorname{Map}(B\mathsf{Fix}_{\widehat{\mathbb{G}}},X)$
since $\tilde{X}\times_{{\operatorname{Spec}}R^{un}_{\widehat{\mathbb{G}}}}{\operatorname{Spec}}\mathbb{F}_{p}\simeq X$ and
$p^{*}B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}}\simeq
B\mathsf{Fix}_{\widehat{\mathbb{G}}}$. ∎
From this we conclude that the $\widehat{\mathbb{G}}$-Hochschild homology has
a lift in the geometric sense, in that there is a spectral mapping stack over
${\operatorname{Spec}}R^{un}_{\widehat{\mathbb{G}}}$ which base changes to
$\operatorname{Map}(B\widehat{\mathbb{G}}^{\vee},X)$. We would like to
conclude this at the level of global section $E_{\infty}$-algebras. This is
not formal unless we have a more precise understanding of the regularity
properties of
$\operatorname{Map}(B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}},X)$ for an
affine spectral scheme $X={\operatorname{Spec}}A$.
Indeed, there is a map
$R\Gamma(\operatorname{Map}(B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}},\tilde{X}),\mathcal{O})\otimes\mathbb{F}_{p}\to
R\Gamma(\operatorname{Map}(p^{*}B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}},p^{*}\tilde{X}),\mathcal{O})$
(9.6)
but it is not a priori clear that this is an equivalence. In particular, we
have the following diagram of stable $\infty$-categories
$\begin{array}{ccc}{\operatorname{Mod}}_{R^{un}_{\widehat{\mathbb{G}}}}&\xrightarrow{\ \phi^{*}\ }&\operatorname{QCoh}(\operatorname{Map}(B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}},\tilde{X}))\\ \downarrow{\scriptstyle p^{*}}&&\downarrow{\scriptstyle p^{\prime*}}\\ {\operatorname{Mod}}_{\mathbb{F}_{p}}&\xrightarrow{\ \phi^{\prime*}\ }&\operatorname{QCoh}(\operatorname{Map}(B\mathsf{Fix}_{\widehat{\mathbb{G}}},X))\end{array}$
for which we would like to verify the Beck-Chevalley condition holds; i.e.
that the following canonically defined map
$\rho:p^{*}\circ\phi_{*}\to\phi^{\prime}_{*}\circ p^{\prime*}$
is an equivalence. Here $\phi_{*}$ and $\phi^{\prime}_{*}$ are the right
adjoints and may be thought of as global section functors. This construction
applied to the structure sheaf $\mathcal{O}$ recovers the map (9.6).
This would follow from Proposition 2.1 upon knowing either that the spectral
stack
$\operatorname{Map}(B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}},\tilde{X})$ is
representable by a derived scheme or, more generally, that it is of finite
cohomological dimension. In fact it is the former:
###### Theorem 9.7.
Let $\widehat{\mathbb{G}}$ be as above and let $X={\operatorname{Spec}}A$
denote a spectral scheme. Then the mapping stack
$\operatorname{Map}(B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}},X)$ is
representable by a spectral scheme.
###### Proof.
This will be an application of the Artin-Lurie representability theorem, cf.
[Lur16, Theorem 18.1.0.1]. Given spectral stacks $X,Y$, the derived spectral
mapping stack $\operatorname{Map}(Y,X)$ is representable by a spectral scheme
if and only if it is nilcomplete, infinitesimally cohesive and admits a
cotangent complex and if the truncation $t_{0}(\operatorname{Map}(Y,X))$ is
representable by a classical scheme. By [HLP14, Proposition 5.10], if $Y$ is
of finite tor-amplitude and $X$ admits a cotangent complex, then so does the
mapping stack $\operatorname{Map}(Y,X)$; in our case $X$ is an honest spectral
scheme, which has a cotangent complex. Note that the condition of being of
finite tor-amplitude is local on the source with respect to the flat topology
(cf. [Lur16, Proposition 6.1.2.1]). Thus if there exists a flat cover $U\to Y$
such that the composition $U\to Y\to{\operatorname{Spec}}R$ is of finite
tor-amplitude, then $Y\to{\operatorname{Spec}}R$ itself has this property.
Infinitesimal cohesion follows from [TV08, Lemma 2.2.6.13]. The following
lemma takes care of nilcompleteness:
###### Lemma 9.8.
Let $Y$ be a spectral stack over ${\operatorname{Spec}}(R)$ which may be
written as a colimit of affine spectral schemes
$Y\simeq\operatorname{colim}{\operatorname{Spec}}A_{i}$
where each $A_{i}$ is flat over $R$ and let $X$ be a nilcomplete spectral
stack. Then $\operatorname{Map}_{Stk_{R}}(Y,X)$ is nilcomplete.
###### Proof.
The argument is similar to that of an analogous claim appearing in the proof
of [TV08, Theorem 2.2.6.11]. Let $Y$ be as above. Then
$\operatorname{Map}(Y,X)\simeq\lim_{i}\operatorname{Map}({\operatorname{Spec}}A_{i},X)$
and so it suffices to verify the claim when $Y={\operatorname{Spec}}A$ with
$A$ flat. In this case we see that for
$B\in\operatorname{CAlg}^{\operatorname{cn}}$,
$\operatorname{Map}({\operatorname{Spec}}A,X)(B)\simeq X(A\otimes_{R}B).$
The map
$\operatorname{Map}({\operatorname{Spec}}A,X)(B)\to\lim_{n}\operatorname{Map}({\operatorname{Spec}}A,X)(\tau_{\leq
n}B),$
which we need to check is an equivalence, now translates to a map
$X(A\otimes_{R}B)\to\lim_{n}X(\tau_{\leq n}B\otimes_{R}A)$ (9.9)
We now use the flatness assumption on $A$. Using the general formula (cf.
[Lur, Proposition 7.2.2.13]), which in this case reads
$\pi_{n}(A\otimes_{R}B)\simeq\operatorname{Tor}_{0}^{\pi_{0}(R)}(\pi_{0}A,\pi_{n}B),$
we conclude that $\tau_{\leq n}(A\otimes_{R}B)\simeq A\otimes_{R}\tau_{\leq n}B$.
Thus, (9.9) above becomes the map
$X(A\otimes_{R}B)\to\lim_{n}X(\tau_{\leq n}(B\otimes_{R}A)),$
which is an equivalence because $X$ was itself assumed to be nilcomplete. ∎
Finally we show that the truncation is an ordinary scheme. Note first of all
that the truncation functor
$t_{0}:SStk\to Stk$
preserves limits and colimits. It is induced from the Eilenberg-MacLane
functor
$H:\operatorname{CAlg}^{0}\to\operatorname{CAlg},\,\,\,A\mapsto HA$
which is itself adjoint to the truncation functor on $E_{\infty}$ rings. One
sees that the truncation functor $t_{0}=H^{*}:SStk\to Stk$ will have as a
right adjoint the functor
$\pi_{0}^{*}:Stk\to SStk,$
induced by the $\pi_{0}$ functor
$R\mapsto\pi_{0}R.$
Thus $t_{0}$ preserves colimits. Hence if $Y=BG$ for some
spectral group scheme $G$, then $t_{0}BG\simeq Bt_{0}G$. Now, one has the
identification
$t_{0}\operatorname{Map}(Y,X)\simeq\operatorname{Map}(t_{0}Y,t_{0}X),$
which in our situation becomes
$t_{0}\operatorname{Map}(B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}},X)\simeq\operatorname{Map}(BG,t_{0}X)$
for some (classical) affine group scheme $G$. Recall that the only classical
maps $f:BG\to t_{0}X$ from a classifying stack to a scheme
$t_{0}X$ are the constant ones. Hence we conclude that the truncation of this
spectral mapping stack is equivalent to the scheme $t_{0}X$, the truncation of
$X$. ∎
### 9.1 Topological Hochschild homology
As we saw, for a height $n$ formal group $\widehat{\mathbb{G}}$ over a finite
field $k$, there exists a lift $\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}}$ of
the Cartier dual of $\widehat{\mathbb{G}}$; this allows one to define a lift
of $\widehat{\mathbb{G}}$-Hochschild homology. We show that when the formal
group is $\widehat{\mathbb{G}_{m}}$ this lift is precisely topological
Hochschild homology, at least after $p$-completion, as one would expect. For
the remainder of this section we let
$\widehat{\mathbb{G}}=\widehat{\mathbb{G}_{m}}$, the formal multiplicative
group.
Let $X$ be a fixed spectral stack. We remark that there exists an adjunction
of $\infty$-topoi:
$\mathcal{S}\rightleftarrows SStk_{X}$
where one has on the right hand side the $\infty$-category of spectral stacks
over $X$.
First, one has the following proposition; here we think of $S^{1}$ as a
“constant stack” induced by the adjunction
$\pi^{*}:\mathcal{S}\rightleftarrows SStk_{R}:\pi_{*}.$
###### Proposition 9.10.
There exists a canonical map
$S^{1}\to B\mathsf{Fix}^{un}_{\widehat{\mathbb{G}}}$
of group objects in the $\infty$-category of spectral stacks over
$\mathbb{S}_{p}$.
###### Proof.
By [MRT19, Construction 3.3.1], there is a canonical map
$\mathbb{Z}\to\mathsf{Fix}$ (9.11)
in the category of fpqc abelian sheaves over
${\operatorname{Spec}}\mathbb{Z}_{(p)}$. We claim that the (discrete) group
scheme $\mathsf{Fix}$ is none other than the truncation of the spectral group
scheme
$\widehat{\mathbb{G}}_{un}^{\vee}\to{\operatorname{Spec}}\mathbb{S}_{p}.$
This follows from the fact that $\widehat{\mathbb{G}}_{un}^{\vee}$ is flat
over $\mathbb{S}_{p}$, as the corresponding Hopf algebra is flat. As a result,
the base change of this spectral group scheme along the map
${\operatorname{Spec}}\mathbb{Z}_{p}\to{\operatorname{Spec}}\mathbb{S}_{p}$
is itself flat over $\mathbb{Z}_{p}$ and in particular is $0$-truncated. By
definition, this is $\mathsf{Fix}$. Now, there is an adjunction
$i^{*}:SStk_{\mathbb{S}_{p}}\rightleftarrows Stk_{\mathbb{Z}_{p}}:t_{0}$
under which the map (9.11) lifts to a map
$\mathbb{Z}\to\widehat{\mathbb{G}}_{un}^{\vee}$
in $SStk_{\mathbb{S}_{p}}$. This will be a map of group objects, since the
adjoint pair preserves the group structure. Delooping this, we obtain the
desired map
$S^{1}\simeq B\mathbb{Z}\to B\widehat{\mathbb{G}}_{un}^{\vee}.$
∎
Let $X={\operatorname{Spec}}A$ be an affine spectral scheme. By taking mapping
spaces, the above proposition furnishes a map
$\operatorname{Map}(B\widehat{\mathbb{G}}_{un}^{\vee},X)\to\operatorname{Map}(S^{1},X);$
applying global sections further begets a map
$f:\operatorname{THH}(A)\to
R\Gamma(\operatorname{Map}(B\widehat{\mathbb{G}}_{un}^{\vee},X),\mathcal{O})$
of $E_{\infty}$ $\mathbb{S}_{p}$-algebras.
###### Theorem 9.12.
Let
$f_{p}:\operatorname{THH}(A;\mathbb{Z}_{p})\to
R\Gamma(\operatorname{Map}(B\widehat{\mathbb{G}}_{un}^{\vee},X),\mathcal{O})^{\widehat{}}_{p}$
denote the $p$-completion of the above map. Then $f_{p}$ is an equivalence.
###### Proof.
Since this is a map of $p$-complete spectra, it is enough to verify that it is
an equivalence upon tensoring with the Moore spectrum $\mathbb{S}_{p}/p$. In
fact, since these are both connective spectra, one can go further and test
this simply by tensoring with $\mathbb{F}_{p}$ (e.g. by [Mao20, Corollary
A.33]). Hence, we are reduced to showing that
$\operatorname{THH}(A;\mathbb{Z}_{p})\otimes_{\mathbb{S}_{p}}\mathbb{F}_{p}\to
R\Gamma(\operatorname{Map}(B\widehat{\mathbb{G}}_{un}^{\vee},X),\mathcal{O})^{\widehat{}}_{p}\otimes\mathbb{F}_{p}$
is an equivalence of $E_{\infty}$ $\mathbb{F}_{p}$-algebras. By generalities
on topological Hochschild homology, we have the following identification of
the left hand side:
$\operatorname{THH}(A;\mathbb{Z}_{p})\otimes_{\mathbb{S}_{p}}\mathbb{F}_{p}\simeq\operatorname{HH}(A\otimes_{\mathbb{S}_{p}}\mathbb{F}_{p}/\mathbb{F}_{p}).$
Now we can use Theorem 9.7 to identify the right hand side with the global
sections of the following mapping stack
$\operatorname{Map}(B\widehat{\mathbb{G}}_{un}^{\vee},X)\times{\operatorname{Spec}}\mathbb{F}_{p}\simeq\operatorname{Map}(B\widehat{\mathbb{G}}_{un}^{\vee}\times{\operatorname{Spec}}\mathbb{F}_{p},X\times{\operatorname{Spec}}\mathbb{F}_{p})$
By Proposition 9.3, this is precisely
$\operatorname{HH}(A\otimes_{\mathbb{S}_{p}}\mathbb{F}_{p}/\mathbb{F}_{p})$,
whence the equivalence. ∎
## 10 Filtrations in the spectral setting
In Section 6 an interpretation of the HKR filtration on Hochschild homology was
given in terms of a degeneration of $\widehat{\mathbb{G}_{m}}$ to
$\widehat{\mathbb{G}_{a}}$. Moreover, this was expressed as an example of the
deformation to the normal cone construction of Section 5.
In Section 9, we further saw that these $\widehat{\mathbb{G}}$-Hochschild
homology theories may be lifted beyond the integral setting. A natural
question then arises: do the filtrations come along for the ride as well?
Namely, does there exist a filtration on
$\operatorname{THH}^{\widehat{\mathbb{G}}}(-)$ which recovers, upon base-changing along $R^{un}_{\widehat{\mathbb{G}}}\to k$, the filtered object
corresponding to $\operatorname{HH}^{\widehat{\mathbb{G}}}(-)$?
We will not seek to answer this question here. However, we do give a reason
why a negative answer might be expected. As mentioned in the introduction,
many of the constructions do work integrally. For example, one can talk about
the deformation to the normal cone
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}})$ of an arbitrary
formal group over ${\operatorname{Spec}}\mathbb{Z}$. If we apply this to
$\widehat{\mathbb{G}_{m}}$ we obtain a degeneration of the formal
multiplicative group to the formal additive group. We let
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}_{m}})^{\vee}$ denote
the Cartier dual, as in section 4. In [Toë19] the Cartier dual to
$\widehat{\mathbb{G}_{m}}$ is identified with
${\operatorname{Spec}}(Int(\mathbb{Z}))$, the spectrum of the ring of integer-valued polynomials on $\mathbb{Z}$. Moreover it is shown there that
$B{\operatorname{Spec}}(Int(\mathbb{Z}))$ is the affinization of $S^{1}$;
hence one can recover (integral) Hochschild homology by mapping out of it.
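For concreteness, we recall the classical description of this ring (a standard fact about integer-valued polynomials, recorded here only as a reminder): as a $\mathbb{Z}$-module it is free on the binomial coefficient polynomials,
$Int(\mathbb{Z})=\\{f\in\mathbb{Q}[x]\mid f(\mathbb{Z})\subseteq\mathbb{Z}\\}=\bigoplus_{n\geq 0}\mathbb{Z}\cdot\binom{x}{n},\qquad\binom{x}{n}=\frac{x(x-1)\cdots(x-n+1)}{n!}.$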
Let us suppose there exists a lift of $Def(\widehat{\mathbb{G}_{m}})^{\vee}$
to the sphere spectrum, which we shall denote by
$Def^{\mathbb{S}}(\widehat{\mathbb{G}_{m}})^{\vee}$. This would allow us to
define a mapping stack in the $\infty$-category
$sStk_{\mathbb{A}^{1}/\mathbb{G}_{m}}$ of spectral stacks over the spectral
variant of $\mathbb{A}^{1}/\mathbb{G}_{m}$. By the results of [Mou19], this
comes equipped with a filtration on its cohomology, which we would like to
think of as recovering topological Hochschild homology.
However, over the special fiber
$B\mathbb{G}_{m}\to\mathbb{A}^{1}/\mathbb{G}_{m}$, we would expect that such a
lift $Def^{\mathbb{S}}(\widehat{\mathbb{G}_{m}})^{\vee}$ recovers the formal
additive group $\widehat{\mathbb{G}_{a}}$. More precisely, we would get a
formal group over the sphere spectrum
$\widehat{\mathbb{G}}\to{\operatorname{Spec}}\mathbb{S}$ which pulls back to
the formal additive group $\widehat{\mathbb{G}_{a}}$ along the map
$\mathbb{S}\to\mathbb{Z}$. However, by [Lur18, Proposition 1.6.20], this
cannot happen. Indeed, there it is shown that $\widehat{\mathbb{G}_{a}}$ does
not belong to the essential image of
$\operatorname{FGroup}(\mathbb{S})\to\operatorname{FGroup}(\mathbb{Z})$.
We summarize this discussion into the following proposition.
###### Proposition 10.1.
There exists no lift of
$Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}}_{m})$ to the
sphere spectrum. In particular, there exists no formal group
$\widetilde{\widehat{\mathbb{G}}}$ over $\mathbb{A}^{1}/\mathbb{G}_{m}$,
relative to $\mathbb{S}$ such that
$\widetilde{\widehat{\mathbb{G}}}\times{\operatorname{Spec}}\mathbb{Z}\simeq
Def_{\mathbb{A}^{1}/\mathbb{G}_{m}}(\widehat{\mathbb{G}_{m}})$.
## References
* [BM19] Lukas Brantner and Akhil Mathew, _Deformation theory and partition lie algebras_ , arXiv preprint arXiv:1904.07352 (2019).
* [Car62] Pierre Cartier, _Groupes algébriques et groupes formels_ , Colloq. Théorie des Groupes Algébriques (Bruxelles, 1962). Librairie Universitaire, Louvain, 1962, pp. 87–111.
* [Car08] Gunnar Carlsson, _Derived completions in stable homotopy theory_ , Journal of Pure and Applied Algebra 212 (2008), no. 3, 550–577.
* [Dri20] Vladimir Drinfeld, _Prismatization_ , arXiv preprint arXiv:2005.04746 (2020).
* [GR17] Dennis Gaitsgory and Nick Rozenblyum, _A study in derived algebraic geometry. Volume II: Deformations, Lie theory and formal geometry_ , American Mathematical Soc. (2017).
* [Haz78] Michiel Hazewinkel, _Formal groups and applications_ , vol. 78, Elsevier, 1978.
* [HLP14] Daniel Halpern-Leistner and Anatoly Preygel, _Mapping stacks and categorical notions of properness_ , arXiv preprint arXiv:1402.3204 (2014).
* [KR18] Adeel A Khan and David Rydh, _Virtual cartier divisors and blow-ups_ , arXiv preprint arXiv:1802.05702 (2018).
* [Lur] Jacob Lurie, _Higher algebra, September 2017_ , available at the author's webpage https://www.math.ias.edu/~lurie.
* [Lur09] , _Higher topos theory_ , Princeton University Press, 2009.
* [Lur15] , _Rotation invariance in algebraic K-theory_ , preprint (2015).
* [Lur16] , _Spectral algebraic geometry_ , Preprint, available at www.math.harvard.edu/~lurie/papers/SAG-rootfile.pdf (2016).
* [Lur18] , _Elliptic cohomology II: Orientations_ , preprint available from the author’s website (2018).
* [Mao20] Zhouhang Mao, _Perfectoid rings as Thom spectra_ , arXiv preprint arXiv:2003.08697 (2020).
* [Mou19] Tasos Moulinos, _The geometry of filtrations_ , arXiv preprint arXiv:1907.13562 (2019).
* [MRT19] Tasos Moulinos, Marco Robalo, and Bertrand Toën, _A universal HKR theorem_ , arXiv preprint arXiv:1906.00118 (2019).
* [Rak20] Arpon Raksit, _Hochschild homology and the derived de Rham complex revisited_ , arXiv preprint arXiv:2007.02576 (2020).
* [SS01] Tsutomu Sekiguchi and Noriyuki Suwa, _A note on extensions of algebraic and formal groups, IV Kummer-Artin-Schreier-Witt theory of degree $p^{2}$_, Tohoku Mathematical Journal, Second Series 53 (2001), no. 2, 203–240.
* [Str99] Neil P Strickland, _Formal schemes and formal groups_ , Contemporary Mathematics 239 (1999), 263–352.
* [Toë06] Bertrand Toën, _Champs affines_ , Selecta mathematica 12 (2006), no. 1, 39–134.
* [Toë14] , _Derived algebraic geometry_ , arXiv preprint arXiv:1401.1044 (2014).
* [Toë19] , _Le problème de la schématisation de Grothendieck revisité_ , arXiv preprint arXiv:1911.05509 (2019).
* [Toë20] , _Classes caractéristiques des schémas feuilletés_ , arXiv preprint arXiv:2008.10489 (2020).
* [TV08] Bertrand Toën and Gabriele Vezzosi, _Homotopical algebraic geometry II: Geometric stacks and applications_ , vol. 2, American Mathematical Soc., 2008.
* [TV11] , _Algèbres simpliciales $S^{1}$-équivariantes, théorie de de Rham et théorèmes HKR multiplicatifs_, Compositio Mathematica 147 (2011), no. 6, 1979–2000.
# Strong $B_{QQ^{\prime}}^{*}B_{QQ^{\prime}}V$ vertices and the radiative
decays of $B_{QQ}^{*}\to B_{QQ}\gamma$ in the light-cone sum rules
T. M. Aliev<EMAIL_ADDRESS>Physics Department, Middle East Technical
University, Ankara 06800, Turkey T. Barakat<EMAIL_ADDRESS>Physics &
Astronomy Department, King Saud University, Riyadh 11451, Saudi Arabia K.
Şimşek<EMAIL_ADDRESS>Department of Physics & Astronomy,
Northwestern University, Evanston, Illinois 60208, USA
(August 27, 2024)
###### Abstract
The strong coupling constants of spin-3/2 to spin-1/2 doubly heavy baryon
transitions with light vector mesons are estimated within the light-cone QCD
sum rules method. Moreover, using the vector-meson dominance ansatz, the
widths of radiative decays $B_{QQ}^{*}\to B_{QQ}\gamma$ are calculated. The
results for the said decay widths are compared to the predictions of other
approaches.
## I Introduction
The quark model is a vital tool for the classification of hadronic states. It
predicts the existence of numerous doubly heavy baryons. Among various doubly
heavy baryon states, only two, namely $\Xi_{cc}^{++}$ and $\Xi_{cc}^{+}$, have
been observed. The first observation of $\Xi_{cc}^{+}$ was announced by the
SELEX Collaboration in the channels
$\Xi_{cc}^{+}\to\Lambda_{c}^{+}K^{-}\pi^{+}$ and $pD^{+}K^{-}$ with a mass
$3518.7\pm 1.7{\rm\ MeV}$ Mattson _et al._ (2002). In 2017, the LHCb
Collaboration announced the observation of the doubly heavy baryon
$\Xi_{cc}^{++}$ in the mass spectrum $\Lambda_{c}^{+}K^{-}\pi^{+}\pi^{+}$ Aaij
_et al._ (2017), which was also confirmed by measuring another decay channel,
$\Xi_{cc}^{++}\to\Xi_{c}^{+}\pi^{+}$ Aaij _et al._ (2018), with an average
mass of $3621.24\pm 0.65\pm 0.31{\rm\ MeV}$. The observation of
doubly heavy baryon states stimulated new experimental studies in this
direction Aaij _et al._ (2019, 2020).
Theoretical studies on this subject include the study of weak,
electromagnetic, and strong decays of doubly heavy baryons. Their weak and
strong decays have been comprehensively analyzed within the framework of the
light-front QCD, QCD sum rules, and the light-cone sum rules (LCSR) method
Wang _et al._ (2017a); Xiao _et al._ (2017); Wang _et al._ (2017b); Cheng
and Shi (2018); Shi _et al._ (2018); Zhao (2018); Shi _et al._ (2020); Shi
and Zhao (2019); Hu and Shi (2020). Their electromagnetic properties and
radiative decays have been discussed in Li _et al._ (2017); Meng _et al._
(2017); Li _et al._ (2018); Bahtiyar _et al._ (2018). The strong couplings
of doubly heavy baryons with light mesons within the light-cone sum rules have
been studied in Rostami _et al._ (2020); Aliev and Şimşek (2020a, b); Alrebdi
_et al._ (2020); Azizi _et al._ (2020). These coupling constants are the main
parameters for understanding the dynamics of strong decays. The coupling
constants of spin-3/2 to spin-1/2 doubly heavy baryons with $\rho^{+}$ and
$K^{*}$ have been studied in Aliev and Şimşek (2020a) within the framework of
the LCSR method.
The aim of this work is two-fold. First, we extend our previous work Aliev and
Şimşek (2020a) to study the vertices
$\Xi_{QQ^{\prime}q}^{*}\Xi_{QQ^{\prime}q}\omega$ and
$\Omega_{QQ^{\prime}s}^{*}\Omega_{QQ^{\prime}s}\phi$, where
$\Xi_{QQ^{\prime}q}^{*}$ ($\Omega_{QQ^{\prime}s}^{*}$) and
$\Xi_{QQ^{\prime}q}$ ($\Omega_{QQ^{\prime}s}$) denote the spin-3/2 and
spin-1/2 doubly heavy baryons, respectively, within the LCSR method and
second, using the results for these vertices and assuming the vector-meson
dominance (VMD), we estimate the radiative decay widths of
$\Xi_{QQq}^{*}\to\Xi_{QQq}\gamma$ and $\Omega_{QQs}^{*}\to\Omega_{QQs}\gamma$.
In all the following discussion, we will denote the spin-3/2 (1/2) doubly
heavy baryons by $B_{QQ^{\prime}}^{*}$ ($B_{QQ^{\prime}}$) customarily.
The paper is organized as follows. In Sec. II, first, we derive the LCSR for
the coupling constants of the light vector mesons $\omega$ and $\phi$ for the
$\Xi_{QQ^{\prime}q}^{*}\Xi_{QQ^{\prime}q}\omega$ and
$\Omega_{QQ^{\prime}s}^{*}\Omega_{QQ^{\prime}s}\phi$ vertices; second, we
present the results for the radiative decays $\Xi_{QQq}^{*}\to\Xi_{QQq}\gamma$
and $\Omega_{QQs}^{*}\to\Omega_{QQs}\gamma$ by assuming the VMD. Sec. III
contains the numerical analysis of the obtained sum rules for the strong
coupling constants and radiative decays. A summary and conclusion are
presented in Sec. IV.
## II The $B_{QQ^{\prime}q}^{*}B_{QQ^{\prime}q}V$ vertices in the light-cone
sum rules
By Lorentz invariance, the vertices
$B_{QQ^{\prime}}^{*}B_{QQ^{\prime}}V$, where $V=\rho^{0}$, $\omega$, or $\phi$
and $B^{(*)}=\Xi^{(*)}$ or $\Omega^{(*)}$, are parametrized in terms of three
coupling constants, $g_{1}$, $g_{2}$, and $g_{3}$, as follows Jones and
Scadron (1973):
$\displaystyle\langle
V(q)B_{QQ^{\prime}}^{*}(p_{2})|B_{QQ^{\prime}}(p_{1})\rangle$
$\displaystyle=\bar{u}_{\alpha}(p_{2})[g_{1}(\varepsilon^{*\alpha}\not{q}-q^{\alpha}\not{\varepsilon}^{*})\gamma_{5}+g_{2}(P\cdot
q\varepsilon^{*\alpha}-P\cdot\varepsilon^{*}q^{\alpha})\gamma_{5}$
$\displaystyle+g_{3}(q\cdot\varepsilon^{*}q^{\alpha}-q^{2}\varepsilon^{*\alpha})\gamma_{5}]u(p_{1})$
(1)
where $u_{\alpha}(p_{2})$ is the Rarita-Schwinger spinor for a spin-3/2
baryon, $\varepsilon_{\alpha}$ is the 4-polarization vector of the light
vector meson $V$, $P=(p_{1}+p_{2})/2$, and $q=p_{1}-p_{2}$. In the rest of the
text, we denote $p_{2}=p$ and $p_{1}=p+q$.
For the determination of the said three coupling constants, $g_{1}$, $g_{2}$,
and $g_{3}$, within the LCSR, we introduce the following correlation function:
$\displaystyle\Pi_{\mu}(p,q)=i\int d^{4}x\ e^{ipx}\langle
V(q)|\mathrm{T}\\{\eta_{\mu}(x)\bar{\eta}(0)\\}|0\rangle$ (2)
where $V(q)$ is a light vector meson ($\rho^{0}$, $\omega$, or $\phi$) with
4-momentum $q_{\mu}$, and $\eta_{\mu}$ and $\eta$ are the interpolating
currents for the spin-3/2 and spin-1/2 baryons, respectively. The most
general forms of the interpolating currents for the spin-3/2 and spin-1/2
doubly heavy baryons are
$\displaystyle\eta_{\mu}$ $\displaystyle=N\epsilon^{abc}\\{({q^{a}}^{\rm
T}C\gamma_{\mu}Q^{b})Q^{\prime c}+({q^{a}}^{\rm T}C\gamma_{\mu}Q^{\prime
b})Q^{c}+({Q^{a}}^{\rm T}C\gamma_{\mu}Q^{\prime b})q^{c}\\}$ (3)
$\displaystyle\eta^{(S)}$
$\displaystyle=\frac{1}{\sqrt{2}}\epsilon^{abc}\sum_{i=1}^{2}[({Q^{a}}^{\rm
T}A_{1}^{i}q^{b})A_{2}^{i}Q^{\prime c}+(Q\leftrightarrow Q^{\prime})]$ (4)
$\displaystyle\eta^{(A)}$
$\displaystyle=\frac{1}{\sqrt{6}}\epsilon^{abc}\sum_{i=1}^{2}[2({Q^{a}}^{\rm
T}A_{1}^{i}Q^{\prime b})A_{2}^{i}q^{c}+({Q^{a}}^{\rm
T}A_{1}^{i}q^{b})A_{2}^{i}Q^{\prime c}-({Q^{\prime a}}^{\rm
T}A_{1}^{i}q^{b})A_{2}^{i}Q^{c}]$ (5)
where $\mathrm{T}$ denotes the transpose, $N=\sqrt{1/3}$ ($\sqrt{2/3}$) for
identical (distinct) heavy quarks, $A_{1}^{1}=C$, $A_{2}^{1}=\gamma_{5}$,
$A_{1}^{2}=C\gamma_{5}$, and $A_{2}^{2}=\beta I$. The superscripts $S$ and $A$
denote currents that are symmetric and antisymmetric, respectively, under the
interchange of the heavy quarks, and $\beta$ is an arbitrary parameter;
$\beta=-1$ corresponds to the Ioffe current.
The LCSR for the coupling constants, $g_{1}$, $g_{2}$, and $g_{3}$, is
obtained by calculating the correlation function in two different regions:
First, in terms of hadrons, and second, in the deep Euclidean domain by using
operator product expansion (OPE). In terms of hadrons, the correlation
function is obtained by inserting a complete set of intermediate hadronic
states carrying the same quantum numbers as the interpolating currents
$\eta_{\mu}$ and $\eta$ and using the quark-hadron duality. After isolating
the ground state contribution, we get
$\displaystyle\Pi_{\mu}(p,q)$
$\displaystyle=\frac{\lambda_{1}\lambda_{2}}{(m_{1}^{2}-p^{2})[m_{2}^{2}-(p+q)^{2}]}[-g_{1}(m_{1}+m_{2})\not{\varepsilon}^{*}\not{p}\gamma_{5}q_{\mu}+g_{2}\not{q}\not{p}\gamma_{5}p\cdot\varepsilon^{*}q_{\mu}+g_{3}q^{2}\not{q}\not{p}\gamma_{5}\varepsilon_{\mu}^{*}$
$\displaystyle+{\rm other\ structures}]$ (6)
Here, $\varepsilon^{\mu}$ is the 4-polarization vector of the light vector
meson. In the derivation of Eq. (6), the following definitions have been used:
$\displaystyle\langle 0|\eta|B_{QQ^{\prime}}(p)\rangle=\lambda_{1}u(p,s)$ (7)
$\displaystyle\langle
0|\eta_{\mu}|B_{QQ^{\prime}}^{*}(p)\rangle=\lambda_{2}u_{\mu}(p,s)$ (8)
where $\lambda_{1}$ and $\lambda_{2}$ are the residues of the spin-1/2 and
spin-3/2 states defined in Eqs. (7) and (8), and $m_{1}$ and $m_{2}$ are the
masses of the spin-3/2 and spin-1/2 states, respectively. The summation over
the spins of the spin-1/2 and spin-3/2 baryons is performed by using the
corresponding spin sums:
$\displaystyle\sum_{s}u(p,s)\bar{u}(p,s)=\not{p}+m$ (9)
$\displaystyle\sum_{s}u_{\mu}(p,s)\bar{u}_{\nu}(p,s)=-(\not{p}+m)\Big{[}g_{\mu\nu}-\frac{1}{3}\gamma_{\mu}\gamma_{\nu}-\frac{2}{3}\frac{p_{\mu}p_{\nu}}{m^{2}}+\frac{1}{3}\frac{p_{\mu}\gamma_{\nu}-p_{\nu}\gamma_{\mu}}{m}\Big{]}$
(10)
At this point, we would like to make the following remarks:
1. (a)
The current $\eta_{\mu}$ couples also to spin-1/2 baryons, $B(p)$, with the
corresponding matrix element
$\displaystyle\langle
0|\eta_{\mu}|B^{-}(p)\rangle=A\left(\gamma_{\mu}-\frac{4}{m}p_{\mu}\right)u(p,s)$
(11)
Hence, the structures containing $\gamma_{\mu}$ or $p_{\mu}$ include
contributions from the spin-$1/2$ states. From Eq. (10), it follows that only
the structure proportional to $g_{\mu\nu}$ is free of spin-$1/2$ contributions.
2. (b)
Not all Lorentz structures are independent. This problem can be resolved by
fixing a specific ordering of the Dirac matrices. In the present work, we
choose the ordering $\gamma_{\mu}\not{\varepsilon}\not{q}\not{p}\gamma_{5}$.
We choose the Lorentz structures $\not{\varepsilon}\not{p}\gamma_{5}q_{\mu}$,
$\not{q}\not{p}\gamma_{5}\not{\varepsilon}q_{\mu}$, and
$\not{q}\not{p}\gamma_{5}\varepsilon_{\mu}$ for the determination of the
coupling constants $g_{1}$, $g_{2}$, and $g_{3}$; these structures are free of
spin-$1/2$ contamination and also yield better stability in the numerical analysis.
The correlation function in the deep Euclidean domain, $p^{2}\ll 0$ and
$(p+q)^{2}\ll 0$, can be calculated by using the OPE near the light cone.
Ample details of the calculations are presented in Aliev and Şimşek (2020a);
for this reason, we do not repeat them here.
In the final step, performing a double Borel transformation over the variables
$-p^{2}$ and $-(p+q)^{2}$, choosing the coefficients of the same Lorentz
structures in both representations and matching them, and using the quark-hadron
duality ansatz, we obtain the desired sum rules for the strong coupling
constants:
$\displaystyle
g_{1}=-\frac{1}{\lambda_{1}\lambda_{2}(m_{1}+m_{2})}e^{m_{1}^{2}/M_{1}^{2}+m_{2}^{2}/M_{2}^{2}}\Pi_{1}^{(S)}$
(12) $\displaystyle
g_{2}=\frac{1}{\lambda_{1}\lambda_{2}}e^{m_{1}^{2}/M_{1}^{2}+m_{2}^{2}/M_{2}^{2}}\Pi_{2}^{(S)}$
(13) $\displaystyle
g_{3}=\frac{1}{\lambda_{1}\lambda_{2}}e^{m_{1}^{2}/M_{1}^{2}+m_{2}^{2}/M_{2}^{2}}\Pi_{3}^{(S)}$
(14)
All terms vanish for the antisymmetric current; the explicit expressions of
$\Pi_{i}^{(S)}$ for the symmetric case can be found in Aliev and Şimşek
(2020a).
At the end of this section, we derive the corresponding coupling constants for
the $B_{QQ}^{*}B_{QQ}\gamma$ vertices by using the VMD ansatz. The VMD implies
that the $B_{QQ}^{*}B_{QQ}\gamma$ vertex can be obtained from the
$B_{QQ}^{*}B_{QQ}V$ vertex by converting the corresponding vector meson to a photon.
From the gauge invariance, the $B_{QQ}^{*}B_{QQ}\gamma$ vertex is parametrized
similarly to the $B_{QQ}^{*}B_{QQ}V$ vertex as follows:
$\displaystyle\langle\gamma(q)B_{QQ}^{*}(p_{2})|B_{QQ}(p_{1})\rangle$
$\displaystyle=\bar{u}^{\alpha}(p_{2})[g_{1}^{\gamma}(\varepsilon_{\alpha}^{*\gamma}\not{q}-q_{\alpha}\not{\varepsilon}^{*\gamma})\gamma_{5}+g_{2}^{\gamma}(P\cdot
q\varepsilon_{\alpha}^{*\gamma}-P\cdot\varepsilon^{*\gamma}q_{\alpha})\gamma_{5}$
$\displaystyle+g_{3}^{\gamma}(q\cdot\varepsilon^{*\gamma}q^{\alpha}-q^{2}\varepsilon_{\alpha}^{*\gamma})\gamma_{5}]u(p_{1})$
(15)
The last term vanishes for real photons, since then $q^{2}=0$ and
$q\cdot\varepsilon^{*\gamma}=0$. To obtain the $B_{QQ}^{*}B_{QQ}\gamma$ vertex
from the $B_{QQ}^{*}B_{QQ}V$ vertex, it is necessary to make the replacement
$\displaystyle\varepsilon_{\mu}\to
e\sum_{V=\rho^{0},\omega,\phi}e_{q}\frac{f_{V}}{m_{V}}\varepsilon_{\mu}^{\gamma}$
(16)
and to go from $q^{2}=m_{V}^{2}$ to $q^{2}=0$. Let us verify this statement.
The radiative decays $B_{QQ}^{*}\to B_{QQ}\gamma$ can be described by the
following Lagrangian:
$\displaystyle\mathscr{L}=iee_{Q}\bar{Q}\gamma_{\mu}QA^{\mu}+iee_{q}\bar{q}\gamma_{\mu}qA^{\mu}$
(17)
From this Lagrangian, one can obtain the decay amplitudes with the
incorporation of the VMD, i.e.
$\displaystyle\langle\gamma(q)B_{QQq}^{*}(p)|\mathscr{L}|B_{QQq}(p+q)\rangle$
$\displaystyle=iee_{q}\varepsilon^{*\gamma\mu}\langle
B_{QQq}^{*}(p)|\bar{q}\gamma_{\mu}q|B_{QQq}(p+q)\rangle$
$\displaystyle=ee_{s}\varepsilon^{*\gamma\mu}\frac{\varepsilon_{\mu}}{q^{2}-m_{\phi}^{2}}\langle\phi(q)B_{QQs}^{*}(p)|B_{QQs}(p+q)\rangle$
$\displaystyle+ee_{q}\varepsilon^{*\gamma\mu}\frac{\varepsilon_{\mu}}{q^{2}-m_{\rho}^{2}}\langle\rho(q)B_{QQq}^{*}(p)|B_{QQq}(p+q)\rangle$
$\displaystyle+ee_{q}\varepsilon^{*\gamma\mu}\frac{\varepsilon_{\mu}}{q^{2}-m_{\omega}^{2}}\langle\omega(q)B_{QQq}^{*}(p)|B_{QQq}(p+q)\rangle$
(18)
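The step from Eq. (18) to Eq. (19) uses the standard definition of the light vector-meson decay constant, which is worth spelling out explicitly (this intermediate step is not written in the text; the overall sign is a convention choice absorbed into Eq. (16)):

```latex
\langle 0|\bar{q}\gamma_{\mu}q|V(q)\rangle = f_{V}m_{V}\,\varepsilon_{\mu},
\qquad
\left.\frac{f_{V}m_{V}}{q^{2}-m_{V}^{2}}\right|_{q^{2}=0} = -\frac{f_{V}}{m_{V}}.
```

Inserting vector-meson states into the quark current of Eq. (18) brings in the matrix element $\langle 0|\bar{q}\gamma_{\mu}q|V\rangle$; with this definition, the propagator factor at $q^{2}=0$ reduces to $f_{V}/m_{V}$ (up to the overall sign), which is exactly the structure appearing in Eq. (19).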
At the point $q^{2}=0$ (the real-photon case), this expression simplifies to
$\displaystyle\langle\gamma(q)B_{QQq}^{*}(p)|\mathscr{L}|B_{QQq}(p+q)\rangle$
$\displaystyle=ee_{s}\varepsilon^{*\gamma}\cdot\varepsilon\frac{f_{\phi}}{m_{\phi}}\langle\phi(q)B_{QQs}^{*}(p)|B_{QQs}(p+q)\rangle$
$\displaystyle+ee_{q}\varepsilon^{*\gamma}\cdot\varepsilon\frac{f_{\rho}}{m_{\rho}}\langle\rho(q)B_{QQq}^{*}(p)|B_{QQq}(p+q)\rangle$
$\displaystyle+ee_{q}\varepsilon^{*\gamma}\cdot\varepsilon\frac{f_{\omega}}{m_{\omega}}\langle\omega(q)B_{QQq}^{*}(p)|B_{QQq}(p+q)\rangle$
(19)
From Eqs. (1) and (16), for the $B^{*}_{QQq}B_{QQq}\gamma$ vertex, we get
$\displaystyle\langle\gamma(q)B^{*}_{QQq}(p)|\mathscr{L}|B_{QQq}(p+q)\rangle$
$\displaystyle=\sum_{V=\rho,\omega,\phi}ee_{q}\frac{f_{V}}{m_{V}}\bar{u}_{\alpha}(p)[g_{1}(-q_{\mu}\not{\varepsilon}^{*\gamma}+\varepsilon_{\mu}^{*\gamma}\not{q})$
$\displaystyle-g_{2}(P\cdot\varepsilon^{*\gamma}q_{\mu}-P\cdot
q\varepsilon_{\mu}^{*\gamma})]\gamma_{5}u(p+q)$ (20)
Comparing Eqs. (1) and (20), we obtain the relations between the
$B^{*}_{QQq}B_{QQq}V$ and $B_{QQq}^{*}B_{QQq}\gamma$ couplings:
$\displaystyle
g_{i}^{\gamma}=\begin{cases}e_{s}\frac{f_{\phi}}{m_{\phi}}g_{i}^{\phi}&\mbox{for
}\Omega_{QQs}^{*}\to\Omega_{QQs}\gamma\\\
e_{u}\left(\frac{f_{\rho}}{m_{\rho}}g_{i}^{\rho}+\frac{f_{\omega}}{m_{\omega}}g_{i}^{\omega}\right)&\mbox{for
}\Xi^{*}_{QQu}\to\Xi_{QQu}\gamma\\\
e_{d}\left(-\frac{f_{\rho}}{m_{\rho}}g_{i}^{\rho}+\frac{f_{\omega}}{m_{\omega}}g_{i}^{\omega}\right)&\mbox{for
}\Xi^{*}_{QQd}\to\Xi_{QQd}\gamma\end{cases}$ (21)
for $i=1,2$. Here, we would like to make two remarks. First, we assume that
the couplings do not change considerably when going from $q^{2}=m_{V}^{2}$ to
$q^{2}=0$. Second, in principle, heavy vector-meson resonances could also
contribute; these contributions are neglected since, in the heavy-quark limit,
they are suppressed as $m_{\rm heavy\ meson}^{-3/2}$.
In the numerical calculations for $f_{\rho}$, $f_{\omega}$, and $f_{\phi}$, we
have used the prediction of the sum rules $f_{\rho}=205{\rm\ MeV}$,
$f_{\omega}=185{\rm\ MeV}$, and $f_{\phi}=215{\rm\ MeV}$ Brown _et al._
(2014).
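As an illustrative numerical cross-check (not part of the original derivation), Eq. (21) can be evaluated with the decay constants quoted above and the central values of the strong couplings from Table 4. The $\rho$ mass (775 MeV) is not listed in the text and the relative signs of the couplings are an assumption, since only moduli are tabulated.

```python
# Illustrative evaluation of Eq. (21): photon couplings from the VMD relation.
# Quark charges in units of e; f_V and m_V in GeV (f_rho, f_omega, f_phi from
# the text; m_rho = 0.775 GeV is an assumed standard value). g_1^V central
# values are taken from Table 4; relative signs are assumed positive.
e_u, e_d, e_s = 2/3, -1/3, -1/3

f_over_m = {"rho": 0.205 / 0.775, "omega": 0.185 / 0.783, "phi": 0.215 / 1.019}

g1 = {("Xi_cc", "rho"): 1.13, ("Xi_cc", "omega"): 1.02,
      ("Omega_cc", "phi"): 1.50}

# Omega_QQs* -> Omega_QQs gamma: only the phi contributes (first case of Eq. (21)).
g1_gamma_omega_cc = e_s * f_over_m["phi"] * g1[("Omega_cc", "phi")]

# Xi_QQu* -> Xi_QQu gamma: rho and omega add coherently (second case of Eq. (21)).
g1_gamma_xi_cc_u = e_u * (f_over_m["rho"] * g1[("Xi_cc", "rho")]
                          + f_over_m["omega"] * g1[("Xi_cc", "omega")])

print(f"g1_gamma(Omega_cc) = {g1_gamma_omega_cc:+.3f}")
print(f"g1_gamma(Xi_cc^u)  = {g1_gamma_xi_cc_u:+.3f}")
```

These photon couplings feed directly into the formfactor relations given below.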
In this work, instead of the formfactors $g_{1}^{\gamma}$ and
$g_{2}^{\gamma}$, we will use the magnetic dipole and electric quadrupole
formfactors, $G_{M}$ and $G_{E}$, respectively, which are more convenient from
an experimental point of view. The relations among these formfactors at the
point $q^{2}=0$ are
$\displaystyle
G_{M}=(3m_{1}+m_{2})\frac{m_{2}}{3m_{1}}g_{1}^{\gamma}+(m_{1}-m_{2})m_{2}\frac{g_{2}^{\gamma}}{3}$
(22) $\displaystyle
G_{E}=(m_{1}-m_{2})\frac{m_{2}}{3m_{1}}(g_{1}^{\gamma}+m_{1}g_{2}^{\gamma})$
(23)
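Equations (22)–(23) can be checked against Table 5 with a few lines of arithmetic. The sketch below is only an illustration: it uses the $\Omega_{cc}^{(*)}$ masses from Table 1, the central values of $g_{1}^{\phi}$, $g_{2}^{\phi}$ from Table 4, and the quoted $f_{\phi}=215$ MeV; the relative sign of $g_{1}$ and $g_{2}$ is an assumption, since only moduli are tabulated.

```python
# Cross-check of Eqs. (22)-(23) for Omega_cc* -> Omega_cc gamma.
m1, m2 = 3.822, 3.738          # Omega_cc* and Omega_cc masses (Table 1, GeV)
f_phi, m_phi = 0.215, 1.019    # phi decay constant and mass (GeV)
e_s = -1/3                     # strange-quark charge in units of e
g1_phi, g2_phi = 1.50, 0.50    # Table 4 central values (signs assumed equal)

# Photon couplings via the first case of Eq. (21):
g1_g = e_s * (f_phi / m_phi) * g1_phi
g2_g = e_s * (f_phi / m_phi) * g2_phi

# Eqs. (22) and (23):
G_M = (3*m1 + m2) * m2/(3*m1) * g1_g + (m1 - m2) * m2 * g2_g / 3
G_E = (m1 - m2) * m2/(3*m1) * (g1_g + m1 * g2_g)

print(f"|G_M| = {abs(G_M):.3f}")   # Table 5 quotes 0.52 +/- 0.11
print(f"|G_E| = {abs(G_E):.4f}")   # Table 5 quotes 0.00, i.e. below 0.01
```

The resulting $|G_{M}|\approx 0.53$ and $|G_{E}|<0.01$ agree with the $\Omega_{cc}^{*+}\to\Omega_{cc}^{+}\gamma$ row of Table 5.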
Using these relations, it is straightforward to calculate the width of the
$B_{QQ}^{*}\to B_{QQ}\gamma$ decay. The result is
$\displaystyle\Gamma=\frac{3\alpha}{4}\frac{k_{\gamma}^{3}}{m_{2}^{2}}(3G_{E}^{2}+G_{M}^{2})$
(24)
where $\alpha$ is the fine-structure constant and
$k_{\gamma}=(m_{1}^{2}-m_{2}^{2})/2m_{1}$ is the photon energy.
## III Numerical analysis
In this section, we perform the numerical analysis of the LCSR for the
coupling constants $g_{1}$ and $g_{2}$ obtained in the previous section for
the $\Xi_{QQ^{\prime}}^{*}\Xi_{QQ^{\prime}}\omega$ and
$\Omega_{QQ^{\prime}}^{*}\Omega_{QQ^{\prime}}\phi$ vertices by using Package X
Patel (2015).
The LCSR involves various input parameters, such as the quark masses, the
masses and residues of doubly heavy baryons, and the decay constants of the
light vector mesons, $\omega$ and $\phi$. These parameters are collected in
Table 1.
Table 1: A subset of the input parameters. The masses and decay constants are
given at $\mu=1{\rm\ GeV}$ and in units of GeV.
Parameter | Value | Parameter | Value | Parameter | Value | Parameter | Value | Parameter | Value | Parameter | Value
---|---|---|---|---|---|---|---|---|---|---|---
$m_{u}$ | 0 | $m_{\omega}$ | 0.783 | $m_{\Xi_{cc}^{*}}$ | 3.692 Brown _et al._ (2014) | $\lambda_{\Xi_{cc}^{*}}$ | 0.12 Aliev _et al._ (2013) | $m_{\Xi_{cc}}$ | 3.610 Brown _et al._ (2014) | $\lambda_{\Xi_{cc}}$ | 0.16 Aliev _et al._ (2012)
$m_{d}$ | 0 | $f_{\omega}$ | 0.187 | $m_{\Xi_{bb}^{*}}$ | 10.178 Brown _et al._ (2014) | $\lambda_{\Xi_{bb}^{*}}$ | 0.22 Aliev _et al._ (2013) | $m_{\Xi_{bb}}$ | 10.143 Brown _et al._ (2014) | $\lambda_{\Xi_{bb}}$ | 0.44 Aliev _et al._ (2012)
$m_{s}$ | 0.137 | $f_{\omega}^{T}$ | 0.151 | $m_{\Xi_{bc}^{*}}$ | 6.985 Brown _et al._ (2014) | $\lambda_{\Xi_{bc}^{*}}$ | 0.15 Aliev _et al._ (2013) | $m_{\Xi_{bc}}$ | 6.943 Brown _et al._ (2014) | $\lambda_{\Xi_{bc}}$ | 0.28 Aliev _et al._ (2012)
$m_{c}$ | 1.4 | $m_{\phi}$ | 1.019 | $m_{\Omega_{cc}^{*}}$ | 3.822 Brown _et al._ (2014) | $\lambda_{\Omega_{cc}^{*}}$ | 0.14 Aliev _et al._ (2013) | $m_{\Omega_{cc}}$ | 3.738 Brown _et al._ (2014) | $\lambda_{\Omega_{cc}}$ | 0.18 Aliev _et al._ (2012)
$m_{b}$ | 4.8 | $f_{\phi}$ | 0.215 | $m_{\Omega_{bb}^{*}}$ | 10.308 Brown _et al._ (2014) | $\lambda_{\Omega_{bb}^{*}}$ | 0.25 Aliev _et al._ (2013) | $m_{\Omega_{bb}}$ | 10.273 Brown _et al._ (2014) | $\lambda_{\Omega_{bb}}$ | 0.45 Aliev _et al._ (2012)
| | $f_{\phi}^{T}$ | 0.186 | $m_{\Omega_{bc}^{*}}$ | 7.059 Brown _et al._ (2014) | $\lambda_{\Omega_{bc}^{*}}$ | 0.17 Aliev _et al._ (2013) | $m_{\Omega_{bc}}$ | 6.998 Brown _et al._ (2014) | $\lambda_{\Omega_{bc}}$ | 0.29 Aliev _et al._ (2012)
The main nonperturbative input parameters of the LCSR are the vector meson
distribution amplitudes (DAs). The explicit expressions of the vector meson
DAs are given in Aliev and Şimşek (2020a) and references therein. The
parameters that appear in the light vector meson DAs for $\omega$ and $\phi$
are presented in Table 2.
Table 2: Vector meson DA parameters for $\omega$ and $\phi$ at $\mu=1{\rm\ GeV}$ Ball _et al._ (1998); Ball and Braun (1999, 1996); Ball _et al._ (2006); Ball (1999); Ball and Zwicky (2005). The accuracy of these parameters is 30–50%.
Parameter | $\omega$ | $\phi$ | Parameter | $\omega$ | $\phi$
---|---|---|---|---|---
$a_{1}^{\parallel}$ | 0 | 0 | $\kappa_{3}^{\perp}$ | 0 | 0
$a_{1}^{\perp}$ | 0 | 0 | $\omega_{3}^{\perp}$ | 0.55 | 0.20
$a_{2}^{\parallel}$ | 0.15 | 0.18 | $\lambda_{3}^{\perp}$ | 0 | 0
$a_{2}^{\perp}$ | 0.14 | 0.14 | $\zeta_{4}^{\parallel}$ | 0.07 | 0
$\zeta_{3}^{\parallel}$ | 0.030 | 0.024 | $\tilde{\omega}_{4}^{\parallel}$ | –0.03 | –0.02
$\tilde{\lambda}_{3}^{\parallel}$ | 0 | 0 | $\zeta_{4}^{\perp}$ | –0.03 | –0.01
$\tilde{\omega}_{3}^{\parallel}$ | –0.09 | –0.045 | $\tilde{\zeta}_{4}^{\perp}$ | –0.08 | –0.03
$\kappa_{3}^{\parallel}$ | 0 | 0 | $\kappa_{4}^{\parallel}$ | 0 | 0
$\omega_{3}^{\parallel}$ | 0.15 | 0.09 | $\kappa_{4}^{\perp}$ | 0 | 0
$\lambda_{3}^{\parallel}$ | 0 | 0 | | |
The LCSR for the strong coupling constants $g_{1}$ and $g_{2}$ involves three
auxiliary parameters, namely the Borel mass parameter, $M^{2}$, the continuum
threshold $s_{0}$, and the parameter $\beta$, in the expression of the
interpolating current. Hence, we need to find the working regions of these
parameters in which the results for the coupling constants $g_{1}$ and $g_{2}$
are practically insensitive to their variation. The lower bound of $M^{2}$ is
determined by requiring that the contributions of the higher-twist terms be
considerably smaller than the leading-twist one (say, below 15%). The upper
bound of $M^{2}$ is found by requiring that the continuum contribution to the
sum rules be less than 25% of the total result. The
value of continuum threshold $s_{0}$ is obtained by demanding that the two-
point sum rules reproduce the mass of doubly heavy baryons with 10% accuracy.
After performing the numerical analysis, we obtained the working regions for
$M^{2}$ and $s_{0}$ as displayed in Table 3.
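The window-selection logic described above can be sketched algorithmically. In the snippet below, `higher_twist_fraction` and `continuum_fraction` are hypothetical toy stand-ins for the actual OPE and continuum integrals (which are given in the cited work, not here); only the selection criteria themselves follow the text.

```python
# Illustration of the Borel-window selection logic described in the text.
# The two fraction functions are toy placeholders, not the real sum-rule
# integrals; their shapes merely mimic the qualitative behavior.
def higher_twist_fraction(M2):
    # toy model: higher-twist terms are power-suppressed, growing at small M^2
    return 0.45 / M2

def continuum_fraction(M2):
    # toy model: continuum contamination grows with M^2
    return 0.05 * M2

def borel_window(M2_grid, twist_cut=0.15, cont_cut=0.25):
    """Return the M^2 values satisfying both criteria from the text:
    higher-twist contributions below `twist_cut` of the leading twist, and
    continuum contribution below `cont_cut` of the total."""
    return [M2 for M2 in M2_grid
            if higher_twist_fraction(M2) < twist_cut
            and continuum_fraction(M2) < cont_cut]

grid = [round(3 + 0.25 * i, 2) for i in range(10)]   # 3.0 ... 5.25 GeV^2
window = borel_window(grid)
print("accepted M^2 values:", window)
```

With these toy inputs, the accepted window happens to resemble the $3\leq M^{2}\leq 4.5$–$5{\rm\ GeV^{2}}$ ranges quoted for the $cc$ channels in Table 3.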
Table 3: The working regions of the Borel mass parameter and the central value of the continuum threshold.
Transition | $M^{2}\ ({\rm GeV^{2}})$ | $s_{0}\ ({\rm GeV^{2}})$
---|---|---
$\Xi_{cc}^{*}\to\Xi_{cc}\omega$ | $3\leq M^{2}\leq 4.5$ | 18
$\Xi_{bb}^{*}\to\Xi_{bb}\omega$ | $8\leq M^{2}\leq 12$ | 110
$\Xi_{bc}^{*}\to\Xi_{bc}\omega$ | $6\leq M^{2}\leq 8$ | 60
$\Omega_{cc}^{*}\to\Omega_{cc}\phi$ | $3\leq M^{2}\leq 5$ | 18
$\Omega_{bb}^{*}\to\Omega_{bb}\phi$ | $8\leq M^{2}\leq 13$ | 110
$\Omega_{bc}^{*}\to\Omega_{bc}\phi$ | $6\leq M^{2}\leq 9$ | 60
Finally, we note that the value of the $\Xi_{QQ}^{*}\to\Xi_{QQ}\rho^{0}$
couplings can be obtained from the results of Aliev and Şimşek (2020a) via the
isospin symmetry.
As an illustration, we present in Figs. 1–6 the dependence of the coupling
constants $g_{1}$, $g_{2}$, and $g_{3}$ on $\cos\theta$, where $\theta$ is
defined via $\beta=\tan\theta$, and on the Borel mass parameter $M^{2}$ for
the transition $\Xi_{cc}^{*}\to\Xi_{cc}\omega$. We summarize our results in
Table 4. The corresponding values for the case of
the Ioffe current, for which $\beta=-1$, are also presented. One can see that
in Figs. 1–3, the values of the coupling constants practically do not change
for the values of $\left|\cos\theta\right|$ between 0.5 and 0.8, hence we
determine the working region of $\beta$ accordingly. The errors in Table 4
reflect the uncertainties in the aforementioned input parameters. From this
table, it follows that in the case of a general current, the values of the
coupling constants are comparable to those in the case of the Ioffe current.
Figure 1: The dependence of the modulus of the coupling constant $g_{1}$ for $\Xi_{cc}^{*}\to\Xi_{cc}\omega$ on $\cos\theta$ at the shown values of $M^{2}$ with $s_{0}=18{\rm\ GeV^{2}}$.
Figure 2: The same as Fig. 1 but for $g_{2}$.
Figure 3: The same as Fig. 1 but for $g_{3}$.
Figure 4: The dependence of the modulus of the coupling constant $g_{1}$ for $\Xi_{cc}^{*}\to\Xi_{cc}\omega$ on $M^{2}$ at the shown values of $\beta$ with $s_{0}=18{\rm\ GeV^{2}}$.
Figure 5: The same as Fig. 4 but for $g_{2}$.
Figure 6: The same as Fig. 4 but for $g_{3}$.
Table 4: The obtained values of the moduli of the coupling constants $g_{1}$, $g_{2}$, and $g_{3}$ for the aforementioned transitions accompanied by a light vector meson.
| Case of the general current | Case of the Ioffe current
---|---|---
Transition | $\left|g_{1}\right|$ | $\left|g_{2}\right|$ | $\left|g_{3}\right|$ | $\left|g_{1}\right|$ | $\left|g_{2}\right|$ | $\left|g_{3}\right|$
$\Xi_{cc}^{*}\to\Xi_{cc}\rho^{0}$ | $1.13\pm 0.25$ | $0.11\pm 0.03$ | $7.81\pm 1.83$ | $0.99\pm 0.22$ | $0.10\pm 0.02$ | $6.92\pm 1.62$
$\Xi_{bb}^{*}\to\Xi_{bb}\rho^{0}$ | $0.76\pm 0.23$ | $0.03\pm 0.00$ | $15.19\pm 4.64$ | $0.67\pm 0.20$ | $0.02\pm 0.00$ | $13.45\pm 4.11$
$\Xi_{bc}^{*}\to\Xi_{bc}\rho^{0}$ | $1.06\pm 0.20$ | $0.05\pm 0.01$ | $14.44\pm 2.84$ | $0.94\pm 0.18$ | $0.05\pm 0.01$ | $12.79\pm 2.51$
$\Xi_{cc}^{*}\to\Xi_{cc}\omega$ | $1.02\pm 0.23$ | $0.10\pm 0.02$ | $7.10\pm 1.68$ | $0.90\pm 0.20$ | $0.09\pm 0.02$ | $6.29\pm 1.49$
$\Xi_{bb}^{*}\to\Xi_{bb}\omega$ | $0.69\pm 0.21$ | $0.03\pm 0.00$ | $13.82\pm 4.25$ | $0.61\pm 0.19$ | $0.02\pm 0.00$ | $12.24\pm 3.77$
$\Xi_{bc}^{*}\to\Xi_{bc}\omega$ | $0.97\pm 0.19$ | $0.05\pm 0.01$ | $13.14\pm 2.60$ | $0.86\pm 0.17$ | $0.05\pm 0.01$ | $11.64\pm 2.31$
$\Omega_{cc}^{*}\to\Omega_{cc}\phi$ | $1.50\pm 0.32$ | $0.50\pm 0.14$ | $9.79\pm 2.50$ | $1.32\pm 0.28$ | $0.45\pm 0.13$ | $8.64\pm 2.22$
$\Omega_{bb}^{*}\to\Omega_{bb}\phi$ | $1.22\pm 0.35$ | $0.15\pm 0.03$ | $23.90\pm 7.19$ | $1.08\pm 0.31$ | $0.14\pm 0.03$ | $21.15\pm 6.38$
$\Omega_{bc}^{*}\to\Omega_{bc}\phi$ | $1.47\pm 0.29$ | $0.25\pm 0.04$ | $19.26\pm 4.13$ | $1.30\pm 0.26$ | $0.22\pm 0.03$ | $17.03\pm 3.66$
Now using the obtained results for $g_{1}$ and $g_{2}$, we can estimate
$g_{i}^{\gamma}$ and hence $G_{M}$ and $G_{E}$. The results for $G_{M}$ and
$G_{E}$ are collected in Table 5.
Table 5: The electric quadrupole and magnetic dipole formfactors for the shown transitions.
Transition | $\left|G_{E}\right|$ | $\left|G_{M}\right|$
---|---|---
$\Xi_{cc}^{*++}\to\Xi_{cc}^{++}\gamma$ | $0.00\pm 0.00$ | $1.78\pm 0.40$
$\Xi_{cc}^{*+}\to\Xi_{cc}^{+}\gamma$ | $0.00\pm 0.00$ | $0.11\pm 0.02$
$\Xi_{bb}^{*0}\to\Xi_{bb}^{0}\gamma$ | $0.00\pm 0.00$ | $3.41\pm 1.03$
$\Xi_{bb}^{*-}\to\Xi_{bb}^{-}\gamma$ | $0.00\pm 0.00$ | $0.22\pm 0.06$
$\Omega_{cc}^{*+}\to\Omega_{cc}^{+}\gamma$ | $0.00\pm 0.00$ | $0.52\pm 0.11$
$\Omega_{bb}^{*-}\to\Omega_{bb}^{-}\gamma$ | $0.00\pm 0.00$ | $1.18\pm 0.34$
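As a worked arithmetic example of Eq. (24) (an illustrative sketch, not part of the original text), the width of $\Omega_{cc}^{*}\to\Omega_{cc}\gamma$ can be evaluated from the Table 1 masses and the Table 5 central values:

```python
# Worked example of Eq. (24) for Omega_cc* -> Omega_cc gamma.
alpha = 1 / 137.036            # fine-structure constant
m1, m2 = 3.822, 3.738          # Omega_cc*, Omega_cc masses (Table 1, GeV)
G_E, G_M = 0.00, 0.52          # Table 5 central values

k_gamma = (m1**2 - m2**2) / (2 * m1)                               # photon energy (GeV)
Gamma = 0.75 * alpha * k_gamma**3 / m2**2 * (3 * G_E**2 + G_M**2)  # width (GeV)

Gamma_keV = Gamma * 1e6        # 1 GeV = 1e6 keV
print(f"k_gamma = {k_gamma*1e3:.1f} MeV, Gamma = {Gamma_keV:.4f} keV")
```

The result, $\Gamma\approx 6.1\times 10^{-2}$ keV, reproduces the $\Omega_{cc}^{*}\to\Omega_{cc}\gamma$ central value quoted in Table 7.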
Using Eq. (24) and the values of $G_{M}$ and $G_{E}$, it is straightforward to
find the corresponding decay widths. From Eq. (24), one can see that the decay
width is
very sensitive to the mass difference of the considered baryons, $\Delta
m=m_{1}-m_{2}$. Therefore, a tiny change in the mass difference leads to a
significant change in the decay width. To see this, as an example, we present
the decay widths for the transition $\Omega_{ccs}^{*}\to\Omega_{ccs}\gamma$ by
using the different mass differences obtained in various approaches. The
results are presented in Table 6.
Table 6: The decay width of the transition $\Omega_{ccs}^{*}\to\Omega_{ccs}\gamma$ for different mass splittings.
$\Delta m$ [MeV] | 57 Lü _et al._ (2017) | 61 Bernotas and Šimonis (2013) | 73 Hackman _et al._ (1978) | 84 Brown _et al._ (2014) | 94 Branz _et al._ (2010); Xiao _et al._ (2017); Cui _et al._ (2018) | 100 Li _et al._ (2018)
---|---|---|---|---|---|---
$\Gamma$ [keV] | 0.07 | 0.09 | 0.15 | 0.23 | 0.33 | 0.40
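The cubic sensitivity is easy to verify: since $k_{\gamma}\approx\Delta m$ for $\Delta m\ll m_{1}$, Eq. (24) gives $\Gamma\propto\Delta m^{3}$ at fixed formfactors. The sketch below (assuming the formfactors are held fixed across the mass splittings, which the table does not state explicitly) scales the $\Delta m=84$ MeV entry and recovers the rest of Table 6 to within about 0.01 keV:

```python
# Gamma ~ Delta_m^3 scaling check against Table 6 (fixed formfactors assumed).
# Reference point: Delta_m = 84 MeV with Gamma = 0.23 keV (Table 6).
dm_ref, gamma_ref = 84.0, 0.23

table6 = {57: 0.07, 61: 0.09, 73: 0.15, 84: 0.23, 94: 0.33, 100: 0.40}

for dm, gamma_quoted in table6.items():
    gamma_scaled = gamma_ref * (dm / dm_ref) ** 3
    print(f"dm = {dm:3d} MeV: scaled {gamma_scaled:.2f} keV, "
          f"quoted {gamma_quoted:.2f} keV")
```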
In our numerical calculations, for the masses of the spin-1/2 and spin-3/2
states we have used the results of Brown _et al._ (2014) (see Table 1), since
their quoted uncertainties are very small. Our final results for the decay
widths are collected in Table 7. For completeness, we also present the
corresponding decay widths obtained within different approaches. From this
comparison we see that only our result for the
$\Omega_{cc}^{*}\to\Omega_{cc}\gamma$ decay is close to the lattice QCD
prediction, while it differs considerably from those of the other existing
approaches. One possible source of these discrepancies may be that the VMD
ansatz does not work particularly well for doubly heavy baryon systems. To see
how well the VMD performs for such systems, it would be useful to calculate
$G_{M}$ and $G_{E}$ directly, i.e. without using the VMD ansatz. This work is
in progress.
Table 7: The widths of the shown radiative decays in units of keV.
Transition | Our work | Chiral quark model Xiao _et al._ (2017) | Three-quark model Branz _et al._ (2010) | Chiral perturbation theory Li _et al._ (2018) | Lattice QCD Bahtiyar _et al._ (2018)
---|---|---|---|---|---
$\Xi_{cc}^{*++}\to\Xi_{cc}^{++}\gamma$ | $(71.33\pm 3.56)\times 10^{-2}$ | 16.7 | 23.5 | 22 | $7.77\times 10^{-2}$
$\Xi_{cc}^{*+}\to\Xi_{cc}^{+}\gamma$ | $(0.29\pm 0.01)\times 10^{-2}$ | 14.6 | 28.8 | 9.57 | $9.72\times 10^{-2}$
$\Omega_{cc}^{*}\to\Omega_{cc}\gamma$ | $(6.08\pm 0.28)\times 10^{-2}$ | 6.93 | 2.11 | 9.45 | $8.47\times 10^{-2}$
$\Xi_{bb}^{*0}\to\Xi_{bb}^{0}\gamma$ | $(2.64\pm 0.24)\times 10^{-2}$ | 1.19 | 0.31 | – | –
$\Xi_{bb}^{*-}\to\Xi_{bb}^{-}\gamma$ | $(0.01\pm 0.00)\times 10^{-2}$ | 0.24 | 0.06 | – | –
$\Omega_{bb}^{*}\to\Omega_{bb}\gamma$ | $(0.31\pm 0.03)\times 10^{-2}$ | 0.08 | 0.02 | – | –
## IV Conclusion
In the present work, we first estimated the strong coupling constants of the
$B_{QQ^{\prime}}^{*}B_{QQ^{\prime}}V$ vertices within the framework of the
LCSR method. Then, assuming the VMD model, we calculated the magnetic dipole
and electric quadrupole formfactors, $G_{M}$ and $G_{E}$, respectively, at the
point $Q^{2}=0$. Using the results for $G_{M}$ and $G_{E}$, we obtained the
widths of the radiative decays $B_{QQ}^{*}\to B_{QQ}\gamma$. Our result for
the width of the $\Omega_{cc}^{*}\to\Omega_{cc}\gamma$ decay is in good
agreement with the lattice result and differs considerably from the
predictions of various other approaches. Our predictions for the strong
coupling constants of the $B_{QQ}^{*}B_{QQ}V$ vertices, as well as for the
radiative decay widths, can be tested at the LHCb experiment in the future.
## References
* Mattson _et al._ (2002) M. Mattson _et al._ (SELEX Collaboration), Phys. Rev. Lett. 89, 112001 (2002).
* Aaij _et al._ (2017) R. Aaij _et al._ (LHCb Collaboration), Phys. Rev. Lett. 119, 112001 (2017).
* Aaij _et al._ (2018) R. Aaij _et al._ (LHCb Collaboration), Phys. Rev. Lett. 121, 162002 (2018).
* Aaij _et al._ (2019) R. Aaij _et al._ (LHCb Collaboration), J. High Energy Phys. 2019, 124 (2019).
* Aaij _et al._ (2020) R. Aaij _et al._ (LHCb Collaboration), J. High Energy Phys. 2020, 49 (2020).
* Wang _et al._ (2017a) W. Wang, Z. P. Xing, and J. Xu, Eur. Phys. J. C 77, 800 (2017a).
* Xiao _et al._ (2017) L. Y. Xiao, K. L. Wang, Q. F. Lü, X. H. Zhong, and S. L. Zhu, Phys. Rev. D 96, 094005 (2017).
* Wang _et al._ (2017b) W. Wang, F. S. Yu, and Z. X. Zhao, Eur. Phys. J. C 77, 781 (2017b).
* Cheng and Shi (2018) H. Y. Cheng and Y. L. Shi, Phys. Rev. D 98, 113005 (2018).
* Shi _et al._ (2018) Y. J. Shi, W. Wang, Y. Xing, and J. Xu, Eur. Phys. J. C 78, 56 (2018).
* Zhao (2018) Z. X. Zhao, Eur. Phys. J. C 78, 756 (2018).
* Shi _et al._ (2020) Y. J. Shi, W. Wang, and Z. X. Zhao, Eur. Phys. J. C 80, 568 (2020).
* Shi and Zhao (2019) Y. J. Shi and Z. X. Zhao, Eur. Phys. J. C 79, 501 (2019).
* Hu and Shi (2020) X. H. Hu and Y. J. Shi, Eur. Phys. J. C 80, 56 (2020).
* Li _et al._ (2017) H. S. Li, L. Meng, Z. W. Liu, and S. L. Zhu, Phys. Rev. D 96, 076011 (2017).
* Meng _et al._ (2017) L. Meng, H. S. Li, Z. W. Liu, and S. L. Zhu, Eur. Phys. J. C 77, 869 (2017).
* Li _et al._ (2018) H. S. Li, L. Meng, Z. W. Liu, and S. L. Zhu, Phys. Lett. B 777, 169 (2018).
* Bahtiyar _et al._ (2018) H. Bahtiyar, K. U. Can, G. Erkol, M. Oka, and T. T. Takahashi, Phys. Rev. D 98, 114505 (2018).
* Rostami _et al._ (2020) S. Rostami, K. Azizi, and A. R. Olamaei, (2020), arXiv:2008.12715 [hep-ph] .
* Aliev and Şimşek (2020a) T. M. Aliev and K. Şimşek, (2020a), arXiv:2011.07150 [hep-ph] .
* Aliev and Şimşek (2020b) T. M. Aliev and K. Şimşek, Eur. Phys. J. C 80, 976 (2020b).
* Alrebdi _et al._ (2020) H. I. Alrebdi, T. M. Aliev, and K. Şimşek, Phys. Rev. D 102, 074007 (2020).
* Azizi _et al._ (2020) K. Azizi, A. R. Olamaei, and S. Rostami, (2020), arXiv:2011.02919 [hep-ph] .
* Jones and Scadron (1973) H. F. Jones and M. D. Scadron, Ann. Phys. 81, 1 (1973).
* Brown _et al._ (2014) Z. S. Brown, W. Detmold, S. Meinel, and K. Orginos, Phys. Rev. D 90, 094507 (2014).
* Patel (2015) H. H. Patel, Comput. Phys. Commun. 197, 276 (2015).
* Aliev _et al._ (2013) T. M. Aliev, K. Azizi, and M. Savcı, J. Phys. G 40, 065003 (2013).
* Aliev _et al._ (2012) T. M. Aliev, K. Azizi, and M. Savcı, Nucl. Phys. A895, 59 (2012).
* Ball _et al._ (1998) P. Ball, V. M. Braun, Y. Koike, and K. Tanaka, Nucl. Phys. B529, 323 (1998).
* Ball and Braun (1999) P. Ball and V. Braun, Nucl. Phys. B543, 201 (1999).
* Ball and Braun (1996) P. Ball and V. M. Braun, Phys. Rev. D 54, 2182 (1996).
* Ball _et al._ (2006) P. Ball, V. M. Braun, and A. Lenz, J. High Energy Phys. 2006, 004 (2006).
* Ball (1999) P. Ball, J. High Energy Phys. 01, 010 (1999).
* Ball and Zwicky (2005) P. Ball and R. Zwicky, Phys. Rev. D 71, 014015 (2005).
* Lü _et al._ (2017) Q. F. Lü, K. L. Wang, L. Y. Xiao, and X. H. Zhong, Phys. Rev. D 96, 114006 (2017).
* Bernotas and Šimonis (2013) A. Bernotas and V. Šimonis, Phys. Rev. D 87, 074016 (2013).
* Hackman _et al._ (1978) R. H. Hackman, N. G. Deshpande, D. A. Dicus, and V. L. Teplitz, Phys. Rev. D 18, 2537 (1978).
* Branz _et al._ (2010) T. Branz, A. Faessler, T. Gutsche, M. A. Ivanov, J. G. Körner, V. E. Lyubovitskij, and B. Oexl, Phys. Rev. D 81, 114036 (2010).
* Cui _et al._ (2018) E. L. Cui, H. X. Chen, W. Chen, X. Liu, and S. L. Zhu, Phys. Rev. D 97, 034018 (2018).
# COVID-19 Outbreak Prediction and Analysis using Self Reported Symptoms
Rohan Sukumaran∗1 Parth Patwa∗1 Sethuraman T V∗1 Sheshank Shankar1
Rishank Kanaparti1 Joseph Bae1,2 Yash Mathur1 Abhishek Singh1,4 Ayush
Chopra1,4 Myungsun Kang1 Priya Ramaswamy1,3 Ramesh Raskar1,4
1PathCheck Foundation 2Stony Brook Medicine
3University of California San Francisco 4MIT Media Lab
{rohan.sukumaran, parth.patwa<EMAIL_ADDRESS>
###### Abstract
It is crucial for policy makers to understand the community prevalence of
COVID-19 so that resources to combat the disease can be effectively allocated
and prioritized
during the COVID-19 pandemic. Traditionally, community prevalence has been
assessed through diagnostic and antibody testing data. However, despite the
increasing availability of COVID-19 testing, the required level has not been
met in most parts of the globe, introducing a need for an alternative method
for communities to determine disease prevalence. This is further complicated
by the observation that COVID-19 prevalence and spread vary across different
spatial, temporal, and demographic dimensions. In this study, we analyze trends in the
spread of COVID-19 by utilizing the results of self-reported COVID-19 symptoms
surveys as an alternative to COVID-19 testing reports. This allows us to
assess community disease prevalence, even in areas with low COVID-19 testing
ability. Using individually reported symptom data from various populations,
our method predicts the likely percentage of population that tested positive
for COVID-19. We do so with a Mean Absolute Error (MAE) of 1.14 and a Mean
Relative Error (MRE) of 60.40%, with a 95% confidence interval of (60.12,
60.67). This implies that our model's predictions deviate from the true count
by roughly +/- 1140 cases in a population of 1 million. In addition, we
forecast the location-wise percentage
of the population testing positive for the next 30 days using self-reported
symptoms data from previous days. The MAE for this method is as low as 0.15
(MRE of 23.61%, with a 95% confidence interval of (23.6, 13.7)) for New York. We
present analysis on these results, exposing various clinical attributes of
interest across different demographics. Lastly, we qualitatively analyse how
various policy enactments (testing, curfew) affect the prevalence of COVID-19
in a community.
11footnotetext: Equal contribution.
## 1 Introduction
The rapid progression of the COVID-19 pandemic has provoked large-scale data
collection efforts on an international level to study the epidemiology of the
virus and inform policies. Various studies have been undertaken to predict the
spread, severity, and unique characteristics of the COVID-19 infection, across
a broad range of clinical, imaging, and population-level datasets (Gostic et
al. 2020; Liang et al. 2020; Menni et al. 2020a; Shi et al. 2020). For
instance, (Menni et al. 2020a) uses self-reported data from a mobile app to
predict a positive COVID-19 test result based upon symptom presentation.
Anosmia was shown to be the strongest predictor of disease presence, and a
model for disease detection using symptoms-based predictors was indicated to
have a sensitivity of about 65%. Studies like (Parma et al. 2020) have shown
that ageusia and anosmia are widespread sequelae of COVID-19 pathogenesis.
Since the onset of COVID-19, there has also been a significant amount of work
on mathematical modeling to understand the outbreak under different conditions
and for different demographics (Menni et al. 2020b; Saad-Roy et al. 2020;
Wilder, Mina, and Tambe 2020). Although these works primarily focus on the
population level, estimating the different transition probabilities for moving
between compartments is challenging.
Carnegie Mellon University (CMU) and the University of Maryland (UMD) have
built chronologically aggregated datasets of self-reported COVID-19 symptoms
by conducting surveys at national and international levels (Fan et al. 2020;
Delphi group 2020). The surveys contain questions regarding whether the
respondent has experienced several of the common symptoms of COVID-19 (e.g.
anosmia, ageusia, cough, etc.) in addition to various behavioral questions
concerning the number of trips a respondent has taken outdoors and whether
they have received a COVID-19 test.
In this work, we perform several studies using the CMU, UMD and OxCGRT (Fan et
al. 2020; Delphi group 2020; Hale et al. 2020) datasets. Our experiments
examine correlations among variables in the CMU data to determine which
symptoms and behaviors are most correlated with a high percentage of COVID-Like
Illness (CLI). We examine how the different symptoms affect the percentage of
population with CLI across different spatio-temporal and demographic (age,
gender) settings. We also predict the percentage of the population that tested
positive for COVID-19, achieving a 60% Mean Relative Error. Further, our
experiments involve time-series analysis of these datasets to forecast CLI
over time. Here, we identify how different spatial window trends vary across
different temporal windows. We aim to use the findings from this method to
understand the possibilities of modelling CLI for geographic areas in which
data collection is sparse or non-existent. Furthermore, results from our
experiments can potentially guide public health policies for COVID-19.
Understanding how the disease is progressing can help policymakers introduce
non-pharmaceutical interventions (NPIs) and decide how to distribute critical
resources (medicines, doctors,
healthcare workers, transportation and more). This could now be done based on
the insights provided by our models, instead of relying completely on clinical
testing data. Predicting outbreaks from self-reported symptoms can also help
reduce the load on testing resources.
To the best of our knowledge, this is the first work to use self-reported
symptoms collected across spatio-temporal windows to understand the prevalence
and outbreak of COVID-19.
## 2 Datasets
The CMU Symptom Survey aggregates the results of a survey run by CMU (Delphi
group 2020) which was distributed across the US to ~70k random Facebook users
daily. COVIDcast gathers data from the survey and dozens of sources and
produces a set of indicators which can inform our reasoning about the
pandemic. Indicators are produced from these raw data by extracting a metric
of interest, tracking revisions, and applying additional processing like
reducing noise, adjusting temporal focus, or enabling more direct comparisons.
A few of these indicators are:
\- 7 Public’s Behavior Indicators like People Wearing Masks and At Away
Location 6hr+
\- 3 Early Indicators like COVID-Related Doctor Visits and COVID-Like Symptoms
in Community
\- 4 Late Indicators like COVID Cases, COVID Deaths, COVID Antigen Test
Positivity (Quidel) and Claims-Based COVID Hospital Admissions
It has 104 columns, including weighted (adjusted for sampling bias),
unweighted signals, demographic columns (age, gender etc) for county and state
level data. We use the data from Apr. 4, ’20 to Sep. 11, ’20. This data is
henceforth referred to as the CMU dataset in the paper.
The UMD Global Symptom Survey aggregates the results of a survey conducted by
the UMD through Facebook (Fan et al. 2020). The survey is available in 56
languages. A representative sample of Facebook users is invited on a daily
basis to report on topics including, for example, symptoms, social distancing
behavior, vaccine acceptance, mental health issues, and financial constraints.
Facebook provides weights to reduce nonresponse and coverage bias. Country and
region-level statistics are published daily via public API and dashboards, and
microdata is available for researchers via data use agreements. Over half a
million responses are collected daily. We use the data of 968 regions,
available from May 01 to September 11. There are 28 unweighted signals
provided, as well as a weighted form (adjusted for sampling bias). These
signals include self reported symptoms, exposure information, general hygiene
etc.
The Oxford COVID-19 Government Response Tracker (OxCGRT) (Hale et al. 2020)
contains government COVID-19 policy data as a numerical scale value
representing the extent of government action. OxCGRT collects publicly
available information on 20 indicators of government response. This
information is collected by a team of over 200 volunteers from the Oxford
community and is updated continuously. Further, they also include statistics
on the number of reported Covid-19 cases and deaths in each country. These are
taken from the JHU CSSE data repository for all countries and the US States.
For the time-series and one-on-one predictions, we use 80% of the data for
training and hold out the remaining 20% for testing; the 80-20 split is random.
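The random 80-20 split described above can be sketched with scikit-learn; the synthetic arrays below are illustrative stand-ins for the survey features, not the actual dataset schema.

```python
# Minimal sketch of the random 80-20 train/test split, using scikit-learn.
# X and y are synthetic stand-ins for the survey features and target signal.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((100, 5))   # stand-in for the pruned survey signals
y = rng.random(100)        # stand-in for the target (e.g. % tested positive)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)  # random 80-20 split
print(len(X_train), len(X_test))  # 80 20
```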
Similar self-reported and survey data have been used by (Rodriguez et al.
2020a, b; Garcia-Agundez et al. 2021) for understanding the pandemic and
drawing actionable insights.
The Prevalence of Self-Reported Obesity by State and Territory, BRFSS, 2019-
CDC (CDC 2020) is a dataset published by the CDC containing aggregated self-
reported obesity values. The values are given at state-level granularity in 3
columns: the name of the state, the obesity value, and the 95% confidence
interval. The dataset also contains other details like race, ethnicity, and
food habits, which can be used for further analysis.
## 3 Method and Experiments
Correlation Studies: Correlation between features of the dataset provides
crucial information about the features and the degree of influence they have
over the target value. We conduct correlation studies on different sub groups
like symptomatic, asymptomatic and varying demographic regions in the CMU
dataset to the discover relationships among the signals and with the target
variable. We also investigate the significance of obesity and population
density on the susceptibility to COVID-19 at state level (CDC 2020). Please
refer to the Appendix for more information.
Feature Pruning: We drop demographic columns such as date, gender, age etc.
Next we drop the unweighted columns because their weighted counterparts exist.
We also drop features like percentage of people who got tested negative,
weighted percentage of people who got tested positive etc as these are
directly related to testing and would make the prediction trivial. Further, we
drop derived features like estimated percentage of people with influenza-like
illness because they were not directly reported by the respondents. Finally,
we drop some features which calculate mean (such as average number of people
in respondent’s household who have Covid Like Illness) because their range was
in the order of $10^{50}$. After the entire process we are left with 36
features. The selected feature list is provided in the Appendix.
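The pruning steps above can be sketched in pandas. The column names here are illustrative placeholders, not the actual CMU schema; the rule for spotting unweighted columns assumes weighted counterparts share the same name with a `_weighted` suffix.

```python
# Hedged sketch of the feature pruning: drop demographic columns, unweighted
# signals whose weighted counterpart exists, and test-related columns.
# Column names are hypothetical placeholders, not the real CMU schema.
import pandas as pd

df = pd.DataFrame({
    "date": ["2020-04-04"], "gender": ["f"], "age": ["18-34"],
    "pct_fever": [1.2], "pct_fever_weighted": [1.3],
    "pct_tested_negative": [5.0],
})

demographic = ["date", "gender", "age"]
# unweighted columns for which a weighted counterpart exists
unweighted = [c for c in df.columns
              if not c.endswith("_weighted") and f"{c}_weighted" in df.columns]
test_related = ["pct_tested_negative"]  # would make prediction trivial

pruned = df.drop(columns=demographic + unweighted + test_related)
print(list(pruned.columns))  # ['pct_fever_weighted']
```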
Outbreak Prediction: We predict the percentage of the population that tested
positive (at a state level) from the CMU dataset. After feature pruning as
mentioned above, we are left with 36 input signals. We rank these 36 signals
by their f_regression score (Fre 2007-2020), i.e. the F-statistic of the
correlation with the target variable, and predict the target variable using
the top n ranked features. We experiment with values of n from 1 to 36
for various demographic groups. We train Linear Regression (Galton 1886),
Decision Tree (Quinlan 1986) and Gradient Boosting (Friedman 2001) models. All
the models are implemented using scikit-learn (Pedregosa et al. 2011).
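The rank-then-regress pipeline can be sketched as follows with scikit-learn, the library named above. Synthetic data stands in for the CMU signals; the choice of top n = 10 is illustrative.

```python
# Sketch of the outbreak-prediction pipeline: rank features by f_regression,
# keep the top n, then fit a gradient boosting model on them.
# Synthetic data stands in for the 36 pruned CMU signals.
import numpy as np
from sklearn.feature_selection import f_regression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 36))                    # 36 pruned signals
y = 3 * X[:, 0] + rng.normal(0, 0.1, 200)    # target driven by feature 0

f_stat, _ = f_regression(X, y)               # F-statistic per feature
top_n = np.argsort(f_stat)[::-1][:10]        # indices of the top 10 features

model = GradientBoostingRegressor(random_state=0).fit(X[:, top_n], y)
print(int(top_n[0]))  # feature 0 dominates the target, so it ranks first
```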
Time Series Analysis: We predict the percentage of people that tested positive
using the CMU dataset and percentage of people with CLI with the UMD dataset.
Here, we make use of the top 11 features (according to their ranking obtained
in outbreak prediction) from the CMU (36) and UMD (56) datasets for
multivariate multi-step time series forecasting. Given the data is spread
across different spatial windows (geographies) at a state level, we employ an
agglomerative clustering method independently on symptoms and
behavioural/external patterns, and sample locations which are not in the same
cluster for our analysis. Using the Augmented Dickey-Fuller test (Cheung and
Lai 1995) we found the time series samples for these spatial windows to be
stationary. Furthermore, we bucket the data based on the age and gender of the
respondents, to provide granular insights on the model performance on various
demographics. With a total of 12 demographic buckets [(age, gender) pairs]
available, we use a Vector Auto Regressive (VAR) (Holden 1995) model and an
LSTM (Gers, Schmidhuber, and Cummins 1999) model for the experiments.
Furthermore, we qualitatively look at the impact of government policies
(contact tracing, etc) on the spread of the virus.
## 4 Results and Discussion
Correlation Studies: State-level analysis revealed a mild positive correlation
(R value of 0.24, P value of the order of $10^{-257}$) between the percentage
of people tested positive and the statewide obesity level. Here obesity is
defined as BMI $>30.0$ (NIH 2020). These results are consistent with prior
clinical studies like (Chan et al. 2020) and indicate that further research is
required to see whether a lack of certain nutrients like Vitamin B, Zinc, or
Iron, or having a BMI $>30.0$, could make an individual
more susceptible to COVID-19. Figure 1 shows the correlation amongst multiple
self-reported symptoms; those with significant positive correlations are
highlighted. This clearly reveals that anosmia, ageusia and fever are
relatively strong indicators of COVID-19. From Figure 5, we see that
contact with a COVID-19 positive individual is strongly correlated with
testing COVID-19 positive. Conversely, the percentage of population who avoid
outside contact and the percentage of population testing positive for COVID-19
have a negative correlation. We also find a mild positive correlation between
population density and the percentage of the population reporting COVID-19
positivity, which indicates easier transmission of the virus in congested
environments.
These observations reaffirm the highly contagious nature of the virus and the
need for social distancing.
The above results motivate us to estimate the % of people who tested COVID-19
positive based on the % of people who had direct contact with anyone who
recently tested positive. In doing so, we achieve a Mean Relative Error (MRE)
of 2.33% and a Mean Absolute Error (MAE) of 0.03.
Figure 1: Correlation amongst self reported symptoms and % tested COVID positive.
Demographic | best n | MAE | MRE | CI
---|---|---|---|---
Entire | 35 | 1.14 | 60.40 | (60.12, 60.67)
Male | 34 | 1.38 | 78.14 | (77.67, 78.62)
Female | 36 | 1.10 | 56.89 | (56.48, 57.30)
Age 18-34 | 30 | 1.23 | 66.35 | (65.59, 67.12)
Age 35-54 | 35 | 1.29 | 67.59 | (67.13, 68.04)
Age 55+ | 33 | 1.20 | 66.40 | (65.86, 66.94)
Table 1: Results of gradient boosting model for prediction of % of population
tested positive across demographics. The 95% confidence interval (CI) for Mean
Relative Error (MRE) is calculated over 20 runs (data shuffled randomly every
time). The MRE and Mean Absolute Error (MAE) are averages over 20 runs.
Policies vs CLI/Community Sick Impacts: The impacts of different non-
pharmaceutical interventions (NPIs) can be analysed by combining the CMU, UMD
and Oxford data (Hale et al. 2020). One such analysis is reported here: we
notice that lifting stay-at-home restrictions resulted in a sudden spike in
the number of cases. This can be visualised in figure 4.
Error Metric: We calculate 2 error metrics -
* •
Mean Absolute Error (MAE): the absolute value of the difference between the
predicted and actual values, averaged over all data points.
MAE = $\frac{1}{n}\sum_{i=1}^{n}|y_{i}-x_{i}|$
where $n$ is the total number of data instances, $y_{i}$ is the predicted
value and $x_{i}$ is the actual value.
* •
Mean Relative error (MRE): Relative Error is the absolute difference between
predicted value and actual value, divided by the actual value. Mean Relative
Error is Relative error averaged over all the data points.
MRE = $\frac{100}{n}\sum_{i=1}^{n}\left|\frac{y_{i}-x_{i}}{x_{i}+1}\right|$
We add 1 in the denominator to avoid division by 0; the 100 in the numerator
converts the result to a percentage.
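The two metrics above can be written directly from their definitions (note the +1 in the MRE denominator guarding against division by zero):

```python
# The paper's two error metrics, implemented as defined in the text.
import numpy as np

def mae(y_pred, y_true):
    # mean absolute difference between predicted and actual values
    return np.mean(np.abs(y_pred - y_true))

def mre(y_pred, y_true):
    # mean relative error in percent; +1 in the denominator avoids
    # division by zero when the actual value is 0
    return 100 * np.mean(np.abs((y_pred - y_true) / (y_true + 1)))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])
print(mae(y_pred, y_true))           # 0.5
print(round(mre(y_pred, y_true), 2)) # 16.67
```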
We find that a low MAE value is misleading when predicting the spread of the
virus: the MAE for outbreak prediction is low and has a small range (1-1.4),
but more than 75% of the target lies between 0 and 2.6, meaning only a small
percentage of the entire population has COVID-19 (if 1% of the population is
affected, an MAE of 1 means the predicted cases could be double the actual
cases). Hence, MRE is a better metric to judge a system, as it accounts for
even minute errors in the prediction.
Outbreak prediction on CMU Dataset: Gradient boosting performs the best and
considerably better than the next best algorithm in terms of the error metrics
for every demographic group. Hence, only the results for Gradient Boosting are
shown. Table 1 shows the best accuracy achieved per dataset. For every
dataset, the best "n" is in the 30s. We achieve an MRE of 60.40% for the entire dataset.
The performance is better on Female-only data when compared to Male-only data.
The performance is slightly better on 55+ age data than other age groups. This
can also be observed from figure 2.
Top Features: Except for minor reordering, the top 5 features across every
data split are: CLI in community, loss of smell, CLI in household (HH), fever
in HH, and fever. The top 6-10 features per data split are given in table 3.
We can see that 'worked outside home' and 'avoid contact most time' are useful
features for the male, female and 55+ age splits. Figure 2 shows MRE vs the
number of features selected for different data splits. Overall, the error
decreases as we add more features; however, the decrease is not considerable
($<$ 1%) beyond 20 features.
Time Series Analysis: As seen in Tables 2, 3, 4 and 5, we are able to forecast
PCT_CLI with an MRE of 15.11% using just 23 features from the UMD dataset. We
can see that VAR performs better than LSTM on average, which can be explained
by the dearth of available data. Furthermore, the outbreak forecasting for New
York achieves 11.28% MRE using only 10 features. This might be caused by an
inherent bias in the sampling strategy or participant responses. For example,
the high correlation noted between anosmia and COVID-19 prevalence suggests
several probable confounding relationships between the two. This could also
occur if both symptoms are specific and sensitive for COVID-19 infection.
Location | MRE (%) | MAE
---|---|---
New York | 11.28, 95% CI [10.9, 11.6] | 0.15
California | 13.48, 95% CI [13.4, 13.5] | 0.23
Florida | 17.49, 95% CI [17.5, 17.5] | 0.38
New Jersey | 17.93, 95% CI [17.9, 18] | 0.26
Table 2: The errors of forecasting the outbreak of COVID-19 (% of people tested and positive) for the next 30 days using the VAR model.
Location | MRE (%) | MAE
---|---|---
New York | 23.61, 95% CI [23.6, 23.7] | 0.36
California | 45.06, 95% CI [45, 45.2] | 0.91
Florida | 64.98, 95% CI [64.8, 65.1] | 1.51
New Jersey | 15.78, 95% CI [15.7, 15.9] | 0.26
Table 3: The errors of forecasting the outbreak of COVID-19 (% of people tested and positive) for the next 30 days using the LSTM model.
Location | MRE (%) | MAE
---|---|---
Tokyo | 17.77, 95% CI [17.7, 17.8] | 0.28
British Columbia | 21.35, 95% CI [21.3, 21.4] | 0.34
Northern Ireland | 42.72, 95% CI [42.7, 42.8] | 0.87
Lombardia | 15.31, 95% CI [15.3, 15.4] | 0.22
Table 4: Results of forecasting the outbreak of COVID-19 (% of people with COVID-19 like illness in the population - PCT_CLI) for the next 30 days using the VAR model.
Location | MRE (%) | MAE
---|---|---
Tokyo | 30.00, 95% CI [29.9, 30.1] | 0.53
British Columbia | 31.11, 95% CI [30.9, 31.3] | 0.56
Northern Ireland | 42.46, 95% CI [42.1, 42.9] | 1.21
Lombardia | 16.11, 95% CI [16, 16.2] | 0.21
Table 5: Results of forecasting the outbreak of COVID-19 (% of people with COVID-19 like illness in the population - PCT_CLI) for the next 30 days using the LSTM model.
Figure 2: Error vs number of top features used for the gradient boosting model. Errors vary across demographics, and generally decrease with increase in "n". The decrease is not considerable after n = 20.
Figure 3: After the top 5 predictive features (which are roughly identical), there are considerable differences between the most predictive features segmented across demographics. For example, for the age 34-55 demographic, 'sore throat in hh (household)' is the 6th most predictive feature, but it is not even in the top 10 most predictive features for the 55+ age demographic.
Figure 4: Policy Impacts: when stay-at-home restrictions were stronger, even with higher testing rates, the % of population with CLI (pct_cli_ew) showed a downward trend.
Figure 5: Correlation between people having contact with someone having CLI and people tested positive. Here attribute (1) = % of people who had contact with someone having COVID-19, (2) = % of people tested positive, (3) = % of people who avoided contact all/most of the time.
Symptoms vs CLI overlap: The percentage of the population with symptoms like
cough, fever and runny nose is much higher than the percentage of people who
suffer from CLI or who are sick in the community. Only 4% of the people in the
UMD dataset who reported having CLI were not suffering from chest pain and
nausea.
Ablation Studies: Here, we perform ablation studies to verify and investigate
the relative importance of the features selected using the f_regression
feature ranking algorithm (Fre 2007-2020). In the following experiments the
top $N=10$ features obtained from the f_regression algorithm are considered as
the subset for evaluation.
All-but-One: In this experiment, the target variable (the percentage of people
affected by COVID-19) is estimated using $N-1$ of the top $N$ features,
dropping 1 feature at a time in each iteration, in descending order of rank.
The results are visualised in figure 6, from which it is clear that there is a
considerable increase in error when the most significant feature is dropped,
while the loss in performance is not as drastic when any other feature is
dropped. This reaffirms our feature selection method.
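The all-but-one loop can be sketched as follows. The data is synthetic, with one dominant feature standing in for the most significant signal, and the `mre` helper mirrors the paper's metric.

```python
# Hedged sketch of the all-but-one ablation: refit the model N times,
# each time dropping one of the top-N features, and record the test error.
# Synthetic data; feature 0 is constructed to dominate the target.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def mre(y_pred, y_true):
    return 100 * np.mean(np.abs((y_pred - y_true) / (y_true + 1)))

rng = np.random.default_rng(0)
X = rng.random((300, 10))
y = 5 * X[:, 0] + X[:, 1] + rng.normal(0, 0.05, 300)

errors = {}
for drop in range(X.shape[1]):
    keep = [j for j in range(X.shape[1]) if j != drop]
    model = GradientBoostingRegressor(random_state=0)
    model.fit(X[:200][:, keep], y[:200])              # train split
    errors[drop] = mre(model.predict(X[200:][:, keep]), y[200:])  # test split

# dropping the dominant feature should hurt the most
print(max(errors, key=errors.get))  # 0
```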
Figure 6: Results of All-but-One experiment (MRE)
Cumulative Feature Dropping: In this experiment, we first estimate the target
variable using the top $N=10$ features and then repeat the experiment with
$N-i$ features in every iteration, where $i$ is the iteration count; features
are dropped in descending order of rank. Figure 7 shows the results. The
change in slope from the start to the end of the graph strongly supports our
previous inference that the most important feature has a huge significance on
the performance and error rate, and reaffirms our feature selection algorithm.
Figure 7: Results of Cumulative Feature Dropping
## 5 Conclusion And Future Work
In this work, we analyse the benefits of COVID-19 self reported symptoms
present in the CMU, UMD, and Oxford datasets. We present correlation analysis,
outbreak prediction, and time series prediction of the percentage of
respondents with positive COVID-19 tests and the percentage of respondents who
show COVID-like illness. By clustering datasets across different demographics,
we reveal micro and macro level insights into the relationship between
symptoms and outbreaks of COVID-19. These insights might form the basis for
future analysis of the epidemiology and manifestations of COVID-19 in
different patient populations. Our correlation and prediction studies identify
a small subset of features that can predict measures of COVID-19 prevalence to
a high degree of accuracy. Using this, more efficient surveys can be designed
to measure only the most relevant features to predict COVID-19 outbreaks.
Shorter surveys will increase the likelihood of respondent participation and
decrease the chance that respondents provide false (or incorrect) information.
We believe that our analysis will be valuable in shaping health policy and in
COVID-19 outbreak predictions for areas with low levels of testing, by
providing prediction models that rely on self-reported symptom data. As our
results show, the predictions from our models could be used reliably by health
officials and policymakers to prioritise resources. Furthermore, because the
method builds on crowdsourced information, it can be scaled up quickly if and
when required in the future (e.g. due to the advent of a new virus or strain).
In the future, we plan to use advanced deep learning models for predictions.
Furthermore, given the promise shown by population-level symptom data, more
relevant and timely problems could be solved with individual-level data.
Machine learning systems built on data from mobile/wearable devices could
track users' vitals, sleep behaviour etc., and, with data shared at an
individual level, augment the participatory surveillance dataset and thereby
the predictions made. This can be achieved without compromising the privacy of
the individual. We also plan to compare the reliability of such survey methods
against the actual number of cases in the corresponding regions, and to assess
their generalisability across the population.
## 6 Acknowledgement
We acknowledge the inputs of Seojin Jang, Chirag Samal, Nilay Shrivastava,
Shrikant Kanaparti, Darshan Gandhi and Priyanshi Katiyar. We further thank
Prof. Manuel Morales (University of Montreal), Morteza Asgari and Hellen
Vasques for helping in developing a dashboard to showcase the results. We also
acknowledge Dr. Thomas C. Kingsley (Mayo Clinic) for his suggestions in the
future works.
## References
* Fre (2007-2020) 2007-2020. _sklearn f regression_. https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_regression.html.
* CDC (2020) CDC. 2020. _Data and Statistics_. https://www.cdc.gov/obesity/data/prevalence-maps.html.
* Chan et al. (2020) Chan, C. C.; et al. 2020. Type I interferon sensing unlocks dormant adipocyte inflammatory potential. _Nature Communications_ 11(1). ISSN 2041-1723. URL https://doi.org/10.1038/s41467-020-16571-4.
* Cheung and Lai (1995) Cheung, Y.-W.; and Lai, K. S. 1995. Lag order and critical values of the augmented Dickey–Fuller test. _Journal of Business & Economic Statistics_ 13(3): 277–280.
* Delphi group (2020) Delphi group, C. M. U. 2020. Delphi’s COVID-19 Surveys. URL https://covidcast.cmu.edu/surveys.html.
* Fan et al. (2020) Fan, J.; et al. 2020. COVID-19 World Symptom Survey Data API.
* Friedman (2001) Friedman, J. H. 2001. Greedy function approximation: A gradient boosting machine. _The Annals of Statistics_ 29(5): 1189 – 1232. doi:10.1214/aos/1013203451. URL https://doi.org/10.1214/aos/1013203451.
* Galton (1886) Galton, F. 1886. Regression Towards Mediocrity in Hereditary Stature. _The Journal of the Anthropological Institute of Great Britain and Ireland_ 15: 246–263. ISSN 09595295. URL http://www.jstor.org/stable/2841583.
* Garcia-Agundez et al. (2021) Garcia-Agundez, A.; Ojo, O.; Hernández-Roig, H. A.; Baquero, C.; Frey, D.; Georgiou, C.; Goessens, M.; Lillo, R. E.; Menezes, R.; Nicolaou, N.; et al. 2021\. Estimating the COVID-19 Prevalence in Spain with Indirect Reporting via Open Surveys. _Frontiers in Public Health_ 9\.
* Gers, Schmidhuber, and Cummins (1999) Gers, F. A.; Schmidhuber, J.; and Cummins, F. 1999. Learning to forget: Continual prediction with LSTM. _1999 Ninth International Conference on Artificial Neural Networks ICANN 99._ .
* Gostic et al. (2020) Gostic, K.; Gomez, A. C.; Mummah, R. O.; Kucharski, A. J.; and Lloyd-Smith, J. O. 2020. Estimated effectiveness of symptom and risk screening to prevent the spread of COVID-19. _eLife_ 9\. ISSN 2050-084X. doi:10.7554/elife.55570. URL https://europepmc.org/articles/PMC7060038.
* Hale et al. (2020) Hale, T.; Webster, S.; Petherick, A.; Phillips, T.; and Kira, B. 2020. Oxford COVID-19 Government Response Tracker Blavatnik School of Government.
* Holden (1995) Holden, K. 1995. Vector auto regression modeling and forecasting. _Journal of Forecasting_ 14(3): 159–166.
* Liang et al. (2020) Liang, W.; Liang, H.; Ou, L.; Chen, B.; Chen, A.; Li, C.; Li, Y.; Guan, W.; Sang, L.; Lu, J.; Xu, Y.; Chen, G.; Guo, H.; Guo, J.; Chen, Z.; Zhao, Y.; Li, S.; Zhang, N.; Zhong, N.; He, J.; and for the China Medical Treatment Expert Group for COVID-19. 2020. Development and Validation of a Clinical Risk Score to Predict the Occurrence of Critical Illness in Hospitalized Patients With COVID-19. _JAMA Internal Medicine_ 180(8): 1081–1089. ISSN 2168-6106. doi:10.1001/jamainternmed.2020.2033. URL https://doi.org/10.1001/jamainternmed.2020.2033.
* Menni et al. (2020a) Menni, C.; Valdes, A. M.; Freidin, M. B.; Sudre, C. H.; Nguyen, L. H.; Drew, D. A.; Ganesh, S.; Varsavsky, T.; Cardoso, M. J.; El-Sayed Moustafa, J. S.; Visconti, A.; Hysi, P.; Bowyer, R. C. E.; Mangino, M.; Falchi, M.; Wolf, J.; Ourselin, S.; Chan, A. T.; Steves, C. J.; and Spector, T. D. 2020a. Real-time tracking of self-reported symptoms to predict potential COVID-19. _Nature Medicine_ 26(7): 1037–1040. ISSN 1546-170X. doi:10.1038/s41591-020-0916-2. URL https://doi.org/10.1038/s41591-020-0916-2.
* Menni et al. (2020b) Menni, C.; et al. 2020b. Real-time tracking of self-reported symptoms to predict potential COVID-19. _Nature medicine_ 1–4.
* NIH (2020) NIH. 2020. _Adult Body Mass Index (BMI)_. https://www.nhlbi.nih.gov/health/educational/lose_wt/BMI/bmicalc.htm.
* Parma et al. (2020) Parma, V.; et al. 2020. More than smell. COVID-19 is associated with severe impairment of smell, taste, and chemesthesis. _medRxiv_ doi:10.1101/2020.05.04.20090902. URL https://www.medrxiv.org/content/early/2020/05/24/2020.05.04.20090902.
* Pedregosa et al. (2011) Pedregosa, F.; et al. 2011. Scikit-learn: Machine Learning in Python. _Journal of Machine Learning Research_ 12: 2825–2830.
* Quinlan (1986) Quinlan, J. R. 1986. Induction of decision trees. _Machine Learning_ 1(1): 81–106. ISSN 1573-0565. doi:10.1007/BF00116251. URL https://doi.org/10.1007/BF00116251.
* Rodriguez et al. (2020a) Rodriguez, A.; Muralidhar, N.; Adhikari, B.; Tabassum, A.; Ramakrishnan, N.; and Prakash, B. A. 2020a. Steering a Historical Disease Forecasting Model Under a Pandemic: Case of Flu and COVID-19. _arXiv preprint arXiv:2009.11407_ .
* Rodriguez et al. (2020b) Rodriguez, A.; Tabassum, A.; Cui, J.; Xie, J.; Ho, J.; Agarwal, P.; Adhikari, B.; and Prakash, B. A. 2020b. DeepCOVID: An Operational Deep Learning-driven Framework for Explainable Real-time COVID-19 Forecasting. _medRxiv_ doi:10.1101/2020.09.28.20203109. URL https://www.medrxiv.org/content/early/2020/09/29/2020.09.28.20203109.
* Saad-Roy et al. (2020) Saad-Roy, C. M.; et al. 2020. Immune life history, vaccination, and the dynamics of SARS-CoV-2 over the next 5 years. _Science_ .
* Shi et al. (2020) Shi, F.; et al. 2020. Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation and Diagnosis for COVID-19. _IEEE Reviews in Biomedical Engineering_ 1–1. ISSN 1941-1189. doi:10.1109/rbme.2020.2987975. URL http://dx.doi.org/10.1109/RBME.2020.2987975.
* Wilder, Mina, and Tambe (2020) Wilder, B.; Mina, M. J.; and Tambe, M. 2020. Tracking disease outbreaks from sparse data with Bayesian inference. _arXiv preprint arXiv:2009.05863_ .
## 7 Appendix
The sample features present in the datasets can be observed in table 6.
Dataset | Example Signals
---|---
UMD | COVID-like illness symptoms, influenza-like illness symptoms, mask usage
CMU | sore throat, loss of smell/taste, chronic lung disease
OxCGRT | containment and closure policies, economic policies, health system policies
Table 6: Example Signal Information for the Datasets
### Correlation Studies
The detailed plots of the correlation analysis of the CMU dataset are shown in
figure 11.
Figure 8: Correlation study: the relationship between underlying medical conditions and the percentage of people tested COVID positive.
Figure 9: Observed vs Predicted: prediction of the percentage of people tested positive using the percentage of people who recently had contact with someone who is COVID positive.
Figure 10: Correlation map depicting the relationship between the features along with the target variable(s).
Figure 11: State-wise distribution of the percentage of people tested COVID positive.
Figure 12: Correlation study: the relationship between COVID-like illness and the percentage of people tested COVID positive.
Rank | Signal | F_Statistic
---|---|---
1 | COVID-like Illness in Community | 14938.48816456
2 | Loss of smell or taste | 9498.89229794
3 | COVID-like Illness in Household | 6050.88250153
4 | Fever in Household | 5490.15612527
5 | Fever | 4388.95759983
6 | Sore Throat in Household | 1787.42269067
7 | Avoid contact with others most of the time | 1494.25038393
8 | Difficulty breathing in Household | 1330.48793481
9 | Persistent Pain Pressure in Chest | 1257.78331468
10 | Runny Nose | 1084.84412662
11 | Worked outside home | 1023.50285601
12 | Nausea or Vomiting | 1016.94758914
13 | Shortness of breath in Household | 1004.67944587
14 | Sore Throat | 975.25614266
15 | Difficulty Breathing | 723.49150048
16 | Asthma | 466.91243179
17 | Shortness of Breath | 440.88344033
18 | Cough in Household | 322.05679444
19 | No symptoms in past 24 hours | 241.72819985
20 | Diarrhea | 228.59465358
21 | Chronic Lung Disease | 224.24651285
22 | Cancer | 205.19827073
23 | Other Pre-existing Disease | 158.31567587
24 | Tiredness or Exhaustion | 134.36715409
25 | Cough | 84.66549815
26 | No Above Medical Conditions | 84.40193799
27 | Heart Disease | 74.71994609
28 | Multiple Medical Conditions | 52.61630823
29 | Autoimmune Disorder | 40.8942176
30 | Nasal Congestion | 33.60170138
31 | Kidney Disease | 23.88450351
32 | Average people in Household with COVID-like illness | 14.52969291
33 | Multiple Symptoms | 12.56805547
34 | Muscle Joint Aches | 1.72398411
35 | High Blood Pressure | 0.48328156
36 | Diabetes | 0.24390025
Table 7: Features for the entire dataset, ranked by F score. All signals are
represented as percentages of respondents who responded that way.
### Time Series
In table 8 we continue to experiment with different spatial windows, trying to
predict PCT_CLI for different locations like Tokyo and British Columbia using
different combinations of features. In table 10 the analysis is extended to
more US states with an LSTM-based deep learning model to predict PCT_CLI, and
we notice no significant gain from using DL models (probably due to the lack
of data). The pct_community_sick variable is another target we try to predict;
the results can be seen in table 9.
In figures 13 and 15 we use Dynamic Time Warping (DTW) to compare how well our
forecasted time-series curve matches the original curve. DTW was chosen for
its flexibility in comparing time-series signals of different lengths. This
enables us to compare different temporal windows across different spatial
windows to understand the effectiveness of the model in different contexts.
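A minimal DTW distance can be written in plain NumPy; this sketch is only meant to illustrate why DTW can compare forecasted and original curves of different lengths, not the exact implementation used for the plots.

```python
# Minimal dynamic-time-warping distance in plain NumPy, illustrating how a
# forecasted curve can be compared against the original curve even when the
# two series have different lengths.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)  # accumulated-cost matrix
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

original = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
forecast = np.array([0.0, 1.0, 2.0, 2.0, 1.0, 0.0])  # different length
print(dtw_distance(original, forecast))  # 0.0 (warping absorbs the extra 2.0)
```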
Location | Bucket | RMSE | MAE | MRE (%) | Features Used
---|---|---|---|---|---
Abu Dhabi | male and age 18-34 | 2.43 | 2.23 | 167.86 | difficulty breathing + anosmia ageusia (weighted)
Tokyo | female and age 35-54 | 0.56 | 0.47 | 30.16 | difficulty breathing + anosmia ageusia (weighted)
British Columbia | male and age 55+ | 1.09 | 0.59 | 28.68 | difficulty breathing + anosmia ageusia (weighted)
Lombardia | male and age 55+ | 0.95 | 0.67 | 28.72 | difficulty breathing + anosmia ageusia (weighted)
Lombardia | male and age 55+ | 0.95 | 0.67 | 28.72 | Behavioural / external features (weighted)
British Columbia | male and age 55+ | 1.07 | 0.76 | 50.17 | Behavioural / external features (weighted)
Tokyo | female and age 35-54 | 0.58 | 0.49 | 31.38 | Behavioural / external features (weighted)
Abu Dhabi | male and age 18-34 | 2.91 | 2.78 | 207.94 | Behavioural / external features (weighted)
Table 8: RMSE and MAE scores for different buckets of interest + ablation - VAR model - PCT_CLI_weighted.
Location | Bucket | RMSE | MAE | MRE (%)
---|---|---|---|---
Abu Dhabi | male and age 18-34 | 9.99 | 8.94 | 73.11
Tokyo | female and age 35-54 | 1.13 | 1.02 | 41.67
British Columbia | male and age 55+ | 3.21 | 2.65 | 137.13
Lombardia | male and age 55+ | 1.25 | 1.25 | 24.49
Table 9: RMSE and MAE scores for different buckets of interest - VAR model - PCT_Community_Sick.
Location | Bucket | RMSE | MAE | MRE | Model
---|---|---|---|---|---
TX | male and age overall | 1.56 | 1.21 | 43.00 | VAR
CA | male and age overall | 1.22 | 0.93 | 23.44 | VAR
NY | female and age overall | 0.7 | 0.56 | 21.59 | VAR
FL | female and age overall | 1.48 | 1.18 | 19.35 | VAR
TX | male and age overall | 6.28 | 4.06 | 89.4 | LSTM
CA | male and age overall | 2.83 | 2.68 | 71.24 | LSTM
NY | female and age overall | 2.02 | 1.9 | 68.17 | LSTM
FL | female and age overall | 4.33 | 4.19 | 73.34 | LSTM
Table 10: RMSE and MAE scores for different buckets of interest - VAR/LSTM models - PCT_CLI. Here we see that deep learning models do not perform better than standard statistical models.
Figure 13: DTW plot analysing the relationship between our forecasted curve and the original curve for Ohio.
Figure 14: Forecasted curve vs the original curve for Ohio.
Figure 15: DTW plot analysing the relationship between our forecasted curve and the original curve.
Figure 16: Forecasted curve vs the original curve for Texas.
Demography | Feature Removed | MAE | MRE
---|---|---|---
Male | no feature removed | 1.389806313 | 77.42367322
Male | pct_cmnty_cli_weighted | 1.470745054 | 82.97970974
Male | pct_self_anosmia_ageusia_weighted | 1.423361929 | 79.90430572
Male | pct_self_none_of_above_weighted | 1.410196471 | 78.62630177
Male | pct_self_runny_nose_weighted | 1.398427829 | 78.13485192
Female | no feature removed | 1.100879926 | 57.63336087
Female | pct_cmnty_cli_weighted | 1.218554308 | 64.54253671
Female | pct_self_anosmia_ageusia_weighted | 1.155647687 | 61.12311515
Female | pct_self_none_of_above_weighted | 1.121811889 | 58.73118158
Female | pct_self_runny_nose_weighted | 1.104380112 | 57.92685018
Young | no feature removed | 1.231519891 | 67.07207641
Young | pct_cmnty_cli_weighted | 1.31846811 | 72.201516
Young | pct_self_anosmia_ageusia_weighted | 1.277138933 | 70.38556851
Young | pct_avoid_contact_all_or_most_time_weighted | 1.244334089 | 67.80402144
Young | pct_self_runny_nose_weighted | 1.234101952 | 67.46623764
Mid | no feature removed | 1.276053866 | 67.05778653
Mid | pct_cmnty_cli_weighted | 1.384547554 | 73.44381028
Mid | pct_self_anosmia_ageusia_weighted | 1.326526868 | 70.22181485
Mid | pct_self_none_of_above_weighted | 1.321293709 | 69.44829708
Mid | pct_avoid_contact_all_or_most_time_weighted | 1.285893087 | 67.62940495
Old | no feature removed | 1.172592164 | 63.98633923
Old | pct_cmnty_cli_weighted | 1.314221647 | 72.59134309
Old | pct_avoid_contact_all_or_most_time_weighted | 1.191250701 | 64.98442049
Old | pct_self_anosmia_ageusia_weighted | 1.192677984 | 65.76644281
Old | pct_self_multiple_symptoms_weighted | 1.186357275 | 64.7244507
Demography | Feature Removed | MAE | MRE
---|---|---|---
Overall | no feature removed | 1.143995128 | 60.83421503
Overall | pct_cmnty_cli_weighted | 1.248043237 | 67.08605954
Overall | pct_self_anosmia_ageusia_weighted | 1.177417511 | 63.07033879
Overall | pct_self_none_of_above_weighted | 1.169464223 | 61.67148756
Overall | pct_self_runny_nose_weighted | 1.149200232 | 61.32185068
Overall | pct_hh_cli_weighted | 1.14551667 | 60.93481883
Overall | pct_avoid_contact_all_or_most_time_weighted | 1.149772918 | 61.16631628
Overall | pct_worked_outside_home_weighted | 1.147615986 | 61.0433573
Overall | pct_self_fever_weighted | 1.144711565 | 60.92739832
Overall | pct_hh_fever_weighted | 1.143703022 | 60.7946325
Overall | pct_hh_difficulty_breathing_weighted | 1.143007815 | 60.83204654
# Probing $\mu$eV ALPs with future LHAASO observation of AGN $\gamma$-ray
spectra
Guangbo Long Siyu Chen Shuo Xu Hong-Hao Zhang<EMAIL_ADDRESS>School of Physics, Sun Yat-sen University, Guangzhou, GuangDong, People’s
Republic of China
###### Abstract
Axion-like particles (ALPs) are predicted in several well-motivated theories
beyond the Standard Model. The TeV gamma-rays from active galactic nuclei
(AGN) suffer attenuation through pair-production interactions with the
extragalactic background light and the cosmic microwave background (EBL/CMB)
during their travel to the Earth. This attenuation can be circumvented through
photon-ALP conversions in the AGN and Galactic magnetic fields, and a flux
enhancement is then expected in the observed spectrum. In this work, we study
the potential of AGN gamma-ray spectra at energies up to above 100 TeV to
probe the ALP parameter space around $\mu$eV masses, where the coupling
$g_{a\gamma}$ is so far only weakly constrained. We find that the nearby and
bright sources Mrk 501, IC 310 and M 87 are suitable for this purpose.
Assuming an intrinsic spectrum with an exponential cutoff at $E_{\rm c}$=100
TeV, we extrapolate the observed spectra of these sources up to above 100 TeV
with models with and without ALPs. For $g_{a\gamma}\gtrsim 2\times
10^{-11}\rm\,GeV^{-1}$ with $m_{a}\lesssim 1\,\mu$eV, the flux at around 100
TeV predicted by the ALP model can be enhanced by more than an order of
magnitude relative to the standard absorption case, and could be detected by
LHAASO. Our result is subject to the uncertainty in the intrinsic spectrum
above tens of TeV, which requires further observations of these sources by the
forthcoming CTA, LHAASO, SWGO and other instruments.
## I Introduction
Several extensions of the Standard Model suggest the existence of very light
pseudoscalar bosons called axion-like particles (ALPs) Svrcek2006 ;
Jaeckel2010 . These spin-0 neutral particles are a generalization of the
axion, which was originally proposed as a natural solution of the strong CP
problem PQ1977 ; Weinberg1978 ; Wiczek1978 , and they are also a promising
dark-matter candidate Preskill1983 ; Abbott1983 ; Dine1983 ; Marsh2011 . One
of their characteristics is their coupling to photons through
$g_{a\gamma}\mathbf{E}\cdot\mathbf{B}a$, with $g_{a\gamma}$ being the coupling
strength, $\mathbf{E}$ the electric field of the photon, $\mathbf{B}$ an
external magnetic field, and $a$ the ALP field Raffelt1988 ;
Sikivie1983 . As a consequence, photon-ALP mixing takes place, leading to
photon-ALP oscillations (or conversions) Raffelt1988 ; Sikivie1983 ;
Hochmuth2007 ; Angelis2008 . Efficient conversions take place above a critical
energy given by Hooper2007 ; Angelis2007 ; Mirizzi2009
$E_{\rm crit}\sim 20(\frac{m_{a}}{10^{-6}\rm eV})^{2}(\frac{10^{-5}\rm
G}{B})(\frac{2.5\cdot 10^{-11}{\rm GeV}^{-1}}{g_{a\gamma}})\,\rm TeV,$ (1)
where $B$ is the homogeneous magnetic-field component transverse to the
propagation direction. In contrast to the axion, the ALP mass $m_{a}$ is
independent of $g_{a\gamma}$. Around the critical energy, oscillatory features
that depend on the configuration of the magnetic field are expected to occur
Wouters2012 .
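As a numerical illustration (a sketch of ours, not code from this paper), Eq. (1) can be evaluated directly; the function name and unit conventions below are our own choices.

```python
def critical_energy_TeV(m_a_ueV, B_1e5G, g_1e11):
    """E_crit of Eq. (1) in TeV.

    m_a_ueV : ALP mass in units of 1e-6 eV
    B_1e5G  : transverse magnetic field in units of 1e-5 G
    g_1e11  : coupling g_agamma in units of 1e-11 GeV^-1
    """
    return 20.0 * m_a_ueV**2 * (1.0 / B_1e5G) * (2.5 / g_1e11)

# For m_a = 1 micro-eV, B = 1e-5 G and g_agamma = 2.5e-11 GeV^-1 this
# returns 20 TeV, reproducing the prefactor of Eq. (1).
```

For the Galactic field value of 1.23 $\mu$G adopted later in this paper, the same ALP parameters give $E_{\rm crit}\approx 160$ TeV, consistent with the value quoted in section II.2.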
Many laboratory experiments and astronomical observations are searching for
ALPs via the effects mentioned above. Representative experiments are the
photon-regeneration ("light shining through a wall") experiment ALPS ALPS2013 ,
the solar ALP experiment CAST CAST2011 and the dark-matter haloscope ADMX
ADMX2006 .
Owing to the universal presence of magnetic fields along the line of sight to
active galactic nuclei (AGN), photon-ALP oscillations can lead to distinctive
signatures in AGN spectra Angelis2007 ; Simet2008 ; Hooper2007 ; Angelis2008 .
Thus, ALP-photon coupling can be detected through the observations of AGN
(see, e.g. Refs Angelis2007 ; Conde2009 ; Angelis2011 ; Dominguez2011 ;
Tavecchio2012 ).
On the one hand, it is possible to detect the ALP-induced observational
effects on the $\gamma$-ray transparency of the Universe Angelis2007 ;
Simet2008 ; Conde2009 ; Angelis2011 ; Dominguez2011 ; Meyer2013 ; Meyer2014 ;
Troitsky2016 ; Montanino2017 ; Buehler2020 . The very-high-energy (VHE, above
100 GeV) $\gamma$-rays from extragalactic sources suffer attenuation through
pair-production interactions with the background photons (extragalactic
background light, EBL; or cosmic microwave background, CMB) during their
propagation Nikishov1962 ; Hauser2001 ; HESS2006 ; Dwek2013 ; Costamante2013 .
The attenuation increases with the distance to the source and with the energy
of the VHE photons Dwek2013 . If photon-ALP conversions exist with a
sufficiently large coupling, some of the emitted photons convert into ALPs and
then reconvert back into photons before arriving at the Earth, i.e. the ALPs
circumvent pair production. Thus, the opacity of the Universe for VHE gamma-
rays is reduced and the observed flux is enhanced significantly (i.e. the
spectra harden above $E_{\rm crit}$, see e.g. Refs. Angelis2007 ;
Mirizzi2009 ; Dominguez2011 ; Angelis2013 ; Meyer2013 ; Troitsky2016 ;
hardening2 ; Galanti2018 ; Galanti2019 ). The range of parameters where
ALPs would increase the $\gamma$-ray transparency of the Universe (for 1.3
times the optical depth of the Franceschini _et al._ EBL model Franceschini2008 )
has been constrained from VHE $\gamma$-ray observations of blazars (AGN with
the jet closely aligned to the line of sight) Meyer2013 . Data from Fermi-LAT
observations of distant (redshift $z>$0.1) blazars limit
$g_{a\gamma}<10^{-11}\rm\,GeV^{-1}$ for $m_{a}<$3 neV, assuming an
intergalactic magnetic-field strength of 1 nG Buehler2020 .
On the other hand, taking seriously the irregularities of AGN gamma-ray
spectra produced by the oscillations at energies around $E_{\rm crit}$, strong
bounds on $g_{a\gamma}$ have been derived CTA2020alp . In particular, for 0.4
neV$<m_{a}<$100 neV, the strongest bounds on $g_{a\gamma}$ come from the
absence of such irregularities in H.E.S.S. and Fermi-LAT observations of AGN
Hess2013alp ; Fermi2016alp ; Zhang2018alp ; Li2020alp . It is worth
emphasizing that this method depends strongly on the adopted magnetic-field
configuration Libanov2020 .
So far, the coupling range $g_{a\gamma}<6.6\times 10^{-11}\,\rm GeV^{-1}$ for
0.2 $\mu$eV$\lesssim m_{a}\lesssim 2\,\mu$eV, which contains viable ALP
dark-matter parameter space (i.e. $g_{a\gamma}\lesssim 2\times 10^{-11}\,\rm
GeV^{-1}$ for $m_{a}\sim\mu$eV) Arias2012 , has hardly been constrained (see
e.g. Figure 5 of Ref. Buehler2020 ), although it is expected to be probed by
future experiments (e.g. ALPS II ALPS2013 , IAXO IAXO2019 ) or
radio-astronomical observations (e.g. Refs Sigl2017 ; Edwards2020 ;
Caputo2019 ; Ghosh2020 ). According to Eq. (1), effective oscillations for
ALPs in this parameter range take place when the photon energy is larger than
$\sim$20 TeV for an AGN magnetic-field value of $B\sim 10^{-5}$ G Kohri2017 ;
Meyer2014magnetic . Therefore, photons with energies larger than 20 TeV must
be detected in order to probe these ALPs through VHE $\gamma$-ray observations
of AGN. However, only very few photons of such high energy from extragalactic
sources have been observed by past and present telescopes TEVCAT .
Thanks to the upcoming Large High Altitude Air Shower Observatory (LHAASO
LHAASO2019 ), with its ability to survey the TeV sky continuously,
sensitivities above 30 TeV are expected to be about 100 times better than
those of current VHE instruments (e.g. H.E.S.S. HESSp , MAGIC MAGICp , VERITAS
VERITASp ) LHAASO2019 . Furthermore, the conversions in the intergalactic
magnetic field (IGMF) can be neglected: with current upper limits of
$\lesssim 10^{-9}$ G on the IGMF strength and $g_{a\gamma}<6.6\times
10^{-11}\,\rm GeV^{-1}$ on the coupling IGMF ; CAST2011 , Eq. (1) gives
$E_{\rm crit}\lesssim$100 TeV only for $m_{a}\lesssim 35$ neV. Clearly, the
unprecedented sensitivity of LHAASO for TeV AGN provides a good opportunity
to probe $\mu$eV ALPs.
In this paper, we assume that ALPs converted from the gamma-ray photons in the
AGN's magnetic field travel unhindered through extragalactic space, and that
these ALPs then partially back-convert into photons in the Galactic magnetic
field (GMF), see Fig. 1. We investigate the LHAASO sensitivity to the ALP-
induced flux enhancement at the highest energies by using the extrapolated
observations of suitable AGNs for energies up to above 100 TeV, and estimate
the corresponding ALP-parameter space.
The paper is structured as follows. In section II we give the formula for
evaluating the photon survival probability along the line of sight. The sample
selection is described and the data analysis is introduced in section III
before presenting our results in section IV. We discuss our model assumptions
in section V and conclude in section VI.
## II Photon survival probability
Figure 1: Cartoon of the formalism adopted in this article, where the TeV
photon ($\gamma$, purple line) /ALP (a, black line) beam propagates from the
AGN $\gamma$-ray source to the Earth. The interaction $\gamma+\gamma_{\rm
EBL/CMB}\rightarrow\rm e^{\pm}$ takes place during the photon propagation. The
photon-ALP conversions (green line) take place in the magnetic field around
the gamma-ray emission region and GMF respectively, leading to an improvement
of photon survival probability. There are two $\gamma\rightarrow\gamma$
channels: $\gamma\rightarrow\gamma(\rm e^{\pm})\rightarrow\gamma$;
$\gamma\rightarrow a\rightarrow\gamma$, and the latter is dominant in this
situation.
During the propagation from the $\gamma$-ray emission region to the Earth, we
assume the emitted photons mix with ALPs in the AGN $B$-field and in the GMF
respectively, and undergo pair production with the EBL/CMB in extragalactic
space, as shown in Fig. 1. The photon survival probability
$P_{\gamma\rightarrow\gamma}$ is calculated under these effects.
### II.1 Photon-ALP conversion
The probability of conversion of an unpolarized photon into an ALP after
passing through a homogeneous magnetic field $\mathbf{B}$ over a distance $r$
is expressed as Tavecchio2012 ; Angelis2013 ; Masaki2017
$P_{\gamma\rightarrow
a}=\frac{1}{2}\Big{[}\frac{g_{a\gamma}B}{\bigtriangleup_{\rm
osc}(E)}\Big{]}^{2}{\rm sin}^{2}\Big{[}\frac{\bigtriangleup_{\rm
osc}(E)r}{2}\Big{]}$ (2)
with $\bigtriangleup_{\rm osc}(E)$=$g_{a\gamma}B\sqrt{1+(\frac{E_{\rm
crit}}{E}+\frac{E}{E_{\rm H}})^{2}}$, where $E_{\rm crit}$ is the critical
energy defined in Eq. 1 and $E_{\rm H}$ is derived from the Cotton-Mouton (CM)
effect accounting for the photon one-loop vacuum polarization given by
Tavecchio2012
$E_{\rm H}=2.1\times 10^{5}\big{(}\frac{10^{-5}\rm
G}{B}\big{)}\big{(}\frac{g_{a\gamma}}{10^{-11}\rm GeV^{-1}}\big{)}\,\,\rm
GeV.$ (3)
The factor “1/2” in Eq. 2 results from the average over the photon helicities
Kohri2017 .
$P_{a\rightarrow\gamma}$ (=$2P_{\gamma\rightarrow a}$) tends to be sizable and
constant when $E_{\rm crit}<E<E_{\rm H}$ and $1\lesssim g_{a\gamma}Br/2$. For
the case of the Galaxy, the latter condition can be expressed as
$1\lesssim(\frac{r}{10\,\rm kpc})(\frac{B}{1.23\,\mu\rm
G})(\frac{g_{a\gamma}}{5\cdot 10^{-11}{\rm GeV}^{-1}}).$ (4)
Here, we refer to Ref. Han2017 for the values of $r$ and $B$.
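Eqs. (2)-(4) can be sketched numerically in dimensionless form (our own parametrization, not the paper's code): dividing $\bigtriangleup_{\rm osc}$ by $g_{a\gamma}B$, the prefactor in Eq. (2) becomes $1/(1+x^{2})$ with $x=E_{\rm crit}/E+E/E_{\rm H}$, and the phase becomes $(g_{a\gamma}Br/2)\sqrt{1+x^{2}}$, i.e. the quantity of Eq. (4) times $\sqrt{1+x^{2}}$.

```python
import math

def e_H_TeV(B_1e5G, g_1e11):
    """E_H of Eq. (3), converted from GeV to TeV; units as in Eq. (1)."""
    return 210.0 * (1.0 / B_1e5G) * g_1e11

def p_gamma_to_alp(E, E_crit, E_H, gBr_half):
    """Single-domain conversion probability of Eq. (2).

    E, E_crit, E_H : energies in the same units (e.g. TeV)
    gBr_half       : dimensionless phase g_agamma * B * r / 2 (cf. Eq. (4))
    """
    x2 = 1.0 + (E_crit / E + E / E_H) ** 2
    return 0.5 / x2 * math.sin(gBr_half * math.sqrt(x2)) ** 2
```

With $B_{\rm s}=10\,\mu$G ($=10^{-5}$ G) and $g_{a\gamma}=g_{11}$ this gives $E_{\rm H}=210$ TeV, the CM suppression scale quoted in section II.2; note that $P_{\gamma\rightarrow a}$ never exceeds 1/2 and $P_{a\rightarrow\gamma}=2P_{\gamma\rightarrow a}$.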
### II.2 Magnetic field assumption
The magnetic fields around an AGN gamma-ray source commonly include those in
the jet, the radio lobes, and the host galaxy. The jet field $B$ is believed
to decrease with the distance to the central engine along the jet axis
Tavecchio2015 ; Zheng2017 ; Meyer2014magnetic . At the VHE emission region,
the typical value of $B$ is in the interval 0.1-5 G Tavecchio2015 ;
Zheng2017 ; Kang2014 ; Meyer2014magnetic ; Xue2016 ; Sullivan2009 . The
typical magnetic field in the radio lobes is $10\,\mu$G with a coherence
length of 10 kpc IAXO2019 ; Pudritz2012 . The magnetic field in the host
galaxy is poorly known; its strength is of order $\mu$G with coherence lengths
of 0.1 to 0.2 kpc Widrow2002 ; Meyer2014magnetic ; IAXO2019 . A part of the
VHE $\gamma$-ray AGNs are located in galaxy clusters hardening2 , where the
typical $B$ value is 5 $\mu$G with coherence lengths of 10-100 kpc IAXO2019 ;
Widrow2002 ; Fermi2016alp .
Our method takes advantage of the ALP-induced flux enhancement. It is more
sensitive to the average magnitude of the magnetic fields than to the
complicated field configuration and its details, which carry considerable
uncertainty Meyer2013 ; Meyer2014magnetic ; Fermi2016alp . Therefore, for
simplicity, we assume the "source" magnetic field is homogeneous within a
region of 10 kpc, with strength $B_{\rm s}$=10 $\mu$G, following Refs.
Kohri2017 ; Hooper2007 . Similarly, an average value of 1.23 $\mu$G over 10
kpc is adopted for the Galactic magnetic field (GMF) $B_{\rm GMF}$ in our
model Han2017 .
According to Eq. (4), the minimum coupling $g_{a\gamma}$ for significant
conversion in the GMF is about 2.5 $g_{11}$ (where $g_{11}\equiv 10^{-11}$
$\rm GeV^{-1}$). Owing to the larger $B_{\rm s}$, the minimum $g_{a\gamma}$
for conversion in the source can be as low as $g_{11}$. Analogously, the
critical energy for the conversion in the GMF, $\sim$160 TeV for
$g_{a\gamma}\simeq 2.5g_{11}$ and $m_{a}\simeq 1\,\mu$eV, is higher than that
for the source. However, due to the larger $B_{\rm s}$, the CM effect can
suppress the source conversion above 210 TeV for $g_{a\gamma}=g_{11}$, as seen
in Eq. (3); for the GMF, the CM effect can be neglected at the energies
considered in this paper.
We neglect the CM effect contributed by the CMB photons Dobrynina2015 ;
Montanino2017 , which is important when $E\gtrsim 460$ TeV for the B-fields
and the coupling ($g_{a\gamma}\gtrsim g_{11}$) considered in this work.
### II.3 Photon survival probability
We employ the EBL model of Ref. Gilmore2012 to account for the VHE photon
absorption on the EBL. This recent EBL model has been tested repeatedly and is
generally consistent with VHE $\gamma$-ray observations (e.g., Refs.
Fermi2012 ; HESS2013 ; VERITAS2015 ; Biteau2015 ; MAGIC2016 ; Armstrong2017 ;
Yuan2012 ; long2020 ). Furthermore, the infrared EBL intensity from this model
is at the mid-level among several recent EBL models Dominguez2011EBL ;
Finke2010 ; Franceschini2008 , which are mutually less consistent but
basically match the direct measurements in the infrared band HESS2017 . Hence,
choosing this model to compute the EBL optical depth helps to reduce the EBL
uncertainty. Above 140 TeV, the CMB optical depth of TeV photons becomes
dominant, as the CMB intensity is much stronger than the EBL's at wavelengths
longer than 400 $\mu$m.
After obtaining the EBL/CMB spectrum, we can estimate the optical depth
$\tau_{\gamma\gamma}(E,z)$ for a photon with energy $E$ from a source at
redshift $z$ Gilmore2012 ; Gong2013 . Then, the photon survival probability
along the whole path from the source to the Earth can be derived as
$P_{\gamma\rightarrow\gamma}=P_{\gamma\rightarrow\gamma}^{\rm S}{\rm
exp}(-\tau_{\gamma\gamma})P_{\gamma\rightarrow\gamma}^{\rm
G}+P_{\gamma\rightarrow a}^{\rm S}P_{a\rightarrow\gamma}^{\rm G},$ (5)
where $P_{\gamma\rightarrow a}^{\rm S}$ and $P_{a\rightarrow\gamma}^{\rm S}$
are the conversion probabilities from photons to ALPs and from ALPs to photons
in the source, respectively, with $P_{\gamma\rightarrow\gamma}^{\rm
S}=1-P_{\gamma\rightarrow a}^{\rm S}$. Similarly, the variables with
superscript "G" represent those for the GMF. The derivation of Eq. 5 is
illustrated in Fig. 1: the first term corresponds to the
$\gamma\rightarrow\gamma(\rm e^{\pm})\rightarrow\gamma$ channel, which suffers
EBL/CMB absorption; the second is related to the $\gamma\rightarrow
a\rightarrow\gamma$ channel, which is unaffected by the absorption.
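Eq. (5) can be sketched as follows (an illustrative helper of ours, assuming, in analogy to the source term, that $P_{\gamma\rightarrow\gamma}^{\rm G}=1-P_{\gamma\rightarrow a}^{\rm G}$ and $P_{a\rightarrow\gamma}^{\rm G}=2P_{\gamma\rightarrow a}^{\rm G}$):

```python
import math

def survival_probability(p_s, p_g, tau):
    """Photon survival probability of Eq. (5).

    p_s : P_{gamma->a} in the source field, so P^S_{gamma->gamma} = 1 - p_s
    p_g : P_{gamma->a} in the GMF (assumed: P^G_{gamma->gamma} = 1 - p_g,
          P^G_{a->gamma} = 2 * p_g)
    tau : EBL/CMB optical depth tau_gammagamma(E, z)
    """
    direct = (1.0 - p_s) * math.exp(-tau) * (1.0 - p_g)  # gamma -> gamma(e+-) -> gamma
    via_alp = p_s * 2.0 * p_g                            # gamma -> a -> gamma
    return direct + via_alp
```

For a large optical depth the first term vanishes and, at saturation ($p_{s}=p_{g}=1/3$), the survival probability tends to $2/9\approx 0.22$, the value implied by Eq. (6) below.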
Figure 2: The photon survival probability $P_{\gamma\rightarrow\gamma}(E,z)$
along the whole path from the $\gamma$-ray source to the Earth, with ALP mass
$m_{a}=1\,\mu$eV and coupling $g_{a\gamma}=3g_{11}$. The meaning of each
colored curve is annotated in the diagram. $P_{1}=P_{\gamma\rightarrow a}^{\rm
S}P_{a\rightarrow\gamma}^{\rm G}$ and $P_{2}=P_{\gamma\rightarrow\gamma}^{\rm
S}{\rm exp}(-\tau_{\gamma\gamma}(E,0.01))P_{\gamma\rightarrow\gamma}^{\rm G}$,
i.e., the second and the first terms in Eq. 5 ($P_{1}$ being
redshift-independent). They correspond to the channels $\gamma\rightarrow
a\rightarrow\gamma$ and $\gamma\rightarrow\gamma(\rm
e^{\pm})\rightarrow\gamma$ shown in Fig. 1, respectively.
Fig. 2 shows $P_{\gamma\rightarrow\gamma}(E,z)$ as a function of energy $E$
for different $z$, with ALP mass $m_{a}=1\,\mu$eV and coupling
$g_{a\gamma}=3g_{11}$. $P_{2}=P_{\gamma\rightarrow\gamma}^{\rm S}{\rm
exp}(-\tau_{\gamma\gamma}(E,0.01))P_{\gamma\rightarrow\gamma}^{\rm G}$ and
$P_{1}=P_{\gamma\rightarrow a}^{\rm S}P_{a\rightarrow\gamma}^{\rm G}$, i.e.,
the first and second terms in Eq. 5. In the low-energy region, the conversion
is ineffective and $P_{\gamma\rightarrow\gamma}(E,z)$ is dominated by the
absorption term $P_{2}\sim{\rm exp}(-\tau_{\gamma\gamma})$. Towards higher
energies, ${\rm exp}(-\tau_{\gamma\gamma})\rightarrow 0$, while the channel
$\gamma\rightarrow a\rightarrow\gamma$ opens up, so that
$P_{\gamma\rightarrow\gamma}\simeq P_{1}$, which is independent of $z$. As a
consequence, the curves of $P_{\gamma\rightarrow\gamma}(E,z)$ for different
$z$ are v-shaped in the high-energy region and converge to $P_{1}$. When
$E>E_{\rm crit}\approx$150 TeV, $P_{1}$ approaches its maximum, and when
$E>E_{\rm H}\approx$630 TeV, the CM effect suppresses the source conversion.
Thus the peak appears in the highest-energy band.
Note that since we are not concerned with spectral irregularities, we take the
average value of the squared sine function in Eq. 2 whenever the phase is
larger than 1 rad, following e.g. Refs. Kohri2017 ; Hooper2007 ; Mirizzi2007 ,
which smears out the rapid oscillatory features of the probability function.
The average value is taken to be 2/3 rather than 1/2 in order to match the
saturation conversion probability ($P_{\gamma\rightarrow a}$) of 1/3, which
corresponds to the more realistic scenario in which the beam propagates
through many domains of randomly oriented magnetic fields of constant strength
$B$, as e.g. in Refs. Mirizzi2007 ; Meyer2014 .
In the limit of saturated conversion, $E_{\rm crit}\ll E\ll E_{\rm H}$ and
$1<g_{a\gamma}Br/2$, about
$P_{\gamma\rightarrow a}^{\rm S}P_{a\rightarrow\gamma}^{\rm
G}=\frac{1}{3}\times\frac{2}{3}$ (6)
of the original photons survive through the $\gamma\rightarrow
a\rightarrow\gamma$ channel. Obviously, for the ALP-parameter values used in
Fig. 2 the condition of saturated conversion is not met, since for example
$g_{a\gamma}B_{\rm GMF}r_{\rm GMF}/2<1$.
Figure 3: Top panels and bottom-left panel: fits and extrapolations of the
observations of M 87, IC 310 and Mrk 501. The blue and red lines represent,
respectively, the PLC or LPC fit with the EBL (Gilmore)/CMB absorption
correction, and the fit additionally considering the photon-ALP conversion and
the CM effect. $E_{c}$ is the cutoff energy assumed for the intrinsic
spectrum. The meaning of the other symbols is indicated in the legend.
Bottom-right panel: expected ALP limits based on our model and future LHAASO
(or LHAASO+CTA) observations of M 87 (blue dashed line), IC 310 (green dashed
line) and Mrk 501 (red dashed line). For comparison, existing limits (black
line) and 5$\sigma$ sensitivities of future experiments (black dashed line)
are also shown.
## III Method
### III.1 Sample selection
In our method, the VHE $\gamma$-ray observations are used to model the
intrinsic spectrum of the source. So far, about 75 VHE AGN have been detected
by VHE instruments TEVCAT . In principle, most of these sources could be used
to search for the ALP-induced flux boost, since it is independent of the
redshift above $E_{\rm crit}$, as shown in Fig. 2. But we should acquire as
many data points below $E_{\rm crit}$ as possible, which are expected to be
only slightly affected by ALPs, so that a more realistic spectrum at higher
energies can be extrapolated from the observations together with the assumed
model (see Eq. 7). Hence, we preferentially consider nearby sources, whose
$P_{\gamma\rightarrow\gamma}$ curve shows only a "shallow valley" due to the
relatively slight $\gamma\gamma$-absorption (see Fig. 2).
Based on the study of Franceschini et al. Franceschini2019 , the nearby
sources M 87, IC 310 and Mrk 501 would likely be detected by LHAASO up to 75
TeV, 50 TeV and 25 TeV respectively, when taking into account the standard
EBL absorption. Thus, we expect that they can provide more detectable data
below $E_{\rm crit}$. In this paper, we fit and extrapolate the spectral data
of these sources to the highest VHE energies.
_M 87_ ($z$=0.004) is a giant radio galaxy of Fanaroff-Riley type I with a
kpc-scale radio jet, located in the Virgo Cluster. It has been detected by
almost all Imaging Air Cherenkov Telescopes (IACTs) M87MAGIC2019 . Strong and
rapid flux variability in the gamma-ray band is observed, but no significant
spectral changes, with a typical photon index of 2.2 M87MAGIC2019 ;
M87VERITAS2011 ; M87HESS2006 . We adopt H.E.S.S. data taken during 21 h of
effective observation in the high state of 12 February - 15 May 2005 (see
Fig. 3).
_IC 310_ ($z$=0.019) appears to be a transitional AGN between a low-luminosity
HBL (high-frequency peaked BL Lac, i.e. a blazar with weak optical emission
lines Urry1995 ) and a radio galaxy Franceschini2019 , located on the
outskirts of the Perseus galaxy cluster. An extraordinary TeV flare on 2012
Nov. 12-13, followed by a high state during several of the subsequent months,
was detected by MAGIC IC310MAGIC2017 . We use the spectrum with photon
spectral index 1.9 observed during 3.7 h in the flare state (see Fig. 3).
_Mrk 501_ ($z$=0.034) is the next-closest known HBL. It is known for its
spectral variability in the VHE band. During a famous outburst in 1997, the
source showed a very hard intrinsic spectrum with no softening up to the
highest-energy detected photons of 20 TeV Franceschini2019 . We choose the
spectrum detected by Fermi-LAT and MAGIC during the 4.5-month-long
multifrequency campaign (2009 March 15 - August 1, during relatively low
activity) Mrk501fermi2011 (see Fig. 3). Note that this BL Lac is also supposed
to be located in a galaxy cluster hardening2 .
### III.2 Theoretical and intrinsic spectra
We model VHE gamma-ray spectra with
$\psi_{0}=\rm e^{-\tau_{\gamma\gamma}}\phi\,\,\rm
or\,\,\psi_{1}=P_{\gamma\rightarrow\gamma}\phi,$ (7)
where $P_{\gamma\rightarrow\gamma}$ and $\tau_{\gamma\gamma}$ are defined in
Eq. 5, and $\phi$ represents the intrinsic spectrum assumed for the sources.
The model with ALPs has two additional free parameters, $g_{a\gamma}$ and
$m_{a}$, relative to the traditional model.
$\phi$ is assumed to be one of two common models HESS2013 : a power law with
exponential cut-off (PLC), or a log-parabola with exponential cut-off (LPC).
The PLC spectrum is described by three parameters: $\phi_{\rm
PLC}=\phi_{0}(E/E_{0})^{-\alpha}{\rm exp}(-E/E_{\rm c})$, where $E_{\rm c}$ is
the cut-off energy, $\alpha$ is the photon spectral index, constrained by
particle-acceleration theory to $\alpha\geq 1.5$, $\phi_{0}$ is the flux
normalization, and $E_{0}$ is a fixed reference energy. The LPC spectrum has
an additional curvature parameter $t>0$: $\phi_{\rm
LPC}=\phi_{0}(E/E_{0})^{-s-t\,{\rm log}(E/E_{0})}{\rm exp}(-E/E_{\rm c})$,
with $\langle s+t\,{\rm log}(E/E_{0})\rangle\geq 1.5$.
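The two intrinsic-spectrum models can be written compactly as below (a sketch of ours; we assume the base-10 logarithm in the LPC curvature term, as is conventional for log-parabola spectra):

```python
import math

def phi_plc(E, phi0, alpha, E_c, E0=1.0):
    """Power law with exponential cut-off (PLC); all energies in common units."""
    return phi0 * (E / E0) ** (-alpha) * math.exp(-E / E_c)

def phi_lpc(E, phi0, s, t, E_c, E0=1.0):
    """Log-parabola with exponential cut-off (LPC); t > 0 is the curvature."""
    return phi0 * (E / E0) ** (-s - t * math.log10(E / E0)) * math.exp(-E / E_c)
```

At the reference energy $E=E_{0}$ both forms reduce to $\phi_{0}\,{\rm exp}(-E_{0}/E_{\rm c})$, and the observable spectra of Eq. (7) are obtained by multiplying either form by ${\rm exp}(-\tau_{\gamma\gamma})$ or $P_{\gamma\rightarrow\gamma}$.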
Since the highest energy of the detected photons in our samples is $\lesssim$
20 TeV and the spectra are hard with no sign of convergence, the constraint on
$E_{\rm c}$ from the observations is very weak. If the parent particles
responsible for the VHE emission are electrons, the cutoff can be derived from
the Klein-Nishina suppression, the energy loss of the electrons and pair
attenuation in the VHE emission region, see e.g. Lefa2012 ; Stawarz2008 ;
Lewis2019 ; Warren2020 ; Lemoine2020 ; Mrk501fermi2011 . Here, we uniformly
take $E_{\rm c}$=100 TeV for our samples Franceschini2019 , though $E_{\rm c}$
could be higher if the VHE $\gamma$-ray emission is of hadronic origin
LHAASO2019 ; Xue2019 .
### III.3 Fitting and extrapolating the observations
To simulate the observations at the highest energies, we first fit the three
observed spectra with $\psi_{0}$ and extrapolate each to hundreds of TeV. The
form (PLC or LPC) of $\phi$ chosen for each source is determined by its
chi-square value per degree of freedom. Then, we use $\psi_{1}$, containing
the determined $\phi$ and $P_{\gamma\rightarrow\gamma}$ for given
$g_{a\gamma}$ and $m_{a}$, to fit and extrapolate the observations of each
source.
We assume that if the ALP-induced flux enhancement $\frac{\psi_{1}}{\psi_{0}}$
is more than one order of magnitude and the predicted spectrum $\psi_{1}$ is
above the instrument sensitivity, then the given ALP parameters can be
constrained. Owing to the continuity and (approximate) monotonicity of
$P_{\gamma\rightarrow\gamma}$, we only need to test the ALP parameters with
the smallest $g_{a\gamma}$ and those with the largest $m_{a}$ to obtain the
constrained region.
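The selection criterion described above amounts to a simple two-condition test (illustrative only; the function and parameter names are ours):

```python
def alp_point_testable(psi1, psi0, sensitivity, enhancement_min=10.0):
    """True if the ALP parameter point could be constrained: the predicted
    flux enhancement psi1/psi0 exceeds enhancement_min (one order of
    magnitude by default) AND psi1 lies above the instrument sensitivity."""
    return psi1 / psi0 > enhancement_min and psi1 > sensitivity
```

In practice, `psi1` and `psi0` would be the model fluxes of Eq. (7) evaluated at the energy of interest (around 100 TeV), and `sensitivity` the corresponding LHAASO or CTA sensitivity curve value.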
## IV Results
Fig. 3 and Fig. 5 in the appendix report the predicted sensitivity limits for
future LHAASO/CTA observations of the three most promising nearby AGNs. The
blue line represents the standard absorption fit, including an extrapolation
above 100 TeV. The results corresponding to the minimum allowed coupling
$g_{a\gamma}$ for different masses $m_{a}$ are shown by the red line. The 50-h
5$\sigma$ sensitivity limits for CTA, and the 5-year and 1-year 5$\sigma$
limits for LHAASO, are shown as the black dotted, black dashed and black solid
lines, respectively. Combined observations from these instruments will reach
sensitivities of a few times $10^{-11}$ GeV $\rm cm^{-2}$ $\rm s^{-1}$.
_M 87_. The best-fitting intrinsic spectrum for the VHE observations is a PLC
with photon index 2.1. The spectra extrapolated from the best-fit models
($\psi_{0}$ and $\psi_{1}$) are above the 5-year sensitivity of LHAASO up to
100 TeV (this will allow measurements of the M 87 spectrum up to about 70
TeV), which is beneficial for constraining the intrinsic spectrum. Above 100
TeV, the photon-ALP conversion gradually becomes important, so that the
photons survive mainly through the $\gamma\rightarrow a\rightarrow\gamma$
channel. Consequently, the flux enhancement exceeds an order of magnitude and
the flux is above the LHAASO sensitivity around 100 TeV for the three given
ALP-parameter values. The line of $\psi_{1}$ shows only a very shallow valley,
or none at all, as the transition from $P_{\gamma\rightarrow\gamma}\approx
P_{2}$ to $P_{\gamma\rightarrow\gamma}\approx P_{1}$ is smooth due to the low
redshift of M 87. At the highest energies, the intrinsic-spectrum cutoff makes
the curve turn down.
_IC 310_. A PLC is used to model the intrinsic spectrum. The spectra
extrapolated from the best-fit models ($\psi_{0}$ and $\psi_{1}$) are above
the sensitivity of LHAASO up to about 30 TeV, which theoretically will allow
measurements of the IC 310 spectrum up to 60 TeV. Above $\sim$70 TeV, the
photon-ALP conversion gradually becomes important, so that the flux
enhancement exceeds an order of magnitude and could be detected by LHAASO
around 100 TeV. The intrinsic-spectrum cutoff, possibly together with the CM
effect, makes the curve turn down in the highest-energy band.
_Mrk 501_. An LPC intrinsic spectrum is chosen. The predicted
absorption-corrected spectrum is above the sensitivity of LHAASO up to about
30 TeV, which theoretically will give a relatively weak constraint on the
intrinsic spectrum. The very prominent flux enhancement at around 100 TeV is
above the LHAASO sensitivity.
We estimate the ALP parameter space that could be probed by future LHAASO (or
LHAASO+CTA) observations in the last panel of Fig. 3, where other limits and
sensitivity projections are also given for comparison. For M 87, LHAASO would
be able to explore $g_{a\gamma}$ down to about 2$g_{11}$ for
$m_{a}<0.3\,\mu$eV. For IC 310 and Mrk 501, a lower value of
$g_{a\gamma}\simeq g_{11}$ would be explored for $m_{a}<0.7\,\mu$eV and
$m_{a}<0.4\,\mu$eV respectively; part of this region is invoked to explain the
cold dark matter Fermi2016alp . The IC 310 results give the strongest
exploitable bound on the coupling for $m_{a}\lesssim 1\,\mu$eV:
$g_{a\gamma}\simeq 2g_{11}$. In the case of M 87, a relatively weak bound is
obtained, as its observed flux is lower and its lower redshift means that a
higher energy is required to achieve the same enhancement.
## V Discussion
In this section, we will discuss our model assumptions.
As one of the effective conversion conditions requires the photon energy to
satisfy $E_{\rm crict}<E<E_{\rm H}$ and depends on the $B-$field, particularly
the highest energy conversions in the source $B-$field is prone to be
suppressed by the CM effect. Therefore, the uncertainty of $B_{\rm s}$ can
translate into an uncertainty of the photon survival probability and our
result. Fig. 4 shows how $B_{\rm s}$ affects $P_{\gamma\rightarrow\gamma}$ for
a fixed redshift $z=0.005$ and ALP parameters $m_{a}=1\,\mu$eV,
$g_{a\gamma}=2g_{11}$. For $B_{\rm s}$ in the range from several $\mu$G to 20
$\mu$G, the photon survival probabilities are close up to around 200 TeV.
This means that, provided $B_{\rm s}$ lies between several $\mu$G and 20
$\mu$G, the ALP-induced flux enhancement could be achieved and would be
comparable to that obtained with $B_{\rm s}=10\,\mu$G above. From this
perspective, our result appears to be robust.
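The survival probability discussed above is driven by photon-ALP mixing in the source field. As a purely illustrative aid (not the code behind Fig. 4), a minimal single-domain sketch of the conversion probability, keeping only the ALP mass and mixing terms and using the standard numerical coefficients from the photon-ALP literature:

```python
import numpy as np

# Minimal single-domain sketch of photon-ALP mixing (NOT the code behind Fig. 4):
# only the ALP mass and mixing terms are kept; the plasma and QED-dispersion
# terms are dropped. Numerical coefficients are the standard ones from the
# photon-ALP literature; lengths are in kpc.
def p_gamma_to_alp(E_GeV, m_neV, g11, B_muG, L_kpc):
    """P(gamma -> a) = sin^2(2 theta) * sin^2(Delta_osc * L / 2)."""
    delta_ag = 1.52e-2 * g11 * B_muG       # mixing term, kpc^-1
    delta_a = -7.8e-2 * m_neV**2 / E_GeV   # mass term, kpc^-1
    delta_osc = np.sqrt(delta_a**2 + 4.0 * delta_ag**2)
    sin2_2theta = (2.0 * delta_ag / delta_osc) ** 2
    return sin2_2theta * np.sin(delta_osc * L_kpc / 2.0) ** 2

# Below the critical energy the mass term dominates and mixing is tiny,
# while at ~100 TeV the conversion becomes efficient.
E = np.array([1e2, 1e5])  # 100 GeV and 100 TeV
print(p_gamma_to_alp(E, m_neV=1e3, g11=2.0, B_muG=10.0, L_kpc=10.0))
```

With $m_a=1\,\mu$eV ($10^3$ neV), $g_{a\gamma}=2g_{11}$, $B_{\rm s}=10\,\mu$G and $L=10$ kpc, the probability jumps by several orders of magnitude between 100 GeV and 100 TeV, illustrating the energy window $E_{\rm crit}<E<E_{\rm H}$ exploited in the text.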
We assume an exponential cutoff energy $E_{\rm c}=100$ TeV for the intrinsic
spectrum, in which case the ALP-induced flux would be detectable for
$g_{a\gamma}\gtrsim 2\times$$g_{11}$ with $m_{a}\lesssim 1\,\mu$eV. This
result is sensitive to $E_{\rm c}$, as the predicted photons around 100 TeV
mainly come from the conversion channel
$\gamma\rightarrow a\rightarrow\gamma$. We therefore investigate how the
limits change if we alter the fiducial assumption on $E_{\rm c}$. We find
that for $E_{\rm c}=200$ TeV the probed ALP-parameter region expands to
$g_{a\gamma}\gtrsim 1\times$$g_{11}$ with $m_{a}\lesssim 1\,\mu$eV, while for
$E_{\rm c}=50$ TeV it shrinks to $g_{a\gamma}\gtrsim 3\times$$g_{11}$ with
$m_{a}\lesssim 1\,\mu$eV.
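The sensitivity to $E_{\rm c}$ follows directly from the assumed intrinsic shape, a power law with an exponential cutoff. A hedged sketch (the normalization and photon index below are placeholders, not fitted values):

```python
import math

# Sketch of the assumed intrinsic spectral shape: power law with an
# exponential cutoff. Normalization and index are placeholders, not the
# fitted values used in the paper.
def intrinsic_flux(E_TeV, norm=1.0, gamma=2.0, E_c=100.0):
    """dN/dE = N * E^-Gamma * exp(-E / E_c), energies in TeV."""
    return norm * E_TeV ** (-gamma) * math.exp(-E_TeV / E_c)

# At 100 TeV the predicted flux swings by a factor exp(1.5) ~ 4.5 between
# E_c = 50 TeV and E_c = 200 TeV, which is why the projected limits shift.
for E_c in (50.0, 100.0, 200.0):
    print(E_c, intrinsic_flux(100.0, E_c=E_c))
```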
The reasons we take $E_{\rm c}=100$ TeV are as follows. First, the spectra of
our sample are very hard and show no sign of a cutoff up to the highest
observed energies of about $10-20$ TeV. Second, if the spectrum is dominated
by emission of leptonic origin (there is evidence that most of the rapidly
variable emission has a leptonic origin), a cutoff above 100 TeV is possible.
Recent observations of the Crab Nebula at energies beyond 100 TeV show no
exponential cutoff below 100 TeV, which is usually interpreted in the
framework of leptonic models Amenomori2019 ; MAGIC2019 ; HAWC2019 . As
powerful cosmic particle accelerators Kotera2011 , some extreme TeV AGNs may
behave similarly. Third, AGNs are excellent candidates for Ultra-High-Energy
Cosmic Ray sources Mbarek2019 , and hadronic cosmic rays are capable of
producing spectra without a cutoff below 100 TeV if the VHE emission is
dominated by a hadronic origin CTA2017 . Determining the magnitude of
$E_{\rm c}$ without ambiguity will require further research on the intrinsic
physics of the $\gamma$-ray sources (including the parent particle species
and their spectral energy distribution, the radiation mechanism, and pair
attenuation in the emission region), as well as the forthcoming observations
above tens of TeV by CTA, LHAASO, SWGO and others.
The spectrum of IC 310 we adopted was observed by MAGIC during a huge flare
state that lasted only a few days, and it is unknown how often flares of this
strength or greater occur within a year. Even so, we simulate the
observations with the 1 yr and 5 yr sensitivities of LHAASO, which will be
able to continuously survey the TeV sky every day, to assess their
suitability for constraining ALPs. In this sense, the flux extrapolated from
this spectrum corresponds to a very high prediction, and the resulting ALP
limit should be treated as an optimistic estimate.
Figure 4: The photon survival probabilities $P_{\gamma\rightarrow\gamma}$ for
different values of the source magnetic field $B_{\rm s}$. The redshift is
$z=0.005$ and the magnetic-field region size is $r_{s}$=10 kpc. For $B_{\rm
s}$ in the range from several $\mu$G to 20 $\mu$G, the photon survival
probabilities are close up to around 200 TeV. The CM effect almost completely
suppresses the photon-ALP conversion for $B_{\rm s}=50\,\mu$G.
## VI Conclusion
In this article, we have discussed the potential of AGN gamma-ray spectra at
energies up to and above 100 TeV to probe the ALP parameter space at around
$\mu$eV masses, where the constraints on the coupling $g_{a\gamma}$ are so
far relatively weak.
In the case of conventional physics, most of the photons above tens of TeV
emitted from distant (distance$>$10 Mpc) AGNs would be absorbed by the
EBL/CMB during their travel to the Earth (see Figs. 1 and 2). However, many
more such photons, no matter how far away the source, could survive if the
photon-ALP conversions ($\gamma\rightarrow a\rightarrow\gamma$) take place in
the homogeneous source and Galactic magnetic fields for the typical values
$l$=10 kpc, $B_{\rm s}$=10 $\mu$G, and $B_{\rm GMF}$=1.23 $\mu$G.
Consequently, a very significant ALP-induced flux enhancement, shaped as a
peak, is expected to arise in the observed spectrum above tens of TeV (see
Fig 2). This gives the upcoming LHAASO a good chance to detect the
enhancement thanks to its unprecedented sensitivity above 30 TeV.
In order to acquire as many observations at tens of TeV as possible and thus
reduce the uncertainty from the intrinsic spectrum, nearby and bright
sources, such as Mrk 501, IC 310 and M 87, are recommended for constraining
ALPs around $\mu$eV. Assuming an intrinsic spectrum with an exponential
cutoff at a fixed energy $E_{\rm c}$=100 TeV, we have extrapolated the
observed spectra of our sample up to above 100 TeV with the models
with/without ALPs. For $g_{a\gamma}\gtrsim 2\times$$10^{-11}\rm GeV^{-1}$
with $m_{a}\lesssim 1\,\mu$eV, the flux at around 100 TeV predicted by the
ALP model can be more than an order of magnitude larger than that from
standard absorption, and the enhanced flux could be detected by LHAASO (see
Figs. 3 and 5).
Our result is subject to the uncertainty in the extrapolation of the
intrinsic spectrum above tens of TeV. Reducing it will require further
research on these sources (Mrk 501, IC 310 and M 87) based on the forthcoming
observations by CTA, LHAASO, SWGO and others.
###### Acknowledgements.
We would like to thank Weipeng Lin, P. H. T. Tam, Chengfeng Cai, Yu-Zhao Yu,
Seishi Enomoto, Yi-Lei Tang and Yu-Pan Zeng for useful discussions and
comments. This work is supported by the National Natural Science Foundation of
China (NSFC) under Grant No. 11875327, the Fundamental Research Funds for the
Central Universities, China, and the Sun Yat-Sen University Science
Foundation.
## References
* (1) P. Svrcek, E. Witten, Axions In String Theory. J. High Energy Phys. 0606. 051 (2006).
* (2) J. Jaeckel and A. Ringwald, The Low-Energy Frontier of Particle Physics. Ann. Rev. Nucl. Part. Sci. 60, 405 (2010).
* (3) R. D. Peccei and H. R. Quinn, CP Conservation in the Presence of Pseudoparticles. Phys. Rev. Lett. 38, 1440 (1977);
* (4) S. Weinberg, A New Light Boson?. Phys. Rev. Lett. 40, 223 (1978).
* (5) F. Wilczek, Problem of Strong P and T Invariance in the Presence of Instantons. Phys. Rev. Lett. 40, 279 (1978).
* (6) J. Preskill, M. B. Wise, and F. Wilczek, Cosmology of the invisible axion. Phys. Lett. B. 120, 127 (1983).
* (7) L. F. Abbott and P. Sikivie, A cosmological bound on the invisible axion. Phys. Lett. B. 120, 133 (1983).
* (8) M. Dine and W. Fischler, The Not So Harmless Axion. Phys. Lett. B. 120, 137 (1983).
* (9) J. E. Marsh, Axiverse extended: Vacuum destabilization, early dark energy, and cosmological collapse. Phys. Rev. D. 83, 123526 (2011).
* (10) G. Raffelt and L. Stodolsky, Mixing of the photon with low-mass particles. Phys. Rev. D. 37, 1237 (1988)
* (11) P. Sikivie, Experimental tests of the "invisible" axion. Phys. Rev. Lett. 51, 1415 (1983).
* (12) K. A. Hochmuth and G. Sigl, Effects of axion-photon mixing on gamma-ray spectra from magnetized astrophysical sources. Phys. Rev. D. 76, 123011 (2007).
* (13) A. de Angelis, O. Mansutti, and M. Roncadelli, Axion-like particles, cosmic magnetic fields and gamma-ray astrophysics. Phys. Lett. B. 659, 847 (2008)
* (14) D. Hooper and P. D. Serpico, Detecting Axionlike Particles with Gamma Ray Telescopes. Phys. Rev. Lett. 99, 231102 (2007).
* (15) A. De Angelis, M. Roncadelli and O. Mansutti, Evidence for a new light spin-zero boson from cosmological gamma-ray propagation?. Phys. Rev. D. 76, 121301(R) (2007).
* (16) A. Mirizzi and D. Montanino, Stochastic conversions of TeV photons into axion-like particles in extragalactic magnetic fields. J. Cosmol. Astropart. Phys. 12 (2009) 004.
* (17) D. Wouters and P. Brun, Irregularity in gamma ray source spectra as a signature of axionlike particles. Phys. Rev. D. 86, 043005 (2012).
* (18) R. Bahre _et al_. (ALPS), Any light particle search II: Technical Design Report. Journal of Instrumentation. 8, T09001 (2013).
* (19) S. Aune _et al_. (CAST), CAST search for sub-eV mass solar axions with 3He buffer gas. Phys. Rev. Lett. 107, 261302 (2011).
* (20) L. D. Duffy, P. Sikivie, D. B. Tanner, S. J. Asztalos, C. Hagmann, D. Kinion, L. J. Rosenberg, K. van Bibber, D. B. Yu, and R. F. Bradley (ADMX), High resolution search for dark-matter axions. Phys. Rev. D. 74, 012006 (2006).
* (21) M. Simet, D. Hooper, and P. D. Serpico, Milky Way as a kiloparsec-scale axionscope. Phys. Rev. D. 77, 063001 (2008).
* (22) M. A. Sanchez-Conde, D. Paneque, E. Bloom, F. Prada, and A. Dominguez, Hints of the existence of Axion-Like-Particles from the gamma-ray spectra of cosmological sources. Phys. Rev. D. 79, 123511 (2009).
* (23) A. De Angelis, G. Galanti, and M. Roncadelli, Relevance of axionlike particles for very-high-energy astrophysics. Phys. Rev. D. 84, 105030 (2011).
* (24) A. Dominguez, M. A. Sanchez-Conde, and F. Prada, Axion-like particle imprint in cosmological very-high-energy sources. J. Cosmol. Astropart. Phys. 11 (2011) 020.
* (25) A. De. Angelis, G. Galanti, and M. Roncadelli, Relevance of axion-like particles for very-high-energy astrophysics. Phys. Rev. D. 87, 109903(E) (2013).
* (26) F. Tavecchio, M. Roncadelli, G. Galanti et al., Evidence for an axion-like particle from PKS 1222+216? Phys. Rev. D. 86, 085036(E) (2012).
* (27) M. Meyer, D. Horns, and M. Raue, First lower limits on the photon-axion-like particle coupling from very high energy gamma-ray observations. Phys. Rev. D. 87, 035027 (2013).
* (28) M. Meyer, and J. Conrad, Sensitivity of the Cherenkov Telescope Array to the detection of axion-like particles at high gamma-ray opacities. J. Cosmol. Astropart. Phys. 12 (2014) 016.
* (29) S. Troitsky, Towards discrimination between galactic and intergalactic axion-photon mixing. Phys. Rev. D. 93, 045014 (2016).
* (30) D. Montanino, F. Vazza, A. Mirizzi et al., Enhancing the Spectral Hardening of Cosmic TeV Photons by Mixing with Axionlike Particles in the Magnetized Cosmic Web. Phys. Rev. Lett. 119, 101101 (2017).
* (31) R. Buehler, G. Gallardo, G. Maier, A. Dominguez, M. Lopez and M. Meyer. Search for the imprint of axion-like particles in the highest-energy photons of hard $\gamma$-ray blazars. J. Cosmol. Astropart. Phys. 09 (2020) 027.
* (32) A. I. Nikishov, Pair production by a constant external field. Sov. Phys. JETP 14, 393 (1962).
* (33) M. G. Hauser and E. Dwek, The Cosmic Infrared Background: Measurements and Implications. Annu. Rev. Astron. Astrophys. 39, 249 (2001).
* (34) F. Aharonian et al. (H.E.S.S. Collaboration), A low level of extragalactic background light as revealed by $\gamma$-rays from blazars. Nature(London) 440,1018(2006).
* (35) E. Dwek and A, Kusenko, The Extragalactic Background Light and the Gamma-ray Opacity of the Universe. Astropart. Phys. 43, 112 (2013).
* (36) L. Costamante, Gamma-rays from Blazars and the Extragalactic Background Light. Int. J. Mod. Phys. D. 22, 1330025(2013).
* (37) D. Horns, L. Maccione, M. Meyer, A. Mirizzi, D. Montanino, and M. Roncadelli, “Hardening of TeV gamma spectrum of active galactic nuclei in galaxy clusters by conversions of photons into axionlike particles,” Phys Rev D, 86, 075024(2012).
* (38) G. Galanti, M. Roncadelli, Behavior of axionlike particles in smoothed out domainlike magnetic fields. Phys. Rev. D. 98, 043018 (2018).
* (39) G. Galanti, F. Tavecchio, M. Roncadelli and C. Evoli, Blazar VHE spectral alterations induced by photon-ALP oscillations. Mon. Not. R. Astron. Soc. 487, 123 (2019).
* (40) A. Franceschini, G. Rodighiero, and M. Vaccari, Extragalactic optical-infrared background radiation, its time evolution and the cosmic photon-photon opacity. Astron. Astrophys. 487, 837(2008).
* (41) H. Abdalla et al. (CTA), Sensitivity of the Cherenkov Telescope Array for probing cosmology and fundamental physics with gamma-ray propagation. arXiv:2010.01349.
* (42) A. Abramowski, F. Acero, F. Aharonian, F. Ait Benkhali, A. G. Akhperjanian, E. Anguner et al., Constraints on axionlike particles with H.E.S.S. from the irregularity of the PKS 2155-304 energy spectrum, Phys. Rev. D. 88, 102003 (2013).
* (43) M. Ajello, A. Albert, B. Anderson, L. Baldini, G. Barbiellini, D. Bastieri et al., Search for Spectral Irregularities due to Photon-Axionlike-Particle Oscillations with the Fermi Large Area Telescope, Phys. Rev. Lett. 116,161101 (2016).
* (44) C. Zhang, Y. F. Liang, S. Li et al., New bounds on axionlike particles from the Fermi Large Area Telescope observation of PKS 2155-304. Phys. Rev. D. 97, 063009 (2018).
* (45) H. J. Li, J. G. Guo, X. J. Bi, S. J. Lin and P. F. Yin, Limits on axion-like particles from Mrk 421 with 4.5-years period observations by ARGO-YBJ and Fermi-LAT. arXiv:2008.09464.
* (46) M. Libanov and S. Troitsky, On the impact of magnetic-field models in galaxy clusters on constraints on axion-like particles from the lack of irregularities in high-energy spectra of astrophysical sources. Phys. Lett. B. 802, 135252 (2020).
* (47) P. Arias, D. Cadamuro, M. Goodsell, J. Jaeckel, J. Redondo and A. Ringwald, WISPy cold dark matter. J. Cosmol. Astropart. Phys. 6 (2012) 013.
* (48) E. Armengaud et al. (IAXO), Physics potential of the International Axion Observatory (IAXO). J. Cosmol. Astropart. Phys. 06 (2019) 047.
* (49) G. Sigl, Astrophysical haloscopes. Phys. Rev. D. 96, 103014 (2017).
* (50) T. D. P. Edwards, M. Chianese, B. J. Kavanagh, S. M. Nissanke and C. Weniger, Unique Multimessenger Signal of QCD Axion Dark Matter. Phys. Rev. Lett. 124, 161101 (2020).
* (51) A. Caputo, C. Peña-Garay and S. J. Witte, Looking for axion dark matter in dwarf spheroidal galaxies. Phys. Rev. D. 98, 083024 (2018).
* (52) O. Ghosh, J. Salvado and J. Miralda-Escude, Axion Gegenschein: Probing Back-scattering of Astrophysical Radio Sources Induced by Dark Matter. arXiv:2008.02729.
* (53) K. Kohri and H. Kodama, Axion-like particles and recent observations of the cosmic infrared background radiation. Phys. Rev. D. 96, 051701(R)(2017).
* (54) M. Meyer, D. Montanino and J. Conrad, On detecting oscillations of gamma rays into axion-like particles in turbulent and coherent magnetic fields. J. Cosmol. Astropart. Phys. 09 (2014) 003.
* (55) see http://tevcat.uchicago.edu/.
* (56) X. Bai et al. (LHAASO collaboration), The large high altitude air shower observatory (LHAASO) science white paper. arXiv:1905.02773.
* (57) F. Aharonian et al. (HESS collaboration), Observations of the Crab Nebula with H.E.S.S., Astron. Astrophys. 457, 899 (2006).
* (58) MAGIC collaboration, J. Aleksic et al., Performance of the MAGIC stereo system obtained with Crab Nebula data, Astropart. Phys. 35, 435 (2012).
* (59) J. Holder, V.A. Acciari, E. Aliu, T. Arlen, M. Beilicke et al., Status of the VERITAS Observatory, AIP Conf. Proc. 1085, 657 (2009) [arXiv:0810.0474].
* (60) M. S. Pshirkov, P. G. Tinyakov, and F. R. Urban, New limits on extragalactic magnetic fields from rotation measures. Phys. Rev. Lett. 116, 191302(2016).
* (61) E. Masaki, A. Aoki, and Jiro Soda, Photon-Axion Conversion, Magnetic Field Configuration, and Polarization of Photons. Phys. Rev. D. 96, 043519 (2017).
* (62) J. L. Han, Annu. Rev. Astron. Astrophys. 55, 1(2017).
* (63) F. Tavecchio, M. Roncadelli, and G. Galanti, Photons to axion-like particles conversion in Active Galactic Nuclei. Phys. Lett. B. 744, 375 (2015).
* (64) Y. G. Zheng, C. Y. Yang, L. Zhang, and J. C. Wang, Discerning the Gamma-Ray-emitting Region in the Flat Spectrum Radio Quasars. Astrophys. J. Suppl. Ser. 228, 1(2017).
* (65) S. J. Kang, L. Chen, and Q. w. Wu, Constraints on the minimum electron Lorentz factor and matter content of jets for a sample of bright Fermi blazars. Astrophys. J. Suppl. Ser. 215, 5(2014).
* (66) R. Xue, D. Luo, Z. R. Wang et al., Curvature of the spectral energy distribution, the inverse Compton component and the jet in Fermi 2LAC blazars. Mon. Not. R. Astron. Soc. 463, 3038 (2016).
* (67) S. P. O'Sullivan and D. C. Gabuzda, Magnetic field strength and spectral distribution of six parsec-scale active galactic nuclei jets. Mon. Not. R. Astron. Soc. 400, 26 (2009).
* (68) R. E. Pudritz, M. J. Hardcastle and D. C. Gabuzda, Magnetic fields in astrophysical jets: from launch to termination. Space Sci. Rev. 169, 27 (2012).
* (69) L. M. Widrow, Origin of Galactic and Extragalactic Magnetic Fields. Rev. Mod. Phys. 74, 775 (2002).
* (70) A. Dobrynina, A. Kartavtsev, and G. Raffelt, Photon-photon dispersion of TeV gamma rays and its role for photon-ALP conversion. Phys. Rev. D. 91, 083003 (2015).
* (71) R. C. Gilmore, R. S. Sommerville, J. R. Primack et al., Semi-analytic modelling of the extragalactic background light and consequences for extragalactic gamma-ray spectra. Mon. Not. R. Astron. Soc. 422, 3189 (2012).
* (72) M. Ackermann et al. (Fermi-LAT Collaboration), The Imprint of The Extragalactic Background Light in the Gamma-Ray Spectra of Blazars. Science. 338, 1190(2012).
* (73) A. Abramowski et al. (H.E.S.S. Collaboration). Measurement of the extragalactic background light imprint on the spectra of the brightest blazars observed with H.E.S.S. Astron. Astrophys. 550, A4 (2013).
* (74) A. U. Abeysekara et al. (VERITAS Collaboration), Gamma-rays from the quasar PKS 1441+ 25: Story of an escape. Astrophys. J. 815, L22(2015).
* (75) J. Biteau and D. A. Williams, The extragalactic background light, the Hubble constant, and anomalies: conclusions from 20 years of TeV gamma-ray observations. Astrophys. J. 812, 60(2015).
* (76) M. L. Ahnen et al. (MAGIC Collaboration), MAGIC observations of the February 2014 flare of 1ES 1011+496 and ensuing constraint of the EBL density. Astron. Astrophys. 590, A24(2016).
* (77) T. Armstrong, A. M. Brown, and P. M. Chadwick, Fermi-LAT high z AGN and the Extragalactic Background Light. Mon. Not. R. Astron. Soc. 470, 4089(2017).
* (78) Q. Yuan, H. L. Huang, X. J Bi, and H. H. Zhang, Measuring the extragalactic background light from very high energy gamma-ray observations of blazars. arXiv:1212.5866.
* (79) G. B. Long, W. P. Lin, P. H. T. Tam, and W. S. Zhu, Testing the CIBER cosmic infrared background measurements and axionlike particles with observations of TeV blazars. Phys. Rev. D. 101, 063004(2020).
* (80) A. Dominguez, J. R. Primack, D. J. Rosario et al., Extragalactic background light inferred from AEGIS galaxy-SED-type fractions. Mon. Not. R. Astron. Soc. 410, 2556(2011).
* (81) J. D. Finke, S. Razzaque, and C. D. Dermer, Modeling the extragalactic background light from stars and dust. Astrophys. J. 712, 238 (2010).
* (82) H. Abdalla et al. (H.E.S.S. Collaboration), Measurement of the EBL spectral energy distribution using the VHE $\gamma$-ray spectra of H.E.S.S. blazars. Astron. Astrophys. 606, A59(2017).
* (83) Y. Gong and A. Cooray, The extragalactic background light from the measurements of the attenuation of high-energy gamma-ray spectrum. Astrophys. J. Lett. 772, L12 (2013).
* (84) A. Mirizzi, G.G. Raffelt and P.D. Serpico, Signatures of axion-like particles in the spectra of TeV gamma-ray sources, Phys. Rev. D 76, 023001(2007).
* (85) A. Franceschini, L. Foffano, E. Prandini, and F. Tavecchio, Very high-energy constraints on the infrared extragalactic background light. Astron. Astrophys. 629, A2 (2019).
* (86) MAGIC collaboration, V. A. Acciari et al., Mon. Not. R. Astron. Soc. 492, 5354 (2020).
* (87) VERITAS collaboration, E. Aliu et al., VERITAS Observations of Day-scale Flaring of M 87 in 2010 April. Astrophys. J. 746, 141 (2011).
* (88) H.E.S.S. collaboration, F. Aharonian et al., Fast variability of TeV $\gamma$-rays from the radio galaxy M87. Science 314, 1424 (2006).
* (89) C. M. Urry and P. Padovani, Unified Schemes for Radio-Loud Active Galactic Nuclei. Publ. Astron. Soc. Pac. 107, 803 (1995).
* (90) MAGIC collaboration, M. L. Ahnen et al., First multi-wavelength campaign on the gamma-ray-loud active galaxy IC 310, Astron. Astrophys. 603, A25 (2017).
* (91) Fermi-LAT, MAGIC and VERITAS collaboration. Insights into the high-energy gamma-ray emission of Markarian 501 from extensive multifrequency observations in the Fermi era, Astrophys. J. 727, 129 (2011).
* (92) E. Lefa, S. R. Kelner, and F. A. Aharonian. On the spectral shape of radiation due to Inverse Compton Scattering close to the maximum cut-off. Astrophys. J. 753, 1176 (2012).
* (93) L. Stawarz and V. Petrosian, On the momentum diffusion of radiating ultrarelativistic electrons in a turbulent magnetic field. Astrophys. J. 681, 1725 (2008).
* (94) T. R. Lewis, P. A. Becker and J. D. Finke. Electron Acceleration in Blazars: Application to the 3C 279 Flare on 2013 December 20. Astrophys. J. 884, 116 (2019).
* (95) D. C. Warren, C. A. A. Beauchemin, M. V. Barkov, and S. Nagataki. The maximum energy of shock-accelerated electrons in a microturbulent magnetic field. arXiv:2010.06234.
* (96) M. Lemoine and M. A. Malkov. Powerlaw spectra from stochastic acceleration. Mon. Not. R. Astron. Soc. 499, 4972 (2020).
* (97) R. Xue, R.Y. Liu, M. Petropoulou, F. Oikonomou, Z. R.Wang, K. Wang, and X.Y. Wang. Powerlaw spectra from stochastic acceleration. Astrophys. J. 886, 23 (2019).
* (98) M. Amenomori et al. (Tibet AS$\gamma$ Collaboration), First Detection of Photons with Energy beyond 100 TeV from an Astrophysical Source. Phys. Rev. Lett. 123, 051101 (2019).
* (99) MAGIC collaboration, V. A. Acciari et al., MAGIC very large zenith angle observations of the Crab Nebula up to 100 TeV. Astron. Astrophys. 635, A158 (2019).
* (100) HAWC collaboration, A. U. Abeysekara et al., Measurement of the Crab Nebula Spectrum Past 100 TeV with HAWC. Astrophys. J. 881, 134 (2019).
* (101) K. Kotera and A. V. Olinto. The Astrophysics of Ultrahigh-Energy Cosmic Rays. Annu. Rev. Astron. Astrophys.49, 119 (2011).
* (102) R. Mbarek, D. Caprioli. Bottom-up Acceleration of Ultra-High-Energy Cosmic Rays in the Jets of Active Galactic Nuclei. Astrophys. J. 886, 8 (2019).
* (103) CTA collaboration, B.S. Acharya et al., Science with the Cherenkov Telescope Array. arXiv:1709.07997.
## Appendix A Fitting results for the other 12 spectra
Figure 5: Fitting and extrapolating the observations of M 87, IC 310 and Mrk
501. The meanings of the various symbols are the same as in Fig. 3.
# TDMSci: A Specialized Corpus for Scientific Literature Entity Tagging of
Tasks Datasets and Metrics
Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin and Debasis Ganguly
IBM Research Europe, Ireland
<EMAIL_ADDRESS>
###### Abstract
_Tasks_ , _Datasets_ and _Evaluation Metrics_ are important concepts for
understanding experimental scientific papers. However, most previous work on
information extraction for scientific literature focuses on abstracts only
and does not treat datasets as a separate type of entity Zadeh and Schumann
(2016); Luan et al. (2018). In this paper, we present a new
corpus that contains domain expert annotations for _Task (T), Dataset (D),
Metric (M)_ entities on 2,000 sentences extracted from NLP papers. We report
experiment results on TDM extraction using a simple data augmentation strategy
and apply our tagger to around 30,000 NLP papers from the ACL Anthology. The
corpus is made publicly available to the community for fostering research on
scientific publication summarization Erera et al. (2019) and knowledge
discovery.
## 1 Introduction
The past years have witnessed a significant growth in the number of scientific
publications and benchmarks in many disciplines. As an example, in the year
2019 alone, more than 170k papers were submitted to the pre-print repository
arXiv111https://arxiv.org/help/stats/2019_by_area and among them, close to 10k
papers were classified as NLP papers (i.e., cs.CL). Each experimental
scientific field, including NLP, will benefit from the massive increase in
studies, benchmarks, and evaluations, as they can provide ingredients for
novel scientific advancements.
However, researchers may struggle to keep track of all studies published in a
particular field, resulting in duplication of research, comparisons with old
or outdated benchmarks, and lack of progress. In order to tackle this problem,
recently there have been a few manual efforts to summarize the state-of-the-
art on selected subfields of NLP in the form of leaderboards that extract
tasks, datasets, metrics and results from papers, such as _NLP-progress_
222https://github.com/sebastianruder/NLP-progress or
_paperswithcode_.333https://paperswithcode.com But these manual efforts are
not sustainable over time for all NLP tasks.
Over the past few years, several studies and shared tasks have begun to tackle
the task of entity extraction from scientific papers. Augenstein et al. (2017)
formalized a task to identify three types of entities (i.e., _task, process,
material_) in scientific publications (SemEval 2017 task10). Gábor et al.
(2018) presented a task (SemEval 2018 task 7) on semantic relation extraction
from NLP papers. They provided a dataset of 350 abstracts and reused the entity
annotations from Zadeh and Schumann (2016). Recently, Luan et al. (2018)
released a corpus containing 500 abstracts with six types of entity
annotations. However, these corpora do not treat _Dataset_ as a separate type
of entity and most of them focus on the abstracts only.
In a previous study, we developed an IE system to extract {_task, dataset,
metric_} triples from NLP papers based on a small, manually created
task/dataset/metric (TDM) taxonomy Hou et al. (2019). In practice, we found
that a TDM knowledge base is required to extract TDM information and build NLP
leaderboards for a wide range of NLP papers. This can help researchers quickly
understand related literature for a particular task, or to perform comparable
experiments.
As a first step to build such a TDM knowledge base for the NLP domain, in this
paper we present a specialized English corpus containing 2,000 sentences taken
from the full text of NLP papers which have been annotated by domain experts
for three main concepts: Task (T), Dataset (D) and Metric (M). Based on this
corpus, we develop a TDM tagger using a novel data augmentation technique. In
addition, we apply this tagger to around 30,000 NLP papers from the ACL
Anthology and demonstrate its value to construct an NLP TDM knowledge graph.
We release our corpus at https://github.com/IBM/science-result-extractor.
## 2 Related Work
A lot of interest has been focused on information extraction from scientific
literature. SemEval 2017-task 10 Augenstein et al. (2017) proposed a new task
for the identification of three types of entities (Task, Method, and Material)
in a corpus of 500 paragraphs taken from open access journals. Based on
Augenstein et al. (2017) and Gábor et al. (2018), Luan et al. (2018) created
_SciERC_ , a dataset containing 500 scientific abstracts with annotations for
six types of entities and relations between them. Neither SemEval 2017-task 10
nor _SciERC_ treats “ _dataset_ ” as a separate entity type. Instead, their “
_material_ ” category comprises a much larger set of resource types, including
tools, knowledge resources, and bilingual dictionaries, as well as datasets.
In our work, we focus on “ _dataset_ ” entities that researchers use to
evaluate their approaches, because datasets are one of the three core
elements needed to construct leaderboards for NLP papers.
Concurrent to our work, Jain et al. (2020) develop a new corpus _SciREX_ which
contains 438 papers on different domains from _paperswithcode_. It includes
annotations for four types of entities (i.e., _Task, Dataset, Metric, Method_)
and the relations between them. The initial annotations were carried out
automatically using distant signals from _paperswithcode_. Later human
annotators performed necessary corrections to generate the final dataset.
_SciREX_ is the closest to our corpus in terms of entity annotations. In our
work, we focus on TDM entities that reflect the collectively shared views of
the NLP community, and our corpus is annotated by five experts who all have
5-10 years of NLP research experience.
## 3 Corpus Creation
### 3.1 Annotation Scheme
We developed an annotation scheme for annotating Task, Dataset, and Evaluation
Metric phrases in NLP papers. Our annotation guidelines444Please see the
appendix for the whole annotation scheme. are based on the scientific term
annotation scheme described in Zadeh and Schumann (2016). Different from
previous corpora Zadeh and Schumann (2016); Luan et al. (2018), we only
annotated factual and content-bearing entities. This is because we aim to
build a TDM knowledge base in the future and non-factual entities (e.g., _a
high-coverage sense-annotated corpus_ in Example 3.1) do not reflect the
collectively shared views of TDM entities in the NLP domain.
* In order to learn models for disambiguating a large set of content words, _a high-coverage sense-annotated corpus_ is required.
Following the above guidelines, we also do not annotate _anonymous entities_ ,
such as “ _this task_ ” or “ _the dataset_ ”. These entities are anaphors and
cannot be used independently, without context, to refer to any specific TDM
entities. In general, we choose to annotate TDM entities that normally have
specific names and whose meanings are usually consistent across different
papers. From this perspective, the TDM entities that we annotate are similar
to named entities, which are self-sufficient for identifying their referents.
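Since the annotated TDM entities behave like named entities, the corpus lends itself to standard BIO sequence tagging. A purely illustrative encoding (the sentence and tags below are invented, not taken from the corpus):

```python
# Purely illustrative BIO encoding of one annotated sentence; the sentence
# and tag spans are invented, not taken from the TDMSci corpus.
sentence = ["We", "report", "F1", "on", "the", "CoNLL-2003", "NER", "task", "."]
tags     = ["O", "O", "B-Metric", "O", "O", "B-Dataset", "B-Task", "I-Task", "O"]
assert len(sentence) == len(tags)
print(list(zip(sentence, tags)))
```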
### 3.2 Pilot Annotation Study
#### Data preparation.
For the pilot annotation study, we choose 100 sentences from the NLP-TDMS
corpus Hou et al. (2019). The corpus contains 332 NLP papers which are
annotated with triples of _{ Task, Dataset, Metric}_ on the document level. We
use string and substring match to extract a list of sentences from these
papers which are likely to contain the document level _Task, Dataset, Metric_
annotations. We then manually choose 100 sentences from this list following
three criteria: 1) the sentence should contain a valid mention of _Task_ ,
_Dataset_ , or _Metric_ ; 2) the sentences should come from as many different
papers as possible; and 3) there should be a balanced distribution of _task_
, _dataset_ , and _metric_ mentions across these sentences.
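The (sub)string-matching selection step can be sketched as follows; the function name and example labels are hypothetical, not the exact scripts used to build the corpus:

```python
# Hypothetical sketch of candidate-sentence selection: keep sentences whose
# text contains one of the paper's document-level Task/Dataset/Metric labels
# via case-insensitive substring match.
def candidate_sentences(sentences, tdm_labels):
    hits = []
    for sent in sentences:
        low = sent.lower()
        if any(label.lower() in low for label in tdm_labels):
            hits.append(sent)
    return hits

# invented example: document-level TDM labels and a few sentences
doc_labels = ["sentiment analysis", "IMDB", "accuracy"]
sents = [
    "We evaluate on the IMDB movie review dataset.",
    "Our model has two LSTM layers.",
    "Accuracy improves by 1.2 points on sentiment analysis.",
]
print(candidate_sentences(sents, doc_labels))
```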
#### Annotation agreement.
Four NLP domain experts annotated the same 100 sentences for a pilot
annotation study, following the annotation guidelines described above. All the
annotations were conducted using BRAT Stenetorp et al. (2012).
Inter-annotator agreement was calculated with a pairwise comparison between
annotators using _precision_ , _recall_ and _F-score_ on the exact match of
the annotated entities. In other words, two entities are considered matching
(true positive) if they have the same boundaries and are assigned to the same
label. We also calculate Fleiss’ kappa on a per token basis, comparing the
agreement of annotators on each token in the corpus. Table 1 lists the mean
F-score as well as the token-based Fleiss’ $\kappa$ value for each entity
type. Overall, we achieve high reliability for all categories.
| Mean F-score | Fleiss’ $\kappa$
---|---|---
| (EM) | (Token)
Task | 0.720 | 0.797
Dataset | 0.752 | 0.829
Metric | 0.757 | 0.896
Overall | 0.743 | 0.842
Table 1: Inter-annotator agreement.
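For reference, the token-based Fleiss' $\kappa$ reported in Table 1 can be computed as sketched below; the implementation is a generic textbook version and the toy labels are illustrative, not the actual corpus annotations:

```python
# Generic token-level Fleiss' kappa (textbook formulation); the toy labels
# below are illustrative, not the actual corpus data.
def fleiss_kappa(label_matrix):
    """label_matrix: one row per token, one entry per annotator (category labels)."""
    n_raters = len(label_matrix[0])
    categories = sorted({lab for row in label_matrix for lab in row})
    counts = [[row.count(c) for c in categories] for row in label_matrix]
    # per-token observed agreement P_i, averaged into P_bar
    P_i = [(sum(n * n for n in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    P_bar = sum(P_i) / len(P_i)
    # chance agreement P_e from the overall category proportions
    total = n_raters * len(label_matrix)
    p_j = [sum(row[j] for row in counts) / total for j in range(len(categories))]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1.0 - P_e)

# four annotators labelling five tokens with BIO-style TDM tags
tokens = [
    ["B-Task", "B-Task", "B-Task", "B-Task"],
    ["I-Task", "I-Task", "I-Task", "O"],
    ["O", "O", "O", "O"],
    ["B-Metric", "B-Metric", "O", "B-Metric"],
    ["O", "O", "O", "O"],
]
print(round(fleiss_kappa(tokens), 3))
```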
#### Adjudication.
The final step of the pilot annotation was to reconcile disagreements among
the four annotators to produce the final canonical annotation. This step also
allows us to refine the annotation guidelines. Specifically, through the
discussion of annotation disagreements we could identify ambiguities and
omissions in the guidelines. For example, one point of ambiguity was whether a
task must be associated with a dataset, or whether we can annotate higher-level
tasks, e.g., sequence labeling, which do not have a dedicated dataset but may
subsume several tasks and datasets. This discussion also revealed the overlap in how
we refer to tasks and datasets in the literature. As authors we frequently use
these interchangeably, often with shared tasks, e.g., “ _SemEval-07 task 17_ ”
seems to more often refer to a dataset than a specific instance of the
(Multilingual) Word Sense Disambiguation task, or the “ _MultiNLI_ ” corpus is
sometimes used as shorthand for the task. After the discussion, we agreed that
we should annotate higher level tasks. In addition, we should assign labels to
entities according to their actual referential meanings in contexts.
### 3.3 Main Annotation
After the pilot study, 1,900 additional sentences were annotated by five NLP
researchers. Four annotators participated in the pilot annotation study, and
all annotators joined the adjudication discussion. Note that each annotator
annotated a different set of sentences: the annotator who designed the
annotation scheme annotated 700 sentences, and the other four annotators
annotated 300 sentences each.555Due to time constraints, we did not carry out
another round of the pilot study, partly because we felt that the revised
guidelines resulting from the discussion were sufficient for the annotators to
decide ambiguous cases. In the second stage, annotators therefore annotated
disjoint sets of sentences. Afterwards, the annotator who designed the
annotation scheme went
through the whole corpus again to verify the annotations.
Most sentences in our corpus do not come from the abstracts. The goal of
developing our corpus is to automatically build an NLP TDM taxonomy and use it
to tag NLP papers. The inclusion of sentences from the whole paper, rather
than only the abstract, is therefore important for our purpose, because not
all abstracts mention all three elements. For instance, among the top ten
papers listed in the {_sentiment analysis, IMDB, accuracy_} leaderboard in
_paperswithcode_
666https://paperswithcode.com/sota/sentiment-analysis-on-imdb, the search was
carried out in November 2020., only four abstracts mention the dataset “
_IMDB_ ”. If we only focused on the abstracts, we would miss the other six
papers from the leaderboard.
| | Train | Test |
|---|---|---|
| # Sentences | 1500 | 500 |
| # Task | 1219 | 396 |
| # Dataset | 420 | 192 |
| # Metric | 536 | 174 |

Table 2: Statistics of task/dataset/metric mentions in the training and testing datasets.

| | _CRF_ | | | _CRF w/ gazetteer_ | | | _SciIE_ | | | _Flair-TDM_ | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | P | R | F | P | R | F | P | R | F | P | R | F |
| _Original training data_ | | | | | | | | | | | | |
| Task | 63.79 | 46.72 | 53.94 | 61.86 | 45.45 | 52.40 | 69.23 | 54.55 | 61.02 | 61.54 | 54.55 | 57.83 |
| Dataset | 65.42 | 36.46 | 46.82 | 65.45 | 37.50 | 47.68 | 66.97 | 38.02 | 48.50 | 52.66 | 46.35 | 49.30 |
| Metric | 80.00 | 66.67 | 72.73 | 80.95 | 68.39 | 74.14 | 77.99 | 71.26 | 74.47 | 76.33 | 74.14 | 75.22 |
| Micro-avg. | 68.45 | 48.69 | 56.90 | 67.70 | 48.69 | 56.64 | 71.21 | 54.20 | 61.55 | 62.99 | 56.96 | 59.79 |
| _Original training data + augmented masked training data_ | | | | | | | | | | | | |
| Task | 63.24 | 43.43 | 51.50 | 62.96 | 42.93 | 51.05 | 68.63 | 55.81 | 61.56 | 65.14 | 53.79 | 58.92 |
| Dataset | 62.38 | 32.81 | 43.00 | 64.71 | 34.38 | 44.90 | 55.43 | 50.52 | 52.86 | 59.15 | 50.52 | 54.50 |
| Metric | 80.15 | 62.64 | 70.32 | 79.29 | 63.79 | 70.70 | 76.83 | 72.41 | 74.56 | 79.63 | 74.14 | 76.79 |
| Micro-avg. | 67.58 | 45.14 | 54.13 | 67.77 | 45.54 | 54.47 | 67.17 | 58.27 | 62.40 | 67.23 | 57.61 | 62.05 |

Table 3: Results of different models for _task/dataset/metric_ entity
recognition on the _TDMSci_ test dataset.
## 4 A TDM Entity Tagger
Our final corpus _TDMSci_ contains 2,000 sentences with 2,937 mentions of the
three entity types. We convert the original BRAT annotations to the standard
CoNLL format using the BIO scheme.777Note that our BRAT annotation contains a
small number of embedded entities, e.g., _WSJ portion of Ontonotes_ and
_Ontonotes_; we only keep the longest span when converting the BRAT
annotations to the CoNLL format. We then develop a tagger to extract TDM
entities based on this corpus.
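The conversion step above (character-span annotations to token-level BIO tags, keeping only the longest span for embedded entities) can be sketched in a few lines. This is illustrative code, not the authors' implementation; the input format is an assumption:

```python
# Sketch: convert character-span entity annotations (as produced by BRAT)
# to token-level BIO tags, keeping only the longest span when entities are
# embedded, as described in the footnote above.

def spans_to_bio(tokens, spans):
    """tokens: list of (start, end, text); spans: list of (start, end, label)."""
    # Drop any span fully contained in a longer one (keep the longest).
    kept = [s for s in spans
            if not any(o != s and o[0] <= s[0] and s[1] <= o[1] for o in spans)]
    tags = ["O"] * len(tokens)
    for start, end, label in kept:
        inside = [i for i, (ts, te, _) in enumerate(tokens)
                  if ts >= start and te <= end]
        for j, i in enumerate(inside):
            tags[i] = ("B-" if j == 0 else "I-") + label
    return tags
```

For the embedded example from the footnote, the shorter _Ontonotes_ span is discarded and only the enclosing dataset mention is tagged.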
### 4.1 Experimental Setup
To evaluate the performance of our tagger, we split _TDMSci_ into training and
testing sets, which contain 1,500 and 500 sentences, respectively. Table 2
shows the statistics of task/dataset/metric mentions in these two sets.
For evaluation, we report precision, recall, F-score on exact match for each
entity type as well as micro-averaged precision, recall, F-score for all
entities.
### 4.2 Models
We model the task as a sequence tagging problem. We apply a traditional CRF
model Lafferty et al. (2001) with various lexical features and a BiLSTM-CRF
model for this task. To compare with the state-of-the-art entity extraction
model on scientific literature, we also use _SciIE_ from Luan et al. (2018) to
train a _TDM_ entity recognition model based on our training data. Below we
describe all models in detail.
#### CRF.
We use the Stanford CRF implementation Finkel et al. (2005) to train a _TDM_
NER tagger based on our training data. We use the following features: unigrams
of the previous, current and next words, current word character n-grams,
current POS tag, surrounding POS tag sequence, current word shape, surrounding
word shape sequence.
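As an illustration of these feature types, the sketch below builds a few of them (unigram context, character n-grams, word shape). This is a hedged example, not the Stanford CRF implementation, and the function names are invented:

```python
# Illustrative sketch of some lexical features listed above (not the exact
# Stanford CRF feature set): previous/current/next word unigrams, character
# n-grams of the current word, and a simple word-shape feature.

def word_shape(w):
    # Map characters to X/x/d classes, e.g. "BLEU-4" -> "XXXX-d"
    return "".join("X" if c.isupper() else "x" if c.islower()
                   else "d" if c.isdigit() else c for c in w)

def features(tokens, i, n=3):
    w = tokens[i]
    feats = {
        "w": w,
        "prev": tokens[i - 1] if i > 0 else "<BOS>",
        "next": tokens[i + 1] if i < len(tokens) - 1 else "<EOS>",
        "shape": word_shape(w),
    }
    for k in range(len(w) - n + 1):  # character n-grams of the current word
        feats[f"ng:{w[k:k + n]}"] = 1
    return feats
```

A real CRF toolkit would additionally include POS tags and surrounding shape sequences, as listed above.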
#### CRF with gazetteers.
To test whether the above CRF model can benefit from knowledge resources, we
add two gazetteers to the feature set: one is a list containing around 6,000
dataset names which were crawled from LRE
Map,888http://www.elra.info/en/catalogues/lre-map/ and another gazetteer
comprises around 30 common evaluation metrics compiled by the authors.
#### SciIE.
Luan et al. (2018) proposed a multi-task learning system to extract entities
and relations from scientific articles. _SciIE_ is based on span
representations using ELMo Peters et al. (2018) and here we adapt it for _TDM_
entity extraction. Note that if _SciIE_ predicts several embedded entities, we
keep the one with the highest confidence score; in practice, we observed that
this did not occur on our corpus.
#### Flair-TDM.
For the BiLSTM-CRF model, we use the recent _Flair_ framework Akbik et al. (2018)
based on the cased BERT-base embeddings Devlin et al. (2018). We train our
_Flair-TDM_ model with a learning rate of 0.1, a batch size of 32, a hidden
size of 768, and a maximum of 150 epochs.
### 4.3 Data Augmentation
For TDM entity extraction, we expect that the surrounding context will play an
important role. For instance, in the following sentence “we show that for X on
the Y, our model outperforms the prior state-of-the-art”, one can easily guess
that X is a task entity while Y is a dataset entity. As a result, we propose a
simple data augmentation strategy that generates additional masked training
data by replacing every token within an annotated TDM entity with UNK.
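A minimal sketch of this masking strategy, assuming token-level BIO tags from the CoNLL conversion; `mask_entities` is an illustrative name:

```python
# Sketch of the masking strategy described above: every token inside an
# annotated TDM entity is replaced by UNK, producing an extra masked copy
# of each training sentence.

def mask_entities(tokens, bio_tags, unk="UNK"):
    return [unk if tag != "O" else tok for tok, tag in zip(tokens, bio_tags)]

tokens = ["our", "model", "outperforms", "the", "baseline", "on", "MultiNLI"]
tags   = ["O", "O", "O", "O", "O", "O", "B-Dataset"]
masked = mask_entities(tokens, tags)
# the augmented corpus contains both the original and the masked sentence
```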
Figure 1: A subset of the _TDM_ graph.
### 4.4 Results and Discussion
Table 3 shows the performance of different models for _task/dataset/metric_
entity recognition on our testing dataset.
First, although adding gazetteers helps the CRF model detect _dataset_ and
_metric_ entities slightly better, the positive effect is limited. In general,
both _SciIE_ and _Flair-TDM_ outperform the _CRF_ models for detecting all
three types of entities.
Second, augmenting the original training data with the additional masked data
described in Section 4.3 further improves the performance of both _SciIE_ and
_Flair-TDM_. However, this is not the case for the CRF models; we assume this
is because the CRF models depend heavily on lexical features.
Finally, we randomly sampled 100 sentences from the testing dataset and
compared the TDM entities predicted by _Flair-TDM_ against the gold
annotations. Most errors stem from boundary mismatches for task and dataset
entities, e.g., _text summarization_ vs. _abstractive text summarization_ , or
_Penn Treebank_ vs. _Penn Treebank dataset_. The latter error comes from a
bias in the training data: many researchers use “ _Penn Treebank_ ” to refer
to a dataset, so the model learns this bias and tags only “ _Penn Treebank_ ”
as the dataset even when a specific testing sentence uses “ _Penn Treebank
dataset_ ” to refer to the same corpus.
In general, we think these mismatched predictions are reasonable in the sense
that they capture the main semantics of the referents. Note that the numbers
reported in Table 3 are based on exact match, and requiring exact match may
sometimes be too restrictive for downstream tasks. Therefore, we carried out an
additional evaluation of the best _Flair-TDM_ model using the partial-match
scheme from SemEval 2013-Task 9 Segura-Bedmar et al. (2013), which yields a
micro-averaged F1 of 76.47 for type partial match.
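The idea behind type-based partial matching can be sketched as follows. This is a simplified reading of the SemEval-2013 Task 9 scheme, not its official scorer: a prediction counts if its label matches a gold entity and the spans overlap.

```python
# Hedged sketch of "type" partial matching: a predicted entity is correct
# if its label matches some gold entity and their character spans overlap
# (exact boundaries are not required).

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def partial_match_f1(gold, pred):
    """gold, pred: lists of (start, end, label)."""
    tp_pred = sum(1 for p in pred
                  if any(p[2] == g[2] and overlaps(p, g) for g in gold))
    prec = tp_pred / len(pred) if pred else 0.0
    tp_gold = sum(1 for g in gold
                  if any(g[2] == p[2] and overlaps(g, p) for p in pred))
    rec = tp_gold / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

Under this metric, predicting _Penn Treebank_ for a gold _Penn Treebank dataset_ counts as a hit, which matches the boundary-mismatch discussion above.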
## 5 An Initial TDM Knowledge Graph
In this section, we apply the _Flair-TDM_ tagger to around 30,000 NLP papers
from ACL Anthology to build an initial TDM knowledge graph.
We downloaded all NLP papers from 1974 to 2019 that belong to ACL from the ACL
Anthology999https://www.aclweb.org/anthology/. For each paper, we collect
sentences from the title, the abstract/introduction/dataset/corpus/experiment
sections, as well as from the table captions. We then apply the _Flair-TDM_
tagger to these sentences. Based on the tagger results, we build an initial
graph $G$ using the following steps:
* •
add a _TDM_ entity as a node into $G$ if it appears at least five times in
more than one paper;
* •
create a link between a _task_ node and a _dataset/metric_ node if they appear
in the same sentence at least five times in different papers.
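The two steps above can be sketched as follows, using the thresholds stated in the text. The code and its input format are illustrative assumptions, not the authors' pipeline:

```python
# Sketch of the graph-construction heuristic above: nodes are entities seen
# at least five times across more than one paper; edges link a task to a
# dataset/metric co-occurring in the same sentence at least five times
# across different papers.
from collections import Counter, defaultdict

def build_graph(tagged_papers, min_count=5):
    """tagged_papers: list of papers; each paper is a list of sentences;
    each sentence is a list of (mention, entity_type) pairs."""
    counts, papers_seen = Counter(), defaultdict(set)
    edge_counts, edge_papers = Counter(), defaultdict(set)
    for pid, paper in enumerate(tagged_papers):
        for sent in paper:
            for mention, etype in sent:
                counts[(mention, etype)] += 1
                papers_seen[(mention, etype)].add(pid)
            tasks = [m for m, t in sent if t == "Task"]
            others = [m for m, t in sent if t in ("Dataset", "Metric")]
            for t in tasks:
                for o in others:
                    edge_counts[(t, o)] += 1
                    edge_papers[(t, o)].add(pid)
    nodes = {e for e, c in counts.items()
             if c >= min_count and len(papers_seen[e]) > 1}
    edges = {e for e, c in edge_counts.items()
             if c >= min_count and len(edge_papers[e]) > 1}
    return nodes, edges
```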
By applying the above simple process, we obtain a noisy _TDM_ knowledge graph
containing 180k nodes and 270k links. After checking a few dense areas, we
find that our graph encodes valid knowledge about NLP tasks, datasets, and
metrics. Figure 1 shows that, in our graph, the task “SRL” (semantic role
labelling) is connected to datasets such as “FrameNet”, “PropBank”, and
“NomBank”, which are standard benchmarks for this task.
Based on the tagged ACL Anthology and this initial noisy graph, we are
exploring various methods to build a large-scale NLP TDM knowledge graph and
to evaluate its accuracy and coverage in ongoing work.
## 6 Conclusion
In this paper, we have presented a new corpus (_TDMSci_) annotated for three
important concepts (_Task/Dataset/Metric_) that are necessary for extracting
the essential information from an NLP paper. Based on this corpus, we have
developed a _TDM_ tagger using a simple but effective data augmentation
strategy. Experiments on 30,000 NLP papers show that our corpus together with
the _TDM_ tagger can help to build _TDM_ knowledge resources for the NLP
domain.
## References
* Akbik et al. (2018) Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In _COLING 2018, 27th International Conference on Computational Linguistics_ , pages 1638–1649.
* Augenstein et al. (2017) Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum. 2017. Semeval 2017 task 10: Scienceie - extracting keyphrases and relations from scientific publications. In _Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)_ , pages 546–555. Association for Computational Linguistics.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. _CoRR_ , abs/1810.04805.
* Erera et al. (2019) Shai Erera, Michal Shmueli-Scheuer, Guy Feigenblat, Ora Peled Nakash, Odellia Boni, Haggai Roitman, Doron Cohen, Bar Weiner, Yosi Mass, Or Rivlin, Guy Lev, Achiya Jerbi, Jonathan Herzig, Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, Francesca Bonin, and David Konopnicki. 2019. A summarization system for scientific documents. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations_ , pages 211–216, Hong Kong, China. Association for Computational Linguistics.
* Finkel et al. (2005) Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In _Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)_ , pages 363–370, Ann Arbor, Michigan. Association for Computational Linguistics.
* Gábor et al. (2018) Kata Gábor, Davide Buscaldi, Anne-Kathrin Schumann, Behrang QasemiZadeh, Haïfa Zargayouna, and Thierry Charnois. 2018. Semeval-2018 task 7: Semantic relation extraction and classification in scientific papers. In _Proceedings of The 12th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT, New Orleans, Louisiana, June 5-6, 2018_ , pages 679–688.
* Hou et al. (2019) Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, and Debasis Ganguly. 2019. Identification of tasks, datasets, evaluation metrics, and numeric scores for scientific leaderboards construction. In _Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers_ , pages 5203–5213.
* Jain et al. (2020) Sarthak Jain, Madeleine van Zuylen, Hannaneh Hajishirzi, and Iz Beltagy. 2020. SciREX: A challenge dataset for document-level information extraction. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7506–7516, Online. Association for Computational Linguistics.
* Lafferty et al. (2001) John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In _Proceedings of the Eighteenth International Conference on Machine Learning_ , ICML ’01, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
* Luan et al. (2018) Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 3219–3232. Association for Computational Linguistics.
* Peters et al. (2018) Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics.
* Segura-Bedmar et al. (2013) Isabel Segura-Bedmar, Paloma Martínez, and María Herrero-Zazo. 2013. SemEval-2013 task 9 : Extraction of drug-drug interactions from biomedical texts (DDIExtraction 2013). In _Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)_ , pages 341–350, Atlanta, Georgia, USA. Association for Computational Linguistics.
* Stenetorp et al. (2012) Pontus Stenetorp, Sampo Pyysalo, Goran Topić, Tomoko Ohta, Sophia Ananiadou, and Jun’ichi Tsujii. 2012. brat: a web-based tool for NLP-assisted text annotation. In _Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics_ , pages 102–107, Avignon, France. Association for Computational Linguistics.
* Zadeh and Schumann (2016) Behrang Q. Zadeh and Anne-Kathrin Schumann. 2016. The acl rd-tec 2.0: A language resource for evaluating term extraction and entity recognition methods. In _LREC_.
## Appendix A TDM Entity Annotation Guidelines
### A.1 Introduction
This scheme describes guidelines for annotating _Task_ , _Dataset_ , and
_Evaluation Metric_ phrases in NLP papers. We have pre-processed NLP papers in
PDF format and chosen sentences that are likely to contain the above-mentioned
entities for annotation. These sentences may come from different sections
(e.g., Abstract, Introduction, Experiment, Dataset) as well as tables (e.g.,
table captions).
### A.2 Entity Types
We annotate the following three entity types:
* •
Task: A task is a problem to solve (e.g., _information extraction_ ,
_sentiment classification_ , _dialog state tracking_ , _POS tagging_ , _NER_).
* •
Dataset: A dataset is a specific corpus or language resource. Datasets are
often used to develop models or run experiments for NLP tasks. A dataset
normally has a short name, e.g., _IMDB_ , _Gigaword_.
* •
Metric: An evaluation metric explains the performance of a model for a
specific task, e.g., _BLEU_ (for machine translation), or _accuracy_ (for a
range of NLP tasks).
### A.3 Notes and Examples
Entity spans. Particular attention must be paid to the entity spans in order
to improve agreement. The following list indicates all the annotation
directions that annotators have been given regarding entity spans. Table 4
shows examples of correct span annotation.
* •
Following the ACL RD-TEC 2.0 annotation
guideline,101010https://github.com/languagerecipes/acl-rd-
tec-2.0/blob/master/distribution/documents/acl-rd-tec-guidelines-ver2.pdf
determiners should not be part of an entity span. For example, in the string
‘the text8 test set’, only the span ‘text8’ is annotated as a dataset.
* •
Minimum span principle: Annotators should annotate only the minimum span
necessary to represent the original meaning of task/dataset/metric. See Table
4, rows 1,2,3,4.
* •
Include ‘corpus/dataset/benchmark’ when annotating dataset if these tokens are
the head-noun of the dataset entity. For example: ‘ubuntu corpus’,
‘SemEval-2010 Task 8 dataset’.
* •
Exclude the head noun ‘task/problem’ when annotating a task (e.g., annotate
only “link prediction” for “the link prediction problem”) unless it is an
essential part of the task name itself (e.g., CoNLL-2012 shared task,
SemEval-2010 relation classification task).
* •
Conjunction: if the conjoined NP is elliptical, annotate the whole phrase
(see Table 4, rows 6, 11); otherwise, annotate the conjuncts separately (see
Table 4, row 5).
* •
Tasks can occur as premodifiers (see Table 4, rows 7, 8, 12).
* •
Embedded spans: Normally TDM entities do not contain any other TDM entities. A
small number of _Task_ and _Dataset_ entities can contain other entities (see
Table 4, row 12).
| Row | Phrase | Annotation | Entity |
|---|---|---|---|
| 1 | The public Ubuntu Corpus | Ubuntu Corpus | Dataset |
| 2 | the web portion of TriviaQA | web portion of TriviaQA | Dataset |
| 3 | sentiment classification of movie reviews | sentiment classification | Task |
| 4 | the problem of part-of-speech tagging for informal, online conversational text | part-of-speech tagging | Task |
| 5 | The FB15K and WN18 datasets | FB15K; WN18 | Dataset |
| 6 | Hits at 1, 3 and 10 | Hits at 1, 3 and 10 | Metric |
| 7 | Link prediction benchmarks | Link prediction | Task |
| 8 | POS tagging accuracy | POS tagging; accuracy | Task, Metric |
| 9 | the third Dialogue State Tracking Challenge | Dialogue State Tracking; third Dialogue State Tracking Challenge | Task, Dataset |
| 10 | SemEval-2017 Task 9 | SemEval-2017 Task 9 | Task |
| 11 | temporal and causal relation extraction and classification | temporal and causal relation extraction and classification | Task |
| 12 | the SemEval-2010 Task 8 dataset | SemEval-2010 Task 8 dataset; SemEval-2010 Task 8 | Dataset, Task |

Table 4: Examples of entity span annotation guidelines.
#### Anonymous entities.
Do not annotate anonymous entities, which include anaphors. The following
examples are anonymous entities:
* •
_this task_
* •
_this metric_
* •
_the dataset_
* •
_a public corpus for context-sensitive response selection_ in the sentence
“Experimental results on a public corpus for context-sensitive response
selection demonstrate the effectiveness of the proposed multi-view model.”
#### Abbreviation.
If both the full name and the abbreviation are present in the sentence,
annotate the abbreviation with its corresponding full name together. For
instance, we annotate “20-newsgroup (20NG)” as a dataset entity in Example
A.3.
#### Factual entity.
Only annotate “factual, content-bearing” entities. Task, dataset, and metric
entities normally have specific names and their meanings are consistent across
different papers. In Example A.3, “ _a high-coverage sense-annotated corpus_ ”
is not a factual entity.
* We used four datasets: IMDB, Elec, RCV1, and 20-newsgroups (20NG) to facilitate direct comparison with DL15.
* In order to learn models for disambiguating a large set of content words, a high-coverage sense-annotated corpus is required.
# QFold: Quantum Walks and Deep Learning to Solve Protein Folding
P. A. M. Casares<EMAIL_ADDRESS>Departamento de Física Teórica, Universidad Complutense de Madrid.
Roberto Campos<EMAIL_ADDRESS>Departamento de Física Teórica, Universidad Complutense de Madrid; Quasar Science Resources, SL.
M. A. Martin-Delgado<EMAIL_ADDRESS>Departamento de Física Teórica, Universidad Complutense de Madrid; CCS-Center for Computational Simulation, Universidad Politécnica de Madrid.
###### Abstract
We develop quantum computational tools to predict how proteins fold in 3D, one
of the most important problems in current biochemical research. We explain how
to combine recent deep learning advances with the well known technique of
quantum walks applied to a Metropolis algorithm. The result, QFold, is a fully
scalable hybrid quantum algorithm that in contrast to previous quantum
approaches does not require a lattice model simplification and instead relies
on the much more realistic assumption of parameterization in terms of torsion
angles of the amino acids. We compare it with its classical analog for
different annealing schedules and find a polynomial quantum advantage, and
validate a proof-of-concept realization of the quantum Metropolis on the IBMQ
Casablanca quantum processor.
## I Introduction
Proteins are complex biomolecules, made up of one or several chains of amino
acids, with a large variety of functions in organisms. Amino acids are 20
compounds built from amine $(-NH_{2})$ and carboxyl $(-COOH)$ groups, with a
side chain that differentiates them. However, the function of a protein is
determined not only by the amino acid chain, which is relatively simple to
obtain experimentally, but also by its spatial folding, which is much more
challenging and expensive to determine in a laboratory. In fact, the task is
so complicated that the
gap between proteins whose sequence is known and those for which the folding
structure has been additionally analyzed is three orders of magnitude: there
are over 200 million sequences available at the UniProt database
uniprot2019uniprot , but just over 172 thousand whose structure is known, as
given in the Protein Data Bank PDB . Furthermore, experimental techniques
cannot always analyse the three-dimensional configuration of proteins, giving
rise to the so-called dark proteome perdigao2015darkproteome , which
represents a significant fraction of many organisms, including humans
bhowmick2016darkproteome ; perdigao2019dark . There are even proteins with
several stable foldings bryan2010metamorphicproteins , and others, called
Intrinsically Disordered, with no stable folding at all
dunker2001intrinsically .
Since proteins are such cornerstone biomolecules, and determining their
folding is so complicated, the problem of protein folding is widely regarded
as one of the most important and difficult problems in computational
biochemistry, and has motivated research for decades. Having an efficient and
reliable computational
procedure to guess their structure would therefore represent a large boost for
biochemical research.
Until recently, one of the most popular approaches to fold proteins was to
apply a Metropolis algorithm parameterised in terms of the torsion angles, as
is done for example in the popular library package Rosetta Rosetta and the
distributed computing project Rosetta@Home Rosetta@home ; das2007rosetta@home
. The main problem with this approach, though, is that the problem is
combinatorial in nature, and NP-complete even for simple models hart1997robust
; berger1998protein . For this reason, other approaches are also worth
exploring. In the 2018 edition of the Critical Assessment of Techniques for
Protein Structure Prediction (CASP) competition CASP , for example, the winner
was DeepMind’s AlphaFold model AlphaFold , which showed that Deep Learning
techniques can obtain much better results. DeepMind’s approach consisted of
training a neural network to produce a mean-field potential, dependent on the
distances between amino acids and the torsion angles, that can later be
minimized by gradient descent.
In this article we study how Quantum Computing could help improve the state of
the art in this problem when large error-corrected quantum computers become
available. We propose using the prediction of AlphaFold as a starting point
for a quantum Metropolis-Hastings algorithm. The Metropolis algorithm is a
Markov-chain Monte Carlo algorithm, that is, an algorithm that performs a
random walk $\mathcal{W}$ over a given graph. The Metropolis algorithm is
specially designed to quickly reach the equilibrium state, the state
$\pi^{\beta}$ such that $\mathcal{W}\pi^{\beta}=\pi^{\beta}$. Slowly modifying
the inverse temperature parameter $\beta$ such that the states with smaller
energy become increasingly favoured by the random walk, we should end in the
ground state of the system with high probability.
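The annealing loop just described can be sketched with a toy classical Metropolis over discretized angles. This is illustrative only, not QFold itself; `toy_energy` is a stand-in for the Psi4-computed energies used in the paper:

```python
# Toy sketch of a classical Metropolis annealing run over discretized
# torsion angles: downhill moves are always accepted, uphill moves with
# probability exp(-beta * delta), while beta = 1/T is slowly increased.
import math
import random

def metropolis_anneal(energy, n_angles, n_bins, betas, rng):
    state = [rng.randrange(n_bins) for _ in range(n_angles)]
    for beta in betas:  # the schedule slowly increases beta
        # propose rotating one randomly chosen angle by one discretization step
        i = rng.randrange(n_angles)
        proposal = list(state)
        proposal[i] = (proposal[i] + rng.choice([-1, 1])) % n_bins
        delta = energy(proposal) - energy(state)
        # Metropolis acceptance rule: min(1, exp(-beta * delta))
        if delta <= 0 or rng.random() < math.exp(-beta * delta):
            state = proposal
    return state

# toy landscape with its minimum at angle index 0 for every angle
def toy_energy(state, n_bins=8):
    return sum(min(x, n_bins - x) for x in state)

rng = random.Random(0)
schedule = [0.01 * step for step in range(1, 3001)]  # linear beta schedule
final = metropolis_anneal(toy_energy, n_angles=2, n_bins=8, betas=schedule, rng=rng)
```

The paper compares such classical schedules against their quantum-walk counterparts; here only the classical loop is shown.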
Figure 1: Example of the smallest dipeptide: the glycylglycine. We can see
that each amino acid has the chain (Nitrogen-$C_{\alpha}$-Carboxy). Different
amino acids have a different side chain attached to the $C_{\alpha}$ instead
of the hydrogen found in glycine. In each figure we
depict either angle $\phi$ or $\psi$. Angle $\psi$ is defined as the torsion
angle between two planes: the first one defined by the three atoms in the
backbone of the amino acid ($N_{1}$, $C_{\alpha,1}$, $C_{1}$), and the second
by the same atoms except substituting the Nitrogen in that amino acid by the
Nitrogen of the subsequent one: ($C_{\alpha,1}$, $C_{1}$, $N_{2}$). For the
$\phi$ angle the first plane is made out of the three atoms in the amino acid
($N_{2}$, $C_{\alpha,2}$, $C_{2}$) whereas the second plane is defined
substituting the Carboxy atom in the amino acid by the Carboxy from the
preceding amino acid: ($C_{1}$, $N_{2}$, $C_{\alpha,2}$). These graphics were
generated using hanwell2012avogadro and Inkscape.
Several modifications of the Metropolis-Hastings algorithm to adapt it to a
quantum algorithm have been proposed wocjan2008speedup ; somma2007quantum ;
somma2008quantum ; temme2011quantum ; yung2012quantum ; lemieux2019efficient ,
mostly based on substituting the classical random walk by a Szegedy quantum
walk szegedy2004quantum . In contrast, our work applies a quantum Metropolis
algorithm under out-of-equilibrium conditions, similar to what is usually done
classically and to what has been done for Ising models lemieux2019efficient .
Specifically, we aim to simulate this
procedure for several small peptides, the smallest proteins with only a few
amino acids; and compare the expected running time with the classical
simulated annealing, and also check whether starting from the initial state
proposed by an algorithm similar to AlphaFold may speed up the simulated
annealing process.
Our work benefits from two different lines of research. The first makes use
of quantum walks to obtain polynomial quantum advantages, inspired mainly by
Szegedy’s work szegedy2004quantum and by the theoretical quantum Metropolis
algorithms indicated above. In contrast with lemieux2019efficient , our
work focuses only on the unitary heuristic implementation of the Metropolis
algorithm, but studies what happens with a different system (peptides) and
with different annealing schedules instead of only testing a single linear
schedule for the inverse temperature $\beta$. Lastly, we also validate a proof
of concept using the IBMQ Casablanca processor, experimentally realizing the
quantum Metropolis algorithm on actual quantum hardware.
The second line of research related to our work is the use of quantum
techniques to speed up or improve the process of protein folding. The reason
is that even simplified models of protein folding are NP-hard combinatorial
optimization problems, so polynomial speedups can in principle be expected
from quantum computing over its classical counterparts. The literature on this
problem babbush2012construction ;
tavernelli2020resource ; perdomo2012finding ; fingerhuth2018quantum ;
babej2018coarse ; perdomo2008construction ; outeiral2020investigating and
related ones mulligan2020designing ; banchi2020molecular focuses on such
simplified lattice models that are still very hard, and mostly on adiabatic
computation. In contrast, our work presents a much more realistic, fully
scalable model, parameterized in terms of the torsion angles. The torsion
angles, also called dihedral angles, are angles between the atoms in the
backbone structure of the
protein, that determine its folding. An example with the smallest of the
dipeptides, the glycylglycine, can be found in figure 1. These angles are
usually three per amino acid, $\phi$, $\psi$ and $\omega$, but the latter is
almost always fixed at value $\pi$ and for that reason, not commonly taken
into account in the models AlphaFold .
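The torsion angle between two such planes can be computed from four consecutive backbone atom positions with the standard dihedral formula. This is illustrative code using the textbook construction, not taken from QFold:

```python
# Standard dihedral-angle computation from four atom positions, e.g. the
# psi angle from (N1, C_alpha1, C1, N2) as described in the Figure 1 caption.
import math

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def dot(a, b):
    return sum(a[i] * b[i] for i in range(3))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dihedral(p0, p1, p2, p3):
    b1, b2, b3 = sub(p1, p0), sub(p2, p1), sub(p3, p2)
    n1, n2 = cross(b1, b2), cross(b2, b3)          # normals of the two planes
    b2u = [c / math.sqrt(dot(b2, b2)) for c in b2]  # unit central bond vector
    x = dot(n1, n2)
    y = dot(cross(n1, b2u), n2)
    return math.atan2(y, x)  # signed angle in (-pi, pi]
```

For four coplanar atoms the result is 0 (cis) or $\pi$ (trans), matching the remark that the $\omega$ angle is almost always fixed at $\pi$.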
Figure 2: Scheme of the QFold algorithm. Starting from the amino acid
sequence, we use Psi4 to extract the atoms forming the protein, and a
Minifold module, in substitution of AlphaFold, as initializer. The algorithm
then uses the angles guessed by Minifold as a starting point (or rather, as
the means of the starting von Mises distributions with $\kappa=1$), and the
energies of all possible positions calculated by Psi4, to perform a quantum
Metropolis algorithm that finally outputs the torsion angles. In the scheme of
the algorithm, the backbone builder represents a subroutine that recovers the
links between atoms of the protein, and in particular the backbone chain,
using the atom positions obtained from PubChem using Psi4. The initializer,
instantiated in our case by Minifold, is a second subroutine that gives a
first estimate of the torsion angles, before passing it to the quantum
Metropolis. The energy calculator uses Psi4 to calculate the energy of all
possible rotation angles that we want to explore, and these energies are used
in the quantum Metropolis algorithm, which outputs the expected folding. For a
more detailed flowchart, we refer to the figure 3.
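The von Mises initialization mentioned in the caption above can be sketched directly with Python's standard library. This is an illustrative assumption about the sampling step, not the QFold implementation:

```python
# Sketch: draw each initial torsion angle from a von Mises distribution
# centered on the initializer's guessed angle, with concentration kappa = 1.
import random

def initial_angles(guessed_angles, kappa=1.0, rng=random):
    # random.vonmisesvariate(mu, kappa) returns an angle in [0, 2*pi)
    return [rng.vonmisesvariate(mu, kappa) for mu in guessed_angles]

angles = initial_angles([0.5, 3.0, 5.5], rng=random.Random(7))
```

With $\kappa=1$ the distribution is broad, so the walk still explores well beyond the initial guess.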
These considerations, and the fact that we use a distilled version of
AlphaFold AlphaFold as initialization, makes our work different from the
usual approach in quantum protein folding: commonly adiabatic approaches have
been used so far, whereas our algorithm is digital. The downside of this more
precise approach is that the number of amino acids that we are able to
simulate is more restricted, but we are nevertheless able to perform
experiments in actual hardware. Finally, it is worth mentioning that in the
2020 CASP competition, DeepMind’s team tested their AlphaFold v2 algorithm
which significantly improves the results from their previous version.
In summary, the main contributions of our work are threefold. Firstly, we
design a quantum algorithm that is scalable and realistic and that, provided
with a fault-tolerant quantum computer, could become competitive with current
state-of-the-art techniques. Secondly, we analyse the use of different cooling
schedules in out-of-equilibrium quantum walks, and perform ideal quantum
simulations of QFold and compare its performance with the equivalent classical
Metropolis algorithm, pointing towards a quantum speedup. This quantum
advantage is enough to make the quantum Metropolis more convenient than its
classical counterpart in average-length proteins even after taking into
account slowdowns due to error correction protocols. Thirdly, we implement a
proof-of-concept of the quantum Metropolis algorithm in actual quantum
hardware, validating our work.
## II QFold algorithm
Figure 3: Flow chart of the QFold algorithm. This figure is best viewed
together with figure 2. QFold integrates several functionalities, which can be
summarized as an initialization module, a simulation module, and an experiment
module. We denote by diamonds each of the decisions one has to make. The top
part constitutes the initialization module, where Minifold can be used to get
a guess of the correct folding, and Psi4 uses PubChem to calculate the
energies of rotations. The bottom half represents the experiment or simulation
algorithms, which output either probabilities or the quantum/classical TTS and
make use of Qiskit. Different options of the algorithm are represented by
diamonds, and more information on them can be found in section III.2.
The algorithm we introduce is called QFold and has three main components,
which we introduce in this section (see figure 2 for a scheme of QFold): an
initialization routine to find a good initial guess of the dihedral angles
that characterise the protein folding, a quantum Metropolis to find an even
lower energy state starting from that guess, and a classical Metropolis
against which the quantum Metropolis is compared to assess possible speedups.
The aim of this section is to introduce the theoretical background we have
used for our results.
### II.1 Initializer
QFold makes use of quantum walks as a resource to accelerate the exploration
of protein configurations (see figures 2 and 3). However, in nature proteins
do not explore the whole exponentially large space of possible configurations
in order to fold. In a similar fashion, QFold does not aim to explore all
possible configurations, but rather uses a good initialization guess state
based on Deep Learning techniques such as AlphaFold. Since such an initial
point is in principle closer to the actual solution in configuration space, we
expect it to be most helpful the larger the protein being modelled. In fact,
one of the motivations for our work was that adding a Rosetta relaxation at
the end of the AlphaFold algorithm slightly improved its results AlphaFold .
Notice that a 'Rosetta relaxation' is what Rosetta calls its classical
Metropolis algorithm. Therefore, we expect that improved versions of Rosetta,
using in our case quantum walks, could help find an even better solution to
the protein folding problem than the one provided by AlphaFold alone.
The AlphaFold initializer starts from the amino acid sequence ($S$), and
performs the following procedures:
1. First, perform a Multiple Sequence Alignment (MSA) procedure to extract
features of the protein already observed in other proteins whose folding is
known.
2. Then, parametrizing the proteins in terms of their backbone torsion angles
$(\phi,\psi)$ (see figure 1), train a residual convolutional neural network to
predict distances between amino acids, or as they call them, residues.
3. Train also a separate model that gives a probability distribution for the
torsion angles conditional on the protein sequence and its previously analysed
MSA features, $P(\phi,\psi|S,MSA(S))$. This is done using a 1-dimensional
pooling layer that takes the predicted distances between amino acids and
outputs different secondary structures such as the $\alpha$-helix or the
$\beta$-sheet. (The $\alpha$-helix and the $\beta$-sheet are two common
structures found in protein folding. Such structures constitute what is
called the secondary structure of the protein, and are characterised by
$(\phi,\psi)=(-\pi/3,-\pi/4)$ in the $\alpha$-helix and
$(\phi,\psi)=(-3\pi/4,-3\pi/4)$ in the $\beta$-sheet, due to the hydrogen
bonds between backbone amino groups NH and backbone carbonyl groups CO.) To
make the prediction, the algorithm uses bins of size 10°, effectively
discretising its prediction.
4. All this information, plus some additional factors extracted from Rosetta,
is used to train an effective potential that aims to assign smaller energies
to the configurations that the model believes to be more likely to occur in
nature. Finally, at inference time, one starts from a rough guess using the
MSA sequence and performs gradient descent on the effective potential. One can
also perform several attempts with noisy restarts and return the best option.
Interestingly enough, the neural network is also able to return an estimation
of its uncertainty. Such uncertainty is measured by the parameter $\kappa$ in
the von Mises distribution, and plays the role of the inverse of the variance.
The von Mises distribution is the circular analog of the normal distribution,
and its use is justified because angles are periodic variables
von2014mathematical .
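As a minimal numerical illustration of the role of $\kappa$ (using NumPy's built-in von Mises sampler, not AlphaFold's own code), one can check that the spread of the sampled angles shrinks roughly like $1/\kappa$, consistent with $\kappa$ playing the role of an inverse variance:

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample torsion-angle values from von Mises distributions with increasing
# concentration kappa; larger kappa means a narrower distribution.
spread = {}
for kappa in (1.0, 10.0, 100.0):
    samples = rng.vonmises(mu=0.0, kappa=kappa, size=20_000)
    spread[kappa] = samples.var()
    print(f"kappa={kappa:>5}: sample variance {spread[kappa]:.4f}")
```

For large $\kappa$ the sample variance approaches $1/\kappa$, while for small $\kappa$ the distribution flattens out over the circle.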
### II.2 Classical Metropolis
As mentioned in the introduction, a relatively popular approach to protein
folding has been the Metropolis algorithm, which performs a random walk over
the configuration space $\Omega$. The configuration space is the abstract
space of values that the torsion angles of a given protein can take; a state
$i$ is thus a list of values for those torsion angles. For computational
purposes, we restrict the angles to values from a given set, so the
distribution over them is discrete rather than continuous. Over this space we
can define probability distributions. Furthermore, since the angles dictate
the positions of the atoms in the protein, the state $i$ also implies an
energy level $E_{i}$, due to the interaction of the atoms. In the Rosetta
library, the function that calculates an approximation to this energy is
called the scoring function.
Starting from a state $i$, the Metropolis algorithm proposes a change,
uniformly at random, to one of the configurations $j$ connected to $i$. We
call $T_{ij}$ the probability of such a proposal. The change is then accepted
with probability
$A_{ij}=\min\left(1,e^{-\beta(E_{j}-E_{i})}\right),$ (1)
resulting in an overall probability of change $i\rightarrow j$ at a given step
$\mathcal{W}_{ij}=T_{ij}A_{ij}$.
By slowly increasing $\beta$ one decreases the probability that steps which
increase the energy of the state are accepted; as a consequence, when $\beta$
is sufficiently large, the end state is a local minimum. If this annealing
procedure is performed sufficiently slowly, one can also ensure that the
minimum found is the global minimum. In practice, however, one does not
perform the annealing as slowly as required, resorting instead to heuristic
restarts of the classical walk and selecting the best result found over the
different trajectories.
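A minimal sketch of this annealed Metropolis dynamics, on a hypothetical one-dimensional energy landscape standing in for the Psi4 oracle, can be written by iterating the transition matrix $\mathcal{W}_{ij}=T_{ij}A_{ij}$ directly (deterministically propagating the probability distribution rather than sampling trajectories):

```python
import math

N = 16  # discrete angle values on a ring
# Hypothetical energy landscape (the real oracle would call Psi4)
E = [math.sin(2 * math.pi * k / N) + 0.3 * math.cos(6 * math.pi * k / N)
     for k in range(N)]

def metropolis_matrix(beta):
    """Row-stochastic matrix W_ij = T_ij * A_ij with the acceptance rule (1)
    and uniform proposals to the two ring neighbours (T_ij = 1/2)."""
    W = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in ((i - 1) % N, (i + 1) % N):
            W[i][j] = 0.5 * min(1.0, math.exp(-beta * (E[j] - E[i])))
        W[i][i] = 1.0 - sum(W[i])  # rejected proposals stay put
    return W

# Anneal: start from the uniform distribution and slowly increase beta.
p = [1.0 / N] * N
for t in range(1, 101):
    W = metropolis_matrix(0.2 * t)  # simple linear (Cauchy-like) schedule
    p = [sum(p[i] * W[i][j] for i in range(N)) for j in range(N)]

best = max(range(N), key=p.__getitem__)
print(best, E[best])  # the distribution concentrates on the global minimum
```

With this slow schedule the probability mass drains into the deepest well, as described in the text; a much faster schedule would instead freeze the walk in local minima.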
In our implementation we emulate oracle access to the energies of different
configurations. In practice, this oracle is a subroutine that calls the Psi4
package turney2012psi4 to calculate the energies of all possible
configurations of the torsion angles that we want to explore. We give more
details of our particular implementation in section III.2.
### II.3 Quantum Metropolis
A natural generalisation of the Metropolis algorithm explained in the previous
section is to use quantum walks instead of random walks. The most popular
quantum walk for this purpose is Szegedy's szegedy2004quantum , which consists
of two rotations similar to those performed in Grover's algorithm
Grover . Szegedy's quantum walk is defined on a bipartite graph. Given the
transition probabilities $\mathcal{W}_{ij}=T_{ij}A_{ij}$, with $A_{ij}$ defined
in (1), for the transition from state $i$ to state $j$, one defines the unitary
$U\ket{j}\ket{0}:=\ket{j}\sum_{i\in\Omega}\sqrt{\mathcal{W}_{ji}}\ket{i}=\ket{j}\ket{p_{j}}.$
(2)
Taking
$R_{0}:=\mathbf{1}-2\Pi_{0}=\mathbf{1}-2(\mathbf{1}\otimes\ket{0}\bra{0})$ (3)
the reflection over the state $\ket{0}$ in the second subspace, and $S$ the
swap gate that swaps both subspaces, we define the quantum walk step as
$W:=U^{\dagger}SUR_{0}U^{\dagger}SUR_{0}.$ (4)
We refer to Appendix A and Fig. 9 for a detailed account of the use of these
quantum walks in a manner similar to the Grover rotations. For completeness,
Appendix A also reviews in more detail the theoretical basis of Szegedy
quantum walks that leads us to believe that a quantum advantage is possible
for our problem.
It is well known that if $\delta$ is the eigenvalue gap of the classical walk,
and $\Delta$ the phase gap of the quantum walk, then the complexity of the
classical walk is $O(\delta^{-1})$, the complexity of the quantum algorithm
$O(\Delta^{-1})$, and the relation between the phase and eigenvalue gap is
given by $\Delta=\Omega(\delta^{1/2})$ magniez2011search , offering a
potential quantum advantage. Our algorithm aims to explore the corresponding
efficiency gain in practice.
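This gap relation can be checked numerically on a small example. For a reversible (here, Metropolis-type) chain, the symmetric discriminant matrix $D_{ij}=\sqrt{\mathcal{W}_{ij}\mathcal{W}_{ji}}$ has the same spectrum as the walk, and the quantum walk phases are $\pm 2\arccos\lambda$ for its eigenvalues $\lambda$. The following sketch (a toy landscape, not our actual energy oracle) compares both gaps:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
E = rng.uniform(0.0, 2.0, n)  # hypothetical energy landscape
beta = 1.0

# Classical Metropolis walk on a ring, made lazy so all eigenvalues are >= 0
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 0.5 * min(1.0, np.exp(-beta * (E[j] - E[i])))
    W[i, i] = 1.0 - W[i].sum()
W = 0.5 * (np.eye(n) + W)

# Discriminant D_ij = sqrt(W_ij W_ji): symmetric, and for reversible chains
# it shares the spectrum of W; its eigenvalues cos(theta) give the phases.
D = np.sqrt(W * W.T)
lam = np.sort(np.linalg.eigvalsh(D))[::-1]
delta = 1.0 - lam[1]             # classical eigenvalue gap
Delta = 2.0 * np.arccos(lam[1])  # quantum phase gap

print(f"delta={delta:.4f}  Delta={Delta:.4f}")
```

Since $\arccos(1-\delta)\geq\sqrt{2\delta}$, the phase gap always dominates $\sqrt{\delta}$, which is the source of the potential quadratic speedup.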
The quantum Metropolis algorithm that we employ lemieux2019efficient uses a
small modification of the Szegedy quantum walk, substituting the bipartite
graph by a coin. That is, we will have 3 quantum registers: $\ket{\cdot}_{S}$
indicating the current state of the system, $\ket{\cdot}_{M}$ that indexes the
possible moves one may take, and $\ket{\cdot}_{C}$ the coin register. We may
also have ancilla registers $\ket{\cdot}_{A}$. The quantum walk step is then
$\tilde{W}=RV^{\dagger}B^{\dagger}FBV.$ (5)
Here $V$ prepares a superposition over all possible steps one may take in
register $\ket{\cdot}_{M}$, $B$ rotates the coin qubit $\ket{\cdot}_{C}$ to
have amplitude of $\ket{1}_{C}$ corresponding to the acceptance probability
indicated by (1), $F$ changes the $\ket{\cdot}_{S}$ register to the new
configuration (conditioned on the value of $\ket{\cdot}_{M}$ and
$\ket{\cdot}_{C}=\ket{1}_{C}$), and $R$ is a reflection over the state
$\ket{0}_{MCA}$.
Although other clever options are available lemieux2019efficient , here we
implement the simplest heuristic algorithm, which consists of implementing $L$
steps of the quantum walk
$\ket{\psi(L)}:=\tilde{W}_{L}...\tilde{W}_{1}\ket{\pi_{0}},$ (6)
where $t=1,\dots,L$ also defines an annealing schedule, for chosen values of
$\beta(t)$ at each step. A more detailed explanation of the algorithm of
lemieux2019efficient can be found in appendix B.
## III Simulations, experiments and results
### III.1 Figures of merit
When looking for a metric to assess the goodness of given solutions to protein
folding, we have to strike a balance between two important aspects: on the one
hand, we want a model that finds the correct solution with high probability;
on the other hand, we would like the procedure to be fast. For example, going
through all configurations would be quite accurate, albeit extremely expensive
if not directly impossible.
A natural metric to use in this context is then the Total Time to Solution
(TTS) lemieux2019efficient defined as the average expected time it would take
the procedure to find the solution if we can repeat the procedure in case of
failure:
$TTS(t):=t\frac{\log(1-\delta)}{\log(1-p(t))},$ (7)
where $t\in\mathbb{N}$ is the number of quantum/random steps performed in an
attempt of the quantum/classical Metropolis algorithm, $p(t)$ the probability
of hitting the right state after those steps in each attempt, and $\delta$ a
target success probability of the algorithm taking into account restarts, that
we set to the usual value of $0.9$. In any case, since $\delta$ is a constant,
the value of $TTS(t)$ for any other value of $\delta$ is straightforward to
recover.
Although one should not expect to be able to calculate $p(t)$ for the average
protein, because finding the ground state is already very challenging, for
smaller instances it is possible to calculate the $TTS$, for example by
executing the algorithm many times and calculating the percentage of runs that
correctly identify the lowest energy state. Using quantum resources, the
corresponding definition is (27) in appendix B.
We can see that this metric represents the compromise between longer walks and
the corresponding expected increase in the probability of success. Using this
figure of merit, the way to compare classical and quantum walks is to compare
the minimum values achieved, $\min_{t}TTS(t)$. Similar metrics have also been
defined previously in the literature albash2018demonstration .
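A short sketch of how the minimum TTS is obtained from equation (7), with hypothetical values of $p(t)$, illustrates the trade-off:

```python
import math

def tts(t, p, delta=0.9):
    """Total Time to Solution, Eq. (7): expected cost, with restarts, to
    reach target success probability delta, given per-attempt success p."""
    return t * math.log(1 - delta) / math.log(1 - p)

# Longer walks raise p(t) but cost more per attempt; TTS trades these off.
p_of_t = {2: 0.10, 10: 0.50, 50: 0.80}  # hypothetical p(t) values
best = min(tts(t, p) for t, p in p_of_t.items())
print(best)
```

In this toy example the intermediate walk length wins: the short walk fails too often, and the long walk pays too much per attempt.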
On the other hand, we would also like to mention a small modification of the
classical algorithm that could improve its TTS: outputting the state with
minimum energy found so far instead of only the last configuration of the
random walk, a common choice for example in the Rosetta@Home project. The
reason for not including this modification is that the length of the classical
path, 2 to 50 steps, represents a sizable portion of the total space, which
ranges from 64 to 4096 available positions, whereas that will not be the case
for large proteins. Had we run the classical experiments with that
modification, we believe we would have introduced a relatively large bias in
the results, favouring the classical random walks in the smallest instances of
the problem and therefore likely overestimating the quantum advantage.
For the experiment run on IBM Quantum systems, whose results can be found in
section III.3.4, the metric we use instead of the TTS is the probability of
measuring the correct answer, and in particular whether we are able to detect
small changes in the probability corresponding to the correct solution.
Measuring the TTS here would not be informative due to the high level of noise
in the circuit.
### III.2 Simulation and experimental setup
#### III.2.1 Simulations
For the assessment of our algorithm we have built a simulation pipeline that
allows us to run a variety of configurations. The main software libraries used
are Psi4 for the calculation of energies of different peptide configurations
turney2012psi4 , a distilled unofficial version of AlphaFold dubbed Minifold
ericalcaide2019minifold , and Qiskit Qiskit for the implementation of quantum
walks. Simulations were run on personal laptops and small clusters for
prototyping and, due to their large computational cost, on Amazon Web Services
aws for deploying the complete algorithm and obtaining the results. A scheme
of the pipeline of the simulation algorithm can be seen in figure 3.
The initialization procedure takes as input the name of the peptide we want to
simulate and uses Psi4 to download the file of the corresponding molecule from
PubChem kim2019pubchem , an online repository containing abundant information
on many molecules, including atomic positions.
After that, and before executing the quantum and classical walks, the system
checks whether an energy file is available; if not, it uses Psi4 to calculate
and store all possible energy values of the different rotations of the torsion
angles. For this energy calculation we choose the relatively common 6-31G
atomic orbital basis set jensen2013atomic and the Møller-Plesset procedure to
second order helgaker2014molecular as a more accurate yet not too expensive
alternative to the Hartree-Fock procedure. However, it is also possible to
choose any other basis or energy method. Finally, the system performs the
corresponding quantum and random walks and returns the minimum TTS found.
Figure 4: In section III.3.4 we explain a proof-of-concept experimental
realization of the quantum Metropolis algorithm, and this figure represents
the implementation of the coin flip subroutine of that hardware-adapted
algorithm, whose operator is represented by $B$ in (5). The circuit has two
key simplifications that reduce the depth. The first one is that we first
rotate the coin register to $\ket{1}$ and then rotate it back to $\ket{0}$
if the acceptance probability is not 1. This halves the cost: otherwise one
would have to perform 8 multi-controlled rotations (all possible combinations
of the control values for the move, $\phi$ and $\psi$ registers), whereas in
this case we only perform 4 of them. The second simplification again halves
the cost by grouping together rotations with similar values. We empirically
observe that the rotation values controlled on $\ket{000}$ and $\ket{010}$ are
very similar, so we group them in rotation $R_{0}$, which implicitly depends
on $\beta$. Similarly, we group the rotations controlled on $\ket{001}$ and
$\ket{101}$ in $R_{1}$, also dependent on $\beta$. This also has the nice
effect of only requiring controls on two of the three bottom registers
($\phi$, $\psi$ and move), transforming CCC-$R_{X}$ gates into CC-$R_{X}$
gates. Separated by the two barriers, from left to right and from top to
bottom, we can see the implementation of such CC-$R_{X}$ gates.
In order to evaluate the impact of adding a machine learning module such as
AlphaFold at the beginning of the algorithm, we have implemented
initialization options to start from random values of the dihedral angles,
from the values returned by Minifold, or from the actual original angles of
the molecule, as returned by the PubChem library.
On the other hand, to evaluate the potential quantum advantage, we also allow
selecting the number of bits that specify the discretization of the torsion
angles. For example, 1 bit means that angles can take values in $\{0,\pi\}$,
whereas 2 bits indicate a discretization in steps of $\pi/2$ radians. In
general, the precision of the angles will be $2^{1-b}\pi$, for $b$ the number
of rotation bits. Notice that the 10° precision with which AlphaFold reports
its angles, indicated in section II.1, is intermediate between $b=5$ and
$b=6$. The main idea here is that when we increase $b$ or the number of
angles, the size of the search space becomes larger, and by evaluating how the
classical and quantum $\min_{t}TTS(t)$ grow we may be able to check whether a
polynomial quantum advantage exists. The $TTS$, defined in equation (7), can
be directly calculated from inferred probabilities, if one is executing the
classical Metropolis or the quantum Metropolis on quantum hardware, or from
the amplitudes, if one is running a simulation of the latter.
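The discretization can be sketched as follows (the helper `angle_grid` is purely illustrative, not part of our codebase):

```python
import numpy as np

def angle_grid(b):
    """All values a torsion angle can take with b rotation bits,
    with spacing 2**(1-b) * pi as described in the text."""
    return np.arange(2 ** b) * 2.0 ** (1 - b) * np.pi

print(angle_grid(1))              # {0, pi}
print(np.degrees(angle_grid(2)))  # {0, 90, 180, 270} degrees
```

One can also check that AlphaFold's 10° bins fall between $b=5$ (11.25° spacing) and $b=6$ (5.625° spacing), as stated above.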
Finally, we implemented and used the experiment mode, which, in contrast to
the simulation mode explained in the previous paragraphs, allows us to run the
smallest instances of our problem on actual IBM Q hardware: dipeptides with a
single rotation bit.
Other choices we have to make involve the value of the parameter $\beta$ in
(1) (whether it is fixed or follows some annealing schedule), the number of
steps for which the system computes the TTS, and the $\kappa$ parameter of
the von Mises distribution if the initialization is given by Minifold. In
fact, if the original AlphaFold algorithm were used, the $\kappa$ values
returned by AlphaFold could be used instead of our default value $\kappa=1$;
furthermore, the preparation of the amplitudes corresponding to this
probability distribution could be made efficient using the Grover-Rudolph
state preparation procedure State_prep_grover .
Additionally, while Qiskit allows one to recover the probabilities from the
amplitudes, to evaluate the classical walks we have to repeat the procedure a
certain number of times to infer the probabilities. This number of repetitions
is controlled by a variable named number iterations, which we have set to
$500\times(2^{b})^{2n-2}$, where $b$ is the number of bits and $n$ the number
of amino acids, to reflect that larger spaces require more statistics.
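As a sketch, this repetition count can be written as follows (the helper name is ours, not part of the codebase); note that $(2^{b})^{2n-2}$ is exactly the size of the search space, so the shot budget scales linearly with it:

```python
def num_iterations(b, n):
    """Shots used to estimate classical-walk probabilities:
    500 * (2**b)**(2n - 2), for b rotation bits and n amino acids.
    (2**b)**(2n - 2) is the number of discrete configurations."""
    return 500 * (2 ** b) ** (2 * n - 2)

print(num_iterations(1, 2))  # dipeptide, 1 rotation bit: 2000
print(num_iterations(3, 2))  # dipeptide, 3 rotation bits: 32000
```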
Figure 5: Implementation of the full quantum Metropolis circuit from section
III.3.4 on actual quantum hardware, using the coin flip rotation described in
figure 4. The steps of the circuit are separated by barriers for easier
identification (from left to right, and from top to bottom): first put $\phi$
and $\psi$ in superposition. Then, for each of the two steps $\tilde{W}$ from
(5): put the move register (which controls the angle to move) in a
superposition (operator $V$), and prepare the coin ($B$). Then, controlled on
the coin being in state $\ket{1}$ and on the move register indicating the
corresponding angle, change the value of $\psi$ (if move $=\ket{1}$) or $\phi$
(if move $=\ket{0}$), denoted by $F$. Then we uncompute the coin
($B^{\dagger}$) and the move preparation ($V^{\dagger}$), and perform the
phase flip on $\ket{\text{move}}\ket{\text{coin}}=\ket{00}$, represented by
$R$. The second quantum walk step proceeds in the same way (with different
values of $\beta$ and of the rotations), except that we do not have to
uncompute the move and coin registers before measuring $\phi$ and $\psi$,
because it is the last step.
If a non-fixed $\beta$ is attempted, we have implemented and tested several
options for the annealing schedule. The implemented schedules are:
* •
Boltzmann or logarithmic implements the famous logarithmic schedule
kirkpatrick1983optimization
$\beta(t)=\beta(1)\log(te)=\beta(1)\log(t)+\beta(1).$ (8a)
Notice that the multiplication of $t$ by $e$ is necessary in order to make a
fair comparison with the rest of the schedules, so that they all start at
$\beta(1)$. As a consequence, this is not exactly the theoretical schedule
required to achieve a quadratic speedup.
* •
Cauchy or linear implements a schedule given by
$\beta(t)=\beta(1)t.$ (8b)
* •
geometric defines
$\beta(t)=\beta(1)\alpha^{-t+1},$ (8c)
where $\alpha<1$ is a parameter that we have heuristically set to $0.9$.
* •
And finally exponential uses
$\beta(t)=\beta(1)\exp(\alpha(t-1)^{1/N}),$ (8d)
where $\alpha$ is again set to $0.9$ and $N$ is the space dimension, which in
this case is equal to the number of torsion angles.
For comparison purposes, the value of $\beta(1)$ chosen has been heuristically
optimized to $50$.
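The four schedules above can be sketched as plain functions, with the heuristic values $\beta(1)=50$ and $\alpha=0.9$ from the text (`N` defaults here to 2 torsion angles purely as an example):

```python
import math

BETA1 = 50.0  # heuristically optimized beta(1)
ALPHA = 0.9

def boltzmann(t):
    """Logarithmic schedule, Eq. (8a): beta(t) = beta(1) * log(t * e)."""
    return BETA1 * math.log(t * math.e)

def cauchy(t):
    """Linear schedule, Eq. (8b)."""
    return BETA1 * t

def geometric(t):
    """Geometric schedule, Eq. (8c)."""
    return BETA1 * ALPHA ** (-t + 1)

def exponential(t, N=2):
    """Exponential schedule, Eq. (8d); N is the number of torsion angles."""
    return BETA1 * math.exp(ALPHA * (t - 1) ** (1.0 / N))

# All four schedules start at beta(1) = 50, as required for fair comparison.
print([round(f(1), 6) for f in (boltzmann, cauchy, geometric, exponential)])
```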
Our current system has two main limitations, depending on the mode in which it
is used. If the aim is to perform a simulation, then the amount of Random
Access Memory available to the simulator is the main concern. This is why we
have simulated, for a fixed value of $\beta$:
* •
Dipeptides with 3 to 5 rotation bits.
* •
Tripeptides with 2 rotation bits.
* •
Tetrapeptides with a single rotation bit.
Additionally, dipeptides with 6 bits, tripeptides with 3 bits and
tetrapeptides with 2 bits can be simulated for a few steps, but not for enough
of them to calculate our figure of merit with confidence. If the $\beta$ value
is not fixed but follows some annealing schedule, the requirements are larger,
but we can still simulate the same peptides. Beyond that, the Random Access
Memory requirements for an ideal (classical) simulation of the quantum
algorithm become quite large. Notice that the Qiskit simulator supports 32
qubits at the moment, but our system is more constrained by the depth of the
circuit, which can run into millions of gates.
#### III.2.2 Experiments
We have also performed experiments on the IBMQ Casablanca processor. In
contrast with the previous experiments, due to the low signal-to-noise ratio
of the available quantum hardware, it does not make much sense to directly
compare values of the TTS figure of merit. Instead, the objective here is to
show that we can implement a two-step quantum walk (the minimum required to
produce interference) and still observe an increase in the probability
associated with the correct state. Since we are heavily constrained in the
depth of the quantum circuit we can implement, we experiment only with
dipeptides, and with 1 bit of precision in the rotation angles: that is,
$\phi$ and $\psi$ can each be either $0$ or $\pi$.
In this quantum circuit, depicted in figures 4 and 5, we have 4 qubits, namely
$\ket{\phi}$, $\ket{\psi}$, a coin qubit, and another one indicating which
angle register to update in the next step. Additionally, we always start from
the uniform superposition $\ket{+}_{\phi}\ket{+}_{\psi}$. We then perform 2
quantum walk steps $\tilde{W}$, with the values of $\beta$ empirically chosen
as $0.1$ and $1$ so as to obtain a large probability of measuring
$\ket{0}_{\phi}\ket{0}_{\psi}$, where we encode the state of minimum energy,
the correct state of the dipeptide.
Figures 4 and 5 depict the circuit, the former showing the coin flip procedure
and the latter using it as a subroutine in the circuit as a whole. The coin
flip subroutine is the most costly part of the quantum circuit, both in this
hardware implementation and in the simulations of the previous sections, since
it includes multiple multi-controlled rotations of the coin qubit. An
important remark is that this hardware-adapted circuit contains some
simplifications in order to minimize the length of the circuit as much as
possible, since the length is one of the most important quantities determining
the amount of error in the circuit, our limiting factor.
| Peptides | Precision random | Precision minifold | $b$ | quantum min(TTS) random | quantum min(TTS) minifold |
|---|---|---|---|---|---|
| Dipeptides | 0.53 | 0.53 | 3 | **136.25** | 270.75 |
| | | | 4 | **547.95** | 1137.45 |
| | | | 5 | **1426.28** | 1458.02 |
| Tripeptides | 0.46 | **0.71** | 2 | 499.93 | **394.49** |
| Tetrapeptides | 0.51 | **0.79** | 1 | 149.80 | **26.30** |
Table 1: Average precisions, defined in equation (9), and corresponding
quantum minimum TTS, defined in equation (7) as the expected number of steps
it would take to find the solution using the quantum algorithm, for different
initializations. $b$ denotes the number of rotation bits, and in bold we
indicate which of the minifold or random values is best. The aim of this table
is to understand the impact of the minifold initialization on the quantum
$\min TTS$, our figure of merit. The two main aspects to notice are that the
Minifold precision grows with the size of the peptide, and that whenever the
minifold precision is higher, the corresponding quantum min TTS values are
lower than their random counterparts. This supports the idea that using a
smart initial state helps to find the native folding of the protein faster.
One more important clarification is in order: since our implementation of the
quantum circuit on the IBMQ Casablanca processor has a depth of 176 basic
gates, even after being heavily optimized by the Qiskit transpiler, we need a
way to tell whether we are measuring only noise or whether relevant
information survives it. Our first attempt to distinguish these two cases was
the natural technique of zero-noise extrapolation, where additional gates are
added that do not change the theoretical expected value of the circuit but
introduce additional noise temme2017error . By measuring how the measured
probabilities change, one can extrapolate backwards to the theoretical 'zero
noise' case. Unfortunately, the depth of the circuit is already so large that
this does not work: it either does not converge or returns unrealistic
results, at least when attempted with the software library Mitiq
larose2020mitiq .
For this reason we need an alternative, which is only possible because our
circuit is parameterised in terms of the angles and the values of $\beta$.
Additionally, we know that if we set the value of $\beta$ to $0$, the
theoretical result would be $1/4$, as there are 4 possible states. As a
consequence, our strategy consists of trying to detect changes in the
probability when we use $\beta(\bm{t})=(0,0)$ versus $\beta(\bm{t})=(0.1,1)$.
The notation $\beta(\bm{t})$ denotes the value of $\beta$ chosen at each of
the two steps.
For the experiment, we reserved 3 hours of usage of the IBMQ Casablanca
processor, with quantum volume 32. During that time we were able to run 25
so-called 'jobs' with $\beta(\bm{t})=(0,0)$ and 20 'jobs' for each of 8
arbitrarily chosen dipeptides. Each run consisted of 8192 repetitions of the
circuit (which can be seen in figures 4 and 5) and an equal number of
measurements, which means that for each dipeptide we ran a total of 163840
circuits, and 204800 for the $\beta(\bm{t})=(0,0)$ baseline.
As we will see from the results, the main limitation of our experiment is the
noise of the system and therefore the depth of the circuit. For this reason we
restrict ourselves to a single rotation bit in dipeptides.
### III.3 Experimental and simulation results
#### III.3.1 Initialization
In this section we analyse the impact of different initialization methods on
the posterior use of quantum/classical walks. Although we know that AlphaFold
is capable of making a relatively good guess of the correct folding, and it is
therefore reasonable to expect AlphaFold's guess to be close to the optimal
folding solution in the conformation space, our aim is to give some additional
support to this idea.
As an initializer, we decided to use (and minorly contributed to) Minifold:
even though it does not achieve the state of the art in the prediction of the
angles, it is quite simple and sufficient to illustrate our point. Minifold
uses a residual network implemented in Tensorflow and Keras
abadi2016tensorflow ; chollet2015keras . Perhaps the most important detail of
using this model is that, because we are trying to predict small peptides
while Minifold uses a window of 34 amino acids for its predictions, we had to
use padding.
The metrics that we analyse are twofold. In the first place, we would like to
see whether Minifold achieves a better precision on the angles than random
guessing. This is a necessary condition for our use of Minifold, or more
generally of any smart initialization module, to make sense. We measure the
precision as $1$ minus the normalized angular distance between the value
returned by the initialization module and the actual value we get from
PubChem (see figures 1 and 2):
$p=1-\frac{d(\alpha,\tilde{\alpha})}{\pi},$ (9)
where $\tilde{\alpha}$ is the estimated angle (either $\phi$ or $\psi$) given
by Minifold or chosen at random, $\alpha$ is the true value calculated from
the output of Psi4 and PubChem, and $d$ denotes the angular distance. Since it
is normalized, the random initialization gets a theoretical average precision
of 0.5. Table 1 summarizes the average precision results of minifold and
random initialization, broken down by peptide and number of bits. The
dipeptide results show that, due to the small size of the peptide, minifold
has barely better precision than random guessing. However, the situation
improves for tripeptides and tetrapeptides, which get better precision and, as
a consequence, lower TTS values.
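Equation (9) can be sketched directly (the helper names are illustrative, not taken from our codebase):

```python
import math

def angular_distance(a, b):
    """Shortest distance between two angles on the circle, in [0, pi]."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def precision(alpha_true, alpha_est):
    """Eq. (9): 1 minus the normalized angular distance, in [0, 1]."""
    return 1.0 - angular_distance(alpha_true, alpha_est) / math.pi

print(precision(0.0, 0.0))      # perfect prediction: 1.0
print(precision(0.0, math.pi))  # worst case: 0.0
```

Because the distance is taken on the circle, a uniformly random guess has expected distance $\pi/2$ and hence expected precision 0.5, as stated above.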
Figure 6: Comparison of the classical and quantum minimum TTS achieved in the
simulation of the quantum Metropolis algorithm with $\beta=10^{3}$, for 10
dipeptides (with rotation bits $b=3,4,5$), 10 tripeptides ($b=2$) and 4
tetrapeptides ($b=1$), also showing the different initialization options
(random or minifold) and the best fit lines. The dashed grey line separates
the region where the quantum TTS is smaller than the classical TTS. The key
aspect to notice in this graph is that, although for smaller instances the
quantum algorithm does not seem to match or beat the times achieved by the
classical Metropolis, since the fitted exponent is smaller than one ($0.89$ or
$0.53$ for minifold or random initialization, respectively), for average-size
proteins we can expect the quantum advantage to dominate and make the quantum
Metropolis more useful than its classical counterpart. In section III.3.2 we
discuss further details and explain why the random initialization exponent
seems more favourable than the minifold exponent.
Figure 7: Comparison of the classical and quantum minimum TTS achieved for the
same peptides as in figure 6, except that due to computational cost we do not
include dipeptides with 5 rotation bits. This figure corresponds to section
III.3.3 and shows the different initialization options (random or minifold),
the annealing schedules (Boltzmann/logarithmic, Cauchy/linear, geometric and
exponential), and the best fit lines. The dashed grey line depicts the
diagonal. The corresponding fit exponents are given in table 2, where we can
see that in three out of the four cases using an annealing schedule increases
the quantum advantage. On the other hand, the exponential schedule gives only
a tiny advantage when used with the minifold initialization.
If Minifold achieving greater precision in the angles than random guessing was
a precondition for our analysis to make sense, the metric we are actually
interested in is whether it has some impact in reducing the TTS; otherwise we
could avoid using an initialization module altogether. We were not expecting
the minifold initialization to reduce the exponent itself so much as to
provide multiplicative TTS reduction prefactors. However, in figure 6 the
exponent of the random initialization model is smaller than the one
corresponding to the minifold initialization in the fit. While this may seem
to indicate that our initialization is in fact harmful to the convergence of
the quantum algorithm, the explanation is quite the opposite: for the smaller
instances of the problem, and especially in the case of random
initialization, the minimum TTS value is achieved at $t=2$, as can be seen
from the two horizontal structures formed by the blue points in the figure.
This means that in such cases only with the minifold initialization is the
quantum algorithm able to profit from the incipient quantum advantage. The
effect disappears for larger instances of the problem; but while for random
initialization there is a penalisation of the TTS in the smallest problems
(thus lowering the exponent), the minifold initialization is capable of
correcting for this effect, lowering the TTS of the smaller instances and, as
a bonus, raising the exponent. We are therefore inclined to believe that the
minifold exponent more accurately represents the true asymptotic exponent of
the algorithm for this problem. In conclusion, while the small size of our
experiments does not allow us to see the full benefits of using a smart
initialization, it has been important for obtaining a calibrated estimate of
the actual quantum advantage, and we can also see that it helps reduce the TTS
cost in both the classical and quantum algorithms, which nicely fits the
intuition that starting closer than random to the solution helps to find it
faster.
#### III.3.2 Fixed $\beta$
We now discuss whether we are able to observe a quantum advantage in the TTS,
our figure of merit. We again refer to the results in figure 6, since it
represents the best fit of classical versus quantum TTS and therefore
accurately depicts the expected quantum advantage: the fitted slopes for the
different initialization options are $0.89$ for the minifold initialization
and $0.53$ for the random initialization. Consequently, if these trends are
sustained for larger proteins, there is a polynomial quantum advantage.
As we have seen, figure 6 points towards a quantum advantage. The final
question we would like to answer is what this advantage means for the
modelling of large enough proteins. For that, we need only one additional
ingredient: how the expected classical $\min TTS$ scales with the size of the
configuration space of the problem. Our data in this respect is even more
restricted, since we only have access to configuration spaces of 64, 256 and
1024 positions. We are therefore fitting only three points, but the result can
nevertheless hint at whether our technique, the quantum Metropolis algorithm,
will help solve the three-dimensional structure of proteins given a large
enough fault-tolerant quantum computer. The regression exponent of a
$\log(\text{size})$ vs $\log(\text{classical }\min TTS)$ fit using both random
and minifold initializations is $r=0.88$, which should not be confused with
the exponents in figure 6.
Let us take, as an example, an average protein of 250 amino acids, which has
approximately 500 angles to fix. If we use $b=6$ bits to specify each angle,
as might be done in a realistic setting, the classical $\min_{t}TTS$ would be
$\approx(2^{b})^{2\times 250\times r}=(2^{6})^{500\times 0.88}$. The quantum
$\min_{t}TTS$, on the other hand, is that number raised to the corresponding
exponent from figure 6, $e_{m}=0.89$ for minifold and $e_{r}=0.53$ for random
initialization. This translates into a speedup factor between $\approx
10^{87}$ and $10^{373}$, although the latter is an upper bound and the former
is probably closer to the actual value. This reveals that, even accounting for
the slowdown due to error correction and other overheads of quantum computing,
the quantum Metropolis seems very powerful and competitive. The conclusion is
robust: if we took the quantum advantage exponent to be just $e=0.95$ and the
growth of the TTS with the size of the space an extremely conservative
$r=0.5$, the quantum speedup before factoring in the operating frequency of
the computer would still be a factor of $\approx 10^{22}$. Larger proteins,
which exist in nature, will exhibit even larger speedups in the TTS, the
expected time it would take to find the native structure.
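The back-of-the-envelope estimate above can be reproduced in a few lines. The
Python sketch below only re-derives the orders of magnitude quoted in the
text; the exponents $r$, $e_{m}$ and $e_{r}$ are the fitted values reported
above.

```python
import math

# Reproduces the order-of-magnitude speedup estimate for an average protein:
# 250 amino acids -> ~500 torsion angles, b = 6 bits per angle, classical
# scaling exponent r = 0.88, quantum exponents e_m = 0.89 (minifold) and
# e_r = 0.53 (random initialization), all taken from the fits in the text.
b, n_angles, r = 6, 500, 0.88

# log10 of the classical min TTS ~ ((2^b)^n_angles)^r
log10_classical = n_angles * r * b * math.log10(2)

for e, label in [(0.89, "minifold"), (0.53, "random")]:
    # the quantum min TTS is the classical one raised to the exponent e,
    # so the speedup factor is classical^(1 - e)
    log10_speedup = (1 - e) * log10_classical
    print(f"{label}: speedup ~ 10^{log10_speedup:.1f}")
```

The two printed exponents recover the $\approx 10^{87}$ and $\approx 10^{373}$
figures quoted above.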
#### III.3.3 Annealing schedules and variable $\beta$
In the previous section we analysed the quantum advantage that could be
expected when using a fixed-$\beta$ schedule, i.e. a pure quantum walk with
several steps. However, Metropolis algorithms are rarely used in practice with
a fixed $\beta$: at the beginning of the algorithm one would like to favour
exploration, requiring a low $\beta$ value, while at the end one would like to
focus mostly on exploitation, achieved with a high $\beta$ value. This
necessity is linked to the well-known exploration-exploitation trade-off in
Machine Learning and, more generally, in optimization sutton .
Fit exponents
Schedule | Random init. | Minifold init.
---|---|---
Fixed $\beta$ | $0.53$ | $0.89$
Logarithmic | $0.29$ | $0.88$
Linear | $0.34$ | $0.86$
Exponential | $0.37$ | $1.00$
Geometric | $0.29$ | $0.85$
Table 2: Scaling exponents for the different annealing schedules and
initialization options. The peptides are the same, except that for fixed
$\beta$ we have also included dipeptides with 5 bits of precision, which is
too costly for the other schedules. For fixed $\beta$, the value chosen
heuristically was $\beta=1000$, while the initial $\beta$ value in each of the
schedules, defined in (8), is $\beta(1)=50$.
The annealing schedule with the strongest theoretical backing is usually
called the inhomogeneous algorithm; it is notable because one can prove that
it converges to the global minimum of the energy with probability 1, albeit
generally too slowly (section 3.2 of van1987simulated ). Its implementation is
conceptually similar to the Boltzmann or logarithmic schedule that we use, but
with a different prefactor van1987simulated .
Therefore, the question we would like to answer in this section is what
happens when we use our quantum Metropolis algorithm out of equilibrium, that
is, with a schedule that changes faster than the theoretical inhomogeneous
algorithm. For this task several optimization schedules have been proposed, of
which we have implemented and tested four options whose mathematical
formulation can be seen in (8).
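As a concrete illustration, the four schedule families can be sketched as
below. The exact parameterisation used in this work is the one in equation
(8); the growth-rate constants here (everything beyond $\beta(1)=50$) are
illustrative assumptions, not the paper's values.

```python
import math

# Sketch of the four inverse-temperature (annealing) schedule families tested:
# Boltzmann/logarithmic, Cauchy/linear, geometric and exponential. All start
# at beta(1) = 50 as in the text; growth-rate constants are illustrative.
beta1 = 50.0

def beta_logarithmic(t):          # Boltzmann: beta grows logarithmically in t
    return beta1 * math.log(1 + t) / math.log(2)

def beta_linear(t):               # Cauchy: beta grows linearly in t
    return beta1 * t

def beta_geometric(t, c=1.1):     # geometric: beta multiplied by c each step
    return beta1 * c ** (t - 1)

def beta_exponential(t, a=0.1):   # exponential growth in the step index
    return beta1 * math.exp(a * (t - 1))

# all schedules agree at the first step and then increase monotonically,
# with the logarithmic one raising beta (cooling) the most slowly
for f in (beta_logarithmic, beta_linear, beta_geometric, beta_exponential):
    assert f(1) == beta1
    assert f(10) > f(2) > f(1)
```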
Our conclusions from figure 7 and table 2 are that, using a variable schedule,
the quantum advantage can be made larger than that of the fixed-temperature
algorithm. However, not all cases give the same advantage: the exponential
schedule gives practically none, while the geometric schedule, explained in
section 5.2 of van1987simulated , is the most promising, with an exponent of
$0.85$ when minifold initialization is used. The linear and logarithmic
schedules lie in between.
Lastly, large differences in the exponents exist depending on the
initialization criterion used, although the argument given in section III.3.2
for why this happens applies here too; a more detailed analysis should
therefore be carried out in future work. Nevertheless, from figures 6 and 7 we
can see that the decrease in the expected TTS when using minifold
initialization is reflected in smaller, more favourable prefactors.
#### III.3.4 Experiments in IBMQ Casablanca
Figure 8: Results from hardware measurements corresponding to the experimental
realization of the quantum Metropolis described in section III.3.4 and the
circuit depicted in figures 4 and 5. For each dipeptide we perform a Student
t-test to check whether the average success probabilities for
$\beta(\bm{t})=(0,0)$ and $\beta(\bm{t})=(0.1,1)$ are actually different
student1908ttest . The largest p-value measured across all 8 cases is
$3.94\cdot 10^{-18}$, indicating that the difference is significant in every
case. For each of the eight dipeptides we run the circuit 163840 times, and
for the baseline 204800 times.
After having analysed the potential quantum advantage in simulation, we would
also like to implement the quantum Metropolis on the IBM Q Casablanca
processor. The circuit we have implemented consists of two quantum walk
operators, the minimum required to cause interference, applied to the smallest
instance of our problem: dipeptides with a single rotation bit, thus allowing
angles of $0$ and $\pi$.
The corresponding implementation of the circuit, run on the IBMQ Casablanca
processor, is depicted in figures 4 and 5. An important detail of our
implementation of this quantum Metropolis algorithm is that, in order to see
more than noise, we needed to perform simplifications that exploit the
particular structure of the problem. This allows us to minimize the depth of
the circuit, which strongly influences the noise, the limiting factor of our
experiment.
In fact, due both to the high level of noise and to the minimal size of the
problem we are solving, which has only 4 possible states to search from, it
does not make sense to use the Total Time to Solution as the figure of merit
in this experiment. Rather, we are only interested in seeing whether the
quantum circuit is sensitive to the underlying probabilities, which depend on
the chosen $\beta$ value. As we discussed in section III.2.2, for $\beta=0$
all states are equiprobable, whereas for higher $\beta$ values the probability
of the target state is higher, and our experiment aims to observe this
probability difference in practice.
The depicted circuit, with values $\beta(\bm{t})=(0,0)$ and
$\beta(\bm{t})=(0.1,1)$, was run as explained in section III.2.2 several
hundred thousand times for each dipeptide, in order to certify that our
measurements do not correspond to pure noise. The resulting probability
differences are depicted in figure 8, and they are highly significant, as
indicated by the low p-values achieved in all cases, smaller than or equal to
$3.94\cdot 10^{-18}$, in the corresponding Student t-test of the underlying
binomial distribution. This binomial distribution corresponds to returning
value $1$ when the measurement correctly identifies the minimum-energy state,
which is encoded in $\ket{00}$, and $0$ when it does not.
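With shot counts of this size, a Student t-test on a binomial proportion is
well approximated by a pooled two-proportion z-test, which can be sketched as
follows. The counts in the example are hypothetical, chosen only for
illustration, not the actual measurements.

```python
import math

# Significance of the difference between two measured success probabilities,
# each run returning 1 when |00> is measured. The paper uses a Student t-test;
# with ~10^5 shots a pooled two-proportion z-test is an excellent
# approximation. The counts below are hypothetical.
def two_proportion_z(k1, n1, k2, n2):
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

# hypothetical counts: beta = (0.1, 1) run vs beta = (0, 0) baseline
z = two_proportion_z(45000, 163840, 51200, 204800)
print(f"z = {z:.1f}")  # |z| of this magnitude corresponds to a tiny p-value
```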
It can also be seen that in seven out of the eight cases tested, the quantum
Metropolis algorithm points in the right direction of increased success
probability. This gives us confidence that we are measuring a small amount of
quantum effects. The outlier, glycylvaline, is surprising because it is the
dipeptide that in simulation shows the greatest theoretical probability of
measuring $\ket{00}$, and at the same time the dipeptide with the largest
p-value, although still very significant. We can only hypothesise that this is
due to some experimental imperfection.
## IV Conclusions and outlook
We have studied how quantum computing might complement modern machine learning
techniques to predict the folding of proteins. For that, we have introduced
QFold, an algorithm that implements a quantum Metropolis using as starting
guess the output of a machine learning algorithm, in our case a simplified
implementation of the AlphaFold algorithm named Minifold, which could in fact
be substituted by any initialization module based on future improvements of
such deep learning techniques.
An important feature of QFold is that it provides a realistic description of
protein folding, in the sense that the folded structure is described by the
actual torsion angles of the final conformation of the protein. Their
precision is fixed by the number of bits $b$, for which a moderate value of
$b=5$ or $b=6$ would be as accurate as the original AlphaFold. This is in
sharp contrast with the rigid lattice models used to approximate protein
folding in the majority of quantum computing algorithmic proposals. Although
in the simulations presented in this work the precision is limited by the
resources of the classical simulation, nothing prevents QFold from reaching a
realistically accurate precision once a fully fledged quantum computer is at
our disposal, since our formulation is fully scalable within a fault-tolerant
scenario.
The quantum Metropolis algorithm itself relies on the construction of a coined
version of the Szegedy quantum walk lemieux2019efficient , and we use the
Total Time to Solution defined in (7) as a figure of merit. Our construction
of this quantum Metropolis algorithm is the first main contribution of our
work: a scalable and realistic algorithm for protein folding that, in contrast
to previous proposals, does not rely on simplified lattice models.
The second main contribution is an analysis of the expected quantum advantage
in protein folding, which, although moderate in the scaling exponent with the
Minifold initialization, could represent a large difference in the expected
physical time needed to find the minimal-energy configuration of an
average-size protein, due to the exponential nature of this combinatorial
problem. This quantum advantage analysis is also performed for different
realistic annealing schedules, indicating that out-of-equilibrium quantum
Metropolis algorithms can show a similar quantum advantage, and can even
improve upon the advantage of the fixed-$\beta$ case, as can be seen from
table 2. The third contribution is a small proof-of-concept implementation of
our algorithm on actual quantum hardware.
Our results on the computation of protein folding provide further support for
the development of classical simulations of quantum hardware. A clear message
from our simulations is that it is worthwhile developing quantum software and
classical simulations of ideal quantum computers in order to confirm that
certain quantum algorithms provide a realistic quantum advantage. Among the
quantum algorithms that must be assessed in realistic conditions are those
based on quantum walks, like quantum Metropolis variants with fast annealing
schedules. For them, the complexity scales as $O(\delta^{-a})$, with $\delta$
the eigenvalue gap and $a>0$ dependent on the specific application and
realization of the quantum Metropolis. This is in contrast with pure quantum
walks, where the classical complexity scales as $O(\delta^{-1})$ and the
quantum complexity as $O(\delta^{-1/2})$, as can be seen in appendix B.
However, it would be naïve to consider this quadratic quantum advantage as
achievable and useful in problems similar to ours. Instead, one should compare
quantum algorithms with the classical ones used in practice, where heuristic
annealing schedules are the norm. As a consequence, finding the precise values
of the corresponding exponents is of great importance, since they measure the
quantum advantage. For this reason, not all efforts should be devoted to
finding a quantum advantage solely with noisy ‘baby quantum computers’, like
NISQ devices; we should also continue investigating the real possibilities of
universal quantum computers when they become available.
An open question worth addressing is whether the new protein-folding software
coming out of the CASP competition CASP will remain in the public domain or
will instead go proprietary. This is especially important since some of the
most powerful tools in that modern software rely on deep learning that has to
be trained, and the protein databases used for that training are in the public
domain: they are the result of many years of collaborative work among many
publicly funded research institutions. Our results point towards the
possibility of using open public software like Qiskit Qiskit , Psi4
turney2012psi4 and ‘community’ implementations of the AlphaFold algorithm
AlphaFold , of which Minifold ericalcaide2019minifold is an example, to
counterbalance the power of commercial software for protein folding.
We would also like to point out that, despite the great advances achieved by
new classical methods in the latest editions of the CASP competition CASP ,
there is still huge room for improvement, and many gaps in the problem of
protein folding remain to be filled. These include understanding
protein-protein and protein-ligand interactions, the real-time folding
evolution towards the equilibrium configuration, the dark proteome, and
proteins with dynamical configurations such as intrinsically disordered
proteins (IDPs). Crucially, we believe that the current limitations of the
training data sets, which are biased towards easily crystallized proteins,
constrain what can be achieved using deep learning techniques alone. Our
present research is an attempt to explore techniques that address these
limitations.
Future improvements of our work primarily include refining our experimentally
found quantum advantage with peptides of larger size, and a more accurate
comparative analysis of the precise quantum advantage attainable with each
annealing schedule. Such research would be valuable because the choice of
schedule is an important decision when deploying these optimization algorithms
in practice, and to the best of our knowledge this question has not been
studied.
Additionally, we believe further work should be conducted to clarify whether,
asymptotically, either of the initialization modes should be expected to be
polynomially faster at finding the ground state. Similar quantum Metropolis
algorithms could be used in a variety of domains, and a detailed analysis,
both theoretical and experimental, of the expected quantum advantage in each
case therefore seems desirable.
## V Acknowledgements
P.A.M.C. and R.C. contributed equally to this work. We would like to thank
Jaime Sevilla for kind advice on von Mises distributions and statistical
t-tests, Alvaro Martínez del Pozo and Antonio Rey on protein folding, Andrew
W. Senior on minor details of his AlphaFold article, Carmen Recio, Juan Gómez,
Juan Cruz Benito, Kevin Krsulich and Maddy Todd on the usage of Qiskit, and
Jessica Lemieux and the late David Poulin on aspects of the quantum Metropolis
algorithm. We also want to thank IBM Quantum Research for allowing us to use
their quantum processors under the Research program, and Quasar Science for
facilitating access to the AWS resources. We acknowledge financial support
from the Spanish MINECO grants MINECO/FEDER Projects FIS 2017-91460-EXP,
PGC2018-099169-B-I00 FIS-2018 and from CAM/FEDER Project No. S2018/TCS-4342
(QUITEMAD-CM). The research of M.A.M.-D. has been partially supported by the
U.S. Army Research Office through Grant No. W911NF-14-1-0103. P.A.M.C. thanks
the support of MECD grant FPU17/03620, and R.C. the support of CAM grant
IND2019/TIC17146.
## Appendix A Szegedy quantum walks
In order to explain what quantum walks are, we first need to introduce Markov
chains. Given a configuration space $\Omega$, a Markov chain is a stochastic
model over $\Omega$ with transition matrix $\mathcal{W}_{ij}$, which specifies
transition probabilities that depend only on the present state and not on
previous ones. A random walk is the process of moving across $\Omega$
according to $\mathcal{W}_{ij}$.
Quantum walks are the quantum equivalent of random walks portugal2013quantum .
The most famous and widely used quantum walks are those defined by Ambainis
ambainis2007quantum and Szegedy szegedy2004quantum , although several
subsequent generalisations and improvements have been developed, such as those
in magniez2011search . Quantum walks often achieve a quadratic advantage in
the hitting time of a target state with respect to the spectral gap, defined
below, and are widely used in several other algorithms paparo2012google ;
paparo2013quantum ; paparo2014quantum , as we shall see.
Szegedy quantum walks are usually defined not with a coin but as a bipartite
walk. This implies duplicating the Hilbert space and defining the unitary
$U\ket{j}\ket{0}:=\ket{j}\sum_{i\in\Omega}\sqrt{\mathcal{W}_{ji}}\ket{i}=\ket{j}\ket{p_{j}}$
(10a) and also the closely related
$V\ket{0}\ket{i}:=\sum_{j\in\Omega}\sqrt{\mathcal{W}_{ij}}\ket{j}\ket{i}=\ket{p_{i}}\ket{i}.$
(10b)
Figure 9: Geometrical visualization of a quantum walk operator $W$ of Szegedy
type. $W$ performs a series of rotations that in the subspace
$\mathcal{A}+\mathcal{B}$, defined in (11b) with their corresponding rotation
operators (13b), may be written as a block diagonal matrix, where each block
is a 2-dimensional rotation $\omega_{j}=R(2\varphi_{j})$ given by (18). This
figure represents the direct sum of Grover-like rotations in the subspace
spanned by $\mathcal{A}+\mathcal{B}$, and therefore $W$. This quantum walk
operator represents equation (4) from section II.3.
$\mathcal{W}_{ij}$ must be understood as the probability that state $\ket{i}$
transitions to state $\ket{j}$. One can check that these operators fulfil
$SU=VS$, where $S$ is the swap operation between the first and second Hilbert
subspaces. The cost of applying $U$ is usually called the (quantum) update cost.
Define also the subspaces
$\mathcal{A}:=\text{span}\\{\ket{j}\ket{0}:j\in\Omega\\}$ (11a) and
$\mathcal{B}:=U^{\dagger}SU\mathcal{A}=U^{\dagger}VS\mathcal{A}.$ (11b)
Having defined $U$ and $V$, we can define $M:=U^{\dagger}VS$, a matrix with
entries
$\braket{i,0}{U^{\dagger}VS}{j,0}=\sqrt{\mathcal{W}_{ji}}\sqrt{\mathcal{W}_{ij}}=\sqrt{\pi_{i}/\pi_{j}}\mathcal{W}_{ij}$,
where the last equality follows from the detailed balance equation (note that
in most texts the definition of $M$ does not explicitly include $S$, although
it is implicitly assumed). In matrix terms it is usually written as
$M=D_{\pi}^{-1/2}\mathcal{W}D_{\pi}^{1/2}$, where $D_{\pi}$ denotes the
diagonal matrix with the entries of the equilibrium probability vector $\pi$.
This implies that $\mathcal{W}$ and $M$ have the same spectrum
$\lambda_{0}=1\geq...\geq\lambda_{d-1}\geq 0$, as the matrix $\mathcal{W}$ is
positive semi-definite, $p^{T}\mathcal{W}p\in[0,1]$, and of size $d$. The
corresponding eigenstates are $\ket{\phi_{j}}\ket{0}$, with phases
$\varphi_{j}=\arccos{\lambda_{j}}$. In particular,
$\ket{\phi_{0}}=\sum_{i}\sqrt{\pi_{i}}\ket{i}$ is the equilibrium
distribution.
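These properties are easy to verify numerically. The sketch below checks, for
an illustrative two-state Metropolis chain, that detailed balance holds, that
$M$ is symmetric, and that $M$ and $\mathcal{W}$ share trace and determinant
(hence their spectrum); the energies and $\beta$ are illustrative values.

```python
import math

# Two-state Metropolis chain: check detailed balance, the symmetry of
# M_ij = sqrt(pi_i / pi_j) W_ij, and that M and W are isospectral.
beta, E0, E1 = 1.0, 0.0, 0.7
A = math.exp(-beta * (E1 - E0))                   # acceptance for 0 -> 1
W = [[1 - A, A],                                  # W[i][j]: prob. of i -> j
     [1.0, 0.0]]
Z = math.exp(-beta * E0) + math.exp(-beta * E1)
pi = [math.exp(-beta * E0) / Z, math.exp(-beta * E1) / Z]

# detailed balance: pi_i W_ij = pi_j W_ji
assert abs(pi[0] * W[0][1] - pi[1] * W[1][0]) < 1e-12

# M = D^{-1/2} W D^{1/2} is symmetric
M = [[math.sqrt(pi[i] / pi[j]) * W[i][j] for j in range(2)] for i in range(2)]
assert abs(M[0][1] - M[1][0]) < 1e-12

# similar matrices: same trace and determinant, hence the same spectrum
assert abs((M[0][0] + M[1][1]) - (W[0][0] + W[1][1])) < 1e-12
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
detW = W[0][0] * W[1][1] - W[0][1] * W[1][0]
assert abs(detM - detW) < 1e-12
```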
We can also define the projectors around $\mathcal{A}$ and $\mathcal{B}$ as
$\Pi_{\mathcal{A}}$ and $\Pi_{\mathcal{B}}$
$\Pi_{\mathcal{A}}:=(\mathbf{1}\otimes\ket{0}\bra{0}),$ (12a)
$\Pi_{\mathcal{B}}:=U^{\dagger}VS(\mathbf{1}\otimes\ket{0}\bra{0})SV^{\dagger}U$
(12b)
with their corresponding rotations
$R_{\mathcal{A}}=2\Pi_{\mathcal{A}}-\mathbf{1},$ (13a)
$R_{\mathcal{B}}=2\Pi_{\mathcal{B}}-\mathbf{1}.$ (13b)
Using this rotation we further define a quantum walk step as we did in the
main text equation (4),
$W=R_{\mathcal{B}}R_{\mathcal{A}}=U^{\dagger}SUR_{\mathcal{A}}U^{\dagger}SUR_{\mathcal{A}}.$
(14)
Using the previous expressions and
$\Pi_{\mathcal{A}}\ket{\phi_{j}}\ket{0}=\ket{\phi_{j}}\ket{0}$, we can state
$\Pi_{\mathcal{A}}U^{\dagger}VS\ket{\phi_{j}}\ket{0}=\cos\varphi_{j}\ket{\phi_{j}}\ket{0},$
(15a) and
$\Pi_{\mathcal{B}}\ket{\phi_{j}}\ket{0}=U^{\dagger}VS\cos\varphi_{j}\ket{\phi_{j}}\ket{0}.$
(15b)
Equation (15b) is true because (see the supplementary material of yung2012quantum ),
$\Pi_{\mathcal{A}}U^{\dagger}VS\Pi_{\mathcal{A}}=\Pi_{\mathcal{A}}SV^{\dagger}U\Pi_{\mathcal{A}}$
(16)
due to
$\begin{split}&\braket{\phi_{j},0}{\Pi_{\mathcal{A}}U^{\dagger}VS\Pi_{\mathcal{A}}}{\phi_{j},0}=\lambda_{j}\\\
&=\lambda_{j}^{\dagger}=\braket{\phi_{j},0}{\Pi_{\mathcal{A}}SV^{\dagger}U\Pi_{\mathcal{A}}}{\phi_{j},0}.\end{split}$
(17)
Thus $W$ preserves the subspace spanned by
$\\{\ket{\phi_{j}}\ket{0},U^{\dagger}VS\ket{\phi_{j}}\ket{0}\\}$, which is
invariant under $\Pi_{\mathcal{A}}$ and $\Pi_{\mathcal{B}}$, mirroring the
situation in the Grover algorithm Grover . As a consequence, and since in
$\mathcal{A}+\mathcal{B}$ the operator $W$ has eigenvalues $e^{2i\varphi_{j}}$
magniez2011search ; szegedy2004quantum , in that subspace $W$ can be written
as a block-diagonal operator with matrices
$w_{j}=\begin{pmatrix}\cos(2\varphi_{j})&-\sin(2\varphi_{j})\\\
\sin(2\varphi_{j})&\cos(2\varphi_{j})\end{pmatrix}.$ (18)
Finally, notice that the eigenvalue gap of $\mathcal{W}$ is defined as
$\delta=1-\lambda_{1}$, and in general the hitting time of a classical walk
will grow like $O(\delta^{-1})$ (Proposition 1 of magniez2011search ). On the
other hand, the hitting time of the Quantum Walk will scale like
$O(\Delta^{-1})$ (Theorem 6 of magniez2011search ), where
$\Delta:=2\varphi_{1}$. But $\Delta\geq 2\sqrt{1-|\lambda_{1}|^{2}}\geq
2\sqrt{\delta}$, so $\Delta=\Omega(\delta^{1/2})$. In fact, writing
$\delta=1-\lambda_{1}=1-\cos\varphi_{1}$, and expanding in Taylor series
$\cos\varphi_{1}=1-\frac{\varphi_{1}^{2}}{2}+\frac{\varphi_{1}^{4}}{24}+O(\varphi_{1}^{6})$,
we can see that
$\frac{\varphi_{1}^{2}}{2}\geq
1-\cos\varphi_{1}\geq\frac{\varphi_{1}^{2}}{2}-\frac{\varphi_{1}^{4}}{24}.$
(19)
Using the definitions of $\delta$ and $\Delta$ and the fact that
$\varphi_{1}\in(0,\pi/2)$, it is immediate that
$\begin{split}\frac{\Delta^{2}}{8}\geq\delta\geq\frac{\Delta^{2}}{8}-\frac{\Delta^{4}}{2^{4}\cdot
24}&=\frac{\Delta^{2}}{8}\left(1-\frac{\Delta^{2}}{2\cdot 24}\right)\\\
&\geq\frac{\Delta^{2}}{8}\left(1-\frac{\pi^{2}}{2\cdot 24}\right).\end{split}$
(20)
Consequently, $\Delta=\Theta(\delta^{1/2})$, and this is the reason why
Quantum Walks are quadratically faster than their classical counterparts.
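The bounds in (20) and the resulting relation $\Delta=\Theta(\delta^{1/2})$
can be verified numerically for a few values of $\varphi_{1}$:

```python
import math

# Numerical check of the gap relation Delta = Theta(sqrt(delta)):
# with delta = 1 - cos(phi_1) and Delta = 2 phi_1, the bounds of (20)
# hold throughout phi_1 in (0, pi/2).
for phi1 in (0.01, 0.1, 0.5, 1.0, 1.5):
    delta = 1 - math.cos(phi1)
    Delta = 2 * phi1
    upper = Delta ** 2 / 8
    lower = upper * (1 - math.pi ** 2 / 48)
    assert lower <= delta <= upper
```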
## Appendix B Mathematical description of the out-of-equilibrium quantum
Metropolis algorithm
In appendix A we reviewed the Szegedy quantum walk. In this appendix, we
present a quantum Metropolis-Hastings algorithm based on the use of Szegedy
walks.
The objective of the Metropolis-Hastings algorithm is to sample from the Gibbs
distribution
$\rho^{\beta}=\ket{\pi^{\beta}}\bra{\pi^{\beta}}=Z^{-1}(\beta)\sum_{\bm{\phi}\in\Omega}e^{-\beta
E(\bm{\phi})}\ket{\bm{\phi}}\bra{\bm{\phi}}$, where $E(\bm{\phi})$ is the
energy of a given configuration of angles of the molecule, $\beta$ plays the
role of the inverse of a temperature that is lowered during the process, and
$Z(\beta)=\sum_{\bm{\phi}\in\Omega}e^{-\beta E(\bm{\phi})}$ is a normalization
factor. $\Omega$ represents the configuration space, in our case the possible
values the torsion angles may take. One can immediately notice that if $\beta$
is sufficiently large, only the configurations with the lowest possible energy
will appear, with high probability, when sampling from that state. Thus, we
wish to prepare such a state in order to find the configuration with the
lowest energy, in our case the folded state of the protein in nature.
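The concentration of the Gibbs distribution at large $\beta$ is
straightforward to see numerically; the energies below are illustrative:

```python
import math

# A large beta concentrates the Gibbs distribution on the lowest-energy
# configuration. The four energies E(phi) are illustrative values.
energies = [0.0, 0.3, 0.5, 1.2]

def gibbs(beta):
    w = [math.exp(-beta * E) for E in energies]   # unnormalized weights
    Z = sum(w)                                    # partition function
    return [x / Z for x in w]

p_hot = gibbs(0.0)                                # beta = 0: uniform sampling
p_cold = gibbs(50.0)                              # large beta: ground state
assert all(abs(p - 0.25) < 1e-12 for p in p_hot)
assert p_cold[0] > 0.999                          # mass on the minimum
```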
One way to construct such a distribution $\pi^{\beta}$ is to create a rapidly
mixing Markov chain whose equilibrium distribution is $\pi^{\beta}$. Such a
Markov chain is characterized, at a given $\beta$, by a transition matrix
$\mathcal{W}$ that induces a random walk over the possible states; that is,
$\mathcal{W}$ maps a given distribution $p$ to another
$p^{\prime}=\mathcal{W}p$. Let us introduce some useful definitions: a walk is
called irreducible if any state in $\Omega$ can be reached from any other
state in $\Omega$, although not necessarily in a single step. Additionally, we
say that a walk is reversible when it fulfils the detailed balance condition
$\mathcal{W}_{j,i}^{\beta}\pi^{\beta}_{i}=\mathcal{W}_{i,j}^{\beta}\pi^{\beta}_{j}.$
(21)
The Metropolis-Hastings algorithm uses the following transition matrix
$\mathcal{W}_{ij}=\begin{cases}T_{ij}A_{ij},&\text{if $i\neq j$}\\\
1-\sum_{k\neq j}T_{kj}A_{kj},&\text{if $i=j$},\end{cases}$ (22)
where, as given in the main text equation (1),
$A_{ij}=\min\left(1,e^{-\beta(E_{i}-E_{j})}\right),$ (23)
and
$T_{ij}=\begin{cases}\frac{1}{N},&\text{if there is a move connecting $j$ to
$i$}\\\ 0,&\text{otherwise},\end{cases}$ (24)
where $N$ is the number of possible outgoing moves from state $j$, which we
assume to be independent of the current state, as is the case for our
particular problem. With the above definitions, the Metropolis-Hastings
algorithm fulfils the detailed balance condition.
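A single classical step of (22)-(24) can be sketched as below; the toy energy
landscape and move set are illustrative stand-ins for the torsion-angle
problem, not the actual energy function.

```python
import math, random

# One classical Metropolis-Hastings step following (22)-(24): propose one of
# the N moves uniformly (T_ij = 1/N) and accept it with probability
# A_ij = min(1, exp(-beta (E_i - E_j))); otherwise stay (the diagonal term).
def metropolis_step(j, energy, moves, beta, rng=random):
    i = rng.choice(moves(j))
    accept = min(1.0, math.exp(-beta * (energy(i) - energy(j))))
    return i if rng.random() < accept else j

# toy landscape: walk on an angle discretized into 8 values, minimum at 0
energy = lambda s: (s % 8) * 0.5
moves = lambda s: [(s - 1) % 8, (s + 1) % 8]

state = 4
for _ in range(2000):
    state = metropolis_step(state, energy, moves, beta=5.0)
# at beta = 5 the stationary distribution is sharply peaked at the minimum 0
```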
Having defined the Metropolis algorithm, we now want to quantize it using the
walk of the previous appendix. Several proposals to quantize the Metropolis
algorithm exist; they often rely on slow evolution from
$\ket{\pi_{t}}\rightarrow\ket{\pi_{t+1}}$ using variations of Amplitude
Amplification, until a large $\beta$ has been reached.
However, these Metropolis algorithms are most often used out of equilibrium,
something that is rarely worked out in the previous approaches. There are at
least two ways to quantize the out-of-equilibrium Metropolis-Hastings
algorithm lemieux2019efficient . The first one, called ‘Zeno with rewind’
temme2011quantum , uses a simpler version of the previous quantum walks,
where, instead of amplitude amplification, quantum phase estimation is applied
to the operators $W_{t}$, so that measuring the first eigenvalue means that we
have prepared the corresponding eigenvector
$\ket{\psi_{t}^{0}}=\ket{\pi_{t}}$ somma2007quantum . Since the eigenvalue gap
is $\Delta_{t}$, the cost of quantum phase estimation is $O(\Delta_{t}^{-1})$.
Define measurements $Q_{t}=\ket{\pi_{t}}\bra{\pi_{t}}$ and
$Q^{\perp}_{t}=\mathbf{1}-\ket{\pi_{t}}\bra{\pi_{t}}$, for $t$ an index
indicating the step of the cooling schedule. Performing these measurements
consists, as mentioned, in performing phase estimation of the corresponding
operator $\mathcal{W}_{t}$, at cost $\Delta_{t}^{-1}$. If a measurement of
type $Q^{\perp}_{t}$ triggers a restart of the whole algorithm, the variant is
called ‘without rewind’. With rewind, if measurement $Q^{\perp}_{t}$ is
obtained, one instead performs phase estimation of $\mathcal{W}_{t-1}$. If
$|\braket{\pi_{t}}{\pi_{t-1}}|^{2}=F_{t}^{2}$, then the transition probability
between $Q_{t}\leftrightarrow Q_{t-1}$ and between
$Q^{\perp}_{t}\leftrightarrow Q^{\perp}_{t-1}$ is given by $F_{t}^{2}$, and
the transition probability between $Q_{t}\leftrightarrow Q^{\perp}_{t-1}$ and
between $Q^{\perp}_{t}\leftrightarrow Q_{t-1}$ is given by $1-F_{t}^{2}$, so
in a logarithmic number of steps one can recover state $\ket{\pi_{t-1}}$ or
$\ket{\pi_{t}}$.
The second proposal is to perform the unitary heuristic procedure
lemieux2019efficient
$\ket{\psi(L)}=W_{L}...W_{1}\ket{\pi_{0}}.$ (25)
This is in some ways the simplest way one would think of quantizing the
Metropolis-Hastings algorithm: implementing a quantum walk instead of a random
walk. It is very similar to the classical procedure of performing many steps
of the classical walk, slowly increasing $\beta$ until the target temperature
is reached. In addition, this procedure is significantly simpler than the
previously explained ones because it does not require performing phase
estimation on $W_{t}$.
Two more innovations are introduced by lemieux2019efficient . In the first
place, a heuristic Total Time to Solution (TTS) is defined: assuming some
starting distribution, if the operators $W$ are applied $t$ times, the
probability of success is $p(t)$. To succeed with constant probability
$1-\delta$, the procedure must be repeated $\log(1-\delta)/\log(1-p(t))$
times. The total expected time to success is then
$TTS(t):=t\frac{\log(1-\delta)}{\log(1-p(t))},$ (26)
as also indicated in equation (7) of the main text. For the unitary procedure,
$TTS(L)=L\frac{\log(1-\delta)}{\log(1-|\bra{\pi^{g}}W_{L}\cdots W_{1}\ket{\pi_{0}}|^{2})}$
(27)
with $\pi^{g}$ the ground state of the Hamiltonian.
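The TTS of equation (26) is direct to evaluate. A minimal sketch (our own, with illustrative values for $\delta$ and $p(t)$):

```python
import math

def tts(t, p_t, delta=0.99):
    """Total Time to Solution, eq. (26): the number of steps t times
    the number of repetitions needed to succeed with overall
    probability 1 - delta, given per-run success probability p(t)."""
    return t * math.log(1.0 - delta) / math.log(1.0 - p_t)

# Example with assumed values: 10 walk steps, 50% per-run success
# probability, targeting 99% overall confidence (~66.4 steps expected).
print(tts(10, 0.5))
```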
Additionally, (21) constructed an alternative $\tilde{W}$ to the Szegedy
operator using a Boltzmann coin, such that the quantum walk operator acts on
three registers $\ket{x}_{S}\ket{z}_{M}\ket{b}_{C}$: $S$ encodes the state,
$M$ encodes the possible moves, and $C$ is the Boltzmann coin. Their operator
$\tilde{W}$ is equivalent to the Szegedy operator under conjugation by an
operator $Y$ that maps moves to states and vice versa:
$\tilde{W}=RV^{\dagger}B^{\dagger}FBV,$ (28)
with,
$V:\ket{0}_{M}\rightarrow\sum_{j}N^{-1/2}\ket{j}$ (29a)
$\begin{split}B:&\ket{x}_{S}\ket{j}_{M}\ket{0}_{C}\rightarrow\\\
&\ket{x}_{S}\ket{j}_{M}\left(\sqrt{1-A_{x\cdot
z_{j},x}}\ket{0}+\sqrt{A_{x\cdot z_{j},x}}\ket{1}\right)\end{split}$ (29b)
$F:\ket{x}_{S}\ket{j}_{M}\ket{b}_{C}\rightarrow\ket{x\cdot
z^{b}_{j}}_{S}\ket{j}_{M}\ket{b}_{C}$ (29c)
$\begin{split}R:&\ket{0}_{M}\ket{0}_{C}\rightarrow-\ket{0}_{M}\ket{0}_{C}\\\
&\ket{j}_{M}\ket{b}_{C}\rightarrow\ket{j}_{M}\ket{b}_{C},\quad(j,b)\neq(0,0)\end{split}$
(29d)
Here $V$ proposes different moves, $B$ prepares the Boltzmann coin, $F$ flips
the bits necessary to prepare the new state, conditional on the Boltzmann coin
being in state 1, and $R$ is a reflection operator on state $(0,0)$ for the
coin and movement registers. Although our encoding of the operators and states
is slightly different, the algorithm that we have used is this one, mainly due
to its simplicity.
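To make the construction concrete, the toy numpy sketch below builds $V$, $B$, $F$ and $R$ of equations (29a)–(29d) as explicit matrices for a two-bit state register, a one-qubit move register ($N=2$ moves) and a coin qubit, and composes $\tilde{W}=RV^{\dagger}B^{\dagger}FBV$. The energy function, $\beta$, the Metropolis acceptance $A=\min(1,e^{-\beta\Delta E})$, and the extension of $B$ to a full coin rotation are our own illustrative assumptions; the actual implementation in (21) is circuit-level.

```python
import numpy as np

E = np.array([0.0, 1.0, 0.5, 2.0])   # assumed E(x) for x = 00, 01, 10, 11
beta = 1.0                           # assumed inverse temperature
moves = [0b01, 0b10]                 # z_j: which state bits move j flips

dS, dM, dC = 4, 2, 2
dim = dS * dM * dC

def idx(x, j, b):
    # basis ordering |x>_S |j>_M |b>_C, S-major
    return (x * dM + j) * dC + b

# V (29a): uniform superposition over the N = 2 moves (Hadamard on M)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
V = np.kron(np.eye(dS), np.kron(H, np.eye(dC)))

# B (29b): coin rotation by the acceptance A(x -> x XOR z_j),
# extended from |0>_C to a full 2x2 rotation so that B is unitary
B = np.zeros((dim, dim))
for x in range(dS):
    for j in range(dM):
        y = x ^ moves[j]
        A = min(1.0, np.exp(-beta * (E[y] - E[x])))
        c, s = np.sqrt(1.0 - A), np.sqrt(A)
        B[idx(x, j, 0), idx(x, j, 0)] = c
        B[idx(x, j, 1), idx(x, j, 0)] = s
        B[idx(x, j, 0), idx(x, j, 1)] = -s
        B[idx(x, j, 1), idx(x, j, 1)] = c

# F (29c): flip the state bits z_j conditional on the coin being |1>
F = np.zeros((dim, dim))
for x in range(dS):
    for j in range(dM):
        F[idx(x, j, 0), idx(x, j, 0)] = 1.0
        F[idx(x ^ moves[j], j, 1), idx(x, j, 1)] = 1.0

# R (29d): reflection about |0>_M |0>_C, acting trivially on S otherwise
R = np.eye(dim)
for x in range(dS):
    R[idx(x, 0, 0), idx(x, 0, 0)] = -1.0

# W~ = R V† B† F B V; all matrices here are real, so † is transpose
W = R @ V.T @ B.T @ F @ B @ V
```

One can verify that each factor, and hence $\tilde{W}$, is unitary (here: real orthogonal), as required for a quantum walk operator.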
## References
* (1) The UniProt Consortium, “UniProt: a worldwide hub of protein knowledge,” Nucleic Acids Research, vol. 47, no. D1, pp. D506–D515, 2019.
* (2) H. M. Berman, J. Westbrook, Z. Feng, G. Gilliland, T. N. Bhat, H. Weissig, I. N. Shindyalov, and P. E. Bourne, “The protein data bank,” Nucleic Acids Research, vol. 28, no. 1, pp. 235–242, 2000.
* (3) N. Perdigão, J. Heinrich, C. Stolte, K. S. Sabir, M. J. Buckley, B. Tabor, B. Signal, B. S. Gloss, C. J. Hammang, B. Rost, et al., “Unexpected features of the dark proteome,” Proceedings of the National Academy of Sciences, vol. 112, no. 52, pp. 15898–15903, 2015.
* (4) A. Bhowmick, D. H. Brookes, S. R. Yost, H. J. Dyson, J. D. Forman-Kay, D. Gunter, M. Head-Gordon, G. L. Hura, V. S. Pande, D. E. Wemmer, et al., “Finding our way in the dark proteome,” Journal of the American Chemical Society, vol. 138, no. 31, pp. 9730–9742, 2016.
* (5) N. Perdigão and A. Rosa, “Dark proteome database: studies on dark proteins,” High-throughput, vol. 8, no. 2, p. 8, 2019.
* (6) P. N. Bryan and J. Orban, “Proteins that switch folds,” Current Opinion in Structural Biology, vol. 20, no. 4, pp. 482–488, 2010.
* (7) A. K. Dunker, J. D. Lawson, C. J. Brown, R. M. Williams, P. Romero, J. S. Oh, C. J. Oldfield, A. M. Campen, C. M. Ratliff, K. W. Hipps, et al., “Intrinsically disordered protein,” Journal of Molecular Graphics and Modelling, vol. 19, no. 1, pp. 26–59, 2001.
* (8) R. Das and D. Baker, “Macromolecular modeling with rosetta,” Annu. Rev. Biochem., vol. 77, pp. 363–382, 2008.
* (9) University of Washington, “Rosetta@home.” boinc.bakerlab.org, 2021.
* (10) R. Das, B. Qian, S. Raman, R. Vernon, J. Thompson, P. Bradley, S. Khare, M. D. Tyka, D. Bhat, D. Chivian, et al., “Structure prediction for casp7 targets using extensive all-atom refinement with rosetta@ home,” Proteins: Structure, Function, and Bioinformatics, vol. 69, no. S8, pp. 118–128, 2007.
* (11) W. E. Hart and S. Istrail, “Robust proofs of np-hardness for protein folding: general lattices and energy potentials,” Journal of Computational Biology, vol. 4, no. 1, pp. 1–22, 1997.
* (12) B. Berger and T. Leighton, “Protein folding in the hydrophobic-hydrophilic (hp) model is np-complete,” in Proceedings of the Second Annual International Conference on Computational Molecular Biology, pp. 30–39, 1998.
* (13) A. Kryshtafovych, T. Schwede, M. Topf, K. Fidelis, and J. Moult, “Critical assessment of methods of protein structure prediction (casp)—round xiii,” Proteins: Structure, Function, and Bioinformatics, vol. 87, no. 12, pp. 1011–1020, 2019.
* (14) A. Senior, R. Evans, J. Jumper, J. Kirkpatrick, L. Sifre, T. Green, C. Qin, A. Zidek, A. Nelson, A. Bridgland, et al., “Improved protein structure prediction using potentials from deep learning,” Nature, 2020.
* (15) M. D. Hanwell, D. E. Curtis, D. C. Lonie, T. Vandermeersch, E. Zurek, and G. R. Hutchison, “Avogadro: an advanced semantic chemical editor, visualization, and analysis platform,” Journal of Cheminformatics, vol. 4, no. 1, p. 17, 2012.
* (16) P. Wocjan and A. Abeyesinghe, “Speedup via quantum sampling,” Physical Review A, vol. 78, no. 4, p. 042336, 2008.
* (17) R. Somma, S. Boixo, and H. Barnum, “Quantum simulated annealing,” arXiv preprint arXiv:0712.1008, 2007.
* (18) R. D. Somma, S. Boixo, H. Barnum, and E. Knill, “Quantum simulations of classical annealing processes,” Physical Review Letters, vol. 101, no. 13, p. 130504, 2008.
* (19) K. Temme, T. J. Osborne, K. G. Vollbrecht, D. Poulin, and F. Verstraete, “Quantum metropolis sampling,” Nature, vol. 471, no. 7336, p. 87, 2011.
* (20) M.-H. Yung and A. Aspuru-Guzik, “A quantum–quantum metropolis algorithm,” Proceedings of the National Academy of Sciences, vol. 109, no. 3, pp. 754–759, 2012.
* (21) J. Lemieux, B. Heim, D. Poulin, K. Svore, and M. Troyer, “Efficient Quantum Walk Circuits for Metropolis-Hastings Algorithm,” Quantum, vol. 4, p. 287, June 2020.
* (22) M. Szegedy, “Quantum speed-up of markov chain based algorithms,” in 45th Annual IEEE Symposium on Foundations of Computer Science, pp. 32–41, IEEE, 2004.
* (23) R. Babbush, A. Perdomo-Ortiz, B. O’Gorman, W. Macready, and A. Aspuru-Guzik, “Construction of energy functions for lattice heteropolymer models: a case study in constraint satisfaction programming and adiabatic quantum optimization,” arXiv preprint arXiv:1211.3422, 2012.
* (24) A. Robert, P. K. Barkoutsos, S. Woerner, and I. Tavernelli, “Resource-efficient quantum algorithm for protein folding,” arXiv preprint arXiv:1908.02163, 2019.
* (25) A. Perdomo-Ortiz, N. Dickson, M. Drew-Brook, G. Rose, and A. Aspuru-Guzik, “Finding low-energy conformations of lattice protein models by quantum annealing,” Scientific Reports, vol. 2, p. 571, 2012.
* (26) M. Fingerhuth, T. Babej, et al., “A quantum alternating operator ansatz with hard and soft constraints for lattice protein folding,” arXiv preprint arXiv:1810.13411, 2018.
* (27) T. Babej, M. Fingerhuth, et al., “Coarse-grained lattice protein folding on a quantum annealer,” arXiv preprint arXiv:1811.00713, 2018.
* (28) A. Perdomo, C. Truncik, I. Tubert-Brohman, G. Rose, and A. Aspuru-Guzik, “Construction of model hamiltonians for adiabatic quantum computation and its application to finding low-energy conformations of lattice protein models,” Physical Review A, vol. 78, no. 1, p. 012320, 2008.
* (29) C. Outeiral, G. M. Morris, J. Shi, M. Strahm, S. C. Benjamin, and C. M. Deane, “Investigating the potential for a limited quantum speedup on protein lattice problems,” arXiv preprint arXiv:2004.01118, 2020.
* (30) V. K. Mulligan, H. Melo, H. I. Merritt, S. Slocum, B. D. Weitzner, A. M. Watkins, P. D. Renfrew, C. Pelissier, P. S. Arora, and R. Bonneau, “Designing peptides on a quantum computer,” bioRxiv, p. 752485, 2020.
* (31) L. Banchi, M. Fingerhuth, T. Babej, C. Ing, and J. M. Arrazola, “Molecular docking with gaussian boson sampling,” Science Advances, vol. 6, no. 23, p. eaax1950, 2020.
* (32) The $\alpha$-helix and the $\beta$-sheet correspond to two common structures found in protein folding. Such structures constitute what is called the secondary structure of the protein, and are characterised because $(\phi,\psi)=(-\pi/3,-\pi/4)$ in the $\alpha$-helix, and $(\phi,\psi)=(-3\pi/4,-3\pi/4)$ in the $\beta$-sheet, due to the hydrogen bonds that happen between backbone amino groups NH and backbone carboxy groups CO.
* (33) R. Von Mises, Mathematical Theory of Probability and Statistics. Academic Press, 2014.
* (34) J. M. Turney, A. C. Simmonett, R. M. Parrish, E. G. Hohenstein, F. A. Evangelista, J. T. Fermann, B. J. Mintz, L. A. Burns, J. J. Wilke, M. L. Abrams, et al., “Psi4: an open-source ab initio electronic structure program,” Wiley Interdisciplinary Reviews: Computational Molecular Science, vol. 2, no. 4, pp. 556–565, 2012.
* (35) L. K. Grover, “Quantum mechanics helps in searching for a needle in a haystack,” Physical Review Letters, vol. 79, no. 2, p. 325, 1997.
* (36) F. Magniez, A. Nayak, J. Roland, and M. Santha, “Search via quantum walk,” SIAM Journal on Computing, vol. 40, no. 1, pp. 142–164, 2011.
* (37) T. Albash and D. A. Lidar, “Demonstration of a scaling advantage for a quantum annealer over simulated annealing,” Physical Review X, vol. 8, no. 3, p. 031016, 2018.
* (38) E. Alcaide, “Minifold: a deeplearning-based mini protein folding engine.” https://github.com/EricAlcaide/MiniFold/, 2019.
* (39) H. Abraham et al., “Qiskit: An open-source framework for quantum computing,” 2019.
* (40) Amazon.com, Inc., “Amazon web services.” aws.amazon.com, 2021.
* (41) S. Kim, J. Chen, T. Cheng, A. Gindulyte, J. He, S. He, Q. Li, B. A. Shoemaker, P. A. Thiessen, B. Yu, et al., “Pubchem 2019 update: improved access to chemical data,” Nucleic Acids Research, vol. 47, no. D1, pp. D1102–D1109, 2019.
* (42) F. Jensen, “Atomic orbital basis sets,” Wiley Interdisciplinary Reviews: Computational Molecular Science, vol. 3, no. 3, pp. 273–295, 2013.
* (43) T. Helgaker, P. Jorgensen, and J. Olsen, Molecular electronic-structure theory. John Wiley & Sons, 2014.
* (44) L. Grover and T. Rudolph, “Creating superpositions that correspond to efficiently integrable probability distributions,” arXiv preprint quant-ph/0208112, 2002.
* (45) S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
* (46) K. Temme, S. Bravyi, and J. M. Gambetta, “Error mitigation for short-depth quantum circuits,” Physical Review Letters, vol. 119, no. 18, p. 180509, 2017.
* (47) R. LaRose, A. Mari, P. J. Karalekas, N. Shammah, and W. J. Zeng, “Mitiq: A software package for error mitigation on noisy quantum computers,” 2020.
* (48) M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., “Tensorflow: A system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283, 2016.
* (49) F. Chollet et al., “Keras.” https://github.com/fchollet/keras, 2015.
* (50) R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction. MIT press, 2018.
* (51) P. J. Van Laarhoven and E. H. Aarts, “Simulated annealing,” in Simulated annealing: Theory and applications, pp. 7–15, Springer, 1987.
* (52) Student, “The probable error of a mean,” Biometrika, pp. 1–25, 1908.
* (53) R. Portugal, Quantum walks and search algorithms. Springer, 2013.
* (54) A. Ambainis, “Quantum walk algorithm for element distinctness,” SIAM Journal on Computing, vol. 37, no. 1, pp. 210–239, 2007.
* (55) G. D. Paparo and M. Martin-Delgado, “Google in a quantum network,” Scientific Reports, vol. 2, p. 444, 2012.
* (56) G. D. Paparo, M. Müller, F. Comellas, and M. A. Martin-Delgado, “Quantum google in a complex network,” Scientific Reports, vol. 3, p. 2773, 2013.
* (57) G. D. Paparo, V. Dunjko, A. Makmal, M. A. Martin-Delgado, and H. J. Briegel, “Quantum speedup for active learning agents,” Physical Review X, vol. 4, no. 3, p. 031002, 2014.
* (58) Notice that in most texts the definition of $M$ does not explicitly include $S$. It is assumed implicitly though.
# PAWLS: PDF Annotation With Labels and Structure
Mark Neumann, Zejiang Shen, Sam Skjonsberg
Allen Institute for Artificial Intelligence
<EMAIL_ADDRESS>
###### Abstract
Adobe’s Portable Document Format (PDF) is a popular way of distributing view-
only documents with a rich visual markup. This presents a challenge to NLP
practitioners who wish to use the information contained within PDF documents
for training models or data analysis, because annotating these documents is
difficult. In this paper, we present PDF Annotation with Labels and Structure
(PAWLS), a new annotation tool designed specifically for the PDF document
format. PAWLS is particularly suited for mixed-mode annotation and scenarios
in which annotators require extended context to annotate accurately. PAWLS
supports span-based textual annotation, N-ary relations and freeform, non-
textual bounding boxes, all of which can be exported in convenient formats for
training multi-modal machine learning models. A read-only PAWLS server is
available at https://pawls.apps.allenai.org/ (see Appendix A for instructions
on accessing the demo) and the source code is available at
https://github.com/allenai/pawls.
## 1 Introduction
Authors of Natural Language Processing technology rely on access to gold
standard annotated data for training and evaluation of learning algorithms.
Despite successful attempts to create machine readable document formats such
as XML and HTML, the Portable Document Format (PDF) is still widely used for
read only documents which require visual markup, across domains such as
scientific publishing, law and government. This presents a challenge to NLP
practitioners, as the PDF format does not contain exhaustive markup
information, making it difficult to extract semantically meaningful regions
from a PDF. Annotating text extracted from PDFs in a plaintext format is
difficult, because the extracted text stream lacks any organisation or markup,
such as paragraph boundaries, figure placement and page headers/footers.
Existing popular annotation tools such as BRAT Stenetorp et al. (2012) focus
on annotation of user-provided plain text in a web browser designed
specifically for annotation. For many labeling tasks, this format is exactly
what is required. However, as the scope and ability of natural language
processing technology go beyond purely textual processing, due in part to
recent advances in large language models (Peters et al., 2018; Devlin et al.,
2019, inter alia), the context and media in which datasets are created must
evolve as well.
In addition, the quality of both data collection and evaluation methodology is
highly dependent on the particular annotation/evaluation context in which the
data being annotated is viewed Joseph et al. (2017); Läubli et al. (2018).
Annotating data directly on top of a PDF canvas allows naturally occurring
text to be collected while being viewed by annotators in its original
context, that of the PDF itself.
To address the need for an annotation tool that goes beyond plaintext data, we
present a new annotation tool called PAWLS (PDF Annotation With Labels and
Structure). In this paper, we discuss some of the PDF specific design choices
in PAWLS, including automatic bounding box uniformity, free-form annotations
for non-textual image regions and scale/dimension agnostic bounding box
storage. We report agreement statistics from an initial round of labelling
during the creation of a PDF structure parsing dataset for which PAWLS was
originally designed.
Figure 1: An overview of the PAWLS annotation interface.
## 2 Design Choices
As shown in Figure 1, the primary operation that PAWLS supports is drawing a
bounding box over a PDF document with the mouse, and assigning that region of
the document a textual label. PAWLS supports drawing both freeform boxes
anywhere on the PDF, as well as boxes which are associated with tokens
extracted from the PDF itself.
This section describes some of the user interface design choices in PAWLS.
### 2.1 PDF Native Annotation
The primary tenet of PAWLS is the idea that annotators are accustomed to
reading and interacting with PDF documents themselves, and as such, PAWLS
should render the actual PDF as the medium for annotation. In order to achieve
this, annotations themselves must be relative to a rendered PDF’s scale in the
browser. Annotations are automatically re-sized to fit the rendered PDF
canvas, but stored relative to the absolute dimensions of the original PDF
document.
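This storage convention can be sketched as a pair of coordinate mappings (function names and box layout are ours, not the PAWLS API): boxes drawn on the rendered canvas are rescaled to the page's absolute dimensions before being stored, and rescaled back whenever the page is rendered at a different zoom level.

```python
def to_pdf_coords(box, canvas_w, canvas_h, page_w, page_h):
    """Map a (left, top, width, height) box from rendered-canvas
    pixels to absolute PDF page coordinates."""
    sx, sy = page_w / canvas_w, page_h / canvas_h
    left, top, w, h = box
    return (left * sx, top * sy, w * sx, h * sy)

def to_canvas_coords(box, canvas_w, canvas_h, page_w, page_h):
    """Inverse mapping: absolute PDF coordinates back to the canvas,
    at whatever scale the page is currently rendered."""
    sx, sy = canvas_w / page_w, canvas_h / page_h
    left, top, w, h = box
    return (left * sx, top * sy, w * sx, h * sy)
```

Because the stored coordinates are scale-free, the same annotation renders correctly on a canvas of any size.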
### 2.2 Annotator Ease of Use
PAWLS contains several features which are designed to speed up annotation by
users, as well as minimizing frustrating or difficult interaction experiences.
Bounding box borders in PAWLS change depending on the size and density of the
annotated span, making it easier to read dense annotations. Annotators can
hide bounding box labels using the CTRL key for cases where labels are
obscuring the document flow. Users can undo annotations with familiar key
combinations (CMD-z) and delete annotations directly from the sidebar. These
features were derived from a tight feedback loop with annotation experts
during development of the tool.
Figure 2: An example of visual token selection. When a user begins
highlighting a bounding box, PAWLS uses underlying token level boundary
information extracted from the PDF to 1) highlight selected textual spans as
they are dragged over and 2) normalize the bounding box of a selection to be a
fixed padded distance from the maximally large token boundary.
### 2.3 Token Parsing
PAWLS pre-processes PDFs before they are rendered in the UI to extract the
bounding boxes of every token present in the document. This allows a variety
of interactive labelling features described below. Users can choose between
different pre-processors based on their needs, such as GROBID
(https://github.com/kermitt2/grobid) and PdfPlumber
(https://github.com/jsvine/pdfplumber) for programmatically generated PDFs,
or Tesseract (https://github.com/tesseract-ocr/tesseract) for Optical
Character Recognition (OCR) in PDFs which have been scanned, or are otherwise
low quality. Future extensions to PAWLS will include higher level PDF
structure which is general enough to be useful across a range of domains, such
as document titles, paragraphs and section headings to further extend the
possible annotation modes, such as clicking on paragraphs or sections.
### 2.4 Visual Token Selection and Box Snapping
PAWLS pre-processes PDFs before they are served in the annotation interface,
giving access to token level bounding box information. When users draw new
bounding boxes, token spans are highlighted to indicate their inclusion in the
annotation. After the user has completed the selection, the bounding box
“snaps” to a normalized boundary containing the underlying PDF tokens. Figure
2 demonstrates this interaction. In particular, this allows bounding boxes to
be normalized relative to their containing token positions (having a fixed
border), making annotations more consistent and uniform with no additional
annotator effort. This feature allows annotators to focus on the content of
their annotations, rather than ensuring a consistent visual markup, easing the
annotation flow and increasing the consistency of the collected annotations.
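The snapping behaviour can be sketched as follows (a hypothetical simplification of the interaction, not PAWLS's actual code): take the union of all token boxes intersecting the drawn box and pad it by a fixed margin.

```python
def snap_to_tokens(drawn, tokens, pad=3):
    """Snap a user-drawn box to the tokens it overlaps.

    All boxes are (x0, y0, x1, y1). Returns the drawn box unchanged if
    it touches no tokens; otherwise the padded union of the hit tokens,
    giving every annotation a fixed border around its token boundary."""
    def overlaps(a, b):
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    hit = [t for t in tokens if overlaps(drawn, t)]
    if not hit:
        return drawn
    x0 = min(t[0] for t in hit) - pad
    y0 = min(t[1] for t in hit) - pad
    x1 = max(t[2] for t in hit) + pad
    y1 = max(t[3] for t in hit) + pad
    return (x0, y0, x1, y1)
```

Any two annotators who sweep over the same tokens thus produce identical boxes, which is what makes the collected annotations uniform.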
### 2.5 N-ary Relational Annotations
PAWLS supports N-ary relational annotations as well as those based on bounding
boxes. Relational annotations are supported for both textual and free-form
annotations, allowing the collection of event structures which include non-
textual PDF regions, such as figure/table references, or sub-image
coordination. For example, this feature would allow annotators to link figure
captions to particular figure regions, or relate a discussion of a particular
table column in the text to the exact visual region of the column/table
itself. Figure 3 demonstrates this interaction mode for two annotations.
Figure 3: The relation annotation modal.
### 2.6 Command Line Interface
PAWLS includes a command line interface for administrating annotation
projects. It includes functionality for assigning labeling tasks to
annotators, monitoring the annotation progress and quality (measuring inter
annotator agreement), and exporting annotations in a variety of formats.
Additionally, it supports pre-populating annotations from model predictions,
detailed in Section 2.7.
Annotations in PAWLS can be exported to different formats to support different
downstream tasks. The hierarchical structure of user-drawn blocks and PDF
tokens is stored in JSON format, linking blocks with their corresponding
tokens. For vision-centered tasks (e.g., document layout detection), PAWLS
supports converting to the widely-used COCO format, including generating jpeg
captures of pdf pages for training vision models. For text-centric tasks,
PAWLS can generate a table for tokens and labels obtained from the annotated
bounding boxes.
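As an illustration of the vision-centred export, the following sketch assembles a minimal COCO-style dictionary; the field names follow the COCO convention, while the input layout is our own assumption, not PAWLS's internal schema.

```python
def to_coco(pages, annotations, categories):
    """Assemble a minimal COCO-style dictionary.

    pages:        list of (file_name, width, height), one per page image
    annotations:  list of (page_index, label, (x, y, w, h)) boxes
    categories:   list of label names
    """
    cat_ids = {name: i + 1 for i, name in enumerate(categories)}
    return {
        "images": [
            {"id": i, "file_name": f, "width": w, "height": h}
            for i, (f, w, h) in enumerate(pages)
        ],
        "annotations": [
            {"id": k, "image_id": page, "category_id": cat_ids[label],
             "bbox": [x, y, w, h], "area": w * h, "iscrowd": 0}
            for k, (page, label, (x, y, w, h)) in enumerate(annotations)
        ],
        "categories": [
            {"id": i, "name": n} for n, i in cat_ids.items()
        ],
    }
```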
### 2.7 Annotation Pre-population
The PAWLS command line interface supports pre-population of annotations given
a set of bounding boxes predictions for each page. This enables model-in-the-
loop type functionality, with annotators correcting model predictions directly
on the PDF. Future extensions to PAWLS will include active learning based
annotation suggestions as annotators work, from models running as a service.
## 3 Implementation
PAWLS is implemented as a Python-based web server which serves PDFs,
annotations and other metadata stored on disk in the JSON format. The user
interface is a Single Page Application implemented using Typescript and relies
heavily on the React web framework. PDFs are rendered using PDF.js. PAWLS is
designed to be used in a browser, with no setup work required on the behalf of
annotators apart from navigating to a web page. This makes annotation projects
more flexible as they can be distributed across a variety of crowd-sourcing
platforms, used in house, or run on local machines.
PAWLS development and deployment are both managed using the containerization
tools Docker and Docker Compose, and multiple PAWLS instances are running on a
Google Cloud Platform Kubernetes cluster. Authentication in production
environments is managed via Google Account logins, but PAWLS can be run
locally by individual users with no authentication.
## 4 Case Study
PAWLS enables the collection of mixed-mode annotations on PDFs. PAWLS is
currently in use for a PDF Layout Parsing project for academic papers, for
which we have collected an initial set of gold standard annotations. This
dataset consists of 80 PDF pages with 2558 densely annotated bounding boxes of
20 categories from 3 annotators.
Table 1 reports pairwise Inter-Annotator agreement scores, split out into
textual and non-textual labels. For textual labels like titles and paragraphs,
the agreement is measured via token accuracy: for each word labeled, we
compare the label of the containing block across different annotators. Non-
textual labels are used for regions like figures and tables, and they are
usually labeled using free-form boxes. Average Precision (AP) score Lin et al.
(2014), commonly used in Object Detection tasks (e.g., COCO) in computer
vision, is adopted to measure the consistency of these boxes labeled by
different annotators. As AP calculates block-category agreement at different
overlap levels between a designated “ground truth” and “prediction”, the
score is not symmetric.
| | Annotator 1 | Annotator 2 | Annotator 3 |
|---|---|---|---|
| Annotator 1 | N/A | 94.43 / 86.58 | 93.28 / 83.97 |
| Annotator 2 | 94.43 / 86.49 | N/A | 88.69 / 84.20 |
| Annotator 3 | 93.28 / 84.67 | 88.69 / 84.79 | N/A |
Table 1: The Inter-Annotator Agreement scores for the labeling task. We show
the textual / non-textual annotation agreement scores in each cell. The
$(i,j)$-th element in this table is calculated by treating $i$’s annotation as
the “ground truth” and $j$’s as the “prediction”.
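The textual (token-accuracy) side of the metric can be sketched as follows (our illustration of the metric described above, not the authors' evaluation code):

```python
def token_agreement(labels_a, labels_b):
    """Pairwise token-level agreement: the percentage of tokens whose
    containing block received the same label from both annotators.
    Both inputs are parallel per-token label sequences."""
    assert len(labels_a) == len(labels_b), "annotators must label the same tokens"
    same = sum(a == b for a, b in zip(labels_a, labels_b))
    return 100.0 * same / len(labels_a)
```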
## 5 Related Work
Many commercial PDF annotation tools exist, such as IBM Watson’s smart
document understanding feature and TagTog’s Beta PDF Annotation tool
(https://www.tagtog.net/#pdf-annotation). PAWLS will be open source and
freely available. Knowledge management systems such as Protégé Musen (2015)
support PDFs, but are more suited to the management of large, evolving corpora and
knowledge graph construction than the creation of static datasets.
LabelStudio (https://labelstud.io/) supports image annotation as well as
plaintext/html-based annotation, meaning PDF pages can be uploaded and
annotated within their user interface. However, bounding boxes are hand drawn,
and the context of the entire PDF is not visible, as the PDF pages are viewed
as individual images. PDFAnno Shindo et al. (2018) is the closest tool
conceptually to PAWLS, supporting multiple annotation modes and PDF-based
rendering. Unfortunately, PDFAnno is no longer maintained, and PAWLS provides
additional functionality, such as pre-annotation.
Several PDF-based datasets exist for document parsing, such as DocBank Li et
al. (2020b), PubLayNet Zhong et al. (2019) and TableBank Li et al. (2020a).
However, both DocBank and PubLayNet are constructed using weak supervision from
LaTeX parses or PubMed XML information. TableBank consists of 417k tables
extracted from Microsoft Word documents and computer generated PDFs. This
approach is feasible for common elements of document structure such as tables,
but is not possible for custom annotation labels or detailed figure/table
decomposition.
The PAWLS interface is similar to tools which augment PDFs for reading or note
taking purposes. Along with commercial tools such as Adobe Reader, SideNoter
Abekawa and Aizawa (2016) augments PDFs with rich note taking and linguistic
annotation overlays, directly on the PDF canvas. ScholarPhi Head et al. (2020)
augments the PDF reading experience with equation overlays and definition
modals for symbols.
As a PDF specific annotation tool, PAWLS adds to the wider landscape of
annotation tools which fulfil a particular niche. SLATE Kummerfeld (2019)
provides a command line annotation tool for expert annotators; Mayhew and Roth
(2018) provides an annotation interface specifically designed for cross-
lingual annotation in which the annotators do not speak the target language.
Textual annotation tools such as BRAT Stenetorp et al. (2012), Pubtator Wei et
al. (2013, 2012) or Knowtator Ogren (2006) are recommended for annotations
which do not require full PDF context, or for which extension to multi-modal
data formats is not possible or likely. We view PAWLS as a complementary tool
to the suite of text-based annotation tools, which support more advanced types
of annotation and configuration but deal with annotation on extracted text
removed from its originally published format.
In particular, we envisage scholarly document annotation as a key use case for
PAWLS, as PDF is a widely used format in the context of scientific
publication. Several recently published datasets leave document structure
parsing or multi-modal annotation to future work. For example, the SciREX
dataset Jain et al. (2020) uses the text-only LaTeX source of arXiv papers for
dataset construction, leaving Table and Figure extraction to future work.
Multiple iterations of the Evidence Inference dataset Lehman et al. (2019);
DeYoung et al. (2020) use textual descriptions of interventions in clinical
trial reports; answering inferential questions using figures, tables and
graphs may be a more natural format for some queries.
## 6 Conclusion
In this paper, we have introduced a new annotation tool, PAWLS, designed
specifically with PDFs in mind. PAWLS facilitates the creation of multi-modal
datasets, due to its support for mixed mode annotation of both text and image
sub-regions on PDFs. Additionally, we described several user interface design
choices which improve the resulting annotation quality, and conducted a small
initial annotation effort, reporting high annotator agreement. PAWLS is
released as an open source project under the Apache 2.0 license.
## References
* Abekawa and Aizawa (2016) Takeshi Abekawa and Akiko Aizawa. 2016. SideNoter: Scholarly paper browsing system based on PDF restructuring and text annotation. In _Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations_ , pages 136–140, Osaka, Japan. The COLING 2016 Organizing Committee.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* DeYoung et al. (2020) Jay DeYoung, Eric Lehman, Ben Nye, Iain J. Marshall, and Byron C. Wallace. 2020\. Evidence inference 2.0: More data, better models.
* Head et al. (2020) Andrew Head, Kyle Lo, Dongyeop Kang, Raymond Fok, Sam Skjonsberg, Daniel S. Weld, and Marti A. Hearst. 2020. Augmenting scientific papers with just-in-time, position-sensitive definitions of terms and symbols. _ArXiv_ , abs/2009.14237.
* Jain et al. (2020) Sarthak Jain, Madeleine van Zuylen, Hannaneh Hajishirzi, and Iz Beltagy. 2020. SciREX: A challenge dataset for document-level information extraction. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7506–7516, Online. Association for Computational Linguistics.
* Joseph et al. (2017) Kenneth Joseph, Lisa Friedland, William Hobbs, David Lazer, and Oren Tsur. 2017\. ConStance: Modeling annotation contexts to improve stance classification. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 1115–1124, Copenhagen, Denmark. Association for Computational Linguistics.
## Appendix A Accessing the Demo
Production deployments of PAWLS use Google Authentication to allow users to log in. The demo server, accessible at https://pawls.apps.allenai.org/, is configured to allow access to all non-corp Gmail accounts, e.g. any email address ending in<EMAIL_ADDRESS>No annotations will be collected from this server, as it is read-only. Please use a personal email address, or create a one-off account if you do not use Gmail. If you cannot log in, please try again in an incognito window, which ensures that Gmail cookies are not set. A demo video of PAWLS usage is available at https://youtu.be/TB4kzh2H9og.
# On a class of integrable Hamiltonian equations in 2+1 dimensions
B. Gormley1, E.V. Ferapontov1,2, V.S. Novikov1
###### Abstract
We classify integrable Hamiltonian equations of the form
$u_{t}=\partial_{x}\left(\frac{\delta H}{\delta u}\right),\quad H=\int h(u,w)\
dxdy,$
where the Hamiltonian density $h(u,w)$ is a function of two variables:
dependent variable $u$ and the non-locality
$w=\partial_{x}^{-1}\partial_{y}u$. Based on the method of hydrodynamic
reductions, the integrability conditions are derived (in the form of an
involutive PDE system for the Hamiltonian density $h$). We show that the
generic integrable density is expressed in terms of the Weierstrass
$\sigma$-function: $h(u,w)=\sigma(u)e^{w}$. Dispersionless Lax pairs,
commuting flows and dispersive deformations of the resulting equations are
also discussed.
MSC: 35Q51, 37K10.
Keywords: Hamiltonian PDEs, hydrodynamic reductions, Einstein-Weyl geometry,
dispersionless Lax pairs, commuting flows, dispersive deformations,
Weierstrass elliptic functions.
1Department of Mathematical Sciences
Loughborough University
Loughborough, Leicestershire LE11 3TU
United Kingdom
2Institute of Mathematics, Ufa Federal Research Centre,
Russian Academy of Sciences, 112, Chernyshevsky Street,
Ufa 450077, Russia
e-mails:
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
To Allan Fordy on the occasion of his 70th birthday
###### Contents
1. Introduction
  1.1 Formulation of the problem
  1.2 Equivalent approaches to dispersionless integrability
  1.3 Summary of the main results
2. Proofs
  2.1 The method of hydrodynamic reductions: proof of Theorem 1
  2.2 Canonical forms of integrable densities: proof of Theorem 2
  2.3 Dispersionless Lax pairs
  2.4 Commuting flows: proof of Theorem 3
  2.5 Commuting flows and dispersionless Lax pairs
3. Dispersive deformations
4. Appendix: dispersionless Lax pair for $h=\sigma(u)e^{w}$
## 1 Introduction
### 1.1 Formulation of the problem
In this paper we investigate Hamiltonian systems of the form
$u_{t}=\partial_{x}\bigg{(}\frac{\delta H}{\delta u}\bigg{)},\quad H=\int
h(u,w)\ dxdy.$ (1)
Here $\partial_{x}$ is the Hamiltonian operator, and the Hamiltonian density
$h(u,w)$ depends on $u$ and the nonlocal variable
$w=\partial_{x}^{-1}\partial_{y}u$ (equivalently, $w_{x}=u_{y}$). Since
$\frac{\delta H}{\delta u}=h_{u}+\partial_{x}^{-1}\partial_{y}(h_{w})$ we can
rewrite equation (1) in the two-component first-order quasilinear form:
$\displaystyle u_{t}=(h_{u})_{x}+(h_{w})_{y},\quad w_{x}=u_{y}.$ (2)
Familiar examples of this type include the dispersionless KP equation
($h=\frac{1}{2}w^{2}+\frac{1}{6}u^{3}$) and the dispersionless Toda (Boyer-
Finley) equation ($h=e^{w}$). Our main goal is to classify integrable systems
within class (2) and to construct their dispersionless Lax pairs, commuting
flows and dispersive deformations. Before stating our main results, let us
begin with a brief description of the existing approaches to dispersionless
integrability in 2+1 dimensions.
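For instance, substituting the dKP density $h=\frac{1}{2}w^{2}+\frac{1}{6}u^{3}$ into (2) gives $h_{u}=\frac{1}{2}u^{2}$, $h_{w}=w$, so that
$u_{t}=uu_{x}+w_{y},\quad w_{x}=u_{y};$
eliminating the nonlocality $w$ yields $(u_{t}-uu_{x})_{x}=u_{yy}$, the familiar form of the dispersionless KP equation.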
### 1.2 Equivalent approaches to dispersionless integrability
Here we summarise three existing approaches to integrability of equations of
type (2), namely, the method of hydrodynamic reductions, the geometric
approach based on integrable conformal geometry (Einstein-Weyl geometry), and
the method of dispersionless Lax pairs. Based on seemingly different ideas,
these approaches lead to equivalent integrability conditions/classification
results [13].
The method of hydrodynamic reductions, see e.g. [16, 9], consists of seeking
multiphase solutions to system (2) in the form
$u=u(R^{1},R^{2},\ldots,R^{n}),\quad w=w(R^{1},R^{2},\ldots,R^{n})$ (3)
where the phases $R^{i}(x,y,t)$ (also known as Riemann invariants; note that
their number $n$ can be arbitrary) satisfy a pair of commuting hydrodynamic-
type systems:
$R^{i}_{y}=\mu^{i}(R)R^{i}_{x},\quad R^{i}_{t}=\lambda^{i}(R)R^{i}_{x};$ (4)
we recall that the commutativity conditions are equivalent to the following
constraints for the characteristic speeds $\mu^{i},\lambda^{i}$ [18, 19]:
$\frac{\partial_{j}\mu^{i}}{\mu^{j}-\mu^{i}}=\frac{\partial_{j}\lambda^{i}}{\lambda^{j}-\lambda^{i}},$
(5)
$i\neq j,\ \partial_{j}=\partial_{R^{j}}$. Substituting ansatz (3) into (2)
and using (4), (5) one obtains an overdetermined system for the unknowns
$u,w,\mu^{i},\lambda^{i}$, viewed as functions of $R^{1},\dots,R^{n}$ (the so-
called generalised Gibbons-Tsarev system, or GT-system). System (2) is said to
be integrable by the method of hydrodynamic reductions if it possesses
‘sufficiently many’ multi-phase solutions of type (3), in other words, if the
corresponding GT-system is involutive. Note that the coefficients of the GT-system depend on the density $h(u,w)$ and its partial derivatives. The requirement that the GT-system be involutive imposes differential constraints on the Hamiltonian density $h$, the so-called integrability conditions. Details
of this computation will be given in Section 2.1.
Integrability via Einstein-Weyl geometry. Let us first introduce a conformal
structure defined by the characteristic variety of system (2). Given a
$2\times 2$ quasilinear system
$A(v)v_{x^{1}}+B(v)v_{x^{2}}+C(v)v_{x^{3}}=0$
where $A,B,C$ are $2\times 2$ matrices depending on $v=(u,w)^{T}$, the
characteristic equation of this system, $\det(Ap_{1}+Bp_{2}+Cp_{3})=0$,
defines a conic $g^{ij}p_{i}p_{j}=0$. This gives the characteristic conformal
structure $[g]=g_{ij}dx^{i}dx^{j}$ where $g_{ij}$ is the inverse matrix of
$g^{ij}$. For system (2) direct calculation gives
$[g]=4h_{ww}dxdt-dy^{2}-4h_{uw}dydt+4(h_{ww}h_{uu}-h_{uw}^{2})dt^{2};$ (6)
here we set $(x^{1},x^{2},x^{3})=(x,y,t)$. Note that $[g]$ depends upon a
solution to the system (we will assume $[g]$ to be non-degenerate, which is
equivalent to the condition $h_{ww}\neq 0$). It turns out that integrability
of system (2) can be reformulated geometrically as the Einstein-Weyl property
of the characteristic conformal structure $[g]$. We recall that Einstein-Weyl
geometry is a triple $(\mathbb{D},[g],\omega)$ where $[g]$ is a conformal
structure, $\mathbb{D}$ is a symmetric affine connection and
$\omega=\omega_{k}dx^{k}$ is a 1-form such that
$\mathbb{D}_{k}g_{ij}=\omega_{k}g_{ij},\quad R_{(ij)}=\Lambda g_{ij}$ (7)
for some function $\Lambda$ [2, 3]; here $R_{(ij)}$ is the symmetrised Ricci
tensor of $\mathbb{D}$. Note that the first part of equations (7), known as
the Weyl equations, uniquely determines $\mathbb{D}$ once $[g]$ and $\omega$
are specified. It was observed in [13] that for broad classes of
dispersionless integrable systems (in particular, for systems of type (2)),
the one-form $\omega$ is given in terms of $[g]$ by a universal explicit
formula
$\omega_{k}=2g_{kj}\mathcal{D}_{s}g^{js}+\mathcal{D}_{k}\ln{(\det g_{ij})}$
where $\mathcal{D}_{k}$ denotes the total derivative with respect to $x^{k}$.
Applied to $[g]$ given by (6), this formula implies
$\omega_{1}=0,\qquad\omega_{2}=\frac{2(h_{uuw}v_{xx}+2h_{uww}v_{xy}+h_{www}v_{yy})}{h_{ww}},$ (8)
$\omega_{3}=\frac{4\big{(}h_{uw}(h_{uuw}v_{xx}+2h_{uww}v_{xy}+h_{www}v_{yy})-h_{ww}(h_{uuu}v_{xx}+2h_{uuw}v_{xy}+h_{uww}v_{yy})\big{)}}{h_{ww}}.$
To summarise, integrability of system (2) is equivalent to the Einstein-Weyl
property of $[g],\ \omega$ given by (6), (8) on every solution of system (2).
Note that in 3D, Einstein-Weyl equations (7) are themselves integrable by the
twistor construction [17], see also [8], and thus constitute ‘integrable
conformal geometry’.
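As an illustration, for the dKP density $h=\frac{1}{2}w^{2}+\frac{1}{6}u^{3}$ one has $h_{ww}=1$, $h_{uw}=0$, $h_{uu}=u$, so that (6) gives
$[g]=4dxdt-dy^{2}+4u\,dt^{2},$
which, up to sign conventions and together with the one-form $\omega=-4u_{x}dt$, is the well-known Einstein-Weyl structure associated with the dKP equation.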
Dispersionless Lax pair of system (2) consist of two Hamilton-Jacobi type
equations for an auxiliary function $S$,
$S_{t}=F(S_{x},u,w),\quad S_{y}=G(S_{x},u,w),$
whose compatibility condition, $S_{ty}=S_{yt}$, is equivalent to system (2).
Dispersionless Lax pairs were introduced in [20] as quasiclassical limits of
Lax pairs of integrable soliton equations in 2+1D. It is known that the
existence of a dispersionless Lax representation is equivalent to
hydrodynamic/geometric integrability discussed above [10, 13]. We refer to
Section 2.3 for dispersionless Lax pairs of integrable systems (2).
### 1.3 Summary of the main results
Our first result is the set of integrability conditions for the Hamiltonian
density $h$.
###### Theorem 1.
The following conditions are equivalent:
(a) System (2) is integrable by the method of hydrodynamic reductions;
(b) Characteristic conformal structure [g] and covector $\omega$ given by (6),
(8) satisfy Einstein-Weyl equations (7) on every solution of system (2);
(c) System (2) possesses a dispersionless Lax pair;
(d) Hamiltonian density $h(u,w)$ satisfies the following set of integrability conditions:
$h_{www}^{2}-h_{ww}h_{wwww}=0,$
$h_{uww}h_{www}-h_{ww}h_{uwww}=0,$
$h_{uuw}h_{www}-h_{ww}h_{uuww}=0,$ (9)
$h_{uuu}h_{www}-h_{ww}h_{uuuw}=0,$
$-3h_{uuw}^{2}+4h_{uww}h_{uuu}-h_{ww}h_{uuuu}=0.$
Theorem 1 is proved in Section 2.1. The system of integrability conditions (9)
is involutive, and modulo natural equivalence transformations its solutions
can be reduced to one of the six canonical forms.
###### Theorem 2.
Solutions $h(u,w)$ of system (9) can be reduced to one of the six canonical forms:
$h(u,w)=\frac{1}{2}w^{2}+\frac{1}{6}u^{3},$
$h(u,w)=w^{2}+u^{2}w-\frac{1}{4}u^{4},$
$h(u,w)=uw^{2}+\beta u^{7},$
$h(u,w)=e^{w},$
$h(u,w)=ue^{w},$
$h(u,w)=\sigma(u;0,g_{3})e^{w};$
here $\beta$ and $g_{3}$ are constants, and $\sigma(u;g_{2},g_{3})$ denotes the Weierstrass sigma function.
Theorem 2 is proved in Section 2.2. Dispersionless Lax pairs for the
corresponding systems (2) are constructed in Section 2.3.
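The first three (polynomial) canonical densities can be checked against conditions (9) by exact polynomial arithmetic. The following sketch is our own illustration, not part of the paper: polynomials in $u,w$ are stored as dictionaries mapping the exponent pair $(i,j)$ of $u^{i}w^{j}$ to a rational coefficient, and each of the five conditions is verified to vanish identically.

```python
from fractions import Fraction

def diff(poly, var):
    # Partial derivative of {(i, j): coeff} representing sum coeff * u^i * w^j.
    out = {}
    for (i, j), c in poly.items():
        if var == 'u' and i > 0:
            out[(i - 1, j)] = out.get((i - 1, j), 0) + c * i
        elif var == 'w' and j > 0:
            out[(i, j - 1)] = out.get((i, j - 1), 0) + c * j
    return {k: v for k, v in out.items() if v != 0}

def D(poly, vars_):
    # Repeated partial derivative, e.g. D(h, 'uww') computes h_uww.
    for v in vars_:
        poly = diff(poly, v)
    return poly

def mul(p, q):
    # Product of two polynomials.
    out = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + a * b
    return {k: v for k, v in out.items() if v != 0}

def lin(*terms):
    # Linear combination of (coefficient, polynomial) pairs.
    out = {}
    for c, p in terms:
        for k, v in p.items():
            out[k] = out.get(k, 0) + c * v
    return {k: v for k, v in out.items() if v != 0}

def integrable(h):
    # The five integrability conditions (9); {} is the zero polynomial.
    conds = [
        lin((1, mul(D(h, 'www'), D(h, 'www'))), (-1, mul(D(h, 'ww'), D(h, 'wwww')))),
        lin((1, mul(D(h, 'uww'), D(h, 'www'))), (-1, mul(D(h, 'ww'), D(h, 'uwww')))),
        lin((1, mul(D(h, 'uuw'), D(h, 'www'))), (-1, mul(D(h, 'ww'), D(h, 'uuww')))),
        lin((1, mul(D(h, 'uuu'), D(h, 'www'))), (-1, mul(D(h, 'ww'), D(h, 'uuuw')))),
        lin((-3, mul(D(h, 'uuw'), D(h, 'uuw'))),
            (4, mul(D(h, 'uww'), D(h, 'uuu'))),
            (-1, mul(D(h, 'ww'), D(h, 'uuuu')))),
    ]
    return all(c == {} for c in conds)

# Polynomial canonical densities of Theorem 2 (beta = 5 chosen arbitrarily;
# the conditions are linear in beta, so any value works):
dkp    = {(0, 2): Fraction(1, 2), (3, 0): Fraction(1, 6)}  # w^2/2 + u^3/6
case_2 = {(0, 2): 1, (2, 1): 1, (4, 0): Fraction(-1, 4)}   # w^2 + u^2 w - u^4/4
case_3 = {(1, 2): 1, (7, 0): 5}                            # u w^2 + beta u^7
```

A generic non-integrable density such as $u^{2}w^{2}$ fails the third condition, so the check is not vacuous.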
It turns out that every integrable system (2) possesses a higher commuting
flow of the form
$u_{\tau}=a(u,w,v)u_{x}+b(u,w,v)u_{y}+c(u,w,v)w_{y}+d(u,w,v)v_{y},\quad w_{x}=u_{y},\quad v_{x}=(p(u,w))_{y},$ (10)
where $\tau$ is the higher ‘time’, and $v=\partial_{x}^{-1}\partial_{y}p(u,w)$
is an extra nonlocal variable (in contrast to the 1+1 dimensional case, higher
commuting flows in 2+1 dimensions require higher nonlocalities). Remarkably,
the structure of higher nonlocalities is uniquely determined by the original
system (2), in particular, the function $p(u,w)$ can be expressed in terms of
$h(u,w)$: $p=h_{w}$. Furthermore, commuting flow (10) is automatically
Hamiltonian.
###### Theorem 3.
Every integrable system (2) possesses a higher commuting flow (10) with the
nonlocality $v_{x}=(h_{w})_{y}$. Commuting flow (10) is Hamiltonian with the
Hamiltonian density $f(u,w,v)$ of the form
$f(u,w,v)=vh_{w}(u,w)+g(u,w),$
where $g(u,w)$ can be recovered from the compatible equations
$g_{ww}=4h_{uw}h_{ww}+\alpha wh_{ww},\quad g_{uuu}=8h_{uw}h_{uuu}+\alpha wh_{uuu},\quad g_{uuw}=6h_{uw}h_{uuw}+2h_{ww}h_{uuu}+\alpha wh_{uuw}.$
Here the constant $\alpha$ is defined by the relation
$\alpha=2h_{ww}\frac{\partial}{\partial w}\big{(}\frac{h_{uw}}{h_{ww}}\big{)}$
which follows from integrability conditions (9).
Theorem 3 is proved in Section 2.4. Dispersionless Lax pairs for commuting
flows are constructed in Section 2.5.
## 2 Proofs
In this section we prove Theorems 1-3 and construct Lax pairs for integrable
systems (2) and their commuting flows (10).
### 2.1 The method of hydrodynamic reductions: proof of Theorem 1
Equivalences (a) $\Leftrightarrow$ (b) and (a) $\Leftrightarrow$ (c) of
Theorem 1 follow from the results of [13] and [10] which hold for general two-
component systems of hydrodynamic type in 2+1 dimensions.
Equivalence (a) $\Leftrightarrow$ (d) can be demonstrated as follows. Let us
rewrite system (2) in the form
$u_{t}=h_{uu}u_{x}+2h_{uw}u_{y}+h_{ww}w_{y},\quad w_{x}=u_{y},$
and substitute the ansatz $u=u(R^{1},R^{2},\ldots,R^{n}),\
w=w(R^{1},R^{2},\ldots,R^{n})$. Using equations (4) and collecting
coefficients at $R^{i}_{x}$ we obtain $\partial_{i}w=\mu^{i}\partial_{i}u$,
along with the dispersion relation
$\lambda^{i}=h_{uu}+2h_{uw}\mu^{i}+h_{ww}(\mu^{i})^{2}$. Substituting the last
formula into the commutativity conditions (5) we obtain
$\partial_{j}\mu^{i}=\frac{h_{uuu}+h_{uuw}(\mu^{j}+2\mu^{i})+h_{uww}\big{(}2\mu^{i}\mu^{j}+(\mu^{i})^{2}\big{)}+h_{www}\mu^{j}(\mu^{i})^{2}}{h_{ww}(\mu^{j}-\mu^{i})}\partial_{j}u.$ (11)
Finally, the compatibility condition
$\partial_{i}\partial_{j}w=\partial_{j}\partial_{i}w$ results in
$\partial_{i}\partial_{j}u=\frac{2h_{uuu}+3h_{uuw}(\mu^{j}+\mu^{i})+h_{uww}((\mu^{i})^{2}+4\mu^{i}\mu^{j}+(\mu^{j})^{2})+h_{www}(\mu^{j}(\mu^{i})^{2}+\mu^{i}(\mu^{j})^{2})}{h_{ww}(\mu^{j}-\mu^{i})^{2}}\partial_{i}u\partial_{j}u.$ (12)
Equations (11), (12) constitute the corresponding GT-system. As one can see,
it contains partial derivatives of the Hamiltonian density $h$ in the
coefficients. Verifying involutivity of the GT-system amounts to checking the
compatibility conditions
$\partial_{k}(\partial_{j}\mu^{i})=\partial_{j}(\partial_{k}\mu^{i})$ and
$\partial_{k}(\partial_{i}\partial_{j}u)=\partial_{j}(\partial_{i}\partial_{k}u)$.
Direct computation (performed in Mathematica) results in the integrability
conditions (9) for $h(u,w)$. Note that, without any loss of generality, one can restrict to the case where the number of Riemann invariants $R^{i}$ equals three: indeed, all compatibility conditions involve only three distinct indices. This finishes the proof of Theorem 1.
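As a simple illustration, for the dKP density $h=\frac{1}{2}w^{2}+\frac{1}{6}u^{3}$ one has $h_{ww}=h_{uuu}=1$ while all other third-order derivatives of $h$ vanish, so that (11), (12) reduce to
$\partial_{j}\mu^{i}=\frac{\partial_{j}u}{\mu^{j}-\mu^{i}},\quad\partial_{i}\partial_{j}u=\frac{2\,\partial_{i}u\,\partial_{j}u}{(\mu^{j}-\mu^{i})^{2}},$
which is the classical Gibbons-Tsarev system.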
### 2.2 Canonical forms of integrable densities: proof of Theorem 2
We have five integrability conditions, namely
$h_{www}^{2}-h_{ww}h_{wwww}=0,$ (13)
$h_{uww}h_{www}-h_{ww}h_{uwww}=0,$ (14)
$h_{uuw}h_{www}-h_{ww}h_{uuww}=0,$ (15)
$h_{uuu}h_{www}-h_{ww}h_{uuuw}=0,$ (16)
$-3h_{uuw}^{2}+4h_{uww}h_{uuu}-h_{ww}h_{uuuu}=0.$ (17)
The classification of solutions will be performed modulo equivalence
transformations leaving system (2) form-invariant (and therefore preserving
the integrability conditions). These include
$\tilde{x}=x-2at,\quad\tilde{y}=y-2bt,\quad\tilde{h}=h+au^{2}+buw+mu+nw+p,$
(18)
as well as
$\tilde{x}=x-sy,\quad\tilde{w}=w+su;$ (19)
(other variables remain unchanged). We will always assume $h_{ww}\neq 0$ which
is equivalent to the requirement of irreducibility of the dispersion relation.
There are two main cases to consider.
Case 1: $h_{www}=0.$ Then
$h(u,w)=\alpha(u)w^{2}+\beta(u)w+\gamma(u),$
and the integrability conditions imply
$\alpha^{\prime\prime}=0,\quad\beta^{\prime\prime\prime}=0,\quad-3\beta^{\prime\prime\,2}+8\alpha^{\prime}\gamma^{\prime\prime\prime}-2\alpha\gamma^{\prime\prime\prime\prime}=0.$
There are two further subcases: $\alpha=1$ and $\alpha=u$.
The subcase $\alpha=1$ leads, modulo equivalence transformations (18), to
densities of the form
$h(u,w)=w^{2}+\beta_{1}u^{2}w-\frac{\beta_{1}^{2}}{4}u^{4}+\gamma_{1}u^{3},$
$\beta_{1},\gamma_{1}=const$. For $\beta_{1}=0$ we obtain the first case of
Theorem 2 (after a suitable rescaling). If $\beta_{1}\neq 0$ then we can
eliminate the term $u^{3}$ by a translation of $u$. This gives the second case
of Theorem 2 (after rescaling of $u$ and $w$).
The subcase $\alpha=u$ leads, modulo equivalence transformations (18), to
densities of the form
$h(u,w)=uw^{2}+\beta_{1}u^{2}w+\gamma_{1}u^{7}+\frac{\beta_{1}^{2}}{4}u^{3},$
$\beta_{1},\gamma_{1}=const$. Note that we can set $\beta_{1}=0$ using
transformation (19) with $s=\beta_{1}/2$. This gives the third case of Theorem
2.
Case 2: $h_{www}\neq 0.$ Then the first two integrability conditions (13) and (14) imply $h_{www}=ch_{ww}$ for some constant $c$ (which can be set equal to $1$). This gives
$h(u,w)=a(u)e^{w}+p(u)w+q(u).$
The next two integrability conditions (15) and (16) give $p^{\prime\prime}=0$
and $q^{\prime\prime\prime}=0$, respectively. Thus, modulo equivalence
transformations (18) we can assume $h(u,w)=a(u)e^{w}$. Finally, equation (17)
implies
$aa^{\prime\prime\prime\prime}-4a^{\prime}a^{\prime\prime\prime}+3a^{\prime\prime\,2}=0,$
which is the classical equation for the Weierstrass sigma function
(equianharmonic case $g_{2}=0$). Setting $\wp=-(\ln a)^{\prime\prime}$ we
obtain $\wp^{\prime\prime}=6\wp^{2}$, which integrates to
$\wp^{\prime 2}=4\wp^{3}-g_{3},$ (20)
$g_{3}=const$. There are three subcases.
Subcase $g_{3}=0,\ \wp=0$. Then $a(u)=e^{\alpha u+\beta}$ and modulo
equivalence transformations (19) we obtain Case 4 of Theorem 2.
Subcase $g_{3}=0,\ \wp=\frac{1}{u^{2}}$. Then $a(u)=ue^{\alpha u+\beta}$ and
modulo equivalence transformations (19) we obtain Case 5 of Theorem 2.
Subcase $g_{3}\neq 0$. Then $a(u)=\sigma(u;0,g_{3})e^{\alpha u+\beta}$ and
modulo equivalence transformations (19) we obtain the last case of our
classification. This finishes the proof of Theorem 2.
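The integration step from $\wp^{\prime\prime}=6\wp^{2}$ to (20) admits a quick numerical sanity check (our own illustration, not part of the paper): integrating $\wp^{\prime\prime}=6\wp^{2}$ as a first-order system with RK4, the quantity $(\wp^{\prime})^{2}-4\wp^{3}$, which equals $-g_{3}$ by (20), must remain constant along the flow.

```python
def rhs(p, q):
    # First-order form of wp'' = 6 wp^2: p' = q, q' = 6 p^2.
    return q, 6.0 * p * p

def rk4_step(p, q, dt):
    # One classical Runge-Kutta step for the system above.
    k1 = rhs(p, q)
    k2 = rhs(p + 0.5 * dt * k1[0], q + 0.5 * dt * k1[1])
    k3 = rhs(p + 0.5 * dt * k2[0], q + 0.5 * dt * k2[1])
    k4 = rhs(p + dt * k3[0], q + dt * k3[1])
    return (p + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            q + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def minus_g3(p, q):
    # The first integral (20): (wp')^2 - 4 wp^3 = -g3 is conserved.
    return q * q - 4.0 * p ** 3

p, q = 1.0, 3.0                 # initial data, so g3 = 4 p^3 - q^2 = -5
g3_start = minus_g3(p, q)
for _ in range(1000):
    p, q = rk4_step(p, q, 1e-4)
g3_end = minus_g3(p, q)
drift = abs(g3_end - g3_start)  # discretization error of the invariant
```

The invariant is conserved only up to the discretization error of the integrator; the small step size keeps the drift negligible over the integration window.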
Remark. The paper [12] gives a classification of integrable two-component
Hamiltonian systems of the form
$\begin{bmatrix}U_{t}\\ W_{t}\end{bmatrix}=\begin{bmatrix}0&\partial_{x}\\ \partial_{x}&\partial_{y}\end{bmatrix}\begin{bmatrix}\frac{\delta H}{\delta U}\\ \frac{\delta H}{\delta W}\end{bmatrix}$ (21)
where $H=\int F(U,W)\ dxdy$. Explicitly, we have
$U_{t}=(F_{W})_{x},\quad W_{t}=(F_{U})_{x}+(F_{W})_{y}.$
Let us introduce a contact change of variables $(U,W,F)\to(u,w,f)$ via partial
Legendre transform:
$w=F_{W},\quad u=U,\quad f=F-WF_{W},\quad f_{w}=-W,\quad f_{u}=F_{U}.$
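(The relations $f_{w}=-W$ and $f_{u}=F_{U}$ follow by differentiating: with $w=F_{W}$ one has $df=F_{U}\,dU+F_{W}\,dW-F_{W}\,dW-W\,dF_{W}=F_{U}\,du-W\,dw$.)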
In the new variables the system becomes
$w_{y}=-(f_{u})_{x}-(f_{w})_{t},\quad u_{t}=w_{x}.$
Modulo relabelling $u\leftrightarrow w,\ f\to-h,\ y\to t,\ t\to x,\ x\to y$
these equations coincide with (2). Thus, Hamiltonian formalisms (1) and (21)
are equivalent. Examples of dKP and Boyer-Finley equations suggest however
that Hamiltonian formalism (1) is more natural and convenient, indeed, in the
form (1) both equations arise directly in their ‘physical’ variables.
### 2.3 Dispersionless Lax pairs
In this section we provide dispersionless Lax representations for all six
canonical forms of Theorem 2. The results are summarised in Table 1 below.
Table 1: Dispersionless Lax pairs for integrable systems (2)
Hamiltonian density $h(u,w)$ and system (2) | Dispersionless Lax pair
---|---
$h(u,w)=\frac{1}{2}w^{2}+\frac{1}{6}u^{3}$: | $S_{t}=\frac{1}{3}S_{x}^{3}+uS_{x}+w$
$u_{t}=uu_{x}+w_{y},\quad w_{x}=u_{y}$ | $S_{y}=\frac{1}{2}S_{x}^{2}+u$
$h(u,w)=w^{2}+u^{2}w-\frac{1}{4}u^{4}$: | $S_{t}=(3u^{2}+2w)S_{x}+2uS_{x}^{4}+\frac{2}{7}S_{x}^{7}$
$u_{t}=(2w-3u^{2})u_{x}+4uu_{y}+2w_{y},\quad w_{x}=u_{y}$ | $S_{y}=uS_{x}+\frac{1}{4}S_{x}^{4}$
$h(u,w)=uw^{2}+\beta u^{7}$: | $S_{t}=4u^{2}\wp(S_{x})(w+\frac{1}{5}u^{3}\wp^{\prime}(S_{x}))$
$u_{t}=42\beta u^{5}u_{x}+4wu_{y}+2uw_{y},\quad w_{x}=u_{y}$ | $S_{y}=u^{2}\wp(S_{x})$, where $\wp^{\prime\,2}=4\wp^{3}-35\beta$
$h(u,w)=e^{w}$: | $S_{t}=-\frac{e^{w}}{S_{x}+u}$
$u_{t}=e^{w}w_{y},\quad w_{x}=u_{y}$ | $S_{y}=-\ln(S_{x}+u)$
$h(u,w)=ue^{w}$: | $S_{t}=\frac{3u^{2}e^{w}S_{x}}{u^{3}-S_{x}^{3}}$
$u_{t}=e^{w}(2u_{y}+uw_{y}),\quad w_{x}=u_{y}$ | $S_{y}=\ln(S_{x}-u)+\varepsilon\ln(S_{x}-\varepsilon u)+\varepsilon^{2}\ln(S_{x}-\varepsilon^{2}u)$, where $\varepsilon=\exp\big{(}\frac{2\pi i}{3}\big{)}$
$h(u,w)=\sigma(u)e^{w}$, $\sigma(u)=\sigma(u;0,g_{3})$: | $S_{t}=\sigma(u)e^{w}G_{u}(S_{x},u)$
$u_{t}=e^{w}(\sigma^{\prime\prime}u_{x}+2\sigma^{\prime}u_{y}+\sigma w_{y}),\quad w_{x}=u_{y}$ | $S_{y}=G(S_{x},u)$
In the last case the function $G(p,u)$ is defined by the equations
$G_{p}=\frac{G_{uu}}{G_{u}}-\zeta(u),\quad
G_{uuu}G_{u}-2G_{uu}^{2}+2\wp(u)G_{u}^{2}=0$ (22)
where $\zeta$ and $\wp$ are the Weierstrass functions (equianharmonic case
$g_{2}=0$). The general solution of these equations is given by the formula
$G(p,u)=\ln\sigma(\lambda(p-u))+\epsilon\ln\sigma(\lambda(p-\epsilon
u))+\epsilon^{2}\ln\sigma(\lambda(p-\epsilon^{2}u))$ (23)
where $\epsilon=e^{2\pi i/3}=-\frac{1}{2}+i\frac{\sqrt{3}}{2}$ and
$\lambda=\frac{i}{\sqrt{3}}$. Note that the degeneration $g_{3}\to 0,\
\sigma(u)\to u$ takes the Lax pair corresponding to the Hamiltonian density
$h=\sigma(u)e^{w}$ to the Lax pair for the density $h=ue^{w}$. We refer to the
Appendix for a proof that formula (23) indeed solves the equations (22): this
requires some non-standard identities for equianharmonic elliptic functions.
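To illustrate how such a pair encodes the equation, take the dKP pair from Table 1, $S_{y}=\frac{1}{2}S_{x}^{2}+u$ and $S_{t}=\frac{1}{3}S_{x}^{3}+uS_{x}+w$. Cross-differentiation gives
$\partial_{y}S_{t}-\partial_{t}S_{y}=(u_{y}-w_{x})S_{x}+(uu_{x}+w_{y}-u_{t}),$
so that $S_{ty}=S_{yt}$ identically in $S_{x}$ if and only if $u_{t}=uu_{x}+w_{y}$, $w_{x}=u_{y}$, i.e. system (2) with $h=\frac{1}{2}w^{2}+\frac{1}{6}u^{3}$.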
### 2.4 Commuting flows: proof of Theorem 3
Our aim is to show that every integrable system (2) possesses a commuting flow
of the form (10),
$u_{\tau}=a(u,w,v)u_{x}+b(u,w,v)u_{y}+c(u,w,v)w_{y}+d(u,w,v)v_{y},\quad w_{x}=u_{y},\quad v_{x}=(p(u,w))_{y}.$
Here $\tau$ is the higher ‘time’ variable and
$v=\partial_{x}^{-1}\partial_{y}p(u,w)$ is a new nonlocality (to be
determined). Due to the presence of nonlocal variables, direct computation of
compatibility condition $u_{t\tau}=u_{\tau t}$ is not straightforward.
Therefore, we adopt a different approach and require that the combined system
(2) $\cup$ (10),
$u_{t}=(h_{u})_{x}+(h_{w})_{y},$ (24)
$u_{\tau}=a(u,w,v)u_{x}+b(u,w,v)u_{y}+c(u,w,v)w_{y}+d(u,w,v)v_{y},$ (25)
$w_{x}=u_{y},$ (26)
$v_{x}=(p(u,w))_{y},$ (27)
possesses hydrodynamic reductions. Thus, we seek multiphase solutions of the
form $u=u(R^{1},\ldots,R^{n})$, $w=w(R^{1},\ldots,R^{n})$ and
$v=v(R^{1},\ldots,R^{n})$ where the Riemann invariants $R^{i}$ satisfy a
triple of commuting systems of hydrodynamic type:
$R^{i}_{y}=\mu^{i}(R)R^{i}_{x},\quad R^{i}_{t}=\lambda^{i}(R)R^{i}_{x},\quad
R^{i}_{\tau}=\eta^{i}(R)R^{i}_{x}.$
We recall that the commutativity conditions are equivalent to
$\frac{\partial_{j}\mu^{i}}{\mu^{j}-\mu^{i}}=\frac{\partial_{j}\lambda^{i}}{\lambda^{j}-\lambda^{i}}=\frac{\partial_{j}\eta^{i}}{\eta^{j}-\eta^{i}}.$ (28)
Following the same procedure as in Section 2.1, from equations (24) and (26)
we obtain the relations $\partial_{i}w=\mu^{i}\partial_{i}u$, the GT-system
(11), (12), and the integrability conditions (9) for the Hamiltonian density
$h(u,w)$. Similarly, equation (27) implies
$\partial_{i}v=(p_{u}\mu^{i}+p_{w}(\mu^{i})^{2})\partial_{i}u,$
and the compatibility condition
$\partial_{j}\partial_{i}v=\partial_{i}\partial_{j}v$ results in the relations
$h_{uuw}p_{w}-h_{ww}p_{uu}=0,\quad h_{uww}p_{w}-h_{ww}p_{uw}=0,\quad
h_{www}p_{w}-h_{ww}p_{ww}=0.$
Modulo unessential constants of integration (which can be removed by
equivalence transformations) these relations uniquely specify the nonlocality:
$p(u,w)=h_{w}(u,w).$
Finally, equation (25) gives an additional dispersion relation,
$\eta^{i}=a+b\mu^{i}+(c+p_{u}d)(\mu^{i})^{2}+p_{w}d(\mu^{i})^{3}.$
Substituting $\eta^{i}$ into the commutativity conditions (28) we obtain the
following set of relations:
$p_{w}^{2}d_{v}=0,$ (29)
$\big{(}(p_{ww}d+p_{w}d_{w})+p_{w}p_{u}d_{v}\big{)}h_{ww}=2p_{w}dh_{www},$ (30)
$(p_{uw}d+p_{w}d_{u})h_{ww}=2p_{w}dh_{uww},$ (31)
$h_{ww}(c_{v}+p_{u}d_{v})p_{w}=p_{w}dh_{www},$ (32)
$h_{ww}\big{(}(c_{w}+p_{uw}d+p_{u}d_{w})+(c_{v}+p_{u}d_{v})p_{u}\big{)}=5h_{uww}dp_{w}+ch_{www}+p_{u}dh_{www},$ (33)
$h_{ww}b_{v}p_{w}=2p_{w}dh_{uww},$ (34)
$h_{ww}(c_{u}+p_{uu}d+p_{u}d_{u})=4p_{w}dh_{uuw}+ch_{uww}+p_{u}dh_{uww},$ (35)
$h_{ww}a_{v}p_{w}=p_{w}dh_{uuw},$ (36)
$h_{ww}(b_{w}+b_{v}p_{u})=4p_{w}dh_{uuw}+2ch_{uww}+2p_{u}dh_{uww},$ (37)
$h_{ww}b_{u}=2p_{w}dh_{uuu}+2ch_{uuw}+2p_{u}dh_{uuw},$ (38)
$h_{ww}(a_{w}+a_{v}p_{u})=p_{w}dh_{uuu}+ch_{uuw}+p_{u}dh_{uuw},$ (39)
$h_{ww}a_{u}=ch_{uuu}+p_{u}dh_{uuu}.$ (40)
Using the fact that $p=h_{w}$, we solve these relations modulo the integrability conditions (9); recall that $h_{ww}\neq 0$. Equation (29) gives $d_{v}=0$. Equations (30), (31) imply
$dh_{www}-h_{ww}d_{w}=0,\qquad dh_{uww}-h_{ww}d_{u}=0,$
which can be solved for $d$:
$d=\delta h_{ww},$
for some constant $\delta$ (which will be set equal to $2$ in what follows).
Equation (32) gives
$c_{v}=d\frac{h_{www}}{h_{ww}}.$
Setting $c=d\frac{h_{www}}{h_{ww}}v+c_{1}$ for some $c_{1}=c_{1}(u,w)$ and
substituting into equation (35) we find
$c_{1}=3dh_{uw}+c_{2}(w)h_{ww}.$
Substituting $c=d\frac{h_{www}}{h_{ww}}v+3dh_{uw}+c_{2}(w)h_{ww}$ into
equation (33) we find
$(c_{2})_{w}=d\bigg{(}\frac{h_{uww}h_{ww}-h_{www}h_{uw}}{h_{ww}^{2}}\bigg{)}=d\frac{\partial}{\partial w}\bigg{(}\frac{h_{uw}}{h_{ww}}\bigg{)}.$
It turns out that modulo the integrability conditions $(c_{2})_{w}$ is a
constant. If we set
$\alpha=d\frac{\partial}{\partial w}\bigg{(}\frac{h_{uw}}{h_{ww}}\bigg{)},$
the final formula for $c$ can be written as
$c=d\frac{h_{www}}{h_{ww}}v+3dh_{uw}+\alpha wh_{ww}.$
The equations for the coefficients $a$ and $b$ cannot be integrated
explicitly; rearranging the remaining equations gives the following final
result:
$a_{u}=\frac{h_{uuu}}{h_{ww}}(c+dh_{uw}),\qquad b_{u}=2\Big{(}dh_{uuu}+\frac{h_{uuw}}{h_{ww}}(c+dh_{uw})\Big{)},$
$a_{w}=\frac{h_{uuw}}{h_{ww}}c+dh_{uuu},\qquad b_{w}=2\Big{(}\frac{h_{uww}}{h_{ww}}c+2dh_{uuw}\Big{)},$
$a_{v}=d\frac{h_{uuw}}{h_{ww}},\qquad b_{v}=2d\frac{h_{uww}}{h_{ww}},$
$c=d\frac{h_{www}}{h_{ww}}v+3dh_{uw}+\alpha wh_{ww},\qquad d=\delta h_{ww},\qquad\alpha=d\frac{\partial}{\partial w}\bigg{(}\frac{h_{uw}}{h_{ww}}\bigg{)}.$
The equations for $a$ and $b$ are consistent modulo integrability conditions
(9). This proves the existence of commuting flows (10).
Hamiltonian formulation of commuting flows. Our next goal is to show that the obtained commuting flow can be cast into the Hamiltonian form
$u_{\tau}=\partial_{x}\left(\frac{\delta F}{\delta u}\right),\quad F=\int
f(u,w,v)\ dxdy,$ (41)
with the nonlocal variables $w,v$ defined by $w_{x}=u_{y},\
v_{x}=(h_{w})_{y}$. More precisely, we claim that the commuting density $f$ is
given by the formula
$f(u,w,v)=vh_{w}+g(u,w)$
where the function $g(u,w)$ is yet to be determined. We have
$\frac{\delta F}{\delta
u}=2vh_{uw}+g_{u}+\partial_{x}^{-1}\partial_{y}(2vh_{ww}+g_{w}),$
so that equation (41) takes the form
$u_{\tau}=(2vh_{uw}+g_{u})_{x}+(2vh_{ww}+g_{w})_{y},\quad w_{x}=u_{y},\quad v_{x}=(h_{w})_{y}.$ (42)
Explicitly, (42) gives
$u_{\tau}=(2vh_{uuw}+g_{uu})u_{x}+(4vh_{uww}+2g_{uw}+2(h_{uw})^{2})u_{y}+(2h_{uw}h_{ww}+2vh_{www}+g_{ww})w_{y}+2h_{ww}v_{y}.$
Comparing this with (25) we thus require
$a=2vh_{uuw}+g_{uu},\quad b=4vh_{uww}+2g_{uw}+2h_{uw}^{2},\quad c=2vh_{www}+2h_{uw}h_{ww}+g_{ww},\quad d=2h_{ww}.$
Using the expressions for $a,b,c,d$ calculated above we obtain the equations
for $g(u,w)$:
$g_{ww}=4h_{uw}h_{ww}+\alpha wh_{ww},\quad g_{uuu}=8h_{uw}h_{uuu}+\alpha wh_{uuu},\quad g_{uuw}=6h_{uw}h_{uuw}+2h_{ww}h_{uuu}+\alpha wh_{uuw};$
note that these equations are consistent modulo the integrability conditions
(9). This finishes the proof of Theorem 3.
### 2.5 Commuting flows and dispersionless Lax pairs
In this section we calculate commuting flows of integrable systems (2) and
construct their dispersionless Lax pairs.
1. Hamiltonian density $h(u,w)=\frac{1}{6}u^{3}+\frac{1}{2}w^{2}$. The commuting density is
$f(u,w,v)=vw+u^{2}w.$
The commuting flow has the form (note that $\alpha=0$):
$u_{\tau}=2wu_{x}+4uu_{y}+2v_{y},\quad w_{x}=u_{y},\quad v_{x}=w_{y}.$
Dispersionless Lax pair:
$S_{y}=\frac{1}{2}S_{x}^{2}+u,\quad S_{\tau}=\frac{1}{2}S_{x}^{4}+2uS_{x}^{2}+2wS_{x}+2u^{2}+2v.$
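As a consistency check with Theorem 3, here $h_{w}=w$ so that $f=vh_{w}+g$ with $g=u^{2}w$; since $h_{uw}=0$, $h_{ww}=h_{uuu}=1$ and $\alpha=0$, the equations of Theorem 3 require $g_{ww}=0$, $g_{uuu}=0$ and $g_{uuw}=2h_{ww}h_{uuu}=2$, all of which hold for $g=u^{2}w$.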
2. Hamiltonian density $h(u,w)=w^{2}+u^{2}w-\frac{1}{4}u^{4}$. The commuting density is
$f(u,w,v)=2wv+u^{2}v+8uw^{2}-\frac{8}{5}u^{5}.$
The commuting flow has the form (note that $\alpha=0$):
$u_{\tau}=(4v-32u^{3})u_{x}+(32w+8u^{2})u_{y}+24uw_{y}+4v_{y},\quad w_{x}=u_{y},\quad v_{x}=(2w+u^{2})_{y}.$
Dispersionless Lax pair:
$S_{y}=uS_{x}+\frac{1}{4}S_{x}^{4},\quad S_{\tau}=(4v+32uw+16u^{3})S_{x}+(8w+24w^{2})S_{x}^{4}+8uS_{x}^{7}+\frac{4}{5}S_{x}^{10}.$
3. Hamiltonian density $h(u,w)=uw^{2}+\beta u^{7}$. The commuting density is
$f(u,w,v)=2uwv+4uw^{3}+20\beta u^{7}w.$
The commuting flow has the form (note that $\alpha=4$):
$u_{\tau}=840\beta u^{5}wu_{x}+(8v+32w^{2}+280\beta u^{6})u_{y}+32uww_{y}+4uv_{y},\quad w_{x}=u_{y},\quad v_{x}=(2uw)_{y}.$
Dispersionless Lax pair:
$S_{y}=u^{2}\wp(S_{x}),\quad S_{\tau}=\big{(}8u^{2}v+32u^{2}w^{2}+16u^{5}w\wp^{\prime}(S_{x})+8u^{8}\wp^{3}(S_{x})\big{)}\wp(S_{x}),$
where $\wp^{\prime\,2}=4\wp^{3}-35\beta$.
4\. Hamiltonian density $h(u,w)=e^{w}$. The commuting density is
$f(u,w,v)=ve^{w}.$
Commuting flow has the form (note that $\alpha=0$):
$\displaystyle u_{\tau}$ $\displaystyle=2ve^{w}w_{y}+2e^{w}v_{y},$
$\displaystyle w_{x}$ $\displaystyle=u_{y},$ $\displaystyle v_{x}$
$\displaystyle=(e^{w})_{y}.$
Dispersionless Lax pair:
$\displaystyle S_{y}$ $\displaystyle=-\ln(S_{x}+u),$ $\displaystyle S_{\tau}$
$\displaystyle=\frac{-2ve^{w}}{S_{x}+u}+\frac{e^{2w}}{(S_{x}+u)^{2}}.$
5\. Hamiltonian density $h(u,w)=ue^{w}$. The commuting density is
$f(u,w,v)=uve^{w}+ue^{2w}.$
Commuting flow has the form (note that $\alpha=0$):
$\displaystyle u_{\tau}$
$\displaystyle=(4ve^{w}+6e^{2w})u_{y}+(2uve^{w}+6ue^{2w})w_{y}+2ue^{w}v_{y},$
$\displaystyle w_{x}$ $\displaystyle=u_{y},$ $\displaystyle v_{x}$
$\displaystyle=(ue^{w})_{y}.$
Dispersionless Lax pair:
$\displaystyle S_{y}$
$\displaystyle=\ln(S_{x}-u)+\varepsilon\ln(S_{x}-\varepsilon
u)+\varepsilon^{2}\ln(S_{x}-\varepsilon^{2}u),$ $\displaystyle S_{\tau}$
$\displaystyle=\frac{3u^{2}e^{w}S_{x}(2u^{3}v-2vS_{x}^{3}-3e^{w}S_{x}^{3})}{(S_{x}^{3}-u^{3})^{2}}.$
6\. Hamiltonian density $h(u,w)=\sigma(u)e^{w}$. The commuting density is
$f(u,w,v)=v\sigma(u)e^{w}+\sigma(u)\sigma^{\prime}(u)e^{2w}.$
Commuting flow has the form (note that $\alpha=0$):
$\displaystyle u_{\tau}$
$\displaystyle=(2v\sigma^{\prime\prime}e^{w}+(\sigma\sigma^{\prime})^{\prime\prime}e^{2w})u_{x}+(4v\sigma^{\prime}e^{w}+(4\sigma\sigma^{\prime\prime}+6\sigma^{\prime
2})e^{2w})u_{y}+(2v\sigma e^{w}+6\sigma\sigma^{\prime}e^{2w})w_{y}+2\sigma
e^{w}v_{y},$ $\displaystyle w_{x}$ $\displaystyle=u_{y},$ $\displaystyle
v_{x}$ $\displaystyle=(\sigma e^{w})_{y}.$
Dispersionless Lax pair:
$\displaystyle S_{y}$ $\displaystyle=G(S_{x},u),$ $\displaystyle S_{\tau}$
$\displaystyle=2[ve^{w}\sigma(u)+e^{2w}\sigma(u)\sigma^{\prime}(u)]G_{u}(S_{x},u)-e^{2w}\sigma(u)^{2}G_{uu}(S_{x},u),$
here $G(S_{x},u)$ is defined by equations (22).
## 3 Dispersive deformations
Dispersive deformations of hydrodynamic type systems in $1+1$ dimensions were
thoroughly investigated in [4, 5, 6, 7] based on deformations of the
corresponding hydrodynamic symmetries. In $2+1$ dimensions, an alternative
approach based on deformations of hydrodynamic reductions was proposed in [14,
15].
It still remains a challenging problem to construct dispersive deformations of
all Hamiltonian systems (2) obtained in this paper. In general, all three
ingredients of the construction may need to be deformed, namely, the
Hamiltonian operator $\partial_{x}$, the Hamiltonian density $h(u,w)$ and the
nonlocality $w$. Here we give just two examples.
Example 1: dKP equation. The Hamiltonian density
$h=\frac{1}{2}w^{2}+\frac{1}{6}u^{3}$ results in the dKP equation:
$u_{t}=uu_{x}+w_{y},\quad w_{x}=u_{y}.$
It possesses an integrable dispersive deformation
$u_{t}=uu_{x}+w_{y}-\epsilon^{2}u_{xxx},\quad w_{x}=u_{y},$
which is the full KP equation (in this section $\epsilon$ denotes an arbitrary
deformation parameter). The KP equation corresponds to the deformed
Hamiltonian density
$h(u,w)=\frac{1}{2}w^{2}+\frac{1}{6}u^{3}+\frac{\epsilon^{2}}{2}u_{x}^{2},$
while the Hamiltonian operator $\partial_{x}$ and the nonlocality
$w=\partial_{x}^{-1}\partial_{y}u$ stay the same. Indeed, we have
$u_{t}={\partial_{x}}\frac{\delta H}{\delta
u}={\partial_{x}}\big{(}\partial_{x}^{-1}\partial_{y}w+\frac{1}{2}u^{2}-\epsilon^{2}u_{xx}\big{)}\\\
=uu_{x}+w_{y}-\epsilon^{2}u_{xxx}.$
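The deformed equation can be checked directly on an exact solution. Below is a sketch (using sympy) verifying that a line-soliton ansatz, with amplitude and speed fixed by the equation, solves $u_{t}=uu_{x}+w_{y}-\epsilon^{2}u_{xxx}$, $w_{x}=u_{y}$. The ansatz and its parameters are standard KP material, not taken from this paper.

```python
import sympy as sp

x, y, t, eps, k, l = sp.symbols('x y t epsilon k l', real=True)

# Line-soliton ansatz: u = A sech^2(k*(x + l*y + c*t)),
# with amplitude A and speed c fixed by the equation.
c = l**2 - 4*eps**2*k**2
A = -12*eps**2*k**2
xi = x + l*y + c*t
u = A*sp.sech(k*xi)**2

# Nonlocality w = dx^{-1} dy u: since u depends on (x, y) only through xi,
# u_y = l*u_x, and hence w = l*u satisfies w_x = u_y exactly.
w = l*u

residual = sp.diff(u, t) - u*sp.diff(u, x) - sp.diff(w, y) + eps**2*sp.diff(u, x, 3)

# Spot-check the residual numerically at an arbitrary point:
vals = {x: 0.3, y: -0.7, t: 0.2, eps: 0.5, k: 1.3, l: 0.8}
print(abs(float(residual.subs(vals))))
```

Integrating the travelling-wave reduction twice shows the residual vanishes identically once $c=l^{2}-4\epsilon^{2}k^{2}$ and $A=-12\epsilon^{2}k^{2}$, which the numerical spot-check confirms.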
Example 2: Boyer-Finley equation. The Hamiltonian density $h=e^{w}$ results in
the dispersionless Toda (Boyer-Finley) equation:
$\displaystyle u_{t}$ $\displaystyle=e^{w}w_{y},\;\;\;\;w_{x}=u_{y}.$
It possesses an integrable dispersive deformation
$u_{t}=\left(\frac{1-T^{-1}}{\epsilon}\right)e^{w},\quad
w_{x}=\left(\frac{T-1}{\epsilon}\right)u,$
which is the full Toda equation. Here $T$ and $T^{-1}$ denote the
forward/backward $\epsilon$-shifts in the $y$-direction, so that
$\frac{T-1}{\epsilon}$ and $\frac{1-T^{-1}}{\epsilon}$ are the
forward/backward discrete $y$-derivatives. The Toda equation corresponds to
the deformed nonlocality $w=\partial_{x}^{-1}\frac{T-1}{\epsilon}u$, while the
Hamiltonian operator $\partial_{x}$ and the Hamiltonian density $h=e^{w}$ stay
the same. Indeed, we have
$\displaystyle\frac{\delta H}{\delta
u}=\partial_{x}^{-1}\bigg{(}\frac{1-T^{-1}}{\epsilon}\bigg{)}e^{w},$
so that
$\displaystyle u_{t}=\partial_{x}\frac{\delta H}{\delta
u}=\bigg{(}\frac{1-T^{-1}}{\epsilon}\bigg{)}e^{w},$
as required.
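The statement that the discrete operators reduce to $\partial_{y}$ as $\epsilon\to 0$ can be illustrated numerically. The sketch below (the test fields are arbitrary choices, not from the paper) shows both the forward and backward discrete derivatives of $e^{w}$ converging at first order to $e^{w}w_{y}$.

```python
import math

def fwd(g, y, eps):
    """Forward discrete derivative (T - 1)/eps in the y-direction."""
    return (g(y + eps) - g(y)) / eps

def bwd(g, y, eps):
    """Backward discrete derivative (1 - T^{-1})/eps in the y-direction."""
    return (g(y) - g(y - eps)) / eps

# Illustrative smooth fields (arbitrary choices):
w = math.sin
ew = lambda y: math.exp(w(y))

y0 = 0.4
exact = math.exp(w(y0)) * math.cos(y0)   # e^w w_y at y0
for eps in (0.1, 0.01, 0.001):
    print(eps, abs(fwd(ew, y0, eps) - exact), abs(bwd(ew, y0, eps) - exact))
```

The errors shrink linearly in $\epsilon$, as expected for first-order one-sided differences.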
## 4 Appendix: dispersionless Lax pair for $h=\sigma(u)e^{w}$
Here we prove that expression (23),
$G(p,u)=\ln\sigma(\lambda(p-u))+\epsilon\ln\sigma(\lambda(p-\epsilon
u))+\epsilon^{2}\ln\sigma(\lambda(p-\epsilon^{2}u)),$
where $\epsilon=e^{2\pi i/3}=-\frac{1}{2}+i\frac{\sqrt{3}}{2}$ and
$\lambda=\frac{i}{\sqrt{3}}$, solves the equations (22),
$G_{p}=\frac{G_{uu}}{G_{u}}-\zeta(u),\quad
G_{uuu}G_{u}-2G_{uu}^{2}+2\wp(u)G_{u}^{2}=0.$
In what follows we will use the addition formula
$\zeta(u+v)=\zeta(u)+\zeta(v)+\frac{1}{2}\frac{\wp^{\prime}(u)-\wp^{\prime}(v)}{\wp(u)-\wp(v)}.$
(43)
We will also need the following identity:
Proposition 1. In the equianharmonic case, the Weierstrass functions satisfy
the identity
$\lambda\frac{\wp^{\prime}(\lambda u)}{\wp(\lambda u)}+3\lambda\zeta(\lambda
u)-\zeta(u)=0,\qquad\lambda=\frac{i}{\sqrt{3}}.$ (44)
Proof:
Using the standard expansions
$\zeta(z)=\frac{1}{z}-\frac{g_{3}}{140}z^{5}-\dots,\qquad\wp(z)=\frac{1}{z^{2}}+\frac{g_{3}}{28}z^{4}+\dots,$
one can show that formula (44) holds to high order in $z$ for the specific
parameter value $\lambda=\frac{i}{\sqrt{3}}$. Therefore, it is sufficient to
establish the differentiated (by $u$) identity (44), namely,
$-\lambda^{2}\wp(\lambda u)+\frac{\lambda^{2}g_{3}}{\wp^{2}(\lambda
u)}+\wp(u)=0,$ (45)
where we have used $\wp^{\prime\prime}=6\wp^{2}$ and $\wp^{\prime
2}=4\wp^{3}-g_{3}$. Explicitly, (45) reads
$\wp(iu/\sqrt{3})-\frac{g_{3}}{\wp^{2}(iu/\sqrt{3})}+3\wp(u)=0.$
Setting $u=i\sqrt{3}v$ we obtain
$\wp(v)-\frac{g_{3}}{\wp^{2}(v)}+3\wp(i\sqrt{3}v)=0.$ (46)
Thus, it is sufficient to establish (46). Formulae of this kind appear in the
context of complex multiplication for elliptic curves with extra symmetry. Let
us begin with the standard invariance properties of the equianharmonic
$\zeta$-function:
$\zeta(\epsilon
z)=\epsilon^{2}\zeta(z),\qquad\zeta(\epsilon^{2}z)=\epsilon\zeta(z);$
here $\epsilon=e^{2\pi i/3}=-\frac{1}{2}+i\frac{\sqrt{3}}{2}$ is the cubic
root of unity. Setting $z=2v$ this gives
$\zeta(-v+i\sqrt{3}v)=\epsilon^{2}\zeta(2v),\qquad\zeta(-v-i\sqrt{3}v)=\epsilon\zeta(2v).$
Using the addition formula (43) one can rewrite these relations in the form
$-\zeta(v)+\zeta(i\sqrt{3}v)+\frac{1}{2}\frac{-\wp^{\prime}(v)-\wp^{\prime}(i\sqrt{3}v)}{\wp(v)-\wp(i\sqrt{3}v)}=\epsilon^{2}\zeta(2v)$
and
$-\zeta(v)-\zeta(i\sqrt{3}v)+\frac{1}{2}\frac{-\wp^{\prime}(v)+\wp^{\prime}(i\sqrt{3}v)}{\wp(v)-\wp(i\sqrt{3}v)}=\epsilon\zeta(2v),$
respectively. Adding these relations together (and keeping in mind that
$1+\epsilon+\epsilon^{2}=0$) we obtain
$-2\zeta(v)-\frac{\wp^{\prime}(v)}{\wp(v)-\wp(i\sqrt{3}v)}+\zeta(2v)=0.$
Using the duplication formula
$\zeta(2v)=2\zeta(v)+\frac{3\wp^{2}(v)}{\wp^{\prime}(v)}$ this simplifies to
$-\frac{\wp^{\prime}(v)}{\wp(v)-\wp(i\sqrt{3}v)}+\frac{3\wp^{2}(v)}{\wp^{\prime}(v)}=0,$
which is equivalent to (46) via $\wp^{\prime 2}=4\wp^{3}-g_{3}$.∎
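Identity (46) can also be verified numerically. The sketch below implements the equianharmonic $\wp$ via its Laurent series, using the standard recurrence for the coefficients (valid for $|z|$ smaller than the nearest lattice point; the truncation order and the test point are arbitrary choices), and evaluates the left-hand side of (46).

```python
def wp(z, g3=1.0, N=40):
    """Weierstrass P via its Laurent series, equianharmonic case g2 = 0:
    wp(z) = 1/z^2 + sum_{k>=2} c_k z^{2k-2}, with the standard recurrence
    c_k = 3/((2k+1)(k-3)) * sum_{m=2}^{k-2} c_m c_{k-m} for k >= 4."""
    c = [0.0]*(N + 1)
    c[3] = g3/28.0   # c_2 = g2/20 vanishes here
    for k in range(4, N + 1):
        c[k] = 3.0/((2*k + 1)*(k - 3)) * sum(c[m]*c[k - m] for m in range(2, k - 1))
    return 1.0/z**2 + sum(c[k]*z**(2*k - 2) for k in range(2, N + 1))

v = 0.3 + 0.1j
lhs = wp(v) - 1.0/wp(v)**2 + 3*wp(1j*3**0.5*v)   # left-hand side of (46), g3 = 1
print(abs(lhs))   # machine-precision zero
```

The leading singular parts cancel via $1/(i\sqrt{3}v)^{2}=-\frac{1}{3}v^{-2}$, and the series corrections cancel order by order, consistent with the proof above.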
Proposition 2. Expression (23) solves the equations (22).
Proof:
Computation of partial derivatives of $G(p,u)$ gives
$G_{p}=\lambda\zeta(\lambda(p-u))+\lambda\epsilon\zeta(\lambda(p-\epsilon
u))+\lambda\epsilon^{2}\zeta(\lambda(p-\epsilon^{2}u)),$
$G_{u}=-\lambda\zeta(\lambda(p-u))-\lambda\epsilon^{2}\zeta(\lambda(p-\epsilon
u))-\lambda\epsilon\zeta(\lambda(p-\epsilon^{2}u)).$
Using the addition formula (43), the identity $1+\epsilon+\epsilon^{2}=0$, and
the invariance
$\begin{array}[]{c}\zeta(\epsilon
z)=\epsilon^{2}\zeta(z),\quad\zeta(\epsilon^{2}z)=\epsilon\zeta(z),\\\
\wp(\epsilon z)=\epsilon\wp(z),\quad\wp(\epsilon^{2}z)=\epsilon^{2}\wp(z),\\\
\wp^{\prime}(\epsilon
z)=\wp^{\prime}(z),\quad\wp^{\prime}(\epsilon^{2}z)=\wp^{\prime}(z),\end{array}$
we obtain:
$\begin{array}[]{c}\frac{1}{\lambda}G_{p}=\zeta(\lambda(p-u))+\epsilon\zeta(\lambda(p-\epsilon
u))+\epsilon^{2}\zeta(\lambda(p-\epsilon^{2}u))\\\ \\\ =\zeta(\lambda
p)-\zeta(\lambda u)+\frac{1}{2}\frac{\wp^{\prime}(\lambda
p)+\wp^{\prime}(\lambda u)}{\wp(\lambda p)-\wp(\lambda u)}\\\ \\\
+\epsilon\left(\zeta(\lambda p)-\epsilon^{2}\zeta(\lambda
u)+\frac{1}{2}\frac{\wp^{\prime}(\lambda p)+\wp^{\prime}(\lambda
u)}{\wp(\lambda p)-\epsilon\wp(\lambda u)}\right)\\\ \\\
+\epsilon^{2}\left(\zeta(\lambda p)-\epsilon\zeta(\lambda
u)+\frac{1}{2}\frac{\wp^{\prime}(\lambda p)+\wp^{\prime}(\lambda
u)}{\wp(\lambda p)-\epsilon^{2}\wp(\lambda u)}\right)\\\ \\\ =-3\zeta(\lambda
u)+\frac{\wp^{\prime}(\lambda p)+\wp^{\prime}(\lambda
u)}{2}\left(\frac{1}{\wp(\lambda p)-\wp(\lambda
u)}+\frac{\epsilon}{\wp(\lambda p)-\epsilon\wp(\lambda
u)}+\frac{\epsilon^{2}}{\wp(\lambda p)-\epsilon^{2}\wp(\lambda u)}\right)\\\
\\\ =-3\zeta(\lambda u)+\frac{\wp^{\prime}(\lambda p)+\wp^{\prime}(\lambda
u)}{2}\frac{3\wp^{2}(\lambda u)}{\wp^{3}(\lambda p)-\wp^{3}(\lambda u)}\\\ \\\
=-3\zeta(\lambda u)+\frac{\wp^{\prime}(\lambda p)+\wp^{\prime}(\lambda
u)}{2}\frac{12\wp^{2}(\lambda u)}{\wp^{\prime 2}(\lambda p)-\wp^{\prime
2}(\lambda u)}\\\ \\\ =-3\zeta(\lambda u)+\frac{6\wp^{2}(\lambda
u)}{\wp^{\prime}(\lambda p)-\wp^{\prime}(\lambda u)}.\end{array}$
A similar calculation gives:
$\begin{array}[]{c}-\frac{1}{\lambda}G_{u}=\zeta(\lambda(p-u))+\epsilon^{2}\zeta(\lambda(p-\epsilon
u))+\epsilon\zeta(\lambda(p-\epsilon^{2}u))\\\ \\\ =\zeta(\lambda
p)-\zeta(\lambda u)+\frac{1}{2}\frac{\wp^{\prime}(\lambda
p)+\wp^{\prime}(\lambda u)}{\wp(\lambda p)-\wp(\lambda u)}\\\ \\\
+\epsilon^{2}\left(\zeta(\lambda p)-\epsilon^{2}\zeta(\lambda
u)+\frac{1}{2}\frac{\wp^{\prime}(\lambda p)+\wp^{\prime}(\lambda
u)}{\wp(\lambda p)-\epsilon\wp(\lambda u)}\right)\\\ \\\
+\epsilon\left(\zeta(\lambda p)-\epsilon\zeta(\lambda
u)+\frac{1}{2}\frac{\wp^{\prime}(\lambda p)+\wp^{\prime}(\lambda
u)}{\wp(\lambda p)-\epsilon^{2}\wp(\lambda u)}\right)\\\ \\\
=\frac{\wp^{\prime}(\lambda p)+\wp^{\prime}(\lambda
u)}{2}\left(\frac{1}{\wp(\lambda p)-\wp(\lambda
u)}+\frac{\epsilon^{2}}{\wp(\lambda p)-\epsilon\wp(\lambda
u)}+\frac{\epsilon}{\wp(\lambda p)-\epsilon^{2}\wp(\lambda u)}\right)\\\ \\\
=\frac{\wp^{\prime}(\lambda p)+\wp^{\prime}(\lambda u)}{2}\frac{3\wp(\lambda
p)\wp(\lambda u)}{\wp^{3}(\lambda p)-\wp^{3}(\lambda u)}\\\ \\\
=\frac{\wp^{\prime}(\lambda p)+\wp^{\prime}(\lambda u)}{2}\frac{12\wp(\lambda
p)\wp(\lambda u)}{\wp^{\prime 2}(\lambda p)-\wp^{\prime 2}(\lambda u)}\\\ \\\
=\frac{6\wp(\lambda p)\wp(\lambda u)}{\wp^{\prime}(\lambda
p)-\wp^{\prime}(\lambda u)}.\end{array}$
To summarise, we have:
$G_{p}=-3\lambda\zeta(\lambda u)+\frac{6\lambda\wp^{2}(\lambda
u)}{\wp^{\prime}(\lambda p)-\wp^{\prime}(\lambda u)},\qquad
G_{u}=-\frac{6\lambda\wp(\lambda p)\wp(\lambda u)}{\wp^{\prime}(\lambda
p)-\wp^{\prime}(\lambda u)}.$
This gives
$\frac{G_{uu}}{G_{u}}=(\ln G_{u})_{u}=\lambda\frac{\wp^{\prime}(\lambda
u)}{\wp(\lambda u)}+\frac{6\lambda\wp^{2}(\lambda u)}{\wp^{\prime}(\lambda
p)-\wp^{\prime}(\lambda u)},$ (47)
and the first equation (22), $G_{p}=\frac{G_{uu}}{G_{u}}-\zeta(u)$, is
satisfied identically due to (44). Finally, the second equation (22),
$G_{uuu}G_{u}-2G_{uu}^{2}+2\wp(u)G_{u}^{2}=0$, which can be written in the
equivalent form
$-\left(\frac{G_{uu}}{G_{u}}\right)_{u}+\left(\frac{G_{uu}}{G_{u}}\right)^{2}=2\wp(u),$
is satisfied identically due to (47) and (45). ∎
## Acknowledgements
We thank Yurii Brezhnev and Maxim Pavlov for clarifying discussions. The
research of EVF was supported by the EPSRC grant EP/N031369/1. The research of
VSN was supported by the EPSRC grant EP/V050451/1.
## References
* [1] N.I. Akhiezer, Elements of the theory of elliptic functions, Translated from the second Russian edition by H. H. McFaden. Translations of Mathematical Monographs, 79. American Mathematical Society, Providence, RI ( 1990) 237 pp.
* [2] E. Cartan, Sur une classe d’espaces de Weyl, Ann. Sci. École Norm. Sup. (3) 60 (1943) 1-16.
* [3] E. Cartan, The geometry of differential equations of third order, Revista Mat. Hisp.-Amer. 4 (1941) 3-33.
* [4] B.A. Dubrovin, Hamiltonian PDEs: deformations, integrability, solutions, J. Phys. A 43, no. 43 (2010) 434002, 20 pp.
* [5] B.A. Dubrovin, On Hamiltonian perturbations of hyperbolic systems of conservation laws. II. Universality of critical behaviour, Comm. Math. Phys. 267, no. 1 (2006) 117-139.
* [6] B.A. Dubrovin, Si-Qi Liu and Youjin Zhang, On Hamiltonian perturbations of hyperbolic systems of conservation laws. I. Quasi-triviality of bi-Hamiltonian perturbations, Comm. Pure Appl. Math. 59, no. 4 (2006) 559-615.
* [7] B.A. Dubrovin and Youjin Zhang, Bi-Hamiltonian hierarchies in 2D topological field theory at one-loop approximation, Comm. Math. Phys. 198, no. 2 (1998) 311-361.
* [8] M. Dunajski, E.V. Ferapontov and B. Kruglikov, On the Einstein-Weyl and conformal self-duality equations, J. Math. Phys. 56 (2015) 083501.
* [9] E.V. Ferapontov and K.R. Khusnutdinova, On integrability of (2+1)-dimensional quasilinear systems, Comm. Math. Phys. 248 (2004) 187-206.
* [10] E.V. Ferapontov and K.R. Khusnutdinova, The characterization of 2-component (2+1)-dimensional integrable systems of hydrodynamic type, J. Phys. A: Math. Gen. 37, no. 8 (2004) 2949-2963.
* [11] E.V. Ferapontov, A. Moro and V. V. Sokolov, Hamiltonian systems of hydrodynamic type in 2+1 dimensions, Comm. Math. Phys. 285, no. 1 (2009) 31-65.
* [12] E.V. Ferapontov, A.V. Odesskii and N.M. Stoilov, Classification of integrable two-component Hamiltonian systems of hydrodynamic type in 2+1 dimensions, Journal of Mathematical Physics, 52 (2011) 073505; Manuscript ID: 10-1152R.
* [13] E.V. Ferapontov and B. Kruglikov, Dispersionless integrable systems in 3D and Einstein-Weyl geometry, J. Diff. Geom. 97 (2014) 215-254.
* [14] E.V. Ferapontov and A. Moro, Dispersive deformations of hydrodynamic reductions of 2D dispersionless integrable systems, J. Phys. A: Math. Theor. 42, no. 3 (2009) 035211, 15 pp.
* [15] E.V. Ferapontov, A. Moro and V.S. Novikov, Integrable equations in $2+1$ dimensions: deformations of dispersionless limits, J. Phys. A: Math. Theor. 42, no. 34 (2009) 345205, 18 pp.
* [16] J. Gibbons and S.P. Tsarev, Reductions of the Benney equations, Phys. Lett. A 211 (1996) 19-24.
* [17] N.J. Hitchin, Complex manifolds and Einstein’s equations, Twistor geometry and nonlinear systems (Primorsko, 1980), 73-99, Lecture Notes in Math. 970, Springer, Berlin-New York (1982).
* [18] S.P. Tsarev, Poisson brackets and one-dimensional Hamiltonian systems of hydrodynamic type, Dokl. Akad. Nauk SSSR 282, no. 3 (1985) 534-537.
* [19] S.P. Tsarev, Geometry of Hamiltonian systems of hydrodynamic type. The generalized hodograph method, Izvestija AN USSR Math. 54, no. 5 (1990) 1048-1068.
* [20] V.E. Zakharov, Dispersionless limit of integrable systems in $2+1$ dimensions, in Singular Limits of Dispersive Waves, Ed. N.M. Ercolani et al., Plenum Press, NY, (1994) 165-174.
# Hölder regularity and convergence for a non-local model
of nematic liquid crystals in the large-domain limit
Giacomo Canevari and Jamie M. Taylor Dipartimento di Informatica —
Università di Verona, Strada le Grazie 15, 37134 Verona, Italy.
_E-mail address_<EMAIL_ADDRESS>— Basque Center for Applied
Mathematics, Alameda de Mazarredo 14, 48009 Bilbao, Spain.
_E-mail address_<EMAIL_ADDRESS>
###### Abstract
We consider a non-local free energy functional, modelling a competition
between entropy and pairwise interactions reminiscent of the second order
virial expansion, with applications to nematic liquid crystals as a particular
case. We build on previous work on understanding the behaviour of such models
within the large-domain limit, where minimisers converge to minimisers of a
quadratic elastic energy with manifold-valued constraint, analogous to
harmonic maps. We extend this work to establish Hölder bounds for
(almost-)minimisers on bounded domains, and demonstrate stronger convergence
of (almost-)minimisers away from the singular set of the limit solution. The
proof techniques bear analogy with recent work on singularly perturbed energy
functionals, in particular in the context of the Ginzburg-Landau and Landau-de
Gennes models.
## 1 Introduction
### 1.1 Variational models of liquid crystals
Liquid crystalline systems are those which sit outside of the classical solid-
liquid-gas trichotomy. While there are a plethora of different systems
classified as liquid crystals, they can be broadly described as fluid systems
where molecules admit a long range order of certain degrees of freedom. This
is in contrast to classical fluids, which lack long range correlations between
molecules. The fluidity of the systems makes them “soft”, that is, easily
influenced by external agents such as fields or stresses,
whilst the long range ordering permits anisotropic electrostatic and optical
behaviour. These two properties combined make them ideal for a variety of
technological applications, as their anisotropy is exploitable whilst their
softness makes them easy to manipulate.
The simplest liquid crystalline system is that of a nematic liquid crystal.
These are systems of elongated molecules, often idealised as having axial
symmetry, which form phases with no long range positional order, but where the
long axes of molecules are generally well aligned over larger length scales.
Even in the well studied case of nematics there are a variety of models that
one may use to study their theoretical behaviour, where the choice of model is
usually dependent on the length scales considered and the type of defects one
wishes to observe.
One of the earliest and most studied free energy functionals one may consider
in continuum modelling is the Oseen-Frank model [16]. In the simplest
formulation, we consider a prescribed domain $\Omega\subseteq\mathbb{R}^{3}$
and a unit vector field as our continuum variable
$n\colon\Omega\to\mathbb{S}^{2}$, interpreted as the local alignment axis of
molecules. As molecules are assumed to be (statistically) head-to-tail
symmetric, we interpret the configurations $n$, $-n$ as equivalent. In the
simplified one-constant approximation, we look for minimisers of the free
energy
(1.1) $\int_{\Omega}\frac{K}{2}\left|\nabla n(x)\right|^{2}\,\mathrm{d}x,$
subject to certain boundary conditions, although more general formulations are
possible. The problem has attracted interest not only from the liquid crystal
community, but also from the mathematical community as the prototypical
harmonic map problem. If a prescribed Dirichlet boundary condition admits non-
zero degree, then by necessity any $n$ satisfying it must admit
discontinuities, meaning that defects/singularities are an unavoidable part of
the model’s study.
More generally, one may consider an Oseen-Frank energy where different modes
of deformation are penalised to different extents. Neglecting the saddle-splay
null-Lagrangian term, this gives a free energy of the form
(1.2) $\int_{\Omega}\frac{K_{1}}{2}(\nabla\cdot
n(x))^{2}+\frac{K_{2}}{2}(n(x)\cdot\nabla\times
n(x))^{2}+\frac{K_{3}}{2}|n(x)\times\nabla\times n(x)|^{2}\,\mathrm{d}x.$
The constants $K_{1}$, $K_{2}$, $K_{3}$ are known as the Frank constants, and
represent the penalisations of splay, twist, and bend deformations
respectively. In the case where $K_{1}=K_{2}=K_{3}=K$, we reclaim the one-
constant approximation (1.1).
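The three deformation modes in (1.2) can be checked symbolically on simple director fields. The sketch below (using sympy; the cholesteric-like test field is an illustrative choice, not taken from the paper) confirms that $n=(\cos qz,\,\sin qz,\,0)$ carries pure twist: zero splay and bend, with $n\cdot\nabla\times n=-q$.

```python
import sympy as sp

x, y, z, q = sp.symbols('x y z q', real=True)

# A pure-twist (cholesteric-like) director field:
n = sp.Matrix([sp.cos(q*z), sp.sin(q*z), 0])

# Splay: div n
div_n = sum(sp.diff(n[i], v) for i, v in enumerate((x, y, z)))

# curl n, componentwise
curl_n = sp.Matrix([
    sp.diff(n[2], y) - sp.diff(n[1], z),
    sp.diff(n[0], z) - sp.diff(n[2], x),
    sp.diff(n[1], x) - sp.diff(n[0], y),
])

twist = sp.simplify(n.dot(curl_n))                       # n . curl n
bend = sp.simplify(sum(comp**2 for comp in n.cross(curl_n)))  # |n x curl n|^2

print(sp.simplify(div_n), twist, bend)   # 0, -q, 0: pure twist
```

Here $\nabla\times n=-q\,n$, so the bend term $n\times\nabla\times n$ vanishes identically while the twist density is the constant $-q$.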
It is natural to ask if such a free energy can be justified. While the
original formulation was more phenomenological in nature and based solely on
symmetry arguments and a small-deformation assumption, attempts have been made
to identify the Oseen-Frank model as a large-domain limit of a more
fundamental model, the Landau-de Gennes model [12, 36]. In the Landau-de
Gennes model, the continuum variable is the Q-tensor, corresponding to the
normalised second moment of a one-particle distribution function. Explicitly,
if the distribution of the long axes of molecules in a small neighbourhood of
a point $x\in\Omega$ are described by a probability distribution
$f(x,\,\cdot)\colon\mathbb{S}^{2}\to[0,\,+\infty)$, we define the Q-tensor at
the point $x$ as
(1.3) $Q(x)=\int_{\mathbb{S}^{2}}f(x,\,p)\left(p\otimes
p-\frac{1}{3}I\right)\,\mathrm{d}p.$
As molecules are assumed to be head-to-tail symmetric, a molecule is as likely
to have orientation $p\in\mathbb{S}^{2}$ as $-p$, so that
$f(x,\,p)=f(x,\,-p)$. For this reason the first moment of $f(x,\,\cdot)$ will
always vanish, making the Q-tensor the first non-trivial moment, containing
information on molecular alignment. Q-tensors are, following their definition,
traceless, symmetric, $3\times 3$ matrices. We denote this set as
(1.4) $\text{Sym}_{0}(3)=\left\\{Q\in\mathbb{R}^{3\times 3}\colon
Q=Q^{T},\,\text{Trace}(Q)=0\right\\}.$
The Q-tensor contains more information than the director field: it does not
force the interpretation of axially symmetric ordering about a single axis
(less symmetric configurations are permitted), and the degree of orientational
ordering is allowed to vary.
Depending on its eigenvalues, a Q-tensor falls into one of three varieties.
* •
If all eigenvalues are equal, then $Q=0$, and we say that $Q$ is isotropic,
representative of a disordered system. In particular, if $f$ is a uniform
distribution on $\mathbb{S}^{2}$, then $Q=0$.
* •
If two eigenvalues are equal and the third is distinct, we say $Q$ is
uniaxial. A uniaxial Q-tensor can be written as $Q=s\left(n\otimes
n-\frac{1}{3}I\right)$, for a scalar $s$ and unit vector $n$. We interpret $n$
as the favoured direction of alignment, and $s$ as a measure of the degree of
ordering molecules about $n$.
* •
If all three eigenvalues are distinct, we say that $Q$ is biaxial.
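As a concrete illustration of definition (1.3) and this classification, one can compute $Q$ numerically for a head-to-tail symmetric distribution concentrated near the $z$-axis and inspect its eigenvalues. The density, grid resolution and concentration parameter below are arbitrary choices, not from the paper.

```python
import numpy as np

def q_tensor(f, n_theta=201, n_phi=200):
    """Q = int_{S^2} f(p) (p ⊗ p - I/3) dp, via a simple spherical grid;
    f is an (unnormalised) density on the sphere, normalised here."""
    th = np.linspace(0.0, np.pi, n_theta)
    ph = np.linspace(0.0, 2*np.pi, n_phi, endpoint=False)
    TH, PH = np.meshgrid(th, ph, indexing='ij')
    p = np.stack([np.sin(TH)*np.cos(PH),
                  np.sin(TH)*np.sin(PH),
                  np.cos(TH)], axis=-1)
    w = f(p) * np.sin(TH)        # area element sin(theta) dtheta dphi
    w = w / w.sum()              # normalise to a probability density
    outer = np.einsum('...i,...j->...ij', p, p)
    return np.einsum('ab,abij->ij', w, outer) - np.eye(3)/3

# A head-to-tail symmetric density concentrated near the z-axis
# (a Maier-Saupe-like choice, purely illustrative):
f = lambda p: np.exp(5.0 * p[..., 2]**2)

Q = q_tensor(f)
ev = np.sort(np.linalg.eigvalsh(Q))
print(np.trace(Q))   # ~0: Q is traceless
print(ev)            # two (nearly) equal eigenvalues: uniaxial
```

For this density the two smaller eigenvalues coincide and the largest is distinct, so $Q$ is uniaxial with $n$ along the $z$-axis; replacing $f$ by a uniform density returns $Q\approx 0$, the isotropic case.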
The corresponding free energy to be minimised is
(1.5) $\int_{\Omega}\psi_{b}(Q(x))+W(Q(x),\nabla Q(x))\,\mathrm{d}x.$
The function $\psi_{b}\colon\text{Sym}_{0}(3)\to\mathbb{R}\cup\\{+\infty\\}$
is a frame indifferent bulk potential, which may be taken as a polynomial or
the Ball-Majumdar singular potential. Its main characteristic is that, in the
cases considered, it is minimised on the set
(1.6) $\mathscr{N}=\left\\{Q\in\text{Sym}_{0}(3)\colon\textrm{there exists
}n\in\mathbb{S}^{2}\textrm{ such that }Q=s_{0}\left(n\otimes
n-\frac{1}{3}I\right)\right\\},$
with $s_{0}$ a temperature, concentration and material dependent constant. The
elastic energy $W$ is minimised when $\nabla Q=0$. While many forms are
possible, by symmetry the only frame-indifferent, quadratic energy that only
depends on the gradient of $Q$ is of the form
(1.7) $W(\nabla
Q)=\frac{L_{1}}{2}Q_{ij,k}Q_{ij,k}+\frac{L_{2}}{2}Q_{ij,k}Q_{ik,j}+\frac{L_{3}}{2}Q_{ij,j}Q_{ik,k},$
where Einstein summation notation is used. While Oseen-Frank represents
nematic defects as discontinuities in the continuum variables, the Landau-de
Gennes approach admits a different description, in which point defects are
typically described as a melting of nematic order, that is $Q=0$. This permits
smooth configurations to describe defects.
In an appropriate large-domain limit of a rescaled problem, the contributions
of the bulk energy become overwhelming, and we expect the minimisers to
converge to minimisers of a constrained problem, where we minimise the elastic
energy
(1.8) $\int_{\Omega}W(\nabla Q)\,\mathrm{d}x,$
subject to the constraint that $Q(x)\in\mathscr{N}$ almost everywhere. In the
case where $Q=s_{0}\left(n\otimes n-\frac{1}{3}I\right)$ almost everywhere for
some $n\in W^{1,2}(\Omega,\mathbb{S}^{2})$, we say that $Q$ is orientable, and
the problem in the presence of Dirichlet boundary conditions that are
$\mathscr{N}$-valued almost everywhere becomes equivalent to that of
minimising the energy (1.2) for $n$. The constants $L_{i}$ and $K_{i}$ are
related in the case of Dirichlet boundary conditions where null-Lagrangian
terms may be neglected as
(1.9) $\begin{split}\frac{1}{s_{0}^{2}}K_{1}=&2L_{1}+L_{2}+L_{3},\\\
\frac{1}{s_{0}^{2}}K_{2}=&2L_{1},\\\
\frac{1}{s_{0}^{2}}K_{3}=&2L_{1}+L_{2}+L_{3}.\end{split}$
An energy purely quadratic in $\nabla Q$ cannot give rise to three independent
elastic constants in the Oseen-Frank model, with the so-called “cubic term”
$Q_{ij}Q_{kl,i}Q_{kl,j}$ often being used to fill the degeneracy. Such a term
does not arise from the model we will consider, although a more complex
variant taking into account molecular length scales has been proposed to avoid
this issue [11].
Studying the convergence of minimisers of Landau-de Gennes towards the Oseen-
Frank limit has attracted interest, with Majumdar and Zarnescu showing global
$W^{1,2}$ convergence and uniform convergence away from singular sets in the
one-constant case [35], Nguyen and Zarnescu proving convergence results in
stronger topologies [37], Contreras, Lamy and Rodiac generalising the approach
to other harmonic-map problems [10], and further extensions by Contreras and
Lamy [9] and Canevari, Majumdar and Stroffolini [7] to more general elastic
energies. In other settings, the $W^{1,2}$-convergence does not hold globally
but only locally, away from the singular sets, due to topological obstructions
carried by the boundary data and/or the domain (see e.g. [2, 19, 6, 23]).
Recently, Di Fratta, Robbins, Slastikov and Zarnescu found higher-order
Landau-de Gennes corrections to the Oseen-Frank functional, in two dimensional
domains, by studying the $\Gamma$-expansion of the Landau-de Gennes functional
in the large-domain limit [14]. The problem holds many parallels to the now-
classical Ginzburg-Landau problem [4]. Other singular limits and qualitative
features of Landau-de Gennes solutions have been studied too; see, for
instance, [8, 13, 22, 25, 29, 24, 26, 27] and the references therein.
While Landau-de Gennes has proven an effective model in many situations, there
are still open questions as to how one may justify the model in a rigorous
way. While one may use Landau-de Gennes, in appropriate situations, to justify
Oseen-Frank, a rigorous justification of Landau-de Gennes itself is lacking.
Historically it was justified on a phenomenological basis, but other work has
been able to derive Landau-de Gennes as a gradient expansion of a non-local
mean field model [17, 20]. Justification by formal gradient expansions leaves
open the question as to the consistency of minimisers of the original free energy
with minimisers of its approximation. To this end, recent work has been
focused on rigorous asymptotic analysis of non-local free energies, which
similarly produce the Oseen-Frank model in a large-domain limit [31, 32, 42,
43]. These approaches “bypass” the intermediate and non-rigorous derivation of
Landau-de Gennes. This is analogous to recent investigations into
peridynamics, a formulation of elasticity based on non-local interactions.
These formulations of elasticity bear mathematical similarity with the mean-
field theory approach, where stress-strain relations are described in terms of
non-local operators on the deformation map, rather than derivatives as in the
more classical formulations of elasticity [3, 40]. The classical density
functional theory we will consider in this work is based on a simplified
competition between an entropic contribution to the energy, favouring
disorder, and an interaction energy, favouring order. The models themselves
are justified as a second order truncation of the virial expansion in the
dilute regime based on long-range attractive interactions in the style of
Maier and Saupe [34] and with mathematical similarity to the model of Onsager
[38]. Explicitly, given the one-particle distribution function $f(x,\cdot)$ in
a neighbourhood of $x$, we define a free energy functional
(1.10)
$\begin{split}&k_{B}T\rho_{0}\int_{\Omega\times\mathbb{S}^{2}}f(x,\,p)\ln
f(x,\,p)\,\mathrm{d}x\,\mathrm{d}p\\\
&\qquad\qquad\qquad-\frac{\rho_{0}^{2}}{2}\int_{\Omega\times\mathbb{S}^{2}}\int_{\Omega\times\mathbb{S}^{2}}f(x,\,p)f(y,\,q)\mathcal{K}(x-y,\,p,\,q)\,\mathrm{d}x\,\mathrm{d}p\,\mathrm{d}y\,\mathrm{d}q.\end{split}$
Here $\rho_{0}>0$ is the number density of particles in space, $k_{B}$ the
Boltzmann constant, $T>0$ the temperature, and $\mathcal{K}(z,\,p,\,q)$ denotes the
interaction energy of particles with orientations $p$, $q$ and with centres of
mass separated by a vector $z$. The entropic term on the left is convex and
readily shown to be minimised at a uniform distribution, that is, an isotropic
disordered system. The nature of the pairwise interaction energy on the right
is that nearby particles will prefer to be aligned with each other. We see
that temperature and concentration mediate the competition between these
opposing mechanisms. Recent work has established the Oseen-Frank energy (1.2)
in terms of a large-domain limit of the energy (1.10) under certain
assumptions, in which the elastic constants $K_{i}$ can be related to second
moments of the interaction kernel. Previous work has established weaker modes
of convergence, while in this work we will establish stronger convergence of
minimisers away from defect sets, analogous to the approach taken by Majumdar
and Zarnescu for the Landau-de Gennes model [35].
### 1.2 Simplification of the model and non-dimensionalisation
Here and throughout the sequel, we consider the more general case where
molecules admit an internal degree of freedom $p$ in a manifold $\mathcal{M}$.
We will employ a macroscopic order parameter $u\in\mathbb{R}^{m}$ to emphasise
that the analysis is not limited to the concrete case of nematic liquid crystals.
Through most of the paper, we consider the case where $f$ is prescribed on
$\left(\mathbb{R}^{3}\setminus\frac{1}{{\varepsilon}}\Omega\right)\times\mathcal{M}$,
where $\Omega$ is a non-dimensional reference domain and ${\varepsilon}>0$ is
a small parameter, representative of the inverse of a large length scale of
the domain. In Section 5, we relax this assumption and study a minimisation
problem where $f$ is prescribed only in a neighbourhood of the domain, of
suitable thickness. We consider the free energy
(1.11)
$\begin{split}&\tilde{\mathcal{G}}_{\varepsilon}(f)=k_{B}T\rho_{0}\int_{\frac{1}{{\varepsilon}}\Omega\times\mathcal{M}}f(x,\,p)\ln
f(x,\,p)\,\mathrm{d}x\,\mathrm{d}p\\\
&\qquad\qquad\qquad-\frac{\rho_{0}^{2}}{2}\int_{\mathbb{R}^{3}\times\mathcal{M}}\int_{\mathbb{R}^{3}\times\mathcal{M}}f(x,\,p)f(y,\,q)\mathcal{K}(x-y,\,p,\,q)\,\mathrm{d}x\,\mathrm{d}p\,\mathrm{d}y\,\mathrm{d}q.\end{split}$
For simplification of the problem, we take the interaction energy to be of the
form
(1.12) $\mathcal{K}(z,\,p,\,q)=K(z)\sigma(p)\cdot\sigma(q),$
where $\sigma\in L^{\infty}(\mathcal{M},\mathbb{R}^{m})$ is some “microscopic
order parameter”, and $K\colon\mathbb{R}^{3}\to\mathbb{R}^{m\times m}$ is a
symmetric tensor field, which will satisfy certain technical conditions (see
(K1)–(K6) in Section 2). By applying Fubini we may then introduce a
“macroscopic order parameter”, $u\in
L^{\infty}(\mathbb{R}^{3},\mathbb{R}^{m})$ by
(1.13) $u(x)=\int_{\mathcal{M}}f(x,\,p)\sigma(p)\,\mathrm{d}p,$
and re-write the interaction energy as
(1.14)
$\begin{split}&-\frac{\rho_{0}^{2}}{2}\int_{\mathbb{R}^{3}\times\mathcal{M}}\int_{\mathbb{R}^{3}\times\mathcal{M}}f(x,\,p)f(y,\,q)\mathcal{K}(x-y,\,p,\,q)\,\mathrm{d}x\,\mathrm{d}p\,\mathrm{d}y\,\mathrm{d}q\\\
&\qquad\qquad\qquad=-\frac{\rho_{0}^{2}}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}K(x-y)u(x)\cdot
u(y)\,\mathrm{d}x\,\mathrm{d}y.\end{split}$
While it is not possible to write the entropic term explicitly in terms of
$u$, we may provide a lower bound by means of the singular Ball-
Majumdar/Katriel potential and its extensions [1, 28, 41] by
(1.15) $\int_{\frac{1}{{\varepsilon}}\Omega\times\mathcal{M}}f(x,\,p)\ln
f(x,\,p)\,\mathrm{d}x\,\mathrm{d}p\geq\int_{\frac{1}{{\varepsilon}}\Omega}\psi_{s}(u(x))\,\mathrm{d}x,$
where the function
$\psi_{s}\colon\mathbb{R}^{m}\to\mathbb{R}\cup\\{+\infty\\}$ is defined by
(1.16) $\psi_{s}(u)=\min\left\\{\int_{\mathcal{M}}f(p)\ln
f(p)\,\mathrm{d}p\colon f\geq 0\textrm{
a.e.,}\,\int_{\mathcal{M}}f(p)\,\mathrm{d}p=1,\,\int_{\mathcal{M}}f(p)\sigma(p)\,\mathrm{d}p=u\right\\}\\!,$
where by convention $\psi_{s}(u)=+\infty$ when the constraint set is empty.
Note that the minimisation problem (1.16) is strictly convex, thus solutions
are necessarily unique, and we may define $f_{u}$ to be the corresponding
minimiser for $u\in\mathcal{Q}=\left\\{u:\psi_{s}(u)<+\infty\right\\}$. That
is,
(1.17) $f_{u}=\text{arg min}\left\\{\int_{\mathcal{M}}f(p)\ln
f(p)\,\mathrm{d}p\colon f\geq 0\textrm{
a.e.,}\,\int_{\mathcal{M}}f(p)\,\mathrm{d}p=1,\,\int_{\mathcal{M}}f(p)\sigma(p)\,\mathrm{d}p=u\right\\}.$
The precise definition of $\psi_{s}$ will be unimportant in this work: in the
sequel, we allow any function $\psi_{s}$ that satisfies certain technical
assumptions (see (H1)–(H6) in Section 2).
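Although the precise form of $\psi_{s}$ is unimportant, it may help intuition that (1.16) can be evaluated through its convex dual: for $u$ in the interior of the constraint set, $\psi_{s}(u)=\sup_{\lambda\in\mathbb{R}^{m}}\big(\lambda\cdot u-\ln\int_{\mathcal{M}}e^{\lambda\cdot\sigma(p)}\,\mathrm{d}p\big)$, with optimal density $f_{u}\propto e^{\lambda\cdot\sigma}$. A numerical sketch for the illustrative choice $\mathcal{M}=\mathbb{S}^{1}$, $\sigma(p)=p$ (our example, not the paper's setting):

```python
import numpy as np
from scipy.optimize import minimize

# Discretise M = S^1 with sigma(p) = p (an illustrative choice), and
# evaluate psi_s via the dual formula
#   psi_s(u) = sup_lambda ( lambda . u - ln Z(lambda) ),
#   Z(lambda) = ∫_{S^1} exp(lambda . sigma(p)) dp.
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
dp = 2.0 * np.pi / theta.size
sigma = np.stack([np.cos(theta), np.sin(theta)], axis=1)

def log_Z(lam):
    return np.log(np.sum(np.exp(sigma @ lam)) * dp)

def psi_s(u):
    # The dual objective is concave in lambda; minimise its negative.
    res = minimize(lambda lam: log_Z(lam) - lam @ u, x0=np.zeros(2))
    return -res.fun

print(psi_s(np.zeros(2)))           # -ln(2*pi) ≈ -1.8379: the uniform density
print(psi_s(np.array([0.5, 0.0])))  # strictly larger: psi_s is minimal at u = 0
```

Here $\psi_{s}(0)=-\ln(2\pi)$ is the entropy of the uniform density, which is the unconstrained minimiser and has vanishing order parameter, consistent with strict convexity of (1.16).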
We in fact have the result that $f^{*}$ is a minimiser of
$\tilde{\mathcal{G}}_{\varepsilon}$ if and only if, for
$u^{*}(x)=\int_{\mathcal{M}}f^{*}(x,\,p)\sigma(p)\,\mathrm{d}p$, $u^{*}$ is
a minimiser of
(1.18)
$\tilde{\mathcal{F}}_{\varepsilon}(u)=k_{B}T\rho_{0}\int_{\frac{1}{{\varepsilon}}\Omega}\psi_{s}(u(x))\,\mathrm{d}x-\frac{\rho_{0}^{2}}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}K(x-y)u(x)\cdot
u(y)\,\mathrm{d}x\,\mathrm{d}y,$
with $f^{*}=f_{u^{*}}$. This is readily seen by writing the minimisation as a
two-step process: first minimising over all $f$ with a prescribed macroscopic
order parameter $u$, and then minimising over $u$, noting that the first
minimisation may be performed pointwise almost-everywhere in $\mathbb{R}^{3}$,
as in [42]. That is to say,
we have a simpler, macroscopic energy with equivalent minimisers. By
introducing a change of variables,
$x=\frac{x^{\prime}}{{\varepsilon}},\quad
y=\frac{y^{\prime}}{{\varepsilon}},\quad
u^{\prime}(x^{\prime})=u(x),\quad{\varepsilon}^{\prime}:=\frac{{\varepsilon}}{\rho_{0}^{1/3}},\quad
K^{\prime}(x^{\prime})=\frac{1}{k_{B}T}K({\varepsilon}^{\prime}x),$
and a (non-dimensional) constant $C_{{\varepsilon}^{\prime}}$ to be specified
later, we rescale the domain and obtain the free energy we will consider for
the remainder of this work, so that
(1.19)
$\begin{split}E_{{\varepsilon}^{\prime}}(u^{\prime})&:=\frac{{\varepsilon}}{k_{B}T\rho_{0}^{1/3}}\tilde{\mathcal{F}}_{\varepsilon}(u)+C_{{\varepsilon}^{\prime}}\\\
&=\frac{1}{{{\varepsilon}^{\prime}}^{2}}\int_{\Omega}\psi_{s}(u^{\prime}(x^{\prime}))\,\mathrm{d}x^{\prime}-\frac{1}{2{{\varepsilon}^{\prime}}^{5}}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}K^{\prime}\left(\frac{x^{\prime}-y^{\prime}}{{\varepsilon}^{\prime}}\right)u^{\prime}(x^{\prime})\cdot
u^{\prime}(y^{\prime})\,\mathrm{d}x^{\prime}\,\mathrm{d}y^{\prime}+C_{{\varepsilon}^{\prime}}.\end{split}$
The additive constant $C_{{\varepsilon}^{\prime}}$ is irrelevant for the
purpose of minimisation; however, we will make a specific choice of
$C_{{\varepsilon}^{\prime}}$ (see Equation (2.6) below) for analytical
convenience. We will consider the regime as ${\varepsilon}^{\prime}\to 0$ in
this work. From the definition of ${\varepsilon}^{\prime}$, this may be
interpreted in two forms, one in which the characteristic length scale of the
domain, $\frac{1}{{\varepsilon}}$, becomes large, and one in which the density
$\rho_{0}$ becomes large. However, as the energy we consider is based on the
second-order virial expansion, which is explicitly a model for dilute regimes,
we interpret the limit ${\varepsilon}^{\prime}\to 0$ as the former, that is, a
large-domain limit. In the sequel we omit the primes and consider (1.19) as
our free energy functional to be minimised at scale ${\varepsilon}>0$.
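For the reader's convenience, the prefactors in (1.19) follow from a direct computation. Since $\mathrm{d}x={\varepsilon}^{-3}\,\mathrm{d}x^{\prime}$ and ${{\varepsilon}^{\prime}}^{2}={\varepsilon}^{2}\rho_{0}^{-2/3}$, the entropic term transforms as
$\frac{{\varepsilon}}{k_{B}T\rho_{0}^{1/3}}\cdot
k_{B}T\rho_{0}\int_{\frac{1}{{\varepsilon}}\Omega}\psi_{s}(u(x))\,\mathrm{d}x=\frac{\rho_{0}^{2/3}}{{\varepsilon}^{2}}\int_{\Omega}\psi_{s}(u^{\prime}(x^{\prime}))\,\mathrm{d}x^{\prime}=\frac{1}{{{\varepsilon}^{\prime}}^{2}}\int_{\Omega}\psi_{s}(u^{\prime}(x^{\prime}))\,\mathrm{d}x^{\prime},$
while, using
$K(x-y)=k_{B}TK^{\prime}\left(\frac{x^{\prime}-y^{\prime}}{{\varepsilon}^{\prime}}\right)$,
$\mathrm{d}x\,\mathrm{d}y={\varepsilon}^{-6}\,\mathrm{d}x^{\prime}\,\mathrm{d}y^{\prime}$
and ${{\varepsilon}^{\prime}}^{5}={\varepsilon}^{5}\rho_{0}^{-5/3}$, the
interaction term acquires the prefactor
$\frac{{\varepsilon}}{k_{B}T\rho_{0}^{1/3}}\cdot\frac{\rho_{0}^{2}}{2}\cdot\frac{k_{B}T}{{\varepsilon}^{6}}=\frac{\rho_{0}^{5/3}}{2{\varepsilon}^{5}}=\frac{1}{2{{\varepsilon}^{\prime}}^{5}}.$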
## 2 Technical assumptions and main results
Let $\text{Sym}_{0}(m)$ be the space of symmetric $(m\times m)$-matrices with
real coefficients. Given an interaction kernel
$K\colon\mathbb{R}^{3}\to\text{Sym}_{0}(m)$ and ${\varepsilon}>0$, we define
$K_{\varepsilon}(z):={\varepsilon}^{-3}K({\varepsilon}^{-1}z)$ for any
$z\in\mathbb{R}^{3}$. Then, we may rewrite the functional (1.19) as
(2.1)
$\begin{split}E_{\varepsilon}(u):=-\frac{1}{2{\varepsilon}^{2}}\int_{\mathbb{R}^{3}\times\mathbb{R}^{3}}K_{\varepsilon}(x-y)u(x)\cdot
u(y)\,\mathrm{d}x\,\mathrm{d}y+\frac{1}{{\varepsilon}^{2}}\int_{\Omega}\psi_{s}(u(x))\,\mathrm{d}x+C_{\varepsilon},\end{split}$
where $u\colon\mathbb{R}^{3}\to\mathbb{R}^{m}$ is the macroscopic order
parameter, $\Omega\subseteq\mathbb{R}^{3}$ is a bounded, smooth domain, and
$\psi_{s}\colon\mathbb{R}^{m}\to[0,\,+\infty]$ is any convex potential that
satisfies the assumptions (H1)–(H6) below (for instance, the Ball-
Majumdar/Katriel potential defined by (1.16)).
#### Assumptions on the kernel $K$.
Our assumptions on the kernel $K$ are reminiscent of those in [42]. We define
$g(z):=\lambda_{\min}(K(z))$ for any $z\in\mathbb{R}^{3}$, where
$\lambda_{\min}(K)$ denotes the minimum eigenvalue of $K$.
1. (K1)
$K\in W^{1,1}(\mathbb{R}^{3},\,\text{Sym}_{0}(m))$.
2. (K2)
$K$ is even, that is $K(z)=K(-z)$ for a.e. $z\in\mathbb{R}^{3}$.
3. (K3)
$g\geq 0$ a.e. on $\mathbb{R}^{3}$, and there exist positive numbers
$\rho_{1}<\rho_{2}$, $k$ such that $g\geq k$ a.e. on $B_{\rho_{2}}\setminus
B_{\rho_{1}}$.
4. (K4)
$g\in L^{1}(\mathbb{R}^{3})$ and has finite second moment, that is
$\int_{\mathbb{R}^{3}}g(z)\left|z\right|^{2}\mathrm{d}z<+\infty$.
5. (K5)
There exists a positive constant $C$ such that $\lambda_{\max}(K(z))\leq
Cg(z)$ for a.e. $z\in\mathbb{R}^{3}$ (where $\lambda_{\max}(K)$ denotes the
maximum eigenvalue of $K$).
6. (K6)
There holds
$\int_{\mathbb{R}^{3}}\left\|\nabla
K(z)\right\|\left|z\right|^{3}\mathrm{d}z<+\infty,$
where $\left\|\nabla
K(z)\right\|^{2}:=\partial_{\alpha}K_{ij}(z)\,\partial_{\alpha}K_{ij}(z)$.
In the case of physically meaningful systems the tensor $K$ will have to
respect frame invariance. In the case of nematic liquid crystals, where the
order parameter is a traceless symmetric matrix $Q$, frame indifference
implies that the bilinear form must necessarily be of the form
(2.2) $K(z)Q_{1}\cdot Q_{2}=f_{1}(|z|)Q_{1}\cdot Q_{2}+f_{2}(|z|)Q_{1}z\cdot
Q_{2}z+f_{3}(|z|)(Q_{1}z\cdot z)(Q_{2}z\cdot z),$
for all $Q_{1},Q_{2}\in\text{Sym}_{0}(3)$, where $f_{1}$, $f_{2}$, $f_{3}$ are
real-valued functions defined on $[0,\,+\infty)$ [42]. It is clear that, for
appropriate choices of $f_{1}$, $f_{2}$, $f_{3}$ that are $C^{1}$ and decay
sufficiently fast at infinity, the previous assumptions are satisfied. This
family of bilinear forms includes the simplified interaction energy
(2.3) $K(z)Q_{1}\cdot Q_{2}=C\frac{\varphi(|z|)}{|z|^{6}}Q_{1}\cdot Q_{2}$
for a suitable cutoff function $\varphi$, zero near the origin, as found in
[5, Equation (3.43)]. The cutoff of the energy in a vicinity of $|z|=0$ is
reflective of the energy being derived as an approximation of a long-ranged,
attractive interaction.
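As a sanity check, the frame indifference of the form (2.2) can be verified numerically: rotating $z$, $Q_{1}$, $Q_{2}$ simultaneously leaves the bilinear form unchanged, since $|Rz|=|z|$, $RQ_{1}R^{T}Rz=RQ_{1}z$, and rotations preserve scalar products. A sketch with arbitrary illustrative radial profiles $f_{1}$, $f_{2}$, $f_{3}$ (our choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def bilinear(z, Q1, Q2, f1, f2, f3):
    # The frame-indifferent form (2.2):
    #   f1(|z|) Q1:Q2 + f2(|z|) Q1 z . Q2 z + f3(|z|) (Q1 z . z)(Q2 z . z).
    r = np.linalg.norm(z)
    return (f1(r) * np.sum(Q1 * Q2)
            + f2(r) * ((Q1 @ z) @ (Q2 @ z))
            + f3(r) * ((Q1 @ z) @ z) * ((Q2 @ z) @ z))

def random_traceless_sym():
    A = rng.standard_normal((3, 3))
    Q = (A + A.T) / 2.0
    return Q - np.trace(Q) / 3.0 * np.eye(3)

def random_rotation():
    # QR of a Gaussian matrix yields an orthogonal factor; flip a column
    # if needed to land in SO(3).
    R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(R) < 0:
        R[:, 0] *= -1.0
    return R

# Illustrative radial profiles (our choice, not from the paper).
f1 = lambda r: np.exp(-r ** 2)
f2 = lambda r: r * np.exp(-r ** 2)
f3 = lambda r: np.cos(r) * np.exp(-r ** 2)

z = rng.standard_normal(3)
Q1, Q2 = random_traceless_sym(), random_traceless_sym()
R = random_rotation()
a = bilinear(z, Q1, Q2, f1, f2, f3)
b = bilinear(R @ z, R @ Q1 @ R.T, R @ Q2 @ R.T, f1, f2, f3)
print(abs(a - b))   # zero up to rounding
```

The invariance here is exact, irrespective of the profiles, because each of the three terms is individually frame indifferent.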
#### Assumptions on the singular potential $\psi_{s}$.
1. (H1)
$\psi_{s}\colon\mathbb{R}^{m}\to[0,\,+\infty]$ is a convex function.
2. (H2)
The domain of $\psi_{s}$,
$\mathcal{Q}:=\psi_{s}^{-1}([0,\,+\infty))\subseteq\mathbb{R}^{m}$, is a non-
empty, bounded open set and $\psi_{s}\in C^{2}(\mathcal{Q})$.
3. (H3)
There exists a constant $c>0$ such that
$\nabla^{2}\psi_{s}(y)\chi\cdot\chi\geq c\left|\chi\right|^{2}$ for any
$y\in\mathcal{Q}$ and any $\chi\in\mathbb{R}^{m}$.
4. (H4)
There holds $\psi_{s}(y)\to+\infty$ as
$\operatorname{dist}(y,\,\partial\mathcal{Q})\to 0$.
We define the “bulk potential” $\psi_{b}\colon\mathcal{Q}\to\mathbb{R}$ in
terms of $K$ and $\psi_{s}$, as
(2.4)
$\psi_{b}(y):=\psi_{s}(y)-{\frac{1}{2}}\left(\int_{\mathbb{R}^{3}}K(z)\,\mathrm{d}z\right)y\cdot
y+c_{0}\quad\textrm{for any }y\in\mathcal{Q},$
where $c_{0}\in\mathbb{R}$ is a constant, uniquely determined by imposing that
$\inf\psi_{b}=0$. We make the following assumptions on $\psi_{b}$:
1. (H5)
The set $\mathscr{N}:=\psi_{b}^{-1}(0)\subseteq\mathcal{Q}$ is a compact,
smooth, connected manifold without boundary.
2. (H6)
For any $y\in\mathscr{N}$ and any unit vector $\xi\in\mathbb{S}^{m-1}$ that is
orthogonal to $\mathscr{N}$ at $y$, we have
$\nabla^{2}\psi_{b}(y)\xi\cdot\xi>0$.
###### Remark 2.1.
If the norm of $\int_{\mathbb{R}^{3}}K(z)\,\mathrm{d}z$ is smaller than the
constant $c$ given by (H3), then the function $\psi_{b}$ is convex and hence,
its zero-set $\mathscr{N}$ reduces to a point. This happens, for example, in
the sufficiently high temperature regime, independently of the precise form of
$K$. Nevertheless, our arguments remain valid in this case, too.
###### Remark 2.2.
The Ball-Majumdar/Katriel potential, defined by (1.16), satisfies the
conditions (H1)–(H6). (H1), (H2), (H4), and (H5) follow from [1], apart from
the $C^{2}$ smoothness of $\psi_{s}$, which is implicitly proven in [28] via an
inverse function theorem argument, although not stated there explicitly. (H3) is proven in
[41]. With this choice of the potential, the set
$\mathscr{N}:=\psi_{b}^{-1}(0)$ is either a point or the manifold given by
(1.6) (see [1, Section 4]). In both cases, (H6) is satisfied (see [30,
Proposition 4.2]).
#### The admissible class and an equivalent expression for the free energy.
We complement the minimisation of the functional (2.1) by prescribing $u$ on
$\mathbb{R}^{3}\setminus\Omega$. We take a map $u_{\mathrm{bd}}\in
H^{1}(\mathbb{R}^{3},\,\mathbb{R}^{m})$ such that
(BD) $u_{\mathrm{bd}}(x)\in\mathcal{Q}\quad\textrm{for a.e.
}x\in\mathbb{R}^{3}\setminus\Omega,\qquad
u_{\mathrm{bd}}(x)\in\mathscr{N}\quad\textrm{for a.e. }x\in\Omega,$
and we define the admissible class
(2.5) $\mathscr{A}:=\left\\{u\in
L^{2}(\mathbb{R}^{3},\,\mathcal{Q})\colon\psi_{s}(u)\in L^{1}(\Omega),\
u=u_{\mathrm{bd}}\textrm{ a.e. on }\mathbb{R}^{3}\setminus\Omega\right\\}\\!.$
In the class $\mathscr{A}$, the functional $E_{\varepsilon}$ has an alternative
expression. For any $y\in\mathbb{R}^{m}$, we use the abbreviated notation
$y^{\otimes 2}:=y\otimes y$. We choose
(2.6)
$C_{\varepsilon}:=\frac{c_{0}}{{2}{\varepsilon}^{2}}\left|\Omega\right|+\frac{1}{{2}{\varepsilon}^{2}}\int_{\mathbb{R}^{3}\setminus\Omega}\left(\int_{\mathbb{R}^{3}}K(z)\,\mathrm{d}z\right)\cdot
u_{\mathrm{bd}}(x)^{\otimes 2}\,\mathrm{d}x,$
where $\left|\Omega\right|$ denotes the volume of $\Omega$ and
$c_{0}\in\mathbb{R}$ is the same number as in (2.4). The constant
$C_{\varepsilon}$ only depends on ${\varepsilon}$, $\Omega$, $K$ and
$u_{\mathrm{bd}}$, so it does not affect minimisers of the functional. By
applying the algebraic identity
$-2K(x-y)u(x)\cdot u(y)=K(x-y)\cdot(u(x)-u(y))^{\otimes 2}-K(x-y)\cdot
u(x)^{\otimes 2}-K(x-y)\cdot u(y)^{\otimes 2}$
and using (2.4), (2.6), we re-write (2.1) as
(2.7)
$\begin{split}E_{\varepsilon}(u)=\frac{1}{4{\varepsilon}^{2}}\int_{\mathbb{R}^{3}\times\mathbb{R}^{3}}K_{\varepsilon}(x-y)\cdot\left(u(x)-u(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y+\frac{1}{{\varepsilon}^{2}}\int_{\Omega}\psi_{b}(u(x))\,\mathrm{d}x\end{split}$
for any $u\in\mathscr{A}$. We note that the free energy parallels the
Landau-de Gennes energy: the second term plays the role of the
bulk energy, while the first term acts as a non-local analogue of the
elastic energy, which, as we shall see, is recovered in a precise sense in the
asymptotic limit as ${\varepsilon}\to 0$. Let $L$ be the unique symmetric
fourth-order tensor that satisfies
(2.8) $L\xi\cdot\xi:=\frac{1}{4}\int_{\mathbb{R}^{3}}K(z)\cdot(\xi z)^{\otimes
2}\,\mathrm{d}z\qquad\textrm{for any }\xi\in\mathbb{R}^{m\times 3}.$
Coordinate-wise, $L$ is defined by
$L_{ij\alpha\beta}=\frac{1}{4}\int_{\mathbb{R}^{3}}K_{\alpha\beta}(z)\,z_{i}\,z_{j}\,\mathrm{d}z$
for any $i$, $j\in\\{1,\,2,\,3\\}$ and $\alpha$,
$\beta\in\\{1,2,\,\ldots,\,m\\}$. Let $E_{0}\colon\mathscr{A}\to[0,\,+\infty]$
be given as
(2.9) $E_{0}(u):=\begin{cases}\displaystyle\int_{\Omega}L\nabla u\cdot\nabla
u&\textrm{if }u\in H^{1}(\Omega,\,\mathscr{N})\cap\mathscr{A}\\\
+\infty&\textrm{otherwise.}\end{cases}$
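To make $L$ concrete: for an isotropic kernel $K(z)=k(\left|z\right|)\,\mathrm{Id}_{m}$ one finds $L_{ij\alpha\beta}=\frac{1}{12}\,\delta_{ij}\delta_{\alpha\beta}\int_{\mathbb{R}^{3}}k(\left|z\right|)\left|z\right|^{2}\mathrm{d}z$. A hedged numerical sketch for the Gaussian kernel $K(z)=e^{-\left|z\right|^{2}}\,\mathrm{Id}_{m}$ (our illustrative choice, not one from the paper):

```python
import numpy as np

# L_{ij,ab} = (1/4) ∫ K_{ab}(z) z_i z_j dz for the illustrative isotropic
# Gaussian kernel K(z) = exp(-|z|^2) Id_m (our choice, not from the paper).
# A tensor-product Gauss-Hermite rule is exact against the weight e^{-|z|^2}.
t, w = np.polynomial.hermite.hermgauss(20)
Z = np.stack(np.meshgrid(t, t, t, indexing="ij"), axis=-1).reshape(-1, 3)
W = (w[:, None, None] * w[None, :, None] * w[None, None, :]).ravel()

M = np.einsum("n,ni,nj->ij", W, Z, Z)   # M_ij = ∫ e^{-|z|^2} z_i z_j dz
m = 2
L = 0.25 * np.einsum("ij,ab->ijab", M, np.eye(m))

print(M[0, 0], np.pi ** 1.5 / 2.0)      # both ≈ 2.7842; off-diagonals vanish
```

Against the Gaussian weight, $\int e^{-\left|z\right|^{2}}z_{1}^{2}\,\mathrm{d}z=\pi^{3/2}/2$, so the diagonal entries of $L$ equal $\pi^{3/2}/8$, matching the closed formula above.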
By assumption (BD), the set $H^{1}(\Omega,\,\mathscr{N})\cap\mathscr{A}$ is
non-empty and hence, the functional $E_{0}$ is not identically equal to
$+\infty$. Taylor [42] proved that, as ${\varepsilon}\to 0$, the functional
$E_{\varepsilon}$ $\Gamma$-converges to $E_{0}$ with respect to the
$L^{2}$-topology. In particular, up to subsequences, minimisers
$u_{\varepsilon}$ of $E_{\varepsilon}$ in the class $\mathscr{A}$ converge
$L^{2}$-strongly to a minimiser $u_{0}$ of $E_{0}$ in $\mathscr{A}$. Our aim
is to prove a convergence result for minimisers, in a stronger topology.
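Incidentally, the algebraic identity used to pass from (2.1) to (2.7) relies only on the symmetry of $K(z)$; it can be sanity-checked numerically with random data (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
A = rng.standard_normal((m, m))
K = (A + A.T) / 2.0                      # symmetric, as required by (K2)
u, v = rng.standard_normal(m), rng.standard_normal(m)

def frob(K, B):
    # K . B = K_ij B_ij (Frobenius pairing).
    return np.sum(K * B)

lhs = -2.0 * ((K @ u) @ v)
rhs = (frob(K, np.outer(u - v, u - v))
       - frob(K, np.outer(u, u)) - frob(K, np.outer(v, v)))
print(abs(lhs - rhs))   # zero up to rounding
```

Expanding $(u-v)^{\otimes 2}$ produces the two squared terms plus the cross term $-2\,u\otimes v$, whose pairing with a symmetric $K$ is $-2\,Ku\cdot v$.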
#### Main results.
Given a Borel set $G\subseteq\mathbb{R}^{3}$ and $u\in
L^{\infty}(G,\,\mathcal{Q})$, we define
(2.10) $F_{\varepsilon}(u,\,G):=\frac{1}{4{\varepsilon}^{2}}\int_{G\times
G}K_{\varepsilon}(x-y)\cdot\left(u(x)-u(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y+\frac{1}{{\varepsilon}^{2}}\int_{G}\psi_{b}(u(x))\,\mathrm{d}x.$
For any $\mu\in(0,\,1)$, we denote the $\mu$-Hölder semi-norm of $u$ on $G$ as
$[u]_{C^{\mu}(G)}:=\sup_{x,\,y\in G,\ x\neq
y}\frac{\left|u(x)-u(y)\right|}{\left|x-y\right|^{\mu}}.$
###### Theorem A (Uniform $\eta$-regularity).
Assume that the conditions (K1)–(K6), (H1)–(H6) and (BD) are satisfied. Then,
there exist positive numbers $\eta$, ${\varepsilon}_{*}$, $M$ and
$\mu\in(0,\,1)$ such that for any ball $B_{r_{0}}(x_{0})\subseteq\Omega$, any
${\varepsilon}\in(0,\,{\varepsilon}_{*}r_{0})$, and any minimiser
$u_{\varepsilon}$ of $E_{\varepsilon}$ in $\mathscr{A}$, there holds
$r_{0}^{-1}F_{\varepsilon}(u_{\varepsilon},\,B_{r_{0}}(x_{0}))\leq\eta\qquad\Longrightarrow\qquad
r_{0}^{\mu}\,[u_{\varepsilon}]_{C^{\mu}(B_{r_{0}/2}(x_{0}))}\leq M.$
As a corollary, we deduce a convergence result for minimisers of
$E_{\varepsilon}$, in the locally uniform topology. We recall that any
minimiser $u_{0}$ for the limit functional (2.9) in $\mathscr{A}$ is smooth in
$\Omega\setminus S[u_{0}]$, where
(2.11) $S[u_{0}]:=\left\\{x\in\Omega\colon\liminf_{\rho\to
0}\rho^{-1}\int_{B_{\rho}(x)}\left|\nabla u_{0}\right|^{2}>0\right\\}\\!.$
Moreover, $S[u_{0}]$ is a closed set of zero total length (see e.g. [21, 33]).
###### Theorem B.
Assume that the conditions (K1)–(K6), (H1)–(H6) and (BD) are satisfied. Let
$u_{\varepsilon}$ be a minimiser of $E_{\varepsilon}$ in $\mathscr{A}$. Then,
up to extraction of a (non-relabelled) subsequence, we have
$u_{\varepsilon}\to u_{0}\qquad\textrm{locally uniformly in }\Omega\setminus
S[u_{0}],$
where $u_{0}$ is a minimiser of the functional (2.9) in $\mathscr{A}$.
The strategy of the proof for Theorem A is inspired by [9]. Under the
assumption $F_{\varepsilon}(u_{\varepsilon},\,B_{1})\leq\eta$, we obtain an
algebraic decay for the mean oscillation of $u_{\varepsilon}$, that is
(2.12)
$\fint_{B_{\rho}}\left|u_{\varepsilon}-\fint_{B_{\rho}}u_{\varepsilon}\right|^{2}\leq
C\rho^{2\mu}$
for any $\rho\in(0,\,1)$ and some positive constants $C$, $\mu$ that do not
depend on $\rho$, ${\varepsilon}$. If the radius $\rho$ is large enough, i.e.
$\rho\geq\lambda_{1}{\varepsilon}$ for some ${\varepsilon}$-independent
constant $\lambda_{1}$, we exploit the decay properties for the limit
functional $E_{0}$ (see e.g. [21, 33]) to obtain an algebraic decay for
$F_{\varepsilon}(u_{\varepsilon},\,B_{\rho})$ as a function of $\rho$; then,
we deduce (2.12) via a suitable Poincaré inequality (Proposition 3.4). On the
other hand, if $\rho\leq\lambda_{1}{\varepsilon}$ we obtain (2.12) from the
Euler-Lagrange equations for $E_{\varepsilon}$ (Proposition 3.1). The
inequality (2.12) immediately implies the desired bound on the Hölder norm of
$u_{\varepsilon}$, by Campanato embedding. Once Theorem A is proven, Theorem B
follows, via the Ascoli-Arzelà theorem.
## 3 Preliminary results
### 3.1 The Euler-Lagrange equations
Throughout the paper, we denote by $C$ several constants that depend only on
$\Omega$, $K$, $m$, $\psi_{s}$ and $u_{\mathrm{bd}}$. We write $A\lesssim B$
as a short-hand for $A\leq CB$. We also define
$g_{\varepsilon}(z):={\varepsilon}^{-3}g({\varepsilon}^{-1}z)$ for
$z\in\mathbb{R}^{3}$ (where, we recall, $g(z)$ is the minimum eigenvalue of
$K(z)$) and
(3.1) $\Lambda:=\nabla\psi_{s}\colon\mathcal{Q}\to\mathbb{R}^{m}.$
###### Proposition 3.1.
Consider the free energy $E_{\varepsilon}$, given by (2.1), with
$u=u_{\mathrm{bd}}$ on $\mathbb{R}^{3}\setminus\Omega$. Then there exists a
minimiser $u_{\varepsilon}\in L^{\infty}(\Omega,\,\mathcal{Q})$ (identified
with its extension by $u_{\mathrm{bd}}$ to $\mathbb{R}^{3}$), and it satisfies
the Euler-Lagrange equation,
(3.2)
$\Lambda(u_{\varepsilon}(x))=\int_{\mathbb{R}^{3}}K_{\varepsilon}(x-y)u_{\varepsilon}(y)\,\mathrm{d}y$
for a.e. $x\in\Omega$.
###### Proof.
By neglecting the additive constant in (2.1), and multiplying by
${\varepsilon}^{2}$, without loss of generality we may consider the functional
$\mathcal{F}(u):=\int_{\Omega}\psi_{s}(u(x))\,\mathrm{d}x-\frac{1}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}K_{\varepsilon}(x-y)u(x)\cdot
u(y)\,\mathrm{d}x\,\mathrm{d}y$
instead of $E_{\varepsilon}$. To show existence, we use a direct method
argument. First we show that the bilinear form admits a global lower bound. As
$u_{\mathrm{bd}}\in L^{2}(\mathbb{R}^{3},\,\mathcal{Q})$ and $u$ admits
uniform $L^{\infty}$-bounds on $\Omega$, we have that $u\in
L^{2}(\mathbb{R}^{3},\,\overline{\mathcal{Q}})$,
$\left\|u\right\|_{L^{2}(\mathbb{R}^{3})}$ is bounded uniformly. We thus have
the estimate that
(3.3)
$\begin{split}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}&|K_{\varepsilon}(x-y)u(x)\cdot
u(y)|\,\mathrm{d}x\,\mathrm{d}y\\\
&\lesssim\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}g_{\varepsilon}(x-y)|u(x)||u(y)|\,\mathrm{d}x\,\mathrm{d}y\\\
&=\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\left(g_{\varepsilon}(x-y)^{\frac{1}{2}}|u(x)|\right)\left(g_{\varepsilon}(x-y)^{\frac{1}{2}}|u(y)|\right)\,\mathrm{d}x\,\mathrm{d}y\\\
&\lesssim\left(\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}g_{\varepsilon}(x-y)|u(x)|^{2}\,\mathrm{d}x\,\mathrm{d}y\right)^{\frac{1}{2}}\left(\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}g_{\varepsilon}(x-y)|u(y)|^{2}\,\mathrm{d}x\,\mathrm{d}y\right)^{\frac{1}{2}}\\\
&=\left\|g_{\varepsilon}\right\|_{L^{1}(\mathbb{R}^{3})}\left\|u\right\|_{L^{2}(\mathbb{R}^{3})}^{2}=\left\|g\right\|_{L^{1}(\mathbb{R}^{3})}\left\|u\right\|_{L^{2}(\mathbb{R}^{3})}^{2}.\end{split}$
The singular function $\psi_{s}$ admits a pointwise lower bound, hence the
functional $\mathcal{F}$ admits a global lower bound. To show that the admissible
set is non-empty, simply take $u(x)=u_{0}\in\mathcal{Q}$ for all $x\in\Omega$,
so that $\psi_{s}(u(x))$ is a finite constant.
The uniform $L^{\infty}$ bounds on $u$ imply that we have $L^{\infty}$ weak-*
compactness of a minimising sequence. As $\psi_{s}$ is strictly convex, we
have weak-* lower semicontinuity of the entropic term. It suffices to show
weak-* lower semicontinuity of the bilinear term. First we split the bilinear
term into the “boundary” and “bulk” contributions. That is, we write
$u=u_{\mathrm{bd}}\,\chi_{\mathbb{R}^{3}\setminus\Omega}+u\,\chi_{\Omega}$,
where $\chi_{\mathbb{R}^{3}\setminus\Omega}$ and $\chi_{\Omega}$ are the
characteristic functions of $\mathbb{R}^{3}\setminus\Omega$ and $\Omega$
respectively. As
$K_{\varepsilon}*(u_{\mathrm{bd}}\,\chi_{\mathbb{R}^{3}\setminus\Omega})\in
L^{1}(\Omega)$, if $u_{j}\overset{*}{\rightharpoonup}u$,
(3.4)
$\int_{\Omega}u_{j}(x)\,K_{\varepsilon}*(u_{\mathrm{bd}}\,\chi_{\mathbb{R}^{3}\setminus\Omega})(x)\,\mathrm{d}x\to\int_{\Omega}u(x)\,K_{\varepsilon}*(u_{\mathrm{bd}}\,\chi_{\mathbb{R}^{3}\setminus\Omega})(x)\,\mathrm{d}x.$
The second term requires a little more care. Following [15, Corollary 4.1],
the map $L^{\infty}(\Omega)\ni u\mapsto K_{\varepsilon}*(u\,\chi_{\Omega})$ is
$L^{\infty}$-to-$L^{1}$ compact if and only if the set
$\left\\{K_{\varepsilon}(x-\cdot)\,\chi_{\Omega}\colon x\in\Omega\right\\}$ is
relatively $L^{1}$-compact. This is immediate, however, as $\Omega$ is a bounded
set and $K_{\varepsilon}$ is integrable. Therefore the map
(3.5) $u\mapsto\int_{\Omega}\int_{\Omega}K_{\varepsilon}(x-y)u(x)\cdot
u(y)\,\mathrm{d}x\,\mathrm{d}y$
is in fact continuous with respect to the weak-* $L^{\infty}$ topology, and hence
so is the entire bilinear term. Therefore the energy functional is
lower semicontinuous and minimisers exist by the direct method.
To show that minimisers satisfy the Euler-Lagrange equation, we note that if
$u$ has finite energy, then the measure of the set
$\\{x\in\Omega:u(x)\in\partial\mathcal{Q}\\}$ is zero. In particular, we may
define $U_{\delta}=\left\\{x\in\Omega:\psi_{s}(u(x))<1/\delta\right\\}$, and
we have that
(3.6) $\Omega=\Gamma\cup\bigcup\limits_{\delta>0}U_{\delta},$
where $\Gamma$ is a null set. By Assumption (H4), for every $\delta>0$, there
exists some $\gamma>0$ so that if $\psi_{s}(\tilde{u})<1/\delta$, then
$\operatorname{dist}(\tilde{u},\,\partial\mathcal{Q})>\gamma$. In particular,
for $\phi\in L^{\infty}(\mathbb{R}^{3},\,\mathbb{R}^{m})$ supported on
$U_{\delta}$ and $\eta$ sufficiently small, $u+\eta\phi$ is bounded away from
$\partial\mathcal{Q}$ on $U_{\delta}$. Therefore we may take variations
without issue, as
$\begin{split}\frac{1}{\eta}\left(\mathcal{F}(u+\eta\phi)-\mathcal{F}(u)\right)&=\int_{U_{\delta}}\frac{1}{\eta}\left(\psi_{s}(u(x)+\eta\phi(x))-\psi_{s}(u(x))\right)\,\mathrm{d}x\\\
&-\frac{1}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}K_{\varepsilon}(x-y)\cdot\left(2\phi(x)\otimes
u(y)+\eta\phi(x)\otimes\phi(y)\right)\,\mathrm{d}x\,\mathrm{d}y.\end{split}$
Now we have no issue taking the limit as $\eta\to 0$, as $\psi_{s}$ is $C^{2}$
away from $\partial\mathcal{Q}$, to give
$\begin{split}\lim\limits_{\eta\to
0}\frac{1}{\eta}\left(\mathcal{F}(u+\eta\phi)-\mathcal{F}(u)\right)=&\int_{U_{\delta}}\Lambda(u(x))\cdot\phi(x)\,\mathrm{d}x-\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}K_{\varepsilon}(x-y)u(y)\,\mathrm{d}y\cdot\phi(x)\,\mathrm{d}x\\\
=&\int_{U_{\delta}}\left(\Lambda(u(x))-\int_{\mathbb{R}^{3}}K_{\varepsilon}(x-y)u(y)\,\mathrm{d}y\right)\cdot\phi(x)\,\mathrm{d}x,\end{split}$
recalling that $\phi(x)=0$ outside of $U_{\delta}$. As $\phi$ was arbitrary,
this implies that $u$ satisfies
(3.7)
$\Lambda(u(x))=\int_{\mathbb{R}^{3}}K_{\varepsilon}(x-y)u(y)\,\mathrm{d}y$
on $U_{\delta}$, and since $\delta$ was arbitrary, this implies that $u$
satisfies the Euler-Lagrange equation outside of $\Gamma$, which is of measure
zero. ∎
The Euler-Lagrange equations are particularly useful when used in combination
with the following property.
###### Lemma 3.2.
The map $\Lambda\colon\mathcal{Q}\to\mathbb{R}^{m}$ is invertible and its
inverse is of class $C^{1}$. Moreover,
(3.8) $\sup_{z\in\mathbb{R}^{m}}\left\|\nabla(\Lambda^{-1})(z)\right\|\leq
c^{-1},$
where $c$ is the constant given by (H3), and
(3.9) $\left|\Lambda(y)\right|\to+\infty\qquad\textrm{as
}\operatorname{dist}(y,\,\partial\mathcal{Q})\to 0.$
###### Proof.
To prove (3.9), it suffices to note that as $\psi_{s}$ is a closed proper
convex function which is $C^{1}$ on an open domain, so by applying classical
results from convex analysis [39, Theorem 25.1, Theorem 26.1], we see that
$\psi_{s}$ satisfies the property of essential smoothness, which implies
(3.9). Moreover, as $\psi_{s}$ is also strictly convex on a bounded domain,
$\psi_{s}$ is a function of Legendre type, which provides the results
that $\Lambda(\mathcal{Q})=\mathbb{R}^{m}$ [39, Corollary 13.3.1], and that
$\Lambda$ is a $C^{0}$ bijection from $\mathcal{Q}$ to $\Lambda(\mathcal{Q})$ [39, Theorem
26.5]. The $C^{1}$ regularity of $\Lambda^{-1}$ follows immediately from the
inverse function theorem, as $\psi_{s}$ is strongly convex. ∎
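To illustrate Lemma 3.2 concretely: for the potential (1.16), $\Lambda^{-1}$ is the moment map $\lambda\mapsto\nabla\ln Z(\lambda)=\int_{\mathcal{M}}\sigma f_{\lambda}\,\mathrm{d}p$ with $f_{\lambda}\propto e^{\lambda\cdot\sigma}$, so the inverse pair can be checked by a numerical round trip. A sketch for the illustrative choice $\mathcal{M}=\mathbb{S}^{1}$, $\sigma(p)=p$ (our example, not the paper's setting):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative setting M = S^1, sigma(p) = p (our example, not the
# paper's): Lambda^{-1} is the moment map lam -> grad ln Z(lam).
theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
dp = 2.0 * np.pi / theta.size
sigma = np.stack([np.cos(theta), np.sin(theta)], axis=1)

def moment(lam):
    # Lambda^{-1}(lam) = ∫ sigma f_lam dp with f_lam ∝ exp(lam . sigma).
    w = np.exp(sigma @ lam)
    w /= w.sum()
    return w @ sigma

lam0 = np.array([0.7, -0.3])
u = moment(lam0)

# Recover lam0 = Lambda(u) as the maximiser of lam . u - ln Z(lam),
# whose critical point is exactly moment(lam) = u.
res = minimize(lambda lam: np.log(np.sum(np.exp(sigma @ lam)) * dp) - lam @ u,
               x0=np.zeros(2))
print(np.abs(res.x - lam0).max())   # small: the round trip recovers lam0
```

Since the dual objective is smooth and strictly convex here, the optimiser converges to the unique critical point, recovering $\lambda_{0}$ up to the solver tolerance.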
The Euler-Lagrange equation (3.2) and Lemma 3.2 have important consequences in
terms of regularity and “strict physicality” of minimisers — that is, the
image of $u_{\varepsilon}$ does not touch the boundary of the physically
admissible set $\mathcal{Q}$.
###### Proposition 3.3.
Minimisers $u_{\varepsilon}$ of the functional $E_{\varepsilon}$ in the class
$\mathscr{A}$ are Lipschitz-continuous on $\Omega$, with $\left\|\nabla
u_{\varepsilon}\right\|_{L^{\infty}(\Omega)}\lesssim{\varepsilon}^{-1}$.
Moreover, there exists a number $\delta>0$ such that for any ${\varepsilon}>0$
and any $x\in\Omega$,
(3.10)
$\operatorname{dist}(u_{\varepsilon}(x),\,\partial\mathcal{Q})\geq\delta.$
###### Proof.
The minimiser $u_{\varepsilon}$ takes values in the bounded set $\mathcal{Q}$
and hence, $\|u_{\varepsilon}\|_{L^{\infty}(\mathbb{R}^{3})}\leq C$, where the
constant $C$ does not depend on ${\varepsilon}$. Moreover,
$\|K_{\varepsilon}\|_{L^{1}(\mathbb{R}^{3})}=\|K\|_{L^{1}(\mathbb{R}^{3})}<+\infty$.
Then, by applying Young’s inequality to (3.2), we obtain
$\left\|\Lambda(u_{\varepsilon})\right\|_{L^{\infty}(\Omega)}\leq\|K_{\varepsilon}\|_{L^{1}(\mathbb{R}^{3})}\|u_{\varepsilon}\|_{L^{\infty}(\mathbb{R}^{3})}\leq
C.$
On the other hand, we have $\left|\Lambda(z)\right|\to+\infty$ as
$z\to\partial\mathcal{Q}$ by (3.9) and hence, (3.10) follows. Since we have
assumed that $K\in W^{1,1}(\mathbb{R}^{3},\,\text{Sym}_{0}(m))$, from the
Euler-Lagrange equation (3.2) we deduce
$\left\|\nabla(\Lambda\circ
u_{\varepsilon})\right\|_{L^{\infty}(\Omega)}=\left\|\nabla
K_{\varepsilon}*u_{\varepsilon}\right\|_{L^{\infty}(\Omega)}\leq{\varepsilon}^{-1}\left\|\nabla
K\right\|_{L^{1}(\mathbb{R}^{3})}\left\|u_{\varepsilon}\right\|_{L^{\infty}(\mathbb{R}^{3})}<+\infty.$
By Lemma 3.2, we conclude that $\left\|\nabla
u_{\varepsilon}\right\|_{L^{\infty}(\Omega)}\lesssim{\varepsilon}^{-1}$. ∎
### 3.2 A Poincaré-type inequality for $F_{\varepsilon}$
The goal of this section is to prove the following inequality on
$F_{\varepsilon}$. We recall that the functional $F_{\varepsilon}$ is defined
in (2.10).
###### Proposition 3.4.
There exists ${\varepsilon}_{1}>0$ such that, for any $u\in
L^{\infty}(\mathbb{R}^{3},\,\mathbb{R}^{m})$, any $\rho>0$, any
$x_{0}\in\mathbb{R}^{3}$ and any
${\varepsilon}\in(0,\,{\varepsilon}_{1}\rho]$, there holds
$\fint_{B_{\rho/2}(x_{0})}\left|u-\fint_{B_{\rho/2}(x_{0})}u\right|^{2}\lesssim\rho^{-1}F_{\varepsilon}(u,\,B_{\rho}(x_{0})).$
To simplify the proof of Proposition 3.4, we will take advantage of the
scaling properties of $F_{\varepsilon}$: if $u_{\rho}\colon
B_{1}\to\mathbb{R}^{m}$ is defined by $u_{\rho}(x):=u(\rho x+x_{0})$ for $x\in
B_{1}$, then a change of variables gives
(3.11)
$\rho^{-1}F_{\varepsilon}(u,\,B_{\rho}(x_{0}))=F_{{\varepsilon}/\rho}(u_{\rho},\,B_{1}).$
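Indeed, substituting $x=\rho x^{\prime}+x_{0}$, $y=\rho y^{\prime}+x_{0}$ and using $K_{\varepsilon}(\rho z)=\rho^{-3}K_{{\varepsilon}/\rho}(z)$, the non-local term satisfies
$\frac{1}{4{\varepsilon}^{2}}\int_{B_{\rho}(x_{0})\times
B_{\rho}(x_{0})}K_{\varepsilon}(x-y)\cdot\left(u(x)-u(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y=\frac{\rho}{4({\varepsilon}/\rho)^{2}}\int_{B_{1}\times
B_{1}}K_{{\varepsilon}/\rho}(x^{\prime}-y^{\prime})\cdot\left(u_{\rho}(x^{\prime})-u_{\rho}(y^{\prime})\right)^{\otimes
2}\,\mathrm{d}x^{\prime}\,\mathrm{d}y^{\prime},$
and the bulk term gains the same factor,
$\frac{1}{{\varepsilon}^{2}}\int_{B_{\rho}(x_{0})}\psi_{b}(u)\,\mathrm{d}x=\frac{\rho}{({\varepsilon}/\rho)^{2}}\int_{B_{1}}\psi_{b}(u_{\rho})\,\mathrm{d}x^{\prime}$,
so (3.11) follows upon dividing by $\rho$.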
In the proof of Proposition 3.4, we will adapt arguments from [42]. By
assumption (K3), there exist positive numbers $\rho_{1}<\rho_{2}$, $k$ such
that $g\geq k$ a.e. on $B_{\rho_{2}}\setminus B_{\rho_{1}}$. Let $\varphi\in
C^{\infty}_{\mathrm{c}}(B_{\rho_{2}}\setminus B_{\rho_{1}})$ be a non-
negative, radial function (i.e. $\varphi(z)=\tilde{\varphi}(\left|z\right|)$
for $z\in\mathbb{R}^{3}$) such that
$\int_{\mathbb{R}^{3}}\varphi(z)\,\mathrm{d}z=1$. Since $g$ is bounded away
from zero on the support of $\varphi$, there holds
$\varphi+\left|\nabla\varphi\right|\leq Cg\qquad\textrm{pointwise a.e. on }\mathbb{R}^{3},$
for some constant $C$ that depends on $g$ and $\varphi$; however, $\varphi$ is
fixed once and for all, and so is $C$. We define
$\varphi_{\varepsilon}(z):={\varepsilon}^{-3}\varphi({\varepsilon}^{-1}z)$ for
any $z\in\mathbb{R}^{3}$ and ${\varepsilon}>0$. Then,
$\varphi_{\varepsilon}\in C^{\infty}_{\mathrm{c}}(\mathbb{R}^{3})$ is non-
negative, even, satisfies
$\int_{\mathbb{R}^{3}}\varphi_{\varepsilon}(z)\,\mathrm{d}z=1$ and
(3.12)
$\varphi_{\varepsilon}+{\varepsilon}\left|\nabla\varphi_{\varepsilon}\right|\leq
Cg_{\varepsilon}\qquad\textrm{pointwise a.e. on }\mathbb{R}^{3}.$
###### Lemma 3.5.
There exists ${\varepsilon}_{2}>0$ such that, for any $u\in
L^{\infty}(B_{1},\,\mathbb{R}^{m})$ and any
${\varepsilon}\in(0,\,{\varepsilon}_{2}]$, there holds
$\int_{B_{1/2}}\left|\nabla(\varphi_{\varepsilon}*u)\right|^{2}\lesssim{\varepsilon}^{-2}\int_{B_{1}\times
B_{1}}K_{\varepsilon}(x-y)\cdot\left(u(x)-u(y)\right)^{\otimes
2}\mathrm{d}x\,\mathrm{d}y.$
###### Proof.
We adapt the arguments from [42, Lemma 2.1 and Proposition 2.1]. We define
$I(y,\,z):=\int_{B_{1/2}}\nabla\varphi_{\varepsilon}(x-y)\cdot\nabla\varphi_{\varepsilon}(x-z)\,\mathrm{d}x\qquad\textrm{for
}y,\,z\in\mathbb{R}^{3}.$
We express the gradient of $\varphi_{\varepsilon}*u$ as
$\nabla(\varphi_{\varepsilon}*u)=(\nabla\varphi_{\varepsilon})*u$. By applying
the identity $2a\cdot
b=-\left|a-b\right|^{2}+\left|a\right|^{2}+\left|b\right|^{2}$, we obtain
$\begin{split}\int_{B_{1/2}}\left|\nabla(\varphi_{\varepsilon}*u)(x)\right|^{2}\mathrm{d}x&=\int_{\mathbb{R}^{3}\times\mathbb{R}^{3}}u(y)\cdot
u(z)\,I(y,\,z)\,\mathrm{d}y\,\mathrm{d}z\\\
&=\underbrace{-\frac{1}{2}\int_{\mathbb{R}^{3}\times\mathbb{R}^{3}}\left|u(y)-u(z)\right|^{2}I(y,\,z)\,\mathrm{d}y\,\mathrm{d}z}_{=:I_{1}}\\\
&+\underbrace{\frac{1}{2}\int_{\mathbb{R}^{3}\times\mathbb{R}^{3}}\left|u(y)\right|^{2}I(y,\,z)\,\mathrm{d}y\,\mathrm{d}z}_{=:I_{2}}+\underbrace{\frac{1}{2}\int_{\mathbb{R}^{3}\times\mathbb{R}^{3}}\left|u(z)\right|^{2}I(y,\,z)\,\mathrm{d}y\,\mathrm{d}z}_{=:I_{3}}.\end{split}$
We first consider the term $I_{2}$. Since $\varphi_{\varepsilon}$ is compactly
supported, we have
$\int_{\mathbb{R}^{3}}\nabla\varphi_{\varepsilon}(z)\,\mathrm{d}z=0$.
Therefore,
$I_{2}=\frac{1}{2}\int_{B_{1/2}\times\mathbb{R}^{3}}\left|u(y)\right|^{2}\nabla\varphi_{\varepsilon}(x-y)\cdot\left(\int_{\mathbb{R}^{3}}\nabla\varphi_{\varepsilon}(x-z)\,\mathrm{d}z\right)\mathrm{d}x\,\mathrm{d}y=0,$
and likewise $I_{3}=0$. Now, we consider $I_{1}$. The gradient
$\nabla\varphi_{\varepsilon}$ is supported in a ball of radius
$C{\varepsilon}$, where $C$ is an ${\varepsilon}$-independent constant. This
implies
$\begin{split}I_{1}&\leq\frac{1}{2}\int_{B_{1/2+C{\varepsilon}}\times
B_{1/2+C{\varepsilon}}}\left|u(y)-u(z)\right|^{2}\left|\int_{B_{1/2}}\nabla\varphi_{\varepsilon}(x-y)\cdot\nabla\varphi_{\varepsilon}(x-z)\,\mathrm{d}x\right|\,\mathrm{d}y\,\mathrm{d}z\\\
&\leq\int_{B_{1/2}\times B_{1/2+C{\varepsilon}}\times
B_{1/2+C{\varepsilon}}}\left|u(y)-u(x)\right|^{2}\left|\nabla\varphi_{\varepsilon}(x-y)\right|\left|\nabla\varphi_{\varepsilon}(x-z)\right|\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z\\\
&\qquad+\int_{B_{1/2}\times B_{1/2+C{\varepsilon}}\times
B_{1/2+C{\varepsilon}}}\left|u(x)-u(z)\right|^{2}\left|\nabla\varphi_{\varepsilon}(x-y)\right|\left|\nabla\varphi_{\varepsilon}(x-z)\right|\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z\\\
&\leq
2\left\|\nabla\varphi_{\varepsilon}\right\|_{L^{1}(\mathbb{R}^{3})}\int_{B_{1/2}\times
B_{1/2+C{\varepsilon}}}\left|u(y)-u(x)\right|^{2}\left|\nabla\varphi_{\varepsilon}(y-x)\right|\mathrm{d}x\,\mathrm{d}y.\end{split}$
Thanks to (3.12), we obtain
$\begin{split}I_{1}\lesssim{\varepsilon}^{-2}\left\|g\right\|_{L^{1}(\mathbb{R}^{3})}\int_{B_{1/2}\times
B_{1/2+C{\varepsilon}}}\left|u(y)-u(x)\right|^{2}g_{\varepsilon}(y-x)\,\mathrm{d}x\,\mathrm{d}y.\end{split}$
For ${\varepsilon}$ sufficiently small we have $1/2+C{\varepsilon}<1$, and the
lemma follows. ∎
Given two sets $A\subseteq\mathbb{R}^{3}$,
$A^{\prime}\subseteq\mathbb{R}^{3}$, we write $A\subset\\!\subset A^{\prime}$
when the _closure_ of $A$ is contained in $A^{\prime}$.
###### Lemma 3.6.
Let $A$, $A^{\prime}$ be open sets such that $A\subset\\!\subset
A^{\prime}\subseteq\mathbb{R}^{3}$. Then, there exists
${\varepsilon}_{3}={\varepsilon}_{3}(A,\,A^{\prime})$ such that, for any $u\in
L^{\infty}(A^{\prime},\,\mathbb{R}^{m})$ and any
${\varepsilon}\in(0,\,{\varepsilon}_{3}]$, there holds
$\int_{A}\left|u-\varphi_{\varepsilon}*u\right|^{2}\lesssim\int_{A^{\prime}\times
A^{\prime}}K_{\varepsilon}(x-y)\cdot\left(u(x)-u(y)\right)^{\otimes
2}\mathrm{d}x\,\mathrm{d}y.$
###### Proof.
Since $\int_{\mathbb{R}^{3}}\varphi_{\varepsilon}(z)\,\mathrm{d}z=1$, we have
$I:=\int_{A}\left|u(x)-(\varphi_{\varepsilon}*u)(x)\right|^{2}\mathrm{d}x=\int_{A}\left|\int_{\mathbb{R}^{3}}\varphi_{\varepsilon}(x-y)\left(u(x)-u(y)\right)\mathrm{d}y\right|^{2}\mathrm{d}x.$
We apply Jensen's inequality with respect to the probability measure
$\varphi_{\varepsilon}(x-y)\,\mathrm{d}y$:
$I\leq\int_{A}\left(\int_{\mathbb{R}^{3}}\varphi_{\varepsilon}(x-y)\left|u(x)-u(y)\right|^{2}\mathrm{d}y\right)\mathrm{d}x.$
Because the support of $\varphi_{\varepsilon}$ is contained in a ball of
radius $C{\varepsilon}$, where $C$ is an ${\varepsilon}$-independent constant,
the integrand vanishes whenever $x\in A$ and
$\operatorname{dist}(y,\,A)>C{\varepsilon}$. By applying (3.12), we obtain
$I\leq\int_{A\times\\{y\in\mathbb{R}^{3}\colon\operatorname{dist}(y,\,A)\leq
C{\varepsilon}\\}}g_{\varepsilon}(x-y)\left|u(x)-u(y)\right|^{2}\mathrm{d}x\,\mathrm{d}y$
and, if ${\varepsilon}\leq C^{-1}\operatorname{dist}(A,\,\partial
A^{\prime})$, the lemma follows. ∎
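As a purely illustrative sanity check (not part of the argument), the inequality of Lemma 3.6 can be observed numerically in a one-dimensional toy analogue; the tent mollifier, the intervals $(-1/2,\,1/2)$ and $(-1,\,1)$, and the test map below are hypothetical stand-ins for $\varphi_{\varepsilon}$, $A$, $A^{\prime}$ and $u$:

```python
import numpy as np

# 1D toy check of Lemma 3.6: int_A |u - phi_eps * u|^2 should be controlled
# by the non-local double integral int_{A' x A'} phi_eps(x-y) |u(x)-u(y)|^2.
# All concrete choices below (kernel, intervals, test map) are illustrative.
eps, h = 0.05, 1e-3
x = np.arange(-1.0, 1.0, h)          # A' = (-1, 1)
u = np.sin(3.0 * x)                  # a bounded, smooth test map

z = np.arange(-eps, eps + h, h)      # tent mollifier supported in [-eps, eps]
phi = np.maximum(0.0, 1.0 - np.abs(z) / eps)
phi /= np.sum(phi) * h               # normalise: int phi = 1

u_moll = np.convolve(u, phi, mode="same") * h   # phi_eps * u on the grid

A = np.abs(x) < 0.5                  # A = (-1/2, 1/2), compactly inside A'
lhs = np.sum((u - u_moll)[A] ** 2) * h

# double integral over A' x A'; phi_eps(x - y) vanishes for |x - y| > eps
K = np.interp(x[None, :] - x[:, None], z, phi, left=0.0, right=0.0)
rhs = np.sum(K * (u[None, :] - u[:, None]) ** 2) * h * h
print(lhs <= rhs)
```

For smooth $u$ the mollification defect on the left is of higher order in ${\varepsilon}$ than the non-local energy on the right, so the inequality holds with room to spare.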
###### Proof of Proposition 3.4.
Due to the scaling property (3.11), it suffices to prove that
(3.13) $\fint_{B_{1/2}}\left|u-\fint_{B_{1/2}}u\right|^{2}\lesssim
F_{{\varepsilon}/\rho}(u,\,B_{1})$
for any $u\in L^{\infty}(\mathbb{R}^{3},\,\mathbb{R}^{m})$ and any
${\varepsilon}$, $\rho$ with ${\varepsilon}/\rho$ sufficiently small. The
triangle inequality and the elementary inequality $(a+b+c)^{2}\leq
3(a^{2}+b^{2}+c^{2})$ imply
$\fint_{B_{1/2}}\left|u-\fint_{B_{1/2}}u\right|^{2}\leq
6\fint_{B_{1/2}}\left|u-\varphi_{{\varepsilon}/\rho}*u\right|^{2}+3\fint_{B_{1/2}}\left|\varphi_{{\varepsilon}/\rho}*u-\fint_{B_{1/2}}\varphi_{{\varepsilon}/\rho}*u\right|^{2}$
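For clarity, the splitting behind the constants $6$ and $3$ is
$u-\fint_{B_{1/2}}u=\left(u-\varphi_{{\varepsilon}/\rho}*u\right)+\left(\varphi_{{\varepsilon}/\rho}*u-\fint_{B_{1/2}}\varphi_{{\varepsilon}/\rho}*u\right)+\fint_{B_{1/2}}\left(\varphi_{{\varepsilon}/\rho}*u-u\right),$
and Jensen's inequality bounds the square of the last (constant) term by
$\fint_{B_{1/2}}\left|\varphi_{{\varepsilon}/\rho}*u-u\right|^{2}$, so the
first and third contributions are controlled by the same average, producing
the coefficients $3+3=6$ and $3$.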
Thanks to the Poincaré inequality, we obtain
$\fint_{B_{1/2}}\left|u-\fint_{B_{1/2}}u\right|^{2}\lesssim\int_{B_{1/2}}\left|u-\varphi_{{\varepsilon}/\rho}*u\right|^{2}+\int_{B_{1/2}}\left|\nabla(\varphi_{{\varepsilon}/\rho}*u)\right|^{2}.$
If ${\varepsilon}/\rho$ is sufficiently small, Lemma 3.5 and Lemma 3.6 give
$\fint_{B_{1/2}}\left|u-\fint_{B_{1/2}}u\right|^{2}\lesssim\left(({\varepsilon}/\rho)^{2}+1\right)F_{{\varepsilon}/\rho}(u,\,B_{1}),$
so (3.13) follows. ∎
### 3.3 Localised $\Gamma$-convergence for the non-local term
The $\Gamma$-convergence of the functional $E_{\varepsilon}$, as
${\varepsilon}\to 0$, was studied in [42]. In this section, we adapt the
arguments of [42] to prove a localised $\Gamma$-convergence result. We focus
on the interaction part of the free energy only, since this is all we need in
the proof of Theorem A.
###### Proposition 3.7.
Let $\rho>0$, and let $v_{\varepsilon}\in
L^{\infty}(B_{\rho}(x_{0}),\,\mathbb{R}^{m})$ be a sequence of maps such that
$\sup_{{\varepsilon}>0}\left(\frac{1}{{\varepsilon}^{2}}\int_{B_{\rho}(x_{0})\times
B_{\rho}(x_{0})}K_{\varepsilon}(x-y)\cdot\left(v_{\varepsilon}(x)-v_{\varepsilon}(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y+\left\|v_{\varepsilon}\right\|_{L^{\infty}(B_{\rho}(x_{0}))}\right)<+\infty.$
Then, there exist a map $v_{0}\in(L^{\infty}\cap
H^{1})(B_{\rho/2}(x_{0}),\,\mathbb{R}^{m})$ and a (non-relabelled) subsequence
such that $v_{\varepsilon}\to v_{0}$ strongly in $L^{2}(B_{\rho/2}(x_{0}))$ as
${\varepsilon}\to 0$. Moreover, for any open set $G\subseteq
B_{\rho/2}(x_{0})$ we have
(3.14) $\int_{G}L\nabla v_{0}(x)\cdot\nabla
v_{0}(x)\,\mathrm{d}x\leq\liminf_{{\varepsilon}\to
0}\frac{1}{4{\varepsilon}^{2}}\int_{G\times
G}K_{\varepsilon}(x-y)\cdot\left(v_{\varepsilon}(x)-v_{\varepsilon}(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y.$
###### Proposition 3.8.
Let $\rho>0$. Let $v_{\varepsilon}\in(L^{\infty}\cap
H^{1})(B_{\rho}(x_{0}),\,\mathbb{R}^{m})$ be a sequence such that
$v_{\varepsilon}\to v_{0}$ strongly in $H^{1}(B_{\rho}(x_{0}))$ and
$\sup_{\varepsilon}\left\|v_{\varepsilon}\right\|_{L^{\infty}(B_{\rho}(x_{0}))}<+\infty$.
Then
$\limsup\limits_{{\varepsilon}\to
0}\frac{1}{4{\varepsilon}^{2}}\int_{B_{\rho}(x_{0})\times
B_{\rho}(x_{0})}K_{\varepsilon}(x-y)\left(v_{\varepsilon}(x)-v_{\varepsilon}(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y\leq\int_{B_{\rho}(x_{0})}L\nabla
v_{0}(x)\cdot\nabla v_{0}(x)\,\mathrm{d}x.$
In the proofs of Propositions 3.7 and 3.8, we will use the following notation.
Given a vector $w\in\mathbb{R}^{3}\setminus\\{0\\}$ and a function $u$ defined
on a subset of $\mathbb{R}^{3}$, we define the difference quotient
$D_{w}u(x):=\frac{u(x+w)-u(x)}{\left|w\right|}$
for any $x$ in the domain of $u$ such that $x+w$ belongs to the domain of $u$.
If $\left|w\right|\leq h$, $u\in H^{1}(B_{\rho+h})$, and $|\cdot|_{*}$ is any
seminorm on $\mathbb{R}^{m}$, then
(3.15)
$\int_{B_{\rho}}|D_{{\varepsilon}w}u(x)|_{*}^{2}\,\mathrm{d}x\leq\int_{B_{\rho+{\varepsilon}h}}|(\hat{w}\cdot\nabla)u(x)|_{*}^{2}\,\mathrm{d}x$
where $\hat{w}:=w/\left|w\right|$. This follows by the same technique as,
e.g., [18, Lemma 7.23], which treats the case of the standard Euclidean norm;
the proof, however, relies only on the convexity of the seminorm and on no
further structure. For convenience, we give the proof of Proposition 3.8
first.
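A minimal sketch of the argument behind (3.15), for smooth $u$ (the general case follows by approximation): the fundamental theorem of calculus gives
$D_{{\varepsilon}w}u(x)=\frac{u(x+{\varepsilon}w)-u(x)}{{\varepsilon}\left|w\right|}=\int_{0}^{1}(\hat{w}\cdot\nabla)u(x+t{\varepsilon}w)\,\mathrm{d}t,$
and Jensen's inequality, applied to the convex function $|\cdot|_{*}^{2}$ and the probability measure $\mathrm{d}t$, yields
$|D_{{\varepsilon}w}u(x)|_{*}^{2}\leq\int_{0}^{1}|(\hat{w}\cdot\nabla)u(x+t{\varepsilon}w)|_{*}^{2}\,\mathrm{d}t.$
Integrating over $x\in B_{\rho}$ and using Fubini's theorem, each translate
$x+t{\varepsilon}w$ remains in $B_{\rho+{\varepsilon}h}$ because
$\left|w\right|\leq h$, which gives (3.15).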
###### Proof of Proposition 3.8.
We assume that $x_{0}=0$. Using a reflection across the boundary of $B_{\rho}$
and a cut-off function, we define $v_{\varepsilon}$ and $v_{0}$ on
$\mathbb{R}^{3}\setminus B_{\rho}$, in such a way that
$v_{\varepsilon}\in(L^{\infty}\cap H^{1})(\mathbb{R}^{3},\,\mathbb{R}^{m})$,
$v_{\varepsilon}\to v_{0}$ strongly in $H^{1}(\mathbb{R}^{3})$ and
$\sup_{\varepsilon}\left\|v_{\varepsilon}\right\|_{L^{\infty}(\mathbb{R}^{3})}<+\infty$.
Let $t>0$ be a parameter. We have
$\begin{split}&\frac{1}{4{\varepsilon}^{2}}\int_{B_{\rho}}\int_{B_{\rho}}K_{\varepsilon}(x-y)\cdot\left(v_{\varepsilon}(x)-v_{\varepsilon}(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y\\\
&\leq\frac{1}{4}\int_{B_{\rho}}\int_{\mathbb{R}^{3}}|z|^{2}K(z)\cdot\left(D_{{\varepsilon}z}v_{\varepsilon}(x)\right)^{\otimes
2}\,\mathrm{d}z\,\mathrm{d}x\\\
&=\frac{1}{4}\int_{B_{\rho}}\int_{B_{\frac{t}{{\varepsilon}}}}|z|^{2}K(z)\cdot\left(D_{{\varepsilon}z}v_{\varepsilon}(x)\right)^{\otimes
2}\,\mathrm{d}z\,\mathrm{d}x+\frac{1}{4}\int_{B_{\rho}}\int_{\mathbb{R}^{3}\setminus
B_{\frac{t}{{\varepsilon}}}}|z|^{2}K(z)\cdot\left(D_{{\varepsilon}z}v_{\varepsilon}(x)\right)^{\otimes
2}\,\mathrm{d}z\,\mathrm{d}x.\end{split}$
To estimate the first integral at the right-hand side, we exchange the order
of integration and, for any $z$, we apply (3.15) to the seminorm
$\left|\xi\right|_{*}^{2}:=\left|z\right|^{2}K(z)\cdot\xi^{\otimes 2}$; for
the second integral, we apply (K5):
(3.16)
$\begin{split}&\frac{1}{4{\varepsilon}^{2}}\int_{B_{\rho}}\int_{B_{\rho}}K_{\varepsilon}(x-y)\cdot\left(v_{\varepsilon}(x)-v_{\varepsilon}(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y\\\
&\leq\frac{1}{4}\int_{\mathbb{R}^{3}}\int_{B_{\rho+t}}K(z)\cdot\left((z\cdot\nabla)v_{\varepsilon}(x)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}z+C\int_{B_{\rho}}\int_{\mathbb{R}^{3}\setminus
B_{\frac{t}{{\varepsilon}}}}g(z)|z|^{2}|D_{{\varepsilon}z}v_{\varepsilon}(x)|^{2}\,\mathrm{d}z\,\mathrm{d}x\\\
&\stackrel{{\scriptstyle\eqref{L}}}{{=}}\int_{B_{\rho+t}}L\nabla
v_{\varepsilon}(x)\cdot\nabla
v_{\varepsilon}(x)\,\mathrm{d}x+C\int_{B_{\rho}}\int_{\mathbb{R}^{3}\setminus
B_{\frac{t}{{\varepsilon}}}}g(z)|z|^{2}|D_{{\varepsilon}z}v_{\varepsilon}(x)|^{2}\,\mathrm{d}z\,\mathrm{d}x.\end{split}$
We now estimate the latter summand independently. For
$z\in\mathbb{R}^{3}\setminus B_{\frac{t}{{\varepsilon}}}$ we have
$|{\varepsilon}z|^{2}>t^{2}$, so
$|D_{{\varepsilon}z}v_{\varepsilon}(x)|^{2}\leq\frac{4}{t^{2}}\left\|v_{\varepsilon}\right\|_{L^{\infty}(\mathbb{R}^{3})}^{2}\\!.$
Therefore we may estimate
(3.17) $\begin{split}\int_{B_{\rho}}\int_{\mathbb{R}^{3}\setminus
B_{\frac{t}{{\varepsilon}}}}g(z)|z|^{2}|D_{{\varepsilon}z}v_{\varepsilon}(x)|^{2}\,\mathrm{d}z\,\mathrm{d}x&\leq\int_{B_{\rho}}\int_{\mathbb{R}^{3}\setminus
B_{\frac{t}{{\varepsilon}}}}\frac{4}{t^{2}}g(z)|z|^{2}\left\|v_{\varepsilon}\right\|_{L^{\infty}(\mathbb{R}^{3})}^{2}\,\mathrm{d}z\,\mathrm{d}x\\\
&=\frac{4\left|B_{\rho}\right|\left\|v_{\varepsilon}\right\|_{L^{\infty}(\mathbb{R}^{3})}^{2}}{t^{2}}\int_{\mathbb{R}^{3}\setminus
B_{\frac{t}{{\varepsilon}}}}g(z)|z|^{2}\,\mathrm{d}z\end{split}$
As $g$ has finite second moment and
$\sup_{\varepsilon}\left\|v_{\varepsilon}\right\|_{L^{\infty}(\mathbb{R}^{3})}<+\infty$,
for fixed $t$ we have
(3.18) $\lim\limits_{{\varepsilon}\to
0}\frac{4\left|B_{\rho}\right|\left\|v_{\varepsilon}\right\|_{L^{\infty}(\mathbb{R}^{3})}^{2}}{t^{2}}\int_{\mathbb{R}^{3}\setminus
B_{\frac{t}{{\varepsilon}}}}g(z)|z|^{2}\,\mathrm{d}z=0.$
Combining (3.16), (3.17) and (3.18) gives
$\limsup\limits_{{\varepsilon}\to
0}\frac{1}{4{\varepsilon}^{2}}\int_{B_{\rho}}\int_{B_{\rho}}K_{\varepsilon}(x-y)\cdot\left(v_{\varepsilon}(x)-v_{\varepsilon}(y)\right)^{\otimes
2}\,\mathrm{d}y\,\mathrm{d}x\leq\limsup\limits_{{\varepsilon}\to
0}\int_{B_{\rho+t}}L\nabla v_{\varepsilon}(x)\cdot\nabla
v_{\varepsilon}(x)\,\mathrm{d}x.$
As $v_{\varepsilon}\to v_{0}$ in $H^{1}(\mathbb{R}^{3})$, this implies
$\lim\limits_{{\varepsilon}\to 0}\int_{B_{\rho+t}}L\nabla
v_{\varepsilon}(x)\cdot\nabla
v_{\varepsilon}(x)\,\mathrm{d}x=\int_{B_{\rho+t}}L\nabla v_{0}(x)\cdot\nabla
v_{0}(x)\,\mathrm{d}x.$
Therefore we have
$\limsup\limits_{{\varepsilon}\to
0}\frac{1}{4{\varepsilon}^{2}}\int_{B_{\rho}}\int_{B_{\rho}}K_{\varepsilon}(x-y)\cdot\left(v_{\varepsilon}(x)-v_{\varepsilon}(y)\right)^{\otimes
2}\,\mathrm{d}y\,\mathrm{d}x\leq\int_{B_{\rho+t}}L\nabla v_{0}(x)\cdot\nabla
v_{0}(x)\,\mathrm{d}x,$
and passing to the limit as $t\to 0$ in the right-hand side gives the desired
result. ∎
###### Proof of Proposition 3.7.
Again, we assume that $x_{0}=0$. Let $(\varphi_{\varepsilon})$ be the sequence
of mollifiers given by (3.12). By Lemma 3.5, the sequence
$\varphi_{\varepsilon}*v_{\varepsilon}$ is bounded in $H^{1}(B_{\rho/2})$ and
hence, up to extraction of a (non-relabelled) subsequence,
$\varphi_{\varepsilon}*v_{\varepsilon}\rightharpoonup v_{0}$ weakly in
$H^{1}(B_{\rho/2})$ and strongly in $L^{2}(B_{\rho/2})$. By Lemma 3.6,
$\varphi_{\varepsilon}*v_{\varepsilon}-v_{\varepsilon}\to 0$ strongly in
$L^{2}(B_{\rho/2})$. Therefore, $v_{\varepsilon}\to v_{0}$ strongly in
$L^{2}(B_{\rho/2})$. Let $G\subseteq B_{\rho/2}$ be open and
$G^{\prime}\subset\\!\subset G$. Then we may write that
(3.19)
$\begin{split}\frac{1}{4{\varepsilon}^{2}}\int_{G}\int_{G}&K_{\varepsilon}(x-y)\cdot\left(v_{\varepsilon}(x)-v_{\varepsilon}(y)\right)^{\otimes
2}\,\mathrm{d}y\,\mathrm{d}x\\\
&\geq\frac{1}{4{\varepsilon}^{2}}\int_{G^{\prime}}\int_{G}K_{\varepsilon}(x-y)\cdot\left(v_{\varepsilon}(x)-v_{\varepsilon}(y)\right)^{\otimes
2}\,\mathrm{d}y\,\mathrm{d}x\\\
&=\frac{1}{4}\int_{G^{\prime}}\int_{\frac{G-x}{{\varepsilon}}}|z|^{2}K(z)\cdot\left(D_{{\varepsilon}z}v_{\varepsilon}(x)\right)^{\otimes
2}\,\mathrm{d}z\,\mathrm{d}x.\end{split}$
Let $G^{c}:=\mathbb{R}^{3}\setminus G$ and
$\delta:=\operatorname{dist}(G^{\prime},\,G^{c})>0$. We note that
$\begin{split}\left|\int_{G^{\prime}}\int_{\left(\frac{G-x}{{\varepsilon}}\right)^{c}}|z|^{2}K(z)\cdot\left(D_{{\varepsilon}z}v_{\varepsilon}(x)\right)^{\otimes
2}\,\mathrm{d}z\,\mathrm{d}x\right|\lesssim\int_{G^{\prime}}\int_{B_{\frac{\delta}{{\varepsilon}}}^{c}}g(z)|z|^{2}|D_{{\varepsilon}z}v_{\varepsilon}(x)|^{2}\,\mathrm{d}z\,\mathrm{d}x,\end{split}$
which, by the previous estimates (see (3.17), (3.18)), converges to zero as
${\varepsilon}\to 0$. This means
(3.20) $\begin{split}\liminf\limits_{{\varepsilon}\to
0}&\int_{G^{\prime}}\int_{\frac{G-x}{{\varepsilon}}}|z|^{2}K(z)\cdot\left(D_{{\varepsilon}z}v_{\varepsilon}(x)\right)^{\otimes
2}\,\mathrm{d}z\,\mathrm{d}x\\\ &=\liminf\limits_{{\varepsilon}\to
0}\int_{G^{\prime}}\int_{\mathbb{R}^{3}}|z|^{2}K(z)\cdot\left(D_{{\varepsilon}z}v_{\varepsilon}(x)\right)^{\otimes
2}\,\mathrm{d}z\,\mathrm{d}x\\\ \end{split}$
Furthermore, we note that this quantity can be written as an $L^{2}$ norm, by
defining $w_{\varepsilon}\colon
G^{\prime}\times\mathbb{R}^{3}\to\mathbb{R}^{m}$ as
$w_{\varepsilon}(x,\,z):=|z|K^{\frac{1}{2}}(z)D_{{\varepsilon}z}v_{\varepsilon}(x)$.
The sequence $w_{\varepsilon}$ is bounded in $L^{2}$, so it admits a weakly
$L^{2}$-converging subsequence $w_{j}:=w_{{\varepsilon}_{j}}$, with
${\varepsilon}_{j}\to 0$, whose weak $L^{2}$-limit we denote by $w_{0}$.
Moreover, by weak lower semicontinuity of the $L^{2}$ norm,
(3.21) $\liminf\limits_{{\varepsilon}\to
0}\int_{G^{\prime}}\int_{\mathbb{R}^{3}}|z|^{2}K(z)\cdot\left(D_{{\varepsilon}z}v_{\varepsilon}(x)\right)^{\otimes
2}\,\mathrm{d}z\,\mathrm{d}x=\liminf\limits_{j\to\infty}\left\|w_{j}\right\|^{2}_{L^{2}(G^{\prime}\times\mathbb{R}^{3})}\geq\left\|w_{0}\right\|^{2}_{L^{2}(G^{\prime}\times\mathbb{R}^{3})}\\!.$
It remains to identify the limit $w_{0}$. We may do this by integrating
against test functions. Let $\phi\in
C^{\infty}_{\mathrm{c}}(G^{\prime}\times\mathbb{R}^{3})$. There exists some
$R_{0}>0$ such that, for any $(y,\,z)\in\mathbb{R}^{3}\times\mathbb{R}^{3}$
with $|z|>R_{0}$, $\phi(y,\,z)=0$. Furthermore, there exists some $\delta>0$
so that if $\operatorname{dist}(y,\,(G^{\prime})^{c})<\delta$, then
$\phi(y,\,z)=0$. In particular, if ${{\varepsilon}_{j}}<\frac{\delta}{R_{0}}$
and $(x-{\varepsilon}_{j}z,\,z)\in\mathrm{supp}(\phi)$, then $x\in
G^{\prime}$. Therefore
$\begin{split}\langle
w_{j},\,\phi\rangle&=\int_{G^{\prime}}\int_{\mathbb{R}^{3}}\phi(x,\,z)|z|K^{\frac{1}{2}}(z)D_{{{\varepsilon}_{j}}z}v_{{\varepsilon}_{j}}(x)\,\mathrm{d}z\,\mathrm{d}x\\\
&=\frac{1}{{{\varepsilon}_{j}}}\int_{G^{\prime}}\int_{\mathbb{R}^{3}}\Big{(}\phi(x-{{\varepsilon}_{j}}z,\,z)-\phi(x,\,z)\Big{)}K^{\frac{1}{2}}(z)v_{{\varepsilon}_{j}}(x)\,\mathrm{d}z\,\mathrm{d}x,\end{split}$
and we may exploit the fact that
$\frac{1}{{{\varepsilon}_{j}}}\Big{(}\phi(x-{{\varepsilon}_{j}}z,\,z)-\phi(x,\,z)\Big{)}\to(-z\cdot\nabla_{x})\phi(x,\,z)\qquad\textrm{uniformly
on }G^{\prime}\times\mathbb{R}^{3}\textrm{ as }j\to+\infty,$
together with the strong $L^{2}$ convergence $v_{{\varepsilon}_{j}}\to v_{0}$,
to obtain
$\begin{split}\lim\limits_{j\to\infty}\langle
w_{j},\,\phi\rangle&=\lim\limits_{j\to\infty}\frac{1}{{\varepsilon}_{j}}\int_{G^{\prime}}\int_{\mathbb{R}^{3}}\Big{(}\phi(x-{{\varepsilon}_{j}}z,\,z)-\phi(x,\,z)\Big{)}K^{\frac{1}{2}}(z)v_{{\varepsilon}_{j}}(x)\,\mathrm{d}z\,\mathrm{d}x,\\\
&=\int_{G^{\prime}}\int_{\mathbb{R}^{3}}(-z\cdot\nabla_{x})\phi(x,\,z)K^{\frac{1}{2}}(z)v_{0}(x)\,\mathrm{d}z\,\mathrm{d}x\\\
&=\int_{G^{\prime}}\int_{\mathbb{R}^{3}}\phi(x,\,z)K^{\frac{1}{2}}(z)(z\cdot\nabla)v_{0}(x)\,\mathrm{d}z\,\mathrm{d}x=\langle
w_{0},\,\phi\rangle.\end{split}$
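The uniform convergence invoked in the previous display follows from Taylor's theorem: since $\phi$ is smooth with compact support and $|z|\leq R_{0}$ on the support of $\phi$,
$\left|\frac{\phi(x-{{\varepsilon}_{j}}z,\,z)-\phi(x,\,z)}{{\varepsilon}_{j}}+(z\cdot\nabla_{x})\phi(x,\,z)\right|\leq\frac{{\varepsilon}_{j}}{2}|z|^{2}\left\|D_{x}^{2}\phi\right\|_{L^{\infty}}\leq\frac{{\varepsilon}_{j}R_{0}^{2}}{2}\left\|D_{x}^{2}\phi\right\|_{L^{\infty}}\to 0\qquad\textrm{as }j\to+\infty.$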
Therefore $w_{0}(x,\,z)=K^{\frac{1}{2}}(z)(z\cdot\nabla)v_{0}(x)$, and by
(3.19), (3.20), (3.21) we have
$\begin{split}\liminf\limits_{{\varepsilon}\to
0}&\frac{1}{4{\varepsilon}^{2}}\int_{G}\int_{G}K_{\varepsilon}(x-y)\cdot\left(v_{\varepsilon}(x)-v_{\varepsilon}(y)\right)^{\otimes
2}\,\mathrm{d}y\,\mathrm{d}x\\\
&\geq\frac{1}{4}\liminf\limits_{j\to\infty}\left\|w_{j}\right\|^{2}_{L^{2}(G^{\prime}\times\mathbb{R}^{3})}\\\
&\geq\frac{1}{4}\left\|w_{0}\right\|^{2}_{L^{2}(G^{\prime}\times\mathbb{R}^{3})}\\\
&=\frac{1}{4}\int_{G^{\prime}}\int_{\mathbb{R}^{3}}K(z)\cdot\Big{(}(z\cdot\nabla)v_{0}(x)\Big{)}^{\otimes
2}\,\mathrm{d}z\,\mathrm{d}x\\\
&\stackrel{{\scriptstyle\eqref{L}}}{{=}}\int_{G^{\prime}}L\nabla
v_{0}(x)\cdot\nabla v_{0}(x)\,\mathrm{d}x.\end{split}$
As the set $G^{\prime}\subset\\!\subset G$ was arbitrary, by monotonicity the
lower bound (3.14) holds. ∎
## 4 Proof of the main results
### 4.1 A compactness result for $\omega$-minimisers
The goal of this section is to prove a compactness result for minimisers of
$E_{\varepsilon}$, subject to variable “boundary conditions”, as
${\varepsilon}\to 0$. For later convenience, we state our result in terms of
“almost minimisers” — or, more precisely, $\omega$-minimisers, as defined
below. This will be useful to study variants of our original minimisation
problem, as we will do in Section 5.
###### Definition 4.1.
Let $\Omega\subseteq\mathbb{R}^{3}$ be a bounded domain. Let
$\omega\colon[0,\,+\infty)\to[0,\,+\infty)$ be an increasing function such
that $\omega(s)\to 0$ as $s\to 0$. We say that a function $u\in
L^{\infty}(\mathbb{R}^{3},\,\mathcal{Q})$ is an $\omega$-minimiser of
$E_{\varepsilon}$ in $\Omega$ if, for any ball
$B_{\rho}(x_{0})\subseteq\Omega$ and any $v\in
L^{\infty}(\mathbb{R}^{3},\,\mathcal{Q})$ such that $v=u$ a.e. on
$\mathbb{R}^{3}\setminus B_{\rho}(x_{0})$, there holds
$E_{\varepsilon}(u)\leq E_{\varepsilon}(v)+\omega({\varepsilon})\,\rho.$
By definition, a minimiser for $E_{\varepsilon}$ in the class $\mathscr{A}$
defined by (2.5) is also an $\omega$-minimiser in $\Omega$, for any
$\omega\geq 0$. $\omega$-minimisers behave nicely with respect to scaling.
Given $u\in
L^{\infty}(\mathbb{R}^{3},\,\mathcal{Q})$, an increasing function
$\omega\colon[0,\,+\infty)\to[0,\,+\infty)$, $x_{0}\in\mathbb{R}^{3}$ and
$\rho>0$, we define $u_{\rho}\colon\mathbb{R}^{3}\to\mathcal{Q}$ and
$\omega_{\rho}\colon[0,\,+\infty)\to[0,\,+\infty)$ as
$u_{\rho}(y):=u(x_{0}+\rho y)$ for $y\in\mathbb{R}^{3}$ and
$\omega_{\rho}(s):=\omega(\rho s)$ for $s\geq 0$, respectively. A scaling
argument implies the following.
###### Lemma 4.1.
If $u$ is an $\omega$-minimiser for $E_{\varepsilon}$ in a bounded domain
$\Omega\subseteq\mathbb{R}^{3}$, then $u_{\rho}$ is an
$\omega_{\rho}$-minimiser for $E_{{\varepsilon}/\rho}$ in $(\Omega-
x_{0})/\rho$.
The main result of this section is the following.
###### Proposition 4.2.
Let $\Omega\subseteq\mathbb{R}^{3}$ be a bounded domain. Let
$\omega\colon[0,\,+\infty)\to[0,\,+\infty)$ be an increasing function such
that $\omega(s)\to 0$ as $s\to 0$. Let $u_{\varepsilon}$ be a sequence of
$\omega$-minimisers of $E_{\varepsilon}$ in $\Omega$, and let
$B_{\rho}(x_{0})\subseteq\Omega$ be a ball such that
$\sup_{{\varepsilon}>0}F_{\varepsilon}(u_{\varepsilon},\,B_{\rho}(x_{0}))<+\infty$.
Then, up to extraction of a non-relabelled subsequence, $u_{\varepsilon}$
converge $L^{2}(B_{\rho/2}(x_{0}))$-strongly to a map $u_{0}\in
H^{1}(B_{\rho/2}(x_{0}),\,\mathscr{N})$, which minimises the functional
$w\in
H^{1}(B_{\rho/2}(x_{0}),\,\mathscr{N})\mapsto\int_{B_{\rho/2}(x_{0})}L\nabla
w\cdot\nabla w$
subject to its own boundary conditions. Moreover, for any $s\in(0,\,\rho/2)$
there holds
(4.1) $\lim_{{\varepsilon}\to
0}F_{\varepsilon}(u_{\varepsilon},\,B_{s}(x_{0}))=\int_{B_{s}(x_{0})}L\nabla
u_{0}\cdot\nabla u_{0}.$
Proposition 4.2 differs from the results in [42] in that no “boundary
condition” is prescribed: each $u_{\varepsilon}$ minimises the functional
$E_{\varepsilon}$ (possibly up to a small error, which is quantified by the
function $\omega$) subject to its own “boundary condition”. The main
ingredient in the proof of Proposition 4.2 is the following extension lemma.
###### Lemma 4.3 ([33, 10]).
For any $M>0$, there exists $\eta=\eta(M)>0$ such that the following statement
holds. Let $\mathcal{Q}_{0}\subset\\!\subset\mathcal{Q}$ be an open set that
contains $\mathscr{N}$. Let $\rho$, $\lambda$ be positive numbers with
$\lambda<\rho$, and let $u\in H^{1}(\partial B_{\rho},\,\mathcal{Q}_{0})$,
$v\in H^{1}(\partial B_{\rho},\,\mathscr{N})$ be such that
$\int_{\partial B_{\rho}}\left(\left|\nabla u\right|^{2}+\left|\nabla
v\right|^{2}\right)\mathrm{d}\mathscr{H}^{2}\leq M,\qquad{\int_{\partial
B_{\rho}}\left|u-v\right|^{2}\,\mathrm{d}\mathscr{H}^{2}\leq\eta\lambda^{2}.}$
Then, there exists a map $w\in H^{1}(B_{\rho}\setminus
B_{\rho-\lambda},\,\mathcal{Q}_{0})$ such that $w(x)=u(x)$ for
$\mathscr{H}^{2}$-a.e. $x\in\partial B_{\rho}$, $w(x)=v(\rho
x/(\rho-\lambda))$ for $\mathscr{H}^{2}$-a.e. $x\in\partial B_{\rho-\lambda}$,
and
$\displaystyle\int_{B_{\rho}\setminus B_{\rho-\lambda}}\left|\nabla
w\right|^{2}\lesssim\lambda\int_{\partial B_{\rho}}\left(\left|\nabla
u\right|^{2}+\left|\nabla
v\right|^{2}+\frac{\left|u-v\right|^{2}}{\lambda^{2}}\right)\mathrm{d}\mathscr{H}^{2}$
$\displaystyle\int_{B_{\rho}\setminus
B_{\rho-\lambda}}\psi_{b}(w)\lesssim\lambda\int_{\partial
B_{\rho}}\psi_{b}(u)\,\mathrm{d}\mathscr{H}^{2}$
###### Remark 4.1.
Lemma 4.3, in case $\psi_{b}=0$, was first proven by Luckhaus [33, Lemma 1].
Up to a scaling, the statement given here is essentially the same as [10,
Lemma B.2]. However, in [10] the potential is assumed to be finite and smooth
on the whole of $\mathbb{R}^{m}$, while our potential $\psi_{b}$ is singular
out of $\mathcal{Q}$. Nevertheless, the proof carries over to our setting.
Indeed, the map $w$ constructed in [10] takes values in a neighbourhood of
$\mathscr{N}$, whose thickness can be made arbitrarily small by choosing
$\eta$ small (see also [33, Lemma 1]). In particular, we can make sure that
the image of $w$ is contained in the set $\mathcal{Q}_{0}$, where the function
$\psi_{b}$ is finite and smooth, and the arguments of [10] carry over.
Incidentally, Lemma 4.3 crucially depends on the non-degeneracy assumption
(H6) for the bulk potential $\psi_{b}$.
We state a few other technical results, which will be useful in the proof of
Proposition 4.2.
###### Lemma 4.4.
Let $G$, $G^{\prime}$ be open sets in $\mathbb{R}^{3}$, with
$G\subset\\!\subset G^{\prime}$. Given
$0<\theta<\operatorname{dist}(G,\,\partial G^{\prime})$, define
$\partial_{\theta}G:=\\{x\in\mathbb{R}^{3}\colon\operatorname{dist}(x,\,\partial
G)<\theta\\}$. Then, for any ${\varepsilon}>0$ and $u\in
L^{\infty}(\mathbb{R}^{3},\,\mathbb{R}^{m})$, there holds
$\begin{split}F_{\varepsilon}(u,\,G^{\prime})\leq
F_{\varepsilon}(u,\,G)&+F_{\varepsilon}(u,\,G^{\prime}\setminus
G)+F_{\varepsilon}(u,\,\partial_{\theta}G)\\\
&+\frac{C}{\theta^{2}}\left\|u\right\|^{2}_{L^{\infty}(\mathbb{R}^{3})}\left|G\right|\int_{\mathbb{R}^{3}\setminus
B_{\theta/{\varepsilon}}}g(z)\left|z\right|^{2}\,\mathrm{d}z.\end{split}$
###### Proof.
Since $K_{\varepsilon}$ is even (by (K2)), we have
$\begin{split}F_{\varepsilon}(u,\,G^{\prime})\leq
F_{\varepsilon}(u,\,G)&+F_{\varepsilon}(u,\,G^{\prime}\setminus G)\\\
&+\frac{1}{4{\varepsilon}^{2}}\int_{\partial_{\theta}G\times\partial_{\theta}G}K_{\varepsilon}(x-y)\cdot\left(u(x)-u(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y\\\
&+\frac{1}{2{\varepsilon}^{2}}\int_{\\{(x,\,y)\in
G\times\mathbb{R}^{3}\colon\left|x-y\right|\geq\theta\\}}K_{\varepsilon}(x-y)\cdot\left(u(x)-u(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y\end{split}$
Keeping in mind that $u$ is bounded, and using (K5), we obtain
$\begin{split}F_{\varepsilon}(u,\,G^{\prime})\leq
F_{\varepsilon}(u,\,G)&+F_{\varepsilon}(u,\,G^{\prime}\setminus
G)+F_{\varepsilon}(u,\,\partial_{\theta}G)\\\
&+\frac{C}{{\varepsilon}^{2}}\left\|u\right\|_{L^{\infty}(\mathbb{R}^{3})}^{2}\underbrace{\int_{\\{(x,y)\in
G\times\mathbb{R}^{3}\colon\left|x-y\right|\geq\theta\\}}g_{\varepsilon}(x-y)\,\mathrm{d}x\,\mathrm{d}y}_{=:I}\end{split}$
We bound the term $I$ by making the change of variable $y=x+{\varepsilon}z$:
$\begin{split}I=\int_{G}\left(\int_{\mathbb{R}^{3}\setminus
B_{\theta/{\varepsilon}}}g(z)\,\mathrm{d}z\right)\mathrm{d}x\leq\frac{{\varepsilon}^{2}}{\theta^{2}}\left|G\right|\int_{\mathbb{R}^{3}\setminus
B_{\theta/{\varepsilon}}}g(z)\left|z\right|^{2}\,\mathrm{d}z.\end{split}$
The right-hand side is finite, because of (K4). The lemma follows. ∎
Another variant of Lemma 4.4 is the following “gluing lemma” for the non-local
functional $E_{\varepsilon}$.
###### Lemma 4.5.
Given a number $\theta>0$, a Borel set $G\subseteq\mathbb{R}^{3}$, and maps
$u_{1}$, $u_{2}\in L^{\infty}(\mathbb{R}^{3},\,\mathbb{R}^{m})$, define the
map
$u:=\begin{cases}u_{1}&\textrm{on }G\\\ u_{2}&\textrm{on
}\mathbb{R}^{3}\setminus G\end{cases}$
and
$\partial_{\theta}G:=\\{x\in\mathbb{R}^{3}\colon\operatorname{dist}(x,\,\partial
G)<\theta\\}$, as above. Then, for any ${\varepsilon}>0$, there holds
$\begin{split}E_{\varepsilon}(u)\leq
F_{\varepsilon}(u_{1},\,G)&+F_{\varepsilon}(u_{2},\,\mathbb{R}^{3}\setminus
G)+2F_{\varepsilon}(u_{2},\,\partial_{\theta}G)+\frac{C}{{\varepsilon}^{2}}\int_{\partial_{\theta}G}\left|u_{1}-u_{2}\right|^{2}\\\
&+\frac{C}{\theta^{2}}\left\|u\right\|^{2}_{L^{\infty}(\mathbb{R}^{3})}\left|G\right|\int_{\mathbb{R}^{3}\setminus
B_{\theta/{\varepsilon}}}g(z)\left|z\right|^{2}\,\mathrm{d}z.\end{split}$
###### Proof.
We repeat the arguments of Lemma 4.4, with $G^{\prime}=\mathbb{R}^{3}$:
$\begin{split}E_{\varepsilon}(u)\leq
F_{\varepsilon}(u,\,G)&+F_{\varepsilon}(u,\,G^{\prime}\setminus G)\\\
&+\underbrace{\frac{1}{4{\varepsilon}^{2}}\int_{\partial_{\theta}G\times\partial_{\theta}G}K_{\varepsilon}(x-y)\cdot\left(u(x)-u(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y}_{=:J}\\\
&+\frac{1}{2{\varepsilon}^{2}}\int_{\\{(x,\,y)\in
G\times\mathbb{R}^{3}\colon\left|x-y\right|\geq\theta\\}}K_{\varepsilon}(x-y)\cdot\left(u(x)-u(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y\end{split}$
The last term at the right-hand side can be bounded exactly as in Lemma 4.4.
The triangle inequality and the elementary inequality $(a+b)^{2}\leq
2(a^{2}+b^{2})$ imply
$\begin{split}J&\leq\frac{1}{2{\varepsilon}^{2}}\int_{\partial_{\theta}G\times\partial_{\theta}G}K_{\varepsilon}(x-y)\cdot\left(u_{1}(x)-u_{2}(x)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y\\\
&\qquad\qquad\qquad+\frac{1}{2{\varepsilon}^{2}}\int_{\partial_{\theta}G\times\partial_{\theta}G}K_{\varepsilon}(x-y)\cdot\left(u_{2}(x)-u_{2}(y)\right)^{\otimes
2}\,\mathrm{d}x\,\mathrm{d}y\\\
&\leq\frac{1}{2{\varepsilon}^{2}}\left\|K\right\|_{L^{1}(\mathbb{R}^{3})}\int_{\partial_{\theta}G}\left|u_{1}(x)-u_{2}(x)\right|^{2}\mathrm{d}x+2F_{\varepsilon}(u_{2},\,\partial_{\theta}G)\end{split}$
The lemma follows. ∎
Finally, we will need an inequality on the bulk potential $\psi_{b}$.
###### Lemma 4.6.
For any $\delta>0$, there exists a constant $C_{\delta}>0$ such that, for any
$y_{1}\in\mathcal{Q}$, $y_{2}\in\mathcal{Q}$ with
$\operatorname{dist}(y_{2},\,\partial\mathcal{Q})\geq\delta$, we have
(4.2) $\psi_{b}(y_{2})\leq
C_{\delta}\left(\psi_{b}(y_{1})+\left|y_{1}-y_{2}\right|^{2}\right).$
###### Proof.
The assumption (H6) implies, via a Taylor expansion and a compactness
argument, that there exist $\gamma>0$, $\kappa_{1}>0$, $\kappa_{2}>0$ so that
if $\operatorname{dist}(y,\,\mathscr{N})<\gamma$, then
(4.3)
$\kappa_{1}\operatorname{dist}^{2}(y,\,\mathscr{N})\leq\psi_{b}(y)\leq\kappa_{2}\operatorname{dist}^{2}(y,\,\mathscr{N}).$
To prove the result, we distinguish three cases:
1. $\operatorname{dist}(y_{1},\,\mathscr{N})\geq\frac{1}{2}\gamma$;
2. $\operatorname{dist}(y_{1},\,\mathscr{N})<\frac{1}{2}\gamma$ and
$\operatorname{dist}(y_{2},\,\mathscr{N})\geq\gamma$;
3. $\operatorname{dist}(y_{1},\,\mathscr{N})<\frac{1}{2}\gamma$ and
$\operatorname{dist}(y_{2},\,\mathscr{N})<\gamma$.
In case (1), any such $y_{1}$ satisfies $\psi_{b}(y_{1})>c_{1}$ for a
constant $c_{1}>0$ (depending on $\gamma$), as $y_{1}$ is bounded away from
the minimising manifold. Moreover, $\psi_{b}(y_{2})\leq c_{2}$ because
$\operatorname{dist}(y_{2},\,\partial\mathcal{Q})\geq\delta$ (and the constant
$c_{2}$ depends on $\delta$). Therefore, the inequality (4.2) holds trivially
with $C_{\delta}=\frac{c_{2}}{c_{1}}$.
In case (2), since
$\operatorname{dist}(y_{1},\,\mathscr{N})<\frac{1}{2}\gamma$ and
$\operatorname{dist}(y_{2},\,\mathscr{N})\geq\gamma$, we must have
$|y_{1}-y_{2}|^{2}\geq\frac{1}{4}\gamma^{2}$; then, we use the upper bound on
$\psi_{b}(y_{2})$ as before.
In case (3), since $y_{1}$ and $y_{2}$ are both sufficiently close to
$\mathscr{N}$,
$\begin{split}\psi_{b}(y_{2})&\stackrel{{\scriptstyle\eqref{nondeg}}}{{\lesssim}}\operatorname{dist}^{2}(y_{2},\,\mathscr{N})\lesssim\operatorname{dist}^{2}(y_{1},\,\mathscr{N})+|y_{1}-y_{2}|^{2}\stackrel{{\scriptstyle\eqref{nondeg}}}{{\lesssim}}\psi_{b}(y_{1})+|y_{1}-y_{2}|^{2}.\qed\end{split}$
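As a concrete (hypothetical) illustration, take the Ginzburg–Landau-type potential $\psi_{b}(y)=(1-|y|^{2})^{2}$ on $\mathcal{Q}=B_{2}\subset\mathbb{R}^{2}$, whose vacuum manifold is $\mathscr{N}=\mathbb{S}^{1}$ and which satisfies the non-degeneracy (4.3); the inequality (4.2) can then be checked on random samples:

```python
import numpy as np

# Hypothetical example: psi_b(y) = (1 - |y|^2)^2 on Q = B_2 in R^2, with
# vacuum manifold N = S^1. We sample y1 in Q and y2 with dist(y2, dQ) >= delta
# and check psi_b(y2) <= C_delta * (psi_b(y1) + |y1 - y2|^2) for a fixed C.
rng = np.random.default_rng(0)
delta = 0.05

def psi_b(y):
    return (1.0 - np.sum(y**2, axis=-1)) ** 2

y1 = rng.uniform(-2.0, 2.0, size=(200000, 2))
y1 = y1[np.linalg.norm(y1, axis=1) < 2.0]            # y1 in Q
y2 = rng.uniform(-2.0, 2.0, size=(200000, 2))
y2 = y2[np.linalg.norm(y2, axis=1) < 2.0 - delta]    # dist(y2, dQ) >= delta
n = min(len(y1), len(y2))
y1, y2 = y1[:n], y2[:n]

ratio = psi_b(y2) / (psi_b(y1) + np.sum((y1 - y2) ** 2, axis=1))
print(float(ratio.max()))   # stays below a fixed C_delta on all samples
```

Shrinking $\delta$ lets $y_{2}$ approach $\partial\mathcal{Q}$, where this potential grows; the observed constant then degrades, matching the $\delta$-dependence of $C_{\delta}$ in the lemma.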
###### Proof of Proposition 4.2.
By a scaling argument (see Equation (3.11) and Lemma 4.1), we can assume
without loss of generality that $\rho=1$ and $x_{0}=0$.
###### Step 1 (Compactness).
Let $\varphi_{\varepsilon}\in C^{\infty}_{\mathrm{c}}(\mathbb{R}^{3})$ be
defined as in Section 3.2. Lemma 3.5 and Lemma 3.6 imply that
(4.4)
$\int_{B_{1/2}}\left|\nabla(\varphi_{\varepsilon}*u_{\varepsilon})\right|^{2}\lesssim
F_{\varepsilon}(u_{\varepsilon},\,B_{1}),\qquad\int_{B_{1/2}}\left|\varphi_{\varepsilon}*u_{\varepsilon}-u_{\varepsilon}\right|^{2}\lesssim{\varepsilon}^{2}F_{\varepsilon}(u_{\varepsilon},\,B_{1})$
for ${\varepsilon}$ small enough. Since $F_{\varepsilon}(u_{\varepsilon},\,B_{1})$ is
bounded, we can extract a (non-relabelled) subsequence so that
$\varphi_{\varepsilon}*u_{\varepsilon}\rightharpoonup u_{0}$ weakly in
$H^{1}(B_{1/2})$, $u_{\varepsilon}\to u_{0}$ strongly in $L^{2}(B_{1/2})$. We
must show that
(4.5) $\int_{B_{1/2}}L\nabla u_{0}\cdot\nabla u_{0}\leq\int_{B_{1/2}}L\nabla
v\cdot\nabla v$
for any $v\in H^{1}(B_{1/2},\,\mathscr{N})$ such that $v=u_{0}$ on $\partial
B_{1/2}$. By an approximation argument, it suffices to prove (4.5) in case
$v=u_{0}$ in a neighbourhood of $\partial B_{1/2}$. Therefore, we fix
$s\in(0,\,1/2)$ and we take a map $v\in H^{1}(B_{1/2},\,\mathscr{N})$ such
that $v=u_{0}$ on $B_{1/2}\setminus\bar{B}_{s}$. The map $v$ is not an
admissible competitor for $u_{\varepsilon}$, because in general $u_{0}\neq
u_{\varepsilon}$ on $\mathbb{R}^{3}\setminus B_{1}$. In the next step, we will
modify $v$ near the boundary of $B_{s}$, so to obtain an admissible
competitor.
###### Step 2 (Construction of a competitor for $u_{\varepsilon}$).
Let $N\geq 1$ be an integer, and let $\bar{s}:=\max(s,\,1/4)$. We
consider the annulus $B_{1/2}\setminus\bar{B}_{\bar{s}}$ and divide it into
$N$ concentric sub-annuli:
$A_{i}:=B_{\bar{s}+i\frac{1/2-\bar{s}}{N}}\setminus\bar{B}_{\bar{s}+(i-1)\frac{1/2-\bar{s}}{N}}\qquad\textrm{for
}i=1,\,2,\,\ldots,\,N.$
We have $\sum_{i=1}^{N}F_{\varepsilon}(u_{\varepsilon},\,A_{i})\leq
F_{\varepsilon}(u_{\varepsilon},\,B_{1})$ and hence, for any ${\varepsilon}$
we can choose an index $i({\varepsilon})$ such that
(4.6)
$F_{\varepsilon}(u_{\varepsilon},\,A_{i({\varepsilon})})\leq\frac{F_{\varepsilon}(u_{\varepsilon},\,B_{1})}{N}.$
Passing to a subsequence, we may also assume that all the indices
$i({\varepsilon})$ are the same, so from now on, we write $i$ instead of
$i({\varepsilon})$. We take positive numbers $a<b$ such that
$A^{\prime}:=B_{b}\setminus\bar{B}_{a}\subset\\!\subset A_{i}$. Then, Lemma
3.6 gives
(4.7)
$\frac{1}{{\varepsilon}^{2}}\int_{A^{\prime}}\left|\varphi_{\varepsilon}*u_{\varepsilon}-u_{\varepsilon}\right|^{2}\lesssim
F_{\varepsilon}(u_{\varepsilon},\,A_{i})\lesssim\frac{F_{\varepsilon}(u_{\varepsilon},\,B_{1})}{N}$
for ${\varepsilon}$ small enough. From Proposition 3.3 and Lemma 4.6, we
deduce
(4.8)
$\begin{split}\frac{1}{{\varepsilon}^{2}}\int_{A^{\prime}}\psi_{b}(\varphi_{\varepsilon}*u_{\varepsilon})&\lesssim\frac{1}{{\varepsilon}^{2}}\int_{A^{\prime}}\left(\psi_{b}(u_{\varepsilon})+\left|\varphi_{\varepsilon}*u_{\varepsilon}-u_{\varepsilon}\right|^{2}\right)\stackrel{{\scriptstyle\eqref{comp2}}}{{\lesssim}}\frac{F_{\varepsilon}(u_{\varepsilon},\,B_{1})}{N}\end{split}$
Using Fatou’s lemma, we see that
$\begin{split}&\int_{a}^{b}\left(\liminf_{{\varepsilon}\to 0}\int_{\partial
B_{r}}\left|\nabla(\varphi_{\varepsilon}*u_{\varepsilon})\right|^{2}+\frac{1}{{\varepsilon}^{2}}\psi_{b}(\varphi_{\varepsilon}*u_{\varepsilon})\,\mathrm{d}\mathscr{H}^{2}\right)\mathrm{d}r\\\
&\qquad\qquad\qquad\leq\liminf_{{\varepsilon}\to
0}\int_{A^{\prime}}\left(\left|\nabla(\varphi_{\varepsilon}*u_{\varepsilon})\right|^{2}+\frac{1}{{\varepsilon}^{2}}\psi_{b}(\varphi_{\varepsilon}*u_{\varepsilon})\right)\stackrel{{\scriptstyle\eqref{energybd},\eqref{comp3}}}{{\leq}}C.\end{split}$
Since $A^{\prime}\subseteq B_{1/2}\setminus\bar{B}_{s}$, we have $v=u_{0}$ on
$A^{\prime}$ and hence, $\varphi_{\varepsilon}*u_{\varepsilon}\to v$ strongly
in $L^{2}(A^{\prime})$. Therefore, by Fubini's theorem, there exists a radius
$r\in(a,\,b)\subseteq(1/4,\,1/2)$ and a (non-relabelled) subsequence
${\varepsilon}\to 0$ such that
(4.9) $\displaystyle\int_{\partial B_{r}}\left(\left|\nabla
v\right|^{2}+\left|\nabla(\varphi_{\varepsilon}*u_{\varepsilon})\right|^{2}+\frac{1}{{\varepsilon}^{2}}\psi_{b}(\varphi_{\varepsilon}*u_{\varepsilon})\right)\,\mathrm{d}\mathscr{H}^{2}\leq\frac{C}{b-a}$
(4.10) $\displaystyle\int_{\partial
B_{r}}\left|\varphi_{\varepsilon}*u_{\varepsilon}-v\right|^{2}\mathrm{d}\mathscr{H}^{2}\to
0\qquad\textrm{as }{\varepsilon}\to 0.$
Let
$\lambda_{\varepsilon}:={\left({\varepsilon}+\int_{\partial
B_{r}}\left|\varphi_{\varepsilon}*u_{\varepsilon}-v\right|^{2}\mathrm{d}\mathscr{H}^{2}\right)^{1/4}}>0.$
Thanks to Proposition 3.3, (4.9) and (4.10), we can apply Lemma 4.3 to
construct a map $w_{\varepsilon}\in H^{1}(B_{r}\setminus
B_{r-\lambda_{\varepsilon}},\,\mathcal{Q})$ such that
$w_{\varepsilon}(x)=(\varphi_{\varepsilon}*u_{\varepsilon})(x)$ for $x\in\partial B_{r}$,
$w_{\varepsilon}(x)=v(rx/(r-\lambda_{\varepsilon}))$ for $x\in\partial
B_{r-\lambda_{\varepsilon}}$, and
(4.11) $\int_{B_{r}\setminus B_{r-\lambda_{\varepsilon}}}\left(\left|\nabla
w_{\varepsilon}\right|^{2}+\frac{1}{{\varepsilon}^{2}}\psi_{b}(w_{\varepsilon})\right)\lesssim\frac{\lambda_{\varepsilon}}{b-a}+\frac{1}{\lambda_{\varepsilon}}\int_{\partial
B_{r}}\left|\varphi_{\varepsilon}*u_{\varepsilon}-v\right|^{2}\,\mathrm{d}\mathscr{H}^{2}$
The right-hand side of (4.11) converges to zero as ${\varepsilon}\to 0$, due
to (4.10). Finally, we take $c\in(r,\,b)$ and we define
$v_{\varepsilon}(x):=\begin{cases}u_{\varepsilon}(x)&\textrm{if
}x\in\mathbb{R}^{3}\setminus B_{c}\\\
(\varphi_{\varepsilon}*u_{\varepsilon})(x)&\textrm{if }x\in B_{c}\setminus B_{r}\\\
w_{\varepsilon}(x)&\textrm{if }x\in B_{r}\setminus
B_{r-\lambda_{\varepsilon}}\\\
v\left(\dfrac{rx}{r-\lambda_{\varepsilon}}\right)&\textrm{if }x\in
B_{r-\lambda_{\varepsilon}}\end{cases}$
The map $v_{\varepsilon}$ is an admissible competitor for $u_{\varepsilon}$,
because $v_{\varepsilon}\in L^{\infty}(\mathbb{R}^{3},\,\mathcal{Q})$ and
$v_{\varepsilon}=u_{\varepsilon}$ a.e. on $\mathbb{R}^{3}\setminus B_{1}$.
###### Step 3 (Bounds on $E_{\varepsilon}(v_{\varepsilon})$).
We observe that $\partial B_{c}\subset\\!\subset
A^{\prime}\setminus\bar{B}_{r}$ and hence, there exists $\theta>0$ such that
$B_{c+\theta}\setminus B_{c-\theta}\subseteq
A^{\prime}\setminus\bar{B}_{r}$. Then, Lemma 4.5 gives
$\begin{split}E_{\varepsilon}(v_{\varepsilon})\leq
F_{\varepsilon}(v_{\varepsilon},\,B_{c})&+F_{\varepsilon}(u_{\varepsilon},\,\mathbb{R}^{3}\setminus
B_{c})+CF_{\varepsilon}(u_{\varepsilon},\,A^{\prime})+\frac{C}{{\varepsilon}^{2}}\int_{A^{\prime}\setminus\bar{B}_{r}}\left|u_{\varepsilon}-\varphi_{\varepsilon}*u_{\varepsilon}\right|^{2}\\\
&+\frac{C}{\theta^{2}}\left\|v_{\varepsilon}\right\|^{2}_{L^{\infty}(\mathbb{R}^{3})}\int_{\mathbb{R}^{3}\setminus
B_{\theta/{\varepsilon}}}g(z)\left|z\right|^{2}\,\mathrm{d}z\end{split}$
The map $v_{\varepsilon}$ takes values in the bounded set $\mathcal{Q}$. Due
to (4.6) and (4.7), we deduce
(4.12) $\begin{split}E_{\varepsilon}(v_{\varepsilon})\leq
F_{\varepsilon}(v_{\varepsilon},\,B_{c})&+F_{\varepsilon}(u_{\varepsilon},\,\mathbb{R}^{3}\setminus
B_{c})+\frac{C}{N}+\frac{C}{\theta^{2}}\int_{\mathbb{R}^{3}\setminus
B_{\theta/{\varepsilon}}}g(z)\left|z\right|^{2}\,\mathrm{d}z.\end{split}$
We bound $F_{\varepsilon}(v_{\varepsilon},\,B_{c})$ in a similar way, with the
help of Lemma 4.4. Reducing the value of $\theta$ if necessary, so that
$B_{r+\theta}\setminus B_{r-\theta}\subset\\!\subset A^{\prime}$, and
observing that $B_{c}\setminus\bar{B}_{r}\subseteq A^{\prime}$, we obtain
$\begin{split}F_{\varepsilon}(v_{\varepsilon},\,B_{c})&\leq
F_{\varepsilon}(v_{\varepsilon},\,B_{r})+CF_{\varepsilon}(\varphi_{\varepsilon}*u_{\varepsilon},\,A^{\prime})+\frac{C}{\theta^{2}}\left\|v_{\varepsilon}\right\|^{2}_{L^{\infty}(\mathbb{R}^{3})}\int_{\mathbb{R}^{3}\setminus
B_{\theta/{\varepsilon}}}g(z)\left|z\right|^{2}\,\mathrm{d}z\\\
&\stackrel{{\scriptstyle\eqref{comp1},\,\eqref{comp3}}}{{\leq}}F_{\varepsilon}(v_{\varepsilon},\,B_{r})+\frac{C}{N}+\frac{C}{\theta^{2}}\int_{\mathbb{R}^{3}\setminus
B_{\theta/{\varepsilon}}}g(z)\left|z\right|^{2}\,\mathrm{d}z.\end{split}$
Together with (4.12), this implies
(4.13) $\begin{split}E_{\varepsilon}(v_{\varepsilon})\leq
F_{\varepsilon}(v_{\varepsilon},\,B_{r})+F_{\varepsilon}(u_{\varepsilon},\,\mathbb{R}^{3}\setminus
B_{c})+\frac{C}{N}+\frac{C}{\theta^{2}}\int_{\mathbb{R}^{3}\setminus
B_{\theta/{\varepsilon}}}g(z)\left|z\right|^{2}\,\mathrm{d}z.\end{split}$
###### Step 4 ($u_{0}$ is a minimiser: proof of (4.5)).
Since $u_{\varepsilon}$ is an $\omega$-minimiser for $E_{\varepsilon}$, from
(4.13) we deduce
$\begin{split}E_{\varepsilon}(u_{\varepsilon})\leq
F_{\varepsilon}(v_{\varepsilon},\,B_{r})+F_{\varepsilon}(u_{\varepsilon},\,\mathbb{R}^{3}\setminus
B_{c})+\frac{C}{N}+\frac{C}{\theta^{2}}\int_{\mathbb{R}^{3}\setminus
B_{\theta/{\varepsilon}}}g(z)\left|z\right|^{2}\,\mathrm{d}z+\omega({\varepsilon}).\end{split}$
On the other hand,
$F_{\varepsilon}(u_{\varepsilon},\,B_{r})+F_{\varepsilon}(u_{\varepsilon},\,\mathbb{R}^{3}\setminus
B_{c})\leq E_{\varepsilon}(u_{\varepsilon})$ and hence,
(4.14) $\begin{split}F_{\varepsilon}(u_{\varepsilon},\,B_{r})\leq
F_{\varepsilon}(v_{\varepsilon},\,B_{r})+\frac{C}{N}+\frac{C}{\theta^{2}}\int_{\mathbb{R}^{3}\setminus
B_{\theta/{\varepsilon}}}g(z)\left|z\right|^{2}\,\mathrm{d}z+\omega({\varepsilon}).\end{split}$
Using (4.11), a routine computation shows that $v_{\varepsilon}\to v$ strongly
in $H^{1}(B_{r})$ as ${\varepsilon}\to 0$. Then, we can pass to the limit as
${\varepsilon}\to 0$ in (4.14), using Proposition 3.7, Proposition 3.8, (4.8)
and (K4). We obtain
(4.15) $\int_{B_{r}}L\nabla u_{0}\cdot\nabla u_{0}\leq\int_{B_{r}}L\nabla
v\cdot\nabla v+\frac{C}{N}$
We have chosen $v$ in such a way that $v=u_{0}$ on
$B_{1/2}\setminus\bar{B}_{s}$; moreover, by construction,
$r>\bar{s}:=\max(s,\,1/4)\geq s$. Therefore, (4.15) implies
$\int_{B_{s}}L\nabla u_{0}\cdot\nabla u_{0}\leq\int_{B_{s}}L\nabla
v\cdot\nabla v+\frac{C}{N}$
and, letting $N\to+\infty$, (4.5) follows.
###### Step 5 (Proof of (4.1)).
We choose $v=u_{0}$. Passing to the limit as ${\varepsilon}\to 0$ in both
sides of (4.14), with the help of Proposition 3.8 and (K4), we obtain
(4.16) $\begin{split}\limsup_{{\varepsilon}\to
0}F_{\varepsilon}(u_{\varepsilon},\,B_{r})\leq\int_{B_{r}}L\nabla
u_{0}\cdot\nabla u_{0}+\frac{C}{N}\end{split}$
On the other hand, Proposition 3.7 implies
(4.17) $\begin{split}\int_{B_{r}\setminus\bar{B}_{s}}L\nabla u_{0}\cdot\nabla
u_{0}\leq\liminf_{{\varepsilon}\to
0}F_{\varepsilon}(u_{\varepsilon},\,B_{r}\setminus\bar{B}_{s})\end{split}$
Combining (4.16) with (4.17) and Proposition 3.7, we deduce
$\begin{split}\int_{B_{s}}L\nabla u_{0}\cdot\nabla
u_{0}\leq\liminf_{{\varepsilon}\to
0}F_{\varepsilon}(u_{\varepsilon},\,B_{s})\leq\limsup_{{\varepsilon}\to
0}F_{\varepsilon}(u_{\varepsilon},\,B_{s})\leq\int_{B_{s}}L\nabla
u_{0}\cdot\nabla u_{0}+\frac{C}{N}\end{split}$
Letting $N\to+\infty$, (4.1) follows.∎
### 4.2 A decay lemma for $F_{\varepsilon}$
The aim of this section is to prove a decay property for $F_{\varepsilon}$:
###### Lemma 4.7.
Let $\Omega\subseteq\mathbb{R}^{3}$ be a bounded domain. Let
$\omega\colon[0,\,+\infty)\to[0,\,+\infty)$ be an increasing function such
that $\lim_{s\to 0}\omega(s)=0$. Then, there exist numbers $\eta>0$,
$\theta\in(0,\,1)$ and ${\varepsilon}_{*}>0$ such that, for any ball
$B_{\rho}(x_{0})\subseteq\Omega$, any
${\varepsilon}\in(0,\,{\varepsilon}_{*}\rho)$ and any $\omega$-minimiser
$u_{\varepsilon}$ of $E_{\varepsilon}$ in $\Omega$, there holds
$\rho^{-1}F_{\varepsilon}(u_{\varepsilon},\,B_{\rho}(x_{0}))\leq\eta\quad\Longrightarrow\quad(\theta\rho)^{-1}F_{\varepsilon}(u_{\varepsilon},\,B_{\theta\rho}(x_{0}))\leq\frac{1}{2}\,\rho^{-1}F_{\varepsilon}(u_{\varepsilon},\,B_{\rho}(x_{0})).$
We will deduce Lemma 4.7 from the analogous property satisfied by the limit
functional, (2.9). To this end, we will first need to check that the limit
functional is elliptic. Recall that the tensor $L$ is defined by (2.8).
###### Proposition 4.8.
There exists a constant $\lambda>1$ so that
$\lambda^{-1}|\xi|^{2}\leq L\xi\cdot\xi\leq\lambda|\xi|^{2}$
for all $\xi\in\mathbb{R}^{m\times 3}$.
###### Proof.
The upper bound follows immediately, as
$\begin{split}4L\xi\cdot\xi=\int_{\mathbb{R}^{3}}K(z)(\xi z)\cdot(\xi
z)\,\mathrm{d}z\lesssim\int_{\mathbb{R}^{3}}g(z)|\xi
z|^{2}\,\mathrm{d}z\lesssim\left(\int_{\mathbb{R}^{3}}g(z)|z|^{2}\,\mathrm{d}z\right)|\xi|^{2}\end{split}$
and the constant on the right-hand side is finite, due to (K4). For the lower
bound, recall that $g$ is non-negative and satisfies $g(z)\geq k$ for
$\rho_{1}<|z|<\rho_{2}$. Then we have that
$\begin{split}4L\xi\cdot\xi=\int_{\mathbb{R}^{3}}K_{ij}(z)z_{\alpha}z_{\beta}\,\xi_{i,\alpha}\,\xi_{j,\beta}\,\mathrm{d}z&\geq\int_{\mathbb{R}^{3}}g(z)z_{\alpha}z_{\beta}\,\xi_{i,\alpha}\,\xi_{i,\beta}\,\mathrm{d}z\\\
&\geq k\int_{B_{\rho_{2}}\setminus
B_{\rho_{1}}}z_{\alpha}z_{\beta}\,\mathrm{d}z\,\xi_{i,\alpha}\,\xi_{i,\beta}\end{split}$
We may evaluate the inner integral in spherical coordinates, writing $z=rp$
with $p\in\mathbb{S}^{2}$ and $\mathrm{d}z=r^{2}\,\mathrm{d}p\,\mathrm{d}r$:
$\begin{split}\int_{B_{\rho_{2}}\setminus
B_{\rho_{1}}}z_{\alpha}z_{\beta}\,\mathrm{d}z=&\int_{\rho_{1}}^{\rho_{2}}\int_{\mathbb{S}^{2}}r^{4}p_{\alpha}p_{\beta}\,\mathrm{d}p\,\mathrm{d}r\\\
=&\int_{\rho_{1}}^{\rho_{2}}r^{4}\,\mathrm{d}r\int_{\mathbb{S}^{2}}p_{\alpha}p_{\beta}\,\mathrm{d}p\\\
=&\left(\frac{\rho_{2}^{5}-\rho_{1}^{5}}{5}\right)\frac{4\pi}{3}\delta_{\alpha\beta}.\end{split}$
This gives the lower bound
$L\xi\cdot\xi\geq\frac{k\pi(\rho_{2}^{5}-\rho_{1}^{5})}{15}\delta_{\alpha\beta}\xi_{i\alpha}\xi_{i\beta}=\frac{k\pi(\rho_{2}^{5}-\rho_{1}^{5})}{15}|\xi|^{2}.\qed$
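The spherical average identity $\int_{\mathbb{S}^{2}}p_{\alpha}p_{\beta}\,\mathrm{d}p=\frac{4\pi}{3}\delta_{\alpha\beta}$ used in the proof can be sanity-checked by direct numerical quadrature. The following sketch (an illustration, not part of the argument) evaluates the integral with a midpoint rule in spherical coordinates:

```python
import math

def sphere_integral(f, n=200):
    """Midpoint quadrature of f(p) over the unit sphere S^2, with
    p = (sin t cos s, sin t sin s, cos t), surface element sin t dt ds."""
    total = 0.0
    for i in range(n):
        t = math.pi * (i + 0.5) / n              # polar angle
        st, ct = math.sin(t), math.cos(t)
        for j in range(2 * n):
            s = math.pi * (j + 0.5) / n          # azimuthal angle in [0, 2*pi)
            p = (st * math.cos(s), st * math.sin(s), ct)
            total += f(p) * st
    return total * (math.pi / n) ** 2            # dt * ds cell area

# diagonal entries equal 4*pi/3; off-diagonal entries vanish by symmetry
val_diag = sphere_integral(lambda p: p[0] * p[0])
val_off = sphere_integral(lambda p: p[0] * p[1])
print(val_diag, 4 * math.pi / 3)
print(val_off)
```

The midpoint rule is spectrally accurate in the periodic azimuthal direction, so already a modest grid reproduces the constant $4\pi/3$ to high precision.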
###### Proof of Lemma 4.7.
Since $L$ is elliptic (Proposition 4.8), the limit functional (2.9) satisfies
a decay property: there exist numbers $\eta\in(0,\,+\infty)$,
$\theta\in(0,\,1/4)$ such that, for any minimiser $u_{0}\in
W^{1,2}(B_{1/4},\,\mathscr{N})$ of $E_{0}$ subject to its own boundary
conditions, there holds
(4.18) $\int_{B_{1/4}}L\nabla u_{0}\cdot\nabla
u_{0}\leq\eta\quad\Longrightarrow\quad\theta^{-1}\int_{B_{\theta}}L\nabla
u_{0}\cdot\nabla u_{0}\leq\frac{1}{4}\int_{B_{1/4}}L\nabla u_{0}\cdot\nabla
u_{0}$
(see e.g. [33, 21]). We claim that there exists ${\varepsilon}_{*}>0$ such
that, for any ball $B_{\rho}(x_{0})\subseteq\Omega$, any
${\varepsilon}\in(0,\,{\varepsilon}_{*}\rho)$ and any $\omega$-minimiser
$u_{\varepsilon}$ of $E_{\varepsilon}$,
(4.19)
$\rho^{-1}F_{\varepsilon}(u_{\varepsilon},\,B_{\rho}(x_{0}))\leq\eta\quad\Longrightarrow\quad(\theta\rho)^{-1}F_{\varepsilon}(u_{\varepsilon},\,B_{\theta\rho}(x_{0}))\leq\frac{1}{2}\,\rho^{-1}F_{\varepsilon}(u_{\varepsilon},\,B_{\rho/4}(x_{0})).$
Once (4.19) is proven, the lemma follows. Suppose, towards a contradiction,
that (4.19) does not hold. Then, we find a sequence
$(\bar{{\varepsilon}}_{j},\,\bar{\rho}_{j},\,\bar{x}_{j},\,\bar{u}_{j})_{j\in\mathbb{N}}$,
where $\bar{{\varepsilon}}_{j}/\bar{\rho}_{j}\to 0$ and $\bar{u}_{j}$ is an
$\omega$-minimiser of $E_{\bar{{\varepsilon}}_{j}}$, such that
$B_{\bar{\rho}_{j}}(\bar{x}_{j})\subseteq\Omega$ and
(4.20)
$\bar{\rho}_{j}^{-1}F_{\bar{{\varepsilon}}_{j}}(\bar{u}_{j},\,B_{\bar{\rho}_{j}}(\bar{x}_{j}))\leq\eta,\quad(\theta\bar{\rho}_{j})^{-1}F_{\bar{{\varepsilon}}_{j}}(\bar{u}_{j},\,B_{\theta\bar{\rho}_{j}}(\bar{x}_{j}))>\frac{1}{2}\,\bar{\rho}_{j}^{-1}F_{\bar{{\varepsilon}}_{j}}(\bar{u}_{j},\,B_{\bar{\rho}_{j}/4}(\bar{x}_{j})).$
We scale the space variables, defining
$u_{j}(y):=\bar{u}_{j}(\bar{x}_{j}+\bar{\rho}_{j}y)$ for $y\in\mathbb{R}^{3}$
and ${\varepsilon}_{j}:=\bar{{\varepsilon}}_{j}/\bar{\rho}_{j}\to 0$. By Lemma
4.1, $u_{j}$ is an $\omega_{j}$-minimiser for $E_{{\varepsilon}_{j}}$, where
$\omega_{j}(s):=\omega(\bar{\rho}_{j}s)$ for $s\geq 0$. However, the radii
$\bar{\rho}_{j}$ are bounded by a constant that depends on $\Omega$ only, say
$\bar{\rho}_{j}\leq R_{0}$ for any $j$. Let us define
$\omega_{0}(s):=\omega(R_{0}s)$ for any $s\geq 0$. Since $\omega$ is
increasing, we deduce that $u_{j}$ is an $\omega_{0}$-minimiser of
$E_{{\varepsilon}_{j}}$, for any $j$. Moreover, (3.11) and (4.20) imply
(4.21)
$F_{{\varepsilon}_{j}}(u_{j},\,B_{1})\leq\eta,\qquad\theta^{-1}F_{{\varepsilon}_{j}}(u_{j},\,B_{\theta})>\frac{1}{2}\,F_{{\varepsilon}_{j}}(u_{j},\,B_{1/4}).$
As a consequence, we can apply Proposition 4.2 to the sequence $u_{j}$. Up to
extraction of a (non-relabelled) subsequence, we obtain that
$u_{j}\to u_{0}$ in $L^{2}(B_{1/2})$, where $u_{0}\in
H^{1}(B_{1/2},\,\mathscr{N})$ minimises the limit functional (2.9) subject to
its own boundary condition; moreover,
(4.22)
$\lim_{j\to+\infty}F_{{\varepsilon}_{j}}(u_{j},\,B_{s})=\int_{B_{s}}L\nabla
u_{0}\cdot\nabla u_{0}\qquad\textrm{for any }s\in(0,\,1/2).$
Due to (4.21) and (4.22), we have $\int_{B_{1/4}}L\nabla u_{0}\cdot\nabla
u_{0}\leq\eta$ and hence, by (4.18),
(4.23) $\theta^{-1}\int_{B_{\theta}}L\nabla u_{0}\cdot\nabla
u_{0}\leq\frac{1}{4}\int_{B_{1/4}}L\nabla u_{0}\cdot\nabla u_{0}.$
On the other hand, from (4.21) and (4.22) we obtain
$\theta^{-1}\int_{B_{\theta}}L\nabla u_{0}\cdot\nabla
u_{0}\geq\frac{1}{2}\int_{B_{1/4}}L\nabla u_{0}\cdot\nabla u_{0},$
which contradicts (4.23). Therefore, (4.19) follows. ∎
### 4.3 Proof of Theorems A and B
###### Proof of Theorem A.
Let $\eta$, $\theta$ and ${\varepsilon}_{*}$ be given by Lemma 4.7. Let
$B_{r_{0}}(x_{0})\subset\\!\subset\Omega$ be a ball,
${\varepsilon}\in(0,\,{\varepsilon}_{*}r_{0})$, and let $u_{\varepsilon}$ be a
minimiser of $E_{\varepsilon}$ in $\mathscr{A}$ such that
(4.24)
$r_{0}^{-1}F_{\varepsilon}(u_{\varepsilon},\,B_{r_{0}}(x_{0}))\leq\eta.$
By a scaling argument, using (3.11), we can assume without loss of generality
that $x_{0}=0$ and $r_{0}=1$.
###### Step 1 (Campanato estimate for radii $\rho\gtrsim{\varepsilon}$).
Thanks to (4.24), we can apply Lemma 4.7 iteratively, and we deduce that
$\theta^{-n}F_{\varepsilon}(u_{\varepsilon},\,B_{\theta^{n}})\leq
2^{-n}F_{\varepsilon}(u_{\varepsilon},\,B_{1})\stackrel{{\scriptstyle\eqref{Holder0}}}{{\leq}}2^{-n}\eta$
for any integer $n\geq 1$ such that
$\theta^{n}{\varepsilon}_{*}\geq{\varepsilon}$. As a consequence, there exist
positive numbers $\alpha$ (depending on $\theta$ only) and $C_{1}$ (depending
on $\eta$, $\theta$, ${\varepsilon}_{*}$ only) such that
$\rho^{-1}F_{\varepsilon}(u_{\varepsilon},\,B_{\rho})\leq
C_{1}\rho^{\alpha}\qquad\textrm{for any }\rho\in({\varepsilon},\,1).$
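The passage from the dyadic iterates to this power law is standard; as an illustration (with hypothetical values of $\theta$ and $\eta$, not taken from the paper), one can check numerically that the iterated bound $F_{\varepsilon}(u_{\varepsilon},\,B_{\theta^{n}})\leq\theta^{n}2^{-n}\eta$ stays below a Campanato envelope $C\rho^{1+\alpha}$ with $\alpha=\log 2/\log(1/\theta)$:

```python
import math

def decay_envelope(theta, eta, n_max=30):
    """Check that the iterated decay bound F(B_{theta^n}) <= theta^n 2^{-n} eta
    stays below a Campanato envelope C rho^{1+alpha} at the dyadic radii."""
    alpha = math.log(2) / math.log(1 / theta)  # exponent produced by the iteration
    C = eta / theta                            # one extra 1/theta absorbs the gap
    for n in range(n_max):
        rho = theta ** n
        F_bound = rho * 2.0 ** (-n) * eta      # bound after n applications of the lemma
        assert F_bound <= C * rho ** (1 + alpha) + 1e-12
    return alpha

print(decay_envelope(theta=0.25, eta=1.0))  # alpha = log 2 / log 4 = 0.5
```

Here the extra factor $1/\theta$ in $C$ accounts for radii $\rho$ falling strictly between two dyadic values $\theta^{n+1}$ and $\theta^{n}$.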
By applying Proposition 3.4, and possibly modifying the value of $C_{1}$, we
obtain
(4.25)
$\fint_{B_{\rho}}\left|u_{\varepsilon}-\fint_{B_{\rho}}u_{\varepsilon}\right|^{2}\leq
C_{1}\rho^{\alpha}\qquad\textrm{for any
}\rho\in(\lambda_{1}{\varepsilon},\,1)$
where $\lambda_{1}:=\max\\{1/2,\,1/(2{\varepsilon}_{1})\\}$ and
${\varepsilon}_{1}$ is given by Proposition 3.4.
###### Step 2 (Campanato estimate for radii $\rho\lesssim{\varepsilon}$).
We need to show that an estimate similar to (4.25) holds for
$\rho<\lambda_{1}{\varepsilon}$ as well. To this end, we define
(4.26) $p:=\frac{2\alpha}{3}+3,\qquad\beta:=\frac{2\alpha+9}{3\alpha+9}.$
We have $p>3$, $0<\beta<1$. Let
$m_{\varepsilon}:=\fint_{B_{2{\varepsilon}^{\beta}}}u_{\varepsilon}$
and let $\chi_{\varepsilon}$ be the characteristic function of the ball
$B_{{\varepsilon}^{\beta}}$. Since $\nabla K_{\varepsilon}$ has zero average,
from the Euler-Lagrange equation (Proposition 3.1) we obtain
$\begin{split}\nabla(\Lambda\circ u_{\varepsilon})&=(\nabla
K_{\varepsilon})*(u_{\varepsilon}-m_{\varepsilon})\\\
&=(\chi_{\varepsilon}\nabla
K_{\varepsilon})*(u_{\varepsilon}-m_{\varepsilon})+\left((1-\chi_{\varepsilon})\nabla
K_{\varepsilon}\right)*(u_{\varepsilon}-m_{\varepsilon}).\end{split}$
Let $\tilde{\chi}_{\varepsilon}$ be the characteristic function of the ball
$B_{2{\varepsilon}^{\beta}}$. Since $\chi_{\varepsilon}\nabla K_{\varepsilon}$
is supported on $B_{{\varepsilon}^{\beta}}$, we deduce
$\begin{split}\nabla(\Lambda\circ u_{\varepsilon})&=(\chi_{\varepsilon}\nabla
K_{\varepsilon})*(\tilde{\chi}_{\varepsilon}(u_{\varepsilon}-m_{\varepsilon}))+\left((1-\chi_{\varepsilon})\nabla
K_{\varepsilon}\right)*(u_{\varepsilon}-m_{\varepsilon})\quad\textrm{in
}B_{{\varepsilon}^{\beta}}.\end{split}$
We apply Hölder’s inequality, and then Young’s inequality for the convolution:
(4.27) $\begin{split}\left\|\nabla(\Lambda\circ
u_{\varepsilon})\right\|_{L^{p}(B_{{\varepsilon}^{\beta}})}&\lesssim\|(\chi_{\varepsilon}\nabla
K_{\varepsilon})*(\tilde{\chi}_{\varepsilon}(u_{\varepsilon}-m_{\varepsilon}))\|_{L^{p}(\mathbb{R}^{3})}\\\
&\qquad\qquad+{\varepsilon}^{3\beta/p}\left\|((1-\chi_{\varepsilon})\nabla
K_{\varepsilon})*(u_{\varepsilon}-m_{\varepsilon})\right\|_{L^{\infty}(\mathbb{R}^{3})}\\\
&\lesssim\left\|\nabla
K_{\varepsilon}\right\|_{L^{1}(B_{{\varepsilon}^{\beta}})}\left\|u_{\varepsilon}-m_{\varepsilon}\right\|_{L^{p}(B_{2{\varepsilon}^{\beta}})}\\\
&\qquad\qquad+{\varepsilon}^{3\beta/p}\left\|\nabla
K_{\varepsilon}\right\|_{L^{1}(\mathbb{R}^{3}\setminus
B_{{\varepsilon}^{\beta}})}\left\|u_{\varepsilon}-m_{\varepsilon}\right\|_{L^{\infty}(\mathbb{R}^{3})}\end{split}$
We bound the terms on the right-hand side. For ${\varepsilon}$ small enough
(so that $2{\varepsilon}^{\beta}\geq\lambda_{1}{\varepsilon}$), the inequality
(4.25) implies
$\left\|u_{\varepsilon}-m_{\varepsilon}\right\|^{2}_{L^{2}(B_{2{\varepsilon}^{\beta}})}\lesssim{\varepsilon}^{(3+\alpha)\beta}.$
Since $\|u_{\varepsilon}\|_{L^{\infty}(\mathbb{R}^{3})}\leq C$, by
interpolation we obtain
(4.28)
$\begin{split}\left\|u_{\varepsilon}-m_{\varepsilon}\right\|_{L^{p}(B_{2{\varepsilon}^{\beta}})}&\lesssim\left\|u_{\varepsilon}-m_{\varepsilon}\right\|_{L^{2}(B_{2{\varepsilon}^{\beta}})}^{2/p}\lesssim{\varepsilon}^{(3+\alpha)\beta/p}.\end{split}$
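The interpolation step above rests on the elementary bound $\|f\|_{L^{p}}\leq\|f\|_{L^{\infty}}^{1-2/p}\|f\|_{L^{2}}^{2/p}$ for $p\geq 2$; a discrete analogue of it (with a made-up sample vector) can be checked directly:

```python
import math

def norms(f, p):
    """Discrete analogues of the L^2, L^p and L^infinity norms of a sample."""
    l2 = math.sqrt(sum(x * x for x in f))
    lp = sum(abs(x) ** p for x in f) ** (1.0 / p)
    linf = max(abs(x) for x in f)
    return l2, lp, linf

# interpolation: ||f||_p <= ||f||_inf^(1 - 2/p) * ||f||_2^(2/p) for p >= 2,
# which is how the L^2 smallness upgrades to an L^p bound when f is bounded
f = [0.7 * math.sin(k) for k in range(100)]
p = 5.0
l2, lp, linf = norms(f, p)
assert lp <= linf ** (1 - 2 / p) * l2 ** (2 / p)
print(lp, linf ** (1 - 2 / p) * l2 ** (2 / p))
```

The inequality follows from $\sum|x|^{p}\leq(\max|x|)^{p-2}\sum|x|^{2}$, which is exactly the pointwise estimate used in the proof.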
By a change of variable, we have
(4.29) $\left\|\nabla
K_{\varepsilon}\right\|_{L^{1}(B_{{\varepsilon}^{\beta}})}\leq{\varepsilon}^{-1}\left\|\nabla
K\right\|_{L^{1}(\mathbb{R}^{3})}$
and
(4.30) $\begin{split}\left\|\nabla
K_{\varepsilon}\right\|_{L^{1}(\mathbb{R}^{3}\setminus
B_{{\varepsilon}^{\beta}})}&={\varepsilon}^{-1}\int_{\mathbb{R}^{3}\setminus
B_{{\varepsilon}^{\beta-1}}}\left\|\nabla K(z)\right\|\mathrm{d}z\\\
&\leq{\varepsilon}^{-1}\int_{\mathbb{R}^{3}\setminus
B_{{\varepsilon}^{\beta-1}}}\left\|\nabla
K(z)\right\|\frac{\left|z\right|^{3}}{{\varepsilon}^{3\beta-3}}\,\mathrm{d}z\\\
&\leq{\varepsilon}^{2-3\beta}\int_{\mathbb{R}^{3}}\left\|\nabla
K(z)\right\|\left|z\right|^{3}\mathrm{d}z,\end{split}$
where the integral on the right-hand side is finite by Assumption (K6).
Combining (4.27), (4.28), (4.29) and (4.30), and using that $u_{\varepsilon}$
is bounded in $L^{\infty}(\mathbb{R}^{3})$, we obtain
$\left\|\nabla(\Lambda\circ
u_{\varepsilon})\right\|_{L^{p}(B_{{\varepsilon}^{\beta}})}\lesssim{\varepsilon}^{(3+\alpha)\beta/p-1}+{\varepsilon}^{2-3\beta+3\beta/p}$
By simple algebra, from (4.26) we obtain
$\frac{(3+\alpha)\beta}{p}-1=2-3\beta+\frac{3\beta}{p}=0$
so $\|\nabla(\Lambda\circ
u_{\varepsilon})\|_{L^{p}(B_{{\varepsilon}^{\beta}})}$ is bounded. Thanks to
(3.8), we deduce that $\|\nabla
u_{\varepsilon}\|_{L^{p}(B_{{\varepsilon}^{\beta}})}$ is bounded too. Since
$p>3$, by Sobolev embedding we conclude that
(4.31) $[u_{\varepsilon}]_{C^{\mu}(B_{{\varepsilon}^{\beta}})}\leq C,$
where
(4.32) $\mu:=1-\frac{3}{p}=\frac{2\alpha}{2\alpha+9}.$
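The "simple algebra" relating (4.26) and (4.32) can be verified in exact rational arithmetic; the following check (an aside, not part of the proof) confirms the two exponent identities, the closed form for $\mu$, and the claims $p>3$, $0<\beta<1$ for several sample values of $\alpha$:

```python
from fractions import Fraction

def exponents(alpha):
    """p, beta, mu as defined in (4.26) and (4.32), in exact rational arithmetic."""
    p = Fraction(2, 3) * alpha + 3
    beta = (2 * alpha + 9) / (3 * alpha + 9)
    mu = 1 - Fraction(3) / p
    return p, beta, mu

for a in (Fraction(1, 2), Fraction(1), Fraction(7, 3), Fraction(10)):
    p, beta, mu = exponents(a)
    assert (3 + a) * beta / p - 1 == 0       # first identity in the display
    assert 2 - 3 * beta + 3 * beta / p == 0  # second identity in the display
    assert mu == 2 * a / (2 * a + 9)         # the closed form in (4.32)
    assert p > 3 and 0 < beta < 1            # the claims stated after (4.26)
```

Since the identities are rational functions of $\alpha$ that vanish at more points than their degree allows otherwise, checking them at a few rational values is already conclusive.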
Now, let us fix $\rho\leq\lambda_{1}{\varepsilon}$. We have $\beta<1$ and
hence, $\rho\leq\lambda_{1}{\varepsilon}\leq{\varepsilon}^{\beta}$ for
${\varepsilon}$ small enough. Then, (4.31) implies
(4.33)
$\begin{split}\fint_{B_{\rho}}\left|u_{\varepsilon}-\fint_{B_{\rho}}u_{\varepsilon}\right|^{2}&\lesssim\rho^{2\mu}\qquad\textrm{for
any }\rho\in(0,\,\lambda_{1}{\varepsilon}].\end{split}$
###### Step 3 (Conclusion).
By combining (4.25), (4.32) and (4.33), we deduce that
$\fint_{B_{\rho}}\left|u_{\varepsilon}-\fint_{B_{\rho}}u_{\varepsilon}\right|^{2}\leq
C_{2}\,\rho^{\min(\alpha,\,2\mu)}=C_{2}\,\rho^{2\mu}$
for any radius $\rho\in(0,\,1)$ and for some constant $C_{2}>0$ that does not
depend on ${\varepsilon}$, $\rho$. Then, Campanato embedding gives an
${\varepsilon}$-independent bound on the $\mu$-Hölder semi-norm of
$u_{\varepsilon}$ on $B_{1/2}$. This completes the proof. ∎
###### Proof of Theorem B.
Let $u_{\varepsilon}$ be a minimiser of $E_{\varepsilon}$ in $\mathscr{A}$. By
the results of [42], there exists a (non-relabelled) subsequence such that
$u_{\varepsilon}\to u_{0}$ strongly in $L^{2}(\Omega)$, where $u_{0}$ is a
minimiser of the limit functional (2.9). Take a point $x_{0}\in\Omega\setminus
S[u_{0}]$, where $S[u_{0}]$ is defined by (2.11). By definition of $S[u_{0}]$,
there exists a number $r_{0}>0$ such that
$r_{0}^{-1}\int_{B_{r_{0}}(x_{0})}L\nabla u_{0}\cdot\nabla
u_{0}\leq\frac{\eta}{2},$
where $\eta$ is given by Theorem A. Proposition 4.2 implies
$r_{0}^{-1}F_{\varepsilon}(u_{\varepsilon},\,B_{r_{0}}(x_{0}))\leq\eta$
for any ${\varepsilon}$ small enough and hence, by Theorem A,
$[u_{\varepsilon}]_{C^{\mu}(B_{r_{0}}(x_{0}))}$ is uniformly bounded. Then,
the Arzelà–Ascoli theorem implies that $u_{\varepsilon}\to u_{0}$ uniformly in
$B_{r_{0}}(x_{0})$. ∎
## 5 Generalisation to finite-thickness boundary conditions
In this section, we discuss a variant of the minimisation problem, where we
prescribe $u$ in a neighbourhood of $\partial\Omega$ only. Let
$\Omega_{\varepsilon}\supset\\!\supset\Omega$ be a larger domain, possibly
depending on ${\varepsilon}$. We consider the functional
(5.1)
$\begin{split}\tilde{E}_{\varepsilon}(u):=-\frac{1}{2{\varepsilon}^{2}}\int_{\Omega_{\varepsilon}\times\Omega_{\varepsilon}}K_{\varepsilon}(x-y)u(x)\cdot
u(y)\,\mathrm{d}x\,\mathrm{d}y+\frac{1}{{\varepsilon}^{2}}\int_{\Omega}\psi_{s}(u(x))\,\mathrm{d}x+C_{\varepsilon},\end{split}$
where $C_{\varepsilon}$ is given by (2.6). As before, we take a map
$u_{\mathrm{bd}}\in H^{1}(\mathbb{R}^{3},\,\mathcal{Q})$ that satisfies (BD)
and define the admissible class
(5.2) $\tilde{\mathscr{A}}_{\varepsilon}:=\left\\{u\in
L^{2}(\Omega_{\varepsilon},\,\mathbb{R}^{m})\colon\psi_{s}(u)\in
L^{1}(\Omega),\ u=u_{\mathrm{bd}}\textrm{ a.e. on
}\Omega_{\varepsilon}\setminus\Omega\right\\}\\!.$
The thickness of the boundary layer $\Omega_{\varepsilon}\setminus\Omega$ must
be related to the decay properties of the kernel $K$. More precisely, in
addition to (K1)–(K6), (H1)–(H6), we assume that
1. (K′)
There exist $q\geq 2$, $\tau>0$ such that
$\displaystyle\int_{\mathbb{R}^{3}}g(z)\left|z\right|^{q}\mathrm{d}z<+\infty$
and
$\operatorname{dist}(\Omega,\,\partial\Omega_{\varepsilon})\geq\tau{\varepsilon}^{1-2/q}$
for any ${\varepsilon}>0$.
###### Remark 5.1.
In case the kernel $K$ is compactly supported, we can allow for a boundary
layer of thickness proportional to ${\varepsilon}$. More precisely, we can
replace the assumption (K′) with the following: there exists $R_{0}>0$ such
that $\mathrm{supp}(K)\subseteq B_{R_{0}}$ and
$\operatorname{dist}(\Omega,\,\partial\Omega_{\varepsilon})\geq
R_{0}{\varepsilon}$ for any ${\varepsilon}>0$. The proofs in this case remain
essentially unchanged.
Under these assumptions, we can prove the analogues of Theorems A and B.
###### Theorem 5.1.
Assume that the conditions (K1)–(K6), (H1)–(H6), (BD) and (K′) are satisfied.
Then, there exist positive numbers $\eta$, ${\varepsilon}_{*}$, $M$ and
$\mu\in(0,\,1)$ such that, for any ball
$B_{r_{0}}(x_{0})\subset\\!\subset\Omega$, any
${\varepsilon}\in(0,\,{\varepsilon}_{*}r_{0})$, and any minimiser
$\tilde{u}_{\varepsilon}$ of $\tilde{E}_{\varepsilon}$ in
$\tilde{\mathscr{A}}_{\varepsilon}$, there holds
$r_{0}^{-1}F_{\varepsilon}(\tilde{u}_{\varepsilon},\,B_{r_{0}}(x_{0}))\leq\eta\qquad\Longrightarrow\qquad
r_{0}^{\mu}\,[\tilde{u}_{\varepsilon}]_{C^{\mu}(B_{r_{0}/2}(x_{0}))}\leq M.$
###### Theorem 5.2.
Assume that the conditions (K1)–(K6), (H1)–(H6), (BD) and (K′) are satisfied.
Let $\tilde{u}_{\varepsilon}$ be a minimiser of $\tilde{E}_{\varepsilon}$ in
$\tilde{\mathscr{A}}_{\varepsilon}$. Then, up to extraction of a (non-
relabelled) subsequence, we have
$\tilde{u}_{\varepsilon}\to u_{0}\qquad\textrm{locally uniformly in
}\Omega\setminus S[u_{0}],$
where $u_{0}$ is a minimiser of the functional (2.9) in $\mathscr{A}$ and
$S[u_{0}]$ is defined by (2.11).
The proofs of Theorems 5.1 and 5.2 are largely similar to those of Theorems A
and B. They rely on the following results:
###### Lemma 5.3.
For any ${\varepsilon}$, there exists a minimiser $\tilde{u}_{\varepsilon}$
for $\tilde{E}_{\varepsilon}$ in $\tilde{\mathscr{A}}_{\varepsilon}$ and it
satisfies the Euler-Lagrange equation,
(5.3)
$\Lambda(\tilde{u}_{\varepsilon}(x))=\int_{\Omega_{\varepsilon}}K_{\varepsilon}(x-y)\tilde{u}_{\varepsilon}(y)\,\mathrm{d}y$
for a.e. $x\in\Omega$.
The proof of Lemma 5.3 is identical to that of Proposition 3.1, so we skip it
for brevity. We remark that the equation (5.3) can be written as
$\Lambda(\tilde{u}_{\varepsilon})=K_{\varepsilon}*(\tilde{u}_{\varepsilon}\chi_{\varepsilon})\qquad\textrm{a.e.
on }\Omega,$
where $\chi_{\varepsilon}$ is the characteristic function of
$\Omega_{\varepsilon}$. In particular, the uniform strict physicality of
$\tilde{u}_{\varepsilon}$ follows from (5.3), exactly as in Proposition 3.3.
###### Lemma 5.4.
Let $\tilde{u}_{\varepsilon}$ be a minimiser for $\tilde{E}_{\varepsilon}$ in
$\tilde{\mathscr{A}}_{\varepsilon}$, identified with its extension by
$u_{\mathrm{bd}}$ to $\mathbb{R}^{3}$. Then, $\tilde{u}_{\varepsilon}$ is an
$\omega$-minimiser for $E_{\varepsilon}$ in $\Omega$, where
$\omega\colon[0,\,+\infty)\to[0,\,+\infty)$ is an increasing function that
depends only on $\Omega$, $K$, $\mathcal{Q}$ and satisfies $\lim_{s\to
0}\omega(s)=0$.
Once Lemma 5.4 is proven, Theorem 5.1 and Theorem 5.2 follow by the same
arguments as before. Before we give the proof of Lemma 5.4, we introduce the
auxiliary function
(5.4)
$H_{\varepsilon}(x):=\frac{1}{{\varepsilon}^{2}}\int_{\mathbb{R}^{3}\setminus(\Omega_{\varepsilon}-x)/{\varepsilon}}K(z)\,\mathrm{d}z\qquad\textrm{for
any }x\in\mathbb{R}^{3}.$
###### Lemma 5.5.
Under the assumption (K′), $H_{\varepsilon}\to 0$ uniformly in $\Omega$, as
${\varepsilon}\to 0$.
###### Proof.
For any $x\in\Omega$, we have
$B_{\tau{\varepsilon}^{1-2/q}}(x)\subseteq\Omega_{\varepsilon}$ by (K′), and
hence
$B_{\tau{\varepsilon}^{-2/q}}\subseteq(\Omega_{\varepsilon}-x)/{\varepsilon}$,
$\mathbb{R}^{3}\setminus(\Omega_{\varepsilon}-x)/{\varepsilon}\subseteq\mathbb{R}^{3}\setminus
B_{\tau{\varepsilon}^{-2/q}}$. Then, the definition (5.4) of $H_{\varepsilon}$
gives
$\begin{split}\left\|H_{\varepsilon}(x)\right\|&\leq\frac{1}{{\varepsilon}^{2}}\int_{\mathbb{R}^{3}\setminus(\Omega_{\varepsilon}-x)/{\varepsilon}}\left\|K(z)\right\|\mathrm{d}z\\\
&\leq\frac{1}{{\varepsilon}^{2}}\int_{\mathbb{R}^{3}\setminus
B_{\tau{\varepsilon}^{-2/q}}}\left\|K(z)\right\|\mathrm{d}z\\\
&\leq\int_{\mathbb{R}^{3}\setminus
B_{\tau{\varepsilon}^{-2/q}}}\left\|K(z)\right\|\frac{\left|z\right|^{q}}{\tau^{q}}\,\mathrm{d}z\end{split}$
and the right-hand side tends to zero as ${\varepsilon}\to 0$, due to (K′). ∎
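To see Lemma 5.5 in action for a concrete kernel, take the (hypothetical, scalar) Gaussian $K(z)=e^{-|z|^{2}}$ with $q=2$ and $\tau=1$; the tail bound $\|H_{\varepsilon}(x)\|\leq{\varepsilon}^{-2}\int_{|z|>\tau{\varepsilon}^{-2/q}}\|K(z)\|\,\mathrm{d}z$ can then be evaluated in closed form and seen to vanish as ${\varepsilon}\to 0$, despite the ${\varepsilon}^{-2}$ prefactor:

```python
import math

def gaussian_tail(eps, tau=1.0, q=2):
    """H_eps-style tail for the hypothetical kernel K(z) = exp(-|z|^2):
    eps^{-2} times the integral of K over {|z| > tau * eps^{-2/q}},
    evaluated in closed form via the radial integral of r^2 exp(-r^2)."""
    R = tau * eps ** (-2.0 / q)
    radial = 0.5 * R * math.exp(-R * R) + 0.25 * math.sqrt(math.pi) * math.erfc(R)
    return eps ** (-2) * 4.0 * math.pi * radial

# the tail decays rapidly as eps -> 0, despite the eps^{-2} prefactor
vals = [gaussian_tail(e) for e in (0.5, 0.3, 0.2, 0.1)]
assert all(a > b for a, b in zip(vals, vals[1:]))
print(vals[-1])
```

The closed form uses $\int_{R}^{\infty}r^{2}e^{-r^{2}}\,\mathrm{d}r=\tfrac{R}{2}e^{-R^{2}}+\tfrac{\sqrt{\pi}}{4}\operatorname{erfc}(R)$, obtained by integrating by parts.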
###### Proof of Lemma 5.4.
We write
$\Omega_{\varepsilon}^{c}:=\mathbb{R}^{3}\setminus\Omega_{\varepsilon}$. Let
$\tilde{u}_{\varepsilon}$ be a minimiser for $\tilde{E}_{\varepsilon}$ in
$\tilde{\mathscr{A}}_{\varepsilon}$. Let $B:=B_{\rho}(x_{0})\subseteq\Omega$
be a ball, and let $v\in L^{\infty}(\mathbb{R}^{3},\,\mathcal{Q})$ be such
that $v=\tilde{u}_{\varepsilon}$ a.e. on $\mathbb{R}^{3}\setminus B$. By
comparing (2.1) with (5.1), and using (K2), we obtain
(5.5) $\begin{split}&E_{\varepsilon}(v)-\tilde{E}_{\varepsilon}(v)\\\
&=-\frac{1}{{\varepsilon}^{2}}\int_{\Omega_{\varepsilon}\times\Omega_{\varepsilon}^{c}}K_{\varepsilon}(x-y)v(x)\cdot
v(y)\,\mathrm{d}x\,\mathrm{d}y-\frac{1}{2{\varepsilon}^{2}}\int_{\Omega_{\varepsilon}^{c}\times\Omega_{\varepsilon}^{c}}K_{\varepsilon}(x-y)v(x)\cdot
v(y)\,\mathrm{d}x\,\mathrm{d}y\\\
&=-\frac{1}{{\varepsilon}^{2}}\int_{B\times\Omega_{\varepsilon}^{c}}K_{\varepsilon}(x-y)v(x)\cdot
u_{\mathrm{bd}}(y)\,\mathrm{d}x\,\mathrm{d}y-\frac{1}{{\varepsilon}^{2}}\int_{(\Omega\setminus
B)\times\Omega_{\varepsilon}^{c}}K_{\varepsilon}(x-y)\tilde{u}_{\varepsilon}(x)\cdot
u_{\mathrm{bd}}(y)\,\mathrm{d}x\,\mathrm{d}y\\\
&\qquad\qquad\qquad-\frac{1}{2{\varepsilon}^{2}}\int_{\Omega_{\varepsilon}^{c}\times\Omega_{\varepsilon}^{c}}K_{\varepsilon}(x-y)u_{\mathrm{bd}}(x)\cdot
u_{\mathrm{bd}}(y)\,\mathrm{d}x\,\mathrm{d}y\end{split}$
The second and third integrals on the right-hand side are independent of $v$.
We bound the first integral by making the change of variable
$y=x+{\varepsilon}z$ and applying Fubini's theorem:
$\begin{split}\frac{1}{{\varepsilon}^{2}}\left|\int_{B\times\Omega_{\varepsilon}^{c}}K_{\varepsilon}(x-y)v(x)\cdot
u_{\mathrm{bd}}(y)\,\mathrm{d}x\,\mathrm{d}y\right|\leq\left\|u_{\mathrm{bd}}\right\|_{L^{\infty}(\mathbb{R}^{3})}\int_{B}\left\|H_{\varepsilon}(x)\right\|\left|v(x)\right|\,\mathrm{d}x\end{split}$
where $H_{\varepsilon}$ is defined by (5.4). Since $u_{\mathrm{bd}}$ and $v$
both take values in the bounded set $\mathcal{Q}$, and since
$\left|B\right|\lesssim\rho^{3}\lesssim\rho$, by Lemma 5.5 there exists an
increasing function $\omega\colon[0,\,+\infty)\to[0,\,+\infty)$, depending
only on $H_{\varepsilon}$ and $\mathcal{Q}$, such that $\lim_{s\to
0}\omega(s)=0$ and
(5.6)
$\begin{split}\frac{1}{{\varepsilon}^{2}}\left|\int_{B\times\Omega_{\varepsilon}^{c}}K_{\varepsilon}(x-y)v(x)\cdot
u_{\mathrm{bd}}(y)\,\mathrm{d}x\,\mathrm{d}y\right|\leq\frac{\omega({\varepsilon})\,\rho}{2}.\end{split}$
From (5.5) and (5.6), we deduce
(5.7)
$\begin{split}E_{\varepsilon}(\tilde{u}_{\varepsilon})-\tilde{E}_{\varepsilon}(\tilde{u}_{\varepsilon})\leq
E_{\varepsilon}(v)-\tilde{E}_{\varepsilon}(v)+\omega({\varepsilon})\,\rho.\end{split}$
On the other hand,
(5.8)
$\tilde{E}_{\varepsilon}(\tilde{u}_{\varepsilon})\leq\tilde{E}_{\varepsilon}(v),$
because $\tilde{u}_{\varepsilon}$ is a minimiser for $\tilde{E}_{\varepsilon}$
and $v\in\tilde{\mathscr{A}}_{\varepsilon}$. Combining (5.7) and (5.8), the
lemma follows. ∎
#### Acknowledgements.
The authors are grateful to Arghir D. Zarnescu, who brought the problem to
their attention. Part of this work was carried out when the authors were
visiting the _Centro Internazionale per la Ricerca Matematica_ (CIRM) in
Trento (Italy), supported by the Research-in-Pairs program. The authors would
like to thank the CIRM for its hospitality. This research has been partially
supported by the Basque Government through the BERC 2018-2021 program; and by
Spanish Ministry of Economy and Competitiveness MINECO through BCAM Severo
Ochoa excellence accreditation SEV-2017-0718 and through project
MTM2017-82184-R, funded by (AEI/FEDER, UE), with acronym “DESFLU”. G. C. was
partially supported by GNAMPA-INdAM.
## References
* [1] John M Ball and Apala Majumdar. Nematic liquid crystals: from Maier-Saupe to a continuum theory. Molecular crystals and liquid crystals, 525(1):1–11, 2010.
* [2] P. Bauman, J. Park, and D. Phillips. Analysis of nematic liquid crystals with disclination lines. Archive for Rational Mechanics and Analysis, 205(3):795–826, Sep 2012.
* [3] José C Bellido, Carlos Mora-Corral, and Pablo Pedregal. Hyperelasticity as a ${\Gamma}$-limit of peridynamics when the horizon goes to zero. Calculus of Variations and Partial Differential Equations, 54(2):1643–1670, 2015.
* [4] F. Bethuel, H. Brezis, and F. Hélein. Ginzburg-Landau Vortices. Progress in Nonlinear Differential Equations and their Applications, 13. Birkhäuser Boston Inc., Boston, MA, 1994.
* [5] Mark J Bowick, David Kinderlehrer, Govind Menon, and Charles Radin. Mathematics and Materials, volume 23. American Mathematical Soc., 2017.
* [6] G. Canevari. Line defects in the small elastic constant limit of a three-dimensional Landau-de Gennes model. Archive for Rational Mechanics and Analysis, 223(2):591–676, Feb 2017.
* [7] Giacomo Canevari, Apala Majumdar, and Bianca Stroffolini. Minimizers of a Landau–de Gennes energy with a subquadratic elastic energy. Archive for Rational Mechanics and Analysis, 233(3):1169–1210, 2019.
* [8] Andres Contreras and Xavier Lamy. Biaxial escape in nematics at low temperature. Journal of Functional Analysis, 272(10):3987 – 3997, 2017.
* [9] Andres Contreras and Xavier Lamy. Singular perturbation of manifold-valued maps with anisotropic energy. arXiv preprint arXiv:1809.05170, 2018.
* [10] Andres Contreras, Xavier Lamy, and Rémy Rodiac. On the convergence of minimizers of singular perturbation functionals. Indiana University mathematics journal, 67(4):1665–1682, 2018.
* [11] RNP Creyghton and BM Mulder. Scratching a 50-year itch with elongated rods. Molecular Physics, 116(21-22):2742–2756, 2018.
* [12] Pierre-Gilles de Gennes and Jacques Prost. The physics of liquid crystals (international series of monographs on physics). Oxford University Press, USA, 2:4, 1995.
* [13] Giovanni Di Fratta, Jonathan M. Robbins, Valeriy Slastikov, and Arghir Zarnescu. Half-integer point defects in the Q-tensor theory of nematic liquid crystals. J. Nonlinear Sci., 26:121–140, 2016.
* [14] Giovanni Di Fratta, Jonathan M. Robbins, Valeriy Slastikov, and Arghir Zarnescu. Landau-de Gennes corrections to the Oseen-Frank theory of nematic liquid crystals. Archive for Rational Mechanics and Analysis, 236(2):1089–1125, 2020\.
* [15] SP Eveson. Compactness criteria for integral operators in ${L^{\infty}}$ and ${L^{1}}$ spaces. Proceedings of the American Mathematical Society, 123(12):3709–3716, 1995.
* [16] Frederick C Frank. I. Liquid crystals. On the theory of liquid crystals. Discussions of the Faraday Society, 25:19–28, 1958.
* [17] Ioana C Gârlea and Bela M Mulder. The Landau-de Gennes approach revisited: A minimal self-consistent microscopic theory for spatially inhomogeneous nematic liquid crystals. The Journal of chemical physics, 147.
* [18] David Gilbarg and Neil S Trudinger. Elliptic partial differential equations of second order. springer, 2015.
* [19] D. Golovaty and J. A. Montero. On minimizers of a Landau-de Gennes energy functional on planar domains. Arch. Rational Mech. Anal., 213(2):447–490, 2014.
* [20] Jiequn Han, Yi Luo, Wei Wang, Pingwen Zhang, and Zhifei Zhang. From microscopic theory to macroscopic theory: a systematic study on modeling for liquid crystals. Archive for Rational Mechanics and Analysis, 215(3):741–809, 2015\.
* [21] R. Hardt, D. Kinderlehrer, and F.-H. Lin. Existence and partial regularity of static liquid crystal configurations. Comm. Math. Phys., 105(4):547–570, 1986.
* [22] A. Henao, D. Majumdar and A. Pisante. Uniaxial versus biaxial character of nematic equilibria in three dimensions. Calculus of Variations and Partial Differential Equations, 56(2):55, Apr 2017.
* [23] R. Ignat and R. Jerrard. Renormalized energy between vortices in some Ginzburg-Landau models on 2-dimensional Riemannian manifolds. Preprint arXiv 1910.02921.
* [24] R. Ignat, L. Nguyen, V. Slastikov, and A. Zarnescu. Stability of the melting hedgehog in the Landau-de Gennes theory of nematic liquid crystals. Arch. Rational Mech. Anal., 215(2):633–673, 2015.
* [25] Radu Ignat, Luc Nguyen, Valeriy Slastikov, and Arghir Zarnescu. Instability of point defects in a two-dimensional nematic liquid crystal model. Annales de l’Institut Henri Poincaré C, Analyse non linéaire, 33(4):1131 – 1152, 2016.
* [26] Radu Ignat, Luc Nguyen, Valeriy Slastikov, and Arghir Zarnescu. Stability of point defects of degree $\pm 1/2$ in a two-dimensional nematic liquid crystal model. Calc. Var., 55:119, 2016.
* [27] Radu Ignat, Luc Nguyen, Valeriy Slastikov, and Arghir Zarnescu. Symmetry and multiplicity of solutions in a two-dimensional Landau–de Gennes model for liquid crystals. Arch. Ration. Mech. Anal., 237:1421–1473, 2020.
* [28] J Katriel, GF Kventsel, GR Luckhurst, and TJ Sluckin. Free energies in the Landau and molecular field approaches. Liquid Crystals, 1(4):337–355, 1986.
* [29] Georgy Kitavtsev, J. M. Robbins, Valeriy Slastikov, and Arghir Zarnescu. Liquid crystal defects in the Landau-de Gennes theory in two dimensions — Beyond the one-constant approximation. Mathematical Models and Methods in Applied Sciences, 26(14):2769–2808, 2016.
* [30] Sirui Li, Wei Wang, and Pingwen Zhang. Local well-posedness and small deborah limit of a molecule-based Q-tensor system. Discrete & Continuous Dynamical Systems-Series B, 20(8), 2015.
* [31] Yuning Liu and Wei Wang. The Oseen–Frank limit of Onsager’s molecular theory for liquid crystals. Archive for Rational Mechanics and Analysis, pages 1–30, 2017.
* [32] Yuning Liu and Wei Wang. The small Deborah number limit of the Doi–Onsager equation without hydrodynamics. Journal of Functional Analysis, 275(10):2740–2793, 2018.
* [33] S. Luckhaus. Partial Hölder continuity for minima of certain energies among maps into a Riemannian manifold. Indiana Univ. Math. J., 37(2):349–367, 1988.
* [34] Wilhelm Maier and Alfred Saupe. Eine einfache molekular-statistische Theorie der nematischen kristallinflüssigen Phase. Teil l1. Zeitschrift für Naturforschung A, 14(10):882–889, 1959.
* [35] Apala Majumdar and Arghir Zarnescu. Landau–de Gennes theory of nematic liquid crystals: the Oseen–Frank limit and beyond. Archive for rational mechanics and analysis, 196(1):227–280, 2010\.
* [36] Nigel J Mottram and Christopher JP Newton. Introduction to Q-tensor theory. arXiv preprint arXiv:1409.3542, 2014.
* [37] L. Nguyen and A. Zarnescu. Refined approximation for minimizers of a Landau-de Gennes energy functional. Cal. Var. Partial Differential Equations, 47(1-2):383–432, 2013\.
* [38] Lars Onsager. The effects of shape on the interaction of colloidal particles. Annals of the New York Academy of Sciences, 51(1):627–659, 1949\.
* [39] R Tyrrell Rockafellar. Convex analysis. Number 28. Princeton University Press, 1970.
* [40] Stewart A Silling and Richard B Lehoucq. Convergence of peridynamics to classical elasticity theory. Journal of Elasticity, 93(1):13, 2008.
* [41] Jamie M Taylor. Maximum entropy methods as the bridge between microscopic and macroscopic theory. Journal of Statistical Physics, 164(6):1429–1459, 2016.
* [42] Jamie M Taylor. Oseen–Frank-type theories of ordered media as the ${\Gamma}$-limit of a non-local mean-field free energy. Mathematical Models and Methods in Applied Sciences, 28(04):615–657, 2018.
* [43] Jamie M Taylor. $\Gamma$-convergence of a mean-field model of a chiral doped nematic liquid crystal to the oseen–frank description of cholesterics. Nonlinearity, 33(6):3062, 2020.
|
# Fundamental solutions and Hadamard states for a scalar field with arbitrary
boundary conditions on asymptotically AdS spacetimes
Claudio Dappiaggi1,2,3,a, Alessio Marta4,5,6,b,
1 Dipartimento di Fisica – Università di Pavia, Via Bassi 6, 27100 Pavia,
Italy.
2 INFN, Sezione di Pavia – Via Bassi 6, 27100 Pavia, Italy.
3 Istituto Nazionale di Alta Matematica – Sezione di Pavia, Via Ferrata, 5,
27100 Pavia, Italy.
4 Dipartimento di Matematica – Università di Milano, Via Cesare Saldini, 50 –
I-20133 Milano, Italy.
5 INFN, Sezione di Milano – Via Celoria, 16 – I-20133 Milano, Italy.
6 Istituto Nazionale di Alta Matematica – Sezione di Milano, Via Saldini, 50,
I-20133 Milano, Italy.
a<EMAIL_ADDRESS>, b<EMAIL_ADDRESS>
###### Abstract
We consider the Klein-Gordon operator on an $n$-dimensional asymptotically
anti-de Sitter spacetime $(M,g)$ together with arbitrary boundary conditions
encoded by a self-adjoint pseudodifferential operator on $\partial M$ of order
up to $2$. Using techniques from $b$-calculus and a propagation of
singularities theorem, we prove that there exist advanced and retarded
fundamental solutions, characterizing in addition their structural and
microlocal properties. We apply this result to the problem of constructing
Hadamard two-point distributions. These are bi-distributions which are weak
bi-solutions of the underlying equations of motion with a prescribed form of
their wavefront set and whose anti-symmetric part is proportional to the
difference between the advanced and the retarded fundamental solutions. In
particular, under a suitable restriction of the class of admissible boundary conditions and setting the mass to zero, we prove their existence by extending to the case under scrutiny a deformation argument which is typically used on globally hyperbolic spacetimes with empty boundary.
## 1 Introduction
The $n$-dimensional anti-de Sitter spacetime (AdSn) is a maximally symmetric
solution of the vacuum Einstein equations with a negative cosmological
constant. From a geometric viewpoint it is noteworthy since it is not globally
hyperbolic and it possesses a timelike conformal boundary. Due to these
features the study of hyperbolic partial differential equations on top of this
background becomes particularly interesting, especially since the initial
value problem does not yield a unique solution unless suitable boundary
conditions are assigned. Therefore several authors have investigated the
properties of the Klein-Gordon equation on an AdS spacetime, see e.g. [Bac11,
EnKa13, Hol12, War13, Vas12] to quote some notable examples, which have
inspired our analysis.
A natural extension of the framework outlined in the previous paragraph consists of considering a more general class of geometries, namely the so-called $n$-dimensional asymptotically AdS spacetimes, which share the same behaviour as AdSn in a neighbourhood of conformal infinity. In this case the analysis of partial differential equations such as the Klein-Gordon one becomes more involved due to the larger admissible class of backgrounds and, in particular, due to the lack of isometries of the metric. Noteworthy is the recent analysis by Gannot and Wrochna [GW18], in which, using techniques proper to $b$-calculus, they investigated the structural properties of the Klein-Gordon operator with Robin boundary conditions. Among the several results proven there, we highlight in particular the theorem of propagation of singularities and the existence of advanced and retarded fundamental solutions.
Yet, as strongly advocated in [DDF18], the class of boundary conditions which are of interest in concrete models is larger than the one considered in [GW18], a notable example in this direction being the so-called Wentzell boundary conditions, see e.g. [Coc14, DFJ18, FGGR02, Ue73, Za15]. For this reason, in [DM20] we started an investigation aimed at generalizing the results of [GW18] by proving a theorem of propagation of singularities for the Klein-Gordon operator on an asymptotically anti-de Sitter spacetime $M$ such that the boundary condition is implemented by a $b$-pseudodifferential operator $\Theta\in\Psi^{k}_{b}(\partial M)$ with $k\leq 2$, see Section 3.1 for the definitions.
Starting from this result, in this work we proceed with our investigation and, still using techniques proper to $b$-calculus, we discuss the existence of advanced and retarded fundamental solutions for the Klein-Gordon operator with prescribed boundary conditions. The first main result that we prove is the following:
###### Theorem 1.1.
Let $P_{\Theta}$ be the Klein-Gordon operator as per Equation (20) where
$\Theta$ abides by Hypothesis 4.1. Then there exist unique retarded $(+)$ and
advanced $(-)$ propagators, that is continuous operators
$G_{\Theta}^{\pm}:\dot{\mathcal{H}}^{-1,m+1}_{\pm}(M)\rightarrow\mathcal{H}^{1,m}_{\pm}(M)$
such that $P_{\Theta}G_{\Theta}^{\pm}=\mathbb{I}$ on
$\dot{\mathcal{H}}^{-1,m+1}_{\pm}(M)$ and
$G_{\Theta}^{\pm}P_{\Theta}=\mathbb{I}$ on
$\mathcal{H}^{1,m}_{\pm,\Theta}(M)$. Furthermore, $G_{\Theta}^{\pm}$ is a
continuous map from $\dot{\mathcal{H}}_{0}^{-1,\infty}(M)$ to
$\mathcal{H}_{loc}^{1,\infty}(M)$ where the subscript $0$ indicates that we
consider only functions of compact support.
Here the spaces $\dot{\mathcal{H}}^{-1,s+1}_{\pm}(M)$,
$\mathcal{H}^{1,s}_{\pm}(M)$ as well as
$\dot{\mathcal{H}}_{0}^{-1,\infty}(M)$, $\mathcal{H}_{loc}^{1,\infty}(M)$ and
$\mathcal{H}^{1,m}_{\pm,\Theta}(M)$ are characterized in Definition 3.5 and in
Section 4, see in particular Equations (41b), (41a) and (42).
In addition, thanks to the theorem of propagation of singularities proven in [DM20], we characterize the wavefront set of the advanced $(-)$ and of the retarded $(+)$ fundamental solutions. This result allows us to discuss a notable application, strongly inspired by the so-called algebraic approach to quantum field theory, see e.g. [BDFY15] for a recent review. In this framework a key rôle is played by the so-called Hadamard two-point distributions, positive bi-distributions on the underlying background characterized by the following defining properties: they are bi-solutions of the underlying equations of motion, their antisymmetric part is proportional to the difference between the advanced and retarded fundamental solutions, and their wavefront set has a prescribed form, see e.g. [KM13]. If the underlying background is globally hyperbolic with empty boundary, the existence of these two-point distributions is a by-product of the standard Hörmander propagation of singularities theorem and of a deformation argument due to Fulling, Narcowich and Wald, see [FNW81].
In the scenarios investigated in this work this conclusion no longer applies, since we are considering asymptotically AdS spacetimes which possess in particular a conformal boundary. At the level of Hadamard two-point distributions this has far-reaching consequences, since even the standard form of the wavefront set has to be modified to take into account the reflection of singularities at the boundary, see [DF18] and Definition 5.3 below. Our second main result consists of showing that, under a suitable restriction on the allowed class of boundary conditions, see Hypothesis 4.1 in the main body of this work, it is possible to prove the existence of Hadamard two-point distributions. First we focus on static spacetimes and, using spectral techniques, we construct explicitly an example which, in the language of theoretical physics, is often referred to as the ground state. Subsequently we show that, starting from this datum and using the theorem of propagation of singularities proven in [DM20], we can use also in this framework a deformation argument to infer the existence of a Hadamard two-point distribution on a generic $n$-dimensional asymptotically AdS spacetime. It is important to observe that this result is in agreement with, and complements, the one obtained in [Wro17]. To summarize, our second main statement is the following, see also Definition 4.2 for the notion of static and of physically admissible boundary conditions:
###### Theorem 1.2.
Let $(M,g)$ be a globally hyperbolic, asymptotically anti-de Sitter spacetime
and let $(M_{S},g_{S})$ be its static deformation as per Lemma 5.2. Let
$\Theta_{K}$ be a static and physically admissible boundary condition so that
the Klein-Gordon operator $P_{\Theta_{K}}$ on $(M_{S},g_{S})$ admits a
Hadamard two-point function as per Proposition 5.5. Then there exists a
Hadamard two-point function on $(M,g)$ for the associated Klein-Gordon
operator with boundary condition ruled by $\Theta_{K}$.
It is important to stress that the deformation argument forces us, in the last part of the paper, to restrict the class of admissible boundary conditions; notable examples such as those of Wentzell type are not included and require a separate analysis of their own [ADM21].
The paper is structured as follows. In Section 2 we recollect the main geometric data, particularly the notions of globally hyperbolic spacetime with timelike boundary and of asymptotically AdS spacetime. In Section 3 we discuss the analytic data at the heart of our analysis. We start with a succinct review of $b$-calculus in Section 3.1, followed by one of twisted Sobolev spaces and energy forms. In Section 3.4 we formulate the dynamical problem we are interested in, both in a strong and in a weak sense. In Section 4 we obtain our first main result, namely the existence of advanced and retarded fundamental solutions for all boundary conditions abiding by Hypothesis 4.1. In addition we investigate the structural properties of these propagators and we characterize their wavefront set. In Section 5 we investigate the existence of Hadamard two-point distributions in the case of vanishing mass. First, in Sections 5.1 and 5.2, using spectral techniques we prove their existence on static spacetimes, albeit for a restricted class of admissible boundary conditions, see Hypothesis 4.1 and Definition 4.2. Subsequently, in Section 5.3, we extend to the case in hand a deformation argument due to Fulling, Narcowich and Wald to prove the existence of Hadamard two-point distributions on a generic $n$-dimensional asymptotically AdS spacetime.
## 2 Geometric Data
In this section our main goal is to fix notations and conventions as well as to introduce the three main geometric data that we shall use in our analysis, namely globally hyperbolic spacetimes with timelike boundary, asymptotically anti-de Sitter spacetimes and manifolds of bounded geometry. We assume that the reader is acquainted with the basic notions of Lorentzian geometry, cf. [ON83]. Throughout this paper, by spacetime we always indicate a smooth, connected, oriented and time-oriented manifold $M$ of dimension $\dim M=n\geq 2$ equipped with a smooth Lorentzian metric $g$ of signature $(-,+,\dots,+)$. With $C^{\infty}(M)$ (resp. $C^{\infty}_{0}(M)$) we indicate the space of smooth (resp. smooth and compactly supported) functions on $M$, while $\dot{C}^{\infty}(M)$ (resp. $\dot{C}^{\infty}_{0}(M)$) stands for the collection of all smooth (resp. smooth and compactly supported) functions vanishing at $\partial M$ with all their derivatives. Among all spacetimes, the following class plays a notable rôle [AFS18].
###### Definition 2.1.
Let $(M,g)$ be a spacetime with non empty boundary $\iota:\partial M\to M$. We
say that $(M,g)$
1. 1.
has a timelike boundary if $(\partial M,\iota^{*}g)$ is a smooth, Lorentzian
manifold,
2. 2.
is globally hyperbolic if it does not contain closed causal curves and if, for
every $p,q\in M$, $J^{+}(p)\cap J^{-}(q)$ is either empty or compact.
If both conditions are met, we call $(M,g)$ a globally hyperbolic spacetime
with timelike boundary and we indicate with $\mathring{M}=M\setminus\partial
M$ the interior of $M$.
Observe that, for simplicity, we assume throughout the paper that $\partial M$ is also connected. Notice in addition that Definition 2.1 reduces to the standard notion of a globally hyperbolic spacetime when $\partial M=\emptyset$. The following theorem, proven in [AFS18], gives a more explicit characterization of the class of manifolds we are interested in, and it extends a similar result valid when $\partial M=\emptyset$.
###### Theorem 2.1.
Let $(M,g)$ be an $n$-dimensional globally hyperbolic spacetime with timelike
boundary. Then it is isometric to a Cartesian product $\mathbb{R}\times\Sigma$
where $\Sigma$ is an $(n-1)$-dimensional Riemannian manifold. The associated
line element reads
$ds^{2}=-\beta d\tau^{2}+\kappa_{\tau},$ (1)
where $\beta\in C^{\infty}(\mathbb{R}\times\Sigma;(0,\infty))$ while
$\tau:\mathbb{R}\times\Sigma\to\mathbb{R}$ plays the rôle of time coordinate.
In addition $\mathbb{R}\ni\tau\mapsto\kappa_{\tau}$ identifies a family of Riemannian metrics, smoothly dependent on $\tau$ and such that, calling $\Sigma_{\tau}\doteq\\{\tau\\}\times\Sigma$, each $(\Sigma_{\tau},\kappa_{\tau})$ is a Cauchy surface with non empty boundary.
###### Remark 2.1.
Observe that a notable consequence of this theorem is that, calling
$\iota_{\partial M}:\partial M\to M$ the natural embedding map, then
$(\partial M,h)$ where $h=\iota^{*}_{\partial M}g$ is a globally hyperbolic
spacetime. In particular the associated line element reads
$ds^{2}|_{\partial M}=-\beta|_{\partial M}d\tau^{2}+\kappa_{\tau}|_{\partial
M}.$
In addition to Definition 2.1 we consider another notable class of spacetimes
introduced in [GW18].
###### Definition 2.2.
Let $M$ be an n-dimensional manifold with non empty boundary $\partial M$.
Suppose that $\mathring{M}=M\setminus\partial M$ is equipped with a smooth
Lorentzian metric $g$ and that
* a)
If $x\in\mathcal{C}^{\infty}(M)$ is a boundary function, then
$\widehat{g}=x^{2}g$ extends smoothly to a Lorentzian metric on $M$.
* b)
The pullback $h=\iota^{*}_{\partial M}\widehat{g}$ via the natural embedding map $\iota_{\partial M}:\partial M\to M$ defines a smooth Lorentzian metric.
* c)
$\widehat{g}^{-1}(dx,dx)=1$ on $\partial M$.
Then $(M,g)$ is called an asymptotically anti-de Sitter (AdS) spacetime. In
addition, if $(M,\widehat{g})$ is a globally hyperbolic spacetime with
timelike boundary, cf. Definition 2.1, then we call $(M,g)$ a globally
hyperbolic asymptotically AdS spacetime.
Observe that conditions a), b) and c) are actually independent of the choice of the boundary function $x$, and the pullback $h$ is determined only up to a conformal multiple, since one always has the freedom of multiplying the boundary function $x$ by any nowhere vanishing $\Omega\in C^{\infty}(M)$. Such freedom plays no rôle in our investigation and we shall not consider it further. Hence, for definiteness, the reader can assume that a global boundary function $x$ has been fixed once and for all.
As a direct consequence of the collar neighbourhood theorem and of the freedom in the choice of the boundary function in Definition 2.2, the latter can always be engineered in such a way that, given any $p\in\partial M$, it is possible to find a neighbourhood $U\subset\partial M$ containing $p$ and $\epsilon>0$ such that on $U\times[0,\epsilon)$ the line element associated to $g$ reads
$ds^{2}=\frac{dx^{2}+h_{x}}{x^{2}}$ (2)
where $h_{x}$ is a family of Lorentzian metrics depending smoothly on $x$ such
that $h_{0}\equiv h$.
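As a concrete illustration of Definition 2.2 (our own example, not part of the original text), consider the Poincaré patch of AdSn, with $x>0$ playing the rôle of boundary function:

```latex
% Poincare patch of AdS_n: g blows up at x = 0, while the conformally
% rescaled metric \widehat{g} = x^2 g extends smoothly to the boundary.
g = \frac{1}{x^{2}}\Big(-dt^{2} + \sum_{i=1}^{n-2} dy_{i}^{2} + dx^{2}\Big),
\qquad
\widehat{g} \doteq x^{2} g = -dt^{2} + \sum_{i=1}^{n-2} dy_{i}^{2} + dx^{2}.
```

Here $\widehat{g}$ is the Minkowski metric, which extends smoothly to $x=0$, the induced boundary metric $h=\iota^{*}_{\partial M}\widehat{g}$ is $(n-1)$-dimensional Minkowski, and $\widehat{g}^{-1}(dx,dx)=1$ everywhere, so conditions a), b) and c) of Definition 2.2 are all satisfied, with the family $h_{x}=-dt^{2}+\sum_{i}dy_{i}^{2}$ independent of $x$.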
###### Remark 2.2.
It is important to stress that the notion of asymptotically AdS spacetime
given in Definition 2.2 is actually more general than the one given in [AD99],
which is more commonly used in the general relativity and theoretical physics
community. Observe in particular that $h_{x}$ in Equation (2) does not need to be an Einstein metric, nor is $\partial M$ required to be diffeomorphic to $\mathbb{R}\times\mathbb{S}^{n-2}$. Since we prefer to keep a close connection to both [GW18] and [DM20], we stick to their nomenclature.
###### Remark 2.3.
Throughout the paper, with the symbols $\tau$ and $x$ we shall always indicate
respectively the time coordinate as in Equation (1) and the spatial coordinate
as in Equation (2).
### 2.1 Manifolds of bounded geometry
To conclude this section we introduce manifolds of bounded geometry, since they are the natural arena in which one can define Sobolev spaces when the underlying background has a non empty boundary. We give a succinct survey of the main concepts and of those results which will play a key rôle in our analysis. The interested reader can find more details in [Sch01, AGN16, GS13, GOW17] as well as in [DDF18, Sec. 2.1 & 2.2].
###### Definition 2.3.
A Riemannian manifold $(N,h)$ with empty boundary is of bounded geometry if
* a)
The injectivity radius $r_{inj}(N)$ is strictly positive,
* b)
$N$ is of totally bounded curvature, namely for all $k\in\mathbb{N}\cup\\{0\\}$ there exists a constant $C_{k}>0$ such that $\|(\nabla)^{k}R\|_{L^{\infty}(N)}<C_{k}$.
This definition cannot be applied verbatim to a manifold with non empty boundary and, to extend it, we need to introduce a preliminary concept.
###### Definition 2.4.
Let $(N,h)$ be a Riemannian manifold of bounded geometry and let $(Y,\iota_{Y})$ be a codimension $k$, closed, embedded smooth submanifold with an inward pointing, unit normal vector field $\nu_{Y}$. The submanifold $(Y,\iota^{*}_{Y}h)$ is of bounded geometry if:
* a)
The second fundamental form $II$ of $Y$ in $N$ and all its covariant
derivatives along $Y$ are bounded,
* b)
There exists $\varepsilon_{Y}>0$ such that the map $\phi_{\nu_{Y}}:Y\times(-\varepsilon_{Y},\varepsilon_{Y})\rightarrow N$ defined as $(y,z)\mapsto\phi_{\nu_{Y}}(y,z)\doteq\exp_{y}(z\nu_{Y,y})$ is injective.
These last two definitions can be combined to introduce the following notable class of Riemannian manifolds:
###### Definition 2.5.
Let $(N,h)$ be a Riemannian manifold with $\partial N\neq\emptyset$. We say
that $(N,h)$ is of bounded geometry if there exists a Riemannian manifold of
bounded geometry $(N^{\prime},h^{\prime})$ of the same dimension as $N$ such
that:
* a)
$N\subset N^{\prime}$ and $h=h^{\prime}|_{N}$
* b)
$(\partial N,\iota^{*}h^{\prime})$ is a bounded geometry submanifold of
$N^{\prime}$, where $\iota:\partial N\rightarrow N^{\prime}$ is the embedding
map.
###### Remark 2.4.
Observe that Definition 2.5 is independent from the choice of $N^{\prime}$.
For completeness, we stress that an equivalent definition which does not
require introducing $N^{\prime}$ can be formulated, see for example [Sch01].
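A minimal example of Definition 2.5 (our own illustration): the Euclidean half-space sits inside Euclidean space,

```latex
% Half-space as a Riemannian manifold of bounded geometry with boundary,
% with N' the ambient Euclidean space of Definition 2.5.
N = [0,\infty)\times\mathbb{R}^{n-1} \subset N' = \mathbb{R}^{n},
\qquad h = \delta_{E}\big|_{N}.
```

Indeed $(\mathbb{R}^{n},\delta_{E})$ has infinite injectivity radius and vanishing curvature, the boundary hyperplane $\partial N$ has vanishing second fundamental form, and the normal exponential map $\phi_{\nu_{\partial N}}$ is injective for every $\varepsilon_{\partial N}>0$, so the conditions of Definitions 2.3, 2.4 and 2.5 are all met.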
Definition 2.5 applies to a Riemannian scenario, but we are particularly interested in Lorentzian manifolds. In this case the notion of bounded geometry can be introduced as discussed in [GOW17] for manifolds without boundary, the extension to non empty boundary being straightforward. More precisely, let us start from $(N,h)$, a Riemannian manifold of bounded geometry such that $\dim N=n$. In addition we call $BT^{m}_{m^{\prime}}(B_{n}(0,\frac{r_{inj}(N)}{2}),\delta_{E})$ the space of all bounded tensors on the ball $B_{n}(0,\frac{r_{inj}(N)}{2})$ centered at the origin of the Euclidean space $(\mathbb{R}^{n},\delta_{E})$, where $\delta_{E}$ stands for the flat metric. For every $m,m^{\prime}\in\mathbb{N}\cup\\{0\\}$, we denote with $BT^{m}_{m^{\prime}}(N)$ the space of all rank $(m,m^{\prime})$ tensors $T$ on $N$ such that, for any $p\in N$, calling $T_{p}\doteq(\exp_{p}\circ e_{p})^{*}T$ where $e_{p}:(\mathbb{R}^{n},\delta_{E})\to(T_{p}N,h_{p})$ is a linear isometry, the family $\\{T_{p}\\}_{p\in N}$ is bounded in $BT^{m}_{m^{\prime}}(B_{n}(0,\frac{r_{inj}(N)}{2}),\delta_{E})$.
###### Definition 2.6.
A smooth Lorentzian manifold $(M,g)$ is of bounded geometry if there exists a
Riemannian metric $\widehat{g}$ on $M$ such that:
* a)
$(M,\widehat{g})$ is of bounded geometry.
* b)
$g\in BT^{0}_{2}(M,\widehat{g})$ and $g^{-1}\in BT^{2}_{0}(M,\widehat{g})$.
On top of a Riemannian (or of a Lorentzian) manifold of bounded geometry
$(N,h)$ we can introduce $H^{k}(N)\equiv W^{2,k}(N)$ which is the completion
of
$\mathcal{E}^{k}(N)\doteq\\{f\in C^{\infty}(N)\;|\;f,\nabla
f,\dots,(\nabla)^{k}f\in L^{2}(N)\\},$
with respect to the norm
$\|f\|_{W^{2,k}(N)}=\left(\sum\limits_{i=0}^{k}\|(\nabla)^{i}f\|^{2}_{L^{2}(N)}\right)^{\frac{1}{2}},$
where $\nabla$ is the covariant derivative built out of the Riemannian metric
$h$, while $(\nabla)^{i}$ indicates the $i$-th covariant derivative. This
notation is employed to disambiguate with $\nabla^{i}=h^{ij}\nabla_{j}$.
###### Remark 2.5.
One might wonder why the assumption of bounded geometry is necessary, since it seems to play no rôle in the above characterization. The reason is actually twofold. On the one hand, it is possible to give a local definition of Sobolev spaces via a suitable choice of charts, which yields in turn a global counterpart via a partition of unity argument. Such a definition is a priori different from the one given above unless one works with manifolds of bounded geometry, see [GS13]. On the other hand, this alternative characterization of Sobolev spaces allows one to introduce, on manifolds of bounded geometry, a suitable generalization of the standard Lions-Magenes trace, which will play an important rôle especially in Section 5.1.
Observe that, henceforth, we shall always assume implicitly that all manifolds
that we consider are of bounded geometry.
## 3 Analytic Preliminaries
In this section we introduce the main analytic tools that play a key rôle in
our investigation. We start by recollecting the main results from [DM20] which
are, in turn, based on [GW18] and [Vas10, Vas12].
### 3.1 On b-pseudodifferential operators
In the following we assume for definiteness that $(M,g)$ is a globally
hyperbolic, asymptotically $AdS$ spacetime of bounded geometry as per
Definition 2.2 and Definition 2.6. In addition we assume that the reader is
familiar with the basic ideas and tools behind $b$-geometry, first introduced
by R. Melrose in [Mel92]. Here we limit ourselves to fix notations and
conventions, following the presentation of [GMP14].
In the following with ${}^{b}TM$ we indicate the $b$-tangent bundle which is a
vector bundle whose fibres are
${}^{b}T_{p}M=\left\\{\begin{array}[]{ll}T_{p}M&p\in\mathring{M}\\\
\textrm{span}_{\mathbb{R}}(x\partial_{x},T_{p}\partial M)&p\in\partial
M\end{array}\right.,$
where $x$ is the global boundary function introduced in Definition 2.2, here
promoted to coordinate. Similarly we can define per duality the $b$-cotangent
bundle, ${}^{b}T^{*}M$ which is a vector bundle whose fibers are
${}^{b}T^{*}_{p}M=\left\\{\begin{array}[]{ll}T^{*}_{p}M&p\in\mathring{M}\\\
\textrm{span}_{\mathbb{R}}(\frac{dx}{x},T^{*}_{p}\partial M)&p\in\partial
M\end{array}\right.$
For future convenience, whenever we fix a chart $U$ of $M$ centered at a point
$p\in\partial M$, we consider $(x,y_{i},\xi,\eta_{i})$ and
$(x,y_{i},\zeta,\eta_{i})$, $i=1,\dots,n-1=\dim\partial M$, local coordinates
respectively of $T^{*}M|_{U}$ and of ${}^{b}T^{*}M|_{U}$. Since we are considering globally hyperbolic spacetimes, hence endowed with a distinguished time direction $\tau$, cf. Equation (1), we implicitly identify $y_{n-1}\equiv\tau$, so that $\eta_{n-1}$ denotes the momentum conjugate to $\tau$. In addition, observe that there exists a natural
projection map
$\pi:T^{*}M\to{}^{b}T^{*}M,\quad(x,y_{i},\xi,\eta_{i})\mapsto\pi(x,y_{i},\xi,\eta_{i})=(x,y_{i},x\xi,\eta_{i}),$
which is non-injective. This feature prompts the definition of a very
important structure in our investigation, namely the compressed $b$-cotangent
bundle
${}^{b}\dot{T}^{*}M\doteq\pi[T^{*}M],$ (3)
which is a vector sub-bundle of ${}^{b}T^{*}M$, such that
${}^{b}\dot{T}^{*}_{p}M\equiv T^{*}_{p}M$ whenever $p\in\mathring{M}$. The
last geometric structure that we shall need in this work is the $b$-cosphere bundle, which is realized as the quotient manifold obtained via the action of the dilation group on ${}^{b}T^{*}M\setminus\\{0\\}$, namely
${}^{b}S^{*}M\doteq{\raisebox{1.99997pt}{${}^{b}T^{*}M\setminus\\{0\\}$}\left/\raisebox{-1.99997pt}{$\mathbb{R}^{+}$}\right.}.$
(4)
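To see explicitly why the projection $\pi$ introduced above fails to be injective, and hence why the compressed bundle in Equation (3) is a proper sub-bundle over the boundary, observe (our own remark) that over a point with $x=0$

```latex
% Over the boundary the conormal momentum \xi is lost: \zeta = x\xi = 0.
\pi(0,y_{i},\xi,\eta_{i}) = (0,y_{i},0,\eta_{i})
\qquad \text{for every } \xi\in\mathbb{R},
```

so the fibre ${}^{b}\dot{T}^{*}_{p}M$ over $p\in\partial M$ consists of covectors with $\zeta=0$, i.e. it is naturally identified with $T^{*}_{p}\partial M$.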
We remark that, if we consider a local chart $U\subset M$ such that
$U\cap\partial M\neq\emptyset$ and the local coordinates
$(x,y_{i},\zeta,\eta_{i})$, $i=1,\dots,n-1=\dim\partial M$, on
${}^{b}T^{*}_{U}M\doteq{}^{b}T^{*}M|_{U}$, we can build a natural counterpart
on ${}^{b}S^{*}_{U}M$, namely $(x,y_{i},\widehat{\zeta},\widehat{\eta}_{i})$
where $\widehat{\zeta}=\frac{\zeta}{|\eta_{n-1}|}$ and
$\widehat{\eta}_{i}=\frac{\eta_{i}}{|\eta_{n-1}|}$. On top of these geometric
structures we can define two natural classes of operators.
###### Definition 3.1.
Let $(M,g)$ be a globally hyperbolic, asymptotically $AdS$ spacetime. We call
* •
$\textbf{Diff}_{b}(M)\doteq\bigoplus_{k=0}^{\infty}\textbf{Diff}^{k}_{b}(M)$
the graded, differential operator algebra generated by $\Gamma({}^{b}TM)$, the space of smooth sections of the $b$-tangent bundle.
* •
$\Psi_{b}^{m}(M)$ the set of properly supported $b$-pseudodifferential
operators ($b-\Psi$DOs) of order $m$, $m\in\mathbb{R}$.
The notion of $b-\Psi$DOs is strictly intertwined with $S^{m}({}^{b}T^{*}M)$, the set of all symbols of order $m$ on ${}^{b}T^{*}M$, and in particular there
exists a principal symbol map
$\sigma_{b,m}:\Psi_{b}^{m}(M)\to
S^{m}({}^{b}T^{*}M)/S^{m-1}({}^{b}T^{*}M),\quad A\mapsto a=\sigma_{b,m}(A),$
(5)
which gives rise to an isomorphism
$\Psi_{b}^{m}(M)/\Psi_{b}^{m-1}(M)\simeq
S^{m}({}^{b}T^{*}M)/S^{m-1}({}^{b}T^{*}M).$
In addition we can endow the space of symbols $S^{m}({}^{b}T^{*}M)$ with a
Fréchet topology induced by the family of seminorms
$\|a\|_{N}\ =\sup_{(z,k_{z})\in K_{i}\times\mathbb{R}^{n}}\max_{|\alpha|+|\gamma|\leq N}\dfrac{|\partial_{z}^{\alpha}\partial_{k_{z}}^{\gamma}a(z,k_{z})|}{\langle k_{z}\rangle^{m-|\gamma|}},$
where $\langle k_{z}\rangle=(1+|k_{z}|^{2})^{\frac{1}{2}}$, while
$\\{K_{i}\\}_{i\in I}$, $I$ being an index set, is an exhaustion of $M$ by
compact subsets. Hence one can endow $S^{m}({}^{b}T^{*}M)$ with a metric $d$
as follows
$d(a,b)=\sum_{N\in\mathbb{N}}2^{-N}\dfrac{\|a-b\|_{N}}{1+\|a-b\|_{N}},\qquad\forall a,b\in S^{m}({}^{b}T^{*}M).$
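The passage from a countable family of seminorms to a metric is the standard Fréchet-space construction. The following is a minimal numerical sketch of our own (with toy seminorms on pairs of numbers, not the actual symbol seminorms above):

```python
def frechet_metric(seminorms):
    # Combine a countable family of seminorms (here a finite truncation)
    # into a bounded metric, mirroring the formula
    #   d(a, b) = sum_N 2^{-N} * p_N(a - b) / (1 + p_N(a - b)).
    # Each entry of `seminorms` is a callable computing the seminorm of
    # the difference of its two arguments.
    def d(a, b):
        return sum(
            2.0 ** (-N) * p(a, b) / (1.0 + p(a, b))
            for N, p in enumerate(seminorms)
        )
    return d

# Toy "seminorms" measuring the component-wise difference of two pairs.
p0 = lambda a, b: abs(a[0] - b[0])
p1 = lambda a, b: abs(a[1] - b[1])
d = frechet_metric([p0, p1])
```

Each summand is bounded by $2^{-N}$, so the series converges for any pair of symbols even when individual seminorms are large; this is precisely the rôle of the quotient $t/(1+t)$ in the formula above.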
In view of these data, the following definition is natural:
###### Definition 3.2.
A subset of $\Psi_{b}^{m}(M)$ is called bounded if the associated subset of
$S^{m}({}^{b}T^{*}M)$ is bounded with respect to the Fréchet topology.
Finally we can recall the notion of elliptic $b-\Psi$DO and of wavefront set
both of a single and of a family of pseudodifferential operators, cf. [Hör03]:
###### Definition 3.3.
A b-pseudodifferential operator $A\in\Psi^{m}_{b}(M)$ is elliptic at a point
$q_{0}\in\ {}^{b}T^{*}M\setminus\\{0\\}$ if there exists $c\in
S^{-m}(^{b}T^{*}M)$ such that
$\sigma_{b,m}(A)\cdot c-1\in S^{-1}(^{b}T^{*}M),$
in a conic neighbourhood of $q_{0}$. We call $ell_{b}(A)$ the (conic) subset
of ${}^{b}T^{*}M\setminus\\{0\\}$ in which $A$ is elliptic.
###### Definition 3.4.
For any $P\in\Psi^{m}_{b}(M)$, we say that $(z_{0},k_{z_{0}})\notin
WF^{\prime}_{b}(P)$ if the associated symbol $p(z,k_{z})$ is such that, for
all multi-indices $\alpha,\gamma$ and for every $N\in\mathbb{N}$, there exists
a constant $C_{N,\alpha,\gamma}$ such that
$|\partial_{z}^{\alpha}\partial^{\gamma}_{k_{z}}p(z,k_{z})|\leq
C_{N,\alpha,\gamma}\langle k_{z}\rangle^{-N},$
for $z$ in a neighbourhood of $z_{0}$ and $k_{z}$ in a conic neighbourhood of
$k_{z_{0}}$.
Similarly, let $\mathcal{A}$ be a bounded subset of $\Psi_{b}^{m}(M)$ and let
$q\in{}^{b}T^{*}M$. We say that $q\not\in WF_{b}^{\prime}(\mathcal{A})$ if
there exists $B\in\Psi_{b}(M)$, elliptic at $q$, such that
$\\{BA:A\in\mathcal{A}\\}$ is a bounded subset of $\Psi_{b}^{-\infty}(M)$.
To conclude this part of the section, we stress that, in order to study the
behaviour of a b-pseudodifferential operator at the boundary, it is useful to
introduce the notion of indicial family, cf. [GW18]. Let $A\in\Psi_{b}^{m}(M)$.
For a fixed boundary function $x$, cf. Definition 2.2, and for any
$v\in\mathcal{C}^{\infty}(\partial M)$ we define the indicial family
$\widehat{N}(A)(s):C^{\infty}(\partial M)\to C^{\infty}(\partial M)$ as:
$\widehat{N}(A)(s)v=x^{-is}A\left(x^{is}u\right)|_{\partial M}$ (6)
where $u\in\mathcal{C}^{\infty}(M)$ is any function such that $u|_{\partial
M}=v$.
### 3.2 Twisted Sobolev Spaces
In this section we introduce the second main analytic ingredient that we need
in our investigation. To this end, once more we consider $(M,g)$ a globally
hyperbolic, asymptotically $AdS$ spacetime and the associated Klein-Gordon
operator $P\doteq\Box_{g}-m^{2}$, where $m^{2}$ plays the rôle of a mass term,
while $\Box_{g}$ is the D’Alembert wave operator built out of the metric $g$.
It is convenient to introduce the parameter
$\nu=\frac{1}{2}\sqrt{(n-1)^{2}+4m^{2}},$ (7)
which is constrained to be positive, that is $m^{2}>-\frac{(n-1)^{2}}{4}$.
This is known in the literature as the Breitenlohner-Freedman bound [BF82]. In
the spirit of [GW18] and [DM20, Sec.
3.2] we introduce the following, finitely generated, space of twisted
differential operators
$\textbf{Diff}^{1}_{\nu}(M)\doteq\\{x^{\nu_{-}}Dx^{-\nu_{-}}\;|\;D\in\textbf{Diff}^{1}(M)\\},$
where $\nu_{-}=\frac{n-1}{2}-\nu$, $n=\dim M$. Starting from these data, and
calling with $x$ and $d\mu_{g}$ respectively the global boundary function, cf.
Definition 2.2, and the metric induced volume measure we set
$\mathcal{L}^{2}(M)\doteq
L^{2}(M;x^{2}d\mu_{g})\;\;\textrm{and}\;\mathcal{H}^{1}(M)\doteq\\{u\in\mathcal{L}^{2}(M)\;|\;Qu\in\mathcal{L}^{2}(M)\;\forall
Q\in\textbf{Diff}^{1}_{\nu}(M)\\}.$ (8)
The latter is a Sobolev space if endowed with the norm
$\|u\|^{2}_{\mathcal{H}^{1}(M)}=\|u\|^{2}_{\mathcal{L}^{2}(M)}+\sum_{i=1}^{n}\|Q_{i}u\|^{2}_{\mathcal{L}^{2}(M)},$
where $\\{Q_{i}\\}_{i=1\dots n}$ is a generating set of
$\textbf{Diff}^{1}_{\nu}(M)$. In addition we shall be using
$\mathcal{L}^{2}_{loc}(M)$, the space of locally square integrable functions
over $M$ with respect to the measure $x^{2}d\mu_{g}$, and
$\dot{\mathcal{L}}^{2}_{loc}(M)$, the counterpart built starting from
$\dot{C}^{\infty}(M)$ in place of $C^{\infty}(M)$. Starting from these spaces
we can build the first order Sobolev spaces $\mathcal{H}^{1}_{loc}(M)$ and
$\dot{\mathcal{H}}^{1}_{loc}(M)$ as well as their respective topological
duals, $\dot{\mathcal{H}}^{-1}_{loc}(M)$ and $\mathcal{H}^{-1}_{loc}(M)$.
Finally, calling $\mathcal{E}^{\prime}(M)$ the topological dual space of
$\dot{C}^{\infty}(M)$, we set
$\mathcal{H}^{1}_{0}(M)=\mathcal{H}^{1}_{loc}(M)\cap\mathcal{E}^{\prime}(M),$
(9)
while, similarly, we define $\mathcal{H}^{-1}_{0}(M)$.
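To make the twisting concrete, observe that, near the boundary and in the local coordinates of Section 3.1 (a model computation, recorded only for orientation), conjugation by $x^{\nu_{-}}$ shifts the normal derivative by a singular potential:

```latex
% Generators of Diff^1_nu(M) near the boundary: conjugation by x^{nu_-}.
x^{\nu_{-}}\,\partial_{x}\,x^{-\nu_{-}}=\partial_{x}-\frac{\nu_{-}}{x},
\qquad
x^{\nu_{-}}\,\partial_{y_{i}}\,x^{-\nu_{-}}=\partial_{y_{i}},
\qquad
x^{\nu_{-}}\,f\,x^{-\nu_{-}}=f\ \ \big(f\in C^{\infty}(M)\big).
```

In particular, any $Q=x^{\nu_{-}}Dx^{-\nu_{-}}\in\textbf{Diff}^{1}_{\nu}(M)$ satisfies $Q(x^{\nu_{-}}w)=x^{\nu_{-}}Dw$, so such operators act without loss on functions with the boundary behaviour $u\sim x^{\nu_{-}}$ dictated by the Breitenlohner-Freedman parameter; this is what makes $\mathcal{H}^{1}(M)$ in Equation (8) well suited to the Klein-Gordon dynamics.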
We discuss succinctly the interactions between $\Psi^{m}_{b}(M)$ and
$\textbf{Diff}_{\nu}^{1}(M)$. We begin by studying the action of
pseudodifferential operators of order zero on the spaces
$\mathcal{H}^{k}_{loc/0}(M)$, $k=\pm 1$, just defined. Every
$A\in\Psi^{0}_{b}(M)$ is a bounded operator thereon, as stated in the
following lemma.
###### Lemma 3.1 ([GW18], Lemma 3.8 and [Vas08], Lemma 3.2, Corollary 3.4).
Let $A\in\Psi^{0}_{b}(M)$. Then $A$ is a continuous linear map
$\mathcal{H}^{1}_{loc/0}(M)\rightarrow\mathcal{H}^{1}_{loc/0}(M),\ \ \
\dot{\mathcal{H}}^{1}_{loc/0}(M)\rightarrow\dot{\mathcal{H}}^{1}_{loc/0}(M),$
which extends per duality to a continuous map
$\dot{\mathcal{H}}^{-1}_{0/loc}(M)\rightarrow\dot{\mathcal{H}}^{-1}_{0/loc}(M),\
\ \ \mathcal{H}^{-1}_{0/loc}(M)\rightarrow\mathcal{H}^{-1}_{0/loc}(M).$
The proof of this lemma yields a useful piece of information. Let
$A\in\Psi^{0}_{b}(M)$ have compact support in $U\subset M$. Then there exists
$\chi\in\mathcal{C}_{0}^{\infty}(U)$ such that
$\|Au\|_{\mathcal{H}^{k}(M)}\leq C\|\chi u\|_{\mathcal{H}^{k}(M)},$ (10)
for every $u\in\mathcal{H}^{k}(M)$ with $k=\pm 1$. The constant $C$ is bounded
by a seminorm of $A$.
To study in full generality the interactions between $\Psi^{m}_{b}(M)$ and
$\textbf{Diff}_{\nu}^{1}(M)$, we need to introduce one last class of relevant
spaces:
###### Definition 3.5.
Let $k=-1,0,1$ and let $m\geq 0$. Given $u\in\mathcal{H}_{loc}^{k}(M)$ (resp.
$\mathcal{H}^{k}(M)$), we say that $u\in\mathcal{H}_{loc}^{k,m}(M)$ (resp.
$\mathcal{H}^{k,m}(M)$) if $Au\in\mathcal{H}_{loc}^{k}(M)$ (resp.
$\mathcal{H}^{k}(M)$) for all $A\in\Psi^{m}_{b}(M)$. Furthermore, we define
$\mathcal{H}^{k,\infty}(M)$ as:
$\mathcal{H}^{k,\infty}(M)\doteq\bigcap_{m=0}^{\infty}\mathcal{H}^{k,m}(M).$
(11)
###### Remark 3.1.
As observed in [Vas08], whenever $m$ is finite, it is enough to check that
both $u$ and $Au$ lie in $\mathcal{H}^{k}_{loc}(M)$ for a single elliptic
operator $A\in\Psi^{m}_{b}(M)$.
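For non-negative integer $m$, the spaces of Definition 3.5 admit a concrete local description: combining $\textbf{Diff}^{m}_{b}(M)\subset\Psi^{m}_{b}(M)$ with Remark 3.1, one checks that, in the local coordinates of Section 3.1,

```latex
% Local characterization of H^{k,m}_loc for integer m >= 0:
u\in\mathcal{H}^{k,m}_{loc}(M)
\;\Longleftrightarrow\;
(xD_{x})^{j}D_{y}^{\alpha}u\in\mathcal{H}^{k}_{loc}(M)
\quad\text{for all } j+|\alpha|\le m .
```

In particular, elements of $\mathcal{H}^{k,\infty}(M)$ retain their base regularity under arbitrarily many totally characteristic derivatives, i.e. they are conormal at $\partial M$ with values in $\mathcal{H}^{k}$.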
Observe that, in full analogy to Definition 3.5, we define similarly
$\mathcal{H}^{k,m}_{0}(M)$ and $\dot{\mathcal{H}}^{k,m}_{loc}(M)$. In the
following definition, we extend the notion of wavefront set to the spaces
$\mathcal{H}_{loc}^{k,m}(M)$.
###### Definition 3.6.
Let $k=0,\pm 1$ and let $u\in\mathcal{H}^{k,m}_{loc}(M)$, $m\in\mathbb{R}$.
Given $q\in{}^{b}T^{*}M\setminus\\{0\\}$, we say that $q\not\in
WF_{b}^{k,m}(u)$ if there exists $A\in\Psi_{b}^{m}(M)$ such that $q\in
ell_{b}(A)$ and $Au\in\mathcal{H}^{k}_{loc}(M)$, where $ell_{b}$ stands for
the elliptic set as per Definition 3.3. When $m=+\infty$, we say that
$q\not\in WF_{b}^{k,\infty}(u)$ if there exists $A\in\Psi^{0}_{b}(M)$ such
that $q\in ell_{b}(A)$ and $Au\in\mathcal{H}^{k,\infty}_{loc}(M)$.
With all these data, we can define two notable trace maps which will be a key
ingredient in the next section. The following theorem summarizes the
content of [GW18, Lemma 3.3] and [DM20, Lemma 3.4]:
###### Theorem 3.1.
Let $(M,g)$ be a globally hyperbolic, asymptotically $AdS$ spacetime of
bounded geometry with $n=\dim M$ and let $\nu>0$, cf. Equation (7). Then there
exists a continuous map
$\widetilde{\gamma}_{-}:\mathcal{H}^{1}_{0}(M)\to\mathcal{H}^{\nu}(\partial
M)$, which can be extended to a continuous map
$\gamma_{-}:\mathcal{H}^{1,m}_{loc}(M)\to\mathcal{H}^{\nu+m}_{loc}(\partial
M),\quad\forall m\leq 0.$
###### Remark 3.2.
In order to better grasp the rôle of the trace map defined in Theorem 3.1, it
is convenient to focus the attention on
$\mathbb{R}^{n}_{+}\doteq[0,\infty)\times\mathbb{R}^{n-1}$. In this setting,
any $u\in\mathcal{H}^{1}(\mathbb{R}^{n}_{+})$ can be restricted to the subset
$[0,\epsilon)\times\mathbb{R}^{n-1}$, $\epsilon>0$, admitting an asymptotic
expansion $u=x^{\nu_{-}}u_{-}+x^{r+1}u_{0}$ where $2r=n-2$, while
$u_{-}\in\mathcal{H}^{\nu}(\mathbb{R}^{n-1})$ and
$u_{0}\in\mathcal{H}^{1}([0,\epsilon);L^{2}(\mathbb{R}^{n-1}))$. In this
context it holds that $\gamma_{-}(u)=u_{-}$.
At last we recall from [GW18] a notable property of the trace $\gamma_{-}$
related to its boundedness. Let $u\in\mathcal{H}^{1}(M)$; then for every
$\varepsilon>0$ there exists $C_{\varepsilon}>0$ such that
$\|\gamma_{-}u\|^{2}_{L^{2}(\partial
M)}\leq\varepsilon\|u\|^{2}_{\mathcal{H}^{1}(M)}+C_{\varepsilon}\|u\|^{2}_{\mathcal{L}^{2}(M)}.$
(12)
### 3.3 Twisted Energy Form
In this section we focus the attention on the last two preparatory key
concepts needed before stating the boundary value problem we are interested
in.
We recall that $P=\Box_{g}-m^{2}$ is the Klein-Gordon operator and, following
[GW18], we can individuate a distinguished class of spaces whose elements
enjoy additional regularity with respect to $P$:
###### Definition 3.7.
Let $(M,g)$ be a globally hyperbolic, asymptotically anti-de Sitter spacetime
and let $P$ be the Klein-Gordon operator. For all
$m\in\mathbb{R}\cup\\{\pm\infty\\}$, we define the Fréchet spaces
$\mathcal{X}^{m}(M)=\\{u\in\mathcal{H}^{1,m}_{loc}(M)\;|\;Pu\in
x^{2}\mathcal{H}^{0,m}_{loc}(M)\\},$ (13)
with respect to the seminorms
$\|u\|_{\mathcal{X}^{m}(M)}=\|\phi
u\|_{\mathcal{H}^{1,m}(M)}+\|x^{-2}\phi Pu\|_{\mathcal{H}^{0,m}(M)},$ (14)
where $\phi\in C^{\infty}_{0}(M)$.
At this point we are ready to introduce a suitable energy form. The standard
definition must be adapted to the case in hand, in order to avoid divergences
due to the behaviour of the solutions of the Klein-Gordon equation at the
boundary. To this end it is convenient to make use of the so-called admissible
twisting functions, that is, calling $x$ the global boundary function as per
Definition 2.2, the collection of $F\in x^{\nu_{-}}C^{\infty}(M)$ such that
1. 1.
$x^{-\nu_{-}}F>0$ on $M$,
2. 2.
$S_{F}\doteq F^{-1}P(F)\in x^{2}L^{\infty}(M)$ where $P$ is the Klein-Gordon
operator.
For any such function, we can define a twisted differential
$d_{F}\doteq F\circ d\circ
F^{-1}:\dot{C}^{\infty}(M)\to\dot{C}^{\infty}(M;T^{*}M),\quad v\mapsto
d_{F}(v)=dv-vF^{-1}(dF).$ (15)
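Unwinding the composition $d_{F}=F\circ d\circ F^{-1}$ makes the effect of the twisting transparent:

```latex
% The twisting removes the singular prefactor F from the leading behaviour:
d_{F}(F\,w) \;=\; F\,d\big(F^{-1}F\,w\big) \;=\; F\,dw,
\qquad\text{e.g. for } F=x^{\nu_{-}}:\quad
d_{F}\big(x^{\nu_{-}}w\big)=x^{\nu_{-}}\,dw .
```

Hence, on functions with the expected asymptotics $u\sim x^{\nu_{-}}u_{-}$, the twisted energy density $g(d_{F}u,d_{F}\overline{u})$ remains integrable against $d\mu_{g}$ where the untwisted one would diverge.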
Accordingly we can introduce the twisted Dirichlet (energy) form
$\mathcal{E}_{0}(u,v)\doteq-\int\limits_{M}g(d_{F}u,d_{F}\overline{v})\,d\mu_{g},\quad\forall
u\in\mathcal{H}^{1}_{loc}(M),\;v\in\mathcal{H}^{1}_{0}(M).$ (16)
Starting from these data, we are ready to introduce a second trace map. More
precisely we start from
$\widetilde{\gamma}_{+}:\mathcal{X}^{\infty}(M)\to\mathcal{H}^{\infty}_{loc}(\partial
M),\quad
u\mapsto\widetilde{\gamma}_{+}(u)=x^{1-2\nu}\partial_{x}(F^{-1}u)|_{\partial
M}.$
Calling $d^{\dagger}_{F}$ the formal adjoint of $d_{F}$ as in Equation (15)
with respect to the inner product on $L^{2}(M;d\mu_{g})$ we observe that, on
account of the identity $P=-d^{\dagger}_{F}d_{F}+F^{-1}P(F)$, the following
Green’s formula holds true for all $u\in\mathcal{X}^{\infty}(M)$ and for all
$v\in\mathcal{H}^{1}_{0}(M)$:
$\int Pu\cdot\overline{v}\ d\mu_{g}=\mathcal{E}_{0}(u,v)+\int
S_{F}u\cdot\bar{v}\ d\mu_{g}+\int\gamma_{+}u\cdot\gamma_{-}\bar{v}\ d\mu_{h}.$
(17)
With these premises the following result holds true, cf. [GW18, Lemma 4.8]:
###### Lemma 3.2.
The map $\widetilde{\gamma}_{+}$ can be extended to a bounded linear map
$\gamma_{+}:\mathcal{X}^{k}(M)\to\mathcal{H}^{k-\nu}_{loc}(\partial
M),\quad\forall k\in\mathbb{R}$
and, if $u\in\mathcal{X}^{k}(M)$, the Green’s formula (17) holds true for
every $v\in\mathcal{H}^{1,-k}_{0}(M)$.
###### Remark 3.3.
In order to better grasp the rôle of the second trace map characterized in
Lemma 3.2, it is convenient to focus once more the attention on
$\mathbb{R}^{n}_{+}\doteq[0,\infty)\times\mathbb{R}^{n-1}$ endowed with a
metric whose line element reads in standard Cartesian coordinates
$ds^{2}=\frac{-dx^{2}+h_{ab}dy^{a}dy^{b}}{x^{2}},$
where $h$ is a smooth Lorentzian metric on $\mathbb{R}^{n-1}$. Consider an
admissible twisting function $F$ such that $\lim\limits_{x\to
0^{+}}x^{-\nu_{-}}F=1$ and $u\in\mathcal{H}^{1,k}_{0}(\mathbb{R}^{n}_{+})$
such that $Pu\in x^{2}\mathcal{H}^{0,k}_{0}(\mathbb{R}^{n}_{+})$ for $k\geq
0$. Then, for every $\epsilon>0$, the restriction of $u$ to
$[0,\epsilon)\times\mathbb{R}^{n-1}$ admits an asymptotic expansion of the form
$u=Fu_{-}+x^{\nu_{+}}u_{+}+x^{r+2}\mathcal{H}_{b}^{k+2}([0,\epsilon);\mathcal{H}^{k-3}(\mathbb{R}^{n-1}))$
where $2r=n-2$ while $u_{-}\in\mathcal{H}^{\nu+k}(\mathbb{R}^{n-1})$ and
$u_{+}\in\mathcal{H}^{k-1-2\nu}(\mathbb{R}^{n-1})$. In this context it holds
that $\gamma_{+}(u)=2\nu u_{+}$.
### 3.4 The boundary value problem
In this section we use the ingredients introduced in the previous analysis to
formulate the dynamical problem we are interested in. At a formal level we
look for $u\in\mathcal{H}^{1}_{loc}(M)$ such that
$\left\\{\begin{array}[]{l}Pu=(\Box_{g}-m^{2})u=f\\\
\gamma_{+}u=\Theta\gamma_{-}u\end{array}\right.,$ (18)
where $\Theta\in\Psi^{k}_{b}(\partial M)$ while $\gamma_{-},\gamma_{+}$ are
the trace maps introduced in Theorem 3.1 and in Lemma 3.2 respectively. It is
not convenient to look for strong solutions of Equation (18); rather, we pass
to a weak formulation. More precisely, for any $\Theta\in\Psi^{k}_{b}(\partial
M)$, we assume that there exists an
admissible twisting function $F$ and we define the energy functional
$\mathcal{E}_{\Theta}(u,v)=\mathcal{E}_{0}(u,v)+\int\limits_{M}S_{F}u\cdot\overline{v}\,d\mu_{g}+\int\limits_{\partial
M}\Theta\gamma_{-}u\cdot\gamma_{-}\overline{v}\,d\mu_{h},$ (19)
where $S_{F}=F^{-1}P(F)$, $\mathcal{E}_{0}$ is the twisted Dirichlet form, cf.
Equation (16), $u\in\mathcal{H}^{1}_{loc}(M)$, while
$v\in\mathcal{H}^{1}_{0}(M)$. Hence, we can introduce
$P_{\Theta}:\mathcal{H}^{1}_{loc}(M)\rightarrow\dot{\mathcal{H}}^{-1}_{loc}(M)$
by
$\langle P_{\Theta}u,v\rangle=\mathcal{E}_{\Theta}(u,v).$ (20)
Observe that, on account of the regularity of $\gamma_{-}u$, we can extend
$P_{\Theta}$ as an operator
$P_{\Theta}:\mathcal{H}^{1,m}_{loc}(M)\rightarrow\dot{\mathcal{H}}^{-1,m}_{loc}(M)$,
$m\in\mathbb{R}$ [GW18].
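Before proceeding, it is instructive to single out (for orientation only, with the standard labels) a few special instances of the boundary condition $\gamma_{+}u=\Theta\gamma_{-}u$:

```latex
% Special instances of the boundary condition in Equation (18):
\Theta=0:\;\; \gamma_{+}u=0 \;\;\text{(Neumann-type)};
\qquad
\Theta=\theta\in C^{\infty}(\partial M):\;\; \gamma_{+}u=\theta\,\gamma_{-}u
\;\;\text{(Robin, cf. [War13], [GW18])}.
```

A genuinely pseudodifferential $\Theta$, e.g. of order $2$, encodes instead dynamical (Wentzell-type) boundary conditions, while the Dirichlet condition $\gamma_{-}u=0$ of [Vas12] is recovered only as a formal limiting case of an 'infinite' $\Theta$.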
###### Remark 3.4.
The reader might be surprised by the absence of $\gamma_{+}$ in the weak
formulation of the boundary value problem as per Equation (20). This is only
apparent since the last term on the right hand side of Equation (19) is a
by-product of the Green's formula as per Equation (17) together with the
boundary condition introduced in Equation (18).
We are now in a position to recall the two main results proved in [DM20]
concerning a propagation of singularities theorem for the Klein-Gordon
operator with boundary conditions ruled by a pseudodifferential operator
$\Theta\in\Psi^{k}_{b}(\partial M)$ with $k\leq 2$. As a preliminary step, we
introduce two notable geometric structures. More precisely, since the
principal symbol of $x^{-2}P$ reads $\widehat{p}\doteq\widehat{g}(X,X)$, where
$X\in\Gamma(T^{*}M)$, the associated characteristic set is
$\mathcal{N}=\left\\{(q,k_{q})\in
T^{*}M\setminus\\{0\\}\;|\;\widehat{g}^{ij}(k_{q})_{i}(k_{q})_{j}=0\right\\},$
(21)
while the compressed characteristic set is
$\dot{\mathcal{N}}=\pi[\mathcal{N}]\subset{}^{b}\dot{T}^{*}M,$ (22)
where $\pi$ is the projection map from $T^{*}M$ to the compressed cotangent
bundle, cf. Equation (3). A related concept is the following:
###### Definition 3.8.
Let $I\subset\mathbb{R}$ be an interval. A continuous map
$\gamma:I\rightarrow\dot{\mathcal{N}}$ is a generalized broken
bicharacteristic (GBB) if for every $s_{0}\in I$ the following conditions hold
true:
* a)
If $q_{0}=\gamma(s_{0})\in\mathcal{G}$, the glancing set, then for every
$\omega\in C^{\infty}({}^{b}T^{*}M)$,
$\frac{d}{ds}(\omega\circ\gamma)(s_{0})=\\{\widehat{p},\pi^{*}\omega\\}(\eta_{0}),$
(23)
where $\eta_{0}\in\mathcal{N}$ is the unique point for which
$\pi(\eta_{0})=q_{0}$, $\pi:T^{*}M\to{}^{b}T^{*}M$ being the projection, while
$\\{\cdot,\cdot\\}$ denotes the Poisson bracket on $T^{*}M$.
* b)
If $q_{0}=\gamma(s_{0})\in\mathcal{H}$, the hyperbolic set, then there exists
$\varepsilon>0$ such that $0<|s-s_{0}|<\varepsilon$ implies $x(\gamma(s))\neq
0$, where $x$ is the global boundary function, cf. Definition 2.2.
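To illustrate condition b) (a heuristic model picture in the spirit of [Vas12], not needed for the statements below): near a hyperbolic point, a GBB is the $\pi$-image of two null bicharacteristic segments of $\widehat{p}$ in $T^{*}M$ reflecting at the boundary,

```latex
% Reflection at a hyperbolic point: the tangential data (y_0, eta_0) match,
% the normal momentum flips sign, and pi forgets it at x = 0.
(x,y,\xi,\eta)(s)\longrightarrow(0,y_{0},\pm\xi_{0},\eta_{0})
\quad\text{as } s\to s_{0}^{\mp},
\qquad
\gamma=\pi\circ(x,y,\xi,\eta)\ \text{continuous at } s_{0},
```

so the compressed curve stays continuous even though the normal momentum jumps. Condition a) instead says that, away from the boundary, $\gamma$ is an ordinary null bicharacteristic, i.e. the lift of a null geodesic of $\widehat{g}$.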
With these structures and recalling in particular the wavefront set introduced
in Definition 3.6 we can state the following two theorems, whose proof can be
found in [DM20]:
###### Theorem 3.2.
Let $\Theta\in\Psi_{b}^{k}(\partial M)$ with $0<k\leq 2$. If
$u\in\mathcal{H}_{loc}^{1,m}(M)$ for $m\leq 0$ and
$s\in\mathbb{R}\cup\\{+\infty\\}$, then
$WF_{b}^{1,s}(u)\setminus\left(WF_{b}^{-1,s+1}(P_{\Theta}u)\cup
WF_{b}^{-1,s+1}(\Theta u)\right)$ is the union of maximally extended
generalized broken bicharacteristics within the compressed characteristic set
$\dot{\mathcal{N}}$.
In full analogy, the following also holds:
###### Theorem 3.3.
Let $\Theta\in\Psi_{b}^{k}(\partial M)$ with $k\leq 0$. If
$u\in\mathcal{H}_{loc}^{1,m}(M)$ for $m\leq 0$ and
$s\in\mathbb{R}\cup\\{+\infty\\}$, then it holds that
$WF_{b}^{1,s}(u)\setminus WF_{b}^{-1,s+1}(P_{\Theta}u)$ is the union of
maximally extended GBBs within the compressed characteristic set
$\dot{\mathcal{N}}$.
## 4 Fundamental Solutions
In this section we prove the first of the main results of our work. We start
by investigating the existence of fundamental solutions associated to the
boundary value problem as in Equation (18). We shall see that a positive
answer can be found, though we need to suitably restrict the class of
admissible b-$\Psi$DOs $\Theta\in\Psi_{b}^{k}(\partial M)$ in comparison to
that of Theorems 3.2 and 3.3. We stress that, from the viewpoint of
applications, these additional conditions play a mild rôle since all scenarios
of interest are included in our analysis.
We recall that the case of Dirichlet boundary conditions was already analysed
in [Vas12], while the generalization to Robin boundary conditions was studied
in [War13] and [GW18], which we follow closely. We introduce a cutoff function
playing an important rôle in the following theorems. Consider
$\chi_{0}(s)=\begin{cases}e^{-1/s}&\textrm{if}\ s>0\\\
0&\textrm{if}\ s\leq 0\end{cases},$
and let $\chi_{1}\in C^{\infty}(\mathbb{R})$ be such that $\chi_{1}(s)=0$ for
all $s\in(-\infty,0]$ while $\chi_{1}(s)=1$ if $s\in[1,+\infty)$. For
arbitrary but fixed $\tau_{0},\tau_{1}\in\mathbb{R}$ with $\tau_{0}<\tau_{1}$,
we call $\chi:(\tau_{0},\tau_{1})\rightarrow\mathbb{R}$ the smooth function
$\chi(s)\doteq\chi_{0}(-\delta^{-1}(s-\tau_{1}))\chi_{1}((s-\tau_{0})/\varepsilon),$
(24)
where $\delta\gg 1$ while $\varepsilon\in(0,\tau_{1}-\tau_{0})$. Under these
hypotheses, calling $\chi^{\prime}_{0}=\frac{d\chi_{0}}{ds}$, it holds that,
on the region where $\chi_{1}\equiv 1$, cf. [Vas12],
$\chi\leq-\delta^{-1}(\tau_{1}-\tau_{0})^{2}\chi^{\prime}\;\;\textrm{with}\;\;\chi^{\prime}=-\delta^{-1}\chi_{0}^{\prime}(-\delta^{-1}(s-\tau_{1})).$
(25)
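The inequality in Equation (25) is a direct consequence of the identity $\chi_{0}=s^{2}\chi_{0}^{\prime}$ enjoyed by $\chi_{0}(s)=e^{-1/s}$: evaluating at $t=-\delta^{-1}(s-\tau_{1})$ one finds, on the region where $\chi_{1}\equiv 1$,

```latex
% Derivation of (25): chi_0(s) = e^{-1/s} satisfies chi_0 = s^2 chi_0'.
% Set t = -delta^{-1}(s - tau_1), so that chi(s) = chi_0(t) there.
\chi(s)=\chi_{0}(t)=t^{2}\chi_{0}^{\prime}(t)
=\delta^{-2}(s-\tau_{1})^{2}\,\chi_{0}^{\prime}(t)
=-\delta^{-1}(s-\tau_{1})^{2}\,\chi^{\prime}(s)
\leq-\delta^{-1}(\tau_{1}-\tau_{0})^{2}\,\chi^{\prime}(s),
```

where the last step uses $|s-\tau_{1}|\leq\tau_{1}-\tau_{0}$ on $(\tau_{0},\tau_{1})$ together with $\chi^{\prime}\leq 0$.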
Consider $u\in\mathcal{H}^{1,1}_{loc}(M)$ such that its support lies in
$[\tau_{0}+\varepsilon,\tau_{1}]\times\Sigma$, cf. Definition 2.1. As
discussed in [GW18], one can use the cutoff function introduced above to prove
a twisted version of the Poincaré inequality proved in [Vas12, Proposition
2.5]:
$\|(-\chi^{\prime})^{1/2}u\|^{2}_{\mathcal{L}^{2}(M)}\leq
C\|(-\chi^{\prime})^{1/2}d_{F}u\|^{2}_{\mathcal{L}^{2}(M)},$ (26)
where $d_{F}$ is the twisted differential as per Equation (15).
Since we deal with a larger class of boundary conditions than those considered
in [Vas12] and in [GW18], we need to make an additional hypothesis. Recall
that, as in the previous sections, we are identifying a pseudodifferential
operator on $\partial M$ with its natural extension on $M$, i.e. constant in
$x$, the global boundary function. As a starting point we need a preliminary
definition:
###### Definition 4.1.
Let $\Theta\in\Psi^{k}_{b}(M)$. We call it local in time if, for every $u$ in
the domain of $\Theta$, $\tau(\textrm{supp}(\Theta
u))\subseteq\tau(\textrm{supp}(u))$ where
$\tau:\mathbb{R}\times\Sigma\to\mathbb{R}$ is the time coordinate introduced
in Theorem 2.1.
Recalling [Jos99, Sec. 6] for the definition of the adjoint of a
pseudodifferential operator, we can now formulate the following hypothesis
###### Hypothesis 4.1.
We consider $\Theta\in\Psi^{k}_{b}(M)$ with $k\leq 2$ only if it is local in
time, see Definition 4.1, and if $\Theta=\Theta^{*}$.
The next step in the analysis of the problem in hand lies in proving the
following lemma which generalizes a counterpart discussed in [GW18] for the
case of Robin boundary conditions.
###### Lemma 4.1.
Let $u\in\mathcal{H}^{1,1}_{loc}(M)$ and let $\Theta\in\Psi^{k}_{b}(\partial
M)$ be such that its canonical extension to $M$ abides to the Hypothesis 4.1.
Then there exists a compact subset $K\subset M$ and a real positive constant
$C$ such that
$\|(-\phi^{\prime})^{1/2}u\|_{\mathcal{H}^{1}(K)}\leq
C\|P_{\Theta}u\|_{\dot{\mathcal{H}}^{-1,1}(K)},$
where $\phi=\chi\circ\tau$, $\chi$ being the same as in Equation (24), while
$P_{\Theta}$ is defined in Equation (20).
###### Proof.
The proof is a generalization of those in [Vas12] and [GW18] to the case of
boundary conditions encoded by pseudodifferential operators. Therefore we
shall sketch the common part of the proof, focusing on the terms introduced by
the boundary conditions. Adopting the same conventions as at the beginning of
the section, assume that
$supp(u)\subset[\tau_{0}+\varepsilon,\tau_{1}]\times\Sigma$. We start by
computing a twisted version of the energy form considered in [Vas12]. Consider
$\langle-i[(V^{\prime})^{*}P_{\Theta}-P_{\Theta}V^{\prime}]u,u\rangle$, with
$V^{\prime}=FVF^{-1}\in\textbf{Diff}^{1}_{b}(M)$ and
$V\in\mathcal{V}_{b}(M)$ with compact support. Note that, since $\Theta$ is
self-adjoint, i.e., $\Theta=\Theta^{*}$,
$i[(V^{\prime})^{*}P_{\Theta}-P_{\Theta}V^{\prime}]$ is a second order,
formally self-adjoint operator, the purpose of $(V^{\prime})^{*}$ being to
remove zeroth order terms. Let $V=-\phi W$ with
$W=\bigtriangledown_{\widehat{g}}\tau$; it belongs to $\mathcal{V}_{b}(M)$
because $\widehat{g}(dx,d\tau)=0$. A direct computation shows that
$\begin{split}\langle-i[(V^{\prime})^{*}P_{\Theta}-P_{\Theta}V^{\prime}]u,u\rangle=2Re\langle
P_{\Theta}u,V^{\prime}u\rangle=\\\
=2Re\mathcal{E}_{0}(u,V^{\prime}u)+2Re\langle
S_{F}u,V^{\prime}u\rangle+2Re\langle\Theta\gamma_{-}u,\gamma_{-}V^{\prime}u\rangle,\end{split}$
(27)
where $\mathcal{E}_{0}$ is the twisted Dirichlet energy form, cf. Equation
(16), $S_{F}$ is defined in Section 3.3, while $\gamma_{+}$ and $\gamma_{-}$
are the trace maps introduced in Theorem 3.1 and in Lemma 3.2. We analyze each
term in the above sum separately. Starting from the first one and proceeding
as in [GW18], we rewrite
$2Re\mathcal{E}_{0}(u,V^{\prime}u)=\langle B^{ij}Q_{i}u,Q_{j}u\rangle,$
where $Q_{i}$, $i=1,\dots,n$ is a generating set of
$\textbf{Diff}^{1}_{\nu}(M)$, while the symmetric tensor $B$ is
$\begin{split}B=-(\phi\cdot div_{\widehat{g}}W+2F\phi V(F^{-1})+(n-2)\phi
x^{-1}W(x))\widehat{g}^{-1}+\\\
+\phi\mathcal{L}_{W}\widehat{g}^{-1}+2T(W,\bigtriangledown_{\widehat{g}}\phi).\end{split}$
(28)
Here $T(W,\bigtriangledown_{\widehat{g}}\phi)$ is the stress-energy tensor,
with respect to $\widehat{g}$, see Definition 2.2, of a scalar field
associated with $W$ and $\bigtriangledown_{\widehat{g}}\phi$, that is,
denoting with $\odot$ the symmetric tensor product,
$T(W,\bigtriangledown_{\widehat{g}}\phi)=(\bigtriangledown_{\widehat{g}}\phi)\odot
W-\frac{1}{2}\widehat{g}(\bigtriangledown_{\widehat{g}}\phi,W)\cdot\widehat{g}^{-1}.$
(29)
Focusing on this term and using that
$\bigtriangledown_{\widehat{g}}\phi=\chi^{\prime}\bigtriangledown_{\widehat{g}}\tau$,
a direct computation yields:
$T_{\widehat{g}}(W,\bigtriangledown_{\widehat{g}}\phi)=\frac{1}{2}(\chi^{\prime}\circ\tau)\big{[}2(\bigtriangledown_{\widehat{g}}\tau)\otimes(\bigtriangledown_{\widehat{g}}\tau)-\widehat{g}(\bigtriangledown_{\widehat{g}}\tau,\bigtriangledown_{\widehat{g}}\tau)\cdot\widehat{g}^{-1}\big{]}.$
(30)
Since $\bigtriangledown_{\widehat{g}}\phi$ and
$\bigtriangledown_{\widehat{g}}\tau$ are respectively past-pointing and
future-pointing timelike vectors,
$T_{\widehat{g}}(W,\bigtriangledown_{\widehat{g}}\phi)$ is negative definite.
Hence we can rewrite Equation (27) as
$\begin{split}\langle-T^{ij}_{\widehat{g}}(W,\bigtriangledown_{\widehat{g}}\phi)Q_{i}u,Q_{j}u\rangle=\langle-i[(V^{\prime})^{*}P_{\Theta}-P_{\Theta}V^{\prime}]u,u\rangle+\langle
K^{ij}Q_{i}u,Q_{j}u\rangle+\\\
+2Re\langle
S_{F}u,V^{\prime}u\rangle+2Re\langle\Theta\gamma_{-}u,\gamma_{-}V^{\prime}u\rangle,\end{split}$
(31)
with
$K=-(F\phi V(F^{-1})+(n-2)\phi
x^{-1}W(x))\widehat{g}^{-1}+\phi\mathcal{L}_{W}\widehat{g}^{-1}.$
Since $-T_{\widehat{g}}(W,\bigtriangledown_{\widehat{g}}\phi)^{ij}$ is
positive definite,
$\mathcal{Q}(u,u)\doteq\langle-T_{\widehat{g}}(W,\bigtriangledown_{\widehat{g}}\phi)^{ij}Q_{i}u,Q_{j}u\rangle\geq
0$. This can be seen by direct inspection from the explicit form
$\begin{split}\mathcal{Q}(u,u)=\int_{M}(-\phi^{\prime})\left((\bigtriangledown_{\widehat{g}}\tau)^{i}(\bigtriangledown_{\widehat{g}}\tau)^{j}-\dfrac{1}{2}\widehat{g}(\bigtriangledown_{\widehat{g}}\tau,\bigtriangledown_{\widehat{g}}\tau)\widehat{g}^{ij}\right)Q_{i}u\
\overline{Q_{j}u}\ x^{2}d\mu_{g}\\\
=\int_{M}H((-\phi^{\prime})^{1/2}d_{F}u,(-\phi^{\prime})^{1/2}d_{F}\overline{u})x^{2}d\mu_{g},\end{split}$
(32)
where $H$ is the sesquilinear pairing between $1$-forms induced by the metric.
Focusing then on the term $\langle K^{ij}Q_{i}u,Q_{j}u\rangle$, we observe
that, as a consequence of our choice for $\phi$ and $W$, we have
$W(x)=\widehat{g}(\bigtriangledown_{\widehat{g}}\tau,\bigtriangledown_{\widehat{g}}x)=0$
on $\partial M$, whence $V(x)=0$ there. In addition it holds that
$x^{-1}W(x)=\mathcal{O}(1)$ near $\partial M$, and
$\mathcal{L}_{W}\widehat{g}^{-1}=2\bigtriangledown_{\widehat{g}}(\bigtriangledown_{\widehat{g}}\tau)=2\widehat{\Gamma}^{i}_{\tau\tau}\partial_{i}$.
These observations allow us to establish the following bound, cf. [Vas12] and
[GW18]:
$|\langle K^{ij}Q_{i}u,Q_{j}u\rangle|\leq
C\|\phi^{1/2}d_{F}u\|^{2}_{\mathcal{L}^{2}(M)}\leq
C\delta^{-1}(\tau_{1}-\tau_{0})^{2}\|(-\phi^{\prime})^{1/2}d_{F}u\|^{2}_{\mathcal{L}^{2}(M)},$
(33)
with $C$ a suitable, positive constant. Now we focus on establishing a bound
for the terms on the right hand side of Equation (31). We estimate the first
one as follows:
$\displaystyle|\langle-i[(V^{\prime})^{*}P_{\Theta}-P_{\Theta}V^{\prime}]u,u\rangle|\leq$
$\displaystyle
C\left(\|\phi^{1/2}FWF^{-1}P_{\Theta}u\|_{\dot{\mathcal{H}}^{-1}(M)}^{2}+\|\phi^{1/2}u\|_{\mathcal{H}^{1}(M)}^{2}\right)+C\left(\|\phi^{1/2}P_{\Theta}u\|_{\mathcal{L}^{2}(M)}^{2}+\|\phi^{1/2}FWF^{-1}u\|_{\mathcal{L}^{2}(M)}^{2}\right)\leq$
$\displaystyle\leq
C\big{(}\|FWF^{-1}P_{\Theta}u\|_{\dot{\mathcal{H}}^{-1}(M)}^{2}+\delta^{-1}(\tau_{1}-\tau_{0})^{2}\|(-\phi^{\prime})^{1/2}u\|_{\mathcal{H}^{1}(M)}^{2}+$
$\displaystyle+\|P_{\Theta}u\|_{\mathcal{L}^{2}(M)}^{2}+\delta^{-1}(\tau_{1}-\tau_{0})^{2}\|(-\phi^{\prime})^{1/2}FWF^{-1}u\|_{\mathcal{L}^{2}(M)}^{2}\Big{)},$
(34)
where in the last inequality we used Equation (25). As for the second term in
Equation (31), using that $S_{F}\in x^{2}L^{\infty}(M)$, we establish the
bound
$2|Re\langle S_{F}u,V^{\prime}u\rangle|\leq\widetilde{C}\left(\|\phi^{1/2}\
u\|^{2}_{\mathcal{L}^{2}(M)}+\|\phi^{1/2}\
d_{F}u\|^{2}_{\mathcal{L}^{2}(M)}\right),$
for a suitable constant $\widetilde{C}>0$. Using Equation (25) and the
Poincaré inequality, this last bound becomes
$2|Re\langle S_{F}u,V^{\prime}u\rangle|\leq
C\delta^{-1}(\tau_{1}-\tau_{0})^{2}\|(-\phi^{\prime})^{1/2}d_{F}u\|^{2}_{\mathcal{L}^{2}(M)}.$
(35)
At last we give a bound for the last term in Equation (27), containing the
pseudodifferential operator $\Theta$ which implements the boundary conditions.
Recalling Hypothesis 4.1, it is convenient to consider the following two cases
separately:
* a)
$\Theta\in\Psi^{k}_{b}(\partial M)$ with $k\leq 1$,
* b)
$\Theta\in\Psi^{k}_{b}(\partial M)$ with $1<k\leq 2$.
Now we give a bound case by case.
* a)
It suffices to focus on $\Theta\in\Psi^{1}_{b}(\partial M)$ recalling that,
for $k<1$, $\Psi^{k}_{b}(\partial M)\subset\Psi^{1}_{b}(\partial M)$. If with
a slight abuse of notation we denote with $\Theta$ both the operator on the
boundary and its trivial extension to the whole manifold, we can write
$\langle\Theta\gamma_{-}u,\gamma_{-}V^{\prime}u\rangle=\langle\widehat{N}(\Theta)(-i\nu_{-})\gamma_{-}u,\gamma_{-}V^{\prime}u\rangle=\langle\gamma_{-}\Theta
u,\gamma_{-}V^{\prime}u\rangle,$
where $\widehat{N}(\Theta)(-i\nu_{-})$ is the indicial family as in Equation
(6). We recall that any $A\in\Psi^{s}_{b}(\partial M)$, $s\in\mathbb{N}$, can
be decomposed as $\sum\limits_{i=1}^{n}Q_{i}A_{i}+B$, with
$A_{i},B\in\Psi^{s-1}_{b}(\partial M)$, while $Q_{i}$, $i=1,\dots,n$ is a
generating set of $\mathbf{Diff}^{1}_{\nu}(M)$. Hence we can rewrite $\Theta$
as
$\Theta=\sum_{i}Q_{i}\Theta_{i}+\Theta^{\prime}=\sum_{i}\left(\Theta_{i}Q_{i}+[Q_{i},\Theta_{i}]\right)+\Theta^{\prime},$
where $\Theta_{i},\Theta^{\prime}$ and $[Q_{i},\Theta_{i}]$ lie in
$\Psi^{0}_{b}(\partial M)$. Therefore
$|\langle\gamma_{-}\Theta
u,\gamma_{-}V^{\prime}u\rangle|\leq|\langle\gamma_{-}\left(\sum_{i}\Theta_{i}Q_{i}u\right),\gamma_{-}V^{\prime}u\rangle|+|\langle\gamma_{-}\left(\left([Q_{i},\Theta_{i}]+\Theta^{\prime}\right)u\right),\gamma_{-}V^{\prime}u\rangle|.$
To begin with, we focus on the first term on the right hand side of this
inequality. Using Equations (12) and (25) together with the Poincaré
inequality (26) and Lemma 3.1,
$\begin{split}|\langle\gamma_{-}\left(\sum_{i}\Theta_{i}Q_{i}u\right),\gamma_{-}V^{\prime}u\rangle|\leq\varepsilon\left(\sum_{i}\|\phi^{1/2}\Theta_{i}Q_{i}u\|_{\mathcal{H}^{1}(M)}^{2}+\|\phi^{1/2}FWF^{-1}u\|_{\mathcal{H}^{1}(M)}^{2}\right)+\\\
+C_{\varepsilon}\left(\sum_{i}\ \|\phi^{1/2}\
Q_{i}u\|_{\mathcal{L}^{2}(M)}^{2}+\|\phi^{1/2}FWF^{-1}u\|_{\mathcal{L}^{2}(M)}^{2}\right)\leq
C_{\varepsilon}\delta^{-1}(\tau_{1}-\tau_{0})^{2}\|(-\phi^{\prime})^{1/2}d_{F}u\|^{2}_{\mathcal{L}^{2}(M)},\end{split}$
for a suitable constant $C_{\varepsilon}>0$. As for the second term, since
$u\in\mathcal{H}^{1,1}(M)$ we can proceed as above using that the operator
$\Theta^{\prime}+[Q_{i},\Theta_{i}]$ is of order $0$ and we can conclude that
$\left|\langle\gamma_{-}\left(\left([Q_{i},\Theta_{i}]+\Theta^{\prime}\right)u\right),\gamma_{-}V^{\prime}u\rangle\right|\leq\widetilde{C}_{\varepsilon}\|\phi^{1/2}\
u\|_{\mathcal{H}^{1}(M)}^{2}\leq
C_{\varepsilon}\delta^{-1}(\tau_{1}-\tau_{0})^{2}\|(-\phi^{\prime})^{1/2}d_{F}u\|_{\mathcal{L}^{2}(M)}^{2},$
for suitable positive constants $C_{\varepsilon}$ and
$\widetilde{C}_{\varepsilon}$. Therefore, a bound of the following form holds:
$|Re\langle\Theta\gamma_{-}u,\gamma_{-}V^{\prime}u\rangle|\leq
C^{\prime}_{\varepsilon}\delta^{-1}(\tau_{1}-\tau_{0})^{2}\|(-\phi^{\prime})^{1/2}d_{F}u\|_{\mathcal{L}^{2}(M)}^{2}.$
* b)
Since $\Psi^{k}_{b}(\partial M)\subset\Psi^{k^{\prime}}_{b}(\partial M)$ if
$k<k^{\prime}$, it is enough to consider $\Theta\in\Psi^{2}_{b}(\partial M)$
and to observe that, we can decompose $\Theta$ as
$\Theta=\sum\limits_{i=1}^{n}Q_{i}\left(\sum\limits_{j=1}^{n}Q_{j}A_{ij}\right)+B_{i},$
where $B_{i}\in\Psi^{1}_{b}(\partial M)$ while $A_{ij}\in\Psi^{0}_{b}(\partial
M)$. At this point one can apply twice consecutively the same reasoning as in
item a) to draw the sought conclusion.
Finally, considering Equation (31) and collecting all the bounds we proved, we
obtain
$\langle-T_{\widehat{g}}^{ij}(W,\bigtriangledown_{\widehat{g}}\phi)Q_{i}u,Q_{j}u\rangle\leq
C\left(\|P_{\Theta}u\|_{\dot{\mathcal{H}}^{-1,1}(M)}^{2}+\delta^{-1}(\tau_{1}-\tau_{0})^{2}\|(-\phi^{\prime})^{1/2}d_{F}u\|_{\mathcal{L}^{2}(M)}^{2}\right).$
(36)
Since the inner product $H$ appearing on the right hand side of Equation (32)
is positive definite, for $\delta$ large enough
$\langle-T^{ij}_{\widehat{g}}(W,\bigtriangledown_{\widehat{g}}\phi)Q_{i}u,Q_{j}u\rangle-C\delta^{-1}(\tau_{1}-\tau_{0})^{2}\|(-\phi^{\prime})^{1/2}d_{F}u\|_{\mathcal{L}^{2}(M)}^{2}\geq
0,$
and the associated Dirichlet form $\widetilde{\mathcal{Q}}$ defined as
$\widetilde{\mathcal{Q}}(u,u)=\int_{M}\left[H((-\phi^{\prime})^{1/2}d_{F}u,(-\phi^{\prime})^{1/2}d_{F}\overline{u})-C\delta^{-1}(\tau_{1}-\tau_{0})^{2}|(-\phi^{\prime})^{1/2}d_{F}u|^{2}\right]x^{2}d\mu_{g},$
(37)
bounds $\|(-\phi^{\prime})^{1/2}d_{F}u\|^{2}_{\mathcal{L}^{2}(M)}$. We
conclude the proof by observing that, once we have an estimate for
$\|(-\phi^{\prime})^{1/2}d_{F}u\|^{2}_{\mathcal{L}^{2}(M)}$, with the Poincaré
inequality we can also bound
$\|(-\phi^{\prime})^{1/2}u\|_{\mathcal{L}^{2}(M)}$. Therefore, considering the
support of $\chi$ and $u$, there exists a compact subset $K\subset M$ such
that
$\|(-\phi^{\prime})^{1/2}u\|_{\mathcal{L}^{2}(M)}\leq
C\|(-\phi^{\prime})^{1/2}P_{\Theta}u\|_{\dot{\mathcal{H}}^{-1,1}(K)},$ (38)
from which the sought conclusion descends.
∎
###### Remark 4.1.
The case with $\Theta\in\Psi^{k}_{b}(\partial M)$ of order $k\leq 0$ can also
be seen as a corollary of the well-posedness result of [GW18].
The following two statements guarantee uniqueness and existence of the
solutions for the Klein-Gordon equation associated to the operator
$P_{\Theta}$ introduced in Equation (20). Mutatis mutandis, since we assume
that $\Theta$ is local in time, the proof of the first statement is identical
to the counterpart in [Vas12] and therefore we omit it.
###### Corollary 4.1.
Let $M$ be a globally hyperbolic, asymptotically anti-de Sitter spacetime, cf.
Definition 2.2 and let $f\in\dot{\mathcal{H}}^{-1,1}(M)$ be vanishing whenever
$\tau<\tau_{0}$, $\tau_{0}\in\mathbb{R}$. Suppose in addition that $\Theta$
abides to the Hypothesis 4.1. Then there exists at most one
$u\in\mathcal{H}^{1}_{0}(M)$ such that $\textrm{supp}(u)\subset\\{q\in M\ |\
\tau(q)\geq\tau_{0}\\}$ and it is a solution of $P_{\Theta}u=f$.
At the same time the following statement holds true.
###### Lemma 4.2.
Let $M$ be a globally hyperbolic, asymptotically anti-de Sitter spacetime, cf.
Definition 2.2 and let $f\in\dot{\mathcal{H}}^{-1,1}(M)$ be vanishing whenever
$\tau<\tau_{0}$, $\tau_{0}\in\mathbb{R}$. Then there exists a solution
$u\in\mathcal{H}^{1,-1}(M)$ of the problem $P_{\Theta}u=f$, cf. Equation (20),
such that $\tau(\textrm{supp}(u))\geq\tau_{0}$.
The proof follows the one given in [Vas12, Prop. 4.15], but we deem it worth
sketching the main ideas. The first step consists of proving a local version
of the lemma, namely that given a compact set $I\subset\mathbb{R}$, there
exists $\sigma>0$ such that for every $\tau_{0}\in I$ there exists
$u\in\mathcal{H}^{1,-1}(M)$ such that $\textrm{supp}(u)\subset\\{p\in M\ |\
\tau(p)\geq\tau_{0}\\}$ and $P_{\Theta}u=f$ for $\tau<\tau_{0}+\sigma$. The
main point of this part of
the proof consists of applying Lemma 4.1 to ensure that the adjoint of the
Klein-Gordon operator, say $P^{*}_{\Theta}$, is invertible over the set of
smooth functions supported in suitable compact subsets of $M$ – see [Vas12,
Lem. 4.14] for further details. With this result in hand, one divides the time
direction into sufficiently small intervals $[\tau_{j},\tau_{j+1}]$ and uses a
partition of unity along the time coordinate to build a global solution for
$P_{\Theta}u=f$.
Finally, we extend our results to $u\in\mathcal{H}^{1,m}_{loc}(M)$ and to
$f\in\dot{\mathcal{H}}^{-1,m+1}_{loc}(M)$. Let us consider
$\Theta\in\Psi^{k}_{b}(\partial M)$ with $k\leq 0$, the proof for the positive
cases being the same. If $m\geq 0$, Lemma 4.1 entails that Equation (18)
admits a unique solution lying in $\mathcal{H}^{1}_{loc}(M)$. By the
propagation of singularities theorem, cf. Theorem 3.3 and using Hypothesis
4.1, the solution lies in $\mathcal{H}^{1,m}_{loc}(M)$ and the following
generalization of the bound in Lemma 4.1 holds true:
$\|u\|_{\mathcal{H}^{1,m}(M)}\leq C\|f\|_{\dot{\mathcal{H}}^{-1,m+1}(M)}.$
If $m<0$ we can draw the same conclusion considering, as in [Vas12, Thm.
8.12],
$P_{\Theta}u_{j}=f_{j},$ (39)
where $f_{j}\in\dot{\mathcal{H}}^{-1,m+1}(M)$ is a sequence converging to $f$
as $j\to\infty$. Each of these equations has a unique solution
$u_{j}\in\mathcal{H}^{1}(M)$. In addition the propagation of singularities
theorem, cf. Theorem 3.3, yields the bound
$\|u_{k}-u_{j}\|_{\mathcal{H}^{1,m}(K)}\leq
C\|f_{k}-f_{j}\|_{\dot{\mathcal{H}}^{-1,m+1}(L)}$
for suitable compact sets $K,L\subset M$ and for every $j,k\in\mathbb{N}$.
Since $f_{j}\rightarrow f$ in $\dot{\mathcal{H}}^{-1,m+1}(L)$, we can conclude
that the sequence $u_{j}$ is converging to $u\in\mathcal{H}^{1,m}(K)$.
Choosing the sequence so that each $f_{j}$ vanishes on $\\{\tau<\tau_{0}\\}$,
one obtains the desired support property of the solution. To conclude this
analysis we summarize the final result which combines Corollary 4.1 and Lemma
4.2.
###### Proposition 4.1.
Let $M$ be a globally hyperbolic, asymptotically anti-de Sitter spacetime, cf.
Definition 2.2 and let $m,\tau_{0}\in\mathbb{R}$ while
$f\in\dot{\mathcal{H}}^{-1,m+1}_{loc}(M)$. Assume in addition that $\Theta$
abides to Hypothesis 4.1. If $f$ vanishes for $\tau<\tau_{0}$,
$\tau_{0}\in\mathbb{R}$ being arbitrary but fixed, then there exists a unique
$u\in\mathcal{H}^{1,m}_{loc}(M)$ such that
$P_{\Theta}u=f,$ (40)
where $P_{\Theta}$ is the operator in Equation (20).
We have gathered all ingredients to prove the existence of advanced and
retarded fundamental solutions associated to the Klein-Gordon operator
$P_{\Theta}$, cf. Equation (20). To this end let us define the following
notable subspaces of $\mathcal{H}^{k,m}(M)$, $k=0,\pm 1$,
$m\in\mathbb{N}\cup\\{0\\}$:
$\mathcal{H}^{k,m}_{-}(M)=\\{u\in\mathcal{H}^{k,m}(M)\;|\;\exists\tau_{-}\in\mathbb{R}\;\textrm{such
that}\;p\notin\textrm{supp}(u),\;\textrm{if}\,\tau(p)<\tau_{-}\\},$ (41a)
$\mathcal{H}^{k,m}_{+}(M)=\\{u\in\mathcal{H}^{k,m}(M)\;|\;\exists\tau_{+}\in\mathbb{R}\;\textrm{such
that}\;p\notin\textrm{supp}(u)\;\textrm{if}\,\tau(p)>\tau_{+}\\},$ (41b)
$\mathcal{H}^{k,m}_{tc}(M)\doteq\mathcal{H}^{k,m}_{+}(M)\cap\mathcal{H}^{k,m}_{-}(M),$
(41c)
where the subscript $tc$ stands for timelike compact. In addition we call
$\mathcal{H}^{1,m}_{\pm,\Theta}(M)\doteq\\{u\in
H^{1,m}_{\pm}(M)\;|\;\gamma_{+}(u)=\Theta\gamma_{-}(u)\\},$ (42)
where $\gamma_{-},\gamma_{+}$ are the trace maps introduced in Theorem 3.1 and
in Lemma 3.2, while $\Theta$ is a pseudodifferential operator abiding to
Hypothesis 4.1.
Exactly as in [GW18], the following result on the advanced and retarded
propagators $G_{\Theta}^{\pm}$ associated to the Klein-Gordon operator
$P_{\Theta}$, cf. Equation (20), descends from Lemma 4.1 and Proposition 4.1.
###### Theorem 4.1.
Let $P_{\Theta}$ be the Klein-Gordon operator as per Equation (20) where
$\Theta$ abides to Hypothesis 4.1. Then there exist unique retarded $(+)$ and
advanced $(-)$ propagators, that is continuous operators
$G_{\Theta}^{\pm}:\dot{\mathcal{H}}^{-1,m+1}_{\pm}(M)\rightarrow\mathcal{H}^{1,m}_{\pm}(M)$
such that $P_{\Theta}G_{\Theta}^{\pm}=\mathbb{I}$ on
$\dot{\mathcal{H}}^{-1,m+1}_{\pm}(M)$ and
$G_{\Theta}^{\pm}P_{\Theta}=\mathbb{I}$ on
$\mathcal{H}^{1,m}_{\pm,\Theta}(M)$. Furthermore, $G_{\Theta}^{\pm}$ is a
continuous map from $\dot{\mathcal{H}}_{0}^{-1,\infty}(M)$ to
$\mathcal{H}_{loc}^{1,\infty}(M)$ where the subscript $0$ indicates that we
consider only functions of compact support.
Observe that the restriction to $\mathcal{H}^{1,m}_{\pm,\Theta}(M)$ is
necessary since, per construction, an element in the range of
$G^{\pm}_{\Theta}P_{\Theta}$ abides to the boundary conditions as in Equation
(18).
###### Remark 4.2.
Associated to the advanced and to retarded propagators, one can define the
causal propagator
$G_{\Theta}:\dot{\mathcal{H}}_{0}^{-1,m+1}(M)\rightarrow\mathcal{H}^{1,m}_{loc}(M)$
as $G_{\Theta}=G_{\Theta}^{+}-G_{\Theta}^{-}$.
Since $G^{\pm}_{\Theta}$ are continuous maps, cf. Theorem 4.1, one can apply
the Schwartz kernel theorem to infer that to each of them one can associate a
bi-distribution $\mathcal{G}^{\pm}_{\Theta}\in\mathcal{D}^{\prime}(M\times M)$.
To conclude the section we highlight a standard and important application of
the fundamental solutions and in particular of the causal propagator, cf.
Remark 4.2.
###### Proposition 4.2.
Let $P_{\Theta}$ be the Klein-Gordon operator as per Equation (20) and let
$G_{\Theta}$ be its associated causal propagator, cf. Remark 4.2. Then the
following is an exact sequence:
$\displaystyle
0\to\mathcal{H}^{1,\infty}_{tc,\Theta}(M)\stackrel{{\scriptstyle
P_{\Theta}}}{{\longrightarrow}}\dot{\mathcal{H}}^{-1,\infty}_{tc}(M)\stackrel{{\scriptstyle
G_{\Theta}}}{{\longrightarrow}}\mathcal{H}^{1,\infty}_{\Theta}(M)\stackrel{{\scriptstyle
P_{\Theta}}}{{\longrightarrow}}\dot{\mathcal{H}}^{-1,\infty}(M)\to 0\,.$ (43)
###### Proof.
To prove that the sequence is exact, we start by establishing that
$P_{\Theta}$ is injective on $\mathcal{H}^{1,\infty}_{tc,\Theta}(M)$. This is
a consequence of Theorem 4.1 which guarantees that, if $P_{\Theta}(h)=0$ for
$h\in\mathcal{H}^{1,\infty}_{tc,\Theta}(M)$, then $G^{+}_{\Theta}P_{\Theta}(h)=h=0$.
Secondly, on account of Theorem 4.1 and in particular of the identity
$G^{\pm}_{\Theta}P_{\Theta}=\mathbb{I}$ on $\mathcal{H}^{1}_{\pm,\Theta}(M)$,
it holds that $G_{\Theta}P_{\Theta}(f)=0$ for all
$f\in\mathcal{H}^{1,\infty}_{tc,\Theta}(M)$. Hence
$\mathrm{Im}(P_{\Theta})\subseteq\ker(G_{\Theta})$. Assume that there exists
$f\in\dot{\mathcal{H}}^{-1,\infty}_{tc}(M)$ such that $G_{\Theta}(f)=0$. It
descends that
$G^{+}_{\Theta}(f)=G^{-}_{\Theta}(f)\in\mathcal{H}^{1,\infty}_{tc,\Theta}(M)$.
Applying $P_{\Theta}$ it holds that $f=P_{\Theta}G^{+}_{\Theta}(f)$, that is
$f\in P_{\Theta}[\mathcal{H}^{1,\infty}_{tc,\Theta}(M)]$.
The third step consists of recalling that, per construction,
$P_{\Theta}G_{\Theta}=0$ and that, still on account of Theorem 4.1,
$\textrm{Im}(G_{\Theta})\subseteq\ker(P_{\Theta})$. To prove the opposite
inclusion, suppose that $u\in\ker(P_{\Theta})$. Let $\chi\equiv\chi(\tau)$ be
a smooth function for which there exist $\tau_{0},\tau_{1}\in\mathbb{R}$ such
that $\chi=1$ if $\tau>\tau_{1}$ and $\chi=0$ if $\tau<\tau_{0}$. Since
$\Theta$ is a static boundary condition and, therefore, it commutes with
$\chi$, it holds that $\chi u\in\mathcal{H}^{1,\infty}_{+,\Theta}(M)$. Hence
setting $f\doteq P_{\Theta}\chi u$, a direct calculation shows that
$G_{\Theta}f=u$.
To conclude we need to show that the rightmost map $P_{\Theta}$ in Equation
(43) is surjective. To this end, let $j\in\dot{\mathcal{H}}^{-1,\infty}(M)$
and let
$\chi\equiv\chi(\tau)$ be as above. Let $h\doteq G^{+}_{\Theta}\left(\chi
j\right)+G^{-}_{\Theta}\left((1-\chi)j\right)$. Per construction
$h\in\mathcal{H}^{1,\infty}(M)$ and $P_{\Theta}(h)=j$. ∎
Mainly for physical reasons we single out the following special classes of
boundary conditions. Recall that, according to Theorem 2.1, $M$ is isometric to
$\mathbb{R}\times\Sigma$ and $\partial M$ to $\mathbb{R}\times\partial\Sigma$.
###### Definition 4.2.
Let $\Theta\in\Psi^{k}_{b}(M)$ with $k\leq 2$ and let $\Theta=\Theta^{*}$. We
call $\Theta$
* •
physically admissible if $WF_{b}^{-1,s+1}(\Theta u)\subseteq
WF_{b}^{-1,s+1}(P_{\Theta}u)$ for all $u\in\mathcal{H}^{1,m}_{loc}(M)$ with
$m\leq 0$ and $s\in\mathbb{R}\cup\\{\infty\\}$.
* •
a static boundary condition if $\Theta\equiv\Theta_{K}$ is the natural
extension to $\Psi^{k}_{b}(M)$ of a pseudodifferential operator
$K=K^{*}\in\Psi_{b}^{k}(\partial\Sigma)$ with $k\leq 2$.
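For concreteness, a simple instance of a static boundary condition (our illustration; it does not appear in the text above and conventions may differ) is the Robin-type choice in which $K$ acts as multiplication by a constant $\theta\in\mathbb{R}$,
$K=\theta\,\mathbb{I}\in\Psi^{0}_{b}(\partial\Sigma),\qquad\gamma_{+}(u)=\theta\,\gamma_{-}(u),$
which reduces to a Neumann-type condition for $\theta=0$, while the Dirichlet condition corresponds formally to $\gamma_{-}(u)=0$. A genuinely second order example is the Wentzell-type choice $K=\Delta_{h}$, with $\Delta_{h}$ the Laplace-Beltrami operator of a Riemannian metric $h$ on $\partial\Sigma$, which is symmetric and lies in $\Psi^{2}_{b}(\partial\Sigma)$.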
Observe that any static boundary condition is automatically local in time, see
Definition 4.1. Starting from these premises we can investigate further
properties of the fundamental solutions, starting from the singularities of
the advanced and retarded propagators. To this end let us introduce
$\mathcal{W}_{b}^{-\infty}(M)$, the space of bounded operators from
$\dot{\mathcal{H}}_{0}^{-1,-\infty}(M)$ to $\mathcal{H}^{1,\infty}_{loc}(M)$,
and let us give a definition of wavefront set complementary to that of
Definition 3.4.
###### Definition 4.3 (Operatorial wavefront set $WF_{b}^{Op}(M)$).
Let
$\Lambda:\dot{\mathcal{H}}^{-1,-\infty}_{0}(M)\rightarrow\mathcal{H}^{1,\infty}_{loc}(M)$
be a continuous map. A point
$(q_{1},q_{2})\in{}^{b}S^{*}M\times{}^{b}S^{*}M$ does not belong to
$WF_{b}^{Op}(\Lambda)$ if there exist two b-pseudodifferential operators
$B_{1}$ and $B_{2}$ in $\Psi_{b}^{0}(M)$, elliptic at $q_{1}$ and $q_{2}$
respectively, such that $B_{1}\Lambda B_{2}^{*}\in\mathcal{W}_{b}^{-\infty}(M)$.
Recalling Equation (4), we can state the following theorem characterizing the
singularities of the advanced and of the retarded fundamental solutions. The
proof is a direct application of Theorem 3.2 or of Theorem 3.3.
###### Theorem 4.2.
Let $\Delta$ denote the diagonal in ${}^{b}S^{*}M\times{}^{b}S^{*}M$ and let
$\Theta$ be physically admissible as per Definition 4.2. Then
$WF_{b}^{Op}(G_{\Theta}^{\pm})\setminus\Delta\subset\\{(q_{1},q_{2})\in{}^{b}S^{*}M\times{}^{b}S^{*}M\
|\ q_{1}\dot{\sim}q_{2},\ \pm t(q_{1})>\pm t(q_{2})\\},$
where $q_{1}\dot{\sim}q_{2}$ means that $q_{1},q_{2}$ are two points in
$\dot{\mathcal{N}}$, cf. Equation (22) connected by a generalized broken
bicharacteristic, cf. Definition 3.8.
###### Remark 4.3.
The reason for the hypothesis on $\Theta$ lies in the fact that we do not want
to alter the microlocal behavior of the system in $\mathring{M}$. More
precisely, if no restriction on the wavefront set of $\Theta u$ is placed,
then by the propagation of singularities theorem, cf. Theorem 3.2, in addition
to the singularities propagating along the generalized broken
bicharacteristics of the Klein-Gordon operator we should account also for
those of $\Theta u$. On the one hand this would be in sharp contrast with what
happens if $M$ were a globally hyperbolic spacetime without boundary. On the
other hand, in concrete applications such as the construction of Hadamard two-
point functions, one looks for bi-distributions with a prescribed form of the
wavefront set and whose antisymmetric part coincides with the difference
between the advanced and retarded fundamental solutions associated to the
Klein-Gordon operator with boundary condition implemented by $\Theta$, see
e.g. [DF16, DF18, DW19, Wro17, GW18].
In addition one can infer the following localization property which is
sometimes referred to as time-slice axiom.
###### Corollary 4.2.
Let
$\dot{\mathcal{H}}^{-1,\infty}_{tc,[\tau_{1},\tau_{2}]}(M)\subset\dot{\mathcal{H}}^{-1,\infty}_{tc}(M)$
be the collection of all $u\in\dot{\mathcal{H}}^{-1,\infty}_{tc}(M)$ such that
$p\notin\textrm{supp}(u)$ whenever $\tau(p)\notin[\tau_{1},\tau_{2}]$,
$\tau_{1},\tau_{2}\in\mathbb{R}$. Then, if $\Theta$ is a static boundary
condition as per Definition 4.2, the inclusion map
$\iota_{\tau_{1},\tau_{2}}:\dot{\mathcal{H}}^{-1,\infty}_{tc,[\tau_{1},\tau_{2}]}(M)\rightarrow\dot{\mathcal{H}}^{-1,\infty}_{tc}(M)$
induces the isomorphism
$[\iota_{\tau_{1},\tau_{2}}]:\dfrac{\dot{\mathcal{H}}_{tc,[\tau_{1},\tau_{2}]}^{-1,\infty}(M)}{P_{\Theta}\mathcal{H}_{tc,[\tau_{1},\tau_{2}]}^{1,\infty}(M)}\rightarrow\dfrac{\dot{\mathcal{H}}_{tc}^{-1,\infty}(M)}{P_{\Theta}\mathcal{H}_{tc}^{1,\infty}(M)}.$
(44)
###### Proof.
By direct inspection one can realize that the map $\iota_{\tau_{1},\tau_{2}}$
descends to the quotient space
$\dfrac{\dot{\mathcal{H}}_{tc,[\tau_{1},\tau_{2}]}^{-1,\infty}(M)}{P_{\Theta}\mathcal{H}_{tc,[\tau_{1},\tau_{2}]}^{1,\infty}(M)}$.
The ensuing application $[\iota_{\tau_{1},\tau_{2}}]$ is manifestly injective.
We need to show that it is also surjective. Consider therefore any
$[f]\in\dfrac{\dot{\mathcal{H}}_{tc}^{-1,\infty}(M)}{P_{\Theta}\mathcal{H}_{tc}^{1,\infty}(M)}$
and let $G_{\Theta}(f)$ be the associated solution of the Klein-Gordon
equation, cf. Equation (43). Let $\chi\equiv\chi(\tau)$ be a smooth function
such that $\chi=1$ if $\tau>\tau_{2}$ while $\chi=0$ if $\tau<\tau_{1}$. The
function $h\doteq P_{\Theta}\left(\chi
G_{\Theta}(f)\right)\in\dot{\mathcal{H}}^{-1,\infty}_{tc,[\tau_{1},\tau_{2}]}(M)$,
where $G_{\Theta}$ is the causal propagator, cf. Remark 4.2 and Proposition
4.2. Per construction the map $P_{\Theta}\circ\chi\circ G_{\Theta}$ descends
to an application from
$\dfrac{\dot{\mathcal{H}}_{tc}^{-1,\infty}(M)}{P_{\Theta}\mathcal{H}_{tc}^{1,\infty}(M)}$
to
$\dfrac{\dot{\mathcal{H}}_{tc,[\tau_{1},\tau_{2}]}^{-1,\infty}(M)}{P_{\Theta}\mathcal{H}_{tc,[\tau_{1},\tau_{2}]}^{1,\infty}(M)}$
which is both a left and a right inverse of $[\iota_{\tau_{1},\tau_{2}}]$. ∎
## 5 Hadamard States
In this section, we discuss a specific application of the results obtained in
the previous section, namely we prove existence of a family of distinguished
two-point correlation functions for a Klein-Gordon field on a globally
hyperbolic, asymptotically AdS spacetime, dubbed Hadamard two-point
distributions. These play an important rôle in the algebraic formulation of
quantum field theory, particularly when the underlying background is a generic
globally hyperbolic spacetime with or without boundary, see e.g. [KM13] for a
review as well as [DF16, DF18, DFM18] for the analysis on anti-de Sitter
spacetime and [Wro17] for that on a generic asymptotically AdS spacetime,
though only in the case of Dirichlet boundary conditions.
Here our goal is to prove that such a class of two-point functions exists even
if one considers more generic boundary conditions. To prove this statement,
the strategy that we follow is divided into three main steps, which we
summarize
for the reader’s convenience. To start with, we restrict our attention to
static, asymptotically anti-de Sitter and globally hyperbolic spacetimes and
to boundary conditions which are both physically admissible and static, see
Definition 4.2. In this context, by means of spectral techniques, we give an
explicit characterization of the advanced and retarded fundamental solutions.
To this end we use the theory of boundary triples, a framework which is
slightly different from, albeit connected to, the one employed in the previous
sections, see [DDF18].
Subsequently we show that, starting from the fundamental solutions and from
the associated causal propagator, it is possible to identify a distinguished
two-point distribution of Hadamard form.
To conclude, we adapt and we generalize to the case in hand a deformation
argument due to Fulling, Narcowich and Wald, [FNW81] which, in combination
with the propagation of singularities theorem, allows us to infer the existence
of Hadamard two-point distributions for a Klein-Gordon field on a generic
globally hyperbolic and asymptotically AdS spacetime starting from those on a
static background.
### 5.1 Fundamental solutions on static spacetimes
In this section we give a concrete example of advanced and retarded
fundamental solutions for the Klein-Gordon operator $P_{\Theta}$, cf. Equation
(20) on a static, globally hyperbolic, asymptotically AdS spacetime. For the
sake of simplicity, we consider a massless scalar field, corresponding to the
case $\nu=(n-1)/2$, see Equation (7). Observe that, since the detailed analysis
of this problem has been mostly carried out in [DDF18], we refer to it for the
derivation and for most of the technical details. Here we shall limit
ourselves to giving a succinct account of the main results.
As a starting point, we specify precisely the underlying geometric structure:
###### Definition 5.1.
Let $(M,g)$ be an $n$-dimensional Lorentzian manifold. We call it a static
globally hyperbolic, asymptotically AdS spacetime if it abides to Definition
2.2 and, in addition,
* 1)
There exists an irrotational, timelike Killing field $\chi\in\Gamma(TM)$, such
that $\mathcal{L}_{\chi}(x)=0$ where $x$ is the global boundary function,
* 2)
$(M,\hat{g})$ is isometric to a standard static spacetime, that is a warped
product $\mathbb{R}\times_{\alpha}S$ with line element
$ds^{2}=-\alpha^{2}dt^{2}+h_{S}$, where $h_{S}$ is a $t$-independent Riemannian
metric on $S$, while $\alpha$ is a smooth, positive, $t$-independent function.
###### Remark 5.1.
In the following, without loss of generality, we shall assume that, whenever
we consider a static, globally hyperbolic, asymptotically AdS spacetime as per
Definition 5.1, the timelike Killing field $\chi$ coincides with the vector
field $\partial_{\tau}$, cf. Theorem 2.1. Hence the underlying line-element
reads as $ds^{2}=-\beta d\tau^{2}+\kappa$ where both $\beta$ and
$\kappa$ are $\tau$-independent and $S$ can be identified with the Cauchy
surface $\Sigma$ in Theorem 2.1. For convenience we also remark that, in view
of this characterization of the metric, the associated Klein-Gordon equation
$Pu=0$ with $P=\Box_{g}$ reads
$\left(-\partial^{2}_{\tau}+E\right)u=0,$ (45)
where $E=\beta\Delta_{\kappa}$, $\Delta_{\kappa}$ being the Laplace-Beltrami
operator associated to the Riemannian metric $\kappa$.
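As a heuristic illustration of Equation (45) (a formal sketch; we assume that a self-adjoint realization $E_{K}$ of $E$ has been fixed and that the spectrum of $-E_{K}$ is contained in $(0,\infty)$), the Cauchy problem with data $(u_{0},u_{1})$ at $\tau=0$ is solved via functional calculus by
$u(\tau)=\cos\big[(-E_{K})^{1/2}\tau\big]u_{0}+(-E_{K})^{-1/2}\sin\big[(-E_{K})^{1/2}\tau\big]u_{1},$
since $\partial_{\tau}^{2}u=E_{K}u$ for both summands. The second summand is precisely the kernel which enters the construction of the causal propagator on static backgrounds.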
Henceforth we consider only static boundary conditions as per Definition 4.2
which we indicate with the symbol $\Theta_{K}$ to recall that they are induced
from $K\in\Psi^{k}_{b}(\partial M)$. Since the underlying spacetime is static,
in order to construct the advanced and retarded fundamental solutions, we can
focus our attention on
$\mathcal{G}_{\Theta_{K}}\in\mathcal{D}^{\prime}(\mathring{M}\times\mathring{M})$,
the bi-distribution associated to the causal propagator $G_{\Theta_{K}}$,
cf. Remark 4.2. It satisfies the following initial value problem, see also
[DDF18]:
$\begin{cases}(P_{\Theta_{K}}\otimes\mathbb{I})\mathcal{G}_{\Theta_{K}}=(\mathbb{I}\otimes
P_{\Theta_{K}})\mathcal{G}_{\Theta_{K}}=0\\\
\mathcal{G}_{\Theta_{K}}|_{\tau=\tau^{\prime}}=0\quad\\\
\partial_{\tau}\mathcal{G}_{\Theta_{K}}|_{\tau=\tau^{\prime}}=-\partial_{\tau^{\prime}}\mathcal{G}_{\Theta_{K}}|_{\tau=\tau^{\prime}}=\delta\end{cases}$
(46)
where $\delta$ is the Dirac distribution on the diagonal of
$\mathring{M}\times\mathring{M}$. Starting from $\mathcal{G}_{\Theta_{K}}$ one
can recover the advanced and retarded fundamental solutions
$\mathcal{G}^{\pm}_{\Theta_{K}}$ via the identities:
$\mathcal{G}^{-}_{\Theta_{K}}=\vartheta(\tau-\tau^{\prime})\mathcal{G}_{\Theta_{K}}\quad\textrm{and}\quad\mathcal{G}^{+}_{\Theta_{K}}=-\vartheta(\tau^{\prime}-\tau)\mathcal{G}_{\Theta_{K}},$
(47)
where $\vartheta$ is the Heaviside function. The existence and the properties
of $\mathcal{G}_{\Theta_{K}}$ have been thoroughly analyzed in [DDF18] using
the framework of boundary triples, cf. [Gru68]. Here we recall the main
structural aspects.
###### Definition 5.2.
Let $H$ be a separable Hilbert space over $\mathbb{C}$ and let $S:D(S)\subset
H\rightarrow H$ be a closed, linear and symmetric operator. A boundary triple
for the adjoint operator $S^{*}$ is a triple
$(\mathsf{h},\gamma_{0},\gamma_{1})$, where $\mathsf{h}$ is a separable
Hilbert space over $\mathbb{C}$ while
$\gamma_{0},\gamma_{1}:D(S^{*})\rightarrow\mathsf{h}$ are two linear maps
satisfying
* 1)
For every $f,f^{\prime}\in D(S^{*})$ it holds
$(S^{*}f|f^{\prime})_{H}-(f|S^{*}f^{\prime})_{H}=(\gamma_{1}f|\gamma_{0}f^{\prime})_{\mathsf{h}}-(\gamma_{0}f|\gamma_{1}f^{\prime})_{\mathsf{h}}$
(48)
* 2)
The map $\gamma:D(S^{*})\rightarrow\mathsf{h}\times\mathsf{h}$ defined by
$\gamma(f)=(\gamma_{0}f,\gamma_{1}f)$ is surjective.
One of the key advantages of this framework is encoded in the following
proposition, see [Mal92]:
###### Proposition 5.1.
Let $S$ be a linear, closed and symmetric operator on $H$. Then an associated
boundary triple $(\mathsf{h},\gamma_{0},\gamma_{1})$ exists if and only if
$S^{*}$ has equal deficiency indices. In addition, if
$\Theta:D(\Theta)\subseteq\mathsf{h}\rightarrow\mathsf{h}$ is a closed and
densely defined linear operator, then $S_{\Theta}\doteq
S^{*}|_{ker(\gamma_{1}-\Theta\gamma_{0})}$ is a closed extension of $S$ with
domain
$D(S_{\Theta})\doteq\\{f\in D(S^{*})\;|\;\gamma_{0}(f)\in
D(\Theta),\;\textrm{and}\;\gamma_{1}(f)=\Theta\gamma_{0}(f)\\}$
The map $\Theta\mapsto S_{\Theta}$ is one-to-one and
$S^{*}_{\Theta}=S_{\Theta^{*}}$. In other words, there is a one-to-one
correspondence between self-adjoint operators $\Theta$ on $\mathsf{h}$ and
self-adjoint extensions of $S$.
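The simplest nontrivial instance of this machinery (a textbook example, added here for illustration and not drawn from the text above) is $H=L^{2}((0,\infty))$ with $S$ the closure of $-\frac{d^{2}}{dx^{2}}$ defined on $C^{\infty}_{0}((0,\infty))$. Then $D(S^{*})=H^{2}((0,\infty))$, $\mathsf{h}=\mathbb{C}$ and
$\gamma_{0}(f)=f(0),\qquad\gamma_{1}(f)=f^{\prime}(0),$
since for every $f,g\in D(S^{*})$ an integration by parts yields $(S^{*}f|g)_{H}-(f|S^{*}g)_{H}=\overline{f^{\prime}(0)}g(0)-\overline{f(0)}g^{\prime}(0)$, which matches Equation (48). Observe that $f^{\prime}(0)=-\nabla_{n}f(0)$, $n$ being the outward pointing normal. Choosing $\Theta=\theta\in\mathbb{R}$ in Proposition 5.1 yields the self-adjoint Robin realizations with domain $\\{f\in H^{2}((0,\infty))\ |\ f^{\prime}(0)=\theta f(0)\\}$, the case $\theta=0$ being the Neumann extension.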
Noteworthy is the application of this framework to the case where the rôle of
$S$ is played by a second order elliptic partial differential operator $E$.
Observe that this symbol is employed having in mind the subsequent application
to Equation (45). To construct a boundary triple associated with $E^{*}$, let
$n$ be the unit, outward pointing, normal of $\partial\Sigma$ and let
$\Gamma_{0}\colon H^{2}(\Sigma)\ni f\mapsto\Gamma f\in
H^{3/2}(\partial\Sigma),\,\qquad\Gamma_{1}\colon H^{2}(\Sigma)\ni
f\mapsto-\Gamma\nabla_{n}f\in H^{1/2}(\partial\Sigma)\,,$
where $H^{k}(\Sigma)$ indicates the Sobolev space associated to the Riemannian
manifold $(\Sigma,\kappa)$ introduced at the end of Section 2.1. Here
$\Gamma:H^{s}(\Sigma)\to H^{s-\frac{1}{2}}(\partial\Sigma)$, $s>\frac{1}{2}$, is the
continuous surjective extension of the restriction map from
$C^{\infty}_{0}(\Sigma)$ to $C^{\infty}_{0}(\partial\Sigma)$, cf. [GS13, Th.
4.10 & Cor. 4.12]. In addition, since the inner product
$(\,|\,)_{L^{2}(\partial\Sigma)}$ on $L^{2}(\partial\Sigma)\equiv
L^{2}(\partial\Sigma;\iota^{*}_{\Sigma}d\mu_{g})$,
$\iota_{\Sigma}:\partial\Sigma\hookrightarrow\Sigma$, extends continuously to
a pairing on $H^{-1/2}(\partial\Sigma)\times H^{1/2}(\partial\Sigma)$ as well
as on $H^{-3/2}(\partial\Sigma)\times H^{3/2}(\partial\Sigma)$, there exist
isomorphisms
$\iota_{\pm}\colon H^{\pm 1/2}(\partial\Sigma)\to L^{2}(\partial\Sigma),\qquad
j_{\pm}\colon H^{\pm 3/2}(\partial\Sigma)\to L^{2}(\partial\Sigma)\,,$
such that, for all $(\psi,\phi)\in H^{1/2}(\partial\Sigma)\times
H^{-1/2}(\partial\Sigma)$ and for all $(\widetilde{\psi},\widetilde{\phi})\in
H^{3/2}(\partial\Sigma)\times H^{-3/2}(\partial\Sigma)$,
$\displaystyle(\psi,\phi)_{(1/2,-1/2)}=(\iota_{+}\psi|\,\iota_{-}\phi)_{L^{2}(\partial\Sigma)}\,,\quad(\widetilde{\psi},\widetilde{\phi})_{(3/2,-3/2)}=(j_{+}\widetilde{\psi}|\,j_{-}\widetilde{\phi})_{L^{2}(\partial\Sigma)}\,,$
where $(,)_{(1/2,-1/2)}$ and $(,)_{(3/2,-3/2)}$ stand for the duality pairings
between the associated Sobolev spaces.
###### Remark 5.2.
Note that in the massless case, the two trace operators $\Gamma_{0}$ and
$\Gamma_{1}$ coincide respectively with the restriction to $H^{2}(M)$ of the
traces $\gamma_{-}$ and $\gamma_{+}$ introduced in Theorem 3.1 and in Lemma
3.2.
Gathering all the above ingredients, we can state the following proposition,
cf. [DDF18, Thm. 24 & Rmk 25]:
###### Proposition 5.2.
Let $E^{*}$ be the adjoint of a second order, elliptic, partial differential
operator on a Riemannian manifold $(\Sigma,\kappa)$ with boundary and of
bounded geometry. Let
$\displaystyle\gamma_{0}\colon H^{2}(\Sigma)\ni f\mapsto j_{+}\Gamma_{0}f\in
L^{2}(\partial\Sigma)\,,$ (49) $\displaystyle\gamma_{1}\colon H^{2}(\Sigma)\ni
f\mapsto\iota_{+}\Gamma_{1}f\in L^{2}(\partial\Sigma)\,.$ (50)
Then $(L^{2}(\partial\Sigma),\gamma_{0},\gamma_{1})$ is a boundary triple for
$E^{*}$.
Combining all data together, particularly Proposition 5.1 and Proposition 5.2,
we can state the following theorem, whose proof can be found in [DDF18, Thm.
29]:
###### Theorem 5.1.
Let $(M,g)$ be a static, globally hyperbolic, asymptotically AdS spacetime as
per Definition 5.1. Let $(L^{2}(\partial\Sigma),\gamma_{0},\gamma_{1})$ be the
boundary triple as in Proposition 5.2 associated with $E^{*}$, the adjoint of
the elliptic operator defined in (45), and let $K$ be a densely defined self-
adjoint operator on $L^{2}(\partial\Sigma)$ which induces a static and
physically admissible boundary condition as per Definition 4.2. Let $E_{K}$ be
the self-adjoint extension of $E$ defined as per Proposition 5.1 by
$E_{K}\doteq E^{*}|_{D(E_{K})}$, where
$D(E_{K})\doteq\ker(\gamma_{1}-K\gamma_{0})$. Furthermore, let us assume that
the spectrum of $E_{K}$ is bounded from below.
Then, calling $\Theta_{K}$ the associated boundary condition, the advanced and
retarded Green’s operators $\mathsf{G}^{\pm}_{\Theta_{K}}$ associated to the
wave operator $-\partial_{t}^{2}+E_{K}$ exist and they are unique. They are
completely determined in terms of
$\mathcal{G}^{\pm}_{\Theta_{K}}\in\mathcal{D}^{\prime}(\mathring{M}\times\mathring{M})$.
These are bidistributions such that
$\mathcal{G}^{-}_{\Theta_{K}}=\vartheta(t-t^{\prime})\mathcal{G}_{\Theta_{K}}$
and
$\mathcal{G}^{+}_{\Theta_{K}}=-\vartheta(t^{\prime}-t)\mathcal{G}_{\Theta_{K}}$
where
$\mathcal{G}_{\Theta_{K}}\in\mathcal{D}^{\prime}(\mathring{M}\times\mathring{M})$
is such that, for all $f_{1},f_{2}\in\mathcal{D}(\mathring{M})$
$\displaystyle\mathcal{G}_{\Theta_{K}}(f_{1},f_{2})\doteq\int_{\mathbb{R}^{2}}\textrm{d}t\textrm{d}t^{\prime}\,\bigg{(}f_{1}(t)\bigg{|}(-E_{K})^{-\frac{1}{2}}\sin\big{[}(-E_{K})^{\frac{1}{2}}(t-t^{\prime})\big{]}f_{2}(t^{\prime})\bigg{)},$
(51)
where $f_{i}(t)\in H^{2}(\Sigma)$ denotes the evaluation of $f_{i}$, regarded
as an element of $C_{\textrm{c}}^{\infty}(\mathbb{R},H^{\infty}(\Sigma))$, and
$(-E_{K})^{-\frac{1}{2}}\sin[(-E_{K})^{\frac{1}{2}}(t-t^{\prime})]$ is defined
exploiting the functional calculus for $E_{K}$. Moreover it holds that
$\mathsf{G}^{\pm}_{\Theta_{K}}\colon\mathcal{D}(\mathring{M})\to
C^{\infty}(\mathbb{R},H^{\infty}_{\Theta_{K}}(\Sigma))\,,$
where $H^{\infty}_{\Theta_{K}}(\Sigma)\doteq\bigcap_{k\geq
0}D(E_{K}^{k})$. In particular,
$\displaystyle\gamma_{1}\big{(}\mathsf{G}^{\pm}_{\Theta_{K}}f\big{)}=\Theta_{K}\gamma_{0}\big{(}\mathsf{G}^{\pm}_{\Theta_{K}}f\big{)}\qquad\forall
f\in C^{\infty}_{0}(\mathring{M})\,.$ (52)
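A quick formal consistency check (suppressing domain issues and writing $\omega\doteq(-E_{K})^{1/2}$, assumed to be strictly positive) shows that the kernel $\mathsf{G}(t,t^{\prime})\doteq\omega^{-1}\sin[\omega(t-t^{\prime})]$ appearing in Equation (51) is compatible with the initial value problem (46):
$\mathsf{G}(t,t)=0,\qquad\partial_{t}\mathsf{G}(t,t^{\prime})\big{|}_{t=t^{\prime}}=\cos(0)=\mathbb{I},\qquad(-\partial_{t}^{2}+E_{K})\mathsf{G}=(\omega^{2}-\omega^{2})\,\omega^{-1}\sin[\omega(t-t^{\prime})]=0,$
so that, pairing with test functions, the three conditions in Equation (46) are formally satisfied.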
###### Remark 5.3.
Observe that, in Theorem 5.1 we have constructed the advanced and retarded
fundamental solutions $\mathcal{G}^{\pm}_{\Theta_{K}}$ as elements of
$\mathcal{D}^{\prime}(\mathring{M}\times\mathring{M})$. Yet we can combine
this result with Theorem 4.1 to conclude that there must exist unique advanced
and retarded propagators on the whole $M$ whose restriction to $\mathring{M}$
coincides with $\mathcal{G}^{\pm}_{\Theta_{K}}$. With a slight abuse of
notation we shall refer to these extended fundamental solutions by the same
symbol.
### 5.2 Existence of Hadamard States on Static Spacetimes
In this section, we discuss the existence of Hadamard two-point functions. We
stress that the so-called Hadamard condition and its connection to microlocal
analysis were first studied and formulated under the assumption that the
underlying spacetime is without boundary and globally hyperbolic. We shall not
enter into the details and we refer the interested reader to the survey in
[KM13].
As outlined in the introduction, if the underlying background possesses a
timelike boundary, the notion of Hadamard two-point function needs to be
modified accordingly. Here we follow the same rationale advocated in [DF16,
DF17] and also in [DW19, Wro17].
###### Definition 5.3.
Let $(M,g)$ be a globally hyperbolic, asymptotically AdS spacetime as per
Definition 2.2. A bi-distribution $\lambda_{2}\in\mathcal{D}^{\prime}(M\times
M)$ is called of Hadamard form if its restriction to $\mathring{M}$ has the
following wavefront set
$WF(\lambda_{2})=\left\\{(p,k,p^{\prime},-k^{\prime})\in
T^{*}(\mathring{M}\times\mathring{M})\setminus\\{0\\}\;|\;(p,k)\sim(p^{\prime},k^{\prime})\;\textrm{and}\;k\triangleright
0\right\\},$ (53)
where $\sim$ entails that $(p,k)$ and $(p^{\prime},k^{\prime})$ are connected
by a generalized broken bicharacteristic, while $k\triangleright 0$ means that
the co-vector $k$ at $p\in\mathring{M}$ is future-pointing. Furthermore we
call $\lambda_{2,\Theta}\in\mathcal{D}^{\prime}(M\times M)$ a Hadamard two-
point function associated to $P_{\Theta}$, if, in addition to Equation (53),
it satisfies
$(P_{\Theta}\otimes\mathbb{I})\lambda_{2,\Theta}=(\mathbb{I}\otimes
P_{\Theta})\lambda_{2,\Theta}=0,$
and, for all $f,f^{\prime}\in\mathcal{D}(\mathring{M})$,
$\lambda_{2,\Theta}(f,f)\geq
0,\quad\textrm{and}\quad\lambda_{2,\Theta}(f,f^{\prime})-\lambda_{2,\Theta}(f^{\prime},f)=i\mathcal{G}_{\Theta}(f,f^{\prime}),$
(54)
where $P_{\Theta}$ is the Klein-Gordon operator as in Equation (20), while
$\mathcal{G}_{\Theta}$ is the associated causal propagator, cf. Remark 4.2.
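In the static setting of Section 5.1 a natural candidate satisfying the algebraic conditions in Equation (54) can be exhibited by frequency splitting (a formal sketch under the assumption that the spectrum of $-E_{K}$ is contained in $(0,\infty)$; signs depend on the chosen conventions). Writing $\omega\doteq(-E_{K})^{1/2}$, set for $f_{1},f_{2}\in\mathcal{D}(\mathring{M})$
$\lambda_{2,\Theta_{K}}(f_{1},f_{2})\doteq\int_{\mathbb{R}^{2}}\textrm{d}t\,\textrm{d}t^{\prime}\,\bigg{(}f_{1}(t)\,\bigg{|}\,\frac{e^{i\omega(t-t^{\prime})}}{2\omega}f_{2}(t^{\prime})\bigg{)}.$
Its antisymmetric part has kernel $\frac{e^{i\omega(t-t^{\prime})}-e^{-i\omega(t-t^{\prime})}}{2\omega}=i\,\omega^{-1}\sin[\omega(t-t^{\prime})]$, which reproduces $i\mathcal{G}_{\Theta_{K}}$ as per Equation (51), while positivity follows from $\lambda_{2,\Theta_{K}}(f,f)=\frac{1}{2}\big{(}\widehat{f}(\omega)\,\big{|}\,\omega^{-1}\widehat{f}(\omega)\big{)}\geq 0$, with $\widehat{f}(\omega)\doteq\int_{\mathbb{R}}e^{-i\omega t}f(t)\,\textrm{d}t$ defined via the functional calculus for $\omega$. This is the ground state two-point function singled out by the timelike Killing field.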
###### Remark 5.4.
To make contact with the terminology often used in theoretical physics, given
a Hadamard two-point function $\lambda_{2,\Theta}$, we can identify the
following associated bidistributions:
* •
the bulk-to-bulk two-point function
$\mathring{\lambda}_{2,\Theta}\in\mathcal{D}^{\prime}(\mathring{M}\times\mathring{M})$
such that
$\mathring{\lambda}_{2,\Theta}\doteq\left.\lambda_{2,\Theta}\right|_{\mathring{M}}$
is the restriction of the Hadamard two-point function to
$\mathring{M}\times\mathring{M}$.
* •
the boundary-to-boundary two-point function
$\lambda_{2,\partial,\Theta}\in\mathcal{D}^{\prime}(\partial M\times\partial
M)$ such that
$\lambda_{2,\partial,\Theta}\doteq(\iota_{\partial}^{*}\otimes\iota_{\partial}^{*})\lambda_{2,\Theta}$
where $\iota_{\partial}:\partial M\to M$ is the embedding map of the boundary
in $M$.
Observe that $\lambda_{2,\partial,\Theta}$ is well-defined on account of
Equation (53) and of [Hör03, Thm. 8.2.4].
The existence of Hadamard two-point functions is not a priori obvious and it
represents an important question at the level of applications. Here we address
it in two steps. First we focus on static, globally hyperbolic, asymptotically
anti-de Sitter spacetimes and subsequently we drop the assumption that the
underlying background is static, proving existence of Hadamard two-point
functions via a deformation argument.
Let us focus on the first step. To this end, on the one hand we need the
boundary condition $\Theta$ to abide by Hypothesis 4.1, while, on the other
hand, we make use of some auxiliary results from [Wro17], specialized to the
case at hand. In the next statements it is understood that to any Hadamard
two-point function $\lambda_{2,\Theta}$ there corresponds an operator
$\Lambda_{\Theta}:\dot{\mathcal{H}}_{0}^{-k,-\infty}(M)\rightarrow\mathcal{H}^{k,-\infty}_{loc}(M)$,
with $k=\pm 1$. Recalling Definitions 3.3 and 4.3, the following lemma holds
true, cf. [Wro17, Lem. 5.3]:
###### Lemma 5.1.
For any $q_{1},q_{2}\in{}^{b}S^{*}M$, $(q_{1},q_{2})\not\in
WF^{Op}(\Lambda_{\Theta})$ if and only if there exist neighbourhoods
$\Gamma_{i}$ of $q_{i}$, $i=1,2$, such that for all $B_{i}\in\Psi_{b}^{0}(M)$
elliptic at $q_{i}$ satisfying $WF_{b}^{Op}(B_{i})\subset\Gamma_{i}$,
$B_{1}\Lambda_{\Theta}B_{2}\in\mathcal{W}^{-\infty}_{b}(M)$.
Observe that this lemma entails in particular that, given any $f_{i}\in
C^{\infty}(M)$, $i=1,2$, such that $\textrm{supp}(f_{i})\subset\mathring{M}$,
the operator $f_{1}\Lambda_{\Theta}f_{2}$ has a smooth kernel over
$\mathring{M}\times\mathring{M}$. In addition, the following also holds true,
cf. [Wro17, Prop. 5.6]:
###### Proposition 5.3.
Let $\Lambda_{\Theta}$ identify a Hadamard two-point function. If
$(q_{1},q_{2})\in WF_{b}^{Op}(\Lambda_{\Theta})$ for $q_{1},q_{2}\in
T^{*}M\setminus\\{0\\}$, then $(q_{1},q_{1})\in WF_{b}^{Op}(\Lambda_{\Theta})$
or $(q_{2},q_{2})\in WF_{b}^{Op}(\Lambda_{\Theta})$.
Given any two points $q_{1}$ and $q_{2}$ in the cosphere bundle
${}^{b}S^{*}M$, cf. Equation (4), we shall write $q_{1}\dot{\sim}q_{2}$ if both
$q_{1}$ and $q_{2}$ lie in the compressed characteristic bundle
$\dot{\mathcal{N}}$ and they are connected by a generalized broken
bicharacteristic, cf. Definition 3.8. With these data and using [Wro17, Prop.
5.9] together with Hypothesis 4.1 and with Theorems 3.2 and 3.3, we can
establish the following operator counterpart of the propagation of
singularities theorem:
###### Proposition 5.4.
Let
$\Lambda_{\Theta}:\dot{\mathcal{H}}_{0}^{-1,-\infty}(M)\rightarrow\mathcal{H}^{1,-\infty}_{loc}(M)$
and suppose that $(q_{1},q_{2})\in WF^{Op}_{b}(\Lambda_{\Theta})$. If
$P_{\Theta}\Lambda_{\Theta}=0$, then $q_{1}\in\dot{\mathcal{N}}$ and
$(q_{1}^{\prime},q_{2})\in WF_{b}^{Op}(\Lambda_{\Theta})$ for every
$q_{1}^{\prime}$ such that $q_{1}^{\prime}\dot{\sim}q_{1}$. Similarly, if
$\Lambda_{\Theta}P_{\Theta}=0$, then $q_{2}\in\dot{\mathcal{N}}$ and
$(q_{1},q_{2}^{\prime})\in WF_{b}^{Op}(\Lambda_{\Theta})$ for all
$q_{2}^{\prime}$ such that $q_{2}^{\prime}\dot{\sim}q_{2}$.
Our next step consists of refining Theorem 4.2 in $\mathring{M}$; cf. [DF18,
Cor. 4.5] for a similar result.
###### Corollary 5.1.
Let
$G_{\Theta}:\mathcal{H}^{-1,-\infty}(\mathring{M})\rightarrow\mathcal{H}^{1,-\infty}(\mathring{M})$
be the restriction to $\mathring{M}$ of the causal propagator as per Remark
4.2. Then
$WF_{b}^{Op}(G_{\Theta})=\\{(q_{1},q_{2})\in{}^{b}S^{*}\mathring{M}\times{}^{b}S^{*}\mathring{M}\
|\ q_{1}\dot{\sim}q_{2}\\}.$
###### Proof.
A direct application of Theorem 4.2 yields
$WF^{Op}(G_{\Theta})\subseteq\\{(q_{1},q_{2})\in{}^{b}S^{*}\mathring{M}\times{}^{b}S^{*}\mathring{M}\;|\;q_{1}\dot{\sim}q_{2}\\}.$
From this inclusion, it descends that every pair of points in the singular
support of $G_{\Theta}$ is connected by a generalized broken bicharacteristic
completely contained in $\mathring{M}$. Since ${}^{b}T^{*}\mathring{M}\simeq
T^{*}\mathring{M}$, we can apply [BF09, Ch. 4, Thm. 16] and the sought
statement is proven. ∎
With these data, we are ready to address the main question of this section.
Suppose that $(M,g)$ is a static, globally hyperbolic, asymptotically AdS
spacetime, cf. Definitions 2.2 and 5.1. Let $P_{\Theta}$ be the Klein-Gordon
operator as per Equation (20) and let $\Theta\equiv\Theta_{K}$ be a static
boundary condition as per Theorem 5.1. For simplicity we also assume that the
spectrum of $E_{K}$ is contained in the positive real axis. Then the following
key result holds true:
###### Proposition 5.5.
Let $(M,g)$ be a static, globally hyperbolic asymptotically AdS spacetime and
let $P_{\Theta_{K}}$ be the Klein-Gordon operator with a static and physically
admissible boundary condition as per Definition 4.2. Then there exists a
Hadamard two-point function associated to $P_{\Theta_{K}}$,
$\lambda_{2,\Theta_{K}}\in\mathcal{D}^{\prime}(M\times M)$ such that, for all
$f_{1},f_{2}\in\mathcal{D}(M)$
$\displaystyle\lambda_{2,\Theta_{K}}(f_{1},f_{2})\doteq
2i\int_{\mathbb{R}^{2}}\textrm{d}t\,\textrm{d}t^{\prime}\,\bigg{(}f_{1}(t)\bigg{|}\frac{\exp\big{[}iE_{\Theta_{K}}^{\frac{1}{2}}(t-t^{\prime})\big{]}}{(-E_{\Theta_{K}})^{\frac{1}{2}}}f_{2}(t^{\prime})\bigg{)}.$
(55)
###### Proof.
Observe that, per construction, $\lambda_{2,\Theta_{K}}$ is a bi-solution of
the Klein-Gordon equation associated to the operator $P_{\Theta_{K}}$ and it
abides by Equation (54). We need to show that Equation (53) holds true. To
this end it suffices to combine the following results. From [SV00] one can
infer that the restriction of $\mathring{\lambda}_{2,\Theta_{K}}$, the bulk-
to-bulk two-point distribution, to every globally hyperbolic submanifold of
$M$ not intersecting the boundary is consistent with Equation (53). At this
point it suffices to invoke Propositions 5.3 and 5.4 to draw the sought
conclusion. ∎
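As a quick consistency check of the claim that $\lambda_{2,\Theta_{K}}$ is per construction a bi-solution, assume (as suggested by the static structure in Theorem 5.1) that the Klein-Gordon operator reduces on the static spacetime to $P_{\Theta_{K}}=\partial_{t}^{2}+E_{\Theta_{K}}$; this reduction is our assumption, not stated explicitly above. Then the integral kernel in Equation (55) is annihilated in each slot, since $\partial_{t}^{2}e^{iE_{\Theta_{K}}^{1/2}(t-t^{\prime})}=(iE_{\Theta_{K}}^{1/2})^{2}e^{iE_{\Theta_{K}}^{1/2}(t-t^{\prime})}$:

$(\partial_{t}^{2}+E_{\Theta_{K}})\,\frac{\exp\big{[}iE_{\Theta_{K}}^{\frac{1}{2}}(t-t^{\prime})\big{]}}{(-E_{\Theta_{K}})^{\frac{1}{2}}}=\big{(}-E_{\Theta_{K}}+E_{\Theta_{K}}\big{)}\,\frac{\exp\big{[}iE_{\Theta_{K}}^{\frac{1}{2}}(t-t^{\prime})\big{]}}{(-E_{\Theta_{K}})^{\frac{1}{2}}}=0,$

and similarly in the $t^{\prime}$ slot.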
###### Remark 5.5.
Observe that, from a physical viewpoint, in the preceding theorem, we have
individuated the two-point function of the so-called ground state with
boundary condition prescribed by $\Theta_{K}$.
### 5.3 A Deformation Argument
In order to prove the existence of Hadamard two-point functions on a generic
asymptotically anti-de Sitter spacetime for a Klein-Gordon field with
prescribed static boundary condition, we shall employ a deformation argument
akin to that first outlined in [FNW81] on globally hyperbolic spacetimes with
empty boundary.
To this end we need the following lemma, see [Wro17, Lem. 4.6], slightly
adapted to the case at hand. In anticipation, recalling Equation (2), we say
that a globally hyperbolic, asymptotically AdS spacetime is even modulo
$\mathcal{O}(x^{3})$ close to $\partial M$ if $h(x)=h_{0}+x^{2}h_{1}(x)$ where
$h_{1}$ is a symmetric two-tensor, see [Wro17, Def. 4.3].
###### Lemma 5.2.
Suppose $(M,g)$ is a globally hyperbolic, asymptotically anti-de Sitter
spacetime. For any $\tau_{2}\in\mathbb{R}$ there exists a globally hyperbolic,
asymptotically AdS spacetime $(M,g^{\prime})$ as well as $\tau_{0},\tau_{1}$
with $\tau_{0}<\tau_{1}<\tau_{2}$ such that $g^{\prime}=g$ if
$\\{\tau\geq\tau_{1}\\}$, while, if $\\{\tau\leq\tau_{0}\\}$, $(M,g^{\prime})$
is isometric to a standard static asymptotically AdS spacetime $(M,g_{S})$
which is even modulo $\mathcal{O}(x^{3})$ and in which $C\leq\beta\leq C^{-1}$
for some $C>0$, with $\beta$ as in Equation (1).
Consider now a generic, globally hyperbolic, asymptotically anti-de Sitter
spacetime $(M,g)$ and a deformation as per Lemma 5.2. Observe that, per
construction, all generalized broken bicharacteristics reach the region of $M$
with $\tau\in[\tau_{1},\tau_{2}]$. This observation leads to the following
result which is a direct consequence of the propagation of singularities
Theorems 3.2 and 3.3. Mutatis mutandis, the proof is analogous to that of
[Wro17, Lem. 5.10] and, thus, we omit it.
###### Lemma 5.3.
Suppose that $\Lambda_{\Theta}\in\mathcal{D}^{\prime}(M\times M)$ is a bi-
solution of the Klein-Gordon equation ruled by $P_{\Theta}$ abiding by
Equation (54) and with a wavefront set of Hadamard form in the region of $M$
such that $\tau_{1}<\tau<\tau_{2}$. Then $\Lambda_{\Theta}$ is a Hadamard two-
point function.
To conclude, employing Corollary 4.2 we can prove the sought result:
###### Theorem 5.2.
Let $(M,g)$ be a globally hyperbolic, asymptotically anti-de Sitter spacetime
and let $(M_{S},g_{S})$ be its static deformation as per Lemma 5.2. Let
$\Theta_{K}$ be a static and physically admissible boundary condition so that
the Klein-Gordon operator $P_{\Theta_{K}}$ on $(M_{S},g_{S})$ admits a
Hadamard two-point function as per Proposition 5.5. Then there exists a
Hadamard two point-function on $(M,g)$ for the associated Klein-Gordon
operator with boundary condition ruled by $\Theta_{K}$.
###### Proof.
Let $(M,g)$ be as per hypothesis and let $(M,g_{S})$ be a static, globally
hyperbolic, asymptotically AdS spacetime such that there exists a third,
globally hyperbolic, asymptotically AdS spacetime $(M,g^{\prime})$
interpolating between $(M,g)$ and $(M,g_{S})$ in the sense of Lemma 5.2. On
account of Theorem 2.1, in all three cases $M$ is isometric to
$\mathbb{R}\times\Sigma$.
On account of Proposition 5.5, on $(M,g_{S})$ we can identify a Hadamard two-
point function as in Equation (55) subordinated to the boundary condition
$\Theta_{K}$. We indicate it with $\lambda_{2,S}$ omitting any reference to
$\Theta_{K}$ since it plays no explicit rôle in the analysis.
Focusing the attention on $(M,g^{\prime})$, Lemma 5.2 guarantees that, if
$\tau<\tau_{0}$, $\tau$ being the time coordinate along $\mathbb{R}$, then
therein $(M,g^{\prime})$ is isometric to $(M,g_{S})$. Calling this region
$M_{0}$, the restriction $\lambda_{2,S}|_{M_{0}\times M_{0}}$ identifies a
two-point distribution of Hadamard form. Notice that we have omitted to write
explicitly the underlying isometries for simplicity of notation.
Using the time-slice axiom in Corollary 4.2, for any pair of test-functions
$f,f^{\prime}\in\mathcal{D}(M^{\prime})$ such that for all
$p\in\textrm{supp}(f)\cup\textrm{supp}(f^{\prime})$, $\tau(p)>\tau_{0}$, we
set $h=P_{\Theta_{K}}\chi G_{\Theta_{K}}(f)$ and
$h^{\prime}=P_{\Theta_{K}}\chi G_{\Theta_{K}}(f^{\prime})$ where
$G_{\Theta_{K}}$ is the causal propagator associated to $P_{\Theta_{K}}$ in
$(M,g^{\prime})$, while $\chi=\chi(\tau)$ is any smooth function such that
there exist $\tau_{1},\tau_{2}<\tau_{0}$ for which $\chi=0$ if
$\tau<\tau_{1}$ while $\chi=1$ if $\tau>\tau_{2}$. We define
$\lambda^{\prime}_{2}(f,f^{\prime})=\lambda_{2,S}(h,h^{\prime}).$
Observe that $h,h^{\prime}\in C^{\infty}_{tc}(M)$ and therefore the right-hand
side of this identity is well-defined. In addition, since $G_{\Theta_{K}}$ is
continuous on $\mathcal{D}(M)$, sequential continuity entails that
$\lambda_{2}^{\prime}\in\mathcal{D}^{\prime}(M^{\prime}\times M^{\prime})$. In
addition, per construction, it is a bi-solution of the Klein-Gordon equation
ruled by $P_{\Theta_{K}}$ on $(M^{\prime},g^{\prime})$, abiding by Equation
(54).
Furthermore Lemma 5.3 yields that $\lambda_{2}^{\prime}$ is of Hadamard form.
To conclude it suffices to focus on $(M,g)$ recalling that there exists
$\tau_{1}\in\mathbb{R}$ such that, in the region
$(M_{1},g^{\prime})\subset(M,g^{\prime})$ for which $\tau>\tau_{1}$,
$(M,g^{\prime})$ is isometric to $(M,g)$. Hence, we can repeat the argument
given above. More precisely, we consider
$\lambda_{2}^{\prime}|_{M^{\prime}\times M^{\prime}}$ and, using the time-
slice axiom, see Corollary 4.2, we can identify
$\lambda_{2}\in\mathcal{D}^{\prime}(M\times M)$, which is a bi-solution of the
Klein-Gordon equation ruled by $P_{\Theta_{K}}$ and abides by Equation
(54). Lemma 5.3 also entails that it is of Hadamard form, hence proving the
sought result. ∎
## Acknowledgments
We are grateful to Benito Juarez Aubry for the useful discussions which
inspired the beginning of this project and to Nicolò Drago both for the useful
discussions and for pointing out references [GM20, GMP14]. We are also
grateful to Simone Murro and to Michał Wrochna for the useful discussions. The
work of A. Marta is supported by a fellowship of the Università Statale di
Milano, which is gratefully acknowledged. C. Dappiaggi is grateful to the
Department of Mathematics of the Università Statale di Milano for the kind
hospitality during the realization of part of this work.
## References
* [AD99] A. Ashtekar and S. Das, “Asymptotically Anti-de Sitter space-times: Conserved quantities,” Class. Quant. Grav. 17 (2000), L17-L30 [arXiv:hep-th/9911230 [hep-th]].
* [AFS18] L. Aké Hau, J. L. Flores, M. Sánchez, “Structure of globally hyperbolic spacetimes with timelike boundary”, arXiv:1808.04412 [gr-qc], to appear in Rev. Mat. Iberoamericana (2020).
* [AGN16] B. Ammann, N. Große and V. Nistor, “Well-posedness of the Laplacian on manifolds with boundary and bounded geometry”, Math. Nachr. 292 (2019) 1213. arXiv:1611.00281 [math-AP].
* [ADM21] B. A. Juárez-Aubry, C. Dappiaggi and A. Marta, in preparation
* [Bac11] A. Bachelot, “The Klein-Gordon Equation in Anti-de Sitter Cosmology,” J. Math. Pure. Appl. 96 (2011), 527 [arXiv:1010.1925 [math-ph]].
* [Bär13] C. Bär, “Green-hyperbolic operators on globally hyperbolic spacetimes,” Comm. Math. Phys. 333, (2015) 1585, arXiv:1310.0738 [math-ph]
* [BF82] P. Breitenlohner, D. Z. Freedman, “Stability in gauged extended supergravity”, Annals Phys. 144, (1982) 249.
* [BF09] C. Bär, K. Fredenhagen, “Quantum field theory on curved spacetimes: Concepts and Mathematical Foundations”, Lect. Notes Phys. 786 (2009) 1.
* [BDFY15] R. Brunetti, C. Dappiaggi, K. Fredenhagen and J. Yngvason, Advances in algebraic quantum field theory, (2015) Springer 453p.
* [Coc14] G. M. Coclite, et al, “Continuous dependence in hyperbolic problems with Wentzell boundary conditions,” Commun. Pure Appl. Anal. 13 (2014), 419.
* [DDF18] C. Dappiaggi, N. Drago, H. Ferreira “Fundamental solutions for the wave operator on static Lorentzian manifolds with timelike boundary”, Lett. Math. Phys. 109 (2019), 2157, [arXiv:1804.03434 [math-ph]].
* [DF16] C. Dappiaggi, H. R. C. Ferreira, “Hadamard states for a scalar field in anti-de Sitter spacetime with arbitrary boundary conditions”, Phys. Rev. D 94, 125016 (2016), [arXiv:1610.01049 [gr-qc]]
* [DF17] C. Dappiaggi and H. R. C. Ferreira, “On the algebraic quantization of a massive scalar field in anti-de-Sitter spacetime,” Rev. Math. Phys. 30 (2017) no.02, 1850004, [arXiv:1701.07215 [math-ph]].
* [DFJ18] C. Dappiaggi, H. R. Ferreira and B. A. Juárez-Aubry, “Mode solutions for a Klein-Gordon field in anti–de Sitter spacetime with dynamical boundary conditions of Wentzell type”, Phys. Rev. D 97 (2018) no.8, 085022 [arXiv:1802.00283 [hep-th]].
* [DF18] C. Dappiaggi, H. R. Ferreira, “On the algebraic quantization of a massive scalar field in anti-de-Sitter spacetime”, Rev. Math. Phys. 30 (2018), no. 2, 1850004, [arXiv:1701.07215 [math-ph]]
* [DFM18] C. Dappiaggi, H. R. Ferreira and A. Marta, “Ground states of a Klein-Gordon field with Robin boundary conditions in global anti–de Sitter spacetime”, Phys. Rev. D 98, 025005 (2018) [arXiv:1805.03135 [hep-th]]
* [DM20] C. Dappiaggi and A. Marta, “A generalization of the propagation of singularities theorem on asymptotically anti-de Sitter spacetimes,” [arXiv:2006.00560 [math-ph]].
* [DW19] W. Dybalski and M. Wrochna, “A mechanism for holography for non-interacting fields on anti-de Sitter spacetimes,” Class. Quant. Grav. 36 (2019) no.8, 085006 [arXiv:1809.05123 [math-ph]].
* [EnKa13] A. Enciso and N. Kamran, “A singular initial-boundary value problem for nonlinear wave equations and holography in asymptotically anti-de Sitter spaces,” J. Math. Pure. Appl. 103 (2015), 1053 [arXiv:1310.0158 [math.AP]].
* [FGGR02] A. Favini, G.R. Goldstein, J.A. Goldstein, S. Romanelli, “The heat equation with generalized wentzell boundary condition,” J. Evol. Equ. 2 (2002), 1.
* [FNW81] S. A. Fulling, F. J. Narcowich, R. M. Wald, “Singularity Structure of the Two Point Function in Quantum Field Theory in Curved Space-time”, Ann. Phys. (N.Y.) 136 (1981) 243.
* [GOW17] C. Gérard, O. Oulghazi, M. Wrochna, “Hadamard States for the Klein-Gordon equation on Lorentzian manifolds of bounded geometry”, Comm. Math. Phys. 352 (2017) 519 [arXiv:1602.00930 [math-ph]].
* [GM20] N. Ginoux and S. Murro, “On the Cauchy problem for Friedrichs systems on globally hyperbolic manifolds with timelike boundary”, [arXiv:2007.02544].
* [GMP14] V. Guillemin, E. Miranda and A. R. Pires “Symplectic and Poisson geometry on b-manifolds” Adv. in Math. 264 (2014) 864. arXiv:1206.2020 [math.SG].
* [Gru68] G. Grubb, _“A characterization of the non-local boundary value problems associated with an elliptic operator”_ , Ann. Sc. Norm. Sup. Pisa (3) 22 (1968) 425.
* [GS13] N. Große, C. Schneider, “Sobolev spaces on Riemannian manifolds with bounded geometry:General coordinates and traces”, Math. Nachr. 286 (2013) 1586.
* [GW18] O. Gannot, M. Wrochna “Propagation of Singularities on AdS Spacetimes for General Boundary Conditions and the Holographic Hadamard Condition”, to appear in J. Inst. Math. Jussieu (2020), arXiv:1812.06564 [math.AP].
* [Hol12] G. Holzegel, “Well-posedness for the massive wave equation on asymptotically anti-de Sitter spacetimes”, J. Hype. Diff. Eq. 9 (2012), 239.
* [Hör03] L. Hörmander “The Analysis of Linear Partial Differential Operators I” Springer-Verlag (2003), 440 pg.
* [HSF12] S. Hassi, H. de Snoo, F. Szafraniec “Operators Methods for Boundary Value Problems”, London Mathematical Society Lecture Notes (2012) Cambridge University press, 298p.
* [Jos99] M. S. Joshi “Lectures on Pseudo-differential Operators” arXiv:math/9906155 [math.AP]
* [KM13] I. Khavkine and V. Moretti, “Algebraic QFT in Curved Spacetime and quasifree Hadamard states: an introduction,” Published in: Chapter 5, Advances in Algebraic Quantum Field Theory, R. Brunetti et al. (eds.), Springer, 2015 [arXiv:1412.5945 [math-ph]].
* [Mal92] M. Malamud “On a formula for the generalized resolvents of a non-densely defined Hermitian operator”, Ukr. Math. J. 44 (1992), 1522
* [Mel92] R. B. Melrose, “The Atiyah-Patodi-Singer index theorem”, Research Notes in Mathematics, (1993) CRC Press, 392pg.
* [ON83] B. O’Neill, “Semi-Riemannian Geometry with Applications to Relativity”, San Diego Academic Press (1983), 468pg.
* [SV00] H. Sahlmann and R. Verch, “Passivity and microlocal spectrum condition,” Commun. Math. Phys. 214 (2000), 705 [arXiv:math-ph/0002021 [math-ph]].
* [Sch01] T. Schick, “Manifolds with boundary and of bounded geometry”, Math. Nachr. 223 (2001) 103, arXiv:math/0001108 [math.DG].
* [War13] C. M. Warnick “The massive wave equation in asymptotically AdS spacetimes”, Comm. Math. Phys, 321 (2013) 85.
* [Wro17] M. Wrochna “The holographic Hadamard condition on asymptotically Anti-de Sitter spacetimes”, Lett. Math. Phys. 107 (2017) 2291, [arXiv:1612.01203 [math-ph]].
* [Ue73] T. Ueno, “Wave equation with Wentzell’s boundary condition and a related semigroup on the boundary, I,” Proc. Japan Acad. 49 (1973), 672.
* [Vas08] A. Vasy “Propagation of singularities for the wave equation on manifolds with corners”, Annals of Mathematics, 168 (2008), 749, arXiv:math/0405431 [math.AP].
* [Vas10] A. Vasy “Diffraction at corners for the wave equation on differential forms”, Comm. Part. Diff. Eq. 35 (2010), 1236, arXiv:0906.0738 [math.AP]
* [Vas12] A. Vasy “The wave equation on asymptotically Anti-de Sitter spaces”, Analysis & PDE 5 (2012), 81, arXiv:0911.5440 [math.AP].
* [Za15] J. Zahn, “Generalized Wentzell boundary conditions and quantum field theory,” Annales Henri Poincare 19 (2018) no.1, 163-187 [arXiv:1512.05512 [math-ph]].
# Nonequilibrium thermomechanics of Gaussian phase packet crystals:
application to the quasistatic quasicontinuum method
Prateek Gupta<EMAIL_ADDRESS>
Dennis M. Kochmann<EMAIL_ADDRESS>
Mechanics & Materials Lab, Department of Mechanical and Process Engineering,
ETH Zürich, 8092 Zürich, Switzerland
###### Abstract
The quasicontinuum method was originally introduced to bridge across length
scales – from atomistics to significantly larger continuum scales – thus
overcoming a key limitation of classical atomic-scale simulation techniques
while solely relying on atomic-scale input (in the form of interatomic
potentials). An associated challenge lies in bridging across time scales to
overcome the time scale limitations of atomistics. For the bigger
challenge of bridging across both length and time scales, only a few techniques
exist, and most of those are limited to conditions of constant temperature.
Here, we present a new strategy for the space-time coarsening of an atomistic
ensemble, which introduces thermomechanical coupling. We investigate the
quasistatics and dynamics of a crystalline solid described as a lattice of
lumped correlated Gaussian phase packets occupying atomic lattice sites. By
definition, phase packets account for the dynamics of crystalline lattices at
finite temperature through the statistical variances of atomic momenta and
positions. We show that momentum-space correlation allows for an exchange
between potential and kinetic contributions to the crystal’s Hamiltonian.
Consequently, local adiabatic heating due to atomic site motion is captured.
Moreover, within the quasistatic approximation the governing equations reduce
to the minimization of thermodynamic potentials such as Helmholtz free energy
(depending on the fixed variables), and they yield the local equation of
state. We further discuss opportunities for describing atomic-level thermal
transport using the correlated Gaussian phase packet formulation and the
importance of interatomic correlations. Such a formulation offers a promising
avenue for a finite-temperature non-equilibrium quasicontinuum method that may
be combined with thermal transport models.
###### keywords:
Quasicontinuum , Multiscale Modeling , Atomistics , Non-Equilibrium
Statistical Mechanics , Updated-Lagrangian
††journal: Journal of the Mechanics and Physics of Solids
## 1 Introduction
Crystalline solids exhibit physical and chemical transport phenomena across
wide ranges of length and time scales. This includes the transport of charges
(Butcher, 1986; Ziman, 2001), of heat (Ziman, 2001), and of mass (Weiner,
2012), as well as mechanical failure. Understanding such phenomena is crucial
from both a fundamental scientific standpoint as well as to further advance
technologies ranging from solid-state batteries (Kim et al., 2014a) to thermal
management systems (Hicks and Dresselhaus, 1993) to failure-resistant metallic
structural components (Hirth, 1980) – all exposed to complex dynamic
conditions varying over time scales of a few microseconds to several hours and
length scales of a few nanometers to a few meters. Such variety of length and
time scales underlying the transport phenomena mandates the need for
simulation techniques that capture the physical processes at all length and
time scales involved. While continuum mechanics and related finite element and
phase field methods have been successful at modeling physical processes at
relatively large length scales (typically micrometers and above) and time
scales (milliseconds and above) (Hirth, 1980; Mendez et al., 2018), molecular
statics (MS) and molecular dynamics (MD) have been successful at elucidating
the physics of various transport phenomena at atomic-level length scales
(angstroms to tens of nanometers) and time scales (femto- to nanoseconds)
(Tuckerman, 2010). Where a higher level of accuracy is required, methods such
as Density Functional Theory (DFT) or the direct solution of Schrödinger’s
equation have aimed at capturing the quantum coupling of molecular-level
physics. All of the aforementioned techniques specialize in the approximate
ranges of length and time scales mentioned above. However, each of those
techniques poses restrictive assumptions at smaller scales while becoming
prohibitively costly at larger scales, hence making scale-bridging techniques
attractive (Srivastava and Nemat-Nasser, 2014; van der Giessen et al., 2020).
For instance, the current state-of-the-art DFT based multiscale modeling
techniques developed by Motamarri et al. (2020) can model systems consisting
of approximately 4000 atoms, at a large computational cost incurred on
massively parallel supercomputers.
Several concurrent scale-bridging techniques have been developed over the past
few decades, particularly focusing on multiscale thermomechanical modeling of
crystalline materials (cf. (Xu and Chen, 2019) for a detailed review of some
of these techniques). The _atomistic-to-continuum_ method developed by Wagner
et al. (2008) utilizes a coupling between a pre-determined atomistic domain
and an overlaid continuum domain, discretized using a suitable finite element
method. The continuum domain is used to provide a heat bath to the
atomistic domain while simultaneously minimizing the difference between the
temperature fields in the two domains, thus establishing a two-way coupling. In the
_bridging-domain method_ , the pre-determined atomistic and continuum
subdomains are coupled by imposing a weak displacement compatibility condition
at the intersecting nodes (Belytschko and Xiao, 2003). Nodal mechanical forces
are computed using the total Hamiltonian of the system constructed using the
atomistic and continuum subdomains as well as the weak compatibility
conditions. Chen (2009) developed the _concurrent atomistic-continuum_ method
in which microscopic balance equations, derived using Irving and Kirkwood
(1950)’s theory, are solved to determine thermomechanical deformation of
atomic sites within large continuum-scale subdomains. Unlike the _atomistic-
to-continuum_ and _bridging-domain_ methods, the _concurrent atomistic-
continuum_ method allows some lattice defects, such as dislocations and
cracks, to be captured in the continuum subdomain. The _coupled
atomistic/discrete-dislocation_ (Shilkrot et al., 2002) method is another
multiscale method that allows the movement of dislocation defects across the
atomistic and continuum subdomains. While the _concurrent atomistic-continuum_
requires only the interatomic potential as the constitutive input, the
_coupled atomistic/discrete-dislocation_ method requires reduced-order
continuum constitutive models to determine long-range elastic stress fields.
All the aforementioned methods require a priori knowledge of atomistic and
continuum subdomains and solve the thermomechanical deformation in both the
subdomains using significantly different methodologies. This inhibits a
seamless transition from atomistic to continuum scale subdomains. Furthermore,
finite-temperature variants of all these techniques typically use time-
integration at the atomic vibration scales, resolving the available phonon
modes in the domain (Chen et al., 2017), thus not allowing for temporal
coarsening.
The quasicontinuum (QC) method of Tadmor et al. (1996a) is another such scale-
bridging method that aims to solve the problem of spatial scale-bridging from
atomistic to continuum length scales via intermediate mesoscopic scales
(Miller et al., 1998). Starting from a standard atomistic setting, a carefully
selected set of representative degrees of freedom (_repatoms_) reduces the
computational costs and admits simulating continuum-scale problems by
restricting atomistic resolution to where it is in fact needed. Different
flavours of QC have been proposed based on the interpolation of forces on
repatoms (Knap and Ortiz, 2001), or based on approximating the total
Hamiltonian of the system using quadrature-like _summation and sampling rules_
(Eidel and Stukowski, 2009; Dobson et al., 2010; Gunzburger and Zhang, 2010;
Espanol et al., 2013). For example, the nonlocal energy-based formulation of
Amelang et al. (2015) enables a seamless spatial scale-bridging within the QC
setup, which does not require a strict separation of (nor a priori knowledge
about) atomistic and non-atomistic (coarsened) subdomains within the
simulation domain. The capabilities of this fully nonlocal QC technique have
been demonstrated, e.g., by large-scale quasistatic total-Lagrangian
simulations of dislocation interactions during nano-indentation (Amelang et
al., 2015), nanoscale surface and size effects (Amelang and Kochmann, 2015),
and void growth and coalescence (Tembhekar et al., 2017). In this work, we
adopt the nonlocal energy-based setup of Amelang et al. (2015) in a new,
updated Lagrangian setting for spatial upscaling.
While spatial coarse-graining is thus achieved by approximating an ensemble of
atoms with a subset of atoms, temporal coarse-graining requires approximate
modeling due to the unavailability of the trajectories of all the atoms at a
given time. Furthermore, Hamiltonian dynamics of an ensemble of atoms couples
length and time scales in the system (Evans and Morriss, 2007), which is why
spatial scale-bridging techniques have often been applied to systems at zero
temperature or at a uniform temperature only (Tadmor et al., 2013). One way to
model uniform temperature is to apply a global ergodic assumption for every
atomic site, thus yielding space-averaged trajectories of atoms as phase-
averaged trajectories. Suitable global thermal equilibrium distributions (such
as NVT ensembles (Tadmor and Miller, 2011)) are used for phase averaging of
trajectories. The motion of atoms on these phase-averaged trajectories is
governed by phase-averaged interatomic potential and kinetic energy. Due to
thermal softening of interatomic potential upon phase-averaging, the
accessible time-scales in isothermal dynamic simulations increase, enabling
time coarsening. Kim et al. (2014b) further increased the accessible time
limits of Tadmor et al.’s method using hyperdynamics (Voter, 1997) to capture
thermally activated rare events, such as atomic-scale mass diffusion. However,
most prior work has been restricted to local harmonic approximations of
interatomic potentials and isothermal deformations (Tadmor and Miller, 2011;
Tadmor et al., 2013) at a uniform temperature. Another way is to use Langevin
dynamics, in which a stochastic thermal forcing is added to the dynamics of
atoms to account for thermal vibrations (Qu et al., 2005; Marian et al.,
2009). Unfortunately, such an approach poses a time integration restriction
even for systems in thermodynamic equilibrium (uniform temperature), which is
why it is computationally costly. Kulkarni et al. (2008) introduced a
variational mean-field approach for modeling non-uniform temperature, which
approximates the global distribution function of the ensemble as a product of
local entropy maximizing (or _max-ent_) distributions, constraining the local
frequency of atomic vibrations using the local equipartition of energy of
every atom. This local ergodic assumption yields time-averaged trajectories as
phase-averaged trajectories and thus enables the definition of non-uniform
temperature and internal energy. However, the interatomic independence or
statistically uncorrelated local distributions, inherent in that approach,
precludes the transport of energy across the atoms. As a remedy, Kulkarni et
al. (2008) modeled thermal transport using the finite-element setup of the QC
formulation and empirical bulk-thermal conductivity values. Venturini et al.
(2014) extended the max-ent approach to non-uniform interstitial
concentrations of solute atoms in crystalline solids to also include mass
diffusion. Specifically, they modeled transport using linear Onsager kinetics,
derived from a local dissipation inequality. Building on Venturini et al.
(2014)'s max-ent-based approach to non-uniform interstitial concentrations,
Mendez et al. (2018) used an Arrhenius-type master-equation model of diffusive
transport to achieve long-term atomistic
simulations. Venturini et al. (2014)’s max-ent based model, in a specific case
of isothermal diffusion of impurities in crystals, resembles the _diffusive
molecular dynamics_ (DMD) formulation of Li et al. (2011), who used an
isotropic Gaussian atomic density cloud to model uncorrelated vibrations of
atoms. Here, we introduce a formulation based on a Gaussian Phase Packet (GPP)
ansatz, different from the max-ent distribution ansatz, to understand the
dynamics of the local distribution functions of atoms.
Previously, Ma et al. (1993) studied the dynamic Gaussian Phase Packet (GPP)
ansatz as a _trajectory sampling_ technique for uniform-temperature molecular
dynamics simulations. They approximated the distribution function of each atom
in an ensemble as a correlated Gaussian distribution with the covariance
matrix as the mean-field or phase-space parameter. Such an approximation
yields the evolution equations of the covariance matrix by either directly
substituting the GPP ansatz in the Liouville equation (strong form) or by
using the Frenkel-Dirac-McLachlan variational principle (McLachlan, 1964). The
resulting equations may be integrated in time, combined with appropriate
ergodic assumptions, to infer the locally averaged physical quantities of the
system (such as temperature). However, Ma et al. (1993) applied the
formulation on a small system of atoms relaxing towards thermodynamic
equilibrium. Their work was inspired by the application of GPPs in quantum
mechanics by Heller (1975), who used it for calculating parameterized
solutions of the Schrödinger equation.
In this work, we study the GPP ansatz applied to an ensemble of atoms to
elucidate the nonequilibrium and (local-thermal) equilibrium thermomechanical
quasistatics and dynamics of the system. We show that an approximation of
interatomic correlations is required for modeling atomistic-level transport
phenomena. In an ensemble of $N$ atoms, such an approximation increases the
degrees of freedom by $\mathcal{O}(N^{2})$, which is computationally costly.
Therefore, we employ uncorrelated atoms or _interatomic independence_ for
further modeling and applications. Our GPP approach might be considered a
dynamic extension of the max-ent methods developed by Kulkarni et al. (2008),
and Venturini et al. (2014), and the DMD approach of Li et al. (2011). We show
that incorporating momentum-displacement correlations in a Gaussian ansatz for
the distribution function elucidates the local dynamics of atoms and energy
exchange from kinetic to potential, thus dynamically capturing the
thermomechanical deformation. Due to the assumption that the atoms are
uncorrelated (a solid crystal of independent atoms is also known as an
_Einstein solid_), the GPP approximation still fails to capture thermal
transport due to non-uniform thermomechanical deformation and requires
additional phenomenological modeling to that end. Moreover, to achieve
temporal coarsening, our GPP approach highlights the importance of vanishing
interatomic correlations, approaching the quasistatic approximation. We
combine the GPP framework within the quasistatic approximation with Venturini
et al.’s linear Onsager kinetics to model local thermal transport. Within the
quasistatic approximation, our GPP approach resembles the quasistatic max-ent
approach of Kulkarni et al. (2008) and Venturini et al. (2014). However, the
correlated Gaussian ansatz highlights the physical significance of assuming
vanishing cross-correlations across all degrees of freedom. Furthermore, it
emphasizes the need for additional thermodynamic assumptions required for
modeling the thermomechanical deformation of the crystal, due to the loss of
knowledge of the temporal evolution of the correlations. In Section 2 we
review the fundamentals of nonequilibrium statistical mechanics based on the
Liouville equation and the GPP approximation. We show that the interatomic
correlations are of fundamental importance for modeling interatomic heat flux.
However, such correlations tend to increase the degrees of freedom
significantly, which is why they are neglected in this work. We still retain
the phase-space correlation of each atom, which we later identify as the
_thermal momentum_. In Section 3 we discuss the importance of thermal momentum
in dynamics and the implications of its vanishing limit in quasistatics. We
show that in the quasistatic limit the equations define local thermomechanical
equilibrium of the system and yield the rate-independent local thermal
equation of state. To capture thermal transport, we review Venturini et al.’s
linear Onsager kinetics-based model and its application. We also discuss the
time scale imposed by the Onsager kinetics-based model and its time stepping
constraints. In Section 4 we discuss the QC implementation of the local
thermomechanical equilibrium equations combined with thermal transport and
demonstrate its use in a new updated-Lagrangian distributed-memory QC solver
for coarse-grained atomistic simulations. Finally, in Section 5 we conclude
our analysis and discuss limitations and future prospects.
## 2 Nonequilibrium thermodynamics of Gaussian Phase Packets
Figure 1: Schematic illustrating a typical nonequilibrium QC study of a domain
with atomistic and coarsened subdomains. All atomic sites are modeled using
the Gaussian Phase Packet (GPP) approximation. The transport of energy among
the atomic sites is modeled using the linear Onsager kinetics of Venturini et
al. (2014).
In this section, we discuss the nonequilibrium modeling of deformation
mechanics of crystalline solids using the GPP approximation in which atoms are
treated as Gaussian clouds, centered at the mean phase-space positions of the
atoms. We briefly review the application of the Liouville equation for
analyzing the statistical evolution of a large ensemble of atoms subject to
high-frequency vibrations. Such random vibrations are modeled by assuming
local phase-space coordinates (positions and momenta) drawn from local
Gaussian distributions. We discuss the impact of assuming independent Gaussian
distributions for each atom (termed _interatomic independence_ hereafter) and
the corresponding inability of the model to capture nonequilibrium thermal
transport. Moreover, the independent Gaussian assumption allows us to
formulate the dynamical system of equations governing the atoms and the
corresponding mean-field parameters. In subsequent sections, we use the
insights gained here to formulate isentropic and non-isentropic quasistatic
problems of finite-temperature crystal deformation (see Figure 1 for a
schematic description).
### 2.1 Hamiltonian dynamics
The time evolution of an atomic crystal, modeled as an ensemble of classical
particles, is fully characterized by the generalized positions
$\boldsymbol{q}=\\{\boldsymbol{q}_{i}(t),i=1,\ldots,N\\}$ and momenta
$\boldsymbol{p}=\\{\boldsymbol{p}_{i}(t),i=1,\ldots,N\\}$ of all $N$
particles, which evolve in time according to the Hamiltonian equations,
$\displaystyle\frac{\;\\!\mathrm{d}\boldsymbol{p}_{i}}{\;\\!\mathrm{d}t}=-\frac{\partial\mathcal{H}}{\partial\boldsymbol{q}_{i}}=-\nabla_{\boldsymbol{q}_{i}}V(\boldsymbol{q})=\boldsymbol{F}_{i}(\boldsymbol{q}),$
(2.1a)
$\displaystyle\frac{\;\\!\mathrm{d}\boldsymbol{q}_{i}}{\;\\!\mathrm{d}t}=\frac{\partial\mathcal{H}}{\partial\boldsymbol{p}_{i}}=\frac{\boldsymbol{p}_{i}}{m_{i}},$
(2.1b)
where
$\mathcal{H}\left(\boldsymbol{p},\boldsymbol{q}\right)=\sum_{i=1}^{N}\frac{\boldsymbol{p}_{i}\cdot\boldsymbol{p}_{i}}{2m_{i}}+V(\boldsymbol{q})$
(2.2)
is the Hamiltonian of the system, $V({\boldsymbol{q}})$ represents the
potential field acting on all atoms in the system,
$\boldsymbol{F}_{i}(\boldsymbol{q})$ is the force, and $m_{i}$ is the mass of
the $i^{\mathrm{th}}$ atom. Equations (2.1) are solved in standard molecular
dynamics studies of crystals, in which trajectories
$\left(\boldsymbol{p}_{i}(t),\boldsymbol{q}_{i}(t)\right)$ of all atoms are
resolved in time. As a consequence, such simulations require femtosecond-level
time steps (Tuckerman, 2010; Tadmor et al., 1996a, b) and are unable to
capture long-time-scale phenomena within computationally feasible times.
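For reference, the resolved dynamics of (2.1) are what a standard velocity-Verlet loop integrates in molecular dynamics. The following minimal sketch (a toy three-atom harmonic chain; the spring constant, masses, and time step are assumed for illustration and are not from the text) shows such a loop and the bounded energy drift characteristic of this symplectic scheme:

```python
import numpy as np

def velocity_verlet(q, p, force, m, dt, n_steps):
    """Integrate Hamilton's equations dp/dt = F(q), dq/dt = p/m."""
    f = force(q)
    for _ in range(n_steps):
        p = p + 0.5 * dt * f
        q = q + dt * p / m
        f = force(q)
        p = p + 0.5 * dt * f
    return q, p

# toy 1D chain with nearest-neighbour harmonic bonds (assumed parameters)
k, m, dt = 1.0, 1.0, 1e-3

def force(q):
    d = np.diff(q)            # bond extensions
    f = np.zeros_like(q)
    f[:-1] += k * d           # pull from the right neighbour
    f[1:]  -= k * d           # pull from the left neighbour
    return f

q0 = np.array([0.0, 0.1, 0.0])   # slightly perturbed chain
p0 = np.zeros(3)
q, p = velocity_verlet(q0, p0, force, m, dt, 10_000)

def energy(q, p):
    return 0.5 * np.sum(p**2) / m + 0.5 * k * np.sum(np.diff(q)**2)

# symplectic integrator: the total energy drift stays small over many steps
assert abs(energy(q, p) - energy(q0, p0)) < 1e-6
```

The femtosecond-scale step restriction mentioned above corresponds to `dt` having to resolve the fastest vibrational period of the chain.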
To this end, we consider the statistical treatment of the Hamiltonian dynamics
governed by (2.1) and (2.2), in which the local vibrations of all atoms about
their mean positions are modeled as random fluctuations in the phase-space
coordinate $\boldsymbol{z}=(\boldsymbol{p}(t),\boldsymbol{q}(t))$. Here and in
the following, we use $\boldsymbol{z}\in\mathbb{R}^{6N}$ for brevity to
represent the momenta and positions of the $N$ particles in three dimensions
(3D). It is convenient to introduce the distribution $f(\boldsymbol{z},t)$
such that the quantity
$\;\\!\mathrm{d}P=f(\boldsymbol{p},\boldsymbol{q},t)\prod_{i=1}^{N}\;\\!\mathrm{d}\boldsymbol{p}_{i}\prod_{i=1}^{N}\;\\!\mathrm{d}\boldsymbol{q}_{i}$
(2.3)
is the probability of finding the system of atoms such that the position and
momentum of the $i^{\mathrm{th}}$ atom lie within
$(\boldsymbol{q}_{i},\boldsymbol{q}_{i}+\;\\!\mathrm{d}\boldsymbol{q}_{i})$
and
$(\boldsymbol{p}_{i},\boldsymbol{p}_{i}+\;\\!\mathrm{d}\boldsymbol{p}_{i})$,
respectively. The probability distribution $f(\boldsymbol{z},t)$ is governed
by the Liouville equation (Evans and Morriss, 2007; Tadmor and Miller, 2011)
$\frac{\partial f\left(\boldsymbol{z},t\right)}{\partial
t}+i\mathcal{L}f=0\quad\text{with}\quad
f(\boldsymbol{z},0)=f_{0}(\boldsymbol{z}),\quad\lim_{\boldsymbol{z}\to\infty}f(\boldsymbol{z},t)=0,$
(2.4)
with the Liouville operator $\mathcal{L}$ defined by
$i\mathcal{L}=\frac{\partial}{\partial\boldsymbol{z}}\cdot\dot{\boldsymbol{z}}+\dot{\boldsymbol{z}}\cdot\frac{\partial}{\partial\boldsymbol{z}}$
(2.5)
and
$\dot{\boldsymbol{z}}=\left(\dot{\boldsymbol{p}},\dot{\boldsymbol{q}}\right)$.
Here and in the following, dots denote time derivatives. Given the equations
of motion in (2.1) and the Hamiltonian of the system in (2.2), the Liouville
operator in (2.5) becomes
$i\mathcal{L}=\dot{\boldsymbol{z}}\cdot\frac{\partial}{\partial\boldsymbol{z}}+\frac{\partial}{\partial\boldsymbol{z}}\cdot\dot{\boldsymbol{z}}=\dot{\boldsymbol{p}}\cdot\frac{\partial}{\partial\boldsymbol{p}}+\dot{\boldsymbol{q}}\cdot\frac{\partial}{\partial\boldsymbol{q}}+\frac{\partial}{\partial\boldsymbol{p}}\cdot\dot{\boldsymbol{p}}+\frac{\partial}{\partial\boldsymbol{q}}\cdot\dot{\boldsymbol{q}}=-\frac{\partial\mathcal{H}}{\partial\boldsymbol{q}}\cdot\frac{\partial}{\partial\boldsymbol{p}}+\frac{\partial\mathcal{H}}{\partial\boldsymbol{p}}\cdot\frac{\partial}{\partial\boldsymbol{q}}.$
(2.6)
We note that the solution of (2.4) combined with (2.6) at all times is equivalent
to solving the equations of motion (2.1) with (2.2) (Evans and Morriss, 2007).
In general, such a solution requires a discretization of the $6N$-dimensional
phase-space $\\{\Gamma\subseteq\mathbb{R}^{6N}:\boldsymbol{z}\in\Gamma\\}$ and
of time for a system of $N$ atoms in 3D. To avoid such a computationally
intensive discretization, we parametrize $f(\boldsymbol{z},t)$ using the GPP
approximation (detailed below), which yields the equations of motion of the
mean phase-space coordinates and the respective fluctuation auto- and cross-
correlations, categorized as phase-space or mean-field parameters.
### 2.2 Crystal lattice of Gaussian phase packets
As discussed in Section 1, the mean-field approximation based temporal-
coarsening attempts (Kulkarni et al., 2008; Li et al., 2011; Venturini et al.,
2014) start by constructing an ansatz for the distribution
$f(\boldsymbol{z})$ directly at steady state. If the Hamiltonian $\mathcal{H}$
is constrained (Venturini et al., 2014), the distribution function
$f(\boldsymbol{z})$ is only a function of $\mathcal{H}$, satisfying
$i\mathcal{L}f=0$. On the other hand, considering only variance constraints on
the distribution function (Kulkarni et al., 2008; Li et al., 2011) forces a
steady state of the mean-field parameters, as shown in the section below. We
start with a multivariate Gaussian phase packet (GPP) ansatz to the
distribution function with no assumptions about vanishing correlations, and
systematically discuss the consequences of eliminating interatomic and
cross-correlations between degrees of freedom. The GPP approximation was first introduced
in the context of quantum mechanics by Heller (1975) and used for classical
systems by Ma et al. (1993). Both Heller (1975) and Ma et al. (1993)
substituted Gaussian distributions into the Liouville equation (2.4)
(Schrödinger’s equation for quantum systems) to obtain the dynamical evolution
of the phase-space parameters. Following their idea, we approximate the
system-wide probability distribution $f(\boldsymbol{z},t)$ as a multivariate
Gaussian distribution, i.e.,
$f(\boldsymbol{z},t)=\frac{1}{\mathcal{Z}(t)}e^{-\frac{1}{2}\left(\boldsymbol{z}-\overline{\boldsymbol{z}}(t)\right)^{\mathrm{T}}\boldsymbol{\Sigma}^{-1}(t)\left(\boldsymbol{z}-\overline{\boldsymbol{z}}(t)\right)},$
(2.7)
where $\boldsymbol{\Sigma}$ is the $6N\times 6N$ covariance matrix composed of
the interatomic and momentum-displacement correlations,
$\overline{\boldsymbol{z}}$ represents the vector of all atoms’ mean positions
and momenta, and the partition function $\mathcal{Z}(t)$ is defined by
$\mathcal{Z}(t)=\frac{1}{N!h^{3N}}\int_{\mathbb{R}^{6N}}e^{-\frac{1}{2}\left(\boldsymbol{z}-\overline{\boldsymbol{z}}(t)\right)^{\mathrm{T}}\boldsymbol{\Sigma}^{-1}(t)\left(\boldsymbol{z}-\overline{\boldsymbol{z}}(t)\right)}d\boldsymbol{z}=\frac{(2\pi)^{3N}}{N!\,h^{3N}}\sqrt{\det\boldsymbol{\Sigma}}.$
(2.8)
The phase average of any function $A(\boldsymbol{z})$ is denoted by
$\left\langle{A(\boldsymbol{z})}\right\rangle$ and defined as
$\left\langle{A(\boldsymbol{z})}\right\rangle=\frac{1}{N!\,h^{3N}}\int_{\mathbb{R}^{6N}}A(\boldsymbol{z})f(\boldsymbol{z},t)\;\\!\mathrm{d}\boldsymbol{z},$
(2.9)
where $h$ is Planck's constant and the factor $N!\,h^{3N}$ is the
normalizing factor for the phase-space volume (Landau and Lifshitz, 1980).
This confirms that
$\overline{\boldsymbol{z}}(t)=\langle\boldsymbol{z}(t)\rangle$. We further
conclude that $\boldsymbol{\Sigma}$ can be written as a block-matrix with
components
$\boldsymbol{\Sigma}_{ij}=\frac{1}{N!\,h^{3N}}\int_{\mathbb{R}^{6N}}\left(\boldsymbol{z}_{i}-\overline{\boldsymbol{z}}_{i}\right)\otimes\left(\boldsymbol{z}_{j}-\overline{\boldsymbol{z}}_{j}\right)f(\boldsymbol{z},t)\;\\!\mathrm{d}\boldsymbol{z}=\left\langle{\left(\boldsymbol{z}_{i}-\overline{\boldsymbol{z}}_{i}\right)\otimes\left(\boldsymbol{z}_{j}-\overline{\boldsymbol{z}}_{j}\right)}\right\rangle,$
(2.10)
such that each block represents the correlation among displacements and
momenta of atoms $i$ and $j$. Consequently, we obtain the phase-space
parameters $\left(\overline{\boldsymbol{z}}(t),\boldsymbol{\Sigma}(t)\right)$.
That is, from now on we track the atomic ensemble through the mean positions
and momenta of all atoms, $\overline{\boldsymbol{z}}(t)$, and their covariance
matrix, $\boldsymbol{\Sigma}(t)$ – rather than time-resolving the positions
and momenta directly.
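Under the ansatz (2.7), phase averages such as (2.9) reduce to expectations over a multivariate normal distribution, which can be estimated by direct sampling. The following minimal sketch (a toy single-atom, one-dimensional example; the mean and covariance values are assumed for illustration) checks that the sample statistics recover $\overline{\boldsymbol{z}}$ and $\boldsymbol{\Sigma}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# single atom in 1D for illustration: z = (p, q) with assumed mean/covariance
zbar  = np.array([0.0, 1.0])                 # mean momentum and position
Sigma = np.array([[0.5, 0.1],                # [[<dp dp>, <dp dq>],
                  [0.1, 0.2]])               #  [<dq dp>, <dq dq>]]

# draw samples from the Gaussian ansatz f(z)
z = rng.multivariate_normal(zbar, Sigma, size=200_000)

# the phase average of any function A(z), eq. (2.9), becomes a sample mean
def phase_avg(A):
    return A(z).mean(axis=0)

# sanity checks: <z> = zbar and <(z - zbar) o (z - zbar)> = Sigma, cf. (2.10)
dz = z - zbar
cov = dz.T @ dz / len(z)
assert np.allclose(phase_avg(lambda z: z), zbar, atol=0.01)
assert np.allclose(cov, Sigma, atol=0.01)
```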
Time evolution equations for the phase-space parameters are obtained from the
Liouville equation (2.4) by using the following identity for the phase average
of any phase function $A(\boldsymbol{z})$ (Evans and Morriss, 2007; Zubarev,
1974)
$\frac{\;\\!\mathrm{d}\left\langle{A}\right\rangle}{\;\\!\mathrm{d}t}=\frac{1}{N!\,h^{3N}}\int_{\mathbb{R}^{6N}}f(\boldsymbol{z},t)\frac{\;\\!\mathrm{d}A}{\;\\!\mathrm{d}t}\;\\!\mathrm{d}\boldsymbol{z}=\left\langle{\frac{\;\\!\mathrm{d}A}{\;\\!\mathrm{d}t}}\right\rangle$
(2.11)
(see Appendix A for a brief discussion). Application to $\overline{\boldsymbol{z}}(t)$
and $\boldsymbol{\Sigma}(t)$ yields the dynamical equations
$\displaystyle\frac{\;\\!\mathrm{d}\overline{\boldsymbol{z}}}{\;\\!\mathrm{d}t}=\left\langle{\dot{\boldsymbol{z}}}\right\rangle,\qquad\frac{\;\\!\mathrm{d}\boldsymbol{\Sigma}_{ij}}{\;\\!\mathrm{d}t}=\left\langle{\left(\dot{\boldsymbol{z}}_{i}-\dot{\overline{\boldsymbol{z}}_{i}}\right)\otimes\left({\boldsymbol{z}}_{j}-{\overline{\boldsymbol{z}}_{j}}\right)}\right\rangle+\left\langle{\left({\boldsymbol{z}}_{i}-{\overline{\boldsymbol{z}}_{i}}\right)\otimes\left(\dot{\boldsymbol{z}}_{j}-\dot{\overline{\boldsymbol{z}}_{j}}\right)}\right\rangle.$
(2.12)
Let us further specify the second equation. Writing the components of
covariance matrix $\boldsymbol{\Sigma}_{ij}$ as
$\boldsymbol{\Sigma}_{ij}=\left(\begin{matrix}\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{p})}_{ij}&\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{q})}_{ij}\\\
\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{p})}_{ij}&\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{q})}_{ij}\end{matrix}\right),$
(2.13)
and assuming that all atoms have the same mass $m$, identity (2.11) yields the
following time evolution equations:
$\frac{\;\\!\mathrm{d}\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{p})}_{ij}}{\;\\!\mathrm{d}t}=\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\otimes\left(\boldsymbol{p}_{j}-\overline{\boldsymbol{p}}_{j}\right)}\right\rangle+\left\langle{\left(\boldsymbol{p}_{i}-\overline{\boldsymbol{p}}_{i}\right)\otimes\boldsymbol{F}_{j}(\boldsymbol{q})}\right\rangle,$
(2.14a)
$\frac{\;\\!\mathrm{d}\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{q})}_{ij}}{\;\\!\mathrm{d}t}=\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\otimes\left(\boldsymbol{q}_{j}-\overline{\boldsymbol{q}}_{j}\right)}\right\rangle+\frac{\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{p})}_{ij}}{m},$
(2.14b)
$\frac{\;\\!\mathrm{d}\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{p})}_{ij}}{\;\\!\mathrm{d}t}=\left\langle{\left(\boldsymbol{q}_{i}-\overline{\boldsymbol{q}}_{i}\right)\otimes{\boldsymbol{F}}_{j}(\boldsymbol{q})}\right\rangle+\frac{\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{p})}_{ij}}{m},$
(2.14c)
$\frac{\;\\!\mathrm{d}\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{q})}_{ij}}{\;\\!\mathrm{d}t}=\frac{\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{q})}_{ij}+\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{p})}_{ij}}{m}.$
(2.14d)
Equations (2.14) govern the evolution of the pairwise momentum and
displacement correlations of atoms $i$ and $j$, and they must be solved to
obtain the interatomic correlations at all times. Equations (2.14c)-(2.14d)
govern the thermomechanical coupling of the crystal. Note that we may identify
the pairwise kinetic tensor
$\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{p})}_{ij}$ as a measure of
temperature, so that (2.14b) and (2.14c) describe the evolution of the system-
wide distribution as a result of unbalanced pairwise virial tensors and
kinetic tensors (see Admal and Tadmor (2010) for the tensor virial theorem).
The virial tensor
$\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\otimes\left(\boldsymbol{q}_{j}-\overline{\boldsymbol{q}}_{j}\right)}\right\rangle$
changes with changing displacement correlations of atoms due to varying
extents of atomic vibrations
$\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{q})}_{ij}$, thus coupling
(2.14d) with (2.14b) and (2.14c). The right-hand side of (2.14a) resembles the
tensor form of the interatomic heat current (Sääskilahti et al., 2015; Lepri
et al., 2003) and changes with varying correlation matrices
$\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{p})}_{ij}$, thus coupling
(2.14a) with (2.14b) and (2.14c). Consequently, the imbalance between virial
and kinetic tensors in equations (2.14b) and (2.14c) drives the phase-space
motion of the system of particles, resulting in the time evolution of
$\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{p})}_{ij}$ and
$\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{q})}_{ij}$.
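For a linear (harmonic) force the phase flow maps Gaussians to Gaussians, so the covariance evolution implied by (2.14) can be checked against exact trajectory propagation. A minimal sketch for an assumed one-dimensional harmonic oscillator (the force law and all parameter values are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)

# assumed 1D toy model: F = -k q, so the phase flow is linear,
# z(t) = Phi(t) z(0) with z = (p, q); a Gaussian then stays Gaussian and
# its covariance evolves as Sigma(t) = Phi Sigma(0) Phi^T, consistent
# with the moment equations (2.14)
k, m, t = 1.5, 1.0, 0.7
w = np.sqrt(k / m)

# exact propagator of (p, q) for the harmonic oscillator
c, s = np.cos(w * t), np.sin(w * t)
Phi = np.array([[c, -m * w * s],
                [s / (m * w), c]])

Sigma0 = np.array([[0.4, 0.05],      # initial momentum/position covariance
                   [0.05, 0.1]])
z0 = rng.multivariate_normal(np.zeros(2), Sigma0, size=400_000)
zt = z0 @ Phi.T                      # propagate every sample exactly

# the empirical covariance at time t matches the analytically evolved one
Sigma_t = Phi @ Sigma0 @ Phi.T
emp = zt.T @ zt / len(zt)
assert np.allclose(emp, Sigma_t, atol=1e-2)
```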
We identify the entropy $S$ of the atomic ensemble as
$S=-k_{B}\left\langle{\ln
f}\right\rangle=k_{B}\left({3N}\left[1+\ln(2\pi)\right]-\ln\left(N!\right)+{\ln\left(\frac{\sqrt{\det\boldsymbol{\Sigma}}}{h^{3N}}\right)}\right)=S_{0}+k_{B}\ln\left(\frac{\sqrt{\det\boldsymbol{\Sigma}}}{h^{3N}}\right),$
(2.15)
where $k_{B}$ is Boltzmann’s constant, and $S_{0}$ is a constant for a given
system with a constant number of atoms. The entropy rate of change follows as
$\frac{\;\\!\mathrm{d}S}{\;\\!\mathrm{d}t}=\frac{k_{B}}{2\det\boldsymbol{\Sigma}}\frac{\;\\!\mathrm{d}(\det\boldsymbol{\Sigma})}{\;\\!\mathrm{d}t}.$
(2.16)
Overall, equations (2.12) govern the phase-space motion of a system of atoms
and contain $6N+36N(N+1)/2$ equations, whose solution is even more
computationally expensive than solving the state-space governing equations of
MD. As a simplifying assumption, Ma et al. (1993) and Heller (1975)
assumed the statistical independence of atoms (i.e., of states in the quantum
analogue), which implies
$\boldsymbol{\Sigma}_{ij}=\mathbf{0}\quad\mathrm{for}\ i\neq j.$ (2.17)
As shown in the following section, the assumption (2.17) severely limits the
applicability of (2.12), since under this assumption the transport of heat is
not correctly resolved in the evolution equations. Consequently, we may apply
the phase-space evolution equations (2.12) and (2.14) only for isentropic
(reversible) finite temperature simulations of quasistatic and dynamic
processes.
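One immediate consequence of (2.17) is that $\boldsymbol{\Sigma}$ becomes block-diagonal, so $\det\boldsymbol{\Sigma}$ factorizes over atoms and the Gaussian entropy (2.15) splits into a sum of per-atom contributions. A quick numerical check of this factorization (the random positive-definite $6\times 6$ blocks are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    """Random symmetric positive-definite matrix (illustrative)."""
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

# per-atom 6x6 covariance blocks under interatomic independence, eq. (2.17)
blocks = [random_spd(6) for _ in range(4)]

# assemble the full block-diagonal covariance matrix
n = 6 * len(blocks)
Sigma = np.zeros((n, n))
for i, B in enumerate(blocks):
    Sigma[6*i:6*i+6, 6*i:6*i+6] = B

# det Sigma factorizes over blocks, so the Gaussian entropy
# S = S0 + kB ln(sqrt(det Sigma) / h^{3N}) splits into per-atom terms
log_det_full   = np.linalg.slogdet(Sigma)[1]
log_det_blocks = sum(np.linalg.slogdet(B)[1] for B in blocks)
assert np.isclose(log_det_full, log_det_blocks)
```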
### 2.3 Independent Gaussian phase packets
Since solving the full system of evolution equations is prohibitively
expensive, as discussed above, let us apply (2.17) and assume non-zero
correlations between the position and momentum of each individual atom, but no
cross-correlations between different atoms. To this end, we apply the GPP
approximation to a single atom $i$, which yields the multivariate Gaussian
distribution of the phase-space coordinate $\boldsymbol{z}_{i}$ as
$f_{i}(\boldsymbol{z}_{i},t)=\frac{1}{\mathcal{Z}_{i}}e^{-\frac{1}{2}\left(\boldsymbol{z}_{i}-\overline{\boldsymbol{z}}_{i}\right)^{\mathrm{T}}\boldsymbol{\Sigma}^{-1}_{i}\left(\boldsymbol{z}_{i}-\overline{\boldsymbol{z}}_{i}\right)}\qquad\text{so
that}\qquad f(\boldsymbol{z},t)=\prod_{i=1}^{N}f_{i}(\boldsymbol{z}_{i},t),$
(2.18)
where the phase-space parameters
$(\overline{\boldsymbol{z}}_{i},\boldsymbol{\Sigma}_{i})$ denote the mean
phase-space coordinate and variance of the $i^{\text{th}}$ atom, respectively,
defined as
$\overline{\boldsymbol{z}}_{i}(t)=\frac{1}{h^{3}}\int_{\mathbb{R}^{6}}f_{i}(\boldsymbol{z}_{i},t)\boldsymbol{z}_{i}\;\\!\mathrm{d}\boldsymbol{z}_{i}=\left\langle{\boldsymbol{z}_{i}}\right\rangle,\qquad\boldsymbol{\Sigma}_{i}=\left\langle{\left(\boldsymbol{z}_{i}-\overline{\boldsymbol{z}}_{i}\right)\otimes\left(\boldsymbol{z}_{i}-\overline{\boldsymbol{z}}_{i}\right)}\right\rangle.$
(2.19)
The normalization quantity $\mathcal{Z}_{i}(t)$ may be identified as the
single particle partition function
$\mathcal{Z}_{i}(t)=\frac{1}{h^{3}}\int_{\mathbb{R}^{6}}e^{-\frac{1}{2}\left(\boldsymbol{z}_{i}-\overline{\boldsymbol{z}}_{i}\right)^{\mathrm{T}}\boldsymbol{\Sigma}^{-1}_{i}\left(\boldsymbol{z}_{i}-\overline{\boldsymbol{z}}_{i}\right)}\;\\!\mathrm{d}\boldsymbol{z}_{i}=\left(\frac{2\pi}{h}\right)^{3}\sqrt{\det\boldsymbol{\Sigma}_{i}}.$
(2.20)
$\boldsymbol{\Sigma}_{i}$ is the $6\times 6$ covariance matrix of the
multivariate Gaussian and accounts for the variance or uncertainty in the
momentum $\boldsymbol{p}_{i}$ and displacement $\boldsymbol{q}_{i}$ of the
$i^{\text{th}}$ atom.
The assumed interatomic independence eliminates the interatomic correlations
as independent variables, thus reducing the total number of equations to
$27N+6N$ for a system of $N$ atoms, which govern the time evolution of the
kinetic tensor $\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{p})}_{i}$,
displacement-correlation tensor
$\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{q})}_{i}$, and the momentum-
displacement-correlation tensor
$\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{q})}_{i}=\left(\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{p})}_{i}\right)^{\mathrm{T}}$
via
$\displaystyle\frac{\;\\!\mathrm{d}\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{p})}_{i}}{\;\\!\mathrm{d}t}=\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\otimes\left(\boldsymbol{p}_{i}-\overline{\boldsymbol{p}}_{i}\right)}\right\rangle+\left\langle{\left(\boldsymbol{p}_{i}-\overline{\boldsymbol{p}}_{i}\right)\otimes\boldsymbol{F}_{i}(\boldsymbol{q})}\right\rangle,$
(2.21a)
$\displaystyle\frac{\;\\!\mathrm{d}\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{q})}_{i}}{\;\\!\mathrm{d}t}=\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\otimes\left(\boldsymbol{q}_{i}-\overline{\boldsymbol{q}}_{i}\right)}\right\rangle+\frac{\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{p})}_{i}}{m},$
(2.21b)
$\displaystyle\frac{\;\\!\mathrm{d}\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{q})}_{i}}{\;\\!\mathrm{d}t}=\frac{\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{q})}_{i}+\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{p})}_{i}}{m},$
(2.21c)
combined with the phase-averaged equations of motion,
$\frac{\;\\!\mathrm{d}\left\langle{\boldsymbol{p}}\right\rangle_{i}}{\;\\!\mathrm{d}t}=\left\langle{\boldsymbol{F}}\right\rangle_{i},\qquad\frac{\;\\!\mathrm{d}\left\langle{\boldsymbol{q}}\right\rangle_{i}}{\;\\!\mathrm{d}t}=\frac{\left\langle{\boldsymbol{p}}\right\rangle_{i}}{m}.$
(2.22)
For simplicity, we further make the spherical distribution assumption that all
cross-correlations between different directions of momenta and displacements
vanish, hence approximating the _atomic clouds_ in phase-space as spherical
(thus eliminating any directional preference of the atomic vibrations). While
this is a strong assumption, it allows us to reduce the above tensorial
evolution equations to scalar ones. Specifically, taking the trace
$\mathrm{tr}\left(\cdot\right)$ of (2.21), we obtain,
$\displaystyle\frac{\;\\!\mathrm{d}\Omega_{i}}{\;\\!\mathrm{d}t}=\frac{2\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\cdot\left(\boldsymbol{p}_{i}-\overline{\boldsymbol{p}}_{i}\right)}\right\rangle}{3},$
(2.23a)
$\displaystyle\frac{\;\\!\mathrm{d}\Sigma_{i}}{\;\\!\mathrm{d}t}=\frac{2\beta_{i}}{m},$
(2.23b)
$\displaystyle\frac{\;\\!\mathrm{d}\beta_{i}}{\;\\!\mathrm{d}t}=\frac{\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\cdot\left(\boldsymbol{q}_{i}-\overline{\boldsymbol{q}}_{i}\right)}\right\rangle}{3}+\frac{\Omega_{i}}{m},$
(2.23c)
where we introduced the three scalar parameters
$\mathrm{tr}\left(\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{p})}_{i}\right)=3\Omega_{i},\qquad\mathrm{tr}\left(\boldsymbol{\Sigma}^{(\boldsymbol{q},\boldsymbol{q})}_{i}\right)=3\Sigma_{i}\quad\mathrm{and}\quad\mathrm{tr}\left(\boldsymbol{\Sigma}^{(\boldsymbol{p},\boldsymbol{q})}_{i}\right)=3\beta_{i}.$
(2.24)
Equations (2.23) are $3N$ coupled scalar ODEs, which determine the changes in
the vibrational widths of atoms in phase-space ($\Omega_{i}$ and $\Sigma_{i}$)
and the correlation $\beta_{i}$ between the displacement and momentum
vibrations of the $i^{\text{th}}$ atom (see Figure 2). We note that (2.23) are
identical to the equations used by Ma et al. (1993), who used the formulation
as an optimization procedure for a system of atoms at a uniform constant
temperature.
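Equations (2.23) can be integrated with any standard ODE scheme. The following minimal sketch assumes a toy isotropic harmonic well, $V(\boldsymbol{q})=\frac{1}{2}m\omega^{2}|\boldsymbol{q}-\boldsymbol{q}_{0}|^{2}$ (an illustrative potential, not from the text), for which the phase averages close exactly as $\left\langle{\boldsymbol{F}\cdot(\boldsymbol{p}-\overline{\boldsymbol{p}})}\right\rangle=-3m\omega^{2}\beta$ and $\left\langle{\boldsymbol{F}\cdot(\boldsymbol{q}-\overline{\boldsymbol{q}})}\right\rangle=-3m\omega^{2}\Sigma$; it verifies two quantities that this closure conserves:

```python
import numpy as np

# scalar GPP ODEs (2.23) for one atom in an assumed isotropic harmonic
# well V = (1/2) m w^2 |q - q0|^2, so that the phase averages close as
# <F.(p-pbar)> = -3 m w^2 beta and <F.(q-qbar)> = -3 m w^2 Sigma
m, w, dt = 1.0, 2.0, 1e-4

def rhs(y):
    Om, Sg, be = y
    return np.array([-2.0 * m * w**2 * be,       # dOmega/dt, eq. (2.23a)
                     2.0 * be / m,               # dSigma/dt, eq. (2.23b)
                     -m * w**2 * Sg + Om / m])   # dbeta/dt,  eq. (2.23c)

y = np.array([1.0, 0.3, 0.05])    # initial (Omega, Sigma, beta), assumed
y0 = y.copy()
for _ in range(50_000):           # classical RK4 time stepping
    k1 = rhs(y); k2 = rhs(y + 0.5*dt*k1)
    k3 = rhs(y + 0.5*dt*k2); k4 = rhs(y + dt*k3)
    y += dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# two invariants of this harmonic closure: a fluctuation "energy" and
# the combination Omega*Sigma - beta^2
inv = lambda y: (y[0]/(2*m) + 0.5*m*w**2*y[1], y[0]*y[1] - y[2]**2)
assert np.allclose(inv(y), inv(y0), rtol=1e-8)
```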
Figure 2: Illustration of an atomic trajectory by GPP dynamics (on the right)
compared to a molecular dynamics trajectory (on the left) for a single
particle. Small-scale motions of the particle are approximated by the
parameters $\Omega$, $\Sigma$ and $\beta$ upon averaging over suitable time
intervals. As discussed in Section 3.2, in the quasistatic limit, $\Omega$ and
$\Sigma$ are proportional to the local temperature.
The physical role of momentum-displacement correlation $\beta_{i}$ becomes
evident upon applying a time-reversal transformation $t\mapsto-t$ to (2.22)
and (2.23), which results in the transformations
$\left(\overline{\boldsymbol{q}}_{i},\Omega_{i},\Sigma_{i}\right)\mapsto\left(\overline{\boldsymbol{q}}_{i},\Omega_{i},\Sigma_{i}\right)$
and
$\left(\overline{\boldsymbol{p}}_{i},\beta_{i}\right)\mapsto\left(-\overline{\boldsymbol{p}}_{i},-\beta_{i}\right)$.
Consequently, the correlation $\beta_{i}$ signifies the momentum of the
$i^{\text{th}}$ atom in phase-space, governing the time evolution of the
thermal coordinate $\Sigma_{i}$. Hence, $\beta_{i}$ will be referred to as the
_thermal momentum_ hereafter. The dynamics and thermodynamics of crystals
modeled via the independent GPP approximation can be summarized by
eliminating the thermal and dynamical momenta, yielding for every atom
$i=1,\ldots,N$
$\displaystyle\frac{\;\\!\mathrm{d}^{2}\left\langle{\boldsymbol{q}}\right\rangle_{i}}{\;\\!\mathrm{d}t^{2}}=\frac{\left\langle{\boldsymbol{F}}\right\rangle_{i}}{m},$
(2.25a)
$\displaystyle\frac{\;\\!\mathrm{d}^{2}\Sigma_{i}}{\;\\!\mathrm{d}t^{2}}=\frac{2\Omega_{i}}{m^{2}}+\frac{2\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\cdot\left(\boldsymbol{q}_{i}-\overline{\boldsymbol{q}}_{i}\right)}\right\rangle}{3m},$
(2.25b)
$\displaystyle\frac{\;\\!\mathrm{d}\Omega_{i}}{\;\\!\mathrm{d}t}=\frac{2\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\cdot\left(\boldsymbol{p}_{i}-\overline{\boldsymbol{p}}_{i}\right)}\right\rangle}{3}.$
(2.25c)
Time reversibility of (2.25) highlights that these evolution equations do not
capture nonequilibrium irreversible thermal transport at all scales due to the
simplification of (2.14) to (2.21) under the interatomic independence
assumption (2.17). As mentioned previously, such interatomic independence is
necessary to keep a feasible number of unknowns for large ensembles. The
reversible entropy fluctuations can be obtained by substituting the
independent GPP assumption into (2.16), which gives
$\frac{\;\\!\mathrm{d}S}{\;\\!\mathrm{d}t}=\frac{k_{B}}{2}\frac{1}{\det\boldsymbol{\Sigma}}\frac{\;\\!\mathrm{d}\det\boldsymbol{\Sigma}}{\;\\!\mathrm{d}t}=\sum_{i=1}^{N}\left(\frac{k_{B}}{2}\frac{1}{\det\boldsymbol{\Sigma}_{i}}\frac{\;\\!\mathrm{d}\det\boldsymbol{\Sigma}_{i}}{\;\\!\mathrm{d}t}\right)=\sum_{i=1}^{N}\frac{\;\\!\mathrm{d}S_{i}}{\;\\!\mathrm{d}t},$
(2.26)
where the local entropy fluctuations of the $i^{\text{th}}$ atom are given by
$\displaystyle\frac{\;\\!\mathrm{d}S_{i}}{\;\\!\mathrm{d}t}=$
$\displaystyle\frac{k_{B}}{2}\frac{1}{\det\boldsymbol{\Sigma}_{i}}\frac{\;\\!\mathrm{d}\det\boldsymbol{\Sigma}_{i}}{\;\\!\mathrm{d}t}=\frac{k_{B}}{2\left(\Omega^{3}_{i}\Sigma^{3}_{i}-\beta^{6}_{i}\right)}\frac{\;\\!\mathrm{d}}{\;\\!\mathrm{d}t}\left(\Omega^{3}_{i}\Sigma^{3}_{i}-\beta^{6}_{i}\right)$
$\displaystyle=$
$\displaystyle\frac{k_{B}}{2\left(\Omega^{3}_{i}\Sigma^{3}_{i}-\beta^{6}_{i}\right)}\left(\frac{6\Omega_{i}\beta_{i}}{m}\left[(\Omega_{i}\Sigma_{i})^{2}-\beta^{4}_{i}\right]+2\left[\Omega^{2}_{i}\Sigma^{3}_{i}\boldsymbol{F}_{i}\cdot(\boldsymbol{p}_{i}-\overline{\boldsymbol{p}}_{i})-\beta^{5}_{i}\boldsymbol{F}_{i}\cdot(\boldsymbol{q}_{i}-\overline{\boldsymbol{q}}_{i})\right]\right).$
(2.27)
The above equation shows that the local entropy fluctuation $\dot{S}_{i}$ is
proportional to the thermal momentum $\beta_{i}$. We will return to relation
(2.27) when discussing specific types of interatomic potentials in subsequent
sections.
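The reversibility statement can also be checked numerically for the scalar equations (2.23): under an assumed isotropic harmonic closure ($\left\langle{\boldsymbol{F}\cdot(\boldsymbol{p}-\overline{\boldsymbol{p}})}\right\rangle=-3m\omega^{2}\beta$, $\left\langle{\boldsymbol{F}\cdot(\boldsymbol{q}-\overline{\boldsymbol{q}})}\right\rangle=-3m\omega^{2}\Sigma$; a toy model, not from the text), integrating forward, flipping the sign of the thermal momentum $\beta$, and integrating forward again retraces the trajectory:

```python
import numpy as np

# time-reversal check for the scalar GPP ODEs (2.23) under an assumed
# harmonic closure: <F.(p-pbar)> = -3 m w^2 beta, <F.(q-qbar)> = -3 m w^2 Sigma
m, w, dt, n = 1.0, 2.0, 1e-4, 20_000

def rhs(y):
    Om, Sg, be = y
    return np.array([-2*m*w**2*be, 2*be/m, -m*w**2*Sg + Om/m])

def integrate(y, n):
    """Classical RK4 integration of (Omega, Sigma, beta)."""
    y = y.copy()
    for _ in range(n):
        k1 = rhs(y); k2 = rhs(y + 0.5*dt*k1)
        k3 = rhs(y + 0.5*dt*k2); k4 = rhs(y + dt*k3)
        y += dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return y

y0 = np.array([1.0, 0.3, 0.05])    # initial (Omega, Sigma, beta), assumed
y1 = integrate(y0, n)
y1[2] *= -1.0                      # flip the thermal momentum beta
y2 = integrate(y1, n)              # forward integration now retraces the path
y2[2] *= -1.0
assert np.allclose(y2, y0, atol=1e-9)
```

As the text notes, this reversibility is exactly why the independent-GPP equations alone cannot produce irreversible thermal transport.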
Since the interatomic independence assumption results in an incorrect
calculation of the interatomic heat flux in (2.14a), one needs to model
irreversible thermal transport, e.g., using the linear kinetic potential
framework of Venturini et al. (2014), as discussed in the following.
## 3 Dynamics and Quasistatics of independent GPPs
We proceed to analyze the dynamic behavior of the GPP equations (2.25) (under
the interatomic independence assumption) and subsequently deduce the
quasistatic behavior as a limit case. For instructive purposes (and because
the (quasi-)harmonic assumption plays a frequent role in atomistic analysis),
we apply the equations to both harmonic and anharmonic potentials that
describe atomic interactions within the crystal. While the distribution ansatz
based on GPP contains only quadratic functions of $\boldsymbol{q}$
(resembling harmonic interaction), the force
$\boldsymbol{F}_{i}(\boldsymbol{q})$ is derived from the potential
$V(\boldsymbol{q})$ and can be anharmonic. Such an approximation is usually
known as _quasi-harmonic approximation_ because, at equilibrium, the
distribution resembles that of a canonical ensemble of harmonic oscillators,
even though the interatomic forces are anharmonic. We show that, for a
quasistatic change in the thermodynamic state of a crystal composed of the GPP
atoms (i.e., driving the mean dynamical and thermal momenta
$\overline{\boldsymbol{p}}$ and $\beta$, respectively, to zero), the
information about the evolution of $\Omega$ is lost (cf. (2.25)).
Correspondingly, we may assume a specific nature of the thermodynamic process
of interest (e.g., isothermal, isentropic, etc.) to determine the change in
$\Omega$ of the GPP atoms during the quasistatic change in the thermodynamic
state. Alternatively, the decay of correlation $\beta(t)$ can be modeled
empirically to obtain the change in $\Omega$, if the nature of the
thermodynamic process is unknown.
### 3.1 Dynamics
#### 3.1.1 Harmonic approximation
As a simplified case that admits analytical treatment, we consider a harmonic
approximation of the interaction potential $V(\boldsymbol{q})$, writing
$V(\boldsymbol{q})=V_{0}+\frac{1}{2}\sum_{i=1}^{N}\sum_{j\in\mathcal{N}(i)}(\boldsymbol{q}_{i}-\boldsymbol{q}_{j})^{\mathrm{T}}\boldsymbol{K}(\boldsymbol{q}_{i}-\boldsymbol{q}_{j}),$
(3.1)
where $\boldsymbol{K}\in\mathbb{R}^{3\times 3}$ is the local harmonic
dynamical matrix, $V_{0}$ is the equilibrium potential between the atoms
approximated as independent GPPs, and $\mathcal{N}(i)$ represents the
neighbourhood of the $i^{\mathrm{th}}$ atom. Equations (2.25) with (3.1)
become
$\frac{\;\\!\mathrm{d}^{2}\left\langle{\boldsymbol{q}}\right\rangle_{i}}{\;\\!\mathrm{d}t^{2}}=-2\sum_{j\in\mathcal{N}(i)}\boldsymbol{K}\left\langle{\boldsymbol{q}_{i}-\boldsymbol{q}_{j}}\right\rangle$
(3.2)
and
$\displaystyle\frac{\;\\!\mathrm{d}^{2}\Sigma_{i}}{\;\\!\mathrm{d}t^{2}}=\frac{2\Omega_{i}}{m^{2}}-\frac{4n_{i}\operatorname{tr}(\boldsymbol{K})}{m}\Sigma_{i},$
(3.3a)
$\displaystyle\frac{\;\\!\mathrm{d}\Omega_{i}}{\;\\!\mathrm{d}t}=-4n_{i}\operatorname{tr}(\boldsymbol{K})\beta_{i},$
(3.3b)
where $n_{i}\operatorname{tr}(\boldsymbol{K})$ is an effective force constant,
which depends on the number of immediate neighbours of the $i^{\mathrm{th}}$
atom, denoted by $n_{i}$. Equations (3.3) show that the mean mechanical
displacement $\left\langle{\boldsymbol{q}}\right\rangle_{i}$ of atom $i$ is
decoupled from its thermodynamic displacements $\Omega_{i}$ and $\Sigma_{i}$
for a harmonic potential field between the atoms. The resulting decoupled
thermodynamic equations
$\frac{\;\\!\mathrm{d}\Omega_{i}}{\;\\!\mathrm{d}t}=-4n_{i}\operatorname{tr}(\boldsymbol{K})\beta_{i},\qquad\frac{\;\\!\mathrm{d}\Sigma_{i}}{\;\\!\mathrm{d}t}=\frac{2\beta_{i}}{m},\quad\text{and}\quad\frac{\;\\!\mathrm{d}\beta_{i}}{\;\\!\mathrm{d}t}=\frac{\Omega_{i}}{m}-2n_{i}\operatorname{tr}(\boldsymbol{K})\Sigma_{i},$
(3.4)
exhibit the following independent eigenvectors
$(\boldsymbol{\phi}_{0},\boldsymbol{\phi}_{+},\boldsymbol{\phi}_{-})$ and
corresponding eigenfrequencies $(\omega_{0},\omega_{+},\omega_{-})$:
$\boldsymbol{\phi}_{0}=\left(\begin{matrix}2mn_{i}\operatorname{tr}(\boldsymbol{K})\\ 1\\ 0\end{matrix}\right),\quad\boldsymbol{\phi}_{\pm}=\left(\begin{matrix}-2mn_{i}\operatorname{tr}(\boldsymbol{K})\\ 1\\ \frac{im\omega_{\pm}}{2}\end{matrix}\right),\qquad\omega_{0}=0,\quad\omega_{\pm}=\pm
2\sqrt{\frac{2n_{i}\operatorname{tr}(\boldsymbol{K})}{m}},$ (3.5)
so that a general homogeneous solution
$\left(\begin{matrix}\Omega_{i}(t)\\\ \Sigma_{i}(t)\\\
\beta_{i}(t)\end{matrix}\right)=a_{0}\boldsymbol{\phi}_{0}+a_{+}\boldsymbol{\phi}_{+}e^{i\omega_{+}t}+a_{-}\boldsymbol{\phi}_{-}e^{i\omega_{-}t},$
(3.6)
is composed of constant and oscillatory components. Coefficients
$(a_{0},a_{+},a_{-})$ are determined by the initial thermodynamic state of
each atom. The constant component $\boldsymbol{\phi}_{0}$ corresponds to
$\beta=0$ and
$\Omega_{i}=2mn_{i}\operatorname{tr}(\boldsymbol{K})\Sigma_{i}$. To
interpret the three terms within this solution, let us formulate the excess
internal energy, which for the harmonic approximation (3.1) becomes
$E=\left\langle{\mathcal{H}}\right\rangle=\left\langle{V(\boldsymbol{q})}\right\rangle+\sum_{i=1}^{N}\frac{\left\langle{|\boldsymbol{p}_{i}|^{2}}\right\rangle}{2m}=\sum_{i=1}^{N}\left(\frac{\Omega_{i}}{2m}+n_{i}\operatorname{tr}(\boldsymbol{K})\Sigma_{i}\right).$
(3.7)
By insertion into (3.7), it becomes apparent that the constant component
$\boldsymbol{\phi}_{0}$ with
$\Omega_{i}=2mn_{i}\operatorname{tr}(\boldsymbol{K})\Sigma_{i}$ has equal
average kinetic and potential energies. This equipartition of energy implies
that $\boldsymbol{\phi}_{0}$ corresponds to the thermodynamic equilibrium.
Consequently, the components $\boldsymbol{\phi}_{\pm}$ correspond to
oscillations about the equilibrium state $\boldsymbol{\phi}_{0}$ with
frequencies $|\omega_{\pm}|=2\sqrt{2n_{i}\operatorname{tr}(\boldsymbol{K})/m}$.
Due to the decoupling of the thermodynamic equations (3.4) from the dynamic
equation of motion (3.2), a harmonic GPP lattice exhibits no thermomechanical
coupling: it displays neither heating nor cooling of the lattice under
external stress, nor expansion due to local heating, and vice versa.
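As a concrete check, the closed linear system (3.4) can be integrated numerically. The following minimal pure-Python sketch, with illustrative unit parameters $m$ and $k_{\mathrm{eff}}=n_{i}\operatorname{tr}(\boldsymbol{K})$ (not values from the paper), verifies that the excess energy (3.7) is conserved and that the state is periodic with period $2\pi/|\omega_{\pm}|$:

```python
import math

def rk4_step(f, y, dt):
    """One classical Runge-Kutta step for dy/dt = f(y)."""
    k1 = f(y)
    k2 = f([yi + 0.5*dt*ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5*dt*ki for yi, ki in zip(y, k2)])
    k4 = f([yi + dt*ki for yi, ki in zip(y, k3)])
    return [yi + dt/6.0*(a + 2.0*b + 2.0*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def gpp_harmonic_rhs(m, k_eff):
    """Right-hand side of (3.4); state y = (Omega, Sigma, beta)."""
    def f(y):
        om, sg, be = y
        return [-4.0*k_eff*be, 2.0*be/m, om/m - 2.0*k_eff*sg]
    return f

m, k_eff = 1.0, 2.0                                  # illustrative unit values
f = gpp_harmonic_rhs(m, k_eff)
energy = lambda y: y[0]/(2.0*m) + k_eff*y[1]         # excess energy, cf. (3.7)

y0 = [1.0, 0.26, 0.0]                                # slightly out of equilibrium
period = 2.0*math.pi/(2.0*math.sqrt(2.0*k_eff/m))    # 2*pi/|omega_pm|, cf. (3.5)
y, dt = list(y0), period/4000
for _ in range(4000):
    y = rk4_step(f, y, dt)
# the state returns after one full period and the energy never drifts
```

The conservation of the energy follows directly from (3.4): $\dot{E}_i = \dot\Omega_i/2m + k_{\mathrm{eff}}\dot\Sigma_i = -2k_{\mathrm{eff}}\beta_i/m + 2k_{\mathrm{eff}}\beta_i/m = 0$.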
Finally, by substituting the harmonic potential (3.1) into (2.27), we obtain
the reversible fluctuations in entropy of atom $i$ for a system of atoms in a
harmonic field:
$\frac{\;\\!\mathrm{d}S_{i}}{\;\\!\mathrm{d}t}=\frac{3k_{B}n_{i}\operatorname{tr}(\boldsymbol{K})\beta_{i}\dot{\beta_{i}}\left(\Omega^{2}_{i}\Sigma^{2}_{i}-\beta^{4}_{i}\right)}{\Omega^{3}_{i}\Sigma^{3}_{i}-\beta^{6}_{i}}.$
(3.8)
#### 3.1.2 Anharmonic thermomechanical effects
Since a harmonic approximation of the interatomic potential renders the GPP
crystal thermomechanically decoupled, as discussed above, we next study the
effects of anharmonicity in the potential. As the simplest possible extension
of the harmonic potential, we now consider
$V(\boldsymbol{q})=V_{0}+\frac{1}{2}\sum_{i=1}^{N}\sum_{j\in\mathcal{N}(i)}(\boldsymbol{q}_{i}-\boldsymbol{q}_{j})^{\mathrm{T}}\boldsymbol{K}(\boldsymbol{q}_{i}-\boldsymbol{q}_{j})+\frac{1}{6}\sum_{i=1}^{N}\sum_{j\in\mathcal{N}(i)}(\boldsymbol{q}_{i}-\boldsymbol{q}_{j})^{\mathrm{T}}\boldsymbol{\zeta}\left[(\boldsymbol{q}_{i}-\boldsymbol{q}_{j})\otimes(\boldsymbol{q}_{i}-\boldsymbol{q}_{j})\right],$
(3.9)
where $\boldsymbol{K}\in\mathbb{R}^{3\times 3}$ is the local dynamical matrix,
and $\boldsymbol{\zeta}$ denotes a constant anharmonic third-order tensor.
With this potential, the evolution equations (2.25) become
$\displaystyle\frac{\;\\!\mathrm{d}^{2}\left\langle{\boldsymbol{q}}\right\rangle_{i}}{\;\\!\mathrm{d}t^{2}}$
$\displaystyle=-2\sum_{j\in\mathcal{N}(i)}\boldsymbol{K}\left\langle{\boldsymbol{q}_{i}-\boldsymbol{q}_{j}}\right\rangle-2\sum_{j\in\mathcal{N}(i)}\boldsymbol{\zeta}\left\langle{(\boldsymbol{q}_{i}-\boldsymbol{q}_{j})\otimes\left(\boldsymbol{q}_{i}-\boldsymbol{q}_{j}\right)}\right\rangle,$
$\displaystyle=-2\sum_{j\in\mathcal{N}(i)}\boldsymbol{K}\left\langle{\boldsymbol{q}_{i}-\boldsymbol{q}_{j}}\right\rangle-2\sum_{j\in\mathcal{N}(i)}\boldsymbol{\zeta}\left[\left(\Sigma_{i}+\Sigma_{j}\right){\boldsymbol{I}}+\left\langle{\boldsymbol{q}_{i}-\boldsymbol{q}_{j}}\right\rangle\otimes\left\langle{\boldsymbol{q}_{i}-\boldsymbol{q}_{j}}\right\rangle\right]$
(3.10)
and
$\displaystyle\frac{\;\\!\mathrm{d}^{2}\Sigma_{i}}{\;\\!\mathrm{d}t^{2}}$
$\displaystyle=\frac{2\Omega_{i}}{m^{2}}-\frac{4}{m}\left(n_{i}\operatorname{tr}(\boldsymbol{K})\Sigma_{i}+\sum_{j\in\mathcal{N}(i)}\zeta_{lmn}\left\langle{(\boldsymbol{q}_{i}-\boldsymbol{q}_{j})_{l}(\boldsymbol{q}_{i}-\boldsymbol{q}_{j})_{m}(\boldsymbol{q}_{i}-\left\langle{\boldsymbol{q}_{i}}\right\rangle)_{n}}\right\rangle\right),$
$\displaystyle=\frac{2\Omega_{i}}{m^{2}}-\frac{4\Sigma_{i}}{m}\left(n_{i}\operatorname{tr}(\boldsymbol{K})-\sum_{j\in\mathcal{N}(i)}\zeta_{lmn}\left(\delta_{ml}\left\langle{\boldsymbol{q}_{j}}\right\rangle_{n}+\delta_{nl}\left\langle{\boldsymbol{q}_{j}}\right\rangle_{m}\right)\right),$
(3.11a)
$\displaystyle\frac{\;\\!\mathrm{d}\Omega_{i}}{\;\\!\mathrm{d}t}$
$\displaystyle=-4n_{i}\operatorname{tr}(\boldsymbol{K})\beta_{i}-4\sum_{j\in\mathcal{N}(i)}\zeta_{lmn}\left\langle{(\boldsymbol{q}_{i}-\boldsymbol{q}_{j})_{l}(\boldsymbol{q}_{i}-\boldsymbol{q}_{j})_{m}(\boldsymbol{p}_{i}-\left\langle{\boldsymbol{p}_{i}}\right\rangle)_{n}}\right\rangle,$
$\displaystyle=-4\beta_{i}\left(n_{i}\operatorname{tr}(\boldsymbol{K})-\sum_{j\in\mathcal{N}(i)}\zeta_{lmn}\left(\delta_{ml}\left\langle{\boldsymbol{q}_{j}}\right\rangle_{n}+\delta_{nl}\left\langle{\boldsymbol{q}_{j}}\right\rangle_{m}\right)\right),$
(3.11b)
where $\zeta_{lmn}$ are the components of $\boldsymbol{\zeta}$, $(\cdot)_{l}$
denotes the $l^{\text{th}}$ component of vector $(\cdot)$, and $\delta$
represents Kronecker’s delta (and we use Einstein’s summation convention,
implying summation over $l,m,n$). Note that the second term in (3.10) couples
the mechanical perturbations with the thermodynamic perturbations of atom $i$
and its neighbors. Moreover, since equations (3.11) contain products of the
thermodynamic variables $\Sigma$ and $\beta$ with the mean mechanical
displacements $\left\langle{\boldsymbol{q}}\right\rangle$, the anharmonic
potential leads to thermomechanical coupling in the GPP evolution equations.
Since most standard interatomic potentials are approximately harmonic near
equilibrium, the time scale of the GPP equations (3.10) and (3.11) is
comparable to that of atomic vibrations: the system exhibits eigenfrequencies of
$2\sqrt{2n_{i}\operatorname{tr}(\boldsymbol{K})/m}$ for a pure harmonic
potential (cf. (3.5)). Consequently, numerical time integration of the GPP
equations incurs a similar computational cost as a standard molecular dynamics
simulation of an identical system. Thus, even though mean motion and
statistical information have been separated, the interatomic independence
assumption within the GPP framework prevents significant temporal upscaling.
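The thermomechanical coupling can be illustrated qualitatively with a hypothetical one-dimensional reduction of (3.11), in which the neighbour mean displacements enter only through a prescribed slow drive $u(t)$ that modulates the effective stiffness; the parameters, the drive, and the reduction itself are illustrative assumptions, not the full tensor equations:

```python
import math

def rk4_step(f, t, y, dt):
    """One classical Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5*dt, [yi + 0.5*dt*ki for yi, ki in zip(y, k1)])
    k3 = f(t + 0.5*dt, [yi + 0.5*dt*ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt*ki for yi, ki in zip(y, k3)])
    return [yi + dt/6.0*(a + 2.0*b + 2.0*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def final_momentum_variance(zeta, t_end=5.0, steps=20000):
    """Evolve (Omega, Sigma, beta) with effective stiffness k0 - zeta*u(t)."""
    m, k0 = 1.0, 2.0
    u = lambda t: 0.1*math.sin(0.3*t)   # slow mechanical drive (illustrative)
    def f(t, y):
        om, sg, be = y
        k = k0 - zeta*u(t)              # anharmonic coupling surrogate, cf. (3.11)
        return [-4.0*k*be, 2.0*be/m, om/m - 2.0*k*sg]
    y, t, dt = [1.0, 0.26, 0.0], 0.0, t_end/steps
    for _ in range(steps):
        y = rk4_step(f, t, y, dt)
        t += dt
    return y[0]

# zeta = 0 reproduces the decoupled harmonic case; for zeta != 0 the slow
# mechanical drive changes Omega, i.e., heats or cools the thermal state
```

With `zeta = 0` the thermodynamic triple evolves exactly as in Section 3.1.1; a nonzero `zeta` lets the mechanical path alter the momentum variance, which is the signature of thermomechanical coupling.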
### 3.2 Quasistatics and thermal equation of state
Using the insight gained from the time evolution equations (3.3) and (3.11),
we proceed to study the GPP equations within the quasistatic approximation.
The latter yields a system of coupled nonlinear equations, whose solution
yields the thermodynamic equilibrium state of the crystal with atoms modeled
using the GPP ansatz. In the quasistatic approximation, the GPP equations
(2.25) with mean mechanical momentum $\overline{\boldsymbol{p}}_{i}=0$ and
thermal momentum $\beta_{i}=0$ reduce to the following steady-state equations:
$\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})}\right\rangle=0,$ (3.12a)
$\frac{\Omega_{i}}{m}+\frac{\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\cdot(\boldsymbol{q}_{i}-\overline{\boldsymbol{q}}_{i})}\right\rangle}{3}=0,$
(3.12b)
which are to be solved for the equilibrium parameters
$(\overline{\boldsymbol{q}}_{i},\Sigma_{i},\Omega_{i})$ for each atom $i$.
Substituting the quasistatic limits $\overline{\boldsymbol{p}}_{i}\rightarrow
0$ and $\beta_{i}\rightarrow 0$ into equations (2.25) shows that, at
thermomechanical equilibrium, the solution of equations (3.12) yields the
equilibrium mean displacements $\overline{\boldsymbol{q}}_{i}$ and equilibrium
displacement variances $\Sigma_{i}$ of the atoms. Analogous to the loss of
information about the evolution of the mean mechanical momentum
$\overline{\boldsymbol{p}}_{i}(t)$, the quasistatic limit only states that, at
final thermodynamic equilibrium, the thermal momentum $\beta_{i}(t)$ has
decayed to zero and that $\Omega_{i}$ and $\Sigma_{i}$ are related by equation
(3.12b). Note that substituting $\beta_{i}=0$ into (2.25) trivially yields
$\;\\!\mathrm{d}\Omega_{i}/\;\\!\mathrm{d}t=0$ at quasistatic thermomechanical
equilibrium. To determine the evolution of $\Omega_{i}(t)$ during the
thermomechanical relaxation of the system towards equilibrium, a model
$\beta_{i}(t)$ would be required. Hence, the quasistatic approximation results
in the loss of information about the thermodynamics of the process through
which the system is brought to the thermomechanical equilibrium. Moreover,
equations (3.12) are insufficient to solve for all three equilibrium
parameters $(\overline{\boldsymbol{q}}_{i},\Sigma_{i},\Omega_{i})$: they
provide only two equations per atom.
Consequently, the quasistatic equations (3.12) can only be solved for each
atom if a specific thermodynamic process is assumed and posed as an additional
constraint. To physically approximate the nature of a thermodynamic process,
we assume that the ergodic hypothesis holds for quasistatic processes, i.e.,
$\left\langle{A(\boldsymbol{z})}\right\rangle=\frac{1}{\tau}\int^{\tau}_{0}A(\boldsymbol{z}(t))\;\\!\mathrm{d}t,$
(3.13)
where $\tau$ is a sufficiently large time interval, over which the evolution
of the system is assumed quasistatic. In the ergodic limit, the momentum
variance becomes
$\Omega_{i}=mk_{B}T_{i},$ (3.14)
where $T_{i}$ is the local temperature of the $i^{\text{th}}$ atom. Since
$\Omega_{i}$ is proportional to the local temperature, the quasistatic
equations (3.12b) can then be solved for
$(\overline{\boldsymbol{q}}_{i},\Sigma_{i})$ using the physical constraints
corresponding to the assumed thermodynamic process. For instance, an
_isothermal_ relaxation can be solved for by keeping $\Omega_{i}$ constant for
all atoms. By contrast, isentropic equilibrium parameters
$(\overline{\boldsymbol{q}}_{i},\Sigma_{i},\Omega_{i})$ can be obtained by
keeping $S_{i}$ fixed for each atom during the relaxation. From (2.15) we know
that
$S_{i}=S_{0,i}+3k_{B}\ln\left(\frac{\sqrt{\Omega_{i}\Sigma_{i}}}{h}\right)=\tilde{S}_{0}+{3k_{B}}S_{\Sigma,i}+{3k_{B}}S_{\Omega,i}=\text{const}.,$
(3.15)
where $\tilde{S}_{0}=S_{0,i}-3k_{B}\ln h$. Upon using suitable dimensional
constants of unit values, parameters $S_{\Omega,i}=\frac{1}{2}\ln\Omega_{i}$
and $S_{\Sigma,i}=\frac{1}{2}\ln\Sigma_{i}$ may be interpreted as
dimensionless momentum-variance and displacement-variance entropies,
respectively. In the following, it will be convenient to use $S_{\Omega,i}$
and $S_{\Sigma,i}$ as the mean free parameters instead of $\Omega_{i}$ and
$\Sigma_{i}$. Analogously, isobaric conditions can be derived from the system-
averaged Cauchy stress tensor (Admal and Tadmor, 2010)
$\boldsymbol{\sigma}=-\frac{1}{\mathcal{V}}\sum_{i}\left\langle{\frac{\boldsymbol{p}_{i}\otimes\boldsymbol{p}_{i}}{m}+\boldsymbol{F}_{i}(\boldsymbol{q})\otimes(\boldsymbol{q}_{i}-\overline{\boldsymbol{q}}_{i})}\right\rangle,$
(3.16)
where $\mathcal{V}$ is the volume of the system. The average hydrostatic
pressure $p$ of the system is
$p=-\frac{\mathrm{tr}(\boldsymbol{\sigma})}{3}=\frac{1}{\mathcal{V}}\sum_{i}\left(k_{B}T_{i}+\frac{\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\cdot(\boldsymbol{q}-\overline{\boldsymbol{q}}_{i})}\right\rangle}{3}\right).$
(3.17)
Hence, setting $p=\text{const}.$ in (3.17) is the isobaric constraint, subject
to which equations (3.12b) become
$\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})}\right\rangle=0,\qquad\left(k_{B}T_{i}-\frac{p\mathcal{V}}{N}\right)+\frac{\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\cdot(\boldsymbol{q}-\overline{\boldsymbol{q}}_{i})}\right\rangle}{3}=0.$
(3.18)
Solving these equations, in which
$\Omega_{i}=m\left(k_{B}T_{i}-\frac{p\mathcal{V}}{N}\right)$ serves as the
momentum variance corrected for pressure $p$, yields the equilibrium
parameters $(\overline{\boldsymbol{q}}_{i},S_{\Sigma,i},S_{\Omega,i})$ for a
given externally applied pressure $p$. Note that for a system of non-
interacting atoms, the quasistatic equations subjected to an external pressure
$p$ reduce to the ideal gas equation $p\mathcal{V}=Nk_{B}T$. Consequently,
equation (3.18) shows that the quasistatic approximation yields the thermal
equation of state of the system accounting for the interatomic potential,
which enables the thermomechanical coupling within the crystal. The three
constraints on $\Omega_{i}$ for isentropic, isothermal, and isobaric processes
– based on the above discussion – are summarized in Table 1.
Process | Isentropic | Isothermal | Isobaric
---|---|---|---
Constraint | $\Omega_{i}\Sigma_{i}=\mathrm{const.}$ | $\Omega_{i}-k_{B}mT_{i}=0$ | $N\left(\Omega_{i}-k_{B}mT_{i}\right)/\mathcal{V}=-pm=\text{const.}$
Table 1: Summary of the thermodynamic constraints on $\Omega_{i}$ for the
different assumptions about the thermodynamic process under which the system
is brought to equilibrium.
### 3.3 Helmholtz free energy minimization
The solution $(\overline{\boldsymbol{q}},S_{\Sigma},S_{\Omega})$ of the local
equilibrium relations (3.12b) may be re-interpreted as a minimizer of the
Helmholtz free energy $\mathcal{F}$ (note that
$(\overline{\boldsymbol{q}},S_{\Sigma},S_{\Omega})$ denotes the whole set of
parameters of all $N$ atoms constituting the system). The Helmholtz free
energy $\mathcal{F}$ is defined as
$\mathcal{F}(\overline{\boldsymbol{q}},S_{\Sigma},S_{\Omega})=\inf_{S}\left\\{E(\overline{\boldsymbol{q}},S_{\Sigma},S)-\sum_{i}\frac{\Omega_{i}S_{i}}{k_{B}m}\right\\},$
(3.19)
with the internal energy of the system being
$E(\overline{\boldsymbol{q}},S_{\Sigma},S)=\left\langle{\mathcal{H}}\right\rangle=\sum_{i}\left(\frac{3\Omega_{i}}{2m}+\left\langle{V_{i}(\boldsymbol{q})}\right\rangle\right).$
(3.20)
The definition (3.19) implies the local thermodynamic equilibrium definition
$\frac{\Omega_{i}}{k_{B}m}=\frac{\partial E}{\partial S_{i}},$ (3.21)
which can be verified using (3.15) and (3.20). In addition, minimization of
$\mathcal{F}$ with respect to the parameter sets $\overline{\boldsymbol{q}}$
and $S_{\Sigma}$, subject to any of the thermodynamic constraints in Table 1
for updating $S_{\Omega}$, yields equations (3.12b), i.e.,
$\displaystyle-\frac{\partial\mathcal{F}}{\partial\overline{\boldsymbol{q}}_{i}}=0\implies\left\langle{F_{i}(\boldsymbol{q})}\right\rangle=0,$
(3.22a)
and
$\displaystyle-\frac{\partial\mathcal{F}}{\partial
S_{\Sigma,i}}=0\implies\frac{3\Omega_{i}}{m}+\left\langle{F_{i}(\boldsymbol{q})\cdot(\boldsymbol{q}_{i}-\overline{\boldsymbol{q}}_{i})}\right\rangle=0.$
(3.22b)
A detailed derivation of (3.22b) is provided in Appendix B.
Figure 3: Infinite crystal and finite box simulation domains used for
calculation of thermal expansion. $(a)$ Surface plot of the Helmholtz free
energy $\mathcal{F}$ vs the lattice parameter $a$ and displacement entropy
$S_{\Sigma}$ for Johnson’s EAM potential for FCC copper at
$T=1000~{}\mathrm{K}$. At the bottom right corner, the setup for computing
$\mathcal{F}$ is shown. Free energy of the central atom (blue) is computed
using the nearest neighbours (red) for Johnson’s potential
($r^{\mathrm{J}}_{\mathrm{cut}}=3.5~{}\mathrm{\AA}$) and up to second nearest
neighbours for Dai et al.’s EAM potential
($r^{\mathrm{EFS}}_{\mathrm{cut}}=4.32~{}\mathrm{\AA}$). Energy of a single
atom under the influence of full centrosymmetric neighbourhood equals the
energy of an atom in an infinite crystal. $(b)$ Finite box of $12\times
12\times 12$ atomic unit cells of pure single-crystalline copper and spatial
variation of $S_{\Sigma}$ due to the varying number of neighbours in the finite box
at $T=1000~{}\mathrm{K}$ modeled using Johnson’s potential. Thermal expansion
is measured using the volume of the inner domain of the crystal (outlined in
red). Atoms inside are marked in white.
Figure 4: Thermal expansion of a finite box ($12\times 12\times 12$ atomic
unit cells) and an infinite crystal of pure single-crystalline copper modeled
using the Extended Finnis-Sinclair potential (Dai et al., 2006) and the
exponentially decaying potential of Johnson (1988). $(a)$ Comparison of
computed volumetric changes with the experimental data obtained from Nix and
MacNair (1941) and the molecular dynamics (MD) calculations
($V_{\mathrm{ref}}=V(T=273~{}\mathrm{K})$) where $V(T)$ is the volume of the
inner subdomain of the crystal (outlined in red in Figure 3$(b)$) for the
finite box calculation and $a^{3}(T)$ for the infinite crystal calculation, at
temperature $T$. $(b)$ Variation of the displacement-variance entropy
$S_{\Sigma}$ with temperature for the finite box calculation. As the
temperature increases, the vibrational kinetic energy increases, resulting in
an increase of $\Sigma$ at thermal equilibrium due to the equipartition of
energy. Due to different numbers of interacting atomic neighbours, atoms on
the corners, edges, faces, and in the bulk exhibit different values of
$S_{\Sigma}$ (see Figure 3). Note that the value of $S_{\Sigma}$ depends on
the interatomic potential (shown results are for the cube modeled using
Johnson’s potential).
Equations (3.12b) subject to the suitable thermodynamic constraints are
identical to the max-ent based formulations of Kulkarni et al. (2008) and
Venturini et al. (2014). Kulkarni et al. (2008) developed the max-ent
formulation by enforcing constraints on variances of momenta and displacements
of the atoms to obtain an ansatz for $f(\boldsymbol{z})$, which is a special
case of the GPP ansatz with no correlations (see Section 2.2). Venturini et
al. (2014) developed the max-ent formulation by generalizing the grand-
canonical ensemble to nonequilibrium situations and allowing non-uniform
thermodynamic properties among atoms. However, for computational
implementation purposes, Venturini et al. (2014) invoked a trial-Hamiltonian
procedure to justify the Gaussian form of the distribution function
(for single-species cases), which also renders their ansatz a special case
of the GPP ansatz with no correlations. In the above sections, we have shown that
_such special case arises only as a result of the quasistatic approximation,
which enforces vanishing mechanical and thermal momenta_. Consequently, within
the quasistatic assumption, the GPP based local-equilibrium equations (3.12b)
are identical to those of Kulkarni et al. (2008) and Venturini et al. (2014).
Moreover, subject to the isothermal constraints, GPP quasistatic equations
(3.12b) are identical to those used by Li et al. (2011) as well.
In Section 4, we discuss the application of local-equilibrium relations
(3.12b) combined with the phenomenological transport model (see Section 3.4)
in an updated-Lagrangian quasicontinuum framework. To this end, we have
developed an in-house updated-Lagrangian data structure in which computations
are performed on elements (tetrahedra) generated by a 3D triangulation of the
coarse-grained lattice. The generated mesh is
kinematically updated with the deformation of the crystal, the details of
which will be presented in future work. Within the scope of the present study, we
aim to validate the computational implementation of the local-equilibrium
equations (3.12b) in the updated-Lagrangian data-structure, which will be used
in Section 4 to perform coarse-grained nonequilibrium thermomechanical
simulations. For numerical validation, we compute the thermal expansion
coefficient, the uniaxial and shear components of the linear elastic stiffness
tensor ($C_{11}$ and $C_{44}$ respectively in Voigt-Kelvin notation, see Reddy
(2007)), and the bulk modulus $\kappa$ (collectively referred to as elasticity
coefficients from here on) of a single crystal of pure copper (Cu). For computing
phase-space averages we use the third-order multivariable Gaussian quadrature
(Stroud, 1971; Kulkarni, 2007).
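A third-order Gaussian quadrature for the phase averages can be sketched as a tensor product of two-point Gauss-Hermite rules, one per coordinate direction; the quadratic test potential below is only a sanity check (a third-order rule is exact for polynomials up to degree three), and the isotropic per-axis variance is an assumption of the sketch:

```python
import itertools
import math

def phase_average(V, qbar, Sigma):
    """Average V over independent Gaussian coordinates N(qbar_l, Sigma).

    Tensor product of two-point Gauss-Hermite rules: per axis the nodes are
    qbar_l +/- sqrt(Sigma) with equal weights 1/2, which integrates
    polynomials up to third degree exactly.
    """
    s = math.sqrt(Sigma)
    dim = len(qbar)
    avg = 0.0
    for signs in itertools.product((-1.0, 1.0), repeat=dim):
        q = [qb + s*sg for qb, sg in zip(qbar, signs)]
        avg += V(q)/2.0**dim
    return avg

# sanity check with a quadratic potential: <q_l**2> = qbar_l**2 + Sigma
V = lambda q: 0.5*sum(ql*ql for ql in q)
qbar, Sigma = [0.3, -0.1, 0.2], 0.04
exact = 0.5*sum(qb*qb + Sigma for qb in qbar)
approx = phase_average(V, qbar, Sigma)
```

The cost grows as $2^{\mathrm{dim}}$ potential evaluations per atom, which is what makes low-order rules attractive for phase averaging.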
Kulkarni (2007) evaluated the thermal expansion coefficient of a pure copper
crystal using the infinite crystal formulation which models a triply periodic
crystal, in which the mechanical forces are evaluated using the lattice-
spacing parameter as the independent variable (see appendix of Tembhekar
(2018) also). Unlike the atomistic simulation suits, our numerical solver
(briefly discussed in Section 4) is based on a finite-element updated-
Lagrangian data structure in which the periodic boundaries are fixed to the
domain. Consequently, in a triply periodic domain, the boundaries remain fixed
in our simulations, thus mandating the use of free boundary conditions in the
thermal expansion simulations. To this end, we relax a domain of $12\times
12\times 12$ FCC unit cells of Cu subject to the local-equilibrium equations
(3.12b), isothermal constraint, and free boundaries at various temperature
values in our solver (cf. Figure 3$(b)$). To model the infinite crystal
expansion, we determine the lattice parameter $a$ and displacement-variance
entropy $S_{\Sigma}$ which minimize the Helmholtz free energy $\mathcal{F}$ of
an atom under the influence of its full centrosymmetric neighbourhood (cf.
Figure 3$(a)$).
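A stripped-down, one-dimensional analogue of this free-energy minimization can be sketched as follows. The Morse parameters, the two-neighbour chain, and the coarse grid search are illustrative choices, not the EAM setup of the paper, but they reproduce the qualitative mechanism: anharmonicity shifts the free-energy minimum to larger lattice parameters as the temperature, and hence the displacement variance, grows:

```python
import math

def morse(r, D=1.0, alpha=2.0, r0=1.0):
    """Toy anharmonic pair potential (hypothetical parameters)."""
    return D*(1.0 - math.exp(-alpha*(r - r0)))**2

def avg_pair(a, Sigma):
    """Two-point Gaussian average of the pair energy over the relative
    displacement, whose variance is 2*Sigma for independent atoms."""
    s = math.sqrt(2.0*Sigma)
    return 0.5*(morse(a - s) + morse(a + s))

def free_energy(a, Sigma, kT):
    """1D analogue of (3.19) with k_B = 1 and the isothermal constraint
    Omega = m*kT: two neighbours at spacing a, plus kinetic and entropy terms."""
    E = 2.0*avg_pair(a, Sigma) + 0.5*kT
    return E - kT*0.5*math.log(Sigma)    # displacement entropy ~ ln(sqrt(Sigma))

def relax(kT):
    """Coarse grid-search minimizer of the free energy over (a, Sigma)."""
    a_grid = [0.90 + 0.001*i for i in range(300)]
    S_grid = [1e-4*1.15**i for i in range(50)]
    return min(((a, S) for a in a_grid for S in S_grid),
               key=lambda x: free_energy(x[0], x[1], kT))

a_cold, S_cold = relax(0.01)
a_hot, S_hot = relax(0.20)
# both the lattice parameter and the displacement variance grow with temperature
```

For a purely harmonic pair potential the same minimization would leave the lattice parameter temperature-independent; the expansion comes entirely from the cubic (and higher) terms of the Morse well.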
Figure 4 illustrates solutions of the isothermal local-equilibrium relations
for varying fixed uniform temperature $T_{i}=T$ for all atoms within a Cu
crystal, modeled using the EAM potentials of Dai et al. (2006) and Johnson
(1988). To validate the results, we report the thermal expansion values
obtained from the MD code, LAMMPS (Plimpton, 1995) with NPT ensemble fix and
periodic boundaries (cf. Figure 4$(a)$). As the temperature $T$ increases, the
local separation of atoms increases, thus increasing the volume of the
crystal. Furthermore, as the spacing between atoms increases, the
displacement-variance entropy $S_{\Sigma}$ also increases (cf. Figure 4$(b)$).
As is further evident from that graph, $S_{\Sigma}$ is not uniform within the
crystal, owing to the varying numbers $n_{i}$ of interacting neighboring atoms
on the corners, edges, and faces as well as within the bulk of the crystal.
Sufficiently far away from the free boundaries, the atoms are acted on by
their full neighbourhoods, approximating an infinite crystal, and are
characterized by uniform values of the displacement entropy. To avoid the
effects of free boundaries, we track the change in volume of a bounding box
enveloping a $2\times 2\times 2$ unit-cell region at the center of the crystal
(outlined in red in Figure 3$(b)$) with temperature and use this to compute
the thermal expansion coefficient.
Figure 5: Domains used for calculating the elastic constants $C_{11},C_{44}$
and the bulk modulus $\kappa$. $(a)$ Domain of $2\times 2\times 2$ FCC unit
cells of Cu initialized with equilibrium lattice spacing $a(T)$ and
displacement variance entropy $S_{\Sigma}$ at temperature $T$ undergoing the
deformation measured by the strain tensor $\gamma^{(n)}\Xi$. Central atom
(red) interacts with the full neighbourhood and is relaxed isentropically,
keeping the rest of the atoms mechanically fixed. Its phase averaged potential
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{inf}}$ models the
average phase-averaged potential of an infinite crystal. $(b)$ Domain of
$18\times 18\times 18$ FCC unit cells of Cu. Blue atoms denote the outer layer
atoms which are mechanically fixed when the domain is isentropically relaxed
under applied deformation with displacements of all atoms governed by (3.23).
Red atoms denote the atoms initially bounded within the red-outlined box of
size equivalent to $6\times 6\times 6$ FCC unit-cells.
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{sub}}$ is the phase
averaged potential of the red atoms and
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{box}}$ is the phase
averaged potential of all the atoms in the domain, both normalized by the
respective numbers of atoms. As $\gamma^{(n)}$ increases, both phase
averaged potentials vary. Curvatures of both
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{sub}}$ and
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{box}}$ w.r.t.
$\gamma^{(n)}$ at the respective energy minima are used for computing the
elasticity coefficients.
Calculation of the aforementioned elasticity coefficients ($C_{11}$, $C_{44},$
and $\kappa$) has been used as a benchmark by Venturini (2011), for a crystal
with impurities, and by Tembhekar (2018), both of whom used the max-ent formulation.
Since the GPP quasistatic equations (3.12b) are identical to those of the max-
ent formulation, we obtain identical results. To compute the elasticity
coefficients $C_{11}$, $C_{44},$ and $\kappa$, we follow a procedure similar
to Amelang et al. (2015). We consider a domain of $18\times 18\times 18$ FCC
unit-cells of Cu subject to the local-equilibrium equations (3.12b) and the
isentropic deformation (see Figure 5$(b)$). We use the isentropic constraint
for relaxing the deformed crystal because the linear elasticity coefficients
are measured in an adiabatic setting (Overton Jr and Gaffney, 1955).
Initially, the domain is relaxed isothermally with free boundaries. After the
initial relaxation, the atoms are displaced according to
$\boldsymbol{q}^{(n)}_{i}=\boldsymbol{q}^{(0)}_{i}+\gamma^{(n)}\boldsymbol{\Xi}\cdot\boldsymbol{q}^{(0)}_{i},$
(3.23)
where $\gamma^{(n)}=n\Delta\gamma$ is the engineering strain measure at the
$n^{\mathrm{th}}$ deformation step, $\Delta\gamma$ is the strain increment,
and $\boldsymbol{\Xi}$ is the base deformation matrix. For $C_{11}$, $C_{44}$,
and $\kappa$, it is defined as
$\boldsymbol{\Xi}_{11}=\left[\begin{matrix}1&0&0\\\ 0&0&0\\\
0&0&0\end{matrix}\right],~{}\boldsymbol{\Xi}_{44}=\left[\begin{matrix}0&1&0\\\
1&0&0\\\
0&0&0\end{matrix}\right],~{}\boldsymbol{\Xi}_{\kappa}=\left[\begin{matrix}1&0&0\\\
0&1&0\\\ 0&0&1\end{matrix}\right],$ (3.24)
respectively.
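The workflow of (3.23)-(3.25) can be sketched as follows: apply the base deformation to reference coordinates, then extract the curvature of an energy-versus-strain curve from a fourth-degree polynomial fit at its minimum. The energy samples below are synthetic (a cubic in $\gamma$ with known curvature), standing in for the measured phase-averaged potentials:

```python
def apply_strain(q0, Xi, gamma):
    """q = q0 + gamma * Xi q0, cf. (3.23)."""
    Xq = [sum(Xi[r][c]*q0[c] for c in range(3)) for r in range(3)]
    return [q0[r] + gamma*Xq[r] for r in range(3)]

def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via normal equations (pure Python);
    returns coefficients coef[i] of x**i."""
    n = deg + 1
    A = [[sum(x**(i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y*x**i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                         # Gaussian elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            fac = A[r][col]/A[col][col]
            for c in range(col, n):
                A[r][c] -= fac*A[col][c]
            b[r] -= fac*b[col]
    coef = [0.0]*n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c]*coef[c] for c in range(r + 1, n)))/A[r][r]
    return coef

def curvature_at_min(coef, lo, hi, samples=1000):
    """Second derivative of the fitted polynomial at its sampled minimum."""
    val = lambda g: sum(c*g**i for i, c in enumerate(coef))
    gmin = min((lo + k*(hi - lo)/samples for k in range(samples + 1)), key=val)
    return sum(i*(i - 1)*c*gmin**(i - 2) for i, c in enumerate(coef) if i >= 2)

# uniaxial base deformation Xi_11, cf. (3.24)
Xi11 = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
q = apply_strain([1.0, 2.0, 3.0], Xi11, 0.01)

# synthetic energy curve (known curvature 6 at its minimum gamma = 0) standing
# in for the measured phase-averaged potentials <V>(gamma)
gammas = [-0.5 + 0.05*k for k in range(21)]
energies = [2.0 + 3.0*g**2 - 0.5*g**3 for g in gammas]
coef = polyfit(gammas, energies, 4)
curv = curvature_at_min(coef, -0.5, 0.5)
```

Dividing such a curvature by the appropriate atomic volume and geometric factor then yields $C_{11}$, $C_{44}$, or $\kappa$ as in (3.25).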
Figure 6: Variation of
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{box}}$ (chained line),
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{sub}}$ (dashed line),
and $\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{inf}}$ (solid line)
with the engineering strain measure $\gamma$ for the base deformation matrix
$(a)~{}\boldsymbol{\Xi}_{11}$, $(b)~{}\boldsymbol{\Xi}_{\kappa}$,
$(c)~{}\boldsymbol{\Xi}_{44}$ for temperatures from $0$ K to $1000$ K. Solid
circles mark where
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{inf}}$ is minimum
($\gamma=\gamma_{c}$) and open circles mark the same for
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{box}}$ and
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{sub}}$. $(d)$
Elasticity coefficients of a Cu single-crystal (black: $C_{11}$, red:
$\kappa$, blue: $C_{44}$) for temperatures from $0$ K to $1000$ K evaluated
using the curvature of
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{box}}$ (chained line,
open circles), $\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{sub}}$
(dashed line, open circles), and
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{inf}}$ (solid line,
solid circles) at respective minima, using the EAM potential of Dai et al.
(2006), compared against the experimental data from Chang and Himmel (1966)
and Overton Jr and Gaffney (1955).
To model an infinite crystal and avoid size-effects posed by free boundaries,
we also consider a domain of $2\times 2\times 2$ FCC unit cells of Cu,
initialized using the lattice parameter $a$ and displacement-variance entropy
$S_{\Sigma}$ which minimize the Helmholtz free energy at the given temperature
(see Figure 3$(a)$). The domain is then deformed using (3.23), isentropically
relaxing the central atom while keeping its neighbourhood mechanically fixed
by the external deformation (Figure 5$(a)$). As the atoms are displaced, the
potential of each atom changes. For the infinite-crystal model, the atom at
the center interacts with its whole neighbourhood which deforms as per (3.23)
and exhibits a change in its phase averaged potential
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{inf}}$. The deformation
mimics the straining of an infinite perfect crystal exactly, since every atom
in an infinite perfect crystal interacts with an identically deformed full
neighbourhood. The phase
averaged potential of the central atom
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{inf}}$ is measured and
stored for all $n$ to compute the elasticity coefficients. For the domain
modeling a finite-sized crystal, relaxation is performed while holding the
atoms in a layer touching the free boundaries of the domain mechanically fixed
(blue atoms in Figure 5$(b)$). The phase-averaged potential of the whole domain
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{box}}$ and that of a
sub-domain within the bulk of the domain
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{sub}}$ (each averaged
over the number of atoms of the respective type) are measured and stored for all
$n$. We use a $4^{\mathrm{th}}$ degree polynomial fit through the phase
averaged potentials to obtain continuous functions
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{box}}(\gamma)$,
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{sub}}(\gamma)$, and
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{inf}}(\gamma)$ and
compute the respective curvatures at $\gamma^{(n)}=\gamma_{c}$ for which
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{sub}}$,
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{box}}$, and
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{inf}}$ are minimum. The
elasticity coefficients at temperature $T$ are computed using the curvatures
as (Venturini, 2011),
$C_{11}(T)=\frac{1}{\mathcal{V}(\gamma_{c},T)}\frac{\partial^{2}\left\langle{V(\boldsymbol{q})}\right\rangle}{\partial\gamma^{2}}\Big{|}_{\gamma=\gamma_{c}},~{}C_{44}(T)=\frac{1}{4\mathcal{V}(\gamma_{c},T)}\frac{\partial^{2}\left\langle{V(\boldsymbol{q})}\right\rangle}{\partial\gamma^{2}}\Big{|}_{\gamma=\gamma_{c}},~{}\kappa(T)=\frac{1}{9\mathcal{V}(\gamma_{c},T)}\frac{\partial^{2}\left\langle{V(\boldsymbol{q})}\right\rangle}{\partial\gamma^{2}}\Big{|}_{\gamma=\gamma_{c}},$
(3.25)
where $\mathcal{V}(\gamma_{c},T)$ is the atomic volume at $\gamma_{c}$ and
temperature $T$. Figure 6 shows the change of
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{box}}$,
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{sub}}$, and
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{inf}}$ with $\gamma$
for various temperatures and the elasticity coefficients $C_{11},C_{44},$ and
$\kappa$ obtained using the GPP based local-equilibrium equations (3.12b), and
experimental data (Overton Jr and Gaffney, 1955; Chang and Himmel, 1966). The
results were computed for decreasing values of $\Delta\gamma$ to ensure
convergence. The reported results correspond to $\Delta\gamma=0.0005$. The
computed values exhibit thermal softening similar to that observed in
experiments, but show an offset at all temperatures, since Dai et al.'s EAM
potential is calibrated against room-temperature elastic constants, which the
present formulation treats as 0 K values (see Table 4 in Dai et al. (2006) and
Table 3.1 in Kittel (1976)).
Furthermore, at 0 K, the values of $C_{11}$ and $C_{44}$ obtained from the
infinite crystal model ($C_{11}=168.41$ GPa, $C_{44}=75.41$ GPa) match
exactly the values reported by Dai et al. (2006). Those obtained from the
finite-crystal simulation setup ($C_{11}=164.34$ GPa, $C_{44}$ = 78.01 GPa
using $\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{sub}}$ and
$C_{11}=154.20$ GPa, $C_{44}$ = 73.12 GPa using
$\left\langle{V(\boldsymbol{q})}\right\rangle_{\mathrm{box}}$) deviate from
those reported by Dai et al. (2006) due to the size-effects posed by the free
boundaries. Comparison of elasticity coefficients for various crystals at
finite temperature using accurate MD simulations will be presented in future
studies.
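The curvature evaluation behind (3.25) can be sketched in a few lines of Python; the phase-averaged potential is replaced here by synthetic quadratic data with a known curvature, and the atomic volume is a placeholder rather than the value from the Cu simulations:

```python
import numpy as np

# Synthetic phase-averaged potential: quadratic well with known curvature k_true
# (in a real run these samples come from the relaxed GPP states at each gamma).
k_true = 170.0          # prescribed curvature of the synthetic well
gamma_c = 0.0           # strain at the energy minimum
gammas = np.arange(-0.01, 0.0105, 0.0005)   # strain grid, step 0.0005 as in the text
V_avg = 0.5 * k_true * (gammas - gamma_c) ** 2

# Fit a 4th-degree polynomial and evaluate its curvature at the minimum
p = np.polyfit(gammas, V_avg, 4)
d2p = np.polyder(np.poly1d(p), 2)

atomic_volume = 1.0     # placeholder for V(gamma_c, T)
C11 = d2p(gamma_c) / atomic_volume   # first relation in (3.25)
```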
We emphasize that equations (3.22) are identical to the max-ent framework, as
derived and utilized previously (Kulkarni et al., 2008; Ariza et al., 2012;
Venturini et al., 2014; Ponga et al., 2015; Tembhekar, 2018). However, the
max-ent framework stems from a different motivation and does not rely
on the physical insight gained in Sections 2.2, 3.1.1, and 3.1.2 about the
dynamics of atoms at long and short time intervals. Moreover, the GPP
framework clearly highlights the information loss as a result of the
quasistatic approximation (otherwise hidden in the max-ent framework) which
also enables the modeling of thermomechanical deformation of crystals under
various thermodynamical conditions (e.g., isentropic, isobaric, or isothermal
processes). Furthermore, recall that in Section 2.2 we showed that the
interatomic independence assumption precludes the framework from capturing any
changes in local temperature due to unequal temperatures of neighbouring
atoms. Such an assumption is also made in the max-ent framework. In reality, a
non-uniform temperature distribution may easily arise in a non-uniformly
deformed crystal lattice. For example, when a crystal is deformed by applied
mechanical stresses, the local temperature rises as atoms are compressed to
smaller interatomic distances. Therefore, both max-ent and the present GPP
formulation require explicit thermal transport models in order to capture the
latter. To model the transport of heat in a non-uniformly deformed lattice, we
here adopt the linear Onsager kinetics model introduced by Venturini et al.
(2014), based on the quadratic dissipation potential – as discussed in the
following.
### 3.4 Linear Onsager Kinetics
The total rate of change of entropy of an atom can be decomposed into a
reversible and an irreversible change, i.e.,
$\frac{\;\\!\mathrm{d}S_{i}}{\;\\!\mathrm{d}t}=\frac{q_{i,\mathrm{rev}}}{T_{i}}+\frac{q_{i,\mathrm{irrev}}}{T_{i}},$
(3.26)
where $q_{i,\mathrm{rev}}$ is the reversible heat addition and
$q_{i,\mathrm{irrev}}$ is the irreversible change signifying the net influx of
heat into the atom due to a non-uniform temperature distribution. In a dynamic
system (see Sections 3.1.1 and 3.1.2), the reversible change is composed of
fluctuations about the equilibrium state due to the local harmonic nature of
the interatomic potential, and it is proportional to the thermal momentum
$\beta$. Within the quasistatic approximation, the information about the
evolution of $\beta(t)$ as the system relaxes towards the equilibrium is lost
since $\beta\rightarrow 0$ is imposed, thus rendering the reversible changes
in entropy unknown. Therefore, such a reversible change is imposed implicitly
by the thermodynamic constraints (see Table 1).
For an isolated system of atoms, the reversible heat exchange vanishes
($q_{\mathrm{rev}}=0$), and the system relaxes to equilibrium adiabatically,
which we here term _free relaxation_. Note that this free relaxation is not
isentropic, since the temperature can be non-uniform as a result of an imposed
non-uniform deformation of the crystal, resulting in irreversible thermal
transport that increases the entropy. We model such irreversible change
$q_{i,\mathrm{irrev}}$ by adopting the kinetic formulation introduced by
Venturini et al. (2014):
$q_{i,\mathrm{irrev}}=\sum_{j\neq i}R_{ij},\qquad
R_{ij}=\frac{\partial\Psi}{\partial P_{ij}},$ (3.27)
where $R_{ij}$ is the local, pairwise heat flux, driven by a local pairwise
discrete temperature gradient $P_{ij}$ through the kinetic potential $\Psi$.
Using the dissipation inequality, Venturini et al. (2014) formulated the
discrete temperature gradient as
$P_{ij}=\frac{1}{T_{i}}-\frac{1}{T_{j}}.$ (3.28)
Within the linear assumption, the kinetic potential $\Psi$ is modeled as,
$\Psi=\frac{1}{2}\sum_{i=1}^{N}\sum_{j\neq
i}A_{ij}T^{2}_{ij}P^{2}_{ij}\quad\text{with}\quad
T_{ij}=\frac{1}{2}\left(T_{i}+T_{j}\right),$ (3.29)
where $A_{ij}$ denotes a pairwise heat transport coefficient (which is treated
as an empirical constant in this work), and $T_{ij}$ represents the pairwise
average temperature. Equations (3.29) and (3.27) yield the entropy rate
kinetic equation for a freely relaxing system of atoms as
$\frac{\;\\!\mathrm{d}S_{i}}{\;\\!\mathrm{d}t}=\frac{q_{i,\mathrm{irrev}}}{T_{i}}=\frac{1}{T_{i}}\sum_{j\neq i}A_{ij}T^{2}_{ij}P_{ij},$
(3.30)
which yields the following discrete-to-continuum relation between $A_{ij}$ and
the thermal conductivity tensor $\boldsymbol{\kappa}_{i}$ at the location of
atom $i$ (Venturini et al., 2014):
$\boldsymbol{\kappa}_{i}=\frac{1}{2V_{i}}\sum_{j=1}^{N}A_{ij}\left(\overline{\boldsymbol{q}}_{i}-\overline{\boldsymbol{q}}_{j}\right)\otimes\left(\overline{\boldsymbol{q}}_{i}-\overline{\boldsymbol{q}}_{j}\right),$
(3.31)
where $V_{i}$ is the volume of the atomic unit cell in the crystal. Venturini
et al. (2014) derived equation (3.31) by approximating the temperature
differences between interacting atoms to first order via Taylor expansions.
Depending on the thermal boundary conditions and the size of the domain,
temperature differences between interacting atoms may not be negligible. For
significant temperature differences, (3.31) is inaccurate; hence, we
approximate the $A_{ij}$ values using a simulation of a differentially heated
square-cross-section Cu nanowire run to steady state, as discussed below.
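For small temperature differences, the pairwise flux $R_{ij}=A_{ij}T_{ij}^{2}P_{ij}$ implied by (3.27)-(3.29) reduces to a Fourier-like form proportional to $T_{j}-T_{i}$, since $T_{ij}^{2}\approx T_{i}T_{j}$; a quick numerical check (with an arbitrary unit coefficient):

```python
A0 = 1.0                      # pairwise transport coefficient (arbitrary units)
Ti, Tj = 300.0, 301.0         # nearly equal temperatures in K

P_ij = 1.0 / Ti - 1.0 / Tj    # discrete temperature gradient, eq. (3.28)
T_ij = 0.5 * (Ti + Tj)        # pairwise average temperature, eq. (3.29)
R_ij = A0 * T_ij**2 * P_ij    # pairwise heat flux; positive: heat flows into atom i

# For |Tj - Ti| << T_ij the flux approaches the Fourier-like form A0 * (Tj - Ti)
linearized = A0 * (Tj - Ti)
```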
Venturini et al. (2014) validated the thermal transport model based on linear
Onsager kinetics by demonstrating that it captures the size effects of the
macroscopic thermal conductivity of silicon nanowires. Experimentally measured
values of $\boldsymbol{\kappa}_{i}$ for a given arrangement of atoms can be
used to obtain $A_{ij}$ from (3.31). The obtained values of $A_{ij}$ may be
interpreted as capturing the interatomic heat current and regarded as an
intrinsic property of the material. Ponga and Sun (2018) used a similar
temperature difference based diffusive transport model to analyse large
thermomechanical deformation of carbon nanotubes. They formulated an
Arrhenius-type master-equation model, identical to the one used for
mass-diffusion problems (Zhang and Curtin, 2008), and validated it against
Fourier-based heat conduction problems. In Ponga and Sun (2018)'s model, too,
an empirical parameter (the exchange rate) is fitted against theoretically or
experimentally characterized thermal conductivity values.
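As a minimal illustration of the discrete-to-continuum relation (3.31), a simple-cubic nearest-neighbour arrangement with uniform $A_{ij}=A_{0}$ yields an isotropic conductivity $\boldsymbol{\kappa}=(A_{0}/a)\,\boldsymbol{I}$; the lattice spacing below is illustrative and the arrangement is not one of the crystals studied here:

```python
import numpy as np

A0 = 2.55e-9          # W/K (the magnitude fitted later in the text, used for scale)
a = 3.6e-10           # lattice spacing in m (illustrative)
V = a**3              # atomic volume of the simple-cubic cell

# Nearest-neighbour difference vectors q_i - q_j
neighbours = a * np.array([[ 1, 0, 0], [-1, 0, 0],
                           [ 0, 1, 0], [ 0, -1, 0],
                           [ 0, 0, 1], [ 0, 0, -1]], dtype=float)

# Equation (3.31): kappa_i = 1/(2 V_i) * sum_j A_ij (dq) outer (dq)
kappa = sum(A0 * np.outer(d, d) for d in neighbours) / (2.0 * V)
```

For this arrangement the off-diagonal entries vanish exactly and each diagonal entry equals $A_{0}/a$, confirming isotropy.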
Figure 7: Schematic illustration of the local entropy dissipation via equation
(3.30): atom $i$ interacts with its interatomic neighbors, whereby differences
in local temperature are responsible for heat flux.
Equations (3.12b) combined with (3.30) complete the nonequilibrium
thermomechanical atomistic model. Every atom is assumed to be in thermal
equilibrium at some local temperature $T_{i}$, and mechanical and thermal
interactions of the atoms with different spacing and different temperatures
are modeled using the interatomic potential $V(\boldsymbol{q})$, the local
equation of state (e.g. (3.18)), and the entropy kinetic equation (3.30). For
a general system, the assumed thermodynamic process constraints yield the
reversible heat exchange (see Table 1). For instance, under an isothermal
constraint, $S_{\Omega,i}$ remains constant and $S_{\Sigma,i}$ changes with
mechanical deformation to satisfy the local equation of state (e.g. (3.18)) at
equilibrium, thus changing the net entropy (see equation (3.15)). While the
nature of the assumed thermodynamic process via which the system relaxes
depends on the macroscopic and microscopic boundary conditions, the process is
generally assumed quasistatic with respect to the fine-scale vibrations of
each atom. However, the thermal transport equation (3.30) introduces a time
scale to the problem governing the irreversible evolution of entropy. For a
two-atom toy model with temperatures
$T_{i}$ and $T_{j}$ (Figure 7), equations (3.30) reduce to
$\frac{\;\\!\mathrm{d}}{\;\\!\mathrm{d}t}\left(\begin{matrix}S_{i}\\\
S_{j}\end{matrix}\right)=\left(\begin{matrix}A_{0}T^{2}_{ij}P_{ij}/T_{i}\\\
A_{0}T^{2}_{ji}P_{ji}/T_{j}\end{matrix}\right),$ (3.32)
where we have assumed equal coefficients $A_{ij}=A_{0}$ for both atoms. Let us
further assume the thermomechanical relaxation of
$(\overline{\boldsymbol{q}},S_{\Sigma})$ takes place through an isentropic
process following the quasistatic equations (3.12b). When substituting
equation (3.15), equation (3.32) becomes
$\frac{\;\\!\mathrm{d}}{\;\\!\mathrm{d}t}\left(\begin{matrix}T_{i}\\\
T_{j}\end{matrix}\right)=\frac{2A_{0}}{3k_{B}}\left(\begin{matrix}T^{2}_{ij}P_{ij}\\\
T^{2}_{ji}P_{ji}\end{matrix}\right).$ (3.33)
The stationary state of equation (3.33) is a state with uniform temperature,
$T_{i}=T_{j}$. When assuming a leading-order perturbation about the stationary
state, equation (3.33) yields
$\frac{\;\\!\mathrm{d}}{\;\\!\mathrm{d}t}\left(\begin{matrix}T_{i}\\\
T_{j}\end{matrix}\right)\approx\frac{2A_{0}}{3k_{B}}\left(\begin{matrix}T_{j}-T_{i}\\\
T_{i}-T_{j}\end{matrix}\right),$ (3.34)
which shows that, when using a simple first-order explicit-Euler finite-
difference scheme for time integration, the time step $\Delta t$ is restricted
by
$\Delta t\leq\frac{3k_{B}}{2A_{0}}$ (3.35)
for numerical stability. Venturini et al. (2014) used $A_{0}=0.09~{}$nW/K for
the bulk region of a silicon nanowire, which yields a maximum allowed time
step of $0.23~{}$ps. With higher-order explicit schemes, a larger maximum time
step may be obtained, but the restriction remains at a few picoseconds. This
restriction arises because the bulk thermal conductivity $\boldsymbol{\kappa}$
is used to fit the atomistic parameter $A_{ij}$, which reduces the length
scale as well as the time scale, and it highlights the coupling of length and
time scales (Evans and Morriss, 2007). For larger
temperature differences and larger systems, the nonlinear equation (3.33)
yields the following stability limit on the time step $\Delta t$ (see Appendix C
for the derivation)
$\Delta
t\left(\frac{2A_{0}}{3k_{B}}\right)\sum_{j}\frac{T^{2}_{ij}}{T_{i}T_{j}}\leq
1,$ (3.36)
which is stricter than the linear stability limit in (3.35).
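The two-atom relaxation (3.34) and the stability limit (3.35) can be checked directly with forward-Euler integration; the initial temperatures are arbitrary and the $A_{0}$ value is the one fitted for Cu below:

```python
kB = 8.617333e-5              # Boltzmann constant in eV/K
A0 = 15.92                    # eV/(ns K); the value fitted for Cu later in the text
c = 2.0 * A0 / (3.0 * kB)     # rate constant of eq. (3.34), in 1/ns

dt_max = 3.0 * kB / (2.0 * A0)   # linear stability limit (3.35), in ns
dt = 0.4 * dt_max                # safely inside the stable range

Ti, Tj = 360.0, 240.0         # arbitrary initial temperatures in K
mean = 0.5 * (Ti + Tj)        # conserved by the pairwise exchange
for _ in range(200):          # forward-Euler updates of eq. (3.34)
    dT = c * (Tj - Ti) * dt   # temperature exchanged per step
    Ti, Tj = Ti + dT, Tj - dT
```

With $\Delta t$ inside the limit, both temperatures relax monotonically to the mean; choosing $\Delta t>\Delta t_{\max}$ instead makes the temperature difference oscillate and grow.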
Figure 8: Thermal conduction in a Cu nanowire with square cross-section and
edges aligned with the $\langle 100\rangle$-directions. $(a)$ Schematic
showing the cross-sectional planes used for computing the macroscopic thermal
flux. $(b)$ Spatial variation of temperature (black solid line) and
macroscopic heat flux (red dashed line) along the axis of the nanowire.
In the following sections, we will apply the linear kinetic formulation (3.30)
to the QC framework to understand the deformation of a coarse-grained Cu
crystal. As a first step, we fit the kinetic coefficient $A_{0}$ for Cu, using
the thermal conductivity measurements for Cu nanowires obtained by Mehta et
al. (2015). To this end, we consider a Cu nanowire with a square cross-section,
of size 145$\times$43$\times$43 Å${}^{3}$, with the central axis of the nanowire
oriented along the $x$-direction and all edges aligned with the $\langle
100\rangle$-directions (see Figure 8$a$). Atoms in the region $x<x_{l}$ are
thermostatted at a temperature of 360 K, while the atoms in the region
$x>x_{r}$ are thermostatted at a temperature of 240 K. All the boundaries are
considered as free boundaries and the system is relaxed isentropically,
followed by the diffusive step (see Algorithm 1 below) for thermal transport.
As the system evolves according to (3.30) (using full atomistic resolution
everywhere in the nanowire), a uniform macroscopic heat flux is established
between $x_{l}<x<x_{r}$ (see Figure 8$b$). Dividing the nanowire into atomic
planes marked by $x_{i}$, the macroscopic flux across the plane at $x=x_{i}$
is given by
$T_{i}\frac{\;\\!\mathrm{d}S_{i}}{\;\\!\mathrm{d}t}=J_{x,i}-J_{x,{i-1}},~{}J_{x,i}=\sum^{i}_{j=1}T_{j}\frac{\;\\!\mathrm{d}S_{j}}{\;\\!\mathrm{d}t},$
(3.37)
where $T_{j}$ and $\mathrm{d}S_{j}/\mathrm{d}t$ are the temperature and entropy
generation rate at plane $x_{j}$. From (3.37), the approximate thermal
conductivity $\kappa$ is obtained
as
$\kappa=\frac{J_{x,i}}{S_{A}}\frac{\Delta x}{\Delta T},$ (3.38)
where $S_{A}$ is the cross-sectional area and $\Delta x$ is the length across
which the temperature difference $\Delta T$ is maintained. To find $A_{0}$, we
do not use equation (3.31), since it is valid only for small temperature
differences. Instead, we start from a reference guess of $A_{0}=0.1~{}$nW/K,
compute the macroscopic flux $J_{x,i}$ and from it the thermal conductivity
$\kappa$ via (3.38), and compare it with the experimental value for Cu
nanowires. As an approximate target for our simulations, we take
$\kappa=100~{}\text{W}/(\text{m}\cdot\text{K})$, as determined
experimentally by Mehta et al. (2015) (cf. figure 3 therein).
The initial guess of $A_{0}$ is refined using the secant method until the
conductivity value of $\kappa=100~{}\text{W}/(\text{m}\cdot\text{K})$ is
achieved. The obtained value of $A_{0}\approx
15.92~{}\text{eV}/(\text{ns}\cdot\text{K})=2.55~{}$nW/K is used in further
simulations. We note that the numerical values, however, do not affect the
physical modeling framework described above and are only representative values
to be used in the simulations presented in the next section. In reality, large
deformations cause defects that modify the phonon and electron scattering
properties of the crystal, so that the $A_{0}$ values would need to vary with
the deformation; such non-uniform modeling, however, is beyond the scope of
the current work. As shown in Figure 8$b$, the temperature profile is linear
and the macroscopic thermal flux defined by (3.38) is constant at steady state
of the simulation, thus highlighting that the discrete model yields a behavior
similar to Fourier's heat conduction law, in which material between two
isothermal boundaries exhibits a linear temperature distribution and constant
heat flux, provided that the conductivity is uniform.
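The secant iteration for $A_{0}$ can be sketched as follows; the map from $A_{0}$ to $\kappa$, which in the actual procedure requires a full nanowire simulation via (3.38), is replaced here by a linear surrogate with an illustrative slope:

```python
def kappa_of_A0(A0):
    # Surrogate for the nanowire simulation: in the linear kinetic model the
    # macroscopic flux, and hence kappa, scales with A0 (the slope 39.2 is
    # illustrative, chosen so that A0 ~ 2.55 nW/K gives kappa ~ 100).
    return 39.2 * A0   # W/(m K) per nW/K

target = 100.0              # W/(m K), measured value from Mehta et al. (2015)
A_prev, A_curr = 0.1, 0.2   # nW/K: reference guess and a perturbation
k_prev, k_curr = kappa_of_A0(A_prev), kappa_of_A0(A_curr)

for _ in range(50):         # secant iterations on kappa(A0) - target = 0
    if abs(k_curr - target) < 1e-8:
        break
    A_next = A_curr - (k_curr - target) * (A_curr - A_prev) / (k_curr - k_prev)
    A_prev, k_prev = A_curr, k_curr
    A_curr, k_curr = A_next, kappa_of_A0(A_next)
```

For this linear surrogate the secant step lands on the root immediately; with the nonlinear simulation-based map, a few iterations are needed.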
Before applying the QC coarse-graining, let us summarize the proposed
thermomechanical transport model for simulating quasistatic deformation of
crystals composed of GPP atoms (see Algorithm 1 for details):
* 1.
Step 1: Given the equilibrium parameters
$\left(\overline{\bf{q}}^{(n)}_{i},S^{(n)}_{\Sigma,i},S^{(n)}_{\Omega,i},S^{(n)}_{i}\right)$
from the (previous) $n^{\mathrm{th}}$ load step, an external stress/strain is
applied to the system at load step $n+1$.
* 2.
Step 2: Quasistatic relaxation, solving (3.12b) subject to one of the
constraints in Table (1) to obtain the intermediate state
$\left(\overline{\bf{q}}^{(*)}_{i},S^{(*)}_{\Sigma,i},S^{(*)}_{\Omega,i},S^{(*)}_{i}\right)$.
* 3.
Step 3: Staggered time stepping that alternates between irreversible updates
of the total entropy and quasistatic reversible relaxation steps of all
variables, until convergence is achieved. Specifically, the total entropy is
updated irreversibly from $S^{(*)}_{i}$ over a time interval $\delta t$, using
equation (3.30) and explicit forward-Euler updates, and reversibly using the
assumed thermodynamic process constraints during the subsequent
thermomechanical relaxation. The interval $\delta t^{(n)}$ depends on the
external stress/strain rate applied. By definition, the thermomechanical
relaxation is assumed quasistatic, hence only slow rates with respect to
atomistic vibrations can be modeled. However, the irreversible transport
imposes a time-scale restriction via (3.36). Consequently, time integration
via suitable time steps $\Delta t_{k}$ must be continued for $K$ steps, such
that $\sum^{K}_{k=1}\Delta t_{k}=\delta t$. Using the irreversible update of
the entropy from $S^{(*),k}_{i}$ to $S^{(**),k+1}_{i}$, a new approximate
thermal distribution $S^{(**),k+1}_{\Omega,i}$ is obtained via equation (3.15)
as $S^{(**),k+1}_{\Omega,i}\leftarrow
S^{(**),k+1}_{i}/3k_{B}-\tilde{S}_{0}/3k_{B}-S^{(*),k}_{\Sigma,i}$, generating
thermal forces in atoms. Using the irreversibly updated entropy and the
approximate thermal distribution, quasistatic relaxation of state
$\left(\overline{\bf{q}}^{(*),k}_{i},S^{(*),k}_{\Sigma,i},S^{(**),k+1}_{\Omega,i},S^{(**),k+1}_{i}\right)\rightarrow\left(\overline{\bf{q}}^{(*),k+1}_{i},S^{(*),k+1}_{\Sigma,i},S^{(*),k+1}_{\Omega,i},S^{(*),k+1}_{i}\right)$
follows by solving (3.12b) with a constraint from Table 1. This completes a
single staggered time step of the thermomechanical model. Note that the update
$S^{(**),k+1}_{i}\rightarrow S^{(*),k+1}_{i}$ corresponds to the reversible
entropy update to satisfy (3.15) during the thermomechanical relaxation and
depends on the assumed thermodynamic process constraint. In a full quasistatic
setting, the transport equation is driven towards a steady state with
$\dot{S}^{(*),K}_{i}=0$, which defines the convergence criterion and hence
determines the total number of time steps.
* 4.
Step 4: Assignment of the final state as
$\left(\overline{\bf{q}}^{(n+1)}_{i},S^{(n+1)}_{\Sigma,i},S^{(n+1)}_{\Omega,i},S^{(n+1)}_{i}\right)=\left(\overline{\bf{q}}^{(*),K}_{i},S^{(*),K}_{\Sigma,i},S^{(*),K}_{\Omega,i},S^{(*),K}_{i}\right)$,
followed by Step 1 until the final load step.
In this work, we use a combination of the robust inertial relaxation method
FIRE (Bitzek et al., 2006) and the nonlinear generalized minimal residual
method (NGMRES) from PETSc (Brune et al., 2015) to complete Step 2 of the
model, and a forward-Euler finite-difference scheme to update the entropy due
to irreversible transport in Step 3. The steps are detailed as pseudocode in
Algorithm 1 below.
Result:
$\left(\overline{\bf{q}}^{(n+1)}_{i},S^{(n+1)}_{\Sigma,i},S^{(n+1)}_{\Omega,i},S^{(n+1)}_{i}\right)$
Input:
$\left(\overline{\bf{q}}^{(n)}_{i},S^{(n)}_{\Sigma,i},S^{(n)}_{\Omega,i},S^{(n)}_{i}\right)$,
$\delta t^{(n)}$, ${tol}$;
$k\leftarrow 0$;
quasistatic reversible relaxation of
$\left(\overline{\bf{q}}^{(n)}_{i},S^{(n)}_{\Sigma,i},S^{(n)}_{\Omega,i},S^{(n)}_{i}\right)$
to
$\left(\overline{\bf{q}}^{(*),k}_{i},S^{(*),k}_{\Sigma,i},S^{(*),k}_{\Omega,i},S^{(*),k}\right)$
by solving (3.12b) with a constraint from Table 1 using FIRE (Bitzek et al.,
2006) and/or NGMRES (Brune et al., 2015);
$t\leftarrow 0$;
compute $\dot{S}^{(*),k}_{i}$;
$\dot{S}_{i}\leftarrow\dot{S}^{(*),k}_{i}$;
while _$t <\delta t^{(n)}$ and
$\sqrt{\frac{1}{N}\sum_{i}\dot{S}^{2}_{i}}>\text{tol}$_ do
compute $\Delta t^{k}$ satisfying the constraint (3.36);
irreversible update of $S^{(*),k}_{i}$ to $S^{(**),{k+1}}_{i}$ using (3.30)
and a forward-Euler finite-difference scheme;
approximate thermal distribution update using (3.15) such that
$S^{(**),k+1}_{\Omega,i}\leftarrow
S^{(**),k+1}_{i}/3k_{B}-\tilde{S}_{0}/3k_{B}-S^{(*),k}_{\Sigma,i}$;
quasistatic reversible relaxation of
$\left(\overline{\bf{q}}^{(*),k}_{i},S^{(*),k}_{\Sigma,i},S^{(**),k+1}_{\Omega,i},S^{(**),k+1}\right)$
to
$\left(\overline{\bf{q}}^{(*),{k+1}}_{i},S^{(*),{k+1}}_{\Sigma,i},S^{(*),{k+1}}_{\Omega,i},S^{(*),{k+1}}\right)$
by solving (3.12b) with a constraint from Table 1;
$k\leftarrow k+1$;
$\dot{S}_{i}\leftarrow\dot{S}^{(*),{k}}_{i}$;
$t\leftarrow t+\Delta t^{k}$;
end while
$\left(\overline{\bf{q}}^{(n+1)}_{i},S^{(n+1)}_{\Sigma,i},S^{(n+1)}_{\Omega,i},S^{(n+1)}_{i}\right)\leftarrow\left(\overline{\bf{q}}^{(*),k}_{i},S^{(*),k}_{\Sigma,i},S^{(*),k}_{\Omega,i},S^{(*),k}\right)$
Algorithm 1 Single load step from $n$ to $n+1$ of the quasistatic
thermomechanical transport model for irreversible deformation of crystals
composed of GPP atoms. For a fully quasistatic transport simulation, $\delta
t^{(n)}\to\infty$ and the thermal gradients are diffused until the RMS entropy
generation rate drops below the tolerance $tol$.
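The diffusive part of Step 3 (irreversible updates with the adaptive time step from (3.36), iterated until the RMS rate criterion is met) can be sketched on a 1D chain of atoms; the quasistatic relaxation between diffusive sub-steps is omitted, temperatures are updated directly via the isentropic reduction (3.33), and the temperature rate is used as a proxy for the entropy rate:

```python
kB = 8.617333e-5            # Boltzmann constant in eV/K
A0 = 15.92                  # eV/(ns K); the value fitted for Cu in the text
c = 2.0 * A0 / (3.0 * kB)   # rate constant of eq. (3.33), in 1/ns

T = [360.0, 330.0, 300.0, 270.0, 240.0]   # initial chain temperatures in K
tol = 1e-8                  # convergence tolerance on the RMS rate

def neighbours(i):
    # Nearest neighbours on the chain
    return [j for j in (i - 1, i + 1) if 0 <= j < len(T)]

def rates(temps):
    # dT_i/dt from eq. (3.33), generalized to nearest-neighbour pairs
    return [sum(c * (0.5 * (temps[i] + temps[j]))**2
                * (1.0 / temps[i] - 1.0 / temps[j])
                for j in neighbours(i)) for i in range(len(temps))]

for _ in range(100000):     # diffusive sub-steps of Step 3
    r = rates(T)
    if (sum(x * x for x in r) / len(r))**0.5 < tol:
        break               # steady state: RMS rate criterion met
    # time step from the nonlinear stability limit (3.36), 0.5 safety factor
    dt = 0.5 * min(1.0 / (c * sum((0.5 * (T[i] + T[j]))**2 / (T[i] * T[j])
                                  for j in neighbours(i)))
                   for i in range(len(T)))
    T = [Ti + ri * dt for Ti, ri in zip(T, r)]   # forward-Euler update
```

The pairwise exchanges are exactly antisymmetric, so the mean temperature is conserved while the chain diffuses to a uniform state.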
## 4 Finite-temperature updated-Lagrangian quasicontinuum framework based on
GPP atoms
Having established the GPP framework, we proceed to discuss the application of
the thermomechanical and coupled thermal transport model (equations (3.12b)
and (3.30), respectively) to an updated-Lagrangian QC formulation for coarse-
grained simulations. Previous zero- and finite-temperature QC implementations
have usually been based on a total-Lagrangian setting (Ariza et al., 2012;
Tadmor et al., 2013; Knap and Ortiz, 2001; Amelang et al., 2015; Ponga et al.,
2015; Kulkarni et al., 2008), in which interpolations are defined and hence
atomic neighborhoods computed in an initial reference configuration.
Unfortunately, such total-Lagrangian calculations incur large computational
costs and render especially nonlocal QC formulations impractical (Tembhekar et
al., 2017), since atomistic neighborhoods change significantly during the
course of a simulation, so that the initial mesh used for all QC
interpolations increasingly loses its meaning and atoms that form local
neighborhoods in the current configuration may have been considerably farther
apart in the reference configuration. Therefore, we here introduce an updated-
Lagrangian QC framework to enable efficient simulations involving severe
deformation and atomic rearrangement. Moreover, we adopt the fully-nonlocal
energy-based QC formulation of Amelang et al. (2015), since an energy-based
summation rule allows for a consistent definition of the coarse-grained
internal energy and all thermodynamic potentials of the system.
Figure 9: Illustration of the quasicontinuum (QC) framework based on GPP
quasistatics (eqs. (3.12b)) combined with the linear Onsager kinetics for
thermal transport (eq. (3.30)). $(a)$ The thermomechanical transport
parameters
$(\overline{\boldsymbol{q}}_{k},S_{\Sigma,k},S_{\Omega,k},\dot{S}_{k})$ are
the degrees of freedom of repatom $k$. Repatoms are shown as red circles,
sampling atoms as small white circles (we use the first order sampling rule
$(0,1^{*})$ of Amelang et al. (2015)). All thermodynamic potentials and hence
the quasistatic forces and thermal fluxes are computed from a weighted average
over a set of sampling atoms (e.g., forces and fluxes for repatom $k$ are
governed, among others, by sampling atom $\alpha$ and its atomic neighbors
$j$). The fully-nonlocal formulation bridges seamlessly and adaptively from
full atomistics to coarse-grained regions. $(b)$ Computation of sampling atom
weights for the updated Lagrangian implementation using tetrahedral solid
angles.
Within the QC approximation, we replace the full atomic ensemble of $N$ GPP
atoms (as described in Section 2.2) by a total of $N_{h}\ll N$ GPP
representative atoms (_repatoms_ for short), each having the thermomechanical
transport parameters
$(\overline{\boldsymbol{q}}_{k},S_{\Sigma,k},S_{\Omega,k},\dot{S}_{k})$ as
their degrees of freedom (see Figure 9). The position of every atom
in the coarse-grained crystal lattice is obtained by interpolation. For an
atom at location $\overline{\boldsymbol{q}}^{h}_{i}$ in the reference
configuration, the thermomechanical transport parameters in the current
configuration are obtained by interpolation from the respective parameters of
the repatoms:
$\left(\begin{matrix}\overline{\boldsymbol{q}}^{h}_{i}\\\ S^{h}_{\Sigma,i}\\\
S^{h}_{\Omega,i}\\\ \dot{S}^{h}_{i}\\\
\end{matrix}\right)=\sum^{N_{h}}_{k=1}\left(\begin{matrix}\overline{\boldsymbol{q}}_{k}\\\
S_{\Sigma,k}\\\ S_{\Omega,k}\\\ \dot{S}_{k}\\\
\end{matrix}\right)N_{k}(\overline{\boldsymbol{q}}_{i}),$ (4.1)
where the subscript $h$ denotes that the parameters are interpolated from the
$N_{h}$ GPP repatoms, and $N_{k}(\overline{\boldsymbol{q}}_{i})$ is the
shape function/interpolant of repatom $k$ evaluated at
$\overline{\boldsymbol{q}}_{i}$. In the following we use linear
interpolation (i.e., constant-strain tetrahedra), while the method is
sufficiently general to extend to other types of interpolants. Based on the
interpolated parameters from (4.1), the free energy
$\mathcal{F}(\overline{q},S_{\Sigma},S_{\Omega})$ of the crystal is replaced
by the approximate free energy $\mathcal{F}^{h}$ of the QC crystal with
$\mathcal{F}^{h}(\overline{q}^{h},S^{h}_{\Sigma},S^{h}_{\Omega})=\sum^{N}_{i=1}\left(\frac{\Omega^{h}_{i}}{2m_{i}}-\frac{\Omega^{h}_{i}S^{h}_{i}}{k_{B}m_{i}}\right)+\left\langle{V(\boldsymbol{q})}\right\rangle=\sum^{N}_{i=1}\left(\frac{\Omega^{h}_{i}}{2m_{i}}-\frac{\Omega^{h}_{i}S^{h}_{i}}{k_{B}m_{i}}+\left\langle{V_{i}(\boldsymbol{q})}\right\rangle\right),$
(4.2)
where we assumed that the decomposition
$V(\boldsymbol{q})=\sum_{i}V_{i}({\boldsymbol{q}})$ of the interatomic
potential holds. Furthermore, we allow the masses of all atoms to be
different, denoting by $m_{i}$ the mass of atom $i$. Equation (4.2) defines
the free energy of the system accounting for all the atoms $N$ with their
thermomechanical parameters evaluated using the $N_{h}$ repatoms in a slave-
master fashion.
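With linear interpolants, evaluating (4.1) amounts to barycentric weighting of the repatom parameters within each tetrahedron; a minimal sketch with placeholder nodal values (the same weights apply to $\overline{\boldsymbol{q}}$, $S_{\Sigma}$, $S_{\Omega}$, and $\dot{S}$ alike):

```python
import numpy as np

# Repatom positions (vertices of one tetrahedron) and one scalar DOF each
# (placeholder values, standing in for any of the parameters in (4.1))
nodes = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
S_sigma = np.array([1.0, 2.0, 3.0, 4.0])

def shape_functions(x, nodes):
    # Linear (constant-strain) shape functions = barycentric coordinates:
    # solve sum_k N_k * node_k = x subject to sum_k N_k = 1.
    A = np.vstack([nodes.T, np.ones(4)])
    b = np.append(x, 1.0)
    return np.linalg.solve(A, b)

centroid = nodes.mean(axis=0)
N = shape_functions(centroid, nodes)
S_interp = N @ S_sigma        # eq. (4.1) for one scalar parameter
```

At a node the weights collapse to a Kronecker delta, so interpolation reproduces the repatom value exactly; at the centroid all four weights equal $1/4$.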
To reduce the computational cost stemming from the summation over all $N$
atoms in (4.2), sampling rules are introduced, which approximate the full sum
by a weighted sum over $N_{s}\ll N$ carefully selected sampling atoms (Eidel
and Stukowski, 2009; Iyer and Gavini, 2011; Amelang et al., 2015; Tembhekar et
al., 2017). Specifically, we adopt the so-called _optimal summation rule_ of
Amelang et al. (2015) to sample the free energy at $N_{s}$ sampling atoms.
Consequently, the approximate free energy $\mathcal{F}^{h}$ is further
approximated as
$\mathcal{F}^{h}(\overline{q}^{h},S^{h}_{\Sigma},S^{h}_{\Omega})\approx\sum^{N_{s}}_{\alpha=1}w_{\alpha}\left(\frac{\Omega^{h}_{\alpha}}{2m_{\alpha}}-\frac{\Omega_{\alpha}S_{\alpha}}{k_{B}m_{\alpha}}+\left\langle{V_{\alpha}(\boldsymbol{q})}\right\rangle\right),$
(4.3)
where $w_{\alpha}$ is the sampling weight of the $\alpha^{\mathrm{th}}$
sampling atom.
We use the first order summation rule of Amelang et al. (2015), in which all
nodes and the centroid of each simplex (tetrahedron in 3D) are assigned as
sampling atoms. Amelang et al. (2015) computed the sampling atom weights using
the geometrical division of the simplices by planes at a distance $r$ from the
nodes (Figure 9$(b)$) and adding the corresponding nodal volume to the
respective sampling atom weight, while the rest of the simplex volume was
assigned to the Cauchy-Born-type sampling atom at the centroid. Here, we point
out that a simpler weight calculation is possible by considering the spherical
triangle generated by the intersection of simplex $e$ with a ball of radius
$r$ centered at one of the nodes. Since the arcs of the spherical
triangle subtend angles $\alpha$, $\beta$, and $\gamma$ at the opposite
vertices, the area of the triangle is given by the spherical excess
$(\alpha+\beta+\gamma-\pi)r^{2}$.
Hence, the approximate volume of the enclosed region is
$v^{e}_{\alpha}\approx\frac{r^{3}}{3}\left(\alpha+\beta+\gamma-\pi\right),$
(4.4)
and $w_{\alpha}=\sum_{e}\rho_{e}v^{e}_{\alpha}$ are the sampling atom weights
at nodes, where $\rho_{e}$ is the density of simplex $e$ (expressed as the
number of atoms per unit volume). For the centroid sampling atoms, the
remaining volume times $\rho_{e}$ is assigned as its sampling weight
$w_{\alpha}$. Since the deformation is affine within each element $e$,
sampling atom weights in coarse-grained regions change negligibly in a typical
simulation and are therefore kept constant throughout our simulations. In the
following we will also need a separate set of repatom weights
$\widehat{w}_{k}$, which we calculate by lumping the sampling atom weights
$w_{k}$ to the repatoms: each repatom receives its own weight (each
repatom is itself a sampling atom) plus one quarter of the weight of the
Cauchy-Born-type centroidal sampling atom within each adjacent element $e$:
$\widehat{w}_{k}=w_{k}+\sum_{e}\frac{w_{e}}{4}.$ (4.5)
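The nodal volume (4.4) is $r^{3}/3$ times the solid angle subtended by the simplex at the node; assuming straight edge vectors at the node, the solid angle can be computed directly via the Van Oosterom-Strackee formula and checked against a cube corner (one octant, $\Omega=\pi/2$):

```python
import numpy as np

def nodal_volume(a, b, c, r):
    # Solid angle Omega at the node spanned by edge vectors a, b, c
    # (Van Oosterom-Strackee formula); the ball-sector volume r^3/3 * Omega
    # equals eq. (4.4), since Omega = alpha + beta + gamma - pi.
    la, lb, lc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
    num = np.dot(a, np.cross(b, c))
    den = la*lb*lc + np.dot(a, b)*lc + np.dot(a, c)*lb + np.dot(b, c)*la
    omega = 2.0 * np.arctan2(abs(num), den)
    return r**3 / 3.0 * omega

# Cube corner: three orthogonal edges subtend one octant of the ball
v = nodal_volume(np.array([1., 0., 0.]), np.array([0., 1., 0.]),
                 np.array([0., 0., 1.]), r=0.5)
```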
Given the sampling atom weights $w_{\alpha}$, minimization of the approximate
free energy $\mathcal{F}^{h}(\overline{q}^{h},S^{h}_{\Sigma},S^{h}_{\Omega})$
given in equation (4.3) with respect to degrees of freedom
$(\overline{\boldsymbol{q}}_{k},S_{\Sigma,k})$ of the $k^{\mathrm{th}}$
repatom yields the local mechanical equilibrium conditions
$-\frac{\partial\mathcal{F}^{h}}{\partial\overline{\boldsymbol{q}}_{k}}\approx-\sum^{N_{s}}_{\alpha=1}w_{\alpha}\frac{\partial\left\langle{V_{\alpha}(\boldsymbol{q})}\right\rangle}{\partial\overline{\boldsymbol{q}}_{k}}=-\sum^{N_{s}}_{\alpha=1}w_{\alpha}\left\langle{\frac{\partial
V_{\alpha}(\boldsymbol{q})}{\partial{\boldsymbol{q}}_{k}}}\right\rangle=0$
(4.6)
and the corresponding thermal equilibrium conditions
$-\frac{\partial\mathcal{F}^{h}}{\partial
S_{\Sigma,k}}\approx\sum^{N_{s}}_{\alpha=1}w_{\alpha}\left(\frac{3\Omega_{\alpha}}{m_{\alpha}}\frac{\partial
S_{\Sigma,\alpha}}{\partial
S_{\Sigma,k}}-\frac{\partial\left\langle{V_{\alpha}(\boldsymbol{q})}\right\rangle}{\partial
S_{\Sigma,k}}\right)=\sum^{N_{s}}_{\alpha=1}w_{\alpha}\left(\frac{3\Omega_{\alpha}}{m_{\alpha}}\frac{\partial
S_{\Sigma,\alpha}}{\partial S_{\Sigma,k}}-\left\langle{\frac{\partial
V_{\alpha}(\boldsymbol{q})}{\partial\boldsymbol{q}}\cdot\frac{\partial\boldsymbol{q}}{\partial
S_{\Sigma,k}}}\right\rangle\right)=0.$ (4.7)
Substituting the interpolation from (4.1) into (4.6) and (4.7) yields (for
repatoms $k=1,\ldots,N_{h}$)
$-\sum^{N_{s}}_{\alpha=1}w_{\alpha}\left(\sum_{j\in\mathcal{N}(\alpha)}\left\langle{\frac{\partial
V_{\alpha}(\boldsymbol{q})}{\partial\boldsymbol{q}_{j}}}\right\rangle
N_{k}(\overline{\boldsymbol{q}}_{j})+\left\langle{\frac{\partial
V_{\alpha}(\boldsymbol{q})}{\partial\boldsymbol{q}_{\alpha}}}\right\rangle
N_{k}(\overline{\boldsymbol{q}}_{\alpha})\right)=0$ (4.8a) and
$\displaystyle\sum^{N_{s}}_{\alpha=1}w_{\alpha}\left[\frac{3\Omega_{\alpha}}{m_{\alpha}}N_{k}(\overline{\boldsymbol{q}}_{\alpha})-\left(\sum_{j\in\mathcal{N}(\alpha)}\left\langle{\frac{\partial
V_{\alpha}(\boldsymbol{q})}{\partial\boldsymbol{q}_{j}}\cdot\left(\boldsymbol{q}_{j}-\overline{\boldsymbol{q}}_{j}\right)}\right\rangle
N_{k}(\overline{\boldsymbol{q}}_{j})+\left\langle{\frac{\partial
V_{\alpha}(\boldsymbol{q})}{\partial\boldsymbol{q}_{\alpha}}\cdot\left(\boldsymbol{q}_{\alpha}-\overline{\boldsymbol{q}}_{\alpha}\right)}\right\rangle
N_{k}(\overline{\boldsymbol{q}}_{\alpha})\right)\right]=0.$ (4.8b)
Note that these are similar to the equilibrium conditions derived by Tembhekar
(2018) following Kulkarni et al.’s max-ent formulation, although the max-ent
formulation bypasses the thermodynamic relevance of the parameters in the
dynamic setting. As noted previously in Section 3.2, the local thermal
equilibrium equation (4.8b) corresponds to the local equation of state of the
system, here providing the equation of state of the coarse-grained
quasicontinuum. Solving equations (4.6) and (4.7) subject to one of the
constraints from Table 1 depending upon the assumption of the thermodynamic
process yields the variables
$(\overline{\boldsymbol{q}}_{k},S_{\Sigma,k},S_{\Omega,k})$ for all repatoms,
thus yielding the thermodynamically reversible solution for the deformation of
the system.
To introduce seamless coarse-graining of the linear Onsager kinetic model for
irreversible thermal conduction governed by equation (3.30), we solve for the
entropy rates $\dot{S}_{k}$ of all repatoms and evolve the entropy in time for
each repatom. To this end, we notice that the first term in equation (4.8b)
represents a thermal force due to the thermal kinetic energy of the system.
Since that thermal force in our energy-based setting follows a force-based
summation rule of Knap and Ortiz (2001), the entropy rate calculation of the
repatom $k$ simplifies to
$\widehat{w}_{k}\frac{\;\\!\mathrm{d}S_{k}}{\;\\!\mathrm{d}t}=\sum^{N_{s}}_{\alpha=1}w_{\alpha}\frac{\;\\!\mathrm{d}S_{\alpha}}{\;\\!\mathrm{d}t}N_{k}(\overline{\boldsymbol{q}}_{\alpha})=\sum^{N_{s}}_{\alpha=1}\frac{w_{\alpha}}{T_{\alpha}}\sum_{j\in\mathcal{N}(\alpha)}A_{\alpha
j}T^{2}_{\alpha j}P_{\alpha j}N_{k}(\overline{\boldsymbol{q}}_{\alpha}),$
(4.9)
where $\widehat{w}_{k}$ is the repatom weight. Note that the kinetic potential
in equation (3.29) can, in principle, also be coarse-grained analogously to
the free energy in equation (4.3). However, the resulting calculation of the
entropy rates is computationally costly since it involves summation over all
repatoms for each sampling atom calculation, which is why this approach is not
pursued here. Equations (4.8a) and (4.8b) combined with the thermodynamic
constraints in Table 1 and the coarse-grained thermal transport model in
equation (4.9) yield the solution of a generic thermomechanical deformation
problem subject to a non-uniform temperature distribution and loading and
boundary conditions, as illustrated in Figure 1. Convergence of the force-
based summation rule was analysed by Knap and Ortiz (2001) for repatom forces.
Hence, equation (4.9) converges to (3.30) as the coarse-grained mesh is
refined down to atomistic resolution (weights $\widehat{w}_{k}$ and $w_{\alpha}$
approach unity and dependence on all sampling atoms excluding the repatom
vanishes). Consequently, even in the coarse-grained regions, equation (4.9)
approximates the atomistic thermal transport model governed by (3.30). As
shown in Section 3.4, the linear Onsager kinetic model approximates the
Fourier’s law type thermal transport for a relaxed crystal, since the
temperature reaches a linear distribution at steady state and the macroscopic
flux reaches a constant value for differentially heated boundaries. Since the
deformation in very large coarse-grained elements in a QC simulation is
expected to be small, the coarse-grained equation (4.9) approximates the
Fourier’s law type thermal transport.
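A minimal sketch of the lumped entropy-rate update (4.9) is given below. The Onsager pair term $A_{\alpha j}T^{2}_{\alpha j}P_{\alpha j}$ is abstracted into a user-supplied `pair_flux(a, j)` callback, since its precise form follows the kinetic model of Section 3.4; all names here are ours:

```python
import numpy as np

def repatom_entropy_rates(w_hat, w, T, neighbors, pair_flux, shape):
    """Entropy rates dS_k/dt of all repatoms per equation (4.9).
    w_hat     : (Nh,) lumped repatom weights
    w, T      : (Ns,) sampling-atom weights and temperatures
    neighbors : list of neighbor index lists, one per sampling atom
    pair_flux : callable (a, j) -> Onsager pair term A_aj * T_aj^2 * P_aj
    shape     : (Ns, Nh) shape-function values N_k(qbar_a)"""
    Ns, Nh = np.shape(shape)
    dS = np.zeros(Nh)
    for a in range(Ns):
        flux = sum(pair_flux(a, j) for j in neighbors[a])
        dS += (w[a] / T[a]) * flux * shape[a]   # lump onto adjacent repatoms
    return dS / np.asarray(w_hat, float)
```

With atomistic resolution (weights of unity, shape functions reducing to the identity), the update collapses to a per-atom entropy rate, consistent with the convergence statement above.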
### 4.1 Updated-Lagrangian QC implementation
Figure 10: Illustration of the updated-Lagrangian QC implementation at
different external load/strain steps denoted by $n$. The local Bravais basis
$\boldsymbol{A}_{n}$ of the highlighted element is shown in blue, with the
edge vectors $\boldsymbol{S}_{n}$. Repatoms are shown as red circles, sampling
atoms as small white circles. Deformation gradient $\boldsymbol{F}_{i\to i+1}$
deforms the edge vectors $\boldsymbol{S}_{i}$ to $\boldsymbol{S}_{i+1}$ and
the local Bravais basis $\boldsymbol{A}_{i}$ to $\boldsymbol{A}_{i+1}$.
We implement the thermomechanical local equilibrium relations (4.8a) and
(4.8b) combined with a thermodynamic constraint from Table 1 and the coarse-
grained thermal transport equation (4.9) in an updated-Lagrangian QC setting.
The latter is chosen since atoms in regions undergoing large deformation tend
to have significant neighborhood changes, for which the initial reference
configuration loses its meaning in the fully-nonlocal QC formulation, as
illustrated by Amelang (2016) and Tembhekar et al. (2017). Consequently,
tracking the interatomic potential neighborhoods in the undeformed
configuration incurs high computational costs. Alternatively, one could
strictly separate between atomistic and coarse-grained regions (as in the
local-nonlocal QC method of Tadmor et al. (1996a)), yet even this approach
suffers from severe mesh distortion in the coarse-grained regions in case of
large deformation, and it furthermore does not easily allow for the automatic
tracking of, e.g., lattice defects with full resolution (Tembhekar et al.,
2017). It also requires a priori knowledge of where full resolution will be
required during a simulation. As a remedy, we here deform the mesh with the
moving repatoms and we take the deformed configuration from the previous load
step as the reference configuration for each new load step, thus discarding
the initial configuration and continuously updating the reference
configuration.
For every element $e$, we store the three initial edge vectors (i.e., three
node-to-node vectors forming a right-handed system) in a matrix
$\boldsymbol{S}^{e}_{0}$, and the three Bravais lattice vectors indicating the
initial atomic arrangement within the element in a matrix
$\boldsymbol{A}^{e}_{0}$. As the system is relaxed quasistatically under
applied loads, all repatoms move to the deformed configuration (e.g., from
load step $n=i$ to $n=i+1$), thus deforming the edge vectors of element $e$
from $\boldsymbol{S}^{e}_{i}$ to $\boldsymbol{S}^{e}_{i+1}$ (and likewise the
Bravais basis from $\boldsymbol{A}^{e}_{i}$ to $\boldsymbol{A}^{e}_{i+1}$).
The incremental deformation gradient of element $e$, from step $i$ to $i+1$,
can hence be computed from the kinematic relation
$\boldsymbol{F}^{e}_{i\rightarrow{i+1}}=\boldsymbol{S}^{e}_{i+1}\left(\boldsymbol{S}^{e}_{i}\right)^{-1},$
(4.10)
which assumes an affine deformation within the element due to the chosen
linear interpolation (see Figure 10). As the element deforms, the lattice
vectors also deform in an affine manner:
$\boldsymbol{A}^{e}_{i+1}=\boldsymbol{F}^{e}_{i\rightarrow{i+1}}\boldsymbol{A}^{e}_{i}.$
(4.11)
Consequently, the integer matrix $\boldsymbol{N}$, which contains the numbers
of lattice vector hops along the element edges, evaluated as
$\boldsymbol{N}^{e}_{i}=\boldsymbol{S}^{e}_{i}\left(\boldsymbol{A}^{e}_{i}\right)^{-1}=\mathrm{const.},$
(4.12)
remains constant throughout deformation of a given element $e$. Moreover, each
element edge has a constant number vector, denoted by the rows of
$\boldsymbol{N}^{e}_{i}$ (see Figure 10). That is, in the updated-Lagrangian
setting, the number matrix $\boldsymbol{N}^{e}_{i}$ remains constant during
deformation. Such conservation of lattice vector hops along the element
edges/faces is particularly useful for adaptive remeshing scenarios, where
existing elements may need to be removed and new elements need to be
reconnected, with or without changes to the number of lattice sites used for
re-connections. The conservation of lattice vector hops can then be used for
computing the Bravais lattice vectors local to new elements. The Bravais
lattice vectors are used for calculating the neighborhoods of the nodal and
centroid sampling atoms belonging to the large elements in the fully nonlocal
QC formulation. The local lattice is generated within a threshold radius
distance from the sampling atom using those lattice vectors. We use the
adaptive neighborhoods calculation strategy of Amelang (2016), which requires
larger threshold radii compared to the interatomic potential cut-off chosen as
$r_{th}=r_{\text{cut}}+r_{\text{buff}},$ (4.13)
where $r_{\text{buff}}$ is a buffer distance used for triggering re-
computations of neighborhoods, and $r_{\text{cut}}$ is the interatomic
potential cut-off. If the maximum relative displacement among the neighbors
with respect to a sampling atom exceeds $r_{\text{buff}}$, then neighborhoods
of the sampling atom are re-computed (see Amelang (2016) for details).
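The updated-Lagrangian bookkeeping of equations (4.10)-(4.12) can be sketched as below. We store edge and Bravais vectors as matrix columns; in that convention the hop matrix of (4.12) reads $\boldsymbol{N}=\boldsymbol{A}^{-1}\boldsymbol{S}$, which is equivalent to the row-vector form written above. The function names are ours:

```python
import numpy as np

def incremental_update(S_i, S_ip1, A_i):
    """Incremental deformation gradient (4.10) and affinely deformed Bravais
    basis (4.11). Columns of S hold the three element edge vectors, columns
    of A the three local Bravais lattice vectors."""
    F = S_ip1 @ np.linalg.inv(S_i)   # F maps step-i edges onto step-(i+1) edges
    return F, F @ A_i

def hop_matrix(S, A):
    """Lattice-vector hops along the element edges, invariant under any
    affine deformation of the element (cf. eq. 4.12)."""
    return np.linalg.inv(A) @ S
```

A quick invariance check: shear an element whose unit edges each span two lattice hops, and the hop matrix stays $2\boldsymbol{I}$ before and after the deformation.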
Within the region with atomistic resolution, only nodal sampling atoms have
finite weights (close to unity) and hence only their neighborhoods are
computed. For such neighborhood calculations Bravais lattice vectors are not
required. Instead, the unique nodes of all elements within the threshold
radius of the sampling atom are added as the neighbors. Consequently, even
severely deforming meshes do not require element reconnection/remeshing as
long as the deformation stays restricted within the atomistic region, since
only nodes of the elements are required. Hence, we use meshes with large
atomistic regions in the benchmark cases presented below, to restrict the
analysis towards thermodynamics of the deformations. Such simulations do not
require adaptive remeshing, the analysis of which is left for future studies.
Figure 11: Initial conditions of the dislocation dipole setup. $(a)$ Initial
condition after displacing the atoms according to the isotropic linear elastic
displacement field solution. Due to the separation $\epsilon$ of the slip
planes of two dislocation, a line of atoms at the left most end remains
unaffected in the initial condition and is removed from the domain, thus
initiating a void.$(b)$ Isothermally relaxed state consisting of a single atom
void created by annihilation of the dislocations.Shown are 3D views of the
full simulation domain with a magnified view of the fully-resolved central
region and a top view. Atoms are colored by the centrosymmetry parameter in
arbitrary units and shown between threshold values of $2$ to $10$.
### 4.2 Thermal effects on shear activation of dislocations
Figure 12: Comparison of the isothermal nucleation of dislocation dipoles from
a single-atom void as obtained from fully atomistic (left) and QC (right)
simulations at varying temperatures in a Cu single-crystal modeled using the
EFS potential (Dai et al., 2006). Shown is the final sheared state of $(a)$
the atomistic simulation and $(b)$ the QC simulation. Atoms
are colored by the centrosymmetry parameter in arbitrary units. While the
atoms in region A are kept fixed, atoms in region B are allowed to relax.
(c,d) The shear stress $\tau$ vs. the engineering strain $\gamma$ is plotted
for $(c)$ the atomistic simulation and $(d)$ the QC simulation. The shear
stress is evaluated as the net force in the $[110]$-direction on the atoms in
region A per cross-sectional area in the $(\overline{1}11)$ plane. Faces
$(1\overline{1}2)$ are periodic. Figure 13: Comparison of the critical shear
stress $\tau_{\text{cr}}$ and strain $\gamma_{\mathrm{cr}}$ required to
nucleate a dislocation dipole from the void as obtained from QC and from
atomistic simulations at various temperatures. The critical strain
$\gamma_{\mathrm{cr}}$ is the external shear strain $\gamma$ at which
$\tau_{\mathrm{cr}}$ is achieved. Figure 14: Variation of the position entropy
$S_{\Sigma}$ with temperature inside one of the dislocations nucleated from
the void. The highlighted atoms are identified using the centrosymmetry
parameter (values $>2$ are shown). As discussed in Section 3.3, the number of
neighbors (and their positions) affects the local interatomic potential of an
atom, thus modifying the local variation of positions. Atoms within the
dislocations that are closer than the equilibrium interatomic spacing have
smaller position variance than those that are further apart. Atoms at the
boundaries have higher $\Sigma$ values due to the smaller number of interacting
neighbours at the open surface of the domain. Figure 15: Shear stress $\tau$
on the $(\overline{1}11)$-plane and dislocation separation distance $d$ vs.
applied shear strain $\gamma$ for isothermal and adiabatic deformations
obtained from both atomistic and QC simulations. Note that the differences
between isothermal vs. adiabatic data are small, because the temperature
increase is not significant. Critical shear stress value deviations are within
6% (e.g., isothermal QC: 3.7914 GPa, isothermal atomistics: 3.6389 GPa), and
critical strain values are within 12% (isothermal QC: 0.112, isothermal
atomistics: 0.100).
As a benchmark example, we use the updated-Lagrangian QC method discussed
above to analyze the effects of temperature on dislocations and specifically
on edge dislocation nucleation under an applied shear stress. We present the
analysis for both cases of isothermal and adiabatic constraints (the latter
combined with the irreversible entropy transport based on linear Onsager
kinetics). The adiabatic constraint here signifies that the simulation domain
is thermally isolated from the surroundings and there is no heat exchange
between the domain boundaries and the surroundings. For both cases, we
generate a pair of dislocations (i.e., a dislocation dipole) using the
isotropic linear elastic displacement field solutions of edge dislocations
with opposite Burgers’ vectors (Nabarro, 1967), given by,
$\displaystyle
u_{1}=\frac{b}{4\pi\left(1-\nu\right)}\frac{x^{\prime}_{1}x^{\prime}_{2}}{x^{\prime
2}_{1}+x^{\prime
2}_{2}}-\frac{b}{2\pi}\tan^{-1}\left(\frac{x^{\prime}_{1}}{x^{\prime}_{2}}\right),$
(4.14a) $\displaystyle
u_{2}=-\frac{(1-2\nu)b}{8\pi\left(1-\nu\right)}\ln\left(\frac{x^{\prime
2}_{1}+x^{\prime
2}_{2}}{b}\right)+\frac{b}{4\pi\left(1-\nu\right)}\frac{x^{\prime
2}_{2}}{x^{\prime 2}_{1}+x^{\prime 2}_{2}},$ (4.14b)
superposed linearly in a $32\times 25\times 1.8$ nm$^{3}$ slab of pure single-
crystalline Cu, consisting of 125,632 lattice sites and edges oriented along
the slip crystallographic directions. In (4.14b), $x^{\prime}_{1}$ and
$x^{\prime}_{2}$ denote the coordinates along the $[110]$ and
$[\overline{1}11]$ axes, respectively, shifted to the dislocation centers, $u_{1}$ and
$u_{2}$ are the displacements along these axes, and $b=\pm\frac{a}{\sqrt{2}}$
is the magnitude of the Burgers’ vector along with the dislocation
orientation. Figure 11$(a)$ shows the initial condition generated for the edge
dislocation dipole with Burgers’ vectors $\boldsymbol{b}=\pm\frac{a}{2}[110]$,
separated by a distance of $80$ Å. Specifically, the centers of the
dislocations are separated by $80$ Å in the $[110]$ direction and by a very
small offset ($1\times 10^{-9}$ Å) in the $[\overline{1}11]$ direction. Imposing such displacement
fields causes the slip planes of the two dislocations to separate in $x_{2}$
and leaves a line of atoms on $x_{2}=0$ plane at the left end of the domain
with zero displacements. These atoms are removed from the simulation domain,
thus creating a void at the edge of the domain. Displacements are restricted to
the $(1\overline{1}2)$ plane, while the simulation domain is set up in 3D with
periodic boundary conditions on opposite out-of-plane faces. After initial
relaxation, the dislocations annihilate each other due to their interacting
long-range elastic field. The result is a line defect in the form of a
through-thickness (non-straight) vacancy column (in the following for
simplicity referred to as the void), as shown in Figure 11($b$). This void is
created in the initial condition when the non-displaced atoms are removed from
the simulation domain, and simply migrates to the center of the domain during
the initial relaxation. We note that this is a direct consequence of
separating the slip planes of the two dislocations in our simulations. If the
slip planes are identical, then the dislocations annihilate and form a perfect
crystal. Creation of a single line defect is important since it ensures that
only two dislocations of opposite orientation are activated in the domain. We
continue loading the simulation domain in simple shear (moving the top and
bottom faces relative to each other), while computing the effective applied
shear stress from the atomic forces. Periodic boundary conditions are imposed
on $\left(1\overline{1}2\right)$ surfaces, while the rest of the boundaries
are included within the region A, which is mechanically fixed during
relaxation. At sufficient applied shear, the pre-existing defect will nucleate
and emit a dislocation dipole, whose activation energy and behavior depends on
temperature. For an assessment of the accuracy of the QC framework, we carry
out both fully atomistic (125,632 atoms) and QC simulations (52,246 repatoms)
in isothermal and adiabatic settings. QC simulations are performed on a mesh
generated by coarse-graining in $x_{2}$ direction. All three lattice vectors
are expanded by a factor of $4$ in the coarse-grained region. The atomistic
region extends fully in the $x_{1}$ and $x_{3}$ directions and up to $\pm 51$ Å
in the $x_{2}$ direction. Coarse-graining is done only in $x_{2}$ direction to
prevent the dislocations from colliding with the interface between the
atomistic and coarse-grained subdomains. We acknowledge that the QC setup is relatively simple and
there is not yet a significant reduction in the total number of degrees of
freedom nor does it involve automatic mesh refinement. Yet, this study
presents a simple and instructive example highlighting the efficacy and
accuracy of the GPP-based QC formulation introduced in previous sections.
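The dipole initial condition can be generated directly from equations (4.14). The sketch below reproduces the formulas verbatim (including the $\tan^{-1}$ branch, via `np.arctan`), with coordinates already shifted to a dislocation center; the function name is ours:

```python
import numpy as np

def edge_dislocation_displacement(x1, x2, b, nu):
    """Isotropic linear-elastic displacement field (4.14) of an edge
    dislocation with Burgers vector magnitude b and Poisson ratio nu;
    x1, x2 are coordinates relative to the dislocation center."""
    r2 = x1**2 + x2**2
    u1 = (b / (4*np.pi*(1 - nu))) * x1*x2 / r2 \
         - (b / (2*np.pi)) * np.arctan(x1 / x2)
    u2 = (-(1 - 2*nu)*b / (8*np.pi*(1 - nu))) * np.log(r2 / b) \
         + (b / (4*np.pi*(1 - nu))) * x2**2 / r2
    return u1, u2
```

A dipole is then obtained by linearly superposing two such fields with $b$ of opposite sign and centers offset by the chosen separation.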
#### 4.2.1 Isothermal
In face-centered cubic (FCC) crystals, edge dislocations preferably glide on
the close-packed crystallographic $\\{\overline{1}11\\}$-planes (Hull and
Bacon, 2001). As the void generated due to the initial annihilation of the
dislocation dipole is strained under shear deformation, dislocations nucleate
from the void at a sufficient level of applied shear, propagating in opposite
directions, as shown in Figure 12. We apply a shear deformation to all
repatoms in the slab such that, at the $n^{\mathrm{{th}}}$ load step,
$\overline{q}^{(n)}_{k,1}=\overline{q}^{(n-1)}_{k,1}+\Delta\gamma\,\overline{q}^{(n-1)}_{k,2},$
(4.15)
where indices $1$ and $2$ refer to the respective components of the mean
position vector in the chosen coordinate system, and $\Delta\gamma$ is the
applied shear strain increment. As the strain is applied, repatoms in the
inner region B (Figure 12) are relaxed while keeping those in the outer region
A mechanically fixed to impose the average shear strain. Note that, due to
small deformation in the atomic neighborhoods, the displacement-variance
entropy $S_{\Sigma}$ of repatoms close to the interface between regions A and
B changes and, hence, all repatoms in the domain are thermally relaxed
assuming an isothermal relaxation (cf. Table 1). While the shear strain is
increased, the horizontal component of the force on all repatoms in region A
is computed. The effective shear stress $\tau$ on the $(\overline{1}11)$-plane
is computed by normalizing the net horizontal force by the cross-sectional
area of the slab. Results are shown in Figure 12$(c)$ and $(d)$. Once the
stress reaches a critical value, the stress drops as dislocations nucleate
from the void and move to the ends of region B. We observe that the critical
stress value decreases slowly with temperature (see Figure 13). Moreover, the
value of the critical stress obtained from a fully atomistic simulation and
the quasicontinuum simulation are within about 6% of each other (see Figure
13), both capturing the apparent thermal plastic softening of the crystal.
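The loading and stress evaluation described above can be sketched as follows (names ours): the shear increment (4.15) shifts the horizontal component of every mean position by $\Delta\gamma$ times its vertical component, and the effective shear stress is the net horizontal force on the fixed region A divided by the cross-sectional area:

```python
import numpy as np

def apply_shear_step(qbar, dgamma):
    """One load step of eq. (4.15): q1 <- q1 + dgamma * q2 for each repatom.
    qbar is an (N, 3) array of mean repatom positions."""
    q = np.array(qbar, float)
    q[:, 0] += dgamma * q[:, 1]
    return q

def effective_shear_stress(forces_A, area):
    """Effective shear stress tau: net horizontal force on the mechanically
    fixed region A per cross-sectional area of the slab."""
    return np.asarray(forces_A, float)[:, 0].sum() / area
```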
Figure 16: Local variation of temperature as the slab with void is deformed
adiabatically. In the initial stages, the temperature rises slowly due to the
external deformation. At the critical shear strain
$\gamma_{\mathrm{cr}}=n_{\mathrm{cr}}\Delta\gamma$, dislocations nucleate from
the void and move along the $[110]$-direction. The temperature rise of those
atoms within the dislocations causes a rapid increase in the temperature of
the slab, as may be expected from the heat generated by work hardening. The
temperature of a few atoms within the dislocations at the intermediate stage
$n=n^{*}_{\text{cr}}$ before the irreversible transport exceeds the colorbar
range.
Miller and Tadmor (2009) studied a similar 2D scenario with a different
crystal orientation, in which the dislocation dipole is stable and the
dislocations do not annihilate to form the void. In such a case, the
(theoretical) critical shear stress corresponds to the shear stress required
to cause dislocation movement in the crystallographic plane. However, in our
analysis (which is based on a more realistic crystallographic setup since the
slip planes of the dislocations are close-packed planes of $\\{111\\}$ family)
a critical shear stress may be defined as the shear stress required to
nucleate dislocations from the void-like defect.
#### 4.2.2 Adiabatic
To simulate the quasistatic adiabatic activation of dislocations under shear,
we repeat the above simulations, now with all repatoms in the domain being
relaxed isentropically (cf. Table 1), followed by the thermal transport model
according to the steps discussed in Section 3.4 and Algorithm 1. As noted
above, since the boundaries do not allow any heat exchange out of the domain
to the surrounding (thermally insulated), the term _adiabatic_ is used to
describe the setup. We further assume that the applied strain rate is
significantly slower than the rate of molecular thermal transport, thus
imposing quasistatic conditions for the transport ($\delta t^{(n)}\to\infty$
in Algorithm 1, see Step 3 at the end of Section 3.4). As we describe below,
the thermomechanical deformation approaches isothermal conditions since the
quasistatic assumption results in homogenization of the temperature field to a
large extent. The initial condition (again, prepared using the isotropic
elastic displacement field solutions) is relaxed isothermally at $300$ K,
before the adiabatic deformation begins. We compare the adiabatic deformation
with the isothermal deformation of the slab at $300$ K. Figure 15 shows the
variation of the shear stress on the $(\overline{1}11)$-plane with external
shear strain $\gamma$. Due to the mechanical work done by the external shear
deformation, the temperature of the slab increases slightly, causing apparent
softening compared to the isothermal deformation. Figure 16 shows the spatial
variation of temperature as the slab is deformed adiabatically. Before the
critical strain, heating caused by local deformation is negligible. As the
dislocations are nucleated at the critical strain $\gamma_{\mathrm{cr}}$, the
temperature of those atoms around the dislocations changes significantly, as
shown by the intermediate stage $(\overline{\boldsymbol{q}}^{*},T^{*})$ in
Figure 16. Due to quasistatic thermomechanical deformation, the temperature
field is homogenized to within 1 K, even after dislocation nucleation from the
void. Such a close-to-isothermal character of thermomechanical deformation at slow
strain rates was observed by Ponga et al. (2016) when studying strain-rate
effects on nano-void growth in magnesium and by Ponga and Sun (2018) when
studying the thermomechanical deformation of carbon nanotubes. Further plastic
deformation causes increased heating of the slab, particularly due to the
restricted dislocation motion beyond the interfaces between regions A and B
(Figure 12).
As noted above, the critical stress values obtained from QC simulations are
within $6$% of those obtained from the fully atomistic simulations.
Furthermore, the critical strain values are within $12$%. Repatoms on the
$(110)$- and $(\overline{1}11)$-surfaces, which include the repatoms at
vertices of very large elements and the repatoms in the transition region
between regions A and B, exhibit both mechanical and thermal spurious forces
in 3D (Amelang et al., 2015; Amelang and Kochmann, 2015; Tembhekar et al.,
2017). These artifacts are expected to be the primary sources of error in the
coarse-graining strategy adopted here. Mechanical spurious forces within the
energy-based fully nonlocal QC setup were discussed in detail in Amelang et
al. (2015), and thermal spurious forces are expected to show an analogous
behavior. Therefore, we do not study spurious forces here in detail to
maintain the focus on the thermomechanics of the GPP formulation.
Figure 17: Illustration of the undeformed and deformed QC meshes of a $0.077$
$\mu$m FCC single-crystal of pure Cu. $(a)$ Cross-section of the initial,
undeformed mesh, $(b,c)$ zoomed-in perspective views of the atomistic region
(33$\times$33$\times$33 unit cells) and the surrounding coarsened regions in
the ($b$) undeformed and $(c)$ deformed configurations underneath a 5 nm
spherical indenter at an indentation depth of $0.75$ nm. Figure 18: Variation
of the indenter force with the indenter depth for isothermal
($T=0~{}\mathrm{K},~{}300~{}\mathrm{K},~{}600~{}\mathrm{K}$) and adiabatic
(initially at $T=300~{}\mathrm{K}$) conditions. After an initial elastic
region, the curve shows the typical serrated behavior due to dislocation
activity underneath the indenter.
### 4.3 Thermal effects on nanoindentation of copper
Figure 19: Microstructure generated below the 5 nm spherical indenter at an
indentation depth of $1$ nm for isothermal deformation of a Cu single-crystal.
The generated dislocations move towards the boundaries of the atomistic
region, creating stacking faults in the crystallographic planes of the
$\\{111\\}$-family (shaded in gray). Red repatoms are top surface repatoms and
blue atoms denote those within the microstructure, identified using the
centrosymmetry parameter (values $>5$ identify boundary and microstructure
atoms only). All repatoms are shown with reduced opacity for comparison of
size of the microstructure with the atomistic domain.
Figure 20: Adiabatic deformation of the Cu single-crystal under a spherical
indenter. $(a)$ Spatial variation of temperature in the cross-section of the
atomistic region. As dislocations and stacking faults are created, local
thermal gradients are generated which are diffused via the thermal transport
model. $(b)$ Microstructure generated below the spherical indenter at an
indentation depth of $1$ nm. The generated dislocations move towards the
boundaries of the atomistic region, creating stacking faults in the
crystallographic planes of the $\\{111\\}$-family (shaded in gray). Red
repatoms are top surface repatoms and blue atoms denote those within the
microstructure, identified using the centrosymmetry parameter (values $>5$
identify boundary and microstructure atoms only). Shaded repatoms denote all
repatoms with reduced opacity, shown here for comparison of size of the
microstructure with the atomistic domain.
Finally, we apply the discussed thermomechanical transport model to the case
of nanoindentation into a Cu single-crystal. While the problem is well studied
in 2D using finite-temperature QC implementations (Tadmor et al., 2013), only
a few QC studies have examined finite-temperature effects in 3D. Kulkarni
(2007) studied the nanoindentation of a $32\times 32\times 32$ unit cell FCC
nearest-neighbour Lennard-Jones crystal, using the Wentzel-Kramers-Brillouin
(WKB) approximation, which captures the thermomechanical coupling at
comparably low temperature only. We here perform nanoindentation simulations
of a Cu cube of $0.077$ $\mu$m side length (approximately
215$\times$215$\times$215 unit cells, see Figure 17), modeled by the EAM
potential of Dai et al. (2006), underneath a $5$ nm spherical indenter modeled
using the indenter potential of Kelchner et al. (1998) with a force constant
of $1000$ eV/Å3 and a maximum displacement of $1.26$ nm or approximately $3.5$
times the lattice parameter at $0~{}$K. The crystal consists of approximately
50 million atoms, represented in the QC framework by approximately 0.2 million
repatoms. The top surface is modeled as a free boundary, while all other
boundaries suppress wall-normal displacements, allowing only in-plane motion.
Below, we discuss the results for isothermal deformation at
$T=0~{}\mathrm{K},~{}300~{}\mathrm{K},~{}600~{}\mathrm{K}$ and for quasistatic
adiabatic deformation, the latter being initially at $T=300~{}\mathrm{K}$.
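A hedged sketch of the repulsive spherical indenter in the style of Kelchner et al. (1998): atoms inside the sphere feel a radial force of magnitude $k(R-r)^{2}$, with $k$ in eV/Å³ consistent with the force constant quoted above. The exact functional form used in the simulations is an assumption on our part, and the function name is ours:

```python
import numpy as np

def indenter_forces(pos, center, R, k=1000.0):
    """Forces on atoms at positions pos (N, 3) from a rigid spherical
    indenter of radius R centered at `center`: purely repulsive and radial,
    with magnitude k*(R - r)^2 for r < R and zero outside the sphere."""
    d = np.atleast_2d(np.asarray(pos, float)) - np.asarray(center, float)
    r = np.linalg.norm(d, axis=1, keepdims=True)
    mag = np.where(r < R, k * (R - r)**2, 0.0)
    return mag * d / np.maximum(r, 1e-12)   # magnitude times unit radial direction
```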
Figure 18 shows the variation of the total force on the spherical indenter vs.
indentation depth for both isothermal and quasistatic adiabatic conditions.
The force increases nonlinearly with indentation depth, showing the typical
Hertzian-type initial elastic section. With increasing indentation depth,
atoms underneath the indenter generate dislocations and stacking faults,
overall creating a complex microstructure consisting of prismatic dislocation
loops (PDLs) as observed by Ponga et al. (2015) in nano-void growth in a
copper crystal. These PDLs are created along the slip planes of the crystal of
$\\{111\\}$ family, as shown in Figure 19. At the first dislocation
activation, the indenter force drops after reaching a critical force. This
critical force decreases with increasing temperature, indicating plastic
softening with increasing temperature. The dislocations move towards the
boundaries of the atomistic region, gliding in the crystallographic planes of
the $\\{111\\}$-family, giving way to stacking faults in those planes. As shown in
Figure 19, while the initial dislocations maintain their structure, the
stacking fault structure changes significantly at the same indentation depth
as temperature increases.
Figure 20 shows the spatial temperature distribution and the emergent
microstructure during the quasistatic adiabatic deformation of the Cu crystal.
As the dislocations form, local temperature gradients arise along the PDLs
in the slip planes of $\\{111\\}$ type due to large gradients in
$S_{\Sigma}$, which are in turn triggered by large deviations from a
centrosymmetric neighborhood (as identified in Figure 16). These temperature
gradients are diffused as a result of the thermal transport.
We note that, for a thorough quantitative analysis, one may want to obtain
results averaged over multiple simulations with initial conditions and/or
indenter position slightly perturbed, since the emergence of microstructure
below the indenter within the highly-symmetric single-crystal is associated
with instability and strongly depends on local fluctuations and initial
conditions. Such a statistical analysis is deferred to future work.
## 5 Conclusion and discussion
We have presented a Gaussian phase packets-based (GPP-based) formulation of
finite-temperature equilibrium and nonequilibrium thermomechanics applied to
atomistic systems. We have shown that approximating the global statistical
distribution function with a multivariate Gaussian ansatz captures
thermal transport only via interatomic correlations. Due to high computational
costs, we have neglected the interatomic correlations, which results in a
local GPP approximation of the system. Such a system exhibits reversible
dynamics with thermomechanical coupling, causing local heating and cooling
upon movement of atoms relative to local neighborhoods. Moreover, in the
quasistatic limit we have shown that the equations yield local mechanical and
thermal equilibrium conditions, the latter yielding the local equation of
state of the atoms based on the interatomic force field. To capture the
irreversibility due to local thermal transport triggered by the adiabatic
heating/cooling of atoms, we have coupled the quasistatic framework with
linear Onsager kinetics. Such a model involves an empirical coefficient fitted
to reproduce bulk conductivity measurements and captures the
experimentally observed size-effects of the thermal conductivity, as shown by
Venturini et al. (2014). Moreover, we have shown that the time scale imposed
by the atomic-scale transport is approximately 100 times that of atomic
vibrations. While the atomic-scale thermal transport imposes a small time
scale, as the system reaches a non-uniform steady state, the local heat flux
imbalance decreases. Below a tolerance value, the heat transport can be
terminated, yielding a steady state solution. Based on the global multivariate
Gaussian ansatz, interatomic correlations may be fitted to obtain correlation
functions (akin to interatomic potentials), which can help develop the
transport constitutive properties of atomistic systems and also advance
current understanding of the long-standing nanoscale thermal transport
problem. Finally, we have combined the quasistatic thermomechanical equations
based on the local GPP approximation with thermal transport in a high-
performance, distributed-memory, updated-Lagrangian 3D QC solver, which is
capable of modeling thermomechanical deformation of large-scale systems by
coarse-graining the atomistic ensemble in space. Benchmark simulations of
dislocation nucleation and nanoindentation under isothermal and adiabatic
constraints showed convincing agreement between coarse-grained and fully
resolved atomistic simulations. Since the time integration of the atomic
transport can be terminated for small heat flux imbalance (discussed in
Algorithm 1), the quasistatic simulations offer significant advantages over
traditional MD studies, which can tackle only high strain rates. The presented
methodology of coupling local thermal equilibrium with a surrogate empirical
model of thermal transport and spatial coarse-graining (by the QC method) can
model deformation of large crystalline systems at mesoscales and at
quasistatic loading rates. Due to the time-scale limitations of MD, a one-to-
one comparison of the presented simulations with finite-temperature MD
simulations is prohibitively costly. A detailed analysis of the accuracy of
the spatial coarse-graining of the thermomechanical model presented here, and
a comparison with suitable MD simulations, qualifies as a possible extension
of this work. For such comparisons, however, very large scale nonequilibrium
molecular dynamics (NEMD) simulations are required at sufficiently slow strain
rates.
## Acknowledgments
The support from the European Research Council (ERC) under the European
Union’s Horizon 2020 research and innovation program (grant agreement no.
770754) is gratefully acknowledged. The authors thank Michael Ortiz for
stimulating discussions and Miguel Spinola for aiding with the LAMMPS
simulations for numerical validation.
## References
* Admal and Tadmor (2010) Admal, N. C., Tadmor, E. B., 2010. A unified interpretation of stress in molecular systems. Journal of Elasticity 100 (1-2), 63–143.
* Amelang (2016) Amelang, J. S., 2016. A fully-nonlocal energy-based formulation and high-performance realization of the quasicontinuum method. Ph.D. thesis.
* Amelang and Kochmann (2015) Amelang, J. S., Kochmann, D. M., 2015. Surface effects in nanoscale structures investigated by a fully-nonlocal energy-based quasicontinuum method. Mechanics of Materials 90, 166 – 184.
* Amelang et al. (2015) Amelang, J. S., Venturini, G. N., Kochmann, D. M., 2015. Summation rules for a fully nonlocal energy-based quasicontinuum method. Journal of the Mechanics and Physics of Solids 82, 378–413.
* Ariza et al. (2012) Ariza, M., Romero, I., Ponga, M., Ortiz, M., 2012. Hotqc simulation of nanovoid growth under tension in copper. International Journal of Fracture 174 (1), 75–85.
* Belytschko and Xiao (2003) Belytschko, T., Xiao, S., 2003. Coupling methods for continuum model with molecular model. International Journal for Multiscale Computational Engineering 1 (1).
* Bitzek et al. (2006) Bitzek, E., Koskinen, P., Gähler, F., Moseler, M., Gumbsch, P., 2006. Structural relaxation made simple. Physical review letters 97 (17), 170201.
* Brune et al. (2015) Brune, P. R., Knepley, M. G., Smith, B. F., Tu, X., Jan 2015. Composing scalable nonlinear algebraic solvers. SIAM Review 57 (4), 535–565.
URL http://dx.doi.org/10.1137/130936725
* Butcher (1986) Butcher, P., 1986. The theory of electron transport in crystalline semiconductors. In: Crystalline Semiconducting Materials and Devices. Springer, pp. 131–194.
* Chang and Himmel (1966) Chang, Y., Himmel, L., 1966. Temperature dependence of the elastic constants of cu, ag, and au above room temperature. Journal of Applied Physics 37 (9), 3567–3572.
* Chen et al. (2017) Chen, X., Xiong, L., McDowell, D. L., Chen, Y., 2017. Effects of phonons on mobility of dislocations and dislocation arrays. Scripta Materialia 137, 22–26.
* Chen (2009) Chen, Y., 2009. Reformulation of microscopic balance equations for multiscale materials modeling. The Journal of chemical physics 130 (13), 134706.
* Dai et al. (2006) Dai, X., Kong, Y., Li, J., Liu, B., 2006. Extended finnis–sinclair potential for bcc and fcc metals and alloys. Journal of Physics: Condensed Matter 18 (19), 4527.
* Dobson et al. (2010) Dobson, M., Luskin, M., Ortner, C., 2010. Accuracy of quasicontinuum approximations near instabilities. Journal of the Mechanics and Physics of Solids 58 (10), 1741–1757.
* Eidel and Stukowski (2009) Eidel, B., Stukowski, A., 2009. A variational formulation of the quasicontinuum method based on energy sampling in clusters. Journal of the Mechanics and Physics of Solids 57 (1), 87–108.
* Espanol et al. (2013) Espanol, M. I., Kochmann, D. M., Conti, S., Ortiz, M., 2013. A $\gamma$-convergence analysis of the quasicontinuum method. Multiscale Modeling & Simulation 11 (3), 766–794.
* Evans and Morriss (2007) Evans, D. J., Morriss, G. P., 2007. Statistical Mechanics of Nonequilibrium Liquids. ANU Press.
* Gunzburger and Zhang (2010) Gunzburger, M., Zhang, Y., 2010. A quadrature-rule type approximation to the quasi-continuum method. Multiscale Modeling & Simulation 8 (2), 571–590.
* Heller (1975) Heller, E. J., 1975. Time-dependent approach to semiclassical dynamics. The Journal of Chemical Physics 62 (4), 1544–1555.
* Hicks and Dresselhaus (1993) Hicks, L., Dresselhaus, M. S., 1993. Thermoelectric figure of merit of a one-dimensional conductor. Physical review B 47 (24), 16631.
* Hirth (1980) Hirth, J. P., 1980. Effects of hydrogen on the properties of iron and steel. Metallurgical Transactions A 11 (6), 861–890.
* Hull and Bacon (2001) Hull, D., Bacon, D. J., 2001. Introduction to dislocations. Butterworth-Heinemann.
* Irving and Kirkwood (1950) Irving, J., Kirkwood, J. G., 1950. The statistical mechanical theory of transport processes. iv. the equations of hydrodynamics. The Journal of chemical physics 18 (6), 817–829.
* Iyer and Gavini (2011) Iyer, M., Gavini, V., 2011. A field theoretical approach to the quasi-continuum method. Journal of the Mechanics and Physics of Solids 59 (8), 1506–1535.
* Johnson (1988) Johnson, R., 1988. Analytic nearest-neighbor model for fcc metals. Physical Review B 37 (8), 3924.
* Kelchner et al. (1998) Kelchner, C. L., Plimpton, S., Hamilton, J., 1998. Dislocation nucleation and defect structure during surface indentation. Physical Review B 58 (17), 11085.
* Kim et al. (2014a) Kim, S.-P., Datta, D., Shenoy, V. B., 2014a. Atomistic mechanisms of phase boundary evolution during initial lithiation of crystalline silicon. The Journal of Physical Chemistry C 118 (31), 17247–17253.
* Kim et al. (2014b) Kim, W. K., Luskin, M., Perez, D., Voter, A., Tadmor, E. B., 2014b. Hyper-qc: An accelerated finite-temperature quasicontinuum method using hyperdynamics. Journal of the Mechanics and Physics of Solids 63, 94–112.
* Kittel (1976) Kittel, C., 1976. Introduction to solid state physics.
* Knap and Ortiz (2001) Knap, J., Ortiz, M., 2001. An analysis of the quasicontinuum method. Journal of the Mechanics and Physics of Solids 49 (9), 1899–1923.
* Kulkarni (2007) Kulkarni, Y., 2007. Coarse-graining of atomistic description at finite temperature. Ph.D. thesis, California Institute of Technology.
* Kulkarni et al. (2008) Kulkarni, Y., Knap, J., Ortiz, M., 2008. A variational approach to coarse graining of equilibrium and non-equilibrium atomistic description at finite temperature. Journal of the Mechanics and Physics of Solids 56 (4), 1417–1449.
* Landau and Lifshitz (1980) Landau, L., Lifshitz, E., 1980. Statistical Physics, Part 1: Volume 5.
* Lepri et al. (2003) Lepri, S., Livi, R., Politi, A., 2003. Thermal conduction in classical low-dimensional lattices. Physics Reports 377 (1), 1–80.
* Li et al. (2011) Li, J., Sarkar, S., Cox, W. T., Lenosky, T. J., Bitzek, E., Wang, Y., 2011. Diffusive molecular dynamics and its application to nanoindentation and sintering. Physical Review B 84 (5), 054103.
* Ma et al. (1993) Ma, J., Hsu, D., Straub, J. E., 1993. Approximate solution of the classical liouville equation using gaussian phase packet dynamics: Application to enhanced equilibrium averaging and global optimization. The Journal of Chemical Physics 99 (5), 4024–4035.
* Marian et al. (2009) Marian, J., Venturini, G. N., Hansen, B., Knap, J., Ortiz, M., Campbell, G., 2009. Finite-temperature non-equilibrium quasicontinuum method based on Langevin dynamics. Modelling and Simulation in Materials Science and Engineering 18 (1), 015003.
* McLachlan (1964) McLachlan, A., 1964. A variational solution of the time-dependent schrodinger equation. Molecular Physics 8 (1), 39–44.
* Mehta et al. (2015) Mehta, R., Chugh, S., Chen, Z., 2015. Enhanced electrical and thermal conduction in graphene-encapsulated copper nanowires. Nano letters 15 (3), 2024–2030.
* Mendez et al. (2018) Mendez, J., Ponga, M., Ortiz, M., 2018. Diffusive molecular dynamics simulations of lithiation of silicon nanopillars. Journal of the Mechanics and Physics of Solids 115, 123–141.
* Miller et al. (1998) Miller, R., Tadmor, E., Phillips, R., Ortiz, M., 1998. Quasicontinuum simulation of fracture at the atomic scale. Modelling and Simulation in Materials Science and Engineering 6 (5), 607.
* Miller and Tadmor (2009) Miller, R. E., Tadmor, E. B., 2009. A unified framework and performance benchmark of fourteen multiscale atomistic/continuum coupling methods. Modelling and Simulation in Materials Science and Engineering 17 (5), 053001.
* Motamarri et al. (2020) Motamarri, P., Das, S., Rudraraju, S., Ghosh, K., Davydov, D., Gavini, V., 2020\. Dft-fe–a massively parallel adaptive finite-element code for large-scale density functional theory calculations. Computer Physics Communications 246, 106853.
* Nabarro (1967) Nabarro, F. R. N., 1967. Theory of crystal dislocations. Clarendon Press.
* Nix and MacNair (1941) Nix, F., MacNair, D., 1941. The thermal expansion of pure metals: copper, gold, aluminum, nickel, and iron. Physical Review 60 (8), 597.
* Overton Jr and Gaffney (1955) Overton Jr, W., Gaffney, J., 1955. Temperature variation of the elastic constants of cubic elements. I. Copper. Physical Review 98 (4), 969.
* Plimpton (1995) Plimpton, S., 1995. Fast parallel algorithms for short-range molecular dynamics. Journal of computational physics 117 (1), 1–19.
* Ponga et al. (2015) Ponga, M., Ortiz, M., Ariza, M., 2015. Finite-temperature non-equilibrium quasi-continuum analysis of nanovoid growth in copper at low and high strain rates. Mechanics of Materials 90, 253–267.
* Ponga et al. (2016) Ponga, M., Ramabathiran, A. A., Bhattacharya, K., Ortiz, M., 2016. Dynamic behavior of nano-voids in magnesium under hydrostatic tensile stress. Modelling and Simulation in Materials Science and Engineering 24 (6), 065003.
* Ponga and Sun (2018) Ponga, M., Sun, D., 2018. A unified framework for heat and mass transport at the atomic scale. Modelling and Simulation in Materials Science and Engineering 26 (3), 035014.
* Qu et al. (2005) Qu, S., Shastry, V., Curtin, W., Miller, R. E., 2005. A finite-temperature dynamic coupled atomistic/discrete dislocation method. Modelling and Simulation in Materials Science and Engineering 13 (7), 1101.
* Reddy (2007) Reddy, J. N., 2007. An introduction to continuum mechanics. Cambridge university press.
* Sääskilahti et al. (2015) Sääskilahti, K., Oksanen, J., Volz, S., Tulkki, J., 2015. Frequency-dependent phonon mean free path in carbon nanotubes from nonequilibrium molecular dynamics. Physical Review B 91 (11), 115426.
* Shilkrot et al. (2002) Shilkrot, L., Miller, R. E., Curtin, W., 2002. Coupled atomistic and discrete dislocation plasticity. Physical review letters 89 (2), 025501.
* Srivastava and Nemat-Nasser (2014) Srivastava, A., Nemat-Nasser, S., 2014. On the limit and applicability of dynamic homogenization. Wave Motion 51 (7), 1045–1054.
* Stroud (1971) Stroud, A. H., 1971. Approximate calculation of multiple integrals. Prentice-Hall.
* Tadmor et al. (2013) Tadmor, E. B., Legoll, F., Kim, W., Dupuy, L., Miller, R. E., 2013. Finite-temperature quasi-continuum. Applied Mechanics Reviews 65 (1).
* Tadmor and Miller (2011) Tadmor, E. B., Miller, R. E., 2011. Modeling materials: continuum, atomistic and multiscale techniques. Cambridge University Press.
* Tadmor et al. (1996a) Tadmor, E. B., Ortiz, M., Phillips, R., 1996a. Quasicontinuum analysis of defects in solids. Philosophical Magazine A 73 (6), 1529–1563.
* Tadmor et al. (1996b) Tadmor, E. B., Phillips, R., Ortiz, M., 1996b. Mixed atomistic and continuum models of deformation in solids. Langmuir 12 (19), 4529–4534.
* Tembhekar (2018) Tembhekar, I., 2018. The fully nonlocal, finite-temperature, adaptive 3d quasicontinuum method for bridging across scales. Ph.D. thesis, California Institute of Technology.
* Tembhekar et al. (2017) Tembhekar, I., Amelang, J. S., Munk, L., Kochmann, D. M., 2017. Automatic adaptivity in the fully nonlocal quasicontinuum method for coarse-grained atomistic simulations. International Journal for Numerical Methods in Engineering 110 (9), 878–900.
* Tuckerman (2010) Tuckerman, M., 2010. Statistical mechanics: theory and molecular simulation. Oxford University Press.
* van der Giessen et al. (2020) van der Giessen, E., Schultz, P. A., Bertin, N., Bulatov, V. V., Cai, W., Csányi, G., Foiles, S. M., Geers, M. G., González, C., Hütter, M., et al., 2020. Roadmap on multiscale materials modeling. Modelling and Simulation in Materials Science and Engineering 28 (4), 043001.
* Venturini et al. (2014) Venturini, G., Wang, K., Romero, I., Ariza, M., Ortiz, M., 2014. Atomistic long-term simulation of heat and mass transport. Journal of the Mechanics and Physics of Solids 73, 242–268.
* Venturini (2011) Venturini, G. N., 2011. Topics in multiscale modeling of metals and metallic alloys. Ph.D. thesis, California Institute of Technology.
* Voter (1997) Voter, A. F., 1997. A method for accelerating the molecular dynamics simulation of infrequent events. The Journal of chemical physics 106 (11), 4665–4677.
* Wagner et al. (2008) Wagner, G. J., Jones, R., Templeton, J., Parks, M., 2008. An atomistic-to-continuum coupling method for heat transfer in solids. Computer Methods in Applied Mechanics and Engineering 197 (41-42), 3351–3365.
* Weiner (2012) Weiner, J. H., 2012. Statistical mechanics of elasticity. Courier Corporation.
* Xu and Chen (2019) Xu, S., Chen, X., 2019. Modeling dislocations and heat conduction in crystalline materials: atomistic/continuum coupling approaches. International Materials Reviews 64 (7), 407–438.
* Zhang and Curtin (2008) Zhang, F., Curtin, W., 2008. Atomistically informed solute drag in al–mg. Modelling and Simulation in Materials Science and Engineering 16 (5), 055006.
* Ziman (2001) Ziman, J. M., 2001. Electrons and phonons: the theory of transport phenomena in solids. Oxford University Press.
* Zubarev (1974) Zubarev, D. N., 1974. Nonequilibrium statistical thermodynamics. Consultants Bureau.
* Zubarev et al. (1996) Zubarev, D. N., Morozov, V., Ropke, G., 1996. Statistical Mechanics of Nonequilibrium Processes, Volume 1: Basic Concepts, Kinetic Theory. Wiley-VCH.
## Appendix A Time evolution of phase averaged quantities
The time evolution of the phase average of a phase-space quantity
$A(\boldsymbol{z})$ can be derived using the representative solution of the
Liouville equation,
$\frac{\partial f}{\partial t}+i\mathcal{L}f=0\ \implies\
f(\boldsymbol{z},t)=e^{-i\mathcal{L}t}f(\boldsymbol{z}_{0},0)=e^{-i\mathcal{L}t}f(\boldsymbol{z}),$
(A.1)
where $f(\boldsymbol{z}_{0},0)$ is the initial condition and
$e^{-i\mathcal{L}t}$ is the propagating operator, which transforms the
probability distribution initially defined at phase-space coordinate
$\boldsymbol{z}_{0}$ to the probability distribution at $\boldsymbol{z}(t)$
(Evans and Morriss, 2007; Zubarev et al., 1996). Furthermore, the time
evolution of the phase-space quantity $A(\boldsymbol{z})$ is given by
$\frac{\;\\!\mathrm{d}A}{\;\\!\mathrm{d}t}=i\mathcal{L}A\ \implies\
A(\boldsymbol{z},t)=e^{i\mathcal{L}t}A(\boldsymbol{z}_{0},0)=e^{i\mathcal{L}t}A(\boldsymbol{z}).$
(A.2)
Equations (A.1) and (A.2) reveal that the operators $e^{\pm i\mathcal{L}t}$
transport the probability distribution $f(\boldsymbol{z})$ and phase-space
quantities $A(\boldsymbol{z})$ defined in terms of $\boldsymbol{z}$ of a
system of particles, given that $\boldsymbol{z}$ also changes in time as the
system of particles evolves. Operator $i\mathcal{L}$ satisfies the property
$\int_{\Gamma}A(\boldsymbol{z})i\mathcal{L}f(\boldsymbol{z})d\boldsymbol{z}=\int_{\Gamma}(-i\mathcal{L})A(\boldsymbol{z})f(\boldsymbol{z})d\boldsymbol{z}$
(A.3)
for real-valued $A$ and $f\to 0$ as $\boldsymbol{z}\to\partial\Gamma$ where
$\partial\Gamma$ is the boundary of $\Gamma$ and
$\Gamma\subseteq\mathbb{R}^{6N}$. For a $\Gamma$ almost covering
$\mathbb{R}^{6N}$, $f(\boldsymbol{z})$ approaches 0 as any component of
momentum approaches infinity or any spatial dimension approaches infinity
(probability of finding classical atoms far away from their mean position must
decay to 0). Using this property, the time evolution of the phase average of
a phase-space quantity $A(\boldsymbol{z})$, defined in terms of
$\boldsymbol{z}$, is obtained from
$N!\,h^{3N}\frac{\;\\!\mathrm{d}\left\langle{A}\right\rangle}{\;\\!\mathrm{d}t}=\frac{\;\\!\mathrm{d}}{\;\\!\mathrm{d}t}\int_{\Gamma}f(\boldsymbol{z},t)A(\boldsymbol{z})\;\\!\mathrm{d}\boldsymbol{z}=\int_{\Gamma}\frac{\;\\!\mathrm{d}}{\;\\!\mathrm{d}t}\left[e^{-i\mathcal{L}t}f(\boldsymbol{z})\right]A(\boldsymbol{z})\;\\!\mathrm{d}\boldsymbol{z}=\int_{\Gamma}e^{-i\mathcal{L}t}f(\boldsymbol{z})i\mathcal{L}A(\boldsymbol{z})\;\\!\mathrm{d}\boldsymbol{z}.$
(A.4)
Using equation (A.2), we obtain
$\frac{\;\\!\mathrm{d}\left\langle{A}\right\rangle}{\;\\!\mathrm{d}t}=\frac{1}{N!\,h^{3N}}\int_{\Gamma}f(\boldsymbol{z},t)\frac{\;\\!\mathrm{d}A}{\;\\!\mathrm{d}t}\;\\!\mathrm{d}\boldsymbol{z}=\left\langle{\frac{\;\\!\mathrm{d}A}{\;\\!\mathrm{d}t}}\right\rangle.$
(A.5)
We note that equation (A.5) is obtained using only the evolution equations
(A.1), (A.2) and property (A.3), which hold for any Hamiltonian system and
thus contain no time-coarsening approximations. Accordingly, time-variational
formulations such as the Frenkel-Dirac-McLachlan variational principle
(McLachlan, 1964) lead to identical equations.
## Appendix B Quasistatic GPP as Helmholtz free energy minimization
The Helmholtz free energy $\mathcal{F}$ as a function of parameter set
$(\overline{\boldsymbol{q}},S_{\Sigma},S_{\Omega})$ is defined as
$\mathcal{F}(\overline{\boldsymbol{q}},S_{\Sigma},S_{\Omega})=E(\overline{\boldsymbol{q}},S_{\Sigma},S)-\sum_{i}\frac{\Omega_{i}S_{i}}{k_{B}m_{i}}$
(B.1)
with the relation
$\frac{\Omega_{i}}{k_{B}m_{i}}=\frac{\partial E}{\partial S_{i}}.$ (B.2)
Minimization of $\mathcal{F}$ with respect to the set
$\overline{\boldsymbol{q}}$ yields
$-\frac{\partial\mathcal{F}}{\partial\overline{\boldsymbol{q}}_{i}}=0\
\implies\ -\left\langle{\frac{\partial
V(\boldsymbol{q})}{\partial\boldsymbol{q}_{i}}}\right\rangle=\left\langle{F_{i}(\boldsymbol{q})}\right\rangle=0.$
(B.3)
To minimize $\mathcal{F}$ with respect to the set $S_{\Sigma}$, we consider
the relation
$\boldsymbol{q}_{i}-\overline{\boldsymbol{q}}_{i}=\sqrt{\Sigma_{i}}\boldsymbol{x}_{i},$
(B.4)
for some normalized vector $\boldsymbol{x}_{i}$. From equation (B.4) it
follows that
$\frac{\partial\boldsymbol{q}_{i}}{\partial
S_{\Sigma,i}}=\frac{\partial\boldsymbol{q}_{i}}{\partial\sqrt{\Sigma_{i}}}\frac{\partial\sqrt{\Sigma_{i}}}{\partial
S_{\Sigma,i}}=\sqrt{\Sigma_{i}}\boldsymbol{x}_{i}=\boldsymbol{q}_{i}-\overline{\boldsymbol{q}}_{i}.$
(B.5)
Finally, minimization of $\mathcal{F}$ with respect to the set $S_{\Sigma}$
yields the following set of equations:
$-\frac{\partial\mathcal{F}}{\partial S_{\Sigma,i}}=0\ \implies\
\frac{3\Omega_{i}}{m_{i}}-\left\langle{\frac{\partial
V(\boldsymbol{q})}{\partial\boldsymbol{q}_{i}}\cdot\frac{\partial\boldsymbol{q}_{i}}{\partial
S_{\Sigma,i}}}\right\rangle=\frac{3\Omega_{i}}{m_{i}}+\left\langle{\boldsymbol{F}_{i}(\boldsymbol{q})\cdot\left(\boldsymbol{q}_{i}-\overline{\boldsymbol{q}}_{i}\right)}\right\rangle=0.$
(B.6)
Equations (B.3) and (B.6) are identical to the quasistatic GPP equations
(3.12b).
## Appendix C Time step stability bounds for entropy transport
Applying a forward-Euler explicit time discretization to equation (3.33), we
obtain
$\frac{1}{\Delta t^{(k)}}\left(\left(\begin{matrix}T_{i}\\\
T_{j}\end{matrix}\right)^{(k+1)}-\left(\begin{matrix}T_{i}\\\
T_{j}\end{matrix}\right)^{(k)}\right)=\frac{2A_{0}}{3k_{B}}\frac{T^{2,(k)}_{ij}}{T^{(k)}_{i}T^{(k)}_{j}}\left(\begin{matrix}T_{i}-T_{j}\\\
T_{j}-T_{i}\end{matrix}\right)^{(k)},$ (C.1)
where superscript $(k)$ implies a quantity evaluated at the $k^{\text{th}}$
time step. Rearranging the above equation yields
$\left(\begin{matrix}T_{i}\\\
T_{j}\end{matrix}\right)^{(k+1)}=\boldsymbol{T}^{(k)}\left(\begin{matrix}T_{i}\\\
T_{j}\end{matrix}\right)^{(k)},$ (C.2)
where $\boldsymbol{T}^{(k)}$ is the transition matrix at the $k^{\mathrm{th}}$
time step, defined by
$\boldsymbol{T}^{(k)}=\boldsymbol{I}+\frac{2A_{0}\Delta
t^{(k)}}{3k_{B}}\frac{T^{2,(k)}_{ij}}{T^{(k)}_{i}T^{(k)}_{j}}\left(\begin{matrix}1&-1\\\
-1&1\end{matrix}\right).$ (C.3)
For numerical stability, the transition matrix must have eigenvalues with
magnitude $\leq 1$, which yields the bound
$\frac{2A_{0}\Delta
t^{(k)}}{3k_{B}}\left(\frac{T^{2,(k)}_{ij}}{T^{(k)}_{i}T^{(k)}_{j}}\right)\leq
1.$ (C.4)
Applying the above limit to a system in which the $i^{\mathrm{th}}$ atom has
multiple neighbors, we obtain the constraint in equation (3.36).
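To make the discrete update concrete, the following minimal Python sketch integrates the pairwise temperature exchange of equation (C.1) with forward Euler and enforces the eigenvalue bound (C.4). Several choices here are illustrative assumptions rather than values from the paper: the diffusive sign convention (heat flowing hot to cold), the constant `c0` standing in for $2A_0/(3k_B)$, and the interfacial mean form used for $T_{ij}^2$.

```python
# Forward-Euler update for pairwise temperature exchange (cf. eqs. C.1-C.4).
# c0 plays the role of 2*A0/(3*kB) and is an assumed positive constant;
# T_ij^2 is modeled as the squared mean of the two temperatures.

def step(Ti, Tj, c0, dt):
    """One explicit Euler step of the two-atom exchange."""
    Tij2 = ((Ti + Tj) / 2) ** 2          # assumed interfacial mean form of T_ij^2
    rate = c0 * Tij2 / (Ti * Tj)
    return Ti + dt * rate * (Tj - Ti), Tj + dt * rate * (Ti - Tj)

def stable_dt(Ti, Tj, c0):
    """Largest dt satisfying the transition-matrix eigenvalue bound
    |1 - 2 * rate * dt| <= 1, i.e. rate * dt <= 1 (cf. eq. C.4)."""
    Tij2 = ((Ti + Tj) / 2) ** 2
    rate = c0 * Tij2 / (Ti * Tj)
    return 1.0 / rate

Ti, Tj, c0 = 350.0, 250.0, 1.0
dt = 0.5 * stable_dt(Ti, Tj, c0)         # safely inside the stability bound
for _ in range(200):
    Ti, Tj = step(Ti, Tj, c0, dt)
print(Ti, Tj)                            # both relax toward the common mean, 300 K
```

Exceeding `stable_dt` makes the contraction factor leave the unit interval and the temperatures oscillate with growing amplitude, which is exactly the instability the bound (C.4) excludes.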
Further author information: (Send correspondence to J.E.G)
E-mail<EMAIL_ADDRESS>
# Design and Fabrication of Metamaterial Anti-Reflection Coatings for the
Simons Observatory
Joseph E. Golec Department of Physics, University of Chicago, Chicago, IL,
USA Jeffrey J. McMahon Department of Physics, University of Chicago,
Chicago, IL, USA Department of Astronomy and Astrophysics, University of
Chicago, Chicago, IL, USA Kavli Institute of Cosmological Physics, University
of Chicago, Chicago, IL, USA Enrico Fermi Institute, University of Chicago,
Chicago, IL, USA Aamir M. Ali Department of Physics, University of
California-Berkeley, Berkeley, CA, USA Simon Dicker Department of Physics
and Astronomy, University of Pennsylvania, Philadelphia, PA, USA Nicholas
Galitzki Department of Physics, University of California-San Diego, La Jolla,
CA, USA Kathleen Harrington Department of Astronomy and Astrophysics,
University of Chicago, Chicago, IL, USA Benjamin Westbrook Department of
Physics, University of California-Berkeley, Berkeley, CA, USA Edward J.
Wollack NASA/Goddard Space Flight Center, Greenbelt, MD, USA Zhilei Xu
Department of Physics and Astronomy, University of Pennsylvania, Philadelphia,
PA, USA Ningfeng Zhu Department of Physics and Astronomy, University of
Pennsylvania, Philadelphia, PA, USA
###### Abstract
The Simons Observatory (SO) will be a cosmic microwave background (CMB) survey
experiment with three small-aperture telescopes and one large-aperture
telescope, which will observe from the Atacama Desert in Chile. In total, SO
will field over 60,000 transition-edge sensor (TES) bolometers in six spectral
bands centered between 27 and 280 GHz in order to achieve the sensitivity
necessary to measure or constrain numerous cosmological quantities, as
outlined in The Simons Observatory Collaboration et al. (2019). These
telescopes require 33 highly transparent, large aperture, refracting optics.
To this end, we developed mechanically robust, highly efficient, metamaterial
anti-reflection (AR) coatings with octave bandwidth coverage for silicon
optics up to 46 cm in diameter for the 22-55, 75-165, and 190-310 GHz bands.
We detail the design, the manufacturing approach to fabricate the SO lenses,
their performance, and possible extensions of metamaterial AR coatings to
optical elements made of harder materials such as alumina.
###### keywords:
Simons Observatory, millimeter wavelengths, CMB, anti-reflection coatings
## 1 INTRODUCTION
The Simons Observatory (SO) is an upcoming ground-based survey experiment
that will provide the most sensitive measurements of the cosmic microwave
background to date in order to constrain fundamental cosmological properties
of our universe [1, 2]. To make high fidelity measurements of the CMB, SO will
use silicon refractive optics to focus light onto detector arrays. Silicon is
an excellent choice of lens material for the millimeter and sub-millimeter
wavelengths due to its low loss and high index of refraction which leads to
high-throughput and large field of view optical designs ideal for a large sky
CMB survey. However, the high index of refraction of the silicon lenses also
means that a significant fraction of the incident light is reflected. This
not only reduces the light reaching the detectors, decreasing the overall
sensitivity, but can also introduce non-ideal instrument systematics through
multiple reflections within the instrument. These undesirable consequences of
the high index mean that the lenses must have an anti-reflection (AR) coating
in order to deliver state-of-the-art measurements of the CMB.
The standard method to AR coat lenses is to layer thin films, usually made of
a plastic material, onto the lens surface. The thickness and index of the thin
films can be tuned to optimize the reflection across a given frequency band.
While this works in many applications, the lenses for SO will be kept at
cryogenic temperatures and any plastic AR coating risks delamination from the
lens due to a mismatch between the coefficients of thermal expansion of
silicon and plastic. To solve this problem, metamaterial AR coatings have been
implemented in CMB experiments to great success [3, 4, 5]. Metamaterial AR
coatings consist of sub-wavelength features either placed onto or machined
into an optical surface. Those sub-wavelength features then act as effective
dielectric layers akin to traditional thin film plastic coatings. Since the
metamaterial coatings are made of the lens material itself there is no risk of
delamination of the AR coating from the optic. The shape and dimensions of the
sub-wavelength features of the metamaterial coating can be tuned to result in
sub-percent reflections across octave bandwidths, which is optimal for
experiments like SO.
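Because the sub-wavelength layers act as effective dielectric films, their combined response can be estimated with the standard characteristic-matrix (transfer-matrix) method for thin-film stacks. The sketch below is illustrative only: the indices and thicknesses are placeholders (an effective-medium fit or full-wave simulation, not shown here, would supply real values for a metamaterial coating), and only normal incidence is treated.

```python
import numpy as np

def reflectance(layers, wavelength, n_substrate, n0=1.0):
    """Normal-incidence power reflectance of a thin-film stack via the
    characteristic matrix method. `layers` is a list of (index, thickness)
    tuples ordered from the incident medium toward the substrate."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength   # layer phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_substrate])
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

n_si = 3.4                       # assumed nominal index of silicon at mm waves
lam = 2.0                        # design wavelength (arbitrary units)
# A single quarter-wave matching layer with n = sqrt(n_si) nulls the reflection:
n1 = np.sqrt(n_si)
print(reflectance([(n1, lam / (4 * n1))], lam, n_si))  # ~0
# Bare interface for comparison: ((n_si - 1)/(n_si + 1))^2, roughly 30%:
print(reflectance([], lam, n_si))
```

Stacking two or three effective layers with intermediate indices broadens this null into the octave-bandwidth, sub-percent response the text describes; choosing those effective indices is the role of the feature-geometry optimization.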
We present the work done to realize metamaterial AR coatings for lenses in the
three SO observing bands and at the necessary production scale for an
experiment the size of SO. The organization of this paper is as follows:
Section 2 presents the design of the AR coatings for all three SO observing
bands. Section 3 describes the production process of the AR coatings and
presents the achieved production rates. In Section 4 the results of reflection
measurements taken of the AR coatings are presented. Section 5 discusses the
possible extensions of this AR coating technique to alumina, another material
used for optical elements in CMB observation. Finally, Section 6 concludes
with a discussion of how this AR coating technology fits into the broader
context of the sensitivity of SO and comments on the feasibility of
metamaterial AR coatings for future CMB missions at a scale larger than SO.
## 2 Design
Metamaterial AR coatings consisting of sub-wavelength features have been used
to mitigate reflections off optical elements for many years. In general, the
idea behind metamaterial AR coatings is to create a periodic array of features
that are sufficiently smaller than the wavelength of the incident light. By
tuning the geometries of those features, reflections can be minimized across a
given bandwidth. The geometry and fabrication of the sub-wavelength features
varies greatly depending on the application and the wavelength of the incident
light. Raut et al (2011) gives a general review of designs and fabrication
techniques used to create AR coatings [6].
The design of the sub-wavelength features we present here closely follows the
geometry presented in Datta et al (2013) [3]. The features consist of
metamaterial layers that are an array of square “stepped-pyramids”. This
design was chosen because these features are easily fabricated with a silicon
dicing saw, where the saw makes a series of nested cuts across an optic, the
optic is then rotated 90 degrees, and the series of cuts are made again.
Figure 1 shows a fiducial model of a three-layer metamaterial AR coating. In
principle, the number of layers can be increased to accommodate large
bandwidth coverage, but this is subject to physical constraints such as dicing
blade thickness and aspect ratio. Metamaterial coatings with five layers have
been demonstrated and show excellent performance over more than an octave
bandwidth [4]. Following the fiducial design, the pitch, or the spacing between
the periodic cuts, each layer’s width (kerf), and depth must be optimized to
minimize the reflections across the observing bands.
The SO will observe in three dichroic frequency bands: the low frequency (LF),
mid frequency (MF), and ultra-high frequency (UHF) bands, whose band edges
span 23-47 GHz, 70-170 GHz, and 195-310 GHz, respectively. We begin the
optimization process by
modeling the physical AR coating structure in the finite-element analysis
software, Ansys High Frequency Structure Simulator
(HFSS; https://www.ansys.com/products/electronics/ansys-hfss). Rather than
start the optimization process from scratch we began the optimization process
with the three-layer 75-170 GHz metamaterial AR coating presented in Coughlin
et al (2018).
The pitch of the sub-wavelength features is set by the criterion for
diffraction, which is given by Equation 1 in Datta et al (2013) [3]:
$\displaystyle p<\frac{\lambda}{n_{\text{si}}+n_{\text{i}}\sin\theta_{i}}$ (1)
where $p$ is the pitch, $\lambda$ is the wavelength corresponding to the upper
edge of the frequency band, $n_{\text{si}}$ and $n_{\text{i}}$ are the indices
of refraction of silicon and the incident medium (in this case vacuum)
respectively, and $\theta_{i}$ is the angle of incidence of light on the
surface of the lens. In our design we choose the pitch such that this
criterion is met up to the upper edge of the observing frequency band at an
angle of 15 degrees which is roughly the average angle of incidence of a light
ray in the telescopes.
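As a sanity check, Equation 1 can be evaluated numerically at each band's upper edge. The sketch below assumes a silicon index of about 3.38 at millimeter wavelengths and the 15-degree average incidence angle quoted above; the resulting bounds are consistent with the pitches later listed in Table 1.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def max_pitch_mm(f_upper_ghz, n_si=3.38, n_i=1.0, theta_deg=15.0):
    """Maximum diffraction-free pitch (Eq. 1) at the upper band edge."""
    lam_mm = C / (f_upper_ghz * 1e9) * 1e3  # free-space wavelength in mm
    return lam_mm / (n_si + n_i * math.sin(math.radians(theta_deg)))

# Upper band edges: LF 47 GHz, MF 170 GHz, UHF 310 GHz
for band, f in [("LF", 47), ("MF", 170), ("UHF", 310)]:
    print(f"{band}: p < {max_pitch_mm(f):.3f} mm")
```

This gives bounds of roughly 1.75 mm, 0.48 mm, and 0.27 mm for the LF, MF, and UHF bands, so the Table 1 pitches sit at or just under the diffraction limit.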
With the pitch set, we then optimize the AR coating design using two free
parameters per metamaterial layer: the kerf and the depth. Therefore, for a
three-layer coating there are six parameters to optimize and for a two-layer
coating there are four. An optimization algorithm in HFSS is used to vary the
kerfs and depths of the metamaterial layers to achieve the lowest reflection
possible across the SO MF band. Since the size of the sub-wavelength features
dictates at what frequencies the coating is effective, the dimensions of the
AR coating parameters can be scaled up or down to cover the LF and UHF
bands. However, that scaling may not produce an optimized AR coating in the
new frequency band, or the resulting design may not be physically realizable
with dicing saw blades.
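The scaling step described here amounts to multiplying every dimension of the optimized MF design by a ratio of band frequencies to obtain a seed for re-optimization. The choice of the upper band edges for that ratio is our assumption for illustration:

```python
def scale_design(dims_mm, f_from_ghz, f_to_ghz):
    """Scale all AR-coating dimensions from one band to another.

    Feature sizes scale inversely with frequency, so moving to a
    lower band enlarges every dimension by f_from / f_to.
    """
    s = f_from_ghz / f_to_ghz
    return {name: round(v * s, 3) for name, v in dims_mm.items()}

# Optimized MF design (Table 1), scaled toward the LF band as a
# starting point for re-optimization (not the final LF design)
mf = {"P": 0.450, "K1": 0.245, "D1": 0.452,
      "K2": 0.110, "D2": 0.294, "K3": 0.025, "D3": 0.234}
lf_seed = scale_design(mf, f_from_ghz=170, f_to_ghz=47)
```

Scaling the same MF design up toward the UHF band shrinks K3 to roughly 0.014 mm, which illustrates numerically why the third layer could not be fabricated and the UHF coating became a two-layer design.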
For the SO LF band, the optimized MF AR coating design parameters were scaled
up and then re-optimized. The resulting coating from that optimization
achieved sub-percent reflection across the LF band. Finally, the MF AR coating
was scaled down to try to cover the UHF band, but the kerf of the third layer
became thinner than any physically producible dicing saw blade. This drove the
design of the UHF coating from a three-layer design to a two-layer design.
After optimizing the two-layer UHF design we still achieved a simulated sub-
percent reflection across the band, thanks to the UHF band’s narrower
fractional bandpass.
After the completed optimization we then have three AR coating designs that
all achieve sub-percent reflection across their respective frequency bands.
The dimensions of the three optimized AR coatings for the LF, MF, and UHF
bands are presented in Table 1; the parameters are labeled in Figure 1. Note
that the UHF design has only two layers, so the dimensions of the third
layer are not applicable. The simulated performance of the AR coatings at
normal incidence are presented later in Section 4.
Figure 1: (Left) Isometric view of a fiducial three-layer AR coating design.
(Right) Side view of the fiducial three-layer design with the relevant design
parameters labeled.

Table 1: Parameters of the three AR coating designs

| Parameter | LF | MF | UHF |
|---|---|---|---|
| Pitch (P) | 1.375 mm | 0.450 mm | 0.266 mm |
| Kerf 1 (K1) | 0.735 mm | 0.245 mm | 0.122 mm |
| Depth 1 (D1) | 1.520 mm | 0.452 mm | 0.200 mm |
| Kerf 2 (K2) | 0.310 mm | 0.110 mm | 0.033 mm |
| Depth 2 (D2) | 1.000 mm | 0.294 mm | 0.120 mm |
| Kerf 3 (K3) | 0.070 mm | 0.025 mm | - |
| Depth 3 (D3) | 0.750 mm | 0.234 mm | - |
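To build intuition for why the stepped pyramids act as a broadband AR coating, one can estimate each layer's effective refractive index from its silicon fill fraction. The simple permittivity-mixing rule below is a rough zeroth-order approximation, not the full HFSS treatment, and the silicon index value is assumed:

```python
import math

N_SI = 3.38  # assumed silicon index at millimeter wavelengths

def effective_index(pitch_mm, kerf_mm, n_si=N_SI):
    """Crude effective index of one metamaterial layer.

    After two orthogonal cut passes, the remaining silicon pillar
    occupies a (pitch - kerf)^2 area per pitch^2 unit cell; the
    permittivities are then volume-averaged.
    """
    fill = ((pitch_mm - kerf_mm) / pitch_mm) ** 2
    return math.sqrt(fill * n_si**2 + (1 - fill) * 1.0)

# MF design (Table 1): the index should step down from bulk silicon
# toward vacuum going up the pyramid (layer 3 -> layer 1)
for name, kerf in [("K3", 0.025), ("K2", 0.110), ("K1", 0.245)]:
    print(name, round(effective_index(0.450, kerf), 2))
```

For the MF design this gives indices stepping down from roughly 3.2 (layer 3) through 2.6 (layer 2) to 1.8 (layer 1), i.e. a graded transition from bulk silicon toward vacuum.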
## 3 Production
The SO will deploy over 30 silicon lenses, the most of any single experiment
to date, so the production rate of the metamaterial AR coatings for those
lenses must be high enough to follow the deployment timeline. This
requirement, combined with the need to dice the largest-diameter silicon
lenses deployed on an experiment to date and the complex surface profiles of
some of the lenses [7], led to the development of a custom silicon dicing saw
system.
The saw system uses nickel alloy dicing saw blades embedded with diamonds to
dice the metamaterial features into the lens surface. Figure 2 shows a picture
of the dicing saw system cutting a lens surface. The system has numerous
features that allow for significant increases in production rate compared to
previous efforts to produce metamaterial AR coatings for CMB experiments.
First, there are multiple dicing spindles, each of which can be fit with a
different blade corresponding to a different cut in the AR coating’s design.
By mounting all of the blades at once we can AR coat an entire optical surface
without changing any blades, which provides for nearly continuous operation of
the saw system.
Another feature of the custom system is that the lens is mounted to a rotary
stage which allows for the lens to remain mounted to the system for the
entirety of the fabrication process. This eliminates the need to perform
metrology on the cut lens surface, a time-consuming process. Careful
commissioning and calibration of this dicing saw system led to micron-accurate
stage positioning and repeatability, well within the tolerances required for
the AR coating application presented here.
Figure 2: Picture of the dicing saw system.
The general production procedure of the AR coating on a lens is described
hereafter. A lens is mounted to the dicing saw and metrology is taken of its
surface using a sub-micron accurate metrology probe mounted to the system. A
program then takes the surface metrology, fits a model surface to the data,
and generates program files that are used to command the system to dice the
cuts. The room that the dicing saw is situated in is temperature regulated and
the dicing process uses temperature controlled flood cooling. This temperature
regulation is necessary to ensure the lens surface does not thermally expand
or contract during the cutting process. The design-specific blades are then
mounted to the spindles and “prepared” to eliminate diamond burrs on the
blade and to ensure the blade is circular. This is done by making several cuts
in a special dressing block made to hone dicing saw blades. Test cuts are then
diced into a small silicon wafer affixed to the side of the mounted lens.
These cuts are then inspected and their dimensions measured with a microscope.
This is a check that all the blades are dicing properly and the CNC system is
correctly programmed to dice the cuts into the lens. The layers of the AR
coating are diced into the lens, one at a time, from the largest to the
smallest cut. After the layers are diced into the lens, it is then rotated 90
degrees and the process is repeated to fully realize the AR coating. After the
AR coating is completely diced, additional test cuts are made in the
sacrificial wafer to monitor if any cutting abnormalities may have arisen
during the fabrication. After one optical surface of a lens is finished it is
flipped and the procedure repeated on the other side. After both sides of a
lens have been AR coated, the lens is cleaned with water in an ultrasonic
cleaning bath.
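The per-surface procedure above can be condensed into an ordered plan; the step names below are illustrative, not taken from the actual saw control software:

```python
def surface_plan(layers):
    """Ordered dicing steps for one optical surface.

    `layers` is assumed to be given largest cut first, matching the
    text: each layer is diced fully, then the lens is rotated 90
    degrees and the same layers are diced again.
    """
    steps = ["measure surface metrology", "fit model surface",
             "generate cut programs", "mount and dress blades",
             "dice + inspect test cuts on sacrificial wafer"]
    for rotation in (0, 90):
        for layer in layers:
            steps.append(f"dice {layer} at {rotation} deg")
        if rotation == 0:
            steps.append("rotate lens 90 deg")
    steps.append("post-run test cuts on sacrificial wafer")
    return steps

plan = surface_plan(["layer 1 (largest)", "layer 2", "layer 3 (smallest)"])
```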
That is the process for the MF and UHF coatings, but for longer wavelengths,
where the feature size is much larger, we must modify this approach. Dicing
blades cannot be fabricated to have an arbitrary kerf, so for the top two
layers of the LF coating we use three defining cuts and two clearing cuts to
create a kerf that is much wider than the maximum blade thickness. In order to
not load the dicing blades with too much cutting force we make multiple passes
of defining and clearing cuts to realize the full depth of the top two LF
layers.
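For the wide LF kerfs, the defining and clearing cuts can be thought of as evenly spaced centerlines spanning the target kerf. The even-spacing scheme and the 0.300 mm blade width below are our assumptions for illustration:

```python
def wide_kerf_cuts(kerf_mm, blade_mm, n_cuts=5):
    """Centerline offsets (mm, relative to the kerf center) for
    realizing a kerf wider than the blade.

    The outermost cuts define the kerf walls; the interior cuts
    clear the ribs of silicon left between them.
    """
    half = (kerf_mm - blade_mm) / 2.0  # outermost centerline offset
    step = 2.0 * half / (n_cuts - 1)
    return [-half + i * step for i in range(n_cuts)]

# e.g. the LF layer-1 kerf (0.735 mm) with an assumed 0.300 mm blade:
# three defining cuts plus two clearing cuts, as described in the text
offsets = wide_kerf_cuts(0.735, 0.300)
```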
In total we have fabricated nine lenses to date for SO. All nine were coated
with the MF coating. Figure 3 shows an image of one of the MF lenses installed
inside an optics tube (see paper #11453-183 in these proceedings for a
discussion of the SO optics tubes). It also shows a zoomed in picture of the
fabricated coating. In addition to the MF lenses for SO, three LF lenses using
the SO design were fabricated for the AdvACT experiment. The UHF coating has
yet to be fabricated. At the end of the fabrication run for the MF lenses we
achieved a production rate of one lens per week. The defect rate was around
100 broken pyramid features out of a million, which is not expected to impact
the lens quality or the AR coating performance.
Figure 3: (Left) Picture of a SO LAT lens installed in an optics tube. (Right
Top) A zoomed in image of the MF metamaterial AR coating. (Right Bottom) A
Picture of the production team with six SO lenses. The three closest to the
camera are a set of SO Small Aperture Telescope lenses and the three farther
away are a set of SO Large Aperture Telescope lenses.
## 4 Optical Performance
The lenses were tested and the optical performance measured using a coherent
reflectometer. The reflectometer setup is described in detail in Chesmore et
al. (2018) [8]. The lenses are mounted as in Figure 1 of the Chesmore paper
with the flat side down toward the parabolic mirrors. In cases where the lens
does not have a flat side, we measure the concave side as close to the center
of the lens as possible, where it is the most flat. The results of the
measurements are summarized in Figure 4. The presented data for the LF AR
coating are from lenses made for the AdvACTPol experiment, which shares the
same design and fabrication procedure as the SO lenses. These data show good
agreement with simulations, with sub-percent reflections across the LF band.
Due to the coronavirus pandemic, it was not possible to make reflection
measurements of the MF AR coatings produced for SO. The data for the MF
coating presented in Figure 4 is of the AR coatings produced for the ACTPol
experiment, which have a slightly different design from the SO coating. All
of the coatings measured so far have achieved sub-percent reflection across
their bands. Since the UHF coating has yet to be fabricated
we have included the simulation and performance of the high frequency (HF)
metamaterial AR coating used for the AdvACT experiment to show that sub-
percent reflection is achievable at frequencies comparable to the UHF
frequencies.
Figure 4: Plot of the reflection performance of the SO AR coatings. The solid
lines represent the simulated performance of the AR coatings and the dots
represent measurements.
## 5 Extensions to Alumina
The success of metamaterial AR coatings at achieving sub-percent reflection
across observing bands in silicon motivates investigating if this method can
be extended to alumina, another material used for millimeter-wave optics.
Alumina is used as an IR blocking filter in SO and, like silicon, has a
relatively high index of refraction, so AR coating is just as important for
the alumina optical elements. Current methods of AR coating alumina optics are
to glue layers of epoxies and plastics on the surface [9]. While this results
in an effective AR coating, it still suffers from the differential thermal
contraction between the AR coatings and the optic, which can lead to
delamination of the AR coating. This failure may not occur on early thermal
cycles of the optic, but over the course of subsequent observing campaigns
where the optics are cryogenically cycled numerous times there is no guarantee
that the AR coating will remain affixed to the optic. While considerable
effort has been made to reduce or prevent the delamination of the plastic
coatings through careful surface preparation and laser strain relief,
metamaterial AR coatings avoid this differential thermal contraction
altogether.
While it is straightforward to come up with a design of a metamaterial AR
coating for alumina, the fabrication of that coating is not straightforward
due to alumina’s hardness. Alumina shares the same chemical composition as
sapphire but is not in a crystalline form. Instead, alumina optics are made by
taking aluminum oxide powder and binding it together with heat and pressure in
a mold. The resulting optic is nearly as hard as sapphire, which makes
machining possible but much more difficult than machining silicon; in
particular, blade wear becomes a significant concern.
To overcome the difficulty of dicing alumina, we began testing different
dicing blade types and have found that a combination of resin and nickel-alloy
blades, each with different diamond grit and density, can be used to fabricate
a diced metamaterial coating in alumina. Tool wear is still an issue even with
these more resilient dicing blades; however, we have found that the tool wear
scales linearly with the amount of material cut, so it can be compensated for
in the saw cutting software.
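Since the wear is observed to scale linearly with the amount of material cut, the compensation can be as simple as adding a wear rate times cumulative cut length to the commanded depth. The wear-rate value below is invented for illustration:

```python
class WearCompensator:
    """Adjust commanded cut depth for linear dicing-blade wear."""

    def __init__(self, wear_mm_per_m):
        self.wear_mm_per_m = wear_mm_per_m  # blade radius lost per meter cut
        self.cut_length_m = 0.0

    def commanded_depth(self, target_depth_mm):
        # cut deeper by the radius already worn off the blade
        return target_depth_mm + self.wear_mm_per_m * self.cut_length_m

    def record_cut(self, length_m):
        self.cut_length_m += length_m

comp = WearCompensator(wear_mm_per_m=0.001)  # illustrative wear rate
comp.record_cut(50.0)                        # 50 m of alumina cut so far
depth = comp.commanded_depth(0.200)          # about 0.250 mm commanded
```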
We successfully fabricated a prototype metamaterial AR coating on a six-inch
diameter alumina wafer (Figure 5 Left). Due to blade thickness limitations,
the alumina AR coating is only a two-layer AR coating which leads to percent-
level reflections across the MF band. At the time of writing we are measuring
the performance of this coating.
Figure 5: (Left) Picture of the six-inch alumina wafer coated with a
prototype metamaterial AR coating. The black mark at the center is permanent
marker from the fabrication process. (Right Upper) A zoomed in picture of the
AR coating. (Right Lower) Plot of the simulated performance of the AR coating.
## 6 Conclusions
We have presented the design, fabrication process, and performance of the
metamaterial AR coatings for the three SO bands as well as a prototype alumina
AR coating for the MF band. All of these coatings achieve percent or sub-
percent levels of reflection which permit sensitive and precise measurement of
the CMB. In addition, we have shown that these AR coatings can be fabricated
on a one to two week time scale with little to no defects. This high
production rate was achieved with a custom dicing saw and is nearly the
maximum rate at which the coatings can be fabricated; the limiting schedule
drivers are thus the procurement of the silicon and the fabrication of the
non-AR-coated lens blanks. Such a high production rate of the AR coatings is
crucial for meeting the large demand for silicon lenses that the SO imposes
and reinforces the feasibility of future CMB experiments that will be at an
even larger scale than SO, such as CMB-S4.
###### Acknowledgements.
This work was funded by the Simons Foundation (Award #457687, B.K.). JG is
supported by a NASA Space Technology Research Fellowship (Grant
80NSSC19K1157). ZX is supported by the Gordon and Betty Moore Foundation.
## References
* [1] The Simons Observatory Collaboration et al., “The Simons Observatory: science goals and forecasts,” Journal of Cosmology and Astroparticle Physics 2019(02), 056 (2019).
* [2] Galitzki, N. et al., “The Simons Observatory: instrument overview,” SPIE: Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy IX 10708, 1–13 (2018).
* [3] Datta, R., Munson, C. D., Niemack, M. D., McMahon, J. J., and et al, “Large-aperture wide-bandwidth antireflection-coated silicon lenses for millimeter wavelengths,” Appl. Opt. 52, 8747–8758 (2013).
* [4] Coughlin, K. P., McMahon, J. J., Crowley, K. T., Koopman, B. J., Miller, K. H., Simon, S. M., and Wollack, E. J., “Pushing the limits of broadband and high frequency metamaterial silicon antireflection coatings,” J Low Temp Phys 193, 876–885 (2018).
* [5] Harrington, K., Marriage, T., and et al., “The Cosmology Large Angular Scale Surveyor,” Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy VIII 9914, 380 – 400, International Society for Optics and Photonics, SPIE (2016).
* [6] Raut, H. K., Ganesh, V. A., Nair, A. S., and Ramakrishna, S., “Anti-reflective coatings: A critical, in-depth review,” Energy Environ. Sci. 4, 3779–3804 (2011).
* [7] Ali, A. M. et al., “Small aperture telescopes for the Simons Observatory,” J Low Temp Phys 200, 461–471 (2020).
* [8] Chesmore, G. E., Mroczkowski, T., McMahon, J., Sutariya, S., Josaitis, A., and Jensen, L., “Reflectometry measurements of the loss tangent in silicon at millimeter wavelengths,” arxiv e-prints , arXiv:1812.03785 (2018).
* [9] Rosen, D., Suzuki, A., Keating, B., Krantz, W., Lee, A. T., Quealy, E., Richards, P. L., Siritanasak, P., and Walker, W., “Epoxy-based broadband antireflection coating for millimeter-wave optics,” Appl. Opt. 52(33), 8102–8105 (2013).
# Discovery of an insulating ferromagnetic phase of electrons in two
dimensions
Kyung-Su Kim∗ & Steven A. Kivelson† Department of Physics, Stanford
University, Stanford, CA 94305
The two dimensional electron gas (2DEG) realized in semiconductor hetero-
structures has been the focus of fruitful study for decades. It is, in many
ways, the paradigmatic system in the field of highly correlated electrons. The
list of discoveries that have emerged from such studies and have opened new
fields of physics is extraordinary, including discoveries related to the
integer and fractional quantum Hall effects, weak localization, metal-
insulator transitions, Wigner crystallization, mesoscopic quantum transport
phenomena, etc. Now, a set of recent studies[1, 2] on ultra-clean modulation-
doped AlAs quantum wells has uncovered a new set of ordering transitions
associated with the onset of ferromagnetism and electron nematicity (or
orbital ordering).
The 2DEG in the current generation of “ultra-clean” AlAs quantum wells has
been gate-tuned over a range of electron density from $n=1.1\times
10^{10}\textrm{cm}^{-2}$ to $n=2\times 10^{11}\textrm{cm}^{-2}$. The electrons
occupy two valleys - one located about the $(0,\pi)$ and the other about the
$(\pi,0)$ point in the 2D Brillouin zone - so in addition to two spin-
polarizations, the electrons carry a valley “pseudo-spin” index. In the
absence of shear strain, the 2DEG has a discrete $C_{4}$ rotational symmetry
that interchanges the valleys. Older studies of the 2DEG in Si MOSFETs and
modulation-doped GaAs quantum wells have explored the same general range of
correlation strengths; while there is considerable overlap in results – for
instance concerning the existence and character of a metal-insulator
transition (MIT) – a number of aspects have been seen here for the first time.
Since each realization of the 2DEG differs in some details – e.g. the
character and strength of the disorder, the geometry of the device, the
existence of a valley pseudo-spin, and the effective mass anisotropy of each
valley – both the similarities and the differences in observed behaviors are
significant. To facilitate such comparisons, it is useful to invoke the
dimensionless parameter, $r_{s}\equiv 1/(a_{B}^{\star}\sqrt{\pi n})$, where
$n$ is the areal electron density and $a_{B}^{\star}$ is the effective Bohr
radius. $r_{s}=\bar{V}/\bar{K}$ is thus the ratio of a characteristic
electron-electron interaction strength, $\bar{V}$, to a characteristic kinetic
energy, $\bar{K}$.
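For orientation, $r_{s}$ can be evaluated at the end points of the quoted density range. The effective Bohr radius requires material parameters; the AlAs values used below ($\epsilon_{r}\approx 10$, $m^{\star}\approx 0.46\,m_{e}$) are typical literature numbers adopted here only for illustration:

```python
import math

A_BOHR_NM = 0.0529          # hydrogen Bohr radius in nm
EPS_R, M_EFF = 10.0, 0.46   # assumed AlAs dielectric constant and m*/m_e

def r_s(n_cm2):
    """Interaction parameter r_s = 1 / (a_B* sqrt(pi n))."""
    a_star_nm = A_BOHR_NM * EPS_R / M_EFF  # effective Bohr radius
    n_per_nm2 = n_cm2 * 1e-14              # 1 cm^-2 = 1e-14 nm^-2
    return 1.0 / (a_star_nm * math.sqrt(math.pi * n_per_nm2))

# density range quoted in the text
print(round(r_s(2e11), 1), round(r_s(1.1e10), 1))
```

With these assumptions $r_{s}$ runs from about 11 at the highest density to nearly 47 at the lowest, bracketing the transitions at $r_{s}\approx 20$-38 discussed below.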
Because the 2DEG is buried deep in a heterostructure, many experiments that
one would like to perform are not possible - the present studies depend
entirely on measurements of the components of the resistivity tensor,
$\rho_{ab}$. However, these have been carried out with great precision as a
function of the electron density, $n$ (controlled by a remote gate $150\mu$m
from the 2DEG), applied magnetic field, both in-plane, $B_{\parallel}$, and
out-of-plane, $B_{\perp}$, shear strain to controllably break the underlying
$C_{4}$ symmetry, and temperature, $T$. In an era in which local and/or time-
resolved probes are opening new horizons, it is worth recalling that most
discoveries concerning the physics of quantum materials have stemmed from
measurements of the resistance.
Figure 1: Schematic phase diagram of the 2DEG in AlAs. Solid lines and circles
represent transitions or sharp crossovers for which direct evidence is
presented in Refs. [1, 2]. The thick line represents a first order transition
while the thin line is continuous. The dashed lines indicate boundaries
suggested in the concluding theoretical discussion of the present paper.
Abbreviations are as follows: MIT - metal-insulator transition; FMI -
Ferromagnetic insulator; FP-FMI - fully polarized FMI; WC - Wigner crystal.
Nonetheless, the absence of direct thermodynamic information means that much
about the phase diagram has to be inferred from indirect arguments. Moreover,
since the primary focus of much of the study is on $T\to 0$ quantum phases of
matter, there is an implicit assumption that no major changes in the physics
occur at new emergent scales below the base temperature of $T=0.3$K. With
these caveats, we begin by summarizing the major inferences (See also the
solid lines in the schematic phase diagram in Fig. 1.)
* •
i) For $r_{s}<r_{n}\approx 20$, there is an isotropic (i.e. $C_{4}$ invariant)
paramagnetic metallic phase.
* •
ii) There is a first order transition at $r_{s}=r_{n}\approx 20$ and $T\to 0$
to a fully valley polarized metallic phase. Since this phase spontaneously
breaks the $C_{4}$ symmetry to $C_{2}$, it is an Ising nematic phase. For
$r_{s}>r_{n}$, as a function of increasing $T$, there is a finite-$T$
continuous transition to the $C_{4}$-symmetric phase at $T_{n}(r_{s})\approx
1.2$K.
* •
iii) At $r_{s}=r_{\textrm{mit}}\approx 27$ there is an apparent MIT. There has
been considerable “philosophical” debate about what this means, given that a
precise definition of a MIT necessarily involves an extrapolation to $T=0$.
However, from a practical “physics” perspective, there is nothing subtle about
this “transition” – for $r_{s}<r_{\textrm{mit}}$, the resistivity is well
below the quantum of resistance, $\rho_{q}=h/e^{2}$, and decreases strongly
with decreasing $T$, while for $r_{s}>r_{\textrm{mit}}$, $\rho>\rho_{q}$ and
is a strongly increasing function of decreasing $T$. This is very similar to
what is seen in a variety of other semiconductor heterostructures [3].
* •
iv) For $r_{s}>r_{F}\approx 35$, the ground-state is a fully polarized
ferromagnetic insulator (FP-FMI). The evidence of this (which we find
compelling) is that the value of $B_{\parallel}$ necessary to achieve full
polarization (i.e. beyond which $\rho_{xx}$ is $B_{\parallel}$ independent)
tends to zero as $r_{s}\to r_{F}^{-}$, while for $r_{s}>r_{F}$, $\rho_{xx}$ is
essentially independent of $B_{\parallel}$. (It seems to us that it is an
interesting open question whether or not there exists a range of
$r_{F}^{\star}<r_{s}<r_{F}$ in which the 2DEG ground state is partially spin
polarized.)
* •
v) A final change in behavior was observed at $r_{\textrm{wc}}=38$; for
$r_{s}>r_{\textrm{wc}}$ the $I-V$ curves show pronounced non-linearities,
behavior that the authors of Ref. [1] associate with the existence of some
form of moderately long-range Wigner-crystalline (WC) order. While this is
likely valid in some approximate sense, given that WC long-range order is not
possible (in the presence of even weak quenched disorder) it is probably not
possible to give a precise criterion for this crossover. At any rate, also for
$r_{s}>r_{\textrm{wc}}$ the 2DEG remains ferromagnetic and increasingly
strongly insulating, the larger $r_{s}$.
Consistent with long-standing results of microscopic theory[4, 5] and with
decades of work on various realizations of the 2DEG, the simple metallic phase
- presumably a Fermi liquid - is stable up to remarkably large values of
$r_{s}$ in the present class of devices. However, this gives way to various
other phases at still larger $r_{s}$. Two features of this evolution that are
newly established are the existence of a fully orbitally polarized nematic
metal [2] and of a fully polarized ferromagnetic insulator [1]. Indeed, it
seems hard to escape the conclusion that at the largest accessible values of
$r_{s}$, the ground-state is a ferromagnetic WC, which is presumably still
nematic as well.
Where $r_{s}$ is large, the interaction energy is the largest energy in the
problem, meaning that there is no formal or intuitive justification for
applying essentially perturbative methods, such as Hartree-Fock, random-phase
approximation (RPA), or indeed any diagrammatic approach to the theoretical
analysis of this problem. At large enough $r_{s}$, the problem is amenable to
systematic strong-coupling analysis[6, 7], but strictly speaking this approach
can only be used to explore the behavior deep in the WC phase. To obtain
theoretical understanding of the phases that occur at large but finite
$r_{s}$, one must either rely on essentially variational microscopic
approaches or on more phenomenological arguments. In Fig. 1, we have attempted
to combine results from Refs. [1, 2] – indicated as solid lines – with some
speculative additions largely based on theoretical considerations – as dashed
lines.
There are two distinct theoretical arguments that lead to the conclusion that
first order transitions are forbidden in 2D. The first - based on Imry-Ma
arguments - invokes the effects of quenched disorder. The second - based on
Coulomb-frustrated phase separation - is a special feature of Coulomb
interactions in 2D [8]. Both these arguments imply that where a first order
transition is expected, instead there should occur a range of densities in
which some form of “puddle” phase arises - a mesoscopic version of phase
separation in which regions of the sample are in an approximate sense in one
of these phases and other regions are in the other. Despite this, empirically,
the nematic transition at $T=0$ appears to be first order; this can be
rationalized as it occurs where the system is highly conducting on both sides
of the transition, which leads to strong screening both of any quenched
disorder and of the long-range Coulomb interactions, likely meaning that any
such bubble phase occurs in an unobservably narrow range of $r_{s}$.
These considerations do not apply to the transition to a WC, given that the WC
is an insulating phase. Indeed, one of us and Spivak[8] have argued that the
physics generally associated with the MIT in 2DEGs is a reflection of the
existence of micro-emulsion phases consisting of regions of insulating WC and
regions of metallic liquid. This is consistent with the recent evidence [9]
that the MIT in Si MOSFETs is more of a percolation phenomenon than a true
quantum phase transition. However, it is difficult to distinguish this
intrinsic physics from the alternative disorder driven picture, in which the
coexisting regions of WC and Fermi liquid reflect subtle differences in the
local distribution of impurities or other structural defects [10].
111Concerning the role of disorder in the MIT: The fact that the MIT is
observed in the cleanest achievable 2DEGs and that the phenomena look similar
in such different platforms as Si MOSFETs and modulation-doped AlAs quantum
wells, argues that there is likely a large intrinsic character to any puddle
formation - even if at the end of the day disorder proves to be important in
pinning the puddles. We thus speculate that a “bubble” regime exists in the
phase diagram, without specifying the degree to which disorder is the
driving force. The MIT occurs when the liquid portions cease to percolate. The
backward slope shown for the left edge of the bubble regime is reminiscent of
the Pomeranchuk effect in $^{3}$He – it reflects the fact that the low energy scale
associated with exchange interactions in the WC implies that it is generally a
higher entropy phase than the liquid[8].
There is one other striking observation in [1] that can be interpreted as
evidence for the existence of such a bubble state at large $r_{s}$, i.e. deep
in the insulating regime. When a perpendicular magnetic field, $B_{\perp}$, is
applied to the system with $r_{s}>r_{\textrm{mit}}$, the longitudinal
resistance at first increases strongly, but then shows pronounced minima at
fields corresponding to a full Landau level, $\nu=1$, and a partially filled
Landau level, $\nu=1/3$. Moreover, $\rho_{xy}$ exhibits a plateau at the same
fields with values $\rho_{xy}\approx(h/e^{2})$ and $\rho_{xy}\approx
3(h/e^{2})$ respectively. However, this is not a quantum Hall liquid since at
$T=0.3$K and $r_{s}=38$, $\rho_{xx}(\nu=1)\approx 30(h/e^{2})$ and
$\rho_{xx}(\nu=1/3)\approx 75(h/e^{2})$. In a quantum Hall liquid $\rho_{xx}$
should vanish as $T\to 0$. Put another way, all components of the conductivity
tensor, $\sigma_{ab}$, are very far from their expected values in a quantum
Hall state. This behavior approximates a “quantized Hall
insulator” [11].222It is mentioned in Ref. [1] that $\rho_{xx}$ is a weakly
decreasing function of decreasing $T$ in the regime we have identified as a
quantum Hall insulator; however, in the observable range of $T$, $\rho_{xx}$
is one to two orders of magnitude larger than the quantum of resistance, and
the $T$ dependence is relatively weak. It is the behavior expected from a
macroscopic mixture of small puddles of a quantum Hall liquid in an insulating
background [12].333Note that even for $r_{s}>r_{\textrm{wc}}$, where one might
think that the system is a uniform (pinned) WC at $B_{\perp}=0$, puddles of
quantum Hall liquids might still arise at large $B_{\perp}$ since, as shown in
Ref [13], a quantum Hall liquid will typically compete more successfully with
the WC than does the Fermi liquid.
Finally, we comment a bit on the nature of the ferromagnetism. It was shown in
Ref. [7] that at asymptotically large $r_{s}$, the localized spins in the WC
of an isotropic 2DEG form a ferromagnetic state. The result is delicate – it
depends on small differences between two- and three-particle exchange
contributions – and so could be changed by all sorts of microscopic
considerations.444Taken at face value, the exchange couplings computed in Ref.
[7] would be smaller than the measurement temperatures. However, microscopic
details, such as the thickness of the 2DEG and the mass anisotropy, could
change these results both qualitatively and quantitatively. The energy scales
involved at large $r_{s}$ are, moreover, exponentially small. Still, in the
present context, it is tempting to view the ferromagnetism as being a feature
of the WC rather than of the metallic liquid. This interpretation is
consistent with the fact that the fully polarized ferromagnetic phase seems to
extend to the largest accessible values of $r_{s}$, and that it onsets only
within the insulating phase; $r_{F}>r_{\textrm{mit}}$. It would be interesting
to further explore the magnetic response of the system in the neighborhood of
$r_{\textrm{mit}}$. For instance, while the experimental evidence indicates
that full ferromagnetic polarization onsets at $r_{F}$, if some sort of puddle
state indeed occurs it would be natural to expect an onset of some degree of
ferromagnetism already at $r_{F}^{\star}<r_{F}$.
# Maximum Number of Almost Similar Triangles in the Plane
József Balogh$^{1}$, Felix Christian Clemen$^{2}$, Bernard Lidický$^{3}$
$^{1}$Department of Mathematics, University of Illinois at Urbana-Champaign,
Urbana, Illinois 61801, USA, and Moscow Institute of Physics and Technology,
Russian Federation. E-mail:<EMAIL_ADDRESS>Research is partially supported by
NSF Grant DMS-1764123, NSF RTG grant DMS 1937241, Arnold O. Beckman Research
Award (UIUC Campus Research Board RB 18132), the Langan Scholar Fund (UIUC),
and the Simons Fellowship.
$^{2}$Department of Mathematics, University of Illinois at Urbana-Champaign,
Urbana, Illinois 61801, USA. E-mail:<EMAIL_ADDRESS>
$^{3}$Department of Mathematics, Iowa State University, Ames, IA, USA.
E-mail:<EMAIL_ADDRESS>Research of this author is partially supported by NSF
grant DMS-1855653.
(August 27, 2024)
###### Abstract
A triangle $T^{\prime}$ is $\varepsilon$-similar to another triangle $T$ if
their angles pairwise differ by at most $\varepsilon$. Given a triangle $T$,
$\varepsilon>0$ and $n\in\mathbb{N}$, Bárány and Füredi asked to determine the
maximum number $h(n,T,\varepsilon)$ of triangles that are $\varepsilon$-similar
to $T$ in a planar point set of size $n$. We show that for almost all
triangles $T$ there exists $\varepsilon=\varepsilon(T)>0$ such that
$h(n,T,\varepsilon)=n^{3}/24(1+o(1))$. Exploring connections to hypergraph
Turán problems, we use flag algebras and stability techniques for the proof.
Keywords: similar triangles, extremal hypergraphs, flag algebras.
2020 Mathematics Subject Classification: 52C45, 05D05, 05C65
## 1 Introduction
Let $T,T^{\prime}$ be triangles with angles $\alpha\leq\beta\leq\gamma$ and
$\alpha^{\prime}\leq\beta^{\prime}\leq\gamma^{\prime}$ respectively. The
triangle $T^{\prime}$ is _$\varepsilon$-similar_ to $T$ if
$|\alpha-\alpha^{\prime}|<\varepsilon,|\beta-\beta^{\prime}|<\varepsilon,$ and
$|\gamma-\gamma^{\prime}|<\varepsilon$. Bárány and Füredi [5], motivated by
Conway, Croft, Erdős and Guy [7], studied the maximum number
$h(n,T,\varepsilon)$ of triangles in a planar set of $n$ points that are
$\varepsilon$-similar to a triangle $T$. For every $T$ and
$\varepsilon=\varepsilon(T)>0$ sufficiently small, Bárány and Füredi [5] found
the following lower bound construction: Place the $n$ points in three groups
with as equal sizes as possible, and each group very close to the vertices of
the triangle $T$. Now, iterate this by splitting each of the three groups into
three further subgroups of points, see Figure 1 for an illustration of this
construction. Define a sequence $h(n)$ by $h(0)=h(1)=h(2)=0$ and for $n\geq 3$
$\displaystyle h(n):=\max\\{abc+h(a)+h(b)+h(c):a+b+c=n,\
a,b,c\in\mathbb{N}\\}.$
Figure 1: Construction sketch on 27 vertices.
By the previously described construction, this sequence $h(n)$ is a lower
bound on $h(n,T,\varepsilon)$. For $T$ being an equilateral triangle equality
holds.
###### Theorem 1.1 (Bárány, Füredi[5]).
Let $T$ be an equilateral triangle. There exists $\varepsilon_{0}\geq
1^{\circ}$ such that for all $\varepsilon\in(0,\varepsilon_{0})$ and all $n$
we have $h(n,T,\varepsilon)=h(n)$. In particular, when $n$ is a power of $3$,
$h(n,T,\varepsilon)=\frac{1}{24}(n^{3}-n)$.
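The recursion defining $h(n)$ is easy to evaluate directly; the following memoized sketch (ours, not from [5]) reproduces the closed form of Theorem 1.1 for powers of $3$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def h(n):
    """h(n) = max over a+b+c=n, a<=b<=c, of a*b*c + h(a) + h(b) + h(c)."""
    if n < 3:
        return 0
    return max(a * b * (n - a - b) + h(a) + h(b) + h(n - a - b)
               for a in range(1, n // 3 + 1)
               for b in range(a, (n - a) // 2 + 1))

# For n a power of 3 the maximum is attained by the equal split,
# giving h(n) = (n^3 - n)/24.
assert [h(3), h(9), h(27)] == [1, 30, 819]
```

For instance, $h(10)=40$, attained by the split $3+3+4$.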
Bárány and Füredi [5] also found various examples of triangles $T$ (e.g. the
isosceles right-angled triangle) where $h(n,T,\varepsilon)$ is larger than
$h(n)$.
The space of triangle shapes $S\subseteq\mathbb{R}^{3}$ can be represented
with triples $(\alpha,\beta,\gamma)\in\mathbb{R}^{3}$ of angles
$\alpha,\beta,\gamma>0$ with $\alpha+\beta+\gamma=\pi$. When we make
statements about almost every triangle, we mean it in a measure-theoretic
sense, i.e. that there exists a set $S^{\prime}\subseteq S$ of
$2$-dimensional Lebesgue measure $0$ such that the statement holds for
all triangles $T\in S\setminus S^{\prime}$. In [5] it was also proved that
$h(n,T,\varepsilon)$ can only be slightly larger than $h(n)$ for almost every
triangle $T$.
###### Theorem 1.2 (Bárány, Füredi [5]).
For almost every triangle $T$ there is an $\varepsilon>0$ such that
$\displaystyle h(n,T,\varepsilon)\leq 0.25072\binom{n}{3}(1+o(1)).$
The previously described construction gives a lower bound of
$0.25\binom{n}{3}(1+o(1))$. Bárány and Füredi [5] reduced the problem of
determining $h(n,T,\varepsilon)$ to a hypergraph Turán problem and used the
method of flag algebras to get an upper bound on the corresponding Turán
problem. Flag algebras is a powerful tool invented by Razborov [13], which has
been used to solve problems in various areas, including graph theory
[10, 12], permutations [3, 14] and discrete geometry [4, 11]. An obstacle
Bárány and Füredi [5] encountered is that the conjectured extremal example is
an iterative construction and flag algebras tend to struggle with those. We
will overcome this issue by using flag algebras only to prove a weak stability
result and then use cleaning techniques to identify the recursive structure.
Similar ideas have been used in [1] and [2]. This allows us to prove the
asymptotic result and for large enough $n$ an exact recursion.
###### Theorem 1.3.
For almost every triangle $T$ there is an $\varepsilon=\varepsilon(T)>0$ such
that
$\displaystyle h(n,T,\varepsilon)=\frac{1}{4}\binom{n}{3}(1+o(1)).$ (1)
###### Theorem 1.4.
There exists $n_{0}$ such that for all $n\geq n_{0}$ and for almost every
triangle $T$ there is an $\varepsilon=\varepsilon(T)>0$ such that
$\displaystyle h(n,T,\varepsilon)=a\cdot b\cdot
c+h(a,T,\varepsilon)+h(b,T,\varepsilon)+h(c,T,\varepsilon),$ (2)
where $n=a+b+c$ and $a,b,c$ are as equal as possible.
We will observe that Theorem 1.4 implies the exact result when $n$ is a power
of $3$.
###### Corollary 1.5.
Let $n$ be a power of $3$. Then, for almost every triangle $T$ there is an
$\varepsilon=\varepsilon(T)>0$ such that
$\displaystyle h(n,T,\varepsilon)=\frac{1}{24}(n^{3}-n).$
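As a short consistency check (not in [5]): for $n=3m$ the parts in (2) are all equal to $m$, so assuming $h(m,T,\varepsilon)=\frac{1}{24}(m^{3}-m)$ for $m$ a power of $3$ we get
$\displaystyle h(3m,T,\varepsilon)=m^{3}+3\cdot\frac{m^{3}-m}{24}=\frac{9m^{3}-m}{8}=\frac{(3m)^{3}-3m}{24},$
and the base case is $h(3,T,\varepsilon)=1=\frac{1}{24}(3^{3}-3)$.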
The paper is organized as follows. In Section 2 we introduce terminology and
notation that we use, we establish a connection from maximizing the number of
similar triangles to Turán problems; and we apply flag algebras in our setting
to derive a weak stability result. In Section 3 we apply cleaning techniques
to improve the stability result and derive our main results. Finally, in
Section 4 we discuss further questions.
## 2 Preparation
### 2.1 Terminology and Notation
###### Definition 2.1.
Let $G$ be a $3$-uniform hypergraph (shortly a $3$-graph), $\mathcal{H}$ be a
family of $3$-graphs, $v\in V(G)$ and $A,B\subseteq V(G)$. Then,
* •
$G$ is _$\mathcal{H}$ -free_, if it does not contain a copy of any
$H\in\mathcal{H}$.
* •
a $3$-graph $G$ on $n$ vertices is _extremal_ with respect to $\mathcal{H}$,
if $G$ is $\mathcal{H}$-free and $e(G^{\prime})\leq e(G)$ for every
$\mathcal{H}$-free 3-graph $G^{\prime}$ on $n$ vertices. If it is clear from
context, we only say $G$ is extremal.
* •
for $a,b\in V(G)$, denote $N(a,b)$ the _neighborhood_ of $a$ and $b$, i.e. the
set of vertices $c\in V(G)$ such that $abc\in E(G)$.
* •
we write $L(v)$ for the _linkgraph_ of $v$, that is the graph $G^{\prime}$
with $V(G^{\prime})=V(G)\setminus\\{v\\}$ and $E(G^{\prime})$ being the set of
all pairs $\\{a,b\\}$ with $abv\in E(G)$.
* •
we write $L_{A}(v)$ for the linkgraph of $v$ on $A$, that is the graph
$G^{\prime}$ with $V(G^{\prime})=A\setminus\\{v\\}$ and $E(G^{\prime})$ being
the set of all pairs $\\{a,b\\}\subseteq A\setminus\\{v\\}$ with $abv\in E(G)$.
* •
we write $L_{A,B}(v)$ for the (bipartite) linkgraph of $v$ on $A\cup B$, that
is the graph $G^{\prime}$ with $V(G^{\prime})=(A\cup B)\setminus\\{v\\}$ and
$E(G^{\prime})$ being the set of all pairs $\\{a,b\\}$ with $a\in A,b\in B$ and
$abv\in E(G)$.
* •
we denote by $|L(v)|,|L_{A}(v)|$ and $|L_{A,B}(v)|$ the number of edges of the
linkgraphs $L(v),L_{A}(v)$ and $L_{A,B}(v)$ respectively.
Define a $3$-graph $S(n)$ on $n$ vertices recursively. For $n=1,2$, let $S(n)$
be the $3$-graph on $n$ vertices with no edges. For $n\geq 3$, choose $a\geq
b\geq c$ as equal as possible such that $n=a+b+c$. Then, define $S(n)$ to be
the $3$-graph constructed by taking vertex disjoint copies of $S(a),S(b)$ and
$S(c)$ and adding all edges with all $3$ vertices coming from a different
copy. Bárány and Füredi [5] observed that $|S(n)|\geq\frac{1}{24}n^{3}-O(n\log
n)$.
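The edge count of $S(n)$ satisfies the same recursion as $h(n)$ with the as-equal split fixed; a short memoized sketch (ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s_edges(n):
    """|E(S(n))|: split n into parts a >= b >= c as equal as possible."""
    if n < 3:
        return 0
    a, c = (n + 2) // 3, n // 3          # ceil(n/3) and floor(n/3)
    b = n - a - c
    return a * b * c + s_edges(a) + s_edges(b) + s_edges(c)

# Powers of 3 give exactly (n^3 - n)/24, e.g. |E(S(27))| = 819.
```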
Given a set $B\subseteq\mathbb{C}$ and $\delta>0$, we call the set
$U_{\delta}(B):=\\{z:|z-b|<\delta\text{ for some }b\in B\\}$ the
$\delta$-_neighborhood_ of $B$. If $B=\\{b\\}$ for some $b\in\mathbb{C}$,
abusing notation, we write $U_{\delta}(b)$ for it.
### 2.2 Forbidden subgraphs
Given a finite point set $P\subseteq\mathbb{R}^{2}$ in the plane, a triangle
$T\in S$ and an $\varepsilon>0$, we denote $G(P,T,\varepsilon)$ the $3$-graph
with vertex set $V(G(P,T,\varepsilon))=P$ and triples $abc$ being an edge in
$G(P,T,\varepsilon)$ iff $abc$ forms a triangle $\varepsilon$-similar to $T$.
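The construction of $G(P,T,\varepsilon)$ can be sketched directly (function names are ours): compute the sorted interior angles of each triple and compare them against those of $T$.

```python
import math
from itertools import combinations

def sorted_angles(p, q, r):
    """Sorted interior angles of the triangle pqr (radians)."""
    def at(a, b, c):  # angle at vertex a
        v, w = (b[0] - a[0], b[1] - a[1]), (c[0] - a[0], c[1] - a[1])
        cos = (v[0] * w[0] + v[1] * w[1]) / (math.hypot(*v) * math.hypot(*w))
        return math.acos(max(-1.0, min(1.0, cos)))
    return sorted((at(p, q, r), at(q, p, r), at(r, p, q)))

def g_edges(P, T_angles, eps):
    """Edges of G(P, T, eps): index triples of P that are eps-similar to T."""
    T = sorted(T_angles)
    return [t for t in combinations(range(len(P)), 3)
            if all(abs(x - y) < eps
                   for x, y in zip(sorted_angles(*(P[i] for i in t)), T))]
```

For the four corners of a unit square and $T$ the isosceles right triangle, all four triples are edges.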
A $3$-graph $H$ is called _forbidden_ if $|V(H)|\leq 12$ and for almost every
triangle shape $T\in S$ there exists an $\varepsilon=\varepsilon(T)>0$ such
that for every point set $P\subseteq\mathbb{R}^{2}$, $G(P,T,\varepsilon)$ is
$H$-free. Denote $\mathcal{F}$ the family of all forbidden $3$-graphs and
$\mathcal{T}_{\mathcal{F}}\subseteq S$ the set of all triangles $T$ such that
there exists $\varepsilon=\varepsilon(T)>0$ such that for every point set
$P\subseteq\mathbb{R}^{2}$, $G(P,T,\varepsilon)$ is $\mathcal{F}$-free. Given
$T\in\mathcal{T}_{\mathcal{F}}$, we fix some positive real number
$\varepsilon(T)>0$ such that for every point set $P\subseteq\mathbb{R}^{2}$,
$G(P,T,\varepsilon(T))$ is $\mathcal{F}$-free.
In our definition of forbidden $3$-graphs we restrict the size to be at most
$12$. The reason we choose the number $12$ is that the largest forbidden
subgraph we need for our proof has size $12$, and we try to keep the family
$\mathcal{F}$ small.
We will prove Theorem 1.3, Theorem 1.4 and Corollary 1.5 for all triangles
$T\in\mathcal{T}_{\mathcal{F}}$. Note that by the definition of $\mathcal{F}$,
almost all triangles are in $\mathcal{T}_{\mathcal{F}}$. Bárány and Füredi [5]
determined the following hypergraphs to be members of $\mathcal{F}$.
###### Lemma 2.2 (Bárány and Füredi [5], see Lemma 11.2).
The following hypergraphs are members of $\mathcal{F}$.
* •
$K_{4}^{-}=\\{123,124,134\\}$
* •
$C_{5}^{-}=\\{123,124,135,245\\}$
* •
$C_{5}^{+}=\\{126,236,346,456,516\\}$
* •
$L_{2}=\\{123,124,125,136,456\\}$
* •
$L_{3}=\\{123,124,135,256,346\\}$
* •
$L_{4}=\\{123,124,156,256,345\\}$
* •
$L_{5}=\\{123,124,135,146,356\\}$
* •
$L_{6}=\\{123,124,145,346,356\\}$
* •
$P_{7}^{-}=\\{123,145,167,246,257,347\\}$
For the non-computer-assisted part of our proof, we will need to extend this
list. For the computer-assisted part, we excluded additional graphs on $7$ and
$8$ vertices.
###### Lemma 2.3.
The following hypergraphs are members of $\mathcal{F}$.
* •
$L_{7}=\\{123,124,125,136,137,458,678\\}$
* •
$L_{8}=\\{123,124,125,136,137,468,579,289\\}$
* •
$L_{9}=\\{123,124,125,136,237,469,578,189\\}$
* •
$L_{10}=\\{123,124,125,126,137,138,239,58a,47b,69c,abc\\}.$
Note that this is not the complete list. To verify that those hypergraphs are
forbidden, we will use the same method as Bárány and Füredi [5] used to
show that the hypergraphs from Lemma 2.2 are forbidden. For the sake of
completeness, we repeat their argument here.
###### Proof.
We call a $3$-graph $H$ on $r$ vertices dense if there exists a vertex
ordering $v_{1},v_{2},\ldots,v_{r}$ such that for every $3\leq i\leq r-1$
there exists exactly one edge $e_{i}\in E(H[\\{v_{1},\ldots,v_{i}\\}])$
containing $v_{i}$, and there exist exactly two edges $e_{r},e_{r}^{\prime}$
containing $v_{r}$. Note that $L_{7},L_{8},L_{9}$ and $L_{10}$ are dense.
For convenience, we will work with a different representation of triangle
shapes. A triangle shape $T\in S$ is characterized by a complex number
$z\in\mathbb{C}\setminus\mathbb{R}$ such that the triangle with vertices
$0,1,z$ is similar to $T$. Note that there are at most twelve complex numbers
$w$ such that the triangle $\\{0,1,w\\}$ is similar to $T$.
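These (at most) twelve values are the images of $z$ under the six Möbius maps permuting $\\{0,1,\infty\\}$ (the anharmonic group) together with their complex conjugates; a quick sketch (ours), which also shows the collapse to just two values for the equilateral triangle used later:

```python
def similarity_copies(z, tol=1e-9):
    """All w with triangle {0, 1, w} similar to {0, 1, z} (at most 12)."""
    orbit = [z, 1 - z, 1 / z, 1 / (1 - z), (z - 1) / z, z / (z - 1)]
    out = []
    for w in orbit + [v.conjugate() for v in orbit]:
        if all(abs(w - u) > tol for u in out):  # deduplicate
            out.append(w)
    return out

# A scalene triangle yields 12 distinct values; the equilateral one only 2.
```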
Let $H$ be a dense hypergraph on $r$ vertices with vertex ordering
$v_{1},\ldots,v_{r}$ and let $P=\\{p_{1},\ldots,p_{r}\\}$
$\subseteq\mathbb{R}^{2}$ be a point set such that $G(P,T,\varepsilon)$
contains $H$ (with $p_{i}$ corresponding to $v_{i}$), where $\varepsilon$ is
small enough such that the following argument holds. Let $\delta>0$ be
sufficiently small. Without loss of generality, we can assume that
$p_{1}=(0,0)$ and $p_{2}=(1,0)$. Now, since $H$ is dense, $v_{1}v_{2}v_{3}\in
E(H)$ and therefore $p_{1}p_{2}p_{3}$ forms a triangle $\varepsilon$-similar
to $T$. Therefore, there exist at most $12$ points (which are functions in
$z$) such that $p_{3}$ is in a $\delta$-neighborhood of one of them. Since
$v_{4}$ is contained in some edge with vertices from
$\\{v_{1},v_{2},v_{3},v_{4}\\}$, there are at most $12\cdot 12=144$ points
(which are functions in $z$) such that $p_{4}$ is in a $\delta$-neighborhood
of one of them. Continuing this argument, we find functions $f_{i,j}(z)$ in
$z$ where $3\leq i\leq r-1$ and $j\leq 12^{r-3}$ such that
$\displaystyle(p_{3},p_{4},\ldots,p_{r-1})\in U_{\delta}(f_{3,j}(z))\times
U_{\delta}(f_{4,j}(z))\times\ldots\times U_{\delta}(f_{r-1,j}(z))$
for some $j\leq 12^{r-3}$. Since $H$ is dense, $v_{r}$ is contained in exactly
two edges $e_{r}$ and $e_{r}^{\prime}$. For each $j\leq 12^{r-3}$, because
$v_{r}\in e_{r}$, there exist at most $12$ points $f_{r,j,\ell}(z)$ where
$\ell\leq 12$ such that
$\displaystyle p_{r}\in U_{\delta}\left(f_{r,j,\ell}(z)\right).$
Similarly, because $v_{r}\in e_{r}^{\prime}$, there exist at most $12$ points
$g_{r,j,\ell^{\prime}}(z)$ where $\ell^{\prime}\leq 12$ such that
$\displaystyle p_{r}\in U_{\delta}\left(g_{r,j,\ell^{\prime}}(z)\right).$
Thus,
$\displaystyle p_{r}\in\bigcup_{\ell,\ell^{\prime}\leq
12}U_{\delta}\left(f_{r,j,\ell}(z)\right)\cap
U_{\delta}\left(g_{r,j,\ell^{\prime}}(z)\right).$ (3)
Note that if there exists a $z$ such that for each $1\leq j\leq 12^{r-3}$ none
of the equations
$\displaystyle f_{r,j,\ell}(z)=g_{r,j,\ell^{\prime}}(z),\quad\quad
1\leq\ell,\ell^{\prime}\leq 12$ (4)
hold, then we can choose $\varepsilon>0$ such that
$\displaystyle\delta<\frac{1}{3}\min_{\ell,\ell^{\prime}}|f_{r,j,\ell}(z)-g_{r,j,\ell^{\prime}}(z)|,$
(5)
and therefore the set in (3) is empty, contradicting that $G(P,T,\varepsilon)$
contains a copy of $H$. Note that, because of (5), $\varepsilon$ depends on
$z$ and therefore on $T$. If we can find one $z\in\mathbb{C}$ not satisfying
any of the equations in (4), then each of the equations is non-trivial (the
solution space is not $\mathbb{C}$). Thus, for each equation the solution set
has Lebesgue measure 0. Since there are only at most $12^{r-2}$ equations, the
union of the solution sets still has measure $0$. Thus, we can conclude that
for almost all triangles $T$ there exists $\varepsilon$ such that
$G(P,T,\varepsilon)$ is $H$-free for every point set $P$. It remains to show
that for $H\in\\{L_{7},L_{8},L_{9},L_{10}\\}$ there exists $z\in\mathbb{C}$
not satisfying any of the equations in (4). We will show this for a $z$
corresponding to the equilateral triangle
($z=\frac{1}{2}+i\cdot\frac{\sqrt{3}}{2}$). For $T$ being the equilateral
triangle, there are at most $2^{r-2}$ equations to check. Because of the large
number of cases, we use a computer to verify this.
Our computer program is a simple brute force recursive approach. It starts by
embedding $p_{1}=(0,0)$ and $p_{2}=(1,0)$. For each subsequent $3\leq i\leq r$
it tries both options for embedding $p_{i}$ dictated by $e_{i}$. Finally, it
checks if the points forming $e^{\prime}_{r}$ form an equilateral triangle. If
in none of the $2^{r-2}$ generated point configurations the points of
$e^{\prime}_{r}$ form an equilateral triangle, then $H$ is a member of
$\mathcal{F}$. An implementation of this algorithm in Python is available at
http://lidicky.name/pub/triangle. This completes the proof of Lemma 2.3. ∎
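The brute-force check can be sketched as follows (ours, not the linked implementation): points are complex numbers, each new vertex is one of two $\pm 60^{\circ}$ rotations determined by its placement edge, and the final edge is tested for equilaterality. Here `edges[i]` gives the pair of earlier (0-indexed) vertices used to place vertex $i$, and `last_edge` plays the role of $e^{\prime}_{r}$.

```python
import cmath, math

ROT = cmath.exp(1j * math.pi / 3)  # rotation by 60 degrees

def is_equilateral(a, b, c, tol=1e-9):
    s = sorted((abs(a - b), abs(b - c), abs(c - a)))
    return s[0] > tol and s[2] - s[0] < tol

def forbidden_for_equilateral(r, edges, last_edge):
    """True if none of the 2^(r-2) embeddings makes last_edge equilateral."""
    def rec(pts):
        if len(pts) == r:
            return not is_equilateral(*(pts[k] for k in last_edge))
        u, v = edges[len(pts)]
        return all(rec(pts + [pts[u] + (pts[v] - pts[u]) * rot])
                   for rot in (ROT, ROT.conjugate()))
    return rec([0j, 1 + 0j])

# K4^- = {123, 124, 134}: place vertices 3 and 4 on the base {1, 2},
# then check whether {1, 3, 4} can be equilateral (it cannot).
assert forbidden_for_equilateral(4, {2: (0, 1), 3: (0, 1)}, (0, 2, 3))
```

This confirms the first entry of Lemma 2.2; the graphs of Lemma 2.3 need the full edge bookkeeping of a dense vertex ordering.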
Instead of Theorem 1.3 we will actually prove the following stronger result.
###### Theorem 2.4.
We have
$\displaystyle\textup{ex}(n,\mathcal{F})=0.25\binom{n}{3}(1+o(1)).$
First, we observe that Theorem 2.4 implies Theorem 1.3. Let
$P\subseteq\mathbb{R}^{2}$ be a point set of size $n$ and let
$T\in\mathcal{T}_{\mathcal{F}}$. Then, $G(P,T,\varepsilon(T))$ is
$\mathcal{F}$-free. Now, the number of triangles $\varepsilon(T)$-similar to
$T$ equals the number of edges in $G(P,T,\varepsilon(T))$. Since
$G(P,T,\varepsilon(T))$ is $\mathcal{F}$-free, we have
$\displaystyle h(n,T,\varepsilon(T))\leq\textup{ex}(n,\mathcal{F}).$
Therefore, Theorem 2.4 implies Theorem 1.3.
### 2.3 A structural result via Flag Algebras
It is a standard application of flag algebras to determine an upper bound for
$\textup{ex}(n,\mathcal{G})$ given a family $\mathcal{G}$ of 3-uniform
hypergraphs. Running the method of flag algebras on $7$ vertices, Bárány and
Füredi [5] obtained
$\displaystyle\textup{ex}(n,\mathcal{F})\leq\textup{ex}(n,\\{K_{4}^{-},C_{5}^{-},C_{5}^{+},L_{2},L_{3},L_{4},L_{5},L_{6},P_{7}^{-}\\})\leq
0.25072\binom{n}{3}(1+o(1)).$ (6)
It is conjectured in [9] that
$\textup{ex}(n,\\{K_{4}^{-},C_{5}\\})=0.25\binom{n}{3}(1+o(1))$. We note that
when running flag algebras on $8$ vertices and forbidding more $3$-graphs in
$\mathcal{F}$, then we can obtain the following improved bound.
$\displaystyle\textup{ex}(n,\mathcal{F})\leq 0.2502\binom{n}{3}(1+o(1)).$ (7)
Note that Conjecture 4.2 is a significant strengthening of (6) and (7). We use
flag algebras to prove a stability result. For an excellent explanation of
flag algebras in the setting of $3$-graphs see [9]. Here, we will focus on the
formulation of the problem rather than providing a formal explanation of the
general method. As a consequence, we obtain the following lemma, which gives
the first rough structure of extremal constructions. This approach was
developed in [1] and [2].
###### Lemma 2.5.
Let $n\in\mathbb{N}$ be sufficiently large and let $G$ be an
$\mathcal{F}$-free $3$-graph on $n$ vertices with at least
$\frac{1}{24}n^{3}(1+o(1))$ edges. Then there exists an edge $x_{1}x_{2}x_{3}\in E(G)$
such that for $n$ large enough
1. (i)
the neighborhoods $N(x_{1},x_{2}),N(x_{2},x_{3})$, and $N(x_{1},x_{3})$ are
pairwise disjoint.
2. (ii)
$\min\\{|N(x_{1},x_{2})|,|N(x_{2},x_{3})|,|N(x_{1},x_{3})|\\}\geq 0.26n.$
3. (iii)
$n-|N(x_{1},x_{2})|-|N(x_{2},x_{3})|-|N(x_{1},x_{3})|\leq 0.012n.$
###### Proof.
Denote $T_{i,j,k}$ the family of $3$-graphs that are obtained from a complete
$3$-partite $3$-graph with part sizes $i$, $j$ and $k$ by adding
$\mathcal{F}$-free $3$-graphs in each of the three parts. Let $X$ be a
subgraph of $G$ isomorphic to $T_{2,2,1}$ on vertices
$x_{1},x_{1}^{\prime},x_{2},x_{2}^{\prime},x_{3}$ with edges
$x_{1}x_{2}x_{3},x_{1}x_{2}^{\prime}x_{3},x_{1}^{\prime}x_{2}x_{3},x_{1}^{\prime}x_{2}^{\prime}x_{3}$.
Further, define
$\displaystyle A_{1}$ $\displaystyle:=N(x_{2},x_{3})\cap
N(x_{2}^{\prime},x_{3}),$ $\displaystyle A_{3}$
$\displaystyle:=N(x_{1},x_{2})\cap N(x_{1}^{\prime},x_{2})\cap
N(x_{1},x_{2}^{\prime})\cap N(x_{1}^{\prime},x_{2}^{\prime}),$ $\displaystyle
A_{2}$ $\displaystyle:=N(x_{1},x_{3})\cap N(x_{1}^{\prime},x_{3}),$
$\displaystyle J$ $\displaystyle:=V(G)\setminus\left(A_{1}\cup A_{2}\cup
A_{3}\right).$
Let $a_{i}:=|A_{i}|/n$ for $1\leq i\leq 3$. Note that $V(G)=A_{1}\cup
A_{2}\cup A_{3}\cup J$ is a partition, because the sets
$N(x_{1},x_{2}),N(x_{1},x_{3})$ and $N(x_{2},x_{3})$ are pairwise disjoint.
Indeed, without loss of generality, assume $N(x_{1},x_{2})\cap
N(x_{1},x_{3})\neq\emptyset$. Let $v\in N(x_{1},x_{2})\cap N(x_{1},x_{3})$.
Then $v,x_{1},x_{2},x_{3}$ span at least $3$ edges and therefore $G$ contains
a copy of $K_{4}^{-}$, a contradiction. We choose $X$ such that
$\displaystyle
a_{1}a_{2}+a_{1}a_{3}+a_{2}a_{3}-\frac{1}{4}\left(a_{1}^{2}+a_{2}^{2}+a_{3}^{2}\right)$
(8)
is maximized. Flag algebras can be used to give a lower bound on the expected
value of (8) for $X$ chosen uniformly at random and therefore also a lower
bound on (8) when $X$ is chosen to maximize (8).
Let $Z$ be a fixed _labeled_ subgraph of $G$ belonging to
$T_{i^{\prime},j^{\prime},k^{\prime}}$. Denote by $T_{i,j,k}(Z)$ the family of
subgraphs of $G$ that contain $Z$, belong to $T_{i,j,k}$, where
$i^{\prime}\leq i$, $j^{\prime}\leq j$, and $k^{\prime}\leq k$, and the
natural three parts of $Z$ are mapped to the same 3 parts in $T_{i,j,k}(Z)$.
The normalized number of $T_{i,j,k}(Z)$ is
$t_{i,j,k}(Z):=\frac{|T_{i,j,k}(Z)|}{\binom{n-|V(Z)|}{i+j+k-|V(Z)|}}.$
The subgraphs of $G$ isomorphic to $T_{i,j,k}$ are denoted by
$T_{i,j,k}(\emptyset)$. The normalized number is
$t_{i,j,k}:=\frac{|T_{i,j,k}(\emptyset)|}{\binom{n}{i+j+k}}.$
Notice that $a_{1}=t_{3,2,1}(X)+o(1)$, $2a_{1}a_{2}=t_{3,3,1}(X)+o(1)$, and
$a_{1}^{2}=t_{4,3,1}(X)+o(1)$. We start with (8) and obtain the following.
$\displaystyle\left(a_{1}a_{2}+a_{1}a_{3}+a_{2}a_{3}-\frac{1}{4}\left(a_{1}^{2}+a_{2}^{2}+a_{3}^{2}\right)\right)n^{2}$
$\displaystyle=\left(2a_{1}a_{2}+2a_{1}a_{3}+2a_{2}a_{3}-\frac{1}{2}\left(a_{1}^{2}+a_{2}^{2}+a_{3}^{2}\right)\right)\binom{n-5}{2}+o(n^{2})$
$\displaystyle=\left(t_{3,3,1}(X)+t_{3,2,2}(X)+t_{2,3,2}(X)-\frac{1}{2}\left(t_{4,2,1}(X)+t_{2,4,1}(X)+t_{2,2,3}(X)\right)\right)\binom{n-5}{2}+o(n^{2})$
$\displaystyle\geq\frac{1}{t_{2,2,1}\binom{n}{5}}\Bigg(\sum_{Y\in
T_{2,2,1}(\emptyset)}\Big(t_{3,3,1}(Y)+t_{3,2,2}(Y)+t_{2,3,2}(Y)-\frac{1}{2}\left(t_{4,2,1}(Y)+t_{2,4,1}(Y)+t_{2,2,3}(Y)\right)\Big)\Bigg)\binom{n-5}{2}+o(n^{2})$
$\displaystyle\geq\frac{1}{t_{2,2,1}\binom{n}{5}}\left(9\,t_{3,3,1}+12\,t_{3,2,2}-\frac{1}{2}\left(6\,t_{4,2,1}+3\,t_{2,2,3}\right)\right)\binom{n}{7}+o(n^{2})$
$\displaystyle=\frac{1}{7\,t_{2,2,1}}\left(3\,t_{3,3,1}+3.5\,t_{3,2,2}-\,t_{4,2,1}\right)\binom{n-5}{2}+o(n^{2}).$
###### Claim 2.6.
Using flag algebras, we get that if $\,t_{1,1,1}\geq 0.25$ then
$\frac{1}{7\,t_{2,2,1}}\left(3\,t_{3,3,1}+3.5\,t_{3,2,2}-\,t_{4,2,1}\right)\geq\frac{1.2814228}{7\cdot
0.37502377}>0.48813.$
The calculations for Claim 2.6 are computer assisted; we use CSDP [6] to
calculate numerical solutions of semidefinite programs. The data files and
programs for the calculations are available at
http://lidicky.name/pub/triangle. Claim 2.6 gives a lower bound on (8) as
follows
$\displaystyle
a_{1}a_{2}+a_{1}a_{3}+a_{2}a_{3}-\frac{1}{4}\left(a_{1}^{2}+a_{2}^{2}+a_{3}^{2}\right)\geq\frac{1.2814228}{14\cdot
0.37502377}>0.24406.$ (9)
Notice that if $a_{1}=a_{2}=a_{3}=\frac{1}{3}$, then (8), which is the left
hand side of (9), is $0.25$. The conclusions (ii) and (iii) of Lemma 2.5 can
be obtained from (9). Indeed, assume $a_{1}<0.26$. Then,
$\displaystyle
a_{1}a_{2}+a_{1}a_{3}+a_{2}a_{3}-\frac{1}{4}\left(a_{1}^{2}+a_{2}^{2}+a_{3}^{2}\right)$
$\displaystyle\leq
a_{1}(1-a_{1})+\left(\frac{1-a_{1}}{2}\right)^{2}-\frac{1}{4}\left(a_{1}^{2}+2\left(\frac{1-a_{1}}{2}\right)^{2}\right)$
$\displaystyle=-\frac{9}{8}a_{1}^{2}+\frac{3}{4}a_{1}+\frac{1}{8}<-\frac{9}{8}\cdot 0.26^{2}+\frac{3}{4}\cdot 0.26+\frac{1}{8}=0.24395,$
contradicting (9). Thus, we have $a_{1}\geq 0.26$, concluding (ii). Next,
assume $a_{1}+a_{2}+a_{3}\leq 0.988$. Then,
$\displaystyle
a_{1}a_{2}+a_{1}a_{3}+a_{2}a_{3}-\frac{1}{4}\left(a_{1}^{2}+a_{2}^{2}+a_{3}^{2}\right)$
$\displaystyle\leq
a_{1}(0.988-a_{1})+\left(\frac{0.988-a_{1}}{2}\right)^{2}-\frac{1}{4}\left(a_{1}^{2}+2\left(\frac{0.988-a_{1}}{2}\right)^{2}\right)$
$\displaystyle=-\frac{9}{8}a_{1}^{2}+0.741a_{1}+0.122018\leq\frac{61009}{250000}<0.244037,$
where in the last step we used that the maximum is obtained at
$a_{1}=247/750$. This contradicts (9). Thus, we have $a_{1}+a_{2}+a_{3}\geq
0.988$, concluding (iii). ∎
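Both quadratic estimates in the proof can be verified in exact arithmetic; a small sketch (ours) using rationals:

```python
from fractions import Fraction as F

# Case a1 < 0.26: the quadratic -9/8*a^2 + 3/4*a + 1/8 is increasing
# for a < 1/3, so its value at a = 0.26 bounds the expression (8).
f = lambda a: -F(9, 8) * a ** 2 + F(3, 4) * a + F(1, 8)
assert f(F(26, 100)) == F(4879, 20000)            # = 0.24395 < 0.24406

# Case a1 + a2 + a3 <= 0.988: -9/8*a^2 + 0.741*a + 0.122018 peaks at
# a* = q/(2p) = 247/750 with value 61009/250000 = 0.244036 < 0.244037.
p, q, c = F(9, 8), F(741, 1000), F(122018, 1000000)
assert q / (2 * p) == F(247, 750)
assert c + q ** 2 / (4 * p) == F(61009, 250000)
```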
In the proof of Lemma 2.5, we chose a suitable copy of $T_{2,2,1}$ to find the
initial 3-partition. One could follow the same approach starting with base
$T_{1,1,1}$ instead. However, the resulting bounds would be weaker and not
sufficient for the rest of the proof. This is caused by obtaining a lower
bound on (8) by taking a random base.
## 3 Proofs of the main results
In this section, we will strengthen our flag algebra result Lemma 2.5 by
applying cleaning techniques.
### 3.1 The top layer
###### Lemma 3.1.
Let $G$ be an $\mathcal{F}$-free $3$-graph on $n$ vertices with $|E(G)|\geq
\frac{1}{24}n^{3}(1+o(1))$, satisfying $|L(w)|\geq\frac{1}{8}n^{2}(1+o(1))$ for every
$w\in V(G)$. Then there exists an edge $x_{1}x_{2}x_{3}\in E(G)$ such that for
$\displaystyle A_{1}:=N(x_{2},x_{3}),\ \ A_{2}:=N(x_{1},x_{3}),\ \
A_{3}:=N(x_{1},x_{2}),\ \ J:=V(G)\setminus(A_{1}\cup A_{2}\cup A_{3}),$
$\displaystyle A_{1}^{\prime}:=A_{1}\setminus\\{x_{1}\\},\ \
A_{2}^{\prime}:=A_{2}\setminus\\{x_{2}\\},\ \ \text{and}\ \
A_{3}^{\prime}:=A_{3}\setminus\\{x_{3}\\}$
we have for $n$ sufficiently large
* (a)
$0.26n\leq|A_{i}|\leq 0.48n$ for $i\in[3]$.
* (b)
$|J|\leq 0.012n$.
* (c)
No triple $abc$ with $a,b\in A_{i}^{\prime}$ and $c\in A_{j}^{\prime}$ for
some $i,j\in[3],i\neq j$ forms an edge.
* (d)
For $v\in V(G)\setminus\\{x_{1},x_{2},x_{3}\\},\ w_{1},w_{2}\in
A_{i}^{\prime}$, $u_{1},u_{2}\in A_{j}^{\prime}$ with $i,j\in[3]$ and $i\neq
j$ we have $vw_{1}w_{2}\not\in E(G)$ or $vu_{1}u_{2}\not\in E(G)$.
* (e)
For every $v\in V(G)\setminus\\{x_{1},x_{2},x_{3}\\}$, there exists $i\in[3]$
such that $|L_{A_{j},A_{k}}(v)|\geq 0.001n^{2}$, where $j,k\in[3],j\neq
k,j\neq i,k\neq i$.
###### Proof.
Apply Lemma 2.5 and get an edge $x_{1}x_{2}x_{3}$ with the properties from
Lemma 2.5. The sets $A_{1},A_{2},A_{3}$ are pairwise disjoint.
###### Claim 3.2.
Properties (a)–(c) hold.
###### Proof.
Note that $(a)$ and $(b)$ hold by Lemma 2.5. To prove $(c)$, assume that there
exists $abc\in E(G)$ with $a,b\in A_{i}^{\prime}$ and $c\in A_{j}^{\prime}$
for some $i,j\in[3],i\neq j$. Let $k\in[3],k\neq i,k\neq j$. See Figure 2 for
an illustration. Now,
$\displaystyle x_{i}x_{j}x_{k},abc,x_{j}x_{k}a,x_{j}x_{k}b,cx_{i}x_{k}\in
E(G).$
Figure 2: Situation in Claim 3.2.
Figure 3: Situation in Claim 3.3.
Therefore $G$ contains a copy of $L_{2}$ on $\\{x_{1},x_{2},x_{3},a,b,c\\}$, a
contradiction. ∎
###### Claim 3.3.
Property (d) holds.
###### Proof.
Towards contradiction, assume that there exists $v\in
V(G)\setminus\\{x_{1},x_{2},x_{3}\\},w_{1},w_{2}\in A_{i}^{\prime}$,
$u_{1},u_{2}\in A_{j}^{\prime}$ for $i,j\in[3]$ with $i\neq j$ such that
$vw_{1}w_{2}\in E(G)$ and $vu_{1}u_{2}\in E(G)$. Let $k\in[3],k\neq i,k\neq
j$. See Figure 3 for an illustration. Now,
$\\{x_{1},x_{2},x_{3},v,u_{1},u_{2},w_{1},w_{2}\\}$ spans a copy of $L_{7}$
because
$\displaystyle
x_{i}x_{j}x_{k},vw_{1}w_{2},vu_{1}u_{2},x_{j}x_{k}w_{1},x_{j}x_{k}w_{2},x_{i}x_{k}u_{1},x_{i}x_{k}u_{2}\in
E(G).$
However, $L_{7}\in\mathcal{F}$ by Lemma 2.3, contradicting that $G$ is
$\mathcal{F}$-free. ∎
###### Claim 3.4.
Property (e) holds.
###### Proof.
Let $v\in V(G)\setminus\\{x_{1},x_{2},x_{3}\\}$. Towards contradiction, assume
$\displaystyle|L_{A_{1},A_{2}}(v)|<0.001n^{2}\quad\text{and}\quad|L_{A_{1},A_{3}}(v)|<0.001n^{2}\quad\text{and}\quad|L_{A_{2},A_{3}}(v)|<0.001n^{2}.$
By property $(d)$, there exists $i\in[3]$ such that
$|L_{A_{j}^{\prime}}(v)|=|L_{A_{k}^{\prime}}(v)|=0$ for
$j,k\in[3]\setminus\\{i\\}$ with $j\neq k$. Note that
$|L_{A_{i}}(v)|\leq|A_{i}|^{2}/4$, since $L_{A_{i}}(v)$ is triangle-free,
because otherwise there would be a copy of $K_{4}^{-}$ in $G$. We have
$\displaystyle|L(v)|$ $\displaystyle\leq|J|\cdot
n+|L_{A_{1},A_{2}}(v)|+|L_{A_{2},A_{3}}(v)|+|L_{A_{1},A_{3}}(v)|$
$\displaystyle+|L_{A_{1}}(v)|+|L_{A_{2}}(v)|+|L_{A_{3}}(v)|$
$\displaystyle\leq|J|\cdot n+0.003n^{2}+\frac{|A_{i}|^{2}}{4}+2n\leq
0.012n^{2}+0.003n^{2}+0.06n^{2}+2n$ $\displaystyle<0.08n^{2}(1+o(1)),$
contradicting the assumption $|L(v)|\geq\frac{1}{8}n^{2}(1+o(1))$. Note that
we used $|A_{i}|\leq 0.48n$ and $|J|\leq 0.012n$ from properties $(a)$ and
$(b)$. ∎
This completes the proof of Lemma 3.1. ∎
###### Lemma 3.5.
Let $n\in\mathbb{N}$ be sufficiently large and let $G$ be an
$\mathcal{F}$-free $3$-graph on $n$ vertices with $|E(G)|\geq
\frac{1}{24}n^{3}(1+o(1))$, satisfying $|L(w)|\geq\frac{1}{8}n^{2}(1+o(1))$ for every
$w\in V(G)$. Then there exists a vertex partition $V(G)=X_{1}\cup X_{2}\cup
X_{3}$ with $|X_{i}|\geq 0.26n$ for $i\in[3]$ such that no triple $abc$ with
$a,b\in X_{i}$ and $c\in X_{j}$ for some $i,j\in[3]$ with $i\neq j$ forms an
edge.
###### Proof.
Let $x_{1}x_{2}x_{3}\in E(G)$ be an edge with the properties from Lemma 3.1.
By property (e) we can partition $J=J_{1}\cup J_{2}\cup J_{3}$ such that for
every $v\in J_{i}$ we have $|L_{A_{j},A_{k}}(v)|\geq 0.001n^{2}$, where
$j,k\in[3],j\neq k,j\neq i,k\neq i$. Set $X_{i}:=A_{i}\cup J_{i}$. Note that
by properties (c) and (e) for every $v\in X_{i}\setminus\\{x_{i}\\}$ we have
$|L_{A_{j},A_{k}}(v)|\geq 0.001n^{2}$, where $j,k\in[3],j\neq k,j\neq i,k\neq
i$. Further, by property (a) and definition of $X_{i}$ we have $|X_{i}|\geq
0.26n$ for $n$ large enough.
Towards contradiction, assume that there exist $a,b\in X_{1}$ and $c\in
X_{2}$ with $abc\in E(G)$. For each of $a,b,c$ we find neighbors in
$A_{1}\cup A_{2}\cup A_{3}$ witnessing their membership in $X_{1}$ and $X_{2}$.
These neighbors are in $A_{1}\cup A_{2}\cup A_{3}$ because they are adjacent to
some of $x_{1},x_{2},x_{3}$. This will eventually form one of the forbidden
subgraphs. We will distinguish cases depending on how $a,b,c$ coincide with
$x_{1},x_{2},x_{3}$.
Figure 4: Case 1.
Figure 5: Case 2.
Figure 6: Case 4.
Case 1: $a,b\neq x_{1}$ and $c\neq x_{2}$.
Since
$\displaystyle|L_{A_{2},A_{3}}(a)|\geq
0.001n^{2},\quad|L_{A_{2},A_{3}}(b)|\geq
0.001n^{2}\quad\text{and}\quad|L_{A_{1},A_{3}}(c)|\geq 0.001n^{2},$
there exist distinct vertices $a_{3},b_{3},c_{3}\in A_{3},a_{2},b_{2}\in
A_{2}\setminus\\{c\\}$ and $c_{1}\in A_{1}\setminus\\{a,b\\}$ such that
$aa_{2}a_{3},bb_{2}b_{3},cc_{1}c_{3}\in E(G)$. See Figure 4 for an
illustration. We have
$\displaystyle
x_{1}x_{2}x_{3},abc,aa_{2}a_{3},bb_{2}b_{3},cc_{1}c_{3},c_{1}x_{2}x_{3},a_{2}x_{1}x_{3},b_{2}x_{1}x_{3},c_{3}x_{1}x_{2},b_{3}x_{1}x_{2},a_{3}x_{1}x_{2}\in
E(G),$
and therefore the vertices
$\\{x_{1},x_{2},x_{3},a,b,c,c_{1},a_{2},b_{2},a_{3},b_{3},c_{3}\\}$ span a
copy of $L_{10}$, a contradiction.
Case 2: $a=x_{1}$ and $c\neq x_{2}$.
By property $(d)$, there exist distinct vertices $b_{3},c_{3}\in
A_{3},b_{2}\in A_{2}\setminus\\{c\\}$ and $c_{1}\in A_{1}\setminus\\{a,b\\}$
such that $bb_{2}b_{3},cc_{1}c_{3}\in E(G)$. See Figure 5 for an illustration.
We have
$\displaystyle
x_{1}x_{2}x_{3},x_{1}bc,bb_{2}b_{3},cc_{1}c_{3},c_{1}x_{2}x_{3},b_{2}x_{1}x_{3},c_{3}x_{1}x_{2},b_{3}x_{1}x_{2}\in
E(G),$
and therefore the vertices
$\\{x_{1},x_{2},x_{3},b,c,c_{1},b_{2},b_{3},c_{3}\\}$ span a copy of $L_{9}$,
a contradiction.
Case 3: $b=x_{1}$ and $c\neq x_{2}$.
This case is similar to Case 2.
Case 4: $a,b\neq x_{1}$ and $c=x_{2}$.
By property $(d)$, there exist distinct vertices $a_{3},b_{3}\in
A_{3}$, $a_{2},b_{2}\in A_{2}\setminus\\{c\\}$ such that
$aa_{2}a_{3},bb_{2}b_{3}\in E(G)$. See Figure 6 for an illustration. We have
$\displaystyle
x_{1}x_{2}x_{3},abx_{2},aa_{2}a_{3},bb_{2}b_{3},a_{2}x_{1}x_{3},b_{2}x_{1}x_{3},b_{3}x_{1}x_{2},a_{3}x_{1}x_{2}\in
E(G),$
and therefore the vertices
$\\{x_{1},x_{2},x_{3},a,b,a_{2},b_{2},a_{3},b_{3}\\}$ span a copy of $L_{8}$,
a contradiction.
Case 5: $a=x_{1}$ and $c=x_{2}$.
This means that $b\in N(x_{1},x_{2})=A_{3}$, contradicting $b\in X_{1}$.
Case 6: $b=x_{1}$ and $c=x_{2}$.
This case is similar to Case 5.
We conclude that for $a,b\in X_{1},c\in X_{2}$, we have $abc\not\in E(G)$.
Similarly, for $a,b\in X_{i},c\in X_{j}$ with $i\neq j$, we have $abc\not\in
E(G)$.
∎
### 3.2 The asymptotic result
In this subsection we will prove Theorem 2.4. We first observe that an
extremal $\mathcal{F}$-free $3$-graph satisfies a minimum degree condition.
###### Lemma 3.6.
Let $G$ be an $\mathcal{F}$-free $3$-graph and $u,v\in V(G)$. Denote by
$G_{u,v}$ the $3$-graph constructed from $G$ by adding a copy $w$ of $v$ and
deleting $u$, i.e.
$\displaystyle V(G_{u,v})=V(G)\cup\\{w\\}\setminus\\{u\\},\quad
E(G_{u,v})=E(G[V(G)\setminus\\{u\\}])\cup\\{wab\ |\ abv\in E(G)\\}.$
Then, $G_{u,v}$ is also $\mathcal{F}$-free.
###### Proof.
Towards a contradiction, assume that $G_{u,v}$ contains a copy of some
$F\in\mathcal{F}$. Since $G$ is $\mathcal{F}$-free, this copy $F^{\prime}$ of
$F$ contains both $v$ and $w$ (if $F^{\prime}$ contained $w$ but not $v$,
replacing $w$ by $v$ would yield a copy of $F$ in $G$). $F^{\prime}-w$ is a
subgraph of $G$ and thus $\mathcal{F}$-free; in particular,
$F^{\prime}-w\notin\mathcal{F}$. Thus, there exists a set of triangle shapes
$\mathcal{T}$ of positive measure such that for $T\in\mathcal{T}$ and
$\varepsilon>0$ there exists a point set
$P=P(T,\varepsilon)\subseteq\mathbb{R}^{2}$ with $F^{\prime}-w$ being
isomorphic to $G(P,T,\varepsilon)$. Construct a new point set $P^{\prime}$
from $P(T,\varepsilon)$ by adding a new point $p_{w}$ close enough to the
point corresponding to $v$. This guarantees that $v$ and $p_{w}$ have the same
linkgraph in $G(P^{\prime},T,\varepsilon)$ and that there is no edge in
$G(P^{\prime},T,\varepsilon)$ containing both $p_{w}$ and $v$. Now,
$G(P^{\prime},T,\varepsilon)$ contains a copy of $F$, contradicting that
$F\in\mathcal{F}$.
∎
###### Lemma 3.7.
Let $G$ be an extremal $\mathcal{F}$-free $3$-graph on $n$ vertices. Then for
every $w\in V(G)$, we have $|L(w)|\geq\frac{1}{8}n^{2}(1+o(1))$.
###### Proof.
Towards a contradiction, assume that there exists $u\in V(G)$ with
$|L(u)|<\frac{1}{8}n^{2}-n^{3/2}$, where $n$ is sufficiently large. Let
$v\in V(G)$ be a vertex maximizing $|L(v)|$.
The $3$-graph $G_{u,v}$ is $\mathcal{F}$-free by Lemma 3.6 and has more edges
than $G$:
$\displaystyle|E(G_{u,v})|-|E(G)|\geq-|L(u)|+|L(v)|-d(v,u)\geq-\frac{1}{8}n^{2}+n^{3/2}+\frac{3|E(G)|}{n}-n$
$\displaystyle\geq$
$\displaystyle-\frac{1}{8}n^{2}+n^{3/2}+\frac{3|E(S(n))|}{n}-n\geq-\frac{1}{8}n^{2}+n^{3/2}+\frac{1}{8}n^{2}-O(n\log
n)>0,$
for $n$ sufficiently large. This contradicts the extremality of $G$. Thus for
every $w\in V(G)$, we have
$|L(w)|\geq\frac{1}{8}n^{2}-n^{3/2}=\frac{1}{8}n^{2}(1+o(1))$. ∎
###### Proof of Theorem 2.4.
For the lower bound, we have
$\displaystyle\textup{ex}(n,\mathcal{F})\geq
e(S(n))=\frac{1}{24}n^{3}(1+o(1)).$
For the upper bound, let $n_{0}$ be large enough such that the following
reasoning holds. For $n\geq n_{0}$, $\textup{ex}(n,\mathcal{F})\leq
0.251\binom{n}{3}$ by (6). We will prove by induction on $n$ that
$\textup{ex}(n,\mathcal{F})\leq\frac{1}{24}n^{3}+n\cdot n_{0}^{2}$. This
trivially holds for $n\leq n_{0}$, because
$\displaystyle\textup{ex}(n,\mathcal{F})\leq\binom{n}{3}\leq\frac{1}{24}n^{3}+n\cdot
n_{0}^{2}.$
For $n_{0}\leq n\leq 4n_{0}$, we have
$\displaystyle\textup{ex}(n,\mathcal{F})\leq
0.251\binom{n}{3}\leq\frac{1}{24}n^{3}+0.001\frac{n^{3}}{6}\leq\frac{1}{24}n^{3}+n\cdot
n_{0}^{2}.$
Now, let $G$ be an extremal $\mathcal{F}$-free $3$-graph on $n\geq 4n_{0}$
vertices. By Lemma 3.7 we have $|L(w)|\geq\frac{1}{8}n^{2}(1+o(1))$ for every
$w\in V(G)$. Therefore, the assumptions for Lemma 3.5 hold. Take a vertex
partition $V(G)=X_{1}\cup X_{2}\cup X_{3}$ with the properties from Lemma 3.5.
Now, for all $i\in[3]$, $|X_{i}|\geq 0.26n\geq n_{0}$ and since $G[X_{i}]$ is
$\mathcal{F}$-free, we have
$\displaystyle e(G[X_{i}])\leq\frac{1}{24}|X_{i}|^{3}+|X_{i}|\cdot n_{0}^{2}$
by the induction assumption. We conclude
$\displaystyle e(G)$
$\displaystyle\leq|X_{1}||X_{2}||X_{3}|+\sum_{i=1}^{3}e(G[X_{i}])\leq|X_{1}||X_{2}||X_{3}|+n\cdot
n_{0}^{2}+\frac{1}{24}\sum_{i=1}^{3}|X_{i}|^{3}$
$\displaystyle\leq\frac{1}{24}n^{3}+n\cdot n_{0}^{2},$
where in the last step we used that the function
$g(x_{1},x_{2},x_{3}):=x_{1}x_{2}x_{3}+1/24(x_{1}^{3}+x_{2}^{3}+x_{3}^{3})$
with domain $\\{(x_{1},x_{2},x_{3})\in[0.26,0.48]^{3}:x_{1}+x_{2}+x_{3}=1\\}$
achieves its maximum at $x_{1}=x_{2}=x_{3}=1/3$. This can be verified quickly
using basic calculus or by computer; we omit the details. ∎
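The omitted optimization can be sanity-checked numerically. The following sketch (ours, not part of the original argument) runs a brute-force grid search over the constrained domain and confirms that $g$ attains its maximum value $1/24$ near $x_{1}=x_{2}=x_{3}=1/3$:

```python
import itertools

def g(x1, x2, x3):
    # The polynomial from the proof: g(x) = x1*x2*x3 + (x1^3 + x2^3 + x3^3)/24.
    return x1 * x2 * x3 + (x1 ** 3 + x2 ** 3 + x3 ** 3) / 24

def maximize_g(step=0.001):
    # Grid search over {x in [0.26, 0.48]^3 : x1 + x2 + x3 = 1}.
    xs = [0.26 + i * step for i in range(221)]  # 0.26, 0.261, ..., 0.48
    best_val, best_pt = -1.0, None
    for x1, x2 in itertools.product(xs, repeat=2):
        x3 = 1.0 - x1 - x2
        if 0.26 <= x3 <= 0.48:
            val = g(x1, x2, x3)
            if val > best_val:
                best_val, best_pt = val, (x1, x2, x3)
    return best_val, best_pt

best_val, best_pt = maximize_g()
print(best_val, best_pt)
```

Since $g(1/3,1/3,1/3)=1/27+\frac{1}{24}\cdot\frac{3}{27}=1/24$, this is consistent with the bound $e(G)\leq\frac{1}{24}n^{3}+n\cdot n_{0}^{2}$.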
Analyzing the previous proof actually gives a stability result.
###### Lemma 3.8.
Let $G$ be an $\mathcal{F}$-free $3$-graph on $n$ vertices and
$|E(G)|=1/24n^{3}(1+o(1))$, satisfying $|L(w)|\geq\frac{1}{8}n^{2}(1+o(1))$
for every $w\in V(G)$. Then $G$ has a vertex partition $V(G)=X_{1}\cup
X_{2}\cup X_{3}$ such that
* •
$|X_{i}|=\frac{n}{3}(1+o(1))$ for every $i\in[3]$,
* •
there is no edge $e=xyz$ with $x,y\in X_{i}$ and $z\notin X_{i}$ for
$i\in[3]$.
###### Proof.
Take a vertex partition $V(G)=X_{1}\cup X_{2}\cup X_{3}$ from Lemma 3.5. Since
$G[X_{i}]$ is $\mathcal{F}$-free, we have by Theorem 2.4 that
$e(G[X_{i}])\leq\frac{1}{24}|X_{i}|^{3}(1+o(1))$. Now, again
$\displaystyle\frac{1}{24}n^{3}(1+o(1))$
$\displaystyle=e(G)\leq|X_{1}||X_{2}||X_{3}|+\sum_{i=1}^{3}e(G[X_{i}])$
$\displaystyle\leq|X_{1}||X_{2}||X_{3}|+\frac{1}{24}\sum_{i=1}^{3}|X_{i}|^{3}(1+o(1)).$
Again, since the polynomial $g$ with domain
$\\{(x_{1},x_{2},x_{3})\in[0.26,0.48]^{3}:x_{1}+x_{2}+x_{3}=1\\}$ achieves its
unique maximum at $x_{1}=x_{2}=x_{3}=1/3$, we get $|X_{i}|=(1/3+o(1))n$. ∎
### 3.3 The exact result
###### Lemma 3.9.
Let $T\in\mathcal{T}_{\mathcal{F}}$ and $P\subseteq\mathbb{R}^{2}$ be a point
set. Denote $G=G(P,T,\varepsilon(T))$. For every $u,v\in V(G)$ there exists a
point set $P^{\prime}$ such that $G_{u,v}=G(P^{\prime},T,\varepsilon(T))$.
###### Proof.
Let $u,v\in V(G)$. Construct $P^{\prime}$ from $P$ by removing the point
corresponding to $u$ and adding a point close enough to the point
corresponding to $v$. This point set satisfies
$G_{u,v}=G(P^{\prime},T,\varepsilon(T))$. ∎
###### Lemma 3.10.
Let $T\in\mathcal{T}_{\mathcal{F}}$ be a triangle shape and let
$P\subseteq\mathbb{R}^{2}$ be an $n$-element point set maximizing the number
of triangles being $\varepsilon(T)$-similar to $T$. Denote
$G=G(P,T,\varepsilon(T))$. Then for every $w\in V(G)$, we have
$|L(w)|\geq\frac{1}{8}n^{2}(1+o(1))$.
###### Proof.
We have that $G$ is $\mathcal{F}$-free. Assume that there exists $u\in V(G)$
with
$\displaystyle|L(u)|<\frac{1}{8}n^{2}-n^{3/2}.$
Let $v\in V(G)$ be a vertex maximizing $|L(v)|$. By Lemma 3.9 there exists a
point set $P^{\prime}$ such that $G_{u,v}=G(P^{\prime},T,\varepsilon(T))$. We
have $|E(G_{u,v})|>|E(G)|$ by the same calculation as in the proof of Lemma
3.7. This contradicts the maximality of $P$. ∎
Now, we will strengthen the previous stability result.
###### Lemma 3.11.
Let $T\in\mathcal{T}_{\mathcal{F}}$. There exists $n_{0}$ such that for every
$n\geq n_{0}$ the following holds. Let $P$ be an $n$-element point set
maximizing the number of triangles being $\varepsilon(T)$-similar to $T$.
Then, the $3$-graph $G=G(P,T,\varepsilon(T))$ has a vertex partition
$V(G)=X_{1}\cup X_{2}\cup X_{3}$ such that
1. (i)
there is no edge $e=xyz$ with $x,y\in X_{i}$ and $z\notin X_{i}$ for
$i\in[3]$,
2. (ii)
$xyz\in E(G)$ for $x\in X_{1},y\in X_{2},z\in X_{3}$,
3. (iii)
$|X_{i}|-|X_{j}|\leq 1$ for all $i,j\in[3]$.
###### Proof.
By Lemma 3.10, for every $w\in V(G)$, $|L(w)|\geq\frac{1}{8}n^{2}(1+o(1))$.
Further, we have
$\displaystyle e(G)\geq e(S(n))=\frac{1}{4}\binom{n}{3}(1+o(1)).$
Therefore, the assumptions from Lemma 3.8 hold. Let $V(G)=X_{1}\cup X_{2}\cup
X_{3}$ be a partition having the properties from Lemma 3.8. Towards
contradiction, assume that there exists $x\in X_{1},y\in X_{2},z\in X_{3}$
with $xyz\notin E(G)$. For $i\in[3]$, let $P_{i}$ be the point set
corresponding to the set $X_{i}$. We have,
$\displaystyle e(G[X_{i}])=e(G(P_{i},T,\varepsilon(T))).$
Construct a new point set $P^{\prime}$ by taking a large enough triangle of
shape $T$ and placing each of the point sets $P_{i}$ close to one of the three
vertices of $T$. Using condition (i), this new point set $P^{\prime}$
satisfies
$\displaystyle e(G(P^{\prime},T,\varepsilon(T)))$
$\displaystyle=|X_{1}||X_{2}||X_{3}|+\sum_{i=1}^{3}e(G(P_{i},T,\varepsilon(T)))$
$\displaystyle=|X_{1}||X_{2}||X_{3}|+\sum_{i=1}^{3}e(G[X_{i}])>e(G),$
contradicting the maximality of $P$. Therefore, for all $x\in X_{1},y\in
X_{2},z\in X_{3}$ we have $xyz\in E(G)$. By Theorem 2.4, we have
$\displaystyle\frac{e(G[X_{1}])}{\binom{|X_{1}|}{3}}=\frac{1}{4}+o(1)\quad\text{
and }\quad\frac{e(G[X_{2}])}{\binom{|X_{2}|}{3}}=\frac{1}{4}+o(1).$
Next, towards a contradiction, assume without loss of generality that
$|X_{1}|\geq|X_{2}|+2$. Let $v_{1}\in X_{1}$ be a vertex minimizing
$|L_{X_{1}}(v_{1})|$ and let $v_{2}\in X_{2}$ be a vertex maximizing
$|L_{X_{2}}(v_{2})|$. By the choice of $v_{1}$ and $v_{2}$,
$\displaystyle|L_{X_{1}}(v_{1})|\leq\frac{3e(G[X_{1}])}{|X_{1}|}\quad\text{and}\quad|L_{X_{2}}(v_{2})|\geq\frac{3e(G[X_{2}])}{|X_{2}|}.$
The hypergraph $G_{v_{1},v_{2}}$ is still $\mathcal{F}$-free by Lemma 3.6 and
has more edges than $G$:
$\displaystyle|E(G_{v_{1},v_{2}})|-|E(G)|=|X_{1}||X_{3}|+|L_{X_{2}}(v_{2})|-|L_{X_{1}}(v_{1})|-|X_{2}||X_{3}|-|X_{3}|$
$\displaystyle\geq$
$\displaystyle\frac{3e(G[X_{2}])}{|X_{2}|}-\frac{3e(G[X_{1}])}{|X_{1}|}+|X_{3}|(|X_{1}|-|X_{2}|-1)$
$\displaystyle=$
$\displaystyle\frac{3e(G[X_{2}])|X_{1}|-3e(G[X_{1}])|X_{2}|}{|X_{1}||X_{2}|}+|X_{3}|(|X_{1}|-|X_{2}|-1)$
$\displaystyle=$
$\displaystyle\left(\frac{1}{4}+o(1)\right)\frac{3\binom{|X_{2}|}{3}|X_{1}|-3\binom{|X_{1}|}{3}|X_{2}|}{|X_{1}||X_{2}|}+|X_{3}|(|X_{1}|-|X_{2}|-1)$
$\displaystyle\geq$
$\displaystyle\left(\frac{1}{8}+o(1)\right)\frac{|X_{2}|^{3}|X_{1}|-|X_{1}|^{3}|X_{2}|}{|X_{1}||X_{2}|}+|X_{3}|(|X_{1}|-|X_{2}|-1)$
$\displaystyle=$
$\displaystyle\left(\frac{1}{8}+o(1)\right)(|X_{2}|^{2}-|X_{1}|^{2})+|X_{3}|(|X_{1}|-|X_{2}|-1)$
$\displaystyle=$
$\displaystyle(|X_{1}|-|X_{2}|)\left(|X_{3}|-(|X_{1}|+|X_{2}|)\left(\frac{1}{8}+o(1)\right)\right)-|X_{3}|$
$\displaystyle=$
$\displaystyle(|X_{1}|-|X_{2}|)\left(\frac{n}{4}+o(n)\right)-|X_{3}|\geq
n\left(\frac{1}{2}+o(1)\right)-\left(\frac{1}{3}+o(1)\right)n>0.$
By Lemma 3.9, $G_{v_{1},v_{2}}$ is realized by a point set, contradicting the
maximality of $P$. Therefore $|X_{i}|-|X_{j}|\leq 1$ for all $i,j\in[3]$. ∎
### 3.4 Proof of Theorem 1.4
Let $T\in\mathcal{T}_{\mathcal{F}}$ and $P$ be an $n$-element point set
maximizing the number of triangles being $\varepsilon(T)$-similar to $T$.
Denote $G=G(P,T,\varepsilon(T))$. By Lemma 3.11, the $3$-graph $G$ has a
vertex partition $V(G)=X_{1}\cup X_{2}\cup X_{3}$ such that
$|X_{i}|-|X_{j}|\leq 1$ for all $i,j\in[3]$ and there is no edge $e=xyz$ with
$x,y\in X_{i}$ and $z\notin X_{i}$ for $i\in[3]$. Since the sets
$X_{1},X_{2},X_{3}$ correspond to point sets of the same sizes, we have
$e(G[X_{i}])\leq h(|X_{i}|,T,\varepsilon(T))$ for $i\in[3]$. Let
$a=|X_{1}|,b=|X_{2}|$ and $c=|X_{3}|$. Now,
$\displaystyle h(n,T,\varepsilon(T))$ $\displaystyle=e(G)\leq a\cdot b\cdot
c+e(G[X_{1}])+e(G[X_{2}])+e(G[X_{3}])$ $\displaystyle\leq a\cdot b\cdot
c+h(a,T,\varepsilon(T))+h(b,T,\varepsilon(T))+h(c,T,\varepsilon(T)).$
It remains to show
$\displaystyle h(n,T,\varepsilon(T))\geq a\cdot b\cdot
c+h(a,T,\varepsilon(T))+h(b,T,\varepsilon(T))+h(c,T,\varepsilon(T)).$
There exist point sets $P_{a},P_{b},P_{c}\subseteq\mathbb{R}^{2}$ of sizes
$a,b,c$ respectively, such that
$\displaystyle e(G(P_{a},T,\varepsilon(T)))=h(a,T,\varepsilon(T)),\quad\quad
e(G(P_{b},T,\varepsilon(T)))=h(b,T,\varepsilon(T))$
$\displaystyle\text{and}\quad\quad
e(G(P_{c},T,\varepsilon(T)))=h(c,T,\varepsilon(T)).$
Note that we can assume that
$\text{diam}(P_{a})=\text{diam}(P_{b})=\text{diam}(P_{c})=1$, where the
diameter $\text{diam}(Q)$ of a point set $Q$ is the largest distance between
two of its points. By arranging the three
point sets $P_{a},P_{b},P_{c}$ in shape of a large enough triangle $T$, we get
a point set $P$ such that
$\displaystyle h(n,T,\varepsilon(T))$ $\displaystyle\geq
e(G(P,T,\varepsilon(T)))=a\cdot b\cdot
c+h(a,T,\varepsilon(T))+h(b,T,\varepsilon(T))+h(c,T,\varepsilon(T)),$
completing the proof of Theorem 1.4.
### 3.5 Proof of Corollary 1.5
Let $T$ be a triangle shape such that there exists $\varepsilon(T)$ for which
(2) holds. By Theorem 1.4, (2) holds for almost all triangles. Take a point
set $P$ on $3^{\ell}\geq n_{0}$ points maximizing the number of triangles
being $\varepsilon(T)$-similar to $T$. Denote $H=G(P,T,\varepsilon(T))$. Note
that because of scaling invariance we can assume that $\text{diam}(P)$ is
arbitrarily small. By applying (2) iteratively, we have
$\displaystyle h(3^{\ell+i},T,\varepsilon(T))=3^{i}\cdot
e(H)+3^{3\ell}\frac{1}{24}\left(3^{3i}-3^{i}\right)$ (10)
for all $i\geq 0$.
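To see where (10) comes from: iterating the recursion from (2), which by Theorem 1.4 takes the form $h(3m,T,\varepsilon(T))=m^{3}+3h(m,T,\varepsilon(T))$ for balanced parts, reproduces the closed form. A small sketch (ours; $e(H)$ is treated as an arbitrary placeholder value) checks this:

```python
def rhs(i, ell, eH):
    # Closed form (10): h(3^(ell+i)) = 3^i * e(H) + 3^(3*ell) * (3^(3*i) - 3^i) / 24.
    # The division is exact: 3^(3i) - 3^i is divisible by 24 for all i >= 0.
    return 3 ** i * eH + 3 ** (3 * ell) * (3 ** (3 * i) - 3 ** i) // 24

def iterate(i, ell, eH):
    # Iterate the recursion h(3m) = m^3 + 3*h(m) starting from h(3^ell) = e(H).
    h = eH
    for j in range(i):
        m = 3 ** (ell + j)
        h = m ** 3 + 3 * h
    return h

ell, eH = 2, 123  # placeholder values; e(H) is whatever the maximizer provides
for i in range(6):
    assert iterate(i, ell, eH) == rhs(i, ell, eH)
print("recursion matches (10)")
```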
Now, towards a contradiction, assume that there exists a point set
$P^{\prime}\subseteq\mathbb{R}^{2}$ of $3^{k}$ points such that the number of
triangles $\varepsilon(T)$-similar to $T$ is more than $e(S(3^{k}))$. Let
$G=G(P^{\prime},T,\varepsilon(T)).$ Then,
$\displaystyle e(G)>e(S(3^{k}))=\frac{1}{24}\left(3^{3k}-3^{k}\right).$
Construct a point set $\bar{P}\subseteq\mathbb{R}^{2}$ of $3^{\ell+k}$ points
by taking all points $p_{G}+p_{H}$, $p_{G}\in P^{\prime},p_{H}\in P$ where
addition is coordinate-wise. Let $\bar{G}:=G(\bar{P},T,\varepsilon(T))$. Since
we can assume that $\text{diam}(P)$ is arbitrarily small, $\bar{G}$ is
the $3$-graph constructed from $G$ by replacing every vertex by a copy of $H$.
Now,
$\displaystyle e(\bar{G})=e(G)\cdot 3^{3\ell}+e(H)\cdot 3^{k}>3^{k}\cdot
e(H)+3^{3\ell}\frac{1}{24}\left(3^{3k}-3^{k}\right),$
contradicting (10). This completes the proof of Corollary 1.5.
## 4 Concluding remarks
When carefully reading the proof, one can observe that the following
Turán-type results also hold. Recall that $\mathcal{F}$ is the set of forbidden
$3$-graphs defined in Section 2.2.
###### Theorem 4.1.
The following statements hold.
* (a)
There exists $n_{0}$ such that for all $n\geq n_{0}$
$\displaystyle\textup{ex}(n,\mathcal{F})=a\cdot b\cdot
c+\textup{ex}(a,\mathcal{F})+\textup{ex}(b,\mathcal{F})+\textup{ex}(c,\mathcal{F}),$
where $n=a+b+c$ and $a,b,c$ are as equal as possible.
* (b)
Let $n$ be a power of $3$. Then,
$\displaystyle\textup{ex}(n,\mathcal{F})=\frac{1}{24}(n^{3}-n).$
It would be interesting to prove the Turán-type results, Theorem 1.3 and
Theorem 4.1, for a smaller family of hypergraphs than $\mathcal{F}$.
Potentially the following conjecture by Falgas-Ravry and Vaughan could be
tackled in a similar way.
###### Conjecture 4.2 (Falgas-Ravry and Vaughan [9]).
$\displaystyle\textup{ex}(n,\\{K_{4}^{-},C_{5}\\})=\frac{1}{4}\binom{n}{3}(1+o(1)).$
Considering that for our proof it was particularly important that $K_{4}^{-}$
and $L_{2}=\\{123,124,125,136,456\\}$ are forbidden, we conjecture that $S(n)$
has asymptotically the most edges among $\\{K_{4}^{-},L_{2}\\}$-free
$3$-graphs.
###### Conjecture 4.3.
$\displaystyle\textup{ex}(n,\\{K_{4}^{-},L_{2}\\})=\frac{1}{4}\binom{n}{3}(1+o(1)).$
Note that a standard application of flag algebras on 7 vertices shows
$\displaystyle\textup{ex}(n,\\{K_{4}^{-},L_{2}\\})\leq 0.25074\binom{n}{3}$
for $n$ sufficiently large.
Theorem 1.3 determines $h(n,T,\varepsilon)$ asymptotically for almost all
triangles $T$ and $\varepsilon>0$ sufficiently small. It remains open to
determine $h(n,T,\varepsilon)$ for some triangles $T\in S$. Bárány and Füredi
[5] provided asymptotically better bounds stemming from recursive
constructions for some of those triangles. Potentially a similar proof
technique to ours could be used to determine $h(n,T,\varepsilon)$ for some of
those triangle shapes.
Another interesting question is to change the space, and study point sets in
$\mathbb{R}^{3}$ or even $\mathbb{R}^{d}$ instead of the plane. Given a
triangle $T\in S$, $\varepsilon>0$, $d\geq 2$ and $n\in\mathbb{N}$, denote by
$g_{d}(n,T,\varepsilon)$ the maximum number of triangles in a set of $n$
points from $\mathbb{R}^{d}$ that are $\varepsilon$-similar to $T$.
Being allowed to use one more dimension might help us to find constructions
with more triangles being $\varepsilon$-similar to $T$.
For an acute triangle $T$ and $d=3$, we can group the $n$ points into four
roughly equal-sized groups and place each group very close to a vertex of a
tetrahedron with each face being similar to $T$. For a crafty reader, we
include a cutout that folds into a tetrahedron with all four faces being the
same triangle in Figure 7 on the left. Each group can again be split up in the
same way. Iterating this construction gives
$\displaystyle g_{3}(n,T,\varepsilon)\geq\frac{1}{15}n^{3}(1+o(1))$
for some $\varepsilon>0$. Note that for almost all acute triangles $T$,
$g_{2}(n,T,\varepsilon)=h(n,T,\varepsilon)=\frac{1}{24}n^{3}(1+o(1))<g_{3}(n,T,\varepsilon).$
Figure 7: A cutout of a tetrahedron using an acute triangle on the left. A
cutout not giving a tetrahedron coming from an obtuse triangle on the right.
Bend along the dashed lines.
For $T$ being an equilateral triangle and $d\geq 4$ we can find a better
construction. There is a $d$-simplex with all faces forming equilateral
triangles. Grouping the $n$ points into $d+1$ roughly equal sized groups and
placing each group very close to the vertex of the $d$-simplex and then
iterating this, gives us
$\displaystyle g_{d}(n,T,\varepsilon)\geq\sum_{i\geq
1}\left(\frac{n}{(d+1)^{i}}\right)^{3}\binom{d+1}{3}(d+1)^{i-1}\
(1+o(1))=\frac{1}{6}\frac{d-1}{d+2}n^{3}(1+o(1)).$
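The closed form above follows from summing a geometric series: $\sum_{i\geq 1}(d+1)^{i-1}(d+1)^{-3i}=\frac{1}{(d+1)((d+1)^{2}-1)}$, and multiplying by $\binom{d+1}{3}$ gives $\frac{1}{6}\frac{d-1}{d+2}$. A short sketch (ours) verifies this algebra exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def series_closed_form(d):
    # Sum_{i>=1} (d+1)^(i-1) / (d+1)^(3i)
    #   = (1/(d+1)) * r / (1 - r)  with  r = 1/(d+1)^2.
    r = Fraction(1, (d + 1) ** 2)
    return Fraction(1, d + 1) * r / (1 - r)

for d in range(2, 11):
    lhs = comb(d + 1, 3) * series_closed_form(d)
    assert lhs == Fraction(d - 1, 6 * (d + 2))
print("coefficient check passed")
```

For $d=3$ the coefficient is $1/15$, matching the tetrahedron bound above.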
The following variation of the problem could also be interesting. We say that
two triangles are _$\varepsilon$-isomorphic_ if their side lengths are $a\leq
b\leq c$ and $a^{\prime}\leq b^{\prime}\leq c^{\prime}$ and
$|a-a^{\prime}|,|b-b^{\prime}|,|c-c^{\prime}|<\varepsilon$. Maximizing the
number of $\varepsilon$-isomorphic triangles has the following upper bound.
Denote the side lengths of a triangle $T$ by $a$, $b$, and $c$. Now color
edges of $K_{n}$ with colors $a$, $b$, and $c$ such that the number of rainbow
triangles is maximized. Note that rainbow triangles would correspond to
triangles isomorphic to $T$, if there exists an embedding of $K_{n}$ in some
$\mathbb{R}^{d}$ such that the distances correspond to the colors. The problem of
maximizing the number of rainbow triangles in a $3$-edge-colored $K_{n}$ is a
problem of Erdős and Sós (see [8]) that was solved by flag algebras [2]. The
asymptotic construction is an iterated blow-up of a properly $3$-edge-colored
$K_{4}$. Properly $3$-edge-colored $K_{4}$ can be embedded as a tetrahedron in
$\mathbb{R}^{3}$. This gives $\frac{1}{16}n^{3}(1+o(1))$
$\varepsilon$-isomorphic triangles in $\mathbb{R}^{3}$. This heuristic
suggests that increasing the dimension beyond $3$ may allow us to embed
slightly more $\varepsilon$-isomorphic triangles by making it possible to
embed more of the iterated blow-up of the $K_{4}$ construction. The number of
rainbow triangles in the iterated blow-up of a properly $3$-edge-colored
$K_{4}$ is $\frac{1}{15}n^{3}(1+o(1))$, which is an upper bound on the number
of $\varepsilon$-isomorphic triangles for any $d$.
In our construction maximizing the number of $\varepsilon$-similar triangles
for $d=3$, the majority of triangles are actually $\varepsilon$-isomorphic.
Already for $d=3$, we can embed $\frac{1}{15}n^{3}(1+o(1))$
$\varepsilon$-similar triangles, which is the upper bound on the number of
$\varepsilon$-isomorphic triangles for any $d$. This suggests that increasing
the dimension beyond $d=3$ may result in only very small increases in the
number of $\varepsilon$-isomorphic triangles, or that a very different
construction is needed.
The above heuristic does not apply to isosceles triangles. Maximizing the
number of $\varepsilon$-isomorphic triangles would correspond to a $2$-edge-
coloring of $K_{n}$ and maximizing the number of induced paths on $3$ vertices
in one of the two colors. The extremal construction is a balanced complete
bipartite graph in one color. Increasing the dimension helps with embedding a
bigger $2$-edge-coloring of $K_{n}$ and in turn obtaining a larger number of
$\varepsilon$-isomorphic triangles, with $\frac{1}{8}n^{3}(1+o(1))$ being the
upper bound.
In general, the number of obtuse triangles does not seem to benefit as much
from higher dimensions. Embedding three $\varepsilon$-similar obtuse triangles
on $4$ points is not possible for any $d$ for almost all obtuse triangles.
This contrasts with acute triangles, where $4$ points can give four
$\varepsilon$-isomorphic triangles in dimension at least $3$. The reader may
try it for $\varepsilon$-isomorphic triangles with the cutouts in Figure 7. We
have not explored the above problems for obtuse triangles further.
## References
* [1] J. Balogh, P. Hu, B. Lidický, and F. Pfender. Maximum density of induced 5-cycle is achieved by an iterated blow-up of 5-cycle. European J. Combin., 52(part A):47–58, 2016.
* [2] J. Balogh, P. Hu, B. Lidický, F. Pfender, J. Volec, and M. Young. Rainbow triangles in three-colored graphs. J. Combin. Theory Ser. B, 126:83–113, 2017.
* [3] J. Balogh, P. Hu, B. Lidický, O. Pikhurko, B. Udvari, and J. Volec. Minimum number of monotone subsequences of length 4 in permutations. Combin. Probab. Comput., 24(4):658–679, 2015.
* [4] J. Balogh, B. Lidický, and G. Salazar. Closing in on Hill’s conjecture. SIAM J. Discrete Math., 33(3):1261–1276, 2019.
* [5] I. Bárány and Z. Füredi. Almost similar configurations. Bull. Hellenic Math. Soc., 63:17–37, 2019.
* [6] B. Borchers. CSDP, a C library for semidefinite programming. Optim. Methods Softw., 11/12(1-4):613–623, 1999. Interior point methods.
* [7] J. H. Conway, H. T. Croft, P. Erdős, and M. J. T. Guy. On the distribution of values of angles determined by coplanar points. J. London Math. Soc. (2), 19(1):137–143, 1979.
* [8] P. Erdős and A. Hajnal. On Ramsey like theorems. Problems and results. In Combinatorics (Proc. Conf. Combinatorial Math., Math. Inst., Oxford, 1972), pages 123–140, 1972.
* [9] V. Falgas-Ravry and E. R. Vaughan. Applications of the semi-definite method to the Turán density problem for 3-graphs. Combin. Probab. Comput., 22(1):21–54, 2013.
* [10] A. Grzesik, P. Hu, and J. Volec. Minimum number of edges that occur in odd cycles. J. Combin. Theory Ser. B, 137:65–103, 2019.
* [11] D. Kráľ, L. Mach, and J.-S. Sereni. A new lower bound based on Gromov’s method of selecting heavily covered points. Discrete Comput. Geom., 48(2):487–498, 2012.
* [12] O. Pikhurko, J. Sliačan, and K. Tyros. Strong forms of stability from flag algebra calculations. J. Combin. Theory Ser. B, 135:129–178, 2019.
* [13] A. A. Razborov. Flag algebras. J. Symbolic Logic, 72(4):1239–1282, 2007.
* [14] J. Sliačan and W. Stromquist. Improving bounds on packing densities of 4-point permutations. Discrete Math. Theor. Comput. Sci., 19(2):Paper No. 3, 18, 2017.
# Weave Realizability for $D$-type
James Hughes
###### Abstract.
We study exact Lagrangian fillings of Legendrian links of $D_{n}$-type in the
standard contact 3-sphere. The main result is the existence of a Lagrangian
filling, represented by a weave, such that any algebraic quiver mutation of
the associated intersection quiver can be realized as a geometric weave
mutation. The method of proof is via Legendrian weave calculus and a
construction of appropriate 1-cycles whose geometric intersections realize the
required algebraic intersection numbers. In particular, we show that in
$D$-type, each cluster chart of the moduli of microlocal rank-1 sheaves is
induced by at least one embedded exact Lagrangian filling. Hence, the
Legendrian links of $D_{n}$-type have at least as many Hamiltonian isotopy
classes of Lagrangian fillings as cluster seeds in the $D_{n}$-type cluster
algebra, and their geometric exchange graph for Lagrangian disk surgeries
contains the cluster exchange graph of $D_{n}$-type.
## 1\. Introduction
Legendrian links in contact 3-manifolds [Ben83, Ad90] are central to the study
of 3-dimensional contact topology [OS04, Gei08]. Recent developments [CZ20,
CG20, CN21] have revealed new phenomena regarding their Lagrangian fillings,
including the existence of (many) Legendrian links
$\Lambda\subseteq(\mathbb{S}^{3},\xi_{st})$ with infinitely many (smoothly
isotopic) Lagrangian fillings in the Darboux 4-ball
$(\mathbb{D}^{4},\lambda_{st})$ which are not Hamiltonian isotopic. The
relationship between cluster algebras and Lagrangian fillings [CZ20, GSW20]
has also led to new conjectures on the classification of Lagrangian fillings
[Cas20]. In particular, [Cas20, Conjecture 5.1] introduced a conjectural ADE
classification of Lagrangian fillings. The object of this manuscript is to
study $D$-type and prove part of the conjectured classification.
The $A$-type was studied in [EHK16, Pan17], via Floer-theoretic methods, and
in [STWZ19, TZ18] via microlocal sheaves. Their main result is that the
$A_{n}$-Legendrian link $\lambda(A_{n})\subseteq(\mathbb{S}^{3},\xi_{st})$,
which is the max-tb representative of the $(2,n+1)$-torus link, has at least a
Catalan number $C_{n+1}=\frac{1}{n+2}{2n+2\choose n+1}$ of embedded exact
Lagrangian fillings, where $C_{n+1}$ is precisely the number of cluster seeds
in the finite type $A_{n}$ cluster algebra [FWZ20b]. We will show that the
same holds in $D$-type, namely that $D_{n}$-type Legendrian links have at
least as many distinct Hamiltonian isotopy classes of Lagrangian fillings as
there are cluster seeds in the $D_{n}$-type cluster algebra. This will be a
consequence of a stronger geometric result, weave realizability in $D-$type,
which we discuss below.
By definition, the Legendrian link
$\lambda(D_{n})\subseteq(\mathbb{S}^{3},\xi_{st})$, $n\geq 4$ of $D_{n}$-type
is the standard satellite of the Legendrian link defined by the front
projection given by the 3-stranded positive braid
$\sigma_{1}^{n-2}(\sigma_{2}\sigma_{1}^{2}\sigma_{2})(\sigma_{1}\sigma_{2})^{3}$,
where $\sigma_{1}$ and $\sigma_{2}$ are the Artin generators for the
3-stranded braid group. Figure 1 depicts a front diagram for $\lambda(D_{n})$;
note that the $(-1)$-framed closure of
$\sigma_{1}^{n-2}(\sigma_{2}\sigma_{1}^{2}\sigma_{2})(\sigma_{1}\sigma_{2})^{3}$
is Legendrian isotopic to the rainbow closure of
$\sigma_{1}^{n-2}(\sigma_{2}\sigma_{1}^{2}\sigma_{2})$, the latter being
depicted. The Legendrian link $\lambda(D_{n})$ is also a max-tb representative
of the smooth isotopy class of the link of the singularity
$f(x,y)=y(x^{2}+y^{n-2})$. Since these are algebraic links, the max-tb
representative given above is unique – e.g. [Cas20, Proposition 2.2] – and has
at least one exact Lagrangian filling [HS15].
Figure 1. The front projection of
$\lambda(D_{n})\subseteq(\mathbb{S}^{3},\xi_{st})$. The box labelled with an
$n-2$ represents $n-2$ positive crossings given by $\sigma_{1}^{n-2}.$ When
$n$ is even, $\lambda(D_{n})$ has $3$ components, while when $n$ is odd,
$\lambda(D_{n})$ has only $2$ components.
The $N$-graph calculus developed by Casals and Zaslow in [CZ20] allows us to
associate an exact Lagrangian filling of a ($-1$)-framed closure of a positive
braid to a pair of trivalent planar graphs satisfying certain properties. See
Figure 2 (left) for an example of a particular 3-graph, denoted by
$\Gamma_{0}(D_{4})$ and associated to the Legendrian link
$\lambda(D_{4})$.111We use $\lambda(D_{4})$, i.e. n=4, as a first example
because $n=3$ would correspond to $\lambda(A_{3})$, which has been studied
previously [EHK16, Pan17]. The study of $\lambda(D_{4})$ is also the first
instance where we require the machinery of 3-graphs rather than 2-graphs. In
Section 3, we will show that the 3-graph $\Gamma_{0}(D_{4})$ generalizes to a
family of 3-graphs $\Gamma_{0}(D_{n})$, depicted in Figure 2 (right) for any
$n\geq 3.$ In a nutshell, a 3-fold branched cover of $\mathbb{D}^{2}$, simply
branched at the trivalent vertices of these 3-graphs, yields an exact
Lagrangian surface in $(T^{*}\mathbb{D}^{2},\lambda_{st})$, whose Legendrian
lift is a Legendrian weave. One of the distinct advantages of the 3-graph
calculus is that it combinatorializes an operation, known as Lagrangian disk
surgery [Pol91, Yau17], that modifies the weave in such a way as to yield
additional – non-Hamiltonian isotopic – exact Lagrangian fillings of the link.
Figure 2. 3-graphs $\Gamma_{0}(D_{4})$ (left) and $\Gamma_{0}(D_{n})$ (right),
each pictured with its associated intersection quiver
$Q(\Gamma_{0}(D_{4}),\\{\gamma_{i}^{(0)}\\})$. The basis
$\\{\gamma_{i}^{(0)}\\}$ for $H_{1}(\Lambda(\Gamma_{0}(D_{4}));\mathbb{Z})$ is
depicted by the light green, dark green, orange, and purple cycles drawn in
the graph. Note that the quivers correspond to the $D_{4}$ and $D_{n}$ Dynkin
diagrams, usually depicted rotated $90^{\circ}$ counterclockwise.
If we consider a 3-graph $\Gamma$ and a basis $\\{\gamma_{i}\\}$ for the first
homology of the weave $\Lambda(\Gamma)$, $i\in[1,b_{1}(\Lambda(\Gamma))]$, we
can define a quiver $Q(\Gamma,\\{\gamma_{i}\\})$ whose adjacency matrix is
given by the intersection form in $H_{1}(\Lambda(\Gamma))$. Quivers come
equipped with an involutive operation, known as quiver mutation, that produces
new quivers; see Subsection 2.6 or [FWZ20a] for more on quivers. A key result
of [CZ20] tells us that Legendrian mutation of the weave induces a quiver
mutation of the intersection quiver. Quivers related by a sequence of
mutations are said to be mutation equivalent, and the quivers that are of
finite mutation type (i.e. the set of mutation equivalent quivers is finite)
have an ADE classification [FWZ20b]. This classification parallels the naming
convention for the $D_{n}$ links described above: the intersection quiver
associated to $\lambda(D_{n})$ is a quiver in the mutation class of the
$D_{n}$-Dynkin diagram. See Figure 2 (left) for an example of a $D_{4}$
quiver. For our 3-graph $\Gamma_{0}(D_{n})$, $n\geq 3$, we will give an
explicit basis $\\{\gamma_{i}^{(0)}\\}$ for
$H_{1}(\Lambda(\Gamma_{0}(D_{n})),\mathbb{Z})$, whose intersection quiver
$Q(\Gamma_{0}(D_{n}),\\{\gamma_{i}^{(0)}\\})$ is the standard $D_{n}$-Dynkin
diagram.
By definition, a sequence of quiver mutations for
$Q(\Gamma_{0}(D_{n}),\\{\gamma_{i}^{(0)}\\})$ is said to be weave realizable
if each quiver mutation in the sequence can be realized as a Legendrian weave
mutation for a 3-graph. Our main result is the following theorem:
###### Theorem 1.
Any sequence of quiver mutations of
$Q(\Gamma_{0}(D_{n}),\\{\gamma_{i}^{(0)}\\})$ is weave realizable.
In other words, Theorem 1 states that in $D$-type, any algebraic quiver
mutation can actually be realized geometrically by a Legendrian weave
mutation. Weave realizability is of interest because it measure the difference
between algebraic invariants – e.g. the cluster structure in the moduli of
sheaves – and geometric objects, in this case Hamiltonian isotopy classes of
exact Lagrangian fillings. If any sequence of quiver mutations were weave
realizable, we would know that each cluster is inhabited by at least one
embedded exact Lagrangian filling – this general statement remains open for an
arbitrary Legendrian link. For instance, any link with an associated quiver
that is not of finite mutation type satisfying the weave realizability
property would admit infinitely many Lagrangian fillings, distinguished by
their quivers.222This would be independent of the cluster structure defined by
the microlocal monodromy functor, which we actually must use for $D$-type.
Note that weave realizability was shown for $A$-type in [TZ18], and beyond $A$
and $D$-types we currently do not know whether there are any other links
satisfying the weave realizability property.
We can further distinguish fillings by studying the cluster algebra structure
on the moduli of microlocal rank-1 sheaves $\mathcal{C}(\Gamma)$ of a weave
$\Lambda(\Gamma)$, e.g. see [CZ20]. Specifically, sheaf quantization of each
exact Lagrangian filling of $\lambda(D_{n})$ induces a cluster chart on the
coordinate ring of functions of $\mathcal{C}(\Gamma_{0}(D_{n}))$ via the
microlocal monodromy functor [STZ17, STWZ19]. Describing a single cluster chart
in the cluster variety requires the data of the quiver associated to the
weave, and the microlocal monodromy around each 1-cycle of the weave.
Crucially, applying the Legendrian mutation operation to the weave induces a
cluster transformation on the cluster chart, and the specific cluster chart
defined by a Lagrangian filling is a Hamiltonian isotopy invariant.
Therefore, Theorem 1 has the following consequence.
###### Corollary 1.
Every cluster chart of the moduli of microlocal rank-$1$ sheaves
$\mathcal{C}(\Gamma_{0}(D_{n}))$, which is a cluster variety of $D_{n}$-type,
is induced by at least one embedded exact Lagrangian filling of
$\lambda(D_{n})\subset(\mathbb{S}^{3},\xi_{st})$. In particular, there exist
at least $(3n-2)C_{n-1}$ exact Lagrangian fillings of the link
$\lambda(D_{n})$ up to Hamiltonian isotopy, where $C_{n}$ denotes the $n$th
Catalan number.
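For concreteness, the lower bound in Corollary 1 can be tabulated for small $n$. The following Python sketch (the function names are ours) evaluates $(3n-2)C_{n-1}$; for $n=3$ it recovers $14$, consistent with the type coincidence $D_{3}=A_{3}$.

```python
from math import comb

def catalan(m):
    # m-th Catalan number: C_m = binom(2m, m) / (m + 1)
    return comb(2 * m, m) // (m + 1)

def seed_count(n):
    # Number of cluster seeds of type D_n, which by Corollary 1 is a
    # lower bound on the number of exact Lagrangian fillings of
    # lambda(D_n) up to Hamiltonian isotopy.
    return (3 * n - 2) * catalan(n - 1)

for n in range(3, 8):
    print(n, seed_count(n))   # 3 -> 14, 4 -> 50, 5 -> 182, ...
```

This is the standard closed form for the number of clusters in a type $D_{n}$ cluster algebra, used here only to make the growth of the bound explicit.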
Moreover, weave realizability implies a slightly stronger result.
Specifically, we can consider the filling exchange graph associated to a link
of $D_{n}$-type, where the vertices are Hamiltonian isotopy classes of
embedded exact Lagrangians, and two vertices are connected by an edge if the
two fillings are related by a Lagrangian disk surgery. Then weave
realizability implies that the filling exchange graph contains a subgraph
isomorphic to the cluster exchange graph for the cluster algebra of
$D_{n}$-type.
###### Remark.
As of yet, we have no way of determining whether our method produces all
possible exact Lagrangian fillings of a type $D_{n}$-link. This question
remains open for $A$-type Legendrian links as well. In fact, the only known
link for which we have a complete nonempty classification of Lagrangian
fillings is the Legendrian unknot, which has a unique filling, and hence also
the Legendrian unlink [EP96]. $\Box$
In summary, our method for constructing exact Lagrangian fillings will be to
represent them using the planar diagrammatic calculus of N-graphs developed in
[CZ20]. This diagrammatic calculus includes a mutation operation on the
diagrams that yields additional fillings. We distinguish the resulting
fillings using a sheaf-theoretic invariant of our filling. From this data, we
extract a cluster algebra structure and show that every mutation of the quiver
associated to the cluster can be realized by applying our Legendrian mutation
operation to the 3-graph, thus proving that there are at least as many
distinct fillings as distinct cluster seeds of $D_{n}$-type. The main theorem
will be proven in Section 3 after giving the necessary preliminaries in
Section 2.
### Acknowledgments
Many thanks to Roger Casals for his support and encouragement throughout this
project. Thanks also to Youngjin Bae for helpful conversations.
### Relation to [ABL21]
While writing this manuscript, we learned that recent independent work by
Byung Hee An, Youngjin Bae, and Eunjeong Lee also produces at least as many
exact Lagrangian fillings as cluster seeds for links of $ADE$ type [ABL21].
From our understanding, they use an inductive argument that relies on the
combinatorial properties of the finite type generalized associahedron.
Specifically, they leverage the fact that the Coxeter transformation in finite
type acts transitively when starting from a particular set of vertices, and
they find a weave pattern that realizes Coxeter mutations. While both this manuscript
and [ABL21] use the framework of $N$-graphs to approach the problem of
enumerating exact Lagrangian fillings, the proofs are different, independent,
and our approach is able to give an explicit construction for realizing any
sequence of quiver mutations via an explicit sequence of mutations in the
3-graph. $\Box$
## 2\. Preliminaries
In this section we introduce the necessary ingredients required for the proof
of Theorem 1 and Corollary 1. We first discuss the contact topology needed to
understand weaves and their homology. We then discuss the sheaf-theoretic
material related to distinguishing fillings via cluster algebraic methods.
### 2.1. Contact Topology and Exact Lagrangian Fillings
A contact structure $\xi$ on $\mathbb{R}^{3}$ is a 2-plane field given locally
as the kernel of a 1-form $\alpha\in\Omega^{1}(\mathbb{R}^{3})$ satisfying
$\alpha\wedge d\alpha\neq 0$. The standard contact structure on
$(\mathbb{R}^{3},\xi_{st})$ is given by the kernel of $\alpha=dz-ydx$. A
Legendrian link $\lambda$ in $(\mathbb{R}^{3},\xi)$ is an embedding of a
disjoint union of copies of $\mathbb{S}^{1}$ that is always tangent to $\xi$.
By definition, the contact 3-sphere $(\mathbb{S}^{3},\xi_{st})$ is the one
point compactification of $(\mathbb{R}^{3},\xi_{st})$. Since a link in
$\mathbb{S}^{3}$ can always be assumed to avoid a point, we will equivalently
be considering Legendrian links in $(\mathbb{R}^{3},\xi_{st})$ and
$(\mathbb{S}^{3},\xi_{st}).$ The symplectization of
$(\mathbb{R}^{3},\xi_{st})$ is given by
$(\mathbb{R}^{3}\times\mathbb{R}_{t},d(e^{t}\alpha))$.
Given two Legendrian links $\lambda_{+}$ and $\lambda_{-}$ in
$(\mathbb{R}^{3},\xi)$, an exact Lagrangian cobordism $\Sigma$ from
$\lambda_{-}$ to $\lambda_{+}$ is an embedded compact orientable surface in
the symplectization $(\mathbb{R}^{3}\times\mathbb{R}_{t},d(e^{t}\alpha))$ such
that
* •
$\Sigma\cap\left(\mathbb{R}^{3}\times[T,\infty)\right)=\lambda_{+}\times[T,\infty)$
* •
$\Sigma\cap\left(\mathbb{R}^{3}\times(-\infty,-T]\right)=\lambda_{-}\times(-\infty,-T]$
* •
$\Sigma$ is an exact Lagrangian, i.e. $(e^{t}\alpha)|_{\Sigma}=df$ for some
function $f:\Sigma\to\mathbb{R}.$
The asymptotic behavior of $\Sigma$, as specified by the first two conditions,
ensures that we can concatenate Lagrangian cobordisms. By definition, an exact
Lagrangian filling of $\lambda_{+}$ is an exact Lagrangian cobordism from
$\emptyset$ to $\lambda_{+}$.
We can also consider the Legendrian lift of an exact Lagrangian in the
contactization
$(\mathbb{R}_{s}\times\mathbb{R}^{4},\ker\\{ds-e^{t}\alpha\\})$ of
$(\mathbb{R}^{4},d(e^{t}\alpha))$. Note that there exists a contactomorphism
between $(\mathbb{R}_{s}\times\mathbb{R}^{4},\ker\\{ds-e^{t}\alpha\\})$ and
the standard contact Darboux structure $(\mathbb{R}^{5},\xi_{st})$,
$\xi_{st}=\ker\\{dz-y_{1}dx_{1}-y_{2}dx_{2}\\}$, and we will often work with
the Legendrian front projection
$(\mathbb{R}^{5},\xi_{st})\longrightarrow\mathbb{R}^{3}_{x_{1},x_{2},z}$ for
the latter. This will be a useful perspective for us, as it allows us to
construct Lagrangian fillings by studying (wave)fronts in
$\mathbb{R}^{3}=\mathbb{R}^{3}_{x_{1},x_{2},z}$ of Legendrian surfaces in
$(\mathbb{R}^{5},\xi_{st})$, and then projecting down to the standard
symplectic Darboux chart
$\mathbb{R}^{4}=\mathbb{R}^{4}_{x_{1},y_{1},x_{2},y_{2}}$. In this setting,
the exact Lagrangian surface is embedded in $\mathbb{R}^{4}$ if and only if
its Legendrian lift has no Reeb chords. The construction will be performed
through the combinatorics of $N$-graphs, as we now explain.
### 2.2. 3-graphs and Weaves
In this subsection, we discuss the diagrammatic method of constructing and
manipulating exact Lagrangian fillings of links arising as the ($-1$)-framed
closures of positive braids via the calculus of $N$-graphs. For this
manuscript, it will suffice to take $N=3$.
###### Definition 1.
A 3-graph is a pair of embedded planar trivalent graphs
$B,R\subseteq\mathbb{D}^{2}$ such that, at any vertex $v\in B\cap R$, the six
edges belonging to $B$ and $R$ incident to $v$ alternate. $\Box$
Equivalently, a 3-graph is an edge-bicolored graph with monochromatic trivalent
vertices and interlacing hexavalent vertices. $\Gamma_{0}(D_{4})$, depicted in
Figure 2 (left), contains two hexavalent vertices displaying the alternating
behavior described in the definition.
###### Remark.
[CZ20] gives a general framework for working with N-graphs, where $N-1$ is the
number of embedded planar trivalent graphs. This allows for the study of
fillings of Legendrian links associated to $N$-stranded positive braids. This
can also be generalized to consider N-graphs in a surface other than
$\mathbb{D}^{2}$. Here, the family of links we are interested in can be
expressed as a family of 3-stranded braids, hence our choice to restrict $N$
to 3 in $\mathbb{D}^{2}$. $\Box$
Given a 3-graph $\Gamma\subseteq\mathbb{D}^{2},$ we describe how to associate
a Legendrian surface $\Lambda(\Gamma)\subseteq(\mathbb{R}^{5},\xi_{st})$. To
do so, we first describe certain singularities of $\Lambda(\Gamma)$ that arise
under the Legendrian front projection
$\pi:(\mathbb{R}^{5},\xi_{st})\to(\mathbb{R}^{3},\xi_{st})$. In general, such
singularities are known as Legendrian singularities or singularities of
fronts. See [Ad90] for a classification of such singularities. The three
singularities we will be interested in are the $A_{1}^{2}$, $A_{1}^{3}$ and
$D_{4}^{-}$ singularities, pictured in Figure 3 below.
Figure 3. $A_{1}^{2}$ (left), $A_{1}^{3}$ (center), and $D_{4}^{-}$ (right)
singularities represented in the 3-graph by an edge, hexavalent vertex, and
trivalent vertex, respectively.
Before we describe our Legendrian surfaces, we must first discuss the ambient
contact structure that they live in. For $\Gamma\subseteq\mathbb{D}^{2}$ we
will take $\Lambda(\Gamma)$ to live in the first jet space
$(J^{1}\mathbb{D}^{2},\xi_{st})=(T^{*}\mathbb{D}^{2}\times\mathbb{R}_{z},\ker(dz-\lambda_{st}))$,
where $\lambda_{st}$ is the standard Liouville form on the cotangent bundle
$T^{*}\mathbb{D}^{2}$. We can view $J^{1}\mathbb{D}^{2}$ as a certain local
model for a contact structure, in the following way. If we take $(Y,\xi)$ to
be a contact 5-manifold, then by the Weinstein neighborhood theorem, any
Legendrian embedding $i:\mathbb{D}^{2}\to(Y,\xi)$ extends to an embedding from
$(J^{1}\mathbb{D}^{2},\xi_{st})$ to a small open neighborhood of
$i(\mathbb{D}^{2})$ with contact structure given by the restriction of $\xi$
to that neighborhood. In particular, a Legendrian embedding of
$i:\mathbb{S}^{1}\to\mathbb{S}^{3}$ gives rise to a contact embedding
$\tilde{i}:J^{1}\mathbb{S}^{1}\longrightarrow\mbox{Op}(i(\mathbb{S}^{1}))$
into some open neighborhood
$\mbox{Op}(i(\mathbb{S}^{1}))\subseteq\mathbb{S}^{3}$. Of particular note in
our case is that, under a Legendrian embedding
$\mathbb{D}^{2}\subseteq(\mathbb{R}^{5},\xi_{st})$, a Legendrian link
$\lambda$ in $J^{1}\partial\mathbb{D}^{2}$ is mapped to a Legendrian link in
the contact boundary $(\mathbb{S}^{3},\xi_{st})$ of the symplectic
$(\mathbb{R}^{4},\lambda_{\text{st}})$ given as the co-domain of the
Lagrangian projection
$(\mathbb{R}^{5},\xi_{st})\rightarrow(\mathbb{R}^{4},\lambda_{\text{st}})$.
See [NR13] for a description of the Legendrian satellite operation.
To construct a Legendrian weave
$\Lambda(\Gamma)\subseteq(J^{1}\mathbb{D}^{2},\xi_{st})$ from a 3-graph
$\Gamma$, we glue together the local germs of singularities according to the
edges of $\Gamma$. First, consider three horizontal wavefronts
$\mathbb{D}^{2}\times\\{1\\}\sqcup\mathbb{D}^{2}\times\\{2\\}\sqcup\mathbb{D}^{2}\times\\{3\\}\subseteq\mathbb{D}^{2}\times\mathbb{R}$
and a 3-graph $\Gamma\subseteq\mathbb{D}^{2}\times\\{0\\}$. We construct the
associated Legendrian weave $\Lambda(\Gamma)$ as follows.
* •
Above each blue (resp. red) edge, insert an $A_{1}^{2}$ crossing between the
$\mathbb{D}^{2}\times\\{1\\}$ and $\mathbb{D}^{2}\times\\{2\\}$ sheets (resp
$\mathbb{D}^{2}\times\\{2\\}$ and $\mathbb{D}^{2}\times\\{3\\}$ sheets) so
that the projection of the $A_{1}^{2}$ singular locus under
$\pi:\mathbb{D}^{2}\times\mathbb{R}\to\mathbb{D}^{2}\times\\{0\\}$ agrees with
the blue (resp. red) edge.
* •
At each blue (resp. red) trivalent vertex $v$, insert a $D_{4}^{-}$
singularity between the sheets $\mathbb{D}^{2}\times\\{1\\}$ and
$\mathbb{D}^{2}\times\\{2\\}$ (resp. $\mathbb{D}^{2}\times\\{2\\}$ and
$\mathbb{D}^{2}\times\\{3\\}$) in such a way that the projection of the
$D_{4}^{-}$ singular locus agrees with $v$ and the projection of the
$A_{1}^{2}$ crossings agree with the edges incident to $v$.
* •
At each hexavalent vertex $v$, insert an $A_{1}^{3}$ singularity along the
three sheets in such a way that the origin of the $A_{1}^{3}$ singular locus
agrees with $v$ and the $A_{1}^{2}$ crossings agree with the edges incident to
$v$.
Figure 4. The weaving of the singularities pictured in Figure 3 along the
edges of the N-graph. Gluing these local pictures together according to the
3-graph $\Gamma$ yields the weave $\Lambda(\Gamma)$.
If we take an open cover $\\{U_{i}\\}_{i=1}^{m}$ of
$\mathbb{D}^{2}\times\\{0\\}$ by open disks, refined so that any disk contains
at most one of these three features, we can glue together the resulting fronts
according to the intersection of edges along the boundary of our disks.
Specifically, if $U_{i}\cap U_{j}$ is nonempty, then we define
$\Sigma(U_{i}\cup U_{j})$ to be the wavefront resulting from considering the
union of wavefronts $\Sigma(U_{i})\cup\Sigma(U_{j})$ in $(U_{i}\cup
U_{j})\times\mathbb{R}$. We define the Legendrian weave $\Lambda(\Gamma)$ as
the Legendrian surface contained in $(J^{1}\mathbb{D}^{2},\xi_{st})$ with
wavefront $\Sigma(\Gamma)=\Sigma(\cup_{i=1}^{m}U_{i})$ given by gluing the
local wavefronts of singularities together according to the 3-graph $\Gamma$
[CZ20, Section 2.3].
The smooth topology of a Legendrian weave $\Lambda(\Gamma)$ is given as a
3-fold branched cover over $\mathbb{D}^{2}$ with simple branched points
corresponding to each of the trivalent vertices of $\Gamma$. The genus of
$\Lambda(\Gamma)$ is then computed using the Riemann-Hurwitz formula:
$g(\Lambda(\Gamma))=\frac{1}{2}(v(\Gamma)+2-3\chi(\mathbb{D}^{2})-|\partial\Lambda(\Gamma)|)$
where $v(\Gamma)$ is the number of trivalent vertices of $\Gamma$ and
$|\partial\Lambda(\Gamma)|$ denotes the number of boundary components of
$\Lambda(\Gamma)$.
###### Example.
If we apply this formula to the 3-graph $\Gamma_{0}(D_{4})$, pictured in
Figure 2, we have $6$ trivalent vertices and 3 link components, so the genus
is computed as $g(\Lambda(\Gamma_{0}(D_{4})))=\frac{1}{2}(6+2-3-3)=1.$
For $\Gamma_{0}(D_{n})$, we have three boundary components for even $n$ and
two boundary components for odd $n$. The number of trivalent vertices is $n+2$,
so the genus $g(\Lambda(\Gamma_{0}(D_{n})))$ is $\lfloor\frac{n-1}{2}\rfloor$,
assuming $n\geq 3$.
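This parity bookkeeping can be checked mechanically. The following minimal Python sketch (our own naming) applies the Riemann-Hurwitz count with $\chi(\mathbb{D}^{2})=1$, $v(\Gamma_{0}(D_{n}))=n+2$, and the stated number of boundary components:

```python
def weave_genus(v, b):
    # Riemann-Hurwitz for the 3-fold branched cover Lambda(Gamma) -> D^2:
    # g = (v + 2 - 3*chi(D^2) - b) / 2, with chi(D^2) = 1.
    return (v + 2 - 3 - b) // 2

for n in range(3, 20):
    v = n + 2                    # trivalent vertices of Gamma_0(D_n)
    b = 3 if n % 2 == 0 else 2   # boundary components of Lambda(Gamma_0(D_n))
    assert weave_genus(v, b) == (n - 1) // 2
```

In particular, $n=4$ gives $v=6$, $b=3$, and genus $1$, matching the example above.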
This computation tells us that $\Lambda(\Gamma_{0}(D_{4}))$ is smoothly a
3-punctured torus bounding the link $\lambda(D_{4}).$ Therefore, we can give a
basis for $H_{1}(\Lambda(\Gamma_{0}(D_{4}));\mathbb{Z})$ in terms of the four
cycles pictured in Figure 2. For $\Gamma_{0}(D_{n})$, the corresponding weave
$\Lambda(\Gamma_{0}(D_{n}))$ will be smoothly a genus
$\lfloor\frac{n-1}{2}\rfloor$ surface with a basis of
$H_{1}(\Lambda(\Gamma_{0}(D_{n}));\mathbb{Z})$ given by $n$ cycles. By a theorem of
Chantraine [Cha10], our computation implies that any filling of
$\lambda(D_{n})$ has genus $\lfloor\frac{n-1}{2}\rfloor$. In the next section,
we describe a general method for giving a basis
$\\{\gamma_{i}^{(0)}\\},i\in[1,n]$ of the first homology
$H_{1}(\Lambda(\Gamma_{0}(D_{n}));\mathbb{Z})\cong\mathbb{Z}^{n}$.
### 2.3. Homology of Weaves
We require a description of the first homology
$H_{1}(\Lambda(\Gamma);\mathbb{Z})$ in order to apply the mutation operation
to a 3-graph $\Gamma$. We first consider an edge connecting two trivalent
vertices. Closely examining the sheets of our surface, we can see that each
such edge corresponds to a 1-cycle, as pictured in Figure 5 (left). We refer
to such a 1-cycle as a short I-cycle. Similarly, any three edges of the same
color that connect a single hexavalent vertex to three trivalent vertices
correspond to a 1-cycle, as pictured in Figure 6 (left). We refer to such a
1-cycle as a short Y-cycle. See Figures 5 (right) and 6 (right) for a diagram of these
1-cycles in the wavefront $\Sigma(\Gamma)$. We can also consider a sequence of
edges starting and ending at trivalent vertices and passing directly through
any number of hexavalent vertices, as pictured in Figure 7. Such a cycle is
referred to as a long I-cycle. Finally, we can combine any number of I-cycles
and short Y-cycles to describe an arbitrary 1-cycle as a tree with leaves on
trivalent vertices and edges passing directly through hexavalent vertices.
In the proof of our main result, we will generally give a basis for
$H_{1}(\Lambda(\Gamma);\mathbb{Z})$ in terms of short I-cycles and short
Y-cycles. Indeed, Figure 8 gives a basis of
$H_{1}(\Lambda(\Gamma_{0}(D_{n}));\mathbb{Z})$ consisting of $n-1$ short
I-cycles and a single Y-cycle.
Figure 5. A short I-cycle $\gamma(e)$ for the edge $e\in\Gamma$ pictured in the
wavefront $\Sigma(\Gamma)$ (left) and a vertical slicing of $\Sigma(\Gamma)$
(right).
Figure 6. A short Y-cycle $\gamma(e)$ defined by the edges
$e_{1},e_{2},e_{3}\in\Gamma$ pictured in the wavefront $\Sigma(\Gamma)$ (left) and
a vertical slicing of $\Sigma(\Gamma)$ (right).
Figure 7. A pair of long I-cycles, both denoted by $\gamma$. The cycle on the
left passes through an even number of hexavalent vertices, while the cycle on
the right passes through an odd number. Figure 8. The 3-graph
$\Gamma_{0}(D_{n})$ and its associated intersection quiver. The black dotted
line represents $n-3$ short I-cycles and the blue dotted line represents a
total of $n-2$ blue edges. The basis $\\{\gamma_{i}^{(0)}\\}$ of
$H_{1}(\Lambda(\Gamma_{0}(D_{n}));\mathbb{Z})$ is given by the orange Y-cycle,
the green I-cycles, and the $n-3$ I-cycles represented by the dotted black
line.
The intersection form $\langle\cdot,\cdot\rangle$ on $H_{1}(\Lambda(\Gamma))$
plays a key role in distinguishing our Legendrian weaves. If we consider a
pair of 1-cycles $\gamma_{1},\gamma_{2}\in H_{1}(\Lambda(\Gamma))$ with
nonempty geometric intersection in $\Gamma$, as pictured in Figure 9, we can
see that the intersection of their projection onto the 3-graph differs from
the intersection in $\Lambda(\Gamma).$ Specifically, we can carefully examine
the sheets that the 1-cycles cross in order to see that $\gamma_{1}$ and
$\gamma_{2}$ intersect only in a single point of $\Lambda(\Gamma)$. If we fix
an orientation on $\gamma_{1}$ and $\gamma_{2},$ then we can assign a sign to
this intersection based on the convention given in Figure 9. We refer to the
signed count of the intersection of $\gamma_{1}$ and $\gamma_{2}$ as their
algebraic intersection and denote it by $\langle\gamma_{1},\gamma_{2}\rangle.$
Notation: For the sake of visual clarity, we will represent an element of
$H_{1}(\Lambda(\Gamma);\mathbb{Z})$ by a colored edge for the remainder of
this manuscript. This also ensures that the geometric intersection more
accurately reflects the algebraic intersection. The original coloring of the
blue or red edges can be readily obtained by examining $\Gamma$ and its
trivalent vertices. $\Box$
Figure 9. Intersection of two cycles, $\gamma_{1}$ and $\gamma_{2}$. The
intersection point is indicated by a black dot. We will set
$\langle\gamma_{1},\gamma_{2}\rangle=-1$ as our convention.
In our correspondence between 3-graphs and weaves, we must consider how a
Legendrian isotopy of the weave $\Lambda(\Gamma)$ affects the 3-graph $\Gamma$
and its homology basis. We can restrict our attention to certain isotopies,
referred to as Legendrian Surface Reidemeister moves. These moves create
specific changes in the Legendrian front $\Sigma(\Gamma)$, known as
perestroikas or Reidemeister moves [Ad90]. From [CZ20], we have the following
theorem relating perestroikas of fronts to the corresponding 3-graphs.
###### Theorem 2 ([CZ20], Theorem 4.2).
Let $\Gamma$ and $\Gamma^{\prime}$ be two 3-graphs related by one of the moves
shown in Figure 10. Then the associated weaves $\Lambda(\Gamma)$ and
$\Lambda(\Gamma^{\prime})$ are Legendrian isotopic relative to their
boundaries. $\Box$
Figure 10. Legendrian Surface Reidemeister moves for 3-graphs. From left to
right, a candy twist, a push-through, and a flop, denoted by I, II, and III
respectively.
See Figure 11 for a description of the behavior of elements of
$H_{1}(\Lambda(\Gamma);\mathbb{Z})$ under these Legendrian Surface
Reidemeister moves. In the pair of 3-graphs in Figure 11 (center), we have
denoted a push-through by II or II$^{-1}$ depending on whether we go from left to
right or right to left. This helps us to specify the simplifications we make in
the figures in the proof of Theorem 1, as this move is not as readily apparent
as the other two. We will refer to the II$^{-1}$ move as a reverse push-through.
Note that an application of this move eliminates the geometric intersection
between the light green and dark green cycles in Figure 11.
Figure 11. Behavior of certain homology cycles under Legendrian Surface
Reidemeister moves.
###### Remark.
It is also possible to verify the computations in Figure 11 by examining the
relative homology of a cycle. Specifically, if we have a basis of the relative
homology $H_{1}(\Lambda(\Gamma),\partial\Lambda(\Gamma);\mathbb{Z})$, then the
intersection form on that basis allows us to determine a given cycle by
Poincaré-Lefschetz duality. $\hfill\Box$
### 2.4. Mutations of 3-graphs
We complete our discussion of general 3-graphs with a description of
Legendrian mutation, which we will use to generate distinct exact Lagrangian
fillings. Given a Legendrian weave $\Lambda(\Gamma)$ and a 1-cycle $\gamma\in
H_{1}(\Lambda(\Gamma);\mathbb{Z})$, the Legendrian mutation
$\mu_{\gamma}(\Lambda(\Gamma))$ outputs a 3-graph and a corresponding
Legendrian weave smoothly isotopic to $\Lambda(\Gamma)$ but whose Lagrangian
projection is generally not Hamiltonian isotopic to that of $\Lambda(\Gamma)$.
###### Definition 2.
Two Legendrian surfaces
$\Lambda_{0},\Lambda_{1}\subseteq(\mathbb{R}^{5},\xi_{st})$ with equal
boundary $\partial\Lambda_{0}=\partial\Lambda_{1}$, are mutation-equivalent if
and only if there exists a compactly supported Legendrian isotopy
$\\{\tilde{\Lambda}_{t}\\}$ relative to the boundary, with
$\tilde{\Lambda}_{0}=\Lambda_{0}$ and a Darboux ball $(B,\xi_{st})$ such that
1. (i)
Outside the Darboux ball, we have
$\tilde{\Lambda}_{1}|_{\mathbb{R}^{5}\backslash
B}=\Lambda_{1}|_{\mathbb{R}^{5}\backslash B}$
2. (ii)
There exists a global front projection $\pi:\mathbb{R}^{5}\to\mathbb{R}^{3}$
such that the pair of fronts $\pi|_{B\cap\tilde{\Lambda}_{1}}$ and
$\pi|_{B\cap\Lambda_{1}}$ coincides with the pair of fronts in Figure 12
below.
$\Box$
Figure 12. Local fronts for two Legendrian cylinders non-Legendrian isotopic
relative to their boundary.
We briefly note that these two fronts lift to non-Legendrian isotopic
Legendrian cylinders in $(\mathbb{R}^{5},\xi_{st})$, relative to the boundary,
and that the 1-cycle we input for our operation is precisely the 1-cycle
defined by the cylinder corresponding to $\Lambda_{0}$.
Combinatorially, we can describe mutation as certain manipulations of the
edges of our graph. Figure 13 (left) depicts mutation at a short I-cycle,
while Figure 13 (right) depicts mutation at a short Y-cycle. In the $N=2$
setting, we can identify 2-graphs with triangulations of an $n$-gon, in which
case mutation at a short I-cycle corresponds to a Whitehead move. In the
3-graph setting, in order to describe mutation at a short Y-cycle, we can
first reduce the short Y-cycle case to a short I-cycle, as shown in Figure 14,
before applying our mutation. See [CZ20, Section 4.9] for a more general
description of mutation at long I and Y-cycles in the 3-graph.
Figure 13. Mutations of a 3-graph. The pair of 3-graphs on the left depicts
mutation at the orange I-cycle, while the pair of 3-graphs on the right
depicts mutation at the orange Y-cycle. In both cases, the dark green edge
depicts the effect of mutation on any cycle intersecting the orange cycle.
The geometric operation above coincides with the combinatorial manipulation of
the 3-graphs. Specifically, we have the following theorem.
###### Theorem 3 ([CZ20], Theorem 4.2.1).
Given two 3-graphs, $\Gamma$ and $\Gamma^{\prime}$ related by either of the
combinatorial moves described in Figure 13, the corresponding Legendrian
weaves $\Lambda(\Gamma)$ and $\Lambda(\Gamma^{\prime})$ are mutation-
equivalent relative to their boundary. $\Box$
Figure 14. Mutation at a short Y-cycle given as a sequence of Legendrian
Surface Reidemeister moves and mutation at a short I-cycle. The Y-cycle in the
initial 3-graph is given by the three blue edges that each intersect the
yellow vertex in the center.
### 2.5. Lagrangian Fillings from Weaves
We now describe in more detail how an exact Lagrangian filling of a Legendrian
link arises from a Legendrian weave. If we label all edges of
$\Gamma\subseteq\mathbb{D}^{2}$ colored blue by $\sigma_{1}$ and all edges
colored red by $\sigma_{2}$, then the points in the intersection
$\Gamma\cap\partial\mathbb{D}^{2}$ give us a braid word in the Artin
generators $\sigma_{1}$ and $\sigma_{2}$ of the 3-stranded braid group. We can
then view the corresponding link $\beta$ as living in
$(J^{1}\mathbb{S}^{1},\xi_{st})$. If we consider our Legendrian weave
$\Lambda(\Gamma)$ as an embedded Legendrian surface in
$(\mathbb{R}^{5},\xi_{st})$, then according to our discussion above, it has
boundary $\Lambda(\beta),$ where $\Lambda(\beta)$ is the Legendrian satellite
of $\beta$ with companion knot given by the standard unknot. In our local
contact model, the projection
$\pi:(J^{1}\mathbb{D}^{2},\xi_{st})\to(T^{*}\mathbb{D}^{2},\lambda_{\text{st}})$
gives an immersed exact Lagrangian surface with immersion points corresponding
to Reeb chords of $\Lambda(\Gamma)$. If $\Lambda(\Gamma)$ has no Reeb chords,
then $\pi$ is an embedding and $\Lambda(\Gamma)$ is an exact Lagrangian
filling of $\Lambda(\beta).$ Since $(\mathbb{S}^{3},\xi_{st})$ minus a point
is contactomorphic to $(\mathbb{R}^{3},\xi_{st})$, we have that an embedding
of $\Lambda(\Gamma)$ into $(\mathbb{R}^{5},\xi_{st})$ gives an exact
Lagrangian filling in $(\mathbb{R}^{4},\lambda_{\text{st}})$ of
$\Lambda(\beta)\subseteq(\mathbb{R}^{3},\xi_{st})$, as it can be assumed –
after a Legendrian isotopy – to be disjoint from the point at infinity.
###### Remark.
We study embedded – rather than immersed – Lagrangian fillings due to the
existence of an $h$-principle for immersed Lagrangian fillings [EM02, Theorem
16.3.2]. In particular, any pair of immersed exact Lagrangian fillings is
connected by a one-parameter family of immersed exact Lagrangian fillings
relative to the boundary. See also [Gro86].
Our desire for embedded Lagrangians motivates the following definition.
###### Definition 3.
A 3-graph $\Gamma\subseteq\mathbb{D}^{2}$ is free if the associated Legendrian
front $\Sigma(\Gamma)$ can be woven with no Reeb chords. $\Box$
$\Gamma_{0}(D_{n})$, depicted in Figure 8, is an example of a free 3-graph of
$D_{n}$-type. Crucially, the mutation operation described above preserves the
free property of a 3-graph.
###### Lemma 1 ([CZ20], Lemma 7.4).
Let $\Gamma\subseteq\mathbb{D}^{2}$ be a free 3-graph. Then the 3-graph
$\mu(\Gamma)$ obtained by mutating according to any of the Legendrian mutation
operations given above is also a free 3-graph. $\Box$
Therefore, starting with a free 3-graph and performing the Legendrian mutation
operation gives us a method of creating additional embedded exact Lagrangian
fillings.
At this stage, we have described the geometric and combinatorial ingredients
needed for Theorem 1. The two subsequent subsections introduce the necessary
algebraic invariants relating Legendrian weaves and 3-graphs to cluster
algebras. These will be used to distinguish exact Lagrangian fillings.
### 2.6. Quivers from Weaves
Before we describe the cluster algebra structure associated to a weave, we
must first describe quivers and how they arise via the intersection form on
$H_{1}(\Lambda(\Gamma);\mathbb{Z}).$ A quiver is a directed graph without
loops or directed 2-cycles. In the weave setting, the data of a quiver can be
extracted from a weave and a basis of its first homology. The intersection
quiver is defined as follows: each basis element $\gamma_{i}\in
H_{1}(\Lambda(\Gamma);\mathbb{Z})$ defines a vertex $v_{i}$ in the quiver and
we have $k$ arrows pointing from $v_{j}$ to $v_{i}$ if
$\langle\gamma_{i},\gamma_{j}\rangle=k$. We will only ever have $k$ equal to $0$
or $1$ for quivers arising from fillings of $\lambda(D_{n})$. See Figure 2
(left) for an example of the quiver
$Q(\Lambda(\Gamma_{0}(D_{4})),\\{\gamma_{i}^{(0)}\\})$ defined by
$\Lambda(\Gamma_{0}(D_{4}))$ and the indicated basis for
$H_{1}(\Lambda(\Gamma_{0}(D_{4}));\mathbb{Z})$.
The combinatorial operation of quiver mutation at a vertex $v$ is defined as
follows, e.g. see [FWZ20a]. First, for every pair of incoming edge and
outgoing edges, we add an edge starting at the tail of the incoming edge and
ending at the head of the outgoing edge. Next, we reverse the direction of all
edges adjacent to $v$. Finally, we cancel any directed 2-cycles. If we started
with the quiver $Q$, then we denote the quiver resulting from mutation at $v$
by $\mu_{v}(Q).$ See Figure 15 (bottom) for an example. Under this operation,
we can naturally identify the vertices of $Q$ with $\mu_{v}(Q)$, just as we
can identify the homology bases of a weave before and after Legendrian
mutation.
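The three steps above can be encoded on the skew-symmetric exchange matrix $B$ of the quiver, where $b_{ij}$ counts arrows from $v_{i}$ to $v_{j}$. The following Python sketch is a standard implementation of matrix mutation; the chosen orientation of the $D_{4}$ quiver is our assumption for illustration, not read off from Figure 2. The composite-arrow and 2-cycle-cancellation steps collapse into one signed formula.

```python
def mutate(B, k):
    # Quiver mutation at vertex k on the exchange matrix B, where
    # B[i][j] = (number of arrows i -> j) - (number of arrows j -> i).
    n = len(B)
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                # reverse all arrows incident to k
                M[i][j] = -B[i][j]
            else:
                # add a composite arrow i -> j for each pair i -> k -> j,
                # cancelling directed 2-cycles along the way
                M[i][j] = (B[i][j]
                           + max(B[i][k], 0) * max(B[k][j], 0)
                           - max(-B[i][k], 0) * max(-B[k][j], 0))
    return M

# A D_4-type quiver: a central vertex 0 with one arrow to each leaf.
B = [[0, 1, 1, 1],
     [-1, 0, 0, 0],
     [-1, 0, 0, 0],
     [-1, 0, 0, 0]]
assert mutate(mutate(B, 0), 0) == B   # quiver mutation is an involution
```

Mutating this quiver at the central vertex simply reverses the three arrows, since no composite path passes through it; mutating at a leaf likewise yields another orientation of the same $D_{4}$ diagram.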
###### Remark.
The crucial difference between algebraic and geometric intersections is
captured in the step canceling directed 2-cycles. This cancellation is
implemented by default in a quiver mutation, as the arrows of the quiver only
capture algebraic intersections. In contrast, the geometric intersection of
homology cycles after a Legendrian mutation will, in general, not coincide
with the algebraic intersection. This dissonance will be explored in detail in
Section 3. $\Box$
The following theorem relates the two operations of quiver mutation and
Legendrian mutation:
Figure 15. Mutation of $\Gamma_{0}(D_{4})$ and its associated intersection
quiver at the short Y-cycle colored in orange.
###### Theorem 4 ([CZ20], Section 7.3).
Given a 3-graph $\Gamma$, Legendrian mutation at an embedded cycle $\gamma$
induces a quiver mutation for the associated intersection quivers, taking
$Q(\Gamma,\\{\gamma_{i}\\})$ to $\mu_{\gamma}(Q(\Gamma,\\{\gamma_{i}\\})).$
$\Box$
See Figure 15 for an example showing the quiver mutation of
$Q(\Gamma_{0}(D_{4}),\\{\gamma_{i}^{(0)}\\})$, $i\in[1,4]$, corresponding to
Legendrian mutation applied to $\Lambda(\Gamma_{0}(D_{4})).$
### 2.7. Microlocal Sheaves and Clusters
To introduce the cluster structure mentioned above, we need to define a sheaf-
theoretic invariant. We first consider the category of dg complexes of sheaves
of $\mathbb{C}-$modules on $\mathbb{D}^{2}\times\mathbb{R}$ with constructible
cohomology sheaves. For a given 3-graph $\Gamma$ and its associated Legendrian
$\Lambda(\Gamma)$, we denote by
$\mathcal{C}(\Gamma):=Sh^{1}_{\Lambda(\Gamma)}(\mathbb{D}^{2}\times\mathbb{R})_{0}$
the subcategory of microlocal rank-one sheaves with microlocal support along
$\Lambda(\Gamma)$, which we require to be zero in a neighborhood of
$\mathbb{D}^{2}\times\\{-\infty\\}$. Here we identify the unit cotangent
bundle $T^{\infty,-}(\mathbb{D}^{2}\times\mathbb{R})$ with the first jet space
$J^{1}(\mathbb{D}^{2}).$ With this identification, the sheaves of
$\mathcal{C}(\Gamma)$ are constructible with respect to the stratification
given by the Legendrian front $\Sigma(\Gamma).$ Work of Guillermou, Kashiwara,
and Schapira implies that $\mathcal{C}(\Gamma)$ is an invariant under
Hamiltonian isotopy [GKS12].
As described in [CZ20, Section 5.3], this category has a combinatorial
description. Given a 3-graph $\Gamma$, the data of the moduli space of
microlocal rank-one sheaves is equivalent to providing:
1. (i)
An assignment to each face $F$ (connected component of
$\mathbb{D}^{2}\backslash\Gamma$) of a flag $\mathcal{F}^{\bullet}(F)$ in the
vector space $\mathbb{C}^{3}$.
2. (ii)
For each pair $F_{1},F_{2}$ of adjacent faces sharing an edge labeled by
$\sigma_{i}$, we require that the corresponding flags satisfy
$\mathcal{F}^{j}(F_{1})=\mathcal{F}^{j}(F_{2}),\qquad 0\leq j\leq 3,j\neq
i,\qquad\text{ and }\qquad\mathcal{F}^{i}(F_{1})\neq\mathcal{F}^{i}(F_{2}).$
Finally, we consider the moduli space of flags satisfying (i) and (ii) modulo
the diagonal action of $GL_{3}(\mathbb{C})$ on $\mathcal{F}^{\bullet}$. The precise
statement [CZ20, Theorem 5.3] is that the flag moduli space, denoted by
$\mathcal{M}(\Gamma)$, is isomorphic to the space of microlocal rank-one
sheaves $\mathcal{C}(\Gamma)$. Since $\mathcal{C}(\Gamma)$ is an invariant of
$\Lambda(\Gamma)$ up to Hamiltonian isotopy, it follows that
$\mathcal{M}(\Gamma)$ is an invariant as well. In the I-cycle case, when the
edges are labeled by $\sigma_{1}$, the moduli space is determined by four
lines $a\neq b\neq c\neq d\neq a$, as pictured in Figure 16 (left). If the
edges are labeled by $\sigma_{2}$, then the data is given by four planes
$A\neq B\neq C\neq D\neq A.$ Around a short Y-cycle, the data of the flag
moduli space is given by three distinct planes $A\neq B\neq C\neq A$ contained
in $\mathbb{C}^{3}$ and three distinct lines $a\subsetneq A,b\subsetneq
B,c\subsetneq C$ with $a\neq b\neq c\neq a,$ as pictured in Figure 16 (right).
Figure 16. The data of the flag moduli space given in the neighborhood of a
short I-cycle (left) and a short Y-cycle (right). Lines are represented by
lowercase letters, while planes are written in uppercase. The intersection of
the two lines $a$ and $b$ is written as $ab$.
To describe the cluster algebra structure on $\mathcal{C}(\Gamma)$, we need to
specify the cluster seed associated to the quiver
$Q(\Lambda(\Gamma),\\{\gamma_{i}\\})$ via the microlocal monodromy functor
$\mu_{mon}$, which takes us from the category $\mathcal{C}(\Gamma)$ to the
category of rank one local systems on $\Lambda(\Gamma)$. As described in
[STZ17, STWZ19], the functor $\mu_{mon}$ takes a 1-cycle as input and outputs
the isomorphism of sheaves given by the monodromy about the cycle. Since it is
locally defined, we can compute the microlocal monodromy about an I-cycle or
Y-cycle using the data of the flag moduli space in a neighborhood of the
cycle. If we have a short I-cycle $\gamma$ with flag moduli space described by
the four lines $a,b,c,d$, as in Figure 16 (left), then the microlocal
monodromy about $\gamma$ is given by the cross ratio
$\frac{a\wedge b}{b\wedge c}\frac{c\wedge d}{d\wedge a}.$
Similarly, for a short Y-cycle with flag moduli space given as in Figure 16
(right), the microlocal monodromy is given by the triple ratio
$\frac{B(a)C(b)A(c)}{B(c)C(a)A(b)}.$
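Both ratios are well defined because they are invariant under rescaling the chosen representatives. The sketch below checks this numerically, encoding lines by spanning vectors and planes by defining covectors; reading $X(y)$ as the evaluation of the covector for the plane $X$ on a spanning vector of the line $y$ is our interpretation of the notation.

```python
from fractions import Fraction as F

def wedge(u, v):
    """a wedge b for lines in a fixed plane, encoded as 2-vectors."""
    return u[0] * v[1] - u[1] * v[0]

def cross_ratio(a, b, c, d):
    return F(wedge(a, b) * wedge(c, d), wedge(b, c) * wedge(d, a))

def ev(plane, line):
    """Evaluate the covector defining a plane on a vector spanning a line."""
    return sum(p * v for p, v in zip(plane, line))

def triple_ratio(A, B, C, a, b, c):
    return F(ev(B, a) * ev(C, b) * ev(A, c), ev(B, c) * ev(C, a) * ev(A, b))

# Rescaling any representative leaves both ratios unchanged.
a, b, c, d = (1, 0), (0, 1), (1, 1), (1, 2)
assert cross_ratio(a, b, c, d) == cross_ratio((3, 0), b, c, d)

# Coordinate planes x=0, y=0, z=0 with lines a in A, b in B, c in C.
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
a3, b3, c3 = (0, 1, 1), (1, 0, 2), (1, 2, 0)
assert triple_ratio(A, B, C, (0, 2, 2), b3, c3) == triple_ratio(A, B, C, a3, b3, c3)
```

Each representative appears exactly once in the numerator and once in the denominator of its ratio, so the scale factors cancel; this is why the ratios descend to functions on the flag moduli space.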
As described in [CZ20, Section 7.2], the microlocal monodromy about a 1-cycle
gives rise to an $X$-cluster variable at the corresponding vertex in the
quiver. Under mutation of the 3-graph, the cross ratio and triple ratio
transform as cluster X-coordinates. Specifically, if we start with a 3-graph
with cluster variables $x_{j}$, then the cluster variables $x_{j}^{\prime}$ of
the 3-graph after mutating at $\gamma_{i}$ are given by the equation
$x_{j}^{\prime}=\begin{cases}x_{j}^{-1}&i=j\\\
x_{j}(1+x_{i}^{-1})^{-\langle\gamma_{i},\gamma_{j}\rangle}&\langle\gamma_{i},\gamma_{j}\rangle>0\\\
x_{j}(1+x_{i})^{-\langle\gamma_{i},\gamma_{j}\rangle}&\langle\gamma_{i},\gamma_{j}\rangle<0\end{cases}$
See Figure 17 for an example.
Figure 17. Prior to mutating at $\gamma_{1},$ we have
$\langle\gamma_{1},\gamma_{2}\rangle=-1$. Computing the cross ratios for
$\gamma_{1}$ and $\mu_{1}(\gamma_{1})$, we see that the cross ratio
transforms as $\mu_{1}(x_{1})=\frac{b\wedge c}{c\wedge e}\frac{e\wedge
a}{a\wedge b}=x_{1}^{-1}$ under mutation. Similarly, computing the cross
ratios for $\gamma_{2}$ and $\mu_{1}(\gamma_{2})$ and applying the relation
$e\wedge b\cdot a\wedge c=b\wedge c\cdot e\wedge a+a\wedge b\cdot c\wedge e,$
we have $\mu_{1}(x_{2})=\frac{e\wedge a}{a\wedge c}\frac{c\wedge d}{d\wedge
e}\left(1+\frac{a\wedge b}{b\wedge c}\frac{c\wedge e}{e\wedge a}\right).$
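The mutation rule above can be checked in exact rational arithmetic. The sketch below treats the rank-2 case $\langle\gamma_{1},\gamma_{2}\rangle=-1$ of Figure 17, with arbitrary sample values standing in for the cross ratios; the intersection data is mutated by the standard exchange-matrix rule so that the check can be iterated.

```python
from fractions import Fraction

def sign(x):
    return (x > 0) - (x < 0)

def x_mutate(x, eps, i):
    """X-cluster mutation at vertex i, following the formula in the text,
    with eps[i][j] = <gamma_i, gamma_j>."""
    n = len(x)
    new_x = list(x)
    new_x[i] = 1 / x[i]
    for j in range(n):
        e = eps[i][j]
        if j == i or e == 0:
            continue
        if e > 0:
            new_x[j] = x[j] * (1 + 1 / x[i]) ** (-e)
        else:
            new_x[j] = x[j] * (1 + x[i]) ** (-e)
    # standard Fomin-Zelevinsky mutation of the intersection data
    new_eps = [[-eps[j][k] if i in (j, k)
                else eps[j][k] + sign(eps[j][i]) * max(0, eps[j][i] * eps[i][k])
                for k in range(n)] for j in range(n)]
    return new_x, new_eps

# <gamma_1, gamma_2> = -1, as in Figure 17; sample values for the cross ratios.
eps = [[0, -1], [1, 0]]
x = [Fraction(2), Fraction(3)]

x1, eps1 = x_mutate(x, eps, 0)
assert x1 == [Fraction(1, 2), Fraction(9)]   # x_1 -> 1/x_1, x_2 -> x_2*(1 + x_1)

x2, eps2 = x_mutate(x1, eps1, 0)
assert (x2, eps2) == (x, eps)                # mutating twice is the identity
```

For $\langle\gamma_{1},\gamma_{2}\rangle=-1$ this reproduces the Figure 17 transformation, $x_{1}\mapsto x_{1}^{-1}$ and $x_{2}\mapsto x_{2}(1+x_{1})$, and mutating twice at the same vertex returns the original seed.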
The goal of the next section will be to realize each possible mutation of the
$D_{n}$ quiver as a mutation of the corresponding 3-graph. This will imply
that there are at least as many exact Lagrangian fillings as cluster seeds of
$D_{n}$-type. There exists a complete classification of all finite mutation
type cluster algebras, and in fact, the number of cluster seeds of
$D_{n}$-type is $(3n-2)C_{n-1}$ [FWZ20b].
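As a quick sanity check on this count, using the closed form $C_{k}=\frac{1}{k+1}\binom{2k}{k}$ for the Catalan numbers:

```python
from math import comb

def d_type_seeds(n):
    """Number of cluster seeds of D_n-type: (3n - 2) * C_{n-1}."""
    k = n - 1
    catalan = comb(2 * k, k) // (k + 1)  # C_k is always an integer
    return (3 * n - 2) * catalan

assert d_type_seeds(4) == 50   # matches the 50 cluster seeds of D_4 cited below
assert d_type_seeds(3) == 14   # D_3 = A_3, which has 14 seeds
```

For $n=4$ this gives the $50$ cluster seeds of $D_{4}$, and for $n=3$ it recovers the $14$ seeds of $D_{3}=A_{3}$.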
###### Remark.
It is not known whether other methods of generating exact Lagrangian fillings
for $\lambda(D_{n})$ access all possible cluster seeds of $D_{n}$-type. When
constructing fillings of $D_{4}$ by opening crossings, as in [EHK16, Pan17],
experimental evidence suggests that it is only possible to access at most 46
out of the possible 50 cluster seeds by varying the order of the crossings
chosen. In the combinatorial setting, we also contrast the 3-graphs
$\Gamma(D_{4})$ with double wiring diagrams for the torus link $T(3,3)$, which
is the smooth type of $\lambda(D_{4})$. The moduli of sheaves
$\mathcal{C}(\Gamma(D_{4}))$ for $\Gamma(D_{4})$ embeds as an open positroid
cell into the Grassmannian $Gr(3,6)$ [CG20], so we can identify some cluster
charts with double wiring diagrams. The double wiring diagrams associated to
$Gr(3,6)$ only access 34 distinct cluster seeds – out of 50 – via local moves
applied to an initial double wiring diagram [FWZ20a]. $\Box$
## 3\. Proof of Main Results
In this section, we state and prove Theorem 5, which implies Theorem 1. The
following definitions relate the algebraic intersections of cycles to
geometric intersections in the context of 3-graphs.
###### Definition 4.
A 3-graph $\Gamma$ with associated basis $\\{\gamma_{i}\\},$
$i\in[1,b_{1}(\Lambda(\Gamma))]$ of $H_{1}(\Lambda(\Gamma);\mathbb{Z})$ is
_sharp at a cycle_ $\gamma_{j}$ if, for any other cycle
$\gamma_{k}\in\\{\gamma_{i}\\}$, the geometric intersection number of
$\gamma_{j}$ with $\gamma_{k}$ is equal to the algebraic intersection
$\langle\gamma_{j},\gamma_{k}\rangle$.
$\Gamma$ is _locally sharp_ if, for any cycle $\gamma\in\\{\gamma_{i}\\},$
there exists a sequence of Legendrian Surface Reidemeister moves taking
$\Gamma$ to some other 3-graph $\Gamma^{\prime}$ such that $\Gamma^{\prime}$
is sharp at the corresponding cycle $\gamma^{\prime}\in
H_{1}(\Lambda(\Gamma^{\prime});\mathbb{Z})$.
A 3-graph $\Gamma$ with a set of cycles $\\{\gamma_{i}\\}$ is _sharp_ if $\Gamma$ is
sharp at all $\gamma_{i}\in\\{\gamma_{i}\\}$. $\Box$
For 3-graphs that are not sharp, it is possible that a sequence of mutations
will cause a cycle to become immersed. This is the only obstruction to weave
realizability. Therefore, sharpness is a desirable property for our 3-graphs,
as it simplifies our computations and helps us avoid creating immersed cycles.
We will not be able to ensure sharpness for all $\Gamma(D_{n})$ that arise as
part of our computations (e.g., see the Type III.i normal form in Figure 19),
but we will be able to ensure that each of our 3-graphs is locally sharp.
### 3.1. Proof of Theorem 1
The following result is slightly stronger than the statement of Theorem 1, as
we are able to show that each 3-graph in our sequence of mutations is locally
sharp.
###### Theorem 5.
Let $\mu_{v_{1}},\dots,\mu_{v_{k}}$ be a sequence of quiver mutations, with
initial quiver $Q(\Gamma_{0}(D_{n}),\\{\gamma_{i}^{(0)}\\})$. Then, there
exists a sequence $\Gamma_{0}(D_{n}),\dots,\Gamma_{k}(D_{n})$ of 3-graphs such
that
1. i.
$\Gamma_{j-1}(D_{n})$ is related to $\Gamma_{j}(D_{n})$ by mutation at a cycle
$\gamma_{j}$ and by Legendrian Surface Reidemeister moves I, II and III. The
cycle $\gamma_{j}$ represents the vertex $v_{j}$ in the intersection quiver
and it is given by one of the cycles in the initial basis
$\\{\gamma_{i}^{(0)}\\}$ after mutation and Reidemeister moves.
2. ii.
$\Gamma_{j}(D_{n})$ is sharp at $\gamma_{j}$.
3. iii.
$\Gamma_{j}(D_{n})$ is locally sharp.
4. iv.
The basis of cycles for $\Gamma_{j}(D_{n})$, obtained from the initial basis
$\\{\gamma_{i}^{(0)}\\}$ by mutation and Reidemeister moves, consists entirely
of short Y-cycles and short I-cycles.
The conditions ii-iv allow us to continue to iterate mutations after applying
a small number of simplifications at each step. Theorem 1 thus follows from
Theorem 5.
###### Proof.
We proceed by organizing the 3-graphs arising from any sequence of mutations
of $\Gamma_{0}(D_{n})$ into four types, in line with the organization scheme
introduced by Vatne for quivers of $D_{n}$-type [Vat10]. Vatne’s
classification of quivers in the mutation class of $D_{n}$-type uses the
configuration of a certain subquiver to define the different types. Outside of
that subquiver, there are a number of disjoint subquivers of $A_{n}$-type that
are referred to as $A_{n}$ tail subquivers. We will refer to the corresponding
cycles in the 3-graph as $A_{n}$ tail subgraphs, or simply $A_{n}$ tails when
it is clear from context whether we are referring to the quiver or the
3-graph. For each type, Vatne describes the results of quiver mutation at
different vertices, which can depend on the existence of $A_{n}$ tail
subquivers. See Figures 20, 26, 30, and 34 for the four types and their
mutations.
Notation. As mentioned in the previous section, cycles are pictured as colored
edges for the sake of visual clarity. Throughout this section, we denote all
of the pink cycles by $\gamma_{1},$ purple cycles by $\gamma_{2}$, orange
cycles by $\gamma_{3}$, dark green cycles by $\gamma_{4}$, light green cycles
by $\gamma_{5}$ and light blue cycles by $\gamma_{6}$. With this notation,
$\gamma_{i}$ will correspond to the vertex labeled by $v_{i}$ in the quivers
given below.
$\boldsymbol{A_{n}}$ Tails. We briefly describe the behavior of the $A_{n}$
tail subquivers, as given in [Vat10], in terms of weaves. Any of the $n$
vertices in an $A_{n}$ subquiver can have valence between 0 and 4, and every
cycle in the quiver is an oriented 3-cycle. If a vertex $v$ has valence 3, then two
of the edges form part of a 3-cycle, while the third edge is not part of any
3-cycle. If $v$ has valence 4, then two of the edges belong to one 3-cycle and
the remaining two edges belong to a separate 3-cycle.
Any $A_{n}$ tail of the quiver can be represented by a sharp configuration of
$n$ I-cycles in the 3-graph. See Figure 18 for an identification of I-cycles
with quiver vertices of a given valence. Mutation at any vertex $v_{i}$ in the
quiver corresponds to mutation at the I-cycle $\gamma_{i}$ in the 3-graph, so
it is readily verified that mutation preserves the number of I-cycles and
requires no application of Legendrian Surface Reidemeister moves to simplify.
As a consequence, any sequence of $A_{n}$ tail mutations is weave realizable,
and a sharp 3-graph remains sharp after mutation at $A_{n}$ tail I-cycles that
only intersect other $A_{n}$ tail I-cycles.
Figure 18. I-cycles in an $A_{n}$ tail of the 3-graph and the corresponding
$A_{n}$ tail subquiver.
Normal Forms. For each of the four types of $D_{n}$ quivers described in
[Vat10], we give one or two specific subgraphs of $\Gamma(D_{n})$, which we
refer to as normal forms. These normal forms are pictured in Figure 19. We
indicate the possible existence of $A_{n}$ tail subgraphs by a black circle.
We will say that an edge of the 3-graph carries a cycle if it is part of a
homology cycle. We will generally use this terminology to specify which edges
cannot carry a cycle.
Figure 19. Normal forms of types I-IV. In the top row, pictured from left to
right, are the normal forms for Types I, II, III.i, and III.ii. In the bottom
row are the normal forms for Types IV.i, IV.ii, and IV $(k>3)$. The possible
addition of I-cycles corresponding to $A_{n}$ tails of the quiver is
represented by black circles. For Type IV, $k$ represents the length of the
directed cycle of edges in the quiver that remains after deleting all of the
circle vertices. The dotted lines in the $k>3$ case represent $k-3$ I-cycles
that, together with the two Y-cycles and single I-cycle pictured, form a
$(k+2)$-gon with a single blue diagonal.
For each possible quiver mutation, we describe the possible mutations of the
3-graph and show that the result matches the quiver type and retains the
properties listed in Theorem 5 above. In addition, the Legendrian Surface
Reidemeister moves we describe ensure that the $A_{n}$ tail subgraphs continue
to consist solely of short I-cycles. If the mutation results in a long I-cycle
or pair of long I-cycles connecting our $A_{n}$ tail to the rest of the
3-graph, we can simplify by applying a sequence of $n$ push-throughs to ensure
that these are all short I-cycles. It is readily verified that we can always
do this and that no other simplifications of the $A_{n}$ tails are required
following any other mutations. We include $A_{n}$ tail cycles only where
relevant to the specific mutation. In our computations below, we generally
omit the final steps of applying a series of push-throughs to make any long I
or Y-cycles into short I or Y-cycles. Figure 25 provides an example where
these push-throughs are shown for both an I-cycle and a Y-cycle.
###### Remark.
The Type I normal form does not cover every possible arrangement of the
3-graph corresponding to a Type I quiver. Mutating at either of the short
I-cycles $\gamma_{1}$ or $\gamma_{2}$ produces one of four possible
arrangements of the cycles $\gamma_{1},\gamma_{2},$ and $\gamma_{3}$ in a
3-graph corresponding to a Type I quiver. Since these mutations are somewhat
straightforward, we simplify our calculations by giving a single normal form
rather than four, and describing the relevant mutations of two of the four
possible 3-graphs in Figures 21, 22, 23, and 24. The remaining cases can be
seen by swapping the cycle(s) to the left of the short Y-cycle with the
cycle(s) to the right of it. This symmetry corresponds to reversing all of the
arrows in the quiver. In general, we will implicitly appeal to similar
symmetries of the normal form 3-graphs to reduce the number of cases we must
consider. $\Box$
Type I. We start with 3-graphs, always endowed with a homology basis, whose
associated intersection quivers are of Type I. See Figure 20 for the
relevant quiver mutations.
Figure 20. From top to bottom, Type I to Type I, Type I to Type II, and Type I
to Type IV quiver mutations. The arrow labeled by $\mu_{v_{i}}$ indicates
mutation at the vertex $v_{i}$. In each line, the first quiver mutation shows
the case where $v_{3}$ is only adjacent to one $A_{n}$ tail vertex, while the
second quiver mutation shows the case where $v_{3}$ is adjacent to two $A_{n}$
tail vertices. Note that reversing the direction of all of the arrows
simultaneously before mutating gives additional possible quiver mutations of
the same type.
Figure 21. Type I to Type I mutation. Arrows labeled by $\mu$ indicate
mutation at a cycle of the same color.
* i.
(Type I to Type I) There are two possible Type I to Type I mutations of
3-graphs depicted in Figure 21 (left) and (right). The second 3-graph in the
first sequence is the result of mutating at $\gamma_{3}$. As shown there,
mutation does not create any new geometric or algebraic
intersections. Instead, it takes positive intersections to negative
intersections and vice versa. This is reflected in the quivers pictured
underneath the 3-graphs, as the orientation of edges has reversed under the
mutation. As explained above, we could simplify the resulting 3-graph by
applying a push-through move to each of the long I-cycles to get a sharp
3-graph where the homology cycles are made up of a single short Y-cycle and
some number of short I-cycles.
* ii.
(Type I to Type I) For the second possible Type I to Type I mutation, we
proceed as pictured in Figure 21 (right). There we can see that mutation at
$\gamma_{2}$ only affects the sign of the intersection of $\gamma_{2}$ with
the $\gamma_{3}$. This reflects the fact that the corresponding quiver
mutation has only reversed the orientation of the edge between $v_{2}$ and
$v_{3}$. Mutating at any other I-cycle is equally straightforward and yields a
Type I to Type I mutation as well.
* iii.
(Type I to Type II) In Figure 22 we consider the cases where the Y-cycle
$\gamma_{3}$ intersects one I-cycle (top) or two I-cycles (bottom) in the
$A_{n}$ tail subgraph. Mutation at $\gamma_{3}$ introduces an intersection
between $\gamma_{2}$ and $\gamma_{4}$ that causes the second 3-graph in of
each mutation sequences to no longer be sharp. Applying a push-through to
$\gamma_{2}$ resolves this intersection so that the geometric intersection
between $\gamma_{2}$ and $\gamma_{4}$ matches their algebraic intersection.
This simplification ensures that the result of $\mu_{\gamma_{3}}$ is a sharp
3-graph that matches the Type II normal form. If we compare the mutations in
the sequence on the left and the sequence on the right of the figure, we can
see that the presence of the $A_{n}$ tail cycle $\gamma_{5}$ does not affect
the computation.
Figure 22. Type I to Type II mutations. Arrows labeled by $I$, $II,$ or $III$
indicate a twist, push-through, or flop involving a cycle of the same color.
* iv.
(Type I to Type IV.i) We now consider the first of two Type I to Type IV
mutations, shown in Figure 23. Starting with the configuration of cycles at
the left of each sequence and mutating at $\gamma_{3}$ causes $\gamma_{1}$ and
$\gamma_{2}$ to cross. Applying a push-through to $\gamma_{1}$ or to
$\gamma_{2}$ (not pictured) simplifies the resulting intersection and yields a
Type IV.i normal form made up of the cycles
$\gamma_{1},\gamma_{2},\gamma_{3},$ and $\gamma_{4}$. The sequences on the
left and right of Figure 23 differ only by the presence of the $A_{n}$ tail
cycle $\gamma_{5}.$
Figure 23. Type I to Type IV.i mutations.
* v.
(Type I to Type IV.ii) In Figure 24, we consider the cases where $\gamma_{1}$
intersects one I-cycle (left) or two I-cycles (right) in the $A_{n}$ tail
subgraph, as we did in the Type I to Type II case. As before,
we must apply a push-through to resolve the new intersections
that cause the second 3-graph in each sequence to fail to be sharp. When we
include both $\gamma_{4}$ and $\gamma_{5}$ in the sequence on the right, we
get two new intersections after mutating, and therefore require two push-
throughs. Note that in the IV.ii case, we must first apply the push-through to
$\gamma_{1}$ and $\gamma_{2}$ in order to ensure that we can apply a push-
through to any additional cycles in the $A_{n}$ tail subgraph. This causes the
Y-cycles of the graph to correspond to different vertices in the quiver than
in the Type IV.i normal form, which is the main reason we distinguish between
the normal forms for Type IV.i and Type IV.ii.
Figure 24. Type I to Type IV.ii mutations.
In Figure 25 we show how to apply push-throughs to completely simplify the
long I and Y-cycles pictured in the Type I to Type IV.ii graph. As mentioned
above, these push-throughs are identical to any other computation required to
simplify our resulting 3-graphs to a set of short I-cycles and short Y-cycles.
Figure 25. Push-through examples. The first push-through move simplifies the
pink long I-cycle $\gamma_{1}$, while the second simplifies the dark green
long Y-cycle $\gamma_{4}$.
The above cases describe all possible mutations of the Type I normal form.
Each of these mutations yields a sharp 3-graph with short I-cycles and
Y-cycles, as desired.
Type II. We now consider mutations of our Type II normal form. See Figure 26
for the relevant quivers. As shown in the figure, performing a quiver mutation
at the 2-valent vertices labeled by $v_{1}$ or $v_{2}$ yields a Type III
quiver, while a quiver mutation at the vertices labeled $v_{3}$ or $v_{4}$
yields either another Type II quiver or a Type I quiver, depending on the
intersection of $v_{3}$ or $v_{4}$ with any $A_{n}$ tail subquivers.
Figure 26. From top to bottom, Type II to Type I, Type II to Type II, and
Type II to Type III quiver mutations.
Figure 27. Type II to Type I mutations.
* i.
(Type II to Type I) We first consider the sequence of 3-graphs pictured in
Figure 27. Mutation at $\gamma_{4}$ results in a new geometric intersection
between $\gamma_{2}$ and $\gamma_{3}$ even though
$\langle\gamma_{2},\gamma_{3}\rangle=0$. We can resolve this by applying a
reverse push-through at the trivalent vertex where $\gamma_{2}$ and
$\gamma_{3}$ meet. The resulting 3-graph is sharp, as $\gamma_{2}$ and
$\gamma_{3}$ no longer have any geometric intersection. This computation is
identical if $\gamma_{3}$ were to intersect a single $A_{n}$ tail cycle and we
mutated at $\gamma_{3}$ instead. Note that here we require the red edge
adjacent to the trivalent vertex where we applied our push-through not carry a
cycle, as specified by our normal form.
Figure 28. Type II to Type II mutations.
* ii.
(Type II to Type II) We now consider the sequence shown in Figure 28. After
mutating at $\gamma_{4}$, we have the same intersection between $\gamma_{2}$
and $\gamma_{3}$ as in the previous case, which we again resolve by reverse
push-through at the same trivalent vertex. In this case, we also have an
intersection between $\gamma_{1}$ and $\gamma_{5},$ which we resolve via a
push-through of $\gamma_{1}$. As a result, $\gamma_{5}$ becomes a Y-cycle, and the
Type II normal form is now made up of the cycles $\gamma_{1},$ $\gamma_{2}$,
$\gamma_{4},$ and $\gamma_{5}$, while $\gamma_{3}$ becomes an $A_{n}$ tail
cycle.
Figure 29. Type II to Type III mutations.
* iii.
(Type II to Type III.i) Mutation at $\gamma_{1}$ or $\gamma_{2}$ in the Type
II normal form yields either of the Type III normal forms. In the sequence on
the left of Figure 29, mutation at $\gamma_{2}$ leads to a geometric
intersection between $\gamma_{3}$ and $\gamma_{4}$ at two trivalent vertices.
Since the signs of these two intersections differ, the algebraic intersection
$\langle\gamma_{3},\gamma_{4}\rangle$ is zero, so the resulting 3-graph is not
sharp. However, it is sharp at $\gamma_{1}$ and $\gamma_{2}$, and applying a
flop to the 3-graph removes the geometric intersection between $\gamma_{3}$
and $\gamma_{4}$ at the cost of introducing the same intersection between
$\gamma_{1}$ and $\gamma_{2}$. Therefore, applying the flop does not make the
3-graph sharp, but it does show that the 3-graph resulting from our mutation
is locally sharp at every cycle.
* iv.
(Type II to Type III.ii) In the sequence on the right of Figure 29, mutation
at $\gamma_{1}$ yields a sharp 3-graph that matches the Type III.ii normal
form.
Type III: Figure 30 illustrates the Type III quiver mutations. Figures 31, 32,
and 33 depict the corresponding Legendrian mutations of the Type III normal
forms.
Figure 30. Type III quiver mutations.
Figure 31. Type III.i to Type II mutations (left) and Type III.ii to Type II
mutations (right).
* i.
(Type III.i to Type II) We first consider the sequence of 3-graphs in Figure
31 (left). Mutating at $\gamma_{1}$ or $\gamma_{2}$ immediately yields a Type
II normal form. Mutating at $\gamma_{1}$ and $\gamma_{2}$ in succession yields
a Type III.ii normal form. Note that if the 3-graph were not sharp at
$\gamma_{1}$ or $\gamma_{2}$ we would first need to apply a flop. We can
always apply this move because the 3-graph is locally sharp at each of its
cycles. See the Type III.i to Type IV.i subcase below for an example where we
demonstrate this move.
* ii.
(Type III.ii to Type II) In the sequence on the right of Figure 31, mutation
at either $\gamma_{1}$ or $\gamma_{2}$ yields a Type II normal form. Mutation
at $\gamma_{1}$ and $\gamma_{2}$ in succession yields a Type III.i normal
form. Therefore, applying these two moves in succession can take us between
both of our Type III normal forms.
Figure 32. Type III.i to Type IV mutations.
* iii.
(Type III.i to Type IV) We now consider the sequence of 3-graphs in Figure 32.
Since the initial 3-graph is not sharp at $\gamma_{4}$, we must first apply a
flop before mutating. After applying this flop, $\gamma_{4}$ is a short
I-cycle and the 3-graph is sharp at $\gamma_{4}$. Mutating at $\gamma_{4}$
then yields a Type IV.i normal form. The short I-cycles $\gamma_{5}$ and
$\gamma_{6}$ are included to indicate where any $A_{n}$ tail cycles would be
sent under this mutation.
Figure 33. Type III.ii to Type IV mutations.
* iv.
(Type III.ii to Type IV) In Figure 33, mutation at $\gamma_{4}$ causes
$\gamma_{1}$ and $\gamma_{2}$ to cross while still intersecting $\gamma_{3}$
and $\gamma_{4}$ at either end. We resolve this by first applying a push-
through to $\gamma_{2}$ and then applying a reverse push-through to the
trivalent vertex where $\gamma_{1}$ and $\gamma_{3}$ intersect a red edge.
This results in a sharp 3-graph with $\gamma_{1},$ $\gamma_{2}$, $\gamma_{3}$,
and $\gamma_{4}$ making up the Type IV normal form. We again include
$\gamma_{5}$ and $\gamma_{6}$ as cycles belonging to a potential $A_{n}$ tail
subgraph in order to show where the $A_{n}$ tail cycles are sent under this
mutation.
Type IV: Figure 34 illustrates all of the relevant Type IV quivers and their
mutations. In general, the edges of a Type IV quiver have the form of a single
$k-$cycle with the possible existence of 3-cycles or outward-pointing “spikes”
at any of the edges along the $k-$cycle. At the tip of each of these spikes is
a possible $A_{n}$ tail subquiver. We will refer to a vertex at the tip of any
of the spikes (e.g., the vertex $v_{3}$ in Figure 34) as a spike vertex and
any vertex along the $k-$cycle will be referred to as a $k-$cycle vertex. A
homology cycle corresponding to a spike vertex will be referred to as a spike
cycle. Mutating at a spike vertex increases the length of the internal
$k-$cycle by one, while mutating at a $k-$cycle vertex decreases the length by
one, so long as $k>3$. Figures 35, 36, 37, and 38 illustrate the corresponding
mutations of 3-graphs for Type IV to Type I and Type IV to Type III when
$k=3$.
Figure 34. Type IV quiver mutations.
Figure 35. Type IV.i to Type I mutations.
* i.
(Type IV.i to Type I) We first consider the sequence of 3-graphs in Figure 35.
Mutation at $\gamma_{1}$ causes $\gamma_{2}$ and $\gamma_{4}$ to cross.
Application of a reverse push-through at the trivalent vertex where
$\gamma_{2}$ and $\gamma_{4}$ intersect a red edge removes this crossing and
yields a Type I normal form where $\gamma_{1}$ is the sole Y-cycle.
Figure 36. Type IV.ii to Type I mutations
* ii.
(Type IV.ii to Type I) Mutation at $\gamma_{3}$ in Figure 36 yields a 3-graph
with geometric intersections between $\gamma_{1}$ and $\gamma_{5}$, and
$\gamma_{2}$ and $\gamma_{4}$. The application of reverse push-throughs at the
trivalent vertex intersections of $\gamma_{1}$ with $\gamma_{5}$ and
$\gamma_{2}$ with $\gamma_{4}$ removes these geometric intersections,
resulting in a Type I normal form where $\gamma_{1}$ is the sole Y-cycle. We
also apply a candy twist (Legendrian Surface Reidemeister move I) to simplify
the intersection at the top of the resulting 3-graph.
Figure 37. Type IV.i to Type III mutations.
* iii.
(Type IV.i to Type III) We now consider the two sequences of 3-graphs in
Figure 37. Mutation at any of $\gamma_{1},\gamma_{2}$, $\gamma_{3}$, or
$\gamma_{4}$ in the Type IV.i normal form yields a Type III normal form.
Specifically, mutation at $\gamma_{4}$ yields a Type III.i normal form that
requires no simplification, while mutation at $\gamma_{3}$ (not pictured)
yields a Type III.ii normal form that also requires no simplification. The
computation for mutation at $\gamma_{1}$ is pictured in the sequence on the
right and is identical to the computation for mutation at $\gamma_{2}.$ The
first step of the simplification is the same as the Type IV.i to Type I
subcase described above. However, we require the application of an additional
push-through to remove the geometric intersection between $\gamma_{2}$ and
$\gamma_{6}.$ This makes $\gamma_{6}$ into a Y-cycle and results in a Type III
normal form.
Figure 38. Type IV.ii to Type III mutations.
* iv.
(Type IV.ii to Type III) Mutation at $\gamma_{1}$ in our Type IV.ii normal
form, depicted in Figure 38, results in a pair of geometric intersections
between $\gamma_{3}$ and $\gamma_{5}$. Application of a flop removes these
geometric intersections and results in a sharp 3-graph with Y-cycles
$\gamma_{1}$ and $\gamma_{4}$, which matches our Type III.ii normal form. Note
that the computations for mutations of the two possible Type IV.ii 3-graphs
given in Figure 24 (left) and (right) are identical.
The remaining three subcases are all Type IV to Type IV mutations.
* v.
(Type IV.ii to Type IV) Figure 39 depicts mutation of a Type IV.ii normal form
at a spike cycle. Mutating at $\gamma_{5}$ results in an additional geometric
intersection between $\gamma_{1}$ and $\gamma_{3}$. We first apply a reverse
push-through at the trivalent vertex where $\gamma_{1},\gamma_{2}$ and
$\gamma_{3}$ meet. This introduces an additional geometric intersection
between $\gamma_{2}$ and $\gamma_{3}$, which we resolve by applying a
push-through to $\gamma_{3}$. Application of a reverse push-through to the
trivalent vertex where $\gamma_{2}$ and $\gamma_{4}$ intersect a red edge
resolves the final geometric intersection between $\gamma_{2}$ and
$\gamma_{4}$. The Y-cycles of the resulting 3-graph correspond to $k-$cycle
vertices of the quiver. As shown below, none of the other Type IV to Type IV
mutations result in Y-cycles corresponding to spike vertices. Therefore,
assuming we have simplified after each of our mutations in the manner
described above, the only possible way a Type IV.ii 3-graph arises is by
mutating from the initial Type I graphs in Figure 24. Hence, all other Type IV
3-graphs only have Y-cycles corresponding to $k-$cycle vertices in the quiver.
The computations for the different Type IV.ii 3-graphs given in Figure 24 (top
right and bottom right) are again identical.
Figure 39. Type IV.ii graph mutation at a spike cycle.
* vi.
(Type IV to Type IV) Figure 40 depicts Type IV to Type IV mutations when the
length of the quiver $k-$cycle is greater than 3. When mutating at a homology
cycle corresponding to a $k-$cycle vertex of the quiver, we have two
possibilities. Figure 40 (top) shows the case where $\gamma_{4}$ intersects
another Y-cycle $\gamma_{2}$, which corresponds to a $k-$cycle vertex in the
quiver. Figure 40 (bottom) considers the case where $\gamma_{4}$ only
intersects I-cycles. In both of these cases we must apply a reverse push-
through to the trivalent vertex where $\gamma_{3}$ and $\gamma_{4}$ intersect
a red edge in order to simplify the 3-graph. This particular simplification
requires that neither of the two edges adjacent to the leftmost edge of
$\gamma_{4}$ carry a cycle before we mutate. A similar computation involving
the purple Y-cycle (not pictured) also requires that neither of the two edges
adjacent to the bottommost edge of $\gamma_{2}$ carry a cycle. Crucially, our
computations show that Type IV to Type IV mutations preserve this property,
i.e., that both of the Y-cycles have an edge that is adjacent to a pair of
edges which do not carry a cycle. When $k=4,$ the resulting 3-graph in the top
line will have a short I-cycle adjacent to $\gamma_{2}$ and $\gamma_{3}$,
while the resulting 3-graph in the bottom line will have a short Y-cycle
adjacent to $\gamma_{2}$ and $\gamma_{3}$.
Figure 40. Type IV to Type IV mutations at homology cycles corresponding to
$k-$cycle vertices in the quiver. Mutating at $\gamma_{2},\gamma_{3},$ or
$\gamma_{4}$ (corresponding to $k-$cycle vertices in the quiver) in the
3-graphs on the left decreases the length of the $k-$cycle in the quiver by 1.
Figure 41. Type IV to Type IV mutations at spike cycles. Mutating at the spike
cycles $\gamma_{1}$ or $\gamma_{5}$ in the 3-graphs on the left increases the
length of the $k-$cycle in the intersection quiver by 1.
* vii.
(Type IV to Type IV) Figure 41 depicts mutation at a spike cycle. Since we
have already discussed the Type IV.ii spike cycle subcase above, we need only
consider the case where each of the spike cycles is a short I-cycle. The navy
short I-cycle and $\gamma_{6}$ are included to help indicate where $A_{n}$
tail cycles are sent under this mutation. The computation for mutating at a
spike edge for Type IV.i (i.e., the $k=3$ case) is identical to the $k>3$ case.
We have omitted the case where each of the cycles involved in our mutation is
an I-cycle, but the computation is again a straightforward mutation of a
single I-cycle that requires no simplification.
In each of the Type IV to Type IV subcases above, mutating at a Y-cycle or an
I-cycle and applying the simplifications as shown preserves the number of
Y-cycles in our graph. Therefore, our computations match the normal form we
gave in Figure 19 with $k-2$ short I-cycles in the normal form 3-graph not
belonging to any $A_{n}$ tail subgraphs.
This completes our classification of the mutations of normal forms. In each
case, we have produced a 3-graph of the correct normal form that is locally
sharp and made up of short Y-cycles and I-cycles. Thus, any sequence of quiver
mutations for the intersection quiver
$Q(\Gamma_{0}(D_{n}),\\{\gamma_{i}^{(0)}\\})$ of our initial
$\Gamma_{0}(D_{n})$ is weave realizable. Hence, given any sequence of quiver
mutations, we can apply a sequence of Legendrian mutations to our original
3-graph to arrive at a 3-graph with intersection quiver given by applying that
sequence of quiver mutations to $Q(\Gamma_{0}(D_{n}),\\{\gamma_{i}^{(0)}\\})$,
as desired.
∎
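The quiver mutations invoked throughout the proof follow the standard Fomin-Zelevinsky exchange-matrix rule. As a side illustration (not part of the original argument), a minimal sketch of that rule:

```python
# Fomin-Zelevinsky quiver mutation on a skew-symmetric exchange matrix B,
# where B[i][j] > 0 means B[i][j] arrows from vertex i to vertex j.
def mutate(B, k):
    n = len(B)
    Bp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]  # reverse all arrows incident to k
            else:
                # add composite paths through k (sign rule cancels 2-cycles)
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
    return Bp

# D_4 Dynkin quiver: central vertex 0 with one arrow to each leaf 1, 2, 3.
D4 = [[0, 1, 1, 1],
      [-1, 0, 0, 0],
      [-1, 0, 0, 0],
      [-1, 0, 0, 0]]

assert mutate(mutate(D4, 2), 2) == D4  # mutation at a fixed vertex is an involution
```

Mutation at a fixed vertex is an involution, which reflects the fact that each Legendrian mutation above can be undone.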
Having proven weave realizability for $\Gamma_{0}(D_{n})$, we conclude with a
proof of Corollary 1.
### 3.2. Proof of Corollary 1
We take our initial cluster seed in $\mathcal{C}(\Gamma)$ to be the cluster
seed associated to $\Gamma_{0}(D_{n})$. The cluster variables in this initial
seed exactly correspond to the microlocal monodromies along each of the
homology cycles of the initial basis $\\{\gamma_{i}^{(0)}\\}$. The
intersection quiver $Q(\Gamma_{0}(D_{n}),\\{\gamma_{i}^{(0)}\\})$ is the $D_{n}$
Dynkin diagram and thus the cluster seed is $D_{n}$-type. By definition, any
other cluster seed in the $D_{n}$-type cluster algebra is obtained by a
sequence of quiver mutations starting with the quiver
$Q(\Gamma_{0}(D_{n}),\\{\gamma_{i}^{(0)}\\})$ and its associated cluster
variables. Theorem 1 implies that any quiver mutation of
$Q(\Gamma_{0}(D_{n}),\\{\gamma_{i}^{(0)}\\})$ can be realized by a Legendrian
mutation in $\Lambda(\Gamma_{0}(D_{n})),$ so we have proven the first part of
the corollary. The remaining part of the corollary follows from the fact that
the $D_{n}$-type cluster algebra is known to be of finite mutation type with
$(3n-2)C_{n-1}$ distinct cluster seeds. $\Box$
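The seed count above can be checked numerically, reading $C_{n-1}$ as the $(n-1)$-st Catalan number (an interpretation consistent with the known cluster count in type $D_{n}$). A small sketch, added here as an illustration rather than part of the original text:

```python
from math import comb

def dn_cluster_seeds(n):
    """Number of distinct cluster seeds in the D_n-type cluster algebra,
    (3n-2) * C_{n-1}, with C_{n-1} the (n-1)-st Catalan number."""
    catalan = comb(2 * (n - 1), n - 1) // n  # C_{n-1} = binom(2n-2, n-1) / n
    return (3 * n - 2) * catalan

# D_4: (3*4 - 2) * C_3 = 10 * 5 = 50 seeds
assert dn_cluster_seeds(4) == 50
```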
# Detection of Bidirectional System-Environment Information Exchanges
Adrián A. Budini Consejo Nacional de Investigaciones Científicas y Técnicas
(CONICET), Centro Atómico Bariloche, Avenida E. Bustillo Km 9.5, (8400)
Bariloche, Argentina, and Universidad Tecnológica Nacional (UTN-FRBA), Fanny
Newbery 111, (8400) Bariloche, Argentina
###### Abstract
Quantum memory effects can be related to a bidirectional exchange of
information between an open system and its environment, which in turn modifies
the state and dynamical behavior of the latter. Nevertheless, non-
Markovianity can also be induced by environments whose dynamics is not
affected during the system evolution, implying the absence of any physical
information exchange. An unsolved open problem in the formulation of quantum
memory measures is the apparent impossibility of discerning between these two
paradigmatic cases. Here, we present an operational scheme that, based on the
outcomes of successive measurement processes performed on the system of
interest, allows one to distinguish between both kinds of memory effects. The
method accurately detects bidirectional information flows in diverse
dissipative and dephasing non-Markovian open-system dynamics.
## I Introduction
In its modern conception, quantum non-Markovianity breuerbook ; vega ; wiseman
is related to a twofold exchange of information between an open system and its
environment BreuerReview ; plenioReview . On the basis of unitary system-
environment models, it is commonly assumed that this bidirectional
informational flow (BIF) is mediated by physical processes that modify the
state and dynamical behavior of the environment. In spite of the consistency
of this picture EnergyBackFLow ; Energy ; HeatBackFlow , it is well known that
memory effects can also be induced by reservoirs whose state and dynamical
behavior are not affected at all by their coupling with the open system.
Evidently, this feature implies the absence of any physical system-bath
information exchange. Stochastic Hamiltonians cialdi ; GaussianNoise ; morgado
; bordone , incoherent bath fluctuations lindbladrate ; boltzman ; vasano ;
PostMarkovian ; shabani , collisional models colisionVacchini ; embedding ,
and (system) unitary dynamics characterized by random parameters ciracR ;
buchleitner ; nori ; wudarski are some examples of this “casual bystander”
(non-Markovian) environment action. The environment affects the system
dynamics, but its (statistical) state is never influenced by the system.
An open problem in the formulation of quantum non-Markovianity is the lack of
an underlying prescription (based only on system information) able to
discriminate between the previous two complementary cases. In fact, even
though a wide variety of memory witnesses (defined from the system propagator
properties) have been proposed BreuerFirst ; cirac ; rivas ; breuerDecayTLS ;
fisher ; fidelity ; dario ; mutual ; geometrical ; DarioSabrina ; brasil ;
sabrina ; canonicalCresser ; cresser ; Acin ; indu ; poland ; chile and
implemented experimentally BreuerExp ; breuerDrift ; urrego ; khurana ; sun ;
mataloni ; pan , even in the absence of BIFs most of them may inaccurately
detect an “environment-to-system backflow of information” cialdi ;
GaussianNoise ; morgado ; bordone ; lindbladrate ; boltzman ; vasano ;
PostMarkovian ; shabani ; colisionVacchini ; embedding ; wudarski ;
buchleitner . This incongruence emerges because quantum master equations with
very similar structures describe the (non-Markovian) system dynamics in the
presence or absence of BIFs.
The previous limitation implies a severe constraint on the classification and
interpretation of memory effects in quantum systems. For example, there exist
non-Markovian dynamics whose underlying memory effects are classified as
“extreme” ones. Nevertheless, these dynamics emerge from simple classical
statistical mixtures of (memoryless) Markovian system evolutions. Given the
absence of any physical BIF, reading these memory effects as quantum ones
becomes meaningless. Remarkable cases are quantum master equations with an
always negative (time-dependent) rate (eternal non-Markovianity)
canonicalCresser ; megier as well as “maximally non-Markovian dynamics” where
the stationary state may recover the initial condition DarioSabrina ; maximal .
On the other hand, the interpretation of this kind of dynamics in terms of
measurement-based stochastic wave vector evolutions may become ambiguous
(Markovian or non-Markovian) depending on whether the underlying statistical
mixture is taken into account. In fact, to each Markovian system evolution in
the statistical ensemble one can associate a Markovian stochastic wave vector
evolution. Hence, there is no memory effect at the level of single
realizations. Alternatively, a non-Markovian wave vector evolution that on
average recovers the system evolution may also be proposed piiloSWF . These
examples confirm that a procedure capable of determining whether or not memory
effects rely on physically mediated BIFs is highly desirable.
The aim of this work is to introduce an operational technique that accurately
detects the presence of physically mediated system-environment BIFs.
Consistent with its operational character, instead of a definition in the
system Hilbert space BreuerReview ; plenioReview , the approach relies on a
probabilistic condition that indicates when an environment is unaffected by
its coupling with the system. Correspondingly, memory effects emerge from a
statistical average of a Markovian system dynamics that parametrically depends
on the (unaffected) bath degrees of freedom. It is shown that these conditions
can be checked by performing a minimum of three system measurement
processes, together with an intermediate (random) update of the system state
that may depend on previous outcomes. Similarly to operational memory
approaches based on causal breaks modi ; budiniCPF ; pollock ;
pollockInfluence ; bonifacio ; budiniChina ; budiniBrasil ; han ; goan , here
a generalized conditional past-future (CPF) correlation budiniCPF ;
budiniChina ; budiniBrasil ; bonifacio ; han , defined between the first and
last (past-future) measurement outcomes and conditioned on the intermediate
updated system state, becomes an indicator of BIFs.
The joint probabilities of the three outcomes and the associated generalized
CPF correlation are calculated for both quantum and classical environmental
fluctuations. Consistently, for classical noise fluctuations, or in general,
when memory effects can be associated with environments with an invariant
dynamics, the generalized CPF correlation vanishes. This property furnishes a
novel and explicit experimental test for detecting BIFs. Its feasibility is
demonstrated through its characterization in ubiquitous dissipative and
dephasing non-Markovian dynamics that admit an exact treatment.
## II Probabilistic approach
Our aim is to distinguish between memory effects that occur with and without
BIFs. These opposite cases are related to the dependence or independence of
the reservoir dynamics on system degrees of freedom. This property can be
explicitly defined by means of the following scheme, which is valid in both
classical and quantum realms.
We assume that both the system and the environment are subjected to a set of
(bipartite separable) measurements at successive times
$t_{1}\\!<\\!t_{2}\cdots\\!<\\!t_{n}.$ The strings
$\mathbf{s}\equiv(s_{1},s_{2},\cdots,s_{n})$ and
$\mathbf{e}\equiv(e_{1},e_{2},\cdots,e_{n})$ denote the respective outcomes,
which in turn label the corresponding system and environment post-measurement
states. The outcome statistics is set by a joint probability
$P(\mathbf{s},\mathbf{e}).$ This object in general depends on which
measurement processes are performed.
In agreement with our definition, in the absence of BIFs the environment
probability $P\mathbf{(e)}=\sum_{\mathbf{s}}P(\mathbf{s},\mathbf{e})$ must be
an invariant object that is independent of the system initialization and
dynamics. Bayes' rule allows us to write
$P\mathbf{(e)}=\sum_{\mathbf{s}}P(\mathbf{e|s})P(\mathbf{s}),$ where
$P(\mathbf{e|s})$ is the conditional probability of $\mathbf{e}$ given
$\mathbf{s},$ while $P(\mathbf{s})$ gives the probability of $\mathbf{s}.$
Hence, the absence of BIFs can be expressed by the condition
$P(\mathbf{e|s})=P(\mathbf{e}),$ (1)
which guarantees that the environment statistics is independent of the system
state and dynamics.
The marginal probability for the system outcomes can always be written as
$P\mathbf{(s)}=\sum_{\mathbf{e}}P(\mathbf{s},\mathbf{e})=\sum_{\mathbf{e}}P(\mathbf{s|e})P(\mathbf{e}),$
where $P(\mathbf{s|e})$ is the conditional probability of $\mathbf{s}$ given
$\mathbf{e}.$ When condition (1) is fulfilled, we can affirm that any possible
memory effect in the system measurements follows from an (invariant)
environmental average
$[\langle\cdots\rangle_{\mathbf{e}}\mathbf{\equiv}\sum\nolimits_{\mathbf{e}}\cdots
P\mathbf{(e)}]$ of a (system) joint probability
$P^{(\mathbf{e})}(\mathbf{s})\leftrightarrow P(\mathbf{s|e})$ that
parametrically depends on the bath states,
$P(\mathbf{s})=\langle P^{(\mathbf{e})}(\mathbf{s})\rangle_{\mathbf{e}}.$ (2)
Notice that $P^{(\mathbf{e})}(\mathbf{s})$ denotes the conditional probability
$P(\mathbf{s|e})$ given that condition (1) is fulfilled.
In the present approach, Eqs. (1) and (2) define the absence of any physical
system-environment BIF. System memory effects emerge due to the conditional
action of the bath. Our problem now is to detect these probability structures
by taking into account only the system outcome statistics. Before this step,
we introduce one extra assumption.
As usual in open quantum systems, we assume that the system-bath bipartite
dynamics (without interventions) admits an underlying semigroup (memoryless)
description. Hence, $P^{(\mathbf{e})}(\mathbf{s})$ fulfills a Markovian
property with respect to system outcomes,
$P^{(\mathbf{e})}(\mathbf{s})=P^{(\mathbf{e})}(s_{n}|s_{n-1})\cdots
P^{(\mathbf{e})}(s_{2}|s_{1})P^{(\mathbf{e})}(s_{1}).$ (3)
For notational convenience, the parametric dependence of the conditional
probabilities $P^{(\mathbf{e})}(s|s^{\prime})$ on the bath states is written
through the superscript $(\mathbf{e})$. This dependence must be consistent
with causality, meaning that $P^{(\mathbf{e})}(s|s^{\prime})$ cannot depend on
(non-selected) future bath outcomes.
### II.1 Detection scheme
The development of BIFs, that is, departures from the structure defined by
Eqs. (2)-(3), can be detected with the following minimal scheme. Three
measurement processes, performed at times $0\rightarrow t\rightarrow t+\tau,$
deliver the successive system outcomes
$x~{}\rightarrow~{}(y\rightarrow~{}\breve{y})\rightarrow z.$ After the
intermediate measurement, the system state—labelled by $y$—is externally (and
instantaneously) updated to a renewed state—labelled by $\breve{y}$—while
the bath state is unaffected. Each $\breve{y}$-state is chosen with an
arbitrary conditional probability $\wp(\breve{y}|y,x).$ The scheme is closed
after specifying $\wp(\breve{y}|y,x)$ and calculating the marginal probability
$P(z,\breve{y},x)=\sum_{y}P(z,\breve{y},y,x).$ In addition, it is assumed that
the system and environment are uncorrelated before the first measurement. A
“deterministic scheme” (d) corresponds to
$\wp(\breve{y}|y,x)=\delta_{\breve{y},y}.$ Hence, no change is introduced
after the intermediate measurement. A “random scheme” (r) is defined by
$\wp(\breve{y}|y,x)=\wp(\breve{y}|x).$ These two cases are motivated by the
following features.
In the absence of BIFs, the joint probability for the four events, from
Eqs. (2) and (3), reads
$P(z,\breve{y},y,x)=\langle
P^{(\mathbf{e})}(z|\breve{y})\wp(\breve{y}|y,x)P^{(\mathbf{e})}(y|x)P(x)\rangle_{\mathbf{e}}.$
(4)
Notice that this result also relies on Eq. (1), which guarantees that
$\langle\cdots\rangle_{\mathbf{e}}$ remains invariant even when changing the
system state at a given time, $(y\rightarrow~{}\breve{y}).$ On the other hand,
by assumption $\wp(\breve{y}|y,x)$ and $P(x)$ do not depend on the
environmental degrees of freedom. In the deterministic scheme, Eq. (4) leads
to
$P(z,\breve{y},x)\overset{d}{=}\langle
P^{(\mathbf{e})}(z|\breve{y})P^{(\mathbf{e})}(\breve{y}|x)\rangle_{\mathbf{e}}\,P(x),$
(5)
while in the random case, using $\sum_{y}P^{(\mathbf{e})}(y|x)=1,$
$P(z,\breve{y},x)\overset{r}{=}\langle
P^{(\mathbf{e})}(z|\breve{y})\rangle_{\mathbf{e}}\,\wp(\breve{y}|x)P(x).$ (6)
Since $P(z,\breve{y},x)$ in Eq. (5) does not fulfill a Markov property, the
deterministic scheme shows that memory effects may in fact develop even in
the absence of BIFs. Nevertheless, due to the structure defined by Eqs. (2)
and (3), they are completely “washed out” in the random scheme, which delivers
a Markovian joint probability [Eq. (6)]. Taking into account the derivation of
Eq. (4), this last property breaks down when Eq. (1) is not fulfilled. Thus,
in the random scheme, any departure of $P(z,\breve{y},x)$ from Markovianity
witnesses BIFs, which solves our problem.
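The washing-out mechanism of the random scheme can be illustrated with a toy model (all numbers below are assumptions chosen for the sketch, not taken from the text): a two-outcome system whose Markov conditionals depend parametrically on a static two-state environment, as in Eqs. (2)-(4), with a uniform intermediate update $\wp(\breve{y}|x)$ in the random case.

```python
import numpy as np

# Static "casual bystander" environment e in {0, 1} with fixed statistics P(e).
Pe = np.array([0.3, 0.7])
# Assumed numbers: for each bath state e, a Markov conditional T[e][s', s] = P^(e)(s'|s).
T = [np.array([[0.9, 0.1], [0.1, 0.9]]),   # e = 0: rarely flips the outcome
     np.array([[0.2, 0.8], [0.8, 0.2]])]   # e = 1: flips with probability 0.8
Px = np.array([0.5, 0.5])                  # first-outcome statistics P(x)

def joint(random_scheme):
    """P(z, y_up, x) built from Eq. (4), marginalized over y."""
    P = np.zeros((2, 2, 2))
    for z, yu, x, y in np.ndindex(2, 2, 2, 2):
        # wp(y_up|y,x): a delta in the deterministic scheme, uniform in the random one
        wp = 0.5 if random_scheme else float(yu == y)
        P[z, yu, x] += sum(Pe[e] * T[e][z, yu] * wp * T[e][y, x] * Px[x]
                           for e in range(2))
    return P

def markov_residual(P):
    """Max deviation of P(z, y_up, x) from the Markov form P(z|y_up) P(y_up, x)."""
    Pyx = P.sum(axis=0)                              # P(y_up, x)
    Pz_given_y = P.sum(axis=2) / P.sum(axis=(0, 2))  # P(z|y_up)
    return np.abs(P - Pz_given_y[:, :, None] * Pyx[None, :, :]).max()

# Deterministic scheme: memory survives (no Markov factorization).
# Random scheme: memory is washed out, recovering Eq. (6).
print(markov_residual(joint(False)), markov_residual(joint(True)))
```

With these numbers the deterministic residual is of order $10^{-2}$, while the random scheme factorizes as in Eq. (6) up to machine precision.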
### II.2 System and environment observables
In contrast to classical systems, in the quantum regime the previous results
have an intrinsic dependence on which system and environment observables are
considered.
For quantum systems, the absence of BIFs is defined by the validity of the
probability structures Eqs. (5) and (6) for any kind of system measurement
processes. Thus, arbitrary system observables are considered.
On the other hand, we only consider environment observables that allow reading
$\langle\cdots\rangle_{\mathbf{e}}$ as an unconditional average over the bath
degrees of freedom. This extra assumption is completely consistent with the
developed approach. Furthermore, this choice (due to the unconditional
character) implies that $P(z,\breve{y},x)$ can be measured without involving
any explicit environment measurement process. This important feature is valid
for both classical and quantum environmental fluctuations.
When the environment is defined by classical stochastic degrees of freedom
with a fixed statistics [Sec. (III.1)], given that classical systems are not
affected by a measurement process, the previous assumption applies
straightforwardly. When the reservoir must be described in a quantum regime,
the previous constraint implies observables whose non-selective breuerbook
measurement transformations do not affect the environment state at each stage
[Sec. (III.2)]. Thus, independently of the nature of the environment, the
detection of BIFs can always be performed without explicitly measuring the
environment.
### II.3 BIF witness
Independently of the nature (incoherent or quantum) of both the system and the
environment, as an explicit witness of BIFs we consider a generalized CPF
correlation that takes into account the intermediate system-state update
operation (deterministic $\leftrightarrow d$ or random $\leftrightarrow r$).
It measures the correlation between the initial and final (past-future)
outcomes conditioned on the intermediate system state ($\breve{y}$)
$C_{pf}^{(d/r)}|_{\breve{y}}\equiv\sum_{z,x}O_{z}O_{x}[P(z,x|\breve{y})-P(z|\breve{y})P(x|\breve{y})].$
(7)
Here, all conditional probabilities follow from $P(z,\breve{y},x)$
Conditionals , while the sum indices run over all possible outcomes at each
stage. The scalar quantities $\\{O_{z}\\}$ and $\\{O_{x}\\}$ define the system
observables for each outcome.
In the deterministic scheme, similarly to Ref. budiniCPF ,
$C_{pf}^{(d)}|_{\breve{y}}$ detects memory effects independently of their
underlying origin. In the random scheme, the condition
$C_{pf}^{(r)}|_{\breve{y}}~{}\neq~{}0$ provides the desired witness of BIFs.
This result follows directly from the Markovian property Eq. (6), which leads
to $P(z,x|\breve{y})=P(z|\breve{y})P(x|\breve{y})\rightarrow
C_{pf}^{(r)}|_{\breve{y}}=0.$
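Eq. (7) can be evaluated directly from any measured joint table $P(z,\breve{y},x)$. A minimal sketch with dichotomic observables $O=\pm 1$ (the table values below are hypothetical):

```python
import numpy as np

def cpf_correlation(P, yu, O=(+1.0, -1.0)):
    """Generalized CPF correlation of Eq. (7) at a fixed intermediate
    state label yu, from a joint table P[z, y_up, x]."""
    O = np.asarray(O)
    Pzx = P[:, yu, :] / P[:, yu, :].sum()      # P(z, x | y_up)
    Pz, Px = Pzx.sum(axis=1), Pzx.sum(axis=0)  # P(z|y_up), P(x|y_up)
    return np.einsum('z,x,zx->', O, O, Pzx) - (O @ Pz) * (O @ Px)

# For a conditionally factorized table, P(z, x|y_up) = P(z|y_up) P(x|y_up),
# the correlation vanishes for every y_up [cf. Eq. (6)].
P = np.einsum('zy,yx->zyx', [[0.2, 0.4], [0.3, 0.1]], [[0.25, 0.3], [0.25, 0.2]])
assert all(abs(cpf_correlation(P, yu)) < 1e-12 for yu in (0, 1))
```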
For quantum systems, the three system measurement processes are defined by a
set of operators $\\{\Omega_{x}\\},$ $\\{\Omega_{y}\\},$ and
$\\{\Omega_{z}\\},$ with normalization
$\sum\nolimits_{x}\Omega_{x}^{{\dagger}}\Omega_{x}=\sum\nolimits_{y}\Omega_{y}^{{\dagger}}\Omega_{y}=\sum\nolimits_{z}\Omega_{z}^{{\dagger}}\Omega_{z}=\mathrm{I,}$
where $\mathrm{I}$ is the system identity operator. The intermediate
$y$-measurement is taken as a projective one, $\Omega_{y}=|y\rangle\langle
y|.$ Thus, in the random scheme the system state transformation reads
$\rho_{y}\equiv|y\rangle\langle y|\rightarrow\rho_{\breve{y}},$ where the
states $\\{\rho_{\breve{y}}\\}$ (independently of outcome $y)$ are randomly
chosen with probability $\wp(\breve{y}|x).$ This operation can be implemented,
for example, as $\rho_{\breve{y}}=U(\breve{y}|y)[\rho_{y}],$ where the
(conditional) unitary operator $U(\breve{y}|y)$ leads to the state
$\rho_{\breve{y}}$ independently of the obtained $y$-outcome repraration .
## III Application to different system-environment models
The consistency of the developed approach is supported by studying fundamental
system-reservoir models that lead to memory effects.
### III.1 Classical noise environmental fluctuations
Here the open system is coupled to classical stochastic degrees of freedom.
Its density matrix is written as
$\rho_{t}=\overline{\mathcal{E}_{t,0}^{st}}[\rho_{0}],$ where the overbar
symbol denotes an average over the environmental realizations. For each noise
realization the stochastic propagator fulfills
$\mathcal{E}_{t+\tau,0}^{st}=\mathcal{E}_{t+\tau,t}^{st}\mathcal{E}_{t,0}^{st},$
a property consistent with assumption (3). Stochastic Hamiltonians cialdi ;
GaussianNoise ; morgado ; bordone as well as random unitary evolutions
wudarski fall in this category. As usual in these models, the statistics of
the noise realizations is independent of the system dynamics. Hence, no
BIF should be detected in this case.
Given that each noise realization labels the environment state, we can take
the equivalence
$\langle\cdots\rangle_{\mathbf{e}}\leftrightarrow\overline{(\cdots)}.$ By
using the standard formulation of quantum measurement theory, the joint
probability associated with the measurement scheme can be written as (see
Appendix A)
$\frac{P(z,\breve{y},y,x)}{\wp(\breve{y}|y,x)}=\overline{\mathrm{Tr}_{s}(E_{z}\mathcal{E}_{t+\tau,t}^{st}[\rho_{\breve{y}}])\mathrm{Tr}_{s}(E_{y}\mathcal{E}_{t,0}^{st}[\tilde{\rho}_{x}])},$
(8)
where $E_{i}\equiv\Omega_{i}^{\dagger}\Omega_{i}$ $(i=x,y,z)$ and
$\tilde{\rho}_{x}\equiv\Omega_{x}\rho_{0}\Omega_{x}^{\dagger}$ is the
(unnormalized) system state after the first $x$-measurement.
$\mathrm{Tr}_{s}(\cdots)$ denotes a trace operation in the system Hilbert
space. $\rho_{\breve{y}}$ is the (updated) system state after the second
$y$-measurement, while $t$ and $\tau$ are the elapsed times between
consecutive measurements.
In the deterministic scheme $[\wp(\breve{y}|y,x)=\delta_{\breve{y},y}],$ using
that $P(z,\breve{y},x)=\sum_{y}P(z,\breve{y},y,x),$ Eq. (8) leads to
$P(z,\breve{y},x)\overset{d}{=}\overline{\mathrm{Tr}_{s}(E_{z}\mathcal{E}_{t+\tau,t}^{st}[\rho_{\breve{y}}])\mathrm{Tr}_{s}(E_{\breve{y}}\mathcal{E}_{t,0}^{st}[\tilde{\rho}_{x}])}.$
(9)
In general, this joint probability does not fulfill a Markov condition. Thus,
$C_{pf}^{(d)}|_{\breve{y}}\neq 0$ [Eq. (7)] detects memory effects. On the
other hand, in the random scheme $[\wp(\breve{y}|y,x)=\wp(\breve{y}|x)]$ from
Eq. (8) it follows
$P(z,\breve{y},x)\overset{r}{=}\overline{\mathrm{Tr}_{s}(E_{z}\mathcal{E}_{t+\tau,t}^{st}[\rho_{\breve{y}}])}\wp(\breve{y}|x)\mathrm{Tr}_{s}(\tilde{\rho}_{x}),$
(10)
which recovers the Markovian result Eq. (6) with $\langle
P^{(\mathbf{e})}(z|\breve{y})\rangle_{\mathbf{e}}\leftrightarrow\overline{\mathrm{Tr}_{s}(E_{z}\mathcal{E}_{t+\tau,t}^{st}[\rho_{\breve{y}}])}=P(z|\breve{y})$
and $P(x)=\mathrm{Tr}_{s}(\tilde{\rho}_{x})=\mathrm{Tr}_{s}(E_{x}\rho_{0}).$
Thus, independently of the chosen system measurement observables it follows
$C_{pf}^{(r)}|_{\breve{y}}~{}=~{}0$ [Eq. (7)], indicating, as expected, the
absence of any BIF.
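As a concrete numerical check of this section (a sketch with assumed parameters, not taken from the text): for a qubit dephased by a static random frequency $\omega$ and measured in the $\sigma_{x}$ basis, each noise realization reduces to an outcome-flip probability $\sin^{2}(\omega\,\Delta t/2)$, and building the joint tables of Eqs. (9)-(10) yields a finite $C_{pf}^{(d)}|_{\breve{y}}$ together with a vanishing $C_{pf}^{(r)}|_{\breve{y}}$.

```python
import numpy as np

# Assumed parameters: static random frequency (two equiprobable values) and
# elapsed times t (x -> y) and tau (y_up -> z).
omegas, pw = np.array([0.5, 2.0]), np.array([0.5, 0.5])
t = tau = 1.0
O = np.array([+1.0, -1.0])  # sigma_x outcomes

def cond(omega, dt):
    """P^(omega)(s'|s): a sigma_x outcome flips with probability sin^2(omega dt / 2)."""
    q = np.sin(omega * dt / 2.0) ** 2
    return np.array([[1 - q, q], [q, 1 - q]])

def joint(random_scheme):
    """P(z, y_up, x) for an unpolarized initial state, P(x) = 1/2 [Eqs. (9), (10)]."""
    P = np.zeros((2, 2, 2))
    for z, yu, x, y in np.ndindex(2, 2, 2, 2):
        wp = 0.5 if random_scheme else float(yu == y)
        P[z, yu, x] += sum(w * cond(om, tau)[z, yu] * wp * cond(om, t)[y, x] * 0.5
                           for om, w in zip(omegas, pw))
    return P

def cpf(P, yu):
    """Generalized CPF correlation of Eq. (7) at fixed y_up."""
    Pzx = P[:, yu, :] / P[:, yu, :].sum()
    return (O @ Pzx @ O) - (O @ Pzx.sum(1)) * (O @ Pzx.sum(0))

Cd, Cr = cpf(joint(False), 0), cpf(joint(True), 0)
print(Cd, Cr)  # Cd = <cos wt cos wtau> - <cos wt><cos wtau> ~ 0.418, Cr = 0
```

For these parameters the deterministic correlation reduces to the frequency covariance $\langle\cos\omega\tau\,\cos\omega t\rangle-\langle\cos\omega\tau\rangle\langle\cos\omega t\rangle\approx 0.418$, while the random scheme gives exactly zero, as required in the absence of BIFs.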
### III.2 Completely positive system-environment dynamics
Alternatively, system-environment ($s$-$e$) dynamics can be described in a
bipartite Hilbert space. Their density matrix
$\rho_{t}^{se}=\mathcal{E}_{t,0}[\rho_{0}^{se}]$ is set by a bipartite
propagator that satisfies
$\mathcal{E}_{t+\tau,0}=\mathcal{E}_{t+\tau,t}\mathcal{E}_{t,0}.$ This
property also supports assumption (3). We consider separable initial
conditions $\rho_{0}^{se}=\rho_{0}\otimes\sigma_{0}.$ Hence,
$\mathcal{E}_{t,0}$ leads to a completely positive system dynamics
$\rho_{t}=\mathrm{Tr}_{e}(\mathcal{E}_{t,0}[\rho_{0}^{se}]).$ Unitary
system-environment models breuerbook as well as bipartite (time-irreversible)
Lindblad dynamics fall in this category. As the system and environment are
intrinsically coupled, the development of BIFs is expected in general.
Here, we take the equivalence
$\langle\cdots\rangle_{\mathbf{e}}\leftrightarrow\mathrm{Tr}_{e}(\cdots).$
This unconditional environment average applies when the successive (non-
selective breuerbook ) measurements of the environment do not modify its state
at each stage (the bath state remains the same after each non-selective
measurement). Due to the dynamics induced by $\mathcal{E}_{t,0},$ in general
it is not possible to know explicitly which physical reservoir observables
fulfill this condition. Nevertheless, the demanded invariance
straightforwardly allows one to read and obtain
$\langle\cdots\rangle_{\mathbf{e}}$ from the bath trace operation
$\mathrm{Tr}_{e}(\cdots)$ breuerbook (see also Appendix A). Hence, similarly
to the previous environment model, the validity (or not) of Eqs. (5) and (6)
can be checked without performing any explicit reservoir measurement process.
From standard quantum measurement theory, the joint probability of system
outcomes here reads (Appendix A)
$\frac{P(z,\breve{y},y,x)}{\wp(\breve{y}|y,x)}=\mathrm{Tr}_{se}(E_{z}\mathcal{E}_{t+\tau,t}[\rho_{\breve{y}}\otimes\mathrm{Tr}_{s}(E_{y}\mathcal{E}_{t,0}[\tilde{\rho}_{x}^{se}])]),$
(11)
where
$\tilde{\rho}_{x}^{se}\equiv\Omega_{x}\rho_{0}\Omega_{x}^{\dagger}\otimes\sigma_{0}=\tilde{\rho}_{x}\otimes\sigma_{0}$
is the bipartite state after the first $x$-measurement and, as before,
$\rho_{\breve{y}}$ is the updated system state.
In the deterministic scheme $[\wp(\breve{y}|y,x)=\delta_{\breve{y},y}],$ the
previous expression $[P(z,\breve{y},x)=\sum_{y}P(z,\breve{y},y,x)]$ leads to
$P(z,\breve{y},x)\overset{d}{=}\mathrm{Tr}_{se}(E_{z}\mathcal{E}_{t+\tau,t}[\rho_{\breve{y}}\otimes\mathrm{Tr}_{s}(E_{\breve{y}}\mathcal{E}_{t,0}[\tilde{\rho}_{x}^{se}])]).$
(12)
As expected, a Markovian property is not fulfilled in general, implying the
presence of memory effects, $C_{pf}^{(d)}|_{\breve{y}}\neq 0.$ In the random
scheme $[\wp(\breve{y}|y,x)=\wp(\breve{y}|x)]$ it follows
$P(z,\breve{y},x)\overset{r}{=}\mathrm{Tr}_{se}(E_{z}\mathcal{E}_{t+\tau,t}[\rho_{\breve{y}}\otimes\mathrm{Tr}_{s}(\mathcal{E}_{t,0}[\tilde{\rho}_{x}^{se}])])\wp(\breve{y}|x).$
(13)
In contrast to Eq. (10), here a Markov property is not fulfilled in general.
Thus, $C_{pf}^{(r)}|_{\breve{y}}\neq 0.$ Nevertheless, there are bipartite
dynamics that in fact occur without a BIF. Below, we find the conditions that
guarantee $C_{pf}^{(r)}|_{\breve{y}}=0$ for arbitrary system measurement
processes.
#### III.2.1 Invariant environment dynamics
The environment state follows by tracing out the system degrees of freedom,
$\sigma_{t}\equiv\mathrm{Tr}_{s}(\mathcal{E}_{t,0}[\rho_{0}^{se}]),$ where
$\rho_{0}^{se}=\rho_{0}\otimes\sigma_{0}.$ When this state is independent of
the system initialization
$\sigma_{t}=\mathrm{Tr}_{s}(\mathcal{E}_{t,0}[\rho_{0}^{se}])=\mathrm{Tr}_{s}(\mathcal{E}_{t,0}[\mathcal{M}_{s}[\rho_{0}^{se}]]),$
(14)
where $\mathcal{M}_{s}$ represents an arbitrary (trace-preserving) system
transformation, a Markovian property is immediately recovered in the random
scheme. In fact, introducing
$\mathrm{Tr}_{s}(\mathcal{E}_{t,0}[\tilde{\rho}_{x}^{se}])=P(x)\sigma_{t},$
Eq. (13) becomes
$P(z,\breve{y},x)\overset{r}{=}\mathrm{Tr}_{se}(E_{z}\mathcal{E}_{t+\tau,t}[\rho_{\breve{y}}\otimes\sigma_{t}])\wp(\breve{y}|x)P(x),$
which recovers the structure (6). Thus, environments with an invariant
dynamics do not induce any BIF $[C_{pf}^{(r)}|_{\breve{y}}=0].$ Notice that
this property supports the complete consistency of the proposed approach.
A relevant situation where Eq. (14) applies is the case of systems coupled to
incoherent degrees of freedom governed by an (invariant) classical master
equation lindbladrate . While these dynamics lead to memory effects
PostMarkovian ; shabani ; boltzman , our approach correctly identifies the
absence of any BIF. Random unitary evolutions wudarski , as well as quantum
Markov chains megier ; maximal , fall in this case.
It is important to remark that environments developing quantum features
(coherences) may also fulfill condition (14). This is the case, for example,
of some collisional models colisionVacchini whose underlying description can
be formulated with bipartite Lindblad equations embedding .
#### III.2.2 Unitary system-environment models
When modeling open quantum dynamics from an underlying bipartite Hamiltonian
dynamics, the unitary propagator reads
$\mathcal{E}_{t,0}[\cdot]=\exp(-itH_{T})\cdot\exp(+itH_{T}),$ where $H_{T}$ is
$H_{T}=H_{s}+H_{e}+H_{I}.$ (15)
The first two terms define the system and bath Hamiltonians, respectively,
while the last one introduces their interaction. Given the mutual
system-environment interaction, for nearly all Hamiltonians $H_{T}$ it is
expected that the development of memory effects [Eq. (12)] relies on BIFs [Eq. (13)].
One exception to the previous rule arises when the bath and interaction
Hamiltonians commute,
$[H_{e},H_{I}]=0.$ (16)
Under this condition, denoting the bath eigenvectors as
$H_{e}|e\rangle=e|e\rangle,$ the system density matrix reads
$\rho_{t}=\mathrm{Tr}_{e}(\rho_{t}^{se})=\sum\nolimits_{e}w_{e}\exp(-itH_{s}^{(e)})\rho_{0}\exp(+itH_{s}^{(e)}),$
where the weights are $w_{e}\equiv\langle e|\sigma_{0}|e\rangle$ and
$H_{s}^{(e)}\equiv H_{s}+\langle e|H_{I}|e\rangle.$ Thus, the system dynamics
can be represented by a random unitary map nori . For arbitrary dynamics, this
property does not guarantee the absence of BIFs. In fact, here the environment
invariance property (14) is not fulfilled in general invariance .
Nevertheless, after a straightforward calculation, the probabilities of the
deterministic and random schemes, Eqs. (12) and (13), can be written as in
Eqs. (5) and (6) (valid in the absence of BIFs), respectively. In fact, under the
replacement
$\langle\cdots\rangle_{\mathbf{e}}\rightarrow\sum\nolimits_{e}w_{e}(\cdots),$
the conditional probabilities are
$P^{(\mathbf{e})}(z|\breve{y})\rightarrow\mathrm{Tr}_{s}(E_{z}\mathbb{G}_{\tau}^{(e)}[\rho_{\breve{y}}])$
and
$P^{(\mathbf{e})}(\breve{y}|x)\rightarrow\mathrm{Tr}_{s}(E_{\breve{y}}\mathbb{G}_{t}^{(e)}[\rho_{x}]),$
where
$\mathbb{G}_{t}^{(e)}[\cdot]\equiv\exp(-itH_{s}^{(e)})\cdot\exp(+itH_{s}^{(e)})$
and $\rho_{x}\equiv\tilde{\rho}_{x}/\mathrm{Tr}_{s}(\tilde{\rho}_{x}).$ Thus,
from these expressions we conclude that the condition (16) guarantees that the
joint probabilities, for arbitrary system measurement processes, can also be
obtained from a statistical mixture (with invariant weights $\\{w_{e}\\}$) of
unitary system evolutions (with propagators $\\{\mathbb{G}_{t}^{(e)}\\}$),
which consistently implies $C_{pf}^{(r)}|_{\breve{y}}=0.$
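This random-unitary structure can be illustrated numerically. The following minimal sketch is our own construction, not from the paper: a system qubit coupled to a single environment qubit, with the assumed choices $H_{s}=\frac{\omega_{0}}{2}\sigma_{\hat{z}}$, $H_{e}=\Delta\,\sigma_{\hat{z}}$, and $H_{I}=g\,\sigma_{\hat{x}}\otimes\sigma_{\hat{z}}$, so that condition (16) holds. It checks that the exact reduced dynamics coincides with the statistical mixture of unitaries generated by $H_{s}^{(e)}=H_{s}+\langle e|H_{I}|e\rangle$ with weights $w_{e}=\langle e|\sigma_{0}|e\rangle$:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

w0, delta, g, t = 1.0, 0.7, 0.5, 1.3          # assumed example parameters
Hs = 0.5 * w0 * sz                            # system Hamiltonian
He = delta * sz                               # environment Hamiltonian
HI = g * np.kron(sx, sz)                      # interaction; [I⊗He, HI] = 0
HT = np.kron(Hs, I2) + np.kron(I2, He) + HI   # Eq. (15)

rho0 = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)   # system state
sig0 = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)   # environment state

# exact reduced dynamics: rho_t = Tr_e(U rho0⊗sig0 U†)
U = expm(-1j * t * HT)
rho_se = U @ np.kron(rho0, sig0) @ U.conj().T
rho_t = rho_se.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # partial trace over env

# random-unitary representation: weights w_e = <e|sig0|e>, H_s^(e) = Hs + g*e*sx
rho_ru = np.zeros((2, 2), dtype=complex)
for e, w in [(+1, sig0[0, 0].real), (-1, sig0[1, 1].real)]:
    Ue = expm(-1j * t * (Hs + g * e * sx))
    rho_ru += w * Ue @ rho0 @ Ue.conj().T

assert np.allclose(rho_t, rho_ru)             # both descriptions agree
```

The agreement holds for any $H_{s}$, since condition (16) constrains only the bath and interaction parts; the phases $e^{-it\Delta e}$ accumulated by the environment cancel in the reduced state.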
## IV Examples
Here, different explicit examples that admit an exact treatment are studied.
### IV.1 Eternal non-Markovianity
As a first explicit example we consider the non-Markovian system evolution
$\frac{d\rho_{t}}{dt}=\frac{1}{2}\sum_{\alpha=\hat{x},\hat{y},\hat{z}}\gamma_{\alpha}(t)(\sigma_{\alpha}\rho_{t}\sigma_{\alpha}-\rho_{t}),$
(17)
where $\\{\sigma_{\alpha}\\}$ are the $\alpha$-Pauli matrices (directions in
Bloch sphere are denoted with a hat symbol). The time-dependent rates are
$\gamma_{\hat{x}}(t)=\gamma_{\hat{y}}(t)=\gamma,$ and
$\gamma_{\hat{z}}(t)=-\gamma\tanh[\gamma t].$ As demonstrated in Ref. megier
this kind of eternal non-Markovian evolution $[\gamma_{\hat{z}}(t)<0$ $\forall
t]$ is induced by the coupling of the system with a statistical mixture of
classical random fields. In fact, the system state can be written as
$\rho_{t}=\sum_{\alpha=\hat{x},\hat{y},\hat{z}}q_{\alpha}\exp[\gamma
t\mathbb{L}_{\alpha}][\rho_{0}],$ where
$\mathbb{L}_{\alpha}[\cdot]\equiv(\sigma_{\alpha}\cdot\sigma_{\alpha}-\cdot)$
is induced by each random field, whose (mixture) weights are
$q_{\hat{x}}=q_{\hat{y}}=1/2,$ and $q_{\hat{z}}=0.$ This underlying
“microscopic” description allows one to calculate multi-time statistics in an
exact way. In particular, the CPF correlations follow straightforwardly from
Eqs. (9) and (10),
$\overline{(\cdots)}\rightarrow\sum_{\alpha=\hat{x},\hat{y},\hat{z}}q_{\alpha}(\cdots),$
where the (time-independent) “noise environmental realizations” only assume
the values $\alpha=\hat{x},\hat{y},\hat{z},$ each with probability
$q_{\alpha}.$
Assuming that the three measurement processes are performed in the Bloch
directions $\hat{x}$-$\hat{n}$-$\hat{x},$ where $\hat{n}$ is an arbitrary
direction in the $\hat{z}$-$\hat{x}$ plane (with azimuthal angle $\theta$),
for the deterministic scheme it follows (see Appendix B)
$C_{pf}^{(d)}|_{\breve{y}=\pm 1}\underset{\hat{x}\hat{n}\hat{x}}{=}\sin^{2}(\theta)[c(t+\tau)-c(t)c(\tau)],$ (18)
where $c(t)\equiv q_{\hat{x}}+(q_{\hat{y}}+q_{\hat{z}})\exp[-2\gamma t].$ The
initial system state was taken as $\rho_{0}=|\pm\rangle\langle\pm|,$ where
$|\pm\rangle$ denotes the eigenvectors of $\sigma_{\hat{z}}.$ In Fig. 1(a) we
plot $C_{pf}^{(d)}|_{\breve{y}}$ [Eq. (18)] and $C_{pf}^{(r)}|_{\breve{y}}$
for equal measurement time intervals, $t=\tau.$ The property
$\lim_{t\rightarrow\infty}C_{pf}^{(d)}|_{\breve{y}}\neq 0$ indicates that the
environment correlations do not decay in time budiniCPF . On the other hand,
independently of the choice of the renewed (pure) states
$\rho_{\breve{y}=\pm 1}$ and $\wp(\breve{y}|x),$ we get
$C_{pf}^{(r)}|_{\breve{y}}=0$ (see Appendix B). As expected from Eq. (10),
this result indicates the absence of any BIF.
### IV.2 Interaction with a bosonic bath
As a second example, we consider a two-level system coupled to a bosonic bath,
$H_{T}=\frac{\omega_{0}}{2}\sigma_{\hat{z}}+\sum_{k}\omega_{k}b_{k}^{{\dagger}}b_{k}+\sum_{k}(g_{k}Sb_{k}^{{\dagger}}+g_{k}^{\ast}S^{\dagger}b_{k}).$
(19)
Each contribution defines the system, bath, and interaction Hamiltonians
respectively [Eq. (15)]. The bosonic operators satisfy
$[b_{k},b_{k^{\prime}}^{{\dagger}}]=\delta_{k,k^{\prime}}.$ Taking the system
operators $S^{\dagger}=\left|{+}\right\rangle\left\langle{-}\right|$ and
$S=\left|{-}\right\rangle\left\langle{+}\right|$ as the raising and lowering
operators in the natural basis $\left|{\pm}\right\rangle,$ the system dynamics
is dissipative breuerbook , while in the case $S=S^{\dagger}=\sigma_{\hat{z}}$
a dephasing dynamics is recovered. We assume the bipartite initial state
$|\Psi_{0}^{se}\rangle=|\psi_{0}\rangle\otimes\prod_{k}|0\rangle_{k},$ where
$\\{|0\rangle_{k}\\}$ are the ground states of each bosonic mode. In this
case, by working the observables in an interaction representation, similarly
to Refs. budiniChina ; budiniBrasil , the joint probabilities (12) and (13)
can be calculated in an exact way unpublished .
Figure 1: CPF correlation [Eq. (7)] for the deterministic and random schemes,
left and right columns respectively, for equal measurement time intervals
$t=\tau.$ (a) Eternal non-Markovianity, measurements
$\hat{x}$-$\hat{n}$-$\hat{x}.$ (b) Decay in a bosonic bath, measurements
$\hat{z}$-$\hat{z}$-$\hat{z}$ and $\hat{x}$-$\hat{z}$-$\hat{x}.$ (c) Dephasing
in a bosonic bath, measurements $\hat{n}$-$\hat{y}$-$\hat{x}.$ In all cases,
the $\hat{n}-$direction is defined by the angle $\theta.$ The renewed states
$\rho_{\breve{y}=\pm 1}$ are described in the main text.
For the dissipative dynamics [$S=\left|{-}\right\rangle\left\langle{+}\right|$
in Eq. (19)] the CPF correlation in the random scheme reads unpublished
$C_{pf}^{(r)}|_{\breve{y}=-1}\underset{\hat{z}\hat{z}\hat{z}}{=}|G(t,\tau)|^{2},\qquad C_{pf}^{(r)}|_{\breve{y}=-1}\underset{\hat{x}\hat{z}\hat{x}}{=}-\mathrm{Re}[G(t,\tau)].$ (20)
Here, we consider two different measurement possibilities, along the
$\hat{z}$-$\hat{z}$-$\hat{z}$ and $\hat{x}$-$\hat{z}$-$\hat{x}$ directions,
both with conditional outcome $\breve{y}=-1.$ The renewed states are
$\rho_{\breve{y}=\pm}=|\pm\rangle\langle\pm|,$ and we take
$\wp(\breve{y}|x)=1/2.$ The initial system state $|\psi_{0}\rangle$ is chosen
such that $P(x)=1/2.$ Under this condition, for both measurement directions,
in the deterministic scheme we get
$C_{pf}^{(d)}|_{\breve{y}=-1}=[1-|G(t)|^{2}/2]^{-2}C_{pf}^{(r)}|_{\breve{y}=-1}.$
In these expressions,
$G(t,\tau)\equiv\int_{0}^{t}dt^{\prime}\int_{0}^{\tau}d\tau^{\prime}f(\tau^{\prime}+t^{\prime})G(t-t^{\prime})G(\tau-\tau^{\prime}),$
where $G(t)$ is defined by the evolution
$(d/dt)G(t)=-\int_{0}^{t}f(t-t^{\prime})G(t^{\prime})dt^{\prime},$ with $G(0)=1.$
The memory kernel is the bath correlation
$f(t)\equiv\sum_{k}|g_{k}|^{2}\exp[+i(\omega_{0}-\omega_{k})t].$
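The function $G(t)$ can be obtained numerically. For the exponential (Lorentzian) kernel used below, the Volterra equation is equivalent to the damped-oscillator ODE $\ddot{G}+\dot{G}/\tau_{c}+(\gamma/2\tau_{c})G=0$ with $G(0)=1$, $\dot{G}(0)=0$. The following sketch (our own check, with the assumed value $\gamma\tau_{c}=5$ of Fig. 1(b)) integrates the equivalent first-order system and compares with the closed-form underdamped solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, tau_c = 1.0, 5.0        # assumed parameters, gamma*tau_c = 5 as in Fig. 1(b)

# kernel f(t) = (gamma/2 tau_c) exp(-t/tau_c); with z(t) = ∫_0^t f(t-t')G(t')dt',
# the Volterra equation becomes dG/dt = -z, dz/dt = f(0) G - z/tau_c
def rhs(t, y):
    G, z = y
    return [-z, (gamma / (2 * tau_c)) * G - z / tau_c]

ts = np.linspace(0.0, 10.0, 201)
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=ts, rtol=1e-10, atol=1e-12)
G_num = sol.y[0]

# closed form for gamma*tau_c = 5 (characteristic roots (-1 ± 3i)/(2 tau_c)):
G_exact = np.exp(-ts / (2 * tau_c)) * (np.cos(3 * ts / (2 * tau_c))
                                       + np.sin(3 * ts / (2 * tau_c)) / 3)
assert np.allclose(G_num, G_exact, atol=1e-7)
```

The auxiliary variable $z(t)$ exploits that an exponential kernel obeys $f'=-f/\tau_{c}$, turning the integro-differential equation into a local one.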
In Fig. 1(b), for a Lorentzian spectral density budiniBrasil ,
$f(t)=(\gamma/2\tau_{c})\exp(-|t|/\tau_{c}),$ with $\gamma\tau_{c}=5,$ we plot
the CPF correlations. In contrast to the previous case, here for both the
deterministic and random schemes, the CPF correlations do not vanish. Thus,
memory effects rely on BIFs, which are present independently of the bath
correlation time $\tau_{c}.$
In the dephasing case [$S=\sigma_{\hat{z}}$ in Eq. (19)], the CPF correlation
in the random scheme is unpublished
$C_{pf}^{(r)}|_{\breve{y}}\underset{\hat{n}\hat{y}\hat{x}}{=}\breve{y}\cos(\theta)\exp(-\gamma_{\tau})\sin(\Phi_{t,\tau}).$
(21)
Here, we consider the successive measurements in Bloch directions
$\hat{n}$-$\hat{y}$-$\hat{x}.$ Furthermore, we take $\wp(\breve{y}|x)=1/2,$
and pure states $\rho_{\breve{y}}$ corresponding to the eigenvectors of
$\sigma_{\hat{y}}.$ The initial condition $|\psi_{0}\rangle$ is such that
independently of $\hat{n},$ $P(x)=1/2.$ Under this condition the CPF
correlation of the deterministic scheme can be written as
$C_{pf}^{(d)}|_{\breve{y}}\underset{\hat{n}\hat{y}\hat{x}}{=}\sin(\theta)\exp[-(\gamma_{t}+\gamma_{\tau})]\sinh(\Gamma_{t,\tau})+C_{pf}^{(r)}|_{\breve{y}}.$
In these expressions,
$\Gamma_{t,\tau}=\gamma_{t}+\gamma_{\tau}-\gamma_{t+\tau}$ and
$\Phi_{t,\tau}=\phi_{t}+\phi_{\tau}-\phi_{t+\tau}$ where $\gamma_{t}\equiv
4\sum\nolimits_{k}(|g_{k}|^{2}/\omega_{k}^{2})[1-\cos(\omega_{k}t)],$ and
$\phi_{t}\equiv
4\sum\nolimits_{k}(|g_{k}|^{2}/\omega_{k}^{2})\sin(\omega_{k}t).$
Assuming the spectral density
$J(\omega)=\lambda\omega\exp(-\omega/\omega_{c}),$ where $\omega_{c}$ is a
cutoff frequency breuerbook , it follows
$\gamma_{t}=(1/2)\ln[1+(\omega_{c}t)^{2}]$ and $\phi_{t}=\arctan[\omega_{c}t]$
($\lambda=1$). In Fig. 1(c) we plot the CPF correlation of both schemes. Even
when the unperturbed system dynamics can be written as a (continuous)
statistical superposition of unitary dynamics nori , our approach detects the
presence of BIFs, $C_{pf}^{(r)}|_{\breve{y}}\neq 0.$ In fact,
$C_{pf}^{(r)}|_{\breve{y}}=0$ only occurs for very specific measurement
directions.
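The closed forms quoted for $\gamma_{t}$ and $\phi_{t}$ follow from standard Frullani-type integrals once the mode sums are replaced by integrals over $J(\omega)$. The quick numerical check below is our own sketch; the overall prefactor ($4\lambda$) is absorbed into the normalization convention, so only the stated time dependence is tested, with the assumed values $\omega_{c}=1$ and $t=2$:

```python
import numpy as np
from scipy.integrate import quad

wc, t = 1.0, 2.0   # assumed cutoff frequency and time

# ∫_0^∞ dω e^{-ω/ωc} (1 - cos ωt)/ω = (1/2) ln[1 + (ωc t)^2]
gam, _ = quad(lambda w: np.exp(-w / wc) * (1 - np.cos(w * t)) / w, 0, np.inf)
assert np.isclose(gam, 0.5 * np.log(1 + (wc * t) ** 2), atol=1e-7)

# ∫_0^∞ dω e^{-ω/ωc} sin(ωt)/ω = arctan(ωc t)
phi, _ = quad(lambda w: np.exp(-w / wc) * np.sin(w * t) / w, 0, np.inf)
assert np.isclose(phi, np.arctan(wc * t), atol=1e-7)
```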
## V Conclusions
Memory effects in open quantum systems may or may not rely on a bidirectional
system-environment physical exchange of information. We introduced an
operational scheme that allows one to distinguish between these two situations,
solving a long-standing problem in the theory of non-Markovian open quantum
systems. The method is based on a probabilistic relation that links the
development of BIFs with the modification of the environmental dynamical
behavior. We showed that BIFs can be detected with a minimal number of three
system measurement processes supplemented by an intermediate system update
operation. A generalized CPF correlation, defined between the first and last
measurement outcomes, witnesses memory effects. Depending on the system state
update scheme, deterministic or random, it witnesses memory effects
independently of their underlying origin or restricted to the presence of
BIFs, respectively. Consistently, when the environment is modeled by classical
noise fluctuations, or when the environment dynamics (incoherent or quantum)
is not affected during the system evolution, no BIF is detected. The presence
of BIFs for decay and dephasing dynamics modeled through unitary
system-environment interactions also supports the consistency of the developed
approach.
Given the operational character of the proposed scheme, it can be implemented,
for example, in quantum optical arrangements budiniChina ; budiniBrasil ,
providing in general a valuable experimental tool for studying the underlying
origin of quantum memory effects. Generalizations for an arbitrary number of
measurement processes can also be worked out in a similar way. The proposed
theoretical ground may also shed light on the possibility of classifying
memory effects into classical and quantum ones costa , and may provide an
explicit test for different (causal) structures arising in quantum causal
modelling causal .
## Acknowledgments
This paper was supported by Consejo Nacional de Investigaciones Científicas y
Técnicas (CONICET), Argentina.
## Appendix A Joint probabilities
The system is subjected to three measurement processes performed at times
$0\rightarrow t\rightarrow t+\tau.$ The corresponding measurement operators
are denoted as $\\{\Omega_{x}\\},$ $\\{\Omega_{y}\\},$ and $\\{\Omega_{z}\\}.$
The intermediate $y$-measurement is taken as a projective one,
$\Omega_{y}=|y\rangle\langle y|.$ The corresponding post-measurement system
state is $\rho_{y}=|y\rangle\langle y|.$ After this step, the state
transformation $\rho_{y}\rightarrow\rho_{\breve{y}}$ is externally applied.
Each of the possible states $\\{\rho_{\breve{y}}\\}$ is chosen with
conditional probability $\wp(\breve{y}|y,x),$ which only depends on the
previous particular measurement outcomes $x$ and $y.$
The relevant joint probability $P(z,\breve{y},x)$ for the present proposal can
be obtained as
$P(z,\breve{y},x)=\sum_{y}P(z,\breve{y},y,x).$ (22)
The joint probability for the four events $P(z,\breve{y},y,x)$ follows from
standard quantum measurement theory after knowing the open system dynamics.
The CPF probability $P(z,x|\breve{y}),$ which determines the CPF correlation
[Eq. (7)] budiniCPF , can straightforwardly be obtained as
$P(z,x|\breve{y})=P(z,\breve{y},x)/P(\breve{y}),$ (23)
where
$P(\breve{y})=\sum_{z,x}P(z,\breve{y},x)=\sum_{z,y,x}P(z,\breve{y},y,x).$ In
addition, $P(z|\breve{y})=\sum_{x}P(z,x|\breve{y})$ and
$P(x|\breve{y})=\sum_{z}P(z,x|\breve{y}).$
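The chain from the joint probability $P(z,\breve{y},x)$ to the CPF correlation can be packaged in a few lines. The sketch below is our own helper, not from the paper: it computes $C_{pf}|_{\breve{y}}$ from a joint-probability table via Eqs. (23) and (7), with $O_{z}=z=\pm 1$ and $O_{x}=x=\pm 1$. As a sanity check, a Markovian table $P(z,\breve{y},x)=P(z|\breve{y})\wp(\breve{y}|x)P(x)$ (with assumed toy values) yields zero:

```python
import numpy as np

def cpf_correlation(P, ybr):
    """C_pf|_ybr = <zx> - <z><x> under P(z, x | ybr), built from the
    joint table P: dict mapping (z, ybreve, x) -> probability."""
    pairs = [(z, x) for z in (+1, -1) for x in (+1, -1)]
    Py = sum(P[(z, ybr, x)] for z, x in pairs)            # P(ybr)
    Ezx = sum(z * x * P[(z, ybr, x)] for z, x in pairs) / Py
    Ez = sum(z * P[(z, ybr, x)] for z, x in pairs) / Py
    Ex = sum(x * P[(z, ybr, x)] for z, x in pairs) / Py
    return Ezx - Ez * Ex

# Markovian example: P(z, ybr, x) = P(z|ybr) * p(ybr|x) * P(x), toy numbers
Pz_given_y = {(+1, +1): 0.7, (-1, +1): 0.3, (+1, -1): 0.4, (-1, -1): 0.6}
P = {(z, yb, x): Pz_given_y[(z, yb)] * 0.5 * 0.5
     for z in (+1, -1) for yb in (+1, -1) for x in (+1, -1)}

assert np.isclose(cpf_correlation(P, +1), 0.0)   # no correlation without memory
```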
### A.1 Classical noise environmental fluctuations
For classical noisy environments the outcome probabilities are obtained for
each realization, while an ensemble average is performed at the end of the
calculation.
Let $\rho_{0}$ denote the initial system state. After performing the first
system measurement, with operators $\\{\Omega_{x}\\},$ the transformation
$\rho_{0}\rightarrow\rho_{x}$ occurs, where
$\rho_{x}=\frac{\Omega_{x}\rho_{0}\Omega_{x}^{\dagger}}{\mathrm{Tr}_{s}(E_{x}\rho_{0})}.$
(24)
Here, $E_{x}=\Omega_{x}^{\dagger}\Omega_{x}.$ The probability of each outcome
is
$P(x)=\mathrm{Tr}_{s}(E_{x}\rho_{0}).$ (25)
During the time interval $0\rightarrow t,$ the system evolves with a
(completely positive) dynamics defined by the stochastic propagator
$\mathcal{E}_{t,0}^{st}.$ After the second $y$-measurement, with operators
$\\{\Omega_{y}\\},$ it follows the transformation
$\mathcal{E}_{t,0}^{st}[\rho_{x}]\rightarrow\rho_{y},$ where
$\rho_{y}=\frac{\Omega_{y}\mathcal{E}_{t,0}^{st}[\rho_{x}]\Omega_{y}^{\dagger}}{\mathrm{Tr}_{s}(E_{y}\mathcal{E}_{t,0}^{st}[\rho_{x}])}=|y\rangle\langle
y|,$ (26)
and $E_{y}=\Omega_{y}^{\dagger}\Omega_{y}.$ Here, we used that the
$y$-measurement is a projective one, $\Omega_{y}=|y\rangle\langle y|.$ The
conditional probability $P^{st}(y|x)$ of outcome $y$ given that the previous
one was $x$ is
$P^{st}(y|x)=\mathrm{Tr}_{s}(E_{y}\mathcal{E}_{t,0}^{st}[\rho_{x}]).$ (27)
At this stage, independently of the outcome $y,$ the system state is updated
as $\rho_{y}\rightarrow\rho_{\breve{y}}.$ The states $\\{\rho_{\breve{y}}\\}$
are chosen with conditional probability $\wp(\breve{y}|y,x),$ which does not
depend on the particular noise realization.
In the final steps $(t\rightarrow t+\tau),$ the system evolves with the
propagator $\mathcal{E}_{t+\tau,t}^{st}$ and the last $z$-measurement, with
operators $\\{\Omega_{z}\\},$ is performed ($\tau$ is the time interval
between the measurements). Thus,
$\mathcal{E}_{t+\tau,t}^{st}[\rho_{\breve{y}}]\rightarrow\rho_{z}^{st},$ where
$\rho_{z}^{st}=\frac{\Omega_{z}\mathcal{E}_{t+\tau,t}^{st}[\rho_{\breve{y}}]\Omega_{z}^{\dagger}}{\mathrm{Tr}_{s}(E_{z}\mathcal{E}_{t+\tau,t}^{st}[\rho_{\breve{y}}])},$
(28)
with $E_{z}=\Omega_{z}^{\dagger}\Omega_{z}.$ The conditional probability of
outcome $z$ given that the previous ones were $x$ and $y,$ and given that the
state $\rho_{\breve{y}}$ was imposed, is
$P^{st}(z|\breve{y},y,x)=\mathrm{Tr}_{s}(E_{z}\mathcal{E}_{t+\tau,t}^{st}[\rho_{\breve{y}}]).$
(29)
For each noise realization, this object does not depend on outcomes $y$ and
$x.$
The joint probability of the four events $P(z,\breve{y},y,x)$ can be obtained
as an average over an ensemble of realizations. Denoting the average operation
with the overbar symbol, Bayes rule leads to
$P(z,\breve{y},y,x)=\overline{P^{st}(z|\breve{y},y,x)\wp(\breve{y}|y,x)P^{st}(y|x)P(x)}.$
(30)
From Eqs. (25), (27), and (29), we get
$\frac{P(z,\breve{y},y,x)}{\wp(\breve{y}|y,x)}=\overline{\mathrm{Tr}_{s}(E_{z}\mathcal{E}_{t+\tau,t}^{st}[\rho_{\breve{y}}])\mathrm{Tr}_{s}(E_{y}\mathcal{E}_{t,0}^{st}[\tilde{\rho}_{x}])},$
(31)
where $\tilde{\rho}_{x}\equiv\Omega_{x}\rho_{0}\Omega_{x}^{\dagger},$ which
recovers Eq. (8).
### A.2 Completely positive system-environment dynamics
Let $\rho_{0}^{se}=\rho_{0}\otimes\sigma_{0}$ denote the bipartite state at
the initial time. After performing the first system measurement, with
operators $\\{\Omega_{x}\\},$ the transformation
$\rho_{0}^{se}\rightarrow\rho_{x}^{se}$ occurs, where the post-measurement state is
$\rho_{x}^{se}=\frac{\Omega_{x}\rho_{0}^{se}\Omega_{x}^{\dagger}}{\mathrm{Tr}_{se}(E_{x}\rho_{0}^{se})},$
(32)
with $E_{x}=\Omega_{x}^{\dagger}\Omega_{x}.$ The probability of each outcome
is
$P(x)=\mathrm{Tr}_{s}(E_{x}\rho_{0}).$ (33)
During the time interval $0\rightarrow t,$ the bipartite arrangement evolves
with a completely positive dynamics defined by the propagator
$\mathcal{E}_{t,0}.$ After the second $y$-measurement, it follows the
transformation $\mathcal{E}_{t,0}[\rho_{x}^{se}]\rightarrow\rho_{y}^{se},$
where
$\rho_{y}^{se}=\frac{\Omega_{y}\mathcal{E}_{t,0}[\rho_{x}^{se}]\Omega_{y}^{\dagger}}{\mathrm{Tr}_{se}(E_{y}\mathcal{E}_{t,0}[\rho_{x}^{se}])}=\rho_{y}\otimes\sigma_{e}^{yx}.$
(34)
Here, $E_{y}=\Omega_{y}^{\dagger}\Omega_{y}.$ In the last equality we used
that the second measurement is a projective one, $\Omega_{y}=|y\rangle\langle
y|$ and $\rho_{y}=|y\rangle\langle y|.$ The environment state is
$\sigma_{e}^{yx}=\frac{\mathrm{Tr}_{s}(E_{y}\mathcal{E}_{t,0}[\rho_{x}^{se}])}{\mathrm{Tr}_{se}(E_{y}\mathcal{E}_{t,0}[\rho_{x}^{se}])}.$
(35)
The conditional probability $P(y|x)$ of outcome $y$ given that the previous
one was $x$ is
$P(y|x)=\mathrm{Tr}_{se}(E_{y}\mathcal{E}_{t,0}[\rho_{x}^{se}]).$ (36)
At this stage, independently of the outcome $y,$ the system is initialized in
an independently chosen state $\rho_{\breve{y}},$ with conditional probability
$\wp(\breve{y}|y,x).$ Thus, the bipartite state [Eq. (34)] becomes
$\rho_{y}^{se}\rightarrow\rho_{\breve{y}}^{se}=\rho_{\breve{y}}\otimes\sigma_{e}^{yx}.$
(37)
In the final steps $(t\rightarrow t+\tau),$ the bipartite system arrangement
evolves with the propagator $\mathcal{E}_{t+\tau,t},$ and the last
$z$-measurement is performed. Hence,
$\mathcal{E}_{t+\tau,t}[\rho_{\breve{y}}\otimes\sigma_{e}^{yx}]\rightarrow\rho_{z}^{se},$
where
$\rho_{z}^{se}=\frac{\Omega_{z}\mathcal{E}_{t+\tau,t}[\rho_{\breve{y}}\otimes\sigma_{e}^{yx}]\Omega_{z}^{\dagger}}{\mathrm{Tr}_{se}(E_{z}\mathcal{E}_{t+\tau,t}[\rho_{\breve{y}}\otimes\sigma_{e}^{yx}])},$
(38)
with $E_{z}=\Omega_{z}^{\dagger}\Omega_{z}.$ The conditional probability of
outcome $z$ given that the previous ones were $x$ and $y,$ and given that the
state $\rho_{\breve{y}}$ was imposed, is
$P(z|\breve{y},y,x)=\mathrm{Tr}_{se}(E_{z}\mathcal{E}_{t+\tau,t}[\rho_{\breve{y}}\otimes\sigma_{e}^{yx}]).$
(39)
From Bayes rule, the joint probability $P(z,\breve{y},y,x)$ of the four events
can be written as
$P(z,\breve{y},y,x)=P(z|\breve{y},y,x)\wp(\breve{y}|y,x)P(y|x)P(x).$ (40)
From Eqs. (33), (36), and (39), it follows
$\displaystyle P(z,\breve{y},y,x)$ $\displaystyle=$
$\displaystyle\mathrm{Tr}_{se}(E_{z}\mathcal{E}_{t+\tau,t}[\rho_{\breve{y}}\otimes\sigma_{e}^{yx}])$
(41)
$\displaystyle\wp(\breve{y}|y,x)\mathrm{Tr}_{se}(E_{y}\mathcal{E}_{t,0}[\tilde{\rho}_{x}^{se}]),$
where
$\tilde{\rho}_{x}^{se}\equiv\Omega_{x}\rho_{0}^{se}\Omega_{x}^{\dagger}.$
Using Eq. (35) for $\sigma_{e}^{yx},$ finally we get
$\frac{P(z,\breve{y},y,x)}{\wp(\breve{y}|y,x)}=\mathrm{Tr}_{se}(E_{z}\mathcal{E}_{t+\tau,t}[\rho_{\breve{y}}\otimes\mathrm{Tr}_{s}(E_{y}\mathcal{E}_{t,0}[\tilde{\rho}_{x}^{se}])]),$
(42)
which recovers Eq. (11).
### A.3 Unconditional environment average
The calculation of $P(z,\breve{y},y,x)$ in the previous section relies on the
association
$\langle\cdots\rangle_{\mathbf{e}}\leftrightarrow\mathrm{Tr}_{e}(\cdots).$
This unconditional environment average emerges when the successive (non-
selective) measurements of the environment do not modify its state at each
stage. While this result follows straightforwardly from quantum measurement
theory breuerbook , here it is explicitly confirmed.
We consider three measurement processes, but now they provide information
about both the system and the environment. The successive outcomes are denoted as
$x\rightarrow(y\rightarrow\breve{y})\rightarrow z$ and
$\mathfrak{X}\rightarrow\mathfrak{Y}\rightarrow\mathfrak{Z}$ (Latin and
Fraktur letters) respectively. Introducing the notation $X=(x,\mathfrak{X}),$
$Y=(y,\mathfrak{Y}),$ and $Z=(z,\mathfrak{Z}),$ the measurement operators are
denoted as $\\{\Omega_{X}\\},$ $\\{\Omega_{Y}\\},$ and $\\{\Omega_{Z}\\},$
where $\Omega_{X}=\Omega_{x}\otimes\Omega_{\mathfrak{X}},$
$\Omega_{Y}=\Omega_{y}\otimes\Omega_{\mathfrak{Y}}$ and
$\Omega_{Z}=\Omega_{z}\otimes\Omega_{\mathfrak{Z}}.$ As before, the
intermediate system measurement is taken as a projective one,
$\Omega_{y}=|y\rangle\langle y|.$
From Bayes rule, the probability of all measurements and preparation events
can be written as
$P(Z,\breve{y},Y,X)=P(Z|\breve{y},Y,X)\wp(\breve{y}|y,x)P(Y|X)P(X).$ (43)
By performing the same calculation steps as in the previous section [Eq.
(41)], we straightforwardly obtain
$\displaystyle P(Z,\breve{y},Y,X)$ $\displaystyle=$
$\displaystyle\mathrm{Tr}_{se}(E_{Z}\mathcal{E}_{t+\tau,t}[\rho_{\breve{y}}\otimes\sigma_{e}^{YX}])$
(44)
$\displaystyle\wp(\breve{y}|y,x)\mathrm{Tr}_{se}(E_{Y}\mathcal{E}_{t,0}[\tilde{\rho}_{X}^{se}]),$
where $E_{J}=\Omega_{J}^{\dagger}\Omega_{J}$ $(J=X,Y,Z),$ and
$\tilde{\rho}_{X}^{se}=\Omega_{X}\rho_{0}^{se}\Omega_{X}^{\dagger}.$
Furthermore,
$\sigma_{e}^{YX}=\frac{\Omega_{\mathfrak{Y}}\mathrm{Tr}_{s}(E_{y}\mathcal{E}_{t,0}[\rho_{X}^{se}])\Omega_{\mathfrak{Y}}^{{\dagger}}}{\mathrm{Tr}_{se}(E_{Y}\mathcal{E}_{t,0}[\rho_{X}^{se}])},$
(45)
where
$\rho_{X}^{se}=\tilde{\rho}_{X}^{se}/\mathrm{Tr}_{se}(E_{X}\rho_{0}^{se}).$
Similarly, Eq. (44) can be rewritten as
$\frac{P(Z,\breve{y},Y,X)}{\wp(\breve{y}|y,x)}=\mathrm{Tr}_{se}(E_{Z}\mathcal{E}_{t+\tau,t}[\rho_{\breve{y}}\otimes\Omega_{\mathfrak{Y}}\mathrm{Tr}_{s}(E_{y}\mathcal{E}_{t,0}[\tilde{\rho}_{X}^{se}])\Omega_{\mathfrak{Y}}^{{\dagger}}]).$
(46)
The probability for the environment outcomes follows by marginalizing over the
system outcomes,
$P(\mathfrak{Z},\mathfrak{Y},\mathfrak{X})=\sum_{z,\breve{y},y,x}P(Z,\breve{y},Y,X).$
(47)
Similarly, the probability for the system outcomes follows by marginalizing
over the outcomes corresponding to the reservoir measurements,
$P(z,\breve{y},y,x)=\sum_{\mathfrak{Z},\mathfrak{Y},\mathfrak{X}}P(Z,\breve{y},Y,X).$
(48)
This result for $P(z,\breve{y},y,x)$ relies on explicit environment
measurements. In contrast, the results of the previous section were derived
assuming that the environment is not observed at all. Nevertheless, both kinds
of results can be put in one-to-one correspondence. In fact, Eqs. (41) and
(42) can be recovered from Eqs. (44) and (46), via the marginalization (48),
under the conditions
$\sigma_{0}=\sum_{\mathfrak{X}}\Omega_{\mathfrak{X}}\sigma_{0}\Omega_{\mathfrak{X}}^{{\dagger}},\qquad\sigma_{e}^{yx}=\sum_{\mathfrak{Y}}\Omega_{\mathfrak{Y}}\sigma_{e}^{yx}\Omega_{\mathfrak{Y}}^{{\dagger}},$ (49)
where $\sigma_{0}$ is the initial bath state and $\sigma_{e}^{yx}$ is defined
by Eq. (35). As expected, these equalities imply that the bath states at each
stage are not modified by the corresponding reservoir (non-selective)
measurement processes. Thus, the unconditional environment average of the
previous section [Eq. (42)] relies on this kind of observable, which allows us
to formulate the full approach without performing any explicit reservoir
measurement.
For projective environment measurements, the relations (49) imply the
commutation relations $[\sigma_{0},\Omega_{\mathfrak{X}}]=0,$
$[\sigma_{e}^{yx},\Omega_{\mathfrak{Y}}]=0.$ In classical (incoherent)
reservoirs, where the bath state is diagonal in (a unique) privileged basis,
these conditions define the corresponding “classical environment observables.”
## Appendix B Eternal non-Markovianity
The non-Markovian system density-matrix evolution is given by Eq. (17).
Different underlying dynamics exist that lead to this evolution. The solution
map $\rho_{0}\rightarrow\rho_{t}$ can be written as a mixture of three
Markovian maps megier
$\rho_{t}=\sum_{\alpha=\hat{x},\hat{y},\hat{z}}q_{\alpha}\mathcal{E}_{t,0}^{(\alpha)}[\rho_{0}],$
(50)
with positive and normalized statistical weights $\\{q_{\alpha}\\},$
$\sum_{\alpha=\hat{x},\hat{y},\hat{z}}q_{\alpha}=1.$ The Markovian propagators
are
$\mathcal{E}_{t,t_{0}}^{(\alpha)}[\rho_{0}]=h_{t-t_{0}}^{(+)}\rho_{0}+h_{t-t_{0}}^{(-)}\sigma_{\alpha}\rho_{0}\sigma_{\alpha},$
(51)
with scalar functions $h_{t}^{(\pm)}\equiv(1\pm e^{-2\gamma t})/2.$ Each
propagator
$\mathcal{E}_{t,t_{0}}^{(\alpha)}=\exp[\gamma(t-t_{0})\mathbb{L}_{\alpha}]$
corresponds to the solution of the Markovian Lindblad evolution
$\frac{d\rho_{t}}{dt}=\gamma\mathbb{L}_{\alpha}[\rho_{t}]=\gamma(\sigma_{\alpha}\rho_{t}\sigma_{\alpha}-\rho_{t}).$
(52)
The evolution (17) emerges with $q_{\hat{x}}=q_{\hat{y}}=1/2$ and
$q_{\hat{z}}=0$ megier .
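This equivalence can be verified directly. The sketch below is our own numerical check: it builds $\rho_{t}$ from the mixture (50)-(51) and confirms, by a central finite difference, that it obeys the time-local evolution (17) with rates $\gamma_{\hat{x}}=\gamma_{\hat{y}}=\gamma$ and $\gamma_{\hat{z}}(t)=-\gamma\tanh(\gamma t)$:

```python
import numpy as np

gamma = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]
q = [0.5, 0.5, 0.0]                      # mixture weights of Eq. (50)

def mixture_state(rho0, t):
    """rho_t from Eqs. (50)-(51): sum_a q_a [h+ rho0 + h- s_a rho0 s_a]."""
    hp = (1 + np.exp(-2 * gamma * t)) / 2
    hm = (1 - np.exp(-2 * gamma * t)) / 2
    return sum(qa * (hp * rho0 + hm * (s @ rho0 @ s)) for qa, s in zip(q, paulis))

def master_rhs(rho, t):
    """Right-hand side of Eq. (17) with the eternal-NM rates."""
    rates = [gamma, gamma, -gamma * np.tanh(gamma * t)]
    return 0.5 * sum(r * (s @ rho @ s - rho) for r, s in zip(rates, paulis))

rho0 = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])   # assumed initial state
t, dt = 0.8, 1e-5
drho = (mixture_state(rho0, t + dt) - mixture_state(rho0, t - dt)) / (2 * dt)
assert np.allclose(drho, master_rhs(mixture_state(rho0, t), t), atol=1e-8)
```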
The probability $P(z,\breve{y},y,x)$ can be straightforwardly obtained from
Eq. (31) under the replacement
$\overline{(\cdots)}\rightarrow\sum_{\alpha}q_{\alpha}(\cdots).$ We get
$\frac{P(z,\breve{y},y,x)}{\wp(\breve{y}|y,x)}=\sum_{\alpha=\hat{x},\hat{y},\hat{z}}q_{\alpha}\mathrm{Tr}_{s}(E_{z}\mathcal{E}_{t+\tau,t}^{(\alpha)}[\rho_{\breve{y}}])\mathrm{Tr}_{s}(E_{y}\mathcal{E}_{t,0}^{(\alpha)}[\tilde{\rho}_{x}]),$
(53)
where $E_{i}\equiv\Omega_{i}^{\dagger}\Omega_{i}$ $(i=x,y,z)$ and
$\tilde{\rho}_{x}\equiv\Omega_{x}\rho_{0}\Omega_{x}^{\dagger}$ is the
(unnormalized) system state after the first $x$-measurement.
$\mathrm{Tr}_{s}(\cdots)$ denotes a trace operation in the system Hilbert
space. $\rho_{\breve{y}}$ is the (updated) system state after the second
$y$-measurement.
In the deterministic scheme $[\wp(\breve{y}|y,x)=\delta_{\breve{y},y}],$ using
that $P(z,\breve{y},x)=\sum_{y}P(z,\breve{y},y,x),$ Eq. (53) leads to
$P(z,\breve{y},x)\overset{d}{=}\sum_{\alpha=\hat{x},\hat{y},\hat{z}}q_{\alpha}\mathrm{Tr}_{s}(E_{z}\mathcal{E}_{t+\tau,t}^{(\alpha)}[\rho_{\breve{y}}])\mathrm{Tr}_{s}(E_{\breve{y}}\mathcal{E}_{t,0}^{(\alpha)}[\tilde{\rho}_{x}]).$
(54)
In general, this joint probability does not fulfill a Markov condition. Thus,
$C_{pf}^{(d)}|_{\breve{y}}\neq 0$ detects memory effects. On the other hand,
in the random scheme $[\wp(\breve{y}|y,x)=\wp(\breve{y}|x)]$ it follows
$P(z,\breve{y},x)\overset{r}{=}\sum_{\alpha=\hat{x},\hat{y},\hat{z}}q_{\alpha}\mathrm{Tr}_{s}(E_{z}\mathcal{E}_{t+\tau,t}^{(\alpha)}[\rho_{\breve{y}}])\wp(\breve{y}|x)\mathrm{Tr}_{s}(\tilde{\rho}_{x}),$
(55)
which recovers a Markovian structure,
$P(z,\breve{y},x)=P(z|\breve{y})\wp(\breve{y}|x)P(x),$ with
$P(z|\breve{y})=\sum_{\alpha=\hat{x},\hat{y},\hat{z}}q_{\alpha}\mathrm{Tr}_{s}(E_{z}\mathcal{E}_{t+\tau,t}^{(\alpha)}[\rho_{\breve{y}}])$
and $P(x)=\mathrm{Tr}_{s}(\tilde{\rho}_{x}).$ Thus, independently of the
chosen system measurement observables it follows
$C_{pf}^{(r)}|_{\breve{y}}=0,$ indicating consistently the absence of any BIF.
### $\hat{x}$-$\hat{n}$-$\hat{x}$ measurements
We consider the case in which the three measurements are projective. The first
and third ones are performed along the $\hat{x}$-direction of the Bloch sphere.
The intermediate one is performed in a direction
$\hat{n}=\\{\sin(\theta),0,\cos(\theta)\\},$ which lies in the
$\hat{x}$-$\hat{z}$ plane of the Bloch sphere. Thus, the measurement operators
are $\Omega_{x=\pm}=|\hat{x}_{\pm}\rangle\langle\hat{x}_{\pm}|,$
$\Omega_{y=\pm}=|\hat{n}_{\pm}\rangle\langle\hat{n}_{\pm}|,$ and
$\Omega_{z=\pm}=|\hat{x}_{\pm}\rangle\langle\hat{x}_{\pm}|.$ Consistently with
the chosen directions, we have
$|\hat{x}_{\pm}\rangle=(|+\rangle\pm|-\rangle)/\sqrt{2},$ jointly with
$|\hat{n}_{+}\rangle=\cos(\theta/2)|+\rangle+\sin(\theta/2)|-\rangle,$ and
$|\hat{n}_{-}\rangle=\sin(\theta/2)|+\rangle-\cos(\theta/2)|-\rangle.$
For an explicit calculation of the previous probabilities we need to calculate
$P_{\alpha}(\hat{n}|\hat{x})\equiv\mathrm{Tr}_{s}(E_{\hat{n}}\mathcal{E}_{t_{f},t_{i}}^{(\alpha)}[\rho_{\hat{x}}])$
and
$P_{\alpha}(\hat{x}|\hat{n})\equiv\mathrm{Tr}_{s}(E_{\hat{x}}\mathcal{E}_{t_{f},t_{i}}^{(\alpha)}[\rho_{\hat{n}}]),$
where $E_{\hat{n}}=|\hat{n}\rangle\langle\hat{n}|$ and
$\rho_{\hat{x}}=|\hat{x}\rangle\langle\hat{x}|.$ From Eq. (51) and the
definition of the measurement operators, we get
$P_{\alpha}(\hat{n}|\hat{x})=P_{\alpha}(\hat{x}|\hat{n})=h_{t_{f}-t_{i}}^{(+)}|\langle\hat{n}|\hat{x}\rangle|^{2}+h_{t_{f}-t_{i}}^{(-)}|\langle\hat{n}|\sigma_{\alpha}|\hat{x}\rangle|^{2}.$
(56)
Using this result, after a straightforward calculation, from Eqs. (54) and
(56) we get
$P(z,\breve{y},x)\overset{d}{=}\frac{1}{4}[1+\breve{y}x\sin(\theta)c(t)+z\breve{y}\sin(\theta)c(\tau)+zx\sin^{2}(\theta)c(t+\tau)]P(x),$ (57)
where $P(x)=\mathrm{Tr}_{s}(E_{x}\rho_{0}),$ and
$c(t)\equiv q_{\hat{x}}+(q_{\hat{y}}+q_{\hat{z}})\exp[-2\gamma t].$ (58)
In the random scheme, from Eq. (55) we obtain
$P(z,\breve{y},x)\overset{r}{=}\frac{1}{2}[1+z\breve{y}\sin(\theta)c(\tau)]\wp(\breve{y}|x)P(x),$
(59)
where we considered the updated states $\rho_{\breve{y}=\pm
1}=|\hat{n}_{\pm}\rangle\langle\hat{n}_{\pm}|.$
The generalized CPF correlation is given by Eq. (7),
$C_{pf}^{(d/r)}|_{\breve{y}}=\sum_{zx}O_{z}O_{x}[P(z,x|\breve{y})-P(z|\breve{y})P(x|\breve{y})],$
where $P(z,x|\breve{y})$ follows from Eq. (23). Furthermore, $O_{z}=z=\pm 1$
and $O_{x}=x=\pm 1.$ From Eq. (57), the CPF correlation in the deterministic
scheme reads
$C_{pf}^{(d)}|_{\breve{y}}\underset{\hat{x}\hat{n}\hat{x}}{=}\sin^{2}(\theta)\frac{[1-\langle
x\rangle^{2}]}{4[P(\breve{y})]^{2}}[c(t+\tau)-c(t)c(\tau)],$ (60)
where $P(\breve{y})=(1/2)[1+\breve{y}\langle x\rangle\sin(\theta)c(t)]$ and
$\langle x\rangle\equiv\sum_{x=\pm 1}xP(x).$ When
$\rho_{0}=|\pm\rangle\langle\pm|$ it follows $P(x)=1/2$ and consequently
$\langle x\rangle=0.$ This case recovers Eq. (18).
In the random scheme, from Eq. (59) consistently it follows
$C_{pf}^{(r)}|_{\breve{y}}=0.$ (61)
This equality is valid independently of the chosen measurement processes and
updated system states [see Eq. (55)].
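The joint probabilities (54)-(55) and the resulting CPF correlations can be evaluated numerically. The following sketch is our own check, with assumed values $\gamma=1$, $t=\tau=0.4$, $\theta=0.9$, and $\rho_{0}=|+\rangle\langle+|$; it reproduces Eq. (18) in the deterministic scheme and the vanishing correlation (61) in the random scheme:

```python
import numpy as np

g, t, tau, theta = 1.0, 0.4, 0.4, 0.9      # assumed example values
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]
q = [0.5, 0.5, 0.0]                        # mixture weights

def prop(rho, dt, a):
    """Markovian propagator, Eq. (51)."""
    hp = (1 + np.exp(-2 * g * dt)) / 2
    hm = (1 - np.exp(-2 * g * dt)) / 2
    return hp * rho + hm * (paulis[a] @ rho @ paulis[a])

proj = lambda v: np.outer(v, v.conj())
Px = {+1: proj(np.array([1, 1]) / np.sqrt(2)),          # sigma_x eigenstates
      -1: proj(np.array([1, -1]) / np.sqrt(2))}
Pn = {+1: proj(np.array([np.cos(theta / 2), np.sin(theta / 2)])),
      -1: proj(np.array([np.sin(theta / 2), -np.cos(theta / 2)]))}
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)        # |+><+| of sigma_z

def joint(scheme):
    """P(z, ybreve, x) from Eq. (54) ('d') or Eq. (55) ('r', p(yb|x)=1/2)."""
    P = {}
    for x in (+1, -1):
        rx = Px[x] @ rho0 @ Px[x]                       # unnormalized state
        for yb in (+1, -1):
            for z in (+1, -1):
                if scheme == "d":
                    P[(z, yb, x)] = sum(
                        q[a] * np.trace(Px[z] @ prop(Pn[yb], tau, a)).real
                             * np.trace(Pn[yb] @ prop(rx, t, a)).real
                        for a in range(3))
                else:
                    P[(z, yb, x)] = sum(
                        q[a] * np.trace(Px[z] @ prop(Pn[yb], tau, a)).real
                        for a in range(3)) * 0.5 * np.trace(rx).real
    return P

def cpf(P, yb):
    pairs = [(z, x) for z in (+1, -1) for x in (+1, -1)]
    Py = sum(P[(z, yb, x)] for z, x in pairs)
    E = lambda f: sum(f(z, x) * P[(z, yb, x)] for z, x in pairs) / Py
    return E(lambda z, x: z * x) - E(lambda z, x: z) * E(lambda z, x: x)

c = lambda s: q[0] + (q[1] + q[2]) * np.exp(-2 * g * s)
C_det = cpf(joint("d"), +1)
C_theory = np.sin(theta) ** 2 * (c(t + tau) - c(t) * c(tau))
assert np.isclose(C_det, C_theory)                      # recovers Eq. (18)
assert np.isclose(cpf(joint("r"), +1), 0.0)             # recovers Eq. (61)
```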
## References
* (1) H. P. Breuer and F. Petruccione, The theory of open quantum systems, (Oxford University Press, 2002).
* (2) I. de Vega and D. Alonso, Dynamics of non-Markovian open quantum systems, Rev. Mod. Phys. 89, 015001 (2017).
* (3) L. Li, M. J. W. Hall, and H. M. Wiseman, Concepts of quantum non-Markovianity: A hierarchy, Phys. Rep. 759, 1 (2018).
* (4) H. P. Breuer, E. M. Laine, J. Piilo, and B. Vacchini, Colloquium: Non-Markovian dynamics in open quantum systems, Rev. Mod. Phys. 88, 021002 (2016).
* (5) A. Rivas, S. F. Huelga, and M. B. Plenio, Quantum non-Markovianity: characterization, quantification and detection, Rep. Prog. Phys. 77, 094001 (2014).
* (6) R. Schmidt, S. Maniscalco, and T. Ala-Nissila, Heat flux and information backflow in cold environments, Phys. Rev. A 94, 010101(R) (2016); S. H. Raja, M. Borrelli, R. Schmidt, J. P. Pekola, and S. Maniscalco, Thermodynamic fingerprints of non-Markovianity in a system of coupled superconducting qubits, Phys. Rev. A 97, 032133 (2018).
* (7) G. Guarnieri, C. Uchiyama, and B. Vacchini, Energy backflow and non-Markovian dynamics, Phys. Rev. A 93, 012118 (2016).
* (8) G. Guarnieri, J. Nokkala, R. Schmidt, S. Maniscalco, and B. Vacchini, Energy backflow in strongly coupled non-Markovian continuous-variable systems, Phys. Rev. A 94, 062101 (2016).
* (9) S. Cialdi, C. Benedetti , D. Tamascelli, S. Olivares, M. G. A. Paris, and B. Vacchini, Experimental investigation of the effect of classical noise on quantum non-Markovian dynamics, Phys. Rev. A 100, 052104 (2019).
* (10) J. I. Costa-Filho, R. B. B. Lima, R. R. Paiva, P. M. Soares, W. A. M. Morgado, R. Lo Franco, and D. O. Soares-Pinto, Enabling quantum non-Markovian dynamics by injection of classical colored noise, Phys. Rev. A 95, 052126 (2017).
* (11) J. Trapani and M. G. A. Paris, Nondivisibility versus backflow of information in understanding revivals of quantum correlations for continuous-variable systems interacting with fluctuating environments, Phys. Rev. A 93, 042119 (2016); C. Benedetti, F. Buscemi, P. Bordone, and M. G. A. Paris, Non-Markovian continuous-time quantum walks on lattices with dynamical noise, Phys. Rev. A 93, 042313 (2016).
* (12) J. Trapani, M. Bina, S. Maniscalco, and M. G. A. Paris, Collapse and revival of quantum coherence for a harmonic oscillator interacting with a classical fluctuating environment, Phys. Rev. A 91, 022113 (2015); T. Grotz, L. Heaney, and W. T. Strunz, Quantum dynamics in fluctuating traps: Master equation, decoherence, and heating, Phys. Rev. A 74, 022102 (2006); A. A. Budini, Quantum systems subject to the action of classical stochastic fields, Phys. Rev. A 64, 052110 (2001).
* (13) H. P. Breuer, Non-Markovian generalization of the Lindblad theory of open quantum systems, Phys. Rev. A 75, 022103 (2007); A. A. Budini, Lindblad rate equations, Phys. Rev. A 74, 053815 (2006).
* (14) B. Vacchini, Non-Markovian dynamics for bipartite systems, Phys. Rev. A 78, 022112 (2008).
* (15) A. A. Budini, Post-Markovian quantum master equations from classical environment fluctuations, Phys. Rev. E 89, 012147 (2014).
* (16) C. Sutherland, T. A. Brun, and D. A. Lidar, Non-Markovianity of the post-Markovian master equation, Phys. Rev. A 98, 042119 (2018); A. Shabani and D. A. Lidar, Completely positive post-Markovian master equation via a measurement approach, Phys. Rev. A 71, 020101(R) (2005).
* (17) B. Donvil, P. Muratore-Ginanneschi, and J. P. Pekola, Hybrid master equation for calorimetric measurements, Phys. Rev. A 99, 042127 (2019).
* (18) B. Vacchini, Non-Markovian master equations from piecewise dynamics, Phys. Rev. A 87, 030101(R) (2013).
* (19) A. A. Budini, Embedding non-Markovian quantum collisional models into bipartite Markovian dynamics, Phys. Rev. A 88, 032115 (2013); A. A. Budini and P. Grigolini, Non-Markovian nonstationary completely positive open-quantum-system dynamics, Phys. Rev. A 80, 022103 (2009); A. A. Budini, Stochastic representation of a class of non-Markovian completely positive evolutions, Phys. Rev. A 69, 042107 (2004).
* (20) D. Chruściński and F. A. Wudarski, Non-Markovian random unitary qubit dynamics, Phys. Lett. A 377, 1425 (2013); D. Chruściński and F. A. Wudarski, Non-Markovianity degree for random unitary evolution, Phys. Rev. A 91, 012104 (2015); F. A. Wudarski, P. Nalezyty, G. Sarbicki, and D. Chruściński, Admissible memory kernels for random unitary qubit evolution, Phys. Rev. A 91, 042105 (2015); F. A. Wudarski and D. Chruściński, Markovian semigroup from non-Markovian evolutions, Phys. Rev. A 93, 042120 (2016); K. Siudzińska and D. Chruściński, Memory kernel approach to generalized Pauli channels: Markovian, semi-Markov, and beyond, Phys. Rev. A 96, 022129 (2017).
* (21) C. M. Kropf, C. Gneiting, and A. Buchleitner, Effective Dynamics of Disordered Quantum Systems, Phys. Rev. X 6, 031023 (2016); C. Gneiting, F. R. Anger, and A. Buchleitner, Incoherent ensemble dynamics in disordered systems, Phys. Rev. A 93, 032139 (2016).
* (22) H.-B. Chen, C. Gneiting, P.-Y. Lo, Y.-N. Chen, and F. Nori, Simulating Open Quantum Systems with Hamiltonian Ensembles and the Nonclassicality of the Dynamics, Phys. Rev. Lett. 120, 030403 (2018).
* (23) B. Paredes, F. Verstraete, and J. I. Cirac, Exploiting Quantum Parallelism to Simulate Quantum Random Many-Body Systems, Phys. Rev. Lett. 95, 140501 (2005).
* (24) H. P. Breuer, E. M. Laine, and J. Piilo, Measure for the Degree of Non-Markovian Behavior of Quantum Processes in Open Systems, Phys. Rev. Lett. 103, 210401 (2009).
* (25) M. M. Wolf, J. Eisert, T. S. Cubitt, and J. I. Cirac, Assessing Non-Markovian Quantum Dynamics, Phys. Rev. Lett. 101, 150402 (2008).
* (26) A. Rivas, S. F. Huelga, and M. B. Plenio, Entanglement and Non-Markovianity of Quantum Evolutions, Phys. Rev. Lett. 105, 050403 (2010).
* (27) E. -M. Laine, J. Piilo, and H. -P. Breuer, Measure for the non-Markovianity of quantum processes, Phys. Rev. A 81, 062115 (2010).
* (28) X.-M. Lu, X. Wang, and C. P. Sun, Quantum Fisher information flow and non-Markovian processes of open systems, Phys. Rev. A 82, 042103 (2010).
* (29) A. K. Rajagopal, A. R. Usha Devi, and R. W. Rendell, Kraus representation of quantum evolution and fidelity as manifestations of Markovian and non-Markovian forms, Phys. Rev. A 82, 042107 (2010).
* (30) D. Chruściński, A. Kossakowski, and A. Rivas, Measures of non-Markovianity: Divisibility versus backflow of information, Phys. Rev. A 83, 052128 (2011).
* (31) S. Luo, S. Fu, and H. Song, Quantifying non-Markovianity via correlations, Phys. Rev. A 86, 044101 (2012).
* (32) S. Lorenzo, F. Plastina, and M. Paternostro, Geometrical characterization of non-Markovianity, Phys. Rev. A 88, 020102(R) (2013).
* (33) D. Chruściński and S. Maniscalco, Degree of Non-Markovianity of Quantum Evolution, Phys. Rev. Lett. 112, 120404 (2014).
* (34) F. F. Fanchini, G. Karpat, B. Çakmak, L. K. Castelano, G. H. Aguilar, O. Jiménez Farías, S. P. Walborn, P. H. Souto Ribeiro, and M. C. de Oliveira, Non-Markovianity through Accessible Information, Phys. Rev. Lett. 112, 210402 (2014).
* (35) C. Addis, G. Brebner, P. Haikka, and S. Maniscalco, Coherence trapping and information backflow in dephasing qubits, Phys. Rev. A 89, 024101 (2014).
* (36) M. J. W. Hall, J. D. Cresser, L. Li, and E. Andersson, Canonical form of master equations and characterization of non-Markovianity, Phys. Rev. A 89, 042120 (2014).
* (37) P. Haikka, J. D. Cresser, and S. Maniscalco, Comparing different non-Markovianity measures in a driven qubit system, Phys. Rev. A 83, 012112 (2011). C. Addis, B. Bylicka, D. Chruściński, and S. Maniscalco, Comparative study of non-Markovianity measures in exactly solvable one- and two-qubit models, Phys. Rev. A 90, 052103 (2014).
* (38) B. Bylicka, M. Johansson, and A. Acín, Constructive Method for Detecting the Information Backflow of Non-Markovian Dynamics, Phys. Rev. Lett. 118, 120501 (2017).
* (39) S. Chakraborty, Generalized formalism for information backflow in assessing Markovianity and its equivalence to divisibility, Phys. Rev. A 97, 032130 (2018); S. Chakraborty and D. Chruscinski, Information flow versus divisibility for qubit evolution, Phys. Rev. A 99, 042105 (2019).
* (40) J. Kołodynski, S. Rana, and A. Streltsov, Entanglement negativity as a universal non-Markovianity witness, Phys. Rev. A 101, 020303(R) (2020).
* (41) A. Norambuena, J. R. Maze, P. Rabl, and R. Coto, Quantifying phonon-induced non-Markovianity in color centers in diamond, Phys. Rev. A 101, 022110 (2020).
* (42) B.-H. Liu, L. Li, Y.-F. Huang, C. F. Li, G. C. Guo, E.-M. Laine, H.-P. Breuer, and J. Piilo, Experimental control of the transition from Markovian to non-Markovian dynamics of open quantum systems, Nat. Phys. 7, 931 (2011).
* (43) M. Wittemer, G. Clos, H. P. Breuer, U. Warring, and T. Schaetz, Measurement of quantum memory effects and its fundamental limitations, Phys. Rev. A 97, 020102(R) (2018).
* (44) D. F. Urrego, J. Flórez, J. Svozilík, M. Nuñez, and A. Valencia, Controlling non-Markovian dynamics using a light-based structured environment, Phys. Rev. A 98, 053862 (2018).
* (45) D. Khurana, B. K. Agarwalla, and T. S. Mahesh, Experimental emulation of quantum non-Markovian dynamics and coherence protection in the presence of information backflow, Phys. Rev. A 99, 022107 (2019).
* (46) S.-J. Xiong, Q. Hu, Z. Sun, L. Yu, Q. Su, J.-M. Liu, and C.-P. Yang, Non-Markovianity in experimentally simulated quantum channels: Role of counterrotating-wave terms, Phys. Rev. A 100, 032101 (2019).
* (47) A. Cuevas, A. Geraldi, C. Liorni, L. D. Bonavena, A. De Pasquale, F. Sciarrino, V. Giovannetti, and P. Mataloni, All-optical implementation of collision-based evolutions of open quantum systems, Sci. Rep. 9, 3205 (2019).
* (48) Y.-N. Lu ,Y.-R. Zhang, G.-Q. Liu , F. Nori, H. Fan, and X.-Y. Pan, Observing Information Backflow from Controllable Non-Markovian Multichannels in Diamond, Phys. Rev. Lett. 124, 210502 (2020).
* (49) N. Megier, D. Chruściński, J. Piilo, and W. T. Strunz, Eternal non-Markovianity: from random unitary to Markov chain realisations, Sci. Rep. 7, 6379 (2017).
* (50) A. A. Budini, Maximally non-Markovian quantum dynamics without environment-to-system backflow of information, Phys. Rev. A 97, 052133 (2018).
* (51) A. Smirne, M. Caiaffa, and J. Piilo, Rate Operator Unraveling for Open Quantum System Dynamics, Phys. Rev. Lett. 124, 190402 (2020).
* (52) F. A. Pollock, C. Rodríguez-Rosario, T. Frauenheim, M. Paternostro, and K. Modi, Operational Markov Condition for Quantum Processes, Phys. Rev. Lett. 120, 040405 (2018); F. A. Pollock, C. Rodríguez-Rosario, T. Frauenheim, M. Paternostro, and K. Modi, Non-Markovian quantum processes: Complete framework and efficient characterization, Phys. Rev. A 97, 012127 (2018).
* (53) P. Taranto, F. A. Pollock, S. Milz, M. Tomamichel, and K. Modi, Quantum Markov Order, Phys. Rev. Lett. 122, 140401 (2019); P. Taranto, S. Milz, F. A. Pollock, and K. Modi, Structure of quantum stochastic processes with finite Markov order, Phys. Rev. A 99, 042108 (2019).
* (54) M. R. Jørgensen and F. A. Pollock, Exploiting the Causal Tensor Network Structure of Quantum Processes to Efficiently Simulate Non-Markovian Path Integrals, Phys. Rev. Lett. 123, 240602 (2019).
* (55) Y. -Y. Hsieh, Z. -Y. Su, and H. -S. Goan, Non-Markovianity, information backflow, and system-environment correlation for open-quantum-system processes, Phys. Rev. A 100, 012120 (2019).
* (56) A. A. Budini, Quantum Non-Markovian Processes Break Conditional Past-Future Independence, Phys. Rev. Lett. 121, 240401 (2018); A. A. Budini, Conditional past-future correlation induced by non-Markovian dephasing reservoirs, Phys. Rev. A 99, 052125 (2019).
* (57) S. Yu, A. A. Budini, Y. -T. Wang, Z. -J. Ke, Y. Meng, W. Liu, Z. -P. Li, Q. Li, Z. -H. Liu, J. -S. Xu, J. -S. Tang, C. -F. Li , and G. -C. Guo, Experimental observation of conditional past-future correlations, Phys. Rev. A 100, 050301(R) (2019).
* (58) T. de Lima Silva, S. P. Walborn, M. F. Santos, G. H. Aguilar, and A. A. Budini, Detection of quantum non-Markovianity close to the Born-Markov approximation, Phys. Rev. A 101, 042120 (2020).
* (59) M. Bonifacio and A. A. Budini, Perturbation theory for operational quantum non-Markovianity, Phys. Rev. A 102, 022216 (2020).
* (60) L. Han, J. Zou, H. Li, and B. Shao, Non-Markovianity of A Central Spin Interacting with a Lipkin–Meshkov–Glick Bath via a Conditional Past–Future Correlation, Entropy 22, 895 (2020).
* (61) In fact, from Bayes rule $P(z,x|\breve{y})=P(z,\breve{y},x)/P(\breve{y})$ where $P(\breve{y})=\sum_{z,x}P(z,\breve{y},x),$ jointly with $P(z|\breve{y})=\sum_{x}P(z,x|\breve{y})$ and $P(x|\breve{y})=\sum_{z}P(z,x|\breve{y}).$
* (62) Formally, the update $\rho_{y}\rightarrow\rho_{\breve{y}}$ is equivalent to discarding the system state and feeding forward an independent system state. Named “repreparation,” this operation was introduced in Ref. pollock for studying “quantum Markov order.” The formulation with unitary transformations $\\{U(\breve{y}|y)\\}$ defines a feasible experimental implementation.
* (63) Under the condition $[H_{e},H_{I}]=0,$ the invariance property (14) is fulfilled only when the initial bath state satisfies $[\sigma_{0},H_{e}]=0.$
* (64) A. A. Budini, unpublished.
* (65) C. Giarmatzi and F. Costa, Witnessing quantum memory in non-Markovian processes, arXiv:1811.03722v3.
* (66) F. Costa and S. Shrapnel, Quantum causal modelling, New J. Phys. 18, 063032 (2016); A. Feix and C. Brukner, Quantum superpositions of ‘common-cause’ and ‘direct-cause’ causal structures, New J. Phys. 19, 123028 (2017).
# Heterotic solitons on four-manifolds
Andrei Moroianu, Ángel Murcia and C. S. Shahbazi
Université Paris-Saclay, CNRS, Laboratoire de mathématiques d’Orsay, 91405, Orsay, France <EMAIL_ADDRESS>
Instituto de Física Teórica UAM/CSIC, España <EMAIL_ADDRESS>
Fachbereich Mathematik, Universität Hamburg, Deutschland <EMAIL_ADDRESS>
###### Abstract.
We investigate four-dimensional Heterotic solitons, defined as a particular
class of solutions of the equations of motion of Heterotic supergravity on a
four-manifold $M$. Heterotic solitons depend on a parameter $\kappa$ and
consist of a Riemannian metric $g$, a metric connection with skew torsion $H$
on $TM$ and a closed 1-form $\varphi$ on $M$ satisfying a differential system.
In the limit $\kappa\to 0$, Heterotic solitons reduce to a class of
generalized Ricci solitons and can be considered as a higher-order curvature
modification of the latter. If the torsion $H$ is equal to the Hodge dual of
$\varphi$, Heterotic solitons consist of either flat tori or closed Einstein-
Weyl structures on manifolds of type $S^{1}\times S^{3}$ as introduced by P.
Gauduchon. We prove that the moduli space of such closed Einstein-Weyl
structures is isomorphic to the product of $\mathbb{R}$ with a certain finite
quotient of the Cartan torus of the isometry group of the typical fiber of a
natural fibration $M\to S^{1}$. We also consider the associated space of
essential infinitesimal deformations, which we prove to be obstructed. More
generally, we characterize several families of Heterotic solitons as
suspensions of certain three-manifolds with prescribed constant principal
Ricci curvatures, amongst which we find hyperbolic manifolds, manifolds
covered by $\widetilde{\mathrm{Sl}}(2,\mathbb{R})$ and E$(1,1)$ or certain
Sasakian three-manifolds. These solutions exhibit a topological dependence on
the string slope parameter $\kappa$ and yield, to the best of our knowledge,
the first examples of Heterotic compactification backgrounds not locally
isomorphic to supersymmetric compactification backgrounds.
C.S.S. would like to thank J. Streets and Y. Ustinovskiy for their useful
comments on the notion of generalized Ricci soliton. Part of this work was
undertaken during a visit of C.S.S. to the University Paris-Saclay under the
Deutsch-Französische Procope Mobilität program. C.S.S. would like to thank A.
Moroianu and this very welcoming institution for providing a nice and
stimulating working environment. The work of Á.M. was funded by the Spanish
FPU Grant No. FPU17/04964, with additional support from the MCIU/AEI/FEDER UE
grant PGC2018-095205-B-I00 and the Centro de Excelencia Severo Ochoa Program
grant SEV-2016-0597. The work of C.S.S. was supported by the Germany
Excellence Strategy _Quantum Universe_ – 390833306.
## 1. Introduction
The goal of this article is to investigate a system of partial differential
equations, which we call the _Heterotic system_ , that occurs naturally as the
equations of motion of the bosonic sector of Heterotic supergravity in four
dimensions. The Heterotic system is defined on a principal bundle $P$ over a
four-manifold $M$ and involves a Riemannian metric $g$ on $M$, a pair of
1-forms $\varphi$ and $\alpha$ on $M$ and a connection $A$ on $P$ coupled
through a system of highly non-linear partial differential equations
completely determined by _supersymmetry_. The Heterotic system generalizes the
Einstein-Yang-Mills system and contains, through its Killing spinor equations,
the celebrated Hull-Strominger system [33, 55]. Despite the fact that (four-
dimensional) supersymmetric solutions of the Heterotic system have been fully
classified in [23, 55], the classification of all possibly non-supersymmetric
solutions of the Heterotic system on a compact four-manifold seems to be
currently out of reach; in fact, to the best of our knowledge, no non-locally supersymmetric compactification background of the Heterotic system was
known prior to this work. On the other hand, in Euclidean dimension higher
than four, the existence, uniqueness and moduli problems of Heterotic
supersymmetric solutions remain wide open and have attracted extensive
attention in the physics as well as in the mathematics literature, see for
instance the reviews [13, 21, 29, 44, 56] and their references and citations
for more details. In this regard, Yau’s conjecture on the existence of
solutions to the Hull-Strominger system on certain polystable holomorphic vector
bundles over compact balanced complex manifolds stands as an outstanding open
problem in the field [18, 19, 57].
Given the complexity of the four-dimensional full-fledged Heterotic system, in
this work we propose an educated truncation which is obtained by taking the
structure group of the _gauge bundle_ $P$ to be trivial, that is, $P=M$. With
this assumption, the Heterotic system reduces to a system of partial
differential equations for a Riemannian metric and a pair of 1-forms $\varphi$
and $\alpha$ on a four-manifold $M$. Solutions of this system are by
definition _Heterotic solitons_ on $M$, see Definition 3.1. Heterotic solitons depend on a non-negative constant $\kappa$, which corresponds
physically to the _slope parameter_ of the Heterotic string to which the
theory corresponds. In the limit $\kappa\to 0$ Heterotic solitons reduce to a
particular class of generalized Ricci solitons as introduced in [26]. The
latter can be understood as stationary points of generalized Ricci flow [41,
50, 51], which originates as the renormalization group flow of the NS-NS
string at one loop [47, 48]. In the same vein, Heterotic solitons correspond
to stationary points of the generalized Ricci flow corrected by higher loops
in $\kappa$, which turn out to introduce higher curvature terms in the system
of equations. Therefore, Heterotic solitons can be understood as a natural
extension of the notion of generalized Ricci solitons in the context of
Heterotic string theory. The investigation of flow equations inspired by
supergravity and superstring theories is an increasingly active topic of
research in the mathematics literature, see [14, 15, 16, 45, 46, 52] and
references therein.
Having introduced the notion of Heterotic soliton, which seems to be new in
the literature, our first goal is to construct non-trivial examples and study
the associated moduli space of solutions in simple cases. Heterotic solitons
$(g,\varphi,\alpha)$ with $\varphi=\alpha$ can be easily proven to be
manifolds of type $S^{1}\times S^{3}$ as introduced by P. Gauduchon in [27],
which in turn leads us to revisit Reference [43] and reconsider the study of
the moduli space of such manifolds. Our first result in this direction is the
following.
###### Theorem 1.1.
Let $\Sigma$ be a spherical three-manifold. The moduli space of manifolds of
type $S^{1}\times S^{3}$ and class $\Sigma$ is in bijection with the direct
product of $\mathbb{R}$ with a finite quotient of a maximal torus $T$ in the
isometry group of $\Sigma$. In particular, the moduli space of manifolds of
type $S^{1}\times S^{3}$ has dimension $1+\mathrm{rk}(\mathrm{Iso}(\Sigma))$,
where $\mathrm{rk}(\mathrm{Iso}(\Sigma))$ denotes the rank of
$\mathrm{Iso}(\Sigma)$, that is, the dimension of any of its maximal torus
subgroups.
The reader is referred to Theorem 3.10 for more details. The previous theorem
characterizes the moduli space of manifolds of type $S^{1}\times S^{3}$
_globally_. Since this type of characterization is relatively rare in
differential-geometric moduli problems, we perform in addition a local study
of the moduli, characterizing its virtual tangent space
$T_{[g,\varphi]}\mathfrak{M}^{0}_{\omega}(M)$ of infinitesimal deformations
that preserve the norm of $\varphi$, chosen to be 1, and the Riemannian volume form $\omega$ of $g$. This eliminates trivial deformations such as constant
rescalings of $\varphi$ and $g$, and is also called the vector space of
_essential_ deformations, according to the terminology introduced by N. Koiso
[35, 36].
###### Theorem 1.2.
There exists a canonical bijection:
$T_{[g,\varphi]}\mathfrak{M}^{0}_{\omega}(M)\to\mathcal{K}(\Sigma)\,,$
where the Riemannian three-manifold $\Sigma$ is the typical fiber of the
natural fibration structure determined by $(g,\varphi)$ on $M$ and
$\mathcal{K}(\Sigma)$ denotes the vector space of Killing vector fields of
$\Sigma$.
In particular, the previous result implies that the infinitesimal deformations
of manifolds of type $S^{1}\times S^{3}$ are in general obstructed. The reader
is referred to Theorem 3.19 for more details. The Heterotic solitons obtained
by imposing $\varphi=\alpha$ are all locally isomorphic to a supersymmetric
solution, as a direct inspection of the classification presented in [23]
shows. In order to obtain Heterotic solitons not locally isomorphic to a
supersymmetric solution we consider instead Heterotic solitons such that
$\varphi=0$ (that is, the dilaton vanishes) and $\alpha\neq 0$. We obtain a
classification result, which we summarize as follows.
###### Theorem 1.3.
Let $M$ be a compact and oriented four-manifold admitting a non-flat Heterotic
soliton $(g,\alpha)$ with vanishing dilaton and parallel torsion for some $\kappa>0$.
Then, the kernel of $\alpha$ defines an integrable distribution whose leaves,
equipped with the metric induced by $g$, are all isometric to an oriented
Riemannian three-manifold $(\Sigma,h)$ satisfying one of the following
possibilities:
1. (1)
There exists a double cover of $(\Sigma,h)$ that admits a Sasakian structure
determined by $h$ as prescribed in Theorem 4.9.
2. (2)
$(\Sigma,h)$ is isometric to a discrete quotient of either
$\widetilde{\mathrm{Sl}}(2,\mathbb{R})$ or $\mathrm{E}(1,1)$ (the universal
cover of the Poincaré group of two-dimensional Minkowski space) equipped with
a left-invariant metric with constant principal Ricci curvatures given by
$(0,0,-\frac{1}{2\kappa})$.
3. (3)
$(\Sigma,h)$ is a hyperbolic three-manifold.
The reader is referred to Theorem 4.9 for more details and a precise statement
of the result. The previous theorem can be used to obtain large families of
Heterotic solitons with vanishing dilaton and parallel torsion, as summarized
for instance in corollaries 4.12 and 4.13.
Since Heterotic solitons form a particular class of Heterotic supergravity solutions, they are expected to inherit a _generalized
geometric_ interpretation on a transitive Courant algebroid, as described in
[1, 10, 20, 25] for the general bosonic sector of Heterotic supergravity.
Adapting the framework developed in Op. Cit. to Heterotic solitons would yield
a natural geometric framework, adapted to the symmetries of the system, to
further investigate Heterotic solitons and their moduli. The power of this
formalism is illustrated in [53, 54], where generalized Ricci solitons were
thoroughly studied in the framework of generalized complex geometry. The
generalized geometry underlying Heterotic supergravity is also positioned to
play a prominent role in the study of the T-duality [5, 2, 22] of Heterotic
solitons, which is a fundamental tool to classify the latter and to generate
new Heterotic solitons of novel topologies. In this context, an especially
attractive case corresponds to considering left-invariant Heterotic solitons
on four-dimensional Lie groups, where T-duality can be algebraically described
[11]. We plan to develop these ideas in future publications.
## 2. Four-dimensional Heterotic supergravity
Let $M$ be an oriented four-dimensional manifold and let $P$ be a principal
bundle over $M$ with semi-simple and compact structure group $\mathrm{G}$.
Denote by $\mathfrak{g}$ the Lie algebra of $\mathrm{G}$. We fix an invariant
and positive-definite symmetric bilinear form
$c\colon\mathfrak{g}\times\mathfrak{g}\to\mathbb{R}$ on $\mathfrak{g}$, and we
denote by $\mathfrak{c}$ the inner product induced by $c$ on the adjoint
bundle $\mathfrak{g}_{P}:=P\times_{\operatorname{Ad}}\mathfrak{g}$ of $P$. We
denote by $\mathcal{A}_{P}$ the affine space of connections on $P$ and for
every connection $A\in\mathcal{A}_{P}$ we denote by
$\mathcal{F}_{A}\in\Omega^{2}(\mathfrak{g}_{P})$ its curvature. For every
Riemannian metric $g$ on $M$, we denote by $\mathrm{F}_{g}(M)$ the bundle of
oriented orthonormal frames defined by $g$ and the given orientation of $M$,
and we denote by
$\mathfrak{so}_{g}(M):=\mathrm{F}_{g}(M)\times_{\operatorname{Ad}}\mathfrak{so}(4)$
its associated adjoint bundle of $\mathfrak{so}(4)$ algebras, which we will
consider equipped with the positive-definite inner product $\mathfrak{v}$
yielded by the trace in $\mathfrak{so}(4)$. The curvature of a connection
$\nabla$ on $\mathrm{F}_{g}(M)$ will be denoted by
$\mathcal{R}_{\nabla}\in\Omega^{2}(\mathfrak{so}_{g}(M))$. Given
$(M,P,\mathfrak{c})$ and a Riemannian metric $g$ on $M$, we define the
following bilinear map:
$\mathfrak{c}(-\circ-)\colon\Omega^{k}(\mathfrak{g}_{P})\times\Omega^{k}(\mathfrak{g}_{P})\to\Gamma(T^{\ast}M\odot
T^{\ast}M)\,,$
as follows:
$\mathfrak{c}(\alpha\circ\beta)(v_{1},v_{2})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\frac{1}{2}((g\otimes\mathfrak{c})(v_{1}\lrcorner\alpha,v_{2}\lrcorner\beta)+(g\otimes\mathfrak{c})(v_{2}\lrcorner\alpha,v_{1}\lrcorner\beta))\,,$
for every pair of vector fields $v_{1},v_{2}\in\mathfrak{X}(M)$ and any pair
of $k$-forms $\alpha,\beta\in\Omega^{k}(\mathfrak{g}_{P})$ taking values in
$\mathfrak{g}_{P}$. Here $g\otimes\mathfrak{c}(-,-)$ denotes the non-
degenerate metric induced by $g$ and $\mathfrak{c}$ on the differentiable
forms valued in $\mathfrak{g}_{P}$. In particular, for the curvature
$\mathcal{F}_{A}\in\Omega^{2}(\mathfrak{g}_{P})$ of a connection
$A\in\mathcal{A}_{P}$ we have:
$\mathfrak{c}(\mathcal{F}_{A}\circ\mathcal{F}_{A})(v_{1},v_{2})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(g\otimes\mathfrak{c})(v_{1}\lrcorner\mathcal{F}_{A},v_{2}\lrcorner\mathcal{F}_{A})\,,\qquad
v_{1},v_{2}\in\mathfrak{X}(M)\,,$
where $v_{1}\lrcorner\mathcal{F}_{A}$ denotes the 1-form with values in
$\mathfrak{g}_{P}$ obtained by evaluation of $v_{1}$ in $\mathcal{F}_{A}$, and
similarly for $v_{2}\lrcorner\mathcal{F}_{A}$. If $\left\\{T_{a}\right\\}$
denotes a local orthonormal frame on $\mathfrak{g}_{P}$ satisfying
$\mathfrak{c}(T_{a},T_{b})=\delta_{ab}$ and $e_{i}$ denotes a local
orthonormal frame of $(TM,g)$, then the expression above reads:
$\mathfrak{c}(\mathcal{F}_{A}\circ\mathcal{F}_{A})(v_{1},v_{2})=\sum_{a,i}\mathcal{F}_{A}^{a}(v_{1},e_{i})\,\mathcal{F}_{A}^{a}(v_{2},e_{i})\,.$
Therefore, in local coordinates $\left\\{x^{i}\right\\}$,
$i,j,k,m=1,\ldots,4$, the previous equation corresponds to:
$\mathfrak{c}(\mathcal{F}_{A}\circ\mathcal{F}_{A})_{ij}=\sum_{a}(\mathcal{F}_{A}^{a})_{im}\,(\mathcal{F}_{A}^{a})_{jk}\,g^{mk}\,.$
Similarly, for a 3-form $H\in\Omega^{3}(M)$ we define:
$(H\circ H)(v_{1},v_{2})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}g(v_{1}\lrcorner H,v_{2}\lrcorner H)\,,\qquad
v_{1},v_{2}\in\mathfrak{X}(M)\,,$
which in local coordinates reads:
$(H\circ H)_{ij}=H_{ilm}H_{j}^{\,\,\,lm}\,.$
Note that the inner product induced by $g$ is to be understood in the sense of
tensors (rather than forms). The analogous bilinear map:
$\mathfrak{v}(-\circ-)\colon\Omega^{k}(\mathfrak{so}_{g}(M))\times\Omega^{k}(\mathfrak{so}_{g}(M))\to\Gamma(T^{\ast}M\odot
T^{\ast}M)\,,$
is defined identically to $\mathfrak{c}(-\circ-)$. In particular, in local
coordinates we have:
$\mathfrak{v}(\mathcal{R}_{\nabla}\circ\mathcal{R}_{\nabla})_{ij}=(\mathcal{R}_{\nabla})_{iklm}(\mathcal{R}_{\nabla})^{\,\,klm}_{j}\,,$
where $(\mathcal{R}_{\nabla})_{iklm}$ is the local coordinate expression of
the curvature tensor of the connection $\nabla$ on $\mathrm{F}_{g}(M)$.
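As a concrete illustration of these contraction conventions (an illustrative sketch, not part of the original text), one can evaluate $(H\circ H)_{ij}=H_{ilm}H_{j}^{\,\,\,lm}$ and the form norm $|H|^{2}_{g}=\frac{1}{3!}H_{ijk}H^{ijk}$ for the unit 3-form $H=\mathrm{d}x^{1}\wedge\mathrm{d}x^{2}\wedge\mathrm{d}x^{3}$ on Euclidean $\mathbb{R}^{4}$, where the metric is the identity and index raising is trivial:

```python
import numpy as np
from itertools import permutations

# Components of H = dx^1 ∧ dx^2 ∧ dx^3 on Euclidean R^4: H_{ijk} = ±1 on
# permutations of (0, 1, 2) according to parity, and zero otherwise.
H = np.zeros((4, 4, 4))
for perm in permutations(range(3)):
    sign = 1.0 if perm in [(0, 1, 2), (1, 2, 0), (2, 0, 1)] else -1.0
    H[perm] = sign

# (H ∘ H)_{ij} = H_{ilm} H_j^{lm}: tensor contraction over the last two slots.
HoH = np.einsum('ilm,jlm->ij', H, H)

# Form norm |H|_g^2 = (1/3!) H_{ijk} H^{ijk}.
norm2 = np.einsum('ijk,ijk->', H, H) / 6.0

print(np.diag(HoH))  # [2. 2. 2. 0.]
print(norm2)         # 1.0
```

Note that $\mathrm{Tr}_{g}(H\circ H)=6=3!\,|H|^{2}_{g}$ here, consistent with the tensor (rather than form) convention for the contraction.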
###### Remark 2.1.
For any Riemannian metric $g$ and 3-form $H$ on $M$ we define the connection
$\nabla^{H}$ on the tangent bundle $TM$ as the unique $g$-compatible
connection on $M$ with totally antisymmetric torsion given by $-H$. The metric
connection $\nabla^{H}$ is explicitly given in terms of the Levi-Civita
connection $\nabla^{g}$ associated to $g$ as follows:
$\nabla^{H}=\nabla^{g}-\frac{1}{2}H^{\sharp}\,,$
where:
$H^{\sharp}(v_{1},v_{2})=H(v_{1},v_{2})^{\sharp}=(v_{2}\lrcorner
v_{1}\lrcorner H)^{\sharp}\in TM\,,\qquad\forall\,\,v_{1},v_{2}\in TM\,,$
and $\sharp\colon T^{\ast}M\to TM$ is the musical isomorphism induced by $g$.
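As a consistency check (standard, and included here only for completeness), note that for all $v_{1},v_{2},v_{3}\in\mathfrak{X}(M)$:
$g(\nabla^{H}_{v_{1}}v_{2},v_{3})=g(\nabla^{g}_{v_{1}}v_{2},v_{3})-\frac{1}{2}H(v_{1},v_{2},v_{3})\,,$
whence $(\nabla^{H}_{v_{1}}g)(v_{2},v_{3})=\frac{1}{2}(H(v_{1},v_{2},v_{3})+H(v_{1},v_{3},v_{2}))=0$ by antisymmetry of $H$, while the torsion satisfies:
$T^{\nabla^{H}}(v_{1},v_{2})=\nabla^{H}_{v_{1}}v_{2}-\nabla^{H}_{v_{2}}v_{1}-[v_{1},v_{2}]=-\frac{1}{2}(H(v_{1},v_{2})^{\sharp}-H(v_{2},v_{1})^{\sharp})=-H(v_{1},v_{2})^{\sharp}\,,$
that is, $g(T^{\nabla^{H}}(v_{1},v_{2}),v_{3})=-H(v_{1},v_{2},v_{3})$, confirming that the totally antisymmetric torsion of $\nabla^{H}$ is indeed $-H$.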
###### Definition 2.2.
Let $\kappa>0$ be a positive real constant. The bosonic sector of Heterotic
supergravity on $(M,P,\mathfrak{c})$ is defined through the following system
of partial differential equations [3, 4, 24]:
$\displaystyle\operatorname{Ric}^{g}+\nabla^{g}\varphi-\frac{1}{4}H\circ
H-\kappa\,\mathfrak{c}(\mathcal{F}_{A}\circ\mathcal{F}_{A})+\kappa\,\mathfrak{v}(\mathcal{R}_{\nabla^{H}}\circ\mathcal{R}_{\nabla^{H}})=0\,,$
$\displaystyle\delta^{g}H+\iota_{\varphi}H=0\,,\quad\mathrm{d}_{A}\ast\mathcal{F}_{A}-\varphi\wedge\ast\mathcal{F}_{A}-\mathcal{F}_{A}\wedge\ast
H=0\,,$ (2.1)
$\displaystyle\delta^{g}\varphi+|\varphi|^{2}_{g}-|H|^{2}_{g}-\kappa\,|\mathcal{F}_{A}|^{2}_{g,\mathfrak{c}}+\kappa\,|\mathcal{R}_{\nabla^{H}}|^{2}_{g,\mathfrak{v}}=0\,,$
together with the _Bianchi identity_ :
$\mathrm{d}H=\kappa(\mathfrak{c}\left(\mathcal{F}_{A}\wedge\mathcal{F}_{A}\right)-\mathfrak{v}(\mathcal{R}_{\nabla^{H}}\wedge\mathcal{R}_{\nabla^{H}}))\,,$
(2.2)
for tuples $(g,H,\varphi,A)$, where $g$ is a Riemannian metric on $M$,
$\varphi\in\Omega^{1}_{cl}(M)$ is a closed one form, $H\in\Omega^{3}(M)$ is a
3-form and $A\in\mathcal{A}_{P}$ is a connection on $P$. Here the Hodge dual
$\ast$ is defined with respect to $g$ and the induced Riemannian volume form.
The norms $|-|_{g}$, $|-|_{g,\mathfrak{c}}$ and $|-|_{g,\mathfrak{v}}$ are all
taken as norms on forms by interpreting the curvatures $\mathcal{F}_{A}$ and
$\mathcal{R}_{\nabla^{H}}$ as 2-forms taking values on the adjoint bundle of
$P$ and $\mathrm{F}_{g}(M)$, respectively. This convention is delicate for
$\mathcal{R}_{\nabla^{H}}\in\Omega^{2}(\mathfrak{so}_{g}(M))$. In this case,
$\mathfrak{so}_{g}(M)\subset\operatorname{End}(TM)$ is naturally isomorphic to
$\Lambda^{2}T^{\ast}M$ and $\mathcal{R}_{\nabla^{H}}$ can be interpreted as a
section of $\Lambda^{2}T^{\ast}M\otimes\Lambda^{2}T^{\ast}M$. Within this
interpretation, the norm induced by $\mathfrak{v}$ is by definition the form
norm in the first factor $\Lambda^{2}T^{\ast}M$ and the tensor norm in the
second factor $\Lambda^{2}T^{\ast}M=\mathfrak{so}_{g}(M)$. Hence:
$|\mathcal{R}_{\nabla^{H}}|^{2}_{g,\mathfrak{v}}=\frac{1}{2}\mathrm{Tr}_{g}(\mathfrak{v}(\mathcal{R}_{\nabla^{H}}\circ\mathcal{R}_{\nabla^{H}}))\,,$
and, in local coordinates:
$|\mathcal{R}_{\nabla^{H}}|^{2}_{g,\mathfrak{v}}=\frac{1}{2}(\mathcal{R}_{\nabla^{H}})_{ijkl}(\mathcal{R}_{\nabla^{H}})^{ijkl}\,.$
Alternatively, and as mentioned earlier, $\mathfrak{v}$ can be defined as the
norm induced by the form norm on 2-forms and the trace norm for elements in
$\mathfrak{so}_{g}(M)\subset\operatorname{End}(TM)$.
###### Remark 2.3.
Equations (2.1) and (2.2) are completely and unambiguously determined by
supersymmetry, see for instance [42] and references therein for more details.
In particular, these equations describe the low-energy dynamics of the
massless bosonic sector of Heterotic string theory. The first equation in
(2.1) is usually called the _Einstein equation_ , the second the _Maxwell
equation_ and the third the _Yang-Mills equation_ , whereas the last equation
in (2.1) is usually called the _dilaton equation_. The constant $\kappa$ is the _string
slope_ parameter and has a specific physical interpretation which is not
relevant for our purposes.
Suppose that $M$ admits spin structures. Given a tuple $(g,\varphi,H,A)$ as
introduced above and a choice of $\mathrm{Spin}(4)$ structure $Q_{g}$, we
denote by $\mathrm{S}_{g}$ the bundle of irreducible complex spinors
canonically associated to $Q_{g}$. This is a rank-four complex vector bundle
$\mathrm{S}_{g}$ which admits a direct sum decomposition:
$\mathrm{S}_{g}=\mathrm{S}^{+}_{g}\oplus\mathrm{S}^{-}_{g}\,,\qquad\mathrm{S}^{\pm}_{g}:=\frac{1}{2}(\mathrm{Id}\mp\nu_{g})\mathrm{S}_{g}\,,$
in terms of the rank-two chiral bundles $\mathrm{S}^{+}_{g}$ and
$\mathrm{S}^{-}_{g}$. The symbol $\nu_{g}$ denotes the Riemannian volume form
on $(M,g)$ acting by Clifford multiplication on $\mathrm{S}_{g}$.
###### Definition 2.4.
We say that a tuple $(g,\varphi,H,A)$ solving equations (2.1) and (2.2) is a
_supersymmetric solution_ of Heterotic supergravity if there exists a bundle
of irreducible complex spinors
$\mathrm{S}_{g}=\mathrm{S}^{+}_{g}\oplus\mathrm{S}^{-}_{g}$ on $(M,g)$ and a
spinor $\epsilon\in\Gamma(\mathrm{S}^{+}_{g})$ such that the following
equations are satisfied:
$\nabla^{-H}\epsilon=0\,,\qquad(\varphi-H)\cdot\epsilon=0\,,\qquad\mathcal{F}_{A}\cdot\epsilon=0\,.$
(2.3)
Equations (2.3) are called the _Killing spinor equations_ of Heterotic
supergravity. For ease of notation we denote with the same symbol the
canonical lift of $\nabla^{-H}$ (which has torsion $H$) to the spinor bundle
$\mathrm{S}_{g}$.
###### Remark 2.5.
The existence of solutions to equations (2.3) may depend on the choice of spin
structure on $M$, in the sense that a supersymmetric solution on $M$ with
respect to a particular choice of spin structure may be non-supersymmetric
with respect to a different choice of spin structure, see [17] for more
details and explicit examples of this situation.
###### Remark 2.6.
By a theorem of S. Ivanov [34], a quintuple $(g,\varphi,H,A,\epsilon)$
satisfying the Killing spinor equations and the Bianchi identity automatically
satisfies all the equations of motion of Heterotic supergravity if and only if
the connection $\nabla^{H}$ is an _instanton_.
The existence of Killing spinor equations compatible with the system (2.1) and
(2.2), in the sense specified in the previous remark, is a consequence of
supersymmetry. More precisely, the Killing spinor equations are obtained by
imposing the vanishing of the Heterotic supersymmetry transformations on a
given bosonic background. We refer the reader to [28, 42] and references
therein for more details.
There is a great deal of structure to unpack in the partial differential
equations that define Heterotic supergravity. In order to proceed further it
is convenient to consider a reformulation of Heterotic supergravity that
profits from the fact that we restrict the underlying manifold to be four-
dimensional. For every tuple $(g,\varphi,H,A)$, we define $\alpha:=-\ast
H\in\Omega^{1}(M)$.
###### Lemma 2.7.
A tuple $(g,\varphi,H,A)$ with $H=\ast\alpha$ satisfies the Bianchi identity
if and only if:
$\frac{1}{\kappa}\delta^{g}\alpha=|\mathcal{F}^{-}_{A}|^{2}_{g,\mathfrak{c}}-|\mathcal{R}^{-}_{\nabla^{H}}|^{2}_{g,\mathfrak{v}}-|\mathcal{F}^{+}_{A}|^{2}_{g,\mathfrak{c}}+|\mathcal{R}^{+}_{\nabla^{H}}|^{2}_{g,\mathfrak{v}}\,,$
where:
$\mathcal{F}^{+}_{A}:=\frac{1}{2}(\mathcal{F}_{A}+\ast\mathcal{F}_{A})\,,\qquad\mathcal{F}^{-}_{A}:=\frac{1}{2}(\mathcal{F}_{A}-\ast\mathcal{F}_{A})\,,$
respectively denotes the self-dual and anti-self-dual projections of
$\mathcal{F}_{A}$, and similarly for $\mathcal{R}^{\pm}_{\nabla^{H}}$.
###### Proof.
Using that $\mathcal{F}^{+}_{A}\wedge\mathcal{F}^{-}_{A}=0$ and
$\mathcal{R}^{+}_{\nabla^{H}}\wedge\mathcal{R}^{-}_{\nabla^{H}}=0$, we
compute:
$\displaystyle-\frac{1}{\kappa}\delta^{g}\alpha=\frac{1}{\kappa}\ast\mathrm{d}H=\ast\mathfrak{c}(\mathcal{F}^{+}_{A}\wedge\mathcal{F}_{A}^{+})+\ast\mathfrak{c}(\mathcal{F}^{-}_{A}\wedge\mathcal{F}_{A}^{-})-\ast\mathfrak{v}(\mathcal{R}^{+}_{\nabla^{H}}\wedge\mathcal{R}^{+}_{\nabla^{H}})-\ast\mathfrak{v}(\mathcal{R}^{-}_{\nabla^{H}}\wedge\mathcal{R}^{-}_{\nabla^{H}})$
$\displaystyle=\ast\mathfrak{c}(\mathcal{F}^{+}_{A}\wedge\ast\mathcal{F}_{A}^{+})-\ast\mathfrak{c}(\mathcal{F}^{-}_{A}\wedge\ast\mathcal{F}_{A}^{-})-\ast\mathfrak{v}(\mathcal{R}^{+}_{\nabla^{H}}\wedge\ast\mathcal{R}^{+}_{\nabla^{H}})+\ast\mathfrak{v}(\mathcal{R}^{-}_{\nabla^{H}}\wedge\ast\mathcal{R}^{-}_{\nabla^{H}})$
$\displaystyle=|\mathcal{F}_{A}^{+}|^{2}_{g,\mathfrak{c}}-|\mathcal{F}_{A}^{-}|^{2}_{g,\mathfrak{c}}-|\mathcal{R}^{+}_{\nabla^{H}}|^{2}_{g,\mathfrak{v}}+|\mathcal{R}^{-}_{\nabla^{H}}|^{2}_{g,\mathfrak{v}}\,,$
and hence we conclude. ∎
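As an illustration of Lemma 2.7 (our specialization, not part of the original statement): if both $A$ and $\nabla^{H}$ are anti-self-dual instantons, that is $\mathcal{F}^{+}_{A}=0$ and $\mathcal{R}^{+}_{\nabla^{H}}=0$, the lemma reduces to:
$\frac{1}{\kappa}\delta^{g}\alpha=|\mathcal{F}^{-}_{A}|^{2}_{g,\mathfrak{c}}-|\mathcal{R}^{-}_{\nabla^{H}}|^{2}_{g,\mathfrak{v}}\,.$
If $M$ is in addition compact, integrating against the Riemannian volume form and using $\int_{M}\delta^{g}\alpha\,\nu_{g}=0$ gives the $L^{2}$ balancing condition:
$\int_{M}|\mathcal{F}^{-}_{A}|^{2}_{g,\mathfrak{c}}\,\nu_{g}=\int_{M}|\mathcal{R}^{-}_{\nabla^{H}}|^{2}_{g,\mathfrak{v}}\,\nu_{g}\,.$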
On the other hand, regarding the Maxwell equation in (2.1) we have:
$\delta^{g}H+\iota_{\varphi}H=\ast\mathrm{d}\alpha+\iota_{\varphi}\ast\alpha=\ast(\mathrm{d}\alpha-\varphi\wedge\alpha)=0\,,$
whence it is equivalent to $\mathrm{d}\alpha=\varphi\wedge\alpha$. The
previous computation together with Lemma 2.7 proves that Heterotic
supergravity, as introduced in Definition 2.2, is equivalent to the _Heterotic
system_ , which we proceed to introduce.
###### Definition 2.8.
Let $\kappa>0$ be a real number. The four-dimensional Heterotic system on
$(M,P,\mathfrak{c})$ is the following system of partial differential
equations:
$\displaystyle\mathrm{Ric}^{g}+\nabla^{g}\varphi+\frac{1}{2}\alpha\otimes\alpha-\frac{1}{2}|\alpha|^{2}_{g}\,g+\kappa\,\mathfrak{v}(\mathcal{R}_{\nabla^{\alpha}}\circ\mathcal{R}_{\nabla^{\alpha}})=\kappa\,\mathfrak{c}(\mathcal{F}_{A}\circ\mathcal{F}_{A})\,,\quad\mathrm{d}\alpha=\varphi\wedge\alpha$
(2.4)
$\displaystyle\mathrm{d}_{A}^{\ast}\mathcal{F}_{A}+\iota_{\varphi}\mathcal{F}_{A}-\iota_{\alpha}\ast\mathcal{F}_{A}=0\,,\quad\delta^{g}\varphi+|\varphi|^{2}_{g}+\kappa|\mathcal{R}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}=|\alpha|^{2}_{g}+\kappa|\mathcal{F}_{A}|^{2}_{g,\mathfrak{c}}\,,$
(2.5)
$\displaystyle\frac{1}{\kappa}\delta^{g}\alpha=|\mathcal{F}^{-}_{A}|^{2}_{g,\mathfrak{c}}-|\mathcal{R}^{-}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}-|\mathcal{F}^{+}_{A}|^{2}_{g,\mathfrak{c}}+|\mathcal{R}^{+}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}$
(2.6)
for tuples $(g,\varphi,\alpha,A)\in\mathrm{Conf}(M,P,\mathfrak{c})$, where by
definition we have set $\nabla^{\alpha}:=\nabla^{H}$ with $H=\ast\alpha$.
We will denote by $\mathrm{Conf}_{\kappa}(M,P,\mathfrak{c})$ the configuration
space of the Heterotic system on $(M,P,\mathfrak{c})$ for the given
$\kappa>0$, which consists of the set of all tuples $(g,\varphi,\alpha,A)$ as
introduced in the previous definition. Furthermore, we denote by
$\mathrm{Sol}_{\kappa}(M,P,\mathfrak{c})\subset\mathrm{Conf}_{\kappa}(M,P,\mathfrak{c})$
the set of tuples
$(g,\varphi,\alpha,A)\in\mathrm{Conf}_{\kappa}(M,P,\mathfrak{c})$ that satisfy
the partial differential equations (2.4), (2.5) and (2.6) of the Heterotic system. We
denote by $\mathrm{Sol}^{s}_{\kappa}(M,P,\mathfrak{c})$ the set of
supersymmetric solutions of the Heterotic system on $(M,P,\mathfrak{c})$ for
the given $\kappa$. Note that the same solution $(g,\varphi,\alpha,A)$ may
admit several spinors $\epsilon\in\Gamma(\mathrm{S}^{+}_{g})$ satisfying the
Killing spinor equations.
###### Remark 2.9.
Given a triple $(M,P,\mathfrak{c})$, the solution space
$\mathrm{Sol}_{\kappa}(M,P,\mathfrak{c})$ (and hence also
$\mathrm{Sol}^{s}_{\kappa}(M,P,\mathfrak{c})$) may be empty;
$\mathrm{Sol}_{\kappa}(M,P,\mathfrak{c})$ may be non-empty while
$\mathrm{Sol}^{s}_{\kappa}(M,P,\mathfrak{c})$ is empty; or both may be
non-empty. The set
$\mathrm{Sol}^{s}_{\kappa}(M,P,\mathfrak{c})$ of supersymmetric solutions has
been thoroughly classified in [23, 55].
To every solution
$(g,\varphi,\alpha,A)\in\mathrm{Sol}_{\kappa}(M,P,\mathfrak{c})$ of the
Heterotic system we can associate a cohomology class $\sigma$ in
$H^{1}(M,\mathbb{R})$ defined by $\sigma:=[\varphi]\in H^{1}(M,\mathbb{R})$.
We will call $\sigma$ the _Lee class_ of
$(g,\varphi,\alpha,A)\in\mathrm{Sol}_{\kappa}(M,P,\mathfrak{c})$. If
$\mathrm{Sol}_{\kappa}(M,P,\mathfrak{c})$ is non-empty, the Bianchi identity
of any element in $\mathrm{Sol}_{\kappa}(M,P,\mathfrak{c})$ immediately
implies the following equation in $H^{4}(M,\mathbb{R})$:
$p_{1}(P)=p_{1}(M)\in H^{4}(M,\mathbb{R})\,,$
that is, the first Pontryagin class $p_{1}(P)$ of $P$ needs to be equal to the
first Pontryagin class $p_{1}(M)$ of $M$ with real coefficients. This gives a
simple topological obstruction to the existence of Heterotic solutions on a
given triple $(M,P,\mathfrak{c})$.
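The cohomological identity above can be spelled out via Chern-Weil theory (a standard argument; we leave the universal normalization constants fixed by $\mathfrak{c}$ and $\mathfrak{v}$ implicit). Since $\mathrm{d}H$ is exact, the Bianchi identity yields in $H^{4}(M,\mathbb{R})$:
$0=[\mathrm{d}H]=\kappa\left(\left[\mathfrak{c}(\mathcal{F}_{A}\wedge\mathcal{F}_{A})\right]-\left[\mathfrak{v}(\mathcal{R}_{\nabla^{H}}\wedge\mathcal{R}_{\nabla^{H}})\right]\right)\,,$
and the classes $[\mathfrak{c}(\mathcal{F}_{A}\wedge\mathcal{F}_{A})]$ and $[\mathfrak{v}(\mathcal{R}_{\nabla^{H}}\wedge\mathcal{R}_{\nabla^{H}})]$ represent, with one and the same constant, the Pontryagin classes $p_{1}(P)$ and $p_{1}(M)$ respectively.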
###### Remark 2.10.
As we will see, for instance in Section 3, the topology and geometry of
compact four-manifolds admitting solutions to the Heterotic system depend
crucially on whether $\sigma=0$ or $\sigma\neq 0$.
## 3. Heterotic solitons and the moduli of manifolds of type $S^{1}\times S^{3}$
This section introduces the notion of _Heterotic soliton_ and develops the
classification of NS-NS pairs, introduced below, which will lead us to study
the global moduli space of _manifolds of type $S^{1}\times S^{3}$_ as defined
by P. Gauduchon in [27].
### 3.1. Heterotic solitons
If $P$ is the trivial principal bundle over $M$, that is $P=M$, the triple
$(M,P,\mathfrak{c})$ reduces simply to the oriented four-manifold $M$. In this
case, the configuration space of the four-dimensional Heterotic system, which
we denote by $\mathrm{Conf}_{\kappa}(M)$, consists of all triples of the form
$(g,\varphi,\alpha)$, where $g$ is a Riemannian metric on $M$, $\varphi$ is a
closed 1-form and $\alpha$ is a 1-form. The Heterotic system reduces to:
$\displaystyle\mathrm{Ric}^{g}+\nabla^{g}\varphi+\frac{1}{2}\alpha\otimes\alpha-\frac{1}{2}|\alpha|^{2}_{g}\,g+\kappa\,\mathfrak{v}(\mathcal{R}_{\nabla^{\alpha}}\circ\mathcal{R}_{\nabla^{\alpha}})=0\,,\quad\mathrm{d}\alpha=\varphi\wedge\alpha$
(3.1)
$\displaystyle\delta^{g}\varphi+|\varphi|^{2}_{g}+\kappa\,|\mathcal{R}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}=|\alpha|^{2}_{g}\,,\qquad\delta^{g}\alpha=\kappa\,(|\mathcal{R}^{+}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}-|\mathcal{R}^{-}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}})\,,$
(3.2)
for $(g,\varphi,\alpha)\in\mathrm{Conf}_{\kappa}(M)$. In the limit $\kappa\to
0$, the previous system recovers the _generalized Ricci soliton_ system in
four dimensions [26] and therefore can be considered as a natural
generalization of the latter in the context of Heterotic string theory
corrections to the effective supergravity action. We introduce now the
following definition.
###### Definition 3.1.
The (four-dimensional) _Heterotic soliton system_ consists of equations (3.1)
and (3.2). Solutions of the Heterotic soliton system are (four-dimensional)
_Heterotic solitons_.
If we further impose $\alpha=\varphi$ the Heterotic soliton system further
reduces to:
$\displaystyle\mathrm{Ric}^{g}+\nabla^{g}\varphi+\frac{1}{2}\varphi\otimes\varphi-\frac{1}{2}|\varphi|^{2}_{g}\,g+\kappa\,\mathfrak{v}(\mathcal{R}_{\nabla^{\varphi}}\circ\mathcal{R}_{\nabla^{\varphi}})=0\,,$
(3.3)
$\displaystyle\delta^{g}\varphi+\kappa\,|\mathcal{R}^{-}_{\nabla^{\varphi}}|^{2}_{g,\mathfrak{v}}=0\,,\qquad|\mathcal{R}^{+}_{\nabla^{\varphi}}|^{2}_{g,\mathfrak{v}}=0\,,$
(3.4)
for pairs $(g,\varphi)$ consisting of a Riemannian metric $g$ on $M$ and a
closed 1-form $\varphi\in\Omega^{1}_{cl}(M)$. Equations (3.3) and (3.4)
define, in physics terminology, the so-called _NS-NS supergravity_.
Consequently, we will refer to pairs $(g,\varphi)$ solving (3.3) and (3.4) as
_NS-NS pairs_.
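The reduction is slightly less immediate than it may appear; the following elaboration is ours. Setting $\alpha=\varphi$ in (3.2) gives the pair of identities:
$\delta^{g}\varphi=-\kappa\,|\mathcal{R}_{\nabla^{\varphi}}|^{2}_{g,\mathfrak{v}}\,,\qquad\delta^{g}\varphi=\kappa\,(|\mathcal{R}^{+}_{\nabla^{\varphi}}|^{2}_{g,\mathfrak{v}}-|\mathcal{R}^{-}_{\nabla^{\varphi}}|^{2}_{g,\mathfrak{v}})\,.$
Since $|\mathcal{R}_{\nabla^{\varphi}}|^{2}_{g,\mathfrak{v}}=|\mathcal{R}^{+}_{\nabla^{\varphi}}|^{2}_{g,\mathfrak{v}}+|\mathcal{R}^{-}_{\nabla^{\varphi}}|^{2}_{g,\mathfrak{v}}$, comparing the two expressions forces $2\kappa\,|\mathcal{R}^{+}_{\nabla^{\varphi}}|^{2}_{g,\mathfrak{v}}=0$, which is the second equation in (3.4); substituting back recovers the first.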
### 3.2. Compact NS-NS pairs
Let $M$ be an oriented and connected four-manifold equipped with an NS-NS
pair $(g,\varphi)$. Recall that the connection $\nabla^{\varphi}$ is an
anti-self-dual instanton on the tangent bundle of $M$. We will say that an
NS-NS pair is _complete_ if $(M,g)$ is a complete Riemannian four-manifold.
###### Lemma 3.2.
Let $(g,\varphi)$ be an NS-NS pair on $M$. We have:
$\mathfrak{v}(\mathcal{R}_{\nabla^{\varphi}}\circ\mathcal{R}_{\nabla^{\varphi}})=\frac{g}{2}|\mathcal{R}^{-}_{\nabla^{\varphi}}|^{2}_{g,\mathfrak{v}}\,,$
and $(g,\varphi)$ satisfies:
$\mathrm{Ric}^{g}+\nabla^{g}\varphi+\frac{1}{2}\varphi\otimes\varphi-\frac{1}{2}(|\varphi|^{2}_{g}+\delta^{g}\varphi)g=0\,,$
(3.5)
which is equivalent to the Einstein equation (3.3) for $(g,\varphi)$.
###### Proof.
Let $T_{a}$ denote a local basis of $\mathfrak{so}_{g}(M)$ satisfying
$\mathfrak{v}(T_{a},T_{b})=\delta_{ab}$ and write
$\mathcal{R}_{\nabla^{\alpha}}=\sum_{a}\mathcal{R}^{a}_{\nabla^{\alpha}}\otimes
T_{a}$. Identifying each 2-form $\mathcal{R}_{\nabla^{\alpha}}^{a}$ with a
skew-symmetric endomorphism of $TM$ we have:
$\mathfrak{v}(\mathcal{R}_{\nabla^{\alpha}}\circ\mathcal{R}_{\nabla^{\alpha}})(v_{1},v_{2})=\sum_{a}g(\mathcal{R}_{\nabla^{\alpha}}^{a}\circ\mathcal{R}_{\nabla^{\alpha}}^{a}(v_{1}),v_{2}).$
Using that $\mathcal{R}_{\nabla^{\alpha}}$ is anti-self-dual, the same holds
for each component $\mathcal{R}_{\nabla^{\alpha}}^{a}$, and thus
$\mathcal{R}_{\nabla^{\alpha}}^{a}\circ\mathcal{R}_{\nabla^{\alpha}}^{a}=\frac{1}{2}|\mathcal{R}_{\nabla^{\alpha}}^{a}|^{2}\mathrm{Id}_{TM}$.
Hence, we obtain:
$\mathfrak{v}(\mathcal{R}_{\nabla^{\alpha}}\circ\mathcal{R}_{\nabla^{\alpha}})=\frac{1}{2}|\mathcal{R}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}g\,.$
The second part follows directly after substituting the first equation in
(3.4) into equation (3.3), upon use of the previous identity. ∎
Equation (3.5) can be naturally interpreted in the framework of conformal
geometry and Einstein-Weyl structures. Let $\mathcal{C}$ be the conformal
class of Riemannian metrics on $M$ containing $g$, and let $D$ be a Weyl
connection on $(M,\mathcal{C})$, that is, a torsion-free connection preserving
$\mathcal{C}$, with Lee form $\theta$ with respect to $g$, so that:
$Dg=-2\theta\otimes g\,.$
The Ricci curvature $\mathrm{Ric}^{D}$ of $D$ reads:
$\mathrm{Ric}^{D}=\mathrm{Ric}^{g}-2(\nabla^{g}\theta-\theta\otimes\theta)+(\delta^{g}\theta-2|\theta|_{g}^{2})g\,.$
(3.6)
Using the previous expression, we readily conclude that (3.5) is equivalent to
$\mathrm{Ric}^{D}=0$, where $D$ is the Weyl connection on $(M,\mathcal{C})$
whose Lee form with respect to $g$ is $\theta=-\frac{\varphi}{2}$.
Consequently (3.5) is conformally invariant, in the sense that, given an NS-NS
pair $(g,\varphi)$, every other metric $\tilde{g}=e^{f}g$ in the conformal
class of $g$ satisfies:
$\operatorname{Ric}^{\tilde{g}}+\nabla^{\tilde{g}}\tilde{\varphi}+\frac{1}{2}\tilde{\varphi}\otimes\tilde{\varphi}-\frac{1}{2}(|\tilde{\varphi}|_{\tilde{g}}^{2}+\delta^{\tilde{g}}\tilde{\varphi})\tilde{g}=0\,,$
(3.7)
for $\tilde{\varphi}:=\varphi+\mathrm{d}f$. Recall that a closed Weyl
structure is said to be _closed Einstein-Weyl_ if it satisfies (3.7).
###### Lemma 3.3.
Let $(\mathcal{C},D)$ be a closed Einstein-Weyl structure on a compact
four-manifold $M$. Then the Lee form $\theta$ associated to the Gauduchon
metric $g$ of $\mathcal{C}$ is parallel.
###### Proof.
Let $(\mathcal{C},D)$ be a closed Einstein-Weyl structure and let $(g,\theta)$
be a Gauduchon representative, that is, $\theta$ is coclosed with respect to $g$.
Hence, the pair $(g,\theta)$ satisfies:
$\mathrm{Ric}^{g}-2(\nabla^{g}\theta-\theta\otimes\theta)-2|\theta|_{g}^{2}g=0\,.$
Taking the trace in this equation and using the fact that $\theta$ is
coclosed, we obtain that the scalar curvature $s^{g}$ of $g$ satisfies
$s^{g}=6|\theta|_{g}^{2}$. Using the contracted Bianchi identity
$(\nabla^{g})^{\ast}\mathrm{Ric}^{g}=-\frac{1}{2}\mathrm{d}s^{g}$ and the
formula $(\nabla^{g})^{\ast}(|\theta|_{g}^{2}g)=-\mathrm{d}|\theta|_{g}^{2}$,
we then compute the total norm of $\nabla^{g}\theta$ with respect to $g$:
$\left\lVert\nabla^{g}\theta\right\rVert^{2}_{g}=\langle\nabla^{g}\theta,\frac{1}{2}\mathrm{Ric}^{g}+\theta\otimes\theta-|\theta|_{g}^{2}g\rangle_{g}=\langle\theta,(\nabla^{g})^{\ast}(\theta\otimes\theta)+\frac{1}{2}\mathrm{d}|\theta|_{g}^{2}\rangle_{g}=\langle\theta,(\nabla^{g})^{\ast}(\theta\otimes\theta)\rangle_{g}\,.$
On the other hand:
$\langle\theta,(\nabla^{g})^{\ast}(\theta\otimes\theta)\rangle_{g}=-\frac{1}{2}\int_{M}\langle\theta,\mathrm{d}|\theta|_{g}^{2}\rangle_{g}\nu_{g}=0\,,$
where $\nu_{g}$ denotes the Riemannian volume form on $(M,g)$. Hence
$\nabla^{g}\theta=0$. ∎
###### Proposition 3.4.
Assume $M$ is compact and admits an NS-NS pair
$(g,\varphi)\in\mathrm{Sol}_{\kappa}(M)$, with associated Lee class $\sigma\in
H^{1}(M,\mathbb{R})$.
1. (1)
If $\sigma=[\varphi]=0\in H^{1}(M,\mathbb{R})$ then $(M,g)$ is flat and
therefore admits a finite covering conformal to a flat torus.
2. (2)
If $0\neq\sigma=[\varphi]\in H^{1}(M,\mathbb{R})$, then $b^{1}(M)=1$, and the
universal Riemannian cover of $(M,g)$ is isometric to $\mathbb{R}\times S^{3}$
equipped with the direct product of the standard metric of $\mathbb{R}$ and
the round metric on $S^{3}$ of sectional curvature
$\frac{1}{4}|\varphi|^{2}_{g}$, where $|\varphi|_{g}$ is the point-wise
constant norm of $\varphi$.
###### Proof.
If $(g,\varphi)$ is an NS-NS pair with $\sigma=0$, then $\varphi$ is exact and
parallel whence $\varphi=0$ and $(M,g)$ is a flat compact four-manifold, thus
finitely covered by a torus. Assume now that $(g,\varphi)$ is a solution with
$\sigma\neq 0$. Lemma 3.3 implies that $\varphi$ is a non-zero parallel 1-form
on $M$. Therefore, by the de Rham theorem the universal Riemannian cover
$(\hat{M},\hat{g})$ of $(M,g)$ is isometric to a Riemannian product
$(\mathbb{R}\times N,\mathrm{d}t^{2}+g_{N})$, where $(N,g_{N})$ is a complete
and simply connected Riemannian three-manifold, such that $\varphi$ is a
constant multiple of $\mathrm{d}t$. The Einstein equation for $(g,\varphi)$
implies that $g_{N}$ is Einstein with positive sectional curvature
$\frac{1}{4}|\varphi|^{2}_{g}$. Therefore, $(N,g_{N})$ is isometric to the
round sphere $S^{3}$ of constant sectional curvature
$\frac{1}{4}|\varphi|^{2}_{g}$. A direct computation shows that the metric
connection on $(\mathbb{R}\times N,\hat{g}=\mathrm{d}t^{2}+g_{N})$ with
torsion $|\varphi|_{g}\ast_{\hat{g}}\mathrm{d}t$ is flat and therefore both
equations in (3.4) are automatically satisfied and we conclude. ∎
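As a consistency check of case (2) (our computation, complementary to the proof above), set $k:=|\varphi|_{g}$ and consider the universal cover $(\mathbb{R}\times S^{3},\hat{g}=\mathrm{d}t^{2}+g_{N})$ with $\varphi=k\,\mathrm{d}t$ and $g_{N}$ the round metric of sectional curvature $\frac{k^{2}}{4}$. Since $\varphi$ is parallel, $\nabla^{\hat{g}}\varphi=0$ and $\delta^{\hat{g}}\varphi=0$, while $\mathrm{Ric}^{\hat{g}}$ vanishes in the $\mathbb{R}$-direction and equals $2\cdot\frac{k^{2}}{4}\,g_{N}=\frac{k^{2}}{2}\,g_{N}$ on the $S^{3}$-factor. Equation (3.5) then evaluates to:
$\mathrm{Ric}^{\hat{g}}+\frac{1}{2}\varphi\otimes\varphi-\frac{1}{2}k^{2}\,\hat{g}=\Big(0+\frac{k^{2}}{2}-\frac{k^{2}}{2}\Big)\,\mathrm{d}t^{2}+\Big(\frac{k^{2}}{2}+0-\frac{k^{2}}{2}\Big)\,g_{N}=0\,,$
as required.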
Reference [27] gives, using results of [8, 9], a detailed account of compact
Riemannian four-manifolds covered by the Riemannian product $\mathbb{R}\times
S^{3}$. These manifolds were called in Op. Cit. _manifolds of type
$S^{1}\times S^{3}$_. Manifolds of type $S^{1}\times S^{3}$ admit a very
explicit description, which we will review in the following. This description
will be important in order to construct globally the moduli space of NS-NS
pairs.
### 3.3. Moduli space of manifolds of type $S^{1}\times S^{3}$
In this subsection we construct the global moduli space of NS-NS pairs with
non-vanishing Lee class. This is possible due to the fact that, as described
in Proposition 3.4, NS-NS pairs with non-trivial Lee class yield closed
Einstein-Weyl structures on $M$. The deformation problem (around an Einstein
metric) of Einstein-Weyl structures with Gauduchon constant one has been
studied in [43]. However, the analysis of Op. Cit. does not cover the case we
consider here, since the Weyl structure associated to an NS-NS pair
$(g,\varphi)$ on $M$ has zero Gauduchon constant and furthermore such $M$,
which corresponds to a manifold of type $S^{1}\times S^{3}$, does not admit
positive curvature Einstein metrics. Concerning the case of vanishing
Gauduchon constant, [43, Remark 7] states that the moduli space of manifolds
of type $S^{1}\times S^{3}$ is one-dimensional. We will show in Theorem 3.10
that this is not correct, see also Corollary 3.13.
###### Definition 3.5.
[27] A manifold of type $S^{1}\times S^{3}$ is a connected and oriented
Riemannian manifold locally isometric to $\mathbb{R}\times S^{3}$, where
$\mathbb{R}$ is equipped with its canonical metric and $S^{3}$ is equipped
with its round metric of sectional curvature $\frac{1}{4}$.
###### Remark 3.6.
Reference [27] introduces manifolds of type $S^{1}\times S^{3}$ by requiring
the metric on $S^{3}$ to have sectional curvature 1. Our choice for the
sectional curvature to be equal to $\frac{1}{4}$ in the above definition is
motivated by the fact that in this way NS-NS pairs correspond directly to
manifolds of type $S^{1}\times S^{3}$, without the need of rescaling the
metric.
From its very definition it follows that the universal Riemannian cover of a
manifold of type $S^{1}\times S^{3}$ is $\mathbb{R}\times S^{3}$, which we
consider to be oriented and _time oriented_ , the latter meaning that an
orientation on the factor $\mathbb{R}$ has been fixed. Manifolds of type
$S^{1}\times S^{3}$ are determined by the embedding, modulo conjugation, of
their fundamental group $\Gamma$ into the orientation-preserving isometry
group $\mathrm{Iso}(\mathbb{R}\times S^{3})$ of $\mathbb{R}\times S^{3}$.
Since $\Gamma$ acts without fixed points, we actually have
$\Gamma\subset\mathrm{Iso}(\mathbb{R})\times\mathrm{Iso}(S^{3})$, that is,
elements of $\Gamma$ act by translations on $\mathbb{R}$ preserving the
canonical 1-form on $\mathbb{R}$ as well as the orientation on $S^{3}$. Every
manifold $(M,g)$ of type $S^{1}\times S^{3}$ can be written as a quotient:
$(M,g)=(\mathbb{R}\times S^{3})/\Gamma\,,$
where $\Gamma\subset\mathrm{Iso}(\mathbb{R})\times\mathrm{Iso}(S^{3})$ acts
freely and properly on $\mathbb{R}\times S^{3}$ through the action of the
isometry group of the latter. Elements of
$\mathrm{Iso}(\mathbb{R})\times\mathrm{Iso}(S^{3})$ preserve the canonical
unit norm vector field on $\mathbb{R}$. Consequently, every manifold of type
$S^{1}\times S^{3}$ is equipped with a canonical unit norm parallel vector
field, whose musical dual corresponds to $\varphi$ up to a multiplicative
positive constant. Alternatively, every manifold of type $S^{1}\times S^{3}$
can be obtained from a direct product $[0,a]\times\Sigma$, where $a>0$ is a
real constant and $\Sigma$ is a compact Riemannian three-manifold of constant
sectional curvature equal to $\frac{1}{4}$, through the suspension of $\Sigma$
over $[0,a]$ by an isometry $\psi$ of $\Sigma$. Therefore, a manifold of type
$S^{1}\times S^{3}$ is the total space of a fibration over the circle of
length $a$ with fiber $\Sigma$ which comes equipped with a connection of
holonomy generated by $\psi$. Each fiber is isometric to a quotient
$\Sigma=S^{3}/\Gamma_{0}$, where $\Gamma_{0}$ is a finite subgroup
$\Gamma_{0}\subset\mathrm{Iso}(S^{3})=\mathrm{SO}(4)$ acting freely on $S^{3}$
as an embedded subgroup of $\Gamma$ which preserves each sphere
$\left\{t\right\}\times S^{3}$ in $\mathbb{R}\times S^{3}$. Therefore, the
group of isometries $\mathrm{Iso}(\Sigma)$ is identified canonically with
$N(\Gamma_{0})/\Gamma_{0}$, where $N(\Gamma_{0})$ is the normalizer of
$\Gamma_{0}$ in $\mathrm{SO}(4)$. Consequently, the fundamental group of a
manifold of type $S^{1}\times S^{3}$ is a semi-direct product of $\Gamma_{0}$
with the infinite cyclic group $\mathbb{Z}$ which is realized as a subgroup of
$\mathbb{R}\times\mathrm{SO}(4)$ as follows:
$n\mapsto(na,[\psi]^{n})\,,\qquad\gamma\mapsto(0,\gamma)\,,\qquad\forall\,n\in\mathbb{Z}\,,\
\forall\,\gamma\in\Gamma_{0}\,,$ (3.8)
where $\psi\in\mathrm{Iso}(\Sigma)$. In particular, given a closed
three-manifold $\Sigma=S^{3}/\Gamma_{0}$, a pair $(\lambda,\psi)$ consisting of a
positive real number $\lambda$ and an isometry $\psi$ of $\Sigma$ uniquely
determines a manifold of type $S^{1}\times S^{3}$ as the quotient:
$(M,g)=(\mathbb{R}\times\Sigma)/\langle(\lambda,\psi)\rangle\,,$ (3.9)
where $\psi$ is considered as an element of
$\mathrm{Iso}(\Sigma)=N(\Gamma_{0})/\Gamma_{0}$ and
$\langle(\lambda,\psi)\rangle$ is the infinite cyclic group generated by the
isometry $(\lambda,\psi)$ of $\mathbb{R}\times\Sigma$ acting as the
translation by $\lambda$ on $\mathbb{R}$ and $\psi$ on $\Sigma$.
###### Definition 3.7.
A manifold of type $S^{1}\times S^{3}$ is of _class_ $\Sigma$ with respect to
$(\lambda,\psi)$ if it is isometric to a quotient of the form (3.9).
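The simplest instance (our illustration): taking $\psi=\mathrm{Id}_{\Sigma}$ in (3.9) produces the Riemannian product:
$(M,g)=(\mathbb{R}\times\Sigma)/\langle(\lambda,\mathrm{Id}_{\Sigma})\rangle=S^{1}_{\lambda}\times\Sigma\,,$
where $S^{1}_{\lambda}$ denotes the circle of length $\lambda$. A non-trivial $\psi$ instead produces the mapping torus of $\psi$, which fibers over $S^{1}_{\lambda}$ with fiber $\Sigma$ and monodromy $\psi$.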
###### Lemma 3.8.
Let $F\colon(M_{1},g_{1})\to(M_{2},g_{2})$ be an isometry between manifolds of
type $S^{1}\times S^{3}$ and of class $\Sigma$ with respect to
$(\lambda_{i},\psi_{i})$, with $\lambda_{i}\in\mathbb{R}_{+}$ and
$\psi_{i}\in\mathrm{Iso}(\Sigma)$. Then, $\lambda_{1}=\lambda_{2}$ and:
$\mathfrak{f}\circ\psi_{1}\circ\mathfrak{f}^{-1}=\psi_{2}\,,$
for an isometry $\mathfrak{f}\in\mathrm{Iso}(\Sigma)$.
###### Remark 3.9.
We recall that, by definition,
$\hat{F}\colon\mathbb{R}\times\Sigma\to\mathbb{R}\times\Sigma$ is a covering
lift of $F\colon(M_{1},g_{1})\to(M_{2},g_{2})$ if it fits into the following
commutative diagram equivariantly with respect to deck transformations:
$\begin{array}{ccc}\mathbb{R}\times\Sigma&\xrightarrow{\;\hat{F}=\hat{F}_{0}\times\mathfrak{f}\;}&\mathbb{R}\times\Sigma\\[2pt]{\scriptstyle p_{1}}\downarrow&&\downarrow{\scriptstyle p_{2}}\\[2pt](M_{1},g_{1})&\xrightarrow{\;F\;}&(M_{2},g_{2})\end{array}$
where $p_{1}$ and $p_{2}$ denote the cover projections and
$\mathbb{R}\times\Sigma$ is endowed with the product metric. In particular,
$\hat{F}\in\mathrm{Iso}(\mathbb{R}\times\Sigma)$ is an isometry and
$\hat{F}_{0}$ acts by translations.
###### Proof.
Since $p_{1}\colon\mathbb{R}\times\Sigma\to(M_{1},g_{1})$ is a covering map
and $F$ is a diffeomorphism, the map:
$F\circ p_{1}\colon\mathbb{R}\times\Sigma\to(M_{2},g_{2})\,,$
is also a covering map. Using the fact that covering maps induce injective
morphisms at the level of fundamental groups, it follows that $(F\circ
p_{1})_{\ast}(\pi_{1}(\Sigma))\subset\pi_{1}(M_{2})$ and
$(p_{2})_{\ast}(\pi_{1}(\Sigma))\subset\pi_{1}(M_{2})$ are subgroups of
$\pi_{1}(M_{2})$ abstractly isomorphic to $\pi_{1}(\Sigma)$. Since both
$(F\circ p_{1})_{\ast}(\pi_{1}(\Sigma))$ and $(p_{2})_{\ast}(\pi_{1}(\Sigma))$
contain all torsion elements of $\pi_{1}(M_{2})$ and are normal subgroups of
$\pi_{1}(M_{2})$, we conclude:
$(F\circ p_{1})_{\ast}(\pi_{1}(\Sigma))=(p_{2})_{\ast}(\pi_{1}(\Sigma))\,,$
in $\pi_{1}(M_{2})$. Therefore, standard covering theory implies that $F\circ
p_{1}$ and $p_{2}$ are isomorphic covering maps (equivariantly with respect to
deck transformations). Hence, there exists a diffeomorphism
$\hat{F}\colon\mathbb{R}\times\Sigma\to\mathbb{R}\times\Sigma$ fitting
equivariantly in the commutative diagram of Remark 3.9. This map can be shown to be an
isometry with respect to the product metric on $\mathbb{R}\times\Sigma$. The
fact that $\hat{F}$ is an isometry implies the decomposition
$\hat{F}=\hat{F}_{0}\times\mathfrak{f}$ where $\hat{F}_{0}$ acts by constant
translations on $\mathbb{R}$. The equivariance of $\hat{F}$ implies in turn:
$\hat{F}((r,s)\cdot(\lambda_{1},\psi_{1}))=\hat{F}((r,s))\cdot(\lambda_{2},\psi_{2})^{n}\,,$
where $n$ is a natural number. The fact that $\hat{F}$ is a diffeomorphism
together with the fact that the fibers of $p_{a}$ are torsors over
$\langle(\lambda_{a},\psi_{a})\rangle$, $a=1,2$, implies that $n=1$, since
otherwise $\hat{F}$ would not be surjective. Therefore:
$\hat{F}\circ(\lambda_{1},\psi_{1})\circ\hat{F}^{-1}=(\lambda_{2},\psi_{2})\,,$
implying $\lambda_{1}=\lambda_{2}$, as well as:
$\mathfrak{f}\circ\psi_{1}\circ\mathfrak{f}^{-1}=\psi_{2}\,.$
Since the lift $\hat{F}$ we have considered is unique modulo conjugation by
isometries in $\mathrm{Iso}(\mathbb{R})\times\mathrm{Iso}(\Sigma)$, we
conclude. ∎
Fix now an oriented and closed Riemannian three-manifold of the form
$\Sigma=S^{3}/\Gamma_{0}$ and define the set:
$\mathcal{I}(\Sigma):=\mathrm{Iso}(\Sigma)/\mathrm{Ad}(\mathrm{Iso}(\Sigma))\,,$
to be the set of orbits of the adjoint action
$\mathrm{Ad}\colon\mathrm{Iso}(\Sigma)\to\mathrm{Aut}(\mathrm{Iso}(\Sigma))$,
that is, the set of conjugacy classes of $\mathrm{Iso}(\Sigma)$. Furthermore,
denote by $\mathfrak{M}(\Sigma)$ the set of manifolds of type $S^{1}\times
S^{3}$ and of class $\Sigma$ modulo the natural action of the orientation-
preserving diffeomorphism group via pull-back.
###### Theorem 3.10.
There is a canonical bijection of sets:
$\mathbb{R}_{+}\times\mathcal{I}(\Sigma)\xrightarrow{\simeq}\mathfrak{M}(\Sigma)\,.$
###### Proof.
To every element $(\lambda,[\psi])\in\mathbb{R}_{+}\times\mathcal{I}(\Sigma)$
we associate the element in $\mathfrak{M}(\Sigma)$ given by the isomorphism
class of manifolds of type $S^{1}\times S^{3}$ defined by the following
manifold of type $S^{1}\times S^{3}$:
$(M,g)=(\mathbb{R}\times\Sigma)/\langle(\lambda,\psi)\rangle\,,$
where $\psi$ is any representative of $[\psi]\in\mathcal{I}(\Sigma)$. Changing
the representative yields an isometric manifold of type $S^{1}\times S^{3}$
and class $\Sigma$, whence the assignment is well defined. Conversely, Lemma
3.8 implies that to any isomorphism class in $\mathfrak{M}(\Sigma)$ we can
associate a unique element in $\mathbb{R}_{+}\times\mathcal{I}(\Sigma)$ and
that this assignment is inverse to the previous construction and thus we
conclude. ∎
The set of conjugacy classes of a compact Lie group admits a very explicit
description as a polytope in a Cartan subalgebra of $\mathrm{Iso}(\Sigma)$. Fix
a maximal torus $T\subset\mathrm{Iso}(\Sigma)$, with Lie algebra
$\mathfrak{t}$. We denote by:
$W(\Sigma,T):=\frac{N(T)}{T}\,,$
the Weyl group of $\mathrm{Iso}(\Sigma)$, where $N(T)$ denotes the normalizer
of $T$ in $\mathrm{Iso}(\Sigma)$. The exponential map
$\mathrm{Exp}\colon\mathfrak{t}\to T$ is surjective, and its
kernel is a lattice in $\mathfrak{t}$, which allows one to recover $T$ as:
$T=\frac{\mathfrak{t}}{\mathrm{ker}(\mathrm{Exp})}\,.$
Every conjugacy class in $\mathrm{Iso}(\Sigma)$ intersects $T$ in at least one
point [32], unique modulo the natural adjoint action of the Weyl group
$W(\Sigma,T)$ on $T$. This fact can be used to prove that we have a bijection:
$\mathcal{I}(\Sigma)=\frac{T}{W(\Sigma,T)}=\frac{\mathfrak{t}}{W(\Sigma,T)\ltimes\mathrm{ker}(\mathrm{Exp})}\,,$
which gives an explicit description of $\mathcal{I}(\Sigma)$ in terms of the
fundamental region of the action of
$W(\Sigma,T)\ltimes\mathrm{ker}(\mathrm{Exp})$ on $\mathfrak{t}$.
###### Remark 3.11.
The isometry groups of compact elliptic three-manifolds
$\Sigma=S^{3}/\Gamma_{0}$ have been classified in [38]. The Weyl groups of most
of the subgroups of $\mathrm{SO}(4)$ appearing as isometry groups of elliptic
three-manifolds can be computed directly, which allows for an explicit
construction of the corresponding moduli space of manifolds of type
$S^{1}\times S^{3}$.
Let $\mathrm{rk}(\mathrm{Iso}(\Sigma))$ denote the rank of
$\mathrm{Iso}(\Sigma)$, that is, the dimension of any of its maximal torus
subgroups. As a direct consequence of Theorem 3.10 we obtain the following
result.
###### Corollary 3.12.
The moduli space of manifolds of type $S^{1}\times S^{3}$ of class $\Sigma$
has dimension $1+\mathrm{rk}(\mathrm{Iso}(\Sigma))$.
Returning to the problem of classifying NS-NS pairs, the previous discussion
implies the following classification result.
###### Corollary 3.13.
The moduli space $\mathfrak{M}_{\mathrm{NS}}(\Sigma)$ of NS-NS pairs on a
manifold of the form (3.9) admits a finite covering given by
$\mathbb{R}^{2}\times T$, where $T$ is a maximal torus of
$\mathrm{Iso}(\Sigma)$. In particular
$\dim(\mathfrak{M}_{\mathrm{NS}}(\Sigma))=2+\mathrm{rk}(\mathrm{Iso}(\Sigma))$.
###### Proof.
Every NS-NS pair $(g,\varphi)$ defines a manifold of type $S^{1}\times S^{3}$
given by $(|\varphi|^{2}_{g}\,g,\varphi)$. Indeed, note that
$\varphi$ has norm one with respect to the rescaled metric
$|\varphi|^{2}_{g}\,g$ and its metric dual defines the canonical unit-norm parallel
vector field that every manifold of type $S^{1}\times S^{3}$ carries.
Furthermore, it can be seen that the restriction of $|\varphi|^{2}_{g}\,g$ to
the kernel of $\varphi$ precisely yields a metric of
sectional curvature $\frac{1}{4}$ (see Definition 3.5) by following the same
steps as in the proof of Proposition 3.4. Hence, the assignment:
$(g,\varphi)\mapsto(|\varphi|^{2}_{g}\,g,\varphi,|\varphi|_{g})\,,$
gives the desired bijection upon use of Theorem 3.10. ∎
###### Example 3.14.
For $\Sigma=S^{3}$ we have $\mathrm{Iso}(S^{3})=\mathrm{SO}(4)$ and the space
of conjugacy classes $\mathcal{I}(S^{3})=T/W(S^{3},T)$ admits a very explicit
description. A maximal torus of $\mathrm{SO}(4)$ can be conjugated to a group
of matrices of the form:
$\begin{bmatrix}\cos(x)&\sin(x)&0&0\\ -\sin(x)&\cos(x)&0&0\\ 0&0&\cos(y)&\sin(y)\\ 0&0&-\sin(y)&\cos(y)\end{bmatrix}$
where $x,y\in[0,2\pi]$. Hence, $T$ is a two-torus and thus
$\dim(\mathfrak{M}(S^{3}))=3$. Furthermore, the Weyl group can be shown to be
the group of even signed permutations of two elements.
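In these coordinates the description of $\mathcal{I}(S^{3})$ as a fundamental region becomes explicit. The non-trivial even signed permutations act as:
$(x,y)\mapsto(y,x)\,,\quad(x,y)\mapsto(-x,-y)\,,\quad(x,y)\mapsto(-y,-x)\,,$
while $\mathrm{ker}(\mathrm{Exp})$ corresponds to the lattice $2\pi\mathbb{Z}^{2}\subset\mathfrak{t}\cong\mathbb{R}^{2}$, so $\mathcal{I}(S^{3})$ is the quotient of the square $[0,2\pi]^{2}$ by these identifications.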
### 3.4. Infinitesimal deformations of NS-NS pairs
We consider now the infinitesimal deformation problem of NS-NS structures on a
manifold $M$ of type $S^{1}\times S^{3}$ around a fixed NS-NS pair
$(g,\varphi)$ modulo the action of the diffeomorphism group of $M$, with the
goal of obtaining the _infinitesimal_ counterpart of the results obtained in
the previous subsection. As we will see momentarily, the differential operator
controlling the infinitesimal deformations of a given NS-NS pair has a nice
geometric interpretation when restricted to an appropriate submanifold of $M$.
Let $M$ be a compact four-manifold and let $\omega$ be a fixed volume form on
$M$. We denote by $\mathrm{Met}_{\omega}(M)\subset\Gamma(T^{\ast}M^{\odot 2})$
the space of Riemannian metrics on $M$ whose associated Riemannian volume form
$\nu_{g}$ is equal to $\omega$. Using the equations defining the notion of NS-
NS pair we introduce the following map:
$\displaystyle\mathcal{E}=(\mathcal{E}_{1},\mathcal{E}_{2},\mathcal{E}_{3},\mathcal{E}_{4})\colon\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M)\to\Gamma(T^{\ast}M^{\odot 2})\times\Gamma(T^{\ast}M^{\odot 2})\times\Omega^{2}(M)\times C^{\infty}(M)\,,$
$\displaystyle(g,\varphi)\mapsto(\mathrm{Ric}^{g}+\frac{1}{2}\varphi\otimes\varphi-\frac{1}{2}|\varphi|^{2}_{g}\,g,\mathcal{L}_{\varphi^{\sharp}}g,\mathrm{d}\varphi,|\varphi|_{g}^{2}-1)\,,$
where $\mathcal{L}_{\varphi^{\sharp}}$ denotes Lie derivative along
$\varphi^{\sharp}$, the metric dual of $\varphi$. Using the fact that
$\nabla^{g}\varphi=0$ if and only if $\mathcal{L}_{\varphi^{\sharp}}g=0$ and
$\mathrm{d}\varphi=0$, it follows that the preimage $\mathcal{E}^{-1}(0)$ of
$0$ by $\mathcal{E}$ is by construction the set of all NS-NS pairs
$(g,\varphi)$ on $M$ with unit norm $\varphi$ and inducing $\omega$ as
Riemannian volume form of $g$. We assume that both $\mathrm{Met}_{\omega}(M)$
and $\Omega^{1}(M)$ are completed in the Sobolev norm
$\mathrm{H}^{s}=\mathrm{L}^{2}_{s}$ with $s$ large enough so
$\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M)$ becomes a Hilbert manifold. The
operator $\mathcal{E}$ admits a canonical extension to the Sobolev completion
of $\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M)$, which we denote for ease of
notation by the same symbol. The tangent space of
$\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M)$ at $(g,\varphi)$ is given by:
$T_{(g,\varphi)}(\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M))=\left\\{(\tau,\eta)\in\Gamma(T^{\ast}M^{\odot
2})\times\Omega^{1}(M)\,\,|\,\,\mathrm{Tr}_{g}(\tau)=0\right\\}\,,$
which again is assumed to be completed in the appropriate Sobolev norm. The
trace-less condition in the previous equation appears because
$\mathrm{Met}_{\omega}(M)$ is restricted to those Riemannian metrics whose
induced Riemannian volume form equals $\omega$. In the standard
deformation problem of Einstein metrics, such a condition follows automatically
from restricting to metrics of unit volume [6]. For every Riemannian
metric $g$ on $M$, we introduce the linear map of vector bundles:
$o^{g}\colon S^{2}T^{\ast}M\to S^{2}T^{\ast}M\,,\quad\tau\mapsto
o^{g}(\tau)\,,$
where, given a local orthonormal frame $\left\\{e_{i}\right\\}$, we define:
$o^{g}(\tau)(v_{1},v_{2})=\sum_{i}\tau(\mathcal{R}^{g}_{v_{1},e_{i}}v_{2},e_{i})\,,$
for every $v_{1},v_{2}\in TM$. With this definition, the Lichnerowicz
Laplacian restricted to symmetric $(2,0)$ tensors is given by [6]:
$\Delta_{L}^{g}\tau=(\nabla^{g})^{\ast}\nabla^{g}\tau+\mathrm{Ric}^{g}\circ_{g}\tau+\tau\circ_{g}\mathrm{Ric}^{g}-2\,o^{g}(\tau)\,,$
where $(\nabla^{g})^{\ast}$ is the adjoint of the Levi-Civita connection
acting on $(2,0)$ tensors and the contraction $\circ_{g}$ is defined
analogously to its counterpart for forms as introduced in Section 2. In
particular:
$(\mathrm{Ric}^{g}\circ_{g}\tau)(v_{1},v_{2})=g(\mathrm{Ric}^{g}(v_{1}),\tau(v_{2}))\,,\qquad
v_{1},v_{2}\in\mathfrak{X}(M)\,,$
and similarly for $\tau\circ_{g}\mathrm{Ric}^{g}$. Note that the 1-form
$\varphi$ of an NS-NS pair $(g,\varphi)$ has constant norm; for definiteness,
we will assume in the following that $\varphi$ has unit norm.
###### Lemma 3.15.
Let $(g,\varphi)$ be a NS-NS pair. The differential of $\mathcal{E}$ at
$(g,\varphi)$ reads:
$\displaystyle\mathrm{d}_{(g,\varphi)}\mathcal{E}_{1}(\tau,\eta)=\frac{1}{2}\Delta^{g}_{L}(\tau)-2\delta^{\ast}_{g}\delta_{g}\tau+\frac{1}{2}(\eta\otimes\varphi+\varphi\otimes\eta)-\frac{1}{2}\tau\,,\quad\mathrm{d}_{(g,\varphi)}\mathcal{E}_{3}(\tau,\eta)=\mathrm{d}\eta\,,$
$\displaystyle\mathrm{d}_{(g,\varphi)}\mathcal{E}_{2}(\tau,\eta)=\mathcal{L}_{\eta^{\sharp}}g-\mathcal{L}_{(\varphi^{\sharp}\lrcorner\tau)^{\sharp}}g+\mathcal{L}_{\varphi^{\sharp}}\tau\,,\quad\mathrm{d}_{(g,\varphi)}\mathcal{E}_{4}(\tau,\eta)=2g(\eta,\varphi)-\tau(\varphi,\varphi)\,,$
where $\delta_{g}\tau$ denotes the divergence of $\tau$ and
$\delta^{\ast}_{g}$ denotes the formal adjoint of $\delta_{g}$.
###### Proof.
By definition, the (Gateaux) differential of the maps $\mathcal{E}_{a}$,
$a=1,\ldots,4$, at the point $(g,\varphi)$ and evaluated on $(\tau,\eta)\in
T_{(g,\varphi)}(\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M))$ is given by:
$\mathrm{d}_{(g,\varphi)}\mathcal{E}_{a}(\tau,\eta)=\lim_{t\to
0}\frac{\mathcal{E}_{a}(g+t\,\tau,\varphi+t\,\eta)-\mathcal{E}_{a}(g,\varphi)}{t}\,.$
On the other hand, recall that the differential of the map
$(g,\varphi)\mapsto\varphi^{\sharp_{g}}$ at $(g,\varphi)$ along $(\tau,\eta)$
is given by
$\eta^{\sharp_{g}}-(\varphi^{\sharp_{g}}\lrcorner\tau)^{\sharp_{g}}$. This
immediately implies:
$\mathrm{d}_{(g,\varphi)}\mathcal{E}_{4}(\tau,\eta)=2g(\eta,\varphi)-\tau(\varphi,\varphi)\,.$
Furthermore, a direct computation, using that $\varphi$ has unit norm together
with the previous equation, shows that:
$\mathrm{d}_{(g,\varphi)}\mathcal{E}_{1}(\tau,\eta)=\mathrm{d}_{g}\mathrm{Ric}(\tau)+\frac{1}{2}(\eta\otimes\varphi+\varphi\otimes\eta)-\frac{1}{2}\tau\,,$
where $\mathrm{d}_{g}\mathrm{Ric}$ denotes the differential of the Ricci map
$\mathrm{Ric}\colon\mathrm{Met}_{\omega}(M)\to\Gamma(T^{\ast}M^{\odot 2})$.
Computing this differential explicitly (see [6, Equation (1.180a)]) gives the
result in the statement upon use of $\mathrm{Tr}_{g}(\tau)=0$. Similarly,
computing for $\mathrm{d}_{(g,\varphi)}\mathcal{E}_{2}(\tau,\eta)$ we obtain:
$\mathrm{d}_{(g,\varphi)}\mathcal{E}_{2}(\tau,\eta)=\mathcal{L}_{\eta^{\sharp}}g-\mathcal{L}_{(\varphi^{\sharp}\lrcorner\tau)^{\sharp}}g+\mathcal{L}_{\varphi^{\sharp}}\tau\,,$
where we have used, as remarked above, that the differential of the map
$(g,\varphi)\mapsto\varphi^{\sharp_{g}}$ is given by
$\eta^{\sharp}-(\varphi^{\sharp}\lrcorner\tau)^{\sharp}$. The differential of
$\mathcal{E}_{3}\colon\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M)\to\Omega^{2}(M)$
follows easily since $\mathcal{E}_{3}$ does not depend on $g$. ∎
The kernel of $\mathrm{d}_{(g,\varphi)}\mathcal{E}\colon
T_{(g,\varphi)}(\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M))\to\Gamma(T^{\ast}M^{\odot
2})\times\Gamma(T^{\ast}M^{\odot 2})\times\Omega^{2}(M)\times C^{\infty}(M)$
describes the space of infinitesimal deformations of $(g,\varphi)$ that
preserve the norm of $\varphi$ and the Riemannian volume form induced by $g$.
These conditions eliminate the _spurious_ deformations given by constant
rescalings of $\varphi$ or homotheties of the metric. The group of
diffeomorphisms $\mathrm{Diff}_{\omega}(M)$ that preserves the fixed volume
form $\omega$, again completed in the Sobolev norm $\mathrm{H}^{s}$, acts
naturally on $\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M)$ through pull-back.
Recall that the tangent space of $\mathrm{Diff}_{\omega}(M)$ at the identity
corresponds to the vector fields on $M$ that preserve $\omega$. This action
preserves $\mathcal{E}^{-1}(0)$ and hence maps solutions to solutions. The
moduli space of NS-NS pairs $(g,\varphi)$ with constant norm $\varphi$ and
associated Riemannian volume form equal to $\omega$ is defined as:
$\mathfrak{M}^{0}_{\omega}(M):=\mathcal{E}^{-1}(0)/\mathrm{Diff}_{\omega}(M)\,,$
endowed with the quotient topology. Define:
$\mathcal{O}_{(g,\varphi)}:=\left\\{(u^{\ast}g,u^{\ast}\varphi)\,\,|\,\,u\in\mathrm{Diff}_{\omega}(M)\right\\}\,,$
to be the orbit of the diffeomorphism group passing through $(g,\varphi)$. The
tangent space to the orbit at $(g,\varphi)\in\mathcal{O}_{(g,\varphi)}$ can be
computed to be:
$T_{(g,\varphi)}\mathcal{O}_{(g,\varphi)}=\left\\{(\mathcal{L}_{v}g,\mathrm{d}(\iota_{v}\varphi))\,,\,\,v\in\mathfrak{X}(M)\,\,|\,\,\mathcal{L}_{v}\omega=0\right\\}\,,$
where $\mathcal{L}$ denotes the Lie derivative.
###### Lemma 3.16.
The vector subspace $T_{(g,\varphi)}\mathcal{O}_{(g,\varphi)}\subset
T_{(g,\varphi)}(\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M))$ is closed in the
Hilbert space $T_{(g,\varphi)}(\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M))$.
###### Proof.
Follows from the fact that the differential operator $\mathfrak{X}(M)\ni
v\mapsto(\mathcal{L}_{v}g,\mathrm{d}\iota_{v}\varphi)$ has injective symbol. ∎
By the previous lemma, the $L^{2}$ orthogonal complement of
$T_{(g,\varphi)}\mathcal{O}_{(g,\varphi)}$ is a Hilbert subspace of
$T_{(g,\varphi)}(\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M))$, which is given
by:
$T_{(g,\varphi)}\mathcal{O}_{(g,\varphi)}^{\perp}=\left\\{(\tau,\eta)\in
T_{(g,\varphi)}(\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M))\,\,|\,\,(\nabla^{g})^{\ast}\tau=0\,,\,\,\delta^{g}\eta=0\right\\}\,.$
By an extension of Ebin’s celebrated slice theorem [12], for every pair
$(g,\varphi)$ the action of $\mathrm{Diff}_{\omega}(M)$ on
$\mathrm{Met}_{\omega}(M)\times\Omega^{1}(M)$ admits a _slice_
$\mathcal{S}_{(g,\varphi)}$ whose tangent space at $(g,\varphi)$ is precisely
$T_{(g,\varphi)}\mathcal{O}_{(g,\varphi)}^{\perp}$. Therefore, by applying
standard Kuranishi theory for differential-geometric moduli spaces, the
_virtual_ tangent space of $\mathfrak{M}^{0}_{\omega}(M)$ at the equivalence
class $[g,\varphi]$ defined by $(g,\varphi)\in\mathcal{E}^{-1}(0)$ in
$\mathfrak{M}^{0}_{\omega}(M)$, is given by:
$T_{[g,\varphi]}\mathfrak{M}^{0}_{\omega}(M):=\operatorname{Ker}(\mathrm{d}_{(g,\varphi)}\mathcal{E})\cap\operatorname{Ker}((\nabla^{g})^{\ast}\oplus\delta^{g})\,.$
Using the terminology introduced by Koiso [35, 36] in the study of
deformations of Einstein metrics and Yang-Mills connections, we will call
elements of $T_{[g,\varphi]}\mathfrak{M}^{0}_{\omega}(M)$ _essential
deformations_ of $(g,\varphi)$. Roughly speaking, essential deformations are
infinitesimal deformations of $(g,\varphi)$ that cannot be eliminated via the
infinitesimal action of the diffeomorphism group.
###### Lemma 3.17.
The pair $(\tau,\eta)\in\Gamma(T^{\ast}M^{\odot 2})\times\Omega^{1}(M)$ is an
essential deformation of the NS-NS pair $(g,\varphi)$ if and only if
$\eta=\lambda\,\varphi$ for a constant $\lambda\in\mathbb{R}$ and the
following equations are satisfied:
$\displaystyle\Delta^{g}_{L}\tau+2\lambda\,\varphi\otimes\varphi-\tau=0\,,\quad\mathcal{L}_{(\varphi^{\sharp}\lrcorner\tau)^{\sharp}}g=\mathcal{L}_{\varphi^{\sharp}}\tau\,,\quad(\nabla^{g})^{\ast}\tau=0\,,\quad\mathrm{Tr}_{g}(\tau)=0\,.$
(3.10)
###### Proof.
A pair $(\tau,\eta)\in\Gamma(T^{\ast}M^{\odot 2})\times\Omega^{1}(M)$ is an
essential deformation if and only if:
$\mathrm{d}_{(g,\varphi)}\mathcal{E}(\tau,\eta)=0\,,\quad(\nabla^{g})^{\ast}\tau=0\,,\qquad\delta^{g}\eta=0\,,\quad\mathrm{Tr}_{g}(\tau)=0\,.$
By Lemma 3.15, we have
$\mathrm{d}_{(g,\varphi)}\mathcal{E}_{3}(\tau,\eta)=\mathrm{d}\eta$ hence if
$(\tau,\eta)$ is an essential deformation then $\eta$ is closed and co-closed
whence harmonic. Since $b^{1}(M)=1$ and $\varphi$ is parallel, in particular
harmonic, we conclude that $\eta=\lambda\varphi$ for a real constant
$\lambda\in\mathbb{R}$. Plugging $\eta=\lambda\varphi$ into the explicit
expression of $\mathrm{d}_{(g,\varphi)}\mathcal{E}(\tau,\eta)=0$, given in
Lemma 3.15, we obtain equations (3.10) and hence we conclude. ∎
Since, by assumption, $M$ admits NS-NS pairs $(g,\varphi)$ with non-vanishing
Lee class $[\varphi]$, Proposition 3.4 implies that $(M,g)$ is a manifold of
type $S^{1}\times S^{3}$ and, consequently, it is a fibre bundle over $S^{1}$
with fiber $\Sigma=S^{3}/\Gamma$, $\Gamma\subset\mathrm{SO}(4)$, as described
earlier in this section. For simplicity of exposition, we will assume that $(M,g)$
is isometric to:
$(M,g)=(S^{1}\times\Sigma,\varphi\otimes\varphi+h)\,,$
where $h$ is a Riemannian metric on $\Sigma$. Analogous results can be
obtained in the general case by using the integrable distribution defined by
the kernel of $\varphi$. Given $\tau\in\Gamma(T^{\ast}M^{\odot 2})$ we
decompose it according to the orthogonal decomposition defined by $g$, that
is:
$\tau=\mathfrak{f}\,\varphi\otimes\varphi+\varphi\odot\beta+\tau^{\perp}\,,$
where the superscript $\perp$ denotes projection along $\Sigma$ and $\beta$ is
a 1-form along $\Sigma$.
###### Proposition 3.18.
The pair
$(\tau=\mathfrak{f}\,\varphi\otimes\varphi+\varphi\odot\beta+\tau^{\perp},\eta=\lambda\,\varphi)\in\Gamma(T^{\ast}M^{\odot
2})\times\Omega^{1}(M)$ is an essential deformation of the NS-NS pair
$(g,\varphi)$ only if:
$\displaystyle\lambda=0\,,\qquad\mathfrak{f}=0\,,\qquad\nabla^{g}_{\varphi^{\sharp}}\beta=0\,,\qquad\tau^{\perp}=0\,,\qquad\mathcal{L}_{\beta^{\sharp}}h=0\,.$
###### Proof.
A pair $(\tau,\eta)$ is an essential deformation if and only if conditions
(3.10) hold. Given the decomposition
$\tau=\mathfrak{f}\,\varphi\otimes\varphi+\varphi\odot\beta+\tau^{\perp}$, we
impose first the _slice_ condition $(\nabla^{g})^{\ast}\tau=0$. We obtain:
$(\nabla^{g})^{\ast}\tau=-\mathrm{d}\mathfrak{f}(\varphi^{\sharp})\varphi-\nabla^{g}_{\varphi^{\sharp}}\beta+\varphi\,\delta^{g}\beta+(\nabla^{h})^{\ast}\tau^{\perp}=0\,,$
hence $\mathrm{d}\mathfrak{f}(\varphi^{\sharp})=\delta^{g}\beta$ and
$\nabla^{g}_{\varphi^{\sharp}}\beta=(\nabla^{h})^{\ast}\tau^{\perp}$. On the other
hand, equation
$\mathcal{L}_{(\varphi^{\sharp}\lrcorner\tau)^{\sharp}}g=\mathcal{L}_{\varphi^{\sharp}}\tau$
reduces to:
$\mathrm{d}\mathfrak{f}=0\,,\qquad\varphi\odot\nabla^{g}_{\varphi^{\sharp}}\beta+\mathcal{L}_{\varphi^{\sharp}}\tau^{\perp}=\mathcal{L}_{\beta^{\sharp}}h\,,$
where we have used that
$\varphi^{\sharp}\lrcorner\tau=\mathfrak{f}\,\varphi+\beta$. Hence, isolating
by type we obtain $\nabla^{g}_{\varphi^{\sharp}}\beta=0$ and
$\mathcal{L}_{\varphi^{\sharp}}\tau^{\perp}=\mathcal{L}_{\beta^{\sharp}}h$.
Note that since $\mathfrak{f}$ is constant we have $\delta^{g}\beta=0$. We
decompose now the first equation in (3.10). For this, we first compute:
$\mathrm{Ric}^{g}\circ_{g}\tau+\tau\circ_{g}\mathrm{Ric}^{g}=\frac{1}{2}(h\circ\tau+\tau\circ
h)=\frac{1}{2}\varphi\odot\beta+\tau^{\perp}\,,$
as well as:
$(\nabla^{g})^{\ast}\nabla^{g}\tau=(\nabla^{g})^{\ast}\nabla^{g}(\mathfrak{f}\,\varphi\otimes\varphi+\varphi\odot\beta+\tau^{\perp})=\varphi\odot(\nabla^{g})^{\ast}\nabla^{g}\beta+(\nabla^{g})^{\ast}\nabla^{g}\tau^{\perp}\,,$
which in turn implies:
$\displaystyle\Delta_{L}^{g}\tau=\varphi\odot(\nabla^{g})^{\ast}\nabla^{g}\beta+(\nabla^{g})^{\ast}\nabla^{g}\tau^{\perp}+\frac{1}{2}\varphi\odot\beta+\tau^{\perp}-2o^{h}(\tau^{\perp})$
$\displaystyle=\Delta_{L}^{h}\tau^{\perp}+\varphi\odot(\frac{1}{2}\beta+(\nabla^{g})^{\ast}\nabla^{g}\beta)\,.$
Hence, the first equation in (3.10) is equivalent to:
$\Delta_{L}^{h}\tau^{\perp}+\varphi\odot((\nabla^{g})^{\ast}\nabla^{g}\beta-\frac{1}{2}\beta)+2\lambda\,\varphi\otimes\varphi-\mathfrak{f}\,\varphi\otimes\varphi-\tau^{\perp}=0\,.$
Solving by type, we obtain:
$\Delta_{L}^{h}\tau^{\perp}=\tau^{\perp}\,,\quad(\nabla^{g})^{\ast}\nabla^{g}\beta=\frac{1}{2}\beta\,,\quad\mathfrak{f}=2\lambda\,.$
Solutions to the first equation above correspond to infinitesimal essential
Einstein deformations of $(\Sigma,h)$, which by [37] are necessarily trivial
since $(\Sigma,h)$ is covered by the round sphere. Hence $\tau^{\perp}=0$.
This in turn implies $\mathcal{L}_{\beta^{\sharp}}h=0$. The second equation
above follows automatically from $\beta^{\sharp}$ being a Killing vector field
on an Einstein three-manifold with Einstein constant $1/2$. Moreover, the
third equation above uniquely determines $\mathfrak{f}$ in terms of $\lambda$.
Putting all together, we obtain:
$\tau=2\lambda\,\varphi\otimes\varphi+\varphi\odot\beta\,.$
With these provisos in mind, equation $\mathrm{Tr}_{g}(\tau)=0$ is equivalent
to $\lambda=0$ whence:
$\tau=\varphi\odot\beta\,.$
Conversely, such $\tau$ solves all equations in (3.10) with $\eta=0$ and hence
we conclude. ∎
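The converse direction can be made explicit. For $\tau=\varphi\odot\beta$ with $\beta$ the pull-back of a 1-form on $\Sigma$ whose dual $\beta^{\sharp}$ is Killing, the decomposition of $\Delta^{g}_{L}$ obtained in the proof together with the Bochner identity $(\nabla^{g})^{\ast}\nabla^{g}\beta=\mathrm{Ric}^{h}(\beta)=\frac{1}{2}\beta$ gives:
$\Delta^{g}_{L}(\varphi\odot\beta)=\varphi\odot\left((\nabla^{g})^{\ast}\nabla^{g}\beta+\frac{1}{2}\beta\right)=\varphi\odot\beta\,,$
so the first equation in (3.10) holds with $\lambda=0$; the remaining equations in (3.10) follow from $\nabla^{g}_{\varphi^{\sharp}}\beta=0$, $\delta^{h}\beta=0$ and $\mathrm{Tr}_{g}(\varphi\odot\beta)=2g(\varphi,\beta)=0$.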
The previous proposition shows that the 1-form $\beta$ descends to a 1-form on
$\Sigma$ whose metric dual is a Killing vector field of $h$. Denote by
$\mathcal{K}(\Sigma,h)$ the vector space of Killing vector fields on
$(\Sigma,h)$.
###### Theorem 3.19.
There exists a canonical bijection:
$T_{[g,\varphi]}\mathfrak{M}^{0}_{\omega}(M)\to\mathcal{K}(\Sigma,h)\,,\quad(\tau,0)\mapsto\beta^{\sharp}\,,$
where, for every $(\tau,0)\in T_{[g,\varphi]}\mathfrak{M}^{0}_{\omega}(M)$ we write
uniquely $\tau=\varphi\odot\beta$.
###### Proof.
By Lemma 3.17 and Proposition 3.18 a pair $(\tau,\eta)$ is an essential
deformation, that is, belongs to $T_{[g,\varphi]}\mathfrak{M}^{0}_{\omega}(M)$
if and only if $\eta=0$ and $\tau=\varphi\odot\beta$ for a Killing vector
field $\beta^{\sharp}$. This implies the statement of the theorem. ∎
Taking $(\Sigma,h)$ to be the round sphere and assuming $M=S^{1}\times S^{3}$
we have $\dim(\mathcal{K}(\Sigma,h))=6$ and thus
$\dim(T_{[g,\varphi]}\mathfrak{M}^{0}_{\omega}(S^{1}\times S^{3}))=6$. On the
other hand, in Subsection 3.3 we constructed the full moduli space of
manifolds of type $S^{1}\times S^{3}$ and in the case in which $(\Sigma,h)$ is
the round sphere we proved that it is two-dimensional after removing the
spurious deformation consisting of rescalings of $\varphi$. Since
$\dim(T_{[g,\varphi]}\mathfrak{M}^{0}_{\omega}(S^{1}\times S^{3}))=6$, we
conclude that the space of essential deformations is obstructed and there
exist _four directions_ in
$T_{[g,\varphi]}\mathfrak{M}^{0}_{\omega}(S^{1}\times S^{3})$ which cannot be
integrated and therefore do not correspond to honest deformations.
## 4. Heterotic solitons with parallel torsion
In this section we restrict our attention to Heterotic solitons with constant
dilaton and parallel non-vanishing torsion, that is, Heterotic solitons that
satisfy $\varphi=0$ and $\nabla^{g}\alpha=0$ with $\alpha\neq 0$. These
Heterotic solitons with constant dilaton, in the specific case of four
dimensions, can never be supersymmetric since the second equation in (2.3) is
equivalent to $\alpha=\varphi$. Therefore, this class of Heterotic solitons
provides a convenient framework to explore non-supersymmetric solutions of
Heterotic supergravity.
### 4.1. Null Heterotic solitons
Assuming $\varphi=0$, the Heterotic system reduces to the following system of
equations:
$\displaystyle\mathrm{Ric}^{g}+\frac{1}{2}\alpha\otimes\alpha-\frac{1}{2}|\alpha|^{2}_{g}\,g+\kappa\,\mathfrak{v}(\mathcal{R}_{\nabla^{\alpha}}\circ\mathcal{R}_{\nabla^{\alpha}})=0\,,$
(4.1)
$\displaystyle\mathrm{d}\alpha=0\,,\quad\delta^{g}\alpha=\kappa(|\mathcal{R}^{+}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}-|\mathcal{R}^{-}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}})\,,\quad\kappa|\mathcal{R}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}=|\alpha|^{2}_{g}\,,$
(4.2)
for pairs $(g,\alpha)$, where $g$ is a Riemannian metric on $M$ and
$\alpha\in\Omega^{1}(M)$ is a 1-form.
###### Definition 4.1.
The _null Heterotic soliton system_ consists of equations (4.1) and (4.2).
Solutions of the null Heterotic soliton system are _null Heterotic solitons_.
In the following we will study a particular case of the null Heterotic system
that is obtained by imposing $\alpha$ to be parallel. Assuming that
$\nabla^{g}\alpha=0$, the null Heterotic system further reduces to:
$\displaystyle\mathrm{Ric}^{g}+\frac{1}{2}\alpha\otimes\alpha-\frac{1}{2}|\alpha|^{2}_{g}\,g+\kappa\,\mathfrak{v}(\mathcal{R}_{\nabla^{\alpha}}\circ\mathcal{R}_{\nabla^{\alpha}})=0\,,$
(4.3)
$\displaystyle|\mathcal{R}^{+}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}=|\mathcal{R}^{-}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}\,,\quad\kappa|\mathcal{R}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}=|\alpha|^{2}_{g}\,.$
(4.4)
Throughout this section, $\mathrm{Conf}_{\kappa}(M)$ will denote the set of
pairs $(g,\alpha)$ as described above, with $\alpha$ being a non-vanishing
1-form satisfying $\nabla^{g}\alpha=0$, and $\mathrm{Sol}_{\kappa}(M)$ will
denote the space of null Heterotic solitons $(g,\alpha)$ with parallel 1-form
$\alpha$. Also, we shall denote a vector and its metric dual by the same
symbol. A direct computation proves the following lemma.
###### Lemma 4.2.
Let $\alpha$ be a parallel 1-form. The following formulas hold:
$\displaystyle\mathcal{R}^{\nabla^{\alpha}}_{v_{1},v_{2}}=\mathcal{R}^{g}_{v_{1},v_{2}}+\frac{1}{4}(|\alpha|^{2}_{g}v_{1}\wedge
v_{2}+\alpha(v_{2})\alpha\wedge v_{1}-\alpha(v_{1})\alpha\wedge
v_{2})\in\Omega^{2}(M)\,,\quad\forall\,\,v_{1},v_{2}\in TM\,,$
$\displaystyle\mathfrak{v}(\mathcal{R}^{\nabla^{\alpha}}\circ\mathcal{R}^{\nabla^{\alpha}})=\mathfrak{v}(\mathcal{R}^{g}\circ\mathcal{R}^{g})-|\alpha|^{2}_{g}\mathrm{Ric}^{g}+\frac{|\alpha|^{2}_{g}}{4}(|\alpha|^{2}_{g}g-\alpha\otimes\alpha)\,,$
where the curvature tensor is defined by
$\mathcal{R}^{\nabla^{\alpha}}_{v_{1},v_{2}}=\nabla^{\alpha}_{v_{1}}\nabla^{\alpha}_{v_{2}}-\nabla^{\alpha}_{v_{2}}\nabla^{\alpha}_{v_{1}}-\nabla^{\alpha}_{[v_{1},v_{2}]}$.
Exploiting the fact that $\alpha$ is parallel, equations (4.3) and (4.4) can
be further simplified.
###### Lemma 4.3.
Let $(g,\alpha)\in\mathrm{Conf}_{\kappa}(M)$. Then:
$|\mathcal{R}^{+}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}=|\mathcal{R}^{-}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}\,,$
whence the first equation in (4.4) automatically holds for every
$(g,\alpha)\in\mathrm{Conf}_{\kappa}(M)$.
###### Proof.
The equality
$|\mathcal{R}^{+}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}=|\mathcal{R}^{-}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}$
holds if and only if:
$\langle\mathcal{R}_{\nabla^{\alpha}},\ast\mathcal{R}_{\nabla^{\alpha}}\rangle_{g}=0\,.$
The fact that $\alpha$ is parallel implies
$\alpha\lrcorner\mathcal{R}_{\nabla^{\alpha}}=0$. Consequently, one can write:
$\ast\mathcal{R}_{\nabla^{\alpha}}=\alpha\wedge r\,,$
for a certain $\mathfrak{so}_{g}(M)$-valued 1-form
$r\in\Omega^{1}(M,\mathfrak{so}_{g}(M))$. Therefore:
$\langle\mathcal{R}_{\nabla^{\alpha}},\ast\mathcal{R}_{\nabla^{\alpha}}\rangle_{g}=\langle\mathcal{R}_{\nabla^{\alpha}},\alpha\wedge
r\rangle_{g}=\langle\alpha\lrcorner\mathcal{R}_{\nabla^{\alpha}},r\rangle_{g}=0\,,$
and we conclude. ∎
###### Remark 4.4.
In the following we will use on several occasions the following identity:
$\mathcal{R}^{h}_{v_{1},v_{2}}=\frac{s^{h}}{2}v_{1}\wedge
v_{2}+v_{2}\wedge\mathrm{Ric}^{h}(v_{1})+\mathrm{Ric}^{h}(v_{2})\wedge
v_{1}\,,\qquad v_{1},v_{2}\in TN\,,$
which yields the Riemann curvature tensor of a Riemannian metric $h$ on a
three-dimensional manifold $N$ in terms of its Ricci curvature
$\mathrm{Ric}^{h}$ and its scalar curvature $s^{h}$. In particular, using the
previous formula it is easy to show that the contraction
$\mathfrak{v}(\mathcal{R}^{h}\circ\mathcal{R}^{h})$, defined exactly as we did
in four dimensions in Section 2, is given by:
$\mathfrak{v}(\mathcal{R}^{h}\circ\mathcal{R}^{h})=-2\,\mathrm{Ric}^{h}\circ\mathrm{Ric}^{h}+2s^{h}\mathrm{Ric}^{h}+(2|\mathrm{Ric}^{h}|^{2}_{h}-(s^{h})^{2})h\,,$
(4.5)
where:
$\mathrm{Ric}^{h}\circ\mathrm{Ric}^{h}(v_{1},v_{2})=h(\mathrm{Ric}^{h}(v_{1}),\mathrm{Ric}^{h}(v_{2}))\,,\qquad
v_{1},v_{2}\in TN\,.$
In particular, the norm of $\mathcal{R}^{h}$ is given by:
$|\mathcal{R}^{h}|_{h}^{2}=\frac{1}{2}\mathrm{Tr}_{h}(\mathfrak{v}(\mathcal{R}^{h}\circ\mathcal{R}^{h}))=2|\mathrm{Ric}^{h}|_{h}^{2}-\frac{1}{2}(s^{h})^{2}\,.$
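The last formula follows from (4.5) by taking the trace: using $\mathrm{Tr}_{h}(\mathrm{Ric}^{h}\circ\mathrm{Ric}^{h})=|\mathrm{Ric}^{h}|^{2}_{h}$, $\mathrm{Tr}_{h}(\mathrm{Ric}^{h})=s^{h}$ and $\mathrm{Tr}_{h}(h)=3$, one computes:
$\mathrm{Tr}_{h}(\mathfrak{v}(\mathcal{R}^{h}\circ\mathcal{R}^{h}))=-2|\mathrm{Ric}^{h}|^{2}_{h}+2(s^{h})^{2}+6|\mathrm{Ric}^{h}|^{2}_{h}-3(s^{h})^{2}=4|\mathrm{Ric}^{h}|^{2}_{h}-(s^{h})^{2}\,,$
and dividing by two yields the stated norm.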
Given a pair $(g,\alpha)\in\mathrm{Conf}_{\kappa}(M)$ we denote by
$\mathcal{H}\subset TM$ the rank-three distribution defined by the kernel of
$\alpha$, which is integrable since the latter is parallel. We denote the
corresponding foliation by $\mathcal{F}_{\alpha}\subset M$.
###### Lemma 4.5.
Let $(g,\alpha)\in\mathrm{Conf}_{\kappa}(M)$ be complete. Then,
$(g,\alpha)\in\mathrm{Sol}_{\kappa}(M)$ if and only if the leaves of
$\mathcal{F}_{\alpha}$ endowed with the metric induced by $g$ are all
isometric to a complete Riemannian three-manifold $(\Sigma,h)$ satisfying:
$\displaystyle-2\kappa\,\mathrm{Ric}^{h}\circ\mathrm{Ric}^{h}+(1-2\kappa|\alpha|_{g}^{2})\mathrm{Ric}^{h}+\frac{|\alpha|_{g}^{2}}{2}(1-\kappa\,|\alpha|_{g}^{2})h=0\,,\quad
s^{h}=-\frac{1}{2}|\alpha|_{g}^{2}\,,$ (4.6)
for a certain $\kappa>0$. In particular,
$|\mathrm{Ric}^{h}|^{2}_{h}=\frac{|\alpha|_{g}^{2}}{2\kappa}(1-\frac{\kappa|\alpha|_{g}^{2}}{2})$.
###### Proof.
If $g$ is complete then standard results in foliation theory imply that
$\mathcal{F}_{\alpha}$ has no holonomy and its leaves are all diffeomorphic.
Furthermore, since $\alpha$ is parallel it is in particular Killing and its
flow preserves the metric, whence all leaves are not only diffeomorphic but
isometric to a Riemannian three-manifold $(\Sigma,h)$ when equipped with the
metric induced by $g$. Using the fact that Equation (4.3) evaluated in
$\alpha$ is automatically satisfied, it follows that it is equivalent to its
restriction to $\mathcal{H}\otimes\mathcal{H}$. Since all the leaves are
isometric, Equation (4.3) is satisfied if and only if its restriction to any
leaf is satisfied. Denoting this leaf by $(\Sigma,h)$, where $h$ is the metric
induced by $g$, the restriction of Equation (4.3) to $\Sigma$ reads:
$\displaystyle(1-\kappa|\alpha|^{2}_{g})\mathrm{Ric}^{h}-\frac{1}{2}|\alpha|^{2}_{g}\,h+\kappa\,(\mathfrak{v}(\mathcal{R}^{h}\circ\mathcal{R}^{h})+\frac{|\alpha|^{4}_{g}}{4}h)=0\,,$
where we have used Lemma 4.2 to expand
$\mathfrak{v}(\mathcal{R}^{h}\circ\mathcal{R}^{h})$. Plugging now Equation
(4.5) into the previous equation, we obtain:
$\displaystyle-2\kappa\,\mathrm{Ric}^{h}\circ\mathrm{Ric}^{h}+(1-\kappa|\alpha|^{2}_{g}+2\kappa
s^{h})\mathrm{Ric}^{h}+\left(\kappa\frac{|\alpha|^{4}_{g}}{4}-\frac{1}{2}|\alpha|^{2}_{g}+2\kappa|\mathrm{Ric}^{h}|_{h}^{2}-\kappa(s^{h})^{2}\right)h=0\,.$
(4.7)
Moreover, using Equation (4.5) and Lemma 4.2 it can be seen that the second
equation in (4.4),
$\kappa|\mathcal{R}_{\nabla^{\alpha}}|_{g,\mathfrak{v}}^{2}=|\alpha|^{2}_{g}$,
is equivalent to:
$4|\mathrm{Ric}^{h}|^{2}_{h}-(s^{h})^{2}-|\alpha|^{2}_{g}s^{h}+\frac{3}{4}|\alpha|^{4}_{g}=\frac{2}{\kappa}|\alpha|^{2}_{g}\,.$
Combining this equation with the trace of the previous one, we can isolate
both $|\mathrm{Ric}^{h}|^{2}_{h}$ and $s^{h}$. Upon substitution into Equation
(4.7), we obtain Equations (4.6). ∎
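A sketch of the elimination, writing $a=|\alpha|^{2}_{g}$ for brevity: the trace of Equation (4.7) reads:
$4\kappa|\mathrm{Ric}^{h}|^{2}_{h}-\kappa(s^{h})^{2}-\kappa\,a\,s^{h}+\frac{3\kappa a^{2}}{4}+s^{h}-\frac{3a}{2}=0\,.$
Multiplying the preceding equation for $\kappa|\mathcal{R}_{\nabla^{\alpha}}|^{2}_{g,\mathfrak{v}}=|\alpha|^{2}_{g}$ by $\kappa$ gives $4\kappa|\mathrm{Ric}^{h}|^{2}_{h}-\kappa(s^{h})^{2}-\kappa\,a\,s^{h}+\frac{3\kappa a^{2}}{4}=2a$, and subtracting this from the trace identity leaves $2a+s^{h}-\frac{3a}{2}=0$, that is, $s^{h}=-\frac{a}{2}$. Substituting back yields $|\mathrm{Ric}^{h}|^{2}_{h}=\frac{a}{2\kappa}(1-\frac{\kappa a}{2})$, as stated.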
###### Proposition 4.6.
Let $(g,\alpha)\in\mathrm{Conf}_{\kappa}(M)$ be complete and non-flat. Then,
$(g,\alpha)\in\mathrm{Sol}_{\kappa}(M)$ is a null Heterotic soliton with
parallel torsion if and only if $2\kappa|\alpha|^{2}_{g}\in\\{1,2,3\\}$ and
the leaves of $\mathcal{F}_{\alpha}$ endowed with the metric induced by $g$
are all isometric to a complete Riemannian three-manifold $(\Sigma,h)$ whose
principal Ricci curvatures $(\mu_{1},\mu_{1},\mu_{2})$ are constant and
satisfy:
* •
$\mu_{1}=-\frac{1}{4\kappa}\,,\quad\mu_{2}=\frac{1}{4\kappa}$ if
$2\kappa|\alpha|^{2}_{g}=1$.
* •
$\mu_{1}=0\,,\quad\mu_{2}=-\frac{1}{2\kappa}$ if $2\kappa|\alpha|^{2}_{g}=2$.
* •
$\mu_{1}=-\frac{1}{4\kappa}\,,\quad\mu_{2}=-\frac{1}{4\kappa}$ if
$2\kappa|\alpha|^{2}_{g}=3$.
###### Proof.
By Lemma 4.5, a pair $(g,\alpha)\in\mathrm{Conf}_{\kappa}(M)$ is a solution of
Equations (4.3) and (4.4) if and only if Equations (4.6) are satisfied. The
first equation in (4.6) gives a second-degree polynomial satisfied by the
Ricci endomorphism of $h$, whose roots are $-\frac{|\alpha|^{2}_{g}}{2}$ and
$\frac{1-|\alpha|^{2}_{g}\,\kappa}{2\kappa}$. Therefore, solving the algebraic
equation we find that the principal Ricci curvatures
$(\mu_{1},\mu_{1},\mu_{2})$ of $h$ are constant and given by one of the
following possibilities:
$\displaystyle(\mu_{1}=-\frac{|\alpha|^{2}_{g}}{2},\mu_{2}=-\frac{|\alpha|^{2}_{g}}{2})\,,\quad(\mu_{1}=-\frac{|\alpha|^{2}_{g}}{2},\mu_{2}=\frac{1-|\alpha|^{2}_{g}\,\kappa}{2\kappa})\,,$
$\displaystyle(\mu_{1}=\frac{1-|\alpha|^{2}_{g}\,\kappa}{2\kappa},\mu_{2}=-\frac{|\alpha|^{2}_{g}}{2})\,,\quad(\mu_{1}=\frac{1-|\alpha|^{2}_{g}\,\kappa}{2\kappa},\mu_{2}=\frac{1-|\alpha|^{2}_{g}\,\kappa}{2\kappa})\,.$
Imposing now that the scalar curvature of $h$ is $s^{h}=2\mu_{1}+\mu_{2}$ and
using the second equation in (4.6), we obtain the cases and relations given in
the statement of the proposition. ∎
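As a consistency check, consider the second possibility $(\mu_{1}=-\frac{a}{2},\mu_{2}=\frac{1-\kappa a}{2\kappa})$, writing $a=|\alpha|^{2}_{g}$. The constraint $s^{h}=2\mu_{1}+\mu_{2}=-\frac{a}{2}$ gives:
$-a+\frac{1-\kappa a}{2\kappa}=-\frac{a}{2}\quad\Longleftrightarrow\quad 1-\kappa a=\kappa a\quad\Longleftrightarrow\quad 2\kappa|\alpha|^{2}_{g}=1\,,$
in which case $\mu_{1}=-\frac{a}{2}=-\frac{1}{4\kappa}$ and $\mu_{2}=\frac{a}{2}=\frac{1}{4\kappa}$, recovering the first case in the statement. The third and fourth possibilities yield $2\kappa|\alpha|^{2}_{g}=2$ and $2\kappa|\alpha|^{2}_{g}=3$ analogously, while the first possibility forces $|\alpha|_{g}=0$ and is excluded by non-flatness.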
###### Remark 4.7.
Since $\alpha$ is by assumption parallel, if
$(g,\alpha)\in\mathrm{Sol}_{\kappa}(M)$ is complete then the lift
$(\hat{g},\hat{\alpha})$ of $(g,\alpha)$ to the universal cover $\hat{M}$ of
$M$ is isometric to the following model:
$(\hat{M},\hat{g},\hat{\alpha})=(\mathbb{R}\times
N,\mathrm{d}t^{2}+\hat{h}\,,|\alpha|_{g}\mathrm{d}t)\,,$
where $N$ is a simply connected three-manifold, $t$ is the Cartesian
coordinate of $\mathbb{R}$ and $\hat{h}$ is a complete Riemannian metric on
$N$ whose principal Ricci curvatures satisfy the conditions established in
Proposition 4.6. Moreover, the foliation $\mathcal{F}_{\alpha}\subset M$
associated to $\alpha$ is induced by the standard foliation of
$\mathbb{R}\times N$ whose leaves are given by $\left\\{x\right\\}\times
N\subset\mathbb{R}\times N$, $x\in\mathbb{R}$.
As stated in Proposition 4.6, the principal Ricci curvatures are constant and
they can take at most two different values, $\mu_{1}$ and $\mu_{2}$. Suppose
$\mu_{1}\neq\mu_{2}$ and assume that $\mu_{2}$ is the eigenvalue of simple
multiplicity. The eigenvectors with eigenvalue $\mu_{2}$ define a rank-one
distribution $\mathcal{V}\subset T\Sigma$, which may not be trivializable.
Therefore, passing if necessary to a covering of $\Sigma$, we assume that
$\mathcal{V}$ is trivializable and fix a unit trivialization
$\xi\in\Gamma(\mathcal{V})$, whose metric dual we denote by
$\eta\in\Omega^{1}(\Sigma)$. We define the endomorphism
$\mathcal{C}\in\operatorname{End}(T\Sigma)$ as follows:
$\mathcal{C}(v):=\nabla^{h}_{v}\xi\,,\quad v\in T\Sigma\,,$
which we split as $\mathcal{C}=\mathcal{A}+\mathcal{S}$ into its antisymmetric
part $\mathcal{A}$ and symmetric part $\mathcal{S}$.
###### Lemma 4.8.
Assume $\mu_{1}\neq\mu_{2}$. The following formulas hold:
$\displaystyle\nabla^{h}_{\xi}\xi=0\,,\quad\delta^{h}\eta=0\,,\quad\mathrm{Tr}(\mathcal{C})=0\,,\quad\nabla^{h}_{\xi}\mathcal{C}=0\,,\quad\mathcal{C}^{2}=-\frac{\mu_{2}}{2}\operatorname{Id}_{\mathcal{H}}\,,$
$\displaystyle\mathcal{L}_{\xi}\mathcal{C}=0\,,\quad\mathcal{L}_{\xi}\mathcal{A}=-2\mathcal{S}\mathcal{A}\,,\quad\mathcal{L}_{\xi}\mathcal{S}=2\mathcal{S}\mathcal{A}\,,\quad\mathcal{L}_{\xi}\mathrm{d}\eta=0\,,$
where $\mathcal{H}$ is the orthogonal complement of $\mathcal{V}$ in
$T\Sigma$.
###### Proof.
The condition of $h$ having a constant Ricci eigenvalue $\mu_{1}$ of
multiplicity two and a simple constant Ricci eigenvalue $\mu_{2}$ is
equivalent to $h$ satisfying:
$\mathrm{Ric}^{h}=\mu_{1}\,h+(\mu_{2}-\mu_{1})\,\eta\otimes\eta\,,\quad
s^{h}=2\mu_{1}+\mu_{2}\,,$
where $\eta$ is the metric dual of a unit eigenvector with eigenvalue
$\mu_{2}$. Since $\mu_{1}\neq\mu_{2}$, the divergence of the previous equation
together with the contracted Bianchi identity yields:
$\nabla^{h}_{\xi}\eta=\eta\,\delta^{h}\eta\,,$
which in turn implies, using that $\xi$ is of unit norm,
$\nabla^{h}_{\xi}\xi=0$, $\delta^{h}\eta=0$ and consequently
$\mathrm{Tr}(\mathcal{C})=0$. Furthermore, for every vector field $v$
orthogonal to $\xi$ we compute:
$\mathrm{d}\eta(\xi,v)=-\eta(\nabla^{h}_{\xi}v-\nabla^{h}_{v}\xi)=-\eta(\nabla^{h}_{\xi}v)=-h(\xi,\nabla^{h}_{\xi}v)=0\,,$
implying $\iota_{\xi}\mathrm{d}\eta=0$. Since $\mathcal{C}$ is trace-
free, its square satisfies:
$\mathcal{C}^{2}=\frac{1}{2}\mathrm{Tr}(\mathcal{C}^{2})\operatorname{Id}_{\mathcal{H}}\,.$
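This is the rank-two Cayley–Hamilton identity; spelled out for convenience (our computation, using that $\mathcal{C}$ takes values in $\mathcal{H}$, since $|\xi|_{h}=1$, and $\mathcal{C}(\xi)=0$, so $\mathcal{C}$ restricts to an endomorphism of the rank-two bundle $\mathcal{H}$):

```latex
% Cayley–Hamilton for the restriction of C to the rank-two bundle H:
\mathcal{C}^{2}-\mathrm{Tr}(\mathcal{C})\,\mathcal{C}+\det(\mathcal{C})\operatorname{Id}_{\mathcal{H}}=0\,,
% and, for any trace-free endomorphism of a rank-two bundle:
\det(\mathcal{C})=\tfrac{1}{2}\left(\mathrm{Tr}(\mathcal{C})^{2}-\mathrm{Tr}(\mathcal{C}^{2})\right)=-\tfrac{1}{2}\mathrm{Tr}(\mathcal{C}^{2})\,,
% which together give the displayed identity C^2 = (1/2) Tr(C^2) Id_H.
```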
On the other hand, using Remark 4.4 we obtain:
$\mathcal{R}^{h}_{v,\xi}=\frac{\mu_{2}}{2}\,\eta\wedge
v\,,\qquad\mathcal{R}^{h}_{v_{1},v_{2}}=\frac{\mu_{2}-2\mu_{1}}{2}\,v_{1}\wedge
v_{2}\,.$
where $v,v_{1},v_{2}\in\mathfrak{X}(\Sigma)$ are orthogonal to $\xi$. Taking
the interior product with $\xi$ in the first equation above we obtain:
$\frac{\mu_{2}}{2}v=\mathcal{R}^{h}_{v,\xi}\xi=-\nabla_{\xi}^{h}\nabla_{v}^{h}\xi-\nabla_{[v,\xi]}^{h}\xi=-\nabla_{\xi}^{h}(\mathcal{C}(v))+\mathcal{C}(\nabla_{\xi}^{h}v)-\mathcal{C}^{2}(v)=-(\mathcal{C}^{2}+\nabla^{h}_{\xi}\mathcal{C})(v)\,.$
This shows that $\nabla_{\xi}^{h}\mathcal{C}$ restricted to $\mathcal{H}$ is a
multiple of $\mathrm{Id}_{\mathcal{H}}$, whence it vanishes since it is trace-
free. We conclude:
$\nabla_{\xi}^{h}\mathcal{C}=0,\qquad\mathcal{C}^{2}=-\frac{\mu_{2}}{2}\mathrm{Id}_{\mathcal{H}}\,.$
(4.8)
Clearly $(\mathcal{L}_{\xi}\mathcal{C})(\xi)=0$. For $v\in\mathcal{H}$, we
compute:
$(\mathcal{L}_{\xi}\mathcal{C})(v)=\mathcal{L}_{\xi}(\mathcal{C}(v))-\mathcal{C}(\mathcal{L}_{\xi}v)=\nabla^{h}_{\xi}(\mathcal{C}(v))-\nabla^{h}_{\mathcal{C}(v)}\xi-\mathcal{C}(\nabla^{h}_{\xi}v)+\mathcal{C}(\nabla^{h}_{v}\xi)=0\,,$
upon use of $\nabla^{h}_{\xi}\mathcal{C}=0$. Furthermore, we have:
$\displaystyle(\mathcal{L}_{\xi}\mathcal{A})(v)=\mathcal{L}_{\xi}(\mathcal{A}(v))-\mathcal{A}(\mathcal{L}_{\xi}v)=\nabla^{h}_{\xi}(\mathcal{A}(v))-\nabla^{h}_{\mathcal{A}(v)}\xi-\mathcal{A}(\nabla^{h}_{\xi}v)+\mathcal{A}(\nabla^{h}_{v}\xi)$
$\displaystyle=-\mathcal{C}(\mathcal{A}(v))+\mathcal{A}(\mathcal{C}(v))=-2\mathcal{S}\mathcal{A}(v)\,,$
where we have used $\nabla^{h}_{\xi}\mathcal{A}=0$. A similar computation,
using $\nabla^{h}_{\xi}\mathcal{S}=0$ shows that
$\mathcal{L}_{\xi}\mathcal{S}=2\mathcal{S}\mathcal{A}$ whence
$\mathcal{L}_{\xi}\mathcal{C}=0$. The last equation in the statement is a
direct consequence of Cartan’s formula for the Lie derivative of a form and
hence we conclude. ∎
In the following result, we denote by $t$ the Cartesian coordinate of
$\mathbb{R}$ and we denote by $\mathbb{H}$ the three-dimensional hyperbolic
space equipped with a metric of constant negative sectional curvature.
Furthermore, we denote by $\mathrm{E}(1,1)$ the simply connected group of
rigid motions of two-dimensional Minkowski space. This is a solvable and
unimodular Lie group, see [39] for more details.
###### Theorem 4.9.
Let $M$ be a compact and oriented four-manifold and $\kappa>0$. A non-flat
pair $(g,\alpha)\in\mathrm{Conf}_{\kappa}(M)$ is a null Heterotic soliton with
parallel torsion if and only if one of the following holds:
1. (1)
The relations $\kappa|\alpha|^{2}_{g}=1$ and
$(\mu_{1},\mu_{2})=(-\frac{1}{4\kappa},\frac{1}{4\kappa})$ hold. In particular,
there exists a double cover of $(\Sigma,h)$ that admits a Sasakian structure
$(h_{S},\xi_{S})$ determined by:
$\xi_{S}:=\sqrt{\dfrac{2}{\mu_{2}}}\xi\,,\quad\mathrm{Ric}^{h}(\xi)=\frac{1}{4\kappa}\xi\,,\quad|\xi|^{2}_{h}=1\,,\quad\xi\in\mathfrak{X}(\Sigma)\,,$
as well as:
$h_{S}(v_{1},v_{2})=\left\\{\begin{matrix}-2h(\mathcal{A}\circ\mathcal{C}(v_{1}),v_{2})&\mathrm{if}\quad
v_{1},v_{2}\in\mathcal{H}\\\ \\\ 0&\mathrm{if}\quad v_{1}\in\mathcal{H},\
v_{2}\in\mathrm{Span}(\xi)\\\ \\\
\dfrac{\mu_{2}}{2}h(v_{1},v_{2})&\quad\,\mathrm{if}\quad
v_{1},v_{2}\in\mathrm{Span}(\xi)\end{matrix}\right.$
where $(\Sigma,h)$ denotes the typical leaf of the foliation
$\mathcal{F}_{\alpha}\subset M$ defined by $\alpha$.
2. (2)
Relation $\kappa|\alpha|^{2}_{g}=1$ holds and the lift
$(\hat{g},\hat{\alpha})$ of $(g,\alpha)$ to the universal cover $\hat{M}$ of
$M$ is isometric to either
$\mathbb{R}\times\widetilde{\mathrm{Sl}}(2,\mathbb{R})$ or
$\mathbb{R}\times\mathrm{E}(1,1)$ equipped with a left-invariant metric with
constant principal Ricci curvatures given by $(0,0,-\frac{1}{2\kappa})$ and
$\hat{\alpha}=|\alpha|_{g}\mathrm{d}t$.
3. (3)
Relation $\kappa|\alpha|^{2}_{g}=\frac{3}{2}$ holds and the lift
$(\hat{g},\hat{\alpha})$ of $(g,\alpha)$ to the universal cover $\hat{M}$ of
$M$ is isometric to $\mathbb{R}\times\mathbb{H}$ equipped with the standard
product metric of scalar curvature $-\frac{3}{4\kappa}$ and
$\hat{\alpha}=|\alpha|_{g}\mathrm{d}t$.
###### Remark 4.10.
Reference [39, Corollary 4.7] proves that both $\widetilde{\mathrm{Sl}}(2,\mathbb{R})$
and $\mathrm{E}(1,1)$ do admit left-invariant Riemannian metrics with principal
Ricci curvatures $(0,0,-\frac{1}{2\kappa})$.
###### Proof.
Let $(g,\alpha)\in\mathrm{Sol}_{\kappa}(M)$. To prove the statement it is
enough to assume that $M$ is a simply connected four-manifold admitting a co-
compact discrete group acting on $(M,g)$ by isometries that preserve $\alpha$.
In that case, $(M,g)=(\mathbb{R}\times\Sigma,\mathrm{d}t^{2}+h)$ and
$\alpha=|\alpha|_{g}\mathrm{d}t$, see Remark 4.7. Assume first that
$\mu_{1}=\mu_{2}$. Then, Proposition 4.6 immediately implies that $(\Sigma,h)$
is isometric to $\mathbb{H}$ equipped with the standard metric of scalar
curvature $-\frac{3}{4\kappa}$, whence item $(3)$ follows. Therefore assume
that $\mu_{1}\neq\mu_{2}$ and, possibly going to a double cover, denote by
$\xi$ a unit-norm eigenvector of $\mathrm{Ric}^{h}$ with simple eigenvalue
$\mu_{2}$. Furthermore, assume $\mu_{2}\neq 0$ since by Proposition 4.6,
$\mu_{2}=0$ is not allowed. Using the notation introduced in Lemma 4.8,
consider the decomposition $\mathcal{C}=\mathcal{S}+\mathcal{A}$ into its
symmetric and skew-symmetric parts and let $\Sigma_{0}\subset\Sigma$ denote a
connected component of the open set of $\Sigma$ where $\mathcal{S}$ and
$\mathcal{A}$ are both non-vanishing. Since $\mathcal{A}$ is skew and
$\operatorname{tr}(\mathcal{S})=0$, there exist smooth positive functions
$\mathfrak{s}$ and $\mathfrak{a}$ on $\Sigma_{0}$ with
$\mathcal{S}^{2}=\mathfrak{s}^{2}\operatorname{Id}_{\mathcal{H}}$ and
$\mathcal{A}^{2}=-\mathfrak{a}^{2}\operatorname{Id}_{\mathcal{H}}$. By Lemma
4.8 we obtain:
$\mathfrak{a}^{2}=\mathfrak{s}^{2}+\frac{\mu_{2}}{2}\,.$ (4.9)
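One way to see this (our computation): on the rank-two bundle $\mathcal{H}$ a trace-free symmetric endomorphism anticommutes with a skew-symmetric one, so the cross terms in $\mathcal{C}^{2}$ cancel:

```latex
\mathcal{C}^{2}=(\mathcal{S}+\mathcal{A})^{2}
=\mathcal{S}^{2}+\underbrace{\mathcal{S}\mathcal{A}+\mathcal{A}\mathcal{S}}_{=0}+\mathcal{A}^{2}
=(\mathfrak{s}^{2}-\mathfrak{a}^{2})\operatorname{Id}_{\mathcal{H}}\,,
```

and comparing with $\mathcal{C}^{2}=-\frac{\mu_{2}}{2}\operatorname{Id}_{\mathcal{H}}$ from Lemma 4.8 yields (4.9).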
On $\Sigma_{0}$ we can diagonalize $\mathcal{S}$ through a smooth orthonormal
frame $(u_{1},u_{2})$ satisfying $\mathcal{S}(u_{1})=\mathfrak{s}u_{1}$ and
$\mathcal{S}(u_{2})=-\mathfrak{s}u_{2}$. Moreover, by replacing $u_{2}$ with
its opposite if necessary, we can assume that
$\mathcal{A}(u_{1})=\mathfrak{a}u_{2}$ and
$\mathcal{A}(u_{2})=-\mathfrak{a}u_{1}$. We have:
$\nabla_{u_{1}}^{h}\xi=\mathfrak{s}\,u_{1}+\mathfrak{a}\,u_{2}\,,\qquad\nabla_{u_{2}}^{h}\xi=-\mathfrak{s}\,u_{2}-\mathfrak{a}\,u_{1}\,.$
(4.10)
Moreover, by Lemma 4.8 we have $\nabla_{\xi}^{h}\mathcal{S}=0$ and
$\nabla_{\xi}^{h}\mathcal{A}=0$, which, together with the assumption
$\mu_{2}\neq 0$ implies:
$\xi(\mathfrak{a})=\xi(\mathfrak{s})=0,\qquad\nabla_{\xi}^{h}u_{1}=\nabla_{\xi}^{h}u_{2}=0\,.$
(4.11)
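A sketch of how these identities follow (our computation, valid on $\Sigma_{0}$, where $\mathfrak{s}>0$):

```latex
% Since ∇_ξ S = 0 and S² = 𝔰² Id_H, differentiating along ξ gives
0=\nabla^{h}_{\xi}(\mathcal{S}^{2})=\xi(\mathfrak{s}^{2})\operatorname{Id}_{\mathcal{H}}
\quad\Longrightarrow\quad \xi(\mathfrak{s})=0\,,
% and ξ(𝔞) = 0 follows in the same way from ∇_ξ A = 0 and A² = -𝔞² Id_H. Moreover,
\mathcal{S}(\nabla^{h}_{\xi}u_{1})=\nabla^{h}_{\xi}(\mathcal{S}u_{1})
=\xi(\mathfrak{s})\,u_{1}+\mathfrak{s}\,\nabla^{h}_{\xi}u_{1}
=\mathfrak{s}\,\nabla^{h}_{\xi}u_{1}\,,
% so ∇_ξ u₁ lies in the 𝔰-eigenspace of S, spanned by u₁ (as 𝔰 ≠ -𝔰); since
% ∇_ξ u₁ is orthogonal to ξ (because ∇_ξ ξ = 0) and to u₁ (unit norm), it vanishes.
```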
Furthermore, Equation (4.10) implies $h([u_{1},u_{2}],\xi)=-2\mathfrak{a}$, so
there exist two smooth functions $a,b$ on $\Sigma_{0}$ such that
$[u_{1},u_{2}]=au_{1}+bu_{2}-2\mathfrak{a}\xi.$ The Koszul formula then gives:
$\nabla_{u_{1}}^{h}u_{2}=a\,u_{1}-\mathfrak{a}\,\xi\,,\,\,\nabla_{u_{2}}^{h}u_{1}=-b\,u_{2}+\mathfrak{a}\,\xi\,,\,\,\nabla_{u_{1}}^{h}u_{1}=-a\,u_{2}-\mathfrak{s}\,\xi\,,\,\,\nabla_{u_{2}}^{h}u_{2}=b\,u_{1}+\mathfrak{s}\,\xi\,.$
(4.12)
Using Lemma 4.8 as well as equations (4.10)–(4.12) we can compute the
following components of the Riemann tensor of $h$ along $\Sigma_{0}$, which
must vanish as a consequence of Remark 4.4. We obtain:
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\mathcal{R}^{h}_{u_{1},\xi}u_{2}=-\nabla_{\xi}^{h}(a\,u_{1}-\mathfrak{a}\,\xi)-\nabla_{\mathfrak{s}\,u_{1}+\mathfrak{a}\,u_{2}}^{h}u_{2}$
$\displaystyle=$
$\displaystyle-\xi(a)u_{1}-\mathfrak{s}(a\,u_{1}-\mathfrak{a}\,\xi)-\mathfrak{a}\,(b\,u_{1}+\mathfrak{s}\,\xi)=-(\xi(a)+\mathfrak{s}\,a+\mathfrak{a}\,b)\,u_{1}\,,$
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\mathcal{R}^{h}_{u_{2},\xi}u_{1}=-\nabla_{\xi}^{h}(-b\,u_{2}+\mathfrak{a}\,\xi)+\nabla_{\mathfrak{a}\,u_{1}+\mathfrak{s}\,u_{2}}^{h}u_{1}$
$\displaystyle=$
$\displaystyle\xi(b)u_{2}-\mathfrak{a}\,(a\,u_{2}+\mathfrak{s}\,\xi)-\mathfrak{s}\,(-b\,u_{2}+\mathfrak{a}\,\xi)=(\xi(b)-\mathfrak{a}\,a-\mathfrak{s}\,b)u_{2}\,,$
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\mathcal{R}^{h}_{u_{1},u_{2}}\xi=-\nabla_{u_{1}}^{h}(\mathfrak{a}\,u_{1}+\mathfrak{s}\,u_{2})-\nabla_{u_{2}}^{h}(\mathfrak{s}\,u_{1}+\mathfrak{a}\,u_{2})-\nabla_{a\,u_{1}+b\,u_{2}-2\mathfrak{a}\,\xi}^{h}\xi$
$\displaystyle=$ $\displaystyle-
u_{1}(\mathfrak{a})u_{1}+\mathfrak{a}(a\,u_{2}+\mathfrak{s}\,\xi)-u_{1}(\mathfrak{s})u_{2}-\mathfrak{s}\,(a\,u_{1}-\mathfrak{a}\,\xi)-u_{2}(\mathfrak{s})\,u_{1}-\mathfrak{s}\,(-b\,u_{2}+\mathfrak{a}\,\xi)$
$\displaystyle-
u_{2}(\mathfrak{a})u_{2}-\mathfrak{a}\,(b\,u_{1}+\mathfrak{s}\,\xi)-a\,(\mathfrak{s}\,u_{1}+\mathfrak{a}\,u_{2})+b\,(\mathfrak{a}\,u_{1}+\mathfrak{s}\,u_{2})$
$\displaystyle=$
$\displaystyle-(u_{1}(\mathfrak{a})+u_{2}(\mathfrak{s})+2\mathfrak{s}\,a)\,u_{1}-(u_{1}(\mathfrak{s})+u_{2}(\mathfrak{a})-2\mathfrak{s}\,b)\,u_{2}\,.$
We thus have at each point of $\Sigma_{0}$:
$\xi(a)=-(\mathfrak{s}\,a+\mathfrak{a}\,b),\qquad\xi(b)=\mathfrak{a}\,a+\mathfrak{s}\,b\,,$
(4.13) $u_{1}(\mathfrak{a})+u_{2}(\mathfrak{s})+2\mathfrak{s}\,a=0\,,\qquad
u_{1}(\mathfrak{s})+u_{2}(\mathfrak{a})-2\mathfrak{s}\,b=0\,.$ (4.14)
Note that by (4.9) we also have:
$\mathfrak{a}\,u_{1}(\mathfrak{a})=\mathfrak{s}\,u_{1}(\mathfrak{s})\,,\qquad\mathfrak{a}\,u_{2}(\mathfrak{a})=\mathfrak{s}\,u_{2}(\mathfrak{s})\,.$
(4.15)
We consider now the cases $\mu_{2}<0$ and $\mu_{2}>0$ separately.
Case 1: $\mu_{2}<0$. From (4.9) we have $\mathfrak{s}^{2}>0$ on $\Sigma$. In
particular, $u_{1}$ and $u_{2}$ are smooth vector fields on $\Sigma$, and $a$
and $b$ are smooth functions on $\Sigma$. Applying $\xi$ to Equation (4.13)
and using Equation (4.11) we get:
$\xi(\xi(a))=-\mathfrak{s}\,\xi(a)-\mathfrak{a}\,\xi(b)=(\mathfrak{s}^{2}-\mathfrak{a}^{2})\,a=-\frac{\mu_{2}}{2}a\,,$
(4.16)
and similarly $\xi(\xi(b))=-\frac{\mu_{2}}{2}b.$ The assumption that
$(\mathbb{R}\times\Sigma,\mathrm{d}t\otimes\mathrm{d}t+h)$ has a co-compact
discrete group $\Gamma$ acting freely by isometries implies that $a$ and $b$
are bounded functions on $\Sigma$. Indeed, each $\gamma\in\Gamma$ preserves
the Ricci tensor of $(\mathbb{R}\times\Sigma,\mathrm{d}t^{2}+h)$, so
$\gamma_{*}u_{1}=\pm u_{1}$ and $\gamma_{*}u_{2}=\pm u_{2}$. Thus $a(x)=\pm
a(\gamma(x))$ and $b(x)=\pm b(\gamma(x))$ for every
$x\in\mathbb{R}\times\Sigma$ and $\gamma\in\Gamma$. By co-compactness of
$\Gamma$, this shows that $a$ and $b$ are bounded.
Let $x\in\Sigma_{0}$ be an arbitrary point. Since $\xi$ is a geodesic vector
field, the curve $c(t):=\exp_{x}(t\xi)$ satisfies $\dot{c}(t)=\xi_{c(t)}$
for every $t$, so by (4.11) the functions $\mathfrak{a}$ and $\mathfrak{s}$ are
constant along $c(t)$ and in particular non-vanishing. Thus
$c(t)\in\Sigma_{0}$ for all $t$. By (4.16) the function
$f:=a\circ c$ satisfies the ordinary differential equation
$f^{\prime\prime}=-\frac{\mu_{2}}{2}f$. Thus $f$ is a linear combination of
$\cosh(\sqrt{-\frac{\mu_{2}}{2}}\,t)$ and $\sinh(\sqrt{-\frac{\mu_{2}}{2}}\,t)$.
Therefore, since $f$ is bounded, it has to vanish. In particular $a(x)=0$, and
since $x$ was arbitrary, $a=0$ on $\Sigma_{0}$. Similarly, $b=0$ on
$\Sigma_{0}$. By (4.14) and (4.15), we obtain:
$\mathfrak{s}^{2}\,u_{1}(\mathfrak{s})=\mathfrak{s}\,\mathfrak{a}u_{1}(\mathfrak{a})=-\mathfrak{s}\,\mathfrak{a}\,u_{2}(\mathfrak{s})=-\mathfrak{a}^{2}u_{2}(\mathfrak{a})=\mathfrak{a}^{2}u_{1}(\mathfrak{s})\,,$
whence $u_{1}(\mathfrak{s})=0$. Similarly we obtain $u_{2}(\mathfrak{s})=0$,
thus showing that $\mathfrak{a}$ and $\mathfrak{s}$ are constant on
$\Sigma_{0}$. In particular, $\Sigma_{0}$ is open and closed in $\Sigma$, so
either $\Sigma_{0}=\Sigma$ and $\mathfrak{a}$ is non-vanishing, or
$\Sigma_{0}$ is empty and $\mathfrak{a}=0$ on $\Sigma$. If $\Sigma_{0}$ was
empty, then $\mathcal{C}=0$ and $\mu_{2}=0$, which is not possible. Hence
$\Sigma_{0}=\Sigma$ and all equations above are valid on $\Sigma$. The
orthonormal frame $(\xi,u_{1},u_{2})$ satisfies:
$[\xi,u_{1}]=-(\mathfrak{s}u_{1}+\mathfrak{a}u_{2}),\qquad[\xi,u_{2}]=\mathfrak{s}u_{2}+\mathfrak{a}u_{1},\qquad[u_{1},u_{2}]=-2\mathfrak{a}\,\xi\,,$
hence $\Sigma$ is a unimodular Lie group equipped with a left-invariant
metric $h$. The Killing form of its Lie algebra $\mathfrak{g}$ can be easily
computed to be:
$B(\xi,\xi)=-\mu_{2}\,,\,\,B(u_{1},u_{1})=B(u_{2},u_{2})=-4\mathfrak{a}^{2}\,,\,\,B(u_{1},u_{2})=4\mathfrak{a}\mathfrak{s}\,,\,\,B(u_{1},\xi)=B(u_{2},\xi)=0\,.$
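As an independent consistency check (not part of the original argument), the Killing form entries above can be verified symbolically with SymPy, writing $\mathfrak{s}$, $\mathfrak{a}$ as `s`, `a` and using $\mu_{2}=2(\mathfrak{a}^{2}-\mathfrak{s}^{2})$, the rearranged form of (4.9):

```python
import sympy as sp

# s stands for 𝔰 and a for 𝔞 (both constant on Σ at this stage of the proof).
s, a = sp.symbols('s a', real=True)
mu2 = 2 * (a**2 - s**2)  # rearranged form of (4.9)

# Bracket table in the basis e0 = ξ, e1 = u1, e2 = u2:
#   [ξ, u1] = -(s u1 + a u2),  [ξ, u2] = s u2 + a u1,  [u1, u2] = -2a ξ.
table = {
    (0, 1): sp.Matrix([0, -s, -a]),
    (0, 2): sp.Matrix([0, a, s]),
    (1, 2): sp.Matrix([-2 * a, 0, 0]),
}

def bracket(i, j):
    if i == j:
        return sp.zeros(3, 1)
    return table[(i, j)] if i < j else -table[(j, i)]

# Adjoint matrices: the j-th column of ad(e_i) is [e_i, e_j].
ad = [sp.Matrix.hstack(*[bracket(i, j) for j in range(3)]) for i in range(3)]

# Killing form B(x, y) = Tr(ad_x ∘ ad_y).
B = sp.Matrix(3, 3, lambda i, j: (ad[i] * ad[j]).trace())

assert sp.simplify(B[0, 0] + mu2) == 0        # B(ξ, ξ)  = -μ₂
assert sp.simplify(B[1, 1] + 4 * a**2) == 0   # B(u1, u1) = -4𝔞²
assert sp.simplify(B[2, 2] + 4 * a**2) == 0   # B(u2, u2) = -4𝔞²
assert sp.simplify(B[1, 2] - 4 * a * s) == 0  # B(u1, u2) = 4𝔞𝔰
assert B[0, 1] == 0 and B[0, 2] == 0          # B(u_i, ξ) = 0
```

All assertions pass, in agreement with the displayed values of $B$.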
For $\mathfrak{a}\neq 0$, $B$ is non-degenerate and has signature $(2,1)$, so
$\mathfrak{g}$ is isomorphic to $\mathfrak{sl}(2,\mathbb{R})$. If
$\mathfrak{a}=0$, $\mathfrak{g}$ is solvable and isomorphic to a semi-direct
product $\mathbb{R}\ltimes\mathbb{R}^{2}$ that can be identified with the Lie
algebra of E$(1,1)$, the group of rigid motions of two-dimensional Minkowski
space. In both cases we can easily compute using (4.10) and (4.12):
$\displaystyle\mathcal{R}^{h}_{u_{1},u_{2}}u_{1}=\nabla_{u_{1}}^{h}(\mathfrak{a}\,\xi)-\nabla_{u_{2}}^{h}(-\mathfrak{s}\,\xi)-\nabla_{[u_{1},u_{2}]}^{h}u_{1}$
$\displaystyle=\mathfrak{a}(\mathfrak{s}\,u_{1}+\mathfrak{a}\,u_{2})-\mathfrak{s}\,(\mathfrak{a}\,u_{1}+\mathfrak{s}u_{2})=(\mathfrak{a}^{2}-\mathfrak{s}^{2})u_{2}=\frac{\mu_{2}}{2}u_{2}\,,$
which by Lemma 4.8 implies $\mu_{1}=0$, in agreement with Proposition 4.6.
This proves item $(2)$.
Case 2: $\mu_{2}>0$. We define the following endomorphism
$\Psi\in\operatorname{End}(T\Sigma)$ of $T\Sigma$:
$\Psi(\xi)=0\,,\qquad\Psi(v)=-\sqrt{\frac{2}{\mu_{2}}}\mathcal{C}(v)\,,\qquad\forall\,\,v\in\mathcal{H}\,.$
Define $\xi_{S}=\sqrt{\dfrac{2}{\mu_{2}}}\xi$ and
$\eta_{S}=\sqrt{\dfrac{\mu_{2}}{2}}\eta$. Clearly:
$\Psi(\xi_{S})=0\,,\qquad\eta_{S}(\xi_{S})=1\,,\qquad\Psi^{2}=-\operatorname{Id}+\xi_{S}\otimes\eta_{S}\,.$
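For instance, the last identity is checked directly from Lemma 4.8 (our computation):

```latex
\Psi^{2}(v)=\frac{2}{\mu_{2}}\,\mathcal{C}^{2}(v)
=\frac{2}{\mu_{2}}\Big(-\frac{\mu_{2}}{2}\Big)v=-v\,,\qquad v\in\mathcal{H}\,,
% while Ψ²(ξ_S) = 0 = -ξ_S + η_S(ξ_S) ξ_S, as required for an almost contact structure.
```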
Moreover, define the symmetric tensor $h_{S}\in\mathrm{Sym}^{2}(T^{*}\Sigma)$
as follows:
$h_{S}(v_{1},v_{2})=\left\\{\begin{matrix}-2h(\mathcal{A}\circ\mathcal{C}(v_{1}),v_{2})&\mathrm{if}\quad
v_{1},v_{2}\in\mathcal{H}\\\ \\\
\dfrac{\mu_{2}}{2}h(v_{1},v_{2})&\quad\,\,\mathrm{if}\quad
v_{1},v_{2}\in\mathrm{Span}(\xi_{S})\end{matrix}\right.$
We check that:
$h_{S}(\Psi(v_{1}),\Psi(v_{2}))=h_{S}(v_{1},v_{2})-\eta_{S}(v_{1})\,\eta_{S}(v_{2})\,,\quad\forall\,\,v_{1},v_{2}\in
T\Sigma\,.$
On the other hand:
$h_{S}(\Psi(v_{1}),v_{2})=-2\sqrt{\frac{\mu_{2}}{2}}h(\mathcal{A}(v_{1}),v_{2})=-\mathrm{d}\eta_{S}(v_{1},v_{2})\,,\quad\forall\,\,v_{1},v_{2}\in
T\Sigma\,.$
Furthermore, by Equation (4.9) and $\mu_{2}>0$, the function $\mathfrak{a}$ is
nowhere vanishing, implying that $\mathcal{A}$ is nowhere singular. It can then
be verified that $h_{S}$ is non-degenerate, since
$\det(\mathcal{A}\mathcal{C})>0$, and in fact positive definite. We infer that
$\mathrm{d}\eta_{S}\neq 0$ everywhere on $\Sigma$ and
therefore $(\xi_{S},\eta_{S},\Psi)$ defines a contact structure on $\Sigma$
compatible with the Riemannian metric $h_{S}$. By Lemma 4.8 the Lie derivative
$\mathcal{L}_{\xi_{S}}\Psi$ vanishes, whence $(h_{S},\xi_{S},\Psi)$ is a
K-contact structure on $\Sigma$, a condition that in three dimensions is well-
known to be equivalent to $(h_{S},\xi_{S},\eta_{S},\Psi)$ being Sasakian, and
hence we conclude. ∎
###### Remark 4.11.
In the cases in which the leaves of $\mathcal{F}_{\alpha}\subset M$ are
Sasakian three-manifolds, with respect to an _auxiliary_ metric as described
in the previous theorem, their cone is a Kähler four-manifold, and in
particular of special holonomy, thereby realizing the proposal made in [30, 31]
to _geometrize_ supergravity fluxes. While the occurrence of Sasakian
structures in supersymmetric supergravity solutions is well-documented, see
for instance [49] and references therein, the natural appearance of these
structures in a non-supersymmetric framework, such as the one considered here,
was highlighted only recently in [40].
Theorem 4.9 can be used to construct large families of solutions of the
Heterotic system. These are, to the best of the authors' knowledge, the first
solutions in the literature that are not locally isomorphic to a
supersymmetric Heterotic solution. For example, as a direct consequence of
Theorem 4.9 we obtain the following corollaries.
###### Corollary 4.12.
Every mapping torus of a complete hyperbolic three-manifold or a manifold
covered by $\widetilde{\mathrm{Sl}}(2,\mathbb{R})$ or $\mathrm{E}(1,1)$ admits
a null Heterotic soliton with parallel torsion.
###### Corollary 4.13.
Let $(h_{S},\xi_{S})$ be a Sasakian structure on $\Sigma$ with contact 1-form
$\eta_{S}$ satisfying:
$\mathrm{Ric}^{h_{S}}=-\frac{1}{2}h_{S}+\eta_{S}\otimes\eta_{S}\,.$
Then, the mapping torus of $(\Sigma,c^{2}h_{S})$ admits a null Heterotic
soliton with parallel torsion for $c^{2}=2\kappa$.
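The normalisation of $c$ can be checked as follows (our computation; under the constant rescaling $h_{S}\mapsto c^{2}h_{S}$ the Ricci tensor is unchanged as a $(0,2)$-tensor, so its eigenvalues with respect to the rescaled metric scale by $c^{-2}$):

```latex
\mathrm{Ric}^{h_{S}}(\xi_{S},\xi_{S})
=-\tfrac{1}{2}\,|\xi_{S}|^{2}_{h_{S}}+\eta_{S}(\xi_{S})^{2}
=-\tfrac{1}{2}+1=\tfrac{1}{2}\,,
\qquad
\mu_{2}\big(c^{2}h_{S}\big)=\frac{1}{2c^{2}}=\frac{1}{4\kappa}
\quad\text{for }c^{2}=2\kappa\,,
```

matching the eigenvalue $\mu_{2}=\frac{1}{4\kappa}$ in item (1) of Theorem 4.9.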
###### Remark 4.14.
The Sasakian three-manifolds occurring in the previous corollary are a
particular type of $\eta$-Einstein Sasakian manifolds, a class of Sasakian
manifolds extensively studied in the literature, see for example [7] and its
references and citations.
The topology of the Heterotic solitons constructed in the previous theorem
depends rather explicitly on the string slope parameter $\kappa$. Set
$|\alpha|^{2}_{g}=1/2$ for simplicity, whence
$\kappa\in\left\\{1,2,3\right\\}$ is _discrete_ , and different values of
$\kappa$ will in general correspond to Heterotic solitons of different
topology. For example, if $\kappa=1$, $(M,g,\alpha)$ can be the suspension of
a Sasakian three-manifold; if $\kappa=2$ then $(M,g,\alpha)$ can become the
suspension of a three-manifold covered by E$(1,1)$ or
$\widetilde{\mathrm{Sl}}(2,\mathbb{R})$; and if $\kappa=3$ then $(M,g,\alpha)$
can become the suspension of a hyperbolic three-manifold, yielding yet another
topology. We remark that for the Heterotic solitons described
in Theorem 4.9 the limit $\kappa\to 0$ is not well-defined, whence they can be
considered as genuinely _stringy_.
## References
* [1] A. Ashmore, C. Strickland-Constable, D. Tennyson and D. Waldram, Heterotic backgrounds via generalised geometry: moment maps and moduli, JHEP 11 (2020), 071.
* [2] D. Baraglia and P. Hekmati, Transitive Courant Algebroids, String Structures and T-duality, Adv. Theor. Math. Phys. 19 (2015), 613.
* [3] E. Bergshoeff and M. de Roo, Supersymmetric Chern-Simons Terms in Ten-dimensions, Phys. Lett. B 218 (1989), 210.
* [4] E. Bergshoeff and M. de Roo, The Quartic Effective Action of the Heterotic String and Supersymmetry, Nucl. Phys. B 328 (1989), 439.
* [5] E. Bergshoeff, B. Janssen and T. Ortin, Solution generating transformations and the string effective action, Class. Quant. Grav. 13 (1996), 321–343.
* [6] A. L. Besse, Einstein Manifolds, Classics in Mathematics, Springer (1987).
* [7] C. P. Boyer, K. Galicki and P. Matzeu, On eta-Einstein Sasakian geometry, Commun. Math. Phys. 262 (2006), 177–208.
* [8] J. Cheeger and D. Gromoll, The splitting theorem for manifolds of non-negative Ricci curvature, J. Differential Geom. 6 (1971) 119–128.
* [9] J. Cheeger and D. Gromoll, On the Structure of Complete Manifolds of Nonnegative Curvature, Ann. of Math. 96 (3) (1972), 413–443.
* [10] A. Coimbra, R. Minasian, H. Triendl and D. Waldram, Generalised geometry for string corrections, JHEP 11 (2014), 160.
* [11] V. del Barco, L. Grama and L. Soriani, $T$-duality on nilmanifolds, JHEP 05 (2018), 153.
* [12] D. G. Ebin, The manifold of Riemannian metrics, In Global Analysis (Proc. Sympos. Pure Math., Vol. XV, Berkeley, Calif., 1968), 11–40. Amer. Math. Soc., Providence, R.I., 1970.
* [13] T. Fei, Generalized Calabi-Gray Geometry and Heterotic Superstrings, arXiv:1807.08737.
* [14] T. Fei, B. Guo and D. H. Phong, _Parabolic Dimensional Reductions of 11D Supergravity_ , Commun. Math. Phys. 369 (2019), 811–836.
* [15] T. Fei, D. H. Phong, S. Picard and X. Zhang, Geometric Flows for the Type IIA String, arXiv:2011.03662.
* [16] T. Fei, D. H. Phong, S. Picard and X. Zhang, Estimates for a geometric flow for the Type IIB string, arXiv:2004.14529.
* [17] J. M. Figueroa-O’Farrill and S. Gadhia, Supersymmetry and spin structures, Class. Quant. Grav. 22 (2005) L121.
* [18] J.-X. Fu and S.-T. Yau, A Monge-Ampère type equation motivated by string theory, Comm. Anal. Geom. 15 (2007) 29–76.
* [19] J.-X. Fu and S.-T. Yau, The theory of superstring with flux on non-Kähler manifolds and the complex Monge-Ampère equation, J. Differential Geom. 78 (2008) 369–428.
* [20] M. Garcia-Fernandez, Torsion-free generalized connections and Heterotic Supergravity, Commun. Math. Phys. 332 (1) (2014), 89–115.
* [21] M. Garcia-Fernandez, Lectures on the Strominger system, Travaux mathématiques, XXIV (2016) 7–61.
* [22] M. Garcia-Fernandez, Ricci flow, Killing spinors, and T-duality in generalized geometry, Adv. in Math. 350 (2019), 1059–1108.
* [23] M. Garcia-Fernandez, R. Rubio, C. S. Shahbazi, C. Tipler, Canonical metrics on holomorphic Courant algebroids, arXiv:1803.01873.
* [24] M. Garcia-Fernandez, R. Rubio, C. S. Shahbazi, C. Tipler, Heterotic supergravity and moduli stabilization, to appear.
* [25] M. Garcia-Fernandez, R. Rubio, C. Tipler, Gauge theory for string algebroids, arXiv:2004.11399.
* [26] M. Garcia-Fernandez and J. Streets, Generalized Ricci Flow, AMS University Lecture Series, 2020.
* [27] P. Gauduchon, Structures de Weyl-Einstein, espace de twisteurs et variétés de type $S^{1}\times S^{3}$, J. reine angew. Math. 469 (1) (1995), 1–50.
* [28] U. Gran, J. Gutowski and G. Papadopoulos, Classification, geometry and applications of supersymmetric backgrounds, Phys. Rept. 794 (2019) 1.
* [29] M. Graña, Flux compactifications in string theory: A Comprehensive review, Phys. Rept. 423 (2006), 91–158.
* [30] M. Graña, C. S. Shahbazi and M. Zambon, Spin(7)-manifolds in compactifications to four dimensions, JHEP 11 (2014), 046.
* [31] M. Graña and C. S. Shahbazi, M-theory moduli spaces and torsion-free structures, JHEP 05 (2015), 085.
* [32] B.C. Hall, Lie Groups, Lie Algebras, and Representations. An elementary Introduction, Springer, Second Edition, 2015
* [33] C.M. Hull, Anomalies, ambiguities and superstrings, Phys. Lett. B 167 (1986), 51–55.
* [34] S. Ivanov, Heterotic supersymmetry, anomaly cancellation and equations of motion, Phys. Lett. B 685 (2010) 190–196.
* [35] N. Koiso, Einstein metrics and complex structures, Invent Math 73 (1983), 71–106.
* [36] N. Koiso, Yang-Mills connections and moduli space, Osaka J. Math. 24 (1987), 147–171.
* [37] N. Koiso, Rigidity and infinitesimal deformability of Einstein metrics, Osaka J. Math. 19 (3) (1982), 643–668.
* [38] D. McCullough, Isometries of Elliptic 3-Manifolds, J. London Math. Soc. 65 (1) (2002), 167–182.
* [39] J. Milnor, Curvatures of Left Invariant Metrics on Lie Groups, Adv. in Math. 21 (1976), 293–329.
* [40] Á. Murcia and C. S. Shahbazi, Contact metric three manifolds and Lorentzian geometry with torsion in six-dimensional supergravity, J. Geom. Phys. 158 (2020), 103868.
* [41] T. Oliynyk, V. Suneeta and E. Woolgar, _A Gradient flow for worldsheet nonlinear sigma models_ , Nucl. Phys. B 739 (2006), 441–458.
* [42] T. Ortín, Gravity and Strings, Cambridge Monographs on Mathematical Physics, 2nd edition, 2015.
* [43] H. Pedersen, Y. S. Poon and A. Swann, Einstein-Weyl deformations and submanifolds, Int. J. Math. 7 (1996), 705–719.
* [44] Duong H. Phong, Geometric Partial Differential Equations from Unified String Theories, Contribution to the Proceedings of the ICCM 2018, Taipei, Taiwan.
* [45] D. H. Phong, S. Picard and X. Zhang, _New curvature flows in complex geometry_ , Surveys in Differential Geometry Volume 22 (2017).
* [46] D.H. Phong, S. Picard, and X.W. Zhang, _Geometric flows and Strominger systems_ , Math. Z. 288 (2018) 101–113.
* [47] J. Polchinski, String theory. Vol. 1: An introduction to the bosonic string, Cambridge Monographs on Mathematical Physics (1998).
* [48] J. Polchinski, String theory. Vol. 2: Superstring theory and beyond, Cambridge Monographs on Mathematical Physics (1998).
* [49] J. Sparks, Sasaki-Einstein Manifolds, Surveys Diff. Geom. 16 (2011), 265–324.
* [50] J. Streets, Regularity and expanding entropy for connection Ricci flow, J. Geom. Phys. 58 (7) (2008), 900–912.
* [51] J. Streets, Generalized geometry, T-duality, and renormalization group flow, J. Geom. Phys. 114 (2017), 506–522.
* [52] J. Streets, Generalized Kähler–Ricci flow and the classification of nondegenerate generalized Kähler surfaces, Adv. in Math. 316 (20) (2017), 187–215.
* [53] J. Streets and Y. Ustinovskiy, Classification of generalized Kähler-Ricci solitons on complex surfaces, arXiv:1907.03819.
* [54] J. Streets and Y. Ustinovskiy, The Gibbons-Hawking ansatz in generalized Kähler geometry, arXiv:2009.00778.
* [55] A. Strominger, Superstrings with torsion, Nuclear Physics B 274 (2) (1986), 253–284.
* [56] S.-T. Yau, Complex geometry: Its brief history and its future, Science in China Series A Mathematics 48 (2005), 47–60.
* [57] S.-T. Yau, Metrics on complex manifolds, Science in China Series A Mathematics 53 (2010), 565–572.
# $2$-generated axial algebras of Monster type
Clara Franchi, Mario Mainardis, Sergey Shpectorov
###### Abstract.
In this paper we provide the basic setup for a project, initiated by Felix
Rehren in [16], aiming at classifying all $2$-generated primitive axial
algebras of Monster type $(\alpha,\beta)$. We first revise Rehren's
construction of an initial object in the category of primitive $n$-generated
axial algebras, giving a formal one, filling some gaps and, while correcting
some inaccuracies, confirming Rehren's results. Then we focus on $2$-generated
algebras, which naturally split into three cases: the two critical cases
$\alpha=2\beta$ and $\alpha=4\beta$, and the generic case (i.e. all the rest).
For these cases, which will be dealt with in detail in subsequent papers, we
give bounds on the dimensions (the generic case was already treated by Rehren)
and classify all $2$-generated primitive axial algebras of Monster type
$(\alpha,\beta)$ over ${\mathbb{Q}}(\alpha,\beta)$ for $\alpha$ and $\beta$
algebraically independent indeterminates over ${\mathbb{Q}}$. Finally we
restrict to the $2$-generated Majorana algebras (i.e. when
$\alpha=\frac{1}{4}$ and $\beta=\frac{1}{32}$), showing that these fall
precisely into the nine isomorphism types of the Norton-Sakuma algebras.
## 1\. Introduction
Axial algebras constitute a class of commutative non-associative algebras
generated by idempotent elements called axes such that their adjoint action is
semisimple and the corresponding eigenvectors satisfy a prescribed fusion law. Let
$R$ be a ring, $\\{\alpha,\beta\\}\subseteq R\setminus\\{0,1\\}$ and
$\alpha\neq\beta$. An axial algebra over $R$ is called of Monster type
$(\alpha,\beta)$ if it satisfies the fusion law $\mathcal{M}(\alpha,\beta)$
given in Table 1.
$\begin{array}[]{|c||c|c|c|c|}\hline\cr\star&1&0&\alpha&\beta\\\
\hline\cr\hline\cr 1&1&\emptyset&\alpha&\beta\\\ \hline\cr
0&\emptyset&0&\alpha&\beta\\\ \hline\cr\alpha&\alpha&\alpha&1,0&\beta\\\
\hline\cr\beta&\beta&\beta&\beta&1,0,\alpha\\\ \hline\cr\end{array}$ Table 1.
Fusion table $\mathcal{M}(\alpha,\beta)$
This means that the adjoint action of every axis has spectrum
$\\{1,0,\alpha,\beta\\}$ and, for any two eigenvectors $v_{\gamma}$,
$v_{\delta}$ with respective eigenvalues
$\gamma,\delta\in\\{1,0,\alpha,\beta\\}$, the product $v_{\gamma}\cdot
v_{\delta}$ is a sum of eigenvectors relative to eigenvalues contained in
$\gamma\star\delta$. This class was introduced by J. Hall, F. Rehren and S.
Shpectorov [7] in order to axiomatise some key features of important classes
of algebras, including the weight-2 components of OZ-type vertex operator
algebras, Jordan algebras and Matsuo algebras (see the introductions of [7],
[16] and [5]). They are also of particular interest for finite group theorists
as most of the finite simple groups, or their automorphism groups, can be
faithfully and effectively represented as automorphism groups of these
algebras.
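To illustrate how Table 1 is read (the notation $V_{\gamma}(a)$ for the $\gamma$-eigenspace of the adjoint action of an axis $a$ is ours):

```latex
u\in V_{\alpha}(a),\ v\in V_{\beta}(a)
\ \Longrightarrow\ u\cdot v\in V_{\beta}(a)\,,
\qquad
w\in V_{\beta}(a)
\ \Longrightarrow\ w\cdot w\in V_{1}(a)\oplus V_{0}(a)\oplus V_{\alpha}(a)\,.
```

In particular, the fusion law is $\mathbb{Z}/2$-graded, with $\\{1,0,\alpha\\}$ even and $\\{\beta\\}$ odd, so the linear map acting as the identity on $V_{1}(a)\oplus V_{0}(a)\oplus V_{\alpha}(a)$ and as $-1$ on $V_{\beta}(a)$ is an algebra automorphism, the Miyamoto involution associated to $a$.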
The motivating example is the Griess algebra $V_{2}^{\sharp}$ which is a real
axial algebra of Monster type $(\frac{1}{4},\frac{1}{32})$ and coincides with
the weight-2 component of the Monster vertex operator algebra $V^{\sharp}$.
Here axes are associated to the involutions of type $2A$ in the Monster (i.e.
those having the double cover of the Baby Monster as centraliser). The
subalgebras of the Griess algebra which are generated by two axes were first
classified by S. Norton in [14] who showed that there are nine isomorphism
classes of such algebras, corresponding to the $9$ conjugacy classes in the
Monster group $M$ of the dihedral subgroups generated by the pairs of $2A$
involutions associated to the two generating axes. These algebras are labelled
$1A$, $2A$, $2B$, $3A$, $3C$, $4A$, $4B$, $5A$ and $6A$.
Building on earlier work by M. Miyamoto [12], who observed that Ising
vectors in a vertex operator algebra of CFT-type satisfy the Monster fusion
law, S. Sakuma classified in [17] all $OZ$-type vertex operator algebras
generated by a pair of Ising conformal vectors, showing that, up to
rescaling, the isomorphism types of their weight-$2$ subspaces match precisely
the $9$ classes of Norton, now often called Norton-Sakuma algebras.
By extracting the relevant properties of these weight-$2$ subspaces, A. A.
Ivanov introduced in 2009 the concept of Majorana algebras [10], which are
real axial algebras of Monster type $(\frac{1}{4},\frac{1}{32})$ satisfying
some additional properties; in particular, they are endowed with a positive
definite associative bilinear form. In 2010 A. Ivanov, D. Pasechnik, A. Seress
and S. Shpectorov obtained Norton’s classification within the axiomatic
context of Majorana algebras (see [11]). A further development was achieved by
J. Hall, F. Rehren and S. Shpectorov [7] who constructed a universal object
for primitive Frobenius axial algebras with a prescribed fusion law and
extended the Norton-Sakuma theorem to $2$-generated primitive Frobenius axial
algebras of Monster type $(\frac{1}{4},\frac{1}{32})$. Subsequently F. Rehren
[16, 15] addressed the general case dropping the assumption on the existence
of the bilinear form (which characterises Frobenius axial algebras) and
described a universal object for primitive axial algebras with a prescribed
fusion law. In the particular case of $2$-generated algebras of Monster type
$(\alpha,\beta)$ with $\alpha\not\in\\{2\beta,4\beta\\}$ and in characteristic
other than $2$, he produced a spanning set of $8$ elements and computed the
structure constants with respect to these elements. Finally he produced new
examples of $2$-generated primitive Frobenius axial algebras of Monster type.
This paper is part of a project of the authors aiming at classifying
$2$-generated primitive axial algebras of Monster type over fields. We start
by giving, for any positive integer $n$, a formal construction of a universal
$n$-generated primitive axial algebra mapping epimorphically onto every
$n$-generated axial algebra with a prescribed fusion law. We then focus on
$2$-generated primitive axial algebras of Monster type. We say that such an
algebra is symmetric if the map that swaps the two generating axes extends to
an automorphism of the entire algebra. All the algebras considered by Rehren
in [15, 16] are symmetric. In Section 4, we re-prove Rehren’s result on the
number of generators and obtain the following bounds, the second one
concerning symmetric algebras in the case $\alpha=4\beta$.
###### Theorem 1.1.
Every $2$-generated primitive axial algebra of Monster type $(\alpha,\beta)$
over a field ${\mathbb{F}}$ of characteristic other than $2$ has dimension at
most $8$, provided $\alpha\not\in\\{2\beta,4\beta\\}$.
###### Theorem 1.2.
Every $2$-generated symmetric primitive axial algebra of Monster type
$(4\beta,\beta)$ over a field ${\mathbb{F}}$ of characteristic other than $2$
has dimension at most $8$, except possibly when
$(\alpha,\beta)=(2,\frac{1}{2})$.
The case $(\alpha,\beta)=(2,\frac{1}{2})$ is truly exceptional, as the
infinite dimensional $2$-generated symmetric primitive axial algebra of Monster
type $(2,\frac{1}{2})$ constructed in [4] shows. On the other hand, the same
bound $8$ holds for $2$-generated primitive axial algebras of Monster type
$(2\beta,\beta)$ over any ring in which $2$ and $\beta$ are invertible, and
this bound is best possible (see [3]).
In Section 5 we consider in more detail the case when $\alpha-2\beta$ and
$\alpha-4\beta$ are invertible in the field ${\mathbb{F}}$, which we call the
generic case. Denote by ${\mathbb{F}}_{0}$ the prime subfield of
${\mathbb{F}}$ and let ${\mathbb{F}}_{0}(\alpha,\beta)[x,y,z,t]$ be the
polynomial ring in $4$ variables over ${\mathbb{F}}_{0}(\alpha,\beta)$. We
prove the following result.
###### Theorem 1.3.
There exists a subset $T\subseteq{\mathbb{F}}_{0}(\alpha,\beta)[x,y,z,t]$ of
size $4$, depending only on ${\mathbb{F}}_{0}$, $\alpha$, and $\beta$, such
that every $2$-generated primitive axial algebra of Monster type
$(\alpha,\beta)$ over a field ${\mathbb{F}}$ of characteristic other than $2$,
with $\alpha,\beta\in{\mathbb{F}}$ and $\alpha\not\in\\{2\beta,4\beta\\}$, is
completely determined, up to homomorphic images, by a quadruple
$(x_{0},y_{0},z_{0},t_{0})\in{\mathbb{F}}^{4}$ which is a common zero of all
the elements of $T$.
Using Theorem 1.3, we classify the algebras defined over the field
${\mathbb{Q}}(\alpha,\beta)$ with $\alpha$ and $\beta$ independent
indeterminates over ${\mathbb{Q}}$. We refer to [8] for the definition of the
algebras of type $1A$, $2B$, $3C(\eta)$, $\eta\in{\mathbb{F}}$. We denote by
$3A(\alpha,\beta)$ the algebra of dimension $4$ defined in [16, Table 9] for
$\alpha\neq\frac{1}{2}$.
###### Theorem 1.4.
Let $V$ be a $2$-generated primitive axial algebra of Monster type
$(\alpha,\beta)$ over the field ${\mathbb{Q}}(\alpha,\beta)$, with $\alpha$
and $\beta$ algebraically independent indeterminates over ${\mathbb{Q}}$. Then
we have one of the following:
1. (1)
$V$ is the trivial algebra ${\mathbb{Q}}(\alpha,\beta)$ of type $1A$;
2. (2)
$V$ is an algebra of type $2B$;
3. (3)
$V$ is an algebra of Jordan type $\alpha$ of type $3C(\alpha)$;
4. (4)
$V$ is an algebra of Jordan type $\beta$ of type $3C(\beta)$;
5. (5)
$V$ is an algebra of dimension $4$ of type $3A(\alpha,\beta)$.
In particular, $V$ is symmetric.
Recall that the fusion law $\mathcal{M}(\alpha,\beta)$ admits a
${\mathbb{Z}}_{2}$-grading with $\mathcal{M}(\alpha,\beta)_{+}=\\{1,0,\alpha\\}$
and $\mathcal{M}(\alpha,\beta)_{-}=\\{\beta\\}$; this implies that every
axis induces an automorphism of the algebra, called a Miyamoto involution. The
Miyamoto group is the group generated by all Miyamoto involutions (see [9]).
###### Corollary 1.5.
Let $V$ be a primitive finitely generated axial algebra of Monster type
$(\alpha,\beta)$ over the field ${\mathbb{Q}}(\alpha,\beta)$, with $\alpha$
and $\beta$ independent indeterminates over ${\mathbb{Q}}$. Then the Miyamoto
group of $V$ is a group of $3$-transpositions.
Finally, as a consequence of Theorem 1.3, we get that the Norton-Sakuma
Theorem holds in general for $2$-generated primitive axial algebras of Monster
type $(\frac{1}{4},\frac{1}{32})$ over a field of characteristic zero, without
any assumption on the existence of a Frobenius form. This fact has also been
checked computationally in [19].
###### Theorem 1.6.
Every $2$-generated primitive axial algebra of Monster type
$(\frac{1}{4},\frac{1}{32})$ over a field of characteristic zero is a Norton-
Sakuma algebra.
Throughout this paper $R$ is a commutative associative ring with $1$.
While this paper was in preparation, Yabe posted a preprint on the arXiv giving
an almost complete classification of the primitive symmetric $2$-generated
axial algebras of Monster type $(\alpha,\beta)$ [18].
## 2\. Primitive axial algebras
We begin with some basic results about endomorphisms of $R$-modules, which are
well known for vector spaces. The main difference is that, when considering
$R$-modules instead of vector spaces, it is no longer true in general that
eigenvectors relative to different eigenvalues are linearly independent.
Let $V$ be an $R$-module. For $\xi\in End(V)$, $\lambda\in R$, and
$\Gamma\subseteq R$, define
$V_{\lambda}^{\xi}:=\\{v\in V|\xi(v)=\lambda v\\}\mbox{ and
}V_{\Gamma}^{\xi}:=\sum_{\lambda\in\Gamma}V_{\lambda}^{\xi}.$
If $V$ is also an $R$-algebra and $a\in V$, denote by ${\rm ad}_{a}$ the
endomorphism of $V$ induced by multiplication by $a$:
$\begin{array}[]{rccc}{\rm ad}_{a}:&V&\to&V\\\ &x&\mapsto&ax\end{array}.$
In this case, we’ll write simply $V_{\lambda}^{a}$ and $V_{\Gamma}^{a}$
instead of $V_{\lambda}^{{\rm ad}_{a}}$ and $V_{\Gamma}^{{\rm ad}_{a}}$,
respectively.
Two elements $\alpha$ and $\beta$ of $R$ are called distinguishable if
$\alpha-\beta$ is a unit in $R$. In the remainder of this section we assume
that $\Gamma$ is a finite set of pairwise distinguishable elements of $R$.
Note that every nontrivial ring homomorphism maps sets of pairwise
distinguishable elements into sets of pairwise distinguishable elements. Let
$R[x]$ be the ring of polynomials over $R$ with indeterminate $x$.
###### Lemma 2.1.
If $g\in R[x]$ satisfies $g(\lambda)=0$ for every $\lambda\in\Gamma$ and $\deg
g<|\Gamma|$, then $g=0$.
###### Proof.
We proceed by induction on $|\Gamma|$. If $|\Gamma|=1$, then $g$ is a constant
polynomial and, since $g$ vanishes at the unique element of $\Gamma$, it must
be $g=0$. Suppose $|\Gamma|>1$. Assume by contradiction
that $g$ is not the zero polynomial and set $k:=\deg g$. Then $k<|\Gamma|$.
Let $\lambda\in\Gamma$, then, since $x-\lambda$ is monic, we have
$g=q\cdot(x-\lambda)+r$, with $r=g(\lambda)=0$, so $g=q\cdot(x-\lambda)$. Then
clearly $\deg q=k-1<|\Gamma^{\prime}|$, where
$\Gamma^{\prime}=\Gamma\setminus\\{\lambda\\}$. Also for
$\mu\in\Gamma^{\prime}$, $0=g(\mu)=q(\mu)\cdot(\mu-\lambda)$. Since $\mu$ and
$\lambda$ are distinguishable, $(\mu-\lambda)$ is a unit and so $q(\mu)=0$. By
induction this means that $q=0$, whence also $g=0$, a contradiction.∎
For $\mu\in\Gamma$, define
(1) $f_{\mu}:=\prod_{\lambda\in{\Gamma}\setminus\\{\mu\\}}(x-\lambda),$
and
(2) $f:=\prod_{\lambda\in{\Gamma}}(x-\lambda),$
clearly
$f=(x-\mu)f_{\mu}$
for every $\mu\in\Gamma$. Note that, since elements of $\Gamma$ are pairwise
distinguishable, $f_{\mu}(\mu)$ is a unit in $R$.
###### Corollary 2.2.
$\sum_{\mu\in\Gamma}\frac{1}{f_{\mu}(\mu)}f_{\mu}=1.$
###### Proof.
Define
$g:=\sum_{\mu\in\Gamma}\frac{1}{f_{\mu}(\mu)}f_{\mu}-1,$
then $\deg g<|\Gamma|$ and clearly $g(\lambda)=0$ for every
$\lambda\in\Gamma$. Hence, by Lemma 2.1, $g=0$. ∎
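The identity in Corollary 2.2 is a Lagrange-interpolation style identity. As a quick sanity check (not part of the argument), the following Python/SymPy sketch verifies it for the sample set $\Gamma=\\{1,0,\frac{1}{4},\frac{1}{32}\\}$ occurring in the Monster fusion law:

```python
# Sanity check of Corollary 2.2: for a sample set Gamma of pairwise
# distinguishable elements, sum_{mu} f_mu / f_mu(mu) equals 1 in R[x].
from functools import reduce
from operator import mul

import sympy as sp

x = sp.symbols('x')
Gamma = [sp.Integer(1), sp.Integer(0), sp.Rational(1, 4), sp.Rational(1, 32)]

def f_mu(mu):
    # f_mu = prod over lambda in Gamma \ {mu} of (x - lambda), Equation (1)
    return reduce(mul, [x - lam for lam in Gamma if lam != mu])

total = sum(f_mu(mu) / f_mu(mu).subs(x, mu) for mu in Gamma)
assert sp.expand(total) == 1
```

Replacing $\Gamma$ by any set of pairwise distinct rationals gives the same outcome, reflecting that the proof only uses distinguishability.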
###### Lemma 2.3.
For every $\xi\in End(V)$, the following statements are equivalent:
1. (1)
$V=\bigoplus_{\lambda\in\Gamma}V_{\lambda}^{\xi}$;
2. (2)
$V=V_{\Gamma}^{\xi}$;
3. (3)
$f(\xi)V=0$.
###### Proof.
Clearly (1) implies (2) and (2) implies (3). Suppose (3) is satisfied. Then,
for every $\mu\in\Gamma$, and $v\in V$,
$\displaystyle 0$ $\displaystyle=$
$\displaystyle(f(\xi))(v)=\left(\prod_{\lambda\in{\Gamma}}(\xi-\lambda)\right)(v)$
$\displaystyle=$
$\displaystyle(\xi-\mu)\left(\prod_{\lambda\in{\Gamma\setminus\\{\mu\\}}}(\xi-\lambda)\right)(v)$
$\displaystyle=$ $\displaystyle(\xi-\mu)((f_{\mu}(\xi))(v)),$
whence $(f_{\mu}(\xi))(v)\in V_{\mu}^{\xi}$. Set
$v_{\mu}:=\frac{1}{f_{\mu}(\mu)}(f_{\mu}(\xi))(v).$
By Corollary 2.2,
$id_{V}=\sum_{\mu\in\Gamma}\frac{1}{f_{\mu}(\mu)}f_{\mu}(\xi)$
and so
$v=\sum_{\mu\in\Gamma}\frac{1}{f_{\mu}(\mu)}(f_{\mu}(\xi))(v)=\sum_{\mu\in\Gamma}v_{\mu}.$
Now, assume $v=\sum_{\lambda\in\Gamma}w_{\lambda}$ for some $w_{\lambda}\in
V_{\lambda}^{\xi}$. Since
$\frac{1}{f_{\mu}(\mu)}(f_{\mu}(\xi))(w_{\lambda})=\delta_{\lambda\mu}w_{\mu}$
(where $\delta_{\lambda\mu}$ is the Kronecker delta), we get
$v_{\mu}=\frac{1}{f_{\mu}(\mu)}(f_{\mu}(\xi))(v)=\sum_{\lambda\in\Gamma}\frac{1}{f_{\mu}(\mu)}(f_{\mu}(\xi))(w_{\lambda})=w_{\mu},$
giving (1). ∎
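The projections used in the proof can be made explicit. The following sketch (Python/SymPy, illustrative only) builds an endomorphism $\xi$ with $f(\xi)V=0$ for $\Gamma=\\{1,0,\frac{1}{4}\\}$ and checks that the elements $v_{\mu}=\frac{1}{f_{\mu}(\mu)}(f_{\mu}(\xi))(v)$ are indeed eigenvectors summing to $v$:

```python
# Illustration of Lemma 2.3: if f(xi)V = 0, the operators
# f_mu(xi)/f_mu(mu) project V onto the eigenspaces V_mu^xi.
import sympy as sp

Gamma = [sp.Integer(1), sp.Integer(0), sp.Rational(1, 4)]
P = sp.Matrix([[1, 1, 0], [0, 1, 1], [1, 0, 1]])  # an invertible change of basis
xi = P * sp.diag(*Gamma) * P.inv()                # spectrum of xi lies in Gamma

# f(xi) = prod of (xi - lambda) annihilates V = Q^3
f_xi = sp.eye(3)
for lam in Gamma:
    f_xi = f_xi * (xi - lam * sp.eye(3))
assert f_xi == sp.zeros(3, 3)

def projector(mu):
    # f_mu(xi) / f_mu(mu), with f_mu as in Equation (1)
    num, den = sp.eye(3), sp.Integer(1)
    for lam in Gamma:
        if lam != mu:
            num = num * (xi - lam * sp.eye(3))
            den *= (mu - lam)
    return num / den

v = sp.Matrix([3, -2, 5])
parts = {mu: projector(mu) * v for mu in Gamma}
for mu, vmu in parts.items():
    assert xi * vmu == mu * vmu                     # v_mu lies in V_mu^xi
assert sum(parts.values(), sp.zeros(3, 1)) == v     # v = sum of the v_mu
```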
Recall [2] that a fusion law is a pair $(\mathcal{S},\ast)$ such that
$\mathcal{S}$ is a set and $\ast$ is a map from the cartesian product
${\mathcal{S}}\times{\mathcal{S}}$ to the power set $2^{\mathcal{S}}$. A
morphism between two fusion laws $(\mathcal{S}_{1},\ast_{1})$ and
$(\mathcal{S}_{2},\ast_{2})$ is a map
$\phi\colon{\mathcal{S}_{1}}\to{\mathcal{S}_{2}}$
such that, for $\alpha,\beta\in{\mathcal{S}_{1}}$,
$\phi(\alpha\ast_{1}\beta)\subseteq\phi(\alpha)\ast_{2}\phi(\beta).$
An isomorphism of fusion laws is a bijective morphism such that its inverse is
also a morphism. A fusion law $(\mathcal{S},\ast)$ is said to be finite if
${\mathcal{S}}$ is a finite set. In this paper we deal with fusion laws
$(\mathcal{S},\ast)$ where $\mathcal{S}$ is a finite set containing the
spectrum of the adjoint action of an idempotent element in an $R$-algebra.
Therefore, we assume $1_{R}\in\mathcal{S}\subseteq R$. Without loss of
generality, we may also assume that $0_{R}\in\mathcal{S}$. Further, for every
morphism $\phi$ of fusion laws, we’ll assume that $1^{\phi}=1$ and
$0^{\phi}=0$. More generally, when considering morphisms between fusion laws,
one may want to preserve some possible algebraic relations between the
elements of the set $\mathcal{S}$. To this aim, we call a morphism $\phi$ of
fusion laws an algebraic morphism if it is a ${\mathbb{Z}}$-linear map. An
axial algebra over $R$ with generating set $\mathcal{A}$ and fusion law
$(\mathcal{S},\star)$ is a quadruple
$(R,V,\mathcal{A},(\mathcal{S},\star))$
such that
1. (1)
$R$ is an associative commutative ring with identity $1$;
2. (2)
$\mathcal{S}$ is a subset of $R$ containing $1$ and $0$;
3. (3)
$V$ is a commutative non-associative $R$-algebra;
4. (4)
$\mathcal{A}$ is a set of idempotent elements (called axes) of $V$ that
generate $V$ as an $R$-algebra and such that
1. Ax1
$V=V_{\mathcal{S}}^{a}$ for every $a\in{\mathcal{A}}$;
2. Ax2
$V_{\lambda}^{a}V_{\mu}^{a}\subseteq V_{\lambda\star\mu}^{a}$ for every
$\lambda,\mu\in\mathcal{S}$ and $a\in\mathcal{A}$.
Further, $V$ is called primitive if,
1. Ax3
$V_{1}^{a}=Ra$ for every $a\in{\mathcal{A}}$.
A Frobenius axial algebra is an axial algebra
$(R,V,\mathcal{A},(\mathcal{S},\star))$ endowed with an associative bilinear
form $\kappa:V\times V\to R$ such that the map $a\mapsto\kappa(a,a)$ is
constant on the set of axes.
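As a concrete illustration of axioms Ax1–Ax3, one can take the $3$-dimensional algebra $3C(\eta)$ of Jordan type $\eta$ mentioned in the introduction, with basis of axes $a,b,c$ and products $x^{2}=x$, $xy=\frac{\eta}{2}(x+y-z)$ for $\\{x,y,z\\}=\\{a,b,c\\}$ (this multiplication table is assumed here, as in [8]). The following Python/SymPy sketch, with a sample value of $\eta$, checks the eigenspace decomposition of ${\rm ad}_{a}$ and the fusion rules $0\star 0=\\{0\\}$, $0\star\eta=\\{\eta\\}$, $\eta\star\eta=\\{1,0\\}$:

```python
# The 3-dimensional algebra 3C(eta): axes a, b, c with x^2 = x and
# x*y = (eta/2)(x + y - z) for {x, y, z} = {a, b, c}.
import sympy as sp

eta = sp.Rational(1, 4)                   # sample value, eta not in {0, 1}
e = [sp.eye(3)[:, i] for i in range(3)]   # coordinates of the basis (a, b, c)

def mult(u, v):
    # bilinear extension of the multiplication table above
    out = sp.zeros(3, 1)
    for i in range(3):
        for j in range(3):
            t = e[i] if i == j else (eta / 2) * (e[i] + e[j] - e[3 - i - j])
            out += u[i] * v[j] * t
    return out

a, b, c = e
u0, ue = b + c - eta * a, b - c           # 0- and eta-eigenvectors for ad_a

# Ax1 and Ax3: (a, u0, ue) is an eigenbasis of ad_a with eigenvalues 1, 0, eta,
# so V = V_1^a + V_0^a + V_eta^a and V_1^a = R a.
assert mult(a, a) == a
assert mult(a, u0) == sp.zeros(3, 1)
assert mult(a, ue) == eta * ue
assert sp.Matrix.hstack(a, u0, ue).det() != 0

# Ax2, the fusion law of Jordan type eta:
assert mult(u0, u0) == (1 + eta) * u0                        # 0 * 0 = {0}
assert mult(u0, ue) == (1 - eta**2) * ue                     # 0 * eta = {eta}
assert mult(ue, ue) == (1 - eta) * u0 + eta * (2 - eta) * a  # eta * eta = {1, 0}
```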
Let $(R,V,\mathcal{A},(\mathcal{S},\star))$ be an axial algebra and assume the
elements of $\mathcal{S}$ are pairwise distinguishable. As in the proof of
Lemma 2.3, for every $v\in V$, denote by $v_{1}$ the projection of $v$ into
$V_{1}^{a}$ with respect to the decomposition of $V$ into
$ad_{a}$-eigenspaces. If $V$ is primitive, we have
$v_{1}=\lambda_{a}(v)a$
for some $\lambda_{a}(v)\in R$ which is generally not unique. On the other
hand, if the annihilator ideal
$Ann_{R}(a):=\\{r\in R|ra=0\\}$
of $a$ in $R$ is trivial, then $\lambda_{a}(v)$ is unique, and we say that $a$
is a free axis. Clearly this condition is satisfied when $R$ is a field. As an
immediate consequence we have the main result of this section.
###### Proposition 2.4.
Let $(R,V,\mathcal{A},(\mathcal{S},\star))$ be a primitive axial algebra and
assume that the elements of $\mathcal{S}$ are pairwise distinguishable and the
axes in $\mathcal{A}$ are free. Then, for every $a\in\mathcal{A}$, there is a
well defined $R$-linear map
(3) $\begin{array}[]{rcccc}\lambda_{a}&\colon&V&\to&R\\\
&&v&\mapsto&\lambda_{a}(v)\end{array}$
such that every $v\in V$ decomposes uniquely as
(4) $v=\lambda_{a}(v)a+\sum_{\mu\in\mathcal{S}\setminus\\{1\\}}v_{\mu},$
with $v_{\mu}\in V_{\mu}^{a}$.
We say that $V$ is weak primitive if, for every $a\in\mathcal{A}$ and every
element $v\in V$, $v$ can be decomposed as
(5) $v=l_{a}(v)a+\sum_{\mu\in\mathcal{S}\setminus\\{1\\}}v_{\mu}$
where $l_{a}(v)\in R$ depends on $v$ and $a$, and $v_{\mu}\in V^{a}_{\mu}$.
Note that in general the decomposition in (5) and $l_{a}(v)$ are not uniquely
determined by $v$.
###### Lemma 2.5.
If the elements of $\mathcal{S}$ are pairwise distinguishable, in particular,
if $R$ is a field, then weak primitivity is equivalent to primitivity.
###### Proof.
Trivially, primitivity implies weak primitivity. Conversely, suppose that $V$
is weak primitive, fix $a\in\mathcal{A}$ and let $v_{1}\in V^{a}_{1}$. Then,
by weak primitivity, there exist $l\in R$, $v_{\mu}\in V^{a}_{\mu}$, such that
$v_{1}=la+\sum_{\mu\in\mathcal{S}\setminus\\{1\\}}v_{\mu}.$
Hence
$\sum_{\mu\in\mathcal{S}\setminus\\{1\\}}v_{\mu}=v_{1}-la\in(\sum_{\mu\in\mathcal{S}\setminus\\{1\\}}V^{a}_{\mu})\cap
V^{a}_{1},$
and the last intersection is trivial by Lemma 2.3. Thus $v_{1}=la\in Ra$. ∎
We conclude this section with the following straightforward observation.
###### Lemma 2.6.
Let $(R,V,\mathcal{A},(\mathcal{S},\star))$ be a primitive axial algebra and
assume that the elements of $\mathcal{S}$ are pairwise distinguishable, and
the axes in $\mathcal{A}$ are free. Then, for every $a\in\mathcal{A}$,
$\gamma,\delta\in\mathcal{S}$, and $v,w\in V$, we have
$(f_{1}({\rm ad}_{a}))(w-\lambda_{a}(w)a)=0$
and
$\left(\prod_{\eta\in\gamma\star\delta}({\rm
ad}_{c}-\eta)\right)\left((f_{\gamma}(ad_{c}))(v)\cdot(f_{\delta}(ad_{c}))(w)\right)=0.$
## 3\. Universal primitive axial algebras
In this section we fix a positive integer $k$, a fusion law
$(\mathcal{S}_{0},\ast_{0})$, and denote by ${\mathcal{O}}$ the class whose
objects are the primitive axial algebras
$(R,V,{\mathcal{A}},(\mathcal{S},\ast))$
such that
* O1
${\mathcal{A}}$ has size at most $k$ and its elements are free axes,
* O2
$(\mathcal{S},\ast)$ is isomorphic to $(\mathcal{S}_{0},\ast_{0})$, and
* O3
the elements of $\mathcal{S}$ are pairwise distinguishable in $R$.
For two elements
(6)
${\mathcal{V}_{1}}:=(R_{1},V_{1},{\mathcal{A}}_{1},(\mathcal{S}_{1},\ast_{1}))\mbox{
and
}{\mathcal{V}}_{2}:=(R_{2},V_{2},{\mathcal{A}}_{2},(\mathcal{S}_{2},\ast_{2}))$
in ${\mathcal{O}}$, let ${\mathcal{H}om}(\mathcal{V}_{1},{\mathcal{V}}_{2})$
be the set of maps
$\phi\colon R_{1}\cup V_{1}\to R_{2}\cup V_{2}$
satisfying the following conditions:
1. H1
$\phi_{|{R_{1}}}$ is a homomorphism of rings with identity between $R_{1}$
and $R_{2}$ that induces by restriction an isomorphism of fusion laws between
$({\mathcal{S}}_{1},\ast_{1})$ and $({\mathcal{S}}_{2},\ast_{2})$;
2. H2
$\phi_{|{V_{1}}}$ is a (non-associative) ring homomorphism between $V_{1}$ and
$V_{2}$ such that $\phi({\mathcal{A}}_{1})\subseteq{\mathcal{A}}_{2}$;
3. H3
$(\gamma v)^{\phi}=\gamma^{\phi}v^{\phi}$, for every $\gamma\in R_{1}$ and
$v\in V_{1}$.
Note that, since $\phi_{|{R_{1}}}$ is a ring homomorphism, the induced
isomorphism of fusion laws in (H1) is in fact an algebraic isomorphism. Denote
by $\mathcal{H}$ the class of all
$\phi\in{\mathcal{H}om}(\mathcal{V}_{1},{\mathcal{V}}_{2})$ where
${\mathcal{V}_{1}}$ and ${\mathcal{V}_{2}}$ range in $\mathcal{O}$. Then
clearly $({\mathcal{O}},{\mathcal{H}})$ is a category.
It will be convenient to define some subcategories of
$({\mathcal{O}},{\mathcal{H}})$ in the following way. Let
* -
$n:=|\mathcal{S}\setminus\\{0,1\\}|$;
* -
$x_{i},y_{i},w_{i},z_{i,j},t_{h}$ be algebraically independent indeterminates
over ${\mathbb{Z}}$, for $i,j\in\\{1,\ldots,n\\}$, with $i<j$,
$h\in{\mathbb{N}}$;
* -
$D$ be the polynomial ring
${{\mathbb{Z}}}[x_{i},y_{i},w_{i},z_{i,j},t_{h}\>|\>i,j\in\\{1,\ldots,n\\},i<j,h\in{\mathbb{N}}];$
* -
$L$ be a proper ideal of $D$ containing the set
$\Sigma:=\\{x_{i}y_{i}-1,\>\>(1-x_{i})w_{i}-1,\mbox{ and
}\>(x_{i}-x_{j})z_{i,j}-1\mbox{ for all }1\leq i<j\leq n\\};$
* -
$\hat{D}:=D/L$. For $d\in D$, we denote the element $L+d$ by $\hat{d}$.
The extra indeterminates $t_{h}$ in the definition of $D$ have been introduced
here in order to guarantee, when necessary, the invertibility of certain
elements in the ring $\hat{D}$.
Since $L$ is proper and, for $1\leq i<j\leq n$, the elements
$\hat{x}_{i}-\hat{x}_{j}$ are invertible in $\hat{D}$, the elements
$\hat{x}_{1},\ldots,\hat{x}_{n}$ are still pairwise distinguishable in
${\hat{D}}$. Define $({\mathcal{O}}_{L},{\mathcal{H}}_{L})$ as the full
subcategory of $({{\mathcal{O}},{\mathcal{H}}})$ whose objects are the
primitive axial algebras
$(R,V,{\mathcal{A}},(\mathcal{S},\ast))\in{\mathcal{O}}$ that satisfy the
further condition
* O4
$R$ is a ring containing a subring isomorphic to a factor of $\hat{D}$.
Clearly, if $(\Sigma)$ is the ideal of $D$ generated by the set $\Sigma$, then
${\mathcal{O}}_{(\Sigma)}={\mathcal{O}}$.
###### Lemma 3.1.
Let $\mathcal{V}_{1},{\mathcal{V}}_{2}\in\mathcal{O}_{L}$ be as in Equation
(6) and let $\phi\in{\mathcal{H}om}(\mathcal{V}_{1},{\mathcal{V}}_{2})$. Then,
for every $a\in\mathcal{A}_{1},v\in V_{1}$, we have
$\lambda_{a^{\phi}}(v^{\phi})=(\lambda_{a}(v))^{\phi}.$
###### Proof.
Since $\mathcal{V}_{1},{\mathcal{V}}_{2}$ are primitive axial algebras, the
result follows applying $\phi$ to the decomposition of $v$ in Equation (4). ∎
We construct a universal object for each category
$(\mathcal{O}_{L},{\mathcal{H}}_{L})$ as follows. Let
* -
$\mathcal{A}$ be a set of size $k$;
* -
$W$ be the free commutative magma generated by the elements of $\mathcal{A}$
subject to the condition that every element of $\mathcal{A}$ is idempotent;
* -
${\hat{R}}:={\hat{D}}[\Lambda]$ be the ring of polynomials with coefficients
in $\hat{D}$ and indeterminates set
$\Lambda:=\\{\lambda_{c,w}\>|\>c\in\mathcal{A},w\in W,c\neq w\\}$ where
$\lambda_{c,w}=\lambda_{c^{\prime},w^{\prime}}$ if and only if $c=c^{\prime}$
and $w=w^{\prime}$.
* -
${\hat{V}}:={\hat{R}}[W]$ be the set of all formal linear combinations
$\sum_{w\in W}\gamma_{w}w$ of the elements of $W$ with coefficients in
${\hat{R}}$, with only finitely many coefficients different from zero. Endow
${\hat{V}}$ with the usual structure of a commutative non associative
${\hat{R}}$-algebra;
* -
${\hat{\mathcal{S}}}$ be the set $\\{1,0,\hat{x}_{1},\ldots,\hat{x}_{n}\\}$.
Let $\star\colon{\hat{\mathcal{S}}}\times{\hat{\mathcal{S}}}\to
2^{{\hat{\mathcal{S}}}}$ be a map such that $({\hat{\mathcal{S}}},\star)$ is a
fusion law isomorphic to $(\mathcal{S},\ast)$. Since, obviously, a fusion law
is isomorphic to $(\mathcal{S},\ast)$ if and only if it is isomorphic to
$({\hat{\mathcal{S}}},\star)$, we may assume
$(\mathcal{S},\ast)=({\hat{\mathcal{S}}},\star)$. For
$\mu\in\hat{\mathcal{S}}$, let $f_{\mu}$ be the polynomial defined in Equation
(1), for every $c\in\mathcal{A}$, let $\lambda_{c,c}:=1$, and let $J$ be the
ideal of ${\hat{V}}$ generated by all the elements
(7) $(f_{1}({\rm ad}_{c}))(w-\lambda_{c,w}c)\>\>\>\>\mbox{ for all
}c\in\mathcal{A}\mbox{ and }w\in W$
and
(8) $\left(\prod_{\eta\in\gamma\star\delta}({\rm ad}_{c}-\eta\>{\rm
id}_{\hat{V}})\right)\left((f_{\gamma}(ad_{c}))(v)\cdot(f_{\delta}(ad_{c}))(w)\right)$
for all $v,w\in{\hat{V}}$, $\gamma,\delta\in{\hat{\mathcal{S}}}$,
$c\in\mathcal{A}$. Let $I_{0}$ be the ideal of ${\hat{R}}$ generated by all
the elements
(9) $\sum_{w\in W}\gamma_{w}\lambda_{c,w}\>\>\mbox{ for all
}c\in\mathcal{A},w\in W,\gamma_{w}\in\hat{R}\>\>\mbox{ such that }\sum_{w\in
W}\gamma_{w}w\in J.$
Finally, set
* -
$\mathcal{A}/J:=\\{c+J\>|\>c\in\mathcal{A}\\}$
* -
$J_{0}:=J+I_{0}{\hat{V}}$,
* -
${\overline{R}}:={\hat{R}}/I_{0}$,
* -
${\overline{V}}:={\hat{V}}/J_{0}$,
* -
$\alpha_{i}:=x_{i}+I_{0},\mbox{ for }i\in\\{1,\ldots,n\\}$
* -
$\bar{c}:=c+J_{0},\mbox{ for }c\in\mathcal{A}$,
* -
${\overline{\mathcal{A}}}:=\\{\bar{c}\>|\>c\in\mathcal{A}\\}$,
* -
$\overline{\mathcal{S}}:=\\{1+I_{0},0+I_{0},\alpha_{i}\>|\>i\in\\{1,\ldots,n\\}\\}$.
###### Remark 3.2.
Since ${\hat{D}}\cap I_{0}=\\{0\\}$ and
${\overline{\mathcal{S}}}\leq{\hat{D}}I_{0}/I_{0}$,
$(\overline{\mathcal{S}},\overline{\star})$ is a fusion law isomorphic to
$(\mathcal{S},\ast)$ and the elements of $\overline{\mathcal{S}}$ are pairwise
distinguishable.
###### Lemma 3.3.
$({\hat{R}},{\hat{V}}/J,\mathcal{A}/J,({\hat{\mathcal{S}}},\star))$ and
$({\hat{R}},{\overline{V}},{\overline{\mathcal{A}}},({\hat{\mathcal{S}}},\star))$
are primitive axial algebras such that, for $J_{\ast}\in\\{J,J_{0}\\}$,
$\lambda_{c+J_{\ast}}(w+J_{\ast})=\lambda_{c,w}$
for every $c\in\mathcal{A}$ and $w\in W$.
###### Proof.
Clearly ${\hat{V}}/J$ is generated by $\mathcal{A}/J$ and ${\overline{V}}$ is
generated by ${\overline{\mathcal{A}}}$. Let $J_{\ast}\in\\{J,J_{0}\\}$ and
let $f$ be as in Equation (2). By the definition of $J$, for every
$a\in\mathcal{A}$, $f({\rm ad}_{a}){\hat{V}}\subseteq J$, whence, by Lemma
2.3, ${\hat{V}}/J_{\ast}$ satisfies condition Ax1. By Equation (8),
${\hat{V}}/J_{\ast}$ also satisfies the fusion law
$({\hat{\mathcal{S}}},\star)$. Furthermore, for every $c\in\mathcal{A}$ and
$w\in W$, since $f_{1}({\rm ad}_{c})(w-\lambda_{c,w}c)\in J_{\ast}$, we get
$w+J_{\ast}=(\lambda_{c,w}c+\sum_{\mu\in{\hat{\mathcal{S}}}\setminus\\{1\\}}w_{\mu})+J_{\ast},$
where $w_{\mu}+J_{\ast}$ is a $\mu$-eigenvector for ${\rm ad}_{c+J_{\ast}}$,
so ${\hat{V}}/J_{\ast}$ is also weak primitive. By Remark 3.2 and Lemma 2.5,
${\hat{V}}/J_{\ast}$ is also primitive. ∎
###### Lemma 3.4.
Let ${\mathcal{V}}:=(R,V,{\mathcal{A}},(\mathcal{S},\ast))$ be an element of
$\mathcal{O}_{L}$, let $\phi\colon{\hat{R}}\cup{\hat{V}}\to R\cup V$ be a map
that satisfies conditions (H1) (with respect to $({\hat{\mathcal{S}}},\star)$
and $(\mathcal{S},\ast)$), (H2), and (H3). Then
$I_{0}\subseteq\ker\phi_{|_{{\hat{R}}}}$,
$J_{0}\subseteq\ker\phi_{|_{{\hat{V}}}}$.
###### Proof.
By Lemma 2.6, $J\subseteq\ker\phi_{|_{{\hat{V}}}}$, so $\phi$ induces an
${\hat{R}}$-algebra homomorphism
$\begin{array}[]{clcl}\phi_{{\hat{V}}/J}:&{\hat{V}}/J&\to&V,\end{array}$
$\>\>\>\>\>\>\>\>\>\>\>\>\>\begin{array}[]{clcl}&v+J&\mapsto&v^{\phi}.\end{array}$
Since, by Lemma 3.3, ${\hat{V}}/J$ is a primitive axial algebra over the ring
${\hat{R}}$, for every $c\in\mathcal{A}$ and $w\in W$, we can write
$w+J=(\lambda_{c,w}c+\sum_{\mu\in{\hat{\mathcal{S}}}\setminus\\{1\\}}w_{\mu})+J,$
where, for every $\mu\in{\hat{\mathcal{S}}}\setminus\\{1\\}$, $w_{\mu}+J$ is a
$\mu$-eigenvector for ${\rm ad}_{c+J}$. Condition (H3) implies that, for every
$\mu\in\mathcal{S}$, $a\in\mathcal{A}$, $\phi_{{\hat{V}}/J}$ maps
$\mu$-eigenvectors for ${\rm ad}_{a+J}$ to $\mu^{\phi}$-eigenvectors for ${\rm
ad}_{a^{\phi}}$. Thus, if $v=\sum_{w\in W}\gamma_{w}w\in J$, then
$v^{\phi}=0$, whence
$\displaystyle 0=\lambda_{a^{\phi}}(v^{\phi})$ $\displaystyle=$
$\displaystyle\lambda_{a^{\phi}}\left(\sum_{w\in
W}\gamma_{w}^{\phi}w^{\phi}\right)$ $\displaystyle=$ $\displaystyle\sum_{w\in
W}\gamma_{w}^{\phi}\lambda_{a^{\phi}}(w^{\phi})$ $\displaystyle=$
$\displaystyle\sum_{w\in
W}\gamma_{w}^{\phi}\lambda_{a^{\phi}}((\lambda_{a,w}a+\sum_{\mu\in{\hat{\mathcal{S}}}\setminus\\{1\\}}w_{\mu})^{\phi})$
$\displaystyle=$ $\displaystyle\sum_{w\in
W}\gamma_{w}^{\phi}\lambda_{a^{\phi}}((\lambda_{a,w})^{\phi}a^{\phi})+\sum_{w\in
W}\gamma_{w}^{\phi}\lambda_{a^{\phi}}(\sum_{\mu\in{\hat{\mathcal{S}}}\setminus\\{1\\}}w_{\mu}^{\phi}))$
$\displaystyle=$ $\displaystyle\sum_{w\in
W}\gamma_{w}^{\phi}(\lambda_{a,w})^{\phi}=(\sum_{w\in
W}\gamma_{w}\lambda_{a,w})^{\phi}.$
Thus $I_{0}\subseteq\ker\phi_{|_{{\hat{R}}}}$. Finally, by condition (H3),
$(I_{0}{\hat{V}})^{\phi}=I_{0}^{\phi}{\hat{V}}^{\phi}=0$, whence
$J_{0}\subseteq\ker\phi_{|_{{\hat{V}}}}$. ∎
###### Lemma 3.5.
We have $J_{0}\neq{\hat{V}}$, in particular $|{\overline{\mathcal{A}}}|=k$.
###### Proof.
Let ${\overline{R}}^{k}$ be the direct sum of $k$ copies of ${\overline{R}}$.
Set $\mathcal{B}:=\\{e_{1},\ldots,e_{k}\\}$, where $(e_{1},\ldots,e_{k})$ is
the canonical basis of ${\overline{R}}^{k}$. Then, for every
$i\in\\{1,\ldots,k\\}$, $e_{i}$ is an idempotent and
${\overline{R}}^{k}={\overline{R}}e_{i}\oplus\ker{\rm ad}_{e_{i}}.$
Therefore, for every fusion law
$(\mathcal{S}_{\overline{R}},\ast_{\overline{R}})$, the $4$-tuple
$({\overline{R}},{\overline{R}}^{k},\mathcal{B},(\mathcal{S}_{\overline{R}},\ast_{\overline{R}}))$
is obviously a primitive (associative) axial algebra. By the construction of
${\hat{V}}$, any bijection from $\mathcal{A}$ to $\mathcal{B}$ extends
uniquely to a map $\phi_{{\hat{V}}}\colon{\hat{V}}\to{\overline{R}}^{k}$. Let
$\phi\colon{\hat{R}}\cup{\hat{V}}\to{\overline{R}}\cup{\overline{R}}^{k}$
be the map whose restrictions to ${\hat{R}}$ and ${\hat{V}}$ are the canonical
projection on ${\overline{R}}$ and $\phi_{{\hat{V}}}$, respectively. Then
$\phi$ satisfies the conditions (H1), (H2), and (H3). Therefore, by Lemma 3.4,
$J_{0}\subseteq\ker\phi_{|_{{\hat{V}}}}\neq{\hat{V}}$. Since
$k=|\mathcal{A}|\geq|{\overline{\mathcal{A}}}|\geq|\mathcal{B}|=k$, we get
$|{\overline{\mathcal{A}}}|=k$. ∎
###### Theorem 3.6.
$\overline{\mathcal{V}}:=({\overline{R}},{\overline{V}},{\overline{\mathcal{A}}},(\overline{\mathcal{S}},\overline{\star}))$
is a universal object in the category $({\mathcal{O}_{L}},{\mathcal{H}_{L}})$.
###### Proof.
Clearly ${\overline{V}}$ is an algebra over ${\overline{R}}$ generated by the
set of idempotents ${\overline{\mathcal{A}}}$. Since by Lemma 3.3,
$({\hat{R}},{\overline{V}},{\overline{\mathcal{A}}},(\hat{\mathcal{S}},\overline{\star}))$
is a primitive axial algebra, and $I_{0}\subseteq
Ann_{\hat{R}}({\overline{V}})$, we get that
$({\overline{R}},{\overline{V}},{\overline{\mathcal{A}}},(\overline{\mathcal{S}},\star))$
is a primitive axial algebra. Finally, the axes in ${\overline{\mathcal{A}}}$
are free, for, if $c\in\mathcal{A}$ and $r\in{\hat{R}}$ are such that $rc\in
J_{0}$, then there exist $j\in J,i_{0}\in I_{0}$ and $\sum_{w\in
W}\gamma_{w}w\in{\hat{V}}$ such that
$rc-i_{0}\left(\sum_{w\in W}\gamma_{w}w\right)=j,$
whence, by the definition of $I_{0}$,
$r\in i_{0}\left(\sum_{w\in W}\gamma_{w}\lambda_{c,w}\right)+I_{0}=I_{0}.$
By Lemma 3.5, the elements of ${\hat{\mathcal{S}}}$ are pairwise
distinguishable in ${\overline{R}}$, whence
$\overline{\mathcal{V}}\in\mathcal{O}_{L}$.
Now assume
$\mathcal{V}_{1}:=(R_{1},V_{1},\mathcal{A}_{1},(\mathcal{S}_{1},\ast_{1}))$ is
an object in $\mathcal{O}_{L}$ and let
$\bar{t}:{{\overline{\mathcal{A}}}}\to\mathcal{A}_{1}$ be a map of sets. Since
every non-empty subset of $\mathcal{A}_{1}$ generates a primitive axial
subalgebra of $V_{1}$ with fusion law $(\mathcal{S}_{1},\ast_{1})$ and free
axes, without loss of generality we may assume that $\bar{t}$ is surjective.
Let $t$ be the composition of $\bar{t}$ with the (bijective) projection of
$\mathcal{A}$ to ${\overline{\mathcal{A}}}$. Since $W$ is the free commutative
magma generated by the set of idempotents $\mathcal{A}$ there is a unique
magma homomorphism
$\chi:W\to V_{1},$
inducing the map $t:\mathcal{A}\to\mathcal{A}_{1}$. Since the elements of
$\Lambda$ are algebraically independent over $\hat{D}$, there is a unique
homomorphism of $\hat{D}$-algebras
$\hat{\psi}:{\hat{R}}\to R_{1}$
such that, for $c\in\mathcal{A}$ and $w\in W\setminus\\{c\\}$,
(10) $\lambda_{c,w}^{\hat{\psi}}=\lambda_{c^{t}}(w^{\chi}),$
where $\lambda_{c^{t}}$ is the function defined in Proposition 2.4. Define
$\begin{array}[]{rcccc}\hat{\chi}&:&{\hat{V}}&\to&V_{1}\\\ &&\sum_{w\in
W}\gamma_{w}w&\mapsto&\sum_{w\in
W}\gamma_{w}^{\hat{\psi}}\>w^{\chi}\end{array}.$
Then $\hat{\chi}$ is a ring homomorphism extending $t$ and such that $(\gamma
v)^{\hat{\chi}}=\gamma^{\hat{\psi}}v^{\hat{\chi}}$ for every
$\gamma\in{\hat{R}}$ and $v\in{\hat{V}}$. Since, for every $c\in\mathcal{A}$,
$v\in{\hat{V}}$, and $\gamma\in{\mathcal{S}}$,
$[({\rm ad}_{c}-\gamma\>{\rm id}_{\hat{V}})(v)]^{\hat{\chi}}=({\rm
ad}_{c^{t}}-\gamma^{\hat{\psi}}\>{\rm id}_{V_{1}})(v^{\hat{\chi}}),$
by Lemma 2.6, it follows that $J_{0}$ is contained in $\ker\hat{\chi}$ and so
$\hat{\chi}$ induces a ring homomorphism
$\begin{array}[]{rccc}\phi_{V}:&{\overline{V}}&\to&V_{1}\end{array}$
extending $\overline{t}$ and such that
$(\gamma\overline{w})^{\phi_{V}}=\gamma^{\hat{\psi}}{\overline{w}}^{\phi_{V}}$
for every $\gamma\in{\hat{R}}$ and ${\overline{w}}\in{\overline{V}}$. As in
the proof of Lemma 3.4 we get that $I_{0}\subseteq\ker\hat{\psi}$. Let
$\phi_{R}:{\overline{R}}\to R_{1}$ be the homomorphism of $\hat{D}$-algebras
induced by $\hat{\psi}$. Then
$(\phi_{R},\phi_{V})\in{\mathcal{H}om}(\overline{\mathcal{V}},\mathcal{V}_{1})$.
Since ${\hat{R}}=\hat{D}[\Lambda]$, $\phi_{R}$ is completely determined by its
values on the elements $\lambda_{c,w}+I_{0}$. Further, by Equation (10), for
every $c\in\mathcal{A}$, $w\in W\setminus\\{c\\}$,
$(\lambda_{c,w}+I_{0})^{\phi_{R}}=\lambda_{c^{t}}((w+J_{0})^{\phi_{V}}),$
whence $\phi_{R}$ is completely determined by the images
$(w+J_{0})^{\phi_{V}}$, with $w\in W$. Since $\phi_{V}$ is a ring homomorphism
extending $t$, such images are uniquely determined, whence the uniqueness of
$(\phi_{R},\phi_{V})$. ∎
###### Corollary 3.7.
Every permutation $\sigma$ of the set ${\overline{\mathcal{A}}}$ extends to a
unique automorphism
$f_{\sigma}\in{\mathcal{H}om}({\overline{\mathcal{V}}},{\overline{\mathcal{V}}})$.
Note that, for a generic object $\mathcal{V}$, the above assertion is false
(see e.g. the algebra $Q_{2}(\eta)$ constructed in [5, Section 5.3]). We say
that $\mathcal{V}=(R,V,\mathcal{A},(\mathcal{S},\ast))\in\mathcal{O}_{L}$ is
symmetric if every permutation $\sigma$ of the set $\mathcal{A}$ extends to a
unique automorphism $f_{\sigma}\in{\mathcal{H}om}(\mathcal{V},\mathcal{V})$.
###### Corollary 3.8.
Let $\mathcal{V}:=(R,V,\mathcal{A},(\mathcal{S},\ast))\in{\mathcal{O}}_{L}$,
then
1. (1)
${\overline{\mathcal{V}}}\otimes
R:=({\overline{R}}\otimes_{\hat{D}}R,{\overline{V}}\otimes_{\hat{D}}R,{\overline{\mathcal{A}}}\otimes_{\hat{D}}1,(\overline{\mathcal{S}}\otimes_{\hat{D}}1,\overline{\star}))\in{\mathcal{O}}_{L}$,
2. (2)
$R$ is isomorphic to a factor of ${\overline{R}}\otimes_{\hat{D}}R$,
3. (3)
$V$ is isomorphic to a factor of ${\overline{V}}\otimes_{\hat{D}}R$.
###### Remark 3.9.
Note that, with the notation of Corollaries 3.7 and 3.8, $f_{\sigma}\otimes
id_{R}$ is an automorphism of ${\overline{\mathcal{V}}}\otimes R$.
Questions
1. (1)
Can we define a variety of axial algebras corresponding to the fusion law
$(\mathcal{S},\ast)$?
2. (2)
Is it true that any ideal $I$ of ${\hat{R}}$ containing $I_{0}$ defines an
axial algebra?
## 4\. $2$-generated primitive axial algebras of Monster type
$(\alpha,\beta)$
In this section we keep the notation of Section 3, with $k=n=2$,
$(\mathcal{S},\ast)$ equal to the Monster fusion law
$\mathcal{M}(\alpha,\beta)$, and $L$ an ideal of $D$ containing $2t_{1}-1$, so
that (the class of) $2$ is invertible in $\hat{D}$. In order to simplify
notation we’ll also identify $\alpha_{1}$ with $\alpha$ and $\alpha_{2}$ with
$\beta$.
Let $\mathcal{V}=(R,V,\mathcal{A},(\mathcal{S},\ast))\in{\mathcal{O}}_{L}$ and
$a\in\mathcal{A}$. Let ${\mathcal{S}}^{+}:=\\{1,0,\alpha\\}$ and
${\mathcal{S}}^{-}:=\\{\beta\\}$. The partition
$\\{{\mathcal{S}}^{+},{\mathcal{S}}^{-}\\}$ of $\mathcal{S}$ induces a
${\mathbb{Z}}_{2}$-grading on ${\mathcal{S}}$ which, in turn, induces a
${\mathbb{Z}}_{2}$-grading $\\{V_{+}^{a},V_{-}^{a}\\}$ on $V$, where
$V_{+}^{a}:=V_{1}^{a}+V_{0}^{a}+V_{\alpha}^{a}$ and $V_{-}^{a}:=V_{\beta}^{a}$.
It follows that, if $\tau_{a}$ is the map from $R\cup V$ to $R\cup V$ such
that $\tau_{a|_{V}}$ inverts $V_{\beta}^{a}$ and leaves invariant the elements
of $V_{+}^{a}$ and $\tau_{a|_{R}}$ is the identity, then $\tau_{a}$ is an
involutory automorphism of $\mathcal{V}$ (see [7, Proposition 3.4]). The map
$\tau_{a}$ is called the Miyamoto involution associated to the axis $a$. By
definition of $\tau_{a}$, the element $av-{\beta}v$ of $V$ is
$\tau_{a}$-invariant and, since $a$ lies in $V_{+}^{a}\leq C_{V}(\tau_{a})$,
also $av-{\beta}(a+v)$ is $\tau_{a}$-invariant. In particular, by symmetry, we obtain the following.
###### Lemma 4.1.
Let $a$ and $b$ be axes of $V$. Then $ab-\beta(a+b)$ is fixed by the
2-generated group $\langle\tau_{a},\tau_{b}\rangle$.
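The eigenspace bookkeeping behind this invariance can be checked mechanically. The following sympy sketch (an illustration only; the paper's own computations are done in GAP) writes a generic $v$ in terms of formal ${\rm ad}_{a}$-eigencomponents and confirms that $av-\beta v$ has no component in $V_{\beta}^{a}$, hence is fixed by $\tau_{a}$:

```python
import sympy as sp

alpha, beta = sp.symbols('alpha beta')
# formal ad_a-eigencomponents of a generic v, with eigenvalues 1, 0, alpha, beta
v1, v0, va, vb = sp.symbols('v1 v0 v_alpha v_beta')
eigs = [(v1, 1), (v0, 0), (va, alpha), (vb, beta)]

v = sum(c for c, _ in eigs)
av = sum(mu * c for c, mu in eigs)      # a.v_mu = mu * v_mu on eigencomponents

expr = sp.expand(av - beta * v)
# the V_beta-component vanishes, so tau_a (which negates V_beta) fixes av - beta*v
print(expr.coeff(vb))
```

Since $a\in V_{1}^{a}\subseteq V_{+}^{a}$, subtracting the further multiple $\beta a$ does not leave $V_{+}^{a}$, which gives the invariance of $av-\beta(a+v)$ as well.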
Let $\mathcal{A}:=\\{a_{0},a_{1}\\}$ and, for $i\in\\{0,1\\}$, let $\tau_{i}$
be the Miyamoto involution associated to $a_{i}$. Set
$\rho:=\tau_{0}\tau_{1}$, and for $i\in{\mathbb{Z}}$,
$a_{2i}:=a_{0}^{\rho^{i}}$ and $a_{2i+1}:=a_{1}^{\rho^{i}}$. Since $\rho$ is
an automorphism of $V$, for every $j\in{\mathbb{Z}}$, $a_{j}$ is an axis.
Denote by $\tau_{j}:=\tau_{a_{j}}$ the corresponding Miyamoto involution.
###### Lemma 4.2.
For every $n\in{\mathbb{N}}$, and $i,j\in{\mathbb{Z}}$ such that $i\equiv
j\>\bmod n$ we have
$a_{i}a_{i+n}-\beta(a_{i}+a_{i+n})=a_{j}a_{j+n}-\beta(a_{j}+a_{j+n}).$
###### Proof.
This follows immediately from Lemma 4.1. ∎
For $n\in{\mathbb{N}}$ and $r\in\\{0,\ldots,n-1\\}$ set
(11) $s_{r,n}:=a_{r}a_{r+n}-\beta(a_{r}+a_{r+n}).$
For every $a\in\mathcal{A}$, let $\lambda_{a}$ be as in Proposition 2.4.
###### Lemma 4.3.
For $i\in\\{1,2,3\\}$ we have
$\displaystyle a_{0}s_{0,i}$ $\displaystyle=$
$\displaystyle(\alpha-\beta)s_{0,i}+[(1-\alpha)\lambda_{a_{0}}(a_{i})+\beta(\alpha-\beta-1)]a_{0}+\frac{1}{2}\beta(\alpha-\beta)(a_{i}+a_{-i}).$
###### Proof.
This is [16, Lemma 3.1]. ∎
For $i\in\\{1,2,3\\}$, let
(12) $a_{i}=\lambda_{a_{0}}(a_{i})a_{0}+u_{i}+v_{i}+w_{i}$
be the decomposition of $a_{i}$ into $ad_{a_{0}}$-eigenvectors, where $u_{i}$
is a $0$-eigenvector, $v_{i}$ is an $\alpha$-eigenvector and $w_{i}$ is a
$\beta$-eigenvector.
###### Lemma 4.4.
With the above notation,
1. (1)
$u_{i}=\frac{1}{\alpha}((\lambda_{a_{0}}(a_{i})-\beta-\alpha\lambda_{a_{0}}(a_{i}))a_{0}+\frac{1}{2}(\alpha-\beta)(a_{i}+a_{-i})-s_{0,i})$;
2. (2)
$v_{i}=\frac{1}{\alpha}((\beta-\lambda_{a_{0}}(a_{i}))a_{0}+\frac{\beta}{2}(a_{i}+a_{-i})+s_{0,i})$;
3. (3)
$w_{i}=\frac{1}{2}(a_{i}-a_{-i})$.
###### Proof.
(3) follows from the definitions of $\tau_{0}$ and $a_{i}$, (2) is just a
rearrangement of $a_{0}a_{i}=\lambda_{a_{0}}(a_{i})a_{0}+\alpha v_{i}+\beta
w_{i}$ using Equation (11), and (1) follows by rearranging Equation (12). ∎
For $i,j\in\\{1,2,3\\}$, set
$P_{ij}:=u_{i}u_{j}+u_{i}v_{j}\>\>\mbox{ and
}\>\>Q_{ij}:=u_{i}v_{j}-\frac{1}{\alpha^{2}}s_{0,i}s_{0,j}.$
###### Lemma 4.5.
For $i,j\in\\{1,2,3\\}$ we have
(13) $s_{0,i}\cdot s_{0,j}=\alpha(a_{0}P_{ij}-\alpha Q_{ij}).$
###### Proof.
Since $u_{i}$ and $v_{j}$ are a $0$-eigenvector and an $\alpha$-eigenvector
for ${\rm ad}_{a_{0}}$, respectively, by the fusion rule, we have
$a_{0}P_{ij}=\alpha(u_{i}\cdot v_{j})$ and the result follows. ∎
From now on we assume ${\mathcal{V}}={{\overline{\mathcal{V}}}}$. Let $f$ be
the automorphism of ${\overline{\mathcal{V}}}$ induced by the permutation that
swaps $a_{0}$ with $a_{1}$ as defined in Corollary 3.7. For $i\in{\mathbb{N}}$
define
(14) $\lambda_{i}:=\lambda_{a_{0}}(a_{i}).$
Note that, by Lemma 3.1, for every $i\in{\mathbb{N}}$, we have
$\lambda_{a_{0}}(a_{-i})=\lambda_{i},\>\>\lambda_{a_{1}}(a_{0})=\lambda_{1}^{f},\mbox{
and }\>\>\lambda_{a_{1}}(a_{-1})=\lambda_{2}^{f}.$
Set $T_{0}:=\langle\tau_{0},\tau_{1}\rangle$ and
$T:=\langle\tau_{0},f\rangle$.
###### Lemma 4.6.
The groups $T_{0}$ and $T$ are dihedral groups, $T_{0}$ is a normal subgroup
of $T$ such that $|T:T_{0}|\leq 2$. For every $n\in{\mathbb{N}}$, the set
$\\{s_{0,n},\ldots,s_{n-1,n}\\}$ is invariant under the action of $T$. In
particular, if $K_{n}$ is the kernel of this action, we have
1. (1)
$K_{1}=T$;
2. (2)
$K_{2}=T_{0}$, in particular $s_{0,2}^{f}=s_{1,2}$;
3. (3)
$T/K_{3}$ induces the full permutation group on the set
$\\{s_{0,3},s_{1,3},s_{2,3}\\}$ with point stabilisers generated by
$\tau_{0}K_{3}$, $\tau_{1}K_{3}$ and $fK_{3}$, respectively. In particular
$s_{0,3}^{f}=s_{1,3}$ and $s_{0,3}^{\tau_{1}}=s_{2,3}$.
###### Proof.
This follows immediately from the definitions. ∎
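The action in Lemma 4.6 is easy to track on indices alone: $\tau_{0}$ sends $a_{i}$ to $a_{-i}$, $f$ sends $a_{i}$ to $a_{1-i}$, and $\tau_{1}=f\tau_{0}f$ sends $a_{i}$ to $a_{2-i}$, while $s_{r,n}$ depends only on $r$ modulo $n$ by Lemma 4.2. A short Python sketch (illustrative only) verifies the stated stabilisers and images:

```python
def act(g, r, n):
    """Residue mod n of the image of s_{r,n} under g in {tau0, tau1, f}."""
    maps = {'tau0': lambda i: -i, 'f': lambda i: 1 - i, 'tau1': lambda i: 2 - i}
    i, j = maps[g](r), maps[g](r + n)
    assert i % n == j % n          # the image is again some s_{r',n}
    return i % n

# Lemma 4.6(2): tau_0 fixes s_{0,2} and s_{1,2}; f swaps them
assert [act('tau0', r, 2) for r in (0, 1)] == [0, 1]
assert act('f', 0, 2) == 1
# Lemma 4.6(3): tau_0, tau_1, f stabilise s_{0,3}, s_{1,3}, s_{2,3} respectively,
# and s_{0,3}^f = s_{1,3}, s_{0,3}^{tau_1} = s_{2,3}
assert (act('tau0', 0, 3), act('tau1', 1, 3), act('f', 2, 3)) == (0, 1, 2)
assert act('f', 0, 3) == 1 and act('tau1', 0, 3) == 2
print('index action matches Lemma 4.6')
```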
###### Lemma 4.7.
In the algebra $\overline{\mathcal{V}}$, the following equalities hold:
$\displaystyle(\alpha-2\beta)a_{0}s_{1,2}=$
$\displaystyle\beta^{2}(\alpha-\beta)(a_{-2}+a_{2})$
$\displaystyle+\left[-2\alpha\beta\lambda_{1}+2\beta(1-\alpha)\lambda_{1}^{f}+\frac{\beta}{2}(4\alpha^{2}-2\alpha\beta-\alpha+4\beta^{2}-2\beta)\right](a_{1}+a_{-1})$
$\displaystyle+\frac{1}{(\alpha-\beta)}\left[(6\alpha^{2}-8\alpha\beta-2\alpha+4\beta)\lambda_{1}^{2}+(2\alpha^{2}-2\alpha)\lambda_{1}\lambda_{1}^{f}\right.$
$\displaystyle+2(-2\alpha^{2}-2\alpha\beta+\alpha)(\alpha-\beta)\lambda_{1}-4\beta(\alpha-1)(\alpha-\beta)\lambda_{1}^{f}-\alpha\beta(\alpha-\beta)\lambda_{2}$
$\displaystyle\left.+(4\alpha^{2}\beta-2\alpha\beta+2\beta^{3})(\alpha-\beta)\right]a_{0}$
$\displaystyle+\left[-4\alpha\lambda_{1}-4(\alpha-1)\lambda_{1}^{f}+(4\alpha^{2}-2\alpha\beta-\alpha+4\beta^{2}-2\beta)\right]s_{0,1}$
$\displaystyle+2\beta(\alpha-\beta)s_{0,2}$
and
$\displaystyle 4(\alpha-2\beta)s_{0,1}\cdot s_{0,1}=$
$\displaystyle\beta(\alpha-\beta)^{2}(\alpha-4\beta)(a_{-2}+a_{2})$
$\displaystyle+\left[4\alpha\beta(\alpha-\beta)\lambda_{1}+2(-\alpha^{3}+5\alpha^{2}\beta+\alpha^{2}-4\alpha\beta^{2}-5\alpha\beta+4\beta^{2})\lambda_{1}^{f}\right.$
$\displaystyle\>\>\;\;\;\left.+\beta(-10\alpha^{2}\beta-\alpha^{2}+14\alpha\beta^{2}+7\alpha\beta-4\beta^{3}-6\beta^{2})\right](a_{-1}+a_{1})$
$\displaystyle+2\left[2(-3\alpha^{2}+4\alpha\beta+\alpha-2\beta)\lambda_{1}^{2}+2\alpha(1-\alpha)\lambda_{1}\lambda_{1}^{f}\right.$
$\displaystyle\>\>\>\>\>\left.+2(\alpha^{3}+4\alpha^{2}\beta-6\alpha\beta^{2}-3\alpha\beta+4\beta^{2})\lambda_{1}+2\alpha\beta(\alpha-1)\lambda_{1}^{f}+\alpha\beta(\alpha-\beta)\lambda_{2}\right.$
$\displaystyle\>\>\>\>\>\left.+\beta(-\alpha^{3}-8\alpha^{2}\beta+13\alpha\beta^{2}+4\alpha\beta-4\beta^{3}-4\beta^{2})\right]a_{0}$
$\displaystyle+4\left[2\alpha(\alpha-\beta)\lambda_{1}+\alpha(\alpha-1)\lambda_{1}^{f}+(-6\alpha^{2}\beta+10\alpha\beta^{2}+\alpha\beta-4\beta^{3})\right]s_{0,1}$
$\displaystyle-2\alpha\beta(\alpha-\beta)s_{0,2}+2\beta(\alpha-\beta)(\alpha-2\beta)s_{1,2}.$
###### Proof.
By the fusion law,
(15) $a_{0}(u_{1}\cdot u_{1}-v_{1}\cdot v_{1}+\lambda_{a_{0}}(v_{1}\cdot
v_{1})a_{0})=0.$
Substituting in the left side of (15) the values for $u_{1}$ and $v_{1}$ given
in Lemma 4.4 we get the first equality. The expression for
$(\alpha-2\beta)a_{0}s_{1,2}$ allows us to write the vector
$(\alpha-2\beta)(a_{0}P_{11}-\alpha Q_{11})$
explicitly as a linear combination of
$a_{-2},a_{-1},a_{0},a_{1},a_{2},s_{0,1},s_{0,2},s_{1,2}$. The second
equality then follows from Equation (13) in Lemma 4.5. ∎
###### Lemma 4.8.
In the algebra $\overline{\mathcal{V}}$, the following equalities hold:
1. (1)
$\beta(\alpha-\beta)^{2}(\alpha-4\beta)(a_{3}-a_{-2})=c$,
2. (2)
$\beta(\alpha-\beta)^{2}(\alpha-4\beta)(a_{4}-a_{-1})=-c^{\tau_{1}},$
where
$\displaystyle c=$
$\displaystyle(\alpha-\beta)\left[4\alpha\beta\lambda_{1}-2(\alpha-1)(\alpha-4\beta)\lambda_{1}^{f}+\beta(-\alpha^{2}-5\alpha\beta-\alpha+6\beta)\right]a_{-1}$
$\displaystyle+\left[4(-3\alpha^{2}+4\alpha\beta+\alpha-2\beta)\lambda_{1}^{2}-4\alpha(\alpha-1)\lambda_{1}\lambda_{1}^{f}\right.$
$\displaystyle+(6\alpha^{3}+6\alpha^{2}\beta-2\alpha^{2}-16\alpha\beta^{2}-2\alpha\beta+8\beta^{2})\lambda_{1}+4\alpha\beta(\beta-1)\lambda_{1}^{f}+2\alpha\beta(\alpha-\beta)\lambda_{2}$
$\displaystyle\left.+\beta(\alpha-\beta)(-2\alpha^{2}-8\alpha\beta+\alpha+4\beta^{2}+2\beta)\right]a_{0}$
$\displaystyle+\left[4\alpha(\alpha-1)\lambda_{1}\lambda_{1}^{f}+4(3\alpha^{2}-4\alpha\beta-\alpha+2\beta){\lambda_{1}^{f}}^{2}-4\alpha\beta(\beta-1)\lambda_{1}\right.$
$\displaystyle+(-6\alpha^{3}-6\alpha^{2}\beta+2\alpha^{2}+16\alpha\beta^{2}+2\alpha\beta-8\beta^{2})\lambda_{1}^{f}-2\alpha\beta(\alpha-\beta)\lambda_{2}^{f}$
$\displaystyle\left.+\beta(\alpha-\beta)(2\alpha^{2}+8\alpha\beta-\alpha-4\beta^{2}-2\beta)\right]a_{1}$
$\displaystyle+(\alpha-\beta)\left[2(\alpha-1)(\alpha-4\beta)\lambda_{1}-4\alpha\beta\lambda_{1}^{f}+\beta(\alpha^{2}+5\alpha\beta+\alpha-6\beta)\right]a_{2}$
$\displaystyle+(\alpha-\beta)\left[4\alpha(\alpha-2\beta+1)(\lambda_{1}-\lambda_{1}^{f})\right]s_{0,1}$
$\displaystyle-4\beta(\alpha-\beta)^{2}(s_{0,2}-s_{1,2}).$
###### Proof.
Since $s_{0,1}s_{0,1}$ is invariant under $f$, we have
$4(\alpha-2\beta)[s_{0,1}s_{0,1}-(s_{0,1}s_{0,1})^{f}]=0$. Then, equality
$(1)$ follows from the expression for $4(\alpha-2\beta)s_{0,1}s_{0,1}$ given
in Lemma 4.7. By applying $\tau_{1}$ to the equation in $(1)$ we get (2). ∎
From the formulae in Lemmas 4.7 and 4.8, it is clear that we have different
pictures according to whether $\alpha-2\beta$ and $\alpha-4\beta$ are invertible
in ${\overline{R}}$ or not. Since we are most concerned with algebras over a
field, later we will also assume that $\alpha-2\beta$ and $\alpha-4\beta$ are
either invertible or zero. Thus we deal separately with the following three
cases:
1. (1)
The generic case: the ideal $L$ contains the elements $t_{h}$ for
$h\in{\mathbb{N}}$ with $h\geq 3$, together with $2t_{1}-1$, $(x_{1}-2x_{2})t_{2}-1$, and
$(x_{1}-4x_{2})t_{3}-1$. In this case we set
$\mathcal{O}_{g}:=\mathcal{O}_{L}$.
2. (2)
The $\alpha=2\beta$ case: the ideal $L$ contains the elements $t_{h}$ for
$h\in{\mathbb{N}}$ with $h\geq 1$, together with $2t_{1}-1$ and $x_{1}-2x_{2}$. In this
case we set $\mathcal{O}_{2\beta}:=\mathcal{O}_{L}$.
3. (3)
The $\alpha=4\beta$ case: the ideal $L$ contains the elements $t_{h}$ for
$h\in{\mathbb{N}}$ with $h\geq 1$, together with $2t_{1}-1$ and $x_{1}-4x_{2}$. In this
case we set $\mathcal{O}_{4\beta}:=\mathcal{O}_{L}$.
In [3] the case $\alpha=2\beta$ is considered in detail: in particular it is
shown that every $2$-generated primitive axial algebra in
$\mathcal{O}_{2\beta}$ is at most $8$-dimensional and this bound is attained.
The following result, which can be compared to Theorem 3.7 in [16], is a
consequence of the resurrection principle [11, Lemma 1.7].
###### Proposition 4.9.
Let
$\overline{\mathcal{V}}=({\overline{R}},{\overline{V}},{\overline{\mathcal{A}}},(\overline{\mathcal{S}},\overline{\star}))$
be the universal object in the category $\mathcal{O}_{g}$. Then
${\overline{V}}$ is linearly spanned by the set
$\\{a_{-2},a_{-1},a_{0},a_{1},a_{2},s_{0,1},s_{0,2},s_{1,2}\\}$.
###### Proof.
Let $U$ be the linear span in ${\overline{V}}$ of the set
$B:=\\{a_{-2},a_{-1},a_{0},a_{1},a_{2},s_{0,1},s_{0,2},s_{1,2}\\}$
with coefficients in ${\overline{R}}$. From Lemmas 4.7 and 4.8, since
$\alpha-2\beta$ and $\alpha-4\beta$ are invertible in ${\overline{R}}$, we get
that
$a_{0}\cdot s_{1,2},a_{3}\in U.$
The set $B$ is clearly invariant under the action of $\tau_{0}$ and since
$a_{-2}^{f}=a_{3}$, $U$ is also invariant under $f$. By Equation (11),
$a_{0}a_{1}$ and $a_{0}a_{2}$ are contained in $U$; by applying alternatively
$\tau_{0}$ and $f$ we get that $U$ contains all the products $a_{i}a_{j}$ for
$i,j\in{\mathbb{Z}}$. Similarly, since by Lemma 4.3, for $i\in\\{1,2\\}$,
$a_{0}s_{0,i}\in U$, and by Lemma 4.7, $a_{0}s_{1,2}\in U$, we get that $U$
contains all the products $a_{j}s_{0,i}$ and $a_{j}s_{1,2}$ for
$j\in{\mathbb{Z}},i\in\\{1,2\\}$.
It follows that, for $i,j\in\\{1,2\\}$, the expression on the right-hand side
of Equation (13) is contained in $U$, whence $s_{0,i}\cdot s_{0,j}$ is
contained in $U$. As $U$ is invariant under $f$, we have also $s_{1,2}\cdot
s_{1,2}\in U$. Finally, with a similar argument, using the identity
$a_{0}\cdot(u_{i}\cdot u_{-1}+u_{i}\cdot v_{-1})=\alpha(u_{i}\cdot v_{-1}),$
we can express $s_{0,i}\cdot s_{1,2}$ as a linear combination of elements of
$B$. Hence $U$ is a subalgebra of ${\overline{V}}$, and since ${\overline{V}}$
is generated by $a_{0}$ and $a_{1}$, we get that $U={\overline{V}}$. ∎
###### Remark 4.10.
Note that the above proof gives an explicit way to compute the structure
constants of the algebra ${\overline{V}}$ relative to the generating set $B$.
This has been done with the use of GAP [6]. The resulting expressions,
however, are far too long to be reproduced here.
###### Corollary 4.11.
Let
$\overline{\mathcal{V}}=({\overline{R}},{\overline{V}},{\overline{\mathcal{A}}},(\overline{\mathcal{S}},\overline{\star}))$
be the universal object in the category $\mathcal{O}_{g}$. Then,
${\overline{R}}$ is generated as a $\hat{D}$-algebra by $\lambda_{1}$,
$\lambda_{2}$, $\lambda_{1}^{f}$, and $\lambda_{2}^{f}$.
###### Proof.
Since, for every $v\in{\overline{V}}$,
$\lambda_{a_{1}}(v)=(\lambda_{a_{0}}(v^{f}))^{f}$, $\lambda_{a_{0}}$ is a
linear function, and ${\overline{R}}={\overline{R}}^{f}$, by Proposition 4.9,
we just need to show that, for every
$v\in\\{a_{-2},a_{-1},a_{0},a_{1},a_{2},s_{0,1},s_{0,2},s_{1,2}\\}$,
$\lambda_{a_{0}}(v)$ can be written as a linear combination, with coefficients
in $\hat{D}$, of products of $\lambda_{1}$, $\lambda_{2}$, $\lambda_{1}^{f}$,
and $\lambda_{2}^{f}$. By definition we have
$\lambda_{a_{0}}(a_{0})=1,\>\>\lambda_{a_{0}}(a_{1})=\lambda_{1},\mbox{ and
}\>\>\lambda_{a_{0}}(a_{2})=\lambda_{2}.$
Since $\tau_{0}$ is an ${\overline{R}}$-automorphism of ${\overline{V}}$
fixing $a_{0}$, we get
$\lambda_{a_{0}}(a_{-1})=\lambda_{a_{0}}((a_{1})^{\tau_{0}})=\lambda_{1},$
$\lambda_{a_{0}}(a_{-2})=\lambda_{a_{0}}((a_{2})^{\tau_{0}})=\lambda_{2},$
and
$\lambda_{a_{0}}(s_{0,1})=\lambda_{a_{0}}(a_{0}a_{1}-\beta a_{0}-\beta
a_{1})=\lambda_{1}-\beta-\beta\lambda_{1},$
$\lambda_{a_{0}}(s_{0,2})=\lambda_{a_{0}}(a_{0}a_{2}-\beta a_{0}-\beta
a_{2})=\lambda_{2}-\beta-\beta\lambda_{2}.$
Finally, by the fusion law, $u_{1}u_{1}+u_{1}v_{1}$ is a sum of a $0$ and an
$\alpha$-eigenvector for ${\rm ad}_{a_{0}}$, whence
$\lambda_{a_{0}}(u_{1}u_{1}+u_{1}v_{1})=0$. By Lemma 4.3, we can compute
$u_{1}u_{1}+u_{1}v_{1}$ and find
$u_{1}u_{1}+u_{1}v_{1}=w+\frac{(\alpha-\beta)}{2\alpha}s_{1,2},$
with $w\in\langle a_{-2},a_{-1},a_{0},a_{1},a_{2},s_{0,1}\rangle$. So, we can
express $\lambda_{a_{0}}(s_{1,2})$ and we obtain
$\lambda_{a_{0}}(s_{1,2})=\frac{2(\alpha-1)}{\alpha-\beta}\lambda_{1}^{2}-\frac{2(\alpha-1)}{\alpha-\beta}\lambda_{1}\lambda_{1}^{f}+(1-2\beta)\lambda_{1}+\beta\lambda_{2}-\beta.$
∎
We conclude this section with a similar result for symmetric algebras over a
field in the case $\alpha=4\beta$. Note that, in this case, since we are
assuming $\alpha\neq\beta$, we also assume that the field is of characteristic
other than $3$.
###### Proposition 4.12.
Let $V$ be a primitive symmetric axial algebra of Monster type
$(4\beta,\beta)$ over a field ${\mathbb{F}}$ of characteristic greater than
$3$, generated by two axes $\overline{a}_{0}$ and $\overline{a}_{1}$. Then $V$
has dimension at most $8$, unless $2\beta-1=0$ and
$\lambda_{\overline{a}_{0}}(\overline{a}_{1})=\lambda_{\overline{a}_{1}}(\overline{a}_{0})=\lambda_{\overline{a}_{0}}(\overline{a}_{0}^{\tau_{\overline{a}_{1}}})=\lambda_{\overline{a}_{1}}(\overline{a}_{1}^{\tau_{\overline{a}_{0}}})=1.$
###### Proof.
Let
$\overline{\mathcal{V}}=({\overline{R}},{\overline{V}},{\overline{\mathcal{A}}},(\overline{\mathcal{S}},\overline{\star}))$
be the universal object in the category $\mathcal{O}_{4\beta}$. Since
$\alpha-2\beta=2\beta$ is invertible in ${\overline{R}}$, Lemma 4.7 yields
that, $a_{0}\cdot s_{1,2}$ is contained in $\langle
a_{-2},a_{-1},a_{0},a_{1},a_{2},s_{0,1},s_{0,2},s_{1,2}\rangle$. Since
$\alpha=4\beta$, Equation $(1)$ in Lemma 4.8 becomes
(16) $\displaystyle 0$ $\displaystyle=$
$\displaystyle\left[(48\beta^{3})\lambda_{1}-(108\beta^{4}-6\beta^{3})\right]a_{-1}$
$\displaystyle+\left[(-128\beta^{2}+8\beta)\lambda_{1}^{2}+(-64\beta^{2}+16\beta)\lambda_{1}\lambda_{1}^{f}+(416\beta^{3}-32\beta^{2})\lambda_{1}\right.$
$\displaystyle\left.+(16\beta^{3}-16\beta^{2})\lambda_{1}^{f}+(24\beta^{3})\lambda_{2}+(-180\beta^{4}+18\beta^{3})\right]a_{0}$
$\displaystyle+\left[(64\beta^{2}-16\beta)\lambda_{1}\lambda_{1}^{f}+(128\beta^{2}-8\beta){\lambda_{1}^{f}}^{2}+(-16\beta^{3}+16\beta^{2})\lambda_{1}\right.$
$\displaystyle\left.+(-416\beta^{3}+32\beta^{2})\lambda_{1}^{f}+(-24\beta^{3})\lambda_{2}^{f}+(180\beta^{4}-18\beta^{3})\right]a_{1}$
$\displaystyle+\left[(-48\beta^{3})\lambda_{1}^{f}+(108\beta^{4}-6\beta^{3})\right]a_{2}$
$\displaystyle+48\beta^{2}(2\beta+1)(\lambda_{1}-\lambda_{1}^{f})s_{0,1}$
$\displaystyle-36\beta^{3}(s_{0,2}-s_{1,2}).$
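As a cross-check (an illustrative sympy computation; the paper's own computations use GAP [6]), substituting $\alpha=4\beta$ into several coefficients of $c$ from Lemma 4.8 reproduces the corresponding coefficients of the displayed equation above:

```python
import sympy as sp

a, b = sp.symbols('alpha beta')   # alpha, beta

# (coefficient in c from Lemma 4.8, its value in the display), at alpha = 4*beta
checks = [
    ((a - b)*4*a*b,                         48*b**3),               # a_{-1}: lambda_1
    ((a - b)*b*(-a**2 - 5*a*b - a + 6*b),   -(108*b**4 - 6*b**3)),  # a_{-1}: constant
    (4*(-3*a**2 + 4*a*b + a - 2*b),         -128*b**2 + 8*b),       # a_0: lambda_1^2
    (-4*a*(a - 1),                          -64*b**2 + 16*b),       # a_0: lambda_1*lambda_1^f
    (6*a**3 + 6*a**2*b - 2*a**2 - 16*a*b**2 - 2*a*b + 8*b**2,
                                            416*b**3 - 32*b**2),    # a_0: lambda_1
    (2*a*b*(a - b),                         24*b**3),               # a_0: lambda_2
    ((a - b)*4*a*(a - 2*b + 1),             48*b**2*(2*b + 1)),     # s_{0,1}
    (-4*b*(a - b)**2,                       -36*b**3),              # s_{0,2} - s_{1,2}
]
for gen, spec in checks:
    assert sp.expand(gen.subs(a, 4*b) - spec) == 0
print('specialisation at alpha = 4*beta consistent')
```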
By Corollary 3.8, $V$ is a homomorphic image of
${\overline{V}}\otimes_{\hat{D}}{\mathbb{F}}$, via a homomorphism $\phi_{V}$
mapping $a_{i}$ to $\overline{a}_{i}$, for $i\in\\{0,1\\}$ and ${\mathbb{F}}$
is a homomorphic image of ${\overline{R}}\otimes_{\hat{D}}{\mathbb{F}}$ via
$\phi_{R}$. We use the bar notation to denote the images of the elements of
${\overline{V}}\otimes_{\hat{D}}{\mathbb{F}}$ via $\phi_{V}$, while we
identify the image under $\phi_{R}$ of an element of
${\overline{R}}\otimes_{\hat{D}}{\mathbb{F}}$ with the element itself. When we
apply $\phi_{V}$ to the relation (16) we get a similar relation in $V$. If the
coefficient of $a_{2}$ is not zero in ${\mathbb{F}}$, then we get
$\overline{a}_{2}\in
U_{0}:=\langle\overline{a}_{-1},\overline{a}_{0},\overline{a}_{1},\overline{s}_{0,1},\overline{s}_{0,2},\overline{s}_{1,2}\rangle$.
Since $V$ is symmetric, $f$ induces an automorphism $\bar{f}$ of $V$ and
$U_{0}$ is $\bar{f}$ invariant. Since $U_{0}$ is also
$\tau_{\bar{a}_{0}}$-invariant, we get also $\overline{a}_{-2}\in U_{0}$. More
generally, by applying alternatively $\tau_{\bar{a}_{0}}$ and $\bar{f}$ we get
that $\bar{a}_{i}\in U_{0}$ for every $i\in{\mathbb{Z}}$. The argument used in
the proof of Proposition 4.9 yields $V=U_{0}$. If the coefficient of
$\overline{a}_{2}$ in Equation (16) is zero in ${\mathbb{F}}$, then we may
consider the coefficient of $\overline{a}_{-1}$ and if it is not zero we
deduce as above that
$\overline{a}_{-1}\in\langle\overline{a}_{0},\overline{a}_{1},\overline{s}_{0,1},\overline{s}_{0,2},\overline{s}_{1,2}\rangle$.
By proceeding as in the previous case, we get
$V=\langle\overline{a}_{0},\overline{a}_{1},\overline{s}_{0,1},\overline{s}_{0,2},\overline{s}_{1,2}\rangle$.
If the coefficients of $\overline{a}_{2}$ and $\overline{a}_{-1}$ are both
zero, then we get
$\lambda_{\bar{a}_{0}}(\bar{a}_{1})=\lambda_{\bar{a}_{1}}(\bar{a}_{0})=\frac{18\beta-1}{8}.$
As above, if the coefficient of $\overline{a}_{0}$ (or the coefficient of
$\overline{a}_{1}$) is not zero, we can express $\overline{a}_{0}$ (or
$\overline{a}_{1}$ respectively) as a linear combination of
$\overline{a}_{-1},\overline{s}_{0,1},\overline{s}_{0,2}-\overline{s}_{1,2}$
($\overline{a}_{0},\overline{s}_{0,1},\overline{s}_{0,2}-\overline{s}_{1,2}$
respectively). In both cases, it follows that
$V=\langle\overline{a}_{-1},\overline{a}_{0},\overline{a}_{1},\overline{s}_{0,1},\overline{s}_{0,2},\overline{s}_{1,2}\rangle$.
If also the coefficients of $\overline{a}_{0}$ and $\overline{a}_{1}$ are both
zero, then we get
$\lambda_{\bar{a}_{0}}(\bar{a}_{2})=\lambda_{\bar{a}_{1}}(\bar{a}_{-1})=\frac{480\beta^{3}-228\beta^{2}+28\beta-1}{64\beta^{2}}$
and Equation (16) becomes
$0=36\beta^{2}(\overline{s}_{0,2}-\overline{s}_{1,2}).$
Hence, since ${\mathbb{F}}$ has characteristic greater than $3$,
$\overline{s}_{0,2}=\overline{s}_{1,2}$ and the identity
$\lambda_{\overline{a}_{0}}(\overline{s}_{0,2})=\lambda_{\overline{a}_{0}}(\overline{s}_{1,2})$
gives that $\beta$ satisfies the relation
(17) $(2\beta-1)^{2}(12\beta-1)(14\beta-1)=0.$
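Relation (17) pins down $\beta$; its roots are immediate (illustrative sympy check):

```python
import sympy as sp

beta = sp.symbols('beta')
# roots of relation (17): (2*beta - 1)^2 (12*beta - 1)(14*beta - 1) = 0
roots = sp.solve((2*beta - 1)**2 * (12*beta - 1) * (14*beta - 1), beta)
print(sorted(roots))   # beta lies in {1/14, 1/12, 1/2}
```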
From now on assume
$\beta\in\\{\frac{1}{12},\frac{1}{14}\\}\setminus\\{\frac{1}{2}\\}$, in
particular ${\mathbb{F}}$ has characteristic other than $5$. Set
$U_{1}:=\langle\overline{a}_{-3},\overline{a}_{-2},\overline{a}_{-1},\overline{a}_{0},\overline{a}_{1},\overline{a}_{2},\overline{a}_{3},\overline{s}_{0,1},\overline{s}_{0,2},\overline{s}_{0,3}\rangle$.
From the identity
$\overline{a}_{0}(\overline{u}_{1}\overline{u}_{2}-\overline{v}_{1}\overline{v}_{2}+\lambda_{\overline{a}_{0}}(\overline{v}_{1}\overline{v}_{2})\overline{a}_{0})=0$
we can express $\overline{a}_{0}(\overline{s}_{1,3}+\overline{s}_{3,2})$ as a
linear combination of $\overline{a}_{-3}$, $\overline{a}_{-2}$,
$\overline{a}_{-1}$, $\overline{a}_{0}$, $\overline{a}_{1}$,
$\overline{a}_{2}$, $\overline{a}_{3}$, $\overline{s}_{0,1}$, and
$\overline{s}_{0,2}$ and then, by Lemma 4.5, we get that
$\overline{s}_{0,1}\overline{s}_{0,2}\in U_{1}$. Then, from the identity
$\overline{s}_{0,1}\overline{s}_{0,2}-(\overline{s}_{0,1}\overline{s}_{0,2})^{f}=0$,
we derive that $\overline{s}_{1,3}\in U_{1}$, whence also
$\overline{s}_{2,3}=(\overline{s}_{1,3})^{\tau_{\overline{a}_{0}}}\in U_{1}$.
From the identity $\overline{s}_{2,3}-(\overline{s}_{2,3})^{f}=0$ we then get
$\overline{a}_{4}\in U_{1}$. It follows that $U_{1}$ is invariant under $f$
and $\tau_{\overline{a}_{0}}$, hence $\overline{a}_{\pm i}\in U_{1}$ for
$i\geq 4$. Since $U_{1}$ is also ${\rm ad}_{\overline{a}_{0}}$-invariant, it
follows that $U_{1}$ contains $\overline{s}_{r,n}$ for every $n\geq 1$,
$r\in\\{0,\ldots,n-1\\}$. Thus $U_{1}$ is a subalgebra of $V$, whence
$V=U_{1}$. From the identity
$\overline{a}_{0}(\overline{v}_{1}\overline{v}_{2}-\lambda_{\overline{a}_{0}}(\overline{v}_{1}\overline{v}_{2})\overline{a}_{0})=0$
we get
$\overline{s}_{0,3}\in\langle\overline{a}_{-3},\overline{a}_{-2},\overline{a}_{-1},\overline{a}_{0},\overline{a}_{1},\overline{a}_{2},\overline{a}_{3},\overline{s}_{0,1},\overline{s}_{0,2}\rangle$.
Finally, from the identity $\overline{a}_{5}^{2}-\overline{a}_{5}=0$ we get
$\overline{a}_{-3}\in\langle\overline{a}_{-2},\overline{a}_{-1},\overline{a}_{0},\overline{a}_{1},\overline{a}_{2},\overline{a}_{3},\overline{s}_{0,1},\overline{s}_{0,2}\rangle$,
that is $V$ has dimension at most $8$. ∎
## 5\. The generic case
Let
$\overline{\mathcal{V}}=({\overline{R}},{\overline{V}},{\overline{\mathcal{A}}},(\overline{\mathcal{S}},\overline{\star}))$
be the universal object in the category $\mathcal{O}_{g}$. Note that in this
case we have
$\hat{D}={\mathbb{Z}}[1/2,x_{1},x_{2},x_{1}^{-1},x_{2}^{-1},(x_{1}-x_{2})^{-1},(x_{1}-2x_{2})^{-1},(x_{1}-4x_{2})^{-1}].$
By Corollary 4.11,
${\overline{R}}=\hat{D}[\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f}]$.
The elements $\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f}$ are not
necessarily algebraically independent over $\hat{D}$, as they must satisfy various
relations imposed by the definition of ${\overline{R}}$. In particular, since
by Lemma 4.6, $s_{2,3}-s_{2,3}^{f}=0$, in the ring ${\overline{R}}$ the
following relations hold
1. (1)
$\lambda_{a_{0}}(s_{2,3}-(s_{2,3})^{f})=0$,
2. (2)
$\lambda_{a_{0}}((s_{2,3}-(s_{2,3})^{f})^{\tau_{1}})=0$,
3. (3)
$\lambda_{a_{0}}(a_{3}a_{3}-a_{3})=0$,
4. (4)
$\lambda_{a_{1}}(s_{2,3}-(s_{2,3})^{f})=0$.
By Remark 4.10, the four expressions on the left hand side of the above
identities can be computed explicitly and produce respectively four
polynomials $p_{i}(x,y,z,t)$ for $i\in\\{1,\ldots,4\\}$ in $\hat{D}[x,y,z,t]$
(with $x,y,z,t$ indeterminates over $\hat{D}$), that vanish simultaneously at
the quadruple $(\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f})$.
Define also, for $i\in\\{1,2\\}$, $q_{i}(x,z):=p_{i}(x,x,z,z)$. The
polynomials $p_{i}$ and $q_{i}$ are too long to be displayed here but can
be computed using [1] or [6].
Suppose $V$ is a primitive axial algebra of Monster type $(\alpha,\beta)$ over
a field ${\mathbb{F}}$ of odd characteristic, with
$\alpha,\beta\in{\mathbb{F}}$ and $\alpha\not\in\\{2\beta,4\beta\\}$,
generated by two axes $\bar{a}_{0}$ and $\bar{a}_{1}$. Then, by Corollary 3.8,
$V$ is a homomorphic image of ${\overline{V}}\otimes_{\hat{D}}{\mathbb{F}}$
and ${\mathbb{F}}$ is a homomorphic image of
${\overline{R}}\otimes_{\hat{D}}{\mathbb{F}}$. We denote the images of an
element $\delta$ of ${\overline{R}}\otimes_{\hat{D}}{\mathbb{F}}$ in
${\mathbb{F}}$ by $\bar{\delta}$ and by $\overline{p}_{i}$ and
$\overline{q}_{i}$ the polynomials in ${\mathbb{F}}[x,y,z,t]$ and
${\mathbb{F}}[x,z]$ corresponding to $p_{i}$ and $q_{i}$, respectively. Set
$T:=\\{\overline{p}_{1},\overline{p}_{2},\overline{p}_{3},\overline{p}_{4}\\}$
and
$T_{s}:=\\{\overline{q}_{1},\overline{q}_{2}\\}.$
Moreover, for $P\in\\{T,T_{s}\\}$, denote by $Z(P)$ the set of common zeroes
of all the elements of $P$ in ${\mathbb{F}}^{4}$ and ${\mathbb{F}}^{2}$
respectively. It is clear from the definition that the $\overline{p}_{i}$’s
have coefficients in the field ${\mathbb{F}}_{0}(\alpha,\beta)$. By
Proposition 4.9 and Corollary 4.11, the algebra $V$ is completely determined,
up to homomorphic images, by the quadruple
$(\lambda_{\bar{a}_{0}}(\bar{a}_{1}),\lambda_{\bar{a}_{1}}(\bar{a}_{0}),\lambda_{\bar{a}_{0}}(\bar{a}_{2}),\lambda_{\bar{a}_{1}}(\bar{a}_{-1})).$
Furthermore, this quadruple is the homomorphic image in ${\mathbb{F}}^{4}$ of
the quadruple $(\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f})$ and
so it is a common zero of the elements of $T$.
If, in addition, the algebra $V$ is symmetric, then
$\lambda_{\bar{a}_{0}}(\bar{a}_{1})=\lambda_{\bar{a}_{1}}(\bar{a}_{0})\mbox{
and }\lambda_{\bar{a}_{0}}(\bar{a}_{2})=\lambda_{\bar{a}_{1}}(\bar{a}_{-1})$
and the pair
$(\lambda_{\bar{a}_{0}}(\bar{a}_{1}),\lambda_{\bar{a}_{0}}(\bar{a}_{2}))$ is a
common zero of the elements of the set $T_{s}$.
We thus have proved Theorem 1.3.
Computing the resultant of the polynomials $\overline{q}_{1}(x,z)$ and
$\overline{q}_{2}(x,z)$ with respect to $z$, one obtains a polynomial in $x$ of
degree 10, which is the product of the five linear factors
$x,\>\>x-1,\>\>2x-\alpha,\>\>2x-\beta,\>\>4(2\alpha-1)x-(3\alpha^{2}+3\alpha\beta-\alpha-2\beta)$
and a factor of degree at most $5$. This last factor has degree exactly $5$
and is irreducible over ${\mathbb{Q}}(\alpha,\beta)$, if $\alpha$ and $\beta$ are
indeterminates over ${\mathbb{Q}}$. On the other hand, for certain values of
$\alpha$ and $\beta$, this factor can be reducible: for example, it even
completely splits in ${\mathbb{Q}}(\alpha,\beta)[x]$ when $\alpha=2\beta$ (see
[3]), or in the Norton-Sakuma case, when $(\alpha,\beta)=(1/4,1/32)$ (see the
proof of Theorem 1.6 below).
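The elimination step used here follows a standard pattern: take the resultant with respect to $z$ to obtain a univariate condition in $x$, then factor over ${\mathbb{Q}}(\alpha,\beta)$. Since the actual polynomials are too long to reproduce, the sympy sketch below illustrates the workflow on small hypothetical stand-ins, chosen only so that two of the linear factors listed above appear:

```python
import sympy as sp

x, z, alpha = sp.symbols('x z alpha')

# hypothetical stand-ins; NOT the actual polynomials of the paper
q1 = (x - 1)*(2*x - alpha) + (z - x)
q2 = z - x

res = sp.resultant(q1, q2, z)        # eliminate z
print(sp.factor(res))                # a product of linear factors in x

# both expected linear factors divide the resultant
assert res.subs(x, 1) == 0
assert sp.simplify(res.subs(x, alpha/2)) == 0
```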
Given a field ${\mathbb{F}}$, in order to classify primitive generic axial
algebras of Monster type $(\alpha,\beta)$ over ${\mathbb{F}}$ generated by two
axes $\bar{a}_{0}$ and $\bar{a}_{1}$ we can proceed as follows. We first find
all the zeroes of the set $T_{s}$ and classify all symmetric algebras. Then we
observe that the even subalgebra
$\langle\langle\bar{a}_{0},\bar{a}_{2}\rangle\rangle$ and the odd subalgebra
$\langle\langle\bar{a}_{-1},\bar{a}_{1}\rangle\rangle$ are symmetric, since
the automorphisms $\tau_{\bar{a}_{0}}$ and $\tau_{\bar{a}_{1}}$ respectively,
swap the generating axes. Hence, from the classification of the symmetric
case, we know all possible values for the pairs
$(\lambda_{\bar{a}_{0}}(\bar{a}_{2}),\lambda_{\bar{a}_{1}}(\bar{a}_{-1}))$
and we can look for common zeros $(x_{0},y_{0},z_{0},t_{0})$ of the set $T$
with those prescribed values for $(x_{0},z_{0})$.
Using this method, we now classify $2$-generated primitive axial algebras of
Monster type $(\alpha,\beta)$ over the field ${\mathbb{Q}}(\alpha,\beta)$,
with $\alpha$ and $\beta$ independent indeterminates over ${\mathbb{Q}}$.
###### Lemma 5.1.
If ${\mathbb{F}}={\mathbb{Q}}(\alpha,\beta)$, with $\alpha$ and $\beta$
independent indeterminates over ${\mathbb{Q}}$, the set $Z(T_{s})$ consists
exactly of the $5$ points
$(1,1),\>\>(0,1),\>\>\left(\frac{\beta}{2},\frac{\beta}{2}\right),\>\>\left(\frac{\alpha}{2},1\right),$
and
$(q(\alpha,\beta),q(\alpha,\beta)),\mbox{ with
}q(\alpha,\beta)=\frac{(3\alpha^{2}+3\alpha\beta-\alpha-2\beta)}{4(2\alpha-1)}.$
###### Proof.
The system can be solved in ${\mathbb{Q}}(\alpha,\beta)$ using [1], giving the
five solutions of the statement. ∎
###### Lemma 5.2.
Let ${\mathbb{F}}={\mathbb{Q}}(\alpha,\beta)$, with $\alpha$ and $\beta$
independent indeterminates over ${\mathbb{Q}}$ and let $(x_{0},z_{0})\in
Z(T_{s})$. Then $(x_{0},y_{0},z_{0},t_{0})\in Z(T)$ if and only if
$(y_{0},t_{0})=(x_{0},z_{0}).$
###### Proof.
This has been checked using [1]. ∎
###### Lemma 5.3.
The algebras $3C(\alpha)$, $3C(\beta)$, and $3A(\alpha,\beta)$ over the field
${\mathbb{Q}}(\alpha,\beta)$ are simple.
###### Proof.
The claim follows from [9, Theorem 4.11 and Corollary 4.6]. For the algebras
$3C(\alpha)$ and $3C(\beta)$ it is proved in [8, Example 3.4]. The algebra
$3A(\alpha,\beta)$ is the same as the algebra $3A^{\prime}_{\alpha,\beta}$
defined by Rehren in [16]. By [16, Lemma 8.2], it admits a Frobenius form
which is non-degenerate over the field ${\mathbb{Q}}(\alpha,\beta)$ and such
that all the generating axes are non-singular with respect to this form.
Hence, by Theorem 4.11 in [9], every non trivial ideal contains at least one
of the generating axes. Then, Corollary 4.6 in [9] yields that the algebra is
simple. ∎
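For the algebras $3C(\eta)$ the non-degeneracy can also be seen directly. Assuming the usual Frobenius form on $3C(\eta)$ with $(a_{i},a_{i})=1$ and $(a_{i},a_{j})=\eta/2$ for $i\neq j$ (this normalisation, standard in the axial-algebra literature, is an assumption of the sketch), the Gram determinant factors cleanly:

```python
import sympy as sp

eta = sp.symbols('eta')
# assumed Gram matrix of the Frobenius form on 3C(eta) in the basis a_0, a_1, a_2
G = sp.Matrix(3, 3, lambda i, j: 1 if i == j else eta/2)

det = sp.factor(G.det())
print(det)   # vanishes only for eta in {-1, 2}
assert sp.simplify(G.det() - (1 - eta/2)**2*(1 + eta)) == 0
```

In particular the form is non-degenerate over ${\mathbb{Q}}(\alpha,\beta)$ for $\eta\in\\{\alpha,\beta\\}$, and each axis satisfies $(a_{i},a_{i})=1\neq 0$, consistent with the argument above.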
Proof of Theorem 1.4. It is straightforward to check that the algebras $1A$,
$2B$, $3C(\alpha)$, $3C(\beta)$, and $3A(\alpha,\beta)$ are $2$-generated
symmetric axial algebras of Monster type $(\alpha,\beta)$ over the field
${\mathbb{Q}}(\alpha,\beta)$ and their corresponding values of
$(\lambda_{\bar{a}_{0}}(\bar{a}_{1}),\lambda_{\bar{a}_{0}}(\bar{a}_{2}))$ are
respectively
$(1,1),\>\>(0,1),\>\>\left(\frac{\beta}{2},\frac{\beta}{2}\right),\>\>\left(\frac{\alpha}{2},1\right),\mbox{
and }(q(\alpha,\beta),q(\alpha,\beta)),$
where $q(\alpha,\beta)$ is defined in Lemma 5.1. Let $V$ be an axial algebra
of Monster type $(\alpha,\beta)$ over the field ${\mathbb{Q}}(\alpha,\beta)$
generated by the two axes $\bar{a}_{0}$ and $\bar{a}_{1}$. Set
$\bar{\lambda}_{1}:=\lambda_{\bar{a}_{0}}(\bar{a}_{1}),\>\bar{\lambda}_{1}^{\prime}:=\lambda_{\bar{a}_{1}}(\bar{a}_{0}),\>\bar{\lambda}_{2}:=\lambda_{\bar{a}_{0}}(\bar{a}_{2}),\>\mbox{
and }\bar{\lambda}_{2}^{\prime}:=\lambda_{\bar{a}_{1}}(\bar{a}_{-1}).$
By Theorem 1.3, $V$ is determined, up to homomorphic images, by the quadruple
$(\bar{\lambda}_{1},\bar{\lambda}_{1}^{\prime},\bar{\lambda}_{2},\bar{\lambda}_{2}^{\prime})$,
which must be in $Z(T)$. By Lemma 5.2, we get the five quadruples
$(1,1,1,1),\>\>(0,1,0,1),\>\>\left(\frac{\beta}{2},\frac{\beta}{2},\frac{\beta}{2},\frac{\beta}{2}\right),\>\>\left(\frac{\alpha}{2},1,\frac{\alpha}{2},1\right),$
and
$(q(\alpha,\beta),q(\alpha,\beta),q(\alpha,\beta),q(\alpha,\beta)).$
By Corollary 3.8 and Proposition 4.9, $V$ is linearly spanned over
${\mathbb{Q}}(\alpha,\beta)$ by the elements $\bar{a}_{-2}$, $\bar{a}_{-1}$,
$\bar{a}_{0}$, $\bar{a}_{1}$, $\bar{a}_{2}$, $\bar{s}_{0,1}$, $\bar{s}_{0,2}$,
and $\bar{s}_{1,2}$. Define
$\bar{d}_{0}:=\bar{s}_{2,3}-\bar{s}_{1,3}^{\tau_{0}},\>\>\bar{d}_{1}:=\bar{d}_{0}^{f},\>\;\bar{d}_{2}:={\bar{d}_{0}}^{\tau_{1}},$
and, for $i\in\\{0,1,2\\}$,
$\bar{D}_{i}:={\bar{d}_{i}}^{\tau_{0}}-\bar{d}_{i}.$
By Lemma 4.6, all vectors $\bar{d}_{i},\bar{D}_{i}$ for $i\in\\{0,1,2\\}$ are
zero. For all the admissible values of
$(\bar{\lambda}_{1},\bar{\lambda}_{1}^{\prime},\bar{\lambda}_{2},\bar{\lambda}_{2}^{\prime})$,
the coefficient of $\bar{a}_{-2}$ in $\bar{D}_{0}$ is nonzero, hence we can express
$\bar{a}_{-2}$ as a linear combination of $\bar{a}_{-1}$, $\bar{a}_{0}$,
$\bar{a}_{1}$,$\bar{a}_{2}$, $\bar{s}_{0,1}$, $\bar{s}_{0,2}$, and
$\bar{s}_{1,2}$. Similarly, from the identity $\bar{d}_{0}=0$ we can express
$\bar{s}_{1,2}$ as a linear combination of $\bar{a}_{-1}$, $\bar{a}_{0}$,
$\bar{a}_{1}$,$\bar{a}_{2}$, $\bar{s}_{0,1}$, and $\bar{s}_{0,2}$.
For
$(\bar{\lambda}_{1},\bar{\lambda}_{1}^{\prime},\bar{\lambda}_{2},\bar{\lambda}_{2}^{\prime})$
in
$\left\\{\left(\frac{\beta}{2},\frac{\beta}{2},\frac{\beta}{2},\frac{\beta}{2}\right),\left(q(\alpha,\beta),q(\alpha,\beta),q(\alpha,\beta),q(\alpha,\beta)\right)\right\\},$
from identity $\bar{d}_{2}=0$ we get $\bar{a}_{-1}=\bar{a}_{2}$ and
consequently $\bar{s}_{0,2}=\bar{s}_{0,1}$. Thus in these two cases the
dimension of $V$ is at most $4$. Moreover, if
$(\bar{\lambda}_{1},\bar{\lambda}_{1}^{\prime},\bar{\lambda}_{2},\bar{\lambda}_{2}^{\prime})=(\frac{\beta}{2},\frac{\beta}{2},\frac{\beta}{2},\frac{\beta}{2})$,
then from the identity
$\bar{s}_{0,2}\bar{s}_{0,1}-\bar{s}_{0,1}\bar{s}_{0,1}=0$ we get
$\bar{s}_{0,1}=-\frac{\beta}{2}(\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2})$ and
hence the dimension is at most $3$.
For
$(\bar{\lambda}_{1},\bar{\lambda}_{1}^{\prime},\bar{\lambda}_{2},\bar{\lambda}_{2}^{\prime})$
in
$\left\\{(1,1,1,1),(0,1,0,1),\left(\frac{\alpha}{2},1,\frac{\alpha}{2},1\right)\right\\},$
from the identity $\bar{D}_{2}=0$ we get $\bar{a}_{-1}=\bar{a}_{1}$. Then,
from the identity $\bar{d}_{2}=0$ we deduce $\bar{a}_{2}=\bar{a}_{0}$ and
hence $\bar{s}_{0,2}=(1-2\beta)\bar{a}_{0}$. Thus in these cases $V$ has
dimension at most $3$. Suppose
$(\bar{\lambda}_{1},\bar{\lambda}_{1}^{\prime},\bar{\lambda}_{2},\bar{\lambda}_{2}^{\prime})=(0,1,0,1)$.
Then, from the identity $\bar{s}_{0,1}\bar{s}_{0,2}+(2\beta-1)\bar{a}_{0}=0$ we
get $\bar{s}_{0,1}=-\beta(\bar{a}_{0}+\bar{a}_{1})$ and so
$\bar{a}_{0}\bar{a}_{1}=0$. Hence in this case $V$ is isomorphic to the
algebra $2B$. Finally, suppose
$(\bar{\lambda}_{1},\bar{\lambda}_{1}^{\prime},\bar{\lambda}_{2},\bar{\lambda}_{2}^{\prime})=(1,1,1,1)$.
From the identity $\bar{s}_{0,2}-\bar{s}_{1,2}^{f}=0$ we get
$\bar{a}_{0}=\bar{a}_{1}$, that is, $V$ is the algebra $1A$.
Thus, for each
$(\bar{\lambda}_{1},\bar{\lambda}_{1}^{\prime},\bar{\lambda}_{2},\bar{\lambda}_{2}^{\prime})$
in the set
$\left\\{\left(\frac{\beta}{2},\frac{\beta}{2},\frac{\beta}{2},\frac{\beta}{2}\right),\left(q(\alpha,\beta),q(\alpha,\beta),q(\alpha,\beta),q(\alpha,\beta)\right),\left(\frac{\alpha}{2},1,\frac{\alpha}{2},1\right)\right\\},$
we get that $V$ satisfies the same multiplication table as the algebras
$3C(\beta)$, $3C(\alpha)$ and $3A(\alpha,\beta)$ respectively and has at most
the same dimension. Therefore, to conclude the proof, we need only to show
that the algebras $3C(\alpha)$, $3C(\beta)$ and $3A(\alpha,\beta)$ are simple.
This follows from Lemma 5.3. $\square$
As a corollary of Theorem 1.3, we can prove now Theorem 1.6.
Proof of Theorem 1.6. Let ${\mathbb{F}}$ be a field of characteristic zero.
Then ${\mathbb{F}}$ contains ${\mathbb{Q}}$. The resultant with respect to $z$
of the polynomials in $T_{s}$ has degree $9$ and splits in ${\mathbb{Q}}[x]$
as the product of a constant and the linear factors
$x,\>x-1,\>x-\frac{1}{8},\>\left(x-\frac{1}{64}\right)^{2},\>x-\frac{13}{2^{8}},\>x-\frac{1}{32},\>x-\frac{3}{2^{7}},\>x-\frac{5}{2^{8}}.$
In ${\mathbb{Q}}^{4}$, the set $Z(T)$ consists of the $9$ points
$(1,1,1,1),\>\>(0,0,1,1),\>\>(\frac{1}{8},\frac{1}{8},1,1),\>\>(\frac{1}{64},\frac{1}{64},\frac{1}{64},\frac{1}{64}),\>\>(\frac{13}{2^{8}},\frac{13}{2^{8}},\frac{13}{2^{8}},\frac{13}{2^{8}}),$
$(\frac{1}{32},\frac{1}{32},0,0),\>\>(\frac{1}{64},\frac{1}{64},\frac{1}{8},\frac{1}{8}),\>\>(\frac{3}{2^{7}},\frac{3}{2^{7}},\frac{3}{2^{7}},\frac{3}{2^{7}}),\>\>(\frac{5}{2^{8}},\frac{5}{2^{8}},\frac{13}{2^{8}},\frac{13}{2^{8}}).$
By [11, 7], each quadruple of the above list corresponds to a Norton-Sakuma
algebra. By Corollary 4.13 in [9], every Norton-Sakuma algebra is simple,
provided it is not of type $2B$. Hence, the claim follows from Theorem 1.3,
once we prove that in each case the dimension of $V$ is at most equal to the
dimension of the corresponding Norton-Sakuma algebra. This can be done, by
Remark 4.10, using [6]. $\square$
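The elimination step used in this proof — taking resultants to project $Z(T)$ onto a single coordinate and factoring the result over ${\mathbb{Q}}$ — can be reproduced in any computer algebra system. A minimal sketch with SymPy follows; the polynomials here are hypothetical, chosen only to illustrate the computation, and are not the actual system $T_{s}$.

```python
# Illustrative only: f and g are hypothetical polynomials,
# NOT the actual system T_s treated in the paper.
import sympy as sp

x, z = sp.symbols('x z')
f = z**2 - x
g = z**3 - 8*x*z + x

# Eliminate z: Res_z(f, g) vanishes exactly at those x for which
# f and g share a common root in z.
res = sp.resultant(f, g, z)
print(sp.factor(res))
```

Factoring the resultant into linear factors over ${\mathbb{Q}}[x]$, as in the proof above, then identifies the admissible values of the eliminated variable.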
## References
* [1] Decker, W.; Greuel, G.-M.; Pfister, G.; Schönemann, H.: Singular 4-1-1 — A computer algebra system for polynomial computations. http://www.singular.uni-kl.de (2018).
* [2] De Medts, T., Peacock S.F., Shpectorov, S. and van Couwenberghe M.: Decomposition algebras and axial algebras, arXiv:1905.03481v1
* [3] Franchi, C., Mainardis M., Shpectorov, S.: $2$-generated axial algebras of Monster type $(2\beta,\beta)$, in preparation.
* [4] Franchi, C., Mainardis M., Shpectorov, S.: An infinite dimensional $2$-generated axial algebras of Monster type, arXiv:2007.02430
* [5] Galt, A., Joshi, V., Mamontov, A., Shpectorov, S., Staroletov, A.: Double axes and subalgebras of Monster type in Matsuo algebras, arXiv:2004.11180
* [6] GAP – Groups, Algorithms, and Programming, Version 4.8.6, The GAP group, 2016.
* [7] Hall, J., Rehren, F., Shpectorov, S.: Universal Axial Algebras and a Theorem of Sakuma, J. Algebra 421 (2015), 394-424.
* [8] Hall, J., Rehren, F., Shpectorov, S.: Primitive axial algebras of Jordan type, J. Algebra 437 (2015), 79-115.
* [9] Khasraw, S.M.S., McInroy, J., Shpectorov, S.: On the structure of axial algebras, Trans. Amer. Math. Soc., 373 (2020), 2135-2156
* [10] Ivanov, A. A.: The Monster group and Majorana involutions. Cambridge Tracts in Mathematics 176, Cambridge Univ. Press, Cambridge (2009)
* [11] Ivanov, A. A., Pasechnik, D. V., Seress, Á., Shpectorov, S.: Majorana representations of the symmetric group of degree $4$, J. Algebra 324 (2010), 2432-2463
* [12] Miyamoto, M.: Griess algebras and conformal vectors in vertex operator algebras, J. Algebra 179 (1996), 523-548
* [13] Norton, S. P.: The uniqueness of the Fischer-Griess Monster. In: McKay, J. (ed.) Finite groups-coming of age (Montreal, Que.,1982). Contemp. Math. 45, pp. 271–285. AMS, Providence, RI (1985)
* [14] Norton, S. P.: The Monster algebra: some new formulae. In Moonshine, the Monster and related topics (South Hadley, Ma., 1994), Contemp. Math. 193, pp. 297-306. AMS, Providence, RI (1996)
* [15] Rehren, F.: Axial algebras, PhD thesis, University of Birmingham, 2015.
* [16] Rehren, F.: Generalized dihedral subalgebras from the Monster, Trans. Amer. Math. Soc. 369 (2017), 6953-6986.
* [17] Sakuma, S.: $6$-transposition property of $\tau$-involutions of vertex operator algebras. Int. Math. Res. Not. (2007). doi:10.1093/imrn/rmn030
* [18] Yabe, T.: On the classification of $2$-generated axial algebras of Majorana type, arXiv:2008.01871
* [19] Whybrow, M.: Graded $2$-generated axial algebras, arXiv:2005.03577
# Terminus: A Versatile Simulator for Space-based Telescopes
Billy Edwards Blue Skies Space Ltd., 69 Wilson Street, London, EC2A 2BB, UK
Department of Physics and Astronomy, University College London, Gower Street,
London, WC1E 6BT, UK Ian Stotesbury Blue Skies Space Ltd., 69 Wilson Street,
London, EC2A 2BB, UK
###### Abstract
Space-based telescopes offer unparalleled opportunities for characterising
exoplanets, Solar System bodies and stellar objects. However, observatories in
low Earth orbits (e.g. Hubble, CHEOPS, Twinkle and an ever increasing number
of cubesats) cannot always be continuously pointed at a target due to Earth
obscuration. For exoplanet observations consisting of transit, or eclipse,
spectroscopy this causes gaps in the light curve, which reduces the
information content and can diminish the science return of the observation.
Terminus, a time-domain simulator, has been developed to model the occurrence
of these gaps to predict the potential impact on future observations. The
simulator is capable of radiometrically modelling exoplanet observations as
well as producing light curves and spectra. Here, Terminus is baselined on the
Twinkle mission but the model can be adapted for any space-based telescope and
is especially applicable to those in a low-Earth orbit. Terminus also has the
capability to model observations of other targets such as asteroids or brown
dwarfs.
## 1 Introduction
To date, several thousand extra-solar planets have been discovered. With many
of these now being detected around bright stars, and with many more to come
from missions such as the Transiting Exoplanet Survey Satellite (TESS, Ricker
et al. (2014); Barclay et al. (2018)), the characterisation of these worlds
has begun and will accelerate over the next decade. Ground-based instruments
have detected absorption and emission lines in exoplanet atmospheres via high
resolution spectra (e.g. Hoeijmakers et al., 2018; Ehrenreich et al., 2020)
while the Hubble and Spitzer space telescopes have used lower resolution
spectroscopy or photometry to probe the chemical abundances and thermal
properties of tens of planets (e.g. Sing et al., 2016; Iyer et al., 2016;
Tsiaras et al., 2018; Garhart et al., 2020).
In the coming years several missions, some of which are specifically designed
for exoplanet research, will be launched to provide further characterisation.
While the James Webb Space Telescope (JWST, Greene et al. (2016)) and Ariel
(Tinetti et al., 2018) will be located at L2, observatories such as the
CHaracterising ExOPlanets Satellite (CHEOPS, Benz et al. (2020)), which was
launched in December 2019, and Twinkle (Edwards et al., 2019d) will operate
from a low Earth orbit and as such will have to contend with Earth
obscuration.
The orbit will cause gaps in some of the observations obtained by these
missions which will impact their information content due to parts of the
transit light curve being missed, decreasing the precision of the recovered
transit parameters. Additionally, the thermal environment of a low Earth orbit
and the breaks in observing can lead to recurring systematic trends such as
ramps in the recorded flux due to thermal breathing of the telescope and
detector persistence. Such gaps and systematics are experienced in all
exoplanet observations with Hubble (e.g. Deming et al., 2013; Kreidberg et
al., 2014). It should be noted, however, that Hubble is situated in an
equatorial orbit which is significantly different to the sun-synchronous
orbits of CHEOPS and Twinkle. Sun-synchronous orbits allow for certain areas
of sky, specifically those in the anti-sun direction, to be observed for
longer periods without interruption. Additionally, the thermal environment is
more stable due to the smaller variations in the spacecraft-Earth-Sun
geometry. Previous missions to have operated in sun-synchronous orbits include
the Convection, Rotation and planetary Transits (CoRoT, Bordé et al. (2003)),
Akari (Murakami et al., 2007) and WISE/NEOWISE (Wright et al., 2010; Mainzer
et al., 2014). Due to its Earth-trailing orbit, Spitzer (Werner et al., 2004)
did not experience gaps in its observations.
When designing future instrumentation, understanding the expected performance
for the envisioned science cases is paramount. Static models, often referred
to as radiometric or sensitivity models, are suitable for studying the
instrument performance over a wide parameter space (i.e. for many different
targets) as they are generally quick to run and require relatively minimal
information about the instrumentation. Radiometric models are a useful way to
understand the capabilities of upcoming exoplanet observatories and have been
widely used. The ESA Radiometric Model (ERM, Puig et al. (2015)) was used to
simulate the performance of the ESA M3 candidate EChO (Exoplanet
Characterisation Observatory, Tinetti et al. (2012)) and was subsequently used
for Ariel (Puig et al., 2018). A newer, python-based version, ArielRad, was
recently developed (Mugnai et al., 2020) while PandExo has been created for
simulating exoplanet observations with Hubble and JWST (Batalha et al., 2017)
and the NIRSpec Exoplanet Exposure Time Calculator (NEETC) was built
specifically for modelling transit and eclipse spectroscopy with JWST’s
NIRSpec instrument (Nielsen et al., 2016). These usually account for
efficiency of the optics and simple noise contributions such as photon, dark
current, readout and instrument/telescope emission.
More complex effects, such as jitter, stellar variability and spots, and
correlated noise sources, require models which have a time-domain aspect. These
tools usually also produce simulated detector images which can act as
realistic data products for the mission, accounting for detector effects such
as correlated noise between pixels or inter- and intra-pixel variations. For
example, ExoSim is a numerical end-to-end simulator of transit spectroscopy
which is currently being utilised for the Ariel mission (Pascale et al., 2015;
Sarkar et al., 2016, 2017). The tool has been created to explore a variety of
signal and noise issues that occur in, and may bias, transit spectroscopy
observations, including instrument systematics and the other effects
previously mentioned. By producing realistic raw data products, the outputs
can also be fed into data reduction pipelines to explore, and remove,
potential biases within them as well as develop new reduction and data
correction methods. End-to-end simulators such as ExoSim are therefore
powerful tools for understanding the capabilities of an instrument design.
Additional time-domain simulators of note include ExoNoodle (Martin-Lagarde et
al., 2019), which utilises MIRISim (Geers et al., 2019) to model time-series
with the JWST MIRI instrument, Wayne, which models Hubble spatial scans of
exoplanets (Varley et al., 2017) and the simulators developed for the CHEOPS
and Colorado Ultraviolet Transit Experiment (CUTE) missions (Futyan et al.,
2020; Sreejith et al., 2019). While the complexity of these types of tools can
be hugely advantageous in understanding intricate effects it can also be their
biggest weakness; such sophisticated models require a great deal of time to
develop and run as well as an excellent understanding of all parts of the
instrument design. They can therefore only be applied to highly refined
designs and run for a small number of cases. The solution to the issue of
complexity versus efficiency is to use both types of models. For Ariel, ExoSim
is used to validate the outcomes of ArielRad for selected, representative
targets. ArielRad is then used as the workhorse for modelling the capability
of thousands of targets due to its superior speed (Edwards et al., 2019b;
Mugnai et al., 2020).
Here, we describe the Terminus tool which has been developed to model transit
(and eclipse) observations with Twinkle, to explore the impact of Earth
obscuration and allow for efficient scheduling methods to be developed to
minimise this impact. The simulator, however, is not mission specific and
could be adapted for other observatories, with a particular applicability for
satellites in low Earth orbit.
The Twinkle Space Mission (http://www.twinkle-spacemission.co.uk) is a new,
fast-track satellite designed to begin science operations in 2024. It has been
conceived for providing faster access to spectroscopic data from exoplanet
atmospheres and Solar System bodies. Twinkle is equipped with a visible and
infrared spectrometer which simultaneously covers 0.5-4.5 $\mu$m with a
resolving power of R$\sim$20-70 across this range. Twinkle has been designed
with a telescope aperture of 0.45 m. Twinkle’s field of regard is a cone with
an opening angle of 40$^{\circ}$, centred on the anti-sun vector (Savini et al., 2016).
Previously the ESA Radiometric Model (ERM, Puig et al. (2015, 2018)), which
assumes full light curves are observed, has been used to model the
capabilities of Twinkle (see Edwards et al. (2019d)). Terminus includes a
radiometric model, built upon the concepts of the ERM, but it has been
upgraded to also have the capacity to simulate light curves. The code also
contains the ability to model the orbit of a spacecraft, thus allowing for the
availability of targets to be understood given solar, lunar and Earth
exclusion angles. The capability to model these gaps is not available in other
tools such as ArielRad or ExoSim and is one of the driving factors behind the
creation of Terminus. Additionally, the Twinkle mission will not be limited to
exoplanet characterisation and will also observe solar system bodies, brown
dwarfs and other astrophysical objects. As such, Terminus builds upon the work
of Edwards et al. (2019a, c) and can be used to calculate the predicted data
quality and observational periods for these objects, another feature which is
not present in other similar codes.
In this work we first describe the portion of the simulator which calculates
the target signal and noise contributions before comparing the outputs of
simulated light curve fitting to radiometric estimates. Next the orbital
module is detailed and validated against outputs from an orbital dynamics
software. Using this we explore the effect of gaps for observations of HD
209458 b and WASP-127 b with Twinkle. Finally, we discuss Twinkle’s ability to
observe asteroids by focusing on potential observations of the Near-Earth
Object (NEO) 99942 Apophis (2004 MN4).
## 2 Simulator Structure
Terminus has been constructed in Python and has several different stages. It
can be operated as a simple radiometric model, used to calculate the expected
signal-to-noise ratio (SNR) on a given number of atmospheric scale heights, or
be utilised to create simulated light curves. An instrument file is loaded
(which includes parameters such as telescope aperture, quantum efficiency,
etc.) and the stellar flux on the detector is calculated. PSFs can be imported from
external sources. In Sections 2.1 to 2.4 we discuss the structure of the
simulator and an overview is given in Figure 1.
### 2.1 Target Parameters
A catalogue of planets has been created following the methodology of Edwards
et al. (2019b), with data taken from the NASA Exoplanet Archive (Akeson et
al., 2013).
Figure 1: Overview of the simulator structure. Generic parts are represented
by blue shapes while red indicates functions which are exoplanet specific.
Dotted lines indicate portions which are not compulsory.
### 2.2 Radiometric Model
Figure 2: Example detector images generated by Terminus for Twinkle Ch0 (top)
and Ch1 (bottom). These are used purely for the calculation of the saturation
time for each target.
The stellar flux at Earth is calculated using spectral energy distributions
(SEDs) from the PHOENIX BT-Settl models by Allard et al. (2012); Husser et al.
(2013); Baraffe et al. (2015). The spectral irradiance from a host star at the
aperture of the telescope is given by:
$E_{S}(\lambda)=S_{S}(\lambda)\left(\frac{R_{*}}{d}\right)^{2}$ (1)
where $S_{S}(\lambda)$ is the star spectral irradiance from the PHOENIX
catalogue (W m$^{-2}$ $\mu$m$^{-1}$) and $d$ is the distance to the star. The effective
collecting area of the telescope is then accounted for before the flux is
integrated into the spectral bins of the instrumentation to give a photon flux
per bin. The signal is then propagated through the instrument to the detector
focal planes, taking into account the transmission of each optical component
and diffracting element as well as the quantum efficiency of the detector. The
final signal, in electrons per second, from the star in each spectral bin is
determined as a 1D flux rate before being convolved with 2D point spread
functions (PSFs) and the instrument dispersion to create a detector image. The
detector image, like the one shown in Figure 2, is utilised to calculate the
saturation time for the target while the 1D flux rate is used for all other
calculations. A variety of sources of noise are accounted for in each of the
models. In addition to photon noise, the simulator calculates the
contributions from dark current, instrument and telescope emission, zodiacal
background emission, and readout noise. Additionally, photometric
uncertainties due to spacecraft jitter can be imported and interpolated from
time-domain simulators such as ExoSim (see Section 2.2.3). Some of these noise
sources are wavelength dependent (e.g. zodiacal background) while others are
not (e.g. read noise).
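The signal chain just described can be condensed into a short function. The sketch below is our own illustration, not the Terminus code: the flat throughput and quantum-efficiency defaults are placeholder assumptions standing in for the per-component optical transmission and detector response.

```python
import numpy as np

H_PLANCK = 6.62607015e-34   # J s
C_LIGHT = 2.99792458e8      # m/s

def electrons_per_second(wl_um, sed_w_m2_um, r_star_m, d_m,
                         aperture_m, throughput=0.55, qe=0.7):
    """Convert a stellar SED at the source into detected e-/s per spectral bin.

    wl_um        : bin-centre wavelengths [micron]
    sed_w_m2_um  : surface spectral irradiance S_S(lambda) [W m^-2 um^-1]
    throughput, qe : placeholder optical transmission and detector QE
    """
    wl_m = wl_um * 1e-6
    bin_width_um = np.gradient(wl_um)                   # spectral bin widths
    irradiance = sed_w_m2_um * (r_star_m / d_m) ** 2    # Eq. (1)
    power = irradiance * bin_width_um * np.pi * (aperture_m / 2) ** 2  # W/bin
    photon_energy = H_PLANCK * C_LIGHT / wl_m           # J per photon
    return power / photon_energy * throughput * qe      # e-/s per bin
```

The returned 1D flux rate is what would subsequently be convolved with the PSFs and dispersion to build a detector image.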
#### 2.2.1 Calculating Noise Per Exposure
In describing the acquisition of data we use the nomenclature of Rauscher et
al. (2007) in which a frame refers to the clocking and digitisation of pixels
within a specified area of the detector known as a sub-array. The size of sub-
array dictates the time required for it to read out. Here, given the footprint
of Twinkle’s spectrometer on the detector, we assume a fastest frame time of
0.5 seconds which is similar to that for the 1024A sub-array on JWST NIRSpec
(0.45 seconds, Pontoppidan et al. (2016)). A collection of frames then forms a
group, although here, as with JWST time series, the number of frames is set to
one (i.e. $t_g$ = $t_f$). A collection of non-destructively read groups, along with
a detector reset, forms an integration. Here, the detector reset time after a
destructive read is also assumed to be equivalent to the frame time. As the
duration of a transit/eclipse is generally orders of magnitude longer than the
saturation time of the detector, many integrations will be taken during an
observation. The total noise variance per integration, $\sigma^{2}_{\rm exp}$,
is given by:
$\sigma^{2}_{exp}=\frac{12(n_{g}-1)}{n_{g}(n_{g}+1)}n_{pix}\sigma^{2}_{read}+\frac{6(n_{g}^{2}+1)}{5n_{g}(n_{g}+1)}(n_{g}-1)t_{g}i_{total}$
(2)
from Rauscher et al. (2007), where $n_g$ is the number of groups (non-destructive
reads) per exposure, $\sigma_{\rm read}$ is the read noise in e-/pix rms, $n_{\rm pix}$
is the number of pixels in the spectral bin, $t_g$ is the time for a single
non-destructive group read, and $i_{\rm total}$ is the total flux in e-/s. For JWST
observations, the standard practice for exoplanet observations is to maximise
the number of groups (Batalha et al., 2017). Meanwhile, Ariel will use a
variety of readout modes, depending upon the brightness of the target, with
correlated double sampling (CDS, $n_g$ = 2) for brighter targets and
multiple up-the-ramp reads for fainter targets (Focardi et al., 2018).
Collecting several up-the-ramp reads can be useful in correcting for cosmic
ray impacts while also reducing the read noise. Additional reads, however,
increase the photon noise contribution and thus Terminus varies the number of
up-the-ramp reads according to the brightness of the target to attempt to
optimise noise. In each case, the maximum number of up-the-ramp reads is
calculated and Equation 2 is used to select the number of reads which yields
the lowest noise per transit observation (using Equations 2-7). $n_{\rm pix}$ can be
selected by specifying a required encircled energy but, when importing jitter
simulations from ExoSim, $n_{\rm pix}$ is set to the values used in these simulations
as outlined in Section 2.2.3. In Equation 2, $i_{\rm total}$ is defined as:
$i_{total}=i_{sig}+n_{pix}(i_{dark}+i_{bdg})$ (3)
where $i_{\rm sig}$ is the total signal from the star in the spectral bin (e-/s) while
$i_{\rm dark}$ and $i_{\rm bdg}$ are the dark current and background signals respectively (in
e-/s/pix). Currently, $i_{\rm bdg}$ is assumed to be from the emission of
optical elements and Zodiacal emission, as detailed in Section 2.2.2, but
future updates will include contributions from nearby stars. For exoplanet
spectroscopy, the total observational time is generally quantised in terms of
the duration of a transit/eclipse event, T. The model assumes the time spent
during ingress ($T_{12}$) and egress ($T_{34}$) is negligible compared to the primary transit
time ($T_{23}$) and thus $T$ = $T_{23}$ = $T_{14}$. The transit time can be calculated from:
$T_{14}=\sqrt{1-b^{2}}\frac{R_{*}P}{\pi a}$ (4)
for a given system, where $P$ is the orbital period, $a$ the semi-major axis and $b$ the impact parameter. The fractional noise on the
star signal over one transit duration is then given by:
$\sigma_{Star}=\frac{1}{\sqrt{n_{int}}}\frac{\sigma_{exp}}{i_{sig}}$ (5)
where $n_{\rm int}$ is the number of integrations over one transit duration, which is
calculated from:
$n_{int}=\frac{T_{14}}{t_{r}+t_{g}n_{g}}$ (6)
where $t_r$ is the time taken to reset the detector. As a baseline we take $t_r$ to
be equivalent to the frame time, $t_f$ (0.5 seconds). As noted by Batalha et al.
(2017), if $t_g$ = $t_r$ = $t_f$ then the duty cycle (i.e. the efficiency) is given by
$(n_g-1)/(n_g+1)$.
The measurement of the transit depth is differential and thus the error (i.e.
the 1$\sigma$ uncertainty) on the transit depth is given by:
$\sigma_{TD}=\sigma_{Star}\sqrt{1+\frac{1}{n_{{}_{T_{14}}}}}$ (7)
where $n_{T_{14}}$ is the number of transit durations observed out
of transit (i.e. the baseline). For all simulations presented here,
$n_{T_{14}}$ is set to 2 (i.e. $1\times T_{14}$ is spent both before and after the main
observation). The error is calculated in this way for every spectral bin.
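Equations 2-7 can be assembled into a compact noise estimator. The sketch below is an illustration of the formulae rather than the Terminus API; the function names and the example inputs in the test are our own assumptions.

```python
import numpy as np

def noise_per_integration(n_g, t_g, n_pix, sigma_read, i_total):
    """Total noise variance per integration (Eq. 2, Rauscher et al. 2007)."""
    read_var = 12.0 * (n_g - 1) / (n_g * (n_g + 1)) * n_pix * sigma_read**2
    photon_var = (6.0 * (n_g**2 + 1) / (5.0 * n_g * (n_g + 1))
                  * (n_g - 1) * t_g * i_total)
    return read_var + photon_var

def transit_depth_error(n_g, t_g, t_r, n_pix, sigma_read, i_sig, i_total,
                        t14_s, n_t14=2):
    """Fractional 1-sigma transit-depth uncertainty (Eqs. 5-7)."""
    sigma_exp = np.sqrt(noise_per_integration(n_g, t_g, n_pix, sigma_read,
                                              i_total))
    n_int = t14_s / (t_r + t_g * n_g)                   # Eq. (6)
    sigma_star = sigma_exp / (i_sig * np.sqrt(n_int))   # Eq. (5)
    return sigma_star * np.sqrt(1.0 + 1.0 / n_t14)      # Eq. (7)
```

Scanning `n_g` over the admissible range and keeping the value that minimises `transit_depth_error` reproduces the read-optimisation step described above.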
#### 2.2.2 Zodiacal Emission
We calculate the contribution of zodiacal emission using the prescription from
Pascale et al. (2015) and Sarkar et al. (2020a). The signal is composed of two
black bodies, with associated coefficients, to model the reflected and emitted
components. The spectral brightness is given by:
$Zodi(\lambda)=\beta\left(3.5\times 10^{-14}B_{\lambda}(5500\,{\rm K})+3.58\times 10^{-8}B_{\lambda}(270\,{\rm K})\right)$ (8)
where the coefficient $\beta$ modifies the intensity of the zodiacal light
based upon the declination of the target. At the ecliptic poles, $\beta$ = 1
provides a good fit to the intensity shown in Leinert et al. (1998). Sarkar et
al. (2020a) fitted a polynomial to data from this study, along with zodiacal
intensities from James et al. (1997); Tsumura et al. (2010), to provide a
measure of the increase in intensity at different latitudes. If $d$ is the
ecliptic latitude (in degrees), then the coefficient is given by:
$\beta=-0.22968868\zeta^{7}+1.12162927\zeta^{6}-1.72338015\zeta^{5}+1.13119022\zeta^{4}-0.95684987\zeta^{3}+0.2199208\zeta^{2}-0.05989941\zeta+2.57035947$ (9)
where $\zeta = \log_{10}(d+1)$. This relation falls below 1 at $d = 57.355^{\circ}$
and so $\beta$ is fixed to 1 for latitudes greater than this (Sarkar et al.,
2020a).
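For reference, the two-component model of Equations 8 and 9 is straightforward to implement. The sketch below is our own illustration; the clamping of $\beta$ to 1 follows the text above.

```python
import numpy as np

H = 6.62607015e-34     # J s
C = 2.99792458e8       # m/s
KB = 1.380649e-23      # J/K

def planck(wl_m, temp_k):
    """Black-body spectral radiance B_lambda [W m^-2 sr^-1 m^-1]."""
    return 2 * H * C**2 / wl_m**5 / (np.exp(H * C / (wl_m * KB * temp_k)) - 1)

def zodi_brightness(wl_um, ecl_lat_deg):
    """Two-component zodiacal model of Eqs. (8)-(9)."""
    zeta = np.log10(abs(ecl_lat_deg) + 1)
    beta = (-0.22968868 * zeta**7 + 1.12162927 * zeta**6
            - 1.72338015 * zeta**5 + 1.13119022 * zeta**4
            - 0.95684987 * zeta**3 + 0.2199208 * zeta**2
            - 0.05989941 * zeta + 2.57035947)
    beta = max(beta, 1.0)   # beta is fixed to 1 above |lat| ~ 57.4 deg
    wl_m = wl_um * 1e-6
    return beta * (3.5e-14 * planck(wl_m, 5500.0)
                   + 3.58e-8 * planck(wl_m, 270.0))
```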
#### 2.2.3 Pointing Jitter
Directly modelling uncertainties due to spacecraft jitter is beyond the
capabilities of Terminus. Hence, ExoSim has been adapted to the Twinkle design
to study the effects of pointing jitter on science performance. ExoSim, first
conceived for EChO Pascale et al. (2015) and now used for the Ariel mission,
has previously been adapted for simulating observations with JWST (Sarkar et
al., 2020a) and the EXoplanet Climate Infrared TElescope (EXCITE, Tucker et
al. (2018); Nagler et al. (2019)). The modified version, christened
TwinkleSim, was run for a number of stellar types ($T_S$ = 3000, 5000, 6100 K)
and magnitudes ($K_S$ = 6, 9, 12) and the uncertainty due to jitter determined in
each case. Twinkle’s baseline pointing solution is based upon a high
performance gyroscope and a Power Spectral Density (PSD) was supplied by the
engineering team at the satellite manufacturer, Airbus. For each simulation, a
variety of different extraction apertures were trialled with larger apertures
reducing the jitter by ensuring clipping did not occur but increasing the
noise from other sources due to sampling more pixels (e.g. dark current).
After trialling a number of solutions, the aperture was set to be rectangular
with a width of 2.44 times that of the Airy disk at the longest wavelength of
each channel. In terms of pixels, this is equivalent to 12 and 22 in the
spatial direction, for Ch0 and Ch1 respectively, while the spectral pixels per
bin are set to 6 and 7.
When combining observations, time-correlated noise may integrate down more
slowly than uncorrelated noise, which is assumed to decrease with the square
root of the number of observations, and thus can contribute more heavily to
the final noise budget. To account for this, Allan deviation plots were
produced using TwinkleSim. A power-law trend can be fitted to these and used to
derive a wavelength-dependent fractional noise term that jitter induces on the
photon noise. For more details on this process, we refer the reader to Sarkar
et al. (2020a).
#### 2.2.4 Transit Signal
During transit, the critical signal is the fraction of stellar light that
passes through the atmosphere of the exoplanet. This signal is determined by
the ratio of the projected area of the atmosphere to that of the stellar disk
and thus is given by:
$\frac{2R_{p}\Delta z(\lambda)}{R_{*}^{2}}$ (10)
where $\Delta z$ is the height of the atmosphere. The size of the atmosphere
is taken to be equivalent to the height above the 10 bar radius, at which
point the atmosphere is assumed to be opaque. The pressure of an atmosphere at
a height, z, is given by:
$p(z)=p_{0}e^{\frac{-z}{H}}$ (11)
where H is the scale height, the distance over which the pressure falls by
1/e. In the literature, 5 scale heights are often assumed for $\Delta z$ for a
clear atmosphere (at which point one is above 99.5$\%$ of the atmosphere)
while 3 would be more reasonable in the moderately cloudy case (Puig et al.,
2015; Tinetti et al., 2018; Edwards et al., 2019b). The scale height of the
atmosphere is calculated from:
$H=\frac{kT_{p}N_{A}}{\mu g}$ (12)
where k is the Boltzmann constant, $N_{A}$ is Avogadro’s number, $\mu$ is the
mean molecular weight of the atmosphere and g is the surface gravity
determined from:
$g=\frac{GM_{p}}{R_{p}^{2}}$ (13)
where $M_{p}$ and $R_{p}$ are the mass and radius of the planet and G is the
gravitational constant.
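Combining Equations 10-13 gives the expected strength of the transmission signal. The sketch below is illustrative; Equation 12 is written here in its per-molecule form, $H = kT_p/(\mu m_u g)$, which is equivalent to the molar form in the text, and the test values are assumed hot-Jupiter parameters.

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
K_B = 1.380649e-23     # J/K
AMU = 1.66053907e-27   # kg (atomic mass unit)

def atmospheric_signal(r_p_m, m_p_kg, r_star_m, t_p_k,
                       mu_amu=2.3, n_scale=5):
    """Transit signal of Eq. (10) for an atmosphere n_scale scale heights deep.

    mu_amu : mean molecular weight (2.3 for a H/He-dominated atmosphere)
    """
    g = G * m_p_kg / r_p_m**2                 # surface gravity, Eq. (13)
    h = K_B * t_p_k / (mu_amu * AMU * g)      # scale height, Eq. (12)
    dz = n_scale * h                          # atmosphere height
    return 2 * r_p_m * dz / r_star_m**2       # Eq. (10)
```

For a typical clear hot Jupiter this yields a signal of a few hundred parts per million.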
#### 2.2.5 Eclipse Signal
During eclipse, the signal is calculated from two sources: reflected and
emitted light from the planet. Emission from the exoplanet day-side is
modelled as a black body and the wavelength-dependent surface flux density is
given by:
$S_{p}(\lambda,T_{p})=\pi\frac{2hc^{2}}{\lambda^{5}}\frac{1}{e^{hc/(\lambda kT_{p})}-1}$ (14)
where $T_{p}$ is the dayside temperature of the planet. The product of the
black body emission and the solid angle subtended by the exoplanet at the
telescope gives the spectral radiance at the aperture:
$E_{p}^{Emission}(\lambda,T_{p})=S_{p}(\lambda,T_{p})\left(\frac{R_{p}}{d}\right)^{2}$
(15)
in $Wm^{-2}\mu m^{-1}$. Additionally, a portion of the stellar light incident
on the exoplanet is reflected. The strength of this reflected signal is
strongly dependent on wavelength and can be significant at visible
wavelengths. The flux of reflected light at the telescope aperture is
calculated from:
$E_{p}^{Reflection}(\lambda)=\alpha_{geom}S_{s}(\lambda)\left(\frac{R_{*}}{d}\right)^{2}\left(\frac{R_{p}}{a}\right)^{2}$
(16)
where $S_{S}(\lambda)$ is the star spectral irradiance, $a$ is the star-planet
distance (i.e. the planet’s semi-major axis) and $\alpha_{geom}$ is the
geometric albedo, which is assumed to be that of a Lambertian sphere
($\frac{2}{3}\alpha_{bond}$), wavelength-independent and at a phase of $\phi$
= 1 (i.e. full disk illumination).
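Dividing Equations 15 and 16 by the stellar flux at the aperture (Equation 1) gives the eclipse depth. The sketch below folds both components together; the stellar SED value and geometric albedo used in the test are placeholder assumptions, not measured quantities.

```python
import numpy as np

H = 6.62607015e-34; C = 2.99792458e8; KB = 1.380649e-23

def eclipse_depth(wl_um, s_star_w_m2_um, r_p_m, r_star_m, a_m,
                  t_day_k, albedo_geom=0.2):
    """Day-side emission plus reflection as a fraction of the stellar flux."""
    wl_m = wl_um * 1e-6
    # Eq. (14): black-body surface flux density of the planet [W m^-2 m^-1]
    s_p = np.pi * 2 * H * C**2 / wl_m**5 \
        / (np.exp(H * C / (wl_m * KB * t_day_k)) - 1)
    s_p_um = s_p * 1e-6                       # convert to W m^-2 um^-1
    # Eq. (15) over Eq. (1): the distance factors cancel
    emission = s_p_um / s_star_w_m2_um * (r_p_m / r_star_m) ** 2
    # Eq. (16) over Eq. (1): only the (R_p/a)^2 dilution survives
    reflection = albedo_geom * (r_p_m / a_m) ** 2
    return emission + reflection
```

Because the $(R_{*}/d)^{2}$ factors cancel in the ratio, the depth depends only on the planet-to-star radius ratio, the semi-major axis, the albedo and the day-side temperature.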
#### 2.2.6 Signal-to-Noise Ratio
From these equations, and the error on the transit/eclipse depth, the signal-
to-noise (SNR) on the atmospheric signal can be obtained for a single
observation. Assuming the SNR increase with the square root of the number of
observations, the SNR after multiple transits/eclipses is given by:
$SNR_{N}=\sqrt{N}SNR_{1}$ (17)
where $SNR_1$ is the SNR of a single observation and $N$ is the total number of
observations. By setting a requirement on the SNR ($SNR_R$), the number of
observations needed for a given planet can be ascertained from:
$N=\left(\frac{SNR_{R}}{SNR_{1}}\right)^{2}$ (18)
The current requirements are set to a median SNR $>$ 7 across 1.0-4.5 $\mu$m
for transit observations and 1.5-4.5 $\mu$m for eclipse measurements. In the
former of these the shorter wavelengths are excluded to avoid biasing against
planets around cooler stars while the latter is chosen as planetary emission,
even for relatively hot planets ($\sim$1500 K), is low at wavelengths shorter
than 1.5 $\mu$m. Using Equation 18, one can then determine the type(s) of
observation the planet is suited to.
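Equation 18 reduces to a one-line scheduling helper; we round up here, since only whole transits can be observed.

```python
import math

def transits_required(snr_single, snr_requirement=7.0):
    """Number of stacked transits/eclipses needed to meet the SNR
    requirement (Eq. 18), rounded up to a whole observation."""
    return math.ceil((snr_requirement / snr_single) ** 2)
```

For example, a planet yielding a single-transit SNR of 3.5 needs four stacked transits to reach the median SNR > 7 requirement.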
### 2.3 Atmospheric Modelling
To simulate transmission (and emission) forward models, the open-source
exoplanet atmospheric retrieval framework TauREx 3 (Al-Refaie et al., 2019;
Waldmann et al., 2015a, b) is used. Within TauREx 3, cross-section opacities
are calculated from the ExoMol database (Yurchenko & Tennyson, 2012) where
available and from HITEMP (Rothman & Gordon, 2014) and HITRAN (Gordon et al.,
2016) otherwise. The H- ion is included using the procedure outlined in John
(1988); Edwards et al. (2020). For atmospheric chemistry, two options are
available within the Terminus infrastructure: chemical equilibrium, which is
achieved using the ACE code (Venot et al., 2012; Agúndez et al., 2012) and
takes the C/O ratio and metallicity as input, or free-chemistry which allows
the user to choose molecules and their abundances. Alternatively, a high-
resolution spectrum produced by another radiative transfer code can be read in
or, if a retrieval on actual data has been performed, the atmosphere can be
extrapolated from a TauREx 3 hdf5 file. Once the forward model is created at
high resolution, it is then binned to the instrument resolution using TauREx
3’s integrated binning function.
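TauREx 3's own binning function is used in practice; purely to illustrate the operation, the sketch below averages a high-resolution spectrum into bins of constant resolving power R (all names are illustrative, not the TauREx API):

```python
import numpy as np

def bin_to_resolution(wl, flux, R, wl_min, wl_max):
    """Average a high-resolution spectrum into bins of resolving power R.

    wl, flux : high-resolution wavelength grid and spectrum
    R        : target resolving power (lambda / delta_lambda)
    Returns bin-centre wavelengths and the binned flux.
    """
    edges = [wl_min]
    while edges[-1] < wl_max:
        edges.append(edges[-1] * (1.0 + 1.0 / R))  # bin width grows as lambda / R
    edges = np.array(edges)
    centres, binned = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (wl >= lo) & (wl < hi)
        if mask.any():
            centres.append(0.5 * (lo + hi))
            binned.append(flux[mask].mean())
    return np.array(centres), np.array(binned)
```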
### 2.4 Light Curve Modelling and Fitting
For each spectral bin, PyLightCurve (https://github.com/ucl-exoplanets/pylightcurve;
Tsiaras et al., 2016a) is used to model a noise-free
transit/eclipse of the planet. The transits were all modelled with quadratic
limb darkening coefficients from Claret et al. (2013), calculated using
ExoTETHyS (Morello et al., 2020). The Twinkle spectrometer features a split at
2.43 $\mu$m, creating two channels. For each of these a white light curve is
also generated. The spectral light curves are created at the native resolution
of the instrument (R$\sim$20-70). A time-series is created with a cadence
equal to the time between destructive reads and the light curve integrated
over each of these exposures. The noise per integration, as calculated in
Section 2.2, is then used to create noisy light curves by adding Gaussian
scatter. Further updates will include the ability to add ramps due to detector
persistence as well as other time-varying systematics.
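The exposure-integration and noise-injection steps can be sketched as follows, with a toy box-shaped transit standing in for the PyLightCurve model (all names and parameters here are illustrative):

```python
import numpy as np

def box_transit(t, t0, depth, duration):
    """Toy box-shaped transit, standing in for the PyLightCurve model."""
    return np.where(np.abs(t - t0) < 0.5 * duration, 1.0 - depth, 1.0)

def noisy_light_curve(t0, depth, duration, cadence, span, noise, seed=0, n_sub=10):
    """Sample the model at a fixed cadence, integrate over each exposure by
    sub-sampling, then add Gaussian scatter at the radiometric noise level."""
    rng = np.random.default_rng(seed)
    starts = np.arange(-0.5 * span, 0.5 * span, cadence)
    # average n_sub sub-samples per exposure to mimic integrating over the read
    sub = starts[:, None] + cadence * (np.arange(n_sub) + 0.5)[None, :] / n_sub
    model = box_transit(sub, t0, depth, duration).mean(axis=1)
    return starts + 0.5 * cadence, model + rng.normal(0.0, noise, model.size)
```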
For the fitting of the light curves a Markov chain Monte Carlo (MCMC) is run
using emcee (Foreman-Mackey et al., 2013) via the PyLightCurve package, here
with 150,000 iterations, a burn-in of 100,000, and 100 walkers. For the
simulations shown here, both white light curves are individually fitted with
the inclination (i), reduced semi-major axis (a/R∗), transit mid-time (T0) and
planet-to-star radius ratio (Rp/Rs) as free parameters. A weighted average of
the recovered values for each of these parameters, except the planet-to-star
radius ratio, is then fixed for the fitting of the spectral light curves where
only the planet-to-star radius ratio is fitted. This provides a retrieved
transit/eclipse depth for each light curve, along with the error associated
with this parameter. If further complexity, such as ramps, is added to the
light curve, future iterations of the code will allow for multiple light curve
fits. In this case the uncertainties in the individual data points are
increased such that their median matches the standard deviation of the
residuals, a common technique when analysing Hubble observations of exoplanets
(e.g. Kreidberg et al., 2014; Tsiaras et al., 2016b).
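The combination of the parameters recovered from the two white light curves can be illustrated with an inverse-variance weighted average (a generic sketch, not the Terminus implementation):

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted mean of parameter estimates (e.g. i, a/R*, T0
    from the two white light curves) and the uncertainty of that mean."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, math.sqrt(1.0 / sum(weights))
```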
For fainter targets, a spectrum with a reduced resolution can be requested and
Terminus will combine the light curves and provide a spectrum with a
resolution as close to the desired as possible. While the default cadence is
set by the saturation time of the detector, it can be lowered or exposures can be
combined. Additionally, multiple transits (or eclipses) can be individually
modelled, fitted and then combined. These functionalities are all controlled
by the input configuration file. Once a spectrum has been generated, an
automated interface with TauREx 3 can then be used to fit the data and
retrieve the atmospheric parameters.
To compare the errors predicted by the radiometric model to those from fitted
light curves, we model a single observation of HD 209458 b (Charbonneau et
al., 2000; Henry et al., 2000). For the atmosphere we model a composition
based loosely on that retrieved from the HST data of this planet (Tsiaras et
al., 2016b; MacDonald & Madhusudhan, 2017). We assume a plane-parallel
atmosphere with 100 layers and include the contributions of collision-induced
absorption (CIA) of H2-H2 (Abel et al., 2011; Fletcher et al., 2018) and H2-He
(Abel et al., 2012), Rayleigh scattering and grey-clouds. In terms of
molecular line lists, we import the following: H2O (Polyansky et al., 2018),
NH3 (Yurchenko et al., 2011), CH4 (Yurchenko et al., 2017) and HCN (Barber et
al., 2014).
Figure 3 displays the errors on the transit depth predicted by the radiometric
portion of Terminus as well as the uncertainties recovered from the light
curve fits. While the agreement is generally good, within 10%, there appears
to be a wavelength-dependent effect on the accuracy of the radiometric tool.
The trend seen could be due to the limb darkening coefficients, which change
with wavelength and alter the shape of the light curve.
Figure 3: Comparison of error bars obtained from the radiometric model (black)
and light curve fitting for HD 209458 b. The wavelength dependent difference
between the models could be due to limb darkening coefficients.
## 3 Orbit Modelling
Observatories in low Earth orbits can experience interruptions in target
visibility due to Earth occultations. Additionally, instruments and spacecraft
usually have specific target-Sun, target-Moon or target-Earth limb
restrictions. To account for these, Terminus is capable of modelling the orbit
of a spacecraft and calculating angles between the target and the Earth limb,
the Sun or other celestial body, in a similar way to tools used for other
missions (e.g. for CHEOPS: Kuntzer et al. (2014)).
The tool operates within an Earth-centred frame and the positions of celestial
objects (the Sun, Moon etc.) are loaded from the JPL Horizons
service (https://ssd.jpl.nasa.gov/horizons.cgi). The spacecraft’s orbit is
defined by an ellipse which is subsequently inclined by rotating it about the X
axis. The right ascension of the ascending node (RAAN) is then used to rotate
this about the Z axis.
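The construction described (ellipse, inclination, then a rotation by the RAAN about the Z axis) can be sketched with rotation matrices; this is an illustrative reimplementation, not the Terminus code:

```python
import numpy as np

def orbit_position(nu, a, e, inc, raan):
    """Position on an orbit: ellipse in the reference plane, inclined about
    the X axis, then rotated about Z by the RAAN (all angles in radians).

    nu : true anomaly, a : semi-major axis, e : eccentricity.
    """
    r = a * (1.0 - e ** 2) / (1.0 + e * np.cos(nu))  # conic equation
    p = np.array([r * np.cos(nu), r * np.sin(nu), 0.0])
    # incline the orbital plane (rotation about the X axis)
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(inc), -np.sin(inc)],
                   [0.0, np.sin(inc), np.cos(inc)]])
    # rotate the ascending node about the Z axis
    rz = np.array([[np.cos(raan), -np.sin(raan), 0.0],
                   [np.sin(raan), np.cos(raan), 0.0],
                   [0.0, 0.0, 1.0]])
    return rz @ rx @ p
```

For a circular orbit (e = 0) the radius is constant, so the rotations only change the orientation of the orbital plane, not the spacecraft's distance from the Earth's centre.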
Twinkle will operate in a Sun-synchronous orbit and here we modelled the
following orbital parameters: altitude = 700 km, inclination = 90.4∘,
eccentricity = 0, RAAN = 190.4∘ (i.e. 6am). These are subject to change based
upon launch availability but provide an approximate description of the
expected operational state. The orbit of Twinkle during May 2024 is depicted
in Figure 4.
Figure 4: Modelled orbit of Twinkle (red) during May 2024. The yellow vector
indicates the direction of the Sun while the black represents the anti-sun
vector (i.e. the centre of Twinkle’s field of regard). The Earth is
represented by the sphere with the terminator between day and night roughly
shown.
Figure 5: Sky coverage of Twinkle given the specific exclusion angles. The
effects of individual constraints are shown for the Sun, Earth and Moon
alongside the combination of them all. Stars indicate known transiting
exoplanet hosts with HD 209458 and WASP-127 highlighted by light blue and
green stars respectively. We note that the colour bar axes differ between each
plot.
Figure 6: Sky coverage of JWST (left) and Ariel (right) which will have
continuous viewing zones at the ecliptic poles. These missions are unaffected
by Earth obscuration due to their L2 orbit.
As mentioned, the code can impose a number of exclusion angles to explore
their effects on target availability. Here we modelled Sun, Earth and Moon
exclusion angles of 140∘, 20∘ and 5∘ respectively. The first of these is
largely due to thermal constraints while the latter two are to reduce stray
light. The Earth and Moon exclusion angles for Twinkle are still under study
but the values chosen here are similar to those of other observatories
operating in sun-synchronous orbits or those proposed to do so (Kuntzer et
al., 2014; Deroo et al., 2012).
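An exclusion-angle test of this kind reduces to checking the angular separation between the target direction and each restricted body; a minimal sketch (unit vectors and angle values are illustrative, not Terminus internals):

```python
import numpy as np

def is_observable(target, bodies, min_angles):
    """Check a unit target vector against per-body exclusion angles.

    bodies     : dict of body name -> unit vector from spacecraft to body
    min_angles : dict of body name -> minimum allowed separation (degrees);
                 e.g. a Sun exclusion angle of 140 degrees for anti-sun pointing.
    """
    for name, vec in bodies.items():
        sep = np.degrees(np.arccos(np.clip(np.dot(target, vec), -1.0, 1.0)))
        if sep < min_angles[name]:
            return False
    return True
```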
The effect of each exclusion angle on the sky coverage is shown in Figure 5
along with the effect of combining them all. In each case, the metric shown is
the total time the area of sky can be observed over the course of a year. The
plots highlight Twinkle’s excellent coverage of the ecliptic plane although
it, like CHEOPS, lacks the ability to study planets close to the ecliptic
poles. However, the JWST and Ariel missions will prefer the polar regions, as
shown in Figure 6, and thus both Twinkle and CHEOPS provide complementary
coverage.
## 4 Partial Light Curves
Figure 7: Comparison of the predicted gap sizes for HD 209458 b (RA = 330, Dec
= 18) from Terminus and Freeflyer. The transit light curves are offset for
clarity and the gap sizes are seen to be highly similar. We note that these
gaps are due solely to physical obscuration by the Earth and no exclusion
angle is included. Figure 8: Effect of different Earth exclusion angles on the
percentage of time on target (black) and size of the gaps (red) for a transit
observation of HD 209458 b. Figure 9: The 17 transits of HD 209458 b that are
observable with Twinkle over the course of a single year. The gaps are due to
Earth obscuration plus an exclusion angle of 20∘. All light curves have gaps
of roughly 45 minutes which are comparable to those in the Hubble data of the
same planet and have been offset for clarity.
Figure 10: Recovered spectrum and error bars from different light curve fits
for HD 209458 b. In each case, red represents the fitting of a full light
curve (same as Figure 3), blue the fitting of the partial light curve (LC1
from Figure 9) and black represents the predicted error from the radiometric
model. The partial light curve results in far larger uncertainties due to the
reduction in the number of data points.
From an exoplanet modelling perspective, it has thus far been assumed that a
full light curve is observed. However, in reality, for space telescopes
in a low-Earth orbit, sometimes only partial light curves will be obtained due
to Earth obscuration as discussed in Section 3. These gaps cannot be
completely accounted for in radiometric models and thus a time-domain code,
such as Terminus, is required.
To verify the orbital code created, and to explore the effect of partial light
curves, we check our results against those of Edwards (2019). In Edwards
(2019), the mission design, analysis and operation software
Freeflyer (https://ai-solutions.com/freeflyer/) was used to model the
obscurations of HD 209458 b by the Earth throughout a year. FreeFlyer has
previously been used to support planning for several missions including NASA’s
Solar Dynamics Observatory (SDO). We note that Freeflyer only models the
physical obscuration of the target star by Earth and thus for this comparison
we set the Earth exclusion angle to zero.
As mentioned, Twinkle’s field of regard means targets are not continuously
visible and, over a year, 17 transits of HD 209458 b would be observable by
Twinkle. Given the sky location of HD 209458, Right Ascension (RA): 330.79∘;
Declination (Dec): 18.88∘, the target will always be periodically obscured by
the Earth. In Figure 7, we show a comparison between the predicted gaps for
the first of these transits which are shown to be in excellent agreement.
Meanwhile, Figure 8 displays the increase in gap size that would be incurred
by various Earth exclusion angles. Going from an angle of 0 to 20 degrees
increases the gap size from 20 minutes to 44 minutes. The latter case would
mean Twinkle could be on-target for over half an orbit (54 minutes). In
comparison, past Hubble observations featured gaps of 47 minutes, with 48
minutes on target per orbit (Deming et al., 2013; Tsiaras et al., 2016b).
Hence, Twinkle’s observing efficiency for HD 209458 b will probably be similar
to that of Hubble. All potential transit observations of HD 209458 b have gaps
of a similar size (see Figure 9).
Here we fit the first available light curve; the recovered spectrum, and the
associated errors, are shown in Figure 10. As expected, the gaps increase the
uncertainties on the recovered transit depth. Using Equations 5 and 8, one
would expect the error to increase by 35%
($\sigma_{p}=\sigma_{f}\times\frac{1}{\sqrt{0.55}}=1.348\sigma_{f}$). We see
an increase of 20-40% and thus the radiometric model may also provide
reasonable errors for partial light curves.
Figure 11: The 18 transits of WASP-127 b that are observable with Twinkle in
2024 which have been offset for clarity. The gaps are due to Earth obscuration
plus an exclusion angle of 20∘. LC 9 has no gaps, highlighting the importance
of observational planning with Twinkle, or other LEO satellites, and the
benefit of a sun-synchronous orbit over the equatorial orbit of Hubble.
Figure 12: Recovered spectrum and error bars from different light curve fits
for WASP-127 b. In each case, red represents the fitting of a full light curve
(e.g. LC9 in Figure 11), blue the fitting of the partial light curve (LC1 from
Figure 11) and black represents the predicted error from the radiometric
model. The errors from the full light curve are found to agree with the
radiometric prediction, again with the exception of a slight, wavelength
dependent, variation. The partial light curve results in far larger
uncertainties.
However, some planets may have more variable gaps, due to their location in
the sky and a changing spacecraft-Earth-target geometry, and thus may be
affected more significantly. For these planets, the scheduling of observations
is likely to be highly important. Terminus is able to provide input into
studies exploring the effects of partial light curves.
As an initial step to understand the variability of Earth obscuration, we now
model observations of WASP-127 b (Lam et al., 2017). WASP-127 is located such
that Twinkle will potentially have a continuous, unobstructed view of the
target during a transit (RA: 160.56∘, Dec: -3.84∘). However, some potential
observations will incur Earth obscuration and the amount of time lost will be
dependent upon the Earth exclusion angle required. In the case of the 20∘
exclusion angle modelled here, Twinkle would have access to one complete
transit (i.e. no gaps due to Earth obscuration) in 2024 as shown in Figure 11.
The other available observation periods would incur interruptions of up to 45
minutes over a 98 minute orbit. In the case of the Hubble
observations of WASP-127 b (Skaf et al., 2020; Spake et al., 2020), the
spacecraft could only be pointed at the target for 40 minutes per orbit (55
minute gaps). Hence, through careful selection of observing windows, the
efficiency of Twinkle’s observations of WASP-127 b could be far greater than
that of Hubble’s for this target.
To understand the impact of these gaps, we simulate a set of light curves for
a single observation of WASP-127 b and compare the errors on the transit
depths when gaps are induced. Again we base the atmosphere on current
observations which suggest a large water abundance and potentially the
presence of FeH (Skaf et al., 2020), which we model using the line lists from
Dulick et al. (2003); Wende et al. (2010).
The results of these fittings are shown in Figure 12. The full light curve
again has a wavelength dependent variation from the predicted radiometric
errors but this is again relatively small. As expected, the fitting of the
partial light curve results in larger uncertainties on the transit depth. In
the case modelled, LC1 from Figure 11, Twinkle only observes the target for
46% of the transit. Using Equations 5 and 8, one would expect the error to
increase by 47%
($\sigma_{p}=\sigma_{f}\times\frac{1}{\sqrt{0.46}}=1.474\sigma_{f}$). We see
the increase is wavelength dependent and generally between 20-40%, less than
predicted. Therefore the radiometric model may not always be capable of
providing accurate error estimations.
The recovered precision on different parameters is likely to be dependent upon
the location of the gaps in the light curve. In this case the central portion
of the transit is well sampled allowing for a precise recovery of the transit
depth. However, ingress/egress are less well sampled and thus orbital
parameters such as the inclination (i) and reduced semi-major axis (a/R∗) may
be less well determined.
Furthermore, the standard methodology of analysing transiting exoplanet data
is to fit to the light curves for planet-to-star radius ratio (Rp/Rs) to
achieve a spectrum with error bars before performing atmospheric retrievals on
said spectrum. This approach, which has essentially been followed here,
distils time-domain observations down to a single point and thus much
information about the orbital parameters of the system is lost. Fitting of
full light curves (no gaps) usually retrieves the orbital parameters
accurately but, as discussed, gaps can lead to less certainty. This potential
degeneracy is lost in the standard method and so, to bring the data analysis
one step closer to the raw data, retrievals with Terminus generated data could
be conducted using the light curves themselves and the methodology described
in Yip et al. (2019). The so-called “L-retrieval” allows for the orbital
parameters (e.g. inclination, semi-major axis) to be free parameters in the
retrieval to ensure that orbital degeneracies are accounted for. Such a
methodology would be useful in the exploration of the effects of Earth
obscuration, particularly as these orbital elements have been shown to be
important in recovering the correct optical slope (Alexoudi et al., 2018). A
thorough analysis is needed to explore this fully and Terminus can feed vital
information into such an effort.
## 5 Availability of Solar System Bodies
Twinkle will also conduct spectroscopy of objects within our Solar System with
perhaps the most promising use of the mission in this regard being the
characterisation of small bodies. In particular, a diverse array of shapes for
the 3 $\mu$m hydration feature, which generally cannot be observed from the
ground, have been seen and used to classify asteroids (e.g. Mastrapa et al.,
2009; Campins et al., 2010; Rivkin & Emery, 2010; Takir & Emery, 2012; Takir
et al., 2013). Twinkle’s broad wavelength coverage will allow for studies of
this spectral feature, and many others, as outlined in Edwards et al. (2019a).
The times at which major and minor Solar System bodies are within Twinkle’s
field of regard have previously been studied in Edwards et al. (2019a, c).
These studies showed that the outer planets, and main belt asteroids, will
have long, regular periods within Twinkle’s field of regard. However, the
observation periods of Near-Earth Objects (NEOs) and Near-Earth Asteroids
(NEAs) are far more sporadic. Hence, we revisit this analysis with the
addition of considering Earth obscuration. For our example target, we choose
99942 Apophis (2004 MN4), a potentially hazardous asteroid (PHA). Apophis has
a diameter of around 400 m (Licandro et al., 2015; Müller et al., 2014) and
will have a close fly-by in 2029 (Figure 13). While it had been thought there
was potentially a high probability of impact during this fly-by, or one in
2036, this has now been significantly downgraded (Krolikowska & Sitarski,
2010; Chesley et al., 2010; Thuillot et al., 2015). Nevertheless, passing
around 31,000 km from the Earth’s surface, Apophis will come within the orbits
of geosynchronous satellites (see Figure 13).
By comparing the data to likely meteorite analogues, current spectral analyses
of Apophis have concluded it is an Sq-class asteroid that closely resembles
the LL ordinary chondrite meteorites in terms of composition (Binzel et al.,
2009; Reddy et al., 2018). This data was measured over 0.55-2.45 $\mu m$ and
similarities have been noted to that of the asteroid Itokawa which was visited
and studied by the Hayabusa mission (Abe et al., 2006).
Figure 13: Top: orbit of Earth and Apophis from June 2028 to June 2029. In the
period, Apophis crosses the orbit of Earth twice with the second of these
crosses occurring during April 2029. Bottom: the distance between Earth and
Apophis during the April 2029, highlighting that the minimum separation from
the Earth surface is closer than geosynchronous satellites. Data for these
plots was acquired via the NASA JPL Horizons service.
Figure 14: Visible magnitude (top) and rate of apparent motion (bottom) for
Apophis during its close fly-by in 2029. The availability of Apophis was
checked at a cadence of 1 minute with dark blue indicating it is unobstructed,
light blue showing times at which the Earth is occulting the target and black
representing times when it has left the field of regard (i.e. exclusion due to
Sun-target angle). The left-hand plots show these values for the week before
the closest approach while the right-hand plots display the Earth obscuration
more readily as Apophis approaches a rate of 30 mas/s. Figure 15: Average sky
coverage during the two weeks before the closest approach of Apophis and the
sky location of Apophis over that same period (white). It should be noted
that, for the plotted Apophis trajectory, the time spent outside the FOR is
only a few hours whereas the time spent within it equates to several days.
Figure 16: Simulated spectra for Apophis. The error bars are for a single
exposure with a 300 s integration time on an object at a visible magnitude of
12. The spectrum is of an LL6 ordinary chondrite meteorite, taken from the
RELAB database (bkr1dp015). We note that the reflectance shown here at shorter
wavelengths ($<0.8\mu m$) is slightly larger than found in actual studies of
Apophis (Binzel et al., 2009; Reddy et al., 2018).
Here, we analyse the availability of Apophis over the week before, and day
after, its closest approach to Earth. Terminus obtains asteroid ephemerides
using the astropy API to the JPL Horizons database (Astropy Collaboration et
al., 2018). In Figure 14 we show the visible magnitude and apparent rate of
motion of Apophis during this period. The interlaced dark and light blue
segments show the availability of the asteroid before it leaves the field of
regard soon after the closest point of its fly-by. The trajectory across the
sky of Apophis is depicted in Figure 15 along with the sky coverage of Twinkle
over this period.
The ability of spacecraft to accurately track non-sidereal objects is key for
their observation. The Spitzer Space Telescope was used extensively for
characterising small bodies (e.g. Trilling et al., 2007; Barucci et al., 2008)
and tracked objects moving at rates of 543 mas/s (Trilling et al., 2010).
Spitzer was oriented using an inertial reference unit comprising several
high performance star trackers and observed asteroids using linear track
segments. These were commanded as a vector rate in J2000 coordinates, passing
through a specified RA and Dec at a specified time. The coordinates of the
target can be obtained from services such as Jet Propulsion Laboratory’s
Horizons System. JWST is expected to be able to track objects moving at up to 30
mas/s (Thomas et al., 2016).
The maximum rate at which Twinkle can track non-sidereal objects is still
under definition but will be >30 mas/s, which we take here as a conservative
maximum value. When this threshold is crossed, Apophis will have a visible
magnitude of approximately 11.8. During the day or so before this rate limit
is crossed, Apophis would be available for periods of 55 minutes, with 40
minute interruptions, again assuming a 20∘ Earth exclusion angle.
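The quoted tracking rates can be checked with a small-angle estimate of on-sky motion between two ephemeris points (an illustrative helper, not part of Terminus):

```python
import math

def angular_rate_mas_per_s(ra1, dec1, ra2, dec2, dt):
    """On-sky angular rate between two positions (degrees) separated by dt
    seconds, in milliarcseconds per second (small-angle approximation)."""
    dra = (ra2 - ra1) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    ddec = dec2 - dec1
    sep_deg = math.hypot(dra, ddec)
    return sep_deg * 3.6e6 / dt  # degrees -> milliarcseconds
```

For instance, a target drifting by 0.001∘ in declination over 120 s is moving at exactly the 30 mas/s threshold discussed above.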
As demonstrated in Figure 16, such observation windows provide plenty of time
to achieve high quality spectra. Here we simulated spectra for Apophis at a
visible magnitude of 12 and an integration time of 5 minutes. We note that the
thermal emission from the asteroid has been subtracted, which was modelled as
a blackbody with a temperature of 300 K, to give the relative reflectance of
the asteroid. The input spectrum was taken from the RELAB
database (http://www.planetary.brown.edu/relab/) and is of an LL6 ordinary
chondrite meteorite.
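As a hedged sketch of the thermal correction, the 300 K blackbody can be evaluated with the Planck function below; the scaling of the blackbody to the observed flux (asteroid size and distance) is omitted:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m / s
KB = 1.380649e-23    # Boltzmann constant, J / K

def planck(wl_um, temperature):
    """Blackbody spectral radiance (W sr^-1 m^-3) at wavelength wl_um
    (microns) and the given temperature (K)."""
    wl = wl_um * 1e-6
    return (2.0 * H * C ** 2 / wl ** 5) / math.expm1(H * C / (wl * KB * temperature))
```

A 300 K blackbody peaks near 9.7 $\mu$m (Wien's law), so over Twinkle's 0.5-4.5 $\mu$m band the thermal contribution rises steeply towards the red end of the spectrum.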
Simulations have suggested the 2029 close encounter could cause landslides on
Apophis, if some parts of its structure are significantly
weak (Yu et al., 2014). The potential for resurfacing NEOs during terrestrial
encounters is discussed in e.g. Binzel et al. (2010); Nesvorný et al. (2010)
and spectral measurements can inform us on the freshness of the asteroid’s
surface, providing evidence for such mechanisms. Additionally, while an impact
in 2029 has been ruled out, the potential for a future collision cannot be
disregarded and further study of the object is needed to refine this. In
particular, the Yarkovsky effect has been shown to significantly alter
predictions beyond 2029 and is sensitive to the physical parameters of
Apophis, such as its albedo, diameter and density (Farnocchia et al., 2013; Yu
et al., 2017).
By observing Apophis simultaneously from 0.5-4.5 $\mu m$, Twinkle could
significantly inform the debate surrounding the nature of Apophis and its
potential threat level to Earth. Therefore, Twinkle could have a role to play
in characterising known NEOs and NEAs, along with those predicted to be
discovered by Near-Earth Object Surveillance Mission (NEOSM, Mainzer et al.
(2019)) and Vera C. Rubin Observatory, previously known as the Large Synoptic
Survey Telescope (LSST, Jones et al. (2018)). The ability of Twinkle to
contribute to the study of NEOs and NEAs, and other specific asteroid
populations, will be thoroughly detailed in further work.
## 6 Conclusions and Future Work
Terminus, a simulator with some time-domain capabilities, has been developed to
model observations with space-based telescopes. This model is especially
applicable to exoplanets and can incorporate gaps in the light curve, caused
by Earth obscuration, and be used to predict the potential impact on the
accuracy of the retrieved atmospheric composition. Here, Terminus is baselined
on the Twinkle Space Telescope but the model can be adapted for any space-
based telescope and is especially applicable to those in a low-Earth orbit.
The impact of gaps in exoplanet observations has not been fully explored and
further work is needed. Obtaining a full transit, or eclipse, light curve is
obviously the ideal case but when it is not possible, such as for HD 209458 b,
an optimisation of the location, and length, of the gaps is required. By being
able to model when these gaps occur, it should be possible to begin to explore
this by running multiple fittings and comparing the retrieved transit depth
and atmospheric parameters.
The Earth exclusion angle considered here is identical for the lit and unlit
portions of the Earth. However, each will contribute different amounts of
stray light and thus likely have separate exclusion angles. Future work will
incorporate this capability, along with the capacity to quantitatively model
the expected stray light from the Earth and Moon to firmly establish the
exclusion angles required. The effect of different orbital parameters (e.g.
altitude, 6am vs 6pm RAAN) can also be explored. Terminus will be updated to
include the South Atlantic Anomaly (SAA) to model the impact in the event that
the spacecraft must limit scientific operations during its ingress into this
region. Other additional development aspects include satellite ground stations
and calculating potential accesses to these facilities. Such capabilities will
allow for the tool to serve wider concept of operations (CONOPS) concerns and,
in the event that spacecraft design for any reason limits operations during
downlink, this can then be accounted for in the scheduling. Additionally,
Terminus could also be used to model other effects such as stellar variability
or detector ramps such as those seen on Hubble and Spitzer.
Finally, Terminus will be incorporated into a web interface to provide the
community with simulations of Twinkle’s capabilities. Doing so will allow the
tool to be more widely used and facilitate in-depth studies of Twinkle’s
capabilities. These could include modelling various atmospheric scenarios for
each planet to judge its suitability for characterisation (e.g. Fortenbach &
Dressing, 2020), performing retrievals on populations of exoplanets (e.g.
Changeat et al., 2020), classifying groups of planets via colour-magnitude
diagrams (e.g. Dransfield & Triaud, 2020), testing machine-learning techniques
for atmospheric retrieval (e.g. Márquez-Neila et al., 2018; Zingales &
Waldmann, 2018; Hayes et al., 2020; Yip et al., 2020) or the exploration of
potential biases in current data analysis techniques (e.g. Feng et al., 2016;
Rocchetto et al., 2016; Changeat et al., 2019; Caldas et al., 2019; Powell et
al., 2019; MacDonald et al., 2020; Taylor et al., 2020). Additionally,
thorough analyses of Twinkle’s capabilities for specific scientific
endeavours, such as confirming/refuting the presence of thermal inversions and
identifying optical absorbers in ultra-hot Jupiters (e.g. Fortney et al.,
2008; Spiegel et al., 2009; Haynes et al., 2015; Evans et al., 2018;
Parmentier et al., 2018; von Essen et al., 2020; Edwards et al., 2020; Pluriel
et al., 2020; Changeat & Edwards, 2021), searching for an exoplanet mass-
metallicity trend (e.g. Wakeford et al., 2017; Welbanks et al., 2019), probing
the atmospheres of planets in/close to the radius valley to discern their true
nature (e.g. Owen & Wu, 2017; Fulton & Petigura, 2018; Zeng et al., 2019),
refining basic planetary and orbital characteristics (e.g. Berardo et al.,
2019; Dalba & Tamburo, 2019; Livingston et al., 2019), measuring planet masses
through accurate transit timings (e.g. Hadden & Lithwick, 2017; Grimm et al.,
2018; Petigura et al., 2018), verifying additional planets within systems
(e.g. Gillon et al., 2017; Bonfanti et al., 2021), studying non-transiting
planets by measuring the planetary infrared excess (Stevenson, 2020), or even
contributing to the search for exomoon candidates (e.g. Simon et al., 2015;
Heller et al., 2016; Teachey & Kipping, 2018), can also be undertaken.
## 7 Acknowledgements
This work has utilised data from FreeFlyer, a mission design, analysis and
operation software created by a.i. solutions. We thank Giovanna Tinetti,
Marcell Tessenyi, Giorgio Savini, Subhajit Sarkar, Enzo Pascale, Angelos
Tsiaras, Philip Windred, Andy Rivkin, Lorenzo Mugnai, Kai Hou Yip, Ahmed Al-
Refaie, Quentin Changeat and Lara Ainsman for their guidance, comments and
useful discussions. This work has been partially funded by the STFC grant
ST/T001836/1.
Software: TauREx3 (Al-Refaie et al., 2019), pylightcurve (Tsiaras et al.,
2016a), ExoTETHyS (Morello et al., 2020), ExoSim (Sarkar et al., 2020b),
Astropy (Astropy Collaboration et al., 2018), h5py (Collette, 2013), emcee
(Foreman-Mackey et al., 2013), Matplotlib (Hunter, 2007), Multinest (Feroz et
al., 2009; Buchner et al., 2014), Pandas (McKinney, 2011), Numpy (Oliphant,
2006), SciPy (Virtanen et al., 2020), corner (Foreman-Mackey, 2016).
## References
* Abe et al. (2006) Abe, M., Takagi, Y., Kitazato, K., et al. 2006, Science, 312, 1334. https://science.sciencemag.org/content/312/5778/1334
* Abel et al. (2011) Abel, M., Frommhold, L., Li, X., & Hunt, K. L. 2011, The Journal of Physical Chemistry A, 115, 6805
* Abel et al. (2012) —. 2012, The Journal of chemical physics, 136, 044319
* Agúndez et al. (2012) Agúndez, M., Venot, O., Iro, N., et al. 2012, Astronomy & Astrophysics, 548, A73. http://dx.doi.org/10.1051/0004-6361/201220365
* Akeson et al. (2013) Akeson, R. L., Chen, X., Ciardi, D., et al. 2013, PASP, 125, 989
* Al-Refaie et al. (2019) Al-Refaie, A. F., Changeat, Q., Waldmann, I. P., & Tinetti, G. 2019, arXiv e-prints, arXiv:1912.07759
* Alexoudi et al. (2018) Alexoudi, X., Mallonn, M., von Essen, C., et al. 2018, A&A, 620, A142
* Allard et al. (2012) Allard, F., Homeier, D., & Freytag, B. 2012, Philosophical Transactions of the Royal Society of London Series A, 370, 2765
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123
* Baraffe et al. (2015) Baraffe, I., Homeier, D., Allard, F., & Chabrier, G. 2015, Astronomy & Astrophysics, 577, A42
* Barber et al. (2014) Barber, R. J., Strange, J. K., Hill, C., et al. 2014, MNRAS, 437, 1828
* Barclay et al. (2018) Barclay, T., Pepper, J., & Quintana, E. V. 2018, ArXiv e-prints 1804.05050, arXiv:1804.05050
* Barucci et al. (2008) Barucci, M., Fornasier, S., Dotto, E., et al. 2008, A&A, 477, 665
* Batalha et al. (2017) Batalha, N. E., Mandell, A., Pontoppidan, K., et al. 2017, PASP, 129, 064501
* Benz et al. (2020) Benz, W., Broeg, C., Fortier, A., et al. 2020, arXiv e-prints, arXiv:2009.11633
* Berardo et al. (2019) Berardo, D., Crossfield, I. J. M., Werner, M., et al. 2019, AJ, 157, 185
* Binzel et al. (2009) Binzel, R. P., Rivkin, A. S., Thomas, C. A., et al. 2009, Icarus, 200, 480 . http://www.sciencedirect.com/science/article/pii/S001910350800434X
* Binzel et al. (2010) Binzel, R. P., Morbidelli, A., Merouane, S., et al. 2010, Nature, 463, 331
* Bonfanti et al. (2021) Bonfanti, A., Delrez, L., Hooton, M. J., et al. 2021, arXiv e-prints, arXiv:2101.00663
* Bordé et al. (2003) Bordé, P., Rouan, D., & Léger, A. 2003, A&A, 405, 1137
* Buchner et al. (2014) Buchner, J., Georgakakis, A., Nandra, K., et al. 2014, A&A, 564, A125
* Caldas et al. (2019) Caldas, A., Leconte, J., Selsis, F., et al. 2019, A&A, 623, A161
* Campins et al. (2010) Campins, H., Hargrove, K., Pinilla-Alonso, N., et al. 2010, Nature, 464, 1320
* Changeat et al. (2020) Changeat, Q., Al-Refaie, A., Mugnai, L. V., et al. 2020, AJ, 160, 80
* Changeat & Edwards (2021) Changeat, Q., & Edwards, B. 2021, arXiv e-prints, arXiv:2101.00469
* Changeat et al. (2019) Changeat, Q., Edwards, B., Waldmann, I. P., & Tinetti, G. 2019, ApJ, 886, 39
* Charbonneau et al. (2000) Charbonneau, D., Brown, T. M., Latham, D. W., & Mayor, M. 2000, ApJ, 529, L45
* Chesley et al. (2010) Chesley, S. R., Baer, J., & Monet, D. G. 2010, Icarus, 210, 158
* Claret et al. (2013) Claret, A., Hauschildt, P. H., & Witte, S. 2013, A&A, 552, A16
* Collette (2013) Collette, A. 2013, Python and HDF5 (O’Reilly)
* Dalba & Tamburo (2019) Dalba, P. A., & Tamburo, P. 2019, ApJ, 873, L17
* Deming et al. (2013) Deming, D., Wilkins, A., McCullough, P., et al. 2013, ApJ, 774, 95
* Deroo et al. (2012) Deroo, P., Swain, M. R., & Green, R. O. 2012, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8442, Proc. SPIE, 844241
* Dransfield & Triaud (2020) Dransfield, G., & Triaud, A. H. M. J. 2020, MNRAS, 499, 505
* Dulick et al. (2003) Dulick, M., Bauschlicher, C. W., J., Burrows, A., et al. 2003, ApJ, 594, 651
* Edwards (2019) Edwards, B. 2019, PhD thesis, University College London
* Edwards et al. (2019a) Edwards, B., Lindsay, S., Savini, G., et al. 2019a, Journal of Astronomical Telescopes, Instruments, and Systems, 5, 034004
* Edwards et al. (2019b) Edwards, B., Mugnai, L., Tinetti, G., Pascale, E., & Sarkar, S. 2019b, Astronomical Journal, 157, 242
* Edwards et al. (2019c) Edwards, B., Savini, G., Tinetti, G., et al. 2019c, Journal of Astronomical Telescopes, Instruments, and Systems, 5, 014006
* Edwards et al. (2019d) Edwards, B., Rice, M., Zingales, T., et al. 2019d, Experimental Astronomy, 47, 29
* Edwards et al. (2020) Edwards, B., Changeat, Q., Baeyens, R., et al. 2020, AJ, 160, 8
* Ehrenreich et al. (2020) Ehrenreich, D., Lovis, C., Allart, R., et al. 2020, arXiv e-prints, arXiv:2003.05528
* Evans et al. (2018) Evans, T. M., Sing, D. K., Goyal, J. M., et al. 2018, AJ, 156, 283
* Farnocchia et al. (2013) Farnocchia, D., Chesley, S. R., Chodas, P. W., et al. 2013, Icarus, 224, 192
* Feng et al. (2016) Feng, Y. K., Line, M. R., Fortney, J. J., et al. 2016, ApJ, 829, 52
* Feroz et al. (2009) Feroz, F., Hobson, M. P., & Bridges, M. 2009, MNRAS, 398, 1601
* Fletcher et al. (2018) Fletcher, L. N., Gustafsson, M., & Orton, G. S. 2018, The Astrophysical Journal Supplement Series, 235, 24
* Focardi et al. (2018) Focardi, M., Pace, E., Farina, M., et al. 2018, Experimental Astronomy, 46, 1
* Foreman-Mackey (2016) Foreman-Mackey, D. 2016, The Journal of Open Source Software, 1, 24. https://doi.org/10.21105/joss.00024
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306
* Fortenbach & Dressing (2020) Fortenbach, C. D., & Dressing, C. D. 2020, PASP, 132, 054501
* Fortney et al. (2008) Fortney, J. J., Lodders, K., Marley, M. S., & Freedman, R. S. 2008, ApJ, 678, 1419
* Fulton & Petigura (2018) Fulton, B. J., & Petigura, E. A. 2018, arXiv e-prints, arXiv:1805.01453
* Futyan et al. (2020) Futyan, D., Fortier, A., Beck, M., et al. 2020, A&A, 635, A23
* Garhart et al. (2020) Garhart, E., Deming, D., Mandell, A., et al. 2020, AJ, 159, 137
* Geers et al. (2019) Geers, V. C., Klaassen, P. D., Beard, S., & European Consortium, M. 2019, in Astronomical Society of the Pacific Conference Series, Vol. 523, Astronomical Data Analysis Software and Systems XXVII, ed. P. J. Teuben, M. W. Pound, B. A. Thomas, & E. M. Warner, 641
* Gillon et al. (2017) Gillon, M., Triaud, A. H. M. J., Demory, B.-O., et al. 2017, Nature, 542, 456
* Gordon et al. (2016) Gordon, I., Rothman, L. S., Wilzewski, J. S., et al. 2016, in AAS/Division for Planetary Sciences Meeting Abstracts, Vol. 48, AAS/Division for Planetary Sciences Meeting Abstracts #48, 421.13
* Greene et al. (2016) Greene, T. P., Line, M. R., Montero, C., et al. 2016, ApJ, 817, 17
* Grimm et al. (2018) Grimm, S. L., Demory, B.-O., Gillon, M., et al. 2018, A&A, 613, A68
* Hadden & Lithwick (2017) Hadden, S., & Lithwick, Y. 2017, AJ, 154, 5
* Hayes et al. (2020) Hayes, J. J. C., Kerins, E., Awiphan, S., et al. 2020, MNRAS, 494, 4492
* Haynes et al. (2015) Haynes, K., Mandell, A. M., Madhusudhan, N., Deming, D., & Knutson, H. 2015, The Astrophysical Journal, 806, 146. http://stacks.iop.org/0004-637X/806/i=2/a=146
* Heller et al. (2016) Heller, R., Hippke, M., Placek, B., Angerhausen, D., & Agol, E. 2016, A&A, 591, A67
* Henry et al. (2000) Henry, G. W., Marcy, G. W., Butler, R. P., & Vogt, S. S. 2000, ApJ, 529, L41
* Hoeijmakers et al. (2018) Hoeijmakers, H. J., Ehrenreich, D., Heng, K., et al. 2018, Nature, 560, 453
* Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90
* Husser et al. (2013) Husser, T.-O., Wende-von Berg, S., Dreizler, S., et al. 2013, Astronomy & Astrophysics, 553, A6. http://dx.doi.org/10.1051/0004-6361/201219058
* Iyer et al. (2016) Iyer, A. R., Swain, M. R., Zellem, R. T., et al. 2016, ApJ, 823, 109
* James et al. (1997) James, J. F., Mukai, T., Watanabe, T., Ishiguro, M., & Nakamura, R. 1997, MNRAS, 288, 1022
* John (1988) John, T. L. 1988, A&A, 193, 189
* Jones et al. (2018) Jones, R. L., Slater, C. T., Moeyens, J., et al. 2018, Icarus, 303, 181
* Kreidberg et al. (2014) Kreidberg, L., Bean, J. L., Désert, J.-M., et al. 2014, Nature, 505, 69
* Krolikowska & Sitarski (2010) Krolikowska, M., & Sitarski, G. 2010, arXiv e-prints, arXiv:1009.2639
* Kuntzer et al. (2014) Kuntzer, T., Fortier, A., & Benz, W. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9149, Proc. SPIE, 91490W
* Lam et al. (2017) Lam, K. W. F., Faedi, F., Brown, D. J. A., et al. 2017, A&A, 599, A3
* Leinert et al. (1998) Leinert, C., Bowyer, S., Haikala, L. K., et al. 1998, A&AS, 127, 1
* Licandro et al. (2015) Licandro, J., Müller, T., Alvarez, C., Alí-Lagoa, V., & Delbò, M. 2015, arXiv e-prints, arXiv:1510.06248
* Livingston et al. (2019) Livingston, J. H., Crossfield, I. J. M., Werner, M. W., et al. 2019, AJ, 157, 102
* MacDonald et al. (2020) MacDonald, R. J., Goyal, J. M., & Lewis, N. K. 2020, ApJ, 893, L43
* MacDonald & Madhusudhan (2017) MacDonald, R. J., & Madhusudhan, N. 2017, MNRAS, 469, 1979
* Mainzer et al. (2014) Mainzer, A., Bauer, J., Cutri, R. M., et al. 2014, ApJ, 792, 30
* Mainzer et al. (2019) Mainzer, A., Bauer, J., Cutri, R., et al. 2019, in EPSC-DPS Joint Meeting 2019, Vol. 2019, EPSC–DPS2019–1049
* Márquez-Neila et al. (2018) Márquez-Neila, P., Fisher, C., Sznitman, R., & Heng, K. 2018, Nature Astronomy, 2, 719
* Martin-Lagarde et al. (2019) Martin-Lagarde, M., Lagage, P.-O., Gastaud, R., et al. 2019, in EPSC-DPS Joint Meeting 2019, Vol. 2019, EPSC–DPS2019–1879
* Mastrapa et al. (2009) Mastrapa, R., Sandford, S., Roush, T., Cruikshank, D., & Dalle Ore, C. 2009, ApJ, 701, 1347
* McKinney (2011) McKinney, W. 2011, Python for High Performance and Scientific Computing, 14
* Morello et al. (2020) Morello, G., Claret, A., Martin-Lagarde, M., et al. 2020, AJ, 159, 75
* Mugnai et al. (2020) Mugnai, L. V., Pascale, E., Edwards, B., Papageorgiou, A., & Sarkar, S. 2020, Experimental Astronomy, 50, 303
* Müller et al. (2014) Müller, T. G., Kiss, C., Scheirich, P., et al. 2014, A&A, 566, A22
* Murakami et al. (2007) Murakami, H., Baba, H., Barthel, P., et al. 2007, PASJ, 59, S369
* Nagler et al. (2019) Nagler, P. C., Edwards, B., Kilpatrick, B., et al. 2019, Journal of Astronomical Instrumentation, 8, 1950011
* Nesvorný et al. (2010) Nesvorný, D., Bottke, W. F., Vokrouhlický, D., Chapman, C. R., & Rafkin, S. 2010, Icarus, 209, 510
* Nielsen et al. (2016) Nielsen, L. D., Ferruit, P., Giardino, G., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9904, Space Telescopes and Instrumentation 2016: Optical, Infrared, and Millimeter Wave, ed. H. A. MacEwen, G. G. Fazio, M. Lystrup, N. Batalha, N. Siegler, & E. C. Tong, 99043O
* Oliphant (2006) Oliphant, T. E. 2006, A guide to NumPy, Vol. 1 (Trelgol Publishing USA)
* Owen & Wu (2017) Owen, J. E., & Wu, Y. 2017, ApJ, 847, 29
* Parmentier et al. (2018) Parmentier, V., Line, M. R., Bean, J. L., et al. 2018, A&A, 617, A110. https://doi.org/10.1051/0004-6361/201833059
* Pascale et al. (2015) Pascale, E., Waldmann, I. P., MacTavish, C. J., et al. 2015, Experimental Astronomy, 40, 601
* Petigura et al. (2018) Petigura, E. A., Benneke, B., Batygin, K., et al. 2018, AJ, 156, 89
* Pluriel et al. (2020) Pluriel, W., Whiteford, N., Edwards, B., et al. 2020, arXiv e-prints, arXiv:2006.14199
* Polyansky et al. (2018) Polyansky, O. L., Kyuberis, A. A., Zobov, N. F., et al. 2018, Monthly Notices of the Royal Astronomical Society, 480, 2597
* Pontoppidan et al. (2016) Pontoppidan, K. M., Pickering, T. E., Laidler, V. G., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9910, Observatory Operations: Strategies, Processes, and Systems VI, ed. A. B. Peck, R. L. Seaman, & C. R. Benn, 991016
* Powell et al. (2019) Powell, D., Louden, T., Kreidberg, L., et al. 2019, ApJ, 887, 170
* Puig et al. (2015) Puig, L., Isaak, K., Linder, M., et al. 2015, Experimental Astronomy, 40, 393
* Puig et al. (2018) Puig, L., Pilbratt, G., Heske, A., et al. 2018, Experimental Astronomy, doi:10.1007/s10686-018-9604-3
* Rauscher et al. (2007) Rauscher, B. J., Fox, O., Ferruit, P., et al. 2007, PASP, 119, 7786
* Reddy et al. (2018) Reddy, V., Sanchez, J. A., Furfaro, R., et al. 2018, AJ, 155, 140
* Ricker et al. (2014) Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2014, in Proceedings of the SPIE, Vol. 9143, Space Telescopes and Instrumentation 2014: Optical, Infrared, and Millimeter Wave, 914320
* Rivkin & Emery (2010) Rivkin, A., & Emery, J. 2010, Nature, 464, 1322
* Rocchetto et al. (2016) Rocchetto, M., Waldmann, I. P., Venot, O., Lagage, P. O., & Tinetti, G. 2016, ApJ, 833, 120
* Rothman & Gordon (2014) Rothman, L. S., & Gordon, I. E. 2014, in 13th International HITRAN Conference, June 2014, Cambridge, Massachusetts, USA
* Sarkar et al. (2020a) Sarkar, S., Madhusudhan, N., & Papageorgiou, A. 2020a, MNRAS, 491, 378
* Sarkar et al. (2016) Sarkar, S., Papageorgiou, A., & Pascale, E. 2016, in Space Telescopes and Instrumentation 2016: Optical, Infrared, and Millimeter Wave, Vol. 9904, International Society for Optics and Photonics (SPIE), 1243 – 1251. https://doi.org/10.1117/12.2234216
* Sarkar et al. (2017) Sarkar, S., Papageorgiou, A., Tsikrikonis, I. A., Vandenbussche, B., & Pascale, E. 2017, ARIEL Performance Analysis Report: ARIEL-CRDF-PL-AN-001
* Sarkar et al. (2020b) Sarkar, S., Pascale, E., Papageorgiou, A., Johnson, L. J., & Waldmann, I. 2020b, arXiv e-prints, arXiv:2002.03739
* Savini et al. (2016) Savini, G., Tessenyi, M., Tinetti, G., et al. 2016, in Preprint (SPIE astronomical telescopes + instrumentation 2016 paper 9904—175)
* Simon et al. (2015) Simon, A. E., Szabó, G. M., Kiss, L. L., Fortier, A., & Benz, W. 2015, PASP, 127, 1084
* Sing et al. (2016) Sing, D. K., Fortney, J. J., Nikolov, N., et al. 2016, Nature, 529, 59
* Skaf et al. (2020) Skaf, N., Fabienne Bieger, M., Edwards, B., et al. 2020, arXiv e-prints, arXiv:2005.09615
* Spake et al. (2020) Spake, J. J., Sing, D. K., Wakeford, H. R., et al. 2020, arXiv e-prints, arXiv:1911.08859
* Spiegel et al. (2009) Spiegel, D. S., Silverio, K., & Burrows, A. 2009, ApJ, 699, 1487
* Sreejith et al. (2019) Sreejith, A. G., Fossati, L., Fleming, B. T., et al. 2019, Journal of Astronomical Telescopes, Instruments, and Systems, 5, 018004
* Stevenson (2020) Stevenson, K. B. 2020, ApJ, 898, L35
* Takir & Emery (2012) Takir, D., & Emery, J. 2012, Icarus, 219, 641
* Takir et al. (2013) Takir, D., Emery, J., McSween, H., et al. 2013, Meteoritics and Planetary Science, 48, 1618
* Taylor et al. (2020) Taylor, J., Parmentier, V., Irwin, P. G. J., et al. 2020, MNRAS, 493, 4342
* Teachey & Kipping (2018) Teachey, A., & Kipping, D. M. 2018, Science Advances, 4, eaav1784
* Thomas et al. (2016) Thomas, C., Abell, P., Castillo-Rogez, J., et al. 2016, PASP, 128, 018002
* Thuillot et al. (2015) Thuillot, W., Bancelin, D., Ivantsov, A., et al. 2015, A&A, 583, A59
* Tinetti et al. (2012) Tinetti, G., Beaulieu, J. P., Henning, T., et al. 2012, Experimental Astronomy, 34, 311
* Tinetti et al. (2018) Tinetti, G., Drossart, P., Eccleston, P., et al. 2018, Experimental Astronomy, in press. https://doi.org/10.1007/s10686-018-9598-x
* Trilling et al. (2007) Trilling, D., Bhattacharya, B., Blaylock, M., et al. 2007, in Bulletin of the American Astronomical Society, Vol. 39, AAS/Division for Planetary Sciences Meeting Abstracts #39, 484
* Trilling et al. (2010) Trilling, D., Mueller, M., Hora, J., et al. 2010, arXiv e-prints, arXiv:1007.1009
* Tsiaras et al. (2016a) Tsiaras, A., Waldmann, I., Rocchetto, M., et al. 2016a, ascl:1612.018
* Tsiaras et al. (2016b) Tsiaras, A., Waldmann, I. P., Rocchetto, M., et al. 2016b, ApJ, 832, 202
* Tsiaras et al. (2018) Tsiaras, A., Waldmann, I. P., Zingales, T., et al. 2018, AJ, 155, 156
* Tsumura et al. (2010) Tsumura, K., Battle, J., Bock, J., et al. 2010, ApJ, 719, 394
* Tucker et al. (2018) Tucker, G. S., Nagler, P., Butler, N., et al. 2018, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10702, Ground-based and Airborne Instrumentation for Astronomy VII, ed. C. J. Evans, L. Simard, & H. Takami, 107025G
* Varley et al. (2017) Varley, R., Tsiaras, A., & Karpouzas, K. 2017, ApJS, 231, 13
* Venot et al. (2012) Venot, O., Hébrard, E., Agúndez, M., et al. 2012, Astronomy & Astrophysics, 546, A43. http://dx.doi.org/10.1051/0004-6361/201219310
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261
* von Essen et al. (2020) von Essen, C., Mallonn, M., Hermansen, S., et al. 2020, arXiv e-prints, arXiv:2003.06424
* Wakeford et al. (2017) Wakeford, H. R., Sing, D. K., Kataria, T., et al. 2017, Science, 356, 628
* Waldmann et al. (2015a) Waldmann, I. P., Rocchetto, M., Tinetti, G., et al. 2015a, ApJ, 813, 13
* Waldmann et al. (2015b) Waldmann, I. P., Tinetti, G., Rocchetto, M., et al. 2015b, ApJ, 802, 107
* Welbanks et al. (2019) Welbanks, L., Madhusudhan, N., Allard, N. F., et al. 2019, ApJ, 887, L20
* Wende et al. (2010) Wende, S., Reiners, A., Seifahrt, A., & Bernath, P. F. 2010, A&A, 523, A58
* Werner et al. (2004) Werner, M. W., Roellig, T. L., Low, F. J., et al. 2004, ApJS, 154, 1
* Wright et al. (2010) Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868
* Yip et al. (2020) Yip, K. H., Changeat, Q., Nikolaou, N., et al. 2020, arXiv e-prints, arXiv:2011.11284
* Yip et al. (2019) Yip, K. H., Waldmann, I. P., Tsiaras, A., & Tinetti, G. 2019, arXiv e-prints, arXiv:1811.04686
* Yu et al. (2017) Yu, L.-L., Ji, J., & Ip, W.-H. 2017, Research in Astronomy and Astrophysics, 17, 070
* Yu et al. (2014) Yu, Y., Richardson, D. C., Michel, P., Schwartz, S. R., & Ballouz, R.-L. 2014, Icarus, 242, 82
* Yurchenko et al. (2017) Yurchenko, S. N., Amundsen, D. S., Tennyson, J., & Waldmann, I. P. 2017, A&A, 605, A95
* Yurchenko et al. (2011) Yurchenko, S. N., Barber, R. J., & Tennyson, J. 2011, MNRAS, 413, 1828
* Yurchenko & Tennyson (2012) Yurchenko, S. N., & Tennyson, J. 2012, in EAS Publications Series, Vol. 58, EAS Publications Series, ed. C. Stehlé, C. Joblin, & L. d’Hendecourt, 243–248
* Zeng et al. (2019) Zeng, L., Jacobsen, S. B., Sasselov, D. D., et al. 2019, Proceedings of the National Academy of Science, 116, 9723
* Zingales & Waldmann (2018) Zingales, T., & Waldmann, I. P. 2018, AJ, 156, 268
# Axion hot dark matter bound, reliably
Luca Di Luzio <EMAIL_ADDRESS> DESY, Notkestraße 85, D-22607 Hamburg, Germany
Guido Martinelli <EMAIL_ADDRESS> Physics Department and INFN Sezione di Roma La Sapienza, Piazzale Aldo Moro 5, 00185 Roma, Italy
Gioacchino Piazza <EMAIL_ADDRESS> IJCLab, Pôle Théorie (Bât. 210), CNRS/IN2P3 et Université Paris-Saclay, 91405 Orsay, France
###### Abstract
We show that the commonly adopted hot dark matter (HDM) bound on the axion
mass $m_{a}\lesssim$ 1 eV is not reliable, since it is obtained by
extrapolating the chiral expansion in a region where the effective field
theory breaks down. This is explicitly shown via the calculation of the axion-
pion thermalization rate at the next-to-leading order in chiral perturbation
theory. We finally advocate a strategy for a sound extraction of the axion HDM
bound via lattice QCD techniques.
Introduction. The axion originally emerged as a low-energy remnant of the
Peccei Quinn solution to the strong CP problem Peccei and Quinn (1977a, b);
Wilczek (1978); Weinberg (1978), but it also unavoidably contributes to the
energy density of the Universe. There are two qualitatively different
populations of relic axions, a non-thermal one comprising cold dark matter
(DM) Preskill _et al._ (1983); Abbott and Sikivie (1983); Dine and Fischler
(1983); Davis (1986), and a thermal axion population Turner (1987) which,
while still relativistic, would behave as extra dark radiation. Such hot dark
matter (HDM) component contributes to the effective number of extra
relativistic degrees of freedom Kolb and Turner (1990) $\Delta N_{\rm
eff}\simeq 4/7\left(43/[4g_{S}(T_{D})]\right)^{4/3}$, with $g_{S}(T_{D})$ the
number of entropy degrees of freedom at the axion decoupling temperature,
$T_{D}$. The value of $\Delta N_{\rm eff}$ is constrained by cosmic microwave
background (CMB) experiments, such as the Planck satellite Aghanim _et al._
(2020a, b), while planned CMB Stage 4 (CMB-S4) experiments Abazajian _et al._
(2016) will provide an observable window on the axion interactions.
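As a numerical orientation, the formula above can be evaluated directly (a minimal sketch; the two $g_{S}$ benchmarks, 10.75 at neutrino decoupling and 106.75 for the full SM, are standard illustrative inputs, not taken from this Letter):

```python
def delta_neff(g_s):
    """Delta N_eff = (4/7) * (43 / (4 * g_S(T_D)))^(4/3), from the text above."""
    return 4.0 / 7.0 * (43.0 / (4.0 * g_s)) ** (4.0 / 3.0)

# g_S = 10.75 (axion decoupling together with neutrinos) reproduces the familiar 4/7:
print(delta_neff(10.75))    # 0.5714...
# g_S = 106.75 (all SM degrees of freedom, very early decoupling):
print(delta_neff(106.75))   # ~0.027
```

The steep $g_{S}^{-4/3}$ dependence is why late decoupling (low $T_{D}$, small $g_{S}$) gives the largest, and hence most constrainable, dark-radiation contribution.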
There are several processes that can keep the axion in thermal equilibrium
with the Standard Model (SM) thermal bath. From the standpoint of the axion
solution to the strong CP problem, an unavoidable process arises from the
model-independent coupling to gluons,
$\frac{\alpha_{s}}{8\pi}\frac{a}{f_{a}}G\tilde{G}$.$^{1}$ Other thermalization
channels arise from model-dependent axion couplings to photons Turner (1987),
SM quarks Salvio _et al._ (2014); Baumann _et al._ (2016); Ferreira and
Notari (2018); Arias-Aragon _et al._ (2020) and leptons D’Eramo _et al._
(2018). For $T_{D}\gtrsim 1$ GeV thermal axion production proceeds via its
scatterings with gluons in the quark-gluon plasma Masso _et al._ (2002); Graf
and Steffen (2011), while for $T_{D}\lesssim 1$ GeV processes involving pions
and nucleons must be considered Berezhiani _et al._ (1992); Chang and Choi
(1993); Hannestad _et al._ (2005). The latter, have the advantage of
occurring very late in the thermal history, so that it is unlikely that the
corresponding population of thermal axions could be diluted by inflation. The
transition between the two regimes depends on the strength of the axion
interactions set by $f_{a}$ or, equivalently, by $m_{a}\simeq
5.7\times(10^{6}\ \text{GeV}/f_{a})$ eV, and it encompasses the range
$m_{a}\in[0.01,0.1]$ eV (with heavier axions leading to lower decoupling
temperatures). Although the transition region cannot be precisely determined
due to the complications of the quark-hadron phase transition, for heavier
axions approaching the eV scale the main thermalization channel is
$a\pi\leftrightarrow\pi\pi$ Chang and Choi (1993); Hannestad _et al._ (2005),
with $T_{D}\lesssim 200$ MeV. In this regime, scatterings off nucleons are
subdominant because of the exponential suppression in their number density.
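The quoted relation $m_{a}\simeq 5.7\times(10^{6}\ \text{GeV}/f_{a})$ eV can be inverted to see which decay constants the transition window corresponds to (a minimal sketch of the conversion):

```python
def f_a_in_gev(m_a_ev):
    """Invert m_a [eV] ~= 5.7e6 / f_a [GeV], the LO relation quoted in the text."""
    return 5.7e6 / m_a_ev

print(f_a_in_gev(1.0))    # ~5.7e6 GeV for an eV-scale axion
print(f_a_in_gev(0.01))   # ~5.7e8 GeV at the light end of the transition window
```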
The highest attainable axion mass from cosmological constraints on extra
relativistic degrees of freedom, also known as the HDM bound, translates into
$m_{a}\lesssim$ 1 eV Zyla _et al._ (2020). Based on a leading-order (LO)
axion-pion chiral effective field theory (EFT) analysis of the axion-pion
thermalization rate Chang and Choi (1993); Hannestad _et al._ (2005), the
axion HDM bound has been reconsidered in Refs. Melchiorri _et al._ (2007);
Hannestad _et al._ (2008, 2010); Archidiacono _et al._ (2013); Giusarma _et
al._ (2014); Di Valentino _et al._ (2015, 2016); Archidiacono _et al._
(2015); Giarè _et al._ (2020), also in correlation with relic neutrinos. The
most recent update Giarè _et al._ (2020) quotes a 95$\%$ CL bound that ranges
from $m_{a}\lesssim$ 0.2 eV to 1 eV, depending on the data set used and the
assumed cosmological model. Although the axion mass range relevant for the HDM
bound is in generic tension with astrophysical constraints, the latter can be
tamed in several respects.$^{2}$ Tree-level axion couplings to electrons are
absent in KSVZ models Kim (1979); Shifman _et al._ (1980), thus relaxing the
constraints from Red Giants and White Dwarfs. The axion coupling to photons,
constrained by Horizontal Branch stars evolution, can be accidentally
suppressed in certain KSVZ-like models Kaplan (1985); Di Luzio _et al._
(2017a, b). Finally, the SN1987A bound on the axion coupling to nucleons can
be considered less robust both from the astrophysical and experimental point
of view Raffelt (1990); Chang _et al._ (2018); Carenza _et al._ (2019); Bar
_et al._ (2020).
It is the purpose of this Letter to revisit the axion HDM bound in the context
of the next-to-LO (NLO) axion-pion chiral EFT. This is motivated by the simple
observation that the mean energy of pions (axions) in a heat bath of $T\simeq
100$ MeV is $\left\langle E\right\rangle\equiv\rho/n\simeq 350$ MeV ($270$
MeV), thus questioning the validity of the chiral expansion for the scattering
process $a\pi\leftrightarrow\pi\pi$. The latter is expected to fail for
$\sqrt{s}\sim\left\langle E_{\pi}\right\rangle+\left\langle
E_{a}\right\rangle\gtrsim 500$ MeV, corresponding to temperatures well below
that of QCD deconfinement, $T_{c}=154\pm 9$ MeV Bazavov _et al._ (2012).
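The quoted mean energies follow from the Bose-Einstein distribution at zero chemical potential; a quick numerical check (a sketch; the pion mass of 138 MeV and the simple midpoint integration are illustrative choices, not from this Letter):

```python
import numpy as np

def mean_energy(m, T, n=400000, pmax_over_T=50.0):
    """<E> = rho/n for an ideal Bose gas of mass m at temperature T (mu = 0),
    via midpoint integration of the phase-space integrals in the momentum p."""
    p = (np.arange(n) + 0.5) * (pmax_over_T * T / n)   # midpoint momentum grid
    E = np.sqrt(p * p + m * m)
    w = p * p / np.expm1(E / T)     # p^2 f_BE(E); constant (2 pi)^3 factors cancel in the ratio
    return (E * w).sum() / w.sum()

print(mean_energy(138.0, 100.0))  # pions at T = 100 MeV: ~350 MeV
print(mean_energy(0.0, 100.0))    # massless axions: ~270 MeV ( = 3 zeta(4)/zeta(3) * T )
```

Both numbers sit well above $m_{\pi}$, which is the quantitative content of the concern about the chiral expansion raised above.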
In this work, we provide for the first time the formulation of the full axion-
pion Lagrangian at NLO, including also derivative axion couplings to the
pionic current (previous NLO studies only considered non-derivative axion-pion
interactions Spalinski (1988); Grilli di Cortona _et al._ (2016)), and paying
special attention to the issue of the axion-pion mixing. Next, we perform a
NLO calculation of the $a\pi\leftrightarrow\pi\pi$ thermalization rate (that
can be cast as an expansion in $T/f_{\pi}$, with $f_{\pi}\simeq 92$ MeV) and
show that the NLO correction saturates half of the LO contribution for
$T_{\chi}\simeq 62$ MeV. The latter can be considered as the maximal
temperature above which the chiral description breaks down for the process
under consideration. On the other hand, the region from $T_{\chi}$ up to
$T_{c}$, where chiral perturbation theory cannot be applied, turns out to be
crucial for the extraction of the HDM bound and for assessing the sensitivity
of future CMB experiments.
We conclude with a proposal for extracting the axion-pion thermalization rate
via a direct Lattice QCD calculation, in analogy to the well-studied case of
$\pi$-$\pi$ scattering.
Axion-pion scattering at LO. The construction of the LO axion-pion Lagrangian
was discussed long ago in Refs. Di Vecchia and Veneziano (1980); Georgi _et
al._ (1986). We recall here its basic ingredients (see also Chang and Choi
(1993); Di Luzio _et al._ (2020)), in view of the extension at NLO. Defining
the pion Goldstone matrix $U=e^{i\pi^{A}\sigma^{A}/f_{\pi}}$, with
$f_{\pi}\simeq 92$ MeV, $\pi^{A}$ and $\sigma^{A}$ ($A=1,2,3$) denoting
respectively the real pion fields and the Pauli matrices, the LO axion-pion
interactions stem from
$\mathscr{L}^{\rm LO}_{a\text{-}\pi}=\frac{f_{\pi}^{2}}{4}{\rm
Tr}\left[U\chi^{\dagger}_{a}+\chi_{a}U^{\dagger}\right]+\frac{\partial^{\mu}a}{2f_{a}}{\rm
Tr}\left[c_{q}\sigma^{A}\right]J^{A}_{\mu}\,,$ (1)
where $\chi_{a}=2B_{0}M_{a}$, in terms of the quark condensate $B_{0}$ and the
‘axion-dressed’ quark mass matrix
$M_{a}=e^{i\frac{a}{2f_{a}}Q_{a}}M_{q}e^{i\frac{a}{2f_{a}}Q_{a}}$, with
$M_{q}=\mbox{diag}\,(m_{u},m_{d})$ and $\mbox{Tr}\,Q_{a}=1$. The latter
condition ensures that the axion field is transferred from the operator
$\frac{\alpha_{s}}{8\pi}\frac{a}{f_{a}}G\tilde{G}$ to the phase of the quark
mass matrix, via the quark axial field redefinition
$q\to\exp(i\gamma_{5}\frac{a}{2f_{a}}Q_{a})q$. In the following, we set
$Q_{a}=M_{q}^{-1}/\mbox{Tr}\,M_{q}^{-1}$, so that terms linear in $a$
(including $a$-$\pi^{0}$ mass mixing) drop out from the first term in Eq. (1).
Hence, in this basis, the only linear axion interaction is the derivative one
with the conserved ${\rm SU}(2)_{A}$ pion current. The latter reads at LO
$J^{A}_{\mu}|^{\rm LO}=\frac{i}{4}f_{\pi}^{2}{\rm
Tr}\left[\sigma^{A}\left(U\partial_{\mu}U^{\dagger}-U^{\dagger}\partial_{\mu}U\right)\right]\,,$
(2)
while the derivative axion coupling in Eq. (1) is
$\mbox{Tr}\,\left[c_{q}\sigma^{A}\right]=(\frac{m_{u}-m_{d}}{m_{u}+m_{d}}+c^{0}_{u}-c^{0}_{d})\delta^{A3}$,
where the first term arises from the axial quark rotation that removed the
$aG\tilde{G}$ operator and the second one originates from the model-dependent
coefficient $c^{0}_{q}=\text{diag}(c^{0}_{u},c^{0}_{d})$, defined via the
Lagrangian term
$\frac{\partial^{\mu}a}{2f_{a}}\overline{q}c^{0}_{q}\gamma_{\mu}\gamma_{5}q$.
For instance, $c^{0}_{u,d}=0$ in the KSVZ model Kim (1979); Shifman _et al._
(1980), while $c^{0}_{u}=\frac{1}{3}\cos^{2}\beta$ and
$c^{0}_{d}=\frac{1}{3}\sin^{2}\beta$ in the DFSZ model Zhitnitsky (1980); Dine
_et al._ (1981), with $\tan\beta$ the ratio between the vacuum expectation
values of two Higgs doublets. Expanding the pion matrix in Eq. (1) one obtains
$\mathscr{L}^{\rm
LO}_{a\text{-}\pi}\supset\epsilon\,\partial^{\mu}a\partial_{\mu}\pi_{0}+\frac{C_{a\pi}}{f_{a}f_{\pi}}\partial^{\mu}a[\partial\pi\pi\pi]_{\mu}\,,$
(3)
with the definitions
$[\partial\pi\pi\pi]_{\mu}=2\partial_{\mu}\pi_{0}\pi_{+}\pi_{-}-\pi_{0}\partial_{\mu}\pi_{+}\pi_{-}-\pi_{0}\pi_{+}\partial_{\mu}\pi_{-}$,
$\epsilon=-\frac{3f_{\pi}C_{a\pi}}{2f_{a}}$ and
$C_{a\pi}=\frac{1}{3}\left(\frac{m_{d}-m_{u}}{m_{u}+m_{d}}+c_{d}^{0}-c_{u}^{0}\right)\,.$
(4)
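For orientation, Eq. (4) can be evaluated for the two benchmark models introduced above (a sketch; the quark-mass ratio $m_{u}/m_{d}\simeq 0.47$ and the DFSZ value $\tan\beta=10$ are illustrative inputs, not taken from this Letter):

```python
import math

z = 0.47  # m_u / m_d, illustrative input

def c_api(c0_u, c0_d):
    """Eq. (4): C_api = (1/3) * [ (m_d - m_u)/(m_u + m_d) + c0_d - c0_u ]."""
    return ((1.0 - z) / (1.0 + z) + c0_d - c0_u) / 3.0

print(c_api(0.0, 0.0))  # KSVZ (c0_u = c0_d = 0): ~0.12
beta = math.atan(10.0)  # DFSZ benchmark with tan(beta) = 10
print(c_api(math.cos(beta) ** 2 / 3.0, math.sin(beta) ** 2 / 3.0))  # ~0.23
```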
At the LO in $\epsilon$ the diagonalization of the $a$-$\pi^{0}$ term is
obtained by shifting $a\to a-\epsilon\pi^{0}$ and
$\pi^{0}\to\pi^{0}+\mathcal{O}(\epsilon^{3})a$, where we used the fact that
$m_{a}/m_{\pi}=\mathcal{O}(\epsilon)$. Hence, as long as we are interested in
effects that are linear in $a$ and neglect $\mathcal{O}(\epsilon^{3})$
corrections, the axion-pion interactions in Eq. (3) are already in the basis
with canonical propagators.
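The effect of the shift on the quadratic terms can be checked explicitly (a sketch keeping only the kinetic pieces, with the sign conventions of Eq. (3)):

```latex
\frac{1}{2}(\partial a)^{2}+\frac{1}{2}(\partial\pi^{0})^{2}
+\epsilon\,\partial^{\mu}a\,\partial_{\mu}\pi^{0}
\;\xrightarrow{\;a\,\to\,a-\epsilon\pi^{0}\;}\;
\frac{1}{2}(\partial a)^{2}+\frac{1}{2}\left(1-\epsilon^{2}\right)(\partial\pi^{0})^{2}\,,
```

so the mixing term cancels and the $\pi^{0}$ kinetic term is rescaled only at $\mathcal{O}(\epsilon^{2})$, consistent with the statement that the fields are canonical up to the neglected orders.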
For temperatures below the QCD phase transition, the main processes relevant
for the axion thermalization rate are
$a(p_{1})\pi_{0}(p_{2})\rightarrow\pi_{+}(p_{3})\pi_{-}(p_{4})$, whose
amplitude at LO reads
$\mathcal{M}^{\rm
LO}_{a\pi_{0}\rightarrow\pi_{+}\pi_{-}}=\frac{C_{a\pi}}{f_{\pi}f_{a}}\frac{3}{2}\left[m_{\pi}^{2}-s\right]\,,$
(5)
with $s=(p_{1}+p_{2})^{2}$, together with the crossed channels
$a\pi_{-}\rightarrow\pi_{0}\pi_{-}$ and $a\pi_{+}\rightarrow\pi_{+}\pi_{0}$.
The amplitudes of the latter are obtained by replacing $s\to
t=(p_{1}-p_{3})^{2}$ and $s\to u=(p_{1}-p_{4})^{2}$, respectively. Taking
equal masses for the neutral and charged pions, one finds the squared matrix
element (summed over the three channels above) Hannestad _et al._ (2005)
$\sum|\mathcal{M}|_{\rm
LO}^{2}=\left(\frac{C_{a\pi}}{f_{a}f_{\pi}}\right)^{2}\frac{9}{4}\left[s^{2}+t^{2}+u^{2}-3m_{\pi}^{4}\right]\,.$
(6)
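The step from Eq. (5) to Eq. (6) uses the massless-axion constraint $s+t+u=3m_{\pi}^{2}$. A quick numerical verification of this bookkeeping (with the common factor $(C_{a\pi}/f_{a}f_{\pi})^{2}$ stripped off; the numerical values of $m_{\pi}^{2}$ and the sampled Mandelstam ranges are illustrative):

```python
import random

def amp2(x, m2):
    """Per-channel |M_LO|^2 from Eq. (5), with (C_api/(f_a f_pi))^2 factored out."""
    return (1.5 * (m2 - x)) ** 2

m2 = 0.019  # m_pi^2 in GeV^2, illustrative
for _ in range(100):
    s = random.uniform(4 * m2, 1.0)
    t = random.uniform(-1.0, 0.0)
    u = 3 * m2 - s - t          # massless axion: s + t + u = 3 m_pi^2
    lhs = amp2(s, m2) + amp2(t, m2) + amp2(u, m2)
    rhs = 2.25 * (s ** 2 + t ** 2 + u ** 2 - 3 * m2 ** 2)
    assert abs(lhs - rhs) < 1e-12 * max(1.0, abs(rhs))
print("Eq. (6) reproduced from the three crossed channels")
```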
Axion-pion scattering at NLO. To compute the axion thermalization process
beyond LO we need to consider the one-loop amplitudes from the LO Lagrangian
in Eq. (1) as well as the tree-level amplitudes stemming from the NLO axion-
pion Lagrangian, both contributing to $\mathcal{O}(p^{4})$ in the chiral
expansion. The NLO interactions include the derivative coupling of the axion
to the NLO axial current, which has been computed here for the first time.
We stick to the expression of the NLO chiral Lagrangian given in Ref. Gasser
and Leutwyler (1984) (see for example Appendix D in Scherer (2003) for trace
notation), which, considering only two flavours, depends on $10$ low-energy
constants (LECs) $\ell_{1},\ell_{2},\dots,\ell_{7},h_{1},h_{2},h_{3}$. The
axion field has been included in the phase of the quark mass matrix, as
described after Eq. (1). Note that since we are interested in $2\to 2$
scattering processes, we can neglect the $\mathcal{O}(p^{4})$ Wess-Zumino-
Witten term Wess and Zumino (1971); Witten (1983) since it contains operators
with an odd number of bosons.
To compute the axial current $J^{A}_{\mu}$ at NLO, we promote the ordinary
derivative to a covariant one, defined as
$D_{\mu}U=\partial_{\mu}U-ir_{\mu}U+iUl_{\mu}$, with
$r_{\mu}=r_{\mu}^{A}\sigma^{A}/2$ and $l_{\mu}=l_{\mu}^{A}\sigma^{A}/2$
external fields which can be used to include electromagnetic or weak effects.
The left and right SU(2) currents are obtained by differentiating the NLO
Lagrangian with respect to $l_{\mu}^{A}$ and $r_{\mu}^{A}$, respectively.
Taking the $R-L$ combination and switching off the external fields, the NLO
axial current reads
$\displaystyle J^{A}_{\mu}|^{\rm NLO}=\frac{i}{2}\ell_{1}{\rm Tr}\left[\sigma^{A}\left\{\partial_{\mu}U^{\dagger},U\right\}\right]{\rm Tr}\left[\partial_{\nu}U\partial^{\nu}U^{\dagger}\right]$
$\displaystyle+\frac{i}{4}\ell_{2}{\rm Tr}\left[\sigma^{A}\left\{\partial^{\nu}U^{\dagger},U\right\}\right]{\rm Tr}\left[\partial_{\mu}U\partial_{\nu}U^{\dagger}+\partial_{\nu}U\partial_{\mu}U^{\dagger}\right]$
$\displaystyle-\frac{i}{8}\ell_{4}{\rm Tr}\big[\sigma^{A}\left\{\partial_{\mu}U,\chi^{\dagger}_{a}\right\}-\sigma^{A}\left\{U,\partial_{\mu}\chi^{\dagger}_{a}\right\}$
$\displaystyle\qquad+\sigma^{A}\left\{\partial_{\mu}\chi_{a},U^{\dagger}\right\}-\sigma^{A}\left\{\chi_{a},\partial_{\mu}U^{\dagger}\right\}\big]\,,$
(7)
where curly brackets indicate anti-commutators.
New $a\text{-}\pi_{0}$ mixings arise at NLO, both at tree level from the NLO
Lagrangian and at one loop from $\mathscr{L}^{\rm LO}_{a\text{-}\pi}$. These
mixings are explicitly taken into account in the Lehmann-Symanzik-Zimmermann
(LSZ) reduction formula Lehmann _et al._ (1955) (focussing e.g. on the
$a\pi_{0}\to\pi_{+}\pi_{-}$ channel)
$\displaystyle\mathcal{M}_{a\pi_{0}\to\pi_{+}\pi_{-}}=\frac{1}{\sqrt{Z_{a}Z_{\pi}^{3}}}\prod_{i=1}^{4}\lim_{p_{i}^{2}\to m_{i}^{2}}\left(p_{i}^{2}-m_{i}^{2}\right)\times G_{a\pi_{0}\pi_{+}\pi_{-}}(p_{1},p_{2},p_{3},p_{4})\,,$ (8)
where the index $i$ runs over the external particles, $Z_{a}$ ($Z_{\pi}$) is
the wave-function renormalization of the axion (pion) field and the full
4-point Green’s function is given by
$\displaystyle G_{a\pi_{0}\pi_{+}\pi_{-}}=\sum_{i,j=a,\pi_{0}}{\cal G}_{ij\pi_{+}\pi_{-}}\times G_{\pi_{+}\pi_{+}}(m^{2}_{\pi})G_{\pi_{-}\pi_{-}}(m^{2}_{\pi})G_{ai}(m^{2}_{a}=0)G_{\pi_{0}j}(m^{2}_{\pi})\,.$ (9)
Here ${\cal G}_{ij\pi_{+}\pi_{-}}$ is the amputated 4-point function, multiplied by the 2-point
functions of the external legs, with the axion mass set to zero. Working with LO
diagonal propagators, the 2-point amplitude for the $a\text{-}\pi_{0}$ system
reads $\mathcal{P}_{ij}=\mbox{diag}\,(p^{2},p^{2}-m^{2}_{\pi})-\Sigma_{ij}$,
where $\Sigma_{ij}$ encodes NLO corrections including mixings. The 2-point
Green’s function $G_{ij}=(-i\mathcal{P})^{-1}_{ij}$ is hence
$G_{ij}=i\begin{pmatrix}\frac{1}{p^{2}}&\frac{\Sigma_{a\pi}}{p^{2}\left(p^{2}-m_{\pi}^{2}-\Sigma_{\pi\pi}\right)}\\\
\frac{\Sigma_{a\pi}}{p^{2}\left(p^{2}-m_{\pi}^{2}-\Sigma_{\pi\pi}\right)}&\frac{1}{p^{2}-m_{\pi}^{2}-\Sigma_{\pi\pi}}\end{pmatrix}\,.$
(10)
Plugging Eqs. (9) and (10) into the LSZ formula for the scattering amplitude
and neglecting $\mathcal{O}(1/f_{a})^{2}$ terms, one finds (with $Z_{a}=1$,
$Z_{\pi}=1+\Sigma^{\prime}_{\pi\pi}(m^{2}_{\pi})$ and primes indicating
derivatives with respect to $p^{2}$)
$\displaystyle\mathcal{M}_{a\pi_{0}\to\pi_{+}\pi_{-}}=\left(1+\frac{3}{2}\Sigma^{\prime}_{\pi\pi}(m^{2}_{\pi})\right){\cal
G}_{a\pi_{0}\pi_{+}\pi_{-}}^{\rm LO}$
$\displaystyle-\frac{\Sigma_{a\pi}(m_{a}^{2}=0)}{m^{2}_{\pi}}{\cal
G}_{\pi_{0}\pi_{0}\pi_{+}\pi_{-}}^{\rm LO}+{\cal
G}_{a\pi_{0}\pi_{+}\pi_{-}}^{\rm NLO}\,,$ (11)
where the ${\cal G}$’s are evaluated at the physical masses of the external
particles. The one-loop amplitudes have been computed in dimensional
regularization. To carry out the renormalization procedure in the (modified)
$\overline{\text{MS}}$ scheme, we define the scale independent parameters
$\overline{\ell_{i}}$ as Gasser and Leutwyler (1984)
$\ell_{i}=\frac{\gamma_{i}}{32\pi^{2}}\left[\overline{\ell_{i}}+R+\ln\left(\frac{m_{\pi}^{2}}{\mu^{2}}\right)\right]\,,$
(12)
with $R=\frac{2}{d-4}-\log(4\pi)+\gamma_{E}-1$, in order to cancel the
divergent terms (in the limit $d=4$) with a suitable choice of the
$\gamma_{i}$. Eventually, only the terms proportional to $\ell_{1,2,7}$
contribute to the NLO amplitude, which is renormalized for $\gamma_{1}=1/3$,
$\gamma_{2}=2/3$ and $\gamma_{7}=0$. The latter coincide with the values
obtained in Ref. Gasser and Leutwyler (1984) for the standard chiral theory
without the axion.
The renormalized NLO amplitude for the $a\pi_{0}\rightarrow\pi_{+}\pi_{-}$
process (and its crossed channels) is given in the Supplementary Material. We have
also checked that the same analytical result is obtained via a direct NLO
diagonalization of the $a$ and $\pi^{0}$ propagators, without employing the
LSZ formalism with off-diagonal propagators. For consistency, we will only
consider the interference between the LO and NLO terms in the squared matrix
elements, $\sum|\mathcal{M}|^{2}\simeq\sum|\mathcal{M}|_{\rm LO}^{2}+\sum
2\mbox{Re}\,[\mathcal{M}_{\rm LO}\mathcal{M}^{*}_{\rm NLO}]$, since the NLO
squared correction is of the same order as the NNLO-LO interference, which we
neglect.
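The truncation just described can be made explicit with toy numbers (ours, for illustration only): the piece dropped from $|\mathcal{M}_{\rm LO}+\mathcal{M}_{\rm NLO}|^{2}$ is exactly $|\mathcal{M}_{\rm NLO}|^{2}$, which is of the same order as the neglected NNLO-LO interference.

```python
# Consistent NLO truncation of the squared matrix element:
# keep |M_LO|^2 and the LO-NLO interference, drop |M_NLO|^2.
# The numbers below are toy values for illustration only.
M_lo = 1.0 + 0.0j
M_nlo = 0.10 + 0.05j  # formally suppressed relative to M_lo

exact = abs(M_lo + M_nlo) ** 2
truncated = abs(M_lo) ** 2 + 2 * (M_lo * M_nlo.conjugate()).real
neglected = abs(M_nlo) ** 2  # same order as the NNLO-LO interference
```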
Breakdown of the chiral expansion at finite temperature. The crucial quantity
that is needed to extract the HDM bound is the axion decoupling temperature,
$T_{D}$, obtained via the freeze-out condition (following the same criterion
as in Hannestad _et al._ (2005))
$\Gamma_{a}(T_{D})=H(T_{D})\,.$ (13)
Here, $H(T)=\sqrt{4\pi^{3}g_{\star}(T)/45}\,T^{2}/m_{\rm pl}$ is the Hubble
rate (assuming a radiation dominated Universe) in terms of the Planck mass
$m_{\rm pl}=1.22\times 10^{19}$ GeV and the effective number of relativistic
degrees of freedom, $g_{\star}(T)$, while $\Gamma_{a}$ is the axion
thermalization rate entering the Boltzmann equation
$\displaystyle\Gamma_{a}$ $\displaystyle=\frac{1}{n_{a}^{\rm
eq}}\int\frac{d^{3}\mathbf{p}_{1}}{(2\pi)^{3}2E_{1}}\frac{d^{3}\mathbf{p}_{2}}{(2\pi)^{3}2E_{2}}\frac{d^{3}\mathbf{p}_{3}}{(2\pi)^{3}2E_{3}}\frac{d^{3}\mathbf{p}_{4}}{(2\pi)^{3}2E_{4}}$
$\displaystyle\times\sum|\mathcal{M}|^{2}(2\pi)^{4}\delta^{4}\left(p_{1}+p_{2}-p_{3}-p_{4}\right)$
$\displaystyle\times f_{1}f_{2}(1+f_{3})(1+f_{4})\,,$ (14)
where $n_{a}^{\rm eq}=(\zeta_{3}/\pi^{2})T^{3}$ and $f_{i}=1/(e^{E_{i}/T}-1)$.
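As a concrete illustration of the freeze-out condition in Eq. (13), the sketch below solves $\Gamma_{a}(T_{D})=H(T_{D})$ by bisection. The value of $g_{\star}$ and the toy $T^{5}$ rate (standing in for Eq. (15)) are our assumptions, chosen only to make the example self-contained.

```python
import math

M_PL = 1.22e19  # Planck mass in GeV, as quoted in the text


def hubble(T, g_star=10.75):
    """Hubble rate H(T) in GeV for a radiation-dominated Universe.

    T is in GeV; g_star = 10.75 is an assumed value appropriate for
    temperatures around 100 MeV (illustrative, not from the paper)."""
    return math.sqrt(4 * math.pi**3 * g_star / 45) * T**2 / M_PL


def decoupling_temperature(rate, T_lo=1e-3, T_hi=1.0, tol=1e-9):
    """Solve rate(T_D) = H(T_D) (Eq. (13)) by bisection.

    Assumes rate(T)/H(T) grows monotonically with T, so the bracket
    [T_lo, T_hi] contains a single crossing."""
    f = lambda T: rate(T) - hubble(T)
    assert f(T_lo) < 0 < f(T_hi), "bracket does not contain the crossing"
    while T_hi - T_lo > tol:
        T_mid = 0.5 * (T_lo + T_hi)
        if f(T_mid) < 0:
            T_lo = T_mid
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)


# Toy thermalization rate Gamma_a = A * T^5 (placeholder for Eq. (15));
# the prefactor A is purely illustrative.
A = 1e-10  # GeV^-4
T_D = decoupling_temperature(lambda T: A * T**5)
```

For $\Gamma_{a}=A\,T^{5}$ and $H=B\,T^{2}$ the crossing sits at $T_{D}=(B/A)^{1/3}$, which provides an analytic cross-check of the numerical root.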
In the following, we will set the model-dependent axion couplings
$c^{0}_{u,\,d}=0$ (cf. Eq. (4)), to comply with the standard setup considered
in the literature Chang and Choi (1993); Hannestad _et al._ (2005);
Melchiorri _et al._ (2007); Hannestad _et al._ (2008, 2010); Archidiacono
_et al._ (2013); Giusarma _et al._ (2014); Di Valentino _et al._ (2015,
2016); Archidiacono _et al._ (2015); Giarè _et al._ (2020) (see Ferreira
_et al._ (2020) for an exception). Moreover, we will neglect thermal
corrections to the scattering matrix element, since those are small for
$T\lesssim m_{\pi}$ Gasser and Leutwyler (1987a, b); Gerber and Leutwyler
(1989).
Figure 1: Numerical profile of the $h_{\rm LO}$ and $h_{\rm NLO}$ functions
entering the axion-pion thermalization rate in Eq. (15).
By integrating numerically the phase space in Eq. (14) and neglecting third
order terms in the isospin breaking, we find (see Supplementary Material for a
useful intermediate analytical step, or Hannestad and Madsen (1995) for a
slightly different approach)
$\displaystyle\Gamma_{a}(T)$
$\displaystyle=\left(\frac{C_{a\pi}}{f_{a}f_{\pi}}\right)^{2}0.212\
T^{5}\Big{[}h_{\rm LO}(m_{\pi}/T)$
$\displaystyle-2.92\frac{T^{2}}{f_{\pi}^{2}}\ h_{\rm
NLO}(m_{\pi}/T)\Big{]}\,,$ (15)
where for the numerical evaluation we used the central values of the LECs
$\overline{\ell_{1}}=-0.36(59)$ Colangelo _et al._ (2001),
$\overline{\ell_{2}}=4.31(11)$ Colangelo _et al._ (2001),
$\overline{\ell_{3}}=3.53(26)$ Aoki _et al._ (2020),
$\overline{\ell_{4}}=4.73(10)$ Aoki _et al._ (2020) and $\ell_{7}=7(4)\times
10^{-3}$ Grilli di Cortona _et al._ (2016), $m_{u}/m_{d}=0.50(2)$ Aoki _et
al._ (2020), $f_{\pi}=92.1(8)$ MeV Zyla _et al._ (2020) and $m_{\pi}=137$ MeV
(corresponding to the average neutral/charged pion mass). The $h$-functions
are normalized to $h_{\rm LO}(0)=h_{\rm NLO}(0)=1$ and plotted in Fig. 1.
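A sketch of how Eq. (15) translates into the NLO/LO ratio of Fig. 2. The $h$-functions are numerical profiles we do not reproduce, so they are replaced here by their $m_{\pi}/T\to 0$ normalization $h=1$; with the actual profiles the paper quotes $T_{\chi}\simeq 62$ MeV, so the value below is only indicative.

```python
import math

F_PI = 92.1e-3  # pion decay constant in GeV (central value used in the text)


def gamma_a(T, C_over_f=1.0, h_lo=1.0, h_nlo=1.0):
    """Axion thermalization rate of Eq. (15), in GeV.

    C_over_f stands for C_{a pi} / (f_a f_pi) in GeV^-1 (set to 1 here);
    h_lo and h_nlo are placeholders for the numerical h-functions of
    Fig. 1, fixed to their h(0) = 1 normalization."""
    return C_over_f**2 * 0.212 * T**5 * (h_lo - 2.92 * T**2 / F_PI**2 * h_nlo)


def nlo_to_lo_ratio(T, h_lo=1.0, h_nlo=1.0):
    """|Gamma_NLO / Gamma_LO|: independent of f_a, cf. Fig. 2."""
    return 2.92 * T**2 / F_PI**2 * h_nlo / h_lo


# Temperature where the NLO correction reaches 50% of the LO rate in the
# crude h = 1 approximation (about 38 MeV; the actual profiles give ~62 MeV):
T_half = F_PI * math.sqrt(0.5 / 2.92)
```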
We have checked that $h_{\rm LO}$ reproduces the result of Ref. Hannestad _et
al._ (2005) within percent accuracy. It should be noted that Eq. (15) is
meaningful only for $m_{\pi}/T\gtrsim 1$, since at temperatures above $T_{c}$
pions are deconfined and the axion thermalization rate should be computed from
the interactions with a quark-gluon plasma. Nevertheless, we are interested in
extrapolating the behaviour of Eq. (15) from the low-temperature regime, where
the chiral approach is reliable.
In Fig. 2 we compare the LO and NLO rates contributing to
$\Gamma_{a}=\Gamma_{a}^{\rm LO}+\Gamma_{a}^{\rm NLO}$. In particular, the
$|\Gamma_{a}^{\rm NLO}/\Gamma_{a}^{\rm LO}|$ ratio does not depend on $f_{a}$.
Requiring as a loose criterion that the NLO correction is less than $50\%$ of
the LO one, yields $T_{\chi}\simeq 62$ MeV as the maximal temperature at which
the chiral description of the thermalization rate can be reliably extended.
Figure 2: Ratio between the NLO and the LO axion-pion thermalization rate.
$T_{\chi}\simeq 62$ MeV corresponds to a NLO correction of $50\%$.
Fig. 3 shows instead the extraction of the decoupling temperature (defined via
Eq. (13)) for two reference values of the axion mass (setting the strength of
the axion coupling via $f_{a}$), namely $m_{a}=1$ eV and 0.1 eV. Assuming a
standard analysis employing the LO axion thermalization rate Hannestad _et
al._ (2005), the former benchmark (1 eV) corresponds to the most conservative
HDM bound Giarè _et al._ (2020), while the latter (0.1 eV) saturates the most
stringent one Giarè _et al._ (2020) and also represents the typical reach of
future CMB-S4 experiments Abazajian _et al._ (2016). However, from Fig. 3 we
see that $T_{D}^{\rm LO}\simeq 59$ MeV for $m_{a}=1$ eV and $T_{D}^{\rm
LO}\sim 200$ MeV for $m_{a}=0.1$ eV. Hence, while in the former case the
decoupling temperature is at the boundary of validity of the chiral expansion,
set by $T_{\chi}\simeq 62$ MeV, in the latter case it is well above it. In particular,
the region where the chiral expansion fails, $T_{D}\gtrsim T_{\chi}$,
corresponds to $m_{a}\lesssim 1.2$ eV.
We hence conclude that in the mass range of interest, $m_{a}\in[0.1,1]$ eV,
the decoupling temperature and consequently the axion HDM bound cannot be
reliably extracted within the chiral approach. Note, also, that in the
presence of model-dependent axion couplings $c^{0}_{u,d}\gg 1$ (as in some
axion models Darmé _et al._ (2020)), the same decoupling temperature as in
the $c^{0}_{u,d}=0$ case is obtained for larger $f_{a}$, thus shifting down
the mass window relevant for the axion HDM bound.
Figure 3: Axion-pion thermalization rate vs. Hubble rate for two reference
values of the axion mass, $m_{a}=1$ eV and 0.1 eV. The full $\Gamma_{a}$ has
been stopped at $T\simeq 85$ MeV, for which $|\Gamma^{\rm NLO}_{a}/\Gamma^{\rm
LO}_{a}|=90\%$.
Towards a reliable axion HDM bound. The failure of the chiral approach in the
calculation of the axion-pion thermalization rate can be traced back to the
fact that in a thermal bath with temperatures of the order of $T\simeq 100$
MeV the mean energy of pions is $\left\langle E_{\pi}\right\rangle\simeq 350$
MeV, so that $\pi$-$\pi$ scatterings happen at center of mass energies above
the validity of the 2-flavour chiral EFT. The latter can be related to the
scale of tree-level unitarity violation of $\pi$-$\pi$ scattering resulting in
$\sqrt{s}\lesssim\sqrt{8\pi}f_{\pi}\simeq 460$ MeV Weinberg (1966); Aydemir
_et al._ (2012). A possible strategy to extend the theoretical predictions at
higher energies is to compute the relevant $a\pi\to\pi\pi$ amplitudes using
lattice QCD simulations. To this end one may employ the standard techniques
used to compute weak non-leptonic matrix elements Blum _et al._ (2015);
Abbott _et al._ (2020) and $\pi$-$\pi$ scattering amplitudes as a function of
the energy at finite volume Luscher (1991); Rummukainen and Gottlieb (1995);
Kim _et al._ (2005); Hansen and Sharpe (2012). Although this approach has
limitations with respect to the maximum attainable center of mass energy, we
believe that it can be used to compute the amplitudes up to values of
$\sqrt{s}\sim 600-900$ MeV or higher Briceno _et al._ (2017).
We conclude by stressing the importance of obtaining a reliable determination
of the axion-pion thermalization rate, not only in view of the extraction of a
notable bound in axion physics, but also in order to set definite targets for
future CMB probes of the axion-pion coupling, which could represent a
‘discovery channel’ for the axion.
###### Acknowledgements.
We thank Enrico Nardi and Maurizio Giannotti for helpful
discussions. The work of LDL is supported by the Marie Skłodowska-Curie
Individual Fellowship grant AXIONRUSH (GA 840791) and the Deutsche
Forschungsgemeinschaft under Germany’s Excellence Strategy - EXC 2121 Quantum
Universe - 390833306. The work of GP has received funding from the European
Union’s Horizon 2020 research and innovation programme under the Marie
Skłodowska-Curie grant agreement No. 860881.
## References
* Peccei and Quinn (1977a) R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38, 1440 (1977a).
* Peccei and Quinn (1977b) R. D. Peccei and H. R. Quinn, Phys. Rev. D16, 1791 (1977b).
* Wilczek (1978) F. Wilczek, Phys. Rev. Lett. 40, 279 (1978).
* Weinberg (1978) S. Weinberg, Phys. Rev. Lett. 40, 223 (1978).
* Preskill _et al._ (1983) J. Preskill, M. B. Wise, and F. Wilczek, Phys. Lett. 120B, 127 (1983).
* Abbott and Sikivie (1983) L. F. Abbott and P. Sikivie, Phys. Lett. 120B, 133 (1983).
* Dine and Fischler (1983) M. Dine and W. Fischler, Phys. Lett. 120B, 137 (1983).
* Davis (1986) R. L. Davis, Phys. Lett. B 180, 225 (1986).
* Turner (1987) M. S. Turner, Phys. Rev. Lett. 59, 2489 (1987), [Erratum: Phys.Rev.Lett. 60, 1101 (1988)].
* Kolb and Turner (1990) E. W. Kolb and M. S. Turner, _The Early Universe_ , Vol. 69 (1990).
* Aghanim _et al._ (2020a) N. Aghanim _et al._ (Planck), Astron. Astrophys. 641, A1 (2020a), arXiv:1807.06205 [astro-ph.CO] .
* Aghanim _et al._ (2020b) N. Aghanim _et al._ (Planck), Astron. Astrophys. 641, A6 (2020b), arXiv:1807.06209 [astro-ph.CO] .
* Abazajian _et al._ (2016) K. N. Abazajian _et al._ (CMB-S4), (2016), arXiv:1610.02743 [astro-ph.CO] .
* Salvio _et al._ (2014) A. Salvio, A. Strumia, and W. Xue, JCAP 01, 011 (2014), arXiv:1310.6982 [hep-ph] .
* Baumann _et al._ (2016) D. Baumann, D. Green, and B. Wallisch, Phys. Rev. Lett. 117, 171301 (2016), arXiv:1604.08614 [astro-ph.CO] .
* Ferreira and Notari (2018) R. Z. Ferreira and A. Notari, Phys. Rev. Lett. 120, 191301 (2018), arXiv:1801.06090 [hep-ph] .
* Arias-Aragon _et al._ (2020) F. Arias-Aragon, F. D’Eramo, R. Z. Ferreira, L. Merlo, and A. Notari, (2020), arXiv:2012.04736 [hep-ph] .
* D’Eramo _et al._ (2018) F. D’Eramo, R. Z. Ferreira, A. Notari, and J. L. Bernal, JCAP 11, 014 (2018), arXiv:1808.07430 [hep-ph] .
* Masso _et al._ (2002) E. Masso, F. Rota, and G. Zsembinszki, Phys. Rev. D 66, 023004 (2002), arXiv:hep-ph/0203221 .
* Graf and Steffen (2011) P. Graf and F. D. Steffen, Phys. Rev. D 83, 075011 (2011), arXiv:1008.4528 [hep-ph] .
* Berezhiani _et al._ (1992) Z. Berezhiani, A. Sakharov, and M. Khlopov, Sov. J. Nucl. Phys. 55, 1063 (1992).
* Chang and Choi (1993) S. Chang and K. Choi, Phys. Lett. B 316, 51 (1993), arXiv:hep-ph/9306216 .
* Hannestad _et al._ (2005) S. Hannestad, A. Mirizzi, and G. Raffelt, JCAP 07, 002 (2005), arXiv:hep-ph/0504059 .
* Zyla _et al._ (2020) P. Zyla _et al._ (Particle Data Group), PTEP 2020, 083C01 (2020).
* Melchiorri _et al._ (2007) A. Melchiorri, O. Mena, and A. Slosar, Phys. Rev. D 76, 041303 (2007), arXiv:0705.2695 [astro-ph] .
* Hannestad _et al._ (2008) S. Hannestad, A. Mirizzi, G. G. Raffelt, and Y. Y. Wong, JCAP 04, 019 (2008), arXiv:0803.1585 [astro-ph] .
* Hannestad _et al._ (2010) S. Hannestad, A. Mirizzi, G. G. Raffelt, and Y. Y. Wong, JCAP 08, 001 (2010), arXiv:1004.0695 [astro-ph.CO] .
* Archidiacono _et al._ (2013) M. Archidiacono, S. Hannestad, A. Mirizzi, G. Raffelt, and Y. Y. Wong, JCAP 10, 020 (2013), arXiv:1307.0615 [astro-ph.CO] .
* Giusarma _et al._ (2014) E. Giusarma, E. Di Valentino, M. Lattanzi, A. Melchiorri, and O. Mena, Phys. Rev. D 90, 043507 (2014), arXiv:1403.4852 [astro-ph.CO] .
* Di Valentino _et al._ (2015) E. Di Valentino, S. Gariazzo, E. Giusarma, and O. Mena, Phys. Rev. D 91, 123505 (2015), arXiv:1503.00911 [astro-ph.CO] .
* Di Valentino _et al._ (2016) E. Di Valentino, E. Giusarma, M. Lattanzi, O. Mena, A. Melchiorri, and J. Silk, Phys. Lett. B 752, 182 (2016), arXiv:1507.08665 [astro-ph.CO] .
* Archidiacono _et al._ (2015) M. Archidiacono, T. Basse, J. Hamann, S. Hannestad, G. Raffelt, and Y. Y. Wong, JCAP 05, 050 (2015), arXiv:1502.03325 [astro-ph.CO] .
* Giarè _et al._ (2020) W. Giarè, E. Di Valentino, A. Melchiorri, and O. Mena, (2020), arXiv:2011.14704 [astro-ph.CO] .
* Kim (1979) J. E. Kim, Phys. Rev. Lett. 43, 103 (1979).
* Shifman _et al._ (1980) M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, Nucl. Phys. B166, 493 (1980).
* Kaplan (1985) D. B. Kaplan, Nucl. Phys. B 260, 215 (1985).
* Di Luzio _et al._ (2017a) L. Di Luzio, F. Mescia, and E. Nardi, Phys. Rev. Lett. 118, 031801 (2017a), arXiv:1610.07593 [hep-ph] .
* Di Luzio _et al._ (2017b) L. Di Luzio, F. Mescia, and E. Nardi, Phys. Rev. D 96, 075003 (2017b), arXiv:1705.05370 [hep-ph] .
* Raffelt (1990) G. G. Raffelt, Phys. Rept. 198, 1 (1990).
* Chang _et al._ (2018) J. H. Chang, R. Essig, and S. D. McDermott, JHEP 09, 051 (2018), arXiv:1803.00993 [hep-ph] .
* Carenza _et al._ (2019) P. Carenza, T. Fischer, M. Giannotti, G. Guo, G. Martínez-Pinedo, and A. Mirizzi, JCAP 10, 016 (2019), [Erratum: JCAP 05, E01 (2020)], arXiv:1906.11844 [hep-ph] .
* Bar _et al._ (2020) N. Bar, K. Blum, and G. D’Amico, Phys. Rev. D 101, 123025 (2020), arXiv:1907.05020 [hep-ph] .
* Bazavov _et al._ (2012) A. Bazavov _et al._ , Phys. Rev. D 85, 054503 (2012), arXiv:1111.1710 [hep-lat] .
* Spalinski (1988) M. Spalinski, Z. Phys. C 41, 87 (1988).
* Grilli di Cortona _et al._ (2016) G. Grilli di Cortona, E. Hardy, J. Pardo Vega, and G. Villadoro, JHEP 01, 034 (2016), arXiv:1511.02867 [hep-ph] .
* Di Vecchia and Veneziano (1980) P. Di Vecchia and G. Veneziano, Nucl. Phys. B 171, 253 (1980).
* Georgi _et al._ (1986) H. Georgi, D. B. Kaplan, and L. Randall, Phys. Lett. B 169, 73 (1986).
* Di Luzio _et al._ (2020) L. Di Luzio, M. Giannotti, E. Nardi, and L. Visinelli, Phys. Rept. 870, 1 (2020), arXiv:2003.01100 [hep-ph] .
* Zhitnitsky (1980) A. R. Zhitnitsky, Sov. J. Nucl. Phys. 31, 260 (1980), [Yad. Fiz.31,497(1980)].
* Dine _et al._ (1981) M. Dine, W. Fischler, and M. Srednicki, Phys. Lett. B104, 199 (1981).
* Gasser and Leutwyler (1984) J. Gasser and H. Leutwyler, Annals Phys. 158, 142 (1984).
* Scherer (2003) S. Scherer, Adv. Nucl. Phys. 27, 277 (2003), arXiv:hep-ph/0210398 .
* Wess and Zumino (1971) J. Wess and B. Zumino, Phys. Lett. B 37, 95 (1971).
* Witten (1983) E. Witten, Nucl. Phys. B 223, 422 (1983).
* Lehmann _et al._ (1955) H. Lehmann, K. Symanzik, and W. Zimmermann, Nuovo Cim. 1, 205 (1955).
* Ferreira _et al._ (2020) R. Z. Ferreira, A. Notari, and F. Rompineve, (2020), arXiv:2012.06566 [hep-ph] .
* Gasser and Leutwyler (1987a) J. Gasser and H. Leutwyler, Phys. Lett. B 184, 83 (1987a).
* Gasser and Leutwyler (1987b) J. Gasser and H. Leutwyler, Phys. Lett. B 188, 477 (1987b).
* Gerber and Leutwyler (1989) P. Gerber and H. Leutwyler, Nucl. Phys. B 321, 387 (1989).
* Hannestad and Madsen (1995) S. Hannestad and J. Madsen, Phys. Rev. D 52, 1764 (1995), arXiv:astro-ph/9506015 .
* Colangelo _et al._ (2001) G. Colangelo, J. Gasser, and H. Leutwyler, Nucl. Phys. B 603, 125 (2001), arXiv:hep-ph/0103088 .
* Aoki _et al._ (2020) S. Aoki _et al._ (Flavour Lattice Averaging Group), Eur. Phys. J. C 80, 113 (2020), arXiv:1902.08191 [hep-lat] .
* Darmé _et al._ (2020) L. Darmé, L. Di Luzio, M. Giannotti, and E. Nardi, (2020), arXiv:2010.15846 [hep-ph] .
* Weinberg (1966) S. Weinberg, Phys. Rev. Lett. 17, 616 (1966).
* Aydemir _et al._ (2012) U. Aydemir, M. M. Anber, and J. F. Donoghue, Phys. Rev. D 86, 014025 (2012), arXiv:1203.5153 [hep-ph] .
* Blum _et al._ (2015) T. Blum _et al._ , Phys. Rev. D 91, 074502 (2015), arXiv:1502.00263 [hep-lat] .
* Abbott _et al._ (2020) R. Abbott _et al._ (RBC, UKQCD), Phys. Rev. D 102, 054509 (2020), arXiv:2004.09440 [hep-lat] .
* Luscher (1991) M. Luscher, Nucl. Phys. B 354, 531 (1991).
* Rummukainen and Gottlieb (1995) K. Rummukainen and S. A. Gottlieb, Nucl. Phys. B 450, 397 (1995), arXiv:hep-lat/9503028 .
* Kim _et al._ (2005) C. h. Kim, C. T. Sachrajda, and S. R. Sharpe, Nucl. Phys. B 727, 218 (2005), arXiv:hep-lat/0507006 .
* Hansen and Sharpe (2012) M. T. Hansen and S. R. Sharpe, Phys. Rev. D 86, 016007 (2012), arXiv:1204.0826 [hep-lat] .
* Briceno _et al._ (2017) R. A. Briceno, J. J. Dudek, R. G. Edwards, and D. J. Wilson, Phys. Rev. Lett. 118, 022002 (2017), arXiv:1607.05900 [hep-ph] .
* Alloul _et al._ (2014) A. Alloul, N. D. Christensen, C. Degrande, C. Duhr, and B. Fuks, Comput. Phys. Commun. 185, 2250 (2014), arXiv:1310.1921 [hep-ph] .
* Christensen and Duhr (2009) N. D. Christensen and C. Duhr, Comput. Phys. Commun. 180, 1614 (2009), arXiv:0806.4194 [hep-ph] .
* Hahn (2001) T. Hahn, Comput. Phys. Commun. 140, 418 (2001), arXiv:hep-ph/0012260 .
* Shtabovenko _et al._ (2020) V. Shtabovenko, R. Mertig, and F. Orellana, Comput. Phys. Commun. 256, 107478 (2020), arXiv:2001.04407 [hep-ph] .
* Shtabovenko _et al._ (2016) V. Shtabovenko, R. Mertig, and F. Orellana, Comput. Phys. Commun. 207, 432 (2016), arXiv:1601.01167 [hep-ph] .
* Mertig _et al._ (1991) R. Mertig, M. Bohm, and A. Denner, Comput. Phys. Commun. 64, 345 (1991).
* Patel (2015) H. H. Patel, Comput. Phys. Commun. 197, 276 (2015), arXiv:1503.01469 [hep-ph] .
## I Supplementary Material
The calculation of the amplitudes was carried out using the computational
tools FeynRules Alloul _et al._ (2014); Christensen and Duhr (2009), FeynArts
Hahn (2001), FeynCalc Shtabovenko _et al._ (2020, 2016); Mertig _et al._
(1991) and Package-X Patel (2015). The full analytical expression of the
renormalized NLO amplitude for the $a\pi_{0}\rightarrow\pi_{+}\pi_{-}$ process
reads
$\displaystyle\mathcal{M}^{\rm
NLO}_{a\pi_{0}\rightarrow\pi_{+}\pi_{-}}=\frac{C_{a\pi}}{192\pi^{2}f_{\pi}^{3}f_{a}}\Bigg{\\{}15m_{\pi}^{2}(u+t)-11u^{2}-8ut-11t^{2}-6\overline{\ell_{1}}\left(m_{\pi}^{2}-s\right)\left(2m_{\pi}^{2}-s\right)$
$\displaystyle-6\overline{\ell_{2}}\left(-3m_{\pi}^{2}(u+t)+4m_{\pi}^{4}+u^{2}+t^{2}\right)+9m_{\pi}^{4}\overline{\ell_{3}}+18\overline{\ell_{4}}m_{\pi}^{2}(m_{\pi}^{2}-s)+576\pi^{2}\ell_{7}m_{\pi}^{4}\left(\frac{m_{d}-m_{u}}{m_{d}+m_{u}}\right)^{2}$
$\displaystyle+3\left[3\sqrt{1-\frac{4m_{\pi}^{2}}{s}}s\left(m_{\pi}^{2}-s\right)\ln{\left(\frac{\sqrt{s\left(s-4m_{\pi}^{2}\right)}+2m_{\pi}^{2}-s}{2m_{\pi}^{2}}\right)}\right.$
$\displaystyle+\sqrt{1-\frac{4m_{\pi}^{2}}{t}}\left(m_{\pi}^{2}(t-4u)+3m_{\pi}^{4}+t(u-t)\right)\ln{\left(\frac{\sqrt{t\left(t-4m_{\pi}^{2}\right)}+2m_{\pi}^{2}-t}{2m_{\pi}^{2}}\right)}$
$\displaystyle\left.+\sqrt{1-\frac{4m_{\pi}^{2}}{u}}\left(m_{\pi}^{2}(u-4t)+3m_{\pi}^{4}+u(t-u)\right)\ln{\left(\frac{\sqrt{u\left(u-4m_{\pi}^{2}\right)}+2m_{\pi}^{2}-u}{2m_{\pi}^{2}}\right)}\right]\Bigg{\\}}$
$\displaystyle+\frac{4\ell_{7}m_{\pi}^{2}m_{d}\left(s-2m_{\pi}^{2}\right)m_{u}\left(m_{d}-m_{u}\right)}{f_{\pi}^{3}f_{a}\left(m_{d}+m_{u}\right){}^{3}}\,,$
(16)
where the terms proportional to $\overline{\ell_{3}}$, $\overline{\ell_{4}}$
and $\ell_{7}$ in the second row arise from the LO amplitude, via the NLO
corrections to $m_{\pi}$ and $f_{\pi}$ (see Ref. Gasser and Leutwyler (1984)).
The latter include the charged-neutral pion mass difference arising at second
order in the isospin breaking parameter $m_{d}-m_{u}$, which has been
neglected in the numerical integration of the rate. The amplitudes for the
crossed channels $a\pi_{-}\rightarrow\pi_{0}\pi_{-}$ and
$a\pi_{+}\rightarrow\pi_{+}\pi_{0}$ are obtained by cross symmetry through the
replacements $s\leftrightarrow t$ and $s\leftrightarrow u$, respectively.
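The crossing replacements can be encoded mechanically once the amplitude is viewed as a function of the Mandelstam variables; the sketch below uses a toy polynomial amplitude of our own for illustration.

```python
def crossed_amplitudes(M):
    """Given M(s, t, u) for a pi0 -> pi+ pi-, return the amplitudes for
    the crossed channels a pi- -> pi0 pi- (s <-> t) and
    a pi+ -> pi+ pi0 (s <-> u)."""
    M_st = lambda s, t, u: M(t, s, u)  # s <-> t
    M_su = lambda s, t, u: M(u, t, s)  # s <-> u
    return M_st, M_su


# Toy polynomial amplitude (illustration only, not Eq. (16)):
toy = lambda s, t, u: 3 * s + t - 2 * u
M_st, M_su = crossed_amplitudes(toy)
```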
Next, we describe our procedure to analytically reduce the 12-dimensional
phase space integral of Eq. (14) down to a 5-dimensional one. We first
integrate the fourth-particle phase space in Eq. (14) using the relation
$d^{3}\mathbf{p}_{4}/(2E_{4})=d^{4}p_{4}\delta\left(p_{4}^{2}-m_{4}^{2}\right)\theta(p_{4}^{0})$.
Therefore, defining the angles $\alpha$ and $\theta$ via
$\cos\alpha=\frac{\mathbf{p}_{1}\cdot\mathbf{p}_{2}}{|\mathbf{p}_{1}||\mathbf{p}_{2}|}$
and
$\cos\theta=\frac{\mathbf{p}_{1}\cdot\mathbf{p}_{3}}{|\mathbf{p}_{1}||\mathbf{p}_{3}|}$,
the thermalization rate becomes
$\displaystyle\Gamma_{a}$ $\displaystyle=\frac{1}{n_{a}^{\rm
eq}}\int\frac{dp_{1}|\mathbf{p}_{1}|^{2}}{2E_{1}}\frac{dp_{2}|\mathbf{p}_{2}|^{2}}{2E_{2}}\frac{dp_{3}|\mathbf{p}_{3}|^{2}}{2E_{3}}\int_{-1}^{1}d\cos\alpha\int_{-1}^{1}d\cos\theta\int_{0}^{2\pi}d\beta\sum|\mathcal{M}|^{2}\frac{4\pi}{(2\pi)^{7}}$
$\displaystyle\times\frac{\delta\left(E_{1}-\xi(E_{2},E_{3},\alpha,\beta,\theta)\right)}{2|E_{2}-E_{3}-|\mathbf{p}_{2}|\cos\alpha+|\mathbf{p}_{3}|\cos\theta|}f_{1}f_{2}(1+f_{3})(1+f_{4})\,,$
(17)
with $\beta$ the angle between the scattering planes defined by
$(\mathbf{p}_{1},\mathbf{p}_{2})$ and $(\mathbf{p}_{3},\mathbf{p}_{4})$, and
the function $\xi$ given by
$\xi(E_{2},E_{3},\alpha,\beta,\theta)=\frac{2\left(E_{2}E_{3}-|\mathbf{p}_{2}||\mathbf{p}_{3}|\left(\sin\alpha\sin\theta\cos\beta+\cos\alpha\cos\theta\right)\right)-m_{\pi}^{2}}{2(E_{2}-E_{3}-|\mathbf{p}_{2}|\cos\alpha+|\mathbf{p}_{3}|\cos\theta)}\,.$
(18)
Eq. (17) is then integrated numerically, leading to Eq. (15).
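As a consistency check (ours, not part of the paper's pipeline), the $\delta$-function in Eq. (17) fixes the axion energy $E_{1}=\xi$; the sketch below verifies numerically that this choice puts the fourth pion on shell, using arbitrary test kinematics and the average pion mass quoted in the text.

```python
import math

M_PI = 0.137  # average pion mass in GeV, as used in the text


def xi(E2, E3, alpha, beta, theta, m_pi=M_PI):
    """Axion energy E1 fixed by the delta-function in Eq. (17), cf. Eq. (18)."""
    p2 = math.sqrt(E2**2 - m_pi**2)
    p3 = math.sqrt(E3**2 - m_pi**2)
    num = 2 * (E2 * E3 - p2 * p3 * (math.sin(alpha) * math.sin(theta) * math.cos(beta)
                                    + math.cos(alpha) * math.cos(theta))) - m_pi**2
    den = 2 * (E2 - E3 - p2 * math.cos(alpha) + p3 * math.cos(theta))
    return num / den


def four_momenta(E2, E3, alpha, beta, theta, m_pi=M_PI):
    """Four-vectors (E, px, py, pz): massless axion p1 along z; the pion
    angles alpha, theta are measured from p1 and beta is the azimuth of p3."""
    E1 = xi(E2, E3, alpha, beta, theta, m_pi)
    p2 = math.sqrt(E2**2 - m_pi**2)
    p3 = math.sqrt(E3**2 - m_pi**2)
    v1 = (E1, 0.0, 0.0, E1)
    v2 = (E2, p2 * math.sin(alpha), 0.0, p2 * math.cos(alpha))
    v3 = (E3, p3 * math.sin(theta) * math.cos(beta),
          p3 * math.sin(theta) * math.sin(beta), p3 * math.cos(theta))
    return v1, v2, v3


def minkowski_sq(p):
    """Invariant mass squared with (+, -, -, -) signature."""
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2
```

With $E_{1}=\xi$, momentum conservation $p_{4}=p_{1}+p_{2}-p_{3}$ gives $p_{4}^{2}=m_{\pi}^{2}$ identically, which is exactly the constraint the $\delta$-function in Eq. (17) enforces.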
††thanks: Contributed equally to this work
# Multi-angle reconstruction of domain morphology with all-optical diamond
magnetometry
Lucio Stefan<EMAIL_ADDRESS>Cavendish Laboratory, University of
Cambridge, J. J. Thomson Avenue, Cambridge, CB3 0HE, UK The Faraday
Institution, Quad One, Becquerel Avenue, Harwell Campus, Didcot, OX11 0RA, UK
Anthony K. C. Tan Cavendish Laboratory, University of Cambridge, J. J.
Thomson Avenue, Cambridge, CB3 0HE, UK Baptiste Vindolet Université Paris-
Saclay, CNRS, ENS Paris-Saclay, CentraleSupélec, LuMIn, 91190, Gif-sur-Yvette,
France Michael Högen Cavendish Laboratory, University of Cambridge, J. J.
Thomson Avenue, Cambridge, CB3 0HE, UK Dickson Thian Institute of Materials
Research and Engineering, Agency for Science, Technology and Research
(A*STAR), 138634 Singapore Hang Khume Tan Institute of Materials Research
and Engineering, Agency for Science, Technology and Research (A*STAR), 138634
Singapore Loïc Rondin Université Paris-Saclay, CNRS, ENS Paris-Saclay,
CentraleSupélec, LuMIn, 91190, Gif-sur-Yvette, France Helena S. Knowles
Cavendish Laboratory, University of Cambridge, J. J. Thomson Avenue,
Cambridge, CB3 0HE, UK Jean-François Roch Université Paris-Saclay, CNRS, ENS
Paris-Saclay, CentraleSupélec, LuMIn, 91190, Gif-sur-Yvette, France Anjan
Soumyanarayanan Institute of Materials Research and Engineering, Agency for
Science, Technology and Research (A*STAR), 138634 Singapore Physics
Department, National University of Singapore (NUS), 117551 Singapore Mete
Atatüre<EMAIL_ADDRESS>Cavendish Laboratory, University of Cambridge, J. J.
Thomson Avenue, Cambridge, CB3 0HE, UK
###### Abstract
Scanning diamond magnetometers based on the optically detected magnetic
resonance of the nitrogen-vacancy centre offer very high sensitivity and non-
invasive imaging capabilities when the stray fields emanating from ultrathin
magnetic materials are sufficiently low ($<10\,\mathrm{mT}$). Beyond this low-
field regime, the optical signal quenches and a quantitative measurement is
challenging. While the field-dependent NV photoluminescence can still provide
qualitative information on magnetic morphology, this operation regime remains
unexplored particularly for surface magnetisation larger than $\sim
3\,\mathrm{mA}$. Here, we introduce a multi-angle reconstruction technique
(MARe) that captures the full nanoscale domain morphology in all magnetic-
field regimes leading to NV photoluminescence quench. To demonstrate this, we
use [Ir/Co/Pt]14 multilayer films with surface magnetisation an order of
magnitude larger than previous reports. Our approach brings non-invasive
nanoscale magnetic field imaging capability to the study of a wider pool of
magnetic materials and phenomena.
## I Introduction
In the last decade, the negatively-charged nitrogen-vacancy (NV) centre in
diamond has attracted great interest as a versatile quantum sensor for
investigations of weak-field magnetism, which demand high sensitivity,
nanoscale resolution and noninvasiveness [1, 2, 3, 4, 5]. In the presence of
a magnetic field, the Zeeman splitting of the NV spin can be quantified by
performing optically detected magnetic resonance (ODMR) measurements using
laser and microwave excitation [6]. The single-spin nature of the NV centre
also ensures limited perturbation of the measured system. Further, attaching
an NV-containing diamond platform on a scanning probe [7, 3, 8, 2, 9] enables
scanning NV microscopy (SNVM), which allows for nanoscale noninvasive magnetic
imaging. This technique features a large operating temperature range
(cryogenic to room temperature) and stability in vacuum to ambient conditions
[6, 1, 10]. However, the ODMR measurements are restricted to magnetic fields
below $10\,\mathrm{mT}$ due to the field-induced quenching of the ODMR
contrast, thus preventing the optical readout of the spin splitting [11, 12,
8]. As a consequence, quantitative ODMR-based SNVM has been demonstrated
mainly on magnetic textures in thin films with close to zero surface
magnetisation, such as antiferromagnetic or single layer ferromagnetic
materials [8, 13, 7, 14, 4, 11, 15, 16, 17, 18, 19, 20].
Figure 1: Effect of Magnetic Field Amplitude and Orientation on NV
Luminescence. (a) Illustration of a diamond probe scanning over a spin texture
(colored cones) with magnetic field lines across the domain boundaries (red
lines). The inset is a schematic of the local magnetic field vector B with
reference to the NV quantisation axis $\hat{u}_{\mathrm{NV}}$ at the tip of
the diamond probe. $\alpha_{B}$ indicates the angle between B and
$\hat{u}_{\mathrm{NV}}$. (b) Labyrinth domain morphology in Ir/Co/Pt
multilayer observed by MFM, exhibiting a zero-field period of
$407\,\mathrm{nm}$ (scale bar: $3\,\mathrm{\mu m}$). (c) Normalized NV
luminescence defined as
$(\mathrm{PL}-\mathrm{PL}_{\rm\min})/(\mathrm{PL}_{\rm\max}-\mathrm{PL}_{\rm\min})$
as a function of $\left|\mathbf{B}\right|$ and $\alpha_{B}$. The corresponding
distributions at various $d_{\mathrm{NV}}$, obtained from simulated magnetic
fields across (b), are overlaid on (c) (Contour lines). The 80th, 60th, and
40th percentiles are indicated with increasingly lighter contour lines. (d)
The histograms of the simulated PL response at the three $d_{\mathrm{NV}}$
values, $60\,\mathrm{nm}$, $100\,\mathrm{nm}$ and $200\,\mathrm{nm}$.
$\overline{\mathrm{PL}}$ indicates the mean PL, $\Delta\mathrm{PL}$ marks the
difference between the 90th and 10th percentile of the PL distribution. (e)
Dependence of $\Delta\mathrm{PL}$ and $\overline{\mathrm{PL}}$ on
$d_{\mathrm{NV}}$. The peak of $\Delta\mathrm{PL}$ marks the optimal distance
for quench-based imaging for the multilayer film of (b). The colored circles
correspond to the three $d_{\mathrm{NV}}$ values considered in panels (c) and
(d).
To extend the operational range beyond $10\,\mathrm{mT}$, the NV centre can
harness the field-dependent quench of the NV photoluminescence (PL) for
magnetic imaging as demonstrated recently [21, 11, 22, 23]. Quench-based SNVM
monitors the changes in NV PL due to the local magnetic field variation across
a spin texture with respect to the NV quantisation axis. This modality
also offers reduced acquisition time and enables microwave-free non-
perturbative operation [24, 5, 25]. The interpretation of quench-based SNVM
maps can be ambiguous, because of the multiple parameters that influence PL
quenching, such as NV-sample distance, NV axis orientation, sample
magnetization, magnetic domain size or magnetic field noise [26]. Therefore,
this imaging mode has been limited to the mapping of magnetic domain
morphology with surface magnetisation $I_{S}\lesssim 3\,\mathrm{mA}$ [21, 22,
23] (equivalent to $\sim 2\,\mathrm{nm}$ of $\mathrm{Co}$). In
this report, we reveal distinct quench-based imaging regimes, dependent on the
material parameters, and introduce the Multi-Angle Reconstruction (MARe)
protocol to interpret the domain morphology from quenched SNVM maps. We
demonstrate MARe on [Ir/Co/Pt]14 multilayer film with $12\,\mathrm{mA}$ out-
of-plane surface magnetisation, an order of magnitude larger than the
operational limit of ODMR-based SNVM. Utilising MARe can extend the
applicability of SNVM to a wider range of materials and magnetic regimes.
## II Quench-Based Imaging in Different Regimes
Figure 1(a) illustrates our experimental setup consisting of a diamond
scanning probe with an NV centre implanted close to the diamond surface at an
NV-sample distance $d_{\mathrm{NV}}$ smaller than $100\,\mathrm{nm}$ [2, 27,
9, 28]. The optical ground state of the NV centre is a spin triplet, with a
quantisation axis $\hat{u}_{\mathrm{NV}}$ along one of the four
crystallographic axes of the diamond lattice [29, 30] and the lowest-energy
state $\ket{m_{s}=0}$ is split from the $\ket{m_{s}=\pm 1}$ states by
$2.87\,\mathrm{GHz}$ [31]. The local magnetic field can be decomposed into
parallel ($\textbf{B}_{\parallel}$) and orthogonal ($\textbf{B}_{\perp}$)
components with respect to $\hat{u}_{\mathrm{NV}}$ (Fig. 1(a) inset). The
$\textbf{B}_{\parallel}$ component splits the $\ket{m_{s}=\pm 1}$ states, a
splitting that is measured by monitoring the ODMR [32]. However, $\textbf{B}_{\perp}$ mixes
these spin states and modifies the branching ratio of the optical transitions
[12]. This results in the quenching of the NV PL and the suppression of the
ODMR contrast (Supplemental Material A), restricting quantitative ODMR-based
imaging to below $\sim 10\,\mathrm{mT}$ [12, 11].
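The decomposition above can be sketched numerically. The code below (our illustration, not from the paper) splits a field into $\textbf{B}_{\parallel}$ and $\textbf{B}_{\perp}$ and evaluates the first-order Zeeman-split ODMR resonances $f_{\pm}=D\pm\gamma_{\rm NV}B_{\parallel}$; the gyromagnetic ratio value is an assumed approximation, and the first-order formula holds only when $\textbf{B}_{\perp}$ is small.

```python
import math

D_ZFS = 2.87e9     # NV zero-field splitting in Hz, as quoted in the text
GAMMA_NV = 28.0e9  # NV gyromagnetic ratio in Hz/T (approximate, assumed)


def decompose(B, u_nv):
    """Split field B (tesla) into components parallel and orthogonal to
    the unit NV axis u_nv, and return the angle alpha_B between them."""
    b_par = sum(bi * ui for bi, ui in zip(B, u_nv))
    b_norm = math.sqrt(sum(bi**2 for bi in B))
    b_perp = math.sqrt(max(b_norm**2 - b_par**2, 0.0))
    alpha_b = math.acos(max(-1.0, min(1.0, b_par / b_norm)))
    return b_par, b_perp, alpha_b


def odmr_frequencies(b_par):
    """First-order ODMR resonances f_- and f_+ in Hz (small-B_perp limit)."""
    return D_ZFS - GAMMA_NV * abs(b_par), D_ZFS + GAMMA_NV * abs(b_par)
```

For example, a $1\,\mathrm{mT}$ field along the NV axis gives $\alpha_{B}=0$ and an ODMR splitting of $2\gamma_{\rm NV}B_{\parallel}=56\,\mathrm{MHz}$, while the same field orthogonal to the axis gives $\alpha_{B}=90^{\circ}$ and, beyond first order, the spin mixing responsible for the PL quench.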
Figure 2: Quench-based SNVM Imaging Regimes. (a) Different regimes of quench-
based imaging as a function of NV-sample distance ($d_{\mathrm{NV}}$) and
surface magnetisation ($I_{s}$), based on simulated quench images. Little to
no domain morphological information is captured in the No Quench (left greyed
area) and Full Quench (right greyed area) regimes where the PL map is
predominantly bright or dark, respectively. In the Partial Quench regime (area
bounded by dashed lines), field variations are mapped to PL changes resulting
in (b-d) quench images with features indicative of domain boundaries (scale
bar: $1\,\mathrm{\mu m}$). Domain boundaries appear as dark isotropic PL
features (low directionality) for smaller $d_{\mathrm{NV}}$ and $I_{s}$ (b),
and as directional bright features (high directionality) at larger
$d_{\mathrm{NV}}$ and $I_{s}$ (c-d). The orientation of the directionality
depends on $\hat{u}_{\mathrm{NV},\varphi}$ which is the NV axis
$\hat{u}_{\mathrm{NV}}$, projected on the sample surface. The dashed lines
indicate the contour lines for 5% map contrast. (e) Illustration depicting the
NV axis $\hat{u}_{\mathrm{NV}}$, the tilt angle $\vartheta_{\mathrm{NV}}$ from
the normal to the sample surface, and the projection of
$\hat{u}_{\mathrm{NV}}$ in the sample plane, $\hat{u}_{\mathrm{NV},\varphi}$.
The angle $\varphi_{\mathrm{NV}}$ is the angle between
$\hat{u}_{\mathrm{NV},\varphi}$ and the reference axis within the sample
plane. For panels (a-d), $\vartheta_{\mathrm{NV}}{}=54.7^{\circ}$ and
$\varphi_{\mathrm{NV}}{}=0^{\circ}$.
Quench-based SNVM generates a PL intensity map, where regions with strong
$\textbf{B}_{\perp}$ component appear darker. In the limit of modest surface
magnetisation and small NV-sample distance $d_{\mathrm{NV}}$, the domain
boundaries appear dark, producing faithful maps of the magnetic domain morphology.
Therefore, demonstrations are limited to single or bilayer thin film systems
with surface magnetisation $I_{s}\lesssim 3\,\mathrm{mA}$ [21, 22, 23, 11, 8].
Outside this regime, the combined influence of $d_{\mathrm{NV}}$, $I_{s}$ and
the morphology lengthscale on the NV PL obfuscates the straightforward
correspondence of dark regions to domain boundaries.
Therefore, a systematic understanding of quench-based SNVM response is
necessary to retrieve the domain morphology of a magnetic material. To do
this, we first simulate the $d_{\mathrm{NV}}$ dependence of quench-based SNVM
for a known magnetic structure.
Our study involves an [Ir($1\,\mathrm{nm}$)/Co(1)/Pt(1)]14 magnetic multilayer, a
room-temperature skyrmion platform with an out-of-plane anisotropy and
$I_{s}=12\,\mathrm{mA}$ (Supplemental Material B) – an order of magnitude
larger than systems studied previously with SNVM. Further, the ambient
stability of the nanoscale spin textures [33, 34] allows us to correlate the
quench-based SNVM images with MFM measurements [35]. Figure 1(b) presents an
MFM image of this film, exhibiting a labyrinth domain morphology with a zero-
field period of $407\,\mathrm{nm}$. Figure 1(c) presents a grey-scale map of
normalised PL intensity simulated as a function of field amplitude
$\left|\textbf{B}\right|$ and field angle $\alpha_{B}$ with respect to the NV
axis $\hat{u}_{\mathrm{NV}}$. To understand how the stray field distribution
of the domain morphology affects the NV PL at various $d_{\mathrm{NV}}$, we
simulate the volumetric field distribution from the MFM map in Figure 1(b)
using the micromagnetics package mumax3 [36] (Supplemental Material C). On
Figure 1(c), we overlay the corresponding $\left|\textbf{B}\right|$-$\alpha_{B}$
distributions of the magnetic field at three different $d_{\mathrm{NV}}$:
$60\,\mathrm{nm}$ (green contours), $100\,\mathrm{nm}$ (orange) and
$200\,\mathrm{nm}$ (blue) (Supplemental Material D). At
$d_{\mathrm{NV}}{}=60\,\mathrm{nm}$ ($200\,\mathrm{nm}$), the NV PL remains
uniformly quenched (unaffected) for the majority of the field distribution,
while $d_{\mathrm{NV}}{}=100\,\mathrm{nm}$ results in strong PL variation.
Figure 1(d) clearly highlights these NV PL variations $\Delta\mathrm{PL}$ via
the corresponding histograms at $d_{\mathrm{NV}}{}=60,\,100$ and
$200\,\mathrm{nm}$. Figure 1(e) presents the $\Delta\mathrm{PL}$ – calculated
as the difference between the 90th and 10th percentile of the NV PL
distribution – as a function of $d_{\mathrm{NV}}$ (solid red curve) alongside
the mean PL (dashed grey curve).
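The percentile-based PL variation used in Fig. 1(e) is straightforward to compute from a PL map; a minimal sketch (function name ours):

```python
import numpy as np

def delta_pl(pl_map):
    """PL variation: difference between the 90th and the 10th
    percentile of the PL distribution (cf. Fig. 1(e))."""
    p10, p90 = np.percentile(pl_map, [10, 90])
    return p90 - p10
```

Using percentiles rather than the full min-max range makes the metric robust against isolated outlier pixels in the PL map.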
Figure 3: Directional Quench Imaging and morphology reconstruction. (a)
Simulated quenched PL map based on spin texture in Figure 1b, with
$\vartheta_{\mathrm{NV}}{}=54.7^{\circ}$, $\varphi_{\mathrm{NV}}{}=0^{\circ}$
and (b) simulated quenched PL map in the same area but with the NV rotated
$90^{\circ}$ in the sample plane ($\varphi_{\mathrm{NV}}{}=90^{\circ}$). Both
maps are simulated at a NV-sample distance $d_{\mathrm{NV}}{}=77\,\mathrm{nm}$
and surface magnetisation $I_{s}=12\,\mathrm{mA}$ (scale bar: $1\,\mathrm{\mu
m}$). (c) Reconstructed image obtained by summing (a) and (b). (d) Multi-Angle
Reconstruction (MARe) illustrating the domain morphology acquisition based on
multiple $N$ images at various $\varphi_{\mathrm{NV}}$. (e) Coverage of domain
boundaries given as a function of $N$ and $\varphi_{max}$. $N$ is the number of
quench images involved in the reconstruction, and are obtained over a range of
$\varphi_{\mathrm{NV}}{}$ ($0^{\circ}$ to $\varphi_{max}$) spaced by
$\Delta\varphi_{\mathrm{NV}}{}=\varphi_{max}/(N-1)$. The reconstruction with
$N=4$ images yields the largest coverage, which saturates above
$\varphi_{max}\simeq 120^{\circ}$.
To assess the operational regime of quench-based SNVM, we need to consider
further the interplay between $I_{s}$ and $d_{\mathrm{NV}}$. As shown in
Figure 2(a), quench-based SNVM can be categorised into different regimes. The
combination of large $d_{\mathrm{NV}}$ and small $I_{s}$ (small
$d_{\mathrm{NV}}$ and large $I_{s}$) results in predominantly bright
(quenched) PL maps. In both the No Quench and the Full Quench regimes, the
lack of PL variation $\Delta\mathrm{PL}$ implies that little to no
morphological information of the underlying spin textures is captured. In
contrast, quench-based SNVM is feasible in the Partial Quench regime (area
bounded by dashed lines in Figure 2(a)) for a limited range of $I_{s}$ and
$d_{\mathrm{NV}}$ combinations. While the Partial Quench regime gives a large
$\Delta\mathrm{PL}$, which is desirable for quench-based SNVM, the resultant
PL maps over an identical spin texture can vary dramatically across this
regime. To highlight this, we simulated quench-based SNVM maps of the same
area in the multilayer film using three different combinations of $I_{s}$ and
$d_{\mathrm{NV}}$ (Fig. 2(b-d)). In general, we observe an evolution from
dark, isotropic features at lower $I_{s}$ and $d_{\mathrm{NV}}$ to bright,
directional features at higher $I_{s}$ and $d_{\mathrm{NV}}$ due to competing
magnetic field contributions above domains and domain boundaries. At lower
$I_{s}$ and $d_{\mathrm{NV}}$ (blue region in Figure 2(a)), the quench image
appears as a uniform bright background with isotropic dark outlines (Fig.
2(b)). This is a result of the strong magnetic field localised at the domain
boundaries, which quenches the NV PL. The NV quench images reported to date lie in
this region of the parameter space [21, 8, 22, 23] (Supplemental Material D).
Figure 4: Experimental verification of Multi-angle Reconstruction of Domain
Morphology. Experimental quenching maps of the same area as Figure 3(a, b), with
$\vartheta_{\mathrm{NV}}{}=60\pm 2^{\circ}$ and (a) $\varphi_{\mathrm{NV}}{}=0^{\circ}$,
(b) $\varphi_{\mathrm{NV}}{}=90^{\circ}$. The two images are combined to give (c) the
reconstructed domain morphological map with $N=2$. (scale bar: $1\,\mathrm{\mu
m}$). (d) Binarized and magnified image of Figure 1(b) covering the same area
in (a, b and c), with domain boundaries marked in red.
For combinations of larger $I_{s}$ and $d_{\mathrm{NV}}$ values (orange region
in Figure 2(a)), the quench maps generate strikingly different images: panels
(c) and (d) capture highly directional bright and segmented features along the
domain boundaries. In this case, strong off-axis magnetic field above domains
and domain boundaries results in a predominantly dark PL map. However, due to
large gradients localised at domain boundaries, there are instances where the
field is aligned closer to $\hat{u}_{\mathrm{NV}}$. This occurs across
portions of domain boundaries orthogonal to the projection of
$\hat{u}_{\mathrm{NV}}$ in the sample plane ($\hat{u}_{\mathrm{NV},\varphi}$),
resulting in directional bright features for panels (c) and (d), highly
dependent on the NV equatorial angle $\varphi_{\mathrm{NV}}$ (Fig. 2(e)).
Notably, this directional behaviour occurs over a significantly larger
parameter space of the Partial Quench regime, well beyond that of panel (a),
and the trend remains valid for different domain periodicity (Supplemental
Material D). It is worth emphasising here that magnetic materials with $I_{s}$
larger than $\sim 3\,\mathrm{mA}$ would inevitably constrain quench-based SNVM
to the directional region of Figure 2(a). Therefore, a protocol that relates
these images with the actual magnetic domain morphology is necessary in order
to extend the operation regime of quench-based SNVM for non-perturbative
investigations of such materials.
## III Reconstruction of domain morphology - MARe
To reflect the role of $\varphi_{\mathrm{NV}}$ in quench-based SNVM, we
simulate two quench images for $\varphi_{\mathrm{NV}}{}=0^{\circ}$ and
$\varphi_{\mathrm{NV}}{}=90^{\circ}$, displayed in Figure 3(a) and 3(b),
respectively. We set $d_{\mathrm{NV}}$ = $77\,\mathrm{nm}$,
$\vartheta_{\mathrm{NV}}{}=54.7^{\circ}$, and $\sim 12\,\mathrm{mA}$ surface
magnetisation (Supplementary Material E) to reflect our experimental
measurements. The images for both $\varphi_{\mathrm{NV}}$ orientations show
directional segments revealing some features of the domain morphology, but
more importantly these segments are complementary. Therefore, while an image
at a given $\varphi_{\mathrm{NV}}$ remains incomplete, images obtained at
multiple $\varphi_{\mathrm{NV}}$ values can collectively give a significantly
better coverage of the underlying domain morphology, which is the essence of
the proposed imaging protocol. Multi-Angle Reconstruction protocol (MARe)
harnesses the $\varphi_{\mathrm{NV}}$ dependence of PL features to build a
composite map enabling morphological imaging further into the Partial Quench
regime, i.e. in strong-field conditions.
The overlapping features in the PL maps obtained at different
$\varphi_{\mathrm{NV}}$, e.g. $0^{\circ}$ and $90^{\circ}$ as in panels (a)
and (b) of Figure 3 allow us to perform an initial image registration to
compensate for the domain outline shift caused by
$\vartheta_{\mathrm{NV}}{}\neq 0^{\circ}$ (Supplemental Material F).
Subsequently, the maps are normalised and summed to yield a MARe image, as
displayed in Figure 3(c), revealing a larger fraction of the domain boundaries
with just two values of $\varphi_{\mathrm{NV}}$. To quantify the domain
boundary coverage, we integrate the product of the domain outlines from the
MFM image (Fig. 1(b)) with the binarized MARe image. In order to maximise the
fraction of domain boundaries covered by the protocol, we consider $N\geq 2$
images taken at different $\varphi_{\mathrm{NV}}$ values ranging from
$0^{\circ}$ to $\varphi_{\mathrm{NV}}^{\rm max}$ spaced equally by
$\Delta\varphi_{\mathrm{NV}}{}=\varphi_{\mathrm{NV}}^{\rm max}/(N-1)$. Figure
3(d) illustrates the MARe scheme for $N=4$ and $\varphi_{\rm
max}=120^{\circ}$, which corresponds to 4 quench-based SNVM images with each
obtained at $40^{\circ}$ relative angle. The corresponding MARe image clearly
captures an increased fraction of the domain morphology.
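The normalise-and-sum step and the boundary-coverage metric can be sketched as follows. This is a minimal illustration under our own assumptions: image registration is taken to have been performed beforehand, and the binarisation threshold is a placeholder value:

```python
import numpy as np

def mare_composite(pl_maps):
    """Normalise each quench map to [0, 1] and average them into a
    single MARe composite (maps assumed registered already)."""
    acc = np.zeros_like(pl_maps[0], dtype=float)
    for m in pl_maps:
        m = np.asarray(m, float)
        acc += (m - m.min()) / (m.max() - m.min())
    return acc / len(pl_maps)

def boundary_coverage(composite, boundary_mask, threshold=0.5):
    """Fraction of domain-boundary pixels (e.g. from an MFM outline)
    that coincide with bright features of the binarised composite."""
    bright = composite > threshold
    b = boundary_mask.astype(bool)
    return (bright & b).sum() / b.sum()
```

Because the directional segments at different $\varphi_{\mathrm{NV}}$ are complementary, the coverage of the composite exceeds that of any individual quench map.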
Figure 3(e) presents the calculated fraction of domain boundary coverage for
MARe with $N=2,3$ and $4$ (black, blue and red curve). For $N=2$ ($3$) the
maximum coverage reaches $81\%$ ($94\%$) at $\varphi_{\rm max}=80^{\circ}$
($100^{\circ}$). Extending MARe to $N=4$ further improves the coverage
reaching a maximum of $\sim 98\%$. This shows that even for $N\leq 4$, the
MARe protocol is capable of recovering the domain morphology with near-unity
coverage.
Figure 4 presents our experimental demonstration of domain morphology mapping
using MARe on the [Ir/Co/Pt]14 multilayer. To obtain quench-based SNVM images
we use a (100) diamond probe containing a single NV centre with
$\vartheta_{\mathrm{NV}}{}=60\pm 2\,\mathrm{{}^{\circ}}$ and
$d_{\mathrm{NV}}{}=77\pm 3\,\mathrm{nm}$ (Supplementary Material E). The
combination of the $d_{\mathrm{NV}}$ ($77\,\mathrm{nm}$) value and
$I_{s}$ ($12\,\mathrm{mA}$) of the [Ir/Co/Pt]14 multilayer
yields directional quench images according to Figure 2(a). Figures 4(a) and
4(b) show experimental quench images acquired at
$\varphi_{\mathrm{NV}}{}=0^{\circ}$ and $\varphi_{\mathrm{NV}}{}=90^{\circ}$,
respectively, on the same area used for simulating Figure 3(a) and 3(b)
(Supplemental Material B). The domain boundary coverage of each of these
images is $60^{+13}_{-11}\%$, in line with the
simulations and there is good agreement between the simulated and the measured
images for both orientations (Supplemental Material G). Figure 4(c) is the
corresponding $N=2$ MARe image showing matching bright features with the
highlighted domain boundaries of the binarised MFM image displayed in Figure
4(d). The experimentally achieved domain boundary coverage is improved to
$71^{+12}_{-15}\%$ – an enhancement beyond the
single frame coverage of $\sim 60\%$. The deviation from the simulated
coverage is due to the nonlinearity of the experimental map, as well as image
thresholding and registration operations (see Supplementary Material H).
Another possible source of this deviation is perturbation of the domain
morphology induced by MFM scanning. As the experimental protocol includes MFM
scans performed before and after each quench-based SNVM map, we do observe
local perturbations due to MFM that could potentially lead to deviations from
the unperturbed images captured by quench-based SNVM (see Supplemental
Material I). Nonetheless, the experimental demonstration of MARe extends the
operational range of non-invasive quench-based SNVM into the Partial Quench
regime.
## IV Outlook
Our work methodically evaluates quench-based SNVM in terms of characteristic
NV and magnetic material properties. We establish a predictive scheme
involving MFM, micromagnetics and NV photodynamics simulations, which yields
images in excellent agreement with experimentally acquired data. We find two
regimes of quench imaging where morphological information is captured. The
first regime produces mostly bright PL maps with dark outlines tracing
the domain boundaries, and applies to materials of low magnetisation
($I_{s}<3\,\mathrm{mA}$). The second regime, which has not been reported to
date, results in PL maps with directional segmented features with a strong
$\hat{u}_{\mathrm{NV},\varphi}$ dependence. We established a multi-angle
reconstruction scheme, herein named MARe, to enable domain morphology
mapping with near-unity coverage for the second regime. The experimentally
validated MARe protocol extends quench-based SNVM imaging of out-of-plane spin
textures to magnetic systems with $I_{s}>3\,\mathrm{mA}$. Furthermore, the
scheme to identify the imaging regimes can be generalized to complex magnetic
textures, thus enabling the forecast of the attainable SNVM modes. We
anticipate that these insights, alongside tools developed for prediction,
interpretation and reconstruction, will stimulate the adoption of quench-based
SNVM as a non-perturbative nanoscale magnetometry to a wider pool of
materials, thereby furthering the development of quantitative quench-based
SNVM imaging.
## V Acknowledgements
This work was performed at the Cambridge Nanoscale Sensing and Imaging Suite
(CANSIS), part of the Cambridge Henry Royce Institute, EPSRC grant
EP/P024947/1. We further acknowledge funding from EPSRC QUES2T (EP/N015118/1)
and from the Betty and Gordon Moore Foundation. This work was also supported
by the Faraday Institution (FIRG01) and by the SpOT-LITE programme (Grant Nos.
A1818g0042, A18A6b0057), funded by Singapore’s RIE2020 Initiatives. A. K. C.
Tan acknowledges funding from A*STAR, Singapore. B. Vindolet acknowledges
support by a PhD research Grant of Délégation Générale de l’Armement. J.-F.
Roch thanks Churchill College and the French Embassy in the UK for supporting
his stay at the Cavendish Laboratory.
## Appendix A Simulation of the NV photodynamics
To capture the photodynamics of the NV centre, we use a seven-state model
which includes the ground-state and excited-state fine structure of the NV
centre (Fig. S5). The strain splitting is $E_{\rm gs}=E_{\rm es}\approx 0$,
where the subscripts gs and es indicate the optical ground state and the
optical excited state, respectively. At zero-field, the levels $\ket{i}$ with
$i=0,1,2$ are split by $D_{\rm gs}=2.87\,\mathrm{GHz}$ in the optical ground
state while the levels of the excited state $\ket{i}$, $i=3,4,5$, are split by
the excited state zero-field splitting $D_{\rm es}=1.42\,\mathrm{GHz}$. The
transition rates from level $\ket{i}$ to the level $\ket{j}$ are denoted as
$\gamma_{ij}$. The decay rates are defined as in the work by Tetienne et al.
[12]: we assume $\gamma_{30}=\gamma_{41}=\gamma_{52}=\gamma_{r}$,
$\gamma_{46}=\gamma_{56}$, and $\gamma_{61}=\gamma_{62}$. The spin non-
conserving transitions from the excited state are assumed to be forbidden.
Optical excitation pumps the ground state populations to the excited state but
stimulated emission is neglected, the laser being off-resonant and the
vibrational relaxation decay time being short. The values used for the
numerical simulations are taken from the works by Robledo et al. and Tetienne
et al. [37, 12] (Table S1). Within the assumption of Markovian noise, the
density operator evolves according to the Lindblad master equation:
$\dfrac{\mathrm{d}\rho(t)}{\mathrm{d}t}=-\frac{i}{\hbar}\left[\mathscr{H},\rho\right]-\frac{1}{2}\sum_{k=0}^{m}\left(L_{k}^{\dagger}L_{k}\rho+\rho
L_{k}^{\dagger}L_{k}\right)+\sum_{k=0}^{m}L_{k}\rho L_{k}^{\dagger}\;\,$ (1)
where $\mathscr{H}$ is the magnetic-field dependent Hamiltonian describing the
seven-state system, $\rho$ is the density operator and $L_{k}$ are the Kraus
operators which describe the $m$ photon emission or absorption processes. We
work in the approximation of microwave excitation rate weaker than the laser
pumping, hence $T_{2}^{*}$ dephasing is neglected. The laser pump is described
as an incoherent absorption process. The Kraus operators can then take either
the form:
$L_{k}^{abs}=\sqrt{\gamma_{ji}}\,\outerproduct{i}{j},\;i=\left(3,4,5\right),j=\left(0,1,2\right)\;\,$
(2)
or
$L_{k}^{em}=\sqrt{\gamma_{ij}}\,\outerproduct{j}{i},\;i=\left(3,4,5,6\right),\,j=\left(0,1,2\right)\;.$
(3)
Extra Kraus operators can be added if incoherent microwave driving is included
in the model:
$L_{k}^{mw}=\sqrt{\gamma_{ij}^{mw}}\outerproduct{i}{j},\;i,j=\left(0,1,2\right),i\neq
j\;.$ (4)
The steady-state PL rate is proportional to the sum of the steady-state
populations in the excited state:
$\Gamma_{\mathrm{PL}}\propto\sum_{i=3}^{5}\rho_{ii}\,.$ (5)
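The steady state of Eq. (1) can be obtained numerically by vectorising the master equation into a Liouvillian superoperator and extracting its null vector. The sketch below is our own generic implementation (not the authors' code), demonstrated on a reduced three-level toy model (ground, excited, shelving) rather than the full seven-level system; all rates in the usage example are illustrative:

```python
import numpy as np

def liouvillian(H, jumps, hbar=1.0):
    """Superoperator of Eq. (1), acting on the column-stacked
    density matrix vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    L_sup = -1j / hbar * (np.kron(I, H) - np.kron(H.T, I))
    for L in jumps:
        LdL = L.conj().T @ L
        L_sup += (np.kron(L.conj(), L)
                  - 0.5 * np.kron(I, LdL)
                  - 0.5 * np.kron(LdL.T, I))
    return L_sup

def steady_state(H, jumps):
    """Steady-state density matrix: null vector of the Liouvillian,
    normalised to unit trace."""
    _, _, vh = np.linalg.svd(liouvillian(H, jumps))
    rho = vh[-1].conj().reshape(H.shape, order='F')
    return rho / np.trace(rho)

def jump(d, i, j, rate):
    """Jump operator sqrt(rate) |i><j| in a d-level basis."""
    L = np.zeros((d, d), complex)
    L[i, j] = np.sqrt(rate)
    return L
```

For a toy model with pumping $g\to e$, radiative decay $e\to g$, shelving $e\to s$ and a slow reset $s\to g$, the steady-state excited population (and hence the PL via Eq. (5)) drops as the shelving rate grows, which is the mechanism behind the field-induced quenching.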
Parameter | Decay rate (MHz)
---|---
$\gamma_{r}$ | 65
$\gamma_{36}$ | 11
$\gamma_{46}=\gamma_{56}$ | 80
$\gamma_{60}$ | 3
$\gamma_{61}=\gamma_{62}$ | 3
Table S1: Photodynamics parameters. The previously reported [37, 12] decay
rates used in the 7-state model for the NV magnetic field-dependent
photodynamics.

Figure S5: Schematics of the NV seven-level system. Seven-level
system used to capture the NV photodynamics, for an arbitrary magnetic field.
In general, off-axis magnetic fields couple the zero-field eigenstates and
allow for spin-flip transitions which modify the zero-field photodynamics.
Green lines represent laser excitation, red lines optical decay and purple
lines non-radiative decay.

Figure S6: Magnetic field-dependent NV
photodynamics. (a) Changes in steady state population under continuous green
excitation of the triplet ground state and singlet state of an NV as a
function of a magnetic field, $B_{\perp}$, orthogonal to the NV axis
$\hat{u}_{\mathrm{NV}}$. (b) The corresponding quench response of the NV PL
(blue curve) with increasing $B_{\perp}$ due to larger shelving state
population (shown in (a)). ODMR contrast (red curve) is also reduced due to
the decrease in population difference between $\ket{0}$ and $\ket{\pm 1}$
(shown in (a)).
Magnetic field components orthogonal to $\hat{u}_{\mathrm{NV}}$ (off-axis)
couple the different spin states, modifying the branching ratio of the
transitions [12] and altering the steady-state populations of the levels (Fig.
S6(a)). On one hand, this leads to a reduction of the ODMR contrast (Fig. S6(b)),
because of the reduced population difference between the $\ket{0}$ and
$\ket{\pm 1}$ levels (Fig. S6(a)). On the other hand, this leads to the
quenching of the PL [8, 12, 38], due to a larger population getting trapped in
the singlet state (Fig. S6(b)).
magnetic field amplitude ($\propto 1/d_{\mathrm{NV}}{}$) and spatial
resolution ($\propto d_{\mathrm{NV}}{}$) when imaging small spin textures.
## Appendix B Material Properties and Preparations
The multilayer stack of [Ir($1\,\mathrm{nm}$)/ Co(1)/ Pt(1)]14 was deposited
on thermally oxidised Si wafers by DC magnetron sputtering. Additional
fabrication information is found in previous studies [34]. Relevant properties
of the Ir/Co/Pt stack are shown in Table B. The surface magnetisation $I_{s}$
is given by $M_{s}\cdot t_{\rm eff}$, where $t_{\rm eff}$ is the effective
magnetic thickness which is the number of repetition multiplied by the
thickness of the magnetic layer. In this case, $t_{\rm eff}$=14 nm, and hence
$I_{s}$=12.3 mA (Table. B). The $I_{s}$ of various systems studied with
quenched SNVM is given in Table B for comparison. The zero-field magnetic
domains are stabilised by demagnetising the sample. This results in labyrinth
morphology with a period, $P$=407 nm (Fig. S9(a)).
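The quoted surface magnetisation follows directly from $I_{s}=M_{s}\cdot t_{\rm eff}$ with the values of Table S2; as a quick arithmetic check:

```python
# Surface magnetisation I_s = M_s * t_eff for [Ir/Co/Pt]14
M_s = 0.881e6          # saturation magnetisation in A/m (Table S2)
t_eff = 14 * 1e-9      # 14 repeats x 1 nm of Co, in m
I_s = M_s * t_eff      # surface magnetisation in A
print(f"I_s = {I_s * 1e3:.1f} mA")   # -> I_s = 12.3 mA
```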
The sample is marked with a wirebonder (Fig. S7) which allows us to image the
same area of interest (yellow box in Fig. S7) using two techniques (SNVM and
MFM) on separate platforms. MFM is always carried out before and after
quenched SNVM, to ensure that the morphology of the probed area remains
identifiable and the features are largely unchanged.
$\mathbf{M_{s}}$ $(\mathrm{MA/m})$ | $\mathbf{K_{eff}}$ $(\mathrm{MJ/m^{3}})$ | $\mathbf{D}$ $(\mathrm{mJ/m^{2}})$
---|---|---
0.881 | 0.474 | 1.25
Table S2: Material Properties. The saturation magnetisation $M_{s}$, effective anisotropy $K_{eff}$ and DMI strength $D$ of [Ir/Co/Pt]14 film.

Material System | $\mathbf{I_{s}}$ $(\mathrm{mA})$
---|---
$14\times$ Ir/Co/Pt | $12.3$
Pt/CFA/MgO/Ta [22] CFA: Co2FeAl | 1.8
Pt/FM/Au/FM/Pt [21] FM: Ni/Co/Ni | 2.6
Pt/Co/NiFe/IrMn [23] | 1.7
Table S3: Material Systems. The surface magnetisation $I_{s}$ of various
systems studied with quenched SNVM compared to [Ir/Co/Pt]14.

Figure S7: Marked Sample. Microscopic image of a marked area of the sample surface with a
MFM probe in view. The marking is achieved using a wirebonding tip, and the
area of interest probed by quenched SNVM, micromagnetics and MFM in the main
text is highlighted in yellow.
## Appendix C Micromagnetic simulations
The magnetic field above the spin texture was obtained via mumax3 simulations.
For the study of quenched imaging in various regimes (Fig. 2 in main text),
the multilayer film is modelled using the effective medium method [39] so as
to reduce computation resources. The simulation grid consists of $256\times
256\times 128$ cells spanning $10\,\mathrm{\mu m}\times 10\,\mathrm{\mu
m}\times 384\,\mathrm{nm}$ (cell size: $\approx 39\,\mathrm{nm}\times
39\,\mathrm{nm}\times 3\,\mathrm{nm}$). The first 14 layers are modelled with
an effective saturation magnetisation $M_{\rm eff}=M_{s}/3$ and the volume
above as non-magnetic spacers. The simulation is further refined for
comparison with experiments (Fig. 3 and 4 in main text) with each cell layer
corresponding to $1\,\mathrm{nm}$ of Ir, Co, or Pt. Maintaining the same grid
size, this reduces the total simulated height to $128\,\mathrm{nm}$.
Similarly, the Pt, Ir layer and the volume above the multilayer film are
modelled as non-magnetic spacers. Differing from the effective medium model,
the Co layer has the experimentally obtained magnetisation $M_{s}$. In both
cases, the simulated non-magnetic volume above the multilayer film allows us
to retrieve the magnetostatic field environment above the spin texture (Fig.
S8) via mumax3. The magnetisation distribution used in the simulation is based
on segmenting a MFM image into up and down domains by image thresholding (Fig.
S9).
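The thresholding step that converts an MFM image into the up/down magnetisation map used as simulation input can be sketched as follows; the global median threshold is our simplification of the actual thresholding described with Fig. S9:

```python
import numpy as np

def binarise_mfm(mfm, M_s):
    """Threshold an MFM phase image into up/down out-of-plane
    magnetisation for use as a micromagnetic initial state."""
    thr = np.median(mfm)              # simple global threshold (assumption)
    m_z = np.where(mfm > thr, 1.0, -1.0)
    return M_s * m_z
```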
Figure S8: Simulated Magnetic Field. (a-c) Magnetic field components $B_{x}$,
$B_{y}$, $B_{z}$, at $d_{\mathrm{NV}}=77\,\mathrm{nm}$ above the sample
surface, simulated based on the magnetisation distribution in Figure S9(b).
(Scale bar: $1\,\mathrm{\mu m}$)

Figure S9: Image Thresholding. (a) MFM image
of sample surface (highlighted in Figure S7) showing labyrinth domains at zero
field. (b) Corresponding binary image after thresholding process, yielding
up/down magnetisation used for simulations in Figure S8. (Scale bar:
$1\,\mathrm{\mu m}$)
## Appendix D Analysis of Quenched Imaging Regimes
The diagram in Figure 2 of the main text is constructed based on the
directionality of the observed PL features and the contrast of quenched images
with different combinations of surface magnetisation $I_{s}$ and NV-sample
distance $d_{\mathrm{NV}}$. The directionality of the PL features is
determined from the auto-correlation of the quenched image (Fig. S10). The
directionality is defined as $1-r_{min}/r_{max}$, where $r_{\rm min}$ and
$r_{\rm max}$ are respectively the minor and the major axis of an elliptical
Gaussian fit to the cross-correlation peak. A directionality equal to zero
indicates isotropic features (Fig. S11(a)), and a value increasing to unity
implies increasing anisotropy. The PL contrast is given as $1-P_{10}/P_{90}$,
where $P_{x}$ is the $x^{th}$ percentile of the PL distribution of each
quenched image (Fig. S11(b)).
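Both metrics can be computed directly from a quenched image. In the sketch below, the directionality is estimated from the second moments of the central autocorrelation peak rather than from an explicit elliptical Gaussian fit; this moment-based estimate is our simplification:

```python
import numpy as np

def pl_contrast(pl_map):
    """PL contrast 1 - P10/P90 of the PL distribution."""
    p10, p90 = np.percentile(pl_map, [10, 90])
    return 1.0 - p10 / p90

def directionality(pl_map, frac=0.5):
    """Directionality 1 - r_min/r_max of the central autocorrelation
    peak, estimated from second moments of the peak region."""
    m = pl_map - pl_map.mean()
    power = np.abs(np.fft.fft2(m)) ** 2          # Wiener-Khinchin
    ac = np.fft.fftshift(np.fft.ifft2(power).real)
    cy, cx = ac.shape[0] // 2, ac.shape[1] // 2
    w = min(ac.shape) // 8                       # central window half-width
    win = ac[cy - w:cy + w + 1, cx - w:cx + w + 1]
    ys, xs = np.nonzero(win >= frac * win.max())  # peak-region pixels
    lam = np.linalg.eigvalsh(np.cov(np.vstack([xs, ys])))
    return 1.0 - np.sqrt(lam[0] / lam[-1])       # 1 - r_min/r_max
```

A stripe pattern gives an elongated peak and a directionality near unity, while a pattern symmetric in both axes gives a directionality near zero.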
Apart from the film's magnetisation and $d_{\mathrm{NV}}$, we expect the
magnetic field distribution to be heavily influenced by the domain periodicity.
We show here that the quench imaging regimes put forward in the main paper
remain valid at different $P$, with appropriate scaling of $d_{\mathrm{NV}}$
and $I_{s}$. We define the scaled $d_{\mathrm{NV}}$ as
$d_{\mathrm{NV}}{}^{\prime}=d_{\mathrm{NV}}{}\times(P/P_{0})^{S_{d}}$, and the
scaled $I_{s}$ as $I_{s}^{\prime}=I_{s}/I_{s,0}\times(P/P_{0})^{S_{i}}$, where
$P_{0}=407\,\mathrm{nm}$ and $I_{s,0}=12.3\,\mathrm{mA}$ correspond to the
values for our sample [Ir($1\,\mathrm{nm}$)/ Co(1)/ Pt(1)]14. The scaling
factors $S_{d}$ and $S_{i}$ are empirically determined to be $-1$ and $-0.8$
$(\approx-\sqrt{2/3})$. The analytical derivation is, however, beyond the scope of the
the paper. The scaled directionality maps at varying $P$ are given in Figure
S12. We also include films studied by Gross et al. [21] and Rana et al. [23]
in this framework (Fig. S13). The framework is in good agreement with the work
of Gross et al. which observed isotropic PL features. In the study of Rana et
al., we are unable to resolve the directionality of the features observed.
However, we expect the quenched imaging regime to deviate from our framework
as our simulation model does not include exchange bias present in their film.
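The empirical rescaling can be expressed compactly; the reference values and exponents below are those given above:

```python
def scale_parameters(d_nv, I_s, P,
                     P0=407e-9, I_s0=12.3e-3, S_d=-1.0, S_i=-0.8):
    """Rescale (d_NV, I_s) for a film of domain period P so that the
    quench-imaging-regime diagram collapses onto the P0 reference."""
    d_scaled = d_nv * (P / P0) ** S_d
    I_scaled = (I_s / I_s0) * (P / P0) ** S_i
    return d_scaled, I_scaled
```

For $P=P_{0}$ the mapping reduces to the identity, up to the normalisation of $I_{s}$ by $I_{s,0}$; doubling the period halves the scaled $d_{\mathrm{NV}}$ since $S_{d}=-1$.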
Figure S10: Quenched image autocorrelation and directionality. (a) Quenched
images at $d_{\mathrm{NV}}{}=12\,\mathrm{nm}$ and $I_{s}=1.6\,\mathrm{mA}$ and
(b) at $d_{\mathrm{NV}}{}=78\,\mathrm{nm}$ and $I_{s}=10.5\,\mathrm{mA}$.
(Scale bar: $\mathrm{1\mu m}$) (c, d) Autocorrelation maps of panels a, b,
respectively. Quenched maps with low directionality display a cross-
correlation peak with circular symmetry. When the directionality increases,
the peak becomes elliptical.

Figure S11: Details on Quenched Imaging Regimes.
(a) The directionality of PL features and (b) the PL contrast of a quenched
image given as a function of $d_{\mathrm{NV}}$ and $I_{s}$.

Figure S12: Scaled
Quenched Imaging Regimes at Varying Domain Periodicity. The directionality of
PL features as a function of scaled $I_{s}$ ($I_{s}^{\prime}$) and scaled
$d_{\mathrm{NV}}$ ($d_{\mathrm{NV}}{}^{\prime}$) at domain periodicity, (a)
$P=200\,\mathrm{nm}$, (b) $P=400\,\mathrm{nm}$, (c) $P=600\,\mathrm{nm}$ and
(d) $P=800\,\mathrm{nm}$. The similar directionality picture indicates that
the quench imaging regimes remain valid across different $P$ with appropriate
scaling of $I_{s}$ and $d_{\mathrm{NV}}$.

Figure S13: Overview of Quenched
Imaging on Thin Films. Previous studies involving quenched imaging of thin
films are plotted on the scaled directionality map. The position on the map is
based on the $d_{\mathrm{NV}}$, $I_{s}$, and $P$ in each study.
## Appendix E NV Sensor Characterisation
We use a 3-axis Helmholtz coil to apply an external magnetic field
$\mathbf{B}$ at varying $\varphi$ and $\vartheta$, with a fixed field strength
$\lvert\mathbf{B}\rvert=1\,\mathrm{mT}$. We obtain the ODMR spectra by
recording the integrated PL intensity of the NV centre as we sweep the
microwave (MW) frequency. In the presence of a magnetic field, the ODMR spectrum
displays a splitting of the $\ket{m_{s}=+1}$ and $\ket{m_{s}=-1}$ states due
to the Zeeman effect (Fig. S14(a)). This splitting is proportional to the
projection of the magnetic field on the NV axis $\hat{u}_{\mathrm{NV}}$. The
ODMR spectrum is first obtained as a function of $\varphi$ while fixing
$\vartheta=90\,\mathrm{{}^{\circ}}$ (Fig. S14(b)). The Zeeman splitting is
maximum when $\varphi=\varphi_{\mathrm{NV}}$ which in our case is
$\varphi_{\mathrm{NV}}=93\pm 2\,\mathrm{{}^{\circ}}$. Next, we vary
$\vartheta$ while fixing $\varphi=\varphi_{\mathrm{NV}}$ (Fig. S14(c)).
Similarly, the maximum splitting occurs when
$\vartheta=\vartheta_{\mathrm{NV}}$ which we obtain to be
$\vartheta_{\mathrm{NV}}=60\pm 2\,\mathrm{{}^{\circ}}$.
We determine the NV-sample distance $d_{\mathrm{NV}}$ by measuring, with our
diamond tip, the stray field emitted across the edge of a [Ta/CoFeB/MgO] strip.
The out-of-plane magnetic hysteresis is characterised by a MagVision Magneto-
Optical Kerr Effect (MOKE) microscope (Vertisis Technology) in the polar
sensitivity mode and shows that the magnetisation remains saturated at
remanence (Fig. S15). The Zeeman shift of the ODMR spectrum across the edge at
remanence is given in Figure S16(a) (blue dashed curve) and is fitted (red
curve) following the procedure devised by Hingant et al. [40] to retrieve the
$d_{\mathrm{NV}}$. We repeat the measurement numerous times along the edge at
$50\,\mathrm{nm}$ spacing, and the extracted values are averaged (Fig.
S16(b)). The diamond tip used in this work has a
$d_{\mathrm{NV}}=77.5\pm 3\,\mathrm{nm}$.
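In the thin-film limit, the stray field of a straight edge of a uniformly out-of-plane magnetised strip is equivalent to that of an effective line current $I_{s}$ flowing along the edge, which underlies the fitting procedure of Hingant et al. [40]. Below is a minimal sketch of such a calibration under that approximation; it is our own implementation with illustrative parameter values (not those of the actual calibration strip), a simplified in-plane NV projection, and a grid search standing in for the least-squares fit:

```python
import numpy as np

GAMMA_NV = 28.0e9                 # NV gyromagnetic ratio, Hz/T
MU0 = 4e-7 * np.pi                # vacuum permeability, T*m/A

def edge_shift(x, d, I_s=1.5e-3, x0=0.0, theta=np.radians(60)):
    """Zeeman shift (Hz) of one ODMR line scanned across a strip edge
    at x0, for an NV at height d. The edge of the out-of-plane
    magnetised strip is modelled as a line current I_s (thin-film
    limit); the NV axis is taken in the scan plane at tilt theta."""
    dx = x - x0
    r2 = dx ** 2 + d ** 2
    bx = MU0 * I_s / (2 * np.pi) * d / r2    # field circling the edge
    bz = -MU0 * I_s / (2 * np.pi) * dx / r2
    b_par = bx * np.sin(theta) + bz * np.cos(theta)
    return GAMMA_NV * np.abs(b_par)

# Recover d_NV from a synthetic, noiseless line scan by a simple
# least-squares grid search over candidate heights:
x = np.linspace(-1e-6, 1e-6, 201)
data = edge_shift(x, 77e-9)
ds = np.linspace(40e-9, 120e-9, 8001)
cost = [((edge_shift(x, d) - data) ** 2).sum() for d in ds]
d_fit = ds[int(np.argmin(cost))]          # ~77 nm
```

Repeating such fits at many positions along the edge and averaging, as described above, yields the quoted $d_{\mathrm{NV}}$ and its spread.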
Figure S14: Axis measurements of the NV probe. (a) ODMR spectrum obtained
under an external magnetic field. We can observe the splitting of the
$\ket{m_{s}=+1}$ and $\ket{m_{s}=-1}$ due to the Zeeman effect. We measure a
splitting of $54\,\mathrm{MHz}$ which corresponds to a field felt by the NV of
about $1\,\mathrm{mT}$. (b) Measurement of $\varphi_{\mathrm{NV}}$. We fix
$\vartheta$ and vary $\varphi$; when the ODMR splitting is maximum,
$\varphi=\varphi_{\mathrm{NV}}$. (c) Measurement of $\vartheta_{\mathrm{NV}}$.
We fix $\varphi$ and vary $\vartheta$;
$\vartheta=\vartheta_{\mathrm{NV}}$ when the ODMR splitting reaches its
maximum value.

Figure S15: Calibration Strip Characterisation. (a) Intensity
of polar MOKE signal of a [Ta/CoFeB/MgO] strip as a function of an out-of-
plane magnetic field. (b) Topography image of [Ta/CoFeB/MgO] calibration strip
(scale bar: $10\,\mathrm{\mu m}$) Figure S16: Calibration of the NV probe. (a)
This plot shows the topography of the edge of a CoFeB magnetic
stripe (brown) and the measured Zeeman shift of the NV ODMR spectrum
(blue) due to the magnetic field emitted at the edge of the stripe. The value
of $d_{\mathrm{NV}}$ is deduced from the fit (red) to the experimentally
measured Zeeman shift. (b) Histogram of all the NV-sample
distances we measured. The average value is
$d_{\mathrm{NV}}=77.5\,\mathrm{nm}$ and the standard deviation is
$\sigma_{d_{\mathrm{NV}}}\simeq 3\,\mathrm{nm}$.
## Appendix F Quenched Imaging with [111] NV Centre
Figure S17: Quenching with [111] NV centres. Quenching maps obtained with NVs
with $\vartheta_{\mathrm{NV}}{}=0^{\circ}$ on the same area as Figure 2 in the
main text (scale bar: $1\,\mathrm{\mu m}$). The different maps correspond to
(a) $I_{s}=3.1\,\mathrm{mA}$, $d_{\mathrm{NV}}{}=30\,\mathrm{nm}$, (b)
$I_{s}=9.2\,\mathrm{mA}$, $d_{\mathrm{NV}}{}=84\,\mathrm{nm}$, and (c)
$I_{s}=15.4\,\mathrm{mA}$, $d_{\mathrm{NV}}{}=162\,\mathrm{nm}$, which
correspond to the parameters of Figure 2(b-d) of the main text. (d-f) 2D
cross-correlation maps of the images in (a-c) with the domain boundaries. The
negative correlation at zero displacement indicates a low PL at the boundary.
As the displacement increases, the correlation becomes positive, corresponding to
the bright PL observed within the domains.
The discussion in the main text focuses on NV centres found in commercially
available (100) diamond tips. Quenched maps obtained with NVs with
$\vartheta_{\mathrm{NV}}{}=54.7^{\circ}$ on samples with out-of-plane magnetic
anisotropy give rise to different imaging regimes, as explained in the main
text. Notably, there is a range of $d_{\mathrm{NV}}$ and $I_{s}$ where the
quenched maps directionally highlight the domain boundaries. The
directionality is due to the non-zero angle between the NV axis and the
magnetic anisotropy. Hence, this effect is not present when using NV centres
pointing along the [111] axis (i.e. $\vartheta_{\mathrm{NV}}{}=0^{\circ}$),
hosted in (111)-oriented diamond tips, which have been recently reported [41].
The simulations shown in Figure S17(a-c) were performed at the same
$d_{\mathrm{NV}}$ and $I_{s}$ as Figure 2(b-d) of the main text, respectively.
At low magnetisation (Fig. S17(a)), the NV PL is quenched along the domain
boundaries (cross-correlation in Fig. S17(d)), resulting in a bright image
with dark outlines. At higher magnetisation (Fig. S17(b)) the quenching still
traces the domain boundaries (cross-correlation in Fig. S17(e)), but also
expands further within the domain area. The thin bright lines correspond to
the innermost areas of the domains, where the magnetic field is mainly
orthogonal to the sample surface and thus aligned with the NV axis. In Figure
S17(c), the combination of large magnetisation and high $d_{\mathrm{NV}}$
gives an image similar to Figure S17(b), but with lower resolution.
## Appendix G Domain Coverage Estimation
In order to estimate the percentage of domain boundaries covered by the
simulated quenched maps, we first binarize the selected MFM images with Otsu
thresholding [42] (a portion is shown in Fig. S18(a)) and detect the boundaries
with the Canny algorithm. The quenched maps are simulated from the stray
fields obtained with Mumax3, as explained above. We first simulate the
quenched maps with NVs at $\vartheta_{\mathrm{NV}}{}=54.7^{\circ}$ and
different $\varphi_{\mathrm{NV}}{}$ (Fig. S18(b) for
$\varphi_{\mathrm{NV}}{}=0^{\circ}$). The single images are then registered to
the domain boundaries with the ECC image alignment algorithm [43], in order to
compensate for the small shift from the domain boundaries induced by the non-
zero angle between the magnetic anisotropy and $\vartheta_{\mathrm{NV}}$. The
images are then combined as explained in the main text (Fig. S18(c) for $N=4$
and $\varphi_{max}=180^{\circ}$). Additionally, we simulate a quenched map
with an NV at $\vartheta_{\mathrm{NV}}{}=0^{\circ}$, which exhibits no shift
and no $\varphi_{\mathrm{NV}}$-dependence (Fig. S18(d)). The images are then
binarized using local Gaussian thresholding (Fig. S18(e-g)). The binarized
images are multiplied by the domain-boundary map and integrated to yield the
coverage. For experimental quenched maps, the above estimation protocol
includes additional thinning and dilation of the binarized images before
multiplication. The thinning and dilation process ensures that local
deviations between the binarized experimental quenched maps obtained via SNVM
and the domain outline retrieved from MFM images are accounted for in the coverage
estimation. These local deviations are largely due to experimental map
nonlinearity, suboptimal image threshold and registration conditions, and MFM
perturbation. To estimate the experimental coverage error, we use the three
smallest structuring elements – a 3-pixel-wide diamond, a 3-pixel-wide square, and a
5-pixel-wide diamond – for binary dilation. The coverage value is given by the
estimation protocol using a 3 pixel wide square dilation structuring element
while the coverage bounds are given by the diamond structuring elements.
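The final step of the protocol above – multiplying the binarised maps by the boundary map and integrating – can be sketched in a few lines. The Otsu, Canny, ECC, and thresholding stages rely on standard image-processing libraries and are not reproduced here; `boundary_coverage` and its argument names are illustrative, not part of the original analysis code:

```python
import numpy as np

def boundary_coverage(quenched_binary, boundaries):
    """Fraction of domain-boundary pixels covered by the binarised
    quenched map: multiply the two masks pixel-wise and integrate."""
    q = np.asarray(quenched_binary, dtype=bool)
    b = np.asarray(boundaries, dtype=bool)
    return (q & b).sum() / b.sum()
```

For experimental maps, the binarised quenched image would additionally be thinned and dilated (e.g. with a 3-pixel-wide square structuring element) before the multiplication, as described above.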
Figure S18: Estimation of the domain coverage. (a) Portion of the binarised
MFM scan (background) and domain edges (red pixels) obtained via Canny edge
detection (scale bar: $1\,\mathrm{\mu m}$). Quenched maps of the same area,
where (b) is the map taken with an NV with
$\vartheta_{\mathrm{NV}}{}=54.7^{\circ}$ and
$\varphi_{\mathrm{NV}}{}=0^{\circ}$, (c) is the reconstructed image with $N=4$
at $\varphi_{max}=180^{\circ}$ (see main text), and (d) is acquired with an NV
with $\vartheta_{\mathrm{NV}}{}=0^{\circ}$. (e-g) are the images obtained by
binarising (b-d), respectively.
## Appendix H Directionality, image reconstruction and boundary coverage
Figure S19: Directionality and image reconstruction (a,b) Simulated quenched
maps with $\varphi_{\mathrm{NV}}{}=0^{\circ}$ and
$\varphi_{\mathrm{NV}}{}=90^{\circ}$ and (c) MARe image with $N=10$ and
$\varphi_{max}=180^{\circ}$. (Scale bar: $1\,\mathrm{\mu m}$). (d-f)
2-dimensional cross-correlations between the quenched images in (a-c) and the
domain boundaries. For single images (d,e), the cross-correlation shows a
positive correlation shifted from the origin in the direction opposite to
$\hat{u}_{\mathrm{NV},\varphi}$, the projection of $\hat{u}_{\mathrm{NV}}$ in
the sample plane. This indicates that the bright outlines are highly
directional and do not occur exactly on top of the domain boundaries. On the
contrary, the cross-correlation of the MARe map (f) is isotropic, indicating
that most of the boundaries are uniformly covered. (g) Simulations of domain
boundary coverage to include $N$ beyond $N=4$.
We can further analyze the properties of quenched maps by studying the
2-dimensional cross-correlation between the maps and the domain boundaries. We
do this by simulating the quenched maps starting from the MFM image (Fig.
S9(a-b)), as described above and in the main text. We then calculate the
cross-correlation between the maps and the domain boundaries obtained from the
MFM map with the Canny edge detection algorithm. The cross-correlation plots
(Fig. S19(d-e)) show a non-uniform positive correlation peak which is shifted
from the origin. The shift is opposite to the direction of
$\hat{u}_{\mathrm{NV},\varphi}$. This indicates that the directional features
on average do not occur on top of the domain boundaries. The shift originates
from the non-zero tilt of $\hat{u}_{\mathrm{NV}}$ from the normal to the
sample plane. This has important consequences for the MARe scheme, since the
shift needs to be compensated with image registration algorithms before
combining the images (Sec. G). In Figure S19(c) we show a MARe image with
$N=10$ and $\varphi_{max}=180^{\circ}$. The corresponding 2D cross-correlation
(Fig. S19(f)) displays a circularly symmetric peak, indicating that the
reconstructed map is non-directional.
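A minimal numpy-only sketch of such a 2D cross-correlation, computed circularly via the FFT, is given below; the function name and the zero-mean convention are our assumptions, not the exact implementation used for Fig. S19:

```python
import numpy as np

def cross_correlation_2d(image, boundaries):
    """Zero-mean circular 2D cross-correlation via the FFT.
    A peak shifted away from the centre (zero displacement) signals
    directional bright outlines relative to the domain boundaries."""
    a = image - image.mean()
    b = boundaries - boundaries.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return np.fft.fftshift(corr)  # move zero displacement to the centre
```

For an isotropic (non-directional) map such as the MARe reconstruction, the peak sits at the centre of the returned array; a directional single-$\varphi_{\mathrm{NV}}$ map displaces it.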
We also study the option of using more than four images ($N>4$) to reconstruct
the domain morphology. We show the result of the simulations in Figure S19(g).
As presented in the main text, $N=4$ achieves a peak coverage of about $0.97$
at $\varphi_{max}\approx 120^{\circ}$. The maximum coverage for $N=5$ is
$\approx 0.98$ at $\varphi_{max}\approx 180^{\circ}$. For $N>5$, the coverage
reaches a peak value of $\approx 1$.
## Appendix I Imaging with Minimal Perturbation
As explained in the main text and in previous studies [22, 21, 23], quenched
SNVM enables perturbation-free imaging of spin textures. We present here a
comparison of repeated quenched SNVM and MFM scans over the same area, which
highlight the non-perturbative advantage of quenched SNVM. We first obtain two
consecutive quenched images over an area on the sample (Fig. S20(a,b)), and
thereafter another two consecutive MFM images over the same area (Fig. S20(c,
d)). By comparing the quenched and MFM images, we observed areas (circled in
Fig. S20(a-e)) that remained unperturbed through consecutive quenched imaging (Fig.
S20(a,b)) but were subsequently perturbed by consecutive MFM scans (Fig.
S20(c,d)). In addition, the quenched image simulated (Fig. S20(e)) from the
MFM image in Figure S20(d) shows markedly different PL features compared to
experiments (Fig. S20(a,b)) at the vicinity of the highlighted areas (circled
in Figure S20(a), (b) and (e)), reinforcing the non-perturbative advantage of
quenched SNVM over conventional MFM. In our case, the MFM probe used for the
comparison is a low moment variant from Asylum Research, Oxford Instruments
(ASYMFMLM-R2). These observations are, however, not exhaustive and
require a statistical approach to determine the degree of perturbation
induced by MFM over quenched SNVM. A rigorous characterisation is highly non-
trivial and involves a vast parameter space including various magnetic
material parameters, different laser intensities utilised during quenched SNVM
and numerous low-moment probe options for MFM.
Figure S20: Evidence of non-perturbative imaging. Sequential study of the
domain morphology by (a,b) consecutive quenched SNVM imaging, followed by
(c,d) consecutive MFM. Areas of perturbation due to consecutive MFM imaging
are circled (c, d), while no visible changes are observed in the corresponding
areas in the consecutive quenched images (a, b). (e) Simulated quenched image
based on second MFM scan 2 (d) shows dissimilar PL features in the circled
vicinity as compared to experiments (a, b). (Scale bar: 500 nm)
## References
* Balasubramanian _et al._ [2008] G. Balasubramanian, I. Y. Chan, R. Kolesov, M. Al-Hmoud, J. Tisler, C. Shin, C. Kim, A. Wojcik, P. R. Hemmer, A. Krueger, T. Hanke, A. Leitenstorfer, R. Bratschitsch, F. Jelezko, and J. Wrachtrup, Nature 455, 648 (2008).
* Maletinsky _et al._ [2012] P. Maletinsky, S. Hong, M. S. Grinolds, B. Hausmann, M. D. Lukin, R. L. Walsworth, M. Loncar, and A. Yacoby, Nature Nanotechnology 7, 320 (2012).
* Tetienne _et al._ [2014a] J.-P. Tetienne, T. Hingant, J.-V. Kim, L. H. Diez, J.-P. Adam, K. Garcia, J.-F. Roch, S. Rohart, A. Thiaville, D. Ravelosona, and V. Jacques, Science 344, 1366 (2014a).
* Gross _et al._ [2017] I. Gross, W. Akhtar, V. Garcia, L. J. Martínez, S. Chouaieb, K. Garcia, C. Carrétéro, A. Barthélémy, P. Appel, P. Maletinsky, J.-V. Kim, J. Y. Chauleau, N. Jaouen, M. Viret, M. Bibes, S. Fusil, and V. Jacques, Nature 549, 252 (2017).
* Thiel _et al._ [2019] L. Thiel, Z. Wang, M. A. Tschudin, D. Rohner, I. Gutiérrez-Lezama, N. Ubrig, M. Gibertini, E. Giannini, A. F. Morpurgo, and P. Maletinsky, Science 364, 973 (2019).
* Rondin _et al._ [2014] L. Rondin, J.-P. Tetienne, T. Hingant, J.-F. Roch, P. Maletinsky, and V. Jacques, Reports on Progress in Physics 77, 056503 (2014).
* Tetienne _et al._ [2015] J.-P. Tetienne, T. Hingant, L. J. Martínez, S. Rohart, A. Thiaville, L. H. Diez, K. Garcia, J.-P. Adam, J.-V. Kim, J.-F. Roch, I. M. Miron, G. Gaudin, L. Vila, B. Ocker, D. Ravelosona, and V. Jacques, Nature Communications 6 (2015).
* Rondin _et al._ [2012] L. Rondin, J.-P. Tetienne, P. Spinicelli, C. Dal Savio, K. Karrai, G. Dantelle, A. Thiaville, S. Rohart, J.-F. Roch, and V. Jacques, Applied Physics Letters 100, 153118 (2012).
* Appel _et al._ [2016] P. Appel, E. Neu, M. Ganzhorn, A. Barfuss, M. Batzer, M. Gratz, A. Tschöpe, and P. Maletinsky, Review of Scientific Instruments 87, 063703 (2016).
* Degen [2008] C. Degen, Nature Nanotechnology 3, 643 (2008).
* Dovzhenko _et al._ [2018] Y. Dovzhenko, F. Casola, S. Schlotter, T. X. Zhou, F. Büttner, R. L. Walsworth, G. S. D. Beach, and A. Yacoby, Nature Communications 9, 1 (2018).
* Tetienne _et al._ [2012] J. P. Tetienne, L. Rondin, P. Spinicelli, M. Chipaux, T. Debuisschert, J. F. Roch, and V. Jacques, New Journal of Physics 14 (2012).
* Tetienne _et al._ [2014b] J.-P. Tetienne, T. Hingant, L. Rondin, S. Rohart, A. Thiaville, E. Jué, G. Gaudin, J.-F. Roch, and V. Jacques, Journal of Applied Physics 115, 17D501 (2014b).
* Kosub _et al._ [2017] T. Kosub, M. Kopte, R. Hühne, P. Appel, B. Shields, P. Maletinsky, R. Hübner, M. O. Liedke, J. Fassbender, O. G. Schmidt, and D. Makarov, Nature Communications 8, 13985 (2017).
* Wörnle _et al._ [2019] M. S. Wörnle, P. Welter, Z. Kašpar, K. Olejník, V. Novák, R. P. Campion, P. Wadley, T. Jungwirth, C. L. Degen, and P. Gambardella, arXiv:1912.05287 (2019).
* Jenkins _et al._ [2019] A. Jenkins, M. Pelliccione, G. Yu, X. Ma, X. Li, K. L. Wang, and A. C. B. Jayich, Physical Review Materials 3, 083801 (2019).
* Appel _et al._ [2019] P. Appel, B. J. Shields, T. Kosub, N. Hedrich, R. Hübner, J. Faßbender, D. Makarov, and P. Maletinsky, Nano Letters 19, 1682 (2019).
* Sun _et al._ [2020] Q.-C. Sun, T. Song, E. Anderson, T. Shalomayeva, J. Förster, A. Brunner, T. Taniguchi, K. Watanabe, J. Gräfe, R. Stöhr, X. Xu, and J. Wrachtrup, arXiv:2009.13440 (2020).
* Hedrich _et al._ [2020] N. Hedrich, K. Wagner, O. V. Pylypovskyi, B. J. Shields, T. Kosub, D. D. Sheka, D. Makarov, and P. Maletinsky, arXiv:2009.08986 (2020).
* Wörnle _et al._ [2020] M. S. Wörnle, P. Welter, M. Giraldo, T. Lottermoser, M. Fiebig, P. Gambardella, and C. L. Degen, arXiv:2009.09015 (2020).
* Gross _et al._ [2018] I. Gross, W. Akhtar, A. Hrabec, J. Sampaio, L. J. Martínez, S. Chouaieb, B. J. Shields, P. Maletinsky, A. Thiaville, S. Rohart, and V. Jacques, Phys. Rev. Materials 2, 024406 (2018).
* Akhtar _et al._ [2019] W. Akhtar, A. Hrabec, S. Chouaieb, A. Haykal, I. Gross, M. Belmeguenai, M. Gabor, B. Shields, P. Maletinsky, A. Thiaville, S. Rohart, and V. Jacques, Phys. Rev. Applied 11, 034066 (2019).
* Rana _et al._ [2020] K. G. Rana, A. Finco, F. Fabre, S. Chouaieb, A. Haykal, L. D. Buda-Prejbeanu, O. Fruchart, S. Le Denmat, P. David, M. Belmeguenai, T. Denneulin, R. E. Dunin-Borkowski, G. Gaudin, V. Jacques, and O. Boulle, Phys. Rev. Applied 13, 044079 (2020).
* Wickenbrock _et al._ [2016] A. Wickenbrock, H. Zheng, L. Bougas, N. Leefer, S. Afach, A. Jarmola, V. M. Acosta, and D. Budker, Applied Physics Letters 109, 053505 (2016).
* Zheng _et al._ [2020] H. Zheng, Z. Sun, G. Chatzidrosos, C. Zhang, K. Nakamura, H. Sumiya, T. Ohshima, J. Isoya, J. Wrachtrup, A. Wickenbrock, and D. Budker, Phys. Rev. Applied 13, 044023 (2020).
* Finco _et al._ [2020] A. Finco, A. Haykal, R. Tanos, F. Fabre, S. Chouaieb, W. Akhtar, I. Robert-Philip, W. Legrand, F. Ajejas, K. Bouzehouane, N. Reyren, T. Devolder, J.-P. Adam, J.-V. Kim, V. Cros, and V. Jacques, arXiv:2006.13130 [cond-mat] (2020).
* Van Der Sar _et al._ [2015] T. Van Der Sar, F. Casola, R. Walsworth, and A. Yacoby, Nature Communications 6, 1 (2015).
* Zhou _et al._ [2017] T. X. Zhou, R. J. Stöhr, and A. Yacoby, Applied Physics Letters 111, 163106 (2017).
* Doherty _et al._ [2012] M. W. Doherty, F. Dolde, H. Fedder, F. Jelezko, J. Wrachtrup, N. B. Manson, and L. C. L. Hollenberg, Physical Review B 85, 205203 (2012).
* Doherty _et al._ [2013] M. W. Doherty, N. B. Manson, P. Delaney, F. Jelezko, J. Wrachtrup, and L. C. Hollenberg, Physics Reports 528, 1 (2013).
* Doherty _et al._ [2011] M. W. Doherty, N. B. Manson, P. Delaney, and L. C. L. Hollenberg, New Journal of Physics 13 (2011).
* Gruber _et al._ [1997] A. Gruber, A. Dräbenstedt, C. Tietz, L. Fleury, J. Wrachtrup, and C. v. Borczyskowski, Science 276, 2012 (1997).
* Moreau-Luchaire _et al._ [2016] C. Moreau-Luchaire, C. Moutafis, N. Reyren, J. Sampaio, C. A. F. Vaz, N. Van Horne, K. Bouzehouane, K. Garcia, C. Deranlot, P. Warnicke, P. Wohlhüter, J.-M. George, M. Weigand, J. Raabe, V. Cros, and A. Fert, Nature Nanotechnology 11, 444 (2016).
* Soumyanarayanan _et al._ [2017] A. Soumyanarayanan, M. Raju, A. L. Gonzalez Oyarce, A. K. C. Tan, M.-Y. Im, A. P. Petrović, P. Ho, K. H. Khoo, M. Tran, C. K. Gan, F. Ernult, and C. Panagopoulos, Nature Materials 16, 898 (2017).
* Kazakova _et al._ [2019] O. Kazakova, R. Puttock, C. Barton, H. Corte-León, M. Jaafar, V. Neu, and A. Asenjo, Journal of Applied Physics 125, 060901 (2019).
* Vansteenkiste _et al._ [2014] A. Vansteenkiste, J. Leliaert, M. Dvornik, M. Helsen, F. Garcia-Sanchez, and B. Van Waeyenberge, AIP Advances 4, 107133 (2014).
* Robledo _et al._ [2011] L. Robledo, H. Bernien, T. van der Sar, and R. Hanson, New Journal of Physics 13, 025013 (2011).
* Stefan [2020] L. Stefan, _Scanning magnetometry with single-spin sensors_ , Ph.D. thesis, University of Bristol (2020).
* Woo _et al._ [2016] S. Woo, K. Litzius, B. Krüger, M.-Y. Im, L. Caretta, K. Richter, M. Mann, A. Krone, R. M. Reeve, M. Weigand, P. Agrawal, I. Lemesh, M.-A. Mawass, P. Fischer, M. Kläui, and G. S. D. Beach, Nature Materials 15, 501 (2016).
* Hingant _et al._ [2015] T. Hingant, J.-P. Tetienne, L. J. Martínez, K. Garcia, D. Ravelosona, J.-F. Roch, and V. Jacques, Physical Review Applied 4, 014003 (2015).
* Rohner _et al._ [2019] D. Rohner, J. Happacher, P. Reiser, M. A. Tschudin, A. Tallaire, J. Achard, B. J. Shields, and P. Maletinsky, Applied Physics Letters 115, 192401 (2019).
* Otsu [1979] N. Otsu, IEEE Transactions on Systems, Man, and Cybernetics 9, 62 (1979).
* Evangelidis and Psarakis [2008] G. D. Evangelidis and E. Z. Psarakis, IEEE Transactions on Pattern Analysis and Machine Intelligence 30, 1858 (2008).
# Where binary neutron stars merge: predictions from IllustrisTNG
Jonah C. Rose Department of Astronomy, University of Florida, Gainesville, FL
32611, USA Paul Torrey Department of Astronomy, University of Florida,
Gainesville, FL 32611, USA K.H. Lee Department of Physics, University of
Florida, Gainesville, FL 32611, USA I. Bartos Department of Physics,
University of Florida, Gainesville, FL 32611, USA
(Received June 1, 2019; Revised January 10, 2019)
###### Abstract
The rate and location of Binary Neutron Star (BNS) mergers are determined by a
combination of the star formation history and the Delay Time Distribution
(DTD) function. In this paper, we couple the star formation rate histories
(SFRHs) from the IllustrisTNG model to a series of varied assumptions for the
BNS DTD to make predictions for the BNS merger host galaxy mass function.
These predictions serve two purposes: (i) in the near term, to inform BNS
merger event follow-up strategy by identifying where most BNS merger events
are expected to occur and (ii) in the long term, to constrain the DTD for BNS
merger events once the host galaxy mass function is observationally well
determined. From our fiducial model analysis, we predict that 50% of BNS
mergers will occur in host galaxies with stellar mass between
$10^{10}-10^{11}$ $M_{\odot}$, 68% between $4\times 10^{9}-3\times 10^{11}$
$M_{\odot}$, and 95% between $4\times 10^{8}-2\times 10^{12}$ $M_{\odot}$. We
find that the details of the DTD employed do not have a strong effect on the
peak of the host mass function. However, varying the DTD provides enough
spread that the true DTD can be determined from a sufficient number of electromagnetic
observations of BNS mergers. Knowing the true DTD can help us determine the
prevalence of BNS systems formed through highly eccentric and short separation
fast-merging channels and can constrain the dominant source of r-process
material.
neutron star merger, gravitational waves, methods: numerical, stars: neutron,
binaries: close
††journal: AJ
## 1 Introduction
Since the first discovery of gravitational waves by LIGO (Aasi et al., 2015),
a growing number of compact object mergers have been detected. To date, two
detections have been confirmed as binary neutron star (BNS) mergers (Abbott et
al., 2017a, 2020). Of these two, the BNS event GW170817 was detected across
the electromagnetic spectrum (Abbott et al., 2017b), beginning the age of
multi-messenger astronomy.
Within the next few years we expect tens of new BNS merger events to be
observed by LIGO, Virgo (Acernese et al., 2014) and KAGRA (Akutsu et al.,
2019), broadening our understanding of BNS systems and their host galaxies
Abbott et al. (2018). Developing a clearer understanding of the link between
BNS merger events and their host galaxies is useful for multiple reasons in
both the short and long term.
In the short term, having clear predictions for the BNS host galaxy mass
function could inform follow-up strategies for future BNS merger events
detected by LIGO/Virgo/KAGRA. Locating the electromagnetic counterpart of GW
events is difficult owing to narrowly peaked observability windows and large
localization areas (Smartt et al., 2017; Metzger & Berger, 2012). GW
localizations can extend tens-to-hundreds of square degrees, making them
impractical to completely cover in a reasonable time after the initial event
with most telescopes (Gehrels et al., 2016; Bartos et al., 2013, 2015, 2018,
2019a). Long-term radio emission could allow sufficient time for follow-up
observations, but this will only be possible for nearby events within dense
circum-merger media (Bartos et al., 2019b; Lee et al., 2020; Grandorf et al.,
2020).
GW follow-up strategies have taken two approaches to search the localization
area more efficiently: covering the entire area or targeting galaxies (e.g.
Bartos et al., 2014; Arcavi et al., 2017; Chan et al., 2018; Antier et al.,
2020). Covering the entire localization area increases the likelihood of
imaging the correct host galaxy, but risks missing the transient owing to the
limited exposure times. Soares-Santos et al. (2017) used this method to
successfully locate the kilonova after GW170817 by covering 80.7% of the
probability weighted localization area. In contrast, targeted follow-ups use
galaxy catalogs to preferentially search galaxies based on select criteria
(e.g. galaxy blue luminosity), potentially reducing the required number of
pointings by a factor of 10 to 100 and increasing exposure times (Gehrels et
al., 2016; Ducoin et al., 2020). This strategy was used in the first
successful detection of the optical counterpart of GW170817 (Coulter et al.,
2017). However, it is more likely that targeted strategies will miss the event
if the BNS merger takes place in a less-massive galaxy. Efficient follow-up
strategies are important for maximising the chance of identifying the
electromagnetic counterpart with limited observations. One way to achieve this
is to build a clearer understanding of the expected host galaxy mass function
for BNS mergers.
Longer term, the link between BNS merger events and their host galaxies can
help determine the dominant formation channel of $r$-process material by
constraining the true BNS Delay Time Distribution (DTD) (Marchant et al.,
2016; Barrett et al., 2018; Mapelli et al., 2019; Santoliquido et al., 2020;
McCarthy et al., 2020). Specifically, while core-collapse supernovae (SNe) and
BNS mergers have been proposed as $r$-process formation channels, the dominant
$r$-process production channel must be able to recreate the observed
decreasing trend in Eu/Fe vs Fe/H (Matteucci et al., 2015). BNS mergers must
produce $r$-process elements in less than 100 Myr to dominate $r$-process
element production (Hotokezaka et al., 2018; Côté et al., 2017), and in less
than 1 Myr – with a steep cutoff slope – to be the source of all $r$-process
material (Matteucci et al., 2015). While it is possible for BNS systems to
merge in this time (Safarzadeh et al., 2019b), they require highly eccentric
orbits from high-velocity kicks or low initial separation from case BB mass
transfer, both of which may not occur in BNS formation (Tauris et al., 2017).
These models also predict a shallower DTD based on the current understanding
of BNS formation channels (Giacobbo & Mapelli, 2019; Simonetti et al., 2019;
Safarzadeh & Berger, 2019).
Current stellar population synthesis models suggest BNS DTDs are best
modelled by power law distributions with an exponent between -1 and -1.5
(Simonetti et al., 2019) and the minimum time from creation of the binary
system to merger ($t_{\mathrm{min}}$) between 1 Myr to 1 Gyr (e.g. Simonetti
et al., 2019; Safarzadeh et al., 2019b). These models assert that the main
formation channel that forms BNS systems begin with two OB stars that are
close enough to undergo mass transfer (Tauris et al., 2017; Giacobbo &
Mapelli, 2018; Safarzadeh et al., 2019b). The rest of the systems are born
through so-called fast-merging channels where a binary system forms with
either a high eccentricity through large natal kicks, or with a small initial
separation through unstable case BB mass transfer (Tauris et al., 2017). If
the BNS DTD could be observationally constrained, it would not only shed light
on the physical formation channels (fast-merging vs. OB star mass transfer),
but could also help constrain progenitor metallicity, common-envelope
efficiency, natal kicks, mass ratio, and initial binary separation through
comparisons with population synthesis codes (Giacobbo & Mapelli, 2018;
Belczynski et al., 2018; Dominik et al., 2012). Constraining the BNS DTD may
be one of the best and most practical ways to constrain the physical origin
and implications of BNS merger events.
Taken over a whole galaxy, the rate of BNS merger events can be determined by
convolving the DTD with the star formation rate history (SFRH). Metallicity is
also accounted for in some models, but has been found to play a minor role in
influencing the DTD (Mapelli et al., 2019; Giacobbo & Mapelli, 2019; Côté et
al., 2017; Giacobbo & Mapelli, 2018). Previous studies have measured the BNS
merger rate using SFRHs derived from EAGLE and Illustris cosmological
simulations (Artale et al., 2019; Mapelli et al., 2018, 2019), the FIRE zoom-
in simulation (Lamberts et al., 2018), dark matter only simulations (Marassi
et al., 2019; Cao et al., 2018), or from semi-analytical models (Adhikari et
al., 2020; Toffano et al., 2019) with population synthesis codes or an assumed
DTD. Each of these models provides a different SFRH for the galaxies in that
simulation, which yields a different distribution of BNS mergers given the
same DTD.
In this paper, we use the IllustrisTNG simulations to make predictions for the
BNS merger host galaxy mass function. This extends the work presented in
Mapelli et al. (2018) and Artale et al. (2019) by focusing
on the variation introduced by changes in the assumed DTD.
Moreover, we consider here how in the future an observed BNS host galaxy mass
function could be used to constrain the real/underlying DTD. To do this, we
take as input the IllustrisTNG galactic SFRHs – which are known to match a
wide range of observed galaxy properties and galaxy scaling relations – and
employ varied assumptions about the BNS DTD. Our chosen SFRHs and DTDs allow
us to demonstrate the galactic masses at which we expect most BNS mergers to
occur, as well as to identify the level of variation that would be induced
based on changes to the BNS DTD.
The structure of this paper is as follows. In Section §2 we outline our
methods including a brief description of the IllustrisTNG simulations, our
adopted DTDs, and our methods for calculating the galaxy-by-galaxy BNS merger
rate. In §3 we present our main results including predictions for the BNS
merger host galaxy mass function and the sensitivity of this prediction to the
assumed DTD. In §4 we discuss the implications of our results. Finally, in §5
we summarise our findings and conclude.
## 2 Methods
In this paper, we make predictions for the BNS merger host galaxy mass
function by adopting SFRHs from cosmological simulations and DTDs from basic
stellar population synthesis models.
### 2.1 Delay Time Distributions
Generally, the current BNS merger rate for any galaxy is given by convolving
its SFRH with the appropriate DTD. The BNS merger rate for any collection of
material (e.g. galaxy, volume, etc.) is given by
$r(t)=\int_{0}^{t}\psi(\tau)\Gamma(t-\tau,\,Z)d\tau$ (1)
where $\psi$ is the star formation rate, $\Gamma(t-\tau,\,Z)$ is the DTD, and
the integration is performed from the Big Bang ($t=0$) to the time of
observation, $t$ (e.g. Maoz et al., 2012). The metallicity dependence, $Z$, in
$\Gamma$ is only present in some DTDs, otherwise the DTD takes the form
$\Gamma(t-\tau)$. In the case of cosmological galaxy formation simulations,
this can be reduced to a sum over contributions from all relevant stellar
populations
$r(t)=\sum_{j}M_{j}\Gamma(t-t_{j},\,Z_{j})$ (2)
where the sum is performed over all stars (or stellar populations) in the
region of interest (e.g. within a specific galaxy), $M_{j}$ is the mass of
each stellar particle or stellar population, $t_{j}$ is birth time of that
stellar population such that $t-t_{j}$ is the age of the stellar population,
and $Z_{j}$ is the metallicity of the stellar population. For any cosmological
galaxy formation simulation, the BNS merger rate is easily evaluated once a
BNS DTD is specified.
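Equation 2 is straightforward to evaluate over a simulation's stellar populations. A minimal sketch follows; the function and argument names are illustrative, and `dtd` stands for any callable $\Gamma(t-t_{j},Z_{j})$:

```python
import numpy as np

def bns_merger_rate(t, masses, birth_times, metallicities, dtd):
    """Eq. (2): r(t) = sum_j M_j * Gamma(t - t_j, Z_j), summed over
    all stellar populations in the region of interest (e.g. one galaxy)."""
    ages = t - birth_times  # age of each stellar population at time t
    return float(np.sum(masses * dtd(ages, metallicities)))
```

For a metallicity-independent DTD, `dtd` can simply ignore its second argument.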
Figure 1: (left) The two fiducial DTDs shown as the BNS merger rate vs time.
The BPASS DTD is split into two lines to show the range covered by the DTD as
the metallicity of the host star changes. (right) The normalized BNS merger
rate for the two fiducial DTDs as a function of the host galaxy stellar mass.
The solid line shows the merger rate given the power law DTD with an exponent
of s=-1 and $t_{\rm cut}$=0.01Gyr. The dashed line shows the merger rate given
the BPASS DTD. Both merger rates have been normalized individually such that
the total merger rate across the simulation for the given DTD is unity. The
shaded bands show the mass range which contain 50, 68, and 95 percent of the
mergers around the peak merger rate for the fiducial power law DTD. The arrow
points to the host galaxy mass of the only BNS merger with a detected
electromagnetic counterpart so far.
### 2.2 Power Law DTD
In this paper, we adopt two fiducial DTDs: (i) a simple parameterized power
law and (ii) the metallicity-dependent DTD from BPASS (Eldridge & Stanway,
2016). The power law DTD is given by
$\Gamma(t-t_{j})=\begin{cases}0&t-t_{j}\leq t_{\mathrm{cut}}\\\
\Gamma_{0}(t-t_{j})^{s}&t-t_{j}>t_{\mathrm{cut}}\end{cases}$ (3)
where $\Gamma_{0}$ is a normalization coefficient, $s$ is the power law index,
and $t_{\mathrm{cut}}$ is the minimum time/age before the first BNS merger
event occurs. Figure 1 shows our fiducial power law DTD ($s=-1$;
$t_{\mathrm{cut}}=10^{7}\,\mathrm{yrs}$). In addition to our fiducial power
law DTD, we also consider DTDs that have varied power law exponents ranging
from $s=-2$ to $s=2$ and cutoff times ranging from
$t_{\mathrm{cut}}=0.001\,\mathrm{Gyr}$ to $t_{\mathrm{cut}}=10\,\mathrm{Gyr}$.
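Equation 3 can be sketched directly in code; the function name and unit convention (ages in Gyr) are our choices for illustration:

```python
import numpy as np

def power_law_dtd(age, s=-1.0, t_cut=0.01, gamma0=1.0):
    """Eq. (3): zero for ages <= t_cut, Gamma_0 * age**s above it
    (ages and t_cut in Gyr; fiducial values s=-1, t_cut=0.01 Gyr)."""
    age = np.atleast_1d(np.asarray(age, dtype=float))
    rate = np.zeros_like(age)
    mask = age > t_cut
    rate[mask] = gamma0 * age[mask] ** s
    return rate
```

The normalization $\Gamma_{0}$ is left at unity here, since (as discussed below) only the relative distribution of merger events matters for this work.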
### 2.3 BPASS DTD
In addition to using a simple power law DTD, we adopt our second fiducial DTD
from BPASS (Eldridge & Stanway, 2016) as shown in Figure 1. BPASS is a stellar
population synthesis code which follows the evolution of a large suite of
varying binary stars (Eldridge et al., 2008; Eldridge & Stanway, 2016). The
critical feature of BPASS that is important for this paper is that the
simulated stellar population matches the observed population of binaries in
abundance along with supernovae progenitors and rates (Eldridge et al., 2008,
2013, 2015; Eldridge & Stanway, 2016). The remnants of supernovae can have
masses anywhere in the range between 0.1 and 300 $M_{\odot}$, allowing for more
realistic evolution of these systems. The simulations also encompass a wide
range of stellar metallicities, which have been shown to potentially affect
(albeit weakly) the DTD of BNS mergers (e.g. Mapelli et al., 2018). The final
time for the BNSs to merge in BPASS is then the sum of the progenitor stars’
evolution and the in-spiral time once the BNS system has formed (Eldridge &
Stanway, 2016). The tabulated BPASS BNS merger rates are a function of stellar
population age and metallicity, $\Gamma(t-t_{j},Z_{j})$, which can be employed
in conjunction with Equation 2 to determine the total BNS merger rate.
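The tabulated, metallicity-dependent rate enters the per-galaxy rate the same way as the power law DTD: as a mass-weighted sum over a galaxy's stellar populations. A minimal sketch, assuming Equation 2 has the form $r_i(t)=\sum_j M_j\,\Gamma(t-t_j, Z_j)$ implied by the text (the `dtd` callable and all argument names are hypothetical):

```python
import numpy as np

def galaxy_merger_rate(t, t_formed, masses, metallicities, dtd):
    """BNS merger rate of a single galaxy at cosmic time t: the
    mass-weighted sum of the DTD over its stellar populations,
    r_i(t) = sum_j M_j * Gamma(t - t_j, Z_j)."""
    delays = t - np.asarray(t_formed, dtype=float)
    rates = np.array([dtd(d, z) for d, z in zip(delays, metallicities)])
    return float(np.sum(np.asarray(masses, dtype=float) * rates))
```

Any DTD that accepts a delay and a metallicity can be passed in, so the same machinery serves both the power law and the BPASS tables.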
We note that the overall normalization for all of our DTDs ($\Gamma_{0}$, in
the case of the power law DTD) can be specified to match the expected global
rate of BNS mergers. However, its exact value is not important to the present
work as we are only interested in the relative/normalized distribution of BNS
merger events as a function of galaxy mass. We therefore normalize the total
BNS merger rate across the entire simulation box to unity for each DTD
individually. To achieve this, we divide the rate for an individual galaxy,
$j$, by the rate of the entire box for the given DTD, $\Gamma$. The normalized
form of Equation 2 becomes
$R_{i}(t)=\frac{r_{i}(t)}{\sum_{k}M_{k}\Gamma(t-t_{k},\,Z_{k})}$ (4)
where $r_{i}$ represents the BNS merger rate for an individual galaxy given by
equation 2, and the denominator sums over the rates of all galaxies ($k$) in
the simulation box.
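Because the denominator of Equation 4 is the summed rate of every galaxy in the box, the normalization reduces to dividing each galaxy's rate by the total. A minimal sketch (the input array is hypothetical; each entry stands for one galaxy's $r_i$ from Equation 2):

```python
import numpy as np

def normalized_rates(r):
    """Normalize per-galaxy BNS merger rates (Equation 4): divide each
    galaxy's rate r_i by the summed rate of every galaxy in the box,
    so the total merger rate across the simulation is unity."""
    r = np.asarray(r, dtype=float)
    return r / r.sum()
```

This is done separately for each DTD, so only the relative distribution of mergers across galaxy mass survives, as intended.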
We linearly interpolate across the ages and metallicities presented in
Eldridge & Stanway (2016). We do not extrapolate outside of the provided
metallicity values; all stars with $Z\leq 0.0001$ follow the DTD for
$Z=0.0001$ and all stars with $Z\geq 0.014$ follow the DTD for $Z=0.014$. The
BNS merger rate is then calculated using Equation 4. The BPASS DTD includes
information on natal kicks from the initial supernovae. For each supernova,
the kick velocity and direction are determined from Hobbs et al. (2005). For
more information, see Eldridge et al. (2011). There are no natal kicks
included in the calculation of the power law delay time distributions,
including the fiducial power law model. Thus, the power law DTDs are fully
specified with two parameters controlling (i) the time of BNS mergers onset
and (ii) the subsequent BNS merger rate evolution.
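The clamp-then-interpolate lookup described above can be sketched as follows. The grid values and the rate table below are dummy stand-ins for the real BPASS tables of Eldridge & Stanway (2016):

```python
import numpy as np

# Hypothetical stand-in for the tabulated BPASS grid; the real table is
# taken from Eldridge & Stanway (2016). Numbers here are dummies.
z_grid = np.array([0.0001, 0.001, 0.004, 0.014])     # metallicities
age_grid = np.array([0.01, 0.1, 1.0, 10.0])          # ages in Gyr
rate_table = np.outer(1.0 / age_grid, np.ones(z_grid.size))

def bpass_dtd(age, z):
    """Clamp the metallicity to the tabulated range (no extrapolation:
    Z <= 0.0001 uses the Z = 0.0001 DTD, Z >= 0.014 uses Z = 0.014),
    then linearly interpolate the rate in metallicity and age."""
    z = float(np.clip(z, z_grid[0], z_grid[-1]))
    per_age = np.array([np.interp(z, z_grid, row) for row in rate_table])
    return float(np.interp(age, age_grid, per_age))
```

Clamping at the grid edges is what the text means by "we do not extrapolate outside of the provided metallicity values".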
### 2.4 The IllustrisTNG Simulation Suite
In order to calculate the BNS merger rates, we adopt SFRHs from the
IllustrisTNG simulation (Pillepich et al., 2018; Nelson et al., 2018a;
Marinacci et al., 2018; Springel et al., 2018; Naiman et al., 2018).
IllustrisTNG is a suite of cosmological hydrodynamical simulations which
includes a comprehensive galaxy
formation model (Pillepich et al., 2018; Weinberger et al., 2017) and builds
upon the original Illustris model (Vogelsberger et al., 2013; Torrey et al.,
2014). The critical feature of the IllustrisTNG simulations for this paper is
that the simulations have been shown to broadly reproduce the cosmic star
formation rate density and the redshift-dependent galaxy stellar mass function
(Pillepich et al., 2018; Springel et al., 2018; Nelson et al., 2018b). These
tests give confidence that both the global and galaxy-by-galaxy star formation
histories produced by IllustrisTNG reasonably match those of the Universe.
While the properties of any given simulated galaxy may differ from those of a
“real” galaxy, when averaged over a large enough sample the trends are well
matched. Specifically, we employ the TNG-100-1 simulation which includes a
$100\;\rm{Mpc}$ cubed volume with hundreds-of-thousands of galaxies with
varied SFRHs and self-consistently evolved metallicity distributions. All
stellar particles are used in Equation 4 to calculate the normalization, but
results are only presented for galaxies down to a stellar mass of
$10^{7}M_{\odot}$. We use the full information from the simulated galaxy
populations including the age and metallicity distribution to evaluate the BNS
merger rate.
## 3 Results
Figure 1 shows the DTDs (left) and host galaxy mass function (right) for our
two fiducial setups. The DTDs follow the same general form with mergers being
most likely shortly after $t_{\mathrm{cut}}$ then dropping with increasing
time. The main difference comes from the metallicity dependence in the BPASS
DTD. Owing to the similarities in the DTDs, the resulting BNS merger host
galaxy mass functions are remarkably similar. In particular, we find that the
peak of BNS mergers occurs in host galaxies with stellar masses around
$M_{*}=5\times 10^{10}$ $M_{\odot}$ and that half of the mergers occur in
galaxies with masses between $10^{10}M_{\odot}<M_{*}<10^{11}$ $M_{\odot}$.
Despite the significant added complexity of the BPASS models, the predicted
BNS host galaxy mass function is not significantly different from the power
law DTD. Additionally, the mass of the host galaxy from GW170817 (Blanchard et
al., 2017) is indicated with a downward facing arrow in the right panel of
Figure 1. While it is a sample size of one and should not be over-interpreted,
we note that GW170817’s host galaxy mass is in the peak region of expected BNS
host galaxy masses for both of our fiducial DTD models.
Figure 2: The individually normalized BNS merger rates as a function of
stellar mass for varied power law exponents (left) and varied
$t_{\mathrm{cut}}$ values (right). There is significant variation in the
predicted host galaxy mass functions when the DTD is perturbed from the
fiducial values.
Figure 2 shows the host galaxy mass functions for the power law DTDs with
varied power law exponents and $t_{\mathrm{cut}}$. For completeness, we
explore a large range of values for the power law exponents and cutoff times
$t_{\mathrm{cut}}$ which go beyond what is believed to be physically correct
(Côté et al., 2017). These values are included to demonstrate the variation in
host galaxy mass functions which would result from varied DTDs. Even with this
large spread of DTDs, we find that the results shown in Figure 1 are broadly
stable. In particular, despite the very significant variation in the DTDs, all
cases show a peak BNS merger rate that occurs in galaxies with a stellar mass
in the range $10^{10}$-$10^{11}$ $M_{\odot}$.
The DTD for which $s=0$ is of particular interest in Figure 2 because it
tracks the total stellar mass found in each mass bin – nearly independent of
star formation history.$^{1}$ ($^{1}$There is a dependence on the amount of stellar mass
that formed in the past $t_{\mathrm{cut}}=10\,\mathrm{Myr}$, but this is
expected to be only a $\sim 0.1\%$ correction.) Owing to the shape of the
simulated (and observed) galaxy stellar mass functions, the peak of the
stellar mass distribution is in galaxies with stellar masses between $10^{10}$
and $10^{11}$ $M_{\odot}$. Therefore, the majority of BNS mergers for this DTD
are also found in that mass range. Importantly, because the predicted host
galaxy mass function for a DTD with power law index of $s=0$ is nearly
independent of galactic formation history, this result is not very sensitive
to the detailed SFRHs predicted by IllustrisTNG, but only the shape of the
galaxy stellar mass function. Insofar as other simulations or models reproduce
the same galaxy stellar mass functions, their predicted host galaxy mass
function for $s=0$ would be nearly identical.
Changing the power law exponent to values away from $s=0$ introduces a direct
dependence on the assumed star formation history by placing emphasis either on
the older or younger stellar populations. Specifically, power law exponents
higher than $s=0$ lead to systematic changes in which the host galaxy mass
function is biased toward galaxies with older stellar populations. This naturally
results in a shift of the peak in the host galaxy mass function toward older,
more massive systems. Conversely, changing the power law exponent to values
lower than $s=0$ (which is the more physical case) biases the host galaxy mass
function toward systems with younger stellar populations. Thus, as the power
law exponent is decreased, there is an expectation that an increasing number
of BNS mergers occur in low mass galaxies with current or recent ongoing star
formation. For a fixed DTD, the detailed shape that we predict for the BNS
merger host galaxy mass function is dependent on the IllustrisTNG galaxy
stellar mass function and SFRHs, and therefore should be checked against other
models (e.g. EAGLE, SIMBA, etc.). However, owing to observational constraints
provided by the cosmic star formation rate density and redshift dependent
galaxy stellar mass functions, we do not expect these results to substantially
change.
Despite the stability in the peak of the host mass function across
different DTDs, there is still significant spread in the resulting host mass
functions at other masses. For example, when examining the BNS merger rate in
galaxies with a host mass near $10^{9}\,M_{\odot}$, the BNS merger rates differ by one dex
between the $s=2$ and $s=-2$ DTDs at a fixed $t_{\mathrm{cut}}$. There is an
even greater spread in the highest mass systems, where the merger rate differs
by two dex between the $s=2$ and $s=-2$ DTDs. A similar range in host mass
functions occurs across the different $t_{\mathrm{cut}}$ values at a fixed
value of $s$. The predicted variability in host galaxy mass functions suggests
that as GW BNS detections with EM follow-up observations mount, a careful
comparison of observed and predicted host galaxy mass functions could be used
to constrain the true DTD for BNS mergers. These results agree with Safarzadeh
& Berger (2019) and running KS-tests on our power law host mass functions also
results in $\mathcal{O}(1000)$ observations being required to determine a true
DTD.
When optimizing BNS merger event follow-up, a question arises of which
galaxies should be targeted first. While Figures 1 and 2 indicate that most
BNS mergers will occur in roughly Milky Way mass galaxies, Figure 3 shows the
host mass function normalized by the number of galaxies in each mass bin which
indicates the predicted number of BNS mergers per galaxy. Here, the host mass
function no longer peaks in the $10^{10}$-$10^{11}$ $M_{\odot}$ range, but
instead peaks at larger masses (in the $10^{12}$-$10^{13}$ $M_{\odot}$ range).
This indicates that while our analysis predicts that most BNS mergers will
occur in $\sim$ Milky Way mass galaxies, the highest likelihood of finding a
BNS merger based on a single observation still favors more massive systems,
simply because they have more mass. This conclusion is somewhat dependent on
the detailed prescription assumed for the BNS merger DTD. In particular,
while the fiducial case (see the magenta line in the left panel of Figure 3)
clearly peaks at the highest mass bin resolved in the IllustrisTNG volume, the
steeper exponent cases ($s=-1.5$ and $s=-2$) are much flatter above
$M_{*}=10^{10.5}M_{\odot}$.
A closer examination of how the BNS merger rate correlates with different
observables indicates a dependence on the DTD. This analysis was conducted by
comparing, through the Pearson correlation coefficient, the BNS merger rate
for each galaxy to one of three observable properties: its star formation rate
(SFR), its blue luminosity, and its stellar mass. For very steep and negative
DTDs, $s=-2$, we find that SFR is best correlated with merger rate ($R=0.978$).
For less steep negative DTDs, $s=-1$, we find that blue luminosity is best
correlated ($R=0.993$). For flat or increasing DTDs, $s\geq 0$, we find that stellar
mass is best correlated ($R=1.0$, $0.994$, and $0.985$ for $s=0$, $1$, and $2$, respectively).
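The correlation analysis above can be reproduced with the standard Pearson formula. A minimal sketch (the arrays are hypothetical inputs; in practice one would pass the per-galaxy merger rate against each observable in turn):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between a galaxy observable
    (e.g. SFR, blue luminosity, or stellar mass) and the per-galaxy
    BNS merger rate; both inputs are plain 1-D arrays."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()   # subtract the means
    return float((xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum()))
```

An $R$ near unity, as reported for stellar mass with flat DTDs, indicates a nearly perfect linear relation between the two quantities.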
Figure 3: The individually normalized BNS merger rates as a function of
stellar mass per galaxy for varied power law exponents (left) and varied
$t_{\mathrm{cut}}$ values (right). While the host galaxy mass function (Figure
2) predicts most BNS mergers will happen in roughly Milky Way mass galaxies
when averaged over the whole galaxy population, this host galaxy specific-mass
function (this figure) indicates that the BNS merger rate is higher in higher
mass galaxies when compared on an individual basis.
## 4 Discussion
The ability to connect LIGO-detected BNS merger events to their host galaxy
opens new scientific opportunities. Specifically, while transient event
detection and host galaxy association is well-established in astronomy,
traditional methods for kilonova detection yield little direct information
about the progenitor system. In contrast, wave form fitting of LIGO detected
compact object merger events provides detailed information about the
progenitor system including the masses of the merging objects. This new
information links mergers of specific object types to host galaxies with a
limited level of ambiguity or uncertainty that was not previously possible. As
we have discussed in this paper, this opens up the possibility of developing a
more intimate link between galactic star formation rate histories (SFRHs), BNS
delay time distributions, and the observed host galaxy stellar mass function.
In this paper, we have leveraged the galactic SFRHs from the IllustrisTNG
cosmological galaxy formation model. At some level, these SFRHs are likely not
a perfect reflection of real galactic SFRHs. However, the model is able to
broadly match the cosmic star formation rate history as well as the redshift
dependent galaxy stellar mass function. This gives us a reasonable level of
confidence that these simulated SFRHs provide good approximations to those of
real galaxies. Moreover, since fairly different physical models (e.g. the
Illustris model, the EAGLE model, and semi-analytical models) yield similar
BNS merger rates, we believe the more holistic analysis obtained from the
TNG-100 simulation can further our understanding of the host galaxy mass
function and its dependence on the DTD.
An important feature of the IllustrisTNG simulation is that it self-
consistently tracks gas- and stellar-phase metallicities. The stellar
metallicities, in turn, impact the results of the metallicity-dependent DTDs,
such as the BPASS DTD. It has been shown that IllustrisTNG’s stellar
metallicity vs stellar mass relation generally agrees with observations but is
too flat, leading to higher metallicities at lower masses (Nelson et al.,
2018b). The largest discrepancy is $\sim 0.5$ dex near $10^{10.5}\;M_{\odot}$.
To understand how this uncertainty affects our results, we use two host galaxy
mass functions with the BPASS DTD. Each host galaxy mass function is made by
setting all stellar metallicities to either $Z=0.0001$ or $Z=0.014$. Given the
large difference in metallicity, $\sim 2.5$ dex, between these host galaxy
mass functions, we expect any differences to be larger than those introduced
from uncertainties in the IllustrisTNG stellar metallicities. We find very
little variation between the host galaxy mass functions between the lowest and
highest metallicities across all galaxy masses. Given the large spread in
metallicities used in creating these DTDs, it is unlikely that our results
would be changed significantly if we instead employed stellar metallicities
from a different galaxy formation model. Given that our results do not change
when accounting for the more complicated BPASS DTD, additional credibility can
be given to results derived from power law DTDs.
Some efforts have already begun to explore the new connection between merger
events and their host galaxy using population synthesis codes and various star
formation histories. These studies investigate how the host galaxy mass
function is affected by the merger progenitors. The unique contribution of
this paper is to focus on systematic variations of the employed DTD coupled to
cosmologically motivated SFRHs. A similar set of DTD variations was employed
in Safarzadeh & Berger (2019), albeit with analytically simplified SFRHs. They
concluded that the host galaxy mass function peaks at high masses and that
$\mathcal{O}(10^{3})$ observations are required to constrain the true power
law distribution. This paper agrees with their conclusions for the host galaxy
mass range that they cover, $10^{9}-10^{11.25}M_{\odot}$. This study extends
Safarzadeh & Berger (2019) by pairing a broad set of assumed DTDs to SFRHs
naturally derived in a cosmological environment and examining how the assumed
DTD affects the host galaxy mass function over the large range of host masses
allowed by IllustrisTNG.
Similar results to those presented in this paper have also been discussed in
Artale et al. (2019); Safarzadeh et al. (2019a); McCarthy et al. (2020).
Artale et al. (2019) uses the EAGLE simulation to create a stellar mass vs
specific BNS merger rate plot similar to Figure 3. They find that stellar mass
is an excellent tracer for specific merger rate. This result is consistent
with our result up to $\sim 10^{10.5}M_{\odot}$. However, at higher masses we
find a dependence on the DTD: for DTDs with faster merging times, the specific
merger rate no longer tracks stellar mass.
Safarzadeh et al. (2019a); Adhikari et al. (2020); McCarthy et al. (2020) also
present host galaxy mass functions using different SFRH models. Safarzadeh &
Berger (2019) uses an analytic model with the set of DTDs used in this paper
to understand how the host galaxy mass function is affected by the DTD. Our
results are generally consistent with theirs, but our extended range of host
masses allows us to see that most BNS mergers do not happen in the highest
mass galaxies, but in galaxies with masses between $10^{10}$ and
$10^{11}M_{\odot}$. McCarthy et al. (2020) also uses an analytic model but
paired with SDSS observations to explore the host galaxy mass function along
with other host observables. For the host galaxy mass function, our results
are consistent with theirs. However, our larger range of DTDs presented in
Figure 2 reveal the large spread between different assumed DTDs and the stable
peak near $10^{10.5}M_{\odot}$. Adhikari et al. (2020) also find that other
host observables paired with stellar mass are necessary to obtain a better
understanding of where BNSs merge. Overall, we find that future explorations of
this topic will need to consider a wide range of DTDs and the full range of
the observable in question.
While the work we present here continues our understanding of what we can
learn from observations of BNS host galaxies, further investigations are
necessary to fully understand how BNSs form and evolve. One example of such an
investigation is to expand the set of DTDs examined using the methods in this
paper. The set of DTDs we examine are broad, covering those most commonly
referenced (e.g. Safarzadeh & Berger, 2019; Eldridge & Stanway, 2016), but we
do not exhaustively search the full range of DTDs proposed (e.g. Simonetti et
al., 2019; Dominik et al., 2012). Also, our convolution of IllustrisTNG’s SFRH
with our DTDs does not include any form of natal kicks. If these kicks are
strong enough to dislodge the binary from smaller galaxies, it is possible
their addition would weight the host mass functions toward higher mass
galaxies. With a greater range of DTDs examined and a more detailed
convolution, we will gain a clearer picture of where BNS mergers are located,
which delay times can be distinguished using the host galaxy mass function,
and the most likely places they will be observed. Another way to incorporate a
more complete set of DTDs would be to use a varied set of population synthesis
models which cover a wide range of binary separations, kick velocities,
initial mass functions, etc.
Including other star formation histories could also provide a more detailed
look at the spread in possible host galaxy mass functions. While IllustrisTNG
is broadly consistent with the cosmic star formation rate density and redshift
dependent galaxy stellar mass functions (Pillepich et al., 2018), its accuracy
should not be over-interpreted, and different simulations will surely produce
somewhat varied star formation histories that could impact our results.
However, we can say that up to their mass cutoff, our results align with
Artale et al. (2019) who found no significant difference when comparing
results from Illustris and Eagle. The lack of variation in Artale et al.
(2019) most likely indicates that – while there is some variation – the SFRHs
in Illustris and EAGLE are sufficiently similar to not significantly impact
the results. Thus, by adopting the SFRHs from galaxy formation simulations and
assumptions about the functional form of the DTD, predictions can be made
about the BNS host galaxy mass function. Additionally, similarities between
the different simulations suggest that uncertainties in the poorly constrained
DTDs are likely larger than the uncertainties introduced from the SFRHs. In
particular, the detailed shape of the BNS host galaxy mass function will be
sensitive to assumptions about the DTD.
## 5 Conclusion
We presented predictions for the host galaxy mass function and host galaxy
specific-mass function for BNS mergers. Our predictions were generated by
convolving a set of power law and BPASS DTDs with the star formation histories
from the IllustrisTNG cosmological simulation. Our main conclusions are as
follows:
1. 1.
We find almost no difference between the host galaxy mass functions produced
by our fiducial power law (slope of $s=-1$, minimum time of $t_{\rm
cut}=10^{7}$ yrs) and the BPASS DTDs (Figure 1).
2. 2.
The peak of the host galaxy mass function occurs around the Milky Way mass
scale, with roughly $\sim 50\%$ of BNS mergers happening in the
$10^{10}M_{\odot}<M_{*}<10^{11}M_{\odot}$ mass range for our fiducial DTDs
(Figure 1). This mass bin includes NGC 4993, the host galaxy of GW170817.
3. 3.
While the detailed shape of the host galaxy mass function is sensitive to
details of the adopted DTD, the peak does not change significantly when
varying over a broad range of DTDs (Figure 2). The peak of the host galaxy
specific-mass function is similarly insensitive to changes in the adopted DTD
(Figure 3).
4. 4.
The peak of the host galaxy specific-mass function is located in the highest
mass bin for the fiducial power law DTD and BPASS model. Thus, while we expect
most BNS mergers to happen in somewhat lower mass systems for our fiducial
DTDs, high mass galaxies are more likely to host a BNS merger on a per-galaxy
basis (Figures 1 and 3).
5. 5.
Host galaxy mass functions constructed from different DTDs vary up to one dex
at low masses and up to two dex at high masses. This provides an opportunity
through which an observationally reconstructed host galaxy mass function can
be used to constrain the true BNS DTD (Figure 2).
6. 6.
The observable galactic property (or properties) that is expected to provide
the best correlation with the BNS merger rate depends on the true DTD.
In the short term, the results found here in both the host mass and specific-
mass functions paint an interesting picture on how astronomers should
structure electromagnetic follow-ups for BNS events. The peak of the host
galaxy specific-mass function lying in the highest mass bin suggests that the
optimal way to quickly find the resulting kilonova from a BNS merger would be
to search the highest mass galaxies first. This agrees with the current method
most follow-up strategies use in locating BNS mergers (e.g. Gehrels et al.,
2016; Arcavi et al., 2017; Singer et al., 2016). However, the peak of the host
mass function lying in the mass range $M_{*}=10^{10}-10^{11}\;M_{\odot}$
suggests that this method will miss, or take longer to locate, most of the
BNS mergers. Determining the true DTD would allow for more efficient
electromagnetic follow-up by determining which observable (SFR, blue
luminosity, or stellar mass) best correlates with the BNS merger rate.
In the long term, LIGO/Virgo/KAGRA will create a host mass function which can
be used to determine the true BNS DTD. With this DTD, the minimum delay time,
$t_{\rm cut}$, can constrain the proportion of BNS systems which form through
highly eccentric and low separation fast-merging channels. Understanding this
proportion will place constraints on natal kick velocity and common envelope
efficiency. The minimum delay time can also determine whether BNS mergers are
the dominant source of r-process elements. The overall shape of the true DTD
allows various physical parameters of BNS systems to be constrained, such as
the progenitor’s metallicity, masses, mass ratio, common envelope efficiency,
natal kicks, and initial binary separation through comparisons with resulting
DTDs from population synthesis codes.
## Acknowledgements
The authors thank Steve Eikenberry for his useful ideas and comments. JCR
acknowledges support from the University of Florida Graduate School’s Graduate
Research Fellowship. PT acknowledges support from NSF grant AST-1909933, NASA
ATP Grant 19-ATP19-0031. IB acknowledges support from the Alfred P. Sloan
Foundation and the University of Florida.
## References
* Aasi et al. (2015) Aasi, J., et al. 2015, Class. Quantum Grav., 32, 074001, doi: 10.1088/0264-9381/32/7/074001
* Abbott et al. (2017a) Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2017a, Phys. Rev. Lett., 119, 161101, doi: 10.1103/PhysRevLett.119.161101
* Abbott et al. (2017b) —. 2017b, ApJ, 848, L12, doi: 10.3847/2041-8213/aa91c9
* Abbott et al. (2018) Abbott, B. P., et al. 2018, Living Rev. Relativ., 21, 3, doi: 10.1007/s41114-018-0012-9
* Abbott et al. (2020) Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2020, ApJ, 892, L3, doi: 10.3847/2041-8213/ab75f5
* Acernese et al. (2014) Acernese, F., Agathos, M., Agatsuma, K., et al. 2014, Classical and Quantum Gravity, 32, 024001, doi: 10.1088/0264-9381/32/2/024001
* Adhikari et al. (2020) Adhikari, S., Fishbach, M., Holz, D. E., Wechsler, R. H., & Fang, Z. 2020, arXiv e-prints, arXiv:2001.01025. https://arxiv.org/abs/2001.01025
* Akutsu et al. (2019) Akutsu, T., Ando, M., Arai, K., et al. 2019, Nature Astronomy, 3, 35–40, doi: 10.1038/s41550-018-0658-y
* Antier et al. (2020) Antier, S., Agayeva, S., Aivazyan, V., et al. 2020, MNRAS, 492, 3904, doi: 10.1093/mnras/stz3142
* Arcavi et al. (2017) Arcavi, I., McCully, C., Hosseinzadeh, G., et al. 2017, ApJ, 848, L33, doi: 10.3847/2041-8213/aa910f
* Artale et al. (2019) Artale, M. C., Mapelli, M., Giacobbo, N., et al. 2019, MNRAS, 487, 1675, doi: 10.1093/mnras/stz1382
* Barrett et al. (2018) Barrett, J. W., Gaebel, S. M., Neijssel, C. J., et al. 2018, MNRAS, 477, 4685, doi: 10.1093/mnras/sty908
* Bartos et al. (2013) Bartos, I., Brady, P., & Márka, S. 2013, Class. Quantum Grav., 30, 123001, doi: 10.1088/0264-9381/30/12/123001
* Bartos et al. (2019a) Bartos, I., Corley, K. R., Gupte, N., et al. 2019a, MNRAS, 490, 3476, doi: 10.1093/mnras/stz2848
* Bartos et al. (2015) Bartos, I., Crotts, A. P. S., & Márka, S. 2015, ApJ, 801, L1, doi: 10.1088/2041-8205/801/1/L1
* Bartos et al. (2019b) Bartos, I., Lee, K. H., Corsi, A., Márka, Z., & Márka, S. 2019b, MNRAS, 485, 4150, doi: 10.1093/mnras/stz719
* Bartos et al. (2014) Bartos, I., Veres, P., Nieto, D., et al. 2014, MNRAS, 443, 738, doi: 10.1093/mnras/stu1205
* Bartos et al. (2018) Bartos, I., Di Girolamo, T., Gair, J. R., et al. 2018, MNRAS, 477, 639, doi: 10.1093/mnras/sty602
* Belczynski et al. (2018) Belczynski, K., Bulik, T., Olejak, A., et al. 2018, arXiv e-prints, arXiv:1812.10065. https://arxiv.org/abs/1812.10065
* Blanchard et al. (2017) Blanchard, P. K., Berger, E., Fong, W., et al. 2017, ApJ, 848, L22, doi: 10.3847/2041-8213/aa9055
* Cao et al. (2018) Cao, L., Lu, Y., & Zhao, Y. 2018, MNRAS, 474, 4997, doi: 10.1093/mnras/stx3087
* Chan et al. (2018) Chan, M. L., Messenger, C., Heng, I. S., & Hendry, M. 2018, Phys. Rev. D, 97, 123014, doi: 10.1103/PhysRevD.97.123014
* Côté et al. (2017) Côté, B., Belczynski, K., Fryer, C. L., et al. 2017, ApJ, 836, 230, doi: 10.3847/1538-4357/aa5c8d
* Coulter et al. (2017) Coulter, D. A., Foley, R. J., Kilpatrick, C. D., et al. 2017, Science, 358, 1556, doi: 10.1126/science.aap9811
* Dominik et al. (2012) Dominik, M., Belczynski, K., Fryer, C., et al. 2012, ApJ, 759, 52, doi: 10.1088/0004-637X/759/1/52
* Ducoin et al. (2020) Ducoin, J. G., Corre, D., Leroy, N., & Le Floch, E. 2020, MNRAS, 492, 4768, doi: 10.1093/mnras/staa114
* Eldridge et al. (2015) Eldridge, J. J., Fraser, M., Maund, J. R., & Smartt, S. J. 2015, MNRAS, 446, 2689, doi: 10.1093/mnras/stu2197
* Eldridge et al. (2013) Eldridge, J. J., Fraser, M., Smartt, S. J., Maund, J. R., & Crockett, R. M. 2013, MNRAS, 436, 774, doi: 10.1093/mnras/stt1612
* Eldridge et al. (2008) Eldridge, J. J., Izzard, R. G., & Tout, C. A. 2008, MNRAS, 384, 1109, doi: 10.1111/j.1365-2966.2007.12738.x
* Eldridge et al. (2011) Eldridge, J. J., Langer, N., & Tout, C. A. 2011, MNRAS, 414, 3501, doi: 10.1111/j.1365-2966.2011.18650.x
* Eldridge & Stanway (2016) Eldridge, J. J., & Stanway, E. R. 2016, MNRAS, 462, 3302, doi: 10.1093/mnras/stw1772
* Gehrels et al. (2016) Gehrels, N., Cannizzo, J. K., Kanner, J., et al. 2016, ApJ, 820, 136, doi: 10.3847/0004-637X/820/2/136
* Giacobbo & Mapelli (2018) Giacobbo, N., & Mapelli, M. 2018, MNRAS, 480, 2011, doi: 10.1093/mnras/sty1999
* Giacobbo & Mapelli (2019) —. 2019, MNRAS, 482, 2234, doi: 10.1093/mnras/sty2848
* Grandorf et al. (2020) Grandorf, C., McCarty, J., Rajkumar, P., et al. 2020, arXiv e-prints, arXiv:2008.05330. https://arxiv.org/abs/2008.05330
* Hobbs et al. (2005) Hobbs, G., Lorimer, D. R., Lyne, A. G., & Kramer, M. 2005, MNRAS, 360, 974, doi: 10.1111/j.1365-2966.2005.09087.x
* Hotokezaka et al. (2018) Hotokezaka, K., Beniamini, P., & Piran, T. 2018, International Journal of Modern Physics D, 27, 1842005, doi: 10.1142/S0218271818420051
# Commissioning the Hi Observing Mode of the Beamformer for the Cryogenically
Cooled Focal L-band Array for the GBT (FLAG)
N. M. Pingel1,2,3, D. J. Pisano2,3,4, M. Ruzindana5, M. Burnett5, K. M. Rajwade6, R. Black5, B. Jeffs5, K. F. Warnick5, D. R. Lorimer2,3, D. Anish Roshi7,8, R. Prestage9, M. A. McLaughlin2,3, D. Agarwal2,3, T. Chamberlin9, L. Hawkins9, L. Jensen9, P. Marganian9, J. D. Nelson9, W. Shillue7, E. Smith2,3, B. Simon9, V. Van Tonder10, S. White9
1Research School of Astronomy and Astrophysics, The Australian National University, Canberra, ACT 2611, Australia
2Department of Physics and Astronomy, West Virginia University, White Hall, Box 6315, Morgantown, WV 26506
3Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505
4Adjunct Astronomer at Green Bank Observatory, P.O. Box 2, Green Bank, WV 24944, USA
5Brigham Young University (BYU), Provo, UT 84602, USA
6Jodrell Bank Centre for Astrophysics, University of Manchester, Oxford Road, Manchester M193PL, UK
7National Radio Astronomy Observatory (NRAO), 520 Edgemont Road, Charlottesville, VA 22903, USA
8Arecibo Observatory, Arecibo, Puerto Rico 00612
9Green Bank Observatory (GBO), 155 Observatory Rd, Green Bank, WV 24944, USA
10Square Kilometre Array South Africa (SKA SA), Cape Town, South Africa
(Received June 18, 2020; Accepted January 20, 2021)
###### Abstract
We present the results of commissioning observations for a new digital
beamforming back end for the Focal plane L-band Array for the Robert C. Byrd
Green Bank Telescope (FLAG), a cryogenically cooled Phased Array Feed (PAF)
with the lowest measured $T_{\rm sys}$/$\eta$ of any PAF outfitted on a radio
telescope to date. We describe the custom software used to apply beamforming
weights to the raw element covariances to create research quality spectral
line images for the new fine-channel mode, study the stability of the beam
weights over time, characterize FLAG’s sensitivity over a frequency range of
150 MHz, and compare the measured noise properties and observed distribution
of neutral hydrogen emission from several extragalactic and Galactic sources
with data obtained with the current single-pixel L-band receiver. These
commissioning runs establish FLAG as the preeminent PAF receiver currently
available for spectral line observations on the world’s major radio
telescopes.
Instrumentation: Phased Array Feeds — Galaxies: general — Galaxies: structure
Journal: AJ
## 1 Introduction
The increase in survey speed provided by Phased Array Feed (PAF) receivers
embodies the next major advancement in radio astronomy instrumentation. Such
arrays have been used commercially for decades (Milligan, 2005), but the
unique challenge of operating at extremely low noise levels to detect
inherently faint astrophysical signals has only been overcome within the last
two decades (e.g., Fisher & Bradley 2000). Placing an array of densely packed
dipole radiators in the focal plane of a radio telescope allows full sampling
of the focal field. Multiplying voltages from the dipoles by different complex
coefficients (i.e., beamformer weights) and summing them will alter the
aperture illumination such that the resulting far-field power patterns mimic a
multi-beam feed (e.g. Landon et al. 2010), while avoiding the challenges of
positioning physically distinct feeds. This is an especially powerful shortcut
for L-band observations where relatively large physical feeds are necessary
and sample only a limited fraction of the sky at any instant.
Several PAFs have successfully been tested and deployed on both large single
dishes, such as the 64m Parkes telescope, and aperture synthesis arrays. For
instance, Reynolds et al. (2017) successfully recreated a detailed neutral
hydrogen (Hi) column density (NHI) map of the Large Magellanic Cloud,
originally observed with the Parkes’ L-band multi-beam receiver, as well as
the direct detection of sources from the Hi Parkes All-Sky Survey (HIPASS;
Barnes et al. 2001) and hydrogen recombination lines. Serra et al. (2015a)
utilized the PAF-equipped Australian Square Kilometer Array Pathfinder (ASKAP)
to reveal new Hi clouds within the IC 1459 galaxy group. More recently, pilot
observations of Widefield ASKAP L-band Legacy All-sky Blind Survey (WALLABY)
have expanded the total membership of the NGC 7162 galaxy group and provided
high-quality Hi data for kinematic modeling (Reynolds et al., 2019),
identified five new Hi sources in the NGC 7232 group (Kleiner et al., 2019),
and characterized Hi clouds that are likely resolved tidal debris features
from the NGC 7232/3 triplet (Lee-Waddell et al., 2019). Additional early
science results from WALLABY are discussed in Elagali et al. (2019) and For
et al. (2019). Other recent observations from The Galactic ASKAP (GASKAP;
Dickey et al. 2013) survey of the Hi in the nearby Small Magellanic Cloud,
where the $\sim$5$\times$5 deg$^2$ extent of the dwarf galaxy was captured in a
single pointing, have demonstrated the clear advantage PAFs provide in
creating wide-field images (McClure-Griffiths et al., 2018). Additionally,
commissioning observations from the Apertif upgrade to the Westerbork Radio
Telescope (WSRT; Oosterloo et al. 2009) have shown excellent wide-field
imaging capabilities.
While the increase in the Field-of-View (FoV) will in turn dramatically
increase the survey speeds of aperture synthesis arrays like the Apertif or
ASKAP, the small filling factors and spacing of the individual antenna
elements inherently filter out the lowest spatial frequencies (i.e., the largest angular scales) and limit the sensitivity to low surface brightness emission. Complementary observations from a large single dish provide these vital missing zero-spacing measurements to ensure angular sensitivity at large scales and high surface brightness sensitivity. The decrease in the telescope time required for deep ($N_{\rm HI}\leq 10^{18}$ cm$^{-2}$) on-the-fly (OTF) mapping of extended sources makes a PAF-equipped GBT the ideal instrument for future deep Hi surveys to reach pioneering sensitivity levels.
The Focal L-band Array for the GBT (FLAG) is a 19 element, dual-polarization
PAF with cryogenically cooled low noise amplifiers (LNAs) to maximize
sensitivity over a bandwidth of 151.59 MHz divided up into 500 coarse
channels. Previous commissioning observations of the front end have shown
excellent performance in terms of sensitivity and spectral line imaging
capabilities (Roshi et al., 2018). In Spring 2018, FLAG recorded the lowest
reported system temperature ($T_{\rm sys}$) normalized by aperture efficiency
$\eta$ at 25$\pm$3 K near 1350 MHz for an electronically formed beam (Roshi et
al., 2018), which is comparable to the capabilities of the existing single-
pixel L-band receiver. The work presented in this paper describes aspects of a
new digital beamforming back end with a new polyphase filterbank (PFB)
implementation for fine channelization of 100 coarse channels into 3200 fine-
channels specifically designed for spectral line science. Rajwade et al.
(2019) provides an overview of the real-time beamforming mode for the
detection of transient signals from fast radio bursts and pulsars.
We describe the system architecture and available observing modes in Section 2
and briefly summarize the mathematical principles of beamforming in Section 3;
in Section 4, we describe the observing setup and strategies for beamformer
weight calibration and Hi mapping with the GBT; the custom software used for
post-correlation beamforming, flux calibration, and imaging are summarized in
Section 5; Section 6 investigates how distinct sets of beamforming weights
vary with time, demonstrates the sensitivity across the full range of
bandwidth, compares the Hi properties of several extragalactic and Galactic
sources as detected by FLAG and the current L-band single-pixel receiver, and
presents a comparison between the survey speed of FLAG relative to other PAFs
and multi-beam receivers equipped on the world’s major radio telescopes;
finally, our conclusions and instrument outlook are summarized in Section 7.
## 2 FLAG System Architecture
The Focal L-band Array for the Green Bank Telescope (FLAG) was developed in
collaboration between the National Radio Astronomy Observatory (NRAO), the
Green Bank Observatory (GBO), Brigham Young University (BYU), and West
Virginia University (WVU). It is a 19 element, dual-polarization, cryogenic
PAF with direct digitization of radio frequency (RF) signals at the front end,
digital signal transport over fiber, and now possesses a real-time signal
processing back end with up to 150 MHz bandwidth. The front end employs a new
digital-down-link (DDL) mechanism that performs all analog-to-digital
conversions in a compact assembly that sits at prime focus (Morgan et al.,
2013).
Two integral processes in the success of the DDL are achieving bit and byte
lock in the back end system. The front end system produces complex sample
voltages for each dipole element that are serialized into 8-bit real and 8-bit
imaginary components. These are combined to form a 16-bit (or 2-byte) word per
time sample. These serialized voltages are transmitted over optical fiber
without any sort of encoding such as start/stop bits to delineate the
boundaries between bits. Bit lock refers to the recovery of the most-
significant bit by the deserializer in the FLAG back end. This is done by
constructing a histogram of the arriving samples and comparing to the expected
probability density function of a random Gaussian process. Once the samples are
correctly aligned in terms of their most-significant bits, the byte-lock
procedure ensures that two sequential bytes are correctly identified as the
real and imaginary components. Due to the relationship between the magnitudes
of complex conjugated signals, if the bytes are incorrectly identified (i.e.,
there is no byte-lock), a strong test-tone injected at a known positive
frequency offset relative to a set central frequency will have a symmetric
counterpart at the corresponding negative frequency offset. The bits are then
slipped by eight locations to correctly align the bytes to achieve byte-lock.
See Diao (2017) and Burnett (2017) for detailed information on the PAF
receiver front end and bit/byte locking procedures, respectively.
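The byte-lock diagnostic above can be illustrated numerically. If the real and imaginary bytes of each 16-bit word are swapped, the deserialized stream becomes $j\,z^{*}$, whose spectrum is the mirror image of the true one, so a test tone at a positive frequency offset shows a counterpart at the negative offset. The following toy NumPy sketch (the tone frequency and noise level are invented for the demonstration; this is not the FLAG firmware) shows the effect:

```python
import numpy as np

fs = 303.18e3          # one coarse-channel bandwidth in Hz (from the text)
n = 4096
t = np.arange(n) / fs
f_tone = 50e3          # hypothetical test-tone offset above band centre

# Correctly deserialized complex voltages: a tone at +f_tone plus noise.
rng = np.random.default_rng(0)
z = (np.exp(2j * np.pi * f_tone * t)
     + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

# If byte-lock fails, the (re, im) bytes are read as (im, re):
# z_bad = imag + 1j*real = 1j * conj(z), which mirrors the spectrum.
z_bad = z.imag + 1j * z.real

def tone_freq(x):
    """Frequency (Hz) of the strongest spectral component."""
    spec = np.abs(np.fft.fftshift(np.fft.fft(x)))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(x), d=1 / fs))
    return freqs[np.argmax(spec)]

print(tone_freq(z))      # near +50 kHz
print(tone_freq(z_bad))  # near -50 kHz: the symmetric counterpart
```

Detecting the tone at the mirrored frequency is exactly the signature used to decide that the bits must be slipped by eight locations.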
The FLAG back end consists of five digital optical receiver cards, five ROACH
II FPGA boards (Parsons et al., 2006), a Mellanox SX 1012 12-port 40 Gbe
ethernet switch, and five Mercury GPU408 4U GPU Server High Performance
Computers (HPCs). These parts are all connected in the order listed.
The digitized signals from the front end of the system are serialized and sent
over 40 (38 + 2 spare) optical fibers to the optical receiver cards which are
connected to the ROACH II boards. The boards channelize the approximately 150
MHz bandwidth into 512 channels each with a bandwidth of 303.18 kHz.
The data is then reduced to 500 frequency channels and packetized into 10
user-datagram protocol (UDP) packets each containing 50 frequency samples for
eight antennas across 20 time samples. These packets are streamed over
10-Gbe/40-Gbe breakout cables into a 12-port 40-Gbe network switch, which
redirects packets into the HPCs such that each one receives 100 frequency
samples with a width of 303.18 kHz for all 40 antennas.
Mode | Bandwidth [MHz] | Nchan | Nchan in Bank | $\Delta\nu$ [kHz]
---|---|---|---|---
CALCORR | 151.59 | 500 | 25 non-contiguous | 303.18
PFBCORR | 30.318 | 3200 | 160 contiguous | 9.47
RTBF | 151.59 | 500 | 25 non-contiguous | 303.18
Table 1: Properties of Available FLAG Observing Modes
Each HPC then takes these 100 frequency samples and divides them evenly
between two Nvidia GeForce Titan X Graphical Processing Units (GPUs), which
contain real-time beamformer and coarse/fine channel correlator algorithms.
Within each HPC is a real-time operating system (RTOS) called HASHPIPE used
for thread management and pipelining, and a user interface called
dealer/player. These enable the operation of the beamformer and correlator
algorithms. Each HPC can be run in three distinct observing modes: (1)
CALCORR, which is the mode used to derive the beamforming weights; (2)
PFBCORR, which is used for the spectral line observations and sends a
frequency chunk of 100 coarse channels with a total bandwidth of 30.318 MHz
through a polyphase filterbank implementation to obtain 3200 total fine
channels with resolution of 9.47 kHz; each GPU in these correlator modes runs
a correlator thread that processes one-tenth the total bandwidth; and (3) RTBF
mode, which is the mode used for pulsar and transient detection. The
properties of these observing modes are summarized in Table 1. We refer the
reader to Ruzindana (2017) for a detailed description on the FLAG back end and
Rajwade et al. (2019) for the description and early success of the RTBF mode.
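The fine channelization performed in PFBCORR mode can be sketched with a minimal, critically sampled polyphase filterbank. This is an illustration only, not the GPU implementation: the channel count, tap count, and prototype filter below are illustrative choices, whereas FLAG splits each 303.18 kHz coarse channel into 32 fine channels of 9.47 kHz.

```python
import numpy as np

def pfb_channelize(x, n_chan=32, n_taps=8):
    """Critically sampled polyphase filterbank.

    Splits the complex time series x into n_chan sub-bands; returns an
    array of shape (n_frames, n_chan) of channelized samples.
    """
    m = n_chan * n_taps
    # Prototype low-pass filter: windowed sinc spanning n_taps frames.
    proto = np.sinc(np.arange(m) / n_chan - n_taps / 2) * np.hamming(m)
    n_frames = len(x) // n_chan - n_taps + 1
    out = np.empty((n_frames, n_chan), dtype=complex)
    for i in range(n_frames):
        seg = x[i * n_chan:(i + n_taps) * n_chan] * proto
        # Fold the weighted taps onto one frame, then FFT across channels.
        out[i] = np.fft.fft(seg.reshape(n_taps, n_chan).sum(axis=0))
    return out

# A tone placed at 5/32 of the sample rate lands in fine channel 5.
n_chan = 32
x = np.exp(2j * np.pi * (5 / n_chan) * np.arange(16384))
spect = np.abs(pfb_channelize(x, n_chan)).mean(axis=0)
print(np.argmax(spect))  # → 5
```

The polyphase structure gives each fine channel a much flatter passband and lower spectral leakage than a plain FFT of the same length, which is why it is preferred for spectral line work.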
## 3 Maximum Signal-to-Noise Beamforming
The process of beamforming involves the weighted sum of the individual sensor
responses to an incident astronomical signal. In radio astronomy, where the
signals are inherently extremely faint, it is advantageous for an observer to
compute weights that maximize the signal-to-noise from a given detection.
Defining $\mathbf{z}\left(t\right)$ to be a vector containing the individual
responses of each dipole in an $M$-dipole PAF measured over a discrete time
sample (i.e., integration), a convenient covariance matrix
$\mathbf{R}=\mathbf{z}\left(t\right)\mathbf{z}^{H}\left(t\right)$ (1)
can be constructed such that $\mathbf{R}$ is an $M\times M$ matrix of complex values that
characterizes the correlations between the recorded complex voltages of the
individual dipole elements. Note that the $H$ superscript in the above
equation represents the Hermitian (complex conjugate transpose) form of the
vector. Jeffs et al. (2008) goes on to characterize the signal from the array
by the equation
$\mathbf{R}=\mathbf{R_{\rm s}}+\mathbf{R_{\rm n}},$ (2)
where $\mathbf{R}_{\rm s}$ is the signal covariance matrix and $\mathbf{R}_{\rm n}$ contains the noise covariance from spillover, background, and the mutual coupling of the dipoles.
$\mathbf{R}_{\rm n}$ can be measured by pointing the telescope to a blank patch of sky so that $\mathbf{R}\approx\mathbf{R}_{\rm n}$. Pointing at a bright point source and solving Equation 2 for $\mathbf{R}_{\rm s}$
gives the signal covariance matrix. A steering vector that characterizes the
response of each dipole in a given direction can now be computed and is
defined by
$\mathbf{a}\left(\theta\right)=\mathbf{R}_{\rm n}\mathbf{u}_{\rm max},$ (3)
where $\mathbf{u}_{\rm max}$ is the dominant eigenvector of the generalized eigenvalue equation $\mathbf{R}\mathbf{u}_{\rm max}=\lambda_{\rm max}\mathbf{R}_{\rm n}\mathbf{u}_{\rm max}$.
Elmer et al. (2012) define the maximum signal-to-noise beamformer by
maximizing the following expression
$\mathbf{w}_{\rm maxSNR}=\underset{\mathbf{w}}{\mathrm{argmax}}\left(\frac{\mathbf{w}^{H}\mathbf{R}_{\rm s}\mathbf{w}}{\mathbf{w}^{H}\mathbf{R}_{\rm n}\mathbf{w}}\right).$ (4)
The values contained within the weight vector w and its Hermitian form are not
yet known. Maximizing Equation 4 by taking the derivative with respect to w
and setting the result equal to zero is equivalent to finding the dominant
eigenvector of the generalized eigenvalue equation
$\mathbf{R_{\rm s}}\mathbf{w_{\rm maxSNR}}=\lambda_{\rm max}\mathbf{R_{\rm
n}}\mathbf{w_{\rm maxSNR}}.$ (5)
A raw power value $P$ in units of counts at a particular frequency $\nu$ and short term integration ($n$) is measured by calculating
$P_{\nu,n}=\mathbf{w}^{H}_{{\rm maxSNR},\nu,n}\mathbf{R}_{{\rm s},\nu,n}\mathbf{w}_{{\rm maxSNR},\nu,n}.$ (6)
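The chain from Equations 2, 5, and 6 can be sketched numerically with synthetic covariances (the steering vector, noise model, and array size below are invented for illustration, not FLAG data) using SciPy's generalized Hermitian eigensolver:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
M = 19  # dipoles per polarization in FLAG

# Hypothetical unit-modulus steering vector a and a noise covariance Rn
# (identity plus a random positive-semidefinite perturbation).
a = np.exp(1j * rng.uniform(0, 2 * np.pi, M))
G = rng.standard_normal((M, M))
Rn = np.eye(M) + 0.1 * (G @ G.T)
Rs = 5.0 * np.outer(a, a.conj())  # rank-1 signal covariance (Eq. 2)

# Max-SNR weights: dominant eigenvector of Rs w = lambda Rn w (Eq. 5).
vals, vecs = eigh(Rs, Rn)   # generalized Hermitian eigenproblem
w = vecs[:, -1]             # eigenvector of the largest eigenvalue

def snr(w):
    """Rayleigh quotient of Eq. 4: beamformed signal over noise power."""
    return np.real(w.conj() @ Rs @ w) / np.real(w.conj() @ Rn @ w)

# Max-SNR weights beat a naive matched beamformer (w = a, ignoring Rn).
print(snr(w) >= snr(a))  # → True
```

The achieved SNR equals the dominant generalized eigenvalue $\lambda_{\rm max}$, which is what maximizing Equation 4 guarantees.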
The max-SNR beamforming algorithm effectively manipulates the individual
dipole illumination patterns such that the aperture is optimally illuminated
for each formed beam in a given direction on the sky. While this scheme
produces the highest gain in a given direction, there is little control over
the level of the sidelobes due to the sharp transition in illumination
pattern. High sidelobe levels could introduce stray radiation, where signal is
detected in a sidelobe rather than the main formed beam, affecting the
accuracy of flux and mapped structure. For example, stray radiation in the
initial data release of the Parkes Galactic All-Sky Survey (GASS; McClure-
Griffiths et al. (2009)) accounted for upwards of 35% of the observed emission
in some individual spectra. Nevertheless, high sensitivity over a large field
of view is particularly advantageous for the detection of diffuse (angularly
extended and faint) Hi, as evidenced by the abundance of highly detailed and
faint structure observed in the GASS survey even before the application of
corrections for stray radiation. The unique unblocked aperture design of the
GBT ensures inherently low sidelobe structure — even in the case of maxSNR —
and subsequently high image fidelity. A PAF-equipped GBT will produce high
quality maps while also decreasing the survey times necessary to pursue —
amongst many applications — the detection of cold gas accretion, the study of
high velocity clouds (Moss et al. 2013), and the compact clouds being driven
from the Galactic center (Di Teodoro et al. 2018).
## 4 Observations
The first step in forming beams is the characterization of the response of
each individual dipole element in a given direction, $\theta_{\rm i}$, in the
form of a signal response vector (i.e., Equation 3). For these commissioning
observations, we implement a maxSNR beamformer as defined in Equation 4. While
a PAF can theoretically form any number of beams as long as there exists a
sufficient number of steering vectors and recorded covariance matrices, we
employ two calibration techniques deemed a Calibration Grid and 7-Point
Calibration to form seven total beams arranged such that the central (i.e.,
boresight) beam is surrounded by six outer beams in a hexagonal pattern that
overlap at approximately the half-power points (see Figure 1 of Rajwade et al.
2019). This particular pattern provides ideal balance between mapping speed
and uniform sensitivity within FLAG’s FoV. We refer to the boresight beam as
‘Beam 0’; as viewed on the sky, Beam 1 is the upper left beam, and the
subsequent beam numbers increase in a clockwise fashion. Once a set of weights $\mathbf{w}_{b}$ is
obtained for the $b$th beam (of $B$ total beams) in the direction of
$\theta_{\rm i}$, we acquire the raw power value at each $\nu$ and short term
integration $n$ through Equation 6. Table 2 summarizes all calibration and
science observations discussed in this paper.
Session | UT Date | UT Start | UT End | Schedule Block Type | Source | Mode | Integration Length [s] | Central Frequency [MHz] | Notes
---|---|---|---|---|---|---|---|---|---
GBT16B_400_03 | | | | | | | | |
| 2017-05-27 | 04:17:55 | 05:12:11 | Calibration Grid | 3C295 | CALCORR | 0.1 | 1450.00000 | Continuous Trajectory
GBT16B_400_09 | | | | | | | | |
| 2017-07-28 | 05:06:19 | 05:38:52 | Calibration Grid | 3C295 | CALCORR | 0.5 | 1450.00000 | —
GBT16B_400_12 | | | | | | | | |
| 2017-08-04 | 04:16:54 | 05:03:33 | Calibration Grid | 3C295 | CALCORR | 0.5 | 1450.00000 | 40$\times$40 $\square^{\prime}$
| ‡2017-08-04 | 05:30:27 | 06:02:27 | DecLatMap | NGC6946 | PFBCORR | 0.5 | 1450.00000 | 41 columns; $N_{\rm ints}=72$; $t_{\rm eff,comb}=60$ s
| 2017-08-04 | 06:12:19 | 06:14:53 | 7Pt-Calibration | 3C48 | CALCORR | 0.5 | 1450.0000 | 10 s Tracks
GBT16B_400_13 | | | | | | | | |
| 2017-08-04 | 13:44:40 | 14:29:09 | Calibration Grid | 3C123 | CALCORR | 0.5 | 1449.84841 | —
| 2017-08-04 | 06:12:19 | 06:14:53 | 7Pt-Calibration | 3C134 | CALCORR | 0.5 | 1449.84841 | 15 s Tracks
GBT16B_400_14 | | | | | | | | |
| 2017-08-06 | 16:41:15 | 16:43:58 | 7Pt-Calibration | 3C147 | CALCORR | 0.5 | 1450.0000 | 15 s Tracks
| 2017-08-06 | 16:44:48 | 17:22:16 | Calibration Grid | 3C147 | CALCORR | 0.5 | 1449.74271 | —
GBT17B_360_01 | | | | | | | | |
| 2018-01-27 | 15:07:59 | 15:09:55 | 7Pt-Calibration | 3C295 | CALCORR | 0.5 | 1450.0000 | 10 s Tracks
| 2018-01-27 | 15:11:00 | 15:39:18 | Calibration Grid | 3C295 | CALCORR | 0.5 | 1450.0000 | —
| 2018-01-27 | 15:40:29 | 15:42:24 | 7Pt-Calibration | 3C295 | CALCORR | 0.5 | 1450.00000 | 10 s Tracks
GBT17B_360_02 | | | | | | | | |
| 2018-01-27 | 18:32:59 | 18:36:07 | 7Pt-Calibration | 3C295 | CALCORR | 0.5 | 1450.00000 | 10 s Tracks;
| 2018-01-27 | 19:13:57 | 19:41:40 | Calibration Grid | 3C147 | CALCORR | 0.5 | 1450.00000 | —
| 2018-01-27 | 21:07:00 | 21:10:04 | 7Pt-Calibration | 3C147 | CALCORR | 0.5 | 1450.00000 | 10 s Tracks
GBT17B_360_03 | | | | | | | | |
| 2018-01-28 | 06:44:29 | 06:47:38 | 7Pt-Calibration | 3C295 | CALCORR | 0.5 | 1449.84841 | 10 s Tracks
| 2018-01-28 | 06:48:56 | 07:17:23 | Calibration Grid | 3C295 | CALCORR | 0.5 | 1449.84841 | —
| ‡2018-01-28 | 08:05:49 | 08:36:44 | DecLatMap | NGC4258 Field | PFBCORR | 0.5 | 1449.84841 | 31 columns;$N_{\rm ints}=72$; $t_{\rm eff,comb}=68$ s
| ‡2018-01-28 | 08:38:28 | 09:07:35 | DecLatMap | NGC4258 Field | PFBCORR | 0.5 | 1449.84841 | 31 columns;$N_{\rm ints}=72$; $t_{\rm eff,comb}=68$ s
GBT17B_360_04 | | | | | | | | |
| 2018-01-29 | 07:29:58 | 08:32:14 | Calibration Grid | 3C295 | CALCORR | 0.5 | 1450.00000 | —
| 2018-01-29 | 08:38:51 | 08:42:10 | 7Pt-Calibration | 3C295 | CALCORR | 0.5 | 1450.00000 | 20 s Tracks
| ‡2018-01-29 | 08:50:26 | 09:20:42 | DecLatMap | NGC4258 Field | PFBCORR | 0.5 | 1450.0000 | 31 columns;$N_{\rm ints}=72$; $t_{\rm eff,comb}=68$ s;
| ‡2018-01-29 | 09:25:19 | 09:56:10 | DecLatMap | NGC4258 Field | PFBCORR | 0.5 | 1450.0000 | 31 columns;$N_{\rm ints}=72$; $t_{\rm eff,comb}=68$ s
| ‡2018-01-29 | 09:59:00 | 10:28:50 | DecLatMap | NGC4258 Field | PFBCORR | 0.5 | 1450.00000 | 31 columns;$N_{\rm ints}=72$; $t_{\rm eff,comb}=68$ s
| ‡2018-01-29 | 10:30:44 | 10:59:11 | DecLatMap | NGC4258 Field | PFBCORR | 0.5 | 1450.00000 | 31 columns;$N_{\rm ints}=72$; $t_{\rm eff,comb}=68$ s
GBT17B_360_05 | | | | | | | | |
| 2018-01-30 | 12:02:53 | 12:13:08 | 7Pt-Calibration | 3C295 | CALCORR | 0.5 | 1450.0000 | 20 s Tracks
| 2018-01-30 | 12:53:24 | 13:00:44 | 7Pt-Calibration | 3C295 | CALCORR | 0.5 | 1450.00000 | 20 s Tracks
GBT17B_360_06 | | | | | | | | |
| 2018-02-03 | 17:30:03 | 17:35:46 | 7Pt-Calibration | 3C48 | CALCORR | 0.5 | 1075.00000 | 30 s Tracks
| 2018-02-03 | 18:15:50 | 18:21:39 | 7Pt-Calibration | 3C48 | CALCORR | 0.5 | 1250.00000 | 30 s Tracks
| 2018-02-03 | 18:32:32 | 18:38:21 | 7Pt-Calibration | 3C48 | CALCORR | 0.5 | 1350.00000 | 30 s Tracks
| 2018-02-03 | 18:51:01 | 18:56:52 | 7Pt-Calibration | 3C48 | CALCORR | 0.5 | 1550.00000 | 30 s Tracks
| 2018-02-03 | 19:08:18 | 19:14:11 | 7Pt-Calibration | 3C48 | CALCORR | 0.5 | 1650.00000 | 30 s Tracks
| 2018-02-03 | 19:25:22 | 19:31:17 | 7Pt-Calibration | 3C48 | CALCORR | 0.5 | 1750.00000 | 30 s Tracks
| 2018-02-03 | 19:57:26 | 20:03:28 | 7Pt-Calibration | 3C48 | CALCORR | 0.5 | 1449.74271 | 30 s Tracks
| 2018-02-03 | 20:04:47 | 20:35:45 | Calibration Grid | 3C48 | CALCORR | 0.5 | 1449.74271 | —
GBT17B_360_07 | | | | | | | | |
| 2018-02-05 | 06:25:20 | 06:53:49 | Calibration Grid | 3C295 | CALCORR | 0.5 | 1450.00000 | —
| 2018-02-05 | 10:04:05 | 10:14:38 | 7Pt-Calibration | 3C295 | CALCORR | 0.5 | 1450.00000 | 60 s Tracks
GBT17B_455_01 | | | | | | | | |
| 2018-02-04 | 13:18:07 | 13:28:01 | 7Pt-Calibration | 3C348 | CALCORR | 0.5 | 1450.00000 | 60 s Tracks
| ‡2018-02-04 | 13:35:47 | 14:54:23 | RaLongMap | Galactic Center | PFBCORR | 0.5 | 1450.00000 | 41 rows;$N_{\rm ints}=72$; $t_{\rm eff,comb}=60$ s
| 2018-02-04 | 15:09:06 | 15:18:56 | 7Pt-Calibration | 3C348 | CALCORR | 0.5 | 1450.84841 | 60 s Tracks
| ‡2018-02-04 | 15:26:57 | 16:29:32 | RaLongMap | Galactic Center | PFBCORR | 0.5 | 1450.84841 | 41 rows;$N_{\rm ints}=72$; $t_{\rm eff,comb}=60$ s
Table 2: Summary of FLAG Observations; ‡ marks mapping scans used to make the science maps; $N_{\rm ints}$ is the number of integrations along each row/column; and $t_{\rm eff,comb}$ gives the effective integration time of the combined map in units of s (see text in Section 4.3).
### 4.1 Calibration Grid
Figure 1: The trajectory from one of our calibration grids centered on 3C295.
The ‘$\times$’ symbols denote the mean location of the reference pointings,
and the solid black lines represent the trajectory of the grid.
To obtain measurements of Rs, we move the GBT in a grid centered on a strong
calibrator spanning 30 arcminutes in Cross-elevation (XEL) as set by the
horizontal celestial coordinate system (i.e. ‘Encoder’ setting when using the
GBT) for a total of 34 rows spaced 0.91 arcminutes (approximately one-tenth
the full-width half max of the GBT beam at 1.4 GHz) apart in Elevation (EL).
We compute Rn by tracking two degrees in XEL away from the grid for a duration
of ten seconds. We track after every fifth row to attain six total reference
pointings with three evenly spaced on each side of the grid. To ensure
adequate spatial sampling, we move the telescope at a rate of 0.91 arcminutes
per second and dump integrations to disk every 0.5 s. The trajectory of the
calibration grid observations performed during session GBT16B_400_12 centered
on 3C295 is shown in Figure 1. The total time to complete such a grid is about
40 minutes, including scan overhead.
The calibration grid provides the necessary covariance matrices with which to
characterize the response and quality of the formed beams. A convenient
quantity with which to compare beam-to-beam sensitivity variations — as it
directly measurable — is the system equivalent flux density (SEFD), which is
the flux density equivalent of the system temperature, $T_{\rm sys}$. The SEFD
is defined
${\rm SEFD}=\frac{S_{\rm CalSrc}}{\left(\frac{\left<P_{\rm
s}\right>}{\left<P_{\rm n}\right>}-1\right)},$ (7)
where $S_{\rm CalSrc}$ is the known flux density of a calibrator source in
units of Jy and $\left<P_{\rm s}\right>$ and $\left<P_{\rm n}\right>$ are
respectively the mean on-source and off-source power values. These are
determined by building distributions of on-source and off-source raw
beamformed power values contained between coarse channels corresponding to
1400.2 MHz to 1416.6 MHz and 1425.1 MHz to 1440.3 MHz to avoid bias from
Galactic Hi emission. These distributions are then fit with separate Gaussian
functions to calculate $\left<P_{\rm s}\right>$ and $\left<P_{\rm n}\right>$.
The associated uncertainties are taken to be the standard deviations returned
by these fits. In cases where the fit does not converge due to complex
bandpass shapes, the arithmetic mean and standard deviations are used. All
power values are corrected for atmospheric attenuation. The final uncertainty
for the SEFD value is computed by propagating the statistical uncertainties of
$\left<P_{\rm s}\right>$, $\left<P_{\rm n}\right>$, and $S_{\rm CalSrc}$. The
flux density of a given calibrator source is taken from Perley & Butler
(2017).
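The SEFD arithmetic and its uncertainty propagation can be made concrete with a small sketch; the power counts and calibrator flux density below are invented for illustration (they are not measured FLAG values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical beamformed power samples (counts) on and off a calibrator.
P_on = rng.normal(1200.0, 15.0, 500)   # on-source pointings
P_off = rng.normal(1000.0, 12.0, 500)  # blank-sky reference pointings
S_cal = 22.15                          # Jy, assumed calibrator flux density

Ps, sPs = P_on.mean(), P_on.std()
Pn, sPn = P_off.mean(), P_off.std()

# Equation 7: SEFD = S_cal / (Ps/Pn - 1)
ratio = Ps / Pn - 1.0
sefd = S_cal / ratio

# Propagate the statistical uncertainties of Ps and Pn
# (the uncertainty on S_cal is omitted in this sketch).
s_ratio = (Ps / Pn) * np.hypot(sPs / Ps, sPn / Pn)
s_sefd = sefd * s_ratio / ratio

print(round(sefd, 1), "+/-", round(s_sefd, 1), "Jy")
```

In practice the means and widths come from Gaussian fits to the on- and off-source power distributions, as described above; the arithmetic mean and standard deviation used here are the stated fallback when those fits fail to converge.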
The SEFD provides a comparison metric between individual beams. If the SEFD is
derived for the ideal observation of a blank sky, it can be related to the
ratio of $T_{\rm sys}$ and aperture efficiency $\eta$ through
$\frac{T_{\rm sys}}{\eta}=\frac{10^{-26}\,{\rm SEFD}\,A_{\rm g}}{2k},$ (8)
where $A_{\rm g}$ is the geometric area of the GBT, and $k$ is the Boltzmann
constant. Substituting the definition for the SEFD from Equation 7 and putting
the power levels in terms of the product between correlation matrices and
beamforming weights from Equation 6 results in the expression
$\frac{T_{\rm sys}}{\eta}=\frac{10^{-26}S_{\rm CalSrc}A_{\rm
g}}{2k}\frac{\mathbf{w^{\rm H}}\mathbf{R_{\rm n}}\mathbf{w}}{\mathbf{w^{\rm
H}}\mathbf{R_{\rm s}}\mathbf{w}}.$ (9)
This equation is an oft-used metric for comparing and characterizing the
performance of PAFs (Jeffs et al., 2008; Landon et al., 2010; Roshi et al.,
2018), since it can be directly measured. Equation 8 can be rearranged to
define a formed beam sensitivity in units of m2 K-1 at each $\nu$ from each
direction $\theta$
$S_{\nu}\left(\theta\right)=\frac{\eta A_{g}}{T_{\rm
sys}}=\frac{2k}{10^{-26}S_{\rm CalSrc}}\frac{\mathbf{w^{\rm H}}\mathbf{R_{\rm
s}}\mathbf{w}}{\mathbf{w^{\rm H}}\mathbf{R_{\rm n}}\mathbf{w}}.$ (10)
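As a worked check on the units in Equations 8 and 10, the following sketch plugs in representative numbers: an assumed 10 Jy SEFD (of the order implied by the reported $T_{\rm sys}/\eta\approx25$ K) and the geometric area of the GBT's 100 m aperture.

```python
import numpy as np

k = 1.380649e-23        # Boltzmann constant [J/K]
A_g = np.pi * 50.0**2   # geometric area of the GBT's 100 m dish [m^2]
sefd = 10.0             # Jy, an assumed SEFD for one formed beam

# Equation 8: Tsys/eta from the SEFD (1 Jy = 1e-26 W m^-2 Hz^-1).
tsys_over_eta = 1e-26 * sefd * A_g / (2 * k)
print(round(tsys_over_eta, 1), "K")

# Equation 10: formed-beam sensitivity eta*A_g/Tsys.
sensitivity = A_g / tsys_over_eta
print(round(sensitivity, 1), "m^2/K")
```

A 10 Jy SEFD corresponds to $T_{\rm sys}/\eta\approx28$ K and a sensitivity of roughly 280 m$^2$ K$^{-1}$, consistent with the scale of the values quoted for FLAG.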
Figure 2: Sensitivity map of the XX polarization at 1404.74 MHz derived from
the calibration grid shown in Figure 1. The contour levels begin at the $-$5
dB drop off level of the peak response and continue to the $-$3 and $-$1 dB
drop off.
Figure 3: left: The formed beam pattern derived from the calibration grid
shown in Figure 1. The red x symbols denote the intended beam centers. The
intersections of the vertical and horizontal dashed red lines denote the
location of the peak response of each formed beam. The contours represent
levels of $-$3, $-$5, $-$10, and $-$15 dB. Right: profiles of the normalized
beam response at the location of the peak response along the XEL (orange) and
EL (blue) axes. Gaussian fits are represented by dashed lines, while the
intended locations of the peak response in XEL and EL are shown by vertical
dotted lines.
Figure 2 shows the resulting sensitivity map from the calibration grid in
Figure 1 in the XX pol at 1404.74 MHz. The inner 0.5$\times$0.5 deg2 of the
FoV shows uniform sensitivity, which reflects the aggregate response of the
individual dipoles on the sky (see Figures 4, 5, and 10 from Roshi et al.
2018), before smoothly dropping towards the edge of the FoV. The excellent
uniformity across the FoV facilitates high-quality beams.
Beam | FWHMXEL [′] | PeakXEL,Intended [′] | PeakXEL,Measured [′] | XEL %-Offset | FWHMEL [′] | PeakEL,Intended [′] | PeakEL,Measured [′] | EL %-Offset
---|---|---|---|---|---|---|---|---
0 | 9.14$\pm$0.01 | 0.05 | 0.08 | 0.03 | 9.06$\pm$0.01 | $-$0.44 | $-$0.39 | 0.55
1 | 9.35$\pm$0.01 | $-$1.87 | $-$1.79 | 0.86 | 9.33$\pm$0.02 | 4.11 | 4.12 | 0.11
2 | 9.30$\pm$0.01 | 2.68 | 2.72 | 1.5 | 9.3$\pm$0.01 | 4.11 | 4.12 | 0.11
3 | 9.99$\pm$0.03 | 4.61 | 4.59 | 0.22 | 9.22$\pm$0.01 | $-$0.44 | $-$0.39 | 0.54
4 | 9.53$\pm$0.01 | 1.87 | 1.94 | 0.73 | 10.08$\pm$0.03 | $-$4.08 | $-$4.12 | 0.39
5 | 10.31$\pm$0.03 | $-$2.68 | $-$2.72 | 1.5 | 10.39$\pm$0.03 | $-$4.08 | $-$4.12 | 0.38
6 | 10.53$\pm$0.04 | $-$4.51 | $-$4.43 | 1.8 | 9.50$\pm$0.01 | $-$0.44 | $-$0.39 | 0.53
Table 3: Summary of Gaussian fits to the beam response profiles shown in
Figure 3. Column (1): beam number; column (2): FWHM fit along XEL axis at the
location of the peak response; column (3) PeakXEL,Intended is intended
location of peak response along the XEL axis; column (4) PeakXEL,Measured is
the measured location of peak response along the XEL axis; column (5): XEL
%-Offset = |PeakXEL,Intended$-$PeakXEL,Measured|/FWHMXEL$\times$100%; columns
(6-9): same as columns (2-5) but for EL axis.
The response of the $i$th formed beam for each $\nu$ at each direction
$\theta$ is
$I_{i}\left(\theta\right)=\left|\mathbf{w_{\rm
maxSNR,i}}^{H}\left(\theta\right)\mathbf{a}_{i}\left(\theta\right)\right|^{2}.$
(11)
The left panel of Figure 3 shows the formed beam patterns for the calibration
grid around 3C295 observed for session GBT17B_360_04. Gaussian fits to cuts in
XEL and EL at the location of each beam’s peak response (red dashed lines)
shown in the right hand panel and summarized in Table 3 demonstrate that the
FWHM of the formed beams range between approximately 9′ and 10.5′, which is
comparable with the beam of the current single-pixel receiver. The offset
between the measured location of the peak response of each beam and its
intended position is less than 2% of the measured FWHM. While the outer beams
are more elongated than the boresight beam, the fits to the beam profiles show
deviations from a Gaussian approximation at response levels much below the
FWHM. The elongated shape at levels below 10% of the peak response is largely
due to forming beams near where the sensitivity begins to drop off. For
example, the elongation of the low-level response of Beam 3 corresponds to the
transition from the -1 dB to -3 dB contours in the sensitivity map shown in
Figure 2.
### 4.2 7-Point Cal
While it is interesting to obtain detailed spatial information of the array
response provided by a calibration grid, the necessary $\sim$40 minutes of
total observing time (including overhead) is disadvantageous. A 7-Point
calibration scan (henceforth 7Pt-Cal) can be performed in instances where
telescope time is a constraint. This procedure will (1) track a patch of sky
offset by $-$2 degrees in XEL from the calibrator source at the same EL offset
as the centers of Beams 4 and 5; (2) directly track the source (i.e., the
boresight); (3) slew the telescope to place the calibrator source at the
desired center of each of Beams 1-6; (4) track a patch of sky offset by $-$2
degrees in XEL from the source at the same EL offsets as the centers of Beams
1 and 2. The two reference
pointings at similar EL offsets as the outer beams allow for construction of
$\mathbf{R}_{\rm n}$ and also account for elevation-dependent effects, while
the tracks on the desired beam centers collect the necessary response data to
derive maxSNR weights. The duration of each track ranges between 10 and 30
seconds. While more efficient in terms of time than a full calibration grid,
the number of steering vectors $\mathbf{a}\left(\theta\right)$ obtained during
a 7Pt-Cal is only enough to set the location of the peak response for each
beam and derive an SEFD. No additional information concerning the shape of the formed
beams is available. This type of calibration is the primary calibration
procedure for pulsar and transient observations, when detailed knowledge of
the beam shape is not crucial to the science goals as compared to e.g., the
overall flux sensitivity.
### 4.3 Hi Observations
The spectral line data were collected in the fine channelized PFBCORR mode
with an inherent frequency resolution of 9.47 kHz ($\sim$2 km/s at the
frequency of Hi emission) by steering the telescope along columns of constant
longitudinal coordinates to make OTF maps. The raw dipole correlation matrices
were dumped to disk every $t_{\rm int}$ = 0.5 s at angular intervals of 1.67′
to ensure adequate spatial Nyquist sampling; the columns/rows were spaced
every 3′ in each DecLatMap/RaLongMap. The coordinate systems used to make our
science maps include horizontal (XEL/EL), J2000, and Galactic. See Table 2 and
Section 6.3 for a summary of the observational set-up for the Hi sources and
Sections 6.3.1, 6.3.2, and 6.3.3 for the results from observations of NGC
6946, NGC 4258, and a field near the Galactic Center.
The effective integration time of a map made with FLAG that combines all seven
formed beams ($t_{\rm eff,comb}$) is derived by first computing the total
effective integration time of a map made with a single beam $t_{\rm eff,map}$,
multiplying this by the number of formed beams, $N_{\rm beams}$, and dividing
by the map area in terms of the total number of beams contained within a map.
For example, the 2$\times$2 deg2 maps of NGC 6946 consist of 41 total columns
($N_{\rm columns}$), each with 72 distinct integrations ($N_{\rm int}$).
Similar to the calibration procedure outlined in Pingel et al. (2018), we
obtain a reference spectrum from the edges of our science maps by utilizing
the first and last four integrations of a particular map scan. The effective
integration time for a single integration in a map from a single formed beam
is therefore
$t_{\rm eff,int}=\frac{t_{\rm int}t_{\rm ref}}{t_{\rm int}+t_{\rm
ref}}=\frac{0.5\rm~{}s\cdot 4\rm~{}s}{0.5\rm~{}s+4\rm~{}s}=0.444\rm~{}s;$ (12)
$t_{\rm eff,map}$ then follows from $N_{\rm columns}\times N_{\rm int}\times
t_{\rm eff,int}=1312$ s and increases to $t_{\rm eff,map}\times N_{\rm
beams}=1312\times 7=9184$ s for the combined map. The FWHM of the approximately
Gaussian boresight beam is 9.1′, which corresponds to an angular area of
1.1331$\times\left(9.1^{\prime}\right)^{2}\sim$ 0.026 deg2. The area in terms
of the number of beams is then 4 deg2/0.026 deg2 $\sim$153 beams. The final
$t_{\rm eff,comb}$ is then just $t_{\rm eff,comb}$ = 9184 s / 153 beams $\sim$
60 s/beam. These $t_{\rm eff,comb}$ values are listed in the Notes
column of Table 2 for each science map and can be used in the ideal radiometer
equation to calculate the theoretical noise value in the final images.
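The bookkeeping above can be reproduced in a few lines, using the NGC 6946 numbers as an example (variable names are illustrative):

```python
# Effective integration time per map point (Equation 12)
t_int, t_ref = 0.5, 4.0                      # on-source and reference times [s]
t_eff_int = t_int * t_ref / (t_int + t_ref)  # -> 0.444 s

# Total effective time for one beam, then for all seven formed beams
n_columns, n_int, n_beams = 41, 72, 7
t_eff_map = n_columns * n_int * t_eff_int    # ~1312 s for a single beam
t_eff_7beam = t_eff_map * n_beams            # ~9184 s for the combined map

# Map area in units of the boresight beam area
fwhm_deg = 9.1 / 60.0                        # 9.1' boresight FWHM in degrees
beam_area = 1.1331 * fwhm_deg**2             # Gaussian beam area ~0.026 deg^2
n_beams_in_map = 4.0 / beam_area             # 2x2 deg^2 map -> ~153 beams

t_eff_comb = t_eff_7beam / n_beams_in_map    # ~60 s/beam
```

The factor 1.1331 is $\pi/(4\ln 2)$, the area of a Gaussian beam in units of its FWHM squared.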
## 5 Data Reduction
The data reduction and calibration of the Hi data were performed with a custom
Python software package, pyFLAG111https://github.com/nipingel/pyFLAG. This
section summarizes the scripts available to perform the post-correlation
beamforming, flux calibration, and imaging of FLAG spectral line data.
### 5.1 Post-Correlation Beamforming
A scan performed with FLAG produces several types of ancillary
FITS222https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf files
that contain important metadata such as the antenna positions and LO settings.
These metadata must be collated and combined with the raw covariances stored
in FITS files to create a single dish FITS
(SDFITS333https://safe.nrao.edu/wiki/bin/view/Main/SdfitsDetails) file that
can be manipulated in GBTIDL444http://gbtidl.nrao.edu/, just as data from the
single-pixel receiver. The unique format of the raw FLAG dipole covariances
necessitate custom pyFLAG software to collate all the metadata and perform the
post-correlation beamforming (i.e., Equation 6) to generate an SDFITS file for
each formed beam that contains beamformed spectra in units of raw counts.
This software suite contains all the necessary Python and GBTIDL code with
which to calibrate and image spectral line data from FLAG. In both correlator
modes (i.e., PFBCORR and CALCORR), each GPU runs two correlator threads making
use of the xGPU library555https://github.com/GPU-
correlators/xGPU/tree/master/src, which is optimized to work on FLAG system
parameters. Each correlator thread handles 1/20th of the total bandwidth made
up of either 25 non-contiguous coarse frequency channels with 303.18 MHz
resolution or 160 contiguous fine channels with 9.47 kHz resolution and writes
the raw output to disk in a FITS file format. The data acquisition software
used to save these data to disk was adapted from development code for the
Versatile GBT Astronomical Spectrometer (VEGAS) engineering FITS format.
The output FITS file from each correlator thread is considered a ‘bank’ with a
unique X-engine ID (XID; i.e., the correlator thread) ranging from 0 to 19
that is stored in the primary header of the FITS binary table; there are
therefore 20 distinct FITS files created for each scan. Reading and sorting
the covariances stored within each bank FITS file — whether placing the
non-contiguous 25$\times$20 coarse channels in the correct order or stitching
together the 160$\times$20 contiguous fine channels — is a crucial step within
the pyFLAG software.
Figure 4: The structure of a covariance matrix used in beamforming. The
numbers preceding each row/column correspond to the dipole element. Each
element of the matrix stores the covariance between dipole elements $i$ and
$j$ for a single frequency channel, $k$. The output is ordered in a flattened
one-dimensional array that needs to be reshaped into a 40$\times$40 matrix
before beamforming weights can be applied. Additionally, due to xGPU
limitations, the output size is 64$\times$64, which results in many zeros that
need to be thrown away in data processing.
The raw data output for both CALCORR and PFBCORR correlator modes are the
covariance matrices containing the covariance between individual dipole
elements. However, due to xGPU limitations, the covariance matrices are shaped
64$\times$64 and flattened to a one-dimensional (1D) data vector.
An example of how the covariance values are ordered is illustrated in Figure
4. Here, $R_{\rm k}^{\rm i,j}$ corresponds to the covariance between dipole
$i$ and $j$ at frequency channel $k$. Most of the transpose pairs (e.g.,
$R_{\rm k}^{\rm 1,3}$) are shown as zero because they are not included in the
1D data array that is saved to disk in order to preserve disk space.
Additionally, since only the covariances between the first 40 data streams (19
dipoles$\times$2 polarizations$+$2 spare channels) are recorded, there is a
large portion of zeros appended to the end of the 1D data array. The correlator
output
represents the block lower-triangular portion of the large 64$\times$64
covariance matrix shown in this figure, sorted in row-major order, where a
block corresponds to a colored 2$\times$2 sub-matrix. The reduction scripts
treat each
4-element contiguous chunk of the 1D data vector as a block and place it into
the larger covariance matrix in row-major order. Once the first 40 rows have
been filled in, a conjugate transpose operation is performed to fill in the
missing covariance pairs and the remaining zeros are discarded.
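The block row-major unpacking described above can be sketched as follows; this is a simplified stand-in for the pyFLAG reduction code, with the packing order as summarized in Figure 4:

```python
import numpy as np

NSTREAMS = 40  # 19 dual-pol dipoles x 2 polarizations + 2 spare channels
NPAD = 64      # padded xGPU output size
BLK = 2        # 2x2 sub-matrix blocks

def unpack_covariances(data):
    """Rebuild a Hermitian NSTREAMS x NSTREAMS covariance matrix from the
    flattened block-lower-triangular xGPU output for one frequency channel.
    `data` holds the 2x2 blocks of the lower triangle in row-major order."""
    nblocks = NPAD // BLK
    R = np.zeros((NPAD, NPAD), dtype=complex)
    idx = 0
    for bi in range(nblocks):
        for bj in range(bi + 1):  # lower triangle, including diagonal blocks
            R[BLK*bi:BLK*(bi+1), BLK*bj:BLK*(bj+1)] = \
                data[idx:idx + BLK*BLK].reshape(BLK, BLK)
            idx += BLK * BLK
    # conjugate-transpose fill of the missing upper-triangular blocks
    for bi in range(nblocks):
        for bj in range(bi + 1, nblocks):
            R[BLK*bi:BLK*(bi+1), BLK*bj:BLK*(bj+1)] = \
                R[BLK*bj:BLK*(bj+1), BLK*bi:BLK*(bi+1)].conj().T
    # discard the zero padding beyond the 40 active data streams
    return R[:NSTREAMS, :NSTREAMS]
```

Because the matrix is Hermitian, storing only the block lower triangle loses no information; the conjugate-transpose fill recovers the dropped transpose pairs exactly.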
When in CALCORR mode, the bank file corresponding to XID 0 contains
covariance matrices for frequency channels 0 to 4, 100 to 104, …, 400 to 404;
the XID 1 bank file stores covariance matrices for frequency channels, 5 to 9,
105 to 109, …, 405 to 409. However in PFBCORR mode, the covariance matrices
for channels 0 to 159 are stored in the bank file corresponding to XID 0 and
continue in a contiguous fashion such that the bank file corresponding to XID
19 stores data for frequency channels 3040 to 3199. The logic during data
reduction is to process each frequency channel individually, then sort the
result into the final bandpass based on the XID and mode in which the data
were taken. The scripts that drive the creation of an SDFITS file are
PAF_Filler.py — in essence the ‘main’ function of the program — and the two
modules metaDataModule.py and beamformerModule.py.
The foremost step in the filling and calibration process of FLAG data is to
solve Equation 5 for the dominant eigenvector using the $\mathbf{R_{\rm s}}$ and $\mathbf{R_{\rm n}}$ covariance
matrices obtained from calibration scans to determine the maxSNR complex
beamforming weights. This is performed with the pyFLAG python script,
pyWeights.py, which also generates a series of 20 FITS files (one for each
bank). Each beamformer weight FITS file contains a binary table consisting of
14$\times$3200 elements: (7 beams$\times$2 polarizations)$\times$(64
elements$\times$25 frequency channels$\times$2 for the complex pair). The
headers of these FITS files also contain the beam offsets (in arcminutes),
calibration set filenames, beamforming algorithm, and XID.
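The weight derivation itself reduces to a generalized eigenvalue problem. A minimal numpy sketch of the computation (the actual implementation in pyWeights.py may differ):

```python
import numpy as np

def maxsnr_weights(R_s, R_n):
    """Max-SNR beamforming weights: the dominant generalized eigenvector of
    R_s w = lambda R_n w (Equation 5), solved here as an ordinary
    eigenproblem on R_n^{-1} R_s."""
    vals, vecs = np.linalg.eig(np.linalg.solve(R_n, R_s))
    w = vecs[:, np.argmax(vals.real)]  # eigenvector of the largest eigenvalue
    return w / np.linalg.norm(w)
```

When the on-source covariance is dominated by a single point source, the resulting weight vector aligns with the steering vector toward that source, which is what makes the grid and 7Pt-Cal scans sufficient for weight calibration.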
Once the weights have been generated, PAF_Filler.py can be run. This script
reads in and unpacks each bank FITS file to pass the raw data 1D covariances
to the beamformer object created by beamformingModule.py. Each bank FITS file
is processed in parallel to maximize efficiency.
Within this module, the FITS files storing the complex beamforming weights are
read in and organized into the form of a 2D numpy array of complex data-type,
with the rows representing the 25 coarse frequency channels and the columns
representing the ‘40’ dipole data streams (19$\times$2 dual-polarization
dipoles plus 2 spare data channels). Once the complex weights are in the
correct format, the raw 1D covariances recorded for each integration are
reordered and transposed according to the block row-major scheme summarized in
Figure 4 and reshaped into a 3D numpy array of complex data-type with rows and
columns both representing the correlations between dipoles and the third axis
representing a given frequency channel. The final returned cube for each
integration has dimensions of 40$\times$40$\times N_{\rm chan}$, where
$N_{\rm chan}$ is again number of frequency channels per bank file — either 25
or 160, depending on whether FLAG is operating in CALCORR or PFBCORR mode,
respectively. Note two important aspects: (1) the irrelevant correlations
caused by xGPU limitations are thrown away at this stage; (2) some rows and
columns contain zeros as they correspond to two unused data streams. Equation
6 is then applied to each plane of the correlation cube to construct a
beamformed bandpass in units of raw counts. A 2D array containing the
beamformed bandpass for each integration is returned to PAF_Filler.py and
sorted into global data buffers. The software will recognize the mode based on
the number of channels stored in a bank FITS file. When in PFBCORR mode, where
100 coarse channels are sent through a PFB implementation to obtain a total of
3200 fine channels, the beamformer weight for an input coarse channel will be
applied across the 32 corresponding output fine channels.
After each bank FITS file for a particular scan is processed, the filled
global data buffers are passed to a metadata object created by
metaDataModule.py. This object collates all associated metadata, applies the
beam offsets to the recorded antenna positions, and performs the necessary
Doppler corrections to the topocentric frequencies. Once all corrections to
the spatial and spectral coordinates have been made, the binary FITS tables
are combined and appended to a primary Header Data Unit and returned to
PAF_Filler.py where the final SDFITS file is written to disk. The process then
repeats until all beams for all observed objects are processed. Comprehensive
documentation and usage examples are available at
https://github.com/nipingel/pyFLAG.
### 5.2 Spectral Line Calibration and Imaging
Session | Beam | Scan Type | SEFD [Jy] | Calibration Source | Calibration Source Flux [Jy]
---|---|---|---|---|---
GBT16B_400_12 (NGC 6946) | | | | |
| 0 | Grid | 14$\pm$1 | 3C295 | 22.15
| 1 | Grid | 15$\pm$2 | 3C295 | 22.15
| 2 | Grid | 15$\pm$2 | 3C295 | 22.15
| 3 | Grid | 15$\pm$1 | 3C295 | 22.15
| 4 | Grid | 16$\pm$1 | 3C295 | 22.15
| 5 | Grid | 16$\pm$2 | 3C295 | 22.15
| 6 | Grid | 17$\pm$2 | 3C295 | 22.15
GBT16B_400_13 (NGC 6946) | | | | |
| 0 | Grid | 14$\pm$1 | 3C123 | 22.15
| 1 | Grid | 15$\pm$2 | 3C123 | 22.15
| 2 | Grid | 15$\pm$2 | 3C123 | 22.15
| 3 | Grid | 15$\pm$1 | 3C123 | 22.15
| 4 | Grid | 16$\pm$1 | 3C123 | 22.15
| 5 | Grid | 16$\pm$2 | 3C123 | 22.15
| 6 | Grid | 17$\pm$2 | 3C123 | 22.15
GBT17B_360_03 (NGC 4258 Field) | | | | |
| 0 | Grid | 16.4$\pm$0.3 | 3C295 | 22.15
| 1 | Grid | 17.0$\pm$0.4 | 3C295 | 22.15
| 2 | Grid | 16.2$\pm$0.6 | 3C295 | 22.15
| 3 | Grid | 16.9$\pm$0.6 | 3C295 | 22.15
| 4 | Grid | 18.1$\pm$0.4 | 3C295 | 22.15
| 5 | Grid | 17.4$\pm$0.4 | 3C295 | 22.15
| 6 | Grid | 17.5$\pm$0.4 | 3C295 | 22.15
GBT17B_360_04 (NGC 4258 Field)‡ | | | | |
| 0 | Grid | 9.3$\pm$0.2 | 3C295 | 22.15
| 1 | Grid | 9.5$\pm$0.3 | 3C295 | 22.15
| 2 | Grid | 9.6$\pm$0.2 | 3C295 | 22.15
| 3 | Grid | 9.6$\pm$0.3 | 3C295 | 22.15
| 4 | Grid | 9.7$\pm$0.3 | 3C295 | 22.15
| 5 | Grid | 9.5$\pm$0.3 | 3C295 | 22.15
| 6 | Grid | 9.7$\pm$0.2 | 3C295 | 22.15
GBT17B_455_01 (G353$-$4.0) | | | | |
| 0 | 7Pt-Cal† | 10$\pm$2 | 3C348 | 48.14
| 1 | 7Pt-Cal | 10$\pm$2 | 3C348 | 48.14
| 2 | 7Pt-Cal | 10$\pm$2 | 3C348 | 48.14
| 3 | 7Pt-Cal | 10$\pm$1 | 3C348 | 48.14
| 4 | 7Pt-Cal | 10$\pm$1 | 3C348 | 48.14
| 5 | 7Pt-Cal | 10$\pm$3 | 3C348 | 48.14
| 6 | 7Pt-Cal | 10$\pm$3 | 3C348 | 48.14
Table 4: Summary of derived system properties in XX Polarization from
calibration scans used to make the science images; † denotes that $\nu_{0}$
was set to 1450.00000 MHz for Beams 0-6; ‡ denotes that $\nu_{0}$ was set to
1450.8484 MHz.
After post-correlation beamforming to obtain spectra in units of raw system
counts, flux calibration of Hi data can begin. We calculate the SEFD (see
Equation 7 and discussion in Section 4.1) from the CALCORR calibration scans.
The flux measured on the sky is
$S_{\rm sky}={\rm SEFD}\left(\frac{P_{\rm On}}{P_{\rm Off}}-1\right).$ (13)
As discussed above, we obtain a reference spectrum to use as $P_{\rm Off}$
from the edges of our science maps by taking the mean power in each frequency
channel for the first and last four integrations of a particular map scan.
$P_{\rm On}$ in Equation 13 is then the raw power in each integration recorded
during the scan. The SEFD values used to scale the raw power ratios for each
beam and each session are computed with Equation 7 as discussed in Section 4.1
and summarized in Table 4. The flux calibration scripts are written in GBTIDL
and driven with a python wrapper, PAF_edgeoffkeep_parallel.py, in order to
calibrate each of the seven beams in parallel. The mean SEFD over all beams
included in our science maps is 12.3$\pm$0.3 Jy. However, this value is
biased by measurements taken before improvements in calibration procedures
(see the discussion below); a more typical value after improvements is
9.8$\pm$0.4 Jy. If we assume an $\eta$ of 0.65 (Boothroyd et al., 2011) for
the sake of direct comparison with the single-pixel receiver, and use the more
characteristic SEFD value of 10 Jy, Equation 8 gives a $T_{\rm sys}$ of 18.5
K. While this assumption of $\eta$ does not consider specific parameters of
the FLAG receiver, such as the large spillover from the illumination pattern
of individual dipoles and their mutual coupling, this $T_{\rm sys}$ value is
consistent with both the existing single-pixel receiver ($\sim$ 18 K) and the
$T_{\rm sys}$/$\eta$ measurements of Roshi et al. (2018) at 1.4 GHz
($\sim$25-35 K). The overall sensitivity of FLAG is discussed in Section 6.2.
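The quoted $T_{\rm sys}$ follows directly from Equation 8. As a numeric check, assuming a 100 m diameter dish for $A_{\rm g}$ and $\eta=0.65$:

```python
import math

K_B = 1.380649e-23        # Boltzmann constant [J/K]
A_G = math.pi * 50.0**2   # geometric area of the 100 m GBT dish [m^2]

def tsys_from_sefd(sefd_jy, eta):
    """Invert Equation 8: T_sys = eta * 1e-26 * SEFD * A_g / (2 k)."""
    return eta * 1e-26 * sefd_jy * A_G / (2.0 * K_B)
```

With an SEFD of 10 Jy this evaluates to approximately 18.5 K, reproducing the value quoted above.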
The measured $T_{\rm sys}$/$\eta$ is directly related to the SEFD (i.e.,
Equation 8). Consistent SEFD values are critical for accurately reproducing
measurements of flux on the sky between observing sessions and making
comparisons between the data collected by FLAG and other instruments. We see
that the overall SEFD values progressively converge to the single-pixel value
and observe a consistent decrease in the variation between beams and session-
to-session with subsequent observing runs. We attribute the steady reduction
in both measured SEFD values and associated scatter to consistent improvements
to the calibration strategies used to obtain and maintain bit and byte-lock —
such as the introduction of scripts to automate this process. We stress that
our most accurate flux measurements are obtained from our later observing
sessions, specifically GBT17B_360_04 and beyond. We therefore note that the
maps presented in Section 6.3 from previous sessions are done so with the
caveat that the overall flux scale has high uncertainty relative to later
sessions. Furthermore, since the overall flux scale of an OTF spectral map
depends on both the area of the assumed telescope beam and the width of the
convolution function used to interpolate the individual samples to a regular
image grid (Mangum et al., 2007), we present Hi flux density profiles only
from sessions where the weights were derived from a full calibration grid to
ensure the beam response is fully characterized over the FoV.
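The edge-off calibration of Equation 13 amounts to the following; this is a simplified Python sketch, whereas the production scripts are written in GBTIDL:

```python
import numpy as np

def calibrate_scan(raw, sefd_jy, n_edge=4):
    """Flux-calibrate one map scan via Equation 13.

    raw     : (n_integrations, n_channels) beamformed power in raw counts
    sefd_jy : SEFD for this beam and session [Jy]
    n_edge  : integrations at each scan edge used for the reference spectrum
    """
    # per-channel mean of the first and last n_edge integrations -> P_Off
    p_off = np.vstack([raw[:n_edge], raw[-n_edge:]]).mean(axis=0)
    return sefd_jy * (raw / p_off - 1.0)  # S_sky in Jy
```

Dividing by the reference spectrum removes the bandpass shape, and the SEFD then sets the absolute flux scale.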
Figure 5: Top: an example of an uncalibrated, beamformed spectrum taken from
the 35th integration of the 19th column of a DecLatMap scan of NGC 6946. The
$-$3 dB drops in power (i.e., ‘scalloping’) are an artifact of the two-stage
PFB
implementation of the back end (see text). Bottom: The calibrated version of
the above spectrum. While the scalloping behavior appears to be mitigated, the
signal aliasing at the edge of a coarse channel is still present.
### 5.3 Bandpass Scalloping
An example of a raw and calibrated integration when the system is in PFBCORR
mode is shown in Figure 5. The nulls, or ‘scalloping’, seen every 303.18 kHz
(every 32 fine frequency channels) in the top panel are a consequence of the
two-stage PFB architecture currently implemented in the back end. As
the raw complex time series data are processed within the ROACHs, a response
filter is implemented in the coarse PFB such that the adjacent channels
overlap at the $-$3 dB point to reduce spectral leakage (power leaking in from
adjacent channels). However, this underlying structure becomes readily
apparent after the fine PFB implemented in the GPUs. The scalloping therefore
traces the structure of each coarse channel across the bandpass. While the
structure is somewhat mitigated in the calibrated data (since there is a
division by a reference spectrum), power variations caused by spectral leakage
in the transition bands of the coarse-channel bandpass filter result in
residual structure. Additionally, this scheme leads to signal aliasing
stemming from the overlap in coarse channels. Such near-coarse-channel-band-
edge aliasing artifacts are present in a number of other existing astronomical
two-stage zoom spectrometers. These artifacts do not hinder the performance of
FLAG in terms of sensitivity, but a fix for the signal aliasing is a priority
going forward. The current architecture provides both coarse and narrowband
spectra through a two-stage channelizer: the first stage is implemented in the
ROACHs and the second as part of PFBCORR mode in the GPUs. Both stages use
PFBs for computationally efficient channelization, and both are presently
critically sampled. To remove the spectral artifacts (aliasing and
scalloping), the first-stage channelizer must instead be an oversampled PFB
that allows adjacent channels to overlap; the overlapped region would then be
discarded in the output of the second-stage critically sampled PFB (PFBCORR)
to eliminate the artifacts.
The scalloping can be completely mitigated by dithering the frequency such
that a subsequent map has a central frequency that is either 151.59 kHz (or
one-half of a coarse channel width) above or below the initial central
frequency setting. Because the scalloping is caused by overlap of the input
100 coarse channels into the PFB, there are 98 instances of drops in power
across the total 3200 fine channels, with each dip spanning $\sim$56 kHz, or
six fine channels: the three channels at each edge of a coarse channel. The
channels affected by the scalloping are known beforehand and do
not change regardless of LO setting. In a dithered observation, the affected
channels from observations at both frequency settings can be blanked with the
chanBlank_parallel.py script before imaging to ensure no signal is aliased in
the final maps. These blanked calibrated spectra are smoothed with a Gaussian
kernel to a final resolution of 5.2 km s-1 and imaged with the
gbtgridder666https://github.com/GreenBankObservatory/gbtgridder tool,
utilizing a Gaussian-tapered circular Bessel gridding function. Note that we
present images of only the XX linear polarization due to complications with
the YY polarization signal chain during our two observing runs that have since
been rectified. We account for the use of a single polarization in our
calculations of sensitivity and comparison to equivalent single-pixel data.
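Because the set of scalloped channels is deterministic, the blanking mask can be precomputed. A sketch of the indexing follows; the exact channel bookkeeping in chanBlank_parallel.py, particularly at the band edges, may differ:

```python
import numpy as np

N_FINE = 3200    # fine channels across the band
PER_COARSE = 32  # fine channels per coarse channel
EDGE = 3         # affected fine channels at each coarse-channel edge

def scalloped_channels():
    """Fine-channel indices affected by scalloping: EDGE channels on each
    side of every internal coarse-channel boundary."""
    bad = []
    for b in range(PER_COARSE, N_FINE, PER_COARSE):  # internal boundaries
        bad.extend(range(b - EDGE, b + EDGE))        # six channels per dip
    return np.array(bad)
```

Since the LO dither shifts the sky frequency by half a coarse channel while the affected channel indices stay fixed, combining the two dithered maps with these channels blanked recovers the full band.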
## 6 Results
### 6.1 Beamformer Weights
Figure 6: Variation of the phase distance metric between subsequent
beamforming weights for the boresight beam as a function of time. The $d_{\rm
1}$ values are corrected for the bulk phase offset according to Equation 15
and normalized by the first $d_{\rm 1}$ value from each observing epoch for
clarity.
The calibration procedure described in Section 4.1 contributes to $\sim$40
minutes of overhead and, in principle, can remain valid for several weeks if
bit/byte lock is not lost. However, since lock is currently lost with every
change in the local oscillator setting, it is recommended that an observer
derive fresh beam weights at the beginning of each observing session. Other
reasons re-calibration may be necessary include: large variations in the
contribution of spillover and sky noise to the signal model and the relative
electronic gain drift between dipoles (Jeffs et al., 2008). Important factors
that impact the quality of the weights are robust bit and byte locks,
constraining the desired steering vector for a formed beam, and utilizing a
sufficiently bright calibration source to adequately characterize the system
response when on and off source.
While the current state of the FLAG system effectively requires new
beamforming weights every session, it is still interesting to explore how the
complex weight vectors derived from a given calibration observation vary with
time. Studying the variations will help reveal characteristic properties and
behavior of the weights that demonstrate the stability of the system with
time.
Recall that a given element in the weight vector is a complex number that
contains the amplitude and phase information to be applied to the output of a
given dipole in order to steer a beam in the desired direction. Beam steering
is primarily influenced by varying the amplitude of the weights applied to
each dipole. Given the reliable placement of our formed beams demonstrated in
Figure 3, we wish to investigate how the formed beam responses are influenced
by the second-order effect of phase variations. To measure the difference in
phase, a distance metric can be defined
$d_{1}=\lvert\lvert\mathbf{a_{1}}-\mathbf{\tilde{a}_{2}}\rvert\rvert,$ (14)
where $\mathbf{a_{1}}$ and $\mathbf{a_{2}}$ are the weight vectors normalized
by their vector norms (i.e., the square root of the sum of each element's
squared complex modulus), or
$\mathbf{w_{1}}/\lvert\lvert\mathbf{w_{1}}\rvert\rvert$ and
$\mathbf{w_{2}}/\lvert\lvert\mathbf{w_{2}}\rvert\rvert$, respectively. The
vector $\mathbf{\tilde{a}_{2}}$ represents the subsequent weight vector that
has been corrected for the bulk phase offset between the two vectors. This
bulk phase offset arises from the steering vectors, which are found by solving
for the dominant eigenvector in the generalized eigenvalue problem in Equation
5. Since eigenvectors are defined only up to an arbitrary complex scale
factor, it is the unknown phase of this factor, which differs between
calibration data sets, that produces the bulk phase offset. A subsequent
weight vector can be phase aligned to some
initial weight vector by first making the first element of $\mathbf{a_{1}}$
real and then computing
$\hat{\phi}=\angle\left(\mathbf{a^{H}_{2}}\mathbf{a_{1}}\right),$ (15)
where $\hat{\phi}$ is the angle of the product of
$\mathbf{a^{H}_{2}}\mathbf{a_{1}}$. The correction for the bulk phase offset
is therefore a complex scaling factor applied to all phases in the latter
weight vector to ensure the phase differences in the remaining dipoles arise
strictly from the systematic (e.g., bit/byte lock) and instrumental effects
between different weight calibrations. The phase aligned weight vector is
therefore $\mathbf{\tilde{a}_{2}}=e^{i\hat{\phi}}\mathbf{a_{2}}$.
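Equations 14 and 15 translate directly into code. A sketch of the alignment and distance computation (the function name is illustrative):

```python
import numpy as np

def phase_distance(w1, w2):
    """Distance d_1 between two weight vectors (Equation 14) after
    normalization and removal of the bulk phase offset (Equation 15)."""
    a1 = w1 / np.linalg.norm(w1)
    a2 = w2 / np.linalg.norm(w2)
    a1 = a1 * np.exp(-1j * np.angle(a1[0]))  # make the first element real
    phi = np.angle(np.vdot(a2, a1))          # angle of a2^H a1
    a2_aligned = np.exp(1j * phi) * a2       # phase-aligned weight vector
    return np.linalg.norm(a1 - a2_aligned)
```

Two weight vectors that differ only by an overall complex scale factor give a distance of zero, so any nonzero $d_{1}$ reflects genuine per-dipole phase changes.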
Because the distance metric $d_{1}$ is the overall magnitude of an
element-wise difference between two $M$ element vectors, it encapsulates all
the phase differences between respective dipoles into a single quantity. Small
variations in $d_{1}$ over time indicate similar phases (save for the bulk
phase offset due, in part, to new bit/byte lock) between the derived weight
vectors, meaning the directional response of the array is stable over the time
span of a typical observing run; thus, the beam pattern shape remains
relatively unchanged.
Figure 6 shows the variation of the normalized $d_{1}$ distance metric as a
function of time for the boresight beam. We see similar trends for each of the
outer beams and no discernible difference between types of calibration scans
performed. The initial set of beamformer weights (i.e., $\mathbf{a_{1}}$) is
taken to be the first set of weights derived for that particular observing
run. We compare all subsequent weights from a given observing run with this
first set. The time values are taken to be the difference between the mean
Modified Julian Date (MJD) values associated with a given calibration scan,
with the initial MJD taken to be from the first calibration scan in a given
observing run. The scatter in the phase variations is well below the 1% level,
indicating that the directional response to identical incident signals is very
similar over time, which ensures that the peak response is reliably located in
the desired direction on the sky and that the beam structure remains similar
between sessions.
Figure 7 demonstrates the effect of varying beamforming weights on the
measured beam shapes. Weights derived from the calibration grids performed
during the GBT17B_360_04 observing sessions were applied to steering vectors
from the GBT17B_360_07 calibration grid. Weights derived during an earlier
session applied to steering vectors from a subsequent session are considered
to be stale, since the sample delays required to achieve a previous bit/byte
lock will produce a different phase response. By examining the changes in
overall beam shape, the locations of the peak response of each formed beam
relative to the desired pointing center, and change in sensitivity (i.e.,
Equation 10), we are able to investigate the stability of these beam weights
between observing sessions.
The beams formed with stale weights retain their overall Gaussian shape. While
the peak response of the stale boresight beam is close to the desired pointing
center, the peak responses of some of the outer beams, specifically Beam 3,
shift significantly. The difference map in the bottom left panel reveals that
the relative phase offset in the stale weights degrades the sidelobe
structure, shifting the low-level beam response towards the edge of the FoV.
The change in the low-level beam response is further illustrated in the
partial histograms shown in the bottom right panel. The peaks in the histograms
that represent the sidelobe structure shift to higher values and become
broader, indicating a change in the overall beam shape below the 10% level.
The change in shape of each distribution is due to the relative phase offset
present in the stale weights.
We compute a measure of sensitivity for each formed beam using Equation 10.
The value of $\mathbf{w^{\rm H}}\mathbf{R_{\rm s}}\mathbf{w}$ is taken to be
the maximum power value at 1420 MHz in a 7-Pt calibration scan when the
calibrator is centered in a given beam, and the $\mathbf{w^{\rm
H}}\mathbf{R_{\rm n}}\mathbf{w}$ value is the average power value in the
nearest reference pointing at that same frequency. Taking the ratio of the
sensitivity values between beams formed from stale weights to those formed
with the correct weights reveals an average drop of 56% between all formed
beams. Overall, the beams formed with stale weights are stable above the 50%
level of the peak response. However, the application of stale weights results
in beam patterns that are, on average, half as sensitive and possess altered
directional responses to the same incident signal at the levels of the first
sidelobes. An observer should account for the overhead required to perform at
least a 7Pt-Cal scan to derive contemporaneous weights.
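The sensitivity comparison above can be sketched as a beamformed power ratio. The helper below is a minimal illustration of Equation 10's quotient $\mathbf{w^{\rm H}}\mathbf{R_{\rm s}}\mathbf{w}/\mathbf{w^{\rm H}}\mathbf{R_{\rm n}}\mathbf{w}$ on toy covariances; the array size, covariance values, and function names are assumptions for illustration, not the FLAG pipeline.

```python
import numpy as np

def beam_snr(w, R_on, R_off):
    """Sensitivity proxy in the spirit of Equation 10: ratio of
    beamformed power with the calibrator in the beam (R_on) to the
    power in the nearest off-source reference pointing (R_off)."""
    num = np.real(w.conj().T @ R_on @ w)
    den = np.real(w.conj().T @ R_off @ w)
    return num / den

# toy covariances for an M-dipole array (values are illustrative)
rng = np.random.default_rng(1)
M = 19
s = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # signal steering
R_n = np.eye(M)                       # idealized noise covariance
R_on = R_n + 50 * np.outer(s, s.conj())

w_good = s / np.linalg.norm(s)        # matched to the source
w_stale = rng.standard_normal(M) + 1j * rng.standard_normal(M)
w_stale /= np.linalg.norm(w_stale)    # stand-in for a mismatched weight set

print(beam_snr(w_good, R_on, R_n) > beam_snr(w_stale, R_on, R_n))  # True
```

By the Cauchy-Schwarz inequality, any weight vector not matched to the steering vector yields a lower ratio, which is the mechanism behind the measured sensitivity drop with stale weights.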
A calibration strategy termed ‘word lock’, which in principle will allow
observers to reuse previously derived weights, is nearing deployment. This
procedure accounts for the variable amount of sample delays between each
bit/byte lock cycle by utilizing the time-shift property of the Fourier
Transform to insert shifts in the full 16-bit/2-byte word. By inserting the
optimal amount of shifts that minimizes the variation in phase across
frequency relative to a reference dipole (Burnett, 2017), the phase response
of the previous set of weights will now apply to the current state of the
system.
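The time-shift trick underlying word lock can be illustrated directly: delaying a signal by $n$ samples multiplies its spectrum by a linear phase ramp, so the optimal integer shift is the one that flattens the residual phase against a reference dipole. The sketch below is a toy version; the sample rate, search range, and helper names are assumptions, and the deployed procedure (Burnett, 2017) may differ in detail.

```python
import numpy as np

def apply_sample_shift(spectrum, freqs, n_shift, f_samp):
    """Shift a dipole's signal by an integer number of samples by
    multiplying its spectrum with a linear phase ramp (time-shift
    property of the Fourier Transform)."""
    return spectrum * np.exp(-2j * np.pi * freqs * n_shift / f_samp)

def best_shift(spec, ref, freqs, f_samp, max_shift=16):
    """Pick the sample shift minimizing the variation of phase
    across frequency relative to a reference dipole, so a previous
    weight solution remains valid (hypothetical helper)."""
    shifts = range(-max_shift, max_shift + 1)
    cost = [np.var(np.angle(apply_sample_shift(spec, freqs, n, f_samp)
                            * np.conj(ref)))
            for n in shifts]
    return list(shifts)[int(np.argmin(cost))]

# toy example: a dipole delayed by 3 samples relative to the reference
f_samp = 303.18e3          # coarse-channel sample rate (illustrative)
freqs = np.linspace(0, f_samp / 2, 256)
ref = np.ones(256, dtype=complex)
spec = apply_sample_shift(ref, freqs, -3, f_samp)  # 3-sample delay
print(best_shift(spec, ref, freqs, f_samp))  # 3
```

Once the recovered shift is applied, the phase response of the old weights again matches the current state of the system.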
Figure 7: Formed beam patterns and resulting histograms wherein beamforming
weights derived from the calibration grids performed during the GBT17B_360_04
and GBT17B_360_07 observing sessions were applied to steering vectors from the
GBT17B_360_07 calibration grid. The contours, red dashed lines, and $\times$
symbols are the same as in Figure 3. The white vertical and horizontal dashed
lines correspond to the red lines from the upper right panel as a reference to
the shift in peak response caused by the application of stale weights. Top
left: beam pattern derived using the correct weights. Top right: beam pattern
derived using stale (i.e., from GBT17B_360_04) weights. Bottom left: the
difference of the top left and right panels. The solid (dashed) contours
denote the 90%, 50%, and 25% level of the maximum relative difference between
each formed beam. Bottom right: Partial histograms of the beam response values
shown in the upper panels. The range of response values is chosen to highlight
the difference at the levels of the sidelobes.
### 6.2 Sensitivity as a Function of Frequency
Figure 8: $T_{\rm sys}$/$\eta$ (see Equation 9) as a function of frequency
derived for a set of 7Pt-Cal scans in which the LO was sequentially shifted by
50 MHz. The PAF model results from Roshi et al. (2019b) corresponding to the
two polarizations are marked RSF19XX and RSF19YY. These model results
correspond to a thermal transition length of 9.1 cm and a loss of 1 K. See
Roshi et al. (2019b) for further details.
Figure 8 shows the result of Equation 9 derived from frequency sweep
observations performed as engineering tests for several of the commissioning
runs. The goal of this test is to characterize the sensitivity over a wide
range of frequencies and identify frequencies most affected by narrowband RFI
features. We performed a series of 7Pt-Cal scans with the LO set to 50 MHz
increments beginning at 1100 MHz and continuing up to 1700 MHz. For each
calibration scan, we calculate $T_{\rm sys}$/$\eta$ as a function of the 150
MHz bandpass for each formed beam at the coarse channel resolution of 0.30318
MHz and merge the results.
As can be expected with significant improvements to the system made between
subsequent commissioning runs, $T_{\rm sys}$/$\eta$ decreases as a function of
epoch for both polarizations with the February 2018 calibration data showing
the lowest observed $T_{\rm sys}$/$\eta$. Since $T_{\rm sys}$/$\eta$ and SEFD
depend on one another, we also attribute this trend to improvements made to
the signal processing algorithms of the back end and calibration strategies to
obtain bit/byte lock. Specifically, for the February 2018 observing runs, a
correction was implemented that increases the digital gain to avoid a bit
underflow when the data in the ROACH are reduced to 8-bit/8-bit real and
imaginary values just before packetization.
The measured $T_{\rm sys}$/$\eta$ are compared to several PAF system models
(see Figure 8). In general, these models are produced by first obtaining the
modified full polarization response pattern of the individual dipole elements
embedded in the array. Finite element solutions of electromagnetic equations
were used to obtain these response patterns. The patterns along with a model
of the GBT optics were used to predict the full polarized electromagnetic
field pattern in the antenna aperture and to characterize ground spillover.
These results were used to compute the signal covariance and the noise
covariance due to the ground spillover and sky background. A noise model for
the cryogenic LNAs is utilized to predict the receiver contribution to the
noise covariances. The maxSNR beamforming algorithm is then applied to the
signal and noise covariances to predict the final $T_{\rm sys}$/$\eta$ at a
given frequency. See Roshi et al. (2019b) (hereafter RSF19) for further
details on modeling.
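The maxSNR step can be made concrete: the weights maximizing the ratio of beamformed signal power to noise power are the principal generalized eigenvector of the covariance pair. A minimal sketch, with toy covariances standing in for the modeled GBT signal and noise covariances:

```python
import numpy as np
from scipy.linalg import eigh

def maxsnr_weights(R_s, R_n):
    """Max-SNR beamformer: the weight vector maximizing
    (w^H R_s w)/(w^H R_n w) is the principal generalized
    eigenvector of the pair (R_s, R_n)."""
    vals, vecs = eigh(R_s, R_n)      # ascending generalized eigenvalues
    return vecs[:, -1], vals[-1]     # weights and achieved SNR

# toy single-source model (illustrative numbers, not the GBT model)
rng = np.random.default_rng(2)
M = 19
a = rng.standard_normal(M) + 1j * rng.standard_normal(M)
R_s = np.outer(a, a.conj())                       # rank-1 signal covariance
R_n = np.eye(M) + 0.1 * np.diag(rng.random(M))    # near-diagonal noise

w, snr = maxsnr_weights(R_s, R_n)
# for a rank-1 signal the achieved SNR equals a^H R_n^{-1} a
print(np.isclose(snr, np.real(a.conj() @ np.linalg.solve(R_n, a))))  # True
```

The predicted $T_{\rm sys}/\eta$ then follows from the achieved SNR at each frequency.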
The measured $T_{\rm sys}$/$\eta$ for the February observing run is largely
consistent with the models at a frequency of 1.4 GHz. Overall, the measured
sensitivity across the functional frequency range of the receiver is consistent
with model expectations, with only moderate narrowband RFI near the Hi
transition. The discrepancy between the models and measurements at lower
frequencies may be due to differences between the modeled and actual roll-off
of the analog filter. Obvious RFI artifacts present between 1000 MHz and 1100
MHz and near 1625 MHz need to be considered when planning potential
observations of radio recombination lines and the OH 1665 MHz and 1667 MHz
transitions. While we only show results for the boresight beam, the trends are
similar for all outer beams.
### 6.3 Hi Results
#### 6.3.1 NGC 6946
The external galaxy, NGC 6946, was chosen as the first science target for Hi
on the basis of ample GBT single-pixel data available for comparison (e.g.,
Pisano 2014). The presence of high-velocity gas from galactic fountain
activity (Boomsma et al., 2008) and an Hi filament, possibly related to recent
accretion (Pisano, 2014), and several smaller nearby companions are also ideal
features to test the sensitivity of this new receiver.
This source was observed in the horizontal celestial coordinate system to
ensure beam offsets, which are determined in the same coordinate frame by
definition, were correct. The images presented here are
$2^{\circ}\times 2^{\circ}$ in size with the central frequency ($\nu_{0}$) set
to 1450.0000 MHz in the
topocentric Doppler reference frame. For a direct comparison with previous
single-pixel data and a single FLAG beam, we show channel maps from the
boresight beam in Figure 9 to demonstrate that FLAG effectively reproduces
single-pixel observations. Overall, the FLAG and single-pixel contours agree
well; the slight offsets in the lowest-level contours are attributed to the
fact that the FLAG data are almost a factor of 10 less sensitive
than the single-pixel data. The difference in sensitivity between these two
maps also explains the non-detection of the two unresolved companions,
UGC11583 and L149, in the channel maps from a single beam. Figure 10 reveals
the presence of the two companions once data from all seven FLAG beams are
combined in a single map. Here, the slight differences at the lowest contour
levels likely arise from the complicated beamshape and sidelobe structure
resulting from averaging the seven distinct formed beams.
Figure 9: Channel maps of NGC 6946 and nearby companions. Hi emission
detected by the FLAG boresight beam is represented by the color scale and
white contours, while emission detected by the single-pixel receiver is
denoted by orange contours. Both sets of contours begin at the 130 mJy/Beam
level ($\sim$3$\sigma_{\rm meas}$ in Table 5) and continue at 10 and 25 times
that level. Figure 10: Hi column density map of NGC 6946. FLAG data is
represented by the color scale and white contours, while the single-pixel
equivalent column density levels are overlaid in orange. The outer contour is
at a level of 1$\times$1019 cm-2, which represents a 3$\sigma$ detection over
the integrated 11.4 km s-1 to 181.4 km s-1 velocity range, while the inner
contours go as 5, 10, and 25 times that level. We have assumed the emission is
optically thin and a similar gain of 1.86 K/Jy to convert the FLAG data to
units of brightness temperature.
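The optically thin conversion behind these column density maps can be sketched numerically. The $1.823\times10^{18}$ factor is the standard optically thin Hi conversion, and 1.86 K/Jy is the gain quoted above; the function name and the example numbers are illustrative.

```python
import numpy as np

# Convert an optically thin Hi spectrum to a column density.
GAIN_K_PER_JY = 1.86         # FLAG gain quoted in the text
HI_CONST = 1.823e18          # cm^-2 per (K km s^-1), optically thin

def column_density(flux_jy_per_beam, dv_kms):
    """N_HI [cm^-2] from a spectrum in Jy/beam sampled on a uniform
    velocity grid with channel width dv_kms [km/s]."""
    t_b = np.asarray(flux_jy_per_beam) * GAIN_K_PER_JY  # Jy/beam -> K
    return HI_CONST * np.sum(t_b) * dv_kms

# e.g. a flat 0.13 Jy/beam line (the 3-sigma contour level of Figure 9)
# spanning four 5 km/s channels:
print(f"{column_density(np.full(4, 0.13), 5.0):.2e}")  # 8.82e+18 cm^-2
```

The $1\times10^{19}$ cm$^{-2}$ outer contour of Figure 10 corresponds to this sum taken over the full 11.4 to 181.4 km s$^{-1}$ velocity range.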
#### 6.3.2 NGC 4258 Field
NGC 4258 resides in the Canes Venatici II Group (de Vaucouleurs, 1975), which
is comprised of several companions including the late-type galaxies NGC 4288
and UGC 7408 to the southwest and J121811.04+465501.1 — a low surface
brightness dwarf galaxy (Liang et al., 2007) — slightly to the southeast. The
most appealing target in the field is a prominent Hi filament extending from
NGC 4288 that points towards NGC 4258. This filament was seen previously with
the 76.2m Lovell telescope at the Jodrell Bank Observatory (UK) as part of the
Hi Jodrell All Sky Survey (HIJASS); (Wolfinger et al., 2013). It is classified
as an ‘Hi cloud’ with the designation HIJASS J1219+46; no known optical
counterparts are observed over the spatial extent of the Hi emission. The
single-pixel data used as a comparison, which was collected during a GBT
survey to provide the single-dish counterpart to the high-resolution
Westerbork Radio Synthesis Telescope (WSRT) Hydrogen Accretion in LOcal
GAlaxieS (HALOGAS) Survey (Heald et al., 2011; Pingel et al., 2018), shows a
peak flux of $\sim$0.06 Jy and projected physical scale of $\sim$80 kpc,
assuming the same distance as to NGC 4258.
A total of six $1.5^{\circ}\times 2^{\circ}$ maps evenly split over two separate
observing sessions were performed. Improvements to how the beam offsets were
applied in the custom reduction software enabled mapping in equatorial
coordinates. The first and second sessions had $\nu_{0}$ set to 1450.0000 MHz
and 1449.84841 MHz, respectively, to circumvent the frequency scalloping (see
Figure 5 and discussion in Section 5.2). The relative weak flux, extended
nature, and complex kinematics originating from a possible tidal interaction
between HIJASS J1219+46 and other group members provide an excellent benchmark
for the mapping capabilities of FLAG.
The channel maps of the NGC 4258 Field in Figure 11 contain data from all
seven beams from sessions 17B_360_03 and 17B_360_04, with data from the former
session being scaled by the factor listed in Table 5 to ensure a consistent
flux scale. While there is no specific identified cause for the relatively
large scale factor of $\sim$0.6 between observing sessions, we again note that
scripts to automate the bit/byte locking procedure were used for the first time
before the latter session, which has been shown to significantly increase the
stability of the system over the course of multiple observing sessions that
use the same LO configuration. These channel maps demonstrate that FLAG can
reproduce the
features of diffuse structures detected by the single-pixel receiver when
mapping at similar sensitivities. The contours tracing the filament, HIJASS
J1219+46, extending from NGC 4288 between the velocities of 378 km s-1 and 409
km s-1, are in agreement, except for the lowest-level contours, which are
affected by the unconstrained sidelobe levels. The Hi column density image in
Figure 12, in which a mask was applied such that only pixels with S/N$>$3
are included in the final image, shows very good correspondence at all contour
levels between the FLAG and single-pixel data, with a clear detection of the
low-level emission associated with the Hi cloud.
Figure 11: Channel maps of the NGC 4258 Field. Hi emission detected by FLAG
is represented by the color scale and white contours, while emission detected
by the single-pixel receiver is denoted by orange contours. Both sets of
contours begin at the 27 mJy/Beam level ($\sim$3$\sigma_{\rm meas}$ in Table
5) and continue at 5 and 10 times that level. Figure 12: Hi column density
map of the NGC4258 Field observed by FLAG (color scale and white contours)
with equivalent single-pixel data (orange contours) overlaid. The contour
levels beginning at 2$\times$1018 cm-2, which represents a 5$\sigma$ detection
over a 20 km s-1 line, and continuing at 15, 100, and 500 and 1000 times that
level. The dashed and dot-dashed rectangles denote the angular areas over
which the flux profiles shown in Figures 13 and 14 were integrated.
Figures 13 and 14 present Hi flux density profiles comparing FLAG and single-
pixel data, with the former comparing measurements from individual beams and
the latter showing profiles taken from the rectangular regions in the combined
map as denoted in Figure 12; the measured fluxes and associated Hi masses are
summarized in the row denoted $S_{\rm meas}$ under 17B_360_04 in Table 5.
Since the intensity units of Hi maps are presented in terms of surface
brightness, it is vital to have knowledge of the beam area. Unfortunately, as
demonstrated in the beam patterns shown in Figure 3, each formed beam has a
unique area. For each beam, we take the derived beam pattern and fit two
Gaussians along two orthogonal cuts along the central horizontal and vertical
axes. The beam area is then calculated from the average of these two
Gaussians. The final beam area of the combined map is taken to be the mean of
these individual beam areas; see again Table 5 for a summary of these areas.
Given the variation in early SEFD values, the relative uncertainty in the
final beam areas, possible errors in bandpass calibration, the presence of
interference, and the modeling of atmospheric effects, we adopt an overall 10%
flux uncertainty.
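The beam-area procedure described above can be sketched as follows. Since the exact averaging of "the two Gaussians" is not fully specified, this version averages the two fitted FWHMs and uses the standard Gaussian-beam solid angle $\pi/(4\ln 2)\,{\rm FWHM}^{2}$; treat it as one plausible reading, with all names illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, x0, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2)

def beam_area(pattern, pix_arcmin):
    """Beam solid angle [arcmin^2] from Gaussian fits to the central
    horizontal and vertical cuts of a 2-D beam pattern."""
    ny, nx = pattern.shape
    x = np.arange(nx) * pix_arcmin
    y = np.arange(ny) * pix_arcmin
    cuts = [(x, pattern[ny // 2, :]), (y, pattern[:, nx // 2])]
    fwhms = []
    for axis, cut in cuts:
        p0 = [cut.max(), axis[np.argmax(cut)], 9.0]  # ~9' initial guess
        popt, _ = curve_fit(gauss, axis, cut, p0=p0)
        fwhms.append(abs(popt[2]))
    fwhm = np.mean(fwhms)
    return np.pi / (4.0 * np.log(2.0)) * fwhm ** 2

# synthetic 9.1' circular Gaussian beam on a 0.5'/pixel grid
pix = 0.5
yy, xx = np.mgrid[0:81, 0:81] * pix
beam = gauss(np.hypot(xx - 20.0, yy - 20.0), 1.0, 0.0, 9.1)
print(f"{beam_area(beam, pix):.0f} arcmin^2")  # 94 arcmin^2 (cf. Table 5)
```

The combined-map beam area is then the mean of the seven per-beam areas.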
The profiles and total flux measurements of Beams 0-3 and Beam 6 agree very
well with the flux values from the equivalent single-pixel map. The offset in
Beam 4 and Beam 5, while still within the 10% flux uncertainty, is likely
influenced by the deviations from Gaussianity in the main lobe of these formed
beams and relatively high sidelobes. The combined maps and profiles of both
NGC4258 and HIJASS J1219+46 agree very well with their single-pixel
counterparts. The overall consistency between the FLAG and single-pixel data
of the NGC 4258 Field and detection of a very diffuse Hi cloud demonstrate the
capability of FLAG to provide equivalent and accurate spectral line maps
relative to the current single-pixel receiver on the GBT.
Figure 13: Hi flux density profiles of NGC 4258 from each FLAG beam (blue)
with equivalent single-pixel profile (orange) overlaid. These profiles were
measured by integrating over the dashed rectangular region overlaid in
Figure 12.
Figure 14: Left: Hi flux density profiles of NGC 4258 from the combined FLAG
map (blue) with equivalent single-pixel profile (orange) overlaid. Right: Hi
flux density profiles from the same maps of the faint Hi cloud, HIJASS
J1219+46. These profiles were measured by integrating over the dashed and dot-
dashed rectangular regions overlaid in Figure 12.
#### 6.3.3 Galactic Center
A recent single-pixel survey of Hi above and below the Galactic Center
undertaken by Di Teodoro et al. (2018) revealed a population of anomalous
velocity clouds expanding out in a biconic shape, which likely arises from
nuclear wind driven by the star formation activity in the inner regions of the
Milky Way. As a demonstration of FLAG’s capability to map extended Galactic
emission and characterize gas moving at anomalous velocities, we mapped a
2${}^{\circ}\times$2∘ region centered on $l$ = 353∘ and $b$ = $-$4∘ in the
Galactic coordinate system with $\nu_{0}$ set to 1449.84841 MHz.
Figure 15 presents Hi column density of structures towards the Milky Way
center that are moving at anomalous approaching and receding velocities. Once
more, the spatial distribution of the emission detected by FLAG is
sufficiently consistent with the single-pixel contours. The comparisons of Hi
spatial extent clearly highlight FLAG’s ability to characterize both the
diffuse Hi associated with extragalactic sources and the complex kinematic
properties of anomalous velocity clouds in and around the Milky Way.
Figure 15: Hi column density maps of the FLAG (color scale and white
contours) with equivalent single-pixel data (orange contours) overlaid for the
Galactic Center observations; left: Hi map derived by integrating over
approaching LSR velocities (see text). The contours begin at a level of
6$\times$1018 cm-2 and continue at 5 and 10 times that level; right: Hi map
derived by integrating over receding LSR velocities with the same contour
levels.
#### 6.3.4 Discrepancies and Improvements
Figure 16: The beam patterns from the boresight FLAG beam (left) and outer
Beam 2 (right). The white contours denote the response from a model of the
single-pixel beam. Contours are at levels of 0.001, 0.01, 0.1, and 0.5 times
the peak response. The outer beam is shown to highlight the highly peaked
sidelobe that overlaps near the peak of the boresight.
Figures 9-15 demonstrate broad agreement with previous single-pixel
observations. However, there are notable discrepancies between FLAG and
single-pixel contours that are at the same absolute flux density and column
density levels. There are several possible sources for such discrepancies
including stray radiation from the complex beam shapes, differences in
sensitivity between maps, and a flux offset between FLAG and single-pixel
data.
Figure 16 shows the beam patterns for the boresight (Beam 0) and Beam 2
derived from the calibration grid from session GBT17B_360_04 with overlaid
contours from a model of the GBT single-pixel L-Band beam shown in Figure 1 of
Pingel et al. (2018). There is excellent agreement between the single-pixel
beam model and Beam 0 from the FWHM response level extending down to the
first sidelobe at the 0.1% level. The sidelobe structure is highly asymmetric
in both FLAG beams, with the sidelobe in the outer Beam 2 peaking an order of
magnitude higher than that of the boresight beam; also note that this
sidelobe overlaps almost directly with the peak of the boresight response.
Given that the dynamic range of our observations is typically on the
order of several hundred, it is feasible that such complex beam shapes,
especially in the final combined FLAG maps, where the beam responses are
effectively averaged together, will affect the observed morphology of diffuse
structures.
Figure 17: Hi column density map of NGC 6946 after convolving the FLAG data
with a model of the GBT L-Band single-pixel beam (color scale and white
contours) with contours from the single-pixel data overlaid after a similar
convolution with the FLAG boresight beam pattern. The convolution ensures that
both maps have effectively equal responses to the observed sky brightness
distribution. The contours are at the same levels listed in the caption from
Figure 10.
To test the degree to which the complex sidelobe structure in the formed FLAG
beams contributes to the discrepancies in the flux density contours, we convolve the
FLAG map made with the boresight beam of NGC 6946 with a single-pixel beam
model re-gridded to a common pixel grid. Likewise, the single-pixel map is
convolved with the FLAG boresight beam pattern; the resulting column density
map shown in Figure 17 now shows the same sky brightness distribution
convolved with the same response. The apparent bridge of material that now
connects NGC 6946 with its companions is due to the degraded angular
resolution from convolution with both beams. The contours to the south are in
better agreement with deviations on the scale of a single pixel, confirming
that the asymmetric sidelobe patterns of the formed FLAG beams indeed
influence the morphology of diffuse emission by a non-negligible amount. The
larger discrepancies towards the north and around the unresolved companions
can be attributed to the order of magnitude difference in sensitivity between
the FLAG and single-pixel map, which detects an appreciable amount of diffuse
Hi below a column density level of 1$\times$1019 cm-2 — including a
conspicuous Hi plume — that likely influences these northern contours (Pisano,
2014). Differences between the overall flux scale, which has since been
addressed with improvements to the overall stability of the system, can also
cause such discrepancies.
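The cross-convolution test can be sketched compactly: convolving each map with the other instrument's beam leaves both with the same effective response, since convolution commutes. The beams, maps, and helper names below are synthetic stand-ins, not the actual FLAG or single-pixel data.

```python
import numpy as np
from scipy.signal import fftconvolve

def to_common_response(map_a, beam_b, map_b, beam_a):
    """Convolve each map with the *other* instrument's beam so both
    trace the sky seen through the same combined response
    (beam_a convolved with beam_b). Beams are normalized to unit sum
    so surface brightness is preserved."""
    ba = beam_a / beam_a.sum()
    bb = beam_b / beam_b.sum()
    a_common = fftconvolve(map_a, bb, mode="same")
    b_common = fftconvolve(map_b, ba, mode="same")
    return a_common, b_common

def gauss2d(n, fwhm_pix):
    s = fwhm_pix / 2.355
    y, x = np.mgrid[0:n, 0:n] - n // 2
    return np.exp(-0.5 * (x**2 + y**2) / s**2)

# toy check: a point source seen through either path gives the same map
beam_flag, beam_sp = gauss2d(65, 6.0), gauss2d(65, 8.0)
sky = np.zeros((65, 65)); sky[32, 32] = 1.0
m_flag = fftconvolve(sky, beam_flag / beam_flag.sum(), mode="same")
m_sp = fftconvolve(sky, beam_sp / beam_sp.sum(), mode="same")
a, b = to_common_response(m_flag, beam_sp, m_sp, beam_flag)
print(np.allclose(a, b, atol=1e-8))  # True
```

After this step, residual contour differences can no longer be attributed to beam-shape differences, only to sensitivity and flux-scale effects.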
There are several possible avenues to mitigate effects from the complex
sidelobe patterns, including utilizing alternative beamforming algorithms.
However, attempts to constrain the sidelobe levels of formed beams on other
radio telescopes sacrifice sensitivity at unacceptable levels. Fortunately,
the raw covariance data obtained from FLAG can be used to aid development of
new algorithms. A more traditional approach would be to apply a stray
radiation correction first developed by van Woerden (1962), demonstrated for
single-dish telescopes (e.g., Kalberla et al. 1980; Winkel et al. 2016), and
applied to multibeam systems by Kalberla et al. (2010). Such a correction
requires detailed knowledge of the sidelobes, which can easily be obtained
using a sufficiently large calibration grid. The correction can also be
considerably simplified by having a known all-sky brightness temperature
distribution. Ample archival data from the single-pixel exists to attempt such
corrections for future FLAG data.
### 6.4 Survey Speed Comparison
We now aim to quantify the performance of FLAG relative to the single-pixel
receiver and the PAFs and multi-beam receivers available on other prominent
radio telescopes. We do this through the survey speed (SS) metric.
To obtain an expression for $SS$, we first define a given surface brightness
sensitivity (in units of K) to be
$\sigma=\frac{T_{\rm sys}}{\sqrt{\Delta\nu N_{\rm p}t}},$ (16)
where $\Delta\nu$ is the width of a frequency bin, $N_{\rm p}$ is the number
of polarizations, and $t$ is the integration time necessary to reach a given
surface brightness sensitivity. Putting $T_{\rm sys}$ in terms of SEFD
(Equation 8) and absorbing the antenna gain factors gives an equivalent
expression for point source sensitivity ($\sigma_{\rm s}$ in units of Jy) that
can be rearranged to give the time necessary to reach a given point source
sensitivity
$t=\frac{1}{\Delta\nu N_{\rm p}}\left(\frac{\sigma_{\rm s}}{\rm
SEFD}\right)^{2}.$ (17)
Following Johnston & Gray (2006), the speed at which a single dish can survey
an area of sky to the necessary sensitivity limit is the ratio of its inherent
FoV to $t$ or
$SS={\rm FoV}\Delta\nu N_{\rm p}\left(\frac{\sigma_{\rm s}}{\rm
SEFD}\right)^{2},$ (18)
where the FoV is measured in square degrees. In the case of FLAG, we define
the FoV to be the area of sky over which the sensitivity map (see again Figure
2) remains above a $-$3 dB drop off relative to peak response. The average FoV
measured from all available calibration grids is 0.144 deg2.
We employ the Source Finding Application (SoFiA; Serra et al. 2015b) software
package to measure the noise in the FLAG cubes and compare with similar data
from the single-pixel receiver. We utilize the feature in which the rms is
estimated from a Gaussian fit to the negative half of the histogram of pixel
values. The histogram is constructed using only emission-free channels to
avoid spectral channels whose reference spectra have been contaminated by
Milky Way emission during calibration. Table 5 lists the measured noise
returned by SoFiA for the cubes produced for each individually formed beam,
the combined beam cube, and the single-pixel cube. The measured noise in the
combined beam cubes generally scales as the reciprocal of the square root of
the number of beams, as expected for pure Gaussian noise. The beam-to-beam
variation in SEFD values also influences the final noise floor in the combined
cubes. Because the available single-pixel cubes are generally more sensitive
than the FLAG commissioning maps, a straight calculation of the $SS$ metric
using the measured noise properties will give a convoluted comparison. To
ensure a normalized comparison, we use the measured single-pixel noise while
taking the FLAG SEFD values available in Table 4 to compute Equation 18. The
quoted uncertainties are propagated from the SEFD uncertainties. For all
observing sessions, FLAG possesses a higher $SS$ in the final combined maps,
largely aided by the increase in FoV.
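SoFiA's negative-half noise estimate exploits the fact that real emission is positive, so the negative half of the pixel histogram is (nearly) pure noise. The sketch below uses the simplest moment-based version of that idea rather than SoFiA's actual Gaussian fit to the negative half of the histogram; the numbers are synthetic.

```python
import numpy as np

def rms_from_negative(data):
    """Estimate the noise rms from only the negative pixel values,
    assuming zero-mean Gaussian noise; real emission is positive
    and so contaminates only the positive half of the histogram.
    Moment-based simplification of SoFiA's histogram fit."""
    neg = np.asarray(data).ravel()
    neg = neg[neg < 0.0]
    # for zero-mean Gaussian noise, E[x^2 | x < 0] = sigma^2
    return np.sqrt(np.mean(neg ** 2))

rng = np.random.default_rng(3)
noise = rng.normal(0.0, 0.015, 500_000)   # 15 mJy/beam noise, illustrative
cube = noise.copy()
cube[:5_000] += 0.5                       # bright 'emission' pixels
print(f"{rms_from_negative(cube) * 1e3:.1f} mJy")  # close to the injected 15 mJy
```

Restricting the histogram to emission-free channels, as done above for the reference-contaminated spectra, serves the same purpose of keeping real signal out of the estimate.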
As a broader comparison, we assume a desired point source sensitivity level of
5 mJy and plot the SS of FLAG, the single-pixel receiver, and several other
multi-pixel receivers and PAFs already available or planned for other major
radio telescopes as a function of angular resolution in Figure 18. When
comparing different receivers, we must make a consistent definition of the
FoV, since sensitivity maps for the other receivers are not readily available.
In these cases, we consider the field of view to be
${\rm FoV}_{\rm eff}=N_{\rm b}\Omega_{\rm b},$ (19)
where $N_{\rm b}$ is the number of beams and $\Omega_{\rm b}$ is the beam
solid angle in square degrees as measured at the FWHM. Table 6 summarizes the
parameters used in the calculation of Equation 18.
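Equations 18 and 19 can be checked numerically. The sketch below reproduces the single-pixel ${\rm FoV}_{\rm eff}$ of Table 6 (using the area of the FWHM circle, which matches the tabulated 0.018 deg$^{2}$) and the roughly order-of-magnitude SS advantage of FLAG over the single pixel; the channel width and polarization count are assumptions that cancel in the ratio.

```python
import numpy as np

def survey_speed(fov_deg2, sefd_jy, sigma_s_jy, dnu_hz=0.30318e6, n_pol=2):
    """Equation 18: SS = FoV * dnu * N_p * (sigma_s / SEFD)^2.
    Defaults assume FLAG's 0.30318 MHz coarse channels and two
    polarizations; returned units follow the inputs."""
    return fov_deg2 * dnu_hz * n_pol * (sigma_s_jy / sefd_jy) ** 2

def fov_eff(n_beams, fwhm_arcmin):
    """Equation 19 with the beam solid angle taken as the area of
    the FWHM circle, Omega_b = (pi/4) * FWHM^2, in deg^2; this
    reproduces the 0.018 deg^2 listed in Table 6 for the
    single pixel."""
    omega = np.pi / 4.0 * (fwhm_arcmin / 60.0) ** 2
    return n_beams * omega

# FLAG vs. the single pixel at a 5 mJy target sensitivity (Table 6 values)
ss_flag = survey_speed(0.144, 10.0, 5e-3)
ss_sp = survey_speed(fov_eff(1, 9.1), 9.7, 5e-3)
print(f"{ss_flag / ss_sp:.1f}")  # 7.5: roughly an order of magnitude
```

Note that FLAG's FoV here is the measured $-3$ dB sensitivity-map area (0.144 deg$^{2}$) rather than Equation 19, per the definition given above.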
ASKAP and Apertif, being PAF-equipped interferometric telescopes, possess a
distinct advantage in terms of angular resolution due to their capability to
sample large spatial frequencies. However, even when considering point-source
sensitivity, they are ultimately limited in their SS by their relatively large
SEFDs. On the other hand, the SEFDs of single dish telescopes benefit from
their large and continuous apertures but suffer in terms of angular
resolution. The SS of FLAG relative to the GBT single-pixel receiver is about
an order of magnitude higher, and the cryogenically cooled LNAs in its front
end enhance its performance to exceed all other existing PAFs, while providing
comparable resolution. Relative to multi-horn receivers, FLAG surpasses the
13-beam multibeam receiver on Parkes in terms of angular resolution and SS and
also produces comparable SS metrics to the 7-beam ALFA receiver on the now
defunct 300m Arecibo telescope. In fact, the survey capabilities of the GBT
when equipped with FLAG are only exceeded by the multibeam receiver on FAST,
the world’s largest primary reflector telescope that cannot be fully steered.
Session | Property | Beam 0 | Beam 1 | Beam 2 | Beam 3 | Beam 4 | Beam 5 | Beam 6 | Combined | single-pixel
---|---|---|---|---|---|---|---|---|---|---
16B_400_12 | | | | | | | | | |
| $\sigma_{\rm meas}$ [mJy beam-1] | 43 | 45 | 46 | 46 | 49 | 49 | 49 | 19 | 4
| SS [deg2 hr-1] | 0.38$\pm$0.05 | 0.3$\pm$0.2 | 0.3$\pm$0.2 | 0.32$\pm$0.04 | 0.29$\pm$0.04 | 0.3$\pm$0.1 | 0.3$\pm$0.1 | 1.70$\pm$0.08 | 0.78$\pm$0.05
16B_400_13 | | | | | | | | | |
| $\sigma_{\rm meas}$ [mJy beam-1] | 44 | 49 | 46 | 51 | 53 | 57 | 51 | 20 | 4
| SS [deg2 hr-1] | 0.37$\pm$0.05 | 0.3$\pm$0.2 | 0.3$\pm$0.2 | 0.32$\pm$0.04 | 0.29$\pm$0.04 | 0.3$\pm$0.1 | 0.3$\pm$0.1 | 1.70$\pm$0.08 | 0.78$\pm$0.05
17B_360_03 | | | | | | | | | |
| Scaling Factor† | 0.58 | 0.57 | 0.60 | 0.53 | 0.50 | 0.57 | 0.56 | |
| $\sigma_{\rm meas}$ [mJy beam-1] | 30 | 31 | 29 | 30 | 33 | 33 | 33 | 16 | 8
| SS [deg2 hr-1] | 1.09$\pm$0.01 | 1.01$\pm$0.02 | 1.11$\pm$0.05 | 1.02$\pm$0.04 | 0.89$\pm$0.02 | 0.97$\pm$0.02 | 0.96$\pm$0.02 | 5.56$\pm$0.02 | 3.1$\pm$0.2
17B_360_04 | | | | | | | | | |
| $\Omega$ [arcmin2] | 95 | 100 | 101 | 105 | 110 | 122 | 114 | 107 | 94
| $S_{\rm meas}$ [Jy km s-1] | 410$\pm$40 | 400$\pm$40 | 420$\pm$40 | 370$\pm$40 | 350$\pm$40 | 340$\pm$30 | 390$\pm$40 | 380$\pm$40 | 410$\pm$20
| $\sigma_{\rm meas}$ [mJy beam-1] | 15 | 15 | 15 | 15 | 16 | 15 | 15 | 8 | 8
| SS [deg2 hr-1] | 3.38$\pm$0.02 | 3.24$\pm$0.06 | 3.17$\pm$0.03 | 3.17$\pm$0.06 | 3.11$\pm$0.06 | 3.24$\pm$0.06 | 3.11$\pm$0.03 | 17.74$\pm$0.04 | 3.1$\pm$0.2
Table 5: Measured noise ($\sigma_{\rm meas}$), survey speeds ($SS$), beam area
($\Omega$), and measured flux ($S_{\rm meas}$); † represents the scaling
factor applied before combination with an associated frequency-dithered
session.
Receiver | $N_{\rm b}$ | FWHM [arcmin] | Resolution [arcmin] | FoVeff [deg2] | SEFD [Jy] | SS [deg hr-1] | Reference
---|---|---|---|---|---|---|---
FLAG | 7 | 9.1 | 9.1 | 0.144 | 10 | 6.3$\times$10-6 | This work
GBT single-pixel | 1 | 9.1 | 9.1 | 0.018 | 9.7 | 8.4$\times$10-7 | This work
Apertif | 37 | 30.0 | 0.3 | 10.500 | 330 | 4.2$\times$10-7 | Oosterloo et al. 2009
ASKAP | 36 | 60.0 | 0.2 | 46.200 | 1700 | 7.0$\times$10-8 | David McConnell (2020; private communication)
ALFA | 7 | 3.5 | 3.5 | 0.027 | 3 | 1.3$\times$10-5 | Peek et al. 2011; http://outreach.naci.edu/ao/scientist-user-portal/astronomy/recievers
ALPACA | 40 | 3.3 | 3.3 | 0.137 | 3 | 6.7$\times$10-5 | Roshi et al. 2019a
Effelsberg PAF | 36 | 7.6 | 7.6 | 0.650 | 130 | 1.7$\times$10-7 | Rajwade et al. 2019
FAST Multi-Beam | 19 | 2.9 | 2.9 | 0.014 | 0.4 | 3.8$\times$10-4 | Jiang et al. 2020
Parkes Multi-Beam | 13 | 14.5 | 14.5 | 0.86 | 25 | 6.0$\times$10-6 | Staveley-Smith et al. 1996; McClure-Griffiths et al. 2009
Parkes PAF | 17 | 13.0 | 13.0 | 0.900 | 65 | 9.4$\times$10-7 | Reynolds et al. 2017
Table 6: Survey Speed Parameters. Note that the FWHM for ASKAP and Apertif
refer to the size of a single formed primary beam, while resolution refers to
the size of a typical synthesized beam. Figure 18: Comparison of various
receiver survey speeds. The dotted lines denote different PAF receivers, while
the solid lines represent traditional multi-beam and single-pixel receivers.
## 7 Conclusions and Outlook
This work summarized the commissioning of the calibration and spectral-line
observing modes for a new beamforming back end for FLAG, a cryogenically
cooled PAF L-band receiver for the GBT. These observations represent the
culmination of several commissioning runs from 2016 to 2018 wherein the system
was incrementally tested on a diverse range of extragalactic and Galactic Hi
science targets and known calibrator sources. The main results from these
commissioning runs are:
* •
The beamforming weights derived from Calibration Grids and 7Pt-Cal scans
produce seven simultaneously formed beams optimally spaced to achieve uniform
sensitivity across the FoV. The measured beam shapes are sufficiently Gaussian
down to the 3% level of the peak response with FWHMs ranging from 8.7′ to
9.5′. The locations of the peak response for each beam are reliably
located within 5% of their intended pointing centers.
* •
The custom python package, pyFLAG, is used to apply the beamforming weights to
the raw covariance matrices to create SDFITS files that contain uncalibrated
beamformed spectra. Through several GBTIDL and GBO tools, these spectra are
flux calibrated and imaged to create SDFITS cubes for each formed beam. A beam
combined cube is produced by averaging all spectra from these individual
cubes.
* •
The overall phase of the derived complex beamforming weights varies less than
1% over timescales of $\sim$1 week, indicating the directional response to
identically coincident signals is extremely reliable. Applying stale weights
(i.e., weights from a previous observing session) to the steering vectors of a
subsequent observing session produces beams that keep their Gaussian shape
above the 50% level of the peak response, but degrades the side-lobe
structure, sensitivity, and shifts the peak response away from the intended
pointing center. An observer should at least perform a 7Pt-Cal scan to derive
contemporaneous weights. In the future, the word-lock calibration procedure
will ensure the phase response of a previous set of weights applies to the
current state of the system. Weights can then be reused without deterioration
of sensitivity or overall beam shape.
* •
The measures of sensitivity across the entire 150 MHz bandpass show steady
improvement over our commissioning runs. Likewise, the measured SEFDs used to
scale spectra to the correct flux scale converged towards the single-pixel
value in later sessions. These gains are the result of improvements in
our calibration strategies to obtain and maintain bit and byte-lock, which
ensure that the serialized complex voltage samples streaming from the front end
over optical fiber are correctly decoded for downstream processing in the back
end.
* •
The observed Hi science targets were chosen to incrementally test the spectral
line mapping capabilities of FLAG. The map of NGC 6946 compares well with
equivalent single-pixel data. The Hi flux density profiles of sources within
the NGC 4258 field are also well-matched to equivalent single-pixel data and
demonstrate accurate measurements of the shape of the FLAG beams. The
detection of the diffuse Hi cloud, HIJASS J1219+46, and emission at anomalous
velocities towards the Galactic Center shows that FLAG is able to reproduce a
wide-range of Hi properties observed in and around extragalactic sources and
Galactic regions.
* •
The relatively high sidelobes inherent to maxSNR beamforming do affect the
overall morphology of low-level emission. Correcting for stray radiation using
proven techniques can mitigate these effects in future observations.
* •
The compromise between survey speed and angular resolution when compared
between FLAG, the current GBT single-pixel receiver, and other multi-beam and
PAF receivers available or planned for the world’s major radio telescopes is
only matched by those with much larger apertures that are not fully steerable.
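As an aside on the maxSNR beamforming mentioned above: the weights that maximize signal-to-noise are the dominant generalized eigenvector of the (signal, noise) covariance pair. The sketch below is illustrative only and is not the FLAG implementation; the element count, covariance matrices, and steering vector are all synthetic.

```python
import numpy as np
from scipy.linalg import eigh

def max_snr_weights(R_on, R_off):
    """Max-SNR beamforming weights from element covariance matrices.

    R_on  : covariance measured on an on-source (calibration grid) pointing
    R_off : covariance from a nearby off-source pointing (noise estimate)
    """
    R_sig = R_on - R_off              # isolate the source contribution
    vals, vecs = eigh(R_sig, R_off)   # solves R_sig w = lambda R_off w
    w = vecs[:, -1]                   # eigenvector with the largest SNR
    return w / np.linalg.norm(w)

# Toy example: 19 elements, one point source along steering vector a
rng = np.random.default_rng(0)
n = 19
a = np.exp(2j * np.pi * rng.random(n))        # element phases toward source
R_off = np.eye(n)                             # idealized uncorrelated noise
R_on = R_off + 5.0 * np.outer(a, a.conj())    # noise plus rank-1 source term
w = max_snr_weights(R_on, R_off)
# The recovered weights align with the steering vector (up to a global phase)
print(abs(np.vdot(w, a)) / np.linalg.norm(a))  # close to 1
```

Because maxSNR weights optimize sensitivity alone, the off-axis response is unconstrained, which is the origin of the relatively high sidelobes noted above.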
Overall, the new beamforming back end for FLAG performed exceptionally well in
terms of the derivation of stable beamforming weights and generally reproduces
equivalent observations from the current single-pixel receiver. There are
several possible avenues of improvement, including correcting for stray
radiation. The increase in survey speed provided by FLAG and its upgraded
backend, coupled with the sky coverage available only from a fully steerable
dish, will ensure that the GBT remains a premier instrument for radio
astrophysics.
The authors wish to thank Richard Prestage for leading the organizational
efforts during these commissioning observations and for significant
contributions to the field of radio astronomy. We also thank the anonymous
referee whose comments greatly improved the quality of this work. We
acknowledge the significant funding for the FLAG receiver provided by GBO and
NRAO. The Green Bank Observatory is a major facility supported by the National
Science Foundation and operated under cooperative agreement by Associated
Universities, Inc. The National Radio Astronomy Observatory is a facility of
the National Science Foundation operated under cooperative agreement by
Associated Universities, Inc. NMP, KMR, DRL, DA, DJP, and MAM acknowledge
partial support from National Science Foundation grant AST-1309815. KMR
acknowledges funding from the European Research Council (ERC) under the
European Union’s Horizon 2020 research and innovation programme (grant
agreement No 694745). This material is based upon the work supported by
National Science Foundation Grant No. 1309832. This research made use of Astropy (http://www.astropy.org), a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013, 2018).
## References
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
* Barnes et al. (2001) Barnes, D. G., Staveley-Smith, L., de Blok, W. J. G., et al. 2001, MNRAS, 322, 486, doi: 10.1046/j.1365-8711.2001.04102.x
* Boomsma et al. (2008) Boomsma, R., Oosterloo, T. A., Fraternali, F., van der Hulst, J. M., & Sancisi, R. 2008, A&A, 490, 555, doi: 10.1051/0004-6361:200810120
* Boothroyd et al. (2011) Boothroyd, A. I., Blagrave, K., Lockman, F. J., et al. 2011, A&A, 536, A81, doi: 10.1051/0004-6361/201117656
* Burnett (2017) Burnett, M. C. 2017, Master’s thesis, Brigham Young University
* de Vaucouleurs (1975) de Vaucouleurs, G. 1975, Nearby Groups of Galaxies, ed. A. Sandage, M. Sandage, & J. Kristian (the University of Chicago Press), 557
* Di Teodoro et al. (2018) Di Teodoro, E. M., McClure-Griffiths, N. M., Lockman, F. J., et al. 2018, ApJ, 855, 33, doi: 10.3847/1538-4357/aaad6a
* Diao (2017) Diao, J. 2017, Ph.d. dissertation, Brigham Young University
* Dickey et al. (2013) Dickey, J. M., McClure-Griffiths, N., Gibson, S. J., et al. 2013, PASA, 30, e003, doi: 10.1017/pasa.2012.003
* Elagali et al. (2019) Elagali, A., Staveley-Smith, L., Rhee, J., et al. 2019, MNRAS, 487, 2797, doi: 10.1093/mnras/stz1448
* Elmer et al. (2012) Elmer, M., Jeffs, B. D., Warnick, K. F., Fisher, J. R., & Norrod, R. D. 2012, IEEE Transactions on Antennas and Propagation, 60, 903, doi: 10.1109/TAP.2011.2173143
* Fisher & Bradley (2000) Fisher, J. R., & Bradley, R. F. 2000, in Astronomical Society of the Pacific Conference Series, Vol. 217, Imaging at Radio through Submillimeter Wavelengths, ed. J. G. Mangum & S. J. E. Radford, 11
* For et al. (2019) For, B. Q., Staveley-Smith, L., Westmeier, T., et al. 2019, MNRAS, 489, 5723, doi: 10.1093/mnras/stz2501
* Heald et al. (2011) Heald, G., Józsa, G., Serra, P., et al. 2011, A&A, 526, A118, doi: 10.1051/0004-6361/201015938
* Jeffs et al. (2008) Jeffs, B. D., Warnick, K. F., Landon, J., et al. 2008, IEEE Journal of Selected Topics in Signal Processing, 2, 635, doi: 10.1109/JSTSP.2008.2005023
* Jiang et al. (2020) Jiang, P., Tang, N.-Y., Hou, L.-G., et al. 2020, arXiv e-prints, arXiv:2002.01786. https://arxiv.org/abs/2002.01786
* Johnston & Gray (2006) Johnston, S., & Gray, A. 2006, SKA Memo Series, 2
* Kalberla et al. (1980) Kalberla, P. M. W., Mebold, U., & Reich, W. 1980, A&A, 82, 275
* Kalberla et al. (2010) Kalberla, P. M. W., McClure-Griffiths, N. M., Pisano, D. J., et al. 2010, A&A, 521, A17, doi: 10.1051/0004-6361/200913979
* Kleiner et al. (2019) Kleiner, D., Koribalski, B. S., Serra, P., et al. 2019, MNRAS, 488, 5352, doi: 10.1093/mnras/stz2063
* Landon et al. (2010) Landon, J., Elmer, M., Waldron, J., et al. 2010, AJ, 139, 1154, doi: 10.1088/0004-6256/139/3/1154
* Lee-Waddell et al. (2019) Lee-Waddell, K., Koribalski, B. S., Westmeier, T., et al. 2019, MNRAS, doi: 10.1093/mnras/stz017
* Liang et al. (2007) Liang, Y. C., Hu, J. Y., Liu, F. S., & Liu, Z. T. 2007, AJ, 134, 759, doi: 10.1086/519957
* Mangum et al. (2007) Mangum, J. G., Emerson, D. T., & Greisen, E. W. 2007, A&A, 474, 679, doi: 10.1051/0004-6361:20077811
* McClure-Griffiths et al. (2009) McClure-Griffiths, N. M., Pisano, D. J., Calabretta, M. R., et al. 2009, ApJS, 181, 398, doi: 10.1088/0067-0049/181/2/398
* McClure-Griffiths et al. (2018) McClure-Griffiths, N. M., Dénes, H., Dickey, J. M., et al. 2018, Nature Astronomy, 2, 901, doi: 10.1038/s41550-018-0608-8
* Milligan (2005) Milligan, T. A. 2005, Modern Antenna Design (Hoboken, New Jersey, USA: John Wiley & Sons, Inc, 2005)
* Morgan et al. (2013) Morgan, M. A., Fisher, J. R., & Castro, J. J. 2013, Publications of the Astronomical Society of the Pacific, 125, 695
* Moss et al. (2013) Moss, V. A., McClure-Griffiths, N. M., Murphy, T., et al. 2013, ApJS, 209, 12, doi: 10.1088/0067-0049/209/1/12
* Oosterloo et al. (2009) Oosterloo, T., Verheijen, M. A. W., van Cappellen, W., et al. 2009, in Wide Field Astronomy Technology for the Square Kilometre Array, 70. https://arxiv.org/abs/0912.0093
* Parsons et al. (2006) Parsons, A., Backer, D., Chang, C., et al. 2006, in Signals, Systems and Computers, 2006. ACSSC’06. Fortieth Asilomar Conference on, IEEE, 2031–2035
* Peek et al. (2011) Peek, J. E. G., Heiles, C., Douglas, K. A., et al. 2011, ApJS, 194, 20, doi: 10.1088/0067-0049/194/2/20
* Perley & Butler (2017) Perley, R. A., & Butler, B. J. 2017, ApJS, 230, 7, doi: 10.3847/1538-4365/aa6df9
* Pingel et al. (2018) Pingel, N. M., Pisano, D. J., Heald, G., et al. 2018, ApJ, 865, 36, doi: 10.3847/1538-4357/aad816
* Pisano (2014) Pisano, D. J. 2014, AJ, 147, 48, doi: 10.1088/0004-6256/147/3/48
* Rajwade et al. (2019) Rajwade, K. M., Agarwal, D., Lorimer, D. R., et al. 2019, MNRAS, 489, 1709, doi: 10.1093/mnras/stz2207
* Reynolds et al. (2017) Reynolds, T. N., Staveley-Smith, L., Rhee, J., et al. 2017, PASA, 34, e051, doi: 10.1017/pasa.2017.45
* Reynolds et al. (2019) Reynolds, T. N., Westmeier, T., Staveley-Smith, L., et al. 2019, MNRAS, 482, 3591, doi: 10.1093/mnras/sty2930
* Roshi et al. (2019a) Roshi, A., Anderson, L. D., Araya, E., et al. 2019a, in BAAS, Vol. 51, 244. https://arxiv.org/abs/1907.06052
* Roshi et al. (2019b) Roshi, D. A., Shillue, W., & Fisher, J. R. 2019b, IEEE Transactions on Antennas and Propagation, 67, 3011, doi: 10.1109/TAP.2019.2899046
* Roshi et al. (2018) Roshi, D. A., Shillue, W., Simon, B., et al. 2018, AJ, 155, 202, doi: 10.3847/1538-3881/aab965
* Ruzindana (2017) Ruzindana, M. W. 2017, Master’s thesis, Brigham Young University
* Serra et al. (2015a) Serra, P., Koribalski, B., Kilborn, V., et al. 2015a, MNRAS, 452, 2680, doi: 10.1093/mnras/stv1326
* Serra et al. (2015b) Serra, P., Westmeier, T., Giese, N., et al. 2015b, MNRAS, 448, 1922, doi: 10.1093/mnras/stv079
* Staveley-Smith et al. (1996) Staveley-Smith, L., Wilson, W. E., Bird, T. S., et al. 1996, PASA, 13, 243
* van Woerden (1962) van Woerden, H. 1962, De neutrale waterstof in Orion
* Winkel et al. (2016) Winkel, B., Kerp, J., Flöer, L., et al. 2016, A&A, 585, A41, doi: 10.1051/0004-6361/201527007
* Wolfinger et al. (2013) Wolfinger, K., Kilborn, V. A., Koribalski, B. S., et al. 2013, MNRAS, 428, 1790, doi: 10.1093/mnras/sts160
1: INAF – Osservatorio Astronomico di Cagliari, Via della Scienza 5, 09047 Selargius, CA, Italy (email: <EMAIL_ADDRESS>)
2: University of Oulu, Space physics and astronomy unit, Pentti Kaiteran katu 1, 90014, Oulu, Finland
3: Institute of Astronomy, Graduate School of Science, The University of Tokyo, 2–21–1 Osawa, Mitaka, Tokyo 181–0015, Japan
4: Kapteyn Astronomical Institute, University of Groningen, PO Box 800, 9700 AV Groningen, The Netherlands
5: INAF – Astronomical Observatory of Capodimonte, Via Moiariello 16, Naples 80131, Italy
6: Netherlands Institute for Radio Astronomy (ASTRON), Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, the Netherlands
7: Department of Astronomy, Univ. of Cape Town, Private Bag X3, Rondebosch 7701, South Africa
8: Inter-University Institute for Data Intensive Astronomy, University of Cape Town, Cape Town, Western Cape, 7700, South Africa
9: South African Radio Astronomy Observatory, 2 Fir Street, Black River Park, Observatory, Cape Town, 7925, South Africa
10: Department of Physics and Electronics, Rhodes University, PO Box 94, Makhanda, 6140, South Africa
11: Argelander-Institut für Astronomie, Auf dem Hügel 71, D-53121 Bonn, Germany
12: Ruhr University Bochum, Faculty of Physics and Astronomy, Astronomical Institute, 44780 Bochum, Germany
13: Dipartimento di Fisica, Università di Cagliari, Cittadella Universitaria, 09042 Monserrato, Italy
14: Centre for Space Research, North-West University, Potchefstroom 2520, South Africa
15: Department of Physics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa
16: INAF – Istituto di Radioastronomia, via Gobetti 101, I-40129 Bologna, Italy
# A MeerKAT view of pre-processing in the Fornax A group
D. Kleiner 11 P. Serra 11 F. M. Maccagni 11 A. Venhola 22 K. Morokuma-Matsui
33 R. Peletier 44 E. Iodice 55 M. A. Raj 55 W. J. G. de Blok 667744 A. Comrie
88 G. I. G. Józsa 9910101111 P. Kamphuis 1212 A. Loni 111313 S. I. Loubser
1414 D. Cs. Molnár 11 S. S. Passmoor 99 M. Ramatsoku 101011 A. Sivitilli 77 O.
Smirnov 101099 K. Thorat 151588 F. Vitello 1616
(Received 13 November, 2020; accepted 25 January, 2021)
We present MeerKAT neutral hydrogen (H i) observations of the Fornax A group,
which is likely falling into the Fornax cluster for the first time. Our H i
image is sensitive to 1.4 $\times$ 1019 atoms cm-2 over 44.1 km s-1, where we
detect H i in 10 galaxies and a total of (1.12 $\pm$ 0.02) $\times$ 109 M⊙ of
H i in the intra-group medium (IGM). We search for signs of pre-processing in
the 12 group galaxies with confirmed optical redshifts that reside within the
sensitivity limit of our H i image. There are 9 galaxies that show evidence of
pre-processing and we classify each galaxy into their respective pre-
processing category, according to their H i morphology and gas (atomic and
molecular) scaling relations. Galaxies that have not yet experienced pre-
processing have extended H i discs and a high H i content with a H2-to-H i
ratio that is an order of magnitude lower than the median for their stellar
mass. Galaxies that are currently being pre-processed display H i tails,
truncated H i discs with typical gas fractions, and H2-to-H i ratios. Galaxies
in the advanced stages of pre-processing are the most H i deficient. If there
is any H i, they have lost their outer H i disc and efficiently converted
their H i to H2, resulting in H2-to-H i ratios that are an order of magnitude
higher than the median for their stellar mass. The central, massive galaxy in
our group (NGC 1316) underwent a 10:1 merger $\sim$ 2 Gyr ago and ejected 6.6
– 11.2 $\times$ 108 M⊙ of H i, which we detect as clouds and streams in the
IGM, some of which form coherent structures up to $\sim$ 220 kpc in length. We
also detect giant ($\sim$ 100 kpc) ionised hydrogen (H$\alpha$) filaments in
the IGM, likely from cool gas being removed (and subsequently ionised) from an
in-falling satellite. The H$\alpha$ filaments are situated within the hot halo
of NGC 1316 and there are localised regions that contain H i. We speculate
that the H$\alpha$ and multiphase gas is supported by magnetic pressure
(possibly assisted by the NGC 1316 AGN), such that the hot gas can condense
and form H i that survives in the hot halo for cosmological timescales.
###### Key Words.:
Galaxies: groups: general – galaxies: groups: individual: Fornax A – galaxies:
evolution – galaxies: interactions – galaxies: ISM – radio lines: galaxies
## 1 Introduction
Our current understanding of galaxy formation and evolution is that secular
processes and galaxy environment fundamentally shape the properties of
galaxies (e.g. Baldry et al., 2004; Balogh et al., 2004; Bell et al., 2004;
Peng et al., 2010; Driver et al., 2011; Schawinski et al., 2014; Davies et
al., 2019). In the local Universe (z $\sim$ 0) up to $\sim$ 50% of galaxies
reside in groups (Eke et al., 2004; Robotham et al., 2011), making it
essential to understand the group environment in the context of galaxy
evolution.
While there is no precise definition of a galaxy group, it generally contains
3 – 102 galaxies in a dark matter (DM) halo of 1012 – 1014 M⊙ (e.g. Catinella
et al., 2013). As the galaxy number density and DM halo mass of groups span a
wide range, there is no dominant transformation mechanism that galaxies are
subjected to, but rather multiple secular and external mechanisms working
together. The properties of group galaxies appear to correlate with group halo
mass and virial radius, implying that quenching paths in groups are different
from those in clusters (Weinmann et al., 2006; Haines et al., 2007; Wetzel et
al., 2012; Woo et al., 2013; Haines et al., 2015).
As galaxies fall towards clusters, there is sufficient time for external (i.e.
environmentally driven, such as tidal and hydro-dynamical) mechanisms to
transform and even quench the galaxies, prior to reaching the cluster (e.g.
Porter et al., 2008; Haines et al., 2013, 2015; Bianconi et al., 2018; Fossati
et al., 2019; Seth & Raychaudhury, 2020). This is called “pre-processing” and
refers to the accelerated, non-secular evolution of galaxies that occurs prior
to entering a cluster. As pre-processing requires external mechanisms to
transform the galaxies, this evolution commonly occurs in groups, where it is
generally thought that group galaxies follow a different evolutionary path
compared to galaxies of the same mass in the field (e.g. Fujita, 2004;
Mahajan, 2013; Roberts & Parker, 2017; Cluver et al., 2020). In particular,
pre-processing is likely to be most efficient in massive ($>$ 1010.5 M⊙)
galaxies residing in massive (1013 – 1014 M⊙) groups (Donnari et al., 2020).
It has also been shown that pre-processing is responsible for the decrease in
star formation activity for late-type galaxies at distances between 1 and 3
cluster virial radii (e.g. Lewis et al., 2002; Gómez et al., 2003; Verdugo et
al., 2008; Mahajan et al., 2012; Haines et al., 2015).
Neutral hydrogen in the atomic form (H i) is ideal for tracing tidal and
hydro-dynamical processes in galaxies and the intra-group medium (IGM). H i is
the main component of the interstellar medium (ISM) and can show the effects
of ram pressure, viscous and turbulent stripping, thermal heating (e.g. Cowie
& McKee, 1977; Nulsen, 1982; Chung et al., 2007; Rasmussen et al., 2008; Chung
et al., 2009; Steinhauser et al., 2016; Ramatsoku et al., 2020), and moderate
and strong tidal interactions (e.g. Koribalski, 2012; de Blok et al., 2018;
Kleiner et al., 2019), long before these mechanism can be identified in the
stars.
In this paper we present a detailed analysis of the Fornax A galaxy group
based on H i and ancillary observations. The Fornax A group is an excellent
candidate to search for pre-processing signatures as it is likely in-falling
into the (low mass – 5 $\times$ 1013 M⊙) Fornax cluster (Drinkwater et al.,
2001) for the first time. The group galaxies span a variety of stellar masses
and morphological types, implying that tidal and hydro-dynamical interactions
are likely to affect the galaxies' gas and stellar content (Raj et al., 2020).
Using Meer Karoo Array Telescope (MeerKAT) H i observations, deep optical
imaging from the Fornax Deep Survey (FDS: Iodice et al., 2016, 2017; Venhola
et al., 2018, 2019; Raj et al., 2019, 2020), wide-field H$\alpha$ imaging from
the VLT Survey Telescope (VST) and molecular gas observations from the Atacama
Large Millimetre Array (ALMA), we identify galaxies at different stages of
pre-processing following various types of interactions.
This paper is organised as follows: Section 2 describes the Fornax A group.
Section 3 describes the H i and H$\alpha$ observations and the data reduction
process used to produce our images. We present the results of our H i
measurements, H i images, and the relation to stellar and H$\alpha$ emission
in Section 4. In Section 5 we present the atomic-to-molecular gas ratios and
discuss the evidence and timescale of pre-processing in the group. Finally, we
summarise our results in Section 6. Throughout this paper we assume a
luminosity distance of 20 Mpc to the most massive galaxy (NGC 1316) in the
Fornax A group (Cantiello et al., 2013; Hatt et al., 2018) and assume all
objects in the group are at the same distance. At this distance, 1′
corresponds to 5.8 kpc.
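The adopted distance fixes the conversions used throughout the paper. A minimal sketch of the three relevant numbers (the 11.9 Jy km s-1 flux below is not a measurement: it is simply the value implied by the quoted IGM H i mass and the standard optically thin mass estimator):

```python
import numpy as np

D_MPC = 20.0  # adopted luminosity distance to NGC 1316

# Angular-to-physical scale: 1 arcmin at 20 Mpc is ~5.8 kpc
kpc_per_arcmin = D_MPC * 1e3 * np.radians(1.0 / 60.0)

# Projected distance of the group from the cluster centre: 3.6 deg -> ~1.3 Mpc
proj_mpc = 3.6 * 60.0 * kpc_per_arcmin / 1e3

# Standard optically thin H i mass estimator:
#   M_HI [Msun] = 2.356e5 * D[Mpc]^2 * integral(S dv) [Jy km/s]
def hi_mass(flux_jy_kms, d_mpc=D_MPC):
    return 2.356e5 * d_mpc**2 * flux_jy_kms

print(f"{kpc_per_arcmin:.1f} kpc/arcmin")  # 5.8 kpc/arcmin
print(f"{proj_mpc:.1f} Mpc")               # 1.3 Mpc
print(f"{hi_mass(11.9):.2e} Msun")         # ~1.12e9 Msun (the quoted IGM H i mass)
```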
## 2 The Fornax A group
The Fornax A galaxy group is the brightest group in the Fornax volume. It is
located on the cluster outskirts at a projected distance of $\sim$ 3.6 deg
(1.3 Mpc, or $\sim$ 2 $\times$ the Fornax cluster virial radius) from the
cluster centre and has a mass of 1.6 $\times$ 1013 M⊙, which is of the same
order of magnitude as the Fornax cluster (Mvir $\sim$ 5 $\times$ 1013 M⊙)
itself (Maddox et al., 2019). Within the virial radius of the group ($\sim$ 1
degree or 0.38 Mpc, as measured by Drinkwater et al., 2001), there are
approximately 70 galaxies (mostly dwarfs) that have been photometrically
identified as likely group members (Venhola et al., 2019), of which 13 have
confirmed spectroscopic redshifts (Maddox et al., 2019).
The brightest group galaxy (BGG), NGC 1316, is a peculiar early-type galaxy
with a stellar mass of 6 $\times$ 1011 M⊙ (Iodice et al., 2017). NGC 1316 is a
giant radio galaxy (Ekers et al., 1983; Fomalont et al., 1989; McKinley et
al., 2015; Maccagni et al., 2020), known merger remnant, and the brightest
galaxy in the Fornax cluster volume (even brighter than the brightest cluster
galaxy NGC 1399). There are a number of extended stellar loops and streams in
NGC 1316 that are a result of a 10:1 merger that occurred 1 – 3 Gyr ago,
between a massive early-type galaxy and a gas-rich, late-type galaxy
(Schweizer, 1980; Mackie & Fabbiano, 1998; Goudfrooij et al., 2001; Iodice et
al., 2017; Serra et al., 2019). The majority of the remaining bright ($m_{B}$ $<$ 16) galaxies are late types with stellar masses in the range 8 $<$
log(M⋆/M⊙) $<$ 10.5 (Raj et al., 2020).
There have been a variety of previous studies that have detected H i in the
Fornax A group. Horellou et al. (2001) and Serra et al. (2019) imaged the
central region of the Fornax A group in H i, where the more recent image of
Serra et al. (2019) detected NGC 1316, NGC 1317, NGC 1310, and ESO 301-IG 11,
along with four clouds at the outskirts of NGC 1316 (EELR, SH2, CN,1, and
CN,2), and two tails (TN and TS). The remaining six galaxies, which have
previously been detected, are NGC 1326, NGC 1326A, and NGC 1326B in the H i
Parkes All Sky Survey (HIPASS; Meyer et al., 2004; Koribalski et al., 2004),
NGC 1316C with the Nançay telescope (Theureau et al., 1998), FCC 35 with the
Australian Telescope Compact Array (ATCA) and the Green Bank Telescope (Putman
et al., 1998; Courtois & Tully, 2015), and FCC 46 with the ATCA (De Rijcke et
al., 2013). Within NGC 1316, H i has been resolved in the centre and
correlates with massive amounts of molecular gas (Morokuma-Matsui et al.,
2019; Serra et al., 2019). H i has also been detected in the outer stellar
halo, within the regions defined by the H$\alpha$ extended emission line
region (EELR; originally discovered by Mackie & Fabbiano, 1998), in the
southern star cluster complex (SH2; Horellou et al., 2001) and in two northern
clouds (CN,1 and CN,2) (Serra et al., 2019). Lastly, $\sim$ 6 $\times$ 108 M⊙
of H i was detected in the IGM, defined as the northern and southern tails (TN
and TS). The tails are ejected H i gas from the NGC 1316 merger and extend up
to 150 kpc from the galaxy centre (Serra et al., 2019).
The Fornax A group is an ideal system to search for pre-processing. Evidence
suggests that the group is in the early stage of assembly (Iodice et al.,
2017; Raj et al., 2020) and is located at the cluster infall distance where
pre-processing is thought to occur (Lewis et al., 2002; Gómez et al., 2003;
Verdugo et al., 2008; Mahajan et al., 2012; Haines et al., 2015). The BGG is
massive enough to experience efficient pre-processing (Donnari et al., 2020)
and Raj et al. (2020) show that there are signatures of pre-processing in the
group; six of the nine late types have an up-bending (type III) break in their
radial light profile. This indicates that the star formation may be halting in
the outer disc of galaxies, although what is driving the decline in star
formation is not yet clear.
## 3 Observations and data reduction
### 3.1 MeerKAT radio observation
Commissioned in July 2018, MeerKAT is a new radio interferometer and a
precursor for the Square Kilometre Array SKA1-MID telescope (Jonas, 2016;
Mauch et al., 2020). MeerKAT is designed to produce highly sensitive radio
continuum and H i images with good spatial and spectral resolution in a
relatively short amount of observing time. The MeerKAT Fornax Survey (MFS; PI:
P. Serra) is one of the designated Large Survey Projects (LSPs) of the MeerKAT
telescope. The MFS will observe the Fornax galaxy cluster in H i over a wide
range of environment densities, down to a column density of a few $\times$
1019 atoms cm-2 at a resolution of 1 kpc, equivalent to a H i mass limit of 5
$\times$ 105 M⊙ (Serra et al., 2016).
The Fornax A group was observed with MeerKAT in two different commissioning
observations in June 2018, which differ by the number of antennas (36 and 62,
respectively) connected to the correlator. We present the details of these
observations and of the H i cube in Table 1. The MeerKAT baselines range
between 29 m and 7.7 km and for both these observations, the SKARAB correlator
in the 4k mode was used, which consists of 4096 channels in full polarisation
in the frequency range 856-1712 MHz with a resolution of 209 kHz (equivalent
to 44.1 km s-1 for H i at the distance of the Fornax cluster).
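The quoted channel width follows directly from the correlator configuration; a quick check (assuming the full 856 MHz band is divided evenly across the 4096 channels):

```python
C_KMS = 299792.458      # speed of light, km/s
F_HI_MHZ = 1420.405752  # H i rest frequency, MHz

bandwidth_mhz = 1712.0 - 856.0  # 856 MHz of digitized bandwidth
n_channels = 4096
dnu_khz = bandwidth_mhz * 1e3 / n_channels
print(f"{dnu_khz:.0f} kHz")     # 209 kHz

# Radio-convention velocity width at z ~ 0:  dv = c * dnu / f_HI
dv_kms = C_KMS * (dnu_khz * 1e-3) / F_HI_MHZ
print(f"{dv_kms:.1f} km/s")     # 44.1 km/s
```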
The first observation (referred to as Mk-36) used 36 antennas and observed the
target for a total of 8 h. Results from this observation are presented both in
radio continuum (Maccagni et al., 2020) and in H i (Serra et al., 2019); these
papers provide a detailed description of the data reduction process. In this
work, we use the Mk-36 calibrated measurement set in combination with that
from the second observation (detailed below).
The second observation (Mk-62) used 62 antennas and observed the target for a
total of 7 h. PKS 1934-638 was observed for 20 min and used as the bandpass and flux calibrator, while PKS 0032-403 was observed for 2 min every 10 min and used as the gain calibrator.
Table 1: Observation and H i cube properties. The measurements of the H i cube RMS noise and column density (over a single channel of 44.1 km s-1) were taken at the pointing centre, and the restoring beam was taken from the centre channel.

Property | Mk-36 observation | Mk-62 observation
---|---|---
Date | 2 June 2018 | 16 June 2018
ID | 20180601-0009 | 20180615-0039
Time on target | 8 hr | 7 hr
Number of antennas | 36 | 62
Pointing centre (J2000) | 03h 22m 41.7s, -37d 12′ 30.0″
Available bandwidth | 856 - 1712 MHz
H i cube frequency range | 1402 - 1420 MHz
H i cube spectral resolution | 209 kHz (44.1 km s-1 at z $\sim$ 0)
H i cube pixel size | 6.5″
H i cube weight | robust = 0.5 and 20″ taper
H i cube RMS noise | 90 $\mu$Jy beam-1
H i cube restoring beam | 33.0″$\times$ 29.2″
3$\sigma$ H i column density | 1.4 $\times$ 1019 atoms cm-2
We used the Containerised Automated Radio Astronomical Calibration (CARACal, https://caracal.readthedocs.io; Józsa et al., 2020) pipeline to reduce the MeerKAT observations. The pipeline uses Stimela (https://github.com/SpheMakh/Stimela), which containerises different
open-source radio interferometry software in a Python framework. This makes
the pipeline both flexible and highly customisable and has been used to reduce
MeerKAT and other (e.g. Jansky Very Large Array) interferometric observations
(e.g. see Serra et al., 2019; Maccagni et al., 2020; Ramatsoku et al., 2020;
Ianjamasimanana et al., 2020).
We used CARACal to reduce the Mk-62 observation end-to-end and include the
already reduced Mk-36 observation (Serra et al., 2019; Maccagni et al., 2020)
at the spectral line imaging step. For the Mk-62 observation, we used 120
(1330 - 1450) MHz of bandwidth to ensure adequate continuum imaging and
calibration. We used 18 (1402 - 1420) MHz, which easily covers the group
volume, for the (joint) spectral line imaging.
Our choice of data reduction techniques and steps is outlined using CARACal as
follows: First, we flagged the radio frequency interference (RFI) in the calibrator data based on the Stokes Q visibilities using AOflagger (Offringa et al., 2012). Then, a time-independent, antenna-based, complex gain solution was derived for the bandpass using CASA bandpass, and the flux scale was determined with CASA gaincal. Frequency-independent, time-dependent, antenna-based complex gains were then determined using CASA gaincal.
amplitudes were scaled to bootstrap the flux scale with CASA fluxscale, and
the bandpass and complex gain solutions were applied to the target
visibilities using CASA applycal. The RFI in the target data was then flagged
based on the Stokes Q visibilities using AOflagger (Offringa et al., 2012). We
imaged and self-calibrated the continuum emission of the target with WSclean
(Offringa et al., 2014; Offringa & Smirnov, 2017) and CUBICAL (Kenyon et al.,
2018), respectively. This process was repeated two more times, in which each
self-calibration iteration was frequency-independent and solved only for the
gain phase, with a solution interval of 2 min. The final continuum model was
subtracted from the visibilities using CASA msutils. The visibilities from
both the Mk-36 and Mk-62 calibrated measurement sets were then Doppler
corrected into the barycentric rest frame using CASA mstransform. Residual
continuum emission in the combined measurement set was removed by fitting and
subtracting a second order polynomial to the real and imaginary visibility
spectra with CASA mstransform. Then, we created a H i cube by imaging the H i
emission with WSclean (Offringa et al., 2014; Offringa & Smirnov, 2017) and
made a 3D mask through source finding with the Source Finding Application
(SoFiA; Serra et al., 2015). This was then used as a clean mask to image a new
H i cube with higher image fidelity. Finally, we applied the primary beam
correction of Mauch et al. (2020) down to a level of 2%, which corrects for
the sensitivity response pattern of MeerKAT.
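The residual continuum subtraction step above (a second-order polynomial fitted to the real and imaginary visibility spectra) can be sketched with NumPy. This is an illustrative stand-in for CASA mstransform applied to a synthetic spectrum, not the pipeline code:

```python
import numpy as np

def subtract_residual_continuum(vis, freqs, order=2):
    """Fit and subtract a low-order polynomial from one complex
    visibility spectrum, separately for the real and imaginary parts.

    vis   : complex array, a visibility spectrum over frequency
    freqs : the frequency axis
    """
    x = (freqs - freqs.mean()) / freqs.std()  # condition the fit
    fit_re = np.polyval(np.polyfit(x, vis.real, order), x)
    fit_im = np.polyval(np.polyfit(x, vis.imag, order), x)
    return vis - (fit_re + 1j * fit_im)

# Toy spectrum: smooth quadratic continuum plus a narrow "line"
freqs = np.linspace(1402e6, 1420e6, 64)
cont = 1e-3 * (freqs / 1e9) ** 2 + 0.5 + 0.2j
line = np.zeros(64, complex)
line[30:34] = 0.05
residual = subtract_residual_continuum(cont + line, freqs)
print(f"{abs(residual[32]):.2f}")  # ~0.04: the narrow line survives
```

Fitting the real and imaginary parts separately keeps the operation linear in the visibilities, so a narrow spectral line passes through largely unaffected while the smooth residual continuum is removed.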
Our H i cube was imaged using an 18 MHz sub-band (centred on NGC 1316) and its basic properties are presented in Table 1. (The deep H i imaging revealed periodic artefacts caused by the correlator during this period of commissioning, which were apparent at the sky positions of bright continuum emission. We were able to remove the artefacts by excluding baselines shorter than 50 m in the cube, and shorter than 85 m for the single worst channel; while short baselines are essential for diffuse emission, this amounts to only 5 and 22 baselines out of 1891.) The root mean square (RMS) noise is 90 $\mu$Jy beam-1, which equates
to a 3$\sigma$ H i column density of 1.4 $\times$ 1019 atoms cm-2 over a
single channel of 44.1 km s-1 at the angular resolution of 33.0″$\times$
29.2″. Compared to Serra et al. (2019), we present an image that is
approximately twice as large and more than twice as sensitive and has
comparable spatial and velocity resolutions.
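The quoted sensitivity can be reproduced from the cube parameters using the standard brightness temperature and optically thin column density relations (a sketch; the coefficients are the usual radio astronomy constants and the restoring beam is approximated as Gaussian):

```python
def hi_column_density(s_mjy_beam, dv_kms, bmaj_arcsec, bmin_arcsec):
    """H i column density sensitivity in atoms cm^-2.

    Combines T_B = 1.222e3 * S[mJy/beam] / (nu_GHz^2 * bmaj * bmin)
    at nu = 1.420 GHz with N_HI = 1.823e18 * T_B * dv (optically thin).
    """
    tb = 1.222e3 * s_mjy_beam / (1.420**2 * bmaj_arcsec * bmin_arcsec)
    return 1.823e18 * tb * dv_kms

# Cube parameters from Table 1: 90 uJy/beam noise, 44.1 km/s channels,
# 33.0" x 29.2" restoring beam; quote 3 sigma over a single channel
n_hi = hi_column_density(3 * 0.090, 44.1, 33.0, 29.2)
print(f"{n_hi:.1e}")  # 1.4e+19
```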
We searched for H i sources using SoFiA outside the CARACal pipeline. To
ensure that we properly captured H i emission that is diffuse or far from the
pointing centre, we tested different combinations of smoothing kernels and
detection thresholds in the SoFiA smooth + clip algorithm, per-source
integrated signal-to-noise ratio (S/N) thresholds, and reliability thresholds.
Pixels in the H i cube are detected if their value is above a smooth + clip
detection threshold of 3.5 (in absolute value and relative to the cube noise)
for spatial smoothing kernels equal to 1, 2, and 3 times the synthesised beam
in combination with velocity smoothing kernels over a single (i.e. no
smoothing) and three channels. The mean, sum, and maximum pixel value of each
detected source (normalised to the local noise) create a parameter space that
can separate real H i emission from noise peaks (Fig. 1; Serra et al., 2012).
The reliability of each source (defined as the local ratio of positive-to-
negative source density within this 3D parameter space) as well as the
integrated S/N are then used to identify statistically significant, real H i
sources. Our catalogue was created by retaining only sources with an
integrated S/N above 4 and a reliability above 0.65. As shown in Fig. 1, this
selection is purposefully designed to be conservative, ensuring that detected
diffuse H i emission (i.e. clouds in the IGM) is clearly real emission and does
not include noise peaks.
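The smooth + clip scheme described above can be sketched in a few lines. This is an illustrative simplification (Gaussian kernels and a global MAD-based noise estimate; SoFiA itself supports other kernels and local noise measurement), not SoFiA's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_clip_mask(cube, kernels, threshold=3.5):
    """Minimal sketch of smooth + clip source finding: smooth the cube
    with each (velocity, y, x) Gaussian kernel, re-measure the noise of
    the smoothed cube, and flag voxels whose absolute value exceeds
    `threshold` times that noise. The union over all kernels forms the
    detection mask."""
    mask = np.zeros(cube.shape, dtype=bool)
    for sigma in kernels:
        sm = gaussian_filter(cube, sigma=sigma)
        # Robust noise estimate from the median absolute deviation
        noise = 1.4826 * np.median(np.abs(sm - np.median(sm)))
        mask |= np.abs(sm) > threshold * noise
    return mask
```

Smoothing at several scales is what makes the method sensitive to both compact and diffuse emission: a faint extended source that never crosses the threshold in the raw cube can do so once matched by a broad kernel.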
However, we found some real H i emission below these thresholds that should be
included in the detection mask. We thus operated on the detection mask using
the virtual reality (VR) software iDaVIE-v (Sivitilli et al. in press) from
the Institute for Data Intensive Astronomy (IDIA) Visualisation Lab (Marchetti
et al., 2020; Jarrett et al., 2020). This allowed us to use a ray marching
renderer (Comrie et al. in prep) to view and interact with our H i cube, while
making adjustments to the mask within a 3D digital immersive setting. We were
able to inspect the mask for any spurious H i emission that was included or
identify real H i emission that was missed. This was accomplished by importing
the detection mask from SoFiA, overlaying it with the H i cube in the VR
environment, and then adjusting the mask using the motion-tracking hand
controllers. As part of this process, we added two sources to the detection
mask within the VR environment by marking zones where emission was clearly
present.
The two sources added in VR were originally excluded from the detection mask
because they are below the reliability threshold of 0.65 (but above the
integrated S/N threshold of 4). These sources are deemed real because they
either coincide with emission at other wavelengths (see below) or are part of
large, coherent H i emission. Following these edits to the detection mask in
VR, we created H i intensity and velocity maps that are presented in the next
section.
Figure 1: Sum of the pixel values as a function of the mean pixel value for
all sources detected with SoFiA. The blue points indicate the positive
detections and the red points indicate the negative detections (Serra et al.,
2012). Detected H i clouds are shown as black crosses. The dotted line shows
the per-source integrated S/N of 4. Only positive sources above this threshold
and with a reliability $>$ 0.65 are retained in our final catalogue. The
chosen integrated S/N of 4 is a conservative threshold as it is closer to the area of parameter space occupied by the most statistically significant detections
(i.e. the positive sources with a high sum/noise for their mean/noise value)
and is clearly above the edge of non-statistically significant detections
(i.e. where the density of positive sources is approximately the same as the
density of negative sources). Owing to this conservative threshold, the
detected H i clouds, while often diffuse, occupy the parameter space of real,
reliable H i emission.
### 3.2 VST H$\alpha$ observation
To generate the H$\alpha$-emission images, we used a combination of H$\alpha$
narrow-band images and $r^{\prime}$ broad-band images both collected using the
OmegaCAM attached to the VST at Cerro Paranal, Chile (PID: 0102.B-0780(A)).
The OmegaCAM is a 32-CCD wide-field camera with a 1$\deg$ $\times$ 1$\deg$ field
of view and a pixel size of 0.21″. We used the NB 659 H$\alpha$ filter with 10
nm throughput, bought by Janet Drew for the VST Photometric H$\alpha$ Survey
(VPHAS; Drew et al., 2014). The imaging was done using large $\approx$ 1 deg
pointings and short 150 s and 180 s exposures in $r^{\prime}$ and H$\alpha$
bands, respectively. This strategy allows accurate sky-background removal, by subtracting averaged background models from the science exposures, and also reduces the number of imaging artefacts (such as satellite tracks) in the final mosaics, because these are averaged out when the images are stacked. The total exposure times in the $r^{\prime}$ and H$\alpha$ bands were 8250 s and 31 140 s, respectively. Similar data reduction and calibration was applied to both the $r^{\prime}$-band and H$\alpha$ images. Details of the reduction steps used are given by Venhola et al. (2018).
As the H$\alpha$ narrow-band images are sensitive both to H$\alpha$ emission
and flux coming from the continuum, we needed to subtract the continuum flux
from the H$\alpha$ images before they could be used for H$\alpha$ analysis. As
the flux in the $r^{\prime}$ band is dominated by the continuum, we use scaled
$r^{\prime}$-band images to subtract the continuum from the H$\alpha$. The
optimal scaling of the $r^{\prime}$-band image was selected by visually
determining the scaling factor that results in a clean subtraction of the
majority of stars and early-type galaxies.
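The scaled subtraction itself is simple; the delicate part is choosing the scale. The text describes visual tuning, but as an illustrative (hypothetical) alternative, the scale can be estimated by least squares over pixels assumed to contain only continuum:

```python
import numpy as np

def estimate_scale(halpha_nb, r_broad, cont_mask):
    """Least-squares scale factor over pixels assumed to be pure
    continuum (stars, early-type galaxies): an automated stand-in
    for the visual tuning described in the text."""
    a = r_broad[cont_mask].ravel()
    b = halpha_nb[cont_mask].ravel()
    return float(np.dot(a, b) / np.dot(a, a))

def continuum_subtract(halpha_nb, r_broad, scale):
    """Remove the continuum from the narrow-band image using the
    scaled broad-band image."""
    return halpha_nb - scale * r_broad
```

Either way, the output is the narrow-band image minus the scaled broad-band image, leaving (ideally) only line emission.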
However, there are some caveats in this procedure, which leave some systematic over- and under-subtraction in the H$\alpha$ images. If the seeing
conditions or point spread functions (PSFs) differ between the broad- and
narrow-band images there will be some residuals in the continuum subtracted
image. In addition to these residuals caused by the inner parts of the PSF
($\lesssim$ 5″), also the extended outer parts (see Venhola et al., 2018) and
reflection rings of the PSF may leave some features in the images. In the case
of bright, extended, and peaked galaxies such as NGC 1316, these PSF features
are also significant. As the positions of the reflection rings are dependent
on the position of the source on the focal plane they do not overlap precisely
in the narrow- and broad-band images and thus leave some systematic over- and
under-subtraction in the images. These kinds of features are apparent in the
reduced H$\alpha$ emission images.
The over- and under-subtraction artefacts dominate in and around objects with
bright stellar emission. Therefore, NGC 1316 is significantly affected to the
extent that the artefacts obscure real H$\alpha$ emission. To rectify this, we
select a sub-region that includes NGC 1316, NGC 1317, and NGC 1310 and create
a model of the background that is ultimately subtracted from the original
image.
The background model was created by masking the visible, real H$\alpha$ emission and replacing it with the local background median. The masked image is then filtered with a median filter to eliminate sharp features. Lastly, the (masked, filtered) background model is subtracted from the original image.
We repeat this process on the residual image to create an improved mask and background model, which is then subtracted from the original image. We use a conservative
approach to mask the H$\alpha$ emission, as the aim is to remove the dominant
artefacts and achieve a uniform background throughout the image. We present a
comparison of the images and additional detail in Appendix A.
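The mask-fill-filter-subtract sequence above can be sketched as follows. This is a simplified illustration (it fills holes with a single global median and a fixed filter size, where the text uses local medians), assuming only NumPy and SciPy:

```python
import numpy as np
from scipy.ndimage import median_filter

def background_model(image, emission_mask, filter_size=25):
    """Sketch of the artefact-removal procedure: mask the real emission,
    fill the holes with the median of the unmasked pixels, median-filter
    the filled image so only smooth large-scale features remain, and
    subtract that model from the original. Returns (residual, model)."""
    filled = image.copy()
    filled[emission_mask] = np.median(image[~emission_mask])
    model = median_filter(filled, size=filter_size)
    return image - model, model
```

Because the median filter rejects compact features, real emission that survives masking errors is largely preserved in the residual while smooth artefacts are absorbed into the model.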
## 4 H i distribution in the group
In Fig. 2, we present the primary beam-corrected H i column density map as
detected by MeerKAT, overlaid on a $gri$ stacked optical image from the FDS
(Iodice et al., 2016, 2017; Venhola et al., 2018). Our H i image (Fig. 2) is sensitive to a column density of N${}_{H\,\textsc{i}}$ = 1.4 $\times$ 10$^{19}$ atoms cm$^{-2}$ in the most sensitive part (the pointing centre), equating to a 3$\sigma$ H i mass lower detection limit of 1.7 $\times$ 10$^{6}$ M⊙ for a point source 100 km s$^{-1}$ wide.
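The quoted point-source limit follows from the standard relation M${}_{H\,\textsc{i}}$ = 2.356 $\times$ 10$^{5}$ D$^{2}$ S${}_{\rm int}$ (D in Mpc, S${}_{\rm int}$ in Jy km s$^{-1}$). A short sketch (the function and its noise-integration convention, summing channel noise in quadrature, are ours):

```python
import numpy as np

def hi_mass_limit(rms_jy, chan_kms, line_width_kms, dist_mpc, nsigma=3.0):
    """N-sigma H I mass limit for an unresolved (point) source, via the
    standard relation M_HI [M_sun] = 2.356e5 * D[Mpc]^2 * S_int[Jy km/s].
    The integrated-flux limit adds the per-channel noise in quadrature
    over the channels spanned by the assumed line width."""
    nchan = line_width_kms / chan_kms
    sint_limit = nsigma * rms_jy * chan_kms * np.sqrt(nchan)  # Jy km/s
    return 2.356e5 * dist_mpc**2 * sint_limit
```

With the values used in the text (90 $\mu$Jy beam$^{-1}$ RMS, 44.1 km s$^{-1}$ channels, a 100 km s$^{-1}$ line, 20 Mpc) this evaluates to approximately 1.7 $\times$ 10$^{6}$ M⊙, matching the quoted limit.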
As a result of the improved sensitivity of our image, in H i we detect 10
galaxies out of the 13 spectroscopically confirmed galaxies (Maddox et al.,
2019), all the previously known clouds and streams, and a new population of
clouds and streams in the IGM. Eleven of our H i detections (10 galaxies and
SH2) have corresponding optical redshifts (Maddox et al., 2019). NGC 1341, FCC
19, and FCC 40 are the 3 galaxies with optical redshifts in which we do not
detect any H i. NGC 1341 is a late-type (SbcII) galaxy with a stellar mass of 5.5 $\times$ 10$^{9}$ M⊙ (Raj et al., 2020), in which H i has been previously detected (Courtois & Tully, 2015). However, NGC 1341 is outside our H i image field of view and we do not include it in our sample. FCC 40 is a low surface brightness dwarf elliptical (dE4; Iodice et al., 2017) and is unlikely to contain massive amounts of H i. It is also located in a region of the image in which the sensitivity is 75% worse than at the pointing centre, such that we do not detect H i below 5.6 $\times$ 10$^{19}$ atoms cm$^{-2}$. FCC 19 is a dS0 with a stellar mass of 3.4 $\times$ 10$^{8}$ M⊙ (Iodice et al., 2017; Liu et al., 2019).
As it is near the pointing centre (70 kpc in projection from NGC 1316), we
would expect to detect H i if there were any. However, no H i is detected in
FCC 19 and we discuss the implications of this in section 5.2.
We present the three-colour (constructed using the individual $g$, $r$, and
$i$ images) FDS (Iodice et al., 2016) optical image cutout for each group
galaxy in our sample, which has been overlaid with the H i contours at their
respective column density sensitivity (or upper limit) in Fig. 3. The
integrated H i flux and mass of the H i detections and the basic properties of
the group galaxies within the H i image field of view are presented in Table
2. The velocity field is presented in Fig. 4 and highlights some new large-
scale coherent H i structures, which extend up to $\sim$ 220 kpc in length.
Figure 2: Primary beam-corrected constant H i contours from MeerKAT (blue) overlaid on a FDS (Iodice et al., 2016) $gri$ stacked optical image. The lowest contour represents the 3$\sigma$ column density level of N${}_{H\,\textsc{i}}$ = 1.4 $\times$ 10$^{19}$ atoms cm$^{-2}$ over a 44.1 km s$^{-1}$ channel, and the contours increase by factors of 3$^{n}$ ($n$ = 0, 1, 2, …). The group galaxies are labelled and the galaxies not detected in H i are outlined by a dashed black ellipse. The grey circles indicate the sensitivity of the primary beam (Mauch et al., 2020) at 50%, 10%, and 2%. The red dashed circle denotes the 1.05 degree (0.38 Mpc) virial radius of the group as adopted in Drinkwater et al. (2001). The restoring beam (33.0″ $\times$ 29.2″) is shown in the bottom left corner and a scale bar indicating 20 kpc at the distance of Fornax A in the bottom right corner. The direction to the Fornax cluster is shown by the black arrow. In H i, we detect 10 (out of 12) galaxies, the previously known clouds and streams in the IGM, and a population of new H i clouds in the IGM. The previously known IGM H i structures are labelled in Fig. 4 for clarity.

Figure 3: Optical three-colour composite of each group galaxy in our sample with overlaid H i contours. The colour image is comprised of the $g$-, $r$-, and $i$-band filters from the FDS (Iodice et al., 2016); the white dashed contour shows the most sensitive, constant column density of N${}_{H\,\textsc{i}}$ = 1.4 $\times$ 10$^{19}$ atoms cm$^{-2}$ from Fig. 2 and the blue contours start from the local column density sensitivity (i.e. 1.4 $\times$ 10$^{19}$ atoms cm$^{-2}$ scaled by the primary beam response; see the top left corner of each cutout) and increase by factors of 3$^{n}$ with $n$ = 0, 1, 2, … at each step. For non-detections, the 3$\sigma$ H i column density upper limit over a single channel is shown in red in the top left of the cutout.
The restoring beam (33.0″ $\times$ 29.2″) is shown in orange in the bottom left corner and a 5 kpc scale bar is shown in the bottom right corner.

Table 2: Basic properties of the group galaxies and H i detected sources within the H i image field of view. The primary beam-corrected integrated H i flux, mass, and upper limits are included for all sources, while the morphological type, stellar mass, and $g$ – $r$ colour are included for all the galaxies. The H i mass was calculated using a distance of 20 Mpc and the statistical uncertainty of the flux was measured and propagated to the H i mass. The 3$\sigma$ upper limits of the H i flux and mass are calculated for non-detections using the local RMS and a 100 km s$^{-1}$ wide integrated flux for a point source. All previously known sources are individually identified and the remaining H i IGM detections are summed into the remaining clouds category. The galaxy morphologies are classified in Ferguson (1989), the photometry is used to estimate the stellar mass (with the method of Taylor et al., 2011), and the $g$ – $r$ colours are measured in Raj et al. (2020) for the majority of the galaxies and in Venhola et al. (2018) for FCC 19 and FCC 40. The photometry, $g$ – $r$ colour, and stellar mass of NGC 1316 are measured independently in Iodice et al. (2017).

Source | Integrated flux | H i mass | Morphological type | Stellar mass | $g$ – $r$
---|---|---|---|---|---
| (Jy km s$^{-1}$) | (10$^{7}$ M⊙) | | (10$^{9}$ M⊙) | (mag)
NGC 1310 | 5.13 $\pm$ 0.07 | 48.1 $\pm$ 0.6 | SBcII | 4.7 | 0.6 $\pm$ 0.1
NGC 1316 | 0.72 $\pm$ 0.04 | 6.8 $\pm$ 0.4 | SAB0 | 600 | 0.72 $\pm$ 0.01
NGC 1316C | 0.18 $\pm$ 0.02 | 1.7 $\pm$ 0.2 | SdIII pec | 1.4 | 0.7 $\pm$ 0.1
NGC 1317 | 2.96 $\pm$ 0.02 | 27.8 $\pm$ 0.2 | Sa pec | 17.1 | 0.77 $\pm$ 0.02
NGC 1326 | 24.3 $\pm$ 0.5 | 228 $\pm$ 4 | SBa(r) | 29.4 | 0.62 $\pm$ 0.04
NGC 1326A | 15.2 $\pm$ 0.8 | 142 $\pm$ 8 | SBcIII | 1.7 | 0.5 $\pm$ 0.1
NGC 1326B | 49 $\pm$ 1 | 455 $\pm$ 9 | SdIII | 1.8 | 0.3 $\pm$ 0.1
ESO 301-IG 11 | 1.52 $\pm$ 0.04 | 14.3 $\pm$ 0.4 | SmIII | 2.9 | 0.57 $\pm$ 0.04
FCC 19 | $<$ 0.03 | $<$ 0.17 | dS0 | 0.18 | 0.62 $\pm$ 0.04
FCC 35 | 3.51 $\pm$ 0.09 | 33.0 $\pm$ 0.8 | SmIV | 0.17 | 0.2 $\pm$ 0.1
FCC 40 | $<$ 0.15 | $<$ 0.72 | dE4 | 0.002 | 0.61 $\pm$ 0.04
FCC 46 | 0.13 $\pm$ 0.03 | 1.2 $\pm$ 0.2 | dE4 | 0.58 | 0.46 $\pm$ 0.01
TN | 2.24 $\pm$ 0.07 | 21.0 $\pm$ 0.7 | - | - | -
TS | 4.86 $\pm$ 0.08 | 45.6 $\pm$ 0.7 | - | - | -
CN,1 | 0.75 $\pm$ 0.05 | 7.0 $\pm$ 0.5 | - | - | -
CN,2 | 0.35 $\pm$ 0.03 | 3.3 $\pm$ 0.3 | - | - | -
EELR | 0.49 $\pm$ 0.02 | 4.6 $\pm$ 0.2 | - | - | -
SH2 | 0.31 $\pm$ 0.02 | 2.9 $\pm$ 0.2 | - | - | -
Remaining clouds | 3.0 $\pm$ 0.2 | 28 $\pm$ 2 | - | - | -
Figure 4: H i velocity field, where the known galaxies and previously detected
clouds and tails in the IGM are labelled. As in Fig. 2, the two galaxies not
detected in H i are outlined by black, dashed ellipses and the direction to
the Fornax cluster is shown by the black arrow. The velocity colour bar is
centred on the systemic velocity of the BGG (NGC 1316) at 1760 km s$^{-1}$. The
grey circles indicate the sensitivity of the primary beam (Mauch et al., 2020)
at 50%, 10%, and 2%. The red dashed circle denotes the 1.05 degree (0.38 Mpc)
virial radius of the group as adopted in Drinkwater et al. (2001), where the
restoring beam (33.0″ $\times$ 29.2″) and scale bar are shown in the bottom
corners. The clouds that make up TN have a new, extended component,
effectively doubling the size compared to its original discovery in Serra et
al. (2019).
### 4.1 Newly detected H i
Our H i image is the widest and deepest interferometric image of the Fornax A
group to date. Naturally, we detect new H i sources, additional H i in known
sources and resolved H i in previously unresolved sources. All the sources are
presented in Table 2, Fig. 2, and 4. As described in Section 2, several
sources in the Fornax A group have been previously detected. The new H i
sources detected in this work are as follows: resolved H i tails associated
with FCC 35, NGC 1310, and NGC 1326; an extension of TN in the form of
additional, coherent clouds; an additional component to TS in the form of a
western cloud; and a population of clouds in the IGM (unlabelled in Fig. 4).
### 4.2 H i in galaxies
We detect H i in ten galaxies, where the H i is well resolved in eight of them
(Fig. 3). Of those, two galaxies have H i that is confined to the stellar disc, while the remaining six have H i emission that extends beyond the stellar disc. The two galaxies with unresolved H i are NGC 1316C and FCC 46.
The two well-resolved galaxies with H i confined within the stellar discs are
NGC 1316 and NGC 1317 (Fig. 3). We detected 6.8 $\times$ 10$^{7}$ M⊙ of H i in the centre of NGC 1316, 60% more than previously detected (Serra et al., 2019). The H i has complex kinematics (also seen in the
molecular gas and dust) beyond a uniformly rotating disc. The H i in NGC 1317
is sharply truncated at the boundary of the stellar disc. Given its stellar
mass and morphology, NGC 1317 is H i deficient by at least an order of
magnitude (discussed in detail in section 5.2).
There are six galaxies in the group that have extended H i discs. Three
galaxies (NGC 1326A/B and ESO 301-IG 11) have slightly extended and mostly
symmetric H i discs, while the other three galaxies (FCC 35, NGC 1326, and NGC
1310) have extended H i features that are significantly disturbed and
asymmetric (Fig. 3).
NGC 1326A and B have extended H i discs and although they overlap in
projection, they are separated by $\sim$ 800 km s$^{-1}$ in velocity. There is no H i connecting these two galaxies along the line of sight down to a column density of 2.8 $\times$ 10$^{20}$ atoms cm$^{-2}$, which is also confirmed through
visual inspection in virtual reality. Future, more sensitive data from the MFS
(Serra et al., 2016) will unambiguously show whether these galaxies are
interacting or not.
The collisional ring galaxy ESO 301-IG 11 has a slightly extended H i disc,
where the extension is in the south-east direction (away from the group
centre). As suggested by its classification, the H i is likely to have been
tidally disturbed in the collision that formed the ring.
In the three galaxies with disturbed or asymmetric H i discs (detailed below),
strong tidal interactions can be reasonably excluded as the cause, as the deep
$g$-band FDS images show no stellar emission associated with the extended H i
down to a surface brightness of 30 mag arcsec$^{-2}$. The H i tails and asymmetries
all differ in these galaxies, likely because each galaxy is affected by
different processes, such as gentle tidal interactions, ram pressure, and
accretion.
The dwarf late-type galaxy FCC 35 has a long, asymmetric (kinematically
irregular) H i tail pointing away from the group centre. The two closest
galaxies (spatially with confirmed redshifts) are NGC 1316C and FCC 46, a
dwarf late type and dwarf early type. These two galaxies have unresolved H i
and are more H i deficient than the majority of the group galaxies. Neither a
dynamical interaction between these galaxies nor a hydrodynamical mechanism
(such as ram pressure) can be ruled out as the cause of the long H i tail of
FCC 35.
NGC 1326 is a barred spiral galaxy with a ring and has clumpy, extended, and
asymmetric H i emission in the south, pointing towards the group centre. The
one-sided H i emission could be indicative of a tidal interaction. However,
this could also be an instrumental effect, as the galaxy is located very far
from the pointing centre and is subjected to a variable sensitivity response.
The southern side (where the H i tail is) is sensitive down to $\sim$ 6.1 $\times$ 10$^{19}$ atoms cm$^{-2}$, while the northern side has a lower sensitivity of $\sim$ 2.3 $\times$ 10$^{20}$ atoms cm$^{-2}$. As the tails are diffuse ($<$ 1 $\times$ 10$^{20}$ atoms cm$^{-2}$), more sensitive observations are needed to determine if NGC
1326 has extended H i emission on the northern side.
Finally, the massive late-type galaxy NGC 1310 is surrounded by H i extensions
and clouds of different velocities, which is unusual, because it is a
relatively (compared to NGC 1317) isolated galaxy, with an undisturbed optical
spiral morphology and a uniformly rotating H i disc. Despite the coarse
velocity resolution, we can determine from our observations that the majority
of the H i extensions and clouds (except for the extended component of the
disc to the south) are not rotating with the disc (Fig. 4) and cover a broad
range ($\sim$ 1450 – 1950 km s$^{-1}$) in velocity, suggesting that this may be anomalous H i gas of external origin. Future data from the MFS (Serra et
al., 2016) with better velocity resolution will clarify this point.
### 4.3 H i in the intra-group medium
We detect a total of (1.12 $\pm$ 0.02) $\times$ 10$^{9}$ M⊙ of H i in the IGM. All of the clouds previously reported in Serra et al. (2019) were detected, as well as additional H i in some of these features. We detect new clouds, the majority
residing in the north, with some forming large, contiguous 3D structures.
We searched for any association between the new H i in the IGM and stellar
emission. In particular, as more H i has been detected within the stellar halo
of NGC 1316, we checked for any correlation between the H i and known stellar
loops (Fig. 5). Overall, there is very little clear association between the H i in the NGC 1316 halo and its stellar loops. The major exceptions are TS and its newly detected cloud, as they are fully contained within the SW stellar loop. The H i in SH2 and EELR may potentially correlate with the stellar loop L1, and there are some H i clouds (e.g. CN,2) in the north that partially overlap with the stellar loop L7. Other than the examples above, the remaining H i in the IGM shows no association with stellar emission.
Figure 5: Low surface brightness (star removed) image of NGC 1316 in $g$-band,
observed with the VST (Iodice et al., 2017). The known (Schweizer, 1980;
Richtler et al., 2014; Iodice et al., 2017) stellar loops are labelled and
outlined by the dashed green lines. The H i is shown by the solid blue
contours and the previously known H i clouds are labelled. The clouds that
make up TS (including the new western H i cloud) overlap with the stellar SW
loop. There is some overlap with some H i clouds in the north (e.g. CN,2 and
the clouds to the west) and the optical loop L7. Overall, there is no
consistent correlation between the stellar loops and the distribution of H i
clouds.
We detect an extension of TN, effectively doubling its length and mass. The extension connects smoothly in velocity with the previously known emission and now extends up to $\sim$ 220 kpc from NGC 1316 (Fig. 4), the galaxy from which the H i originated (Serra et al., 2019). The clouds that make up TN now contain (2.10 $\pm$ 0.07) $\times$ 10$^{8}$ M⊙ of H i. The north and south tails contain 60% ((6.7 $\pm$ 0.1) $\times$ 10$^{8}$ M⊙) of the total IGM H i mass. The remaining clouds in the IGM mostly reside to the north of NGC 1316, with the majority of these existing over a narrow (90 km s$^{-1}$) velocity range. It is possible some
of these clouds form large coherent H i structures, although this is less clear than for TN and TS. While TN and TS originate from a single pericentric passage of the NGC 1316 merger (Serra et al., 2019), the remaining clouds in the IGM are more likely to be the remnants of satellites recently accreted onto NGC 1316, which is consistent with Iodice et al. (2017).
The clouds immediately to the north-west of NGC 1317 may be a remnant of its
outer disc. These clouds are within a projected distance of 10 kpc from NGC
1317, and the clouds and the galaxy have the same velocity. The H i-to-stellar
mass ratio of the galaxy is low by at least an order of magnitude (see below)
and these clouds alone are not enough to explain the H i deficiency. However,
these are the only clouds that show potential evidence that they originated
from NGC 1317.
All the H i in the IGM located north of the group centre (NGC 1316) and the
clouds to the south-east of ESO 301-IG 11 appear to be decoupled from the
stars. The H i in the south (SH2, TS) has stellar emission associated with it.
Additionally, there are a few H i clouds near the group centre that contain
multiphase gas.
### 4.4 Multiphase gas in the intra-group medium
In Figure 6, we show the ionized H$\alpha$ gas emission detected in the
vicinity of NGC 1316 (i.e. the group centre). H$\alpha$ is detected in NGC
1316, NGC 1317, and NGC 1310. However, the most striking features are the
H$\alpha$ complexes detected in the IGM.
Figure 6: OmegaCAM H$\alpha$ emission showing the ionised gas in the vicinity
of NGC 1316. The blue contour shows the majority of the western lobe of NGC
1316 in radio continuum at a (conservative) level of 1.3 mJy beam$^{-1}$ from Maccagni et al. (2020). The white contours show the 3$\sigma$ H i column density of 1.4 $\times$ 10$^{19}$ atoms cm$^{-2}$ (over 44.1 km s$^{-1}$) from this work.
Known sources (i.e. galaxies and IGM H i) and multiphase (MP) gas clouds that
contain H$\alpha$ and H i as well as the Ant-like feature from Fomalont et al.
(1989) are labelled. This image reveals long filaments of ionised gas in the
IGM.
There are giant filaments of H$\alpha$ in the IGM stretching between galaxies
of the group. H i is directly associated with some of the ionised gas, showing
the coexistence of multiphase gas in the IGM. These occur in EELR, CN,1, the
cloud directly below CN,1 and in five newly detected clouds containing H i
that we label MP in Fig. 6. Additionally, we detect the “Ant” (or ALF; Ant-
like feature) first detected as a depolarising feature in Fomalont et al.
(1989) and later in H$\alpha$ by Bland-Hawthorn et al. (1995). The H$\alpha$
emission is thought to provide the intervening turbulent magneto-ionic medium
required to depolarise the radio continuum emission (Fomalont et al., 1989).
There is no optical continuum emission nor any H i emission currently
associated with the Ant.
While there are a number of multiphase gas clouds in the IGM, the brightest
case is EELR. It is clear that EELR has a complex multiphase nature, with H i,
H$\alpha$, and dust all previously detected in it (Mackie & Fabbiano, 1998;
Horellou et al., 2001; Serra et al., 2019). We detect 50% more H i than the
previous study (Serra et al., 2019) and H i is only present in the region of
the bright, more ordered ionised gas morphology. Given that our H i image is sensitive to a column density of 1.4 $\times$ 10$^{19}$ atoms cm$^{-2}$, it is unlikely that there is any H i in the less ordered (and likely turbulent) part of EELR.
Currently, the origin of EELR is unclear, and we will present a detailed
analysis of it and its multiphase gas in future work.
## 5 Pre-processing in the group
The Fornax A group is at a projected distance of $\sim$ 1.3 Mpc (approximately
2 virial radii) from the Fornax Cluster centre. Redshift independent distances
are too uncertain to establish whether the group is falling into the cluster.
However, the intact spiral morphologies of group galaxies imply that the group
has not passed the cluster pericentre as spiral morphologies do not typically
survive more than one pericentric passage (e.g. Calcáneo-Roldán et al., 2000).
At this distance, the intra-cluster medium (ICM) of the Fornax cluster should
not have a significant impact on the group galaxies, meaning that quenched
galaxies are a result of pre-processing within the group.
An optical analysis of the radial light profiles of the group galaxies and the
intra-group light (IGL) concluded that the Fornax A group is in an early stage
of assembly (Raj et al., 2020). This is evident from the low level (16%) of
IGL and from the group being dominated by late types with undisturbed
morphologies and comparable stellar masses (Raj et al., 2020).
In this work, we detect H i throughout the Fornax A group both in the galaxies
and the IGM. While the galaxies range from being H i rich to extremely H i
deficient, the majority of the galaxies contain a regular amount of H i for
their stellar mass. This is consistent with the group being in an early phase
of assembly, as the majority of galaxies would be H i deficient for a group in
the advanced assembly stage. The H i detections show evidence of pre-processing in the form of (2.8 $\pm$ 0.2) $\times$ 10$^{8}$ M⊙ of H i in the IGM, H i deficient galaxies, truncated H i discs, H i tails, and asymmetries. The diversity of galaxy H i morphologies suggests that we are observing galaxies at
different stages of pre-processing, as we detail below.
### 5.1 NGC 1316 merger
The most obvious case of pre-processing in the group is NGC 1316, the BGG. It
is a peculiar early type that is the brightest galaxy in the entire Fornax
cluster volume and the result of a 10:1 merger that occurred 1 – 3 Gyr ago
between a massive early-type galaxy and a gas-rich late-type galaxy
(Schweizer, 1980; Mackie & Fabbiano, 1998; Goudfrooij et al., 2001; Iodice et
al., 2017; Serra et al., 2019). There are large stellar loops and streams, an
anomalous amount of dust and molecular gas (2 $\times$ 10$^{7}$ and 6 $\times$ 10$^{8}$ M⊙, respectively) in the centre, as well as H i in the centre and in the form
of long tails (Draine et al., 2007; Lanz et al., 2010; Galametz et al., 2012;
Morokuma-Matsui et al., 2019; Serra et al., 2019).
The H i mass budget for a 10:1 merger to produce the features observed in NGC 1316 requires the progenitor to contain $\sim$ 2 $\times$ 10$^{9}$ M⊙ of H i (Lanz et al., 2010; Serra et al., 2019). Recently, Serra et al. (2019) detected 4.3 $\times$ 10$^{7}$ M⊙ of H i in the centre of NGC 1316, overlapping with the dust and molecular gas, and a total H i mass of 7 $\times$ 10$^{8}$ M⊙ when including the tails and nearby H i clouds. While these authors detected an order of magnitude more H i than previous studies, this is a factor of $\sim$ 3 lower than expected. In this work, we detect an H i mass in the centre of (6.8 $\pm$ 0.4) $\times$ 10$^{7}$ M⊙ and a total H i mass of 0.9 – 1.2 $\times$ 10$^{9}$ M⊙ associated with NGC 1316 in the form of streams and clouds. (Footnote 4: The lower limit was determined by only including the same H i sources as Serra et al. (2019) and the TN extension, while the upper limit includes the remaining H i clouds in the IGM.) This brings the observed H i mass budget even closer to the expected value under the 10:1 lenticular + spiral merger hypothesis – within a factor of 1.7 – 2.2, which is well within the uncertainties.
Since the merger 1 – 3 Gyr ago, NGC 1316 has been accreting small satellites
(Iodice et al., 2017). The satellites may have contributed to the build-up of H i; however, we do not observe any H i correlated with dwarf galaxies within
150 kpc of NGC 1316. Any contributed H i is second order compared to the
initial merger, which is supported by the H i mass of NGC 1316 being dominated
by the tails. Tidal forces from the initial merger ejected 6.6 $\times$ 10$^{8}$ M⊙
of H i into the IGM in the TN and TS tails alone. The remaining H i in the IGM
is likely to be a combination of gas decoupled from stars in the initial
merger and gas from more recently accreted satellites. H i tidal tails spanning hundreds of kpc in galaxy groups have been shown to survive in the IGM for timescales comparable to the time since this merger took place (1 – 3 Gyr; Hess et al., 2017).
### 5.2 Pre-processing status of the group galaxies
In this section, we identify galaxies at different stages of pre-processing
according to their H i morphology and cool gas (H i and H2) ratios. The
categories are as follows: i) early, where a galaxy has yet to experience
significant pre-processing; ii) ongoing, for galaxies that currently show
signatures of pre-processing; and iii) advanced, for galaxies that have
already experienced significant pre-processing.
There are a total of 12 galaxies in the sample, which are all the
spectroscopically confirmed galaxies within the H i image field of view. In
our sample, 10 galaxies have H i detections and 2 galaxies (FCC 19 and FCC 40)
have H i upper limits (Fig. 3). There are 7 galaxies that have been observed
with ALMA. The 5 galaxies that were not observed are ESO 301-IG 11, FCC 19,
FCC 35, FCC 40, and FCC 46 (Morokuma-Matsui et al., 2019, Morokuma-Matsui et
al. in prep). We measure the molecular gas mass of the observed galaxies using
the standard Milky Way CO-to-H2 conversion factor of 4.36 M⊙
(K km s$^{-1}$ pc$^2$)$^{-1}$ (Bolatto et al., 2013), together with stellar
masses (Table 2) estimated by Raj et al. (2020) and Venhola et al. (2018),
which are derived from the $g$ and $i$ photometric relation in Taylor et al.
(2011). We remove the helium contribution from our molecular gas masses so
that we are measuring the molecular-to-atomic hydrogen gas mass ratio (except
in the total gas fraction, shown below) and can directly compare our findings
to Catinella et al. (2018).
We present the H i and H2 scaling ratios in Fig. 7. We measure the H i gas
fraction FHI $\equiv$ log(MHI/M⋆), the total gas fraction Fgas $\equiv$
log(1.3(MHI + MH2)/M⋆), where the 1.3 accounts for the helium contribution,
the molecular-to-atomic gas mass ratio Rmol $\equiv$ log(MH2/MHI), and the H2
gas fraction FH2 $\equiv$ log(MH2/M⋆). We compare the H i fraction of our
galaxies to those in the Herschel Reference Survey (HRS; Boselli et al., 2010,
2014) and the Void Galaxy Survey (VGS; Kreckel et al., 2012), which span a
comparable stellar mass range to that of our galaxies. We also compare FHI to the
median trend of the extended GALEX Arecibo SDSS Survey (xGASS; Catinella et
al., 2018). Furthermore, we compare our molecular gas scaling relations to the
median trends of xGASS-CO (Fig. 7), which are xGASS galaxies with CO
detections (Catinella et al., 2018). The xGASS and xGASS-CO trends provide a
good reference for the H i and H2 scaling relations in the local Universe: the
median FHI trend was derived from 1179 galaxies selected with 10$^9$ $<$
M⋆/M⊙ $<$ 10$^{11.5}$ and 0.01 $<$ z $<$ 0.05, and the H2 masses and scaling
relations were derived using a subset of 477 galaxies from the parent sample
that have CO detections.
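As a quick illustration, the four ratios defined above can be computed as follows; the masses are hypothetical and serve only to show the definitions.

```python
import math

# The four gas scaling ratios defined in the text, for masses in Msun.
# The example masses below are hypothetical, not measurements.

def f_hi(m_hi, m_star):
    """FHI = log(MHI / M*)"""
    return math.log10(m_hi / m_star)

def f_gas(m_hi, m_h2, m_star):
    """Fgas = log(1.3 (MHI + MH2) / M*); the 1.3 adds helium back."""
    return math.log10(1.3 * (m_hi + m_h2) / m_star)

def r_mol(m_h2, m_hi):
    """Rmol = log(MH2 / MHI)"""
    return math.log10(m_h2 / m_hi)

def f_h2(m_h2, m_star):
    """FH2 = log(MH2 / M*)"""
    return math.log10(m_h2 / m_star)

# A galaxy with MHI = 1e9, MH2 = 1e8, M* = 1e10 Msun:
print(f_hi(1e9, 1e10))    # -1.0
print(r_mol(1e8, 1e9))    # -1.0
```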
Figure 7: Atomic and molecular gas scaling ratios. In all figures, the early,
ongoing, and advanced pre-processing categories are shown as blue circles,
green squares, and red diamonds, respectively, and H2 upper limits are depicted
by arrows. Solid markers indicate H i detections and open markers are non-
detections. FCC 40 is not assigned to any pre-processing category and is shown
as the open black star. _Top left panel_ : The H i gas fraction compared to
galaxies from the HRS (Boselli et al., 2010, 2014) and VGS (Kreckel et al.,
2012) (grey points) that show the typical scatter in FHI. The orange shaded
region indicates the median trend from xGASS (Catinella et al., 2018). _Top
right panel_ : The total gas fraction of our galaxies compared to the median
xGASS-CO trend (Catinella et al., 2018) (orange shaded region). _Bottom left
panel:_ The molecular-to-atomic-gas ratio of our galaxies compared to the
median xGASS-CO trend (Catinella et al., 2018) (orange shaded region). _Bottom
right panel_ : The H2 gas fraction as a function of H i gas fraction, showing
constant ratios of 100%, 30%, 10%, and 3%. Overall, the galaxies in the early
category are H i rich, the galaxies in the ongoing category typically follow
the xGASS and xGASS-CO median scaling relations (Catinella et al., 2018),
while galaxies in the advanced category have no H i or are H i-deficient with
irregularly high H2-to-H i ratios.
The two galaxies that show no signatures of pre-processing (i.e. are in the
early phase) are NGC 1326A and NGC 1326B. They are H i rich galaxies with
typical extended H i discs and a low molecular gas content. Both galaxies were
observed with ALMA (Morokuma-Matsui et al., 2019, Morokuma-Matsui et al. in
prep), although no CO was detected, placing upper limits on the H2 mass. They
have the highest H i fraction and lowest H2-to-H i ratios given their stellar
mass (Fig. 7). The galaxies are just within the virial radius of the group,
making them furthest from the group centre in projected distance. This
increases the likelihood that the galaxies have not undergone pre-processing
yet.
The galaxies that show current signatures of pre-processing (i.e. the ongoing
category) are FCC 35, ESO 301-IG 11, NGC 1310, NGC 1316, and NGC 1326. In
general, these galaxies have H i tails or asymmetric extended H i emission,
and typical H i and H2 ratios (for the galaxies with H2 observations) that
follow the median xGASS trends in Fig. 7. The exception to this is NGC 1316. As this
galaxy is the BGG, it has a unique formation and evolution history (discussed
in section 5.1) that displays both an ongoing (e.g. tidal tails) and advanced
state (giant elliptical with a lack of H i contained in the stellar body) of
pre-processing. In this work, we include this galaxy in the ongoing category,
although the H i mass range calculated in section 5.1 reflects that it could
also be part of the advanced category.
FCC 35 is the bluest galaxy (Fig. 3 and Table 2) in the group (Raj et al.,
2020) and has extremely strong and narrow optical emission lines that classify
it as either a blue compact dwarf or an active starburst H ii galaxy (Putman
et al., 1998). Previous studies (i.e. Putman et al., 1998; Schröder et al.,
2001) detected a H i cloud associated with FCC 35 and suggested it may be a
result of a tidal interaction with the nearest (projected separation of 50
kpc) neighbour NGC 1316C. This is a plausible scenario as FCC 35 has an up-
bending (Type-III) break in the stellar radial profile, and a bluer outer
stellar disc (Raj et al., 2020), which could be tidally induced star
formation. However, the star formation could also be compression/shock induced
(Raj et al., 2020). We detect the H i cloud of FCC 35 as part of a long tail
pointing away from the group centre, making it the most likely galaxy to show
evidence of ram pressure stripping. The lower IGM (compared to the ICM)
density means that ram pressure stripping is less prevalent in groups. Despite
the observational challenges, a few cases have been reported (e.g. Westmeier
et al., 2011; Rasmussen et al., 2012; Vulcani et al., 2018; Elagali et al.,
2019) and ram pressure is thought to play an important role in the pre-
processing of galaxies in groups. FCC 35 is not H i deficient (Fig. 7),
implying that the gas has recently been displaced, similar to other galaxies
showing early signs of gas removal (e.g. Ramatsoku et al., 2020; Moretti et
al., 2020).
ESO 301-IG 11 is a collisional ring galaxy with a H i gas fraction below the
median trend, although it is not the most H i deficient galaxy for its stellar
mass. There is clear evidence of a tidal interaction in the form of irregular
optical morphology, an up-bending (Type-III) break in the stellar radial
profile and a slightly extended and asymmetric H i disc. The galaxy is blue in
colour, although the outer stellar disc is redder than the inner disc (Raj et
al., 2020), implying that the tidal interaction may have restarted star
formation in the centre.
The asymmetric H i tail of NGC 1326 is diffuse ($<$ 1 $\times$ 10$^{20}$ atoms
cm$^{-2}$) and only detected on one side of the galaxy. Our sensitivity on the
opposing side prevents us from detecting H i that diffuse, and we are
therefore unable to distinguish whether the extended H i is part of a regular
extended H i disc or a signature of pre-processing. With the current H i
content, it follows the same H i and H2 trends as the other galaxies in the
ongoing category.
The optical morphology and gas scaling relations of NGC 1310 suggest that it
is not being pre-processed. The stellar spiral structure is completely intact
(Fig. 3), ruling out strong tidal interactions and the H i gas fraction and
molecular-to-atomic gas ratios are close to the median trends. However, the H
i morphology appears complex and incoherent, with many asymmetric extensions
and nearby clouds at different velocities. It is clear that the anomalous H i
clouds and extensions are not rotating with the main H i disc (Fig. 4),
suggesting external origins. The H i extension in the north-west may be
emission from a dwarf satellite galaxy, although a spectroscopic redshift
would be required to confirm this. Given the presence of the H$\alpha$
filaments in the vicinity of NGC 1310, the remaining clouds may be a result of
hot gas cooling in the IGM (and in the hot halo of NGC 1316) and being
captured or accreted onto this galaxy.
Finally, the galaxies that are in the advanced stage of pre-processing are NGC
1316C, NGC 1317, FCC 19, and FCC 46. There is no H i detected in FCC 19, and
the other three galaxies have truncated H i discs and are H i deficient as
their FHI is more than 3$\sigma$ from the xGASS median trend (Fig. 7).
NGC 1316C and NGC 1317 have a low H i mass fraction and regular H2 mass
fraction. The total gas fraction of these galaxies is low and is driven by the
lack of H i. Hence, they have significantly more H2 than H i and a molecular-
to-atomic fraction an order of magnitude higher (the highest in our sample)
than the median trend (Fig. 7). Both galaxies show no break (Type-I) in
their stellar radial profile (Raj et al., 2020) and hence no sign of disruption
to their stellar body, while their H i is confined to the stellar disc,
implying that the outer H i disc has been removed. Ram pressure or gentle tidal
interactions are likely to be responsible for removing the outer H i disc of
these galaxies. The less dense (compared to the ICM) IGM combined with the
group potential allows galaxies to hold on to their gas more effectively than
in clusters (Seth & Raychaudhury, 2020). The retained atomic gas within the
stellar body can then be converted into molecular gas. This scenario is
consistent with the findings of the GAs Stripping Phenomena in galaxies with
MUSE (GASP; Moretti et al., 2020) project, where pre-processed galaxies in
groups (and clusters) have their outer H i removed (via ram pressure) and the
remaining H i is efficiently converted into H2. These galaxies in the advanced
stage of pre-processing with truncated H i discs and regular amounts of H2 are
similar to some galaxies in the Virgo (Cortese et al., 2010) and Fornax
cluster (Loni et al., 2021). This suggests that late-type galaxies that have
been sufficiently processed lose their outer H i disc and end up with more H2
than H i.
Despite the similarities between NGC 1316C and NGC 1317, these galaxies have
likely been pre-processed on different timescales. The stellar mass of NGC
1316C is more than an order of magnitude lower than that of NGC 1317 and
according to Raj et al. (2020), NGC 1316C only recently ($<$ 1 Gyr) became a
group member while NGC 1317 may have been a group member for up to 8 Gyr.
There is no star formation beyond the very inner ($<$ 0.5′) disc of NGC 1317
(Raj et al., 2020) and even though there is only a projected separation of
$\sim$ 50 kpc between NGC 1316 and NGC 1317, a strong tidal interaction can be
reasonably excluded due to the intact spiral structure of NGC 1317 (Richtler
et al., 2014; Iodice et al., 2017). The outer H i disc has been removed and
possibly lost to the IGM (i.e. potentially identified as the adjacent clouds
at the same velocity) as a result of gentle tidal or hydrodynamical
interactions. Alternatively, the outer disc may have been converted to other
gaseous phases on short timescales ($<$ 1 Gyr). While we are unable to
identify the exact mechanisms that are responsible for the truncated H i disc
of NGC 1317, it is evident that the galaxy has not had access to cold gas over
long timescales.
Out of all the galaxies with H i, FCC 46 is the most H i deficient given its
stellar mass. It is a dwarf elliptical with a recent star formation event and
H i was first detected as a polar ring orbiting around the optical minor axis
by De Rijcke et al. (2013). As the H i is kinematically decoupled from the
stellar body, the gas was likely accreted from an external source (De Rijcke et al.,
2013). Our measured H i mass (Table 2) is consistent with that from De Rijcke
et al. (2013), although, as a result of our sensitivity at that position, we
do not detect the diffuse H i component that shows the minor axis rotation. A
minor merger event (e.g. with a dwarf late type) is consistent with the
morphology and $\sim$ 10$^7$ M⊙ of H i found in FCC 46.
FCC 19 is a dwarf lenticular galaxy (Fig. 3) with a stellar mass of 3.4
$\times$ 10$^8$ M⊙ (Liu et al., 2019). It has a $g$ – $r$ colour of 0.58 (Iodice
et al., 2017), which is similar to the colour of NGC 1310, NGC 1326, NGC
1326A, and ESO 301-IG 11 (Table 2), which have regular H i fractions and are
likely forming stars. However, no H i is detected in FCC 19, and we measure a
3$\sigma$ FHI upper limit of $-$2.3 (Fig. 7) assuming a 100 km s$^{-1}$ line
width. FCC 19 is situated in the most sensitive part of the image, so the
non-detection implies that the galaxy genuinely contains no H i above this
limit. Being so close (70 kpc in projection) to
NGC 1316, the tidal field and hot halo of NGC 1316 are likely to have played
significant roles in removing the H i from FCC 19. The H i has likely been
stripped from the galaxy and lost to the IGM. The stripped H i may also have
been heated and prevented from cooling.
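The kind of upper limit quoted above can be sketched with the standard H i mass relation M_HI [M⊙] = 2.356 $\times$ 10$^5$ D$^2$ S_int, with D in Mpc and S_int in Jy km s$^{-1}$; the distance and noise values below are illustrative assumptions, not the actual survey numbers.

```python
import math

# Sketch of a 3-sigma H i mass upper limit for a non-detection,
# assuming a top-hat line profile of width `width_kms`. Uses the
# standard relation M_HI [Msun] = 2.356e5 * D[Mpc]^2 * S_int[Jy km/s].
# The distance and rms below are illustrative, not the survey's values.

def hi_upper_limit(rms_jy, width_kms, distance_mpc, n_sigma=3.0):
    s_int = n_sigma * rms_jy * width_kms          # Jy km/s
    return 2.356e5 * distance_mpc**2 * s_int      # Msun

D_MPC = 20.0        # assumed Fornax distance, Mpc
RMS_JY = 2.0e-4     # hypothetical per-channel rms, Jy/beam

m_lim = hi_upper_limit(RMS_JY, width_kms=100.0, distance_mpc=D_MPC)
f_hi_lim = math.log10(m_lim / 3.4e8)   # FCC 19 stellar mass from the text
print(f"M_HI < {m_lim:.1e} Msun, FHI < {f_hi_lim:.1f}")
```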
Lastly, we refrain from assigning a category to FCC 40 because we are unable
to ascertain whether the galaxy properties are a result of secular evolution
or have been influenced by pre-processing. This galaxy is a low surface
brightness (Fig. 3), low-mass (M⋆ = 2.3 $\times$ 10$^6$ M⊙) blue dwarf elliptical
(Table 2) with no H i detected. We place an upper limit on the H i mass (and H
i fraction), although it is currently unknown if galaxies of this mass,
colour, and morphology are expected to contain H i.
We show the spatial distribution of each group galaxy and their pre-processing
status in Fig. 8. The distribution shows a variety of pre-processing stages
mixed throughout the group, with no clear radial dependence. The majority of
galaxies with ongoing or advanced pre-processing lie at $<$ 0.5 of the group
virial radius, although some galaxies (i.e. FCC 46 and NGC 1326) have the same
pre-processing status and are located closer to the edge of the group, at
$>$ 0.5 of the group virial radius. At a distance of $\sim$ 2 (cluster) virial radii from the
Fornax cluster, the Fornax group is located at the distance where pre-
processing is thought to be the most efficient (Lewis et al., 2002; Gómez et
al., 2003; Verdugo et al., 2008; Mahajan et al., 2012; Haines et al., 2015).
In general, it is not clear whether the pre-processing at this infall distance
is driven by the group interacting with the cluster, or by local (e.g. tidal
and hydrodynamical) interactions within the group. In this instance, it
appears that pre-processing is driven by local interactions within the Fornax
A group for the following reasons: i) The massive, central galaxy is at least
one order of magnitude more massive (Table 2) than the satellite galaxies. ii)
This central galaxy underwent a merger 1 – 3 Gyr ago (discussed in Section
5.1). iii) The majority of galaxies close to the group centre ($<$ 0.5 of the
group virial radius) show evidence of pre-processing, while the two galaxies
(NGC 1326 A/B) closest to the Fornax cluster (and furthest from the group
centre) show no evidence of pre-processing. In addition to these points, there
are four galaxies (NGC 1310, NGC 1317, ESO 301-IG 11, and FCC 19) that
spatially overlap (in projection) with the radio lobes of NGC 1316 (Fig. 8)
and therefore may be influenced by the AGN (e.g. Johnson et al., 2015).
Figure 8: Pre-processing map of the Fornax A group. The background image shows
the 1.44 GHz MeerKAT radio continuum emission (Maccagni et al., 2020), and the
positions of the group galaxies are overlaid with the same markers as Fig. 7.
The filled markers represent H i detections, the open markers indicate H i
non-detections, where the early, ongoing, advanced, and unclassified pre-
processing categories are shown as blue circles, green squares, red diamonds,
and black stars, respectively. The red dashed circle denotes the 1.05 degree
(0.38 Mpc) virial radius of the group as adopted in Drinkwater et al. (2001).
A 20 kpc scale bar is shown in the bottom right corner and the direction to
the Fornax cluster is shown by the black arrow. There is no consistent trend
between projected position and pre-processing status, although the majority of
group galaxies show evidence of pre-processing. The extent of the NGC 1316 AGN
lobes shows that the AGN may be playing a role in the pre-processing of
neighbouring galaxies, and its magnetic field could help contain multiphase gas.
### 5.3 Gas in the IGM
The H i tails and clouds in the IGM are a direct result of galaxies having
their H i removed through hydrodynamical and tidal interactions over the past
few Gyr. As described in sections 4.3 and 5.1, the majority (if not
all) of the H i in the IGM is due to the Fornax A merger and the recent
accretion of satellites.
The amount (1.12 $\pm$ 0.2 $\times$ 10$^9$ M⊙) of detected H i in the IGM is not
enough to account for all of the missing H i in the H i deficient group
galaxies. However, the outer parts of the image are subject to a large primary
beam attenuation and some of the IGM H i may be hiding in the noise. We
estimate the amount of H i potentially missed by assuming that we detect all H
i in the IGM in the inner 0.1 deg$^2$ (primary beam response $>$ 90%) and that
the IGM H i in this area is representative of the IGM throughout the entire
group, both in terms of the amount of H i per unit area and the H i column
density distribution. Under this assumption, the primary beam attenuation
reduces the detected H i by a factor of $\sim$ 2.3, implying that we may be
missing up to $\sim$ 1.5 $\times$ 10$^9$ M⊙ of H i in the IGM. Even including
this potentially missed amount, the H i in the IGM is still not enough to
explain all the H i deficient galaxies in the group, and clearly gas exists in
other phases (i.e. H2 and H$\alpha$). Some of
the H i in the galaxies has been converted into H2, which explains why the
more advanced pre-processed galaxies that have H i, display high molecular-to-
atomic gas ratios, and there is H$\alpha$ in galaxies and the IGM.
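The back-of-envelope arithmetic behind the missed-mass estimate above is simply:

```python
# If primary beam attenuation suppresses the detectable IGM H i by a
# factor f, the missed mass is roughly (f - 1) times the detected mass.
# Both input numbers are taken from the text.

detected_igm_hi = 1.12e9    # Msun, detected H i in the IGM
attenuation_factor = 2.3    # estimated reduction factor

missed_hi = detected_igm_hi * (attenuation_factor - 1.0)
total_igm_hi = detected_igm_hi * attenuation_factor

print(f"missed up to ~{missed_hi:.1e} Msun")   # ~1.5e9 Msun, as quoted
```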
Currently, the origin of giant ionised gas filaments in the IGM is not well
understood. However, they are typically observed in high-mass groups or low-
mass clusters (e.g. halo masses $>$ 10$^{13.5}$ M⊙), for example the Virgo cluster
(Kenney et al., 2008; Boselli et al., 2018b, a; Longobardi et al., 2020) and
the Blue Infalling Group (Cortese et al., 2006; Fossati et al., 2019); see
Yagi et al. (2017) for a list of clusters that contain long ionised gas
filaments. A likely scenario is that cool gas is stripped from an in-falling
galaxy, and subsequently ionised, possibly from ionising photons originating
from star-forming regions (Poggianti et al., 2018; Fossati et al., 2019) or
through non-photo-ionisation mechanisms such as shocks, heat conduction, and
magneto-hydrodynamic waves (Boselli et al., 2016). We use the relation in
Barger et al. (2013) to estimate the total H$\alpha$ mass in the IGM (i.e.
EELR, SH2, and the filaments) from our H$\alpha$ photometry (Fig. 6). Assuming
a typical H$\alpha$-emitting gas temperature of 10$^4$ K and an electron
density of 1 cm$^{-3}$, we estimate the total H$\alpha$ mass in the IGM to be
$\sim$ 2.6 $\times$ 10$^6$ M⊙, which does not significantly contribute to the
total gas budget in the IGM.
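The Barger et al. (2013) relation itself is not reproduced in the text; the sketch below instead uses the equivalent standard case-B estimate M = L$_{\rm H\alpha}$ m$_{\rm H}$ / (n$_e$ $\alpha_{\rm eff}$ E$_{\rm H\alpha}$), with the effective H$\alpha$ recombination coefficient at 10$^4$ K. The luminosity is a hypothetical placeholder.

```python
# Sketch of the standard case-B ionised-hydrogen mass estimate at
# ~1e4 K (the exact Barger et al. 2013 relation is not reproduced
# here): M = L_Halpha * m_H / (n_e * alpha_eff * E_Halpha).

M_H = 1.673e-24        # g, hydrogen atom mass
E_HALPHA = 3.03e-12    # erg per Halpha photon
ALPHA_EFF = 1.17e-13   # cm^3 s^-1, effective Halpha recombination coeff.
MSUN = 1.989e33        # g

def ionised_mass(l_halpha, n_e=1.0):
    """Ionised hydrogen mass in Msun, for an Halpha luminosity
    l_halpha in erg/s and an electron density n_e in cm^-3."""
    return l_halpha * M_H / (n_e * ALPHA_EFF * E_HALPHA) / MSUN

# Hypothetical luminosity; note M scales as 1/n_e, so the assumed
# n_e = 1 cm^-3 directly sets the mass scale.
print(f"{ionised_mass(1.0e39):.1e} Msun")
```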
Simulations show that $\sim$ 10$^4$ K (i.e. relatively cool) gas clouds can
survive in hot haloes (such as NGC 1316) for cosmological timescales (Nelson
et al., 2020). The clouds originate from satellite mergers, and are not in
thermal equilibrium, but rather magnetically dominated. Cooling is triggered
by the thermal instability and the cool gas is surrounded by an interface of
intermediate temperature gas (Nelson et al., 2020). These ingredients can
explain how multiphase gas clouds are present in the hot halo of NGC 1316
(Fig. 6), such that the H$\alpha$ filaments are a result of satellite
accretion and the H i has rapidly cooled from these structures, with the
ability to survive in the IGM for cosmological timescales.
Recently, Müller et al. (2020) suggested that magnetic fields of the order of 2
– 4 $\mu$G can shield H$\alpha$ and H i in the ICM / IGM such that the gas
clouds do not dissipate. As the H$\alpha$ filaments and multiphase gas clouds
are within the radio lobes (in projection) of NGC 1316, the magnetic field of
the lobes (measured to be $\sim$ 3 $\mu$G by McKinley et al., 2015; Anderson
et al., 2018; Maccagni et al., 2020) may be providing additional stability for
the H$\alpha$ and H i to survive. Indeed, the Ant detected by Fomalont et al.
(1989) and Bland-Hawthorn et al. (1995) is a small portion of the giant
H$\alpha$ filaments in the IGM. Even though there is currently no H i
associated with the Ant, other sections of the H$\alpha$ filaments show that
neutral and ionised gas can coexist in some regions of the IGM, possibly
transform into one another, and accrete onto group galaxies (e.g. NGC 1310).
## 6 Conclusions
We present results from MeerKAT H i commissioning observations of the Fornax A
group. Our observations are reduced with the CARACal pipeline and our H i
image is sensitive to a column density of 1.4 $\times$ 10$^{19}$ atoms cm$^{-2}$ in the
field centre. Out of 13 spectroscopically confirmed group members, we detect H
i in 10 and report an H i mass upper limit for 2 (the remaining galaxy is
outside the field of view of our observation). We also detect H i in the IGM,
in the form of clouds, some distributed along coherent structures up to 220
kpc in length. The H i in the IGM is the result of a major merger occurring in
the massive, central galaxy NGC 1316, 1 – 3 Gyr ago, combined with H i being
removed from satellite galaxies as they are pre-processed.
We find that 9 out of the 12 galaxies show some evidence of pre-processing in
the form of H i deficient galaxies, truncated H i discs, H i tails, and
asymmetries. Using the H i morphology and the molecular-to-atomic gas ratios
of the galaxy, we classify whether each galaxy is in the early, ongoing, or
advanced stage of pre-processing.
Finally, we show that there are giant H$\alpha$ filaments in the IGM, within
the hot halo of NGC 1316. The filaments are likely a result of molecular gas
being removed from a satellite galaxy and then ionised. We observe a number of
H i clouds associated with the ionised H$\alpha$ filament, indicating the
presence of multiphase gas. Simulations show that hot gas can condense into
cool gas within hot haloes and survive for long periods of time on a
cosmological timescale, which is consistent with the cool gas clouds we detect
within the hot halo of NGC 1316. The multiphase gas is supported by magnetic
pressure, implying that the magnetic field in the lobes of the NGC 1316 AGN
might be playing an important role in maintaining these multiphase gas clouds.
The cycle of AGN activity and cooling gas in the IGM could ultimately result
in the cool gas clouds falling back onto the central galaxy. We summarise our
main findings as follows:
1. We present new, resolved H i in FCC 35, NGC 1310, and NGC 1326.
2. There is a total of (1.12 $\pm$ 0.02) $\times$ 10$^9$ M⊙ of H i in the IGM,
which is dominated by TN and TS (combined H i mass of 6.6 $\times$ 10$^8$ M⊙).
We detect additional components in both tails: an extension in TN, effectively
doubling its length, and a cloud in TS that shows coherence with the stellar
south-west loop.
3. The H i in the IGM is decoupled from the stars, other than in TS and SH2.
4. We measure 0.9 – 1.2 $\times$ 10$^9$ M⊙ of H i associated with NGC 1316,
bringing the observed H i mass budget within a factor of $\sim$ 2 of the
expected value for a 10:1 lenticular + spiral merger occurring $\sim$ 2 Gyr ago.
5. Out of the 12 group galaxies in our sample, 2 (NGC 1326A and NGC 1326B) are
in the early phase of pre-processing, 5 (FCC 35, ESO 301-IG 11, NGC 1310, NGC
1316, and NGC 1326) are in the ongoing phase, 4 (NGC 1316C, NGC 1317, FCC 19,
and FCC 46) are in the advanced stage, and 1 (FCC 40) remains unclassified.
6. Galaxies that are yet to be pre-processed have a typical extended H i disc,
a high H i content, and molecular-to-atomic gas ratios at least an order of
magnitude below the median trend for their stellar mass. Galaxies that are
currently being pre-processed typically display H i tails or asymmetric
extended discs, while containing regular amounts of H i and H2. Galaxies in
the advanced stage of pre-processing have no H i or have lost their outer H i
and are efficiently converting their remaining H i to H2.
7. We detect the Ant, first observed by Fomalont et al. (1989) as a
depolarising feature and later in H$\alpha$ by Bland-Hawthorn et al. (1995),
which turns out to be a small part of long, ionised H$\alpha$ filaments in the
IGM. Localised cooling (potentially assisted by the magnetic field in the
lobes of the NGC 1316 AGN) can occur in the H$\alpha$ filaments, condensing
the gas to form H i.
In this work, our deep MeerKAT H i image shows many examples of pre-processing
in the Fornax A group, such as galaxies with a variety of atypical
morphologies and massive amounts of H i in the IGM. The improved sensitivity
and resolution of the MFS (Serra et al., 2016) will likely reveal more H i
throughout the group and provide kinematic information for the H i in galaxies
and the IGM.
###### Acknowledgements.
The MeerKAT telescope is operated by the South African Radio Astronomy
Observatory, which is a facility of the National Research Foundation, an
agency of the Department of Science and Innovation. We are grateful to the
full MeerKAT team at SARAO for their work on building and commissioning
MeerKAT. This paper makes use of the following ALMA data:
ADS/JAO.ALMA#2017.1.00129.S. ALMA is a partnership of ESO (representing its
member states), NSF (USA), and NINS (Japan), together with NRC (Canada), MOST
and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the
Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO,
and NAOJ. This work also made use of the Inter-University Institute for Data
Intensive Astronomy (IDIA) visualisation lab (https://vislab.idia.ac.za). IDIA
is a partnership of the University of Cape Town, the University of Pretoria
and the University of Western Cape. This project has received funding from the
European Research Council (ERC) under the European Union’s Horizon 2020
research and innovation programme (grant agreement no. 679627; project name
FORNAX). The research of OS is supported by the South African Research Chairs
Initiative of the Department of Science and Innovation and the National
Research Foundation. KT acknowledges support from IDIA. The work of KMM is
supported by JSPS KAKENHI Grant Number of 19J40004. RFP acknowledges financial
support from the European Union’s Horizon 2020 research and innovation program
under the Marie Skłodowska-Curie grant agreement No. 721463 to the SUNDIAL ITN
network. AV acknowledges the funding from the Emil Aaltonen foundation. PK is
partially supported by the BMBF project 05A17PC2 for D-MeerKAT. AS
acknowledges funding from the National Research Foundation under the Research
Career Advancement and South African Research Chair Initiative programs
(SARChI), respectively. FV acknowledges financial support from the Italian
Ministry of Foreign Affairs and International Cooperation (MAECI Grant Number
ZA18GR02) and the South African NRF (Grant Number 113121) as part of the ISARP
RAIOSKY2020 Joint Research Scheme.
## References
* Anderson et al. (2018) Anderson, C. S., Gaensler, B. M., Heald, G. H., et al. 2018, ApJ, 855, 41
* Baldry et al. (2004) Baldry, I. K., Glazebrook, K., Brinkmann, J., et al. 2004, ApJ, 600, 681
* Balogh et al. (2004) Balogh, M., Eke, V., Miller, C., et al. 2004, MNRAS, 348, 1355
* Barger et al. (2013) Barger, K. A., Haffner, L. M., & Bland-Hawthorn, J. 2013, ApJ, 771, 132
* Bell et al. (2004) Bell, E. F., Wolf, C., Meisenheimer, K., et al. 2004, ApJ, 608, 752
* Bianconi et al. (2018) Bianconi, M., Smith, G. P., Haines, C. P., et al. 2018, MNRAS, 473, L79
* Bland-Hawthorn et al. (1995) Bland-Hawthorn, J., Ekers, R. D., van Breugel, W., Koekemoer, A., & Taylor, K. 1995, ApJ, 447, L77
* Bolatto et al. (2013) Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, ARA&A, 51, 207
* Boselli et al. (2014) Boselli, A., Cortese, L., Boquien, M., et al. 2014, A&A, 564, A66
* Boselli et al. (2016) Boselli, A., Cuillandre, J. C., Fossati, M., et al. 2016, A&A, 587, A68
* Boselli et al. (2010) Boselli, A., Eales, S., Cortese, L., et al. 2010, PASP, 122, 261
* Boselli et al. (2018a) Boselli, A., Fossati, M., Consolandi, G., et al. 2018a, A&A, 620, A164
* Boselli et al. (2018b) Boselli, A., Fossati, M., Ferrarese, L., et al. 2018b, A&A, 614, A56
* Calcáneo-Roldán et al. (2000) Calcáneo-Roldán, C., Moore, B., Bland-Hawthorn, J., Malin, D., & Sadler, E. M. 2000, MNRAS, 314, 324
* Cantiello et al. (2013) Cantiello, M., Grado, A., Blakeslee, J. P., et al. 2013, A&A, 552, A106
* Catinella et al. (2018) Catinella, B., Saintonge, A., Janowiecki, S., et al. 2018, MNRAS, 476, 875
* Catinella et al. (2013) Catinella, B., Schiminovich, D., Cortese, L., et al. 2013, MNRAS, 436, 34
* Chung et al. (2007) Chung, A., van Gorkom, J. H., Kenney, J. D. P., & Vollmer, B. 2007, ApJ, 659, L115
* Chung et al. (2009) Chung, S. M., Gonzalez, A. H., Clowe, D., et al. 2009, ApJ, 691, 963
* Cluver et al. (2020) Cluver, M. E., Jarrett, T. H., Taylor, E. N., et al. 2020, ApJ, 898, 20
* Cortese et al. (2010) Cortese, L., Davies, J. I., Pohlen, M., et al. 2010, A&A, 518, L49
* Cortese et al. (2006) Cortese, L., Gavazzi, G., Boselli, A., et al. 2006, A&A, 453, 847
* Courtois & Tully (2015) Courtois, H. M. & Tully, R. B. 2015, MNRAS, 447, 1531
* Cowie & McKee (1977) Cowie, L. L. & McKee, C. F. 1977, ApJ, 211, 135
* Davies et al. (2019) Davies, L. J. M., Robotham, A. S. G., Lagos, C. d. P., et al. 2019, MNRAS, 483, 5444
* de Blok et al. (2018) de Blok, W. J. G., Walter, F., Ferguson, A. M. N., et al. 2018, ApJ, 865, 26
* De Rijcke et al. (2013) De Rijcke, S., Buyle, P., & Koleva, M. 2013, ApJ, 770, L26
* Donnari et al. (2020) Donnari, M., Pillepich, A., Joshi, G. D., et al. 2020, MNRAS [arXiv:2008.00005]
* Draine et al. (2007) Draine, B. T., Dale, D. A., Bendo, G., et al. 2007, ApJ, 663, 866
* Drew et al. (2014) Drew, J. E., Gonzalez-Solares, E., Greimel, R., et al. 2014, MNRAS, 440, 2036
* Drinkwater et al. (2001) Drinkwater, M. J., Gregg, M. D., Holman, B. A., & Brown, M. J. I. 2001, MNRAS, 326, 1076
* Driver et al. (2011) Driver, S. P., Hill, D. T., Kelvin, L. S., et al. 2011, MNRAS, 413, 971
* Eke et al. (2004) Eke, V. R., Baugh, C. M., Cole, S., et al. 2004, MNRAS, 348, 866
* Ekers et al. (1983) Ekers, R. D., Goss, W. M., Wellington, K. J., et al. 1983, A&A, 127, 361
* Elagali et al. (2019) Elagali, A., Staveley-Smith, L., Rhee, J., et al. 2019, MNRAS, 487, 2797
* Ferguson (1989) Ferguson, H. C. 1989, AJ, 98, 367
* Fomalont et al. (1989) Fomalont, E. B., Ebneter, K. A., van Breugel, W. J. M., & Ekers, R. D. 1989, ApJ, 346, L17
* Fossati et al. (2019) Fossati, M., Fumagalli, M., Gavazzi, G., et al. 2019, MNRAS, 484, 2212
* Fujita (2004) Fujita, Y. 2004, PASJ, 56, 29
* Galametz et al. (2012) Galametz, M., Kennicutt, R. C., Albrecht, M., et al. 2012, MNRAS, 425, 763
* Gómez et al. (2003) Gómez, P. L., Nichol, R. C., Miller, C. J., et al. 2003, ApJ, 584, 210
* Goudfrooij et al. (2001) Goudfrooij, P., Alonso, M. V., Maraston, C., & Minniti, D. 2001, MNRAS, 328, 237
* Haines et al. (2007) Haines, C. P., Gargiulo, A., La Barbera, F., et al. 2007, MNRAS, 381, 7
* Haines et al. (2015) Haines, C. P., Pereira, M. J., Smith, G. P., et al. 2015, ApJ, 806, 101
* Haines et al. (2013) Haines, C. P., Pereira, M. J., Smith, G. P., et al. 2013, ApJ, 775, 126
* Hatt et al. (2018) Hatt, D., Freedman, W. L., Madore, B. F., et al. 2018, ApJ, 866, 145
* Hess et al. (2017) Hess, K. M., Cluver, M. E., Yahya, S., et al. 2017, MNRAS, 464, 957
* Horellou et al. (2001) Horellou, C., Black, J. H., van Gorkom, J. H., et al. 2001, A&A, 376, 837
* Ianjamasimanana et al. (2020) Ianjamasimanana, R., Namumba, B., Ramaila, A. J. T., et al. 2020, MNRAS, 497, 4795
* Iodice et al. (2016) Iodice, E., Capaccioli, M., Grado, A., et al. 2016, ApJ, 820, 42
* Iodice et al. (2017) Iodice, E., Spavone, M., Capaccioli, M., et al. 2017, ApJ, 839, 21
* Jarrett et al. (2020) Jarrett, T. H., Comrie, A., Marchetti, L., et al. 2020, arXiv e-prints, arXiv:2012.10342
* Johnson et al. (2015) Johnson, M. C., Kamphuis, P., Koribalski, B. S., et al. 2015, MNRAS, 451, 3192
* Jonas (2016) Jonas, J. 2016, Proceedings of MeerKAT Science: On the Pathway to the SKA
* Józsa et al. (2020) Józsa, G. I. G., White, S. V., Thorat, K., et al. 2020, in ASP Conf. Ser., Vol. 527, ADASS XXIX, ed. R. Pizzo, E. Deul, J.-D. Mol, J. de Plaa, & H. Verkouter, San Francisco, 635–638
* Kenney et al. (2008) Kenney, J. D. P., Tal, T., Crowl, H. H., Feldmeier, J., & Jacoby, G. H. 2008, ApJ, 687, L69
* Kenyon et al. (2018) Kenyon, J. S., Smirnov, O. M., Grobler, T. L., & Perkins, S. J. 2018, Monthly Notices of the Royal Astronomical Society, 478, 2399
* Kleiner et al. (2019) Kleiner, D., Koribalski, B. S., Serra, P., et al. 2019, MNRAS, 488, 5352
* Koribalski (2012) Koribalski, B. S. 2012, PASA, 29, 359
* Koribalski et al. (2004) Koribalski, B. S., Staveley-Smith, L., Kilborn, V. A., et al. 2004, AJ, 128, 16
* Kreckel et al. (2012) Kreckel, K., Platen, E., Aragón-Calvo, M. A., et al. 2012, AJ, 144, 16
* Lanz et al. (2010) Lanz, L., Jones, C., Forman, W. R., et al. 2010, ApJ, 721, 1702
* Lewis et al. (2002) Lewis, I., Balogh, M., De Propris, R., et al. 2002, MNRAS, 334, 673
* Liu et al. (2019) Liu, Y., Peng, E. W., Jordán, A., et al. 2019, ApJ, 875, 156
* Longobardi et al. (2020) Longobardi, A., Boselli, A., Fossati, M., et al. 2020, A&A, 644, A161
* Loni et al. (2021) Loni, A., Serra, P., Kleiner, D., et al. 2021, arXiv e-prints, arXiv:2102.01185
* Maccagni et al. (2020) Maccagni, F. M., Murgia, M., Serra, P., et al. 2020, A&A, 634, A9
* Mackie & Fabbiano (1998) Mackie, G. & Fabbiano, G. 1998, AJ, 115, 514
* Maddox et al. (2019) Maddox, N., Serra, P., Venhola, A., et al. 2019, MNRAS, 490, 1666
* Mahajan (2013) Mahajan, S. 2013, MNRAS, 431, L117
* Mahajan et al. (2012) Mahajan, S., Raychaudhury, S., & Pimbblet, K. A. 2012, MNRAS, 427, 1252
* Marchetti et al. (2020) Marchetti, L., Jarrett, T. H., Comrie, A., et al. 2020, arXiv e-prints, arXiv:2012.11553
* Mauch et al. (2020) Mauch, T., Cotton, W. D., Condon, J. J., et al. 2020, ApJ, 888, 61
* McKinley et al. (2015) McKinley, B., Yang, R., López-Caniego, M., et al. 2015, MNRAS, 446, 3478
* Meyer et al. (2004) Meyer, M. J., Zwaan, M. A., Webster, R. L., et al. 2004, MNRAS, 350, 1195
* Moretti et al. (2020) Moretti, A., Paladino, R., Poggianti, B. M., et al. 2020, ApJ, 897, L30
* Morokuma-Matsui et al. (2019) Morokuma-Matsui, K., Serra, P., Maccagni, F. M., et al. 2019, PASJ, 71, 85
* Müller et al. (2020) Müller, A., Poggianti, B. M., Pfrommer, C., et al. 2020, Nature Astronomy [arXiv:2009.13287]
* Nelson et al. (2020) Nelson, D., Sharma, P., Pillepich, A., et al. 2020, MNRAS, 498, 2391
* Nulsen (1982) Nulsen, P. E. J. 1982, MNRAS, 198, 1007
* Offringa et al. (2014) Offringa, A. R., McKinley, B., Hurley-Walker, N., et al. 2014, MNRAS, 444, 606
* Offringa & Smirnov (2017) Offringa, A. R. & Smirnov, O. 2017, MNRAS, 471, 301
* Offringa et al. (2012) Offringa, A. R., van de Gronde, J. J., & Roerdink, J. B. T. M. 2012, A&A, 539, A95
* Peng et al. (2010) Peng, Y.-j., Lilly, S. J., Kovač, K., et al. 2010, ApJ, 721, 193
* Poggianti et al. (2018) Poggianti, B. M., Moretti, A., Gullieuszik, M., et al. 2018, ApJ, 853, 200
* Porter et al. (2008) Porter, S. C., Raychaudhury, S., Pimbblet, K. A., & Drinkwater, M. J. 2008, MNRAS, 388, 1152
* Putman et al. (1998) Putman, M. E., Bureau, M., Mould, J. R., Staveley-Smith, L., & Freeman, K. C. 1998, AJ, 115, 2345
* Raj et al. (2020) Raj, M. A., Iodice, E., Napolitano, N. R., et al. 2020, A&A, 640, A137
* Raj et al. (2019) Raj, M. A., Iodice, E., Napolitano, N. R., et al. 2019, A&A, 628, A4
* Ramatsoku et al. (2020) Ramatsoku, M., Murgia, M., Vacca, V., et al. 2020, A&A, 636, L1
* Rasmussen et al. (2012) Rasmussen, J., Bai, X.-N., Mulchaey, J. S., et al. 2012, ApJ, 747, 31
* Rasmussen et al. (2008) Rasmussen, J., Ponman, T. J., Verdes-Montenegro, L., Yun, M. S., & Borthakur, S. 2008, MNRAS, 388, 1245
* Richtler et al. (2014) Richtler, T., Hilker, M., Kumar, B., et al. 2014, A&A, 569, A41
* Roberts & Parker (2017) Roberts, I. D. & Parker, L. C. 2017, MNRAS, 467, 3268
* Robotham et al. (2011) Robotham, A. S. G., Norberg, P., Driver, S. P., et al. 2011, MNRAS, 416, 2640
* Schawinski et al. (2014) Schawinski, K., Urry, C. M., Simmons, B. D., et al. 2014, MNRAS, 440, 889
* Schröder et al. (2001) Schröder, A., Drinkwater, M. J., & Richter, O. G. 2001, A&A, 376, 98
* Schweizer (1980) Schweizer, F. 1980, ApJ, 237, 303
* Serra et al. (2016) Serra, P., de Blok, W. J. G., Bryan, G. L., et al. 2016, in MeerKAT Science: On the Pathway to the SKA, 8
* Serra et al. (2012) Serra, P., Jurek, R., & Flöer, L. 2012, PASA, 29, 296
* Serra et al. (2019) Serra, P., Maccagni, F. M., Kleiner, D., et al. 2019, A&A, 628, A122
* Serra et al. (2015) Serra, P., Westmeier, T., Giese, N., et al. 2015, MNRAS, 448, 1922
* Seth & Raychaudhury (2020) Seth, R. & Raychaudhury, S. 2020, MNRAS, 497, 466
* Steinhauser et al. (2016) Steinhauser, D., Schindler, S., & Springel, V. 2016, A&A, 591, A51
* Taylor et al. (2011) Taylor, E. N., Hopkins, A. M., Baldry, I. K., et al. 2011, MNRAS, 418, 1587
* Theureau et al. (1998) Theureau, G., Bottinelli, L., Coudreau-Durand, N., et al. 1998, A&AS, 130, 333
* Venhola et al. (2018) Venhola, A., Peletier, R., Laurikainen, E., et al. 2018, A&A, 620, A165
* Venhola et al. (2019) Venhola, A., Peletier, R., Laurikainen, E., et al. 2019, A&A, 625, A143
* Verdugo et al. (2008) Verdugo, M., Ziegler, B. L., & Gerken, B. 2008, A&A, 486, 9
* Vulcani et al. (2018) Vulcani, B., Poggianti, B. M., Jaffé, Y. L., et al. 2018, MNRAS, 480, 3152
* Weinmann et al. (2006) Weinmann, S. M., van den Bosch, F. C., Yang, X., & Mo, H. J. 2006, MNRAS, 366, 2
* Westmeier et al. (2011) Westmeier, T., Braun, R., & Koribalski, B. S. 2011, MNRAS, 410, 2217
* Wetzel et al. (2012) Wetzel, A. R., Tinker, J. L., & Conroy, C. 2012, MNRAS, 424, 232
* Woo et al. (2013) Woo, J., Dekel, A., Faber, S. M., et al. 2013, MNRAS, 428, 3306
* Yagi et al. (2017) Yagi, M., Yoshida, M., Gavazzi, G., et al. 2017, ApJ, 839, 65
## Appendix A H$\alpha$ image comparison
In Fig. 9, we show the H$\alpha$ image after the standard reduction alongside
the image we used, in which the background of the original image was modelled
and subtracted using a median filter. The giant H$\alpha$ filaments can be seen
in the original image; however, it is dominated by over- and under-subtracted
artefacts and has a variable background.
The success of our median smoothing and model background subtraction is
dependent on how well the real H$\alpha$ is masked. Anything that is included
in the mask, by definition, is included in the final image. This is especially
challenging for diffuse H$\alpha$ emission located in areas with high
background noise. The converse is also true: if spurious H$\alpha$ emission is
included in the mask, it will also be in the final image.
To mitigate these issues as well as possible, we used a conservative approach
to carefully mask the real H$\alpha$ that was clearly visible in the original
image. It is particularly difficult to mask real H$\alpha$ emission in areas
with a highly variable background and where the background is significantly
under-subtracted. The result is that some of the diffuse H$\alpha$ emission is
lost and not reproduced in the final image. As this is an iterative process,
we were able to recover H$\alpha$ emission in the most over-subtracted regions
of the image. Even though we cannot preserve 100% of the H$\alpha$ emission in
this process, its purpose is to present the underlying structure of the new,
giant H$\alpha$ filaments detected in the IGM.
Figure 9: Comparison of the original and filtered H$\alpha$ images. _Top
image_ : H$\alpha$ image after the standard data reduction process. _Bottom
image_ : H$\alpha$ image we present in our work, in which the background of the
original image was iteratively modelled and subtracted (described in Section 3.2).
Both images are presented on the same scale. The original image is clearly
dominated by over- and under-subtracted artefacts, while the new image has a
smooth and uniform background, which retains the majority of the real
H$\alpha$ emission. Some diffuse H$\alpha$ emission is lost in this process,
however, the new image is a significant improvement that shows the underlying
structure of the giant H$\alpha$ filaments in the IGM.
# Heavy elements unveil the non primordial origin of the giant HI ring in Leo
Edvige Corbelli INAF-Osservatorio di Arcetri, Largo E. Fermi 5, 50125 Firenze,
Italy Giovanni Cresci INAF-Osservatorio di Arcetri, Largo E. Fermi 5, 50125
Firenze, Italy Filippo Mannucci INAF-Osservatorio di Arcetri, Largo E. Fermi
5, 50125 Firenze, Italy David Thilker Department of Physics and Astronomy,
The Johns Hopkins University, Baltimore, MD, USA Giacomo Venturi Instituto de
Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile,
Casilla 306, Santiago 22, Chile INAF-Osservatorio di Arcetri, Largo E. Fermi
5, 50125 Firenze, Italy
(Accepted by ApJ Letters)
###### Abstract
The origin and fate of the most extended extragalactic neutral cloud known in
the Local Universe, the Leo ring, is still debated 38 years after its
discovery. Its existence is alternatively attributed to leftover primordial
gas with some low level of metal pollution versus enriched gas stripped during
a galaxy-galaxy encounter. Taking advantage of MUSE (Multi Unit Spectroscopic
Explorer) operating at the VLT, we performed optical integral field
spectroscopy of 3 HI clumps in the Leo ring where ultraviolet continuum
emission has been found. We detected, for the first time, ionized hydrogen in
the ring and identified 4 nebular regions powered by massive stars. These
nebulae show several metal lines ([OIII], [NII], [SII]) which allowed reliable
measurements of metallicities, found to be close to or above the solar value
(0.8$\leq Z/Z_{\odot}\leq$1.4). Given the faintness of the diffuse stellar
counterparts, less than 3$\%$ of the observed heavy elements could have been
produced locally in the main body of the ring and not much more than 15$\%$ in
the HI clump towards M 96. This inference, and the chemical homogeneity among
the regions, convincingly demonstrates that the gas in the ring is not
primordial, but has been pre-enriched in a galaxy disk, then later removed and
shaped by tidal forces, and is now forming a sparse population of stars.
Galaxy groups ; Intergalactic clouds ; HII regions ; Chemical abundances
## 1 Introduction
The serendipitous discovery of an optically dark HI cloud in the M 96 galaxy
group (Schneider et al., 1983), part of the Leo I group, has since then
triggered a lot of discussion on the origin and survival of the most massive
and extended intergalactic neutral cloud known in the local Universe ($D\leq
20$ Mpc). With an extension of about 200 kpc and an HI mass $M_{HI}\simeq
2\times 10^{9}$ M⊙, the cloud has a ring-like shape orbiting the galaxies M
105 and NGC 3384 (Schneider, 1985), and is also known as the Leo ring. As
opposed to tidal streams, the main body of the Leo ring is isolated, more
distant than 3 optical radii from any luminous galaxy. The ring is much larger
than any known ring galaxy (Ghosh & Mapelli, 2008). The collisional ring of
NGC 5291 ($D\simeq 50$ Mpc) (Longmore et al., 1979; Boquien et al., 2007), of
similar extent, is vigorously forming stars, as many other collisional rings.
The Leo ring is much more quiescent and for many years since its discovery has
been detected only via HI emission. Lacking a pervasive optical counterpart
(Pierce & Tully, 1985; Kibblewhite et al., 1985; Watkins et al., 2014) it has
been proposed as a candidate primordial cloud (Schneider et al., 1989;
Sil’chenko et al., 2003) dating to the time of the Leo I group formation.
The bulk of the HI gas in the ring is on the south and west side, especially
between M 96 (to the south) and NGC 3384/M 105 (at the ring center, see Figure
1). Intermediate resolution VLA maps of this region with an effective beam of
45″ revealed the presence of gas clumps (Schneider et al., 1986), some of
which appear as distinct virialized entities and have masses up to 3.5$\times
10^{7}$ M⊙. The position angle of the clump major axes and their velocity
field suggest some internal rotation with a possible disk-like geometry and
gas densities similar to those of the interstellar medium. Distinct cloudlets
are found in the extension pointing south, towards M 96. Detection of GALEX
UV-continuum light in the direction of a few HI clumps of the ring suggested
star formation activity between 0.1 and 1 Gyr ago (Thilker et al., 2009).
However, most of the gas mass is not forming massive stars today since there
has been no confirmed diffuse H$\alpha$ emission (Reynolds et al., 1986;
Donahue et al., 1995) or CO detection from a pervasive population of giant
molecular complexes (Schneider et al., 1989).
A low level of metal enrichment, inferred from GALEX-UV and optical colors,
favoured the primordial origin hypothesis. This was supported a few years
later by the detection of weak metal absorption lines in the spectra of 3
background QSOs, 2 of which have sightlines close or within low HI column
density contours of the ring (Rosenberg et al., 2014). The low metallicity,
estimated at between 2$\%$ and 16$\%$ solar for Si/H, C/H and N/H, however has
large uncertainties due to ionisation corrections. Confusion with emission
from the Milky Way in the QSOs' spectra does not allow the HI column densities
along the sightlines to be measured directly. These are instead inferred from
large-scale HI gas maps, and gas substructures on small scales can alter the
estimated abundance ratios.
Figure 1: In the left panel, the HI contours of the Leo ring are overlaid on
the optical image of the M 96 group (SDSS color image). In magenta, the Arecibo
contour at N${}_{HI}=2\times 10^{18}$ cm$^{-2}$; in yellow, the VLA HI contours
of the southern part of the ring (Schneider et al., 1986). Crosses indicate
background QSOs (Rosenberg et al., 2014); red squares mark the HI clumps
observed with MUSE: Clump1 (C1), Clump2 (C2) and Clump2E (C2E). The 1 arcmin$^2$
angular size of the MUSE field is shown in the bottom left corner. In the right
panels, the MUSE H$\alpha$ images of Clump1 and of Clump2E show the 5 nebular
regions detected. The corresponding GALEX-FUV continuum emission is displayed
to the left of the H$\alpha$ images. The southernmost FUV source in the field
of Clump1 is a background galaxy.
The Leo ring has also been considered as a possible product of a gas-sweeping
head-on collision (Rood & Williams, 1985), involving group members such as NGC
3384 and M 96 (Michel-Dansac et al., 2010) or a low surface brightness galaxy
colliding with the group (Bekki et al., 2005). A tentative detection of dust
emission at 8 $\mu$m in one HI clump (Clump1) (Bot et al., 2009) also supports
the pre-enrichment scenario. A direct and reliable measurement of high
metallicity gas associated to the very weak stellar counterpart can give the
conclusive signature of a ring made of pre-enriched gas.
In this Letter we present the first detection of nebular regions in the Leo
ring. In Section 2 we describe integral field optical spectroscopy of 3 fields
in the ring and estimate metal abundances from emission lines in star forming
regions. The local metal production and the implications for the origin of the
Leo ring are discussed in the last Section. In a companion paper (Corbelli et
al., 2021, hereafter Paper II) we analyse star formation and the stellar
population in and around the detected nebulae using GALEX and HST images.
## 2 The discovery of nebular regions and their chemical abundances
We assume a distance to the Leo ring of 10 Mpc, as for M 96 and M 105. This
implies that an angular separation of 1″ corresponds to a spatial scale of
48.5 pc.
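This angular-to-physical scale conversion is simple small-angle geometry; as a quick sanity check (the helper function below is ours, not from the paper):

```python
import math

def arcsec_to_pc(theta_arcsec: float, distance_mpc: float) -> float:
    """Physical scale subtended by a small angle at a given distance."""
    theta_rad = theta_arcsec * math.pi / (180.0 * 3600.0)  # arcsec -> radians
    return theta_rad * distance_mpc * 1.0e6                # Mpc -> pc (small-angle limit)

# 1 arcsec at the adopted Leo ring distance of 10 Mpc
print(f"{arcsec_to_pc(1.0, 10.0):.1f} pc")  # ~48.5 pc, as quoted above
```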
### 2.1 The data
Between December 2019 and March 2020 we have observed three 1$\times$1 arcmin2
regions in the Leo ring using the integral field spectrograph MUSE (Multi Unit
Spectroscopic Explorer) mounted on the ESO Very Large Telescope (VLT). The
locations of MUSE fields are shown in red in the left panel of Figure 1
overlaid on the SDSS optical image of the M 96 group and on the VLA HI
contours of the ring. The fields have been centered at 3 HI peak locations,
Clump1, Clump2 and Clump2E, two in the main body of the ring and one in the
filament connecting the ring to M 96. They completely cover the
ultraviolet-bright regions of Clump1 and Clump2E listed by Thilker et al.
(2009). The
southernmost side of the UV emission in Clump2 is at the border of the MUSE
field.
The final cube for each region is the result of two observing blocks, one
totalling 960 s and the other 1920 s. The observing blocks are a combination
of two and four 480 s exposures, respectively, which were rotated and dithered
from each other in order to provide a uniform coverage of the field and to
limit systematics. Dedicated offset sky exposures of 100 s each were acquired
every two object exposures. The reduction of the raw data was performed with
the ESO MUSE pipeline (Weilbacher et al., 2020), which includes the standard
procedures of bias subtraction, flat fielding, wavelength calibration, flux
calibration, sky subtraction and the final cube reconstruction by the spatial
arrangement of the individual slits of the image slicers. For Clump1 we did
not employ the dedicated sky observations for the sky subtraction, since these
were giving strong sky residuals, especially around the H$\alpha$ line. We
thus extracted the sky spectrum to be subtracted from within the science cube,
by selecting the portions of the FoV free of source emission. This allowed us to
remove the problematic sky residuals because the sky spectrum obtained from
within the science FoV is simultaneous with the science spectra. The final
dataset comprises 3 data cubes, one per clump, covering a FoV slightly larger
than 1 arcmin2. Each spectrum spans the wavelength range 4600 - 9350 Å, with a
typical spectral resolution between 1770 at 4800 Å and 3590 at 9300 Å. The
spatial resolution given by the seeing is of the order of 1″.
### 2.2 HII regions in the ring
We analyse spectral data at the observed spectral and spatial resolution
searching for H$\alpha$ emission at the velocities of the HI gas in the ring
i.e. between 860 and 1060 km s-1. We detect hydrogen and some collisionally
excited metal lines in three distinct regions of Clump1 (C1a, C1b, C1c) and in
two regions of Clump2E (C2Ea, C2Eb). Figure 1 shows the GALEX-FUV continuum
and the H$\alpha$ emission in the three MUSE fields. The FUV emission in
Clump2E seems more extended than the HII region in H$\alpha$ and suggests a
non-coeval population or the presence of a low-mass stellar cluster lacking
massive stars. No nebular lines are detected in the field covering Clump2.
This clump is the reddest of the three clumps observed, having the largest
values of UV and optical colours (Thilker et al., 2009; Watkins et al., 2014).
| Source | RA | DEC | $V_{hel}$ (km s$^{-1}$) | 12+log(O/H) | $A_{V}$ (mag) | $Z/Z_{\odot}$ | $R^{max}_{ap}$ (arcsec) | $A^{Rmax}_{H\alpha}$ (mag) | log $L_{H\alpha}$ (erg s$^{-1}$) |
|---|---|---|---|---|---|---|---|---|---|
| C1a | 10:47:47.93 | 12:11:31.9 | 994$\pm$2 | 8.59${}^{+0.04}_{-0.04}$ | 1.02${}^{+0.60}_{-0.60}$ | 0.79 | 5.0 | 0.40 | 36.62 |
| C1b | 10:47:47.44 | 12:11:27.6 | 1003$\pm$3 | 8.63${}^{+0.28}_{-0.04}$ | 0.06${}^{+1.59}_{-0.06}$ | 0.87 | 3.0 | … | 35.94 |
| C2Ea | 10:48:13.52 | 12:02:24.3 | 940$\pm$3 | 8.84${}^{+0.01}_{-0.01}$ | 0.47${}^{+0.31}_{-0.35}$ | 1.41 | 3.4 | 0.61 | 36.91 |
| C2Eb | 10:48:14.08 | 12:02:32.5 | 937$\pm$21 | 8.82${}^{+0.09}_{-0.11}$ | … | 1.35 | 3.0 | … | 35.85 |

Table 1: HII region coordinates, chemical abundances and extinction. Extinction-corrected total H$\alpha$ luminosities are computed using circular apertures with radius $R_{ap}^{max}$.
| Source | H$\beta$ | [OIII]5007 | [NII]6548 | H$\alpha$ | [NII]6583 | [SII]6716 | [SII]6731 | FWHM$_{b,r}$ [Å] |
|---|---|---|---|---|---|---|---|---|
| C1a | 1.89$\pm$0.37 | 1.53$\pm$0.38 | $<0.46$ | 7.89$\pm$0.29 | 1.39$\pm$0.25 | 0.86$\pm$0.21 | 0.54$\pm$0.21 | 2.5, 2.5 |
| C1b | 1.00$\pm$0.37 | $<0.76$ | $<0.49$ | 3.17$\pm$0.23 | 0.82$\pm$0.31 | $<0.40$ | $<0.40$ | 1.9, 2.2 |
| C2Ea | 7.97$\pm$0.41 | 1.34$\pm$0.37 | 3.88$\pm$0.31 | 26.57$\pm$0.35 | 11.39$\pm$0.36 | 2.71$\pm$0.33 | 1.81$\pm$0.36 | 2.8, 2.4 |
| C2Eb | $<0.79$ | $<0.79$ | $<0.76$ | 2.25$\pm$0.35 | 1.01$\pm$0.32 | $<0.34$ | $<0.34$ | …, 2.1 |

Table 2: Integrated emission for Gaussian fits to nebular lines with $R_{ap}$=1.2″. Upper limits are 3$\sigma$ values; flux units are 10$^{-17}$ erg s$^{-1}$ cm$^{-2}$.
The four regions listed in Table 1 are HII regions associated with recent star
formation events according to their line ratios (Kauffmann et al., 2003;
Sanders et al., 2012) and to the underlying stellar population (see Paper II).
The data relative to the faintest nebula detected, C1c, is presented and
discussed in Paper II because emission line ratios and [OIII]5007 luminosity
are consistent with the object being a Planetary Nebula whose metallicity is
unconstrained due to undetected lines. We give in Table 1 the central
coordinates of the HII regions and the mean recession velocities of identified
optical lines. These are consistent with the 21-cm line velocities of the HI
gas (Schneider et al., 1986). We fit Gaussian profiles to emission lines whose
peaks are well above 3$\sigma$ in circular apertures with radius 1.2″,
comparable to the seeing. With these apertures we sample more than one third
of the region total H$\alpha$ luminosity and achieve good signal-to-noise
(S/N$>$2.5) for integrated Gaussian line fits to all detected lines. The
integrated line fluxes are shown in Table 2. We require a uniform Gaussian
line width in the red or in the blue part of the spectrum since lines are
unresolved. The resulting FWHM are shown in Table 2. Upper limits in Table 2
are 3$\sigma$ values for non-detected lines, inferred using the rms of the
spectra at the expected wavelength and a typical full spectral extent of the
line. For the brightest HII regions we detected strong metal lines, such as [O
iii]5007, [N ii]6583, [S ii]6716,6731 which can be used to compute reliable
metallicities.
### 2.3 Chemical abundances
For the four HII regions in Table 1 we compute the gas-phase metal abundances
using the strong-line calibration in Curti et al. (2020). All the available
emission lines and the upper limits to the undetected lines are used to
measure metallicity and dust extinction in a two-parameter minimization
routine which also estimates the uncertainties on these two parameters. The
resulting metal abundances are displayed in Figure 2 and in Table 1. In Figure
2 we show the 1-$\sigma$ confidence levels in the oxygen abundance-visual
extinction plane and the best fitting values of chemical abundances along the
calibration curves for the strong line ratios. Line ratios for all the HII
regions are well-reproduced by close to solar metallicities and moderate
visual dust extinctions. For C2Eb extinction cannot be constrained.
Metallicities in Clump1 are slightly below solar, those in Clump 2E are above
solar. The HII regions in Clump 1 have lower SII/H$\alpha$ line ratios than
predicted by Curti et al. (2020). This is also found in outer disk HII regions
(Vilchez & Esteban, 1996) and it is likely due to a high ionisation parameter
driven by a low density interstellar medium (Dopita et al., 2013). The mass
fraction of metals with respect to solar, $Z/Z_{\odot}$, ranges between 0.79
and 1.41 (assuming solar distribution of heavy elements and $Z_{\odot}$=0.0142
(Asplund et al., 2009)).
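As a rough independent cross-check of these abundances, one can apply the simpler N2-index calibration of Pettini & Pagel (2004), 12+log(O/H) = 8.90 + 0.57 N2 with N2 = log([NII]6583/H$\alpha$), to the fluxes in Table 2. Note that this is not the Curti et al. (2020) calibration actually used in the paper, and different strong-line calibrations carry systematic offsets of order 0.1-0.2 dex:

```python
import math

def oh_n2_pp04(f_nii6583: float, f_halpha: float) -> float:
    """12+log(O/H) from the N2 index (Pettini & Pagel 2004 linear calibration)."""
    n2 = math.log10(f_nii6583 / f_halpha)
    return 8.90 + 0.57 * n2

# C2Ea fluxes from Table 2 (units of 1e-17 erg/s/cm^2 cancel in the ratio)
print(round(oh_n2_pp04(11.39, 26.57), 2))  # ~8.69, i.e. roughly solar
```

The result is roughly solar, qualitatively consistent with the above-solar value derived for C2Ea from the full multi-line fit.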
Figure 2: Dust visual extinction and gas-phase metallicity for each HII
region, color-coded as in the legend. In the top panel we show the best
fitting values (cross) and 1-$\sigma$ confidence levels of $A_{V}$; the dotted
line shows the solar metallicity (12+log(O/H)=8.69). The four bottom panels
refer to relevant strong-line ratios. Diamonds show the observed values of the
line ratio plotted at the best-fitting value of metallicity. The
H$\alpha$/H$\beta$ ratio is computed for Case B recombination with
uncertainties due to the unknown temperature; circles show extinction-corrected
values. The solid curves in the lower three panels trace the
calibrations from Curti et al. (2020), with the relative uncertainties.
Using wide apertures, as listed in column (8) of Table 1 and chosen to include
most of H$\alpha$ emission with no overlap, we derive the HII region total
H$\alpha$ luminosities, $L_{H\alpha}$. These are given in column (10) already
corrected for extinction when this can be estimated from the Balmer decrement
in these apertures (column (9)). Luminosities are high enough to require the
presence of very massive and young stars, especially for C1a and C2Ea. The
local production rate of ionizing photons by hot stars might be higher than
what can be inferred using $L_{H\alpha}$ if some photons leak out or are
directly absorbed by dust in the nebula.
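The Balmer-decrement extinction correction mentioned above works as follows. The sketch assumes an intrinsic Case B ratio H$\alpha$/H$\beta$ = 2.86 and Cardelli-type extinction-curve values k(H$\beta$) ≈ 3.61, k(H$\alpha$) ≈ 2.53 with $R_V$ = 3.1; these coefficients are our assumption, as the paper does not state which extinction curve it adopts:

```python
import math

K_HALPHA, K_HBETA = 2.53, 3.61   # assumed extinction-curve values at Halpha and Hbeta
R_INTRINSIC = 2.86               # Case B intrinsic Halpha/Hbeta ratio

def av_from_balmer(f_halpha: float, f_hbeta: float, r_v: float = 3.1) -> float:
    """Visual extinction A_V from an observed Halpha/Hbeta flux ratio."""
    ebv = 2.5 / (K_HBETA - K_HALPHA) * math.log10((f_halpha / f_hbeta) / R_INTRINSIC)
    return r_v * ebv

# C2Ea fluxes from Table 2: Halpha = 26.57, Hbeta = 7.97
print(round(av_from_balmer(26.57, 7.97), 2))  # ~0.48, close to A_V = 0.47 in Table 1
```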
The HII regions in the [OIII]/H$\beta$ versus [NII]/H$\alpha$ plane, known as
the BPT diagram (Baldwin et al., 1981), are consistent with data from young
HII regions in galaxies (Kauffmann et al., 2003; Sanders et al., 2012) and
their metallicities are in agreement with those predicted by photoionisation
models of HII regions (Dopita et al., 2013; Byler et al., 2017). Line ratios
observed in Clump1 are also consistent with the distribution of the recent
evolution models of HII regions in gas clouds (Pellegrini et al., 2020),
available only for solar metallicity. These predict an age of about 5 Myrs for
C1a. Line ratios for C2Ea instead fall outside the area where solar
metallicity HII regions are found, in agreement with the higher than solar
metallicity we infer for this clump. A very young age is recovered in Paper II
for this HII region through a multiwavelength analysis.
## 3 In situ metal enrichment and the ring origin
We compute the maximum mass fraction of metals which could conceivably be
produced in situ, $f_{Z}^{max}$, given the observed metal abundances,
$Z_{obs}$, and the limiting blue magnitudes of the Leo ring, $\mu_{B}$. For
this extreme local enrichment scenario we assume that all stars have formed in
the ring and use the instantaneous burst or continuous star formation models
of Starburst99 (Leitherer et al., 1999) in addition to population synthesis
models of Bruzual & Charlot (2003) for an initial burst with an exponential
decay ($\tau=1$ Gyr). At each time step we compute the $B-V$ color and the
maximum stellar surface mass density which corresponds to the limiting values
of $\mu_{B}$. This stellar density gives the maximum mass fraction of metals
produced locally. In order to maximise the local metal production we consider
a closed box model with no inflows or outflows for which a simple equation
relates the stellar yields to the increase in metallicity since star formation
has switched on (Searle & Sargent, 1972):
$f_{Z}^{max}=\frac{Z}{Z_{obs}}=\frac{y_{Z}}{Z_{obs}}\,\ln\left(\frac{\Sigma_{g0}}{\Sigma_{g}}\right)=\frac{y_{Z}}{Z_{obs}}\,\ln\left(1+\frac{\Sigma_{*}}{\Sigma_{g}}\right)$ (1)
where $Z$ and $\Sigma_{*}$ are the abundance of metals by mass and the stellar
mass surface density produced in situ. The gas mass surface density at the
present time and at the time of the Leo ring formation are $\Sigma_{g}$ and
$\Sigma_{g0}$ respectively. The total net yield $y_{Z}$ is the ratio of the
mass of all heavy elements produced and injected into the interstellar medium
by a stellar population to the mass locked up in low-mass stars and stellar
remnants. Several factors can affect the yields: the upper end of the Initial
Mass Function (hereafter IMF), massive star evolution and ejecta models, and
metallicity. Since the pioneering work of Searle & Sargent
(1972) several papers have analysed these dependencies (e.g. Maeder, 1992;
Meynet & Maeder, 2002; Romano et al., 2010; Vincenzo et al., 2016). Following
the results of Romano et al. (2010); Vincenzo et al. (2016), we neglect the
metallicity dependence of the yields and adopt the Chabrier IMF, i.e. an IMF
with a Salpeter slope from 1 $M_{\odot}$ up to its high-mass end at 100
$M_{\odot}$ and a Chabrier lognormal slope from 0.1 to 1
$M_{\odot}$ (Salpeter, 1955; Chabrier, 2003). This IMF has a total yield
$y_{Z}$=0.06, the highest amongst commonly considered IMFs (Vincenzo et al.,
2016).
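Equation (1) can be evaluated directly. A minimal sketch using the Clump1 values discussed in this section ($y_{Z}$ = 0.06, $Z_{obs}$ = 0.6 $Z_{\odot}$ with $Z_{\odot}$ = 0.0142, $\Sigma_{*}$ = 0.01 and $\Sigma_{g}$ = 3.1 $M_{\odot}$ pc$^{-2}$) gives an in-situ metal fraction of a few per cent:

```python
import math

Z_SUN = 0.0142  # solar metal mass fraction (Asplund et al. 2009)

def f_z_closed_box(y_z: float, z_obs: float, sigma_star: float, sigma_gas: float) -> float:
    """Maximum in-situ metal mass fraction for a closed-box model (equation 1)."""
    return (y_z / z_obs) * math.log(1.0 + sigma_star / sigma_gas)

# Clump1 numbers: y_Z = 0.06, Z_obs = 0.6 Z_sun, Sigma_* = 0.01, Sigma_g = 3.1 Msun/pc^2
f_max = f_z_closed_box(0.06, 0.6 * Z_SUN, 0.01, 3.1)
print(f"{100.0 * f_max:.1f}%")  # a few per cent, in line with f_Z^max ~ 3% for Clump1
```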
To maximize the associated fraction of metals produced locally, $f_{Z}^{max}$,
we consider zero extinction and the best-fitting metallicities for C1a and C2Ea
minus 3 times their dispersion, i.e. $Z_{obs}$=0.6 and 1.32 $Z_{\odot}$ for
Clump1 and Clump2E respectively. A very large fraction of the HI rich ring
area corresponding to the VLA coverage of Schneider et al. (1986) has been
surveyed deeply in the optical B band (Pierce & Tully, 1985; Watkins et al.,
2014). For a very diffuse pervasive population throughout the Leo ring the
survey results give $\mu_{B}\geq 30$ mag arcsec-2. For optical emission in
less extended regions as the MUSE fields, or equivalently at the VLA HI map
spatial resolution (Schneider et al., 1986), and following the results of
Mihos et al. (2018) we can use the more conservative upper limit $\mu_{B}\geq
29$ mag arcsec-2. Given the optical colors $B-V$=0.0$\pm 0.1$ for Clump1 and
$B-V$=0.1$\pm 0.2$ mag for Clump2E (Watkins et al., 2014) we consider
$B-V\leq$0.1 and $B-V\leq$0.3 for Clump1 and Clump2E respectively. The average
HI+He gas surface density over a circular area with 45″ radius is
$\Sigma_{g}$=3.1 and 0.8 $M_{\odot}$ pc-2 in Clump1 and Clump2E respectively.
We compute from the models the stellar mass surface density corresponding to
$\mu_{B}=29$ mag arcsec-2 at each time, and $f_{Z}$ with the above values of
$\Sigma_{g}$, $Z_{obs}$ and $y_{Z}$, using equation (1). The value of
$f_{Z}^{max}$ will be $f_{Z}$ at the maximum value of $B-V$ for each clump. In
Figure 3 we show $f_{Z}$ for the three models as a function of time and of
$B-V$. The dashed line indicates the limiting value of $B-V$. A dotted line
has been placed at the value of $f_{Z}^{max}$ i.e. where the limiting colors
intersect the model that produces the highest mass of metals.
Figure 3: The mass fraction of metals $f_{Z}$ produced in situ for an
instantaneous burst (blue lines), a burst with exponential decay (red lines),
and a continuous star formation model (green lines) as a function of optical
colors (left panels) and time (right panels). Each model for both Clump1
(upper panels) and Clump2E (lower panels) is normalised so as to produce a
surface brightness $\mu_{B}=29$ mag arcsec$^{-2}$ at any time after star
formation switches on.
The dashed lines indicate the maximum $B-V$ optical color of the clumps, and
the dotted line $f_{Z}^{max}$, the highest value of $f_{Z}$ compatible with
$B-V$.
For Clump1, a starburst 500 Myr ago that slowly decays with time gives the
highest possible local metal production, with $f_{Z}^{max}$ = 3$\%$ and
$\Sigma_{*}$=0.01 $M_{\odot}$ pc-2. For Clump2E, either an instantaneous burst
500 Myr ago or continuous star formation since 2 Gyr ago gives the maximum
value of $f_{Z}^{max}$ = 17$\%$, with $\Sigma_{*}$=0.04 $M_{\odot}$ pc-2. We
conclude that the fraction of metals produced locally is too small to be
compatible with a scenario of a primordial metal poor ring enriched in situ.
The ring must have formed out of metal rich gas, with chemical abundances
above 0.5 $Z_{\odot}$, mostly polluted while residing in a galaxy and then
dispersed into space.
We underline that all models predict a small fraction of metals produced in
situ and that the ones that maximise $f_{Z}$ are not necessarily the models
that best fit the underlying stellar population. These will be examined in
Paper II. The apparent discrepancy between our results and the lower
Paper II. The apparent discrepancy between our results and the lower
abundances inferred by QSO’s absorption lines can be resolved if hydrogen
column densities along sightlines to nearby QSOs are lower than those used in
the analysis of Rosenberg et al. (2014) and estimated from HI emission
averaged over a large beam. The most discrepant abundance with respect to the
nearly solar abundances we infer for the ring is for carbon toward the
southernmost QSO: -1.7$\leq$ [C/H]/[C/H]${}_{\odot}\leq-1.1$. If future
measurements of the HI column density towards the QSO sightlines confirm the low
metal abundances, these can be used to investigate chemical inhomogeneities
due to a mix of metal rich gas with local intragroup metal poor gas in the
ring outskirts.
We summarise that our findings have confirmed spectroscopically the
association between the stellar complexes detected in the UV continuum and the
high column density gas (Thilker et al., 2009). The detected H$\alpha$
emission implies a sporadic presence of a much younger and more massive
stellar population than estimated previously (see Paper II for more details).
For the first time we
have detected gaseous nebulae in the ring with chemical abundances close to or
above solar which conflict with the primordial origin hypothesis of the Leo
ring. A scenario of pre-enrichment, where the gas has been polluted by the
metals produced in a galaxy and subsequently tidally stripped and placed into a
ring-like shape, is in agreement with the data presented in this Letter. This
picture is dynamically consistent with numerical simulations showing the
possible collisional origin of the Leo ring (Bekki et al., 2005; Michel-Dansac
et al., 2010) and with chemical abundances in nearby galaxies possibly
involved in the encounter such as M 96 and NGC 3384 (Oey & Kennicutt, 1993;
Sánchez-Blázquez et al., 2007).
EC acknowledges support from PRIN MIUR 2017 -$20173ML3WW_{0}0$ and Mainstream-
GasDustpedia. GV acknowledges support from ANID programs FONDECYT
Postdoctorado 3200802 and Basal-CATA AFB-170002.
## References
* Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481, doi: 10.1146/annurev.astro.46.060407.145222
* Baldwin et al. (1981) Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, PASP, 93, 5, doi: 10.1086/130766
* Bekki et al. (2005) Bekki, K., Koribalski, B. S., Ryder, S. D., & Couch, W. J. 2005, MNRAS, 357, L21, doi: 10.1111/j.1745-3933.2005.08625.x
* Bot et al. (2009) Bot, C., Helou, G., Latter, W. B., et al. 2009, AJ, 138, 452, doi: 10.1088/0004-6256/138/2/452
* Boquien et al. (2007) Boquien, M., Duc, P. A., Braine, J., et al. 2007, A&A, 467, 93
* Bruzual & Charlot (2003) Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000, doi: 10.1046/j.1365-8711.2003.06897.x
* Byler et al. (2017) Byler, N., Dalcanton, J. J., Conroy, C., & Johnson, B. D. 2017, ApJ, 840, 44, doi: 10.3847/1538-4357/aa6c66
* Chabrier (2003) Chabrier, G. 2003, PASP, 115, 763, doi: 10.1086/376392
* Corbelli et al. (2021) Corbelli, E., Mannucci, F., Thilker, D., Cresci, G. & Venturi, G. submitted to A&A(Paper II)
* Curti et al. (2020) Curti, M., Mannucci, F., Cresci, G., & Maiolino, R. 2020, MNRAS, 491, 944, doi: 10.1093/mnras/stz2910
* Donahue et al. (1995) Donahue, M., Aldering, G., & Stocke, J. T. 1995, ApJ, 450, L45, doi: 10.1086/316771
* Dopita et al. (2013) Dopita, M. A., Sutherland, R. S., Nicholls, D. C., Kewley, L. J., & Vogt, F. P. A. 2013, ApJS, 208, 10, doi: 10.1088/0067-0049/208/1/10
* Ghosh & Mapelli (2008) Ghosh, K. K., & Mapelli, M. 2008, MNRAS, 386, L38, doi: 10.1111/j.1745-3933.2008.00456.x
* Kauffmann et al. (2003) Kauffmann, G., Heckman, T. M., Tremonti, C., et al. 2003, MNRAS, 346, 1055, doi: 10.1111/j.1365-2966.2003.07154.x
* Kibblewhite et al. (1985) Kibblewhite, E. J., Cawson, M. G. M., Disney, M. J., & Phillipps, S. 1985, MNRAS, 213, 111, doi: 10.1093/mnras/213.2.111
* Leitherer et al. (1999) Leitherer, C., Schaerer, D., Goldader, J. D., et al. 1999, ApJS, 123, 3, doi: 10.1086/313233
* Longmore et al. (1979) Longmore, A. J., Hawarden, T. G., Cannon, R. D., et al. 1979, MNRAS, 188, 285
* Maeder (1992) Maeder, A. 1992, A&A, 264, 105
* Meynet & Maeder (2002) Meynet, G., & Maeder, A. 2002, A&A, 390, 561, doi: 10.1051/0004-6361:20020755
* Michel-Dansac et al. (2010) Michel-Dansac, L., Duc, P.-A., Bournaud, F., et al. 2010, ApJ, 717, L143, doi: 10.1088/2041-8205/717/2/L143
* Mihos et al. (2018) Mihos, J. C., Carr, C. T., Watkins, A. E., Oosterloo, T., & Harding, P. 2018, ApJ, 863, L7, doi: 10.3847/2041-8213/aad62e
* Oey & Kennicutt (1993) Oey, M. S., & Kennicutt, R. C., J. 1993, ApJ, 411, 137, doi: 10.1086/172814
* Pellegrini et al. (2020) Pellegrini, E. W., Rahner, D., Reissl, S., et al. 2020, MNRAS, 496, 339, doi: 10.1093/mnras/staa1473
* Pierce & Tully (1985) Pierce, M. J., & Tully, R. B. 1985, AJ, 90, 450, doi: 10.1086/113750
* Reynolds et al. (1986) Reynolds, R. J., Magee, K., Roesler, F. L., Scherb, F., & Harlander, J. 1986, ApJ, 309, L9, doi: 10.1086/184750
* Romano et al. (2010) Romano, D., Karakas, A. I., Tosi, M., & Matteucci, F. 2010, A&A, 522, A32, doi: 10.1051/0004-6361/201014483
* Rood & Williams (1985) Rood, H. J., & Williams, B. A. 1985, ApJ, 288, 535, doi: 10.1086/162819
* Rosenberg et al. (2014) Rosenberg, J. L., Haislmaier, K., Giroux, M. L., Keeney, B. A., & Schneider, S. E. 2014, ApJ, 790, 64, doi: 10.1088/0004-637X/790/1/64
* Salpeter (1955) Salpeter, E. E. 1955, ApJ, 121, 161, doi: 10.1086/145971
* Sánchez-Blázquez et al. (2007) Sánchez-Blázquez, P., Forbes, D. A., Strader, J., Brodie, J., & Proctor, R. 2007, MNRAS, 377, 759, doi: 10.1111/j.1365-2966.2007.11647.x
* Sanders et al. (2012) Sanders, N. E., Caldwell, N., McDowell, J., & Harding, P. 2012, ApJ, 758, 133, doi: 10.1088/0004-637X/758/2/133
* Schneider (1985) Schneider, S. 1985, ApJ, 288, L33, doi: 10.1086/184416
* Schneider et al. (1983) Schneider, S. E., Helou, G., Salpeter, E. E., & Terzian, Y. 1983, ApJ, 273, L1, doi: 10.1086/184118
* Schneider et al. (1986) Schneider, S. E., Salpeter, E. E., & Terzian, Y. 1986, AJ, 91, 13, doi: 10.1086/113975
* Schneider et al. (1989) Schneider, S. E., Skrutskie, M. F., Hacking, P. B., et al. 1989, AJ, 97, 666, doi: 10.1086/115012
* Searle & Sargent (1972) Searle, L., & Sargent, W. L. W. 1972, ApJ, 173, 25, doi: 10.1086/151398
* Sil’chenko et al. (2003) Sil’chenko, O. K., Moiseev, A. V., Afanasiev, V. L., Chavushyan, V. H., & Valdes, J. R. 2003, ApJ, 591, 185, doi: 10.1086/375315
* Thilker et al. (2009) Thilker, D. A., Donovan, J., Schiminovich, D., et al. 2009, Nature, 457, 990, doi: 10.1038/nature07780
* Vilchez & Esteban (1996) Vilchez, J. M., & Esteban, C. 1996, MNRAS, 280, 720, doi: 10.1093/mnras/280.3.720
* Vincenzo et al. (2016) Vincenzo, F., Matteucci, F., Belfiore, F., & Maiolino, R. 2016, MNRAS, 455, 4183, doi: 10.1093/mnras/stv2598
* Watkins et al. (2014) Watkins, A. E., Mihos, J. C., Harding, P., & Feldmeier, J. J. 2014, ApJ, 791, 38, doi: 10.1088/0004-637X/791/1/38
* Weilbacher et al. (2020) Weilbacher, P. M., Palsa, R., Streicher, O., et al. 2020, A&A, 641, A28, doi: 10.1051/0004-6361/202037855
Regret-Optimal Filtering
Oron Sabag Babak Hassibi
Caltech
###### Abstract
We consider the problem of filtering in linear state-space models (e.g., the
Kalman filter setting) through the lens of regret optimization. Specifically,
we study the problem of causally estimating a desired signal generated by a
linear state space model driven by process noise, and based on noisy
observations of a related observation process. Different assumptions on the
driving disturbance and the observation noise sequences give rise to different
estimators: in the stochastic setting to the celebrated Kalman filter, and in
the deterministic setting of bounded energy disturbances to $H_{\infty}$
estimators. In this work, we formulate a novel criterion for estimator design
which is based on the concept of regret. We define the regret as the
difference in estimation error energy between a clairvoyant estimator that has
access to all future observations (a so-called smoother) and a causal one that
only has access to current and past observations. The regret-optimal estimator
is chosen to minimize this worst-case difference across all bounded-energy
noise sequences. The resulting estimator is adaptive in the sense that it aims
to mimic the behavior of the clairvoyant estimator irrespective of the noise
realization, and thus nicely interpolates between the stochastic and
deterministic approaches. We provide a solution for the regret
estimation problem at two different levels. First, we provide a solution at
the operator level by reducing it to the Nehari problem, i.e., the problem of
approximating an anti-causal operator with a causal one. Second, when we have
an underlying state-space model, we explicitly find the estimator that
achieves the optimal regret. From a computational perspective, the regret-
optimal estimator can be easily implemented by solving three Riccati equations
and a single Lyapunov equation. For a state-space model of dimension $n$, the
regret-optimal estimator has a state-space structure of dimension $3n$. We
demonstrate the applicability and efficacy of the estimator in a variety of
problems and observe that the estimator has average and worst-case
performances that are simultaneously close to their optimal values. We
therefore argue that regret-optimality is a viable approach to estimator
design.
## 1 Introduction
Filtering is the problem of estimating the current value of a desired signal,
given current and past values of a related observation signal. It has numerous
applications in signal processing, control, and learning and a rich history
going back to Wiener, Kolmogorov, and Kalman. When the underlying desired and
observation signals have state-space structures driven by white Gaussian
noise, the celebrated Kalman filter gives the minimum mean-square error
estimate of the current value of the desired signal, given the past and
current of the observed signal Kalman (1960). When all that is known of the
noise sources are their first and second-order statistics, the Kalman filter
gives the linear least-mean-squares estimate. While these are very desired
properties, the Kalman filter is predicated on knowing the underlying
statistics and distributions of the signal. It can therefore have poor
performance if the underlying signals have statistics and/or properties that
deviate from those that are assumed. It is also not suitable for learning
applications, since it has no possibility of ”learning” the signal statistics.
Another approach to filtering that was developed in the 80’s and 90’s was
$H_{\infty}$ filtering, where the noise sources were considered adversarial
and the worst-case estimation error energy was minimized (over all bounded
energy noises). While $H_{\infty}$ estimators are robust to lack of
statistical knowledge of the underlying noise sources, and have some deep
connections to classical learning algorithms (see, e.g. Hassibi et al.
(1995)), they are often too conservative since they safeguard against the
worst-case and do not exploit the noise structure.
### 1.1 Main contributions
The contributions can be summarized as follows:
* •
Motivated by the concept of regret in learning problems (e.g., Hazan (2019),
Simchowitz (2020), Abbasi-Yadkori (2011; 2019)), we propose to adopt it for
filtering problems so as to bridge between the philosophies of Kalman and
$H_{\infty}$ filtering. Specifically, we formulate a new design criterion for
filtering which optimizes the difference in estimation error energies between
a clairvoyant estimator that has access to the entire observations sequence
(including future samples) and a causal one that does not have access to
future observations. We show that the regret formulation is fundamentally
different from the $H_{2}$ (e.g., the Kalman filter by Kalman (1960)) and
$H_{\infty}$ criteria (see the tutorial by Shaked et al. (1992)).
* •
We show that the regret-optimal estimation problem can be reduced to the
classical Nehari problem in operator theory (Theorem 1), i.e., the problem of
approximating an anti-causal operator with a causal one in the operator norm,
due to Nehari (1957).
* •
When the underlying signals have a state-space structure, we provide an
explicit solution for the regret-optimal filter. The solution proceeds in two
simple steps: first, the optimal regret value is determined by solving two
Riccati equations and a single Lyapunov equation, together with a bisection
method based on a simple condition given in Theorem 2. Then, the
regret-optimal filter is given explicitly in state-space form in Theorem 4.
* •
We present numerical examples that demonstrate the efficacy and applicability
of the approach and observe that the regret-optimal filter has average and
worst-case performances that are simultaneously close to their optimal values.
We therefore argue that regret-optimality is a viable approach to estimator
design.
## 2 The Setting and Problem Formulation
### 2.1 Notation
Linear operators are denoted by calligraphic letters, e.g., $\mathcal{X}$.
Finite-dimensional vectors and matrices are denoted with small and capital
letters, respectively, e.g., $x$ and $X$. Subscripts are used to denote time
indices e.g., $x_{i}$, and boldface letters denote the set of finite-
dimensional vectors at all times, e.g., $\mathbf{x}=\\{x_{i}\\}_{i}$.
### 2.2 The setting and problem formulation
We consider a general estimation problem
$\displaystyle\operatorname{\mathbf{y}}$
$\displaystyle=\mathcal{H}\operatorname{\mathbf{w}}+\operatorname{\mathbf{v}}\operatorname*{}$
$\displaystyle\mathbf{s}$ $\displaystyle=\mathcal{L}\operatorname{\mathbf{w}}$
(1)
where $\mathcal{H}$ and $\mathcal{L}$ are strictly causal operators, the
sequence $\operatorname{\mathbf{w}}$ denotes an exogenous disturbance $w$ that
generates a hidden state as the output of an operator $\mathcal{H}$,
$\operatorname{\mathbf{y}}$ denotes the observations process and $\mathbf{s}$
denotes the signal that should be estimated. Note that we have not yet
specified the estimator operation or the time horizon. The setting is quite
general and includes, for instance, the state-space model that will be
presented in the next section.
A linear estimator is a linear mapping from the observations to the signal
space $\mathbf{s}$ and is denoted as
$\hat{\mathbf{s}}=\mathcal{K}\operatorname{\mathbf{y}}$. Then, for any
$\mathcal{K}$, the estimation error of the signal is
$\displaystyle\mathbf{e}$
$\displaystyle=\mathbf{s}-\hat{\mathbf{s}}\operatorname*{}$
$\displaystyle=\begin{pmatrix}\mathcal{L}-\mathcal{K}\mathcal{H}&-\mathcal{K}\end{pmatrix}\begin{pmatrix}\operatorname{\mathbf{w}}\\\
\operatorname{\mathbf{v}}\end{pmatrix}\operatorname*{}$
$\displaystyle\triangleq
T_{\mathcal{K}}\begin{pmatrix}\operatorname{\mathbf{w}}\\\
\operatorname{\mathbf{v}}\end{pmatrix}$ (2)
Note that the estimation error is a function of the driving disturbance
$\operatorname{\mathbf{w}}$ and the observation noise
$\operatorname{\mathbf{v}}$. The squared error can then be expressed as
$\displaystyle\mathbf{e}(\operatorname{\mathbf{w}},\operatorname{\mathbf{v}},\mathcal{K})$
$\displaystyle\triangleq\mathbf{e}^{\ast}\mathbf{e}\operatorname*{}$
$\displaystyle=\begin{pmatrix}\operatorname{\mathbf{w}}^{\ast}&\operatorname{\mathbf{v}}^{\ast}\end{pmatrix}T_{\mathcal{K}}^{\ast}T_{\mathcal{K}}\begin{pmatrix}\operatorname{\mathbf{w}}\\\
\operatorname{\mathbf{v}}\end{pmatrix}.$ (3)
Different assumptions on the driving disturbance and the observation noise
sequences give rise to different estimators: in the stochastic setting to the
celebrated Kalman filter, and in the deterministic setting of bounded energy
disturbances to $H_{\infty}$ estimators. A common characteristic of these two
paradigms is that if we do not restrict the constructed estimators to be
causal, then there exists a single linear estimator that attains the minimal
Frobenius and operator norms simultaneously. This known fact is summarized in
the following lemma.
###### Lemma 1 (The non-causal estimator).
For the $H_{2}$ and the $H_{\infty}$ problems, the optimal non-causal
estimator is
$\mathcal{K}_{0}=\mathcal{L}\mathcal{H}^{\ast}(I+\mathcal{H}\mathcal{H}^{\ast})^{-1}.$
Note that the non-causal estimator cannot be implemented in practice even for
simple operators $\mathcal{L},\mathcal{H}$, since it requires access to future
instances of the observations. However, the fact that there is a single
estimator that simultaneously optimizes these two norms naturally leads to our
new approach of regret optimization. Specifically, we will aim at constructing
a causal (or strictly causal) estimator that performs as close as possible to
the non-causal estimator in Lemma 1.
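On a finite horizon the operators become matrices and the non-causal estimator of Lemma 1 reduces to a direct matrix computation. A minimal numpy sketch under illustrative assumptions (random strictly lower-triangular matrices stand in for $\mathcal{H}$ and $\mathcal{L}$; the horizon length is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 8  # horizon length (illustrative)

# Strictly causal operators: strictly lower-triangular matrices.
Hop = np.tril(rng.standard_normal((T, T)), k=-1)
Lop = np.tril(rng.standard_normal((T, T)), k=-1)

# Lemma 1: K0 = L H^* (I + H H^*)^{-1}.
K0 = Lop @ Hop.T @ np.linalg.inv(np.eye(T) + Hop @ Hop.T)
```

In general `K0` is a full matrix, i.e., the estimate at each time weighs future observations; this is precisely why a causal approximation of it is sought.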
The optimal regret can now be defined as
$\displaystyle{\operatorname*{regret}}^{\ast}$
$\displaystyle=\min_{\text{causal}\
\mathcal{K}}\max_{\operatorname{\mathbf{w}},\operatorname{\mathbf{v}}\in\ell_{2},\operatorname{\mathbf{w}},\operatorname{\mathbf{v}}\neq
0}\frac{|\mathbf{e}(\operatorname{\mathbf{w}},\operatorname{\mathbf{v}},\mathcal{K})-\mathbf{e}(\operatorname{\mathbf{w}},\operatorname{\mathbf{v}},\mathcal{K}_{0})|}{\|\operatorname{\mathbf{w}}\|^{2}+\|\operatorname{\mathbf{v}}\|^{2}}\operatorname*{}$
$\displaystyle=\min_{\text{causal}\
\mathcal{K}}\|T_{\mathcal{K}}^{\ast}T_{\mathcal{K}}-T_{\mathcal{K}_{0}}^{\ast}T_{\mathcal{K}_{0}}\|.$
(4)
In words, the defined regret metric measures the worst-case deviation of the
estimation error from that of the non-causal estimator across all
bounded-energy disturbance sequences. It is illuminating to compare the regret
criterion with traditional $H_{\infty}$ estimation:
$\displaystyle\underbrace{\inf_{\mbox{causal
$K$}}\|T_{K}^{*}T_{K}\|}_{\mbox{$H_{\infty}$
estimation}}~{}~{},~{}~{}\underbrace{\inf_{\mbox{causal
$K$}}\|T_{K}^{\ast}T_{K}-T_{K_{0}}^{\ast}T_{K_{0}}\|.}_{\mbox{regret-optimal
estimation}}$
The difference is now transparent; in $H_{\infty}$ estimation, one attempts to
minimize the worst-case gain from the disturbances energy to the estimation
error, whereas in regret-optimal estimation one attempts to minimize the
worst-case gain from the disturbance energy to the regret. It is this latter
fact that makes the regret-optimal estimator more adaptive since it has as its
baseline the best that any noncausal estimator can do, whereas the
$H_{\infty}$ estimator has no baseline to measure itself against. This fact
will be illustrated in Section 4, where we will show that the regret
definition results in an estimator that interpolates between the $H_{2}$ and
the $H_{\infty}$ design criteria.
Simplifying the optimal regret to a closed-form formula is a difficult task.
Therefore, in this paper, we define a sub-optimal problem of determining
whether the optimal regret is below, above or equal to a given threshold
$\gamma$. This is made precise in the following problem definition.
###### Problem 1 (The $\gamma$-optimal regret estimation problem).
For a fixed $\gamma$, if one exists, find a causal estimator $\mathcal{K}$
such that
$\displaystyle\|T_{\mathcal{K}}^{\ast}T_{\mathcal{K}}-T_{\mathcal{K}_{0}}^{\ast}T_{\mathcal{K}_{0}}\|_{\infty}\leq\gamma^{2}.$
(5)
A _$\gamma$ -optimal estimator_ is referred to as any solution to Problem 1.
Finally, we define a fundamental problem which will serve as the main tool in
the derivations.
###### Problem 2 (The Nehari problem).
Given an anticausal and bounded operator $\mathcal{U}$, find a causal operator
$\mathcal{K}$ such that $\|\mathcal{K}-\mathcal{U}\|$ is minimized.
This problem is well known as the Nehari problem. In the general operator
notation, it is difficult to derive explicit formulae for the approximation
$\mathcal{K}$ and for the minimal valid value $\gamma_{N}$. However, when the
operator $\mathcal{U}$ has a state-space structure, the problem has a
closed-form solution that will be presented in Section 6.
### 2.3 The state-space model
The setting defined above in its operator notation is general and cannot have
an explicit structured solution. In many cases, including our problem,
imposing a state space structure for the problem provides means to obtain
explicit estimators. In the state-space setting, the equations in (2.2) are
simplify to
$\displaystyle x_{i+1}$ $\displaystyle=Fx_{i}+Gw_{i}\operatorname*{}$
$\displaystyle y_{i}$ $\displaystyle=Hx_{i}+v_{i}\operatorname*{}$
$\displaystyle s_{i}$ $\displaystyle=Lx_{i},$ (6)
where $x_{i}$ is the hidden state, $y_{i}$ is the observation and $s_{i}$
corresponds to the signal that needs to be estimated. We also make the
standard assumption that the pair $(F,H)$ is detectable. To recover the state-
space setting from its operator notation counterpart in (2.2), choose
$\mathcal{H}$ and $\mathcal{L}$ as Toeplitz operators with Markov parameters
$HF^{i}G$ and $LF^{i}G$, respectively.
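On a finite horizon, these Toeplitz operators are strictly lower-triangular matrices built from the Markov parameters. A sketch for a scalar model (the numerical values of $F,G,H,L$ are illustrative, not taken from the paper):

```python
import numpy as np

F, G, H, L = 0.9, 1.0, 1.0, 0.5  # scalar state-space model (illustrative)
T = 6                            # horizon length

def toeplitz_from_markov(C, F, G, T):
    """Strictly causal Toeplitz operator with Markov parameters C F^i G."""
    M = np.zeros((T, T))
    for i in range(T):
        for j in range(i):       # strictly causal: only j < i contributes
            M[i, j] = C * F ** (i - 1 - j) * G
    return M

Hop = toeplitz_from_markov(H, F, G, T)  # maps w to y (noise-free part)
Lop = toeplitz_from_markov(L, F, G, T)  # maps w to s
```

The entry at lag one is $HG$, at lag two $HFG$, and so on, matching the convolution $y_i = \sum_{j<i} HF^{i-1-j}Gw_j + v_i$ implied by (6) with zero initial state.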
A causal estimator is defined as a sequence of mappings $\pi_{i}(\cdot)$ with
the estimation being $\hat{s}_{i}=\pi_{i}(\\{y_{j}\\}_{j\leq i})$. The
estimation error at time $i$ is
$\displaystyle e_{i}$ $\displaystyle=s_{i}-\hat{s}_{i}.$ (7)
In a similar fashion, we can define a strictly causal estimator as a sequence
of strictly causal mappings, i.e., $\hat{s}_{i}=\pi_{i}(\\{y_{j}\\}_{j<i})$.
Due to lack of space in this paper, we will not present the solution for the
strictly causal setting which follows from the same steps that will be taken
for the causal scenario.
Note that we did not specify the time horizon of the problem, so the
formulations here and in the previous section hold for finite, one-sided
infinite, and doubly infinite time horizons. However, to simplify the
derivations of the state-space, we will focus here on the case of doubly-
infinite time horizon where the total estimation error energy can be expressed
as $\sum_{i=-\infty}^{\infty}e_{i}^{\ast}e_{i}$. In this case, it is also
convenient to define the causal transfer matrices
$\displaystyle H(z)$ $\displaystyle=H(zI-F)^{-1}G,\operatorname*{}$
$\displaystyle L(z)$ $\displaystyle=L(zI-F)^{-1}G,$ (8)
that describe the filters whose input is the disturbance $w$ and whose outputs
are the observation $y$ and the target signal $s$, respectively. We now
proceed to show the main results of this paper.
## 3 Main results
In this section, we present our main results. We first provide the reduction
of the general regret estimation problem to a Nehari problem. In Section 3.1,
we provide an explicit solution for the state-space setting in the causal
scenario.
###### Theorem 1 (Reduction to the Nehari Problem).
A $\gamma$-optimal estimator exists if and only if there exists a solution to
the Nehari problem
$\displaystyle\min_{\text{causal}\
\mathcal{K}}\|\\{\nabla_{\gamma}\mathcal{K}_{0}\Delta\\}_{-}-\mathcal{K}\|\leq
1,$ (9)
where $\\{\cdot\\}_{-}$ denotes the strictly anticausal part of its argument,
and $\Delta,\nabla_{\gamma}$ are causal operators that are obtained from the
canonical factorizations
$\displaystyle\Delta\Delta^{\ast}$
$\displaystyle=I+\mathcal{H}\mathcal{H}^{\ast}\operatorname*{}$
$\displaystyle\nabla_{\gamma}^{\ast}\nabla_{\gamma}$
$\displaystyle=\gamma^{-2}(I+\gamma^{-2}\mathcal{L}(I+\mathcal{H}^{\ast}\mathcal{H})^{-1}\mathcal{L}^{\ast}).$
(10)
Let $(\gamma^{\ast},\mathcal{K}_{N})$ be a solution that achieves the upper
bound in the Nehari problem (9), then a regret-optimal estimator is given by
$\displaystyle\mathcal{K}$
$\displaystyle=\nabla^{-1}_{\gamma^{\ast}}(\mathcal{K}_{N}+\\{\nabla_{\gamma^{\ast}}\mathcal{K}_{0}\Delta\\}_{+})\Delta^{-1}$
(11)
where $\\{\cdot\\}_{+}$ denotes the causal part of an operator.
For general operators $\mathcal{L},\mathcal{H}$, the reduction in Theorem 1
does not give practical means to derive an implementable estimator. However,
it provides the outline of the necessary technical steps in order to have
explicit characterizations in the state-space setting. Specifically, in the
state-space setting we will need to perform two canonical spectral
factorizations (10) and a decomposition of the operator
$\nabla_{\gamma^{\ast}}\mathcal{K}_{0}\Delta$ into causal and anticausal
operators. The proof of Theorem 1 appears in Section 5.
### 3.1 Solution for the state-space setting
We now proceed to particularize our results to the state-space representation
of the estimation problem. Towards our main objective to derive the regret-
optimal estimator, we will solve the sub-optimal problem, i.e., for a given
$\gamma$. Thus, our results are presented in two steps. First, we provide a
simple condition to verify whether a given value of $\gamma$ is valid. Then,
assuming that the threshold $\gamma$ has been optimized, we present the
regret-optimal estimator.
Throughout the derivations, there are three Riccati equations and a single
Lyapunov equation. The first Riccati equation is the standard one from the
Kalman filter, i.e.,
$\displaystyle P\mspace{-2.0mu}$
$\displaystyle=\mspace{-2.0mu}GG^{\ast}\mspace{-4.0mu}+\mspace{-2.0mu}FPF^{\ast}\mspace{-4.0mu}-\mspace{-2.0mu}FPH^{\ast}(I\mspace{-2.0mu}+\mspace{-2.0mu}HPH^{\ast})^{-1}HPF^{\ast}.$
(12)
The stabilizing solution is denoted as $P$, its feedback gain as
$K_{P}=FPH^{\ast}(I+HPH^{\ast})^{-1}$, and its closed-loop matrix as
$F_{P}=F-K_{P}H$.
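The stabilizing solution $P$ of (12) can be obtained by iterating the Riccati recursion to a fixed point (for well-behaved problems; a robust implementation would call a dedicated DARE solver instead). A sketch for a scalar system with illustrative values, using the standard Kalman closed-loop form $F-K_{P}H$:

```python
import numpy as np

# Illustrative scalar system matrices (not from the paper's examples).
F = np.array([[0.9]])
G = np.array([[1.0]])
H = np.array([[1.0]])

P = np.eye(1)
for _ in range(500):              # fixed-point iteration of (12)
    Re = np.eye(1) + H @ P @ H.T  # innovation covariance I + H P H^*
    P = G @ G.T + F @ P @ F.T - F @ P @ H.T @ np.linalg.inv(Re) @ H @ P @ F.T

K_P = F @ P @ H.T @ np.linalg.inv(np.eye(1) + H @ P @ H.T)  # Kalman gain
F_P = F - K_P @ H                 # stable closed-loop matrix
```

The resulting $F_{P}$ has spectral radius below one, which is what "stabilizing solution" means here.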
The remaining two Riccati equations depend on the parameter $\gamma$ and
therefore must be solved as part of the optimization over $\gamma$. Define the
$\gamma$-dependent Riccati equations as
$\displaystyle W$ $\displaystyle=H^{\ast}H+\gamma^{-2}L^{\ast}L+F^{\ast}WF-
K_{W}^{\ast}R_{W}K_{W}\operatorname*{}$ $\displaystyle Q$
$\displaystyle=-GR_{W}^{-1}G^{\ast}+F_{W}QF_{W}^{\ast}-K_{Q}R_{Q}^{-1}K_{Q}^{\ast},$
(13)
with
$\displaystyle K_{W}$ $\displaystyle=R_{W}^{-1}G^{\ast}WF;\ \ \
R_{W}=I+G^{\ast}WG\operatorname*{}$ $\displaystyle K_{Q}$
$\displaystyle=F_{W}QL^{\ast}R_{Q}^{-1};\ \ R_{Q}=\gamma^{2}I+LQL^{\ast}.$
(14)
Additionally, define the corresponding closed-loop systems
$F_{Q}=F_{W}-K_{Q}L$ and $F_{W}=F-GK_{W}$, and the factorizations
$R_{W}=R_{W}^{\ast/2}R_{W}^{1/2}$ and $R_{Q}=R_{Q}^{1/2}R_{Q}^{\ast/2}$. Note
that the Riccati equation for $Q$ depends on the solution to the Riccati
equation for $W$.
Finally, define $U$ as the solution to the Lyapunov equation
$\displaystyle U$ $\displaystyle=K_{Q}LPF_{P}^{\ast}+F_{Q}UF_{P}^{\ast}.$ (15)
We are now ready to present the condition for the existence of a regret-
optimal estimator.
###### Theorem 2 (Condition for Estimator Existence).
A $\gamma$-optimal estimator exists if and only if
$\displaystyle\bar{\sigma}(Z_{\gamma}\Pi)\leq 1,$ (16)
where $Z_{\gamma}$ and $\Pi$ are the solutions to the Lyapunov equations
$\displaystyle\Pi$ $\displaystyle=F_{P}^{\ast}\Pi
F_{P}+H^{\ast}(I+HPH^{\ast})^{-1}H\operatorname*{}$ $\displaystyle Z_{\gamma}$
$\displaystyle=F_{P}Z_{\gamma}F_{P}^{\ast}+F_{P}(P-U)^{\ast}L^{\ast}R_{Q}^{-1}L(P-U)F_{P}^{\ast}.$
(17)
A regret-optimal estimator that attains the optimal regret can be found by
optimizing over $\gamma$ in (16) so that the maximal singular value is
arbitrarily close to $1$. From now on, we assume that the value of $\gamma$ is
fixed after the optimization, which in turn fixes the $\gamma$-dependent
quantities $(W,Q,U,Z_{\gamma})$.
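The search for $\gamma^{\ast}$ in Theorem 2 is one-dimensional: the condition $\bar{\sigma}(Z_{\gamma}\Pi)\leq 1$ relaxes as $\gamma$ grows, so bisection applies. A schematic sketch in which `spectral_value` is a hypothetical stand-in for solving (13)-(17) and evaluating $\bar{\sigma}(Z_{\gamma}\Pi)$; here a toy monotone function replaces it for illustration:

```python
def bisect_gamma(condition, lo, hi, tol=1e-6):
    """Smallest gamma in [lo, hi] with condition(gamma) <= 1,
    assuming condition is non-increasing in gamma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if condition(mid) <= 1.0:
            hi = mid   # feasible: a gamma-optimal estimator exists, shrink
        else:
            lo = mid   # infeasible: increase gamma
    return hi

# Toy stand-in for sigma_bar(Z_gamma * Pi); in practice this requires
# solving the Riccati/Lyapunov equations (13)-(17) per candidate gamma.
spectral_value = lambda g: 2.0 / g
gamma_star = bisect_gamma(spectral_value, 0.1, 10.0)
```

At the returned $\gamma^{\ast}$ the condition holds with near equality, matching the statement that the maximal singular value is driven arbitrarily close to $1$.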
A key element in our solution to the regret-optimal estimator is a solution to
the Nehari problem in Theorem 1. Recall that its solution provides the best
approximation (in the operator norm) for the anticausal part of the transfer
function
$\nabla_{\gamma}^{-1}(z)L(z)H^{\ast}(z^{-\ast})\Delta^{-\ast}(z^{-\ast})$. We
denote this anticausal part as $T(z)$ which appears explicitly below in (5).
By having the operator $T(z)$, we can provide a solution for the Nehari
problem.
###### Lemma 2.
For any $\gamma$, the optimal solution to the Nehari problem with $T(z)$ in
(5) is
$\displaystyle K_{N}(z)$ $\displaystyle=\tilde{\Pi}(I+F_{N}(zI-
F_{N})^{-1})G_{N},$ (18)
where
$\displaystyle G_{N}$
$\displaystyle=(I-F_{P}Z_{\gamma}F_{P}^{\ast}\Pi)^{-1}F_{P}Z_{\gamma}H^{\ast}(I+HPH^{\ast})^{-\ast/2}\operatorname*{}$
$\displaystyle F_{N}$
$\displaystyle=F_{P}-G_{N}(I+HPH^{\ast})^{-1/2}H\operatorname*{}$
$\displaystyle\tilde{\Pi}$ $\displaystyle=R_{Q}^{-1/2}L(P-U)F_{P}^{\ast}\Pi$
(19)
where $(Z_{\gamma},\Pi)$ are defined in (2).
Although the solution to the Nehari problem is given for any value of
$\gamma$, it should be clear that $\gamma$ must be chosen in accordance with
Theorem 2 in order to yield a $\gamma$-optimal estimator.
The following theorem reveals the structure of the regret-optimal estimator in
the frequency domain.
###### Theorem 3 (The Regret-Optimal Estimator in Frequency Domain).
Given the optimal threshold $\gamma^{\ast}$, a regret-optimal estimator for
the causal scenario is given by
$\displaystyle K(z)$
$\displaystyle=\nabla^{-1}_{\gamma^{\ast}}(z)[K_{N}(z)+S(z)]\Delta^{-1}(z)+K_{H_{2}}(z),\operatorname*{}$
with
$\displaystyle\nabla^{-1}_{\gamma}(z)=(I+L(zI-
F_{W})^{-1}K_{Q})R_{Q}^{1/2}\operatorname*{}$ $\displaystyle
S(z)=-R_{Q}^{-1/2}L[(zI-
F_{Q})^{-1}F_{Q}+I]UH^{\ast}(I+HPH^{\ast})^{-\ast/2}\operatorname*{}$
$\displaystyle\Delta^{-1}(z)=(I+HPH^{\ast})^{-1/2}(I+H(zI-F)^{-1}K_{P})^{-1},$
(20)
where all constants are defined in (12)-(15), $K_{N}(z)$ is given in (18) and
$K_{H_{2}}(z)$ is the causal $H_{2}$ (Kalman) filter:
$\displaystyle K_{H_{2}}(z)$
$\displaystyle=LPH^{\ast}(I+HPH^{\ast})^{-1}\operatorname*{}$
$\displaystyle+L(I-PH^{\ast}(I+HPH^{\ast})^{-1}H)(zI-
F_{P})^{-1}K_{P}.\operatorname*{}$
It is interesting to note that the causal Kalman filter naturally appears as
part of our solution to the regret-optimal estimation problem. This implies
that the regret-optimal estimator is a sum of two terms: the first is a Kalman
filter, designed to minimize the Frobenius norm of the operator $T_{K}$, while
the second results from the Nehari problem and guarantees that the regret
criterion is minimized.
At this point, the frequency-domain results can be converted into a simple
state-space realization.
###### Theorem 4 (The Causal Regret-Optimal Estimator).
Given the optimal threshold $\gamma^{\ast}$, a regret-optimal estimator for
the causal scenario is given by
$\displaystyle\xi_{i+1}$
$\displaystyle=\tilde{F}\xi_{i}+\tilde{G}y_{i}\operatorname*{}$
$\displaystyle\hat{s}_{i}$ $\displaystyle=\tilde{H}\xi_{i}+\tilde{J}y_{i}.$
(21)
where the matrices are given by
$\displaystyle\tilde{F}$ $\displaystyle=\begin{pmatrix}F_{P}&0&0\\\
\tilde{F}_{2,1}&F_{N}&0\\\
\tilde{F}_{3,1}&\tilde{F}_{3,2}&F_{W}\end{pmatrix};$ (22)
$\displaystyle\tilde{H}$
$\displaystyle=\begin{pmatrix}\tilde{H}_{1}&R_{Q}^{1/2}\tilde{\Pi}F_{N}&L\end{pmatrix}\operatorname*{}$
$\displaystyle\tilde{G}$ $\displaystyle=\begin{pmatrix}K_{P}\\\
G_{N}(I+HPH^{\ast})^{-1/2}\\\ \tilde{G}_{3}\end{pmatrix}\operatorname*{}$
$\displaystyle\tilde{J}$
$\displaystyle=L(P-U)H^{\ast}(I+HPH^{\ast})^{-1}\operatorname*{}$
$\displaystyle\ +R_{Q}^{1/2}\tilde{\Pi}G_{N}(I+HPH^{\ast})^{-1/2},$ (23)
and the explicit constants are
$\displaystyle\tilde{F}_{2,1}$
$\displaystyle=-G_{N}(I+HPH^{\ast})^{-1/2}H\operatorname*{}$
$\displaystyle\tilde{F}_{3,1}$
$\displaystyle=F_{W}UH^{\ast}(I+HPH^{\ast})^{-1}H\operatorname*{}$
$\displaystyle\
-K_{Q}R_{Q}^{1/2}\tilde{\Pi}G_{N}(I+HPH^{\ast})^{-1/2}H\operatorname*{}$
$\displaystyle\tilde{F}_{3,2}$
$\displaystyle=K_{Q}R_{Q}^{1/2}\tilde{\Pi}F_{N}\operatorname*{}$
$\displaystyle\tilde{H}_{1}$
$\displaystyle=L-L(P-U)H^{\ast}(I+HPH^{\ast})^{-1}H\operatorname*{}$
$\displaystyle\
-LK_{Q}R_{Q}^{1/2}\tilde{\Pi}G_{N}(I+HPH^{\ast})^{-1/2}H\operatorname*{}$
$\displaystyle\tilde{G}_{3}$
$\displaystyle=K_{Q}R_{Q}^{1/2}\tilde{\Pi}G_{N}(I+HPH^{\ast})^{-1/2}\operatorname*{}$
$\displaystyle\ -F_{W}UH^{\ast}(I+HPH^{\ast})^{-1}.$ (24)
with the Riccati variables defined in (12)-(15) and the variables
$(F_{N},G_{N},\tilde{\Pi})$ defined in (2).
By Theorem 4, given the optimal threshold $\gamma^{\ast}$, the regret-optimal
estimator can be easily implemented. Note that the $\gamma$-dependent
variables need to be computed only while determining $\gamma^{\ast}$, not
during the estimation process itself. Thus, from a computational perspective,
the filter requires the same resources as the standard Kalman filter; its
internal state remains finite-dimensional, with dimension three times that of
the original state space.
## 4 Numerical examples
We have performed two numerical experiments to evaluate the performance of the
regret-optimal estimator compared to the traditional $H_{2}$ and $H_{\infty}$
estimators. As mentioned earlier, the performance of any (linear) estimator is
governed by the transfer operator $T_{K}$ that maps the disturbance sequences
$\mathbf{w}$ and $\mathbf{v}$ to the error sequence $\mathbf{e}$. It will be
useful to represent this operator via its transfer function in the $z$-domain,
i.e.,
$T_{K}(z)=\left[\begin{array}[]{cc}L(z)-K(z)H(z)&-K(z)\end{array}\right].$
The squared Frobenius norm of $T_{K}$, which is what the $H_{2}$ estimator
minimizes, is given by
$\|T_{K}\|_{F}^{2}=\frac{1}{2\pi}\int_{0}^{2\pi}\mbox{trace}\left(T_{K}^{*}(e^{j\omega})T_{K}(e^{j\omega})\right)d\omega,$
and the squared operator norm of $T_{K}$, which is what $H_{\infty}$
estimators minimize, by
$\|T_{K}\|^{2}=\max_{0\leq\omega\leq
2\pi}\bar{\sigma}\left(T^{\ast}_{K}(e^{j\omega})T_{K}(e^{j\omega})\right),$
where $\bar{\sigma}(\cdot)$ denotes the maximal singular value of a matrix.
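Both norms are straightforward to approximate numerically by sampling the transfer function on the unit circle. Below is a minimal sketch (our own helper functions, not code from the paper; `Tk` is any callable returning the transfer matrix at a point $z$):

```python
import numpy as np

def h2_norm_sq(Tk, n_grid=4096):
    """Squared Frobenius (H2) norm: average of trace(Tk* Tk) over the unit circle."""
    ws = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    vals = [np.trace(Tk(np.exp(1j * w)).conj().T @ Tk(np.exp(1j * w))).real
            for w in ws]
    return float(np.mean(vals))  # grid average approximates (1/2pi) * integral

def hinf_norm_sq(Tk, n_grid=4096):
    """Squared operator (Hinf) norm: peak of the squared maximal singular value."""
    ws = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    return float(max(np.linalg.norm(Tk(np.exp(1j * w)), ord=2) ** 2 for w in ws))
```

For a constant transfer function both quantities reduce to the same matrix norm, which gives a quick sanity check.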
Figure 1: The squared operator norm as a function of the frequency parameter
for the scalar system in Section 4.1. The norm is compared between the
$H_{2}$, $H_{\infty}$, non-causal and our new regret-optimal estimator. As can
be seen, the non-causal estimator achieves the best performance at all
frequencies. As expected, among all causal estimators, the $H_{\infty}$
estimator achieves the lowest peak, and the $H_{2}$ estimator attains the
smallest area under its curve (i.e., integral). Our new estimator attains the
best of both worlds: it achieves a lower peak than the $H_{2}$ estimator and
an area under the curve comparable to that of the $H_{2}$ estimator. A precise
comparison of the resulting norms appears in Table 1.
### 4.1 Scalar systems
We start with a simple scalar system to illustrate the results. For scalar
systems, $T_{K}(z)$ is a 1-by-2 vector so we have that
$\|T_{K}\|_{F}^{2}=\frac{1}{2\pi}\int_{0}^{2\pi}\left\|T_{K}(e^{j\omega})\right\|^{2}d\omega.$
Consider now a simple stable scalar state-space with $F=0.9$, $H=L=G=1$. For
such a system, we have constructed the optimal $H_{2}$, $H_{\infty}$, and non-
causal estimators, as well as the regret-optimal estimator. Plotting the value
of $\|T_{K}(e^{j\omega})\|^{2}$, as a function of frequency, is quite
illuminating as it allows one to assess and compare the performance of the
respective estimators across the full range of input disturbances. This is
done in Figure 1.
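For this scalar system, the $H_{2}$ entries of Table 1 can be reproduced numerically. Below is a sketch (our own code, not from the paper; the scalar $H_{2}$ filter is obtained by specializing the first term of $S(z)$ in Lemma 5 to the scalar case), which solves the Riccati equation by fixed-point iteration and evaluates $\|T_{K}(e^{j\omega})\|^{2}$ on a frequency grid:

```python
import numpy as np

F, G, H, L = 0.9, 1.0, 1.0, 1.0

# Stabilizing solution of P = GG* + FPF* - FPH*(I+HPH*)^{-1}HPF* (fixed point)
P = 1.0
for _ in range(200):
    P = G**2 + F * P * F - (F * P * H)**2 / (1.0 + H * P * H)
K_P = F * P * H / (1.0 + H * P * H)   # predictor (Kalman) gain
F_P = F - K_P * H                     # closed-loop pole

def T_K(z):
    """Error transfer function [L(z)-K(z)H(z), -K(z)] as a 1-by-2 vector."""
    Lz = L * G / (z - F)                                  # L(z) = L(zI-F)^{-1}G
    Hz = H * G / (z - F)                                  # H(z)
    Kz = z * L * P * H / ((1.0 + H * P * H) * (z - F_P))  # scalar H2 estimator
    return np.array([Lz - Kz * Hz, -Kz])

ws = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
gains = np.array([np.sum(np.abs(T_K(np.exp(1j * w)))**2) for w in ws])
frob_sq = gains.mean()   # ~0.60, the H2 row of Table 1
peak_sq = gains.max()    # ~1.27, attained at omega = 0
```

The computed values land near the 0.6 and 1.27 entries reported for the $H_{2}$ estimator in Table 1.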
Table 1: Performance for the Scalar Example
Estimator | $\|T_{K}\|_{F}^{2}$ | $\|T_{K}\|^{2}$ | Regret
---|---|---|---
Noncausal estimator | 0.46 | 0.99 | 0
Regret-optimal | 0.65 | 1.1 | 0.38
$H_{2}$ estimator | 0.6 | 1.27 | 0.7
$H_{\infty}$ estimator | 0.94 | 0.99 | 0.71
As can be seen, the non-causal estimator outperforms the other three
estimators across all frequencies. The $H_{2}$ estimator minimizes the
Frobenius norm, i.e., the average performance over iid $w$, which is the area
under the curve. However, in doing so, it sacrifices the worst-case
performance and so has a relatively large peak at low frequencies. The
$H_{\infty}$ estimator minimizes the operator norm, i.e., the worst-case
performance, which is the peak of the curve. (Here we can see that the
$H_{\infty}$-optimal estimator has the same peak as the non-causal estimator
meaning that it attains the same worst-case performance.) However, in doing
so, it sacrifices the average performance and has a relatively large area
under the curve. Recall that the regret-optimal estimator aims to mimic the
non-causal behavior. In doing so, it achieves the best of both worlds: it has
an area under the curve that is close to that of the $H_{2}$-optimal estimator
(0.6 vs 0.65), and it has a peak that significantly improves upon the peak
of the $H_{2}$-optimal estimator. The precise norms are presented in Table 1.
It is also illuminating to examine our new regret criterion in Fig. 4.1, where
we plot the regret of the causal estimators with respect to the non-causal
estimator. At low frequencies the $H_{\infty}$ estimator has the lowest
regret, while at mid-frequencies the $H_{2}$ estimator does. However, their
peaks are almost twice that of the regret-optimal estimator, which maintains
an almost constant regret across all frequencies.
Figure 2: The frequency response of the various estimators for the tracking
example in Section 4.2. Comparison of the corresponding norms appears in Table
2.
### 4.2 Tracking example
Here, we will study a one-dimensional tracking problem whose state-space model
is
$\displaystyle\begin{pmatrix}x_{i+1}\\\ \nu_{i+1}\end{pmatrix}$ $\displaystyle=\begin{pmatrix}1&\Delta T\\\ 0&1\end{pmatrix}\begin{pmatrix}x_{i}\\\ \nu_{i}\end{pmatrix}+\begin{pmatrix}0\\\ \Delta T\end{pmatrix}a_{i},$
$\displaystyle y_{i}$ $\displaystyle=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}x_{i}\\\ \nu_{i}\end{pmatrix}+v_{i},$ (25)
$\displaystyle s_{i}$ $\displaystyle=x_{i+1},$ (26)
where $x_{i}$ corresponds to the position, $\nu_{i}$ to the velocity, and
$a_{i}$ to the exogenous acceleration. The desired signal is the position of
the object at the next time step, $s_{i}=x_{i+1}$, and the observed signal is
the noisy position $y_{i}=x_{i}+v_{i}$, where $v_{i}$ is measurement noise.
The frequency response of the various estimators is presented in Fig. 2, and
Table 2 summarizes their performance.
Table 2: Performance for the Tracking Experiment
Estimator | $\|T_{K}\|_{F}^{2}$ | $\|T_{K}\|^{2}$ | Regret
---|---|---|---
Noncausal estimator | 0.39 | 1 | 0
Regret-optimal | 0.82 | 1.24 | 0.65
$H_{2}$ estimator | 0.77 | 1.4 | 1.02
$H_{\infty}$ estimator | 0.97 | 1 | 0.95
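The tracking model is also easy to experiment with directly. The sketch below is our own code, not from the paper; the sampling step $\Delta T=0.1$ and unit-variance disturbances are assumptions, as this excerpt does not list them. It runs the steady-state Kalman predictor of $s_{i}=x_{i+1}$ against white Gaussian noise:

```python
import numpy as np

dT = 0.1                                 # assumed sampling step
F = np.array([[1.0, dT], [0.0, 1.0]])    # position/velocity dynamics
G = np.array([[0.0], [dT]])              # acceleration enters through velocity
H = np.array([[1.0, 0.0]])               # noisy position measurement

# Steady-state Riccati: P = GG* + FPF* - FPH*(I+HPH*)^{-1}HPF*
P = np.eye(2)
for _ in range(500):
    FPH = F @ P @ H.T
    P = G @ G.T + F @ P @ F.T - FPH @ FPH.T / (1.0 + (H @ P @ H.T)[0, 0])
K = F @ P @ H.T / (1.0 + (H @ P @ H.T)[0, 0])   # predictor gain

rng = np.random.default_rng(0)
x = np.zeros((2, 1)); xh = np.zeros((2, 1))
err2 = meas2 = 0.0
for _ in range(5000):
    a, v = rng.standard_normal(), rng.standard_normal()
    y = (H @ x)[0, 0] + v                    # observe the current position
    xh = F @ xh + K * (y - (H @ xh)[0, 0])   # predict the next state
    x = F @ x + G * a                        # advance the true state
    err2 += (xh - x)[0, 0] ** 2              # error in the predicted position s_i
    meas2 += v ** 2
```

The time-averaged squared prediction error `err2/5000` settles near `P[0, 0]`, the steady-state prediction variance, well below the raw measurement noise energy.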
The time domain performance of the various filters is given in Fig. 3. We plot
the time-averaged estimation error energy as a function of time for the
$H_{2}$, $H_{\infty}$, and regret-optimal filters for two different types of
noise. One is white Gaussian noise, for which the $H_{2}$ filter is optimal,
and the other is an adversarial noise, for which the $H_{\infty}$ filter is
the best. As can be seen, the regret-optimal filter has a performance that
interpolates nicely between these filters and achieves good performance across
a range of disturbances.
Figure 3: Time-averaged estimation error energy as a function of time for the
tracking example with two different disturbances. In the bottom three curves,
the state-space model is driven with Gaussian disturbances. In the top three
curves, it is driven with an adversarial disturbance.
## 5 Proof of Theorem 1
Recall that we aim to solve the sub-optimal problem
$\displaystyle T^{\ast}_{\mathcal{K}}T_{\mathcal{K}}-T_{\mathcal{K}_{0}}^{\ast}T_{\mathcal{K}_{0}}\preceq\gamma^{2}I.$ (27)
By the Schur complement and the _Matrix inversion lemma_ (in its operator
form), we can write
$\displaystyle T_{\mathcal{K}}(\gamma^{-2}I-\gamma^{-2}T_{\mathcal{K}_{0}}^{\ast}(I+\gamma^{-2}T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast})^{-1}\gamma^{-2}T_{\mathcal{K}_{0}})T^{\ast}_{\mathcal{K}}\preceq I.$
It can now be shown that for any $\mathcal{K}$,
$\displaystyle T_{\mathcal{K}}T_{\mathcal{K}_{0}}^{\ast}$ $\displaystyle=\begin{pmatrix}\mathcal{L}-\mathcal{K}\mathcal{H}&-\mathcal{K}\end{pmatrix}\begin{pmatrix}I\\\ -\mathcal{H}\end{pmatrix}(I+\mathcal{H}^{\ast}\mathcal{H})^{-1}\mathcal{L}^{\ast}$
$\displaystyle=T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast}.$ (28)
Combining the simplified condition with (5) gives
$\displaystyle T_{\mathcal{K}}T_{\mathcal{K}}^{\ast}$ $\displaystyle\preceq\gamma^{2}I+\gamma^{-2}T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast}(I+\gamma^{-2}T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast})^{-1}T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast}$
$\displaystyle=\gamma^{2}I+T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast}(\gamma^{2}I+T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast})^{-1}T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast}.$ (29)
By completing the square, we also have
$\displaystyle T_{\mathcal{K}}T_{\mathcal{K}}^{\ast}$ $\displaystyle=(\mathcal{K}-\mathcal{K}_{0})(I+\mathcal{H}\mathcal{H}^{\ast})(\mathcal{K}-\mathcal{K}_{0})^{\ast}+T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast},$
and rearranging the RHS of the condition gives
$\displaystyle\gamma^{2}I+T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast}(\gamma^{2}I+T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast})^{-1}T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast}-T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast}$
$\displaystyle=\gamma^{2}(I+\gamma^{-2}T_{\mathcal{K}_{0}}T_{\mathcal{K}_{0}}^{\ast})^{-1}.$ (30)
To conclude, the condition can be written as
$\displaystyle(\mathcal{K}-\mathcal{K}_{0})(I+\mathcal{H}\mathcal{H}^{\ast})(\mathcal{K}-\mathcal{K}_{0})^{\ast}\preceq\gamma^{2}(I+\gamma^{-2}\mathcal{L}(I+\mathcal{H}^{\ast}\mathcal{H})^{-1}\mathcal{L}^{\ast})^{-1}.$ (31)
By defining the canonical factorizations
$\displaystyle\Delta\Delta^{\ast}$ $\displaystyle=I+\mathcal{H}\mathcal{H}^{\ast},$
$\displaystyle\nabla_{\gamma}^{\ast}\nabla_{\gamma}$ $\displaystyle=\gamma^{-2}(I+\gamma^{-2}\mathcal{L}(I+\mathcal{H}^{\ast}\mathcal{H})^{-1}\mathcal{L}^{\ast}),$ (32)
and applying the Schur complement again gives
$\displaystyle(\mathcal{K}\Delta-\mathcal{K}_{0}\Delta)^{\ast}\nabla_{\gamma}^{\ast}\nabla_{\gamma}(\mathcal{K}\Delta-\mathcal{K}_{0}\Delta)\preceq
I.$ (33)
Note that $\nabla_{\gamma}\mathcal{K}\Delta$ is a causal operator. Now, let
$\nabla_{\gamma}\mathcal{K}_{0}\Delta=\mathcal{S}+\mathcal{T}$ where
$\mathcal{S}$ is a causal operator and $\mathcal{T}$ is a strictly anticausal
operator (both operators depend on $\gamma$ implicitly). Then, if
$\mathcal{K}_{N}$ is a solution to the Nehari problem
$\|\mathcal{K}_{N}-\mathcal{T}\|\leq 1$, then a $\gamma$-optimal estimator is
given by $\nabla_{\gamma}^{-1}(\mathcal{K}_{N}+\mathcal{S})\Delta^{-1}$.
## 6 Proof Outline for the State-Space Setting
In this section, we present the main lemmas that constitute the explicit
solution for the state-space setting. As noted above, three technical lemmas
are needed to reduce the problem to a Nehari problem. The solution to the
Nehari problem is known and appears in the supplementary material, as do the
proofs of the technical lemmas.
The first factorization appears as follows.
###### Lemma 3.
The transfer function $I+H(z)H^{\ast}(z^{-\ast})$ can be factored as
$\Delta(z)\Delta^{\ast}(z^{-\ast})=I+H(z)H^{\ast}(z^{-\ast})$
with
$\displaystyle\Delta(z)$
$\displaystyle=(I+H(zI-F)^{-1}K_{P})(I+HPH^{\ast})^{1/2}$ (34)
where $(I+HPH^{\ast})^{1/2}(I+HPH^{\ast})^{\ast/2}=I+HPH^{\ast}$,
$K_{P}=FPH^{\ast}(I+HPH^{\ast})^{-1}$ and $P$ is the stabilizing solution to
the Riccati equation
$\displaystyle P$ $\displaystyle=GG^{\ast}+FPF^{\ast}-FPH^{\ast}(I+HPH^{\ast})^{-1}HPF^{\ast}.$
Moreover, the transfer function $\Delta^{-1}(z)$ is bounded on $|z|\geq 1$.
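The factorization in Lemma 3 can be checked numerically. Below is a sketch (our own code) for the scalar system of Section 4.1; on the unit circle, the para-conjugate $\Delta^{\ast}(z^{-\ast})$ reduces to the complex conjugate of $\Delta(z)$:

```python
import numpy as np

F, G, H = 0.9, 1.0, 1.0

# Stabilizing Riccati solution P = GG* + FPF* - FPH*(I+HPH*)^{-1}HPF*
P = 1.0
for _ in range(200):
    P = G**2 + F * P * F - (F * P * H)**2 / (1.0 + H * P * H)
K_P = F * P * H / (1.0 + H * P * H)

def Delta(z):
    # Delta(z) = (I + H(zI-F)^{-1}K_P)(I+HPH*)^{1/2}, scalar case
    return (1.0 + H * K_P / (z - F)) * np.sqrt(1.0 + H * P * H)

def Hz(z):
    return H * G / (z - F)

# residual of Delta(z)Delta*(z^{-*}) = 1 + H(z)H*(z^{-*}) on |z| = 1
zs = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
max_err = max(abs(Delta(z) * np.conj(Delta(z)) - (1.0 + abs(Hz(z))**2))
              for z in zs)
```

The residual `max_err` is at machine-precision level once the Riccati iteration has converged.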
In the second factorization, the expression we aim to factor is positive, but
its causal and anticausal components appear in reversed order. This is
resolved with an additional Riccati equation.
###### Lemma 4.
For any $\gamma>0$, the factorization
$\nabla_{\gamma}^{\ast}(z^{-\ast})\nabla_{\gamma}(z)=\gamma^{-2}(I+\gamma^{-2}L(z)(I+H^{\ast}(z^{-\ast})H(z))^{-1}L^{\ast}(z^{-\ast}))$
holds with
$\displaystyle\nabla_{\gamma}(z)$ $\displaystyle=R_{Q}^{-1/2}(I-L(zI-
F_{Q})^{-1}K_{Q}),$ (35)
where $R_{Q}=R_{Q}^{1/2}R_{Q}^{\ast/2}$, $Q$ is a solution to the Riccati
equation
$Q=-GR_{W}^{-1}G^{\ast}+F_{W}QF_{W}^{\ast}-K_{Q}R_{Q}K_{Q}^{\ast},$
and $K_{Q}=F_{W}QL^{\ast}R_{Q}^{-1}$ and $R_{Q}=\gamma^{2}I+LQL^{\ast}$ and
the closed-loop system $F_{Q}=F_{W}-K_{Q}L$. The constants $(F_{W},K_{W})$ are
obtained from the solution $W$ to the Riccati equation
$\displaystyle W$
$\displaystyle=H^{\ast}H+L_{\gamma}^{\ast}L_{\gamma}+F^{\ast}WF-
K_{W}^{\ast}R_{W}K_{W},$ (36)
and $K_{W}=R_{W}^{-1}G^{\ast}WF$ and $R_{W}=I+G^{\ast}WG$ with
$R_{W}=R_{W}^{\ast/2}R_{W}^{1/2}$ and $F_{W}=F-GK_{W}$.
The following lemma is the required decomposition.
###### Lemma 5.
The product of the transfer matrices
$\nabla_{\gamma}(z)L(z)H^{\ast}(z^{-\ast})\Delta^{-\ast}(z^{-\ast})$ can be
written as the sum of an anticausal transfer function
$\displaystyle T(z)$ $\displaystyle=R_{Q}^{-1/2}L(P-U)F_{P}^{\ast}(z^{-1}I-F_{P}^{\ast})^{-1}H^{\ast}(I+HPH^{\ast})^{-\ast/2},$ (37)
and a causal transfer function
$\displaystyle S(z)=\nabla_{\gamma}(z)L[(zI-F)^{-1}F+I]PH^{\ast}(I+HPH^{\ast})^{-\ast/2}$
$\displaystyle\ -R_{Q}^{-1/2}L[(zI-F_{Q})^{-1}F_{Q}+I]UH^{\ast}(I+HPH^{\ast})^{-\ast/2},$
where $U$ solves $U=K_{Q}LPF_{P}^{\ast}+F_{Q}UF_{P}^{\ast}$.
It can be shown that the first line of $S(z)$ is
$\nabla_{\gamma}(z)K_{H_{2}}(z)\Delta(z)$, where $K_{H_{2}}(z)$ is the optimal
$H_{2}$ estimator. With this decomposition at hand, and in particular the
anticausal part $T(z)$, we can apply the results on the Nehari problem to
obtain Lemma 2.
# Estimates for weighted homogeneous delay systems:
A Lyapunov-Krasovskii-Razumikhin approach*
Gerson Portilla${}^{1}$, Irina V. Alexandrova${}^{2}$ and Sabine Mondié${}^{1}$
1Gerson Portilla and Sabine Mondié are with the Department of Automatic
Control, CINVESTAV-IPN, 07360 Mexico D.F., Mexico<EMAIL_ADDRESS>
2Irina V. Alexandrova is with the Department of Control Theory, St. Petersburg
State University, 7/9 Universitetskaya nab., St. Petersburg, 199034,
Russia<EMAIL_ADDRESS>
*The work of Gerson Portilla and Sabine Mondié was supported by Projects
CONACYT A1-S-24796 and SEP-CINVESTAV 155, Mexico. The work of Irina
Alexandrova was supported by the Russian Science Foundation, Project
19-71-00061.
###### Abstract
In this paper, we present estimates for solutions and for the attraction
domain of the trivial solution for systems with delayed and nonlinear weighted
homogeneous right-hand side of positive degree. The results are achieved via a
generalization of the Lyapunov-Krasovskii functional construction presented
recently for homogeneous systems with standard dilation. Along with the
classical approach for the calculation of the estimates within the Lyapunov-
Krasovskii framework, we develop a novel approach which combines the use of
Lyapunov-Krasovskii functionals with ideas of the Razumikhin framework. More
precisely, a lower bound for the functional on a special set of functions
inspired by the Razumikhin condition is constructed, and an additional
condition imposed on the solution of the comparison equation ensures that this
bound can be used to estimate all solutions in a certain neighbourhood of the
trivial one. An example shows that this approach yields less conservative
estimates in comparison with the classical one.
## I Introduction
When the linear approximation vanishes, a homogeneous approximation can be
used for nonlinear system analysis and control. Generalised definitions such as
weighted homogeneity [1], [2] or in the bi-limit [3] allow covering wider
classes of nonlinear systems. The Lyapunov framework has produced significant
results on stability [4], [5], robustness [1] as well as observer and
controller design [5], to name a few.
To study homogeneous systems with delays, researchers have naturally resorted
to the Lyapunov-Razumikhin framework [6]. Some general results on delay-
independent and finite-time stability for the cases of positive and negative
degree, respectively, as well as for stability of locally homogeneous systems
are obtained in [7, 8]. For weighted homogeneous systems of positive degree,
the delay-independent stability was established with the help of the Lyapunov
function of the corresponding delay-free system [9]. The approach has made it
possible to obtain estimates of the solutions [10] and of the attraction
region, as well as to analyze perturbed systems and complex systems describing
the interaction between several subsystems [11]. Moreover, contrary to the
often expressed view that the Razumikhin approach yields only qualitative
estimates, it was shown recently [12] that the estimates of [10] are close
enough to the system response. A similar conclusion was made in [13] for a
different class of systems.
Recently, for the case of standard dilation and homogeneity degree strictly
greater than one, a Lyapunov-Krasovskii functional was introduced in [14],
[15]. It was inspired by the Lyapunov functional of complete type for delay
linear systems [16], [17], and lead to stability and instability results [18],
estimates of the region of attraction [15] and of the system response [12],
see also [19].
In this contribution, we extend this functional to the case of weighted
homogeneous time-delay systems of positive degree and use it to construct
quantitative estimates of the region of attraction and of the system response.
Two approaches are developed. The first one is based on the classical ideas of
the Lyapunov-Krasovskii method, whereas a combination of Lyapunov-Krasovskii
and Razumikhin techniques is used in the second one. The idea is to construct
a lower bound for the Lyapunov-Krasovskii functional which is only valid on a
special set of functions inspired by the Razumikhin condition, see [20] and
[21] for the linear and nonlinear cases, respectively. Exploiting the ideas in
[10], we require the solutions of the comparison equation for the functional
to satisfy the same condition, thus ensuring that the final estimates hold for
all solutions from a certain neighbourhood. This approach yields better
estimates than the classical one.
Note that there exists a parallel work on the generalization of the functional
of [14] to the case of weighted dilation covering both asymptotic stability
for the case of positive degree and boundedness of solutions for that of
negative degree [22]. The main difference between the functional we use and
those of [22] is that to cover both cases the authors of [22] use a more
complex construction with additional parameters, a more complicated bounding
and non-standard norms, thus achieving only moderate computational performance.
In contrast, we bound the functional and its derivative componentwise
following naturally the componentwise definition of homogeneous functions, and
use a natural norm based on the homogeneous vector norm. Additionally, we
present fully computed quantitative estimates of the response and of the
attraction region, via this combined Lyapunov-Krasovskii-Razumikhin approach.
The contribution is organised as follows. Previous results on homogeneous
systems are recalled in Section II. The Lyapunov-Krasovskii functional
construction is presented in Section III. The functional is applied to the
estimation of the attraction region in Section IV and of the homogeneous
system solutions in Section V. An illustrative example is given in Section VI.
Notation: The space of $\mathbb{R}^{n}$ valued continuous functions on
$[-h,0]$ endowed with the norm
$\|\varphi\|_{h}=\max_{\theta\in[-h,0]}\|\varphi(\theta)\|$ is denoted by
$C([-h,0],\mathbb{R}^{n})$. Here, $\|\cdot\|$ stands for the Euclidean norm.
In computations it turns out to be more convenient to use the following
homogeneous norm
$\|\varphi\|_{\mathscr{H}}=\max_{\theta\in[-h,0]}\|\varphi(\theta)\|_{r,p},$
where $\|\cdot\|_{r,p}$ stands for the typical vector homogeneous norm defined
below. The solution of a time delay system and the restriction of the solution
to the segment $[t-h,t]$, corresponding to the initial function $\varphi\in
C([-h,0],\mathbb{R}^{n}),$ are respectively denoted by $x(t)$ and $x_{t}$. If
the initial condition is important, we write $x(t,\varphi)$ and
$x_{t}(\varphi)$, respectively.
## II Preliminaries
We start with a brief reminder of the definitions related to the homogeneity
concept [2, 23]. Define the vector of weights $r=(r_{1},\ldots,r_{n})^{T},$
where $r_{i}>0,$ $i=\overline{1,n},$ and the dilation operator
$\delta_{\varepsilon}^{r}(\textup{x})=(\varepsilon^{r_{1}}\textup{x}_{1},\ldots,\varepsilon^{r_{n}}\textup{x}_{n}),\quad\varepsilon>0.$
Here, $\textup{x}=(\textup{x}_{1},\ldots,\textup{x}_{n})^{T}.$ Then, function
$\|\textup{x}\|_{r,p}=\left(\sum_{i=1}^{n}|\textup{x}_{i}|^{p/r_{i}}\right)^{1/p},$
where $p\geq 1,$ is called the $\delta^{r}$-homogeneous norm. Although it is
not a norm in the usual sense, it has been shown to be equivalent to the
Euclidean norm. A scalar function $V:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is
called $\delta^{r}$-homogeneous, if there exists $\mu\in\mathbb{R}$ such that
$V(\delta_{\varepsilon}^{r}(\textup{x}))=\varepsilon^{\mu}V(\textup{x})\quad\forall\,\varepsilon>0.$
A vector function
$f=f(\textup{x},\textup{y}):\mathbb{R}^{2n}\rightarrow\mathbb{R}^{n}$ is
called $\delta^{r}$-homogeneous, if there exists $\mu\in\mathbb{R}$ such that
its component $f_{i}$ is a $\delta^{r}$-homogeneous function of degree
$\mu+r_{i},$ i.e.
$f_{i}(\delta_{\varepsilon}^{r}(\textup{x}),\delta_{\varepsilon}^{r}(\textup{y}))=\varepsilon^{\mu+r_{i}}f_{i}(\textup{x},\textup{y})\quad\forall\,\varepsilon>0,\quad
i=\overline{1,n},$
where $\textup{x},\textup{y}\in\mathbb{R}^{n}.$ In both cases, the constant
$\mu$ is called the degree of homogeneity. It is worth mentioning that the
homogeneous norm is a $\delta^{r}$-homogeneous function of degree one:
$\|\delta_{\varepsilon}^{r}(\textup{x})\|_{r,p}=\varepsilon\|\textup{x}\|_{r,p}.$
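The dilation and the homogeneous norm translate directly into code. A small sketch (our own code, with illustrative weights $r=(1,2,3)$) that checks the degree-one property above:

```python
import numpy as np

def dilation(eps, x, r):
    """delta_eps^r(x): scale the i-th coordinate by eps^{r_i}."""
    return np.array([eps**ri * xi for ri, xi in zip(r, x)])

def hom_norm(x, r, p=2.0):
    """delta^r-homogeneous norm (sum_i |x_i|^{p/r_i})^{1/p}."""
    return sum(abs(xi)**(p / ri) for ri, xi in zip(r, x))**(1.0 / p)

r = (1.0, 2.0, 3.0)                      # illustrative weights
x = np.array([0.5, -1.2, 2.0])
eps = 1.7
lhs = hom_norm(dilation(eps, x, r), r)   # ||delta_eps^r(x)||_{r,p}
rhs = eps * hom_norm(x, r)               # eps * ||x||_{r,p}
```

The two values agree to floating-point precision, since each summand $|\varepsilon^{r_{i}}x_{i}|^{p/r_{i}}$ picks up exactly the factor $\varepsilon^{p}$.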
Assume that $\mu\geq-\min_{i=\overline{1,n}}r_{i}.$
###### Lemma 1.
There exist $m_{i}>0$ such that the components of the $\delta^{r}$-homogeneous
function $f(\textup{x},\textup{y})$ satisfy
$|f_{i}(\textup{x},\textup{y})|\leq
m_{i}\left(\|\textup{x}\|_{r,p}^{\mu+r_{i}}+\|\textup{y}\|_{r,p}^{\mu+r_{i}}\right),\quad
i=\overline{1,n}.$
###### Proof.
If $\mu+r_{i}>0,$ then we take
$m_{i}=\max_{\|\textup{x}\|_{r,p}^{\mu+r_{i}}+\|\textup{y}\|_{r,p}^{\mu+r_{i}}=1}|f_{i}(\textup{x},\textup{y})|>0,$
and
$\varepsilon=(\|\textup{x}\|_{r,p}^{\mu+r_{i}}+\|\textup{y}\|_{r,p}^{\mu+r_{i}})^{-1/(\mu+r_{i})}.$
It can be easily seen that
$\|\delta_{\varepsilon}^{r}(\textup{x})\|_{r,p}^{\mu+r_{i}}+\|\delta_{\varepsilon}^{r}(\textup{y})\|_{r,p}^{\mu+r_{i}}=1.$
This implies
$|f_{i}(\delta_{\varepsilon}^{r}(\textup{x}),\delta_{\varepsilon}^{r}(\textup{y}))|\leq
m_{i}.$
Using the definition of homogeneity, we arrive at the result. If
$\mu+r_{i}=0,$ then the same conclusion can be drawn with
$m_{i}=\max_{\|\textup{x}\|_{r,p}^{k}+\|\textup{y}\|_{r,p}^{k}=1}|f_{i}(\textup{x},\textup{y})|>0$
for any $k>0.$ ∎
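Lemma 1 is easy to illustrate on a toy example (our own, not from the paper): take $n=2$, $r=(1,2)$, $\mu=2$, so that $f_{1}$ must be $\delta^{r}$-homogeneous of degree 3 and $f_{2}$ of degree 4. The constants $m_{i}$ below are estimated by random sampling, a sanity check rather than the exact maxima used in the proof:

```python
import numpy as np

r, mu = (1.0, 2.0), 2.0

def f(x, y):
    # f_1 has degree mu + r_1 = 3, f_2 has degree mu + r_2 = 4
    return np.array([-x[0]**3 + y[0] * y[1],
                     -x[1]**2 + x[0]**2 * x[1]])

def dil(eps, v):
    return np.array([eps**r[0] * v[0], eps**r[1] * v[1]])

def hnorm(v, p=2.0):
    return (abs(v[0])**(p / r[0]) + abs(v[1])**(p / r[1]))**(1.0 / p)

rng = np.random.default_rng(1)
x, y, eps = rng.standard_normal(2), rng.standard_normal(2), 1.9
lhs = f(dil(eps, x), dil(eps, y))                       # f at dilated arguments
rhs = np.array([eps**(mu + r[0]), eps**(mu + r[1])]) * f(x, y)  # componentwise scaling

# empirical m_i: largest observed ratio |f_i| / (||x||^{mu+r_i} + ||y||^{mu+r_i})
pairs = [rng.standard_normal((2, 2)) for _ in range(1000)]
m = np.max([[abs(f(a, b)[i]) / (hnorm(a)**(mu + r[i]) + hnorm(b)**(mu + r[i]))
             for i in range(2)] for a, b in pairs], axis=0)
```

The componentwise homogeneity check `lhs == rhs` holds exactly, and the sampled ratios stay bounded, as the lemma predicts.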
###### Lemma 2.
Assume that $f(\textup{x},\textup{y})$ is continuously differentiable with
respect to x and $\delta^{r}$-homogeneous. Then, there exist $\eta_{ij}>0$
such that
$\left|\frac{\partial
f_{i}(\textup{x},\textup{y})}{\partial\textup{x}_{j}}\right|\leq\eta_{ij}\left(\|\textup{x}\|_{r,p}^{\mu+r_{i}-r_{j}}+\|\textup{y}\|_{r,p}^{\mu+r_{i}-r_{j}}\right),\
i,j=\overline{1,n},$
if $\mu+r_{i}-r_{j}>0,$ and
$\left|\frac{\partial
f_{i}(\textup{x},\textup{y})}{\partial\textup{x}_{j}}\right|\leq\frac{\eta_{ij}}{\left(\|\textup{x}\|_{r,p}^{-\mu-
r_{i}+r_{j}}+\|\textup{y}\|_{r,p}^{-\mu-r_{i}+r_{j}}\right)},\
i,j=\overline{1,n},$
if $\mu+r_{i}-r_{j}<0$ and at least one of the vectors x and y is nonzero.
Now, consider a time delay system of the form
$\dot{x}(t)=f(x(t),x(t-h)),$ (1)
where $x(t)\in\mathbb{R}^{n},$ $h>0$ is a constant delay. The following
assumptions are made.
###### Assumption 1.
The vector function $f(\textup{x},\textup{y})$ is continuously differentiable
with respect to $\textup{x}\in\mathbb{R}^{n},$ locally Lipschitz with respect
to $\textup{y}\in\mathbb{R}^{n},$ and $\delta^{r}$-homogeneous of degree
$\mu>0.$
###### Assumption 2.
The delay free system
$\dot{x}(t)=f(x(t),x(t))$ (2)
is asymptotically stable.
In [1], [23] the existence of a homogeneous Lyapunov function for system (2)
is established. More precisely, it is proven that for any $l\in\mathbb{N}$ and
$\gamma\geq l\max_{i=\overline{1,n}}\\{r_{i}\\}$ there exists a positive
definite Lyapunov function $V(\textup{x})$ of class $C^{l}$,
$\delta^{r}$-homogeneous of degree $\gamma$, such that its time derivative with respect
to system (2) is a negative definite $\delta^{r}$-homogeneous function of
degree $\gamma+\mu,$ that is
$\left(\frac{\partial
V(\textup{x})}{\partial\textup{x}}\right)^{T}f(\textup{x},\textup{x})\leq-\textup{w}\|\textup{x}\|_{r,p}^{\gamma+\mu},\quad\textup{w}>0.$
(3)
We set $l=2$ and use a Lyapunov function $V(\textup{x})$ of class $C^{2}$ and
the homogeneity degree $\gamma\geq 2\max_{i=\overline{1,n}}\\{r_{i}\\}$ below.
The following estimates hold [2], [23]:
$\displaystyle\alpha_{0}\|\textup{x}\|_{r,p}^{\gamma}\leq
V(\textup{x})\leq\alpha_{1}\|\textup{x}\|_{r,p}^{\gamma},\quad\alpha_{0},\alpha_{1}>0,$
(4) $\displaystyle\left|\frac{\partial
V(\textup{x})}{\partial\textup{x}_{i}}\right|\leq\beta_{i}\|\textup{x}\|_{r,p}^{\gamma-
r_{i}},\quad\left|\frac{\partial^{2}V(\textup{x})}{\partial\textup{x}_{i}\partial\textup{x}_{j}}\right|\leq\psi_{ij}\|\textup{x}\|_{r,p}^{\gamma-
r_{i}-r_{j}},$ (5)
where $\beta_{i},\psi_{ij}>0,$ $i,j=\overline{1,n}.$
It is proved in [9] that if Assumptions 1 and 2 hold, then the trivial
solution of time delay system (1) is asymptotically stable for all $h>0.$ In
the next section, we present the Lyapunov-Krasovskii functional validating
this result, and further construct the estimates for the solutions of (1) and
for the attraction region. An important step in obtaining the estimates
is the use of a lower bound for the functional on the special set
$S_{\alpha}=\\{\varphi\in C([-h,0],\mathbb{R}^{n}):\|\varphi(\theta)\|_{r,p}\leq\alpha\|\varphi(0)\|_{r,p},\;\theta\in[-h,0]\\},$
where $\alpha>1.$ This set was introduced in the Lyapunov-Krasovskii analysis
in [20] and [21] for linear and nonlinear time delay systems, respectively. In
particular, it was shown that it is enough to construct the lower bound for
the functional on the set $S_{\alpha}$ in order to prove asymptotic stability.
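The delay-independent stability result of [9] can be observed numerically. Below is a sketch with our own toy example (not from the paper): the scalar system $\dot{x}(t)=-x^{3}(t-h)$, i.e. $f(\textup{x},\textup{y})=-\textup{y}^{3}$ with $r=1$ and $\mu=2$, whose delay-free counterpart $\dot{x}=-x^{3}$ is asymptotically stable, so Assumptions 1 and 2 hold:

```python
import numpy as np

h, dt, T = 1.0, 0.001, 200.0
n_h = int(round(h / dt))       # delay expressed in time steps
N = int(round(T / dt))         # number of integration steps

x = np.empty(N + n_h + 1)
x[: n_h + 1] = 0.5             # constant initial function phi(theta) = 0.5
for k in range(n_h, n_h + N):
    # explicit Euler step for x'(t) = -x^3(t - h)
    x[k + 1] = x[k] + dt * (-x[k - n_h] ** 3)
final, peak = abs(x[-1]), np.max(np.abs(x))
```

The solution decays slowly (the linearization is zero, so the decay is only polynomial in time), consistent with asymptotic stability for all $h>0$.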
## III The Functional Construction
A natural generalization of the Lyapunov-Krasovskii functional introduced in
[14, 15] to the case of $\delta^{r}$-homogeneous time delay systems is
$\displaystyle v(\varphi)$
$\displaystyle=V(\varphi(0))+\left.\left(\frac{\partial
V(\textup{x})}{\partial\textup{x}}\right)^{T}\right|_{\textup{x}=\varphi(0)}\int_{-h}^{0}f(\varphi(0),\varphi(\theta))d\theta$
(6)
$\displaystyle+\int_{-h}^{0}(\mathrm{w_{1}}+(h+\theta)\mathrm{w_{2}})\|\varphi(\theta)\|_{r,p}^{\gamma+\mu}d\theta.$
Here, $\mathrm{w}_{1},\mathrm{w}_{2}>0,$ and
$\mathrm{w}_{0}=\mathrm{w}-\mathrm{w}_{1}-h\mathrm{w}_{2}>0$. In this section,
we show that functional (6) satisfies the classical Lyapunov-Krasovskii
theorem [24]. For the sake of brevity, the three summands of (6) are denoted
by $I_{1}(\varphi)$, $I_{2}(\varphi)$ and $I_{3}(\varphi),$ respectively.
###### Lemma 3.
There exist $\delta>0$ and $a_{1},\,a_{2}>0$ such that
$v(\varphi)\geq
a_{1}\|\varphi(0)\|_{r,p}^{\gamma}+a_{2}\int_{-h}^{0}\|\varphi(\theta)\|_{r,p}^{\gamma+\mu}d\theta$
(7)
in the neighbourhood $\|\varphi\|_{\mathscr{H}}\leq\delta.$
###### Proof.
It is obvious that
$I_{1}(\varphi)\geq\alpha_{0}\|\varphi(0)\|_{r,p}^{\gamma},\;\,I_{3}(\varphi)\geq\mathrm{w}_{1}\int_{-h}^{0}\|\varphi(\theta)\|_{r,p}^{\gamma+\mu}d\theta.$
Now, we use Lemma 1 and the first of bounds (5) for the second summand of the
functional:
$\displaystyle|I_{2}(\varphi)|$ $\displaystyle\leq
h\left(\sum_{i=1}^{n}\beta_{i}m_{i}\right)\|\varphi(0)\|_{r,p}^{\gamma+\mu}+\sum_{i=1}^{n}\beta_{i}m_{i}\chi^{\gamma-\mu-2r_{i}}$
$\displaystyle\times\left(\frac{\|\varphi(0)\|_{r,p}}{\chi}\right)^{\gamma-
r_{i}}\int_{-h}^{0}(\chi\|\varphi(\theta)\|_{r,p})^{\mu+r_{i}}d\theta,$
where $\chi>0$ is a parameter. Using the standard inequality
$A^{p_{1}}B^{p_{2}}\leq A^{p_{1}+p_{2}}+B^{p_{1}+p_{2}},$ where
$p_{1},p_{2}\geq 0$ and $A,B\geq 0$, we get
$\displaystyle|I_{2}(\varphi)|$ $\displaystyle\leq
h\left(\sum_{i=1}^{n}\beta_{i}m_{i}\left(1+\chi^{-2(\mu+r_{i})}\right)\right)\|\varphi(0)\|_{r,p}^{\gamma+\mu}$
$\displaystyle+\left(\sum_{i=1}^{n}\beta_{i}m_{i}\chi^{2(\gamma-
r_{i})}\right)\int_{-h}^{0}\|\varphi(\theta)\|_{r,p}^{\gamma+\mu}d\theta.$
Combining all summands together and making use of
$I_{2}(\varphi)\geq-|I_{2}(\varphi)|$ along with
$\|\varphi\|_{\mathscr{H}}\leq\delta,$ we arrive at the lower bound (7) with
$\displaystyle a_{1}$
$\displaystyle=\alpha_{0}-h\sum_{i=1}^{n}\beta_{i}m_{i}\left(1+\chi^{-2(\mu+r_{i})}\right)\delta^{\mu},$
$\displaystyle a_{2}$
$\displaystyle=\mathrm{w}_{1}-\sum_{i=1}^{n}\beta_{i}m_{i}\chi^{2(\gamma-
r_{i})}.$
Here, the parameter $\chi>0$ is chosen such that $a_{2}>0,$ and
$\delta<H_{1}=\left(\frac{\alpha_{0}}{h\sum_{i=1}^{n}\beta_{i}m_{i}\left(1+\chi^{-2(\mu+r_{i})}\right)}\right)^{1/\mu}.$
(8)
∎
###### Lemma 4.
There exist $\delta>0$ and $c_{0},c_{1},c_{2}>0$ such that the time derivative
of functional (6) along the solutions of system (1) satisfies
$\displaystyle\frac{\mathrm{d}v(x_{t})}{\mathrm{d}t}$ $\displaystyle\leq-
c_{0}\|x(t)\|_{r,p}^{\gamma+\mu}-c_{1}\|x(t-h)\|_{r,p}^{\gamma+\mu}$ (9)
$\displaystyle-
c_{2}\int_{-h}^{0}\|x(t+\theta)\|_{r,p}^{\gamma+\mu}d\theta,\quad\text{if}\quad\|x_{t}\|_{\mathscr{H}}\leq\delta.$
###### Proof.
Similarly to the case of standard dilation [15], we differentiate the
functional along the solutions of system (1):
$\displaystyle\begin{split}\frac{\mathrm{d}v(x_{t})}{\mathrm{d}t}&=-\mathrm{w}_{0}\|x(t)\|_{r,p}^{\gamma+\mu}-\mathrm{w}_{1}\|x(t-h)\|_{r,p}^{\gamma+\mu}\\\
&-\mathrm{w}_{2}\int_{-h}^{0}\|x(t+\theta)\|_{r,p}^{\gamma+\mu}d\theta+\sum_{j=1}^{2}\Lambda_{j},\quad\text{where}\end{split}$
$\displaystyle\begin{split}\Lambda_{1}&=\sum_{i,j=1}^{n}\left.\frac{\partial
V(\textup{x})}{\partial\textup{x}_{i}}\right|_{\textup{x}=x(t)}\left.\int_{t-h}^{t}\frac{\partial
f_{i}(\textup{x},x(s))}{\partial\textup{x}_{j}}\right|_{\textup{x}=x(t)}\mathrm{d}s\\\
&\times
f_{j}(x(t),x(t-h)),\quad\Lambda_{2}=\sum_{i,j=1}^{n}f_{i}(x(t),x(t-h))\\\
&\times\left.\left(\frac{\partial^{2}V(\textup{x})}{\partial\textup{x}_{i}\partial\textup{x}_{j}}\right)\right|_{\textup{x}=x(t)}\int_{-h}^{0}f_{j}(x(t),x(t+\theta))\mathrm{d}\theta.\end{split}$
Next, we estimate $\Lambda_{1}$ and $\Lambda_{2}$ with the help of Lemmas 1, 2
and inequalities (5). We introduce the sets of indices
$\displaystyle R_{1}$
$\displaystyle=\\{(i,j):\;i,j=\overline{1,n},\;\mu+r_{i}-r_{j}\geq 0\\},$
$\displaystyle R_{2}$
$\displaystyle=\\{(i,j):\;i,j=\overline{1,n},\;\mu+r_{i}-r_{j}<0\\}$
for the estimation of $\Lambda_{1}.$ Notice that Lemma 2 implies that
$\left|\frac{\partial f_{i}(\textup{x},\textup{y})}{\partial\textup{x}_{j}}\right|\leq\eta_{ij},\quad(i,j)\in R_{2},$
hence,
$\displaystyle\Lambda_{1}\leq\sum_{(i,j)\in R_{1}}\beta_{i}m_{j}\|x(t)\|_{r,p}^{\gamma-r_{i}}(\|x(t)\|_{r,p}^{\mu+r_{j}}+\|x(t-h)\|_{r,p}^{\mu+r_{j}})$
$\displaystyle\times\int_{-h}^{0}\eta_{ij}(\|x(t)\|_{r,p}^{\mu+r_{i}-r_{j}}+\|x(t+\theta)\|_{r,p}^{\mu+r_{i}-r_{j}})d\theta$
$\displaystyle+\sum_{(i,j)\in R_{2}}h\beta_{i}m_{j}\eta_{ij}\|x(t)\|_{r,p}^{\gamma-r_{i}}(\|x(t)\|_{r,p}^{\mu+r_{j}}+\|x(t-h)\|_{r,p}^{\mu+r_{j}}),$
$\displaystyle\Lambda_{2}\leq\sum_{i=1}^{n}\sum_{j=1}^{n}m_{i}m_{j}\psi_{ij}(\|x(t)\|_{r,p}^{\mu+r_{i}}+\|x(t-h)\|_{r,p}^{\mu+r_{i}})$
$\displaystyle\times\|x(t)\|_{r,p}^{\gamma-r_{i}-r_{j}}\int_{-h}^{0}(\|x(t)\|_{r,p}^{\mu+r_{j}}+\|x(t+\theta)\|_{r,p}^{\mu+r_{j}})d\theta.$
Using the standard inequality
$A^{p_{1}}B^{p_{2}}C^{p_{3}}\leq
A^{p_{1}+p_{2}+p_{3}}+B^{p_{1}+p_{2}+p_{3}}+C^{p_{1}+p_{2}+p_{3}},$
where $p_{1},p_{2},p_{3}\geq 0$ and $A,B,C\geq 0$, and defining
$\displaystyle s_{ij}$ $\displaystyle=\left\\{\begin{aligned}
&1,&\quad&(i,j)\in R_{1},\\\ &\delta^{r_{j}-r_{i}-\mu},&\quad&(i,j)\in
R_{2},\end{aligned}\right.$ $\displaystyle L$
$\displaystyle=\sum_{i=1}^{n}\sum_{j=1}^{n}m_{j}\left(\beta_{i}\eta_{ij}s_{ij}+m_{i}\psi_{ij}\right),$
$\displaystyle g(x_{t})$
$\displaystyle=4h\|x(t)\|_{r,p}^{\gamma+2\mu}+2h\|x(t-h)\|_{r,p}^{\gamma+2\mu}$
$\displaystyle+2\int_{-h}^{0}\|x(t+\theta)\|_{r,p}^{\gamma+2\mu}d\theta,$
we arrive at $\Lambda_{1}+\Lambda_{2}\leq Lg(x_{t}).$ Since $\mu>0,$ we obtain
bound (9) with
$\displaystyle c_{0}=\mathrm{w}_{0}-4hL\delta^{\mu},\;\,c_{1}=\mathrm{w}_{1}-2hL\delta^{\mu},\;\,c_{2}=\mathrm{w}_{2}-2L\delta^{\mu}.$
It is enough to choose
$\delta<H_{2}=\left(\min\left\\{\frac{\mathrm{w}_{0}}{4hL},\frac{\mathrm{w}_{1}}{2hL},\frac{\mathrm{w}_{2}}{2L}\right\\}\right)^{1/\mu}$
(10)
to end the proof. ∎
###### Lemma 5.
There exist $b_{1},b_{2}>0$ such that
$v(\varphi)\leq
b_{1}\|\varphi(0)\|_{r,p}^{\gamma}+b_{2}\int_{-h}^{0}\|\varphi(\theta)\|_{r,p}^{\gamma}d\theta,$
(11)
if $\|\varphi\|_{\mathscr{H}}\leq\delta.$
###### Proof.
The bound is obtained straightforwardly with
$\displaystyle b_{1}$
$\displaystyle=\alpha_{1}+2h\left(\sum_{i=1}^{n}\beta_{i}m_{i}\right)\delta^{\mu},$
$\displaystyle b_{2}$
$\displaystyle=\left(\mathrm{w}_{1}+h\mathrm{w}_{2}+\sum_{i=1}^{n}\beta_{i}m_{i}\right)\delta^{\mu}.$
∎
###### Corollary 1.
Functional (6) admits an upper bound
$\displaystyle v(\varphi)\leq\alpha_{1}\|\varphi(0)\|_{r,p}^{\gamma}+b_{3}\|\varphi\|_{\mathscr{H}}^{\gamma+\mu},$
(12)
where
$b_{3}=\left(\mathrm{w}_{1}+h\mathrm{w}_{2}+2h\sum_{i=1}^{n}\beta_{i}m_{i}\right)h.$
Now, we present a lower bound for the functional $v(\varphi)$ on the set
$S_{\alpha}.$ This bound will be useful for the construction of the
estimates in Sections IV and V.
###### Lemma 6.
There exist $\delta>0$ and $\tilde{a}_{1}=\tilde{a}_{1}(\alpha)>0$ such that
$v(\varphi)\geq\tilde{a}_{1}\|\varphi(0)\|_{r,p}^{\gamma}+\mathrm{w}_{1}\int_{-h}^{0}\|\varphi(\theta)\|_{r,p}^{\gamma+\mu}d\theta,$
(13)
if $\varphi\in S_{\alpha}$ and $\|\varphi\|_{\mathscr{H}}\leq\delta.$
###### Proof.
Using $\|\varphi(\theta)\|_{r,p}\leq\alpha\|\varphi(0)\|_{r,p},$
$\theta\in[-h,0],$ for the estimation of the second summand, we obtain that
bound (13) is satisfied with
$\displaystyle\tilde{a}_{1}=\alpha_{0}-h\left(\sum_{i=1}^{n}(1+\alpha^{\mu+r_{i}})m_{i}\beta_{i}\right)\delta^{\mu},\quad\text{where}$
$\displaystyle\delta<H_{3}=\left(\frac{\alpha_{0}}{h\sum_{i=1}^{n}(1+\alpha^{\mu+r_{i}})m_{i}\beta_{i}}\right)^{1/\mu}.$
(14)
∎
## IV Estimates for the Attraction Region
Lemmas 3–6 allow us to present straightforward estimates of the domain of
attraction of the trivial solution of system (1). The proofs are very similar
to those in [15] (see Corollary 10 and Remark 11). The estimates differ in the
lower bound used for the functional: bound (7) in Theorem 1 and bound (13) in
Theorem 2.
###### Theorem 1.
Let $\Delta$ be a positive root of the equation
$\alpha_{1}\Delta^{\gamma}+b_{3}\Delta^{\gamma+\mu}=a_{1}\delta^{\gamma},$
where $\delta<\min\\{H_{1},H_{2}\\},$ and $H_{1}$ and $H_{2}$ are defined by
(8) and (10), respectively. If system (2) is asymptotically stable, then the
set of initial functions
$\Omega=\\{\varphi\in
C([-h,0],\mathbb{R}^{n}):\|\varphi\|_{\mathscr{H}}<\Delta\\},$
is an estimate of the attraction region of the trivial solution of (1).
###### Theorem 2.
Let $\Delta_{\alpha}$ be a positive root of the equation
$\alpha_{1}\Delta_{\alpha}^{\gamma}+b_{3}\Delta_{\alpha}^{\gamma+\mu}=\tilde{a}_{1}\delta^{\gamma},$
where $\delta<\min\\{H_{2},H_{3}\\},$ and $H_{3}$ is defined by (14). If
system (2) is asymptotically stable, then the set of initial functions
$\Omega_{\alpha}=\\{\varphi\in
C([-h,0],\mathbb{R}^{n}):\|\varphi\|_{\mathscr{H}}<\Delta_{\alpha}\\},$
is an estimate of the attraction region of the trivial solution of (1).
###### Remark.
Proofs of Theorem 1 and Theorem 2 imply that $\|x(t,\varphi)\|_{r,p}<\delta$
$\,\forall\,t\geq 0,$ if $\|\varphi\|_{\mathscr{H}}<\Delta$ or
$\|\varphi\|_{\mathscr{H}}<\Delta_{\alpha}.$
## V Estimates for the Solutions
For the standard dilation, the estimates for the solutions obtained with the
help of the Lyapunov-Krasovskii functional (6) are presented in [12]. We
straightforwardly extend this result to the case of weighted dilation in
Section V-A. A novel approach, which combines the use of functional (6) with
ideas of the Razumikhin framework, is presented in Section V-B.
### V-A Classical Approach
Bounds (9) and (11) imply that if $\|x_{t}\|_{\mathscr{H}}\leq\delta,$ $t\geq
0,$ then
$\displaystyle\frac{dv(x_{t})}{dt}$
$\displaystyle\leq-c\left(\|x(t)\|_{r,p}^{\gamma+\mu}+\int_{-h}^{0}\|x(t+\theta)\|_{r,p}^{\gamma+\mu}d\theta\right),$
(15) $\displaystyle v(x_{t})$ $\displaystyle\leq
b\left(\|x(t)\|_{r,p}^{\gamma}+\int_{-h}^{0}\|x(t+\theta)\|_{r,p}^{\gamma}d\theta\right),$
(16)
where $c=\min\\{c_{0},c_{2}\\},$ $b=\max\\{b_{1},b_{2}\\},$
$\delta<\min\\{H_{1},H_{2}\\}.$ Define the values
$\rho_{1}=\bigl{(}2\max\\{1,h\\}\bigr{)}^{\frac{\mu}{\gamma}},\quad\rho_{2}=\frac{c}{\rho_{1}b^{\frac{\gamma+\mu}{\gamma}}}.$
The following relation was established in [12] on the basis of Hölder’s
inequality:
$\left(\|x(t)\|_{r,p}^{\gamma}+\int_{-h}^{0}\|x(t+\theta)\|_{r,p}^{\gamma}d\theta\right)^{\frac{\gamma+\mu}{\gamma}}\\\
\leq\rho_{1}\left(\|x(t)\|_{r,p}^{\gamma+\mu}+\int_{-h}^{0}\|x(t+\theta)\|_{r,p}^{\gamma+\mu}d\theta\right).$
(17)
Combining (15), (16) and (17) gives the following connection between
functional (6) and its derivative.
###### Lemma 7.
The following inequality is satisfied:
$\frac{dv(x_{t})}{dt}\leq-\rho_{2}v^{\frac{\gamma+\mu}{\gamma}}(x_{t}),\quad
t\geq 0,$ (18)
along the solutions of system (1) with $\|x_{t}\|_{\mathscr{H}}\leq\delta.$
Considering the comparison equation [25] of the form
$\frac{du(t)}{dt}=-\rho_{2}u^{\frac{\gamma+\mu}{\gamma}}(t),$ (19)
with initial condition
$u(0)=u_{0}=(\alpha_{1}+b_{3}\Delta^{\mu})\|\varphi\|_{\mathscr{H}}^{\gamma}$
and exploiting the classical ideas, we arrive at the following estimates for
solutions in the homogeneous norm.
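For completeness, the comparison equation (19) is separable and integrates in closed form; the same computation, with $\rho$ in place of $\rho_{2}$, yields the explicit solution of Eq. (21) given in Section V-B:

```latex
\frac{\mathrm{d}u}{u^{\frac{\gamma+\mu}{\gamma}}}=-\rho_{2}\,\mathrm{d}t
\quad\Longrightarrow\quad
\frac{\gamma}{\mu}\left(u^{-\frac{\mu}{\gamma}}(t)-u_{0}^{-\frac{\mu}{\gamma}}\right)=\rho_{2}\,t
\quad\Longrightarrow\quad
u(t)=u_{0}\left[1+\rho_{2}\left(\frac{\mu}{\gamma}\right)u_{0}^{\frac{\mu}{\gamma}}\,t\right]^{-\frac{\gamma}{\mu}}.
```

Substituting $u_{0}=(\alpha_{1}+b_{3}\Delta^{\mu})\|\varphi\|_{\mathscr{H}}^{\gamma}$ and using the lower bound $a_{1}\|x(t,\varphi)\|_{r,p}^{\gamma}\leq v(x_{t})\leq u(t)$ produces the algebraic decay rate appearing in (20).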
###### Theorem 3.
Let Assumptions 1 and 2 hold. Then, the solutions of system (1) with initial
functions with $\|\varphi\|_{\mathscr{H}}<\Delta,$ where $\Delta$ is defined
in Theorem 1, admit an estimate of the form
$\displaystyle\|x(t,\varphi)\|_{r,p}\leq\frac{\hat{c}_{1}\|\varphi\|_{\mathscr{H}}}{\left(1+\hat{c}_{2}\|\varphi\|_{\mathscr{H}}^{\mu}t\right)^{1/\mu}},\quad\text{where}$
(20)
$\displaystyle\begin{split}\hat{c}_{1}&=\left(\frac{\alpha_{1}+b_{3}\Delta^{\mu}}{a_{1}}\right)^{\frac{1}{\gamma}}=\frac{\delta}{\Delta},\\\
\hat{c}_{2}&=\frac{c}{b}\left(\frac{\mu}{\gamma}\right)\left(\frac{\alpha_{1}+b_{3}\Delta^{\mu}}{2b\max\\{1,h\\}}\right)^{\frac{\mu}{\gamma}}.\end{split}$
###### Remark.
Note that $\|\varphi\|_{\mathscr{H}}<\Delta$ implies
$\|x_{t}(\varphi)\|_{\mathscr{H}}<\delta$ for all $t\geq 0$ according to
Theorem 1, thus making the use of (15) and (16) legitimate.
### V-B Novel Approach Using the Set $S_{\alpha}$
The lower bound (13) is expected to be less conservative than the original
bound (7), since it should hold on the reduced set $S_{\alpha}$ instead of the
set of all continuous functions. Thus, a natural question appears: Can we
replace the constant $a_{1}$ coming from the lower bound in (20) with
$\tilde{a}_{1}$? We give an affirmative answer to this question, with some
restrictions, below. Examining the proof of Theorem 3, one finds that the lower
bound for the functional is used at the final step of the proof, namely,
$a_{1}\|x(t,\varphi)\|_{r,p}^{\gamma}\leq v(x_{t})\leq u(t),$
where $u(t)$ is the solution of (19). It is crucial that the last formula is
true for all solutions, not only those with each segment in $S_{\alpha}.$ A
similar difficulty appears while constructing the estimates of solutions via
the Razumikhin theorem [10]. Here, we adapt the ideas of [10] to reduce the
conservatism of Theorem 3.
We start with Lemma 7. To overcome the mentioned difficulty, we take
$\rho<\rho_{2},$ and consider the comparison initial value problem
$\displaystyle\frac{du(t)}{dt}=-\rho u^{\frac{\gamma+\mu}{\gamma}}(t),$ (21)
$\displaystyle u(0)=\tilde{u}_{0}=(\alpha_{1}+b_{3}\Delta_{\alpha}^{\mu})\|\varphi\|_{\mathscr{H}}^{\gamma},$
(22)
which admits the solution
$u(t)=\tilde{u}_{0}\left[1+\rho\left(\frac{\mu}{\gamma}\right)\tilde{u}_{0}^{\frac{\mu}{\gamma}}t\right]^{-\frac{\gamma}{\mu}}.$
Now, we present a set of auxiliary results to extend the approach of [10] to
the Lyapunov-Krasovskii framework. In Lemma 9, a choice of $\rho$ is made;
such a choice is always possible since $\alpha>1.$ Theorem 4 allows us to
pass from the bound on the set $S_{\alpha}$ to a bound which holds for all
solutions in a certain neighbourhood of the trivial one. The proofs are
omitted due to length limitations.
###### Lemma 8.
If $\|\varphi\|_{\mathscr{H}}<\Delta_{\alpha},$ then
$v(x_{t})<u(t),\quad t\geq 0.$
###### Lemma 9.
If the condition
$1+\rho h\left(\frac{\mu}{\gamma}\right)(\alpha_{1}+b_{3}\Delta_{\alpha}^{\mu})^{\frac{\mu}{\gamma}}\Delta_{\alpha}^{\mu}\leq\alpha^{\mu}$
(23)
holds, then $u(t+\theta)<\alpha^{\gamma}u(t)$ for all $t\geq 0$ and
$\theta\in[-h,0]$ such that $t+\theta\geq 0.$
###### Theorem 4.
If $\|\varphi\|_{\mathscr{H}}<\Delta_{\alpha}$ and inequality (23) holds, then
$\tilde{a}_{1}\|x(t,\varphi)\|_{r,p}^{\gamma}<u(t),\quad t\geq 0.$
Based on Lemmas 8, 9, and Theorem 4, we present the main result of the section.
###### Theorem 5.
Let Assumptions 1, 2 and inequality (23) hold. Then, the solutions of system
(1) corresponding to the initial functions with
$\|\varphi\|_{\mathscr{H}}<\Delta_{\alpha},$ where $\Delta_{\alpha}$ is
defined in Theorem 2, admit an estimate of the form (20) with
$\displaystyle\hat{c}_{1}$
$\displaystyle=\left(\frac{\alpha_{1}+b_{3}\Delta_{\alpha}^{\mu}}{\tilde{a}_{1}}\right)^{\frac{1}{\gamma}}=\frac{\delta}{\Delta_{\alpha}},$
$\displaystyle\hat{c}_{2}$
$\displaystyle=\frac{c}{b}\left(\frac{\mu}{\gamma}\right)\left(\frac{\alpha_{1}+b_{3}\Delta_{\alpha}^{\mu}}{2b\max\\{1,h\\}}\right)^{\frac{\mu}{\gamma}}.$
## VI Illustrative Example
Consider the following system, which is used to model complex interactions,
either instantaneous or delayed, occurring amongst transcription factors and
target genes [8]:
$\displaystyle\begin{split}\dot{x}_{1}(t)&=-\kappa_{1}x_{1}^{2}(t)+\lambda_{1}x_{2}(t-h),\\\
\dot{x}_{2}(t)&=-\kappa_{2}x_{2}^{3/2}(t)+\lambda_{2}x_{2}(t)x_{1}(t-h).\end{split}$
(24)
Here $x_{1}(t),x_{2}(t)\in\mathbb{R}^{+}$ represent interactions occurring in
a genetic network, $h>0$ is the transition delay in the network, and
$\kappa_{1},\kappa_{2},\lambda_{1},\lambda_{2}$ are positive parameters.
System (24) is $\delta^{r}$-homogeneous of degree $\mu=1$ with
$(r_{1},r_{2})=(1,2).$ Set $\gamma=4$ and consider the Lyapunov function
$V(x)=x_{1}^{4}+x_{2}^{2},$
which is positive definite. Its derivative along the trajectories of system
(24) when $h=0$ is of the form
$\displaystyle\dot{V}(x)=-4\kappa_{1}x_{1}^{5}+4\lambda_{1}x_{1}^{3}x_{2}-2\kappa_{2}x_{2}^{5/2}+2\lambda_{2}x_{2}^{2}x_{1}$
$\displaystyle\leq-2\min\\{2\kappa_{1},\kappa_{2}\\}(x_{1}^{5}+x_{2}^{5/2})+4\max\\{2\lambda_{1},\lambda_{2}\\}\|x(t)\|_{r,p}^{5}.$
Choosing $p=5$ for the homogeneous norm, we arrive at bound (3) with
$\mathrm{w}=2\min\\{2\kappa_{1},\kappa_{2}\\}-4\max\\{2\lambda_{1},\lambda_{2}\\}.$
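As a numerical sanity check, the pointwise bound $\dot{V}(x)\leq-\mathrm{w}\|x(t)\|_{r,p}^{5}$ can be verified on the positive orthant. The sketch below assumes the usual weighted homogeneous norm $\|x\|_{r,p}=(|x_{1}|^{p/r_{1}}+|x_{2}|^{p/r_{2}})^{1/p}$, which is standard but not restated in this excerpt:

```python
import random

# Spot-check of dV/dt <= -w * ||x||_{r,p}^5 for p = 5, (r1, r2) = (1, 2),
# and the example parameters of Section VI.  On the positive orthant,
# ||x||_{r,5}^5 = x1^5 + x2^{5/2} exactly.
kappa1, kappa2, lam1, lam2 = 9.0, 18.0, 0.25, 0.5
w = 2.0 * min(2.0 * kappa1, kappa2) - 4.0 * max(2.0 * lam1, lam2)  # = 34

random.seed(0)
ok = True
for _ in range(10000):
    x1, x2 = random.uniform(1e-6, 1.0), random.uniform(1e-6, 1.0)
    Vdot = (-4.0 * kappa1 * x1 ** 5 + 4.0 * lam1 * x1 ** 3 * x2
            - 2.0 * kappa2 * x2 ** 2.5 + 2.0 * lam2 * x2 ** 2 * x1)
    norm5 = x1 ** 5 + x2 ** 2.5                 # ||x||_{r,5}^5
    ok = ok and (Vdot <= -w * norm5 + 1e-12)    # small tolerance for rounding
print(w, ok)   # expect 34.0 and True
```

The inequality holds pointwise because $|x_{1}|\leq\|x\|_{r,p}$ and $|x_{2}|\leq\|x\|_{r,p}^{2}$, so each cross term is bounded by $\|x\|_{r,p}^{5}$.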
Compute the other constants: $m_{1}=\max\\{\kappa_{1},\lambda_{1}\\},$
$m_{2}=\kappa_{2}+\lambda_{2},$ $\eta_{11}=2\kappa_{1},$
$\eta_{12}=\eta_{21}=0,$ $\eta_{22}=\max\\{1.5\kappa_{2},\lambda_{2}\\},$
$\beta_{1}=4,$ $\beta_{2}=2,$ $\psi_{11}=12,$ $\psi_{12}=\psi_{21}=0,$
$\psi_{22}=2,$ $\alpha_{0}=1$ and $\alpha_{1}=2^{1/5}$.
Figure 1: Estimation of the solution of system (24) for $p=5$
For the parameters
$(\kappa_{1},\kappa_{2},\lambda_{1},\lambda_{2})=(9,18,0.25,0.5)$ and the
initial function $\varphi(\theta)=[5\cdot 10^{-11},5\cdot 10^{-11}]$,
$\theta\in[-10,0],$ the system response (continuous line) and the estimates
obtained via Theorem 3 (dashed line) and Theorem 5 (dash-dotted line) are
depicted in Figure 1. We conclude that the use of the set $S_{\alpha}$ yields
a tighter estimate than the classical approach.
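To give a feel for how the solution of system (24) can be propagated, the sketch below integrates it by a simple forward-Euler method of steps and tracks the homogeneous norm $\|x\|_{r,5}=(|x_{1}|^{5}+|x_{2}|^{5/2})^{1/5}$. This is an illustrative sketch only; Figure 1 was presumably produced with a more accurate DDE solver:

```python
# Forward-Euler "method of steps" for the delayed system (24) with the
# parameters of Section VI.  The constant history phi is stored in a buffer
# so that the delayed state x(t - h) can be read back nh steps earlier.
kappa1, kappa2, lam1, lam2 = 9.0, 18.0, 0.25, 0.5
h, dt, T = 10.0, 0.01, 100.0
nh = int(round(h / dt))

def hom_norm(x1, x2):
    """Weighted homogeneous norm for p = 5, (r1, r2) = (1, 2)."""
    return (abs(x1) ** 5 + abs(x2) ** 2.5) ** 0.2

# constant initial function phi(theta) = [5e-11, 5e-11] on [-h, 0]
hist1 = [5e-11] * (nh + 1)
hist2 = [5e-11] * (nh + 1)
norms = [hom_norm(hist1[-1], hist2[-1])]
for _ in range(int(T / dt)):
    x1, x2 = hist1[-1], hist2[-1]
    x1d, x2d = hist1[-nh - 1], hist2[-nh - 1]   # delayed values x(t - h)
    hist1.append(x1 + dt * (-kappa1 * x1 ** 2 + lam1 * x2d))
    hist2.append(x2 + dt * (-kappa2 * x2 ** 1.5 + lam2 * x2 * x1d))
    norms.append(hom_norm(hist1[-1], hist2[-1]))

print(norms[0], norms[-1])  # the norm stays small, consistent with Theorem 3
```

With the tiny initial function used here, the solution remains positive and well inside the attraction-region estimate over the whole integration window.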
## VII Conclusion
In this paper, we present a Lyapunov-Krasovskii functional for weighted
homogeneous time delay systems of positive degree and show its potential as an
analysis and design tool by computing the estimates of the domain of
attraction and of the system solutions.
## References
* [1] L. Rosier, “Homogeneous Lyapunov function for homogeneous continuous vector field,” _Systems and Control Letters_ , vol. 19, no. 6, pp. 467–473, 1992.
* [2] A. Bacciotti and L. Rosier, _Liapunov functions and stability in control theory_. Springer Science and Business Media, 2006.
* [3] V. Andrieu, L. Praly, and A. Astolfi, “Homogeneous approximation, recursive observer design, and output feedback,” _SIAM Journal on Control and Optimization_ , vol. 47, no. 4, pp. 1814–1850, 2008.
* [4] V. I. Zubov, _Methods of A.M. Lyapunov and their Application_. P. Noordhoff, 1964.
* [5] H. Hermes, “Homogeneous coordinates and continuous asymptotically stabilizing feedback controls,” _Differential Equations, Stability and Control_ , vol. 109, no. 1, pp. 249–260, 1991.
* [6] D. Efimov and W. Perruquetti, “Homogeneity for time-delay systems,” _IFAC Proceedings Volumes_ , vol. 44, no. 1, pp. 3861–3866, 2011.
* [7] D. Efimov, A. Polyakov, W. Perruquetti, and J.-P. Richard, “Weighted homogeneity for time-delay systems: Finite-time and independent of delay stability,” _IEEE Transactions on Automatic Control_ , vol. 61, no. 1, pp. 210–215, 2016.
* [8] D. Efimov, W. Perruquetti, and J.-P. Richard, “Global and local weighted homogeneity for time-delay systems,” in _Recent Results on Nonlinear Delay Control Systems_. Springer, 2016, pp. 163–181.
* [9] A. Y. Aleksandrov and A. P. Zhabko, “Delay-independent stability of homogeneous systems,” _Applied Mathematics Letters_ , vol. 34, pp. 43–50, 2014.
* [10] ——, “On the asymptotic stability of solutions of nonlinear systems with delay,” _Siberian Mathematical Journal_ , vol. 53, no. 3, pp. 393–403, 2012.
* [11] A. Y. Aleksandrov, G.-D. Hu, and A. P. Zhabko, “Delay-independent stability conditions for some classes of nonlinear systems,” _IEEE Transactions on Automatic Control_ , vol. 59, no. 8, pp. 2209–2214, 2014.
* [12] G. Portilla, I. Alexandrova, S. Mondié, and A. P. Zhabko, “Estimates for solutions of homogeneous time-delay systems: Comparison of Lyapunov-Krasovskii and Lyapunov-Razumikhin techniques,” _Systems and Control Letters, Submitted_ , 2020.
* [13] D. Efimov and A. Aleksandrov, “On estimation of rates of convergence in Lyapunov–Razumikhin approach,” _Automatica_ , vol. 116, p. 108928, 2020.
* [14] A. Aleksandrov, A. Zhabko, and V. Pecherskiy, “Complete type functionals for some classes of homogeneous differential-difference systems,” _Proc. 8th International Conference “Modern methods of applied mathematics, control theory and computer technology”_ , pp. 5–8 (in Russian), 2015.
* [15] A. P. Zhabko and I. V. Alexandrova, “Lyapunov direct method for homogeneous time delay systems,” _IFAC-PapersOnLine_ , vol. 52, no. 18, pp. 79–84, 2019.
* [16] V. Kharitonov and A. P. Zhabko, “Lyapunov-Krasovskii approach to the robust stability analysis of time-delay systems,” _Automatica_ , vol. 39, no. 1, pp. 15–20, 2003.
* [17] V. Kharitonov, _Time-delay systems: Lyapunov functionals and matrices_. Springer Science and Business Media, 2013.
* [18] A. Zhabko and I. Alexandrova, “Complete type functionals for homogeneous time delay systems,” _Automatica, Submitted_ , 2020.
* [19] G. Portilla, “Análisis de estabilidad de sistemas homogéneos en presencia de retardos,” _Master’s thesis, CINVESTAV IPN, Mexico_ , 2020.
* [20] I. V. Medvedeva and A. P. Zhabko, “Synthesis of Razumikhin and Lyapunov–Krasovskii approaches to stability analysis of time-delay systems,” _Automatica_ , vol. 51, pp. 372–377, 2015.
* [21] I. V. Alexandrova and A. P. Zhabko, “At the junction of Lyapunov-Krasovskii and Razumikhin approaches,” _IFAC-PapersOnLine_ , vol. 51, no. 14, pp. 147–152, 2018.
* [22] D. Efimov and A. Aleksandrov, “Analysis of robustness of homogeneous systems with time delays using Lyapunov-Krasovskii functionals,” _Int J Robust Nonlinear Control_ , pp. 1–17, 2020, DOI: 10.1002/rnc.5115.
* [23] V. I. Zubov, _Mathematical Methods for the Study of Automatical Control Systems_. Pergamon Press, Jerusalem Acad. Press, Oxford, Jerusalem, 1962.
* [24] J. K. Hale and S. M. V. Lunel, _Introduction to functional differential equations_. Springer Science Business Media, 2013, vol. 99.
* [25] H. K. Khalil, “Nonlinear systems,” _Prentice-Hall Inc_ , 1996.
¹ IMCCE, Observatoire de Paris, PSL Research University, CNRS, Sorbonne Université, Université de Lille, 75014 Paris, France; email: <EMAIL_ADDRESS>
² Department of Mathematics, University of Pisa, Largo Bruno Pontecorvo 5, 56127 Pisa, Italy
# The past and future obliquity of Saturn as Titan migrates
Melaine Saillenfest¹, Giacomo Lari², Gwenaël Boué¹, Ariane Courtot¹
(Received 12 November 2020 / Accepted 23 January 2021)
###### Abstract
Context. Giant planets are expected to form with near-zero obliquities. It has
recently been shown that the fast migration of Titan could be responsible for
the current $26.7^{\circ}$-tilt of Saturn’s spin axis.
Aims. We aim to quantify the level of generality of this result by measuring
the range of parameters allowing for this scenario to happen. Since Titan
continues to migrate today, we also aim to determine the obliquity that Saturn
will reach in the future.
Methods. For a large variety of migration rates for Titan, we numerically
propagated the orientation of Saturn’s spin axis both backwards and forwards
in time. We explored a broad range of initial conditions after the late
planetary migration, including both small and large obliquity values.
Results. In the adiabatic regime, the likelihood of reproducing Saturn’s
current spin-axis orientation is maximised for primordial obliquities between
about $2^{\circ}$ and $7^{\circ}$. For a slightly faster migration than
expected from radio-science experiments, non-adiabatic effects even allow for
exactly null primordial obliquities. Starting from such small tilts, Saturn’s
spin axis can evolve up to its current state provided that: _i)_ the semi-
major axis of Titan changed by more than $5\%$ of its current value since the
late planetary migration, and _ii)_ its migration rate does not exceed ten
times the nominal measured rate. In comparison, observational data suggest
that the increase in Titan’s semi-major axis exceeded $50\%$ over $4$ Gyrs,
and error bars imply that the current migration rate is unlikely to be larger
than $1.5$ times its nominal value.
Conclusions. If Titan did migrate substantially before today, tilting Saturn
from a small obliquity is not only possible, but it is the most likely
scenario. Saturn’s obliquity is still expected to be increasing today and
could exceed $65^{\circ}$ in the future. Maximising the likelihood would also
put strict constraints on Saturn’s polar moment of inertia. However, the
possibility remains that Saturn’s primordial obliquity was already large, for
instance as a result of a massive collision. The unambiguous distinction
between these two scenarios would be given by a precise measure of Saturn’s
polar moment of inertia.
###### Key Words.:
celestial mechanics, Saturn, secular dynamics, spin axis, obliquity
## 1 Introduction
The obliquity of a planet is the angle between its spin axis and the normal to
its orbit. In the protoplanetary disc, giant planets are expected to form with
near-zero obliquities (Ward & Hamilton, 2004; Rogoszinski & Hamilton, 2020).
After the formation of Saturn, some dynamical mechanism must therefore have
tilted its spin axis up to its current obliquity of $26.7^{\circ}$.
Ward & Hamilton (2004) showed that Saturn is currently located very close to a
secular spin-orbit resonance with the nodal precession mode of Neptune. This
resonance strongly affects Saturn’s spin axis today, and it offers a tempting
explanation for its current large obliquity. For years, the scenarios that
were most successful in reproducing Saturn’s current obliquity through this
resonance invoked the late planetary migration (Hamilton & Ward, 2004; Boué et
al., 2009; Vokrouhlický & Nesvorný, 2015; Brasser & Lee, 2015). However,
Saillenfest et al. (2021) have recently shown that this picture is
incompatible with the fast tidal migration of Titan detected by Lainey et al.
(2020) in two independent sets of observations – assuming that this migration
is not specific to the present epoch but went on over a substantial interval
of time. Indeed, satellites affect the spin-axis precession of their host
planets (see e.g. Ward, 1975; Tremaine, 1991; Laskar et al., 1993; Boué &
Laskar, 2006). Since the effect of a satellite depends on its orbital
distance, migrating satellites induce a long-term drift in the planet’s spin-
axis precession velocity. In the course of this drift, large obliquity
variations can occur if a secular spin-orbit resonance is encountered (i.e. if
the planet’s spin-axis precession velocity becomes commensurate with a
harmonic of its orbital precession). Because of this mechanism, dramatic
variations in the Earth’s obliquity are expected to take place in a few
billion years from now, as a result of the Moon’s migration (Néron de Surgy &
Laskar, 1997). Likewise, Jupiter’s obliquity is likely steadily increasing
today and could exceed $30^{\circ}$ in the next billions of years, as a result
of the migration of the Galilean satellites (Lari et al., 2020; Saillenfest et
al., 2020).
A significant migration of Saturn’s satellites implies that, contrary to
previous assumptions, Saturn’s spin-axis precession velocity was much smaller
in the past, precluding any resonance with an orbital frequency. The same
conclusion could also hold for Jupiter (Lainey et al., 2009; Lari et al.,
2020). In fact, Saillenfest et al. (2021) have shown that Titan’s migration
itself is likely responsible for the resonant encounter between Saturn’s spin
axis and the nodal precession mode of Neptune. Their results indicate that
this relatively recent resonant encounter could explain the current large
obliquity of Saturn starting from a small value, possibly less than
$3^{\circ}$. This new paradigm solves the problem of the low probability of
reproducing both the orbits and axis tilts of Jupiter and Saturn during the
late planetary migration (Brasser & Lee, 2015). However, it revokes the
concomitant constraints on the parameters of the late planetary migration.
The findings of Saillenfest et al. (2021) have been obtained through backward
integrations from Saturn’s current spin orientation, and by exploring
migration histories for Titan in the vicinity of the nominal scenario of
Lainey et al. (2020). However, observation uncertainties and our lack of
knowledge about the past evolution of Titan’s migration rate still allow for a
large variety of migration histories, and one can wonder whether the dramatic
influence of Titan is a generic result or whether it is restricted to the
range of parameters explored by Saillenfest et al. (2021). Moreover, even
though backward numerical integrations do prove that Titan’s migration is able
to raise Saturn’s obliquity, a statistical picture of the possible
trajectories that could have been followed is still missing. In this regard,
the likelihood of following a given dynamical pathway would be quite valuable,
because it could be used as a constraint to the parameters of the model, in
the spirit of Boué et al. (2009), Brasser & Lee (2015), and Vokrouhlický &
Nesvorný (2015).
For these reasons, we aim to explore the outcomes given by all conceivable
migration timescales for Titan, and to perform a statistical search for
Saturn’s past obliquity. This will provide the whole region of the parameter
space allowing Titan’s migration to be responsible for Saturn’s large
obliquity, with the corresponding probability. Finally, since Titan still
migrates today, Saturn’s obliquity could suffer from further large variations
in the future, in the same way as Jupiter (Saillenfest et al., 2020).
Therefore, we also aim to extend previous analyses to the future dynamics of
Saturn’s spin axis.
Our article is organised as follows. In Sect. 2, we recall the dynamical model
used by Saillenfest et al. (2021) and discuss the range of acceptable values
for the physical parameters of Saturn and its satellites. Sections 3 and 4 are
dedicated to the past spin-axis dynamics of Saturn: after having explored the
parameter space and quantified the importance of non-adiabaticity, we perform
Monte Carlo experiments to search for the initial conditions of Saturn’s spin
axis. In Sect. 5, we present our results about the obliquity values that will
be reached by Saturn in the future. Finally, our conclusions are summarised in
Sect. 6.
## 2 Secular dynamics of the spin axis
### 2.1 Equations of motion
In the approximation of rigid rotation, the spin-axis dynamics of an oblate
planet subject to the lowest-order term of the torque from the Sun is given
for instance by Laskar & Robutel (1993) or Néron de Surgy & Laskar (1997). Far
from spin-orbit resonances, and due to the weakness of the torque, the long-
term evolution of the spin axis is accurately described by the secular
Hamiltonian function (i.e. averaged over rotational and orbital motions). This
Hamiltonian can be written
$\displaystyle\mathcal{H}(X,-\psi,t)$
$\displaystyle=-\frac{\alpha}{2}\frac{X^{2}}{\big{(}1-e(t)^{2}\big{)}^{3/2}}$
(1)
$\displaystyle-\sqrt{1-X^{2}}\big{(}\mathcal{A}(t)\sin\psi+\mathcal{B}(t)\cos\psi\big{)}$
$\displaystyle+2X\mathcal{C}(t),$
where the conjugate coordinates are $X=\cos\varepsilon$ (cosine of obliquity)
and $-\psi$ (minus the precession angle). The Hamiltonian in Eq. (1)
explicitly depends on time $t$ through the orbital eccentricity $e$ of the
planet and through the functions
$\left\\{\begin{aligned}
\mathcal{A}(t)&=\frac{2\big{(}\dot{q}+p\,\mathcal{C}(t)\big{)}}{\sqrt{1-p^{2}-q^{2}}}\,,\\\
\mathcal{B}(t)&=\frac{2\big{(}\dot{p}-q\,\mathcal{C}(t)\big{)}}{\sqrt{1-p^{2}-q^{2}}}\,,\\\
\end{aligned}\right.\quad\text{and}\quad\mathcal{C}(t)=q\dot{p}-p\dot{q}\,.$
(2)
In these expressions, $q=\eta\cos\Omega$ and $p=\eta\sin\Omega$, where
$\eta\equiv\sin(I/2)$, and $I$ and $\Omega$ are the orbital inclination and
the longitude of ascending node of the planet, respectively. The quantity
$\alpha$ is called the precession constant. It depends on the spin rate of the
planet and on its mass distribution, through the formula:
$\alpha=\frac{3}{2}\frac{\mathcal{G}m_{\odot}}{\omega a^{3}}\frac{J_{2}}{\lambda}\,,$ (3)
where $\mathcal{G}$ is the gravitational constant, $m_{\odot}$ is the mass of
the sun, $\omega$ is the spin rate of the planet, $a$ is its semi-major axis,
$J_{2}$ is its second zonal gravity coefficient, and $\lambda$ is its
normalised polar moment of inertia. The parameters $J_{2}$ and $\lambda$ can
be expressed as
$J_{2}=\frac{2C-A-B}{2MR_{\mathrm{eq}}^{2}}\quad\text{and}\quad\lambda=\frac{C}{MR_{\mathrm{eq}}^{2}}\,,$
(4)
where $A$, $B$, and $C$ are the equatorial and polar moments of inertia of the
planet, $M$ is its mass, and $R_{\mathrm{eq}}$ is its equatorial radius.
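To fix orders of magnitude, Eq. (3) can be evaluated numerically. The sketch below uses rough public values for Saturn (these numbers are not taken from this article, and $\lambda$ in particular is only approximately known):

```python
import math

# Rough evaluation of the precession constant alpha of Eq. (3) for Saturn
# alone (no satellite contribution).  All numbers are approximate public
# values, used purely for an order-of-magnitude check.
GM_sun = 1.327e20                          # G * m_sun, m^3 s^-2
a = 1.43e12                                # Saturn's semi-major axis, m
omega = 2.0 * math.pi / (10.56 * 3600.0)   # spin rate for a ~10.56 h day, rad/s
J2 = 0.0163                                # second zonal gravity coefficient
lam = 0.22                                 # normalised polar moment of inertia (assumed)

alpha = 1.5 * GM_sun / (omega * a ** 3) * (J2 / lam)              # rad/s
alpha_arcsec = alpha * (365.25 * 86400.0) * math.degrees(1.0) * 3600.0
print(alpha_arcsec)  # roughly 0.2 arcsec/yr without the satellites
```

This bare value is several times smaller than the effective precession constant once the satellites are included, which is precisely why their contribution matters.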
The precession rate of the planet is increased if it possesses massive
satellites. Far-away satellites increase the torque exerted by the sun on the
equatorial bulge of the planet, whereas close-in satellites artificially
increase the oblateness and the rotational angular momentum of the planet
(Boué & Laskar, 2006). In the close-in regime, an expression for the effective
precession constant has been derived by Ward (1975). It has been generalised
by French et al. (1993) who included the effect of the non-zero orbital
inclinations of the satellites, as they oscillate around their local ‘Laplace
plane’ (see e.g. Tremaine et al., 2009). The effective precession constant is
obtained by replacing $J_{2}$ and $\lambda$ in Eq. (3) by the effective
values:
$\displaystyle J_{2}^{\prime}$
$\displaystyle=J_{2}+\frac{1}{2}\sum_{k}\frac{m_{k}}{M}\frac{a_{k}^{2}}{R_{\mathrm{eq}}^{2}}\frac{\sin(2\varepsilon-2L_{k})}{\sin(2\varepsilon)}\,,$
(5) $\displaystyle\lambda^{\prime}$
$\displaystyle=\lambda+\sum_{k}\frac{m_{k}}{M}\frac{a_{k}^{2}}{R_{\mathrm{eq}}^{2}}\frac{n_{k}}{\omega}\frac{\sin(\varepsilon-L_{k})}{\sin(\varepsilon)}\,,$
where $m_{k}$, $a_{k}$ and $n_{k}$ are the mass, the semi-major axis, and the
mean motion of the $k$th satellite, $\varepsilon$ is the obliquity of the
planet, and $L_{k}$ is the inclination of the Laplace plane of the $k$th
satellite with respect to the planet’s equator. For regular satellites,
$L_{k}$ lies between $0$ (close-in satellite) and $\varepsilon$ (far-away
satellite). The formulas of French et al. (1993) given by Eq. (5) are valid
whatever the distance of the satellites, and they closely match the general
precession solutions of Boué & Laskar (2006). We can also verify that the
small eccentricities of Saturn’s major satellites do not contribute
substantially to $J_{2}^{\prime}$ and $\lambda^{\prime}$, allowing us to
neglect them.
Because of its large mass, Titan is by far the satellite that contributes most
to the value of $\alpha$. Therefore, even though its Laplace plane is not much
inclined, taking its inclination into account changes the satellites’ total
contribution by several percent (this point was missed by French et al. (1993),
who only included the inclination contribution of Iapetus). Tremaine et
al. (2009) give a closed-form expression for $L_{k}$ in the regime $m_{k}\ll$
M$, where all other satellites $j$ with $a_{j}<a_{k}$ are also taken into
account. The values obtained for Titan ($L_{6}\approx 0.62^{\circ}$) and
Iapetus ($L_{8}\approx 16.03^{\circ}$) are very close to those found in the
quasi-periodic decomposition of their ephemerides (see e.g. Vienne & Duriez,
1995). The inclinations $L_{k}$ of the other satellites of Saturn do not
contribute substantially to the value of $\alpha$.
Even though the value of $\alpha$ computed using Eq. (5) yields an accurate
value of the current mean spin-axis precession velocity of Saturn as
$\dot{\psi}=\alpha X/(1-e^{2})^{3/2}$, it cannot be directly used to propagate
the dynamics using the Hamiltonian function in Eq. (1), because $\alpha$ would
itself be a function of $X$, which contradicts the Hamiltonian formulation.
For this reason, authors generally assume that $\alpha$ only weakly depends on
$\varepsilon$, such that the satellites’ contributions can be considered to be
fixed while $\varepsilon$ varies according to Hamilton’s equations of motion
(see e.g. Ward & Hamilton, 2004; Boué et al., 2009; Vokrouhlický & Nesvorný,
2015; Brasser & Lee, 2015). In our case, Titan largely dominates the
satellites’ contribution, and Titan is almost in the close-in regime
$(L_{6}\ll\varepsilon)$. We can therefore use the same trick as Saillenfest et
al. (2021) and replace Eq. (5) by
$\tilde{J}_{2}=J_{2}+\frac{1}{2}\frac{\tilde{m}_{6}}{M}\frac{a_{6}^{2}}{R_{\mathrm{eq}}^{2}}\,,\quad\text{and}\quad\tilde{\lambda}=\lambda+\frac{\tilde{m}_{6}}{M}\frac{a_{6}^{2}}{R_{\mathrm{eq}}^{2}}\frac{n_{6}}{\omega}\,,$
(6)
where only Titan is considered ($k=6$), in the close-in regime ($L_{6}=0$),
and where its mass $m_{6}$ has been slightly increased ($\tilde{m}_{6}\approx
1.04\,m_{6}$) so as to produce the exact same value of $\alpha$ today using
Eq. (6) instead of Eq. (5). This slight increase in Titan’s mass has no
physical meaning; it is only used here to provide the right connection between
$\lambda$ and today’s value of $\alpha$. This point is further discussed in
Sect. 2.3.
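As an illustration of Eqs. (5)–(6), the sketch below evaluates the Titan-only effective coefficients $\tilde{J}_{2}$ and $\tilde{\lambda}$ and the resulting precession constant. It assumes the standard form $\alpha=\tfrac{3}{2}(n^{2}/\omega)\,J_{2}^{\prime}/\lambda^{\prime}$ for Eq. (3), which is not reproduced in this section, and uses rounded illustrative constants, so the result only approximates the values derived later in Sect. 2.3:

```python
import math

# Rounded illustrative constants (the paper uses Archinal et al. 2018,
# Iess et al. 2019, and the SAT427 ephemerides for the actual values).
J2      = 0.016290                 # Saturn's J2
lam     = 0.220                    # normalised polar moment C/(M Req^2)
m6_M    = 2.367e-4                 # Titan-to-Saturn mass ratio
a6_Req  = 1.22187e9 / 6.0268e7     # Titan's semi-major axis in Saturn radii
P_spin  = 10.66 / 8766.15          # Saturn's spin period (yr)
P_orb   = 29.457                   # Saturn's orbital period (yr)
P_titan = 15.945 / 365.25          # Titan's orbital period (yr)

omega = 2 * math.pi / P_spin       # spin rate (rad/yr)
n     = 2 * math.pi / P_orb        # heliocentric mean motion (rad/yr)
n6    = 2 * math.pi / P_titan      # Titan's mean motion (rad/yr)

# Eq. (6): Titan-only effective coefficients, with the slightly increased
# mass (m6 ~ 1.04 m6) standing in for the remaining satellites.
m6_eff  = 1.04 * m6_M
J2_eff  = J2  + 0.5 * m6_eff * a6_Req**2
lam_eff = lam + m6_eff * a6_Req**2 * (n6 / omega)

# Assumed standard precession-constant expression (cf. Eq. 3, not shown):
alpha = 1.5 * (n**2 / omega) * J2_eff / lam_eff      # rad/yr
alpha_arcsec = alpha * 180 / math.pi * 3600

# Current mean spin-axis precession rate (Sect. 2.1): alpha X / (1-e^2)^(3/2)
eps, e = math.radians(26.727), 0.0565
psi_dot = alpha_arcsec * math.cos(eps) / (1 - e**2) ** 1.5

print(f"alpha ~ {alpha_arcsec:.2f} arcsec/yr, psi_dot ~ {psi_dot:.2f} arcsec/yr")
```

With these rounded inputs, $\alpha$ comes out near $0.8^{\prime\prime}\,$yr-1, consistent with the interval derived in Sect. 2.3.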
### 2.2 Orbital solution
The Hamiltonian function in Eq. (1) depends on the orbit of the planet and on
its temporal variations. In order to explore the long-term dynamics of
Saturn’s spin axis, we need an orbital solution that is valid over billions of
years. In the same way as Saillenfest et al. (2020), we use the secular
solution of Laskar (1990) expanded in quasi-periodic series:
$\displaystyle z=e\exp(i\varpi)$
$\displaystyle=\sum_{k}E_{k}\exp(i\theta_{k})\,,$ (7)
$\displaystyle\zeta=\eta\exp(i\Omega)$
$\displaystyle=\sum_{k}S_{k}\exp(i\phi_{k})\,,$
where $\varpi$ is Saturn’s longitude of perihelion. The amplitudes $E_{k}$ and
$S_{k}$ are real constants, and the angles $\theta_{k}$ and $\phi_{k}$ evolve
linearly over time $t$ with frequencies $\mu_{k}$ and $\nu_{k}$:
$\theta_{k}(t)=\mu_{k}\,t+\theta_{k}^{(0)}\quad\text{and}\quad\phi_{k}(t)=\nu_{k}\,t+\phi_{k}^{(0)}\,.$ (8)
See Appendix A for the complete orbital solution of Laskar (1990).
The series in Eq. (7) contain contributions from all the planets of the Solar
System. In the integrable approximation, the frequency of each term
corresponds to a unique combination of the fundamental frequencies of the
system, usually noted $g_{j}$ and $s_{j}$. In the limit of small masses, small
eccentricities and small inclinations (Lagrange-Laplace secular system), the
$z$ series only contains the frequencies $g_{j}$, while the $\zeta$ series
only contains the frequencies $s_{j}$ (see e.g. Murray & Dermott, 1999 or
Laskar et al., 2012). This is not the case in more realistic situations. Table
1 shows the combinations of fundamental frequencies identified for the largest
terms of Saturn’s $\zeta$ series obtained by Laskar (1990).
Table 1: First twenty terms of Saturn’s inclination and longitude of ascending
node in the J2000 ecliptic and equinox reference frame.
$\begin{array}[]{rcrrr}\hline\cr\hline\cr k&\text{identification}&\nu_{k}\
(^{\prime\prime}\,\text{yr}^{-1})&S_{k}\times 10^{8}&\phi_{k}^{(0)}\
(^{\text{o}})\\\ \hline\cr 1&s_{5}&0.00000&1377395&107.59\\\
2&s_{6}&-26.33023&785009&127.29\\\ 3&s_{8}&-0.69189&55969&23.96\\\
4&s_{7}&-3.00557&39101&140.33\\\ 5&g_{5}-g_{6}+s_{7}&-26.97744&5889&43.05\\\
6&2g_{6}-s_{6}&82.77163&3417&128.95\\\
7&g_{5}+g_{6}-s_{6}&58.80017&2003&212.90\\\
8&2g_{5}-s_{6}&34.82788&1583&294.12\\\ 9&s_{1}&-5.61755&1373&168.70\\\
10&s_{4}&-17.74818&1269&123.28\\\
11&-g_{5}+g_{7}+s_{6}&-27.48935&1014&218.53\\\
12&g_{5}-g_{7}+s_{6}&-25.17116&958&215.94\\\
13&g_{5}-g_{6}+s_{6}&-50.30212&943&209.84\\\
14&g_{5}-g_{7}+s_{7}&-1.84625&943&35.32\\\
15&-g_{5}+g_{6}+s_{6}&-2.35835&825&225.04\\\
16&-g_{5}+g_{7}+s_{7}&-4.16482&756&51.51\\\ 17&s_{2}&-7.07963&668&273.79\\\
18&-g_{6}+g_{7}+s_{7}&-28.13656&637&314.07\\\
19&g_{7}-g_{8}+s_{7}&-0.58033&544&17.32\\\
20&s_{1}+\gamma&-5.50098&490&162.89\\\ \hline\cr\end{array}$
Note: due to the secular resonance $(g_{1}-g_{5})-(s_{1}-s_{2})$, an additional fundamental frequency $\gamma$ appears in term 20 (see Laskar, 1990).
As explained by Saillenfest et al. (2019), at first order in the amplitudes
$S_{k}$ and $E_{k}$, secular spin-orbit resonant angles can only be of the
form $\sigma_{p}=\psi+\phi_{p}$, where $p$ is a given index in the $\zeta$
series. Resonances featuring terms of the $z$ series only appear at third
order and beyond. For the giant planets of the Solar System, the existing
secular spin-orbit resonances are small and isolated from each other, and only
first-order resonances play a substantial role (see e.g. Saillenfest et al.,
2020).
Figure 1 shows the location and width of every first-order resonance for the
spin-axis of Saturn in an interval of precession constant $\alpha$ ranging
from $0$ to $2^{\prime\prime}\,$yr-1. Because of the chaotic dynamics of the
Solar System (Laskar, 1989), the fundamental frequencies related to the
terrestrial planets (e.g. $s_{1}$, $s_{2}$, $s_{4}$, and $\gamma$ appearing in
Table 1) could vary substantially over billions of years (Laskar, 1990).
However, they only marginally contribute to Saturn’s orbital solution and none
of them takes part in the resonances shown in Fig. 1. Our secular orbital
solution for Saturn can therefore be considered valid since the late planetary
migration, which presumably ended at least $4$ Gyrs ago (see e.g. Nesvorný &
Morbidelli, 2012; Deienno et al., 2017; Clement et al., 2018). For this
reason, we consider in all this article a maximum timespan of $4$ Gyrs in the
past. As shown by Saillenfest et al. (2021), this timespan is more than enough
for Saturn to relax to its primordial obliquity value. Our results are
therefore independent of this choice, unless one considers a much slower
migration rate for Titan than observed today. This last case is discussed in
Sect. 3.2.
Figure 1: Location and width of every first-order secular spin-orbit resonance
for Saturn. Each resonant angle is of the form $\sigma_{p}=\psi+\phi_{p}$
where $\phi_{p}$ has frequency $\nu_{p}$ labelled on the graph according to
its index in the orbital series (see Table 1 and Appendix A). For a given
value of the precession constant $\alpha$, the interval of obliquity enclosed
by the separatrix is shown in pink, as computed using the formulas of
Saillenfest et al. (2019). The green bar shows Saturn’s current obliquity and
the range for its precession constant considered in this article, as detailed
in Sects. 2.3 and 2.4.
### 2.3 Precession constant
As shown by the Hamiltonian function in Eq. (1), the precession constant
$\alpha$ is a key parameter of the spin-axis dynamics of a planet. The
physical parameters of Saturn that enter into its expression (see Eq. 3) are
all very well constrained from observations, except the normalised polar
moment of inertia $\lambda$. Indeed, the gravitational potential measured by
spacecrafts only provides differences between the moments of inertia (e.g. the
coefficient $J_{2}$). In order to obtain the individual value of a single
moment of inertia, one would need to detect the precession of the spin axis or
the Lense–Thirring effect, as explained for instance by Helled et al. (2011).
Such measurements are difficult considering the limited timespan covered by
space missions. To our knowledge, the most accurate estimate of Saturn’s polar
motion, including decades of astrometric observations and _Cassini_ data, is
given by French et al. (2017). However, their estimate is still not accurate
enough to bring any decisive constraint on Saturn’s polar moment of inertia.
Moreover, since the observed polar motion of Saturn is affected by many short-
period harmonics, it cannot be directly linked to the secular spin-axis
precession rate $\dot{\psi}$ discussed in this article. Removing short-period
harmonics from the observed signal would require an extensive modelling that
is not yet available. Even though some attempts to compute a secular trend
from Saturn’s spin-axis observations have been reported (such as the unpublished
results of Jacobson cited by Vokrouhlický & Nesvorný, 2015), we must still
rely on theoretical values of $\lambda$.
As pointed out by Saillenfest et al. (2020), one must be careful about the
normalisation used for $\lambda$. Here, we adopt $R_{\text{eq}}=60268$ km by
convention and we renormalise each quantity in Eqs. (4) and (5) accordingly.
Many different values of $\lambda$ can be found in the literature. Under basic
assumptions, Jeffreys (1924) obtained a value of $0.198$. This value is
smaller than other estimates found in the literature, even though it is
marginally compatible with the calculations of Hubbard & Marley (1989), who
gave $\lambda=0.22037$ with a $10\%$ uncertainty. The latter value and its
uncertainty have been reused by many authors afterwards, including French et
al. (1993) and Ward & Hamilton (2004). Later on, Helled et al. (2009) obtained
values of $\lambda$ ranging between $0.207$ and $0.210$. From an exploration
of the parameter space, Helled (2011) then found $\lambda\in[0.200,0.205]$,
but the normalisation used in this article is ambiguous (even though Saturn’s mean radius is explicitly mentioned by Helled (2011), her values are cited by Nettelmann et al. (2013) as having been normalised using the equatorial radius instead, according to a ‘personal communication’). The computations of
Nettelmann et al. (2013) yielded yet another range for $\lambda$, estimated to
lie in $[0.219,0.220]$. Among the alternative models proposed by Vazan et al.
(2016), values of $\lambda$ are found to range between $0.222$ and $0.228$.
Finally, Movshovitz et al. (2020) used a new fitting technique supposed to be
less model-dependent, and obtained $\lambda\in[0.2204,0.2234]$ at the
$3\sigma$ error level (assuming that their values are normalised using
$R_{\mathrm{eq}}$, which is not specified in the article). In the review of
Fortney et al. (2018) focussing on the better knowledge of Saturn’s internal
structure brought by the _Cassini_ mission, the authors go back to a value of
$\lambda$ equal to $0.22\pm 10\%$. A value of $0.22$ is also quoted in the
review of Helled (2018).
Here, instead of relying on one particular estimate of $\lambda$, we turn to
the exploration of the whole range of values given in the literature, which is
slightly larger than $\lambda\in[0.200,0.240]$. The spin velocity of Saturn is
taken from Archinal et al. (2018) and its $J_{2}$ from Iess et al. (2019). For
consistency with Saturn’s orbital solution (Sect. 2.2), we take its mass and
secular semi-major axis from Laskar (1990).
In order to compute $J_{2}^{\prime}$ and $\lambda^{\prime}$ in Eq. (5), we
need the masses and orbital elements of Saturn’s satellites. We take into
account the eight major satellites of Saturn and use the masses of the SAT427
numerical ephemerides (https://ssd.jpl.nasa.gov/). These ephemerides are then
digitally filtered in order to obtain the secular semi-major axes. The
inclination $L_{k}$ of the Laplace plane of each satellite is computed using
the formula of Tremaine et al. (2009). Taking $\lambda$ within its exploration
interval, the current value of Saturn’s precession constant, computed from
Eqs. (3) and (5), ranges from $0.747$ to $0.894^{\prime\prime}\,$yr-1. The
corresponding adjusted mass of Titan in Eq. (6) is $\tilde{m}_{6}\approx
1.04\,m_{6}$. Similar results are obtained when using the more precise values
of $L_{k}$ given by the constant terms of the full series of Vienne & Duriez
(1995) and Duriez & Vienne (1997).
Because of tidal dissipation, satellites migrate over time. This produces a
drift of the precession constant $\alpha$ on a timescale that is much larger
than the precession motion (i.e. the circulation of $\psi$). The long-term
spin-axis dynamics of a planet with migrating satellites is described by the
Hamiltonian in Eq. (1) where $\alpha$ is a slowly-varying function of time.
Since Titan is in the close-in regime, its outward migration produces an
increase in $\alpha$. The migration rate of Titan recently measured by Lainey
et al. (2020) supports the tidal theory of Fuller et al. (2016), through which
the time evolution of Titan’s semi-major axis can be expressed as
$a_{6}(t)=a_{0}\left(\frac{t}{t_{0}}\right)^{b}\,,$ (9)
where $a_{0}$ is Titan’s current mean semi-major axis, $t_{0}$ is Saturn’s
current age, and $b$ is a real parameter (see Lainey et al., 2020). Even
though Eq. (9) only provides a crude model for Titan’s migration, the
parameter $b$ can be directly linked to the observed migration rate,
independently of whether Eq. (9) is valid or not (in the latter case, $b$ should be considered a non-constant quantity, and what we measure today would only be its current value). Equation (9) implies that Titan’s current
tidal timescale $t_{\mathrm{tide}}=a_{6}/\dot{a}_{6}$ relates to $b$ as
$b=t_{0}/t_{\mathrm{tide}}$. Considering a $3\sigma$ error interval, the
astrometric measurements of Lainey et al. (2020) yield values of $b$ ranging
in $[0.18,1.71]$, while their radio-science experiments yield values ranging
in $[0.34,0.49]$. For the long-term evolution of Saturn’s satellites, they
adopt a nominal value of $b_{0}=1/3$, which roughly matches the observed
migration of all satellites studied. Using this nominal value, we obtain a
drift of the precession constant $\alpha$ as depicted in Fig. 2. Taking $b$ as a free parameter, a migration $n$ times faster for Titan is obtained by using $b=n\,b_{0}$ in Eq. (9). The corresponding evolution of Titan’s semi-
major axis is illustrated in Fig. 3.
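A short sketch of the migration law in Eq. (9) follows; here $a_{0}$ (Titan’s current semi-major axis) and $t_{0}$ (Saturn’s age) are rounded assumed values, and $b$ defaults to the nominal $b_{0}=1/3$:

```python
# Illustrative sketch of Titan's migration law, Eq. (9):
#   a6(t) = a0 (t/t0)^b,  with  b = t0 / t_tide.
a0 = 1.22187e9        # Titan's current mean semi-major axis (m), rounded
t0 = 4.5e9            # Saturn's current age (yr), rounded
b0 = 1.0 / 3.0        # nominal migration parameter (Lainey et al. 2020)

def a6(t, b=b0):
    """Titan's semi-major axis at Saturn's age t (yr) for parameter b."""
    return a0 * (t / t0) ** b

# Current migration rate da/dt = b a0 / t0, converted to cm/yr:
dadt_cm = b0 * a0 / t0 * 100
print(f"da/dt today ~ {dadt_cm:.1f} cm/yr")

# Titan's position 4 Gyr in the past (after the late planetary migration):
ratio_4Gyr = a6(t0 - 4e9) / a0
print(f"a6(4 Gyr ago) / a0 ~ {ratio_4Gyr:.2f}")
```

With these inputs the current rate is of order $10$ cm yr-1, comparable by construction to the nominal rate of Lainey et al. (2020), and Titan’s semi-major axis $4$ Gyr ago comes out at roughly half its current value.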
Figure 2: Evolution of the effective precession constant of Saturn due to the
migration of Titan (adapted from Saillenfest et al., 2021). The top and bottom
green curves correspond to the two extreme values of the normalised polar
moment of inertia $\lambda$ considered in this article. They enter
$\alpha$ through Eq. (3). Both curves are obtained using the nominal value
$b=1/3$ in Eq. (9). Today’s interval corresponds to the one shown in Fig. 1;
it is independent of the value of $b$ considered. The blue line shows
Neptune’s nodal precession mode, which was higher before the end of the late
planetary migration. Figure 3: Time evolution of Titan’s semi-major axis for
different migration rates. The pink and blue intervals show the $3\sigma$
uncertainty ranges of astrometric and radio-science measurements, respectively
(Lainey et al., 2020). The coloured curves are obtained by varying the
parameter $b$ in Eq. (9).
As mentioned by Saillenfest et al. (2020), other parameters in Eq. (3)
probably slightly vary over billions of years, such as the spin velocity of
Saturn or its oblateness. We consider that the impact of their variations is
small compared to the effect of Titan’s migration (see Fig. 2) and contained
within our exploration range. Moreover, all satellites, and not only Titan,
migrate over time. However, since Titan is so much more massive, its fast migration is by far the dominant cause of the drift of $\alpha$. Since its exact migration rate is still uncertain (see Fig. 3), this justifies our choice to include only Titan in Eq. (6); the use of its slightly increased mass $\tilde{m}_{6}$ nevertheless allows us to obtain the right value of today’s precession constant $\alpha$, as if all satellites were included.
### 2.4 Current spin orientation
The initial orientation of Saturn’s spin axis is taken from the solution of
Archinal et al. (2018) averaged over short-period terms. With respect to
Saturn’s secular orbital solution (see Sect. 2.2), this gives an obliquity
$\varepsilon=26.727^{\circ}$ and a precession angle $\psi=6.402^{\circ}$ at
time J2000. The uncertainty on these values is extremely small compared to the
range of $\alpha$ considered (see Sect. 2.3).
## 3 The past obliquity of Saturn: Exploration of the parameter space
### 3.1 Overview of possible trajectories
From the results of their backward numerical integrations, Saillenfest et al.
(2021) find that Saturn can have evolved through distinct kinds of evolution,
which had previously been described by Ward & Hamilton (2004). These different
kinds of evolution are set by the outcomes of the resonant encounter between
Saturn’s spin-axis precession and the nodal precession mode of Neptune (term
$\phi_{3}$ in Table 1 and largest resonance in Fig. 1). The four possible
types of past evolution are illustrated in Fig. 4 for $b=b_{0}$. They are
namely:
* •
Type 1: For $\lambda\leqslant 0.220$, Saturn went past the resonance through
its hyperbolic point.
* •
Type 2: For $\lambda\in(0.220,0.224)\cup(0.237,0.241)$, Saturn was captured
recently by crossing the separatrix of the resonance and followed the drift of
its centre afterwards.
* •
Type 3: For $\lambda\in[0.224,0.237]$, the separatrix of the resonance appeared
around Saturn’s trajectory resulting in a $100\%$-sure capture at low
obliquity. Saturn followed the drift of its centre afterwards.
* •
Type 4: For $\lambda\geqslant 0.241$, Saturn has not yet reached the resonance.
Figure 5 shows the current oscillation interval of Saturn’s spin axis in all
four cases. Trajectories of Type 3 are those featuring the smallest libration
amplitude of the resonant angle $\sigma_{3}$ and allowing for the smallest
past obliquity of Saturn. Type 4 is ruled out by our uncertainty range for
$\lambda$.
Figure 4: Examples illustrating the four different types of past obliquity
evolution of Saturn. Each graph shows a $4$-Gyr numerical trajectory (black
dots) computed for Titan’s nominal migration rate and for a given value of
Saturn’s normalised polar moment of inertia $\lambda=C/(MR_{\mathrm{eq}}^{2})$
specified in title. Today’s location of Saturn is represented by the big green
spot; the vertical error bar corresponds to our full exploration interval of
$\lambda$. The red curves show the centre of first-order secular spin-orbit
resonances (Cassini state 2) and the coloured areas represent their widths
(same as Fig. 1). The top large area is the resonance with $\phi_{3}$ and the
bottom thin area is the resonance with $\phi_{19}$ (see Table 1). The
separatrices of the $\phi_{3}$ resonance are highlighted in blue. Going
forwards in time, the trajectories go from bottom to top. Figure 5: Current
dynamics of Saturn’s spin axis according to its normalised polar moment of
inertia $\lambda$. The value of $\lambda$ (top horizontal axis) is linked to
the current precession constant of Saturn (bottom horizontal axis) through
Eqs. (3) and (5). The black interval shows the ‘instantaneous’ oscillation
range of Saturn’s spin axis (i.e. without drift of $\alpha$) obtained by
numerical integration. The resonant angle is $\sigma_{3}=\psi+\phi_{3}$ (see
Sect. 2). The green line shows Saturn’s current obliquity and resonant angle.
The background colour indicates the type of past evolution as labelled in the
top panel (see text for the numbering).
During its past evolution, Saturn also crossed a first-order secular spin-
orbit resonance with the term $\phi_{19}$ which has frequency
$g_{7}-g_{8}+s_{7}$ (see Table 1). As shown in Fig. 4, however, this did not
produce any noticeable change in obliquity for Saturn. Indeed, since this
resonance is very small, the oscillation timescale of
$\sigma_{19}=\psi+\phi_{19}$ inside the resonance is dramatically longer than
the duration of the resonance crossing. This results in a short non-adiabatic
crossing. The difference of oscillation timescales of $\sigma_{3}$ and
$\sigma_{19}$ can be appreciated in Fig. 6. It explains why these two resonances have such dissimilar influences on Saturn’s spin-axis dynamics. This
phenomenon has been further discussed by Saillenfest et al. (2020) in the case
of Jupiter.
Figure 6: Period of small oscillations about the resonance centre for a
resonance with $\phi_{3}$ or $\phi_{19}$. The resonant angles are
$\sigma_{3}=\psi+\phi_{3}$ and $\sigma_{19}=\psi+\phi_{19}$, respectively.
Dashed curves are used for oscillations about Cassini state 2 before the
separatrix appears. The appearance of the separatrix is marked by a blue dot.
### 3.2 Adiabaticity of Titan’s migration
If the drift of $\alpha$ over time was perfectly adiabatic (i.e. infinitely
slow compared to the oscillations of $\sigma_{3}$), the outcome of the
dynamics would not depend on the exact migration rate of Titan; the latter
would only affect the evolution timescale. In the vicinity of Titan’s nominal
migration rate, Saillenfest et al. (2021) show that the drift of $\alpha$ is
almost an adiabatic process. Here, we extend the analysis to a larger interval
of migration rates in order to determine the limits of the adiabatic regime.
Figure 7 shows Saturn’s obliquity $4$ Gyrs in the past obtained by backward
numerical integrations for different migration rates of Titan and using values
of $\lambda$ finely sampled in its exploration interval. Migration rates
comprised between the red and magenta curves are compatible with the
astrometric measurements of Lainey et al. (2020), and migration rates
comprised between the blue and green curves are compatible with their radio-
science experiments (same colour code as in Fig. 3). As argued by Saillenfest
et al. (2021), Titan’s migration may have been sporadic, in which case $b$
would vary with time and the result would roughly correspond to a mix of
several panels in Fig. 7. However, because of our current lack of knowledge
about tidal dissipation mechanisms, refined evolution scenarios would only be
speculative at this stage.
Figure 7: Past obliquity of Saturn for different migration rates of Titan. The
top and bottom horizontal axes are the same as in Fig. 5 and the horizontal
green line shows Saturn’s current obliquity. For a given value of the
normalised polar moment of inertia $\lambda$ (top horizontal axis), the curve
width shows the oscillation range of obliquity $4$ Gyrs in the past obtained
by backward numerical integration. The migration rates are labelled on each
panel as a fraction of the nominal rate of Lainey et al. (2020). The four
coloured curves correspond to the migration rates illustrated in Fig. 3. The
grey stripes in the central panel highlight trajectories of Type 2 (same as in
Fig. 5). The value of $b$ in the top left panel corresponds to a current
quality factor $Q$ equal to $5000$ (see Lainey et al., 2020).
The blue curve of Fig. 7 confirms that the nominal migration rate of Lainey et
al. (2020) is close to the adiabatic regime, since smaller rates give very
similar results (see the curves for a migration two times and four times
slower). Non-adiabatic signatures are only substantial in the grey areas, that
is, for recently captured trajectories that crossed the resonance separatrix
(evolution Type 2). Indeed, the teeth-shaped structures are due to ‘phase
effects’, meaning that the precise outcome depends on the value of the
resonant angle $\sigma_{3}$ during the separatrix crossing. For smaller
migration rates, these structures pack together and tend to a smooth interval
(that would be reached for perfect adiabaticity). If the migration of Titan is
very slow, however, our $4$-Gyr backward integrations stop while Saturn is
still close to the resonance, or even inside it. The curves obtained for
$b\lesssim 1/7\,b_{0}$ do not have enough time to completely relax from their
initial shape shown in Fig. 5. This means that if, as argued by Saillenfest et
al. (2021), Titan is responsible for Saturn’s current large obliquity, its
migration cannot have been arbitrarily slow. Historical tidal models used to
predict very small migration rates, as in the top left panel of Fig. 7. Such
small migration rates are unable to noticeably affect Saturn’s obliquity over
the age of the Solar System. This explains why previous studies considered
that Saturn’s precession constant remained approximately constant since the
late planetary migration (Boué et al., 2009; Brasser & Lee, 2015; Vokrouhlický
& Nesvorný, 2015). Figure 7 shows that for $\lambda\in[0.200,0.240]$, near-
zero past obliquities can be achieved only if $b\gtrsim 1/16\,b_{0}$, that is,
if Titan migrated by at least $1$ $R_{\text{eq}}$ after the late planetary
migration. This condition is definitely achieved in the whole error ranges
given by Lainey et al. (2020), provided that Titan’s migration did go on over
a significant amount of time. Assuming that $b=b_{0}$, Titan should have
migrated for at least several hundred million years before today in
order for its semi-major axis to have changed by more than $1$
$R_{\text{eq}}$. On the contrary, no substantial obliquity variation could be
produced if Titan only began migrating very recently (less than a few hundred million years ago) and remained unmoved before that. As mentioned by
Saillenfest et al. (2021), this extreme possibility appears unlikely but
cannot be ruled out yet.
When we increase Titan’s migration rate above its nominal value, Fig. 7 shows
that the adiabatic nature of the drift of $\alpha$ is gradually destroyed. For
$b=3b_{0}$, phase effects become very strong and distort the whole picture.
The magenta curve (which marks the limit of the $3\sigma$ error bar of Lainey
et al., 2020) shows that the non-adiabaticity allows for a past obliquity of
Saturn equal to exactly $0^{\circ}$. Such a null value is obtained when the
oscillation phase of $\sigma_{3}$ brings Saturn’s obliquity to zero exactly
together with Cassini state 2. This configuration can only happen for finely
tuned values of the parameters, which is why putting a primordial obliquity
$\varepsilon\approx 0^{\circ}$ as a prerequisite places such strong constraints on the allowed parameter range (Brasser & Lee, 2015; Vokrouhlický & Nesvorný,
2015).
If the resonance crossing is too fast, however, the resonant angle
$\sigma_{3}$ does not have enough time to oscillate before escaping the resonance.
As a result, Saturn’s spin-axis can only follow the drift of the resonance
centre during a very limited amount of time, and only a moderate obliquity
kick is possible. As discussed in Sect. 3.1, this is what happens for the thin
resonance with $\phi_{19}$. In Fig. 7, the effect of overly fast crossings is
clearly visible for $b\gtrsim 11b_{0}$. Beyond this approximate limit, all
trajectories in our backward integrations cross the resonance separatrix,
which means that trajectories of Type 3 are impossible and no small past
obliquity can be obtained.
Figure 8 summarises all values of Saturn’s past obliquity obtained in our
backward integrations as a function of Titan’s migration rate and Saturn’s
polar moment of inertia. Non-adiabaticity is revealed by the coloured waves,
denoting phase effects. As expected, the waves disappear for $b\lesssim
b_{0}$: this is the adiabatic regime (see Fig. 4 of Saillenfest et al., 2021
for a zoom-in view). For very small migration rates, however, Titan would not
have time in $4$ Gyrs to migrate enough to produce substantial effects on
Saturn’s obliquity. This is why the dark-blue region in Fig. 8 does not reach
$b=0$. For too fast migration rates, on the contrary, the resonance crossing
is so brief that it can only produce a small obliquity kick. In particular, no
past obliquity smaller than $5^{\circ}$ is obtained for $b\gtrsim 10\,b_{0}$.
This migration rate can therefore be considered as the largest one allowing
Titan to be held responsible for Saturn’s current large obliquity.
Figure 8: Past obliquity of Saturn as a function of Titan’s migration velocity
and Saturn’s polar moment of inertia. Each panel of Fig. 7 corresponds here to
a vertical slice. The colour scale depicts the minimum obliquity of the
oscillation range, and the white curve highlights the $5^{\circ}$ level. The
$3\sigma$ uncertainty ranges of Lainey et al. (2020) yield today approximately
$b/b_{0}\in[1/2,5]$ for the astrometric measurements and $b/b_{0}\in[1,3/2]$
for the radio-science experiments (see Fig. 3).
### 3.3 Extreme phase effects
As can be guessed from the thinness of the spikes visible in some panels of
Fig. 7, the variety of outcomes obtained for trajectories that cross the
resonance separatrix (i.e. Types 1 and 2) depends on the resolution used for
sampling the parameter $\lambda$. The deepest spikes denote the strongest
phase effects; they correspond to trajectories that reach the resonance almost
exactly at its hyperbolic equilibrium point (called Cassini state 4: see e.g.
Saillenfest et al., 2019 for phase portraits; note that there is a typographical error in Saillenfest et al. (2019): the list of the Cassini states given before Eq. (22) should read (4,2,3,1) instead of (1,2,3,4) in order to match the denomination introduced by Peale, 1969). Since the resonance island slowly
drifts as $\alpha$ varies over time, extreme phase effects can be produced
when the hyperbolic point drifts away just as the trajectory gets closer to
it, maintaining the trajectory on the edge between capture and non-capture
into resonance. This kind of borderline trajectory is more common for strongly
non-adiabatic drifts (i.e. the spikes in Fig. 7 are wider for larger $b$),
because a faster drift of the resonance means that trajectories need to follow the separatrix less accurately in order to ‘chase’ the hyperbolic point at the
same pace as it gets away. If the drift of the resonance is too fast, however,
trajectories are outrun by the resonance and strong phase effects are
impossible. This is visible in the last panel of Fig. 7 (for $b=20\,b_{0}$),
in which the spikes are noticeably smoothed.
In order to investigate the outcomes of extreme phase effects, one can look
for the exact tip of the spikes in Fig. 7 by a fine tuning of $\lambda$. For
$b=b_{0}$ (central panel), a tuning of $\lambda$ at the $10^{-15}$ level shows
that Type 2 trajectories all feature a minimum past obliquity of about
$10^{\circ}$, as illustrated in Fig. 9. This minimum value is the same for
each spike, and zooming in on Fig. 9 shows that we do reach the bottom of the
spikes. For Type 1 trajectories (i.e. $\lambda<0.220$ in the central panel of
Fig. 7), we managed to find past obliquities of about $28^{\circ}$ at the tip
of the spikes, but using extended precision arithmetic may allow one to obtain
even smaller values (possibly down to $10^{\circ}$ as for Type 2
trajectories). The width of these spikes ($\Delta\lambda<10^{-15}$) would
however make them absolutely invisible in Figs. 7 and 8. In fact, the level of
fine tuning required here is so extreme that such trajectories are unlikely to
have any physical relevance. They nevertheless remain possible solutions from a mathematical point of view. Some examples are given in Appendix B.
Figure 9: Zoom-in view of the central panel of Fig. 7. We use a red curve to
highlight the bottom limit of the blue interval, otherwise the narrowness of
the spikes makes them invisible (the width of spike d is $\Delta\lambda\approx
10^{-14}$). This graph can be compared to Fig. 3 of Saillenfest et al. (2021),
where such a level of fine tuning is not shown due to its questionable physical
relevance. See Appendix B for examples of trajectories.
These findings can be compared to previous studies, even though the latter relied on a different tilting scenario. For a non-adiabatic drift of
the resonance and a past obliquity fixed to $1.5^{\circ}$, Boué et al. (2009)
found that if Saturn is not currently inside the resonance (i.e. if
$\lambda<0.220$), an extremely narrow but non-zero range of initial conditions
is able to reproduce Saturn’s current orientation, with a probability less
than $3\times 10^{-8}$. Using a smaller set of simulations, Vokrouhlický &
Nesvorný (2015) did not find even a single one of these trajectories. In light of
our results, we argue that these unlikely trajectories are produced through
the ‘extreme phase effects’ described here. The vanishingly small probability
of producing such trajectories is confirmed in Sect. 4.
## 4 Monte Carlo search for initial conditions
In Sect. 3, the past behaviour of Saturn’s spin axis has been investigated
using backward numerical integrations. If we now consider the space of all
possible orientations for Saturn’s primordial spin axis, each dynamical
pathway has a given probability of being followed. A large subset of
trajectories (those of Types 1 and 2) go through the separatrix of the large
resonance with $\phi_{3}$. Separatrix crossings are known to be chaotic events
(see e.g. Wisdom, 1985), and since Saturn’s orbital evolution is not
restricted to its 3rd harmonic, the separatrix itself appears as a thin
chaotic belt (see e.g. Saillenfest et al., 2020). Therefore, we can wonder
whether the chaotic divergence of trajectories during separatrix crossings
could lead to some kind of time-irreversibility in our numerical solutions
(see e.g. Morbidelli et al., 2020), especially in the non-adiabatic regime,
which has not been studied by Saillenfest et al. (2021). These aspects can be
investigated through a Monte Carlo search for the initial conditions of
Saturn’s spin axis.
### 4.1 Capture probability
Our first experiment is designed as follows: for a given set of parameters
$(b,\lambda)$, values of initial obliquity are regularly sampled between
$0^{\circ}$ and $60^{\circ}$. Then, for each of those, we regularly sample
values of initial precession angle $\psi\in[0,2\pi)$, and all trajectories are
propagated forwards in time starting at $4$ Gyrs in the past (i.e. after the
late planetary migration) up to today’s epoch. Figure 10 shows snapshots of
this experiment for $\lambda=0.204$ and Titan’s nominal migration rate
($b=b_{0}$). The first snapshot is taken about $20$ million years after the
start of the integrations, and the last snapshot is taken at today’s epoch.
Changing the value of $\lambda$ produces a shift of Saturn’s precession
constant $\alpha$ but no strong variation in its drift rate (see Fig. 2).
Moreover, since this drift is almost an adiabatic process for $b=b_{0}$ (see
Sect. 3.2), a small change of drift rate does not modify the statistical
outcome of the dynamics but only its timescale. For these reasons, a snapshot
in Fig. 10 taken at a given time $t$ for $\lambda=0.204$ is indistinguishable
from a snapshot taken at a slightly different time $\tilde{t}$ for another
value $\tilde{\lambda}$. More precisely, if we introduce a function of time
$f_{\lambda}(t)$ such that $t\longrightarrow\alpha=f_{\lambda}(t)$, an
indistinguishable snapshot is obtained for a polar moment of inertia
$\tilde{\lambda}$ at a time $\tilde{t}=f_{\tilde{\lambda}}^{-1}(\alpha)$.
Hence, the only parameter that matters here is the value of the precession
constant $\alpha$ reached by the trajectories. This is why the panels of Fig.
10 are labelled by $\alpha$ instead of $t$: this way they are valid for any
value of $\lambda$.
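As a minimal numerical sketch of this rescaling, the equivalent snapshot time can be found by inverting $f$ on a discrete grid. The drift law `f_lambda` below is a purely illustrative, hypothetical placeholder standing in for the actual function $f_{\lambda}$:

```python
import numpy as np

def f_lambda(t, lam):
    """Hypothetical monotonic drift of the precession constant alpha
    with time t (Gyr), parametrised by the normalised polar moment of
    inertia lam. Purely illustrative placeholder, not the paper's model."""
    return (0.5 + 0.1 * t) / lam

def equivalent_time(t, lam, lam_tilde, t_grid):
    """Find t_tilde such that f_{lam_tilde}(t_tilde) = f_{lam}(t),
    by inverting f on a discrete time grid (linear interpolation)."""
    alpha = f_lambda(t, lam)
    alpha_grid = f_lambda(t_grid, lam_tilde)  # monotonic in t
    return np.interp(alpha, alpha_grid, t_grid)

t_grid = np.linspace(0.0, 4.0, 4001)  # Gyr elapsed since -4 Gyr
t_tilde = equivalent_time(2.0, 0.204, 0.220, t_grid)
# The snapshots at (t = 2.0, lam = 0.204) and (t = t_tilde, lam = 0.220)
# then correspond to the same value of alpha.
```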
Before reaching the neighbourhood of the resonance with $\phi_{3}$, Fig. 10
shows that all trajectories only slightly oscillate around their initial
obliquity value (compare the first two snapshots, taken for two very different
values of $\alpha$). Then, as $\alpha$ continues to increase, the trajectories
are gradually divided between the four possible types of evolution listed in
Sect. 3.1. All trajectories with initial obliquity smaller than about
$10^{\circ}$ are captured in the resonance and lifted to high obliquities
(Type 3: blue dots). Trajectories with a larger initial obliquity can either
be captured (Type 2: green and orange dots) or go past the resonance through
its hyperbolic point (Type 1: lowermost red dots).
Figure 10: Snapshots of a Monte Carlo experiment computed for $\lambda=0.204$
and Titan’s nominal migration rate. This experiment features $101$ values of
initial obliquity between $0^{\circ}$ and $60^{\circ}$, for which $240$ values
of initial precession angle are regularly sampled in $[0,2\pi)$. The value of
$\alpha$ reached by the trajectories at the time of the snapshot is labelled
on each panel. Each trajectory is represented by a small dot which is coloured
according to the variation range of the resonant angle $\sigma_{3}$ (obtained
by a $0.5$-Gyr numerical integration with constant $\alpha$). The horizontal
green line shows the current obliquity of Saturn. At the beginning of the
propagations, all trajectories are coloured red (since $\sigma_{3}$
circulates), and distributed along a diagonal line. Then, as $\alpha$
increases over time, trajectories are dispersed off the diagonal according to
the four types of trajectories depicted in Fig. 4 and labelled in the
penultimate panel.
Assuming that Saturn’s primordial precession angle $\psi$ is a random number
uniformly distributed in $[0,2\pi)$, the probability of capture in resonance
is given by the fraction of points ending up in the pencil-shaped structure of
Fig. 10. The result is shown in Fig. 11, in which we increased the resolution
for better statistical significance. Assuming perfect adiabaticity, each
outcome can be modelled as a probabilistic event ruled by analytical formulas
(see Henrard & Murigande, 1987; Ward & Hamilton, 2004; Su & Lai, 2020). As
shown by Fig. 11, non-adiabaticity tends to smooth the probability profile and
to reduce the interval of $100\%$-sure capture. For growing initial obliquity,
the probability of Type 2 trajectories (i.e. capture) decreases, favouring
Type 1 trajectories instead (i.e. crossing without capture).
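The capture fraction over uniformly sampled precession angles can be sketched as follows; the classifier `is_captured` is a toy stand-in for a forward spin-axis integration (its capture rule is invented for illustration and is not the real dynamics):

```python
import numpy as np

def is_captured(eps0_deg, psi0):
    """Toy stand-in for a forward integration: capture is certain below
    10 deg of initial obliquity, then the outcome depends on the crossing
    phase with a probability decaying with obliquity (NOT the real
    dynamics, just a plausible shape for illustration)."""
    if eps0_deg < 10.0:
        return True
    p = max(0.0, 1.0 - (eps0_deg - 10.0) / 50.0)
    return (np.cos(psi0) + 1.0) / 2.0 < p  # phase-dependent outcome

def capture_probability(eps0_deg, n_psi=720):
    """Fraction of captured trajectories over uniformly sampled initial
    precession angles psi in [0, 2pi)."""
    psi = np.linspace(0.0, 2.0 * np.pi, n_psi, endpoint=False)
    return float(np.mean([is_captured(eps0_deg, p) for p in psi]))

print(capture_probability(5.0))   # 1.0: capture is certain at low obliquity
print(capture_probability(40.0))  # below 1: crossing without capture competes
```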
Figure 11: Capture probability of Saturn in secular spin-orbit resonance with
$\phi_{3}$ as a function of its primordial obliquity. For each initial
obliquity ($401$ values between $0$ and $60^{\circ}$), $720$ values of initial
precession angle are uniformly sampled in $[0,2\pi)$ and propagated forwards
in time starting from $-4$ Gyrs until every trajectory has reached the
resonance. This experiment is repeated with two different migration laws for
Titan (see labels). The result is virtually independent of the value chosen
for $\lambda$. The fraction of captured trajectories (coloured curves) is
compared to the perfect adiabatic case (black curve) computed with the
analytical formulas of Henrard & Murigande (1987).
### 4.2 Loose success criteria: Probing all dynamical pathways
Over all possible trajectories, we now look for those matching Saturn’s actual
spin-axis dynamics today. We first consider ‘loose success criteria’, for
which a run is judged successful if: _i)_ Saturn’s current obliquity
$\varepsilon=26.727^{\circ}$ lies within the final spin-axis oscillation
interval, and _ii)_ the libration amplitude of the resonant angle $\sigma_{3}$
lies within $5^{\circ}$ of the actual amplitude shown in Fig. 5. These
criteria are not chosen to be very strict in order to probe all dynamical
pathways in the neighbourhood of Saturn’s spin-axis orientation, including
some that could have been missed by the backward propagations of Sect. 3. Our
results are depicted in Fig. 12. We closely retrieve the predictions of
backward numerical integrations, in particular for trajectories of Type 3.
Narrowing the target interval leads to an even better match. For trajectories
of Type 2 (grey background), we obtain a larger spread of initial obliquities
because our success criteria do not include any restriction on today’s phase
of the resonant angle $\sigma_{3}$, but only on its oscillation amplitude. The
results shown in Fig. 12 are therefore less shaped by ‘phase effects’
discussed in Sect. 3.2. Since a slight change in Titan’s migration rate would
result in a phase shift, we can interpret Fig. 12 as encompassing different
migration rates around the nominal observed rate. The results presented in
Fig. 12 are therefore more general than those obtained using backward
numerical integrations. In accordance with Fig. 11, the success ratio for Type
2 trajectories sharply decreases for increasing initial obliquity (colour
gradient), because most initial conditions lead to a resonance crossing
without capture. Moreover, we do not detect trajectories as extreme as those
presented in Sect. 3.3 (i.e. with an initial obliquity of about $10^{\circ}$
all over the width of Zone 2), because they require initial conditions that
are too specific for our sampling; the probability of obtaining one is indeed
negligible. Finally, for trajectories of Types 1 and 4, which are today out of
the resonance, our ‘loose success criteria’ are extremely permissive, since
the variation amplitude of $\sigma_{3}$ is $2\pi$ for all trajectories (see
Fig. 5). This explains why Fig. 12 shows large intervals of black dots. These
intervals can be spotted in Fig. 10, where they appear as the whole range of
red dots that are pierced by the green horizontal line.
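The two loose criteria above translate into a simple predicate; the thresholds are those given in the text, while the trajectory quantities (final oscillation interval, libration amplitude of $\sigma_3$) are assumed to come from a forward integration not shown here:

```python
def loose_success(eps_min_deg, eps_max_deg, sigma3_amp_deg, target_amp_deg,
                  eps_saturn_deg=26.727, amp_tol_deg=5.0):
    """Loose success criteria:
    i)  Saturn's current obliquity (26.727 deg) lies within the final
        spin-axis oscillation interval [eps_min, eps_max];
    ii) the libration amplitude of sigma_3 lies within 5 deg of the
        target amplitude (the actual amplitude shown in Fig. 5)."""
    in_interval = eps_min_deg <= eps_saturn_deg <= eps_max_deg
    amp_ok = abs(sigma3_amp_deg - target_amp_deg) <= amp_tol_deg
    return in_interval and amp_ok

# Example: a trajectory oscillating between 25 and 29 deg with a 33 deg
# libration amplitude, against a hypothetical 31 deg target amplitude.
print(loose_success(25.0, 29.0, 33.0, 31.0))  # True
```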
Figure 12: Brute-force search for Saturn’s past obliquity using the loose
success criteria: probing all dynamical pathways. We use Titan’s nominal
migration law ($b=b_{0}$). Among all trajectories evenly sampled in the space
of the normalised polar moment of inertia $\lambda$ (top horizontal axis,
$101$ values), of the initial obliquity (vertical axis, $101$ values), and of
the initial precession angle ($240$ values between $0$ and $2\pi$), we only
keep those matching Saturn’s spin axis today according to our loose success
criteria (see text). Each point is coloured according to the number of
successful runs among the $240$ initial precession angles; the success ratio
is written below the colour bar. A point is not drawn if no successful
trajectory is found. In the back, the blue interval shows the past obliquity
of Saturn obtained by backward numerical integration (same as Fig. 7 for
$b=b_{0}$), showing the consistency between backward and forward integrations
in time. The background stripes and their labels have the same meaning as in
Fig. 5.
Figure 12 does not feature unexpected dynamical paths that could have been
missed by our backward integrations, even though signatures of chaos are
visible in the sparse spreads of coloured dots. From this close match, we
conclude that the chaos is not strong enough here to significantly mingle the
trajectories and to produce a substantial phenomenon of numerical
irreversibility. Note that separatrix crossings would have been
irreversible if, in order to predict the different outcomes, we used the
adiabatic invariant theory instead of numerical integrations (see Henrard &
Murigande, 1987; Ward & Hamilton, 2004; Su & Lai, 2020). Indeed, in the
adiabatic invariant theory, the resonant angle is assumed to oscillate
infinitely faster than the drift of $\alpha$ and phase effects are modelled as
probabilistic events (Henrard, 1982, 1993). This probabilistic modelling of
chaos explains why separatrix crossings are not reversible when using this
theory.
### 4.3 Strict success criteria: Relative likelihood of producing Saturn’s
current state
In order to compare the likelihood of producing Saturn’s current state in the
space of all possible initial conditions, our loose success criteria are not
enough. Independently of whether Saturn is inside or outside the resonance
today, its spin-axis precession is not uniform, which means that the phase of
Saturn’s spin-axis motion at a given time is not uniformly distributed and
must therefore be taken into account, too. Moreover, we saw in Sect. 3.2 that
out of the strict adiabatic regime, phase effects (that are deliberately
ignored by our loose success criteria) do matter to reproduce Saturn’s current
spin-axis orientation; actually, the very notion of ‘libration’ loses its
meaning when the drift of $\alpha$ is not adiabatic, since the resonance is
distorted before $\sigma_{3}$ has time to perform a single cycle. For these
reasons, we now define ‘strict success criteria’, for which a run is judged
successful if: _i)_ today’s obliquity $\varepsilon$ lies within $0.5^{\circ}$
of the true value, and _ii)_ today’s precession angle $\psi$ lies within
$5^{\circ}$ of the true value. These criteria are very narrow, but still
within reach of our millions of numerical propagations. The result is shown in
Fig. 13 for Titan’s nominal migration rate. As expected, the points are more
sparse than in Fig. 12 and the success ratios are smaller. Assuming that
Saturn’s primordial precession angle is a random number uniformly distributed
between $0$ and $2\pi$, the colour gradient in Fig. 13 is a direct measure of
the likelihood to reproduce Saturn’s current state. Type 3 trajectories are
greatly favoured: they feature the maximum likelihood, which is about ten
times the likelihood of Type 1 trajectories. The region with maximum
likelihood is for past obliquities between about $2^{\circ}$ and $7^{\circ}$,
and current precession constant $\alpha$ between about $0.76$ and
$0.79^{\prime\prime}\,$yr-1 (red box).
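The strict criteria can likewise be written as a small predicate; note that the comparison of precession angles must wrap around $360^{\circ}$. The reference value `psi_true` below is a placeholder, not Saturn's actual precession angle:

```python
def ang_diff_deg(a, b):
    """Smallest absolute difference between two angles in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def strict_success(eps_deg, psi_deg, eps_true=26.727, psi_true=0.0,
                   eps_tol=0.5, psi_tol=5.0):
    """Strict success criteria: today's obliquity within 0.5 deg of the
    true value AND today's precession angle within 5 deg of the true
    value (psi_true = 0 is an illustrative placeholder)."""
    return (abs(eps_deg - eps_true) <= eps_tol
            and ang_diff_deg(psi_deg, psi_true) <= psi_tol)

print(strict_success(26.5, 357.0))  # True: 357 deg is only 3 deg from 0 deg
print(strict_success(26.5, 10.0))   # False: psi off by 10 deg
```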
As already discussed by Ward & Hamilton (2004) and Hamilton & Ward (2004),
there are two reasons why Type 3 trajectories are the most likely: first, they
have a $100\%$ chance of being captured inside the resonance (whereas Types 1
and 2 both have a non-zero probability of failure, see Fig. 11); second, Type
3 trajectories oscillate today with a small amplitude inside the resonance,
which means that all of them feature a similar value of the precession angle
$\psi$, imposed by the resonance relation $\sigma_{3}\sim 0$. On the contrary,
other types of trajectories either feature a large oscillation amplitude of
$\sigma_{3}$ (Type 2) or circulation of $\sigma_{3}$ (Types 1 and 4);
therefore, they only sweep over Saturn’s actual orientation once in a while,
and matching it today would only be a low-probability ‘coincidental’
event (the same argument has been pointed out for Jupiter by Ward & Canup,
2006, and Saillenfest et al., 2020). As shown by Fig. 13, the least favoured
trajectories are those of Type 2, especially for high initial obliquities,
because of the strong decrease in capture probability (see Fig. 11).
Figure 13: Same as Fig. 12, but using the strict success criteria: comparing
the relative likelihood of producing Saturn’s current state. As in Fig. 12,
each point of the graph is made of $240$ simulations with initial
$\psi\in[0,2\pi)$. The red rectangle highlights the region featuring the
highest success ratios.
In order to explore all migration rates and bring further constraints on the
model parameters, we now turn to a second Monte Carlo experiment, with the
following approach: assuming that Saturn was indeed tilted as a result of
Titan’s migration, we look for the possible values of the parameters
$(b,\lambda)$ allowed, with their respective likelihood. This approach is
similar to those used in previous studies (e.g. Vokrouhlický & Nesvorný,
2015).
The notion of likelihood associated with this second experiment deserves some
comments. Since Saturn’s spin axis performed many precession revolutions in
$4$ Gyrs and since it was initially not locked in resonance, a tiny error in
the model rapidly spreads over time into a uniform probability distribution of
the precession angle $\psi$ in $[0,2\pi)$. This is the reason why, in absence
of any mechanism able to maintain $\psi$ in a preferred direction, it is
legitimate to consider a uniform initial distribution for $\psi$, as people
usually do (and as we already did above). Establishing a prior distribution
for $\varepsilon$, instead, is more hazardous: we know that near-zero values
are expected from formation models, but small primordial excitations cannot be
excluded. Such excitations could be attributed to the phase of planetesimal
bombardment at the end of Saturn’s formation or by abrupt resonance crossings
stemming from the dissipation of the protoplanetary and/or circumplanetary
discs (see e.g. Millholland & Batygin, 2019). Therefore, we arbitrarily
consider here values of initial obliquity $\varepsilon\lesssim 5^{\circ}$,
which leaves room for a few degrees of primordial obliquity excitation. This
choice is somewhat guided by the $3^{\circ}$-obliquity of Jupiter, a part of
which could possibly be primordial (Ward & Canup, 2006; Vokrouhlický &
Nesvorný, 2015). Jupiter is located today near a secular spin-orbit resonance
with $s_{7}$ (see Table 1), but contrary to Saturn, its satellites did not
migrate enough yet to substantially increase its obliquity (Saillenfest et
al., 2020); however, in order to ascertain possible values for Jupiter’s
primordial obliquity, the effect of the past migration of the Galilean
satellites would need to be studied. We choose to use a uniform random
distribution of $\varepsilon$, resulting in a non-uniform distribution of
spin-axis directions over the unit sphere that favours small obliquities (in
order to uniformly sample the unit sphere, one should instead consider a
uniform distribution of $\cos\varepsilon$). The influence of our arbitrary
choice of Saturn’s initial obliquity is discussed below.
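The difference between the two priors can be checked numerically: drawing the obliquity uniformly over-represents small obliquities compared to an isotropic (uniform in $\cos\varepsilon$) distribution of spin directions. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
eps_max = np.radians(5.0)

# Choice used in the text: obliquity uniform in [0, 5 deg].
eps_uniform = rng.uniform(0.0, eps_max, n)

# Isotropic alternative: uniform in cos(eps) over the same cap,
# which samples spin directions uniformly on the unit sphere.
u = rng.uniform(np.cos(eps_max), 1.0, n)
eps_isotropic = np.arccos(u)

# The uniform-in-eps choice puts far more weight at small obliquities:
frac_uni = np.mean(eps_uniform < np.radians(1.0))
frac_iso = np.mean(eps_isotropic < np.radians(1.0))
print(frac_uni, frac_iso)  # roughly 0.20 versus roughly 0.04
```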
Figure 14: Distribution of the solutions starting from a low primordial
obliquity and matching our strict success criteria. For each set $(b,\lambda)$
of the parameters, $2400$ values of initial obliquity $\varepsilon$ and
precession angle $\psi$ are drawn from a uniform random distribution in
$(\varepsilon,\psi)\in[0^{\circ},5^{\circ}]\times[0,2\pi)$. Coloured dots show
the parameter sets $(b,\lambda)$ for which at least one successful trajectory
was found; the success ratio is written below the colour bar. Light-grey
crosses mean that no successful trajectory was found over our $2400$ initial
conditions. The black contours show the $5^{\circ}$-level obtained through
backward numerical integrations (same as Fig. 8), showing the consistency
between backward and forward integrations in time. The black lines in the top
portion show the approximate location of the border of the blue stripes in
Fig. 8, where extreme phase effects can happen; the corresponding ranges of
parameters are so narrow that they are missed by the resolution of Fig. 8 (see
Sect. 3.3).
In practice, our setup is the following: over a grid of points $(b,\lambda)$ of
the parameter space, we perform each time $2400$ numerical integrations
starting from random initial conditions $(\varepsilon,\psi)$ with
$\varepsilon\leqslant 5^{\circ}$ and $\psi\in[0,2\pi)$. All trajectories are
then propagated from $-4$ Gyrs up to today’s epoch, and we only keep
trajectories matching Saturn’s current spin-axis orientation according to our
strict success criteria. Figure 14 shows the result of this experiment. Again,
we closely retrieve the predictions of backward integrations from Sect. 3,
confirming the reversible nature of the dynamics, and helping us to interpret
the patterns obtained. The wavy structure at $3b_{0}\lesssim b\lesssim 5b_{0}$
resembles to some extent the successful matches of Vokrouhlický & Nesvorný
(2015), reminding us that the basic dynamical ingredients are the same, even
though the mechanism producing the resonance encounter in their study is
different (their Fig. 7 is rotated clockwise). Unsurprisingly, the highest
concentrations of matching trajectories in Fig. 14 are located in the regions
where backward propagations result in near-zero primordial obliquities
(compare with Fig. 8). The maximum likelihood thus favours slightly non-
adiabatic migration rates, for $b$ lying roughly between $3b_{0}$ and
$6b_{0}$. According to Lainey et al. (2020), such values are consistent with
the $3\sigma$ uncertainty ranges of Titan’s current migration rate obtained
from astrometric measurements ($b/b_{0}\in[1/2,5]$), but not with the
uncertainty ranges given by radio-science experiments ($b/b_{0}\in[1,3/2]$).
However, successful trajectories with substantial likelihood are anyway found
in a very large interval of migration rates, which extends much farther than
the uncertainty range of Lainey et al. (2020). We therefore cannot place any
decisive constraint on Titan’s migration history that would be tighter than
those obtained from observations. Yet, the tilting of Saturn would impose
strong constraints on Saturn’s polar moment of inertia. As already visible in
the figures of Saillenfest et al. (2021), tilting Saturn from
$\varepsilon\lesssim 5^{\circ}$ in the adiabatic regime would require that
$\lambda$ lies between about $0.228$ and $0.235$. Figure 14 shows that
allowing for non-adiabatic effects ($b\gtrsim 3b_{0}$) widens this range to
about $[0.224,0.239]$.
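The bookkeeping of this grid experiment can be sketched as follows, with a toy acceptance rule standing in for the actual 4-Gyr propagation and strict success test (the Gaussian bump in `propagate_and_test` is invented purely so that the sketch returns non-trivial ratios):

```python
import numpy as np

rng = np.random.default_rng(2)

def propagate_and_test(eps0, psi0, b, lam):
    """Toy stand-in for a 4-Gyr spin-axis integration followed by the
    strict success test of Sect. 4.3. The acceptance probability below
    is invented (a bump around lam = 0.23) just to exercise the
    bookkeeping; eps0 and psi0 would matter in the real dynamics."""
    return rng.random() < 0.01 * np.exp(-((lam - 0.23) / 0.01) ** 2)

def success_ratio(b, lam, n_draws=2400, eps_max_deg=5.0):
    """Fraction of random initial conditions matching the criteria,
    with eps uniform in [0, eps_max] and psi uniform in [0, 2pi)."""
    eps = rng.uniform(0.0, np.radians(eps_max_deg), n_draws)
    psi = rng.uniform(0.0, 2.0 * np.pi, n_draws)
    hits = sum(propagate_and_test(e, p, b, lam) for e, p in zip(eps, psi))
    return hits / n_draws

# Scan a few parameter values; in the real experiment this loop runs
# over a fine (b, lambda) grid and the ratios are plotted as in Fig. 14.
for lam in (0.21, 0.23, 0.25):
    print(lam, success_ratio(b=1.0, lam=lam))
```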
Interestingly, Fig. 14 features three trajectories affected by an ‘extreme
phase effect’ (see Sect. 3.3), visible as the three isolated grey points at
$\lambda\approx 0.205$, $0.210$, and $0.215$. These trajectories are of Type 1
(i.e. currently out of the resonance) and fit our strict success criteria. The
existence of these points recalls that such trajectories are extremely rare
(we found only three among millions of trials), yet possible, as previously
reported by Boué et al. (2009). They correspond to the narrow dark edges of
the blue stripes in the top portion of Fig. 8. The complete trajectory
producing the leftmost of these points can be found in Appendix B.
As mentioned above, the likelihood measure depicted in Fig. 14 is conditioned
by our assumptions about the initial value of $\varepsilon$. The influence of
these assumptions can be investigated from our large simulation set. Figure 15
shows the statistics restricted to the lower- and higher-obliquity halves of
the distribution. Restricting the initial obliquity to $\varepsilon\lesssim
2.5^{\circ}$ suppresses most successful matches from the adiabatic regime
($b\lesssim 3b_{0}$), as one could have guessed from previous figures. On the
contrary, restricting the statistics to the upper half of the distribution
($\varepsilon\in[2.5^{\circ},5^{\circ}]$) greatly shifts the point of maximum
likelihood towards the adiabatic regime. Further experiments are provided in
Appendix C with initial obliquity values up to $10^{\circ}$. These experiments
show that the adiabatic and non-adiabatic regimes are roughly equally likely
if one considers an isotropic distribution of initial spin orientations (with
$\varepsilon\lesssim 5^{\circ}$ or $\varepsilon\lesssim 10^{\circ}$) instead
of a distribution favouring small initial obliquities as in Fig. 14.
Unsurprisingly, the adiabatic regime and Titan’s nominal migration rate are
the most likely if one considers initial obliquity values as
$2^{\circ}\lesssim\varepsilon\lesssim 7^{\circ}$ (i.e. in the red box of Fig.
13).
This discussion shows how important the prior chosen for the initial
conditions is. Assumption biases are unavoidable and were also present in
previous studies: Boué et al. (2009) assumed $\varepsilon=1.5^{\circ}$;
Vokrouhlický & Nesvorný (2015) assumed $\varepsilon=0.1^{\circ}$ (with respect
to the orbit averaged over all angles but $\phi_{3}$); and Brasser & Lee
(2015) assumed $\varepsilon\approx 0.05^{\circ}$ (with respect to the
invariable plane, i.e. the orbit averaged over all $\phi_{k}$). As shown by
our results, leaving room for a few degrees of extra primordial excitation, or
even only $0.5^{\circ}$, in any of those studies could have greatly enhanced
the chances of success. As recalled above, a few degrees of primordial
obliquity excitation are plausible and could be explained in different ways.
In this regard, the most general overview of our findings is given by Fig. 8,
since it does not presuppose any initial obliquity for Saturn, and Fig. 13
shows the respective likelihood of each dynamical pathway, still with no
assumption about the initial obliquity.
Figure 15: Same as Fig. 14, but for statistics based on a sub-sample of
simulations. a: initial conditions in
$(\varepsilon,\psi)\in[0^{\circ},2.5^{\circ}]\times[0,2\pi)$. b: initial
conditions in $(\varepsilon,\psi)\in[2.5^{\circ},5^{\circ}]\times[0,2\pi)$. In
both panels, each point is made of about $1200$ initial conditions extracted
from the simulations from Fig. 14.
## 5 The future obliquity of Saturn
Since Titan goes on migrating today, Saturn’s obliquity is likely to
continuously vary over time. Hence, we can wonder whether it could reach large
values, in the same way as Jupiter (Saillenfest et al., 2020). In order to
explore the future obliquity dynamics of Saturn, we propagate Saturn’s spin-
axis from today up to $5$ Gyrs in the future.
Figure 16 shows the summary of our results for finely sampled values of
$\lambda$ and $b$. Contrary to Fig. 8, we restrict here our sampling to
$b<3b_{0}$ because for larger migration rates, Titan goes beyond the Laplace
radius during the integration timespan ($a_{6}\approx 40$ $R_{\mathrm{eq}}$)
and the close-satellite approximation used in Eq. (6) is invalidated. Faster
migration rates are anyway disfavoured by the $3\sigma$ uncertainty range of
the radio-science experiments of Lainey et al. (2020).
Figure 16: Future obliquity of Saturn as a function of Titan’s migration
velocity and Saturn’s polar moment of inertia. The axes are the same as in
Fig. 8. The $3\sigma$ uncertainty ranges of Lainey et al. (2020) yield
approximately $b/b_{0}\in[1/2,5]$ for the astrometric measurements and
$b/b_{0}\in[1,3/2]$ for the radio-science experiments. Some level curves are
shown in red.
The top portion of Fig. 16 features trajectories of Type 1. Such trajectories
are currently above the resonance with $\phi_{3}$ (see Fig. 4) and they go
farther away from it as $\alpha$ continues to increase. The increase in
$\alpha$ makes them cross the resonances with $\phi_{51}$, with $\phi_{14}$,
and with $\phi_{15}$ (see Fig. 1 and Table 1). Being very small, these
resonances are crossed quickly and they do not produce noticeable obliquity
variations in Fig. 16. This explains why the top portion of the figure is
coloured almost uniformly with an obliquity value approximately equal to
today’s. For $b\approx 3b_{0}$ (the fastest migration presented in Fig. 16),
trajectories approach the lower fringe of the strong resonance with $\phi_{4}$
at the end of the integration, but they do not actually reach it.
Figure 17: Example of Type 2 trajectory that is expelled out of the resonance.
It has been obtained for $\lambda=0.221076$ and a migration rate $b/b_{0}=3$.
The integration runs from today up to $5$ Gyrs in the future.
The middle portion of Fig. 16 features trajectories of Types 2 and 3. Such
trajectories are currently inside the resonance with $\phi_{3}$ (see Fig. 4)
and they follow its centre as $\alpha$ increases. After $5$ Gyrs from now,
Saturn can therefore reach very large obliquity values, provided that Titan
goes on migrating as predicted by Lainey et al. (2020). For migration rates
lying in the error range of the radio-science experiments of Lainey et al.
(2020), Saturn’s obliquity can grow as large as $65^{\circ}$. As noticed by
Saillenfest et al. (2021), the resonance width increases up to $\alpha\approx
0.971^{\prime\prime}\,$yr-1, but decreases beyond. The trajectories featuring
a large libration amplitude and a large increase in $\alpha$ have therefore a
risk of being expelled out of the resonance, as described by Su & Lai (2020).
After a careful examination, we found that expulsion out of resonance only
occurs for the largest migration velocities and in a tiny interval of
$\lambda$ located at the very edge of the brightly-coloured region of Fig. 16.
An example of such a trajectory is presented in Fig. 17. The expelled
trajectories reach slightly smaller values of obliquity than if they continued
to follow the resonance centre; however, this behaviour concerns such a small
range of parameters (which is almost indistinguishable in Fig. 16) that it is
very unlikely to have any consequence for Saturn.
The bottom portion of Fig. 16 features trajectories of Type 4, which are ruled
out by our uncertainty range for $\lambda\in[0.200,0.240]$. Such trajectories
have not yet reached the resonance today (see Fig. 4), but they will in the
future as $\alpha$ continues to increase. The resonance encounter can either
lead to a capture (like Type 2 trajectories) or to a permanent decrease in
obliquity (like Type 1 trajectories). The outcome is determined by the phase
of $\sigma_{3}$ at crossing time, which depends on the migration velocity.
This explains why the two possible outcomes are organised in Fig. 16 as narrow
bands that are close to each other for a slow migration and widely spaced for a fast
migration. In a perfect adiabatic regime, the bands would be so close to each
other that the outcome could be modelled as a probabilistic event.
## 6 Discussion and conclusion
Since giant planets are expected to form with near-zero obliquities, some
mechanism must have tilted Saturn after its formation. Saillenfest et al.
(2021) have shown that the fast migration of Titan measured by Lainey et al.
(2020) may be responsible for the current large obliquity of Saturn. Through
an extensive set of numerical simulations, we further investigated the long-
term spin-axis dynamics of Saturn and determined the variety of laws for
Titan’s migration compatible with this scenario.
Saturn is located today near a strong secular spin-orbit resonance with the
nodal precession mode of Neptune (Ward & Hamilton, 2004). As Titan migrates
over time, it produces a drift in Saturn’s spin-axis precession velocity,
which led Saturn to encounter this resonance. The continuous migration of
Titan shifts the resonance centre over time, which can force Saturn’s
obliquity to vary. Through this mechanism, Saturn’s obliquity can have grown
from a small to a large value provided that: _i)_ Titan migrated over a large
enough distance to substantially shift the resonance centre, and _ii)_ Titan
migrated slowly enough for Saturn to adiabatically follow the resonance shift.
The first condition is met if Titan migrated over a distance of at least one
radius of Saturn after the late planetary migration, more than $4$ Gyrs ago.
Assuming that Titan’s migration is continuous, this requires migration
velocities larger than $n\approx 0.06$ times the nominal rate given by
Lainey et al. (2020). For comparison, astrometric measurements predict
$n\gtrsim 0.5$. The second condition is met if Titan’s migration velocity does
not exceed $n\approx 10$ times the nominal rate, while astrometric
measurements predict $n\lesssim 5$. Therefore, the scenario proposed by
Saillenfest et al. (2021) is realistic over the whole range of migration rates
obtained from observations. It even allows for more complex scenarios in which
Titan would alternate between fast and slow migration regimes.
For the largest migration rates of Titan allowed by observational uncertainty
ranges, non-adiabatic effects are quite pronounced, but not to the point of
preventing Saturn from following the resonance centre. Interestingly, non-
adiabaticity even allows for an exactly zero value for Saturn’s primordial
obliquity. Zero values are however disfavoured by the error range of radio-
science experiments, which yield most likely primordial obliquities between
$2^{\circ}$ and $7^{\circ}$.
Our Monte Carlo experiments do not reveal a strong chaotic mixing of
trajectories, even though borderline separatrix-crossing trajectories do
exhibit a noticeable chaotic spreading. All possible dynamical paths fall into
the four types of trajectories obtained by Saillenfest et al. (2021) through
backward numerical integrations, and we detected no substantial numerical
irreversibility. For Titan’s nominal migration rate, our experiments show that
all trajectories with initial obliquity smaller than about $10^{\circ}$ are
captured inside the resonance with a $100\%$ probability. Such trajectories
can match Saturn’s current orientation if its normalised polar moment of
inertia $\lambda$ lies in about $[0.224,0.237]$, as previously reported.
Interestingly, small past obliquities $\varepsilon\lesssim 10^{\circ}$ in our
Monte Carlo experiments also feature the highest likelihood of reproducing
Saturn’s current spin-axis orientation, surpassing high-obliquity alternatives
by a factor of about ten. Yet, other values of $\lambda$ cannot be completely
ruled out; they would mean that Saturn’s past obliquity was larger than or
similar to today’s, and one would need to find another explanation for its large value.
In the future, the still ongoing migration of Titan is expected to produce
dramatic effects on Saturn’s obliquity provided that Saturn is currently
located inside the resonance, that is, if $\lambda$ lies in about
$[0.220,0.241]$. Depending on the precise migration rate of Titan, Saturn’s
obliquity would then range between $55^{\circ}$ and $65^{\circ}$ $5$
Gyr from now, and we even obtain values exceeding $75^{\circ}$ when
considering the full $3\sigma$ uncertainty of the astrometric measurements of
Lainey et al. (2020). For smaller values of $\lambda$, Saturn’s obliquity is
not expected to change much in the future because the migration of Titan
pushes it away from the resonance. No strong obliquity variations would be
expected either if Titan’s migration rate strongly drops in the future (i.e.
if Titan is released out of the tidal resonance-locking mechanism of Fuller et
al., 2016), but to our knowledge, there is no evidence showing that it could
be the case.
The migration law for Titan proposed by Lainey et al. (2020) and used in this
article is very simplified. Since our conclusions remain valid in a much
larger interval of migration rates than allowed by the observational
uncertainties, we can be confident that no major change would be produced by
using different (and possibly more realistic) migration laws, unless Titan
underwent extreme variations in migration rate in the past. For instance, if
Titan’s migration is not continuous and if it was only triggered very recently
(less than a few hundred million years ago), then Saturn's past obliquity
dynamics would not have been affected. As mentioned by Saillenfest et al.
(2021), this alternative is unlikely but cannot be ruled out considering our
current knowledge of the tidal dissipation within Saturn.
The past and future behaviour of Saturn’s spin axis is very sensitive to its
normalised polar moment of inertia $\lambda$. An accuracy of at least three
digits would be required to securely assert which dynamical path was followed
by Saturn and what the future evolution of its spin axis will be. Model-
dependent theoretical values are not enough for this purpose, and the true
uncertainty of the values inferred from the _Cassini_ data remains unclear
(Helled, 2011; Fortney et al., 2018; Movshovitz et al., 2020). A precise
value of $\lambda$ would inform us about whether Saturn is currently inside
the resonance (which is the most likely alternative), or outside the
resonance. If Saturn is confirmed to be currently in resonance, it would imply
that Titan’s past migration rate never became so fast as to eject Saturn from
the resonance or to prevent its capture in the first place. However, this
constraint would not be very stringent: simulations show that Saturn can be
captured into resonance even if Titan's migration rate is increased by a
factor of ten from the nominal measured value. If, on the contrary, Saturn turns
out to be currently out of resonance, then it would imply that its primordial
obliquity was high, and most probably even higher than $30^{\circ}$,
regardless of Titan’s precise migration history. This last possibility is not
what one would expect from planetary formation models, and our results show
that it is also unlikely from a dynamical point of view.
Previous works reveal that numerous dynamical mechanisms can alter the
obliquity of a planet (see e.g. Laskar & Robutel, 1993; Correia & Laskar,
2001; Quillen et al., 2018; Millholland & Batygin, 2019). The fast migration
of satellites and capture in a secular spin-orbit resonance offers one more
alternative, and we have shown that it can result in a steady increase in
obliquity, possibly lasting over the whole lifetime of the planetary system.
In the broad context of exoplanets, we can therefore expect that only a few
would have conserved their primordial axis tilt, whether they are close-in and
likely tidally locked (Millholland & Laughlin, 2019), or whether they are
largely spaced and have very stable orbits like Jupiter and Saturn.
###### Acknowledgements.
Our work greatly benefited from discussions with David Nesvorný; we thank him
very much. We are also very grateful to Dan Tamayo for his in-depth review and
inspiring comments. G. L. acknowledges financial support from the Italian
Space Agency (ASI) through agreement 2017-40-H.0.
## References
* Archinal et al. (2018) Archinal, B. A., Acton, C. H., A’Hearn, M. F., et al. 2018, Celestial Mechanics and Dynamical Astronomy, 130, 22
* Boué & Laskar (2006) Boué, G. & Laskar, J. 2006, Icarus, 185, 312
* Boué et al. (2009) Boué, G., Laskar, J., & Kuchynka, P. 2009, ApJ, 702, L19
* Brasser & Lee (2015) Brasser, R. & Lee, M. H. 2015, AJ, 150, 157
* Clement et al. (2018) Clement, M. S., Kaib, N. A., Raymond, S. N., & Walsh, K. J. 2018, Icarus, 311, 340
* Correia & Laskar (2001) Correia, A. C. M. & Laskar, J. 2001, Nature, 411, 767
* Deienno et al. (2017) Deienno, R., Morbidelli, A., Gomes, R. S., & Nesvorný, D. 2017, AJ, 153, 153
* Duriez & Vienne (1997) Duriez, L. & Vienne, A. 1997, A&A, 324, 366
* Fortney et al. (2018) Fortney, J. J., Helled, R., Nettelmann, N., et al. 2018, The Interior of Saturn, ed. K. H. Baines, F. M. Flasar, N. Krupp, & T. Stallard, Saturn in the 21st Century (Cambridge University Press), 44–68
* French et al. (2017) French, R. G., McGhee-French, C. A., Lonergan, K., et al. 2017, Icarus, 290, 14
* French et al. (1993) French, R. G., Nicholson, P. D., Cooke, M. L., et al. 1993, Icarus, 103, 163
* Fuller et al. (2016) Fuller, J., Luan, J., & Quataert, E. 2016, MNRAS, 458, 3867
* Hamilton & Ward (2004) Hamilton, D. P. & Ward, W. R. 2004, AJ, 128, 2510
* Helled (2011) Helled, R. 2011, ApJ, 735, L16
* Helled (2018) Helled, R. 2018, The Interiors of Jupiter and Saturn, Oxford Research Encyclopedia of Planetary Science, 175
* Helled et al. (2011) Helled, R., Anderson, J. D., Schubert, G., & Stevenson, D. J. 2011, Icarus, 216, 440
* Helled et al. (2009) Helled, R., Schubert, G., & Anderson, J. D. 2009, Icarus, 199, 368
* Henrard (1982) Henrard, J. 1982, Celestial Mechanics, 27, 3
* Henrard (1993) Henrard, J. 1993, The Adiabatic Invariant in Classical Mechanics, Dynamics Reported – Expositions in Dynamical Systems, vol. 2 (Springer Berlin Heidelberg)
* Henrard & Murigande (1987) Henrard, J. & Murigande, C. 1987, Celestial Mechanics, 40, 345
* Hubbard & Marley (1989) Hubbard, W. B. & Marley, M. S. 1989, Icarus, 78, 102
* Iess et al. (2019) Iess, L., Militzer, B., Kaspi, Y., et al. 2019, Science, 364, aat2965
* Jeffreys (1924) Jeffreys, H. 1924, MNRAS, 84, 534
* Lainey et al. (2009) Lainey, V., Arlot, J.-E., Karatekin, Ö., & van Hoolst, T. 2009, Nature, 459, 957
* Lainey et al. (2020) Lainey, V., Gomez Casajus, L., Fuller, J., et al. 2020, Nature Astronomy, 4, 1053
* Lari et al. (2020) Lari, G., Saillenfest, M., & Fenucci, M. 2020, A&A, 639, A40
* Laskar (1989) Laskar, J. 1989, Nature, 338, 237
* Laskar (1990) Laskar, J. 1990, Icarus, 88, 266
* Laskar et al. (2012) Laskar, J., Boué, G., & Correia, A. C. M. 2012, A&A, 538, A105
* Laskar et al. (1993) Laskar, J., Joutel, F., & Robutel, P. 1993, Nature, 361, 615
* Laskar & Robutel (1993) Laskar, J. & Robutel, P. 1993, Nature, 361, 608
* Millholland & Batygin (2019) Millholland, S. & Batygin, K. 2019, ApJ, 876, 119
* Millholland & Laughlin (2019) Millholland, S. & Laughlin, G. 2019, Nature Astronomy, 3, 424
* Morbidelli et al. (2020) Morbidelli, A., Batygin, K., Brasser, R., & Raymond, S. N. 2020, MNRAS, 497, L46
* Movshovitz et al. (2020) Movshovitz, N., Fortney, J. J., Mankovich, C., Thorngren, D., & Helled, R. 2020, ApJ, 891, 109
* Murray & Dermott (1999) Murray, C. D. & Dermott, S. F. 1999, Solar System Dynamics (Cambridge University Press)
* Néron de Surgy & Laskar (1997) Néron de Surgy, O. & Laskar, J. 1997, A&A, 318, 975
* Nesvorný & Morbidelli (2012) Nesvorný, D. & Morbidelli, A. 2012, AJ, 144, 117
* Nettelmann et al. (2013) Nettelmann, N., Püstow, R., & Redmer, R. 2013, Icarus, 225, 548
* Peale (1969) Peale, S. J. 1969, AJ, 74, 483
* Quillen et al. (2018) Quillen, A. C., Chen, Y.-Y., Noyelles, B., & Loane, S. 2018, Celestial Mechanics and Dynamical Astronomy, 130, 11
* Rogoszinski & Hamilton (2020) Rogoszinski, Z. & Hamilton, D. P. 2020, ApJ, 888, 60
* Saillenfest et al. (2021) Saillenfest, M., Lari, G., & Boué, G. 2021, Nature Astronomy, https://doi.org/10.1038/s41550-020-01284-x
* Saillenfest et al. (2020) Saillenfest, M., Lari, G., & Courtot, A. 2020, A&A, 640, A11
* Saillenfest et al. (2019) Saillenfest, M., Laskar, J., & Boué, G. 2019, A&A, 623, A4
* Su & Lai (2020) Su, Y. & Lai, D. 2020, ApJ, 903, 7
* Tremaine (1991) Tremaine, S. 1991, Icarus, 89, 85
* Tremaine et al. (2009) Tremaine, S., Touma, J., & Namouni, F. 2009, AJ, 137, 3706
* Vazan et al. (2016) Vazan, A., Helled, R., Podolak, M., & Kovetz, A. 2016, ApJ, 829, 118
* Vienne & Duriez (1995) Vienne, A. & Duriez, L. 1995, A&A, 297, 588
* Vokrouhlický & Nesvorný (2015) Vokrouhlický, D. & Nesvorný, D. 2015, ApJ, 806, 143
* Ward (1975) Ward, W. R. 1975, AJ, 80, 64
* Ward & Canup (2006) Ward, W. R. & Canup, R. M. 2006, ApJ, 640, L91
* Ward & Hamilton (2004) Ward, W. R. & Hamilton, D. P. 2004, AJ, 128, 2501
* Wisdom (1985) Wisdom, J. 1985, Icarus, 63, 272
## Appendix A Orbital solution for Saturn
The secular orbital solution of Laskar (1990) is obtained by multiplying the
normalised proper modes $z_{i}^{\bullet}$ and $\zeta_{i}^{\bullet}$ (Tables VI
and VII of Laskar 1990) by the matrix $\tilde{S}$ corresponding to the linear
part of the solution (Table V of Laskar 1990). In the series obtained, the
terms with the same combination of frequencies are then merged together,
resulting in 56 terms in eccentricity and 60 terms in inclination. This forms
the secular part of the orbital solution of Saturn, which is what is required
by our averaged model.
The orbital solution is expressed in the variables $z$ and $\zeta$ as
described in Eqs. (7) and (8). In Tables 2 and 3, we give all terms of the
solution in the J2000 ecliptic and equinox reference frame.
Table 2: Quasi-periodic decomposition of Saturn’s eccentricity and longitude
of perihelion (variable $z$).
$\begin{array}[]{rrrr}\hline\cr\hline\cr k&\mu_{k}\
(^{\prime\prime}\,\text{yr}^{-1})&E_{k}\times 10^{8}&\theta_{k}^{(0)}\
(^{\text{o}})\\\ \hline\cr 1&28.22069&4818642&128.11\\\
2&4.24882&3314184&30.67\\\ 3&52.19257&173448&225.55\\\
4&3.08952&151299&121.36\\\ 5&27.06140&55451&38.70\\\ 6&29.37998&54941&37.54\\\
7&28.86795&32868&212.64\\\ 8&27.57346&28869&223.74\\\
9&53.35188&14683&134.91\\\ 10&-19.72306&14125&113.24\\\
11&76.16447&7469&323.03\\\ 12&0.66708&5760&73.98\\\ 13&5.40817&4420&120.24\\\
14&51.03334&4144&136.29\\\ 15&7.45592&1387&20.24\\\ 16&5.59644&805&290.35\\\
17&1.93168&801&201.08\\\ 18&4.89647&717&291.46\\\ 19&17.36469&674&123.95\\\
20&3.60029&408&121.39\\\ 21&2.97706&395&306.81\\\ 22&-56.90922&365&44.11\\\
23&17.91550&339&335.18\\\ 24&5.47449&303&95.01\\\ 25&5.71670&230&300.52\\\
26&17.08266&187&179.38\\\ 27&-20.88236&186&203.93\\\ 28&6.93423&167&349.39\\\
29&16.81285&157&273.89\\\ 30&1.82121&139&151.70\\\ 31&7.05595&136&178.86\\\
32&5.35823&124&274.88\\\ 33&7.34103&117&27.85\\\ 34&0.77840&99&65.10\\\
35&7.57299&82&191.47\\\ 36&17.63081&78&191.55\\\ 37&19.01870&67&219.75\\\
38&17.15752&64&325.02\\\ 39&17.81084&58&58.56\\\ 40&18.18553&53&57.27\\\
41&5.99227&45&293.56\\\ 42&17.72293&44&48.46\\\ 43&5.65485&44&219.22\\\
44&4.36906&39&40.82\\\ 45&16.52731&39&131.91\\\ 46&6.82468&38&14.53\\\
47&18.01611&37&44.83\\\ 48&5.23841&36&92.97\\\ 49&17.47683&34&260.26\\\
50&18.46794&32&4.67\\\ 51&-0.49216&29&164.74\\\ 52&17.55234&27&197.65\\\
53&16.26122&26&58.89\\\ 54&7.20563&24&323.91\\\ 55&18.08627&22&356.17\\\
56&7.71663&15&273.52\\\ \hline\cr\end{array}$
Note: This solution has been directly obtained from Laskar (1990) as explained in
the text. The phases $\theta_{k}^{(0)}$ are given at time J2000.
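As an illustration (our own sketch, not code from the paper), the quasi-periodic series of Table 2 can be evaluated directly: $z(t)=\sum_{k}E_{k}\exp[i(\mu_{k}t+\theta_{k}^{(0)})]$, with $\mu_{k}$ in arcsec/yr and $\theta_{k}^{(0)}$ in degrees. Keeping only the four largest tabulated terms already gives an eccentricity $e=|z|$ close to Saturn's present value; the helper name `z_saturn` and the four-term truncation are our own choices.

```python
import numpy as np

# Leading terms of Table 2: (mu_k [arcsec/yr], E_k * 1e8, theta_k^(0) [deg])
terms = [
    (28.22069, 4818642, 128.11),
    ( 4.24882, 3314184,  30.67),
    (52.19257,  173448, 225.55),
    ( 3.08952,  151299, 121.36),
]

def z_saturn(t_yr):
    """Evaluate the truncated series z = e * exp(i * varpi) at time t
    measured in years from J2000."""
    z = 0j
    for mu, E, theta0 in terms:
        # Convert the secular frequency from arcsec/yr to degrees, add the
        # phase at J2000, then switch to radians.
        phase = np.radians(mu * t_yr / 3600.0 + theta0)
        z += E * 1e-8 * np.exp(1j * phase)
    return z

e_now = abs(z_saturn(0.0))
print(e_now)  # truncated series: close to Saturn's current eccentricity
```

Evaluating the full 56-term series (and the analogous one of Table 3 for $\zeta$) is what the averaged spin-axis model of the paper requires as orbital input.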
Table 3: Quasi-periodic decomposition of Saturn’s inclination and longitude of
ascending node (variable $\zeta$).
$\begin{array}[]{rrrr}\hline\cr\hline\cr k&\nu_{k}\
(^{\prime\prime}\,\text{yr}^{-1})&S_{k}\times 10^{8}&\phi_{k}^{(0)}\
(^{\text{o}})\\\ \hline\cr 1&0.00000&1377395&107.59\\\
2&-26.33023&785009&127.29\\\ 3&-0.69189&55969&23.96\\\
4&-3.00557&39101&140.33\\\ 5&-26.97744&5889&43.05\\\ 6&82.77163&3417&128.95\\\
7&58.80017&2003&212.90\\\ 8&34.82788&1583&294.12\\\ 9&-5.61755&1373&168.70\\\
10&-17.74818&1269&123.28\\\ 11&-27.48935&1014&218.53\\\
12&-25.17116&958&215.94\\\ 13&-50.30212&943&209.84\\\ 14&-1.84625&943&35.32\\\
15&-2.35835&825&225.04\\\ 16&-4.16482&756&51.51\\\ 17&-7.07963&668&273.79\\\
18&-28.13656&637&314.07\\\ 19&-0.58033&544&17.32\\\ 20&-5.50098&490&162.89\\\
21&-6.84091&375&106.28\\\ 22&-7.19493&333&105.15\\\ 23&-6.96094&316&97.96\\\
24&-3.11725&261&326.97\\\ 25&-7.33264&206&196.75\\\ 26&-18.85115&168&60.48\\\
27&-5.85017&166&345.47\\\ 28&0.46547&157&286.88\\\ 29&-19.40256&141&208.18\\\
30&-17.19656&135&333.96\\\ 31&-5.21610&124&198.91\\\ 32&-5.37178&123&215.48\\\
33&-5.10025&121&15.38\\\ 34&-18.01114&96&242.09\\\ 35&-17.66094&91&138.93\\\
36&11.50319&83&281.01\\\ 37&-17.83857&74&289.13\\\ 38&-5.96899&71&170.64\\\
39&-6.73842&67&44.50\\\ 40&-17.54636&66&246.71\\\ 41&-7.40536&62&233.35\\\
42&-7.48780&58&47.95\\\ 43&-6.56016&54&303.47\\\ 44&0.57829&54&103.72\\\
45&-6.15490&51&269.77\\\ 46&-17.94404&47&212.26\\\ 47&-8.42342&45&211.21\\\
48&-18.59563&43&98.11\\\ 49&20.96631&32&57.78\\\ 50&9.18847&31&1.15\\\
51&-1.19906&30&132.74\\\ 52&10.34389&20&190.42\\\ 53&18.14984&19&291.19\\\
54&-19.13075&18&305.90\\\ 55&-18.97001&8&73.36\\\ 56&-18.30007&7&250.45\\\
57&-18.69743&4&221.70\\\ 58&-18.77933&4&222.83\\\ 59&-18.22681&4&46.30\\\
60&-19.06544&4&50.21\\\ \hline\cr\end{array}$
Note: This solution has been directly obtained from Laskar (1990) as explained
in the text. The phases $\phi_{k}^{(0)}$ are given at time J2000.
## Appendix B Examples of trajectories featuring extreme phase effects
In Sect. 3.3, we show that trajectories crossing the separatrix can feature
extreme phase effects when they reach the resonance in the vicinity of its
hyperbolic point and follow its drift over time. This maintains them on the
edge between capture (Type 2 trajectory) and non-capture (Type 1 trajectory).
Figure 18 shows examples of such trajectories obtained for Titan's nominal
migration rate. These trajectories are of Type 2 (i.e. currently inside the
resonance). Instead of the precession angle $\psi$, we plot the resonant angle
$\sigma_{3}=\psi+\phi_{3}$, where $\phi_{3}$ evolves as in Eq. (8). The
elliptic point of the resonance (Cassini state 2) is located at
$\sigma_{3}=0$, and the hyperbolic equilibrium point (Cassini state 4) is
located at $\sigma_{3}=\pi$ (see e.g. Saillenfest et al. 2019). We see that
passing from one spike of Fig. 9 to the next one corresponds to performing one
more oscillation inside the resonance. For purely adiabatic dynamics, all
spikes would be infinitely close to each other, such that it would be
impossible to get one specific trajectory by finely tuning $\lambda$.
Figure 19 shows another example of extreme phase effect but for a trajectory
of Type 1 (i.e. currently outside the resonance). It is obtained using a
strongly non-adiabatic migration, which widens the parameter ranges allowing
for extreme phase effects (see Sect. 3.3). This trajectory does not exactly
match Saturn’s spin-axis orientation today, but it lies within our strict
success criteria defined in Sect. 4.3: its current coordinates $\varepsilon$
and $\psi$ are within $0.4^{\circ}$ and $4.9^{\circ}$ of the actual ones,
respectively. This trajectory appears in the top portion of Fig. 14 as the
leftmost isolated grey point. It can be linked to the bottom of a spike in
Fig. 7, that is, to one of the top blue stripes of Fig. 8. After having
bifurcated away from the hyperbolic point, Fig. 19 shows that this trajectory
has performed one complete revolution of $\sigma_{3}$. The two other isolated
grey points in Fig. 14 have performed zero and two complete revolutions, respectively.
Figure 18: Example of trajectories featuring an extreme phase effect. The left
column shows the evolution of the obliquity, and the right column shows the
evolution of the resonant angle $\sigma_{3}=\psi+\phi_{3}$. The migration
parameter is $b=b_{0}$. For each row, the parameter $\lambda$ used corresponds
to the tip of a spike in Fig. 9 (see labels), tuned at the $10^{-15}$ level.
The pink area represents the interval occupied by the resonance once the
separatrix appears. The blue curve shows the location of the hyperbolic
equilibrium point (Cassini state 4). The green point shows Saturn's current
location (at $t=0$).
Figure 19: Same as Fig. 18, but for a trajectory of Type 1. This trajectory
has a migration parameter $b=7.37\,b_{0}$ and a normalised polar moment of
inertia $\lambda=0.2114$.
## Appendix C Experiments on the initial obliquity prior
In Sect. 4.3, a Monte Carlo experiment is performed in order to look for the
most likely values of Saturn’s precession constant and Titan’s migration rate.
Formation models predict that Saturn’s primordial obliquity was near-zero, but
the statistics obtained greatly depend on the precise distribution used as
initial conditions. In this section, we investigate this dependence further
with additional Monte Carlo experiments.
Figure 20 shows the statistics obtained when assuming a uniform distribution
of initial conditions over the spherical cap defined by $\varepsilon\leqslant
5^{\circ}$ (i.e. with a uniform sampling of $\cos\varepsilon$ instead of
$\varepsilon$). Contrary to Fig. 14, this distribution is isotropic: it
assumes that all directions over the spherical cap are equiprobable; small
obliquity values are not particularly favoured. In practice, we can avoid
running millions of simulations again by simply weighting the count of each
run in Fig. 14 by $\sin\varepsilon$. As illustrated in Fig. 21, this trick
allows us to mimic a uniform distribution of $\cos\varepsilon$ from a uniform
distribution of $\varepsilon$. This method has the drawback of reducing the
resolution at the high-obliquity end of the distribution by roughly a factor
of two (since trajectories there are weighted by a factor $w\approx 2$), but
this is not a problem here thanks to the high number of simulations.
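The weighting trick can be sketched in a few lines. Below is a minimal illustration (our own, not the authors' code) showing that weighting samples drawn uniformly in $\varepsilon$ by $\sin\varepsilon$ reproduces a distribution that is uniform in $\cos\varepsilon$, i.e. isotropic over the spherical cap; variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
eps_max = np.radians(5.0)  # spherical cap: epsilon <= 5 degrees

# Raw Monte Carlo grid: uniform sampling of the obliquity itself.
eps = rng.uniform(0.0, eps_max, 200_000)

# Weight each run by sin(eps): since d(cos eps) = -sin(eps) d(eps),
# this mimics a uniform distribution of cos(eps) over the cap.
w = np.sin(eps)

weighted_mean_cos = np.average(np.cos(eps), weights=w)
# For a uniform distribution of cos(eps) on [cos(eps_max), 1], the mean
# of cos(eps) is (1 + cos(eps_max)) / 2.
print(weighted_mean_cos)
```

The same reweighting applied to the counts of Fig. 14 yields Fig. 20 without re-running the simulations.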
Figure 20: Same as Fig. 14, but considering an isotropic distribution of
initial spin orientation with $\varepsilon\leqslant 5^{\circ}$. It is obtained
from Fig. 14 by weighting the count of each run by $\sin\varepsilon$ (see Fig.
21). Figure 21: Sampled distribution of initial obliquity for one arbitrary
point of Fig. 14, made of $2400$ simulations. The raw count of sampled
trajectories is shown in red; the weighted count is shown in blue. Top:
histogram with respect to the obliquity. Bottom: histogram with respect to the
cosine of obliquity. The probability density functions (‘pdf’) are shown by
the red and blue curves.
Interestingly, Fig. 20 shows that a uniform distribution of initial spin
directions over the spherical cap yields approximately equal likelihoods for the
adiabatic ($b\lesssim 3b_{0}$) and non-adiabatic ($b\gtrsim 3b_{0}$) regimes.
If we enlarge the distribution of initial conditions to $\varepsilon\leqslant
10^{\circ}$, Fig. 22 shows that the limits between the adiabatic and non-
adiabatic regimes completely vanish, leaving only one large region with
roughly constant likelihood.
Figure 22: Same as Fig. 14, but for a range of initial spin orientations
enlarged to $0^{\circ}\leqslant\varepsilon\leqslant 10^{\circ}$. Each point is
made of $2400$ numerical simulations. a: uniform random distribution of
$(\varepsilon,\psi)\in[0^{\circ},10^{\circ}]\times[0,2\pi)$. b: uniform random
distribution of $(\varepsilon,\psi)$ over the spherical cap defined by
$\varepsilon\leqslant 10^{\circ}$. As in Fig. 20, Panel b is obtained by
weighting the count numbers of Panel a. The black contours show the
$5^{\circ}$ and $10^{\circ}$ levels obtained through backward numerical
integrations (see Fig. 8).
Figure 23 shows the distribution of successful runs starting from initial
obliquities in the range $2.5^{\circ}\leqslant\varepsilon\leqslant
7.5^{\circ}$. This interval turns out to be roughly the one that most favours
the adiabatic regime and Titan's nominal migration rate, to the detriment of
the non-adiabatic regime. This is not surprising, since past obliquities
between about $2^{\circ}$ and $7^{\circ}$ are the most likely for Titan’s
nominal migration rate (see Fig. 13).
Figure 23: Same as Fig. 14, but for initial obliquities uniformly distributed
between $2.5^{\circ}$ and $7.5^{\circ}$. It is obtained from a sub-sample of
Fig. 22a, such that each point is made of about $1200$ runs. The black
contours show the $5^{\circ}$ and $10^{\circ}$ levels obtained through
backward numerical integrations (same as Fig. 22).
# Modeling Assumptions Clash with the Real World: Transparency, Equity, and
Community Challenges for Student Assignment Algorithms
Samantha Robertson, University of California, Berkeley, Berkeley, California
<EMAIL_ADDRESS>; Tonya Nguyen, University of California, Berkeley, Berkeley,
California<EMAIL_ADDRESS>; and Niloufar Salehi, University of California,
Berkeley, Berkeley, California<EMAIL_ADDRESS>
(2021)
###### Abstract.
Across the United States, a growing number of school districts are turning to
matching algorithms to assign students to public schools. The designers of
these algorithms aimed to promote values such as transparency, equity, and
community in the process. However, school districts have encountered practical
challenges in their deployment. In fact, San Francisco Unified School District
voted to stop using and completely redesign their student assignment algorithm
because it was frustrating for families and it was not promoting educational
equity in practice. We analyze this system using a Value Sensitive Design
approach and find that one reason values are not met in practice is that the
system relies on modeling assumptions about families’ priorities, constraints,
and goals that clash with the real world. These assumptions overlook the
complex barriers to ideal participation that many families face, particularly
because of socioeconomic inequalities. We argue that direct, ongoing
engagement with stakeholders is central to aligning algorithmic values with
real world conditions. In doing so we must broaden how we evaluate algorithms
while recognizing the limitations of purely algorithmic solutions in
addressing complex socio-political problems.
student assignment, mechanism design, value sensitive design
Copyright: rights retained. Submission ID: pn9848. Conference: CHI Conference
on Human Factors in Computing Systems (CHI '21), May 8–13, 2021, Yokohama,
Japan. DOI: 10.1145/3411764.3445748. ISBN: 978-1-4503-8096-6/21/05. CCS:
Human-centered computing, Human computer interaction (HCI).
Figure 1. Student assignment algorithms were designed to meet school district
values based on modeling assumptions (blue/top) that clash with the
constraints of the real world (red/bottom). Students are expected to have
predefined preferences over all schools, which they report truthfully. The
procedure is intended to be easy to explain and to optimally satisfy student
preferences. In practice, however, these assumptions clash with a real world
characterized by unequal access to information, resource constraints (e.g.
commuting), and distrust.
[A flow chart showing the inputs (student preferences and school priorities)
to the matching algorithm and its output (student-school assignments),
labelled with modeling assumptions and real world challenges at each stage of
the process.]A flow chart showing the inputs and outputs to the matching
algorithm. On the left there is a student icon next to a box labelled
“Preferences: Ranked list of schools.” Below that is a school icon with a box
labelled “Priority categories: Schools prioritize some applicants e.g.
siblings, underserved students, neighborhood students.” These two boxes have
arrows leading to a box in the centered labeled “Matching algorithm.” This box
then leads to a final box on the right labelled “Assignments” with an icon
showing a student and a school. Surrounding the flow chart are pairs of text
showing the modeling assumptions and corresponding real world challenges and
constraints. From left to right and top to bottom these read: “Equitable
access to all schools / Barriers to accessing schools;” “Strategy-proof /
Distrust and strategic behavior;” “All students can participate / Barriers to
participation;” “Clearly defined, explicit procedure / Difficult to explain &
understand;” “Competition between schools increases quality / Competition
driven by social signalling stereotypes;” “Prioritize neighborhood students /
Capacity mismatch and political tension.”
## 1\. Introduction
Algorithmic systems are increasingly involved in high-stakes decision-making
such as child welfare (Saxena et al., 2020; Brown et al., 2019), credit
scoring (Koren, 2015), medicine (Ghassemi et al., 2020; Obermeyer et al.,
2019), and law enforcement (Barry-Jester et al., 2015). Documented instances
of discriminatory algorithmic decision-making (Barry-Jester et al., 2015;
Chouldechova, 2017; Obermeyer et al., 2019; Ali et al., 2019) and biased
system performance (Buolamwini and Gebru, 2018; Noble, 2018; Sweeney, 2013;
Bolukbasi et al., 2016) have prompted a growing interest in designing systems
that reflect the values and needs of the communities in which they are
embedded (Friedman and Jr., 2003; Zhu et al., 2018). However, even when
systems are designed to support shared values, they do not always promote
those values in practice (Voida et al., 2014). One reason why an algorithmic
system may not support values as expected is that these expectations rely on
modeling assumptions about the world that clash with how the world actually
works. In this paper, we examine one such breakdown, the San Francisco Unified
School District’s student assignment algorithm, to study where and how those
clashes occur and to offer paths forward.
San Francisco has a long history of heavily segregated neighborhoods which has
resulted in segregated schools when students attend their neighborhood school
(Haney et al., 2018). In 2011, in an effort to promote educational equity and
racially integrated classrooms, San Francisco Unified School District joined
many cities across the country who were turning to assignment algorithms to
determine where students go to school (Haney et al., 2018). Rather than
enrolling in their neighborhood school, students submit their ranked
preference list of schools to the district, and the algorithm uses those
preferences along with school priorities and capacity constraints to match
students to schools. These algorithms have been met with great excitement for
their potential to provide more equitable access to public education and give
families more flexibility compared to a neighborhood-based assignment system
(Kasman and Valant, 2019). By 2018, however, diversity in schools had instead
decreased and parents were frustrated by an opaque and unpredictable process
(Haney et al., 2018). In fact, many schools were now more segregated than the
neighborhoods they were in (San Francisco Unified School District, 2015). The
algorithm had failed to support the values its designers had intended and the
San Francisco Board of Education voted for a complete overhaul and redesign of
the system (Haney et al., 2018).
Following a Value Sensitive Design approach, we ask two central questions: 1)
What values were designers and policy-makers hoping this algorithm would
support? 2) Why were those values not met in practice? To answer these
questions we first analyzed the school district’s publicly available policy
documents on student assignment and conducted a review of the relevant
economics literature where matching algorithms for student assignment have
been developed. To answer the second question, we conducted an empirical
investigation into how the algorithm is used in practice. We conducted 13
semi-structured interviews with parents in San Francisco who have used the
assignment system and performed content analysis of 12 Reddit threads where
parents discussed the algorithm. We complement our qualitative findings with
quantitative analysis of application and enrollment data from 4,594 incoming
kindergartners in 2017. This triangulation of methods enables us to paint a
richer picture of the whole ecosystem in which the algorithm is embedded.
We found that the algorithm failed to support its intended values in practice
because it’s theoretical promise depended on modeling assumptions that
oversimplify and idealize how families will behave and what they seek to
achieve. These assumptions overlook the complex barriers to ideal
participation that many families face, particularly because of socioeconomic
inequalities. Additionally, the system designers vastly underestimated the
cost of information acquisition and overestimated the explainability and
predictability of the algorithm. In contrast to expectations that the
algorithm would ensure a transparent, equitable student assignment process,
we find widespread strategic behavior, a lack of trust, and high levels of
stress and frustration among families.
Student assignment algorithms promise a clear, mathematically elegant solution
to what is in reality a messy, socio-political problem. Our findings show that
this clash can not only prevent the algorithm from supporting stakeholders’
values, but can even cause it to work against them. Human-centered approaches
may help algorithm designers build systems that are better aligned with
stakeholders’ values in practice. However, algorithmic systems will never be
perfect nor sufficient to address complex social and political challenges. For
this reason, we must also design systems that are adaptable to complex,
evolving community needs and seek alternatives where appropriate.
## 2\. Related work
In this work we build on two major areas of related work: work in economics on
designing and evaluating matching algorithms for student assignment; and
literature in HCI on Value Sensitive Design. We end with a review of
literature that examines the role of modeling assumptions in algorithmic
systems. In this paper we use the term “algorithmic system” or “student
assignment system” to broadly refer to the matching algorithm as well as the
district’s processes and families’ practices that make up a part of the
application and enrollment process.
### 2.1. Matching Algorithms for Student Assignment
Economists have developed matching algorithms to find optimal assignments
between two sides of a market based on each side’s preferences (Gale and
Shapley, 1962; Shapley and Scarf, 1974). These algorithms have since been
applied to numerous real world markets, such as university admissions and
organ donation (Roth, 2015). Abdulkadiroğlu and Sönmez proposed two variants
of matching algorithms for assigning students to public schools
(Abdulkadiroğlu and Sönmez, 2003): Deferred Acceptance (DA) (Gale and Shapley,
1962) and Top-Trading Cycles (TTC) (Shapley and Scarf, 1974). Student-optimal
DA finds the stable matching that most efficiently satisfies student
preferences, while TTC finds a matching that is Pareto-efficient in the
satisfaction of student preferences but is not guaranteed to be stable. In
these systems, each student submits a ranked list of schools that they would
like to attend. Schools may have priority categories for
students, such as siblings or neighborhood priorities. Students’ preferences
are used in conjunction with school priorities to assign each student to an
available school seat. These algorithms have promising theoretical properties
that should ensure a fair and efficient allocation of seats. For example, they
are strategy-proof, meaning students cannot misrepresent their preferences to
guarantee an improved outcome. They also produce assignments that efficiently
satisfy students’ preferences. Student assignment systems based on matching
algorithms have been championed for their potential to advance equitable
access to high quality education, create more diverse classrooms, and provide
more flexibility to families compared to a traditional neighborhood system
(Kasman and Valant, 2019).
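To make the mechanics concrete, the following is a minimal sketch of the student-proposing Deferred Acceptance procedure described above (a textbook version of the algorithm; the function and data-structure names are our own, not taken from the cited papers):

```python
from collections import defaultdict

def deferred_acceptance(prefs, priorities, capacity):
    """Student-proposing Deferred Acceptance (Gale-Shapley).

    prefs:      {student: [school, ...]}  ranked most- to least-preferred
    priorities: {school: [student, ...]}  highest priority first (complete lists)
    capacity:   {school: number of seats}
    Returns {student: assigned school, or None if unassigned}.
    """
    # Precompute each school's priority rank for fast comparisons.
    rank = {s: {stu: i for i, stu in enumerate(order)}
            for s, order in priorities.items()}
    next_choice = {stu: 0 for stu in prefs}  # index of next school to propose to
    held = defaultdict(list)                 # tentative acceptances per school
    free = list(prefs)                       # students without a tentative seat
    while free:
        stu = free.pop()
        if next_choice[stu] >= len(prefs[stu]):
            continue  # student has exhausted their list; stays unassigned
        school = prefs[stu][next_choice[stu]]
        next_choice[stu] += 1
        # School tentatively holds its highest-priority proposers.
        held[school].append(stu)
        held[school].sort(key=lambda x: rank[school][x])
        if len(held[school]) > capacity[school]:
            free.append(held[school].pop())  # displace the lowest-priority holder
    match = {stu: None for stu in prefs}
    for school, students in held.items():
        for stu in students:
            match[stu] = school
    return match
```

With complete priority lists this yields the student-optimal stable matching; TTC differs in that it clears trades of priorities in cycles rather than holding tentative assignments.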
As these systems have been implemented in the real world they have faced new
types of challenges, such as confusion for families and decreasing classroom
diversity. Pathak suggests that early theoretical literature overlooked or
oversimplified challenges of practical importance (Pathak, 2017). Economists
have employed empirical methods to further understand strategic behavior
(Hassidim et al., 2016; Kapor et al., 2020; Rees-Jones and Skowronek, 2018;
Ding and Schotter, 2017, 2019; Guillen and Hing, 2014; Pais and Pintér, 2008;
Guillen and Hakimov, 2017), information needs (Chen and He, 2020; Hermstrüwer,
2019; Guillen and Hakimov, 2018), and diversity constraints (Laverde, 2020;
Gonczarowski et al., 2019; Hafalir et al., 2013; Nguyen and Vohra, 2019;
Hastings et al., 2007; Glazerman and Dotter, 2017; Oosterbeek et al., 2019).
While these approaches improve some technical shortcomings of the systems,
they do not study the values supported by the design of the system itself, nor
the human factors that shape how it is used in practice.
In this paper we take a human-centered approach and study parents and policy-
makers to gain a deeper understanding of their values, attitudes,
understandings, and uses of the student assignment system in practice. Kasman
and Valant warn that student assignment algorithms are subject to strong
political forces and are easily misunderstood (Kasman and Valant, 2019). They
argue that the ultimate success of matching algorithms for student assignment
will depend on how people interact with them (Kasman and Valant, 2019). Prior
work in HCI has studied human values with respect to matching algorithms in
experimental settings (Lee and Baykal, 2017; Lee et al., 2019). Central
concerns for participants included the algorithms’ inability to account for
social context, the difficulty of quantifying their preferences, and the lack
of opportunities for compromise (Lee and Baykal, 2017). We build on this work
and study stakeholders’ values with respect to a high-stakes matching
algorithm that has been in use for almost a decade to assign students to
public schools. Further, we focus on why the values that these algorithms
theoretically support, like transparency and equity, have not been promoted in
practice.
### 2.2. Value Sensitive Design
Value Sensitive Design (VSD) is a theoretically grounded methodology to
identify and account for stakeholders’ values in the design of new
technologies (Friedman et al., 2006). In Value Sensitive Design, “values” are
broadly defined as “what a person or group of people consider important in
life,” although values with ethical import are considered especially important
(Friedman and Jr., 2003). VSD is a tripartite methodology, involving
conceptual, empirical and technical investigations in an iterative and
integrative procedure (Friedman et al., 2006). In the conceptual stage,
designers identify stakeholders’ relevant values. Empirical investigations
examine stakeholders’ interactions with the technology and how they apprehend
values in practice (Davis and Nathan, 2015; Friedman et al., 2017). Technical
investigations explore how the properties and mechanisms of a particular
technology support or hinder values. VSD takes a proactive stance: values
should ideally be considered early on and throughout the design process (Davis
and Nathan, 2015). However, VSD can also be applied retrospectively to
evaluate deployed systems with respect to human values (Friedman et al.,
2006). We apply VSD methodology to understand what values San Francisco
Unified School District’s assignment algorithm was designed to support, and
why it has not supported those values in practice, leading to its redesign.
Zhu et al. adapt the VSD framework to the design and analysis of algorithmic
systems through “Value-Sensitive Algorithm Design” (VSAD) (Zhu et al., 2018).
VSAD emphasizes the need to evaluate algorithms based on whether they are
acceptable to stakeholders’ values, whether they effectively address the
problem they were designed for, and whether they have had positive broader
impacts (Zhu et al., 2018). This is in contrast to traditional evaluation
procedures for algorithmic systems, which depend heavily on narrow,
quantitative success metrics (Zhu et al., 2018). Subsequent work has applied
the VSAD framework to reveal stakeholder values in the context of a machine
learning algorithm used to predict the quality of editor contributions on
Wikipedia (Smith et al., 2020). The authors emphasize the need to integrate
values not only into the design of the algorithm itself, but also into the
user interface and work practices that form a part of the algorithmic
ecosystem (Smith et al., 2020). This is consistent with the interactional
principle in VSD, which dictates that “values are not embedded within a
technology; rather, they are implicated through engagement” (Davis and Nathan,
2015).
As VSD has been developed and more widely adopted, researchers have
encountered some challenges (Davis and Nathan, 2015; Borning and Muller, 2012;
Le Dantec et al., 2009). One challenge is resolving value conflicts, both
between stakeholders with different beliefs (Flanagan et al., 2005) and
between competing values (Shilton, 2013). However, even when stakeholders
agree on important values, it can be difficult to predict whether a technology
that supports a value in theory will actually uphold that value when the
system is deployed in the real world. Zhu et al. apply VSAD to design and
evaluate an algorithm to recruit new editors to Wikipedia communities (Zhu et
al., 2018). They found that their algorithm was acceptable and helpful to the
community, but also discovered unanticipated shortcomings. For instance, only
more experienced newcomers increased their contributions in response to the
recruitment outreach (Zhu et al., 2018). Ames offers another example of values
breakdown, contrasting the intended values of the One Laptop Per Child
project, such as productivity, with the consumptive values that were enacted
in practice (Ames, 2016).
Researchers have identified various causes of breakdowns between intended
values and values in practice. Ames’s work highlights the importance of
understanding local needs in the context where a technology is to be deployed.
Manders-Huits argues that problems can arise when designers misinterpret
stakeholders’ values, or because stakeholders’ values changed over time
(Manders-Huits, 2011). Similar to our work, Voida et al. find that tension
arises from a misalignment between how a computational system operationalizes
a value and how the people who use the system understand that value (Voida et
al., 2014). We build on these findings by examining a clash between
algorithmic logics and real-world goals and practices. We connect these
challenges to emerging work studying the role of modeling assumptions and
abstraction in algorithmic breakdown.
### 2.3. Modeling Assumptions in Algorithmic Systems
All algorithmic systems rely on an implicit model of the world in order to
compute on it. Any model is a simplified abstraction of reality but the
simplifying assumptions often go unstated (Box, 1979). For example, Selbst et
al. describe the algorithmic frame in supervised machine learning, in which
each observation in labelled training data represents an abstraction of some
real-world entity, often a human being (Selbst et al., 2019). The authors warn
that algorithmic systems can break down if they rely on abstractions that do
not capture important aspects of the interactions between technical and social
systems. Researchers have documented challenges both when assumptions are too
broad, and when they are overly narrow. For instance, Chancellor et al.
identified significant inconsistency in how researchers conceptualize and
model humans when using machine learning to predict mental health (Chancellor
et al., 2019). In contrast, Saxena et al. found an overly narrow focus on risk
prediction in the U.S. child welfare system that oversimplifies the complexity
of the domain’s needs (Saxena et al., 2020).
In the student assignment context, Hitzig identified how matching algorithms
rely on an abstraction of the world that makes strong, unstated normative
assumptions regarding distributive justice (Hitzig, 2020), or the appropriate
distribution of benefits and burdens in a group. The matching paradigm assumes
that the ideal outcome is the one where every student is assigned to their
first choice school. Hitzig points out that this emphasis on efficiency may
not align with school districts’ goals, but is often framed in economics as
objectively optimal rather than only one of many ways to distribute resources.
This work demonstrates how unstated, erroneous modeling assumptions about the
world can break an algorithmic system. Baumer argues that this breakdown can
occur when an algorithm’s designers and stakeholders do not share a common
understanding of the system’s goals and limitations (Baumer, 2017). We expand
on this work by exploring how the designers of matching algorithms for student
assignment relied on certain modeling assumptions about the world in order to
justify their designs with respect to values like equity and transparency. We
analyze the breakdown of the student assignment algorithm in San Francisco as
a case study of what happens when these assumptions clash with stakeholders’
real world goals and constraints.
## 3. Methods
Our goal in this research is to understand the values that San Francisco
Unified School District’s (SFUSD) student assignment system was designed to
support and compare and contrast these to parents’ experiences in practice.
Following Value Sensitive Design methodology (Friedman and Jr., 2003), we
begin with a conceptual investigation drawing on prior literature in economics
and SFUSD policy documents to identify the values the system was intended to
promote. Then, we conduct a mixed-method empirical investigation to understand
why the system ultimately did not support those values and needed to be
redesigned.
### 3.1. Data Collection
We collected data from three sources to understand the district’s policy goals
(how the system was intended to work) and parent experiences (how the system
has actually worked).
#### 3.1.1. District Policies
We collected two official documents from SFUSD to understand the district’s
policy goals, their justification for their original design in 2011, and the
reasons they voted for a redesign in 2018. We accessed the official policy
describing the existing assignment system (San Francisco Unified School
District Office of Education, nd) and the resolution that approved the ongoing
redesign (Haney et al., 2018) from the enrollment section of SFUSD’s website
(https://www.sfusd.edu/schools/enroll/ad-hoc-committee; accessed April 2020).
#### 3.1.2. Parent Perspectives
We collected parent experiences in two primary formats: through interviews
with parents, and from public online discussions on social media. The
interviews allowed us to ask questions and prompt parents to reflect on and
dig deeper into their experiences with the assignment system. The online
discussions provide potentially less filtered reflections shared without the
presence of researchers and reveal how parents seek and share information
online. We supplement this data with a presentation titled “Reflections on
Student Assignment” by the African American Parents Advisory Council (AAPAC)
(African American Parent Advisory Council, 2017), which was also downloaded
from the enrollment section of SFUSD’s website.
We conducted semi-structured interviews with 13 parents who have used the
student assignment system to apply for elementary schools in SFUSD. We
recruited parents through four parenting email and Facebook groups by
contacting group administrators who shared a brief recruitment survey on our
behalf. During the interview, we asked participants to describe their
application and enrollment experiences, and to reflect on their understanding
of the assignment algorithm. Interviews were 45 minutes and participants
received a $30 gift card. All interviews were conducted over the phone in
English between February and August 2020.
Twelve parents completed a demographic survey. Parents reported their income as
low income (1), middle income (5), and upper-middle to high income (4) and
identified their race or ethnicity as white (4), Asian (3), Chinese (2), white
and Hispanic (1), white and Middle Eastern (1), and Vietnamese (1). The 12
respondents reside in six different zip codes in the city. In all 12
households one or more parents had a Bachelor’s degree and in nine households
the highest level of education was a graduate degree. To preserve participant
privacy, we identify participants in this paper by unique identifiers P1
through P13.
We supplement the interview data with twelve Reddit threads posted on the
r/sanfrancisco subreddit (https://reddit.com/r/sanfrancisco) between 2016 and
2020. These threads were selected by conducting a comprehensive search of
r/sanfrancisco using the search term “school lottery,” as it is commonly known
to parents. (The search was conducted using the PushShift Reddit repository at
https://redditsearch.io/.) Each post was reviewed to ensure that it was a
discussion of the current SFUSD assignment algorithm. From the twelve threads
made up of 678 posts and comments, we manually coded content where the author
demonstrated first-hand experience with the assignment algorithm, resulting in
a final dataset of 128 posts from 83 contributors. Excluded posts were those
that were off topic or presented the author’s political view rather than their
personal experiences with the system. We paraphrase this content to protect
the users’ privacy.
#### 3.1.3. Application and Enrollment Data
We complement our qualitative data about parent experiences with publicly
available, de-identified kindergarten application data from 2017 to understand
higher-level trends in how parents use the system. (The data was collected as
part of a public records request by local journalist Pickoff-White for a story
about how parents try to game the system (Pickoff-White, 2018); it is
available at https://github.com/pickoffwhite/San-Francisco-Kindergarten-
Lottery.) For each of the 4,594 applicants, the data includes their ranked list
of schools, the school they were assigned to, and the school they enrolled in.
It also includes the student’s zipcode, race, and whether the student resides
in a census tract with the lowest performing schools (CTIP1 area), which makes
them eligible for priority at their preferred schools. Applicants are 28%
Asian or Pacific Islander, 24% white, 23% Hispanic and 3.2% Black. 21%
declined to state their race. Approximately 15% of applicants were eligible
for CTIP1 priority, 45% of whom are Hispanic. 11% of CTIP1-eligible students
are Black, which is 53% of all Black applicants.
#### 3.1.4. Limitations
We recruited interview participants through convenience sampling online and
complemented the interviews with existing online data, which biases our data
towards those who have the time and motivation to participate in research
studies, online discussions, and district focus groups. Our dataset lacks
sufficient representation of low-income families and Black and Hispanic
families. It is important that future work addresses this limitation,
particularly considering that integration is a key goal for the school
district, and that these families are underrepresented in existing discourses.
In future work we will focus on understanding the experiences of historically
underserved families with student assignment algorithms, specifically families
of color, low-income families, and families with low English proficiency.
### 3.2. Data Analysis
In order to understand the district’s values for student assignment and the
reasons why the assignment algorithm has not supported these values, we
conduct inductive, qualitative content analysis (Merriam and Associates, 2002)
and quantitative data analysis.
#### 3.2.1. Qualitative Analysis
Our qualitative dataset was made up of district policy documents and community
input, interview transcripts, and Reddit content. We performed an open-ended
inductive analysis, drawing on elements of grounded theory method (Charmaz,
2014). We began with two separate analyses: one to understand the district’s
values and policies; and a second to understand parent experiences and
perspectives. The authors met regularly throughout the analysis to discuss
codes and emerging themes. In both analyses we began by conducting open coding
on a line-by-line basis using separate code books (Charmaz, 2014). We then
conducted axial coding to identify relationships between codes and higher
level themes. In the axial coding stage for the SFUSD policy documents, we
identified three high level codes relevant to our research questions: Values:
What are the district’s values and goals for student assignment?; Mechanism:
How was the district’s current system expected to support their values?; and
Challenges: Why did the district ultimately decide to redesign the system?.
Next, we analyzed parent perspectives from the community input documents,
interview transcripts, and Reddit content. We conducted two rounds of open
coding. First, we focused only on these three data sources. We identified
codes that included “priorities,” “algorithmic theories,” and “challenges.”
Then, we linked the open codes from the first round to the challenges
identified in the policy documents. We found that challenges parents described
in our parent perspectives dataset were relatively consistent with those
described in the policy documents and we reached theoretical saturation after
approximately ten interviews.
#### 3.2.2. Quantitative Analysis
We linked the application dataset to publicly available school-level
standardized test results in order to understand how families use the system
to access educational opportunities. We accessed third grade results in the
California Smarter Balanced Summative Assessments in 2017-2018, provided by
the California Department of Education (data available at https://caaspp-
elpac.cde.ca.gov/caaspp/ResearchFileList). We link the school achievement data
to the applications by state-level (CDS) code. The preference data contains
only school numbers, a district-level coding scheme; SFUSD has published a
document linking these district school numbers to school names and state-level
(CDS) codes
(http://web.sfusd.edu/Services/research_public/rpadc_lib/SFUSD%20CDS%20Codes%20SchYr2012-13_(08-20-12).pdf).
We conducted exploratory data visualization to investigate trends in
preferences. We measure variation in preferences by race and CTIP1 priority
status in order to gain insight into whether and how participation varies across
groups differently impacted by structural oppression and historical exclusion
from high quality education. We present quantitative findings using
visualizations to include all students. When comparing summary statistics we
use the bootstrap method to estimate statistical significance: we use
percentile intervals to estimate confidence intervals and the bootstrapped
t-test to estimate p-values for differences in means, using 10,000 re-samples
and following (Efron and Tibshirani, 1993); groups (race and CTIP1) are
re-sampled independently. For this analysis we used third
grade standardized test results as a rough estimate of resources and
opportunities at each elementary school. We recognize that there are many ways
in which schools provide value to children that are not reflected in
standardized test results.
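The bootstrap procedure we rely on can be sketched as follows (a generic percentile-method implementation using only the standard library, not the code used in our analysis):

```python
import random

def bootstrap_ci(data, stat=lambda xs: sum(xs) / len(xs),
                 n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic.

    Draws `n_resamples` samples with replacement from `data`, computes
    `stat` on each resample, and returns the empirical (alpha/2,
    1 - alpha/2) quantiles of the resulting distribution.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    n = len(data)
    stats = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                   for _ in range(n_resamples))
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

A difference-in-means p-value can be obtained analogously by re-sampling each group independently and comparing the bootstrap distribution of the difference to zero.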
## 4. Student Assignment in San Francisco: Intended Values
In this section, we present our findings on the values that San Francisco
Unified School District (SFUSD) intended their student assignment system to
support. In the next section we analyze why this system did not realize those
values in practice.
SFUSD has been utilizing different choice-based systems to address educational
inequality in the district for almost forty years (San Francisco Unified
School District, 2015). Although the mechanism for assigning students to
schools has changed significantly over time, SFUSD has been consistent in
their values and goals for student assignment. Their current policy designates
three primary goals:
1. “Reverse the trend of racial isolation and the concentration of underserved students in the same school;
2. Provide equitable access to the range of opportunities offered to students; and
3. Provide transparency at every stage of the process.” (San Francisco Unified School District Office of Education, nd)
In addition, they emphasize the importance of efficiently utilizing limited
district resources, ensuring predictability and ease of use for families, and
creating robust enrollments at all schools.
Figure 2. The matching algorithm takes students’ preferences over schools and
schools’ pre-defined priority categories as inputs and outputs the most
efficient assignment of students to schools.
[A flow chart showing the inputs (student preferences and school priorities)
to the matching algorithm and its output (student-school assignments),
labelled with key properties: strategy-proofness and efficiency.]A flow chart
showing the inputs and outputs to the matching algorithm with key properties
labelled. The flow chart is identical to Figure 1. The first label to the right
of the preferences box reads, “Strategy-proofness: The optimal strategy is to
list your true preferences for schools.” The second label is to the right of
the matching algorithm box and it reads, “Efficiency: The assignments
efficiently satisfy students’ preferences. You can’t improve one student’s
outcome without making another student worse off.”
In SFUSD’s current assignment system (San Francisco Unified School District,
2015), students or their parents apply for schools by submitting their
preferences: a ranked list of schools they would like to attend (Figure 2). To
increase flexibility and access to opportunities, students can rank any school
in the district and there is no limit on the number of schools they can rank.
The district also defines priority categories. Elementary schools give top
priority to siblings of continuing students and then to underserved students.
Underserved students are defined as those living in neighborhoods with the
schools that have the lowest performance on standardized tests, known as CTIP1
areas. The matching algorithm then takes student preferences and school
priorities and produces the best possible assignments for the students,
subject to the schools’ priorities and capacity constraints. (SFUSD uses a
variant of the Top Trading Cycles algorithm (Shapley and Scarf, 1974); see
(Abdulkadiroğlu and Sönmez, 2003) for a technical analysis of Top Trading
Cycles in the student assignment context, or (Roth, 2015) for a more broadly
accessible introduction to market design.) Importantly, the resulting
assignments from this algorithm are guaranteed to efficiently satisfy student
preferences, not school priorities; school priorities are only used to
determine which students are
assigned to over-demanded seats. The matching algorithm is also strategy-
proof, meaning that it can be theoretically proven that families do not
benefit from manipulating their preferences to game the system.
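For readers unfamiliar with the mechanism, a minimal sketch of the basic Top Trading Cycles procedure looks like the following (a textbook version, not SFUSD's actual implementation; all names are ours, and every school's priority list is assumed to rank all students):

```python
def top_trading_cycles(prefs, priorities, capacity):
    """Top Trading Cycles for school assignment (textbook version).

    Each round, every remaining student points to their most-preferred
    school with seats left, and every such school points to its
    highest-priority remaining student; students on a cycle are assigned
    to the school they point to, consuming one seat per school.
    """
    capacity = dict(capacity)  # local copy; seats are consumed as we go
    match = {}
    students = set(prefs)
    while students:
        # Students whose acceptable schools are all full leave unassigned.
        stuck = {stu for stu in students
                 if not any(capacity.get(s, 0) > 0 for s in prefs[stu])}
        for stu in stuck:
            match[stu] = None
        students -= stuck
        if not students:
            break
        # Build the pointer graph: student -> (school, school's top student).
        points_to = {}
        for stu in students:
            school = next(s for s in prefs[stu] if capacity.get(s, 0) > 0)
            top_stu = next(x for x in priorities[school] if x in students)
            points_to[stu] = (school, top_stu)
        # Follow pointers from any student; a cycle must exist.
        stu = next(iter(points_to))
        seen = {}
        while stu not in seen:
            seen[stu] = len(seen)
            stu = points_to[stu][1]
        start = seen[stu]
        cycle = [s for s, i in sorted(seen.items(), key=lambda kv: kv[1])
                 if i >= start]
        # Assign everyone on the cycle and consume the traded seats.
        for s in cycle:
            school = points_to[s][0]
            match[s] = school
            capacity[school] -= 1
            students.discard(s)
    return match
```

Because each student on a cycle receives their top remaining choice, no student can gain by misreporting preferences, which is the intuition behind the strategy-proofness claim above.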
We consolidated the school district’s stated goals for student assignment into
four high-level values: (1) transparency, predictability and simplicity; (2)
equity and diversity; (3) quality schools; and (4) community and continuity
(Table 1). In this section, we described the system that was expected to
support these values. In the next section, we explore why these expectations
were not met in practice.
## 5. Algorithmic Breakdown: Values in Practice
In December 2018, the San Francisco Board of Education determined that the
algorithm was not working as intended (Haney et al., 2018). While the number
one stated goal of the algorithm was to “reverse the trend of racial isolation
and the concentration of underserved students in the same school,” the Board
found that segregation had increased since the algorithm was introduced and
there was widespread dissatisfaction amongst parents (San Francisco Unified
School District, 2019; San Francisco Unified School District Office of
Education, nd). The assignment algorithm had failed to respect the values that
it was designed to support and the Board voted to stop using it and to design
a new system. In this section we present our findings that help explain why.
For each of the district’s four high-level values for student assignment
(Table 1), we first review the theoretical properties and promises of the
algorithm related to that value: why would economists and district policy-
makers expect that the system would respect that value? Next, we analyze what
implicit modeling assumptions those expectations depend on. Finally, we
explain how families’ needs, constraints, and values in the real world clashed
with system designers’ assumptions about them, which prevented the algorithm
from meeting its theoretical promises and enacting the district’s values in
practice. (In this work we identify the school district’s values and draw on
families’ experiences to explain why they have not been supported. The
district’s values may not completely align with families’ values; we assume
that satisfying families is one of the district’s priorities, and we find
substantial overlap between the four district values and what parents in our
sample find important. We leave a detailed analysis of families’ values to
future work.)
Table 1. We consolidated the San Francisco Unified School District’s goals for
student assignment into four overarching values. Assignment algorithms have
theoretical properties aligned with these values. However, the San Francisco
assignment algorithm’s theoretical promises have not been realized because
they rely on modeling assumptions that clash with real world challenges.
Value | Promises and Properties | Modeling Assumptions | Real World Challenges
---|---|---|---
Transparency, predictability, and simplicity | Algorithm has a clearly defined procedure. Assignments are explainable. | The district provides accessible, clear information. Families want and understand explanations. Families do not try to game the system. | Finding and understanding information is difficult. Some parents try to game the system. There is a lack of trust: assignments are perceived as unpredictable and unfair.
Equity and diversity | Any student can apply to any school. Underserved students are given priority access. | All families participate equally and the all-choice system offers identical opportunities to all families. | Time, language and economic constraints create participation barriers for lower resourced families.
Quality schools | Competition for applicants will drive up the overall quality of schools in the district. | Families base their preferences on accurate estimates of school quality. Schools can respond to competitive pressures. | Competition is driven by social signalling and negative stereotypes. Underserved schools lack resources to attract applicants.
Community and continuity | Priority for siblings and students in the school’s attendance area. | Schools have sufficient capacity to handle demand from siblings and neighborhood children. | A lack of guaranteed access to local schools frustrates families living in neighborhoods with very popular schools.
[Table summarizing the main findings of the paper.]Table summarizing the main
findings of the paper. There is a header and followed by four rows of data,
one corresponding to each of the four high-level school district values for
student assignment. The first column contains the value. The second column
summarizes the promises and properties of the assignment algorithm that were
supposed to support the value. The third column summarizes the modeling
assumptions that these promises and properties depend on. The fourth column
summarizes the real world challenges that clash with the modeling assumptions.
### 5.1. Transparency, Predictability, and Simplicity
#### 5.1.1. Theoretical promises
Matching algorithms are clearly and explicitly defined procedures. This
differentiates them from assignment systems based on imprecise admissions
criteria, which have historically been more difficult to justify and have led
to legal disputes (Abdulkadiroğlu and Sönmez, 2003). If a student wants to
understand why they did not receive an assignment they were hoping for, the
algorithm’s decision can be explained. Matching algorithms are also provably
strategy-proof. That is, students cannot guarantee a more preferable
assignment by strategically misrepresenting their preferences. Strategic
behavior requires time and effort, so preventing strategic advantages is
critical not only for simplicity and efficiency, but also for ensuring that
all families can participate equally.
#### 5.1.2. Modeling assumptions: families will accept their assignment as
fair and legitimate as long as the algorithm’s logic is explained to them.
This assumes that the school district provides an accessible, comprehensible
explanation and that families would seek out, understand, and trust this
explanation. It also assumes that families have known preferences for schools
and recognize that they should report those preferences truthfully.
#### 5.1.3. Real world challenges
In practice, families find the assignment system difficult to navigate and
struggle to find relevant, clear, and consistent information. Some parents
engage in strategic behavior to try to improve their child’s assignment,
contrary to theoretical incentives. Rather than seeking and accepting an
explanation, families who are dissatisfied with their assignment seek to
change it. Families’ trust in the system is eroded by the lack of clear
information and the belief that some parents are able to game the system.
Parents face a significant information cost to understand the various
opportunities available across the city. There are 72 elementary school
programs in SFUSD (Haney et al., 2018). Parents indicated that researching
schools is a burdensome time-commitment. In-person school visits are a popular
source of information when forming preferences, but these visits are time-
intensive and logistically difficult.
> […I]t’s like a full time job doing all the school tours. (P7)
Online information is another widely used source, but school information is
not centralized, nor is it consistent across schools. A number of parents
mentioned the difficulty of navigating online district resources:
> […F]inding and gathering the information about the schools from the district
> is a mess. (P11)
None of the parents we interviewed felt that they had a clear understanding of
how the algorithm works. The algorithm is colloquially known to parents as
“the lottery.” Although the algorithm has only a small lottery aspect to break
ties between students with the same priority, many believe it is mostly or
entirely random.
> I’m not really that confident in their actual lottery system. It could be
> bingo in the background for all I know. (P4)
This leaves families feeling a lack of agency and control over their child’s
education.
> I mean, the word itself, lottery, most of it is random. I don’t feel like we
> can do anything at all. (P5)
Confused and frustrated by district resources, parents frequently seek advice
from other parents online and in-person. Reddit users sought and shared
complex strategies, sometimes relying on substantial independent research.
This is consistent with prior work showing that advice sharing in social
networks can encourage strategic behavior (Ding and Schotter, 2017, 2019).
Advice from other families is often conflicting and unclear, further
exacerbating confusion about the system.
> [W]e also got different advice from different parents. They’re very, very
> different from each other. Some people say, “Put in as many schools as
> possible,” and some people say, “No, just put two schools that you really
> wanted, and then you have a higher chance of getting those.” (P5)
The 2017 application data indicates that strategic behavior may be more
widespread amongst more privileged families. On average, families who were
eligible for the CTIP1 priority for underserved students ranked 5.5 schools in
their application (95% confidence interval (CI): 5.0–6.2 schools), while
families in other areas of the city ranked an average of 11.6 (95% CI:
11.2–12.1 schools; difference in means: p = 0.00) (Figure 3). 96% of families
eligible for CTIP1 priority were assigned their first choice, so this
difference may reflect these families’ confidence that they will get one of
their top choices. On the other hand, it may reflect disparities in access to
the time and resources needed to research schools and strategies. White
students submitted especially long preference lists (mean = 16.5; 95% CI:
15.6–17.6; differences in means between white students and Black, Asian or
Pacific Islander, and Hispanic students are highly statistically significant,
even with conservative adjustments for multiple hypothesis testing), a further
indication that strategic behavior is more popular with families with more
structural advantages.
Figure 3. Families who were eligible for priority for underserved students
ranked fewer schools on average (mean: 5.5; 95% CI: 5.0-6.2) than other
families in the city (mean: 11.6; 95% CI: 11.2-12.1; difference in means:
p=0.00). This may suggest that strategic behavior is more widespread amongst
higher resource families.
[A boxplot showing the distribution of application length by underserved
student priority status. The median for underserved students is 3 schools
(interquartile range: 5); for other students the median is 7 schools
(interquartile range: 8). Both distributions are skewed to the right, with
outliers with very long applications; this tail is longer and heavier for
advantaged students.]
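The interval estimates reported above can be reproduced with a percentile bootstrap. The sketch below uses synthetic application lengths rather than the district's data; the function name and defaults are illustrative.

```python
import random
import statistics

def bootstrap_mean_ci(data, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of `data`.

    Resamples `data` with replacement `n_boot` times and takes the
    alpha/2 and 1 - alpha/2 percentiles of the resampled means.
    """
    rng = random.Random(seed)
    n = len(data)
    means = sorted(
        statistics.fmean(rng.choices(data, k=n)) for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.fmean(data), (lo, hi)

# Synthetic example: 100 hypothetical application lengths.
lengths = [5] * 50 + [6] * 50
mean, (lo, hi) = bootstrap_mean_ci(lengths, n_boot=1000, seed=42)
```

A difference in means between two groups can be tested the same way, by bootstrapping the gap between group means and checking whether the resulting interval excludes zero.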
Receiving an unfavorable assignment was a major concern for families in our
sample. The district offers multiple rounds of the assignment algorithm, which
many parents participate in if they are dissatisfied with their child’s
assignment. However, this process can be long, uncertain, and frustrating.
Some parents received the same assignment every round with no further
explanation or assistance.
> […T]he first announcement that we got […], I actually wasn’t that upset. I
> said, “You know what, there’s more rounds. […] We could stick it out.” But I
> was really upset at the second one because there was literally no change.
> And that really had me questioning, “I’m just trying to play by the rules.
> Should I not trust this any more than it’s going to work out?” (P9)
Parents on Reddit recommended unofficial avenues for recourse, many of which
require substantial time and resources. These include going in person to the
enrollment office repeatedly to request reassignment, remaining on waiting
lists up to ten days into the school year, and opting out of the public school
system altogether.
Overall, a complicated algorithm together with a shortage of transparent and
accessible information has fostered distrust and frustration amongst parents
in the district. Distrust is fueled by perceptions that the system is random
and unscientific, and that it allows parents with more time and resources to
gain an unfair advantage.
> It’s definitely convoluted. It’s definitely multilayered, it’s complex. And
> that favors people who have the time and the wherewithal to figure it out.
> […T]he complexity invites accusations of [corruption] and does not inspire
> trust. (P9)
### 5.2. Diversity and Equity
#### 5.2.1. Theoretical promises
The assignment system is an all-choice system with unrestricted preference
lists, so any student can apply to any school in the district. Compared to a
neighborhood system, or even more restricted choice systems, this design has
the potential to enable more equitable access to educational opportunity. In
an effort to promote equitable access to education and diverse schools, SFUSD
has added the CTIP1 priority category, which gives priority admission at over-
demanded schools to students from neighborhoods with under-performing schools.
#### 5.2.2. Modeling assumptions: all families participate equally in the
system and the all-choice system offers identical opportunities to all
families.
CTIP1 students prefer to attend over-demanded schools if they can access them.
Applicant pools reflect the racial and socioeconomic diversity of the city.
#### 5.2.3. Real world challenges
Although an all-choice system offers greater flexibility than a neighborhood
system, our results show that families with fewer resources face significant
barriers to ideal participation in SFUSD’s choice system. Although families
can rank any school on their application, some families are not able to choose
the schools that offer the greatest opportunities. Preferences are segregated
by race and income, preventing the algorithm from creating diverse
assignments.
Our results indicate that the all-choice system does not offer identical
opportunities to all families. Every family can apply to any school, but that
does not mean that every family can actually access every school. For example,
transportation logistics can be a significant challenge. When choosing a
kindergarten for their child, P1 met with an education placement counselor at
SFUSD to understand the special education services offered across the
district. P1 recalled their response to one of the counselor’s suggestions:
> So, you are telling me this school is […] three blocks uphill and we’re
> supposed to do that with a kindergartner and no car? […] There’s no way that
> on my worst day that I would be able to drag my kindergartner with special
> needs uphill in the rain. (P1)
The CTIP1 priority is potentially a useful advantage for underserved students.
In 2017, 96% of students who were eligible for this priority were assigned
their first choice school, compared to 58% of students without this priority.
However, CTIP1 priority is only useful for advancing educational equity if
these students can actually use it to enroll in well-resourced schools. In
2017, students with CTIP1 priority enrolled in schools with lower academic
outcomes than other students (Figure 4). On average, underserved students
enrolled in a school where 45.0% of third graders met or exceeded expectations
in the English Language Arts/Literacy exams (qualitatively similar results
hold for the Mathematics exams) (95% CI: 43.2%–46.7%), compared to 57.2% (95%
CI: 56.5%–57.9%) of students at the
average school that other students enrolled in (difference in means: p =
0.00). This difference points to persisting inequities in access to higher
resource schools that priority assignment is insufficient to address. CTIP1
priority cannot, for example, help students access schools that are physically
inaccessible for them. Social factors may also influence choice patterns. For
instance, the African American Parent Advisory Council (AAPAC) has raised
concerns that Black students in San Francisco continue to face racism and
biases in racially diverse classrooms (African American Parent Advisory
Council, 2017).
These findings are consistent with prior work showing that while proximity and
academics are important to most families, more privileged parents tend to put
more emphasis on a school’s academic performance (Hastings et al., 2007;
Abdulkadiroglu et al., 2017; Burgess et al., 2015), while parents from low-
income or racialized backgrounds may be more likely to prioritize proximity
(Laverde, 2020) or representation of students from a similar background
(Hastings et al., 2007). As a result of differences in students’ preferences,
applicant pools at schools across the city are segregated by race and income.
This prevents the algorithm from creating diverse assignments (Haney et al.,
2018; Laverde, 2020).
Figure 4. The priority for underserved students helps those students access
educational opportunity, but there remain inequities that priority enrollment
cannot address. Students with priority enrolled in higher performing schools
(mean: 45.0% of students met or exceeded expectations on standardized tests;
95% CI: 43.2%–46.7%) than their average neighborhood school (mean: 31.6%).
However, they still enrolled in lower performing schools on average than
students who were not eligible for priority (mean: 57.2%; 95% CI: 56.5%–57.9%)
(difference in means: p = 0.00). Academic outcomes are measured as the
percentage of third grade students at the enrolled school who met or exceeded
expectations in the 2017-18 statewide assessments.
[A boxplot showing the distribution of standardized test performance at
schools where underserved students enrolled compared to other students. The
median for underserved students is 43.55% (interquartile range: 39.4%); for
other students the median is 62.65% (interquartile range: 28.5%). A vertical
reference line at 31.6%, labelled "Average at underserved schools," marks the
average across underserved students' neighborhood schools.]
### 5.3. Quality Schools
#### 5.3.1. Theoretical promises
System designers have suggested that choice systems indirectly improve school
quality. For instance, Pathak argues that matching mechanisms create
competition between schools, which pushes under-demanded schools to improve in
order to attract applicants and sustain their enrollment (Pathak, 2017). In
addition, Pathak points out that an algorithmic system based on student
preferences creates a useful source of demand data for the district to target
interventions or closures at underenrolled schools (Pathak, 2017).
#### 5.3.2. Modeling assumptions: a competitive market will drive up the
overall quality of offerings.
This assumes that demand is driven by accurate estimates of school quality.
#### 5.3.3. Real world challenges
Unfortunately, competition in SFUSD has not resulted in an improvement in
educational opportunities and outcomes across the district (Haney et al.,
2018). Our findings reveal that parents base their preferences on noisy
signals of school quality. Still, some students depend on under-demanded
schools and are harmed by under-enrollment and school closures.
Our results suggest that parents’ preferences are strongly shaped by social
learning and stereotypes. Many parents reported using other parents’ opinions
and experiences of schools to inform their preferences. Some feel that a few
schools are disproportionately regarded as the “best” schools in the city.
Parents on Reddit attested that many good schools are unfairly dismissed by
more advantaged parents, sometimes on the basis of thinly veiled racist and
classist stereotypes. Standardized test scores or aggregate scores like those
reported by greatschools.org are another popular source of information. Though
seemingly more objective, these measures are heavily correlated with resources
and demographics at schools (Barnum and LeMee, 2019), further exacerbating
preference segregation. In the presence of these types of competitive
pressures, well-resourced schools are heavily over-demanded while under-
resourced schools struggle to maintain robust enrollments (Haney et al.,
2018). SFUSD believes the algorithm has created “unhealthy competition”
between schools, resulting in schools ranging in size from 100 to nearly 700
students (San Francisco Unified School District, 2019).
While Pathak argues that choice patterns are useful in determining which
schools to close and which to support and expand (Pathak, 2017), this
overlooks the correlation between demand patterns and existing patterns of
inequality. Under-enrollment and school closures can seriously harm the
communities at those schools, which often serve predominantly poor students of
color (Griffith and Freedman, 2019a; Ewing, 2018). SFUSD has acknowledged the
need to more equitably distribute resources, but it can be politically
difficult to direct resources to schools with low demand and enrollment (San
Francisco Unified School District Office of Education, n.d.).
### 5.4. Community and Continuity
#### 5.4.1. Theoretical promises
SFUSD’s sibling and attendance area priority categories are designed to
encourage a sense of community and cohesion for families. In addition,
students attending PreK or Transitional Kindergarten in the attendance area
are given priority to ensure continuity for students.
#### 5.4.2. Modeling assumptions: schools have sufficient capacity to handle
demand from siblings and neighborhood children.
#### 5.4.3. Real world challenges
Many families are dissatisfied by a lack of access to their local schools. In
many neighborhoods there is a mismatch between demand for the attendance area
school and its capacity. In fact, current attendance area boundaries are drawn
such that some schools do not have the capacity to serve every student in the
attendance area (San Francisco Unified School District, 2015). As a result,
the attendance area priority does not provide an acceptable level of
predictability for those who want to enroll in their local school.
For parents living in neighborhoods with popular schools, access to their
attendance area school is far from guaranteed. One Reddit user expressed
frustration after they found out that they may not be able to enroll their
child in their local school. Due to their family’s circumstances, they feared
it would be impossible to get their child to a school further from home.
Parents in our sample value access to local schools for convenience and a
sense of community. Under the existing system, two children who live close to
each other may attend schools on opposite sides of the city. There are even
neighborhoods in San Francisco where students are enrolled across all 72
elementary school programs (Haney et al., 2018). Some parents felt that this
dispersion undermines the educational experience for children:
> [I]t is really important for our children to bond and build relationships in
> their community. And they really connect to their education and their
> educational environment very differently [when they do]. (P1)
By underestimating the mismatch between demand for neighborhood schools and
capacity at those schools, the assignment system has generated significant
dissatisfaction among parents who live near popular schools. These parents are
increasingly pushing for a return to a traditional neighborhood system.
However, this would restrict flexibility and access to educational
opportunities for many families across the city who use the system to enroll
their children in schools other than their neighborhood school. A district
analysis showed that 54% of kindergarten applicants did not list their
attendance area school anywhere in their preference list for the 2013-14
school year (San Francisco Unified School District, 2015). This is especially
true of underserved students: according to the 2017 application data, around
75% of students who received CTIP1 priority enrolled in an elementary school
outside of the CTIP1 census tracts (as defined for the 2014-15 school year:
https://archive.sfusd.edu/en/assets/sfusd-staff/enroll/files/Revising_CTIP1_for_2014_15_SY.pdf).
## 6\. Design Implications for Student Assignment
In the previous section we showed how incorrect or oversimplified modeling
assumptions have played a role in the breakdown of the student assignment
algorithm in San Francisco. In this section we draw on these findings to
present four design implications for student assignment systems: (1) provide
relevant and accessible information; (2) (re)align algorithmic objectives with
community goals in mind; (3) reconsider how stakeholders express their needs
and constraints; and (4) make appropriate, reliable avenues for recourse
available. We emphasize that student assignment is a complex, socio-political
problem and our results and recommendations are our first step to better
understanding it. In the future, we will continue this work focusing
explicitly on the needs of underserved students. In the next section we
discuss broader implications of this work for the design of algorithmic
systems.
### 6.1. Provide relevant and accessible information
When looking for a school for their child, parents need to find schools that
meet their needs, and then understand how to apply. Our research shows that
information acquisition is very difficult, which leaves families with a sense
of distrust and perceptions of randomness, unpredictability, and unfairness.
However, more information is not always better. Information about algorithmic
systems should be congruent with stakeholder needs and interests and should be
limited to the most relevant information in order to minimize cognitive load
(Dietz et al., 2003). In the student assignment setting, we found the most
salient information for families is information about the schools available to
them that best meet their needs. Relevant, accurate information should be easy
to find and navigate. San Francisco Unified School District has recognized
this need and has committed to making this information available in a variety
of languages (Haney et al., 2018). Further work is needed to understand what
kind of information about schools will be relevant and helpful without
exacerbating negative stereotyping and preference segregation.
Transparency about the algorithm itself may also reduce stress and
increase trust in the system, but only if this information is clear and useful
(Nissenbaum, 2011; Kulesza et al., 2013; Cheng et al., 2019). The algorithmic
information most relevant to parents in our sample is their chances of
receiving a particular assignment. This information is currently difficult to
find, in part because these probabilities depend on others’ preferences.
However, this information may reduce stress and increase predictability. One
concrete goal moving forward could be to ensure that information about schools
and admission probabilities are easily available.
### 6.2. (Re)Align algorithmic objectives with community goals in mind
SFUSD expected their assignment system to satisfy individual preferences and
promote community-level goals like equitable access to education and diverse
classrooms. However, the system has had limited success in promoting
educational equity, and racial and economic segregation has worsened since it
was introduced (Haney et al., 2018). One reason for this breakdown is that the
primary objective of matching algorithms is to efficiently satisfy students’
preferences, and in San Francisco students’ preferences are already heavily
segregated by race and income (Haney et al., 2018). This indicates a breakdown
between community goals and what the algorithm is optimizing for.
The focus on satisfying students’ preferences can also obscure other problems.
For example, if we look only at preference satisfaction, then underserved
students appear to have a strong advantage in the current system. 96% of
incoming kindergartners who were eligible for priority for underserved
students received their first choice school in 2017, compared to only 58% of
other students. However, underserved students continue to enroll in lower
resourced schools and an opportunity gap persists between underserved students
and others in the district. Due to the limitations of our sample, we cannot
conclusively explain the reasons for segregated and unequal preferences.
Nevertheless, these two challenges suggest that technical system designers
need to work closely with policy-makers and community members to ensure that
their algorithm’s objectives and evaluation metrics are aligned with higher-
level goals and values.
### 6.3. Reconsider how stakeholders express their needs and constraints
Another way to make progress towards community goals is to reconsider how
families express their values, needs, and constraints. Matching algorithms
model families as independent, self-interested agents with some inherent
preferences over schools. Schools are assumed to be merely “objects to be
‘consumed’ by the students” (Abdulkadiroğlu and Sönmez, 2003). However, our
findings highlight that preferences are based on limited information and are
strongly shaped by social context. Schools are also important communities for
children and their families. Researchers have found that matching algorithms
for group decision-making do not give participants the space to understand
each others’ concerns and arrive at compromises that might be natural in a
negotiation amongst humans (Lee and Baykal, 2017; Lee et al., 2017). One
avenue for future work is to develop alternative methods for eliciting
students’ preferences that better reflect their needs and allow for compromise
and community building. For example, families could submit their weighted
priorities over factors like language programs or proximity to their home. In
our interviews we found that parents already make these types of comparisons
frequently when researching schools. Such an approach might help shift
families’ focus from how high their assigned school was in their personal
ranked list to how their assigned school meets their needs and constraints and
contributes to progress towards community-level goals.
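The weighted-priority idea can be made concrete with a simple scoring rule: each family states importance weights over factors, and schools are ranked by a weighted sum of per-school factor scores. The factor names, the [0, 1] score scale, and the linear aggregation below are hypothetical illustrations, not a district proposal.

```python
def score_schools(weights, schools):
    """Rank schools by a weighted sum of normalized factor scores.

    weights: {factor: float} a family's stated importance weights
    schools: {school: {factor: float}} factor scores assumed in [0, 1]
    Returns school names sorted from best to worst fit.
    """
    total = sum(weights.values())
    # Normalize weights so they sum to 1, making scores comparable.
    norm = {f: w / total for f, w in weights.items()}
    return sorted(
        schools,
        key=lambda s: sum(norm[f] * schools[s].get(f, 0.0) for f in norm),
        reverse=True,
    )

# A family weighting proximity twice as heavily as language programs:
ranking = score_schools(
    {'proximity': 2.0, 'language_program': 1.0},
    {'A': {'proximity': 1.0, 'language_program': 0.0},
     'B': {'proximity': 0.0, 'language_program': 1.0}},
)
```

Such elicited rankings could then feed the existing matching machinery, while keeping the family's attention on needs and constraints rather than on a hand-built ordinal list.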
### 6.4. Make appropriate, reliable avenues for recourse available
Because there is limited space at popular schools, some students will receive
a disappointing assignment. There are multiple rounds of the algorithm for
students who wish to appeal their assignment. However, our results suggest
that this process can be frustrating and unpredictable. One concrete
recommendation is to improve communication with parents throughout the process
about their application status and their available next steps. Our findings
also suggest that privileged stakeholders will continue to seek unofficial
channels to achieve their goals. Therefore, future work developing fair
processes for recourse should prioritize the needs of lower resourced
stakeholders and design low cost appeals processes.
## 7\. Discussion and Future Work
In the previous section, we suggested ways to improve student assignment
algorithms to better support stakeholders’ values. In this section, we discuss
the implications of our work for algorithm design more broadly and identify
opportunities for future work.
This work presents an example of how incorrect assumptions can prevent a
system from supporting intended values in practice. Direct engagement with
stakeholders early on in the design process may help system designers identify
incorrect or oversimplified modeling assumptions. For example, economists
initially assumed that matching algorithms would be easy to explain to
families and that the procedure would be perceived as fair. A value sensitive
approach would have encouraged designers to engage with stakeholders early in
the development process to gauge their perceptions and acceptance of the
technology (Zhu et al., 2018). Economists may have discovered that
stakeholders’ acceptance of matching algorithms for student assignment would
depend heavily on social and political factors, such as pre-existing
institutional trust in the school district.
Even with improved methods to align algorithm design with stakeholders’
values, unanticipated challenges will arise because algorithmic systems must
rely on some abstractions and assumptions that will always be an imperfect
approximation of the real world (Box, 1979; Selbst et al., 2019). Crawford
analyzed sites of conflict between algorithms and humans, and has warned of
the danger of understanding algorithmic logics as autocratic (Crawford, 2016).
Instead, algorithmic systems should be accountable to community values beyond
the formal design process and stakeholders should have ongoing opportunities
to voice concerns, even after the system has been deployed (Zhu et al., 2018).
Future work is needed to design algorithmic systems that are adaptable and
flexible in response to this feedback.
In advocating for ongoing engagement with stakeholders, it is important to
grapple with differences in power and participation among them (Zhu et al.,
2018; Dietz et al., 2003). We need to design mechanisms for participation that
are equitable and low-cost for lower resource families to voice their concerns
(Harrington et al., 2019). In the student assignment setting, we found that
convenience sampling strongly skewed our sample of parents towards higher
resource parents with the time and motivation to voice their concerns. While
building a system that serves all stakeholders is ideal, trade-offs are
inevitable when systems impact a large number of stakeholders with diverse
perspectives and needs (Zhu et al., 2018; Dietz et al., 2003). Avenues for
participation should encourage deliberation of trade-offs and include
safeguards to prevent powerful stakeholders from compromising important
community values in order to design a system that better serves their own
interests.
Designing systems while taking into account stakeholders with conflicting
values and priorities will require a broader view of algorithmic performance.
The research literature on matching algorithms has typically emphasized
theoretical guarantees, such as whether assignments are efficient or stable. A
human-centered analysis of algorithmic performance would involve evaluating
the system in its real world context, along dimensions such as acceptance from
stakeholders and broader impacts (Zhu et al., 2018). This is in contrast to
typical practices in algorithmic fields such as machine learning, where
algorithms are developed and evaluated with respect to narrow, quantitative
metrics such as efficiency. A broader view of algorithmic performance may
identify challenges that are central to stakeholders’ experiences with the
system if not directly related to the algorithm’s design, such as the
difficulty of forming a preference list.
Finally, we cannot expect that every algorithmic system can support community
values if only the right design choices are made. Demand for a technology in
the first place is often closely tied to particular politics, which may
necessitate certain values and preclude others. For example, education
researcher Scott argues that modern school choice programs reflect a
neoliberal ideology focused on empowering parents as consumers of educational
opportunities for their child (Scott, 2013). Advocates claim that school
choice promotes educational equity by enabling underserved students to attend
a school other than their neighborhood school. Assignment algorithms can
support this approach to equity with technical features like priority
categories or quota systems. However, this is not the only approach to
educational equity. In fact, it offers limited benefits to those who do not
have the time or resources to exercise informed choice (Scott, 2011). A
redistributive principle, on the other hand, would prioritize providing
underserved students with educational opportunities in their own communities
and protecting local students’ access to those resources (African American
Parent Advisory Council, 2017). Assignment algorithms cannot effectively
support such an approach: increasing enrollment at under-demanded schools
using an algorithm would require violating some students’ preferences and may
be disruptive and harmful to the existing communities at those schools
(African American Parent Advisory Council, 2017; Griffith and Freedman,
2019b). Therefore, student assignment algorithms exist within and to uphold a
political ideology that privileges individual choice sometimes at the cost of
other values, such as democracy, resource equality, and desegregation (Scott,
2011). This example shows why it is important not only to consider how certain
design choices might support the values that stakeholders find salient, but
also what values a technology necessitates or precludes based on the implicit
politics of its existence. Value Sensitive Design does not provide an explicit
ethical theory to designate what kinds of values should be supported (Manders-
Huits, 2011; Borning and Muller, 2012). Therefore, in addition to an
understanding of implicit values and politics, our analysis must include a
commitment to justice (Costanza-Chock, 2018) and accept refusal as a
legitimate way of engaging with technology (Cifor et al., 2019).
## 8\. Conclusion
In this paper we conduct qualitative content analysis of parent experiences
and district policies, and quantitative analysis of elementary school
applications to understand why the student assignment system in place in San
Francisco Unified School District has not supported the district’s goals and
values. We identify four values that the system was intended to support: (1)
transparency, predictability and simplicity; (2) equity and diversity; (3)
quality schools; and (4) community and continuity. We identify how the
algorithm’s theoretical promises to uphold these values depend on assumptions
about how stakeholders behave and interact with the system, and explore the
ways in which these assumptions clash with the properties and constraints of
the real world. We discuss the implications of this work for algorithm design
that accounts for complex and possibly conflicting values and needs.
###### Acknowledgements.
We thank our study participants for sharing their experiences and insights. We
also thank members of the U.C. Berkeley Algorithmic Fairness and Opacity
Working Group (AFOG) and participants at the 4th Workshop on Mechanism Design
for Social Good (MD4SG) for helpful feedback on an earlier version of this
work. Finally, we thank the anonymous reviewers for their feedback and
suggestions.
## References
* Abdulkadiroglu et al. (2017) Atila Abdulkadiroglu, Parag A Pathak, Jonathan Schellenberg, and Christopher R Walters. 2017\. _Do Parents Value School Effectiveness?_ Working Paper 23912. National Bureau of Economic Research. https://doi.org/10.3386/w23912
* Abdulkadiroğlu and Sönmez (2003) Atila Abdulkadiroğlu and Tayfun Sönmez. 2003. School Choice: A Mechanism Design Approach. _American Economic Review_ 93, 3 (June 2003), 729–747. https://doi.org/10.1257/000282803322157061
* African American Parent Advisory Council (2017) African American Parent Advisory Council. 2017. _AAPAC Reflections on SFUSD’s Student Assignment Policy_. San Francisco Unified School District. Retrieved April 17, 2020 from https://archive.sfusd.edu/en/assets/sfusd-staff/enroll/files/AAPAC_Student_Assignment_Presentation_3.8.17.pdf?_ga=2.81786516.225077105.1586807928-1027271765.1579115407
* Ali et al. (2019) Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke. 2019. Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Biased Outcomes. _Proc. ACM Hum.-Comput. Interact._ 3, CSCW, Article 199 (Nov. 2019), 30 pages. https://doi.org/10.1145/3359301
* Ames (2016) Morgan G. Ames. 2016\. Learning consumption: Media, literacy, and the legacy of One Laptop per Child. _The Information Society_ 32, 2 (2016), 85–97. https://doi.org/10.1080/01972243.2016.1130497
* Barnum and LeMee (2019) Matt Barnum and Gabrielle LaMarr LeMee. 2019. _Looking for a home? You’ve seen GreatSchools ratings. Here’s how they nudge families toward schools with fewer black and Hispanic students._ Chalkbeat. https://www.chalkbeat.org/2019/12/5/21121858/looking-for-a-home-you-ve-seen-greatschools-ratings-here-s-how-they-nudge-families-toward-schools-wi
* Barry-Jester et al. (2015) Anna Maria Barry-Jester, Ben Casselman, and Dana Goldstein. 2015\. _The New Science of Sentencing_. The Marshall Project. https://www.themarshallproject.org/2015/08/04/the-new-science-of-sentencing
* Baumer (2017) Eric PS Baumer. 2017\. Toward human-centered algorithm design. _Big Data & Society_ 4, 2 (2017), 1–12. https://doi.org/10.1177/2053951717718854
* Bolukbasi et al. (2016) Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In _Proceedings of the 30th International Conference on Neural Information Processing Systems_ (Barcelona, Spain) _(NIPS’16)_. Curran Associates Inc., Red Hook, NY, USA, 4356–4364.
* Borning and Muller (2012) Alan Borning and Michael Muller. 2012. Next Steps for Value Sensitive Design. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Austin, Texas, USA) _(CHI ’12)_. Association for Computing Machinery, New York, NY, USA, 1125–1134. https://doi.org/10.1145/2207676.2208560
* Box (1979) G. E. P. Box. 1979\. Robustness in the Strategy of Scientific Model Building. In _Robustness in Statistics_ , Robert L. Launder and Graham N. Wilkinson (Eds.). Academic Press, Cambridge, MA, USA, 201 – 236. https://doi.org/10.1016/B978-0-12-438150-6.50018-2
* Brown et al. (2019) Anna Brown, Alexandra Chouldechova, Emily Putnam-Hornstein, Andrew Tobin, and Rhema Vaithianathan. 2019\. Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-Making in Child Welfare Services. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ (Glasgow, Scotland Uk) _(CHI ’19)_. Association for Computing Machinery, New York, NY, USA, Article 41, 12 pages. https://doi.org/10.1145/3290605.3300271
* Buolamwini and Gebru (2018) Joy Buolamwini and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In _Proceedings of the 1st Conference on Fairness, Accountability and Transparency_ _(Proceedings of Machine Learning Research)_ , Sorelle A. Friedler and Christo Wilson (Eds.), Vol. 81. PMLR, New York, NY, USA, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html
* Burgess et al. (2015) Simon Burgess, Ellen Greaves, Anna Vignoles, and Deborah Wilson. 2015. What Parents Want: School Preferences and School Choice. _The Economic Journal_ 125, 587 (2015), 1262–1289. https://doi.org/10.1111/ecoj.12153
* Chancellor et al. (2019) Stevie Chancellor, Eric P. S. Baumer, and Munmun De Choudhury. 2019\. Who is the ”Human” in Human-Centered Machine Learning: The Case of Predicting Mental Health from Social Media. _Proc. ACM Hum.-Comput. Interact._ 3, CSCW, Article 147 (Nov. 2019), 32 pages. https://doi.org/10.1145/3359249
* Charmaz (2014) Kathy Charmaz. 2014\. _Constructing grounded theory: A practical guide through qualitative research_ (2 ed.). SAGE Publications Ltd, London, United Kingdom.
* Chen and He (2020) Yan Chen and Yinghua He. 2020. _Information Acquisition and Provision in School Choice : An Experimental Study_. Working Paper. http://yanchen.people.si.umich.edu/papers/Chen_He_2020_09_Distribute.pdf
* Cheng et al. (2019) Hao-Fei Cheng, Ruotong Wang, Zheng Zhang, Fiona O’Connell, Terrance Gray, F. Maxwell Harper, and Haiyi Zhu. 2019. Explaining Decision-Making Algorithms through UI: Strategies to Help Non-Expert Stakeholders. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ (Glasgow, Scotland Uk) _(CHI ’19)_. Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300789
* Chouldechova (2017) Alexandra Chouldechova. 2017\. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. _Big Data_ 5, 2 (2017), 153–163. https://doi.org/10.1089/big.2016.0047 PMID: 28632438.
* Cifor et al. (2019) Marika Cifor, Patricia Garcia, TL Cowan, Jasmine Rault, Tonia Sutherland, Anita Say Chan, Jennifer Rode, Anna Lauren Hoffmann, Niloufar Salehi, and Lisa Nakamura. 2019\. Feminist data manifest-no. https://www.manifestno.com/
* Costanza-Chock (2018) Sasha Costanza-Chock. 2018\. Design Justice: Towards an Intersectional Feminist Framework for Design Theory and Practice. In _Proceedings of the Design Research Society_. Design Research Society, London, United Kingdom, 14. https://ssrn.com/abstract=3189696
* Crawford (2016) Kate Crawford. 2016\. Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. _Science, Technology, & Human Values_ 41, 1 (2016), 77–92. https://doi.org/10.1177/0162243915589635
* Davis and Nathan (2015) Janet Davis and Lisa P. Nathan. 2015. Value Sensitive Design: Applications, Adaptations, and Critiques. In _Handbook of Ethics, Values, and Technological Design_ , Jeroen van den Hoven, Pieter E. Vermaas, and Ibo van de Poel (Eds.). Springer, Dordrecht, Netherlands, 11–40.
* Dietz et al. (2003) Thomas Dietz, Elinor Ostrom, and Paul C. Stern. 2003\. The Struggle to Govern the Commons. _Science_ 302, 5652 (2003), 1907–1912. https://doi.org/10.1126/science.1091015
* Ding and Schotter (2017) Tingting Ding and Andrew Schotter. 2017. Matching and chatting: An experimental study of the impact of network communication on school-matching mechanisms. _Games and Economic Behavior_ 103 (2017), 94 – 115. https://doi.org/10.1016/j.geb.2016.02.004 John Nash Memorial.
* Ding and Schotter (2019) Tingting Ding and Andrew Schotter. 2019. Learning and Mechanism Design: An Experimental Test of School Matching Mechanisms with Intergenerational Advice. _The Economic Journal_ 129, 623 (05 2019), 2779–2804.
* Efron and Tibshirani (1993) Bradley Efron and Robert J. Tibshirani. 1993. _An Introduction to the Bootstrap_. Chapman & Hall/CRC, Boca Raton, FL, USA.
* Ewing (2018) Eve L. Ewing. 2018\. _Ghosts in the Schoolyard: Racism and School Closings on Chicago’s South Side_. University of Chicago Press, Chicago, IL, USA.
* Flanagan et al. (2005) Mary Flanagan, Daniel C. Howe, and Helen Nissenbaum. 2005\. Values at Play: Design Tradeoffs in Socially-Oriented Game Design. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Portland, Oregon, USA) _(CHI ’05)_. Association for Computing Machinery, New York, NY, USA, 751–760. https://doi.org/10.1145/1054972.1055076
* Friedman et al. (2017) Batya Friedman, David G. Hendry, and Alan Borning. 2017\. A Survey of Value Sensitive Design Methods. _Foundations and Trends in Human–Computer Interaction_ 11, 2 (2017), 63–125. https://doi.org/10.1561/1100000015
* Friedman and Jr. (2003) Batya Friedman and Peter H. Kahn Jr. 2003. Human Values, Ethics, and Design. In _The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications_ , Jacko JA Sears A (Ed.). L. Erlbaum Associates Inc., USA, 1177–1201.
* Friedman et al. (2006) B Friedman, PH Jr Kahn, and A Borning. 2006. Value Sensitive Design and Information Systems. In _Human-Computer Interaction in Management Information Systems: Foundations_ , Ben Shneiderman, Ping Zhang, and Dennis Galletta (Eds.). M. E. Sharpe, Inc., Armonk, NY, USA, 348–372.
* Gale and Shapley (1962) D. Gale and L. S. Shapley. 1962. College Admissions and the Stability of Marriage. _The American Mathematical Monthly_ 69, 2 (1 1962), 9–15.
* Ghassemi et al. (2020) Marzyeh Ghassemi, Tristan Naumann, Peter Schulam, Andrew L. Beam, Irene Ya-Ping Chen, and Rajesh Ranganath. 2020. A Review of Challenges and Opportunities in Machine Learning for Health.. In _AMIA Joint Summits on Translational Science_. American Medical Informatics Association, USA, 191–200.
* Glazerman and Dotter (2017) Steven Glazerman and Dallas Dotter. 2017. Market Signals: Evidence on the Determinants and Consequences of School Choice From a Citywide Lottery. _Educational Evaluation and Policy Analysis_ 39, 4 (2017), 593–619. https://doi.org/10.3102/0162373717702964
* Gonczarowski et al. (2019) Yannai A. Gonczarowski, Noam Nisan, Lior Kovalio, and Assaf Romm. 2019. Matching for the Israeli ”Mechinot” Gap-Year Programs: Handling Rich Diversity Requirements. In _Proceedings of the 2019 ACM Conference on Economics and Computation_ (Phoenix, AZ, USA) _(EC ’19)_. Association for Computing Machinery, New York, NY, USA, 321. https://doi.org/10.1145/3328526.3329620
* Griffith and Freedman (2019a) Mark Winston Griffith and Max Freedman. 2019a. School Colors (Episode 5: The Disappearing District). https://www.schoolcolorspodcast.com/episodes/episode-5-the-disappearing-district
* Griffith and Freedman (2019b) Mark Winston Griffith and Max Freedman. 2019b. School Colors (Episode 7: New Kids on the Block). https://www.schoolcolorspodcast.com/episodes/episode-7-new-kids-on-the-block
* Guillen and Hakimov (2017) Pablo Guillen and Rustamdjan Hakimov. 2017. Not quite the best response: truth-telling, strategy-proof matching, and the manipulation of others. _Experimental Economics_ 20, 3 (2017), 670–686.
* Guillen and Hakimov (2018) Pablo Guillen and Rustamdjan Hakimov. 2018. The effectiveness of top-down advice in strategy-proof mechanisms: A field experiment. _European Economic Review_ 101, C (2018), 505–511.
* Guillen and Hing (2014) Pablo Guillen and Alexander Hing. 2014. Lying through their teeth: Third party advice and truth telling in a strategy proof mechanism. _European Economic Review_ 70 (2014), 178 – 185. https://doi.org/10.1016/j.euroecorev.2014.05.002
* Hafalir et al. (2013) Isa E. Hafalir, M. Bumin Yenmez, and Muhammed A. Yildirim. 2013\. Effective affirmative action in school choice. _Theoretical Economics_ 8, 2 (2013), 325–363. https://doi.org/10.3982/TE1135
* Haney et al. (2018) Matt Haney, Stevon Cook, and Rachel Norton. 2018. _Developing a Community Based Student Assignment System for SFUSD_. San Francisco Unified School District. Retrieved April 13, 2020 from https://archive.sfusd.edu/en/assets/sfusd-staff/enroll/files/2019-20/Student%20Assignment%20Proposal%20189-25A1.pdf
* Harrington et al. (2019) Christina Harrington, Sheena Erete, and Anne Marie Piper. 2019\. Deconstructing Community-Based Collaborative Design: Towards More Equitable Participatory Design Engagements. _Proc. ACM Hum.-Comput. Interact._ 3, CSCW, Article 216 (Nov. 2019), 25 pages. https://doi.org/10.1145/3359318
* Hassidim et al. (2016) Avinatan Hassidim, Assaf Romm, and Ran I. Shorrer. 2016\. “Strategic” Behavior in a Strategy-Proof Environment. In _Proceedings of the 2016 ACM Conference on Economics and Computation_ (Maastricht, The Netherlands) _(EC ’16)_. Association for Computing Machinery, New York, NY, USA, 763–764. https://doi.org/10.1145/2940716.2940751
* Hastings et al. (2007) Justine S. Hastings, Thomas J. Kane, and Douglas O. Staiger. 2007. _Heterogeneous Preferences and the Efficacy of Public School Choice_. Working Paper 12145. National Bureau of Economic Research. https://doi.org/10.3386/w12145
* Hermstrüwer (2019) Yoan Hermstrüwer. 2019\. _Transparency and Fairness in School Choice Mechanisms_. Technical Report. Max Planck Institute for Research on Collective Goods.
* Hitzig (2020) Zoë Hitzig. 2020\. The normative gap: Mechanism design and ideal theories of justice. _Economics and Philosophy_ 36, 3 (2020), 407–434. https://doi.org/10.1017/S0266267119000270
* Kapor et al. (2020) Adam J. Kapor, Christopher A. Neilson, and Seth D. Zimmerman. 2020. Heterogeneous Beliefs and School Choice Mechanisms. _American Economic Review_ 110, 5 (May 2020), 1274–1315. https://doi.org/10.1257/aer.20170129
* Kasman and Valant (2019) Matt Kasman and Jon Valant. 2019. _The opportunities and risks of K-12 student placement algorithms_. The Brookings Institute. https://www.brookings.edu/research/the-opportunities-and-risks-of-k-12-student-placement-algorithms/
* Koren (2015) James Rufus Koren. 2015\. _Some lenders are judging you on much more than finances_. Los Angeles Times. https://www.latimes.com/business/la-fi-new-credit-score-20151220-story.html
* Kulesza et al. (2013) T. Kulesza, S. Stumpf, M. Burnett, S. Yang, I. Kwan, and W. Wong. 2013\. Too much, too little, or just right? Ways explanations impact end users’ mental models. In _2013 IEEE Symposium on Visual Languages and Human Centric Computing_. IEEE, San Jose, CA, USA, 3–10. https://doi.org/10.1109/VLHCC.2013.6645235
* Laverde (2020) Mariana Laverde. 2020\. _Unequal Assignments to Public Schools and the Limits of School Choice_. Working Paper. https://drive.google.com/file/d/19HJFGmaf2HA56k3rNBgB_at4aSkG3Bpz/view?usp=sharing
* Le Dantec et al. (2009) Christopher A. Le Dantec, Erika Shehan Poole, and Susan P. Wyche. 2009. Values as Lived Experience: Evolving Value Sensitive Design in Support of Value Discovery. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Boston, MA, USA) _(CHI ’09)_. Association for Computing Machinery, New York, NY, USA, 1141–1150. https://doi.org/10.1145/1518701.1518875
* Lee and Baykal (2017) Min Kyung Lee and Su Baykal. 2017. Algorithmic Mediation in Group Decisions: Fairness Perceptions of Algorithmically Mediated vs. Discussion-Based Social Division. In _Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing_ (Portland, Oregon, USA) _(CSCW ’17)_. Association for Computing Machinery, New York, NY, USA, 1035–1048. https://doi.org/10.1145/2998181.2998230
* Lee et al. (2019) Min Kyung Lee, Anuraag Jain, Hea Jin Cha, Shashank Ojha, and Daniel Kusbit. 2019. Procedural Justice in Algorithmic Fairness: Leveraging Transparency and Outcome Control for Fair Algorithmic Mediation. _Proc. ACM Hum.-Comput. Interact._ 3, CSCW, Article 182 (Nov. 2019), 26 pages. https://doi.org/10.1145/3359284
* Lee et al. (2017) Min Kyung Lee, Ji Tae Kim, and Leah Lizarondo. 2017\. A Human-Centered Approach to Algorithmic Services: Considerations for Fair and Motivating Smart Community Service Management That Allocates Donations to Non-Profit Organizations. In _Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems_ (Denver, Colorado, USA) _(CHI ’17)_. Association for Computing Machinery, New York, NY, USA, 3365–3376. https://doi.org/10.1145/3025453.3025884
* Manders-Huits (2011) Noëmi Manders-Huits. 2011\. What Values in Design? The Challenge of Incorporating Moral Values into Design. _Science and Engineering Ethics_ 17, 2 (June 2011), 271–287. https://doi.org/10.1007/s11948-010-9198-2
* Merriam and Associates (2002) Sharan B Merriam and Associates. 2002. Introduction to qualitative research. In _Qualitative research in practice: Examples for discussion and analysis_. Jossey-Bass, Hoboken, NJ, USA, 1–17.
* Nguyen and Vohra (2019) Thành Nguyen and Rakesh Vohra. 2019. Stable Matching with Proportionality Constraints. _Operations Research_ 67, 6 (2019), 1503–1519. https://doi.org/10.1287/opre.2019.1909
* Nissenbaum (2011) Helen Nissenbaum. 2011\. A Contextual Approach to Privacy Online. _Daedalus_ 140, 4 (2011), 32–48.
* Noble (2018) Safiya Umoja Noble. 2018\. _Algorithms of Oppression: How Search Engines Reinforce Racism_. New York University Press, New York, NY, USA.
* Obermeyer et al. (2019) Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. _Science_ 366, 6464 (2019), 447–453. https://doi.org/10.1126/science.aax2342
* Oosterbeek et al. (2019) Hessel Oosterbeek, Sándor Sóvágó, and Bas Klaauw. 2019\. _Why are Schools Segregated? Evidence from the Secondary-School Match in Amsterdam_. Discussion Paper DP13462. Centre for Economic Policy Research. https://ssrn.com/abstract=3319783
* Pais and Pintér (2008) Joana Pais and Ágnes Pintér. 2008. School choice and information: An experimental study on matching mechanisms. _Games and Economic Behavior_ 64, 1 (2008), 303–328.
* Pathak (2017) Parag A. Pathak. 2017\. What Really Matters in Designing School Choice Mechanisms. In _Advances in Economics and Econometrics: Eleventh World Congress_ , Bo Honoré, Ariel Pakes, Monika Piazzesi, and Larry Samuelson (Eds.). Econometric Society Monographs, Vol. 1. Cambridge University Press, Cambridge, MA, USA, 176–214. https://doi.org/10.1017/9781108227162.006
* Pickoff-White (2018) Lisa Pickoff-White. 2018\. _S.F.’s Kindergarten Lottery: Do Parents’ Tricks Work?_ KQED. https://www.kqed.org/news/11641019/s-f-s-kindergarten-lottery-do-parents-tricks-work
* Rees-Jones and Skowronek (2018) Alex Rees-Jones and Samuel Skowronek. 2018. An experimental investigation of preference misrepresentation in the residency match. _Proceedings of the National Academy of Sciences_ 115, 45 (2018), 11471–11476. https://doi.org/10.1073/pnas.1803212115
* Roth (2015) Alvin E. Roth. 2015\. _Who Gets What And Why: The Hidden World of Matchmaking and Market Design_. Houghton Mifflin Harcourt Publishing Company, New York, NY, USA.
* San Francisco Unified School District (2015) San Francisco Unified School District. 2015\. _Student Assignment: 4th Annual Report: 2014-15 School Year_. San Francisco Unified School District. Retrieved April 13, 2020 from https://archive.sfusd.edu/en/assets/sfusd-staff/enroll/files/2015-16/4th-annual-report-april-8-2015.pdf
* San Francisco Unified School District (2019) San Francisco Unified School District. 2019\. _Why We’re Redesigning Student Assignment_. San Francisco Unified School District. Retrieved April 13, 2020 from https://www.sfusd.edu/studentassignment/why-were-redesigning-student-assignment
* San Francisco Unified School District Office of Education (nd) San Francisco Unified School District Office of Education. n.d.. _Board Policy 5101: Student Assignment_. San Francisco Unified School District. Retrieved April 27, 2020 from https://go.boarddocs.com/ca/sfusd/Board.nsf/goto?open&id=B55QMC657423
* Saxena et al. (2020) Devansh Saxena, Karla Badillo-Urquiola, Pamela J. Wisniewski, and Shion Guha. 2020. A Human-Centered Review of Algorithms Used within the U.S. Child Welfare System. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ (Honolulu, HI, USA) _(CHI ’20)_. Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3313831.3376229
* Scott (2013) Janelle Scott. 2013\. School Choice and the Empowerment Imperative. _Peabody Journal of Education_ 88, 1 (2013), 60–73. https://doi.org/10.1080/0161956X.2013.752635 arXiv:https://doi.org/10.1080/0161956X.2013.752635
* Scott (2011) Janelle T. Scott. 2011\. Market-Driven Education Reform and the Racial Politics of Advocacy. _Peabody Journal of Education_ 86, 5 (2011), 580–599. https://doi.org/10.1080/0161956X.2011.616445
* Selbst et al. (2019) Andrew D. Selbst, danah boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019\. Fairness and Abstraction in Sociotechnical Systems. In _Proceedings of the Conference on Fairness, Accountability, and Transparency_ (Atlanta, GA, USA) _(FAT* ’19)_. Association for Computing Machinery, New York, NY, USA, 59–68. https://doi.org/10.1145/3287560.3287598
* Shapley and Scarf (1974) Lloyd Shapley and Herbert Scarf. 1974. On cores and indivisibility. _Journal of Mathematical Economics_ 1, 1 (1974), 23 – 37. https://doi.org/10.1016/0304-4068(74)90033-0
* Shilton (2013) Katie Shilton. 2013\. Values Levers: Building Ethics into Design. _Science, Technology, & Human Values_ 38, 3 (2013), 374–397. https://doi.org/10.1177/0162243912436985
* Smith et al. (2020) C. Estelle Smith, Bowen Yu, Anjali Srivastava, Aaron Halfaker, Loren Terveen, and Haiyi Zhu. 2020\. Keeping Community in the Loop: Understanding Wikipedia Stakeholder Values for Machine Learning-Based Systems. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ (Honolulu, HI, USA) _(CHI ’20)_. Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376783
* Sweeney (2013) Latanya Sweeney. 2013\. Discrimination in Online Ad Delivery. _Commun. ACM_ 56, 5 (May 2013), 44–54. https://doi.org/10.1145/2447976.2447990
* Voida et al. (2014) Amy Voida, Lynn Dombrowski, Gillian R. Hayes, and Melissa Mazmanian. 2014. Shared Values/Conflicting Logics: Working around e-Government Systems. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Toronto, Ontario, Canada) _(CHI ’14)_. Association for Computing Machinery, New York, NY, USA, 3583–3592. https://doi.org/10.1145/2556288.2556971
* Zhu et al. (2018) Haiyi Zhu, Bowen Yu, Aaron Halfaker, and Loren Terveen. 2018\. Value-Sensitive Algorithm Design: Method, Case Study, and Lessons. _Proc. ACM Hum.-Comput. Interact._ 2, CSCW, Article 194 (Nov. 2018), 23 pages. https://doi.org/10.1145/3274463
|
# Meta-Learning for Effective Multi-task and Multilingual Modelling
Ishan Tarunesh1 Sushil Khyalia1 Vishwajeet Kumar2
Ganesh Ramakrishnan1 Preethi Jyothi1
1 Indian Institute of Technology Bombay
2 IBM India Research Lab
{ishan, sushil, ganesh<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
Natural language processing (NLP) tasks (e.g. question-answering in English)
benefit from knowledge of other tasks (e.g., named entity recognition in
English) and knowledge of other languages (e.g., question-answering in
Spanish). Such shared representations are typically learned in isolation,
either across tasks or across languages. In this work, we propose a meta-
learning approach to learn the interactions between both tasks and languages.
We also investigate the role of different sampling strategies used during
meta-learning. We present experiments on five different tasks and six
different languages from the XTREME multilingual benchmark dataset Hu et al.
(2020). Our meta-learned model clearly improves in performance compared to
competitive baseline models that also include multi-task baselines. We also
present zero-shot evaluations on unseen target languages to demonstrate the
utility of our proposed model.
## 1 Introduction
Multi-task and multilingual learning are both problems of long-standing
interest in natural language processing. Leveraging data from multiple tasks
and/or additional languages to benefit a target task is of great appeal,
especially when the target task has limited resources. When it comes to
multiple tasks, it is well-known from prior work on multi-task learning Liu et
al. (2019b); Kendall et al. (2018); Liu et al. (2019a); Yang and Hospedales
(2017) that jointly learning a model across tasks can benefit the tasks
mutually. For multiple languages, the ability of deep learning models to learn
effective embeddings has led to their use for joint learning of models across
languages Conneau et al. (2020); Conneau and Lample (2019); Artetxe and
Schwenk (2019); learning cross-lingual embeddings to aid languages in limited
resource settings is of growing interest Kumar et al. (2019); Wang et al.
(2017); Adams et al. (2017). Suppose we had access to $M$ tasks across $N$
different languages (cf. Table 1, which outlines such a matrix of tasks and
languages from the XTREME benchmark Hu et al. (2020)). How do we perform
effective joint learning across tasks and languages? Are there specific tasks
or languages that need to be sampled more frequently for effective joint
training? Can such sampling strategies be learned from the data?
In this work, we adopt a meta-learning approach for efficiently learning
parameters in a shared parameter space across multiple tasks and multiple
languages. Our chosen tasks are question answering (QA), natural language
inference (NLI), paraphrase identification (PA), part-of-speech tagging (POS)
and named entity recognition (NER). The tasks were chosen to enable us to
employ a gamut of different types of language representations needed to tackle
problems in NLP. In Figure 1, we illustrate the different types of
representations by drawing inspiration from the Vauquois Triangle Vauquois
(1968), well-known for machine translation, and situating our chosen tasks
within such a triangle. Here we see that POS and NER are relatively
‘shallower’ analysis tasks that are token-centric, while QA, NLI and PA are
‘deeper’ analysis tasks that would require deeper semantic representations.
This representation suggests a strategy for effective parameter sharing. For
the deeper tasks, the same task in different languages could have
representations that are closer and hence benefit each other, while for the
shallower tasks, keeping the language unchanged and exploring different tasks
might be more beneficial. Interestingly, this is exactly what we find with our
meta-learned model and is borne out in our experimental results. We also find
that as the model progressively learns, the meta-learning based models for the
tasks requiring deeper semantic analysis benefit more from joint learning
compared to the shallower tasks.
Figure 1: Illustration derived from Vauquois Triangle to linguistically
motivate our setting. POS and NER being lower down in the representations (and
are thus ‘shallower’) are further away from the same task in another language.
QA, XNLI and PAWS being higher up in the representations (and are thus
‘deeper’) are closer to the same task in another language.
With access to multiple tasks and languages during training, the question of
how to sample effectively from different tasks and languages also becomes
important to consider. We investigate different sampling strategies, including
a parameterized sampling strategy, to assess the influence of sampling across
tasks and languages on our meta-learned model.
Our main contributions in this work are three-fold:
* •
We present a meta-learning approach that enables effective sharing of
parameters across multiple tasks and multiple languages. This is the first
work, to our knowledge, to explore the interplay between multiple tasks at
different levels of abstraction and multiple languages using meta-learning. We
show results on the recently-released XTREME benchmark and observe consistent
improvements across different tasks and languages using our model. We also
offer rules of thumb for effective meta-learning that could hold in larger
settings involving additional tasks and languages.
* •
We investigate different sampling strategies that can be incorporated within
our meta-learning approach and examine their benefits.
* •
We evaluate our meta-learned model in zero-shot settings for every task on
target languages that never appear during training and show its superiority
compared to competitive zero-shot baselines.
## 2 Related Work
We summarize three threads of related research that look at the
transferability in models across different tasks and different languages:
multi-task learning, meta-learning and data sampling strategies for both
multi-task learning and meta-learning. Multi-task learning Caruana (1993) has
proven to be highly effective for transfer learning in a variety of NLP
applications such as question answering, neural machine translation, etc.
McCann et al. (2018); Hashimoto et al. (2017); Chen et al. (2018); Kiperwasser
and Ballesteros (2018). Some multi-task learning approaches Jawanpuria et al.
(2015) have attempted to identify clusters (or groups) of related tasks based
on end-to-end convex optimization formulations. Meta-learning algorithms
Nichol et al. (2018) are highly effective for fast adaptation and have
recently been shown to be beneficial for several machine learning tasks
Santoro et al. (2016); Finn et al. (2017). Gu et al. (2018) use a meta-
learning algorithm for machine translation to leverage information from high-
resource languages. Dou et al. (2019) investigate multiple model agnostic
meta-learning algorithms for low-resource natural language understanding on
the GLUE Wang et al. (2018) benchmark.
Data sampling strategies for multi-task learning and meta-learning form the
third thread of related work. A good sampling strategy has to account for the
imbalance in dataset sizes across tasks/languages and the similarity between
tasks/languages. A simple heuristic-based solution to address the issue of
data imbalance is to assign more weight to low-resource tasks or languages
Aharoni et al. (2019). Arivazhagan et al. (2019) define a temperature
parameter which controls how often one samples from low-resource
tasks/languages. The MultiDDS algorithm, proposed by Wang et al. (2020b),
actively learns a different set of parameters for sampling batches given a set
of tasks such that the performance on a held-out set is maximized. We use a
variant of MultiDDS as a sampling strategy in our meta-learned model.
The work of Nooralahzadeh et al. (2020) is most similar in spirit to ours, in that they
study a cross-lingual and cross-task meta-learning architecture but only focus
on zero-shot and few-shot transfer for two natural language understanding
tasks, NLI and QA. In contrast, we study many tasks in many languages, in
conjunction with sampling strategies, and offer concrete insights on how best
to guide the meta-learning process when multiple tasks are in the picture.
## 3 Methodology
Our setting is pivoted on a grid of tasks and languages (with some missing
entries as shown in Table 1). Each row of the grid corresponds to a single
task. A cell of the grid corresponds to a Task-Language pair which we refer to
as a TL pair (TLP). We denote by
$q_{i}=|\mathcal{D}_{train}^{i}|/\big{(}\sum_{k=1}^{n}|\mathcal{D}_{train}^{k}|\big{)}$,
the fraction of the dataset size for the $i^{th}$ TLP and by
$P_{\mathcal{D}}(i)$, the probability of sampling a batch from the $i^{th}$
TLP during meta training. The distribution over all TLPs is thus a
Multinomial (say $\mathcal{M}$) with probabilities $P_{\mathcal{D}}(i)$.
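To make the notation concrete, the fractions $q_{i}$ and a draw from the Multinomial $\mathcal{M}$ can be sketched in a few lines of NumPy. The dataset sizes below are made-up placeholders (the actual XTREME sizes differ), and the temperature-based form of $P_{\mathcal{D}}(i)$ anticipates the heuristic of Section 3.2.2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training-set sizes for four task-language pairs (TLPs);
# these numbers are placeholders, not the actual XTREME dataset sizes.
train_sizes = np.array([90_000, 40_000, 15_000, 5_000], dtype=float)

# q_i = |D_train^i| / sum_k |D_train^k|  (fraction of total data per TLP)
q = train_sizes / train_sizes.sum()

def sampling_probs(q, tau):
    """Temperature-based P_D(i) = q_i^(1/tau) / sum_k q_k^(1/tau)."""
    w = q ** (1.0 / tau)
    return w / w.sum()

p = sampling_probs(q, tau=5.0)   # tau=1 -> proportional; tau -> inf -> uniform
tlp = rng.choice(len(q), p=p)    # draw one TLP index from the Multinomial M
```

With $\tau=1$ this recovers sampling proportional to dataset size, while a large $\tau$ flattens the distribution toward uniform.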
### 3.1 Our Meta-learning Approach
The goal in the standard meta learning setting is to obtain a model that
generalizes well to new test/target tasks given some distribution over
training tasks. This can be achieved using optimization-based meta-learning
algorithms that modify the learning procedure in order to learn a good
initialization of the parameters. This can serve as a useful starting point
that can be further fine-tuned on various tasks. Finn et al. (2017) proposed a
general optimization algorithm called Model Agnostic Meta Learning (MAML) that
can be trained using gradient descent. MAML aims to minimize the following
objective
$\min_{\theta}\sum_{T_{i}\sim\mathcal{M}}\mathcal{L}_{i}\left(U^{k}_{i}(\theta)\right)$
(1)
where $\mathcal{M}$ is the Multinomial distribution over TLPs,
$\mathcal{L}_{i}$ is the loss, and $U^{k}_{i}$ is a function that returns $\theta$
after $k$ gradient updates, both computed on batches sampled from $T_{i}$.
Minimizing this objective using first order methods involves computing
gradients of the form $\frac{\partial}{\partial\theta}U^{k}_{i}(\theta)$,
leading to the expensive computation of second order derivatives. Nichol et
al. (2018) proposed an alternative first-order meta-learning algorithm named
“Reptile” with the simple update rule:
$\theta\leftarrow\theta+\beta\frac{1}{|\\{T_{i}\\}|}\sum_{T_{i}\sim\mathcal{M}}(\theta^{(k)}_{i}-\theta)$
(2)
where $\theta_{i}^{(k)}$ is $U^{k}_{i}(\theta)$. Despite its simplicity, a
recent study by Dou et al. (2019) showed that Reptile is at least as effective
as MAML in terms of performance. We therefore employed Reptile for meta
learning in all our experiments.
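As an illustration of the Reptile update in Eq. (2), the following is a minimal NumPy sketch on a toy quadratic loss per TLP, not our actual model or training code; the "tasks", learning rates, and loop counts are placeholder choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_update(theta, task, k=3, lr=0.1):
    """k inner gradient steps U_i^k(theta) on a toy quadratic loss
    L_i(theta) = 0.5 * ||theta - c_i||^2, whose gradient is (theta - c_i)."""
    for _ in range(k):
        theta = theta - lr * (theta - task)
    return theta

def reptile_step(theta, tasks, probs, m=4, beta=0.5, k=3):
    """One Reptile meta-update (Eq. 2): move theta toward the average
    of the task-adapted parameters theta_i^(k)."""
    idx = rng.choice(len(tasks), size=m, p=probs)  # sample m TLPs from M
    deltas = [inner_update(theta, tasks[i], k) - theta for i in idx]
    return theta + beta * np.mean(deltas, axis=0)

# Toy setting: each "TLP" i is summarized by an optimum c_i in R^2.
tasks = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
probs = np.full(3, 1 / 3)
theta = np.zeros(2)
for _ in range(200):
    theta = reptile_step(theta, tasks, probs)
# theta drifts toward the mean of the task optima: a shared initialization
# from which each individual task is quickly reachable.
```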
Input: ${\mathcal{D}}_{train}$ set of TLPs for meta training (Also
${\mathcal{D}}_{dev}$ for parametrised sampling)
Sampling Strategy (Temperature / MultiDDS)
Output: The converged multi-task multilingual model parameters $\theta^{*}$
Algorithm 1 Our Meta-learning Approach
1: Initialize $P_{D}(i)$ depending on the sampling strategy
2: while not converged do
3: $\triangleright$ Perform Reptile Updates
4: Sample $m$ TLPs $T_{1},T_{2},\dots,T_{m}$ from $\mathcal{M}$
5: for i = 1,2,…,m do
6: $\theta_{i}^{(k)}\leftarrow U_{i}^{k}(\theta),$ denoting $k$ gradient
updates from $\theta$ on batches of TLP $T_{i}$
7: end for
8:
$\theta\leftarrow\theta+\frac{\beta}{m}\sum_{i=1}^{m}(\theta_{i}^{(k)}-\theta)$
9: if Sampling Strategy = MultiDDS then
10: for $\mathcal{D}_{train}^{i}$ $\in$ ${\mathcal{D}}_{train}$ do
11: $R(i;\theta)\leftarrow \cos(g_{dev},g_{train})$, where $g_{dev}$ is the gradient on
$\mathcal{D}_{dev}$ and $g_{train}$ is the gradient on
$\mathcal{D}_{train}^{i}$
12: end for
13: $\triangleright$ Update Sampling Probabilities
14:
$d_{\psi}\leftarrow\sum_{i=1}^{n}R(i;\theta)\cdot\nabla_{\psi}\log(P_{\mathcal{D}}(i;\psi))$
15: $\psi\leftarrow$ GradientUpdate$(\psi,d_{\psi})$
16: end if
17: end while
### 3.2 Selection and Sampling Strategies
#### 3.2.1 Selection
The choice of TLPs in meta-learning plays a vital role in influencing the
model performance, as we will see in more detail in Section 5. Apart from the
use of all TLPs across both tasks and languages during training, selecting all
languages for a given task Gu et al. (2018) and selecting all tasks for a
given language Dou et al. (2019) are two other logical choices. We refer to
the last two settings as being Task-Limited and Lang-Limited, respectively.
#### 3.2.2 Heuristic Sampling
Once the TLPs for meta-training (denoted by $\mathcal{D}$) have been selected,
we need to sample TLPs from $\mathcal{M}$. We investigate temperature-based
heuristic sampling Arivazhagan et al. (2019), which defines the probability of
any dataset as a function of its size:
$P_{\mathcal{D}}(i)=q_{i}^{1/\tau}\big/\sum_{k=1}^{n}q_{k}^{1/\tau}$, where
$P_{\mathcal{D}}(i)$ is the probability of the $i^{th}$ TLP being sampled and
$\tau$ is the temperature parameter. $\tau=1$ reduces to sampling TLPs
proportional to their dataset sizes, and $\tau\rightarrow\infty$ reduces to
sampling TLPs uniformly.
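As a concrete sketch of the formula above (assuming, as in Arivazhagan et al. (2019), that $q_i$ is the $i$-th dataset's share of the total training data; the function name is ours for illustration):

```python
def temperature_sampling_probs(sizes, tau):
    """P(i) = q_i^(1/tau) / sum_k q_k^(1/tau), with q_i the size proportion.

    tau = 1 samples proportional to dataset size; large tau approaches uniform.
    """
    total = float(sum(sizes))
    q = [s / total for s in sizes]          # dataset-size proportions q_i
    w = [qi ** (1.0 / tau) for qi in q]     # temperature-scaled weights
    z = sum(w)
    return [wi / z for wi in w]

# Three TLPs with very different sizes (roughly NLI-en, NER-en, POS-zh from Table 1)
sizes = [392_000, 20_000, 7_900]
print(temperature_sampling_probs(sizes, tau=1))  # dominated by the large TLP
print(temperature_sampling_probs(sizes, tau=5))  # flattened toward uniform
```

Raising $\tau$ shrinks the gap between large and small TLPs, which is why higher temperatures help under-represented datasets get sampled during meta-training.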
#### 3.2.3 Parameterized Sampling
The sampling strategy defined in Section 3.2.2 remains constant throughout
meta-training and depends only on dataset sizes. Wang et al. (2020b) proposed
a parameterized sampling technique called MultiDDS that builds on
Differentiable Data Selection (DDS) Wang et al. (2020a) for weighting multiple
datasets. The $P_{\mathcal{D}}(i)$ are parameterized using $\psi_{i}$ as
$P_{\mathcal{D}}(i)=e^{\psi_{i}}/\sum_{j}e^{\psi_{j}}$, with the initial value
of $\psi$ satisfying $P_{\mathcal{D}}(i)=q_{i}$. The optimization for $\psi$
and $\theta$ is performed in an alternating manner Colson et al. (2007):
$\displaystyle\psi^{*}=\underset{\psi}{\operatorname{argmin}}\,\,J(\theta^{*}(\psi),\mathcal{D}_{dev})$ (3)
$\displaystyle\theta^{*}(\psi)=\underset{\theta}{\operatorname{argmin}}\,\,E_{x,y\sim P(T;\psi)}[l(x,y;\theta)]$ (4)
$J(\theta,\mathcal{D}_{dev})$ is the objective function which we want to
minimize over development set(s). The reward function, $R(x,y;\theta_{t})$, is
defined as:
$\displaystyle R(x,y;\theta_{t})\approx\underbrace{\nabla J(\theta_{t},\mathcal{D}_{dev})^{T}}_{g_{dev}}\cdot\underbrace{\nabla_{\theta}l(x,y;\theta_{t-1})}_{g_{train}}$ (5)
$\displaystyle\approx\cos(g_{dev},g_{train})$ (6)
The $\psi$’s are updated using the REINFORCE Williams (1992) algorithm:
$\displaystyle\psi_{t+1}\leftarrow\psi_{t}+R(x,y;\theta_{t})\cdot\nabla_{\psi}\log(P(x,y;\psi))$ (7)
The Reptile meta-learning algorithm (along with details of the parameterized
sampling strategy) is outlined in Algorithm 1.
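The $\psi$-update of lines 14-15 in Algorithm 1 can be sketched as follows. This is an illustration under our own naming (`mdds_psi_update`), with the rewards passed in as stand-ins for the cosine similarities of Eq. 6:

```python
import numpy as np

def softmax(psi):
    """Stable softmax: P(i) = e^psi_i / sum_j e^psi_j."""
    e = np.exp(psi - psi.max())
    return e / e.sum()

def mdds_psi_update(psi, rewards, lr=0.1):
    """REINFORCE-style update (Alg. 1, lines 14-15):
    d_psi = sum_i R(i) * grad_psi log P(i; psi), then one gradient step on psi."""
    p = softmax(psi)
    d_psi = np.zeros_like(psi)
    for i, r in enumerate(rewards):
        grad_log_p = -p           # grad_psi log softmax(psi)_i = e_i - p
        grad_log_p[i] += 1.0
        d_psi += r * grad_log_p
    return psi + lr * d_psi

# TLPs whose training gradients align with the dev gradient
# (reward = cos(g_dev, g_train)) gain sampling probability over time.
psi = np.zeros(3)
for _ in range(50):
    psi = mdds_psi_update(psi, rewards=np.array([0.9, 0.1, -0.5]))
```

Repeated updates shift probability mass toward the TLP with the highest dev-alignment reward, which is the mechanism by which MultiDDS departs from static temperature sampling.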
## 4 Experimental Setup
### 4.1 Evaluation Benchmark
The recently released XTREME dataset Hu et al. (2020) is a multilingual multi-
task benchmark consisting of classification, structured prediction, QA and
retrieval tasks. Each constituent task has associated datasets in multiple
languages. The sources of the POS and NER datasets are the Universal Dependencies
v2.5 treebank Nivre et al. (2020) and WikiAnn Pan et al. (2017), respectively, with
ground-truth labels available for each language. Large-scale datasets for QA,
NLI and PA were originally available only for English. The PAWS-X Yang et al.
(2019) dataset contains machine-translated training pairs and human-translated
evaluation pairs for PA. The authors of XTREME train a custom-built
translation system to obtain translated datasets for QA and NLI. For the NLI
task, we train using MultiNLI Williams et al. (2018) and evaluate on XNLI
Conneau et al. (2018). For the QA task, SQuAD 1.1 Rajpurkar et al. (2016) was
used for training and MLQA Lewis et al. (2019) for evaluation.
Task | en | hi | es | de | fr | zh
---|---|---|---|---|---|---
Natural Language Inference (NLI) | 392K | – | 392K | 392K | 392K | –
Question Answering (QA) | 88.0K | 82.4K | 81.8K | 80.0K | – | –
Part Of Speech (POS) | 21.2K | 13.3K | 28.4K | 166K | – | 7.9K
Named Entity Recognition (NER) | 20K | 5K | 20K | 20K | 20K | 20K
Paraphrase Identification (PA) | 49.4K | – | 49.4K | 49.4K | 49.4K | 49.4K
Table 1: Dataset matrix showing the datasets available from the XTREME
benchmark; a dash marks task/language combinations with no dataset. The number
of training instances is given for each available dataset.
Regarding evaluation metrics, for QA we report F1 scores and for the other
four tasks (PA, NLI, POS, NER) we report accuracy scores.
### 4.2 Implementation Details
BERT Devlin et al. (2019) models yield state-of-the-art performance on many
NLP tasks. Since we are dealing with datasets in multiple languages, we build
our meta-learning models on the mBERT Pires et al. (2019); Wu and Dredze (2019)
base architecture, as implemented by Wolf et al. (2020), with output layers
specific to each task. In our experiments, we use the AdamW Loshchilov and
Hutter (2017) optimizer to make gradient-based updates to the model’s
parameters using batches from a particular TLP (Alg. 1, Line 6). This
optimizer is shared across all the TLPs. When performing the meta-step (Alg.
1, Line 8), we use vanilla stochastic gradient descent (SGD) Robbins and Monro
(1951) updates. Similarly, in the case of parameterized sampling the weights
are updated (Alg. 1, Line 15) using vanilla SGD.
Meta training involves sampling a set of $m$ tasks, taking $k$ gradient update
steps from the initial parameter to arrive at $\theta_{i}^{(k)}$ for task
$T_{i}$ and finally updating $\theta$ using the Reptile update rule. For meta-
models we fix learning rate = 3e-5 and dropout probability = 0.1 (provided by
XTREME for reproduction of baselines). Grid search was performed on $m$ $\in$
{4, 8, 16}, $k$ $\in$ {2, 3, 4, 5} and $\beta$ $\in$ {0.1, 0.5, 1.0} for All
TLPs model ($\tau$ = 1). The best setting ($m$ = 8, $k$ = 3, $\beta$ = 1.0)
was selected based on validation score (accuracy or F1) averaged over all
TLPs. These hyperparameters were kept constant for all further experiments.
Each meta-learning model is trained for 5 epochs. We then finetune the meta-
model individually on each TLP and evaluate the results. Finetuning parameters
vary across tasks and are given in Appendix B.
### 4.3 Data Selection and Sampling Strategies
We experiment with three different configurations for the set of TLPs to be
considered during meta-learning: (a) using all tasks for a given language
(Lang-Limited) (b) using all languages for a given task (Task-Limited) and (c)
using all tasks and all languages (All TLPs). Since the dataset size varies
across tasks (as also across languages), we use temperature sampling within
each setting for $\tau$ = 1, 2, 5 and $\infty$. (In Table 4 of the Appendix C
in the supplementary material, we report results for different choices of TLP
selection and different values of the temperature.)
Figure 2: (a) Size of the training dataset by language for each task. (b)
Proportion of each dataset in meta-training for different values of $\tau$.
With respect to the Input in Algorithm 1, there are two sets of TLPs that need
to be selected for parameterized sampling: $\mathcal{D}_{train}$ and
$\mathcal{D}_{dev}$. In order to analyse the effect of the choice of task and
language, we experiment with the following four settings:
(a) $\mathcal{D}_{train}$ = Lang-Limited, $\mathcal{D}_{dev}$ = Target TLP
(b) $\mathcal{D}_{train}$ = Task-Limited, $\mathcal{D}_{dev}$ = Target TLP
(c) $\mathcal{D}_{train}$ = All TLPs, $\mathcal{D}_{dev}$ = Lang-Limited
(d) $\mathcal{D}_{train}$ = All TLPs, $\mathcal{D}_{dev}$ = Task-Limited.
Models (a) and (b) are referred to as mDDS, while (c) and (d) are called
mDDS-Lang and mDDS-Task, respectively. Results for these four models are
reported in Table 2 alongside temperature sampling for comparison.
### 4.4 Baselines
Our first baseline system for each TLP uses mBERT-based models trained on data
specific to each TLP, which is either available as ground-truth or in a
translated form. We follow the same hyperparameter settings as reported in
XTREME. We also present three multi-task learning (MTL) baseline systems: task
limited (Task-Limited), language limited (Lang-Limited), and the use of all
TLPs during training (All TLPs MTL). During MTL training, we concatenate and
shuffle the selected datasets. The model is trained for 5 epochs with a
learning rate of 5e-5. We refer the reader to Appendix A for more training
details.
## 5 Results and Analysis
Table 2 presents all our main results comparing different data selection and
sampling strategies used for meta-learning. Each column corresponds to a
target TLP; the best-performing meta-learned models for each target TLP within
each data selection setting have been highlighted in colour. (Light-to-dark
gradation reflects improvements in performance.) From Table 2, we see that our
meta-learned models outperform the baseline systems across all the TLPs
corresponding to QA, NLI and PA. (POS and NER also mostly benefit from meta-
learning, but the margins of improvement are much smaller compared to the
other tasks given the already high baseline scores).
Model | SS | QA (F1) | NLI (Acc.) | PA (Acc.)
---|---|---|---|---
| | en | hi | es | de | en | es | de | fr | en | es | de | fr | zh
Baselines | | 79.94 | 59.94 | 65.83 | 63.17 | 81.39 | 78.37 | 76.82 | 77.30 | 92.35 | 89.75 | 87.45 | 89.61 | 83.32
Lang-Limited MTL | | 69.80 | 53.24 | 62.29 | 58.91 | 80.49 | 76.10 | 75.18 | 74.94 | 93.75 | 87.75 | 85.35 | 88.55 | 80.49
Task-Limited MTL | | 74.04 | 57.77 | 64.28 | 61.47 | 80.95 | 78.15 | 75.90 | 77.14 | 93.65 | 86.65 | 86.25 | 86.82 | 81.24
All TLPs MTL | | 63.22 | 42.94 | 54.05 | 51.61 | 80.05 | 76.48 | 74.86 | 76.18 | 93.50 | 90.30 | 88.45 | 89.71 | 82.66
Lang-Limited | Temp | -0.04 | -0.24 | -0.27 | +0.07 | +0.06 | +0.39 | +0.03 | -0.70 | +0.45 | +0.05 | +0.35 | +0.40 | -0.06
| mDDS | +0.07 | -0.12 | +0.06 | +0.14 | +0.02 | -0.61 | -0.80 | -0.60 | -0.25 | -0.05 | 0.00 | -0.30 | -1.41
Task-Limited | Temp | +0.55 | +0.43 | +0.50 | +0.40 | +1.65 | +1.12 | +1.25 | +0.79 | +0.20 | -0.15 | -0.55 | +0.85 | -0.15
| mDDS | +0.21 | +0.62 | -0.67 | +1.06 | +1.32 | +1.10 | +1.39 | +0.48 | +0.50 | -0.65 | -0.35 | +1.45 | +1.06
All TLPs | Temp | +0.53 | +0.47 | +0.32 | +0.47 | +1.90 | +1.22 | +1.45 | +0.95 | +0.35 | +0.45 | +1.20 | +1.05 | +0.85
| mDDS-Lang | +0.08 | +0.50 | -1.57 | +0.08 | +0.76 | +0.26 | -0.10 | +0.32 | +0.25 | +0.85 | +0.75 | +0.75 | +1.11
| mDDS-Task | +0.18 | +0.60 | +0.11 | +0.54 | +1.50 | +0.90 | +0.72 | +0.72 | +0.10 | +0.80 | +1.27 | +1.10 | +1.16
Model | SS | NER (Acc.) | POS (Acc.)
---|---|---|---
| | en | hi | es | de | fr | zh | en | hi | es | de | zh
Baselines | | 93.23 | 95.72 | 95.84 | 97.32 | 95.48 | 94.34 | 96.15 | 93.57 | 96.02 | 97.37 | 92.60
Lang-Limited MTL | | 92.54 | 92.67 | 95.14 | 96.40 | 94.38 | 92.97 | 95.08 | 92.43 | 95.19 | 97.19 | 89.71
Task-Limited MTL | | 93.51 | 93.94 | 95.77 | 97.09 | 95.27 | 93.72 | 95.70 | 93.34 | 95.73 | 97.35 | 92.52
All TLPs MTL | | 92.28 | 91.95 | 94.90 | 96.18 | 94.38 | 92.53 | 94.70 | 91.89 | 95.10 | 97.03 | 89.92
Lang-Limited | Temp | +0.60 | +0.06 | +0.09 | +0.24 | -0.09 | -0.47 | -0.06 | -0.01 | +0.10 | +0.04 | -0.17
| mDDS | -0.21 | -0.85 | -0.20 | -0.10 | -0.57 | -0.55 | -0.27 | -0.02 | -0.19 | -0.06 | -0.37
Task-Limited | Temp | +0.79 | -0.46 | 0.00 | -0.07 | -0.18 | -0.51 | -0.22 | -0.05 | -0.21 | +0.02 | -0.09
| mDDS | -0.10 | -1.61 | 0.00 | -0.16 | -0.33 | -0.69 | -0.38 | -0.02 | -0.22 | +0.05 | -0.12
All TLPs | Temp | -0.15 | -0.70 | +0.13 | 0.00 | -0.16 | -0.39 | -0.22 | -0.09 | -0.21 | +0.03 | -0.16
| mDDS-Lang | -0.16 | -0.09 | +0.11 | -0.08 | -0.14 | -0.65 | -0.21 | -0.10 | -0.11 | +0.03 | -0.17
| mDDS-Task | -0.27 | -0.42 | +0.08 | -0.14 | -0.07 | -0.58 | -0.22 | -0.14 | -0.19 | +0.02 | -0.09
Table 2: Main results comparing different data selection and sampling
strategies. Sampling strategy, SS=Temp refers to the temperature-based
sampling strategy and SS=mDDS refers to the multiDDS-based sampling strategy.
mDDS-Task and mDDS-Lang refer to the use of a development set for multiDDS
that contains all languages for a task and all tasks for a language,
respectively. The best result among the Baseline and the three MTL models is
highlighted in orange. For each column, we report the difference (positive or
negative) of the meta-models from the best baseline (highlighted in orange) of
that column.
##### Task-Limited vs Lang-Limited models.
For QA and NLI, we observe that the Task-Limited models are always better than
the Lang-Limited models. This is in line with our intuition that tasks like QA
and NLI (which require deeper semantic representations) will benefit more by
using data from different languages for the same task. The opposite seems to
hold for POS and NER, where the Lang-Limited models are almost always better
than the Task-Limited models. Since POS and NER are relatively shallower
tasks, it makes sense that they benefit more from language-specific training
that relies on token embeddings shared across tasks.
##### Investigating Sampling Strategies.
In Table 2, all the scores shown for the Temp sampling strategy are the best
scores across the four values $\tau=1,2,5,\infty$. (The complete
table is available in Appendix C in the supplementary material.)
Figure 3: Evolution of $\psi$s and rewards as a function of training time for
three Lang-Limited tasks evaluated on (a) QA-en (b) NLI-es and (c) POS-de.
We also present comparisons with the mDDS, mDDS-Lang and mDDS-Task sampling
strategies enforced within the Lang-Limited, Task-Limited and All TLPs models,
respectively. For POS and NER, our best meta-learned models are mostly
Lang-Limited with Temp sampling. Intuitively, for these shallower tasks, mDDS
does not offer any benefit from allowing instances to be sampled from other
tasks.
Model | NER (Acc.) | POS (Acc.)
---|---|---
| bn | et | fi | ja | mr | ta | te | ur | et | fi | ja | mr | ta | te | ur
Task-Limited MTL | 81.80 | 93.98 | 94.47 | 81.03 | 90.63 | 83.46 | 87.67 | 69.25 | 85.21 | 83.98 | 58.42 | 72.56 | 73.88 | 79.15 | 86.08
All TLPs MTL | 77.49 | 90.35 | 92.65 | 77.80 | 81.19 | 81.21 | 86.17 | 64.27 | 69.63 | 73.50 | 57.24 | 68.80 | 70.52 | 72.41 | 81.59
Task-Limited | +1.91 | +0.63 | +0.16 | +0.35 | -0.67 | +1.34 | +0.63 | +2.14 | +2.94 | +2.15 | +0.83 | +8.64 | +2.34 | +2.82 | -0.30
All TLPs | +0.62 | +0.35 | -0.11 | +0.19 | -0.92 | +1.25 | +0.43 | +9.10 | +2.56 | +2.01 | -1.42 | +8.27 | +1.24 | +2.51 | -0.16
All TLPs mDDS-Task | -0.83 | +0.09 | -0.20 | -1.34 | -1.87 | +0.49 | +0.05 | +3.62 | +1.91 | +1.08 | -1.74 | +8.64 | +1.24 | +1.88 | -0.72
Model | QA (F1) | NLI (Acc.) | PA (Acc.)
---|---|---|---
| ar | vi | ar | bg | el | ru | sw | th | tr | ur | vi | ja | ko
Task-Limited MTL | 32.25 | 44.35 | 62.88 | 67.47 | 66.09 | 67.85 | 43.61 | 43.16 | 57.79 | 57.03 | 69.45 | 78.23 | 74.85
All TLPs MTL | 40.14 | 54.08 | 64.54 | 67.99 | 66.25 | 70.05 | 43.89 | 45.72 | 56.73 | 56.93 | 72.02 | 77.61 | 73.49
Task-Limited | +8.14 | +6.63 | +4.35 | +5.15 | +4.62 | +2.72 | +8.51 | +14.42 | +6.79 | +5.27 | +1.3 | +0.21 | +1.81
All TLPs | +5.24 | +3.62 | +4.41 | +4.73 | +4.79 | +2.94 | +11.44 | +13.04 | +7.05 | +5.67 | +1.24 | +3.07 | +4.57
All TLPs mDDS-Task | +6.89 | +6.29 | +3.19 | +4.33 | +4.09 | +2.38 | +8.71 | +13.16 | +7.09 | +4.41 | +1.04 | +2.81 | +4.92
Table 3: Results comparing zero-shot evaluations for several external
languages with competitive MTL baselines. The best MTL model is highlighted in
orange. Rows for meta-models show the difference (positive or negative) of the
meta-model result from the best MTL setting (orange) for that column.
To better understand the effects of mDDS sampling, Figure 3 shows plots of the
rewards and sampling probabilities $\psi$’s as a function of training time for
two deeper tasks, QA-en and NLI-es, along with a shallower task, POS-de. We
note that initially all the TLPs in any mDDS setting start with similar
rewards, causing the $\psi$’s to converge towards the $\tau=\infty$ state. We
highlight the following three observations:
* •
We find that the mDDS strategy does not help NLI at all. This is because the
NLI task occupies the largest proportion across tasks at the start, as shown
in Figure 2, and the proportion of NLI decreases substantially over time
(since all tasks start with similar rewards at the beginning of meta
training). Thus, for tasks that are over-represented in the meta-learning
phase, temperature-based sampling is likely to be sufficient.
* •
We observe that the rewards for both QA and NLI are consistently high,
irrespective of the target TLP. This suggests that both QA and NLI are
information-rich tasks and could benefit other tasks in meta-learning. This is
also apparent from the accuracies for PA in Table 2, where all the best meta-
learned models employ mDDS sampling.
* •
From the sampling probabilities for QA-en, we see that both QA and NLI are
given almost equal weight. However, from the F1 scores in Table 2, the best
numbers for QA are in the Task-Limited setting, which suggests that QA does
not benefit from any other task. One explanation could be that the input
sequence length for NLI is 128 while the inputs for QA are of length 384,
leaving less room for QA to benefit from NLI.
##### Zero-shot Evaluations.
Zero-shot evaluation is performed on languages that were not part of the
training (henceforth referred to as external languages). For QA, NLI and PA,
we select all external languages for which datasets are available in XTREME.
For NER and POS, the number of external languages is close to 35, so we choose
a subset of these to report results. For evaluation, we compare models that
are agnostic to the target language during meta-training (Task-Limited, All
TLPs and All TLPs mDDS-Task). Since Lang-Limited MTL is language-specific and
does not offer a competitive baseline when applied to an external language, we
compare against the more competitive Task-Limited MTL and All TLPs MTL.
An interesting observation from the zero shot results in Table 3 is that for
every external language, on the ‘shallower’ NER and POS tasks, the Task-
Limited variant of meta-learning performs better than both the variants of
MTL, viz., Task-Limited MTL and All TLPs MTL. In contrast, the ‘deeper’ tasks,
viz., QA, NLI and PA benefit more from the use of meta-learning using All TLPs
setting, presumably because, as argued earlier, the deeper tasks tend to help
each other more.
## 6 Conclusion
We present an effective use of meta-learning for capturing task and language
interactions in multi-task, multilingual settings. This effective use involves
appropriate strategies for sampling tasks and languages, as well as rough
knowledge of the level of abstraction (deep vs. shallow representation) of
each task. We present experiments on the XTREME multilingual benchmark dataset
using five tasks and six languages. Our meta-learned model shows clear
performance improvements over competitive baseline models. We observe that
deeper tasks consistently benefit from meta-learning. Furthermore, shallower
tasks benefit from deeper tasks when meta-learning is restricted to a single
language. Finally, zero-shot evaluations for several external languages
demonstrate the benefit of using meta-learning over two multi-task baselines
while also reinforcing the linguistic insight that tasks requiring deeper
representations tend to collaborate better.
## Acknowledgements
We thank anonymous reviewers for providing constructive feedback. We are
grateful to IBM Research, India (specifically the IBM AI Horizon Networks -
IIT Bombay initiative) for their support and sponsorship.
## References
* Adams et al. (2017) Oliver Adams, Adam Makarucha, Graham Neubig, Steven Bird, and Trevor Cohn. 2017\. Cross-lingual word embeddings for low-resource language modeling. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 937–947.
* Aharoni et al. (2019) Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 3874–3884.
* Arivazhagan et al. (2019) Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen, and Yonghui Wu. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. _ArXiv_ , abs/1907.05019.
* Artetxe and Schwenk (2019) Mikel Artetxe and Holger Schwenk. 2019. Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond. _Transactions of the ACL 2019_.
* Caruana (1993) Rich Caruana. 1993. Multitask learning: A knowledge-based source of inductive bias. In _ICML_.
* Chen et al. (2018) Junkun Chen, Xipeng Qiu, Pengfei Liu, and Xuanjing Huang. 2018. Meta multi-task learning for sequence modeling. In _AAAI_.
* Colson et al. (2007) Benoît Colson, Patrice Marcotte, and Gilles Savard. 2007. An overview of bilevel optimization. _Annals of operations research_ , 153(1):235–256.
* Conneau et al. (2020) Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 8440–8451, Online. Association for Computational Linguistics.
* Conneau and Lample (2019) Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In _Advances in Neural Information Processing Systems_ , volume 32, pages 7059–7069. Curran Associates, Inc.
* Conneau et al. (2018) Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating cross-lingual sentence representations. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Dou et al. (2019) Zi-Yi Dou, Keyi Yu, and Antonios Anastasopoulos. 2019. Investigating meta-learning algorithms for low-resource natural language understanding tasks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 1192–1197.
* Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In _Proceedings of the 34th International Conference on Machine Learning_ , volume 70 of _Proceedings of Machine Learning Research_ , pages 1126–1135, International Convention Centre, Sydney, Australia. PMLR.
* Gu et al. (2018) Jiatao Gu, Yong Wang, Yun Chen, Victor OK Li, and Kyunghyun Cho. 2018. Meta-learning for low-resource neural machine translation. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 3622–3631.
* Hashimoto et al. (2017) Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 1923–1933, Copenhagen, Denmark. Association for Computational Linguistics.
* Hu et al. (2020) Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In _Proceedings of the 37th International Conference on Machine Learning_ , volume 119 of _Proceedings of Machine Learning Research_ , pages 4411–4421. PMLR.
* Jawanpuria et al. (2015) Pratik Jawanpuria, Jagarlapudi Saketha Nath, and Ganesh Ramakrishnan. 2015. Generalized hierarchical kernel learning. _J. Mach. Learn. Res._ , 16:617–652.
* Kendall et al. (2018) Alex Kendall, Yarin Gal, and Roberto Cipolla. 2018. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 7482–7491.
* Kiperwasser and Ballesteros (2018) Eliyahu Kiperwasser and Miguel Ballesteros. 2018. Scheduled multi-task learning: From syntax to translation. _Transactions of the Association for Computational Linguistics_ , 6:225–240.
* Kumar et al. (2019) Vishwajeet Kumar, Nitish Joshi, Arijit Mukherjee, Ganesh Ramakrishnan, and Preethi Jyothi. 2019. Cross-lingual training for automatic question generation. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4863–4872, Florence, Italy. Association for Computational Linguistics.
* Lewis et al. (2019) Patrick Lewis, Barlas Oğuz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019\. MLQA: Evaluating Cross-lingual Extractive Question Answering. _arXiv preprint arXiv:1910.07475_.
* Liu et al. (2019a) Shikun Liu, Edward Johns, and Andrew J. Davison. 2019a. End-to-end multi-task learning with attention. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Liu et al. (2019b) Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019b. Multi-task deep neural networks for natural language understanding. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4487–4496, Florence, Italy. Association for Computational Linguistics.
* Loshchilov and Hutter (2017) Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. _ArXiv_ , abs/1711.05101.
* McCann et al. (2018) Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. _arXiv preprint arXiv:1806.08730_.
* Nichol et al. (2018) Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. _ArXiv_ , abs/1803.02999.
* Nivre et al. (2020) Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajič, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In _Proceedings of the 12th Language Resources and Evaluation Conference_ , pages 4034–4043, Marseille, France. European Language Resources Association.
* Nooralahzadeh et al. (2020) Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, and Isabelle Augenstein. 2020. Zero-shot cross-lingual transfer with meta learning. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 4547–4562, Online. Association for Computational Linguistics.
* Pan et al. (2017) Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1946–1958.
* Pires et al. (2019) Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4996–5001, Florence, Italy. Association for Computational Linguistics.
* Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
* Robbins and Monro (1951) Herbert Robbins and Sutton Monro. 1951. A stochastic approximation method. _The annals of mathematical statistics_ , pages 400–407.
* Santoro et al. (2016) Adam Santoro, Sergey Bartunov, Matthew M Botvinick, Daan Wierstra, and Timothy P. Lillicrap. 2016. Meta-learning with memory-augmented neural networks. In _ICML_.
* Vauquois (1968) Bernard Vauquois. 1968. A survey of formal grammars and algorithms for recognition and transformation in mechanical translation. In _IFIP Congress_.
* Wang et al. (2018) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In _Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pages 353–355.
## Appendices
### Appendix A: Baseline Training Details
For QA, the learning rate is 3e-5, the sequence length is 384, and the model is
trained for 2 epochs. For PA, NLI, POS, and NER, the learning rate is 2e-5 and
the sequence length is 128. NLI and PA models are trained for 5 epochs, while
POS and NER models are trained for 10 epochs. The hyperparameters were kept
constant across languages for each task.
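The per-task settings above can be collected in a single lookup table; a minimal sketch (the dictionary layout and function name are our own illustration, not taken from the released training scripts):

```python
# Baseline hyperparameters per task, as reported above.
# (This table is an illustrative summary; the original training code is not shown.)
BASELINE_HPARAMS = {
    "QA":  {"learning_rate": 3e-5, "max_seq_length": 384, "epochs": 2},
    "PA":  {"learning_rate": 2e-5, "max_seq_length": 128, "epochs": 5},
    "NLI": {"learning_rate": 2e-5, "max_seq_length": 128, "epochs": 5},
    "POS": {"learning_rate": 2e-5, "max_seq_length": 128, "epochs": 10},
    "NER": {"learning_rate": 2e-5, "max_seq_length": 128, "epochs": 10},
}

def hparams(task: str) -> dict:
    """Look up the baseline hyperparameters for a task (same across languages)."""
    return BASELINE_HPARAMS[task]
```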
### Appendix B: Finetuning Details
For finetuning we keep the same number of epochs as the baseline of each task,
i.e., 2 epochs for QA, 10 epochs for POS and NER, and 5 epochs for NLI and PA.
For QA we finetune with learning rates 3e-5 and 3e-6, and for POS/NER with
learning rates 2e-5 and 2e-6, selecting the better of the two models. For PA
and NLI the results for learning rate 2e-5 were consistently worse than for
2e-6, so we use lr = 2e-6 for PA and NLI.
### Appendix C: Temperature Sampling
Model | T | QA en (F1) | QA hi | QA es | QA de | NLI en (Acc.) | NLI es | NLI de | NLI fr | PA en (Acc.) | PA es | PA de | PA fr | PA zh
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Baselines | | 79.94 | 59.94 | 65.83 | 63.17 | 81.39 | 78.37 | 76.82 | 77.30 | 92.35 | 89.75 | 87.45 | 89.61 | 83.32
Lang-Limited MTL | | 69.80 | 53.24 | 62.29 | 58.91 | 80.49 | 76.10 | 75.18 | 74.94 | 93.75 | 87.75 | 85.35 | 88.55 | 80.49
Task-Limited MTL | | 74.04 | 57.77 | 64.28 | 61.47 | 80.95 | 78.15 | 75.90 | 77.14 | 93.65 | 86.65 | 86.25 | 86.82 | 81.24
All TLPs MTL | | 63.22 | 42.94 | 54.05 | 51.61 | 80.05 | 76.48 | 74.86 | 76.18 | 93.50 | 90.30 | 88.45 | 89.71 | 82.66
Lang-Limited | T = 1 | 79.49 | 59.42 | 64.67 | 63.04 | 81.13 | 78.76 | 76.23 | 76.51 | 93.85 | 89.15 | 87.83 | 89.63 | 82.56
Lang-Limited | T = 2 | 78.81 | 59.68 | 65.10 | 63.24 | 80.87 | 77.56 | 76.85 | 76.60 | 93.85 | 90.15 | 87.70 | 89.41 | 83.10
Lang-Limited | T = 5 | 79.90 | 58.74 | 65.56 | 62.12 | 81.19 | 78.17 | 76.10 | 76.56 | 93.65 | 90.35 | 88.60 | 90.11 | 83.20
Lang-Limited | T = $\infty$ | 79.71 | 59.70 | 65.29 | 62.89 | 81.45 | 78.45 | 76.74 | 76.46 | 94.20 | 89.65 | 88.80 | 89.56 | 83.26
Task-Limited | T = 1 | 80.30 | 60.37 | 66.32 | 63.57 | 82.91 | 79.49 | 77.96 | 78.02 | 93.95 | 90.15 | 87.50 | 90.56 | 82.66
Task-Limited | T = 2 | 79.95 | 59.94 | 66.33 | 63.50 | 83.03 | 79.41 | 77.94 | 78.08 | 93.05 | 89.85 | 87.90 | 89.66 | 83.17
Task-Limited | T = 5 | 80.49 | 60.17 | 65.94 | 62.74 | 82.75 | 79.33 | 77.98 | 78.00 | 93.90 | 89.80 | 87.65 | 90.21 | 83.12
Task-Limited | T = $\infty$ | 79.77 | 59.86 | 66.01 | 62.96 | 83.03 | 79.39 | 78.07 | 78.09 | 93.60 | 89.75 | 87.75 | 89.61 | 82.42
All TLPs | T = 1 | 80.20 | 59.89 | 66.10 | 63.64 | 83.29 | 79.59 | 77.84 | 78.19 | 93.90 | 89.95 | 88.70 | 90.41 | 83.57
All TLPs | T = 2 | 80.47 | 60.41 | 66.04 | 63.56 | 82.71 | 78.83 | 77.96 | 78.04 | 93.50 | 90.75 | 89.65 | 90.71 | 84.02
All TLPs | T = 5 | 80.01 | 59.38 | 66.15 | 63.53 | 83.19 | 79.51 | 78.10 | 78.21 | 94.10 | 90.05 | 88.70 | 90.26 | 84.17
All TLPs | T = $\infty$ | 80.27 | 59.82 | 64.41 | 63.08 | 83.27 | 79.43 | 78.27 | 78.25 | 94.05 | 90.75 | 88.70 | 90.76 | 83.42
Model | T | NER en (Acc.) | NER hi | NER es | NER de | NER fr | NER zh | POS en (Acc.) | POS hi | POS es | POS de | POS zh
---|---|---|---|---|---|---|---|---|---|---|---|---
Baselines | | 93.23 | 95.72 | 95.84 | 97.32 | 95.48 | 94.34 | 96.15 | 93.57 | 96.02 | 97.37 | 92.60
Lang-Limited MTL | | 92.54 | 92.67 | 95.14 | 96.40 | 94.38 | 92.97 | 95.08 | 92.43 | 95.19 | 97.19 | 89.71
Task-Limited MTL | | 93.51 | 93.94 | 95.77 | 97.09 | 95.27 | 93.72 | 95.70 | 93.34 | 95.73 | 97.35 | 92.52
All TLPs MTL | | 92.28 | 91.95 | 94.90 | 96.18 | 94.38 | 92.53 | 94.70 | 91.89 | 95.10 | 97.03 | 89.92
Lang-Limited | T = 1 | 93.14 | 95.36 | 95.40 | 97.21 | 95.39 | 93.63 | 95.96 | 93.33 | 95.81 | 97.32 | 92.32
Lang-Limited | T = 2 | 93.24 | 94.76 | 95.80 | 97.56 | 95.07 | 93.53 | 95.87 | 93.53 | 95.93 | 97.39 | 92.40
Lang-Limited | T = 5 | 94.03 | 95.78 | 95.93 | 97.24 | 94.99 | 93.60 | 96.09 | 93.56 | 95.85 | 97.33 | 92.43
Lang-Limited | T = $\infty$ | 94.11 | 95.40 | 95.75 | 96.89 | 95.35 | 93.87 | 95.99 | 93.28 | 96.12 | 97.41 | 92.35
Task-Limited | T = 1 | 94.30 | 95.26 | 95.82 | 97.25 | 95.26 | 93.62 | 95.93 | 93.36 | 95.81 | 97.31 | 92.38
Task-Limited | T = 2 | 93.30 | 94.92 | 95.82 | 97.07 | 95.30 | 93.63 | 95.84 | 93.52 | 95.78 | 97.31 | 92.38
Task-Limited | T = 5 | 93.29 | 95.02 | 95.73 | 96.98 | 95.19 | 93.56 | 95.92 | 93.34 | 95.75 | 97.39 | 92.43
Task-Limited | T = $\infty$ | 93.37 | 94.70 | 95.84 | 96.95 | 95.20 | 93.83 | 95.77 | 93.33 | 95.76 | 97.33 | 92.51
All TLPs | T = 1 | 93.14 | 93.63 | 95.91 | 97.30 | 95.32 | 93.53 | 95.90 | 93.35 | 95.76 | 97.36 | 92.43
All TLPs | T = 2 | 93.35 | 95.02 | 95.78 | 97.30 | 95.29 | 93.58 | 95.92 | 93.48 | 95.81 | 97.39 | 92.44
All TLPs | T = 5 | 93.36 | 94.51 | 95.93 | 97.26 | 95.28 | 93.95 | 95.92 | 93.35 | 95.78 | 97.40 | 92.42
All TLPs | T = $\infty$ | 93.35 | 94.95 | 95.97 | 97.32 | 95.28 | 93.63 | 95.93 | 93.31 | 95.80 | 97.30 | 92.43
Table 4: Detailed results of temperature-based heuristic sampling for
different selection settings. The best result among the Baseline and the three
MTL models is highlighted in orange. For each column we present the difference
(positive or negative) of the meta models from the best baseline of that
column.
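The temperature values $T$ in Table 4 refer to the standard temperature-based sampling heuristic, where a task-language pair with $n_i$ training examples is sampled with probability proportional to $n_i^{1/T}$; $T = 1$ recovers proportional sampling and $T = \infty$ uniform sampling. A minimal sketch (the dataset sizes are made up for illustration):

```python
import math

def temperature_weights(sizes, T):
    """Sampling probabilities p_i proportional to n_i^(1/T).
    T = 1 gives proportional sampling; T -> infinity gives uniform sampling."""
    if math.isinf(T):
        return [1.0 / len(sizes)] * len(sizes)
    scaled = [n ** (1.0 / T) for n in sizes]
    total = sum(scaled)
    return [s / total for s in scaled]

# Illustrative dataset sizes for three task-language pairs (not from the paper).
sizes = [100_000, 10_000, 1_000]
p1 = temperature_weights(sizes, 1)           # proportional: dominated by the largest set
p5 = temperature_weights(sizes, 5)           # flatter distribution
pinf = temperature_weights(sizes, math.inf)  # uniform
```

Raising $T$ flattens the distribution, so low-resource pairs are seen more often during multi-task training.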
# Effective Communications: A Joint Learning and Communication Framework for
Multi-Agent Reinforcement Learning over Noisy Channels
Tze-Yang Tung, Szymon Kobus, Joan Pujol Roig, Deniz Gündüz
Information Processing and Communications Laboratory (IPC-Lab)
Dept. of Electrical and Electronic Engineering, Imperial College London, UK
This work was supported in part by the European Research Council (ERC)
Starting Grant BEACON (grant agreement no. 677854) and by the UK EPSRC (grant
no. EP/T023600/1). An earlier version of this work was presented at the IEEE
Global Communications Conference (GLOBECOM) in December 2020 [1].
###### Abstract
We propose a novel formulation of the “effectiveness problem” in
communications, put forth by Shannon and Weaver in their seminal work [2], by
considering multiple agents communicating over a noisy channel in order to
achieve better coordination and cooperation in a multi-agent reinforcement
learning (MARL) framework. Specifically, we consider a multi-agent partially
observable Markov decision process (MA-POMDP), in which the agents, in
addition to interacting with the environment, can also communicate with each
other over a noisy communication channel. The noisy communication channel is
considered explicitly as part of the dynamics of the environment and the
message each agent sends is part of the action that the agent can take. As a
result, the agents learn not only to collaborate with each other but also to
communicate “effectively” over a noisy channel. This framework generalizes
both the traditional communication problem, where the main goal is to convey a
message reliably over a noisy channel, and the “learning to communicate”
framework that has received recent attention in the MARL literature, where the
underlying communication channels are assumed to be error-free. We show via
examples that the joint policy learned using the proposed framework is
superior to that where the communication is considered separately from the
underlying MA-POMDP. This is a very powerful framework, which has many real
world applications, from autonomous vehicle planning to drone swarm control,
and opens up the rich toolbox of deep reinforcement learning for the design of
multi-user communication systems.
## I Introduction
Communication is essential for our society. Humans use language to communicate
ideas, which has given rise to complex social structures, and scientists have
observed either gestural or vocal communication in other animal groups, the
complexity of which increases with the complexity of the social structure of
the group [3]. Communication helps to achieve complex goals by enabling
cooperation and coordination [4, 5]. Advances in our ability to store and
transmit information over time and long distances have greatly expanded our
capabilities and allowed us to turn the world into the connected society we
observe today. Communication technologies are at the core of this massively
complex system.
Communication technologies are built upon fundamental mathematical principles
and engineering expertise. The fundamental quest in the design of these systems
has been to deal with various imperfections in the communication
channel (e.g., noise and fading) and the interference among transmitters.
Decades of research and engineering efforts have produced highly advanced
networking protocols, modulation techniques, waveform designs and coding
techniques that can overcome these challenges quite effectively. However, this
design approach ignores the aforementioned core objective of communication in
enabling coordination and cooperation. To some extent, we have separated the
design of a communication network that can reliably carry signals from one
point to another from the ‘language’ that is formed to achieve coordination
and cooperation among agents.
This engineering approach was also highlighted by Shannon and Weaver in [2] by
organizing the communication problem into three “levels”: They described level
A as the technical problem, which tries to answer the question “How accurately
can the symbols of communication be transmitted?”. Level B is referred to as
the semantic problem, and asks the question “How precisely do the transmitted
symbols convey the desired meaning?”. Finally, Level C, called the
effectiveness problem, strives to answer the question “How effectively does
the received meaning affect conduct in the desired way?”. As we have described
above, our communication technologies mainly deal with Level A, ignoring the
semantics or the effectiveness problems. This simplifies the problem into the
transmission of a discrete message or a continuous waveform over a
communication channel in the most reliable manner. The semantics problem deals
with the meaning of the messages, and is rather abstract. There is a growing
interest in the semantics problem in the recent literature [6, 7, 8, 9, 10].
However, these works typically formulate the semantics as an end-to-end joint
source-channel coding problem, where the reconstruction objective can be
distortion with respect to the original signal [11, 12], or a more general
function that can model some form of ‘meaning’ [6, 13, 14, 15], which goes
beyond reconstructing the original signal. (To be more precise, remote
hypothesis testing, classification, or retrieval problems can also be
formulated as end-to-end joint source-channel coding problems, albeit with a
non-additive distortion measure.)
Figure 1: An illustration of a MARL problem with noisy communication between
the agents, e.g., agents communicating over a shared wireless channel. The
emerging communication scheme should not only allow the agents to better
coordinate and cooperate to maximize their rewards, but also mitigate the
adverse effects of the wireless channel, such as noise and interference.
In this paper, we deal with the ‘effectiveness problem’, which generalizes the
problems in both level A and level B. In particular, we formulate a multi-
agent problem with noisy communications between the agents, where the goal of
communications is to help agents better cooperate and achieve a common goal.
See Fig. 1 for an illustration of a multi-agent grid-world, where agents can
communicate through noisy wireless links. It is well-known that multi-agent
reinforcement learning (MARL) problems are notoriously difficult, and are a
topic of continuous research. Originally, these problems were approached by
treating each agent independently, as in a standard single-agent reinforcement
learning (RL) problem, while treating other agents as part of the state of the
environment. Consensus and cooperation are achieved through common or
correlated reward signals. However, this approach leads to overfitting of
policies due to limited local observations of each agent and it relies on
other agents not varying their policies [16]. It has been observed that these
limitations can be overcome by leveraging communication between the agents [5,
17].
Recently, there has been significant interest in the emergence of
communication among agents within the RL literature [18, 19, 20, 21]. These
works consider MARL problems, in which agents have access to a dedicated
communication channel, and the objective is to learn a communication protocol,
which can be considered as a ‘language’ to achieve the underlying goal, which
is typically translated into maximizing a specific reward function. This
corresponds to Level C, as described by Shannon and Weaver in [2], where the
agents change their behavior based on the messages received over the channel
in order to maximize their reward. However, the focus of the aforementioned
works is the emergence of communication protocols within the limited
communication resources that can provide the desired impact on the behavior of
the agents, and, unlike Shannon and Weaver, these works ignore the physical
layer characteristics of the channel.
Our goal in this work is to consider the effectiveness problem by taking into
account both the channel noise and the end-to-end learning objective. In this
problem, the goal of communication is not “reproducing at one point either
exactly or approximately a message selected at another point” as stated by
Shannon in [2], which is the foundation of the communication and information
theoretic formulations that have been studied over the last seven decades.
Instead, the goal is to enable cooperation in order to improve the objective
of the underlying multi-agent game. As we will show later in this paper, the
codes that emerge from the proposed framework can be very different from those
that would be used for reliable communication of messages.
We formulate this novel communication problem as a MARL problem, in which the
agents have access to a noisy communication channel. More specifically, we
formulate this as a multi-agent partially observable Markov decision process
(POMDP), and construct RL algorithms that can learn policies that govern both
the actions of the agents in the environment and the signals they transmit
over the channel. A communication protocol in this scenario should aim to
enable cooperation and coordination among agents in the presence of channel
noise. Therefore, the emerging modulation and coding schemes must not only be
capable of error correction/compensation, but also enable agents to share
their knowledge of the environment and/or their intentions. We believe that
this novel formulation opens up many new directions for the design of
communication protocols and codes that will be applicable in many multi-agent
scenarios from teams of robots to platoons of autonomous cars [22], to drone
swarm planning [23].
We summarize the main contributions of this work as follows:
1. 1.
We propose a novel formulation of the “effectiveness problem” in
communications, where agents communicate over a noisy communication channel in
order to achieve better coordination and cooperation in a MARL framework. This
can be interpreted as a joint communication and learning approach in the RL
context [15]. The current paper is an initial study of this general framework,
focusing on scenarios that involve only point-to-point communications for
simplicity. More involved multi-user communication and coordination problems
will be the subject of future studies.
2. 2.
The proposed formulation generalizes the recently studied “learning to
communicate” framework in the MARL literature [18, 19, 20, 21], where the
underlying communication channels are assumed to be error-free. This framework
has been used to argue about the emergence of natural languages [24, 25];
however, in practice, there is inherent noise in any communication medium,
particularly in human/animal communications. Indeed, languages have evolved to
deal with such noise. For example, Shannon estimated that the English language
has approximately 75% redundancy. Such redundancy provides error correction
capabilities. Hence, we argue that the proposed framework better models
realistic communication problems, and the emerging codes and communication
schemes can help better understand the underlying structure of natural
languages.
3. 3.
The proposed framework also generalizes communication problems at level A,
which have been the target of most communication protocols and codes that have
been developed in the literature. Channel coding, source coding, as well as
joint source-channel coding problems, and their multi-user extensions can be
obtained as special cases of the proposed framework. The proposed deep
reinforcement learning (DRL) framework provides alternative approaches to the
design of codes and communication schemes for these problems that can
outperform existing ones. We highlight that there are very limited practical
code designs in the literature for most multi-user communication problems, and
the proposed framework and the exploitation of deep representations and
gradient-based optimization in DRL can provide a scalable and systematic
methodology to make progress in these challenging problems.
4. 4.
We study a particular case of the proposed general framework as an example,
which reduces to a point-to-point communication problem. In particular, we
show that any single-agent Markov decision process (MDP) can be converted into
a multi-agent partially observable MDP (MA-POMDP) with a noisy communication
link between the two agents. We consider both the binary symmetric channel
(BSC), the additive white Gaussian noise (AWGN) channel, and the bursty noise
(BN) channel for the noisy communication link and solve the MA-POMDP problem
by treating the other agent as part of the environment, from the perspective
of one agent. We employ deep Q-learning (DQN) [26] and deep deterministic
policy gradient (DDPG) [27] to train the agents. Substantial performance
improvement is observed in the resultant policy over those learned by
considering the cooperation and communication problems separately.
5. 5.
We then present the joint modulation and channel coding problem as an
important special case of the proposed framework. In recent years, there has
been a growing interest in using machine learning techniques to design
practical channel coding and modulation schemes [28, 29, 30, 11, 31, 32].
However, with the exception of [32], most of these approaches assume that the
channel model is known and differentiable, allowing the use of supervised
training by directly backpropagating through the channel using the channel
model. In this paper, we learn to communicate over an unknown channel solely
based on the reward function by formulating it as a RL problem. The proposed
DRL framework goes beyond the method employed in [32], which treats the
channel as a random variable, and numerically approximates the gradient of the
loss function. It is shown through numerical examples that the proposed DRL
techniques employing DDPG [27] and actor-critic [33] algorithms significantly
reduce the block error rate (BLER) of the resultant code.
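The DQN training mentioned in the last contribution relies on the standard temporal-difference target $y = r + \gamma \max_{a'} Q_{\text{target}}(s', a')$. A minimal numpy sketch of this target computation (the array shapes and values are our own illustration, not the paper's implementation):

```python
import numpy as np

def dqn_td_targets(rewards, next_q_values, dones, gamma=0.99):
    """Standard DQN targets y = r + gamma * max_a' Q_target(s', a'),
    with bootstrapping disabled on terminal transitions."""
    max_next_q = next_q_values.max(axis=1)           # max over actions
    return rewards + gamma * (1.0 - dones) * max_next_q

# Toy batch: 2 transitions, 3 actions.
rewards = np.array([-1.0, 10.0])
next_q = np.array([[0.5, 2.0, 1.0],
                   [0.0, 0.0, 0.0]])
dones = np.array([0.0, 1.0])                         # second transition ends the episode
targets = dqn_td_targets(rewards, next_q, dones)
```

The online network is then regressed toward `targets`; DDPG replaces the discrete max with a learned deterministic actor for continuous channel inputs.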
## II Related Works
The study of communication for multi-agent systems is not new [34]. However,
due to the success of deep neural networks (DNNs) for reinforcement learning
(RL), this problem has received renewed interest in the context of DNNs [24]
and deep RL (DRL) [18, 35, 36], where partially observable multi-agent
problems are considered. In each case, the agents, in addition to taking
actions that impact the environment, can also communicate with each other via
a limited-capacity communication channel. Particularly, in [18], two
approaches are considered: reinforced inter-agent learning (RIAL), where two
centralized Q-learning networks learn to act and communicate, respectively,
and differentiable inter-agent learning (DIAL), where communication feedback
is provided via backpropagation of gradients through the channel, while the
communication between agents is restricted during execution. Similarly, in
[37, 38], the authors propose a centralized learning, decentralized execution
approach, where a central critic is used to learn the state-action values of
all the agents and use those values to train individual policies of each
agent. Although they also consider the transmitted messages as part of the
agents’ actions, the communication channel is assumed to be noiseless.
CommNet [35] attempts to leverage communications in cooperative MARL by using
multiple continuous-valued transmissions at each time step to make decisions
for all agents. Each agent broadcasts its message to every other agent, and
the averaged message received by each agent forms part of the input. However,
this solution lacks scalability as it depends on a centralized network by
treating the problem as a single RL problem. Similarly, BiCNet [39] utilizes
recurrent neural networks to connect individual agent’s policy with a
centralized controller aggregating the hidden states of each agent, acting as
communication messages.
The reliance of the aforementioned works on a broadcast channel to communicate
with all the agents simultaneously may be infeasible or highly inefficient in
practice. To overcome this limitation, in [19], the authors propose an
attentional communication model that learns when communication is needed and
how to integrate shared information for cooperative decision making. In [21],
directional communication between agents is achieved with a signature-based
soft attention mechanism, where each message is associated to the target
recipient. They also propose multi-stage communication, where multiple rounds
of communication take place before an action is taken.
It is important to note that, with the exception of [40], all of the prior
works discussed above rely on error-free communication channels. MARL with
noisy communications is considered in [40], where two agents placed on a grid
world aim to coordinate to step on the goal square simultaneously. However,
for the particular problem presented in [40], it can be shown that even if the
agents are trained independently without any communication at all, the total
discounted reward would still be higher than the average reward achieved by
the scheme proposed in [40].
## III Problem Formulation
We consider a multi-agent partially observable Markov decision process (MA-
POMDP) with noisy communications. Consider first a Markov game with $N$ agents
$(\mathcal{S},\{\mathcal{O}_{i}\}_{i=1}^{N},\{\mathcal{A}_{i}\}_{i=1}^{N},P,r)$,
where $\mathcal{S}$ represents all possible configurations of the
environment and agents, $\mathcal{O}_{i}$ and $\mathcal{A}_{i}$ are the
observation and action sets of agent $i$, respectively, $P$ is the transition
kernel that governs the environment, and $r$ is the reward function. At each
step $t$ of this Markov game, agent $i$ has a partial observation of the state
$o_{i}^{(t)}\in\mathcal{O}_{i}$, and takes action
$a_{i}^{(t)}\in\mathcal{A}_{i}$, $\forall i$. Then, the state of the MA-POMDP
transitions from $s^{(t)}$ to $s^{(t+1)}$ according to the joint actions of
the agents following the transition probability
$P(s^{(t+1)}|s^{(t)},\mathbf{a}^{(t)})$, where
$\mathbf{a}^{(t)}=(a_{1}^{(t)},\ldots,a_{N}^{(t)})$. Observations in the next
time instant follow the conditional distribution
$\mathrm{Pr}(o^{(t+1)}|s^{(t)},\mathbf{a}^{(t)})$. While, in general, each
agent can have a separate reward function, we consider herein the fully
cooperative setting, where the agents receive the same team reward
$r^{(t)}=r(s^{(t)},\mathbf{a}^{(t)})$ at time $t$.
In order to coordinate and maximize the total reward, the agents are endowed
with a noisy communication channel, which is orthogonal to the environment.
That is, the environment transitions depend only on the environment actions,
and the only impact of the communication channel is that the actions of the
agents can now depend on the past received messages as well as the past
observations and rewards. We assume that the communication channel is governed
by the conditional probability distribution $P_{c}$, and we allow the agents
to use the channel $M$ times at each time $t$. Here, $M$ can be considered as
the channel bandwidth. Let the signals transmitted and received by agent $i$
at time step $t$ be denoted by $\mathbf{m}_{i}^{(t)}\in\mathcal{C}_{t}^{M}$
and $\hat{\mathbf{m}}_{i}^{(t)}\in\mathcal{C}_{r}^{M}$, respectively, where
$\mathcal{C}_{t}$ and $\mathcal{C}_{r}$ denote the input and output alphabets
of the channel, which can be discrete or continuous. We assume for simplicity
that the input and output alphabets of the channel are the same for all the
agents. Channel inputs and outputs at time $t$ are related through the
conditional distribution
$P_{c}\big(\hat{\mathbf{M}}^{(t)}\big|\mathbf{M}^{(t)}\big)=\mathrm{Pr}\big(\hat{\mathbf{M}}=\{\hat{\mathbf{m}}_{i}^{(t)}\}_{i=1}^{N}\,\big|\,\mathbf{M}=\{\mathbf{m}_{i}^{(t)}\}_{i=1}^{N}\big)$,
where
$\hat{\mathbf{M}}=(\hat{\mathbf{m}}_{1},\ldots,\hat{\mathbf{m}}_{N})\in\mathbb{R}^{N\times
M}$ denotes the matrix of received signals with each row
$\hat{\mathbf{m}}_{i}$ corresponding to a vector of symbols representing the
codeword chosen by agent $i$; likewise,
$\mathbf{M}=(\mathbf{m}_{1},\ldots,\mathbf{m}_{N})\in\mathbb{R}^{N\times M}$
is the matrix of transmitted signals. That is, the received signal of agent
$i$ over the communication channel is a random function of the signals
transmitted by all other agents, characterized by the conditional distribution
of the multi-user communication channel. In our simulations, we will consider
independent and identically distributed channels as well as a channel with
Markov noise, but our formulation is general enough to take into account
arbitrarily correlated channels, both across time and users.
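In the experiments described later, the channel kernel $P_{c}$ specializes to simple memoryless models such as the BSC and AWGN channels. A minimal sketch of these two channels (the function names, seed, and noise parameters are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def bsc(bits, p):
    """Binary symmetric channel: each bit is flipped independently w.p. p."""
    flips = rng.random(len(bits)) < p
    return np.bitwise_xor(bits, flips.astype(bits.dtype))

def awgn(symbols, snr_db):
    """AWGN channel: add zero-mean Gaussian noise for a given SNR,
    assuming unit average signal power."""
    noise_power = 10 ** (-snr_db / 10)
    return symbols + rng.normal(0.0, np.sqrt(noise_power), size=symbols.shape)

m = np.array([0, 1, 1, 0, 1], dtype=np.int64)
m_hat = bsc(m, p=0.1)            # noisy copy of the transmitted bits
x_hat = awgn(np.ones(5), 10.0)   # real-valued received symbols
```

A bursty-noise channel with Markov noise can be built analogously by letting the flip probability depend on a two-state (good/bad) Markov chain.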
We can define a new Markov game with noisy communications, where the actions
of agent $i$ now consist of two components, the environment actions
$a_{i}^{(t)}$ as before, and the signal to be transmitted over the channel
$\mathbf{m}_{i}^{(t)}$. Each agent, in addition to taking actions that affect
the state of the environment, can also send signals to other agents over $M$
uses of the noisy communication channel. The observation of each agent is now
given by $(o_{i}^{(t)},\hat{\mathbf{m}}_{i}^{(t)})$; that is, a combination of
the partial observation of the environment as before and the channel output
signal.
At each time step $t$, agent $i$ observes
$(o_{i}^{(t)},\hat{\mathbf{m}}_{i}^{(t)})$ and selects an action
$(a_{i}^{(t)},\mathbf{m}_{i}^{(t)})$ according to its policy
$\pi_{i}:\mathcal{O}_{i}\times\mathcal{C}_{r}^{M}\rightarrow\mathcal{A}_{i}\times\mathcal{C}_{t}^{M}$.
The overall policy over all agents can be defined as
$\Pi:\mathcal{S}\rightarrow\mathcal{A}$. The objective of the Markov game with
noisy communications is to maximize the discounted sum of rewards
$V_{\Pi}(s)=\mathbb{E}_{\Pi}\Bigg[\sum_{t=1}^{\infty}\gamma^{t-1}r^{(t)}\Bigg|s^{(1)}=s\Bigg]$
(1)
for any initial state $s\in\mathcal{S}$, where $\gamma\in[0,1)$ is the discount
factor ensuring convergence. We also define the state-action value function, also
referred to as Q-function as
$Q_{\Pi}(s^{(t)},a^{(t)})=\mathbb{E}_{\Pi}\Bigg[\sum_{i=t}^{\infty}\gamma^{i-t}r^{(i)}\Bigg|s^{(t)},a^{(t)}\Bigg].$
(2)
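The discounted sum inside the expectation in (1) can be computed recursively from a sampled episode; a minimal sketch (the episode rewards are made up, mirroring the step-penalty/terminal-bonus structure used later in the guided-robot example):

```python
def discounted_return(rewards, gamma=0.99):
    """Computes sum_{t=1}^{T} gamma^(t-1) r^(t), the quantity inside
    the expectation in (1), by a backward recursion."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# A toy episode: step penalties of -1 until a terminal reward of +10.
episode = [-1, -1, -1, 10]
g = discounted_return(episode, gamma=0.9)
```

Averaging `discounted_return` over many rollouts gives a Monte Carlo estimate of $V_{\Pi}(s)$ for the starting state.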
In the subsequent sections we will show that this formulation of the MA-POMDP
with noisy communications lends itself to multiple problem domains where
communication is vital to achieve non-trivial total reward values, and we
devise methods that jointly learn to collaborate and communicate despite the
noise in the channel. Although the introduced MA-POMDP framework with
communications is fairly general and can model any multi-agent scenario with
complex multi-user communications, our focus in this paper will be on point-
to-point communications. This will allow us to expose the benefits of the
joint communication and learning design, without having to deal with the
challenges of multi-user communications. Extensions of the proposed framework
to scenarios that would involve multi-user communication channels will be
studied in future work.
## IV Guided Robot with Point-to-Point Communications
Figure 2: Illustration of the guided robot problem in grid world. The set
$\mathcal{A}_{2}$ of 16 possible actions the scout agent can take using hand
crafted (HC) codewords.
In this section, we consider a single-agent MDP and turn it into a MA-POMDP
problem by dividing the single agent into two separate agents, a guide and a
scout, which are connected through a noisy communication channel. In this
formulation, we assume that the guide observes the state of the original MDP
perfectly, but cannot take actions on the environment directly. Contrarily,
the scout can take actions on the environment, but cannot observe the
environment state. Therefore, the guide communicates to the scout through a
noisy communication channel and the scout has to take actions based on the
signals it receives from the guide through the communication channel. The
scout can be considered as a robot remotely controlled by the guide agent,
which has sensors to observe the environment.
We consider this particular setting since it clearly exposes the importance of
communication as the scout depends solely on the signals received from the
guide. Without the communication channel, the scout is limited to purely
random actions independent of the current state. Moreover, this scenario also
allows us to quantify the impact of the channel noise on the overall
performance since we recover the original single-agent MDP when the
communication channel is perfect; that is, if any desired message can be
conveyed over the channel in a reliable manner. Therefore, if the optimal
reward for the original MDP can be determined, this would serve as an upper
bound on the reward of the MA-POMDP with noisy communications.
As an example to study the proposed framework and to develop and test
numerical algorithms aiming to solve the obtained MA-POMDP problem, we
consider a grid world of size $L\times L$, denoted by
$\mathcal{L}=[L]\times[L]$, where $[L]=\{0,1,\dots,L-1\}$. We denote the
scout position at time step $t$ by
$p_{s}^{(t)}=(x_{s}^{(t)},y_{s}^{(t)})\in\mathcal{L}$. At each time instant,
the scout can take one action from the set of 16 possible actions
$\mathcal{A}=\{[1,0],[-1,0],[0,1],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1],[2,0],[-2,0],[0,2],[0,-2],[2,2],[-2,2],[-2,-2],[2,-2]\}$.
See Fig. 2 for an illustration of the scout and the 16 actions it can take. If the action taken
by the scout ends up in a cell outside the grid world, the agent remains in
its original location. The transition probability kernel of this MDP is
specified as follows: after each action, the agent moves to the intended
target location with probability (w.p.) $1-\delta$, and to a random
neighboring cell w.p. $\delta$. That is, the next state is given by
$s^{(t+1)}=s^{(t)}+a^{(t)}$ w.p. $1-\delta$, and
$s^{(t+1)}=s^{(t)}+a^{(t)}+z^{(t)}$ w.p. $\delta$, where $z^{(t)}$ is uniformly
distributed over the set $\{[1,0],[1,1],[0,1],[-1,1],[-1,0],[0,-1],[-1,-1],[1,-1]\}$.
The objective of the scout is to find the treasure, located at
$p_{g}=(x_{g},y_{g})\in\mathcal{L}$ as quickly as possible. We assume that the
initial position of the scout and the location of the treasure are random, and
are not the same. The scout takes instructions from the guide, who observes
the grid world, and utilizes a noisy communication channel $M$ times to
transmit signal $\mathbf{m}^{(t)}$ to the scout, who observes
$\hat{\mathbf{m}}^{(t)}$ from the output of the channel. To put it in the
context of the MA-POMDP defined in Section III, agent 1 is the guide, with
observable state $o_{1}^{(t)}=s^{(t)}$, where $s^{(t)}=(p_{s}^{(t)},p_{g})$,
and action set $\mathcal{A}_{1}=\mathcal{C}_{t}$. Agent 2 is the scout, with
observation $o_{2}^{(t)}=\hat{\mathbf{m}}^{(t)}$ and action set
$\mathcal{A}_{2}=\mathcal{A}$ (or, more precisely,
$o_{1}^{(t)}=(s^{(t)},\emptyset),o_{2}^{(t)}=(\emptyset,\hat{\mathbf{m}}_{2}^{(t)})$). We
define the reward function as follows to encourage the agents to collaborate
to find the treasure as quickly as possible:
$r^{(t)}=\begin{cases}10,~{}&\text{if }p_{s}^{(t)}=p_{g},\\\
-1,~{}&\text{otherwise}.\end{cases}$ (3)
The game terminates when $p_{s}^{(t)}=p_{g}$.
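As a minimal illustration, the transition kernel above and the reward of Eqn. (3) can be sketched as follows. This is a sketch with our own function and variable names, and we assume the boundary rule is applied after the random perturbation:

```python
import random

# Sketch of the grid-world MDP described above (names are ours, not the paper's).
ACTIONS = [(1,0),(-1,0),(0,1),(0,-1),(1,1),(-1,1),(-1,-1),(1,-1),
           (2,0),(-2,0),(0,2),(0,-2),(2,2),(-2,2),(-2,-2),(2,-2)]
NEIGHBORS = [(1,0),(1,1),(0,1),(-1,1),(-1,0),(0,-1),(-1,-1),(1,-1)]

def step(p_s, a, p_g, L, delta, rng=random):
    """One transition: move by action a, perturbed w.p. delta; reward per Eqn. (3)."""
    dx, dy = a
    if rng.random() < delta:             # random perturbation z^(t), w.p. delta
        zx, zy = rng.choice(NEIGHBORS)
        dx, dy = dx + zx, dy + zy
    x, y = p_s[0] + dx, p_s[1] + dy
    if not (0 <= x < L and 0 <= y < L):  # off the grid: remain in place
        x, y = p_s
    p_next = (x, y)
    reward = 10 if p_next == p_g else -1
    return p_next, reward, p_next == p_g
```
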
We should highlight that despite the simplicity of the problem, the original
MDP is not a trivial one when both the initial state of the agent and the
target location are random, as it has a rather large state space, and learning
the optimal policy requires a long training process in order to observe all
possible agent and target location pairs sufficiently many times. In order to
simplify the learning of the optimal policy, and focus on learning the
communication scheme, we will pay special attention to the scenario where
$\delta=0$. This corresponds to the scenario in which the underlying MDP is
deterministic, and it is not difficult to see that the optimal solution to
this MDP is to take the shortest path to the treasure.
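Under $\delta=0$, the minimum number of steps between two cells can be computed offline with a breadth-first search over the 16 moves; such a routine is a convenient reference for the optimal policy (a sketch, with names of our own):

```python
from collections import deque

ACTIONS = [(1,0),(-1,0),(0,1),(0,-1),(1,1),(-1,1),(-1,-1),(1,-1),
           (2,0),(-2,0),(0,2),(0,-2),(2,2),(-2,2),(-2,-2),(2,-2)]

def shortest_steps(start, goal, L):
    """Breadth-first search: minimum number of actions from start to goal
    in the deterministic (delta = 0) grid world of size L x L."""
    if start == goal:
        return 0
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for dx, dy in ACTIONS:
            nx, ny = x + dx, y + dy
            if 0 <= nx < L and 0 <= ny < L and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                if (nx, ny) == goal:
                    return dist[(nx, ny)]
                queue.append((nx, ny))
    return None  # unreachable (cannot happen on a connected grid)
```
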
Figure 3: Information flow between the guide and the scout.
We consider three types of channel distributions: the BSC, the AWGN, and the
BN channel. In the BSC case, we have $\mathcal{C}_{t}=\\{-1,+1\\}$. For the
AWGN channel and the BN channel, we have $\mathcal{C}_{t}=\\{-1,+1\\}$ if the
input is constrained to binary phase shift keying (BPSK) modulation, or
$\mathcal{C}_{t}=\mathbb{R}$ if no limitation is imposed on the input
constellation. We will impose an average power constraint in the latter case.
In both cases, the output alphabet is $\mathcal{C}_{r}=\mathbb{R}$. For the
BSC, the output of the channel is given by
$\hat{\mathbf{m}}_{i}^{(t)}=\mathbf{m}_{i}^{(t)}\oplus\mathbf{n}^{(t)}$, where
$\mathbf{n}^{(t)}\sim\mathrm{Bernoulli(p_{e})}$. For the AWGN channel, the
output at the $i$th use of the channel is given by
$\hat{\mathbf{m}}_{i}^{(t)}=\mathbf{m}_{i}^{(t)}+\mathbf{n}^{(t)}$, where
$\mathbf{n}^{(t)}\sim\mathcal{N}(0,\mathbf{I}_{M}\sigma_{n}^{2})$ is the zero-
mean Gaussian noise term with covariance matrix $\mathbf{I}_{M}\sigma_{n}^{2}$,
where $\mathbf{I}_{M}$ is the $M$-dimensional identity matrix. For the BN
channel, the output at the $i$th use of the channel is given by
$\hat{\mathbf{m}}_{i}^{(t)}=\mathbf{m}_{i}^{(t)}+\mathbf{n}_{b}^{(t)}$, where
$\mathbf{n}_{b}^{(t)}$ is a two-state Markov noise process, with one state being the
low-noise state $\mathcal{N}(0,\mathbf{I}_{M}\sigma_{n}^{2})$, as in the AWGN case, and
the other being the high-noise state
$\mathcal{N}(0,\mathbf{I}_{M}(\sigma_{n}^{2}+\sigma_{b}^{2}))$. The probability of
transitioning from the low noise state to the high noise state and remaining
in that state is $p_{b}$. In practice, this channel models an occasional
random interference from a nearby transmitter.
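The three channel models can be simulated directly; the sketch below uses our own names and a stateful object for the burst-noise Markov chain:

```python
import random, math

def bsc(bits, p_e, rng=random):
    """Binary symmetric channel: flip each +/-1 symbol w.p. p_e."""
    return [-b if rng.random() < p_e else b for b in bits]

def awgn(symbols, sigma_n, rng=random):
    """AWGN channel: add zero-mean Gaussian noise of standard deviation sigma_n."""
    return [s + rng.gauss(0.0, sigma_n) for s in symbols]

class BurstNoiseChannel:
    """Two-state Markov (burst) noise: low-noise state with std sigma_n,
    high-noise state with std sqrt(sigma_n^2 + sigma_b^2); the chain enters
    (and stays in) the high-noise state w.p. p_b per use."""
    def __init__(self, sigma_n, sigma_b, p_b, rng=random):
        self.sigma_n, self.sigma_b, self.p_b, self.rng = sigma_n, sigma_b, p_b, rng
        self.high = False
    def __call__(self, symbols):
        self.high = self.rng.random() < self.p_b   # Markov transition
        extra = self.sigma_b ** 2 if self.high else 0.0
        sigma = math.sqrt(self.sigma_n ** 2 + extra)
        return [s + self.rng.gauss(0.0, sigma) for s in symbols]
```
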
We first consider the BSC case, also studied in [1]. The action set of agent 1
is $\mathcal{A}_{1}=\\{-1,+1\\}^{M}$, while the observation set of agent 2 is
$\mathcal{O}_{2}=\\{-1,+1\\}^{M}$. We will employ the deep Q-network (DQN) algorithm,
introduced in [26], which uses deep neural networks (DNNs) to approximate the
Q-function in Eqn. (2). More specifically, we use two distinct DNNs,
parameterized by $\boldsymbol{\theta}_{1}$ and $\boldsymbol{\theta}_{2}$,
respectively, representing DNNs for approximating the Q-functions of agent 1
(guide) and agent 2 (scout). The guide observes
$o_{1}^{(t)}=(p_{s}^{(t)},p_{g})$ and chooses a channel input signal
$\mathbf{m}_{1}^{(t)}=a_{1}^{(t)}=\operatorname*{arg\,max}_{a}Q_{\boldsymbol{\theta}_{1}}(o_{1}^{(t)},a)\in\mathcal{A}_{1}$,
based on the current Q-function approximation. The signal is then transmitted
across $M$ uses of the BSC. The scout observes
$o_{2}^{(t)}=\hat{\mathbf{m}}_{2}^{(t)}$ at the output of the BSC, and chooses
an action based on the current Q-function approximation
$a_{2}^{(t)}=\operatorname*{arg\,max}_{a}Q_{\boldsymbol{\theta}_{2}}(o_{2}^{(t)},a)\in\mathcal{A}_{2}$.
The scout then takes the action $a_{2}^{(t)}$, which updates its position
$p_{s}^{(t+1)}$, collects reward $r^{(t)}$, and the process is repeated. The
reward $r^{(t)}$ is fed to both the guide and the scout to update
$\boldsymbol{\theta}_{1}$ and $\boldsymbol{\theta}_{2}$.
As is typical in Q-learning methods, we use a replay buffer, target networks, and
$\epsilon$-greedy exploration to improve the learned policy. The replay buffers
$\mathcal{R}_{1}$ and $\mathcal{R}_{2}$ store experiences
$(o_{1}^{(t)},a_{1}^{(t)},r^{(t)},o_{1}^{(t+1)})$ and
$(o_{2}^{(t)},a_{2}^{(t)},r^{(t)},o_{2}^{(t+1)})$ for the guide and scout,
respectively, and we sample them uniformly to update the parameters
$\boldsymbol{\theta}_{1}$ and $\boldsymbol{\theta}_{2}$. Sampling uniformly from the
buffers breaks the temporal correlation between consecutive samples. We use target parameters
${\boldsymbol{\theta}_{1}^{-}}$ and ${\boldsymbol{\theta}_{2}^{-}}$, which are
copies of ${\boldsymbol{\theta}_{1}}$ and ${\boldsymbol{\theta}_{2}}$, to
compute the DQN loss function:
$\displaystyle
L_{\text{DQN}}(\boldsymbol{\theta}_{i})=\frac{1}{2}\Big{(}r^{(t)}+\gamma\max_{a}\big{\\{}Q_{\boldsymbol{\theta}_{i}^{-}}\big{(}o_{i}^{(t+1)},a\big{)}\big{\\}}-Q_{\boldsymbol{\theta}_{i}}\big{(}o_{i}^{(t)},a_{i}^{(t)}\big{)}\Big{)}^{2},~{}i=1,2.$
(4)
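For a batch of transitions, the loss in Eqn. (4) is a mean squared temporal-difference error. A sketch with a tabular Q array standing in for the DNN (array shapes and names are our own):

```python
import numpy as np

def dqn_loss(q, q_target, obs, act, rew, next_obs, gamma=0.99):
    """Mean of the squared TD error in Eqn. (4); q and q_target are arrays of
    shape (num_observations, num_actions), with q_target playing the role of
    the target network Q_theta^-."""
    rew = np.asarray(rew, dtype=float)
    td_target = rew + gamma * q_target[next_obs].max(axis=1)  # bootstrap target
    td_error = td_target - q[obs, act]
    return 0.5 * np.mean(td_error ** 2)
```
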
The parameters $\boldsymbol{\theta}_{i}$ are then updated via gradient descent
according to the gradient
$\nabla_{\boldsymbol{\theta}_{i}}L_{\text{DQN}}(\boldsymbol{\theta}_{i})$, and
the target network parameters are updated via
$\boldsymbol{\theta}_{i}^{-}\leftarrow\tau\boldsymbol{\theta}_{i}+(1-\tau)\boldsymbol{\theta}_{i}^{-},~{}~{}i=1,2,$
(5)
where $0\leq\tau\leq 1$. Because Q-learning bootstraps, if the same
$Q_{\boldsymbol{\theta}_{i}}$ were used to estimate the state-action values at
both time steps $t$ and $t+1$, the target would shift with every parameter
update, and the updates might never converge (like a dog chasing its tail).
Introducing target networks reduces this effect, since the target parameters
change much more slowly, as specified in Eqn. (5).
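The soft update of Eqn. (5) is a convex combination of the online and target parameters; a minimal sketch, with parameters represented as flat numpy arrays:

```python
import numpy as np

def soft_update(theta, theta_target, tau=0.005):
    """Eqn. (5): theta^- <- tau * theta + (1 - tau) * theta^-."""
    theta = np.asarray(theta, dtype=float)
    theta_target = np.asarray(theta_target, dtype=float)
    return tau * theta + (1.0 - tau) * theta_target
```
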
To promote exploration, we use $\epsilon$-greedy, which chooses a random
action w.p. $\epsilon$ at each time step:
$a_{i}^{(t)}=\begin{cases}\operatorname*{arg\,max}_{a}Q_{\boldsymbol{\theta}_{i}}(o_{i}^{(t)},a),~{}&\text{w.p.
}1-\epsilon\\\ a\sim\text{Uniform}(\mathcal{A}_{i}),~{}&\text{w.p.
}\epsilon,\end{cases}$ (6)
where $a\sim\text{Uniform}(\mathcal{A}_{i})$ denotes an action that is sampled
uniformly from the action set $\mathcal{A}_{i}$. The proposed solution for the
BSC case is shown in Algorithm 1.
Initialize Q networks, $\boldsymbol{\theta}_{i},i=1,2$, using Gaussian
$\mathcal{N}(0,10^{-2})$. Copy parameters to target networks
$\boldsymbol{\theta}_{i}^{-}\leftarrow\boldsymbol{\theta}_{i}$.
$\textit{episode}=0$
while _$\text{episode} <\text{episode-max}$_ do
$episode=episode+1$
$t=0$
$\epsilon=\epsilon_{\text{end}}+(\epsilon_{0}-\epsilon_{\text{end}})e^{-\frac{\text{episode}}{\lambda}}$
while _Treasure NOT found AND $t<t_{\text{max}}$_ do
$t=t+1$
Observe $o_{1}^{(t)}=(p_{s}^{(t)},p_{g})$
$m_{1}^{(t)}=a_{1}^{(t)}=\begin{cases}\operatorname*{arg\,max}_{a}Q_{\boldsymbol{\theta}_{1}}(o_{1}^{(t)},a),~{}\text{w.p.
}1-\epsilon,\\\ a\sim\text{Uniform}(\mathcal{A}_{1}),~{}\text{w.p.
}\epsilon.\end{cases}$
Observe $o_{2}^{(t)}=\hat{m}_{2}^{(t)}\sim P_{\text{BSC}}(\cdot|m_{1}^{(t)})$
$a_{2}^{(t)}=\begin{cases}\operatorname*{arg\,max}_{a}Q_{\boldsymbol{\theta}_{2}}(o_{2}^{(t)},a),~{}\text{w.p.
}1-\epsilon,\\\ a\sim\text{Uniform}(\mathcal{A}_{2}),~{}\text{w.p.
}\epsilon.\end{cases}$
Take action $a_{2}^{(t)}$, collect reward $r^{(t)}$
if _$t >1$_ then
Store experiences:
$(o_{1}^{(t-1)},a_{1}^{(t-1)},r^{(t-1)},o_{1}^{(t)})\in\mathcal{R}_{1}$ and
$(o_{2}^{(t-1)},a_{2}^{(t-1)},r^{(t-1)},o_{2}^{(t)})\in\mathcal{R}_{2}$
end if
end while
Get batches $\mathcal{B}_{1}\subset\mathcal{R}_{1}$,
$\mathcal{B}_{2}\subset\mathcal{R}_{2}$
Compute DQN average loss $L_{\text{DQN}}(\boldsymbol{\theta}_{i}),i=1,2$ as in
Eqn. (4) using batch $\mathcal{B}_{i}$
Update $\boldsymbol{\theta}_{i}$ using
$\nabla_{\boldsymbol{\theta}_{i}}L_{\text{DQN}}(\boldsymbol{\theta}_{i}),i=1,2$.
Update target networks $\boldsymbol{\theta}_{i}^{-},i=1,2$ via Eqn. (5)
end while
Algorithm 1 Proposed solution for the guided robot problem with BSC.
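The $\epsilon$-greedy selection of Eqn. (6), used by both agents in Algorithm 1, can be sketched as follows (tabular stand-in for the Q-network; names are ours):

```python
import random

def epsilon_greedy(q_row, epsilon, rng=random):
    """Eqn. (6): greedy action w.p. 1 - epsilon, uniform random w.p. epsilon.
    q_row holds the Q-values of one observation."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_row))
    return max(range(len(q_row)), key=lambda a: q_row[a])
```
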
For the binary-input AWGN and BN channels, we can use the exact same solution
as for the BSC. Note that the observation set of the scout is now
$\mathcal{O}_{2}=\mathbb{R}^{M}$. However, the more interesting case is when
$\mathcal{A}_{1}=\mathbb{R}^{M}$. It has been observed in the JSCC
literature [41, 11], that relaxing the constellation constraints, similar to
analog communications, and training the JSCC scheme in an end-to-end fashion
can provide significant performance improvements thanks to the greater degree
of freedom available to the transmitter. In this case, since the guide can
output continuous actions, we can employ the deep deterministic policy
gradient (DDPG) algorithm proposed in [27]. DDPG uses a parameterized policy
function $\mu_{\boldsymbol{\psi}}(o_{1}^{(t)})$, which specifies the current
policy by deterministically mapping the observation $o_{1}^{(t)}$ to a
continuous action. The critic
$Q_{\boldsymbol{\theta}_{1}}(o_{1}^{(t)},\mu_{\boldsymbol{\psi}}(o_{1}^{(t)}))$,
then estimates the value of the action taken by
$\mu_{\boldsymbol{\psi}}(o_{1}^{(t)})$, and is updated in the same manner as the DQN in
Eqn. (4).
The guide policy is updated by applying the chain rule to the expected return
from the initial state distribution
$\displaystyle
J=\mathbb{E}_{o_{1}^{(t)}\sim\rho^{\pi_{1}},o_{2}^{(t)}\sim\rho^{\pi_{2}},a_{1}^{(t)}\sim\pi_{1},a_{2}^{(t)}\sim\pi_{2}}\Bigg{[}\sum_{t=1}^{\infty}\gamma^{t-1}r^{(t)}(o_{1}^{(t)},o_{2}^{(t)},a_{1}^{(t)},a_{2}^{(t)})\Bigg{]},$
(7)
where $\rho^{\pi_{i}}$ is the discounted observation visitation distribution
for policy $\pi_{i}$. Since we solve this problem by letting each agent treat
the other agent as part of the environment, the value of the action taken by
the guide is only dependent on its observation $o_{1}^{(t)}$ and action
$\mu_{\boldsymbol{\psi}}(o_{1}^{(t)})$. Thus, we use a result in [42] where
the gradient of the objective $J$ in Eqn. (7) with respect to the guide policy
parameters $\boldsymbol{\psi}$ is shown to be
$\displaystyle\nabla_{\boldsymbol{\psi}}J$
$\displaystyle=\mathbb{E}_{o_{1}^{(t)}\sim\rho^{\pi_{1}}}\Big{[}\nabla_{\boldsymbol{\psi}}Q_{\boldsymbol{\theta}_{1}}(o,a)\big{|}_{o=o_{1}^{(t)},a=\mu_{\boldsymbol{\psi}}(o_{1}^{(t)})}\Big{]}$
(8)
$\displaystyle=\mathbb{E}_{o_{1}^{(t)}\sim\rho^{\pi_{1}}}\Big{[}\nabla_{a}Q_{\boldsymbol{\theta}_{1}}(o,a)\big{|}_{o=o_{1}^{(t)},a=\mu_{\boldsymbol{\psi}}(o_{1}^{(t)})}\nabla_{\boldsymbol{\psi}}\mu_{\boldsymbol{\psi}}(o)\big{|}_{o=o_{1}^{(t)}}\Big{]}$
(9)
if certain conditions specified in Theorem 1 are satisfied.
###### Theorem 1 ([42])
A function approximator $Q_{\boldsymbol{\theta}}(o,a)$ is compatible (i.e.,
the gradient of the true Q function $Q_{\boldsymbol{\theta}^{\ast}}$ is
preserved by the function approximator) with a deterministic policy
$\mu_{\boldsymbol{\psi}}(o)$, such that
$\nabla_{\boldsymbol{\psi}}J(\boldsymbol{\psi})=\mathbb{E}[\nabla_{\boldsymbol{\psi}}\mu_{\boldsymbol{\psi}}(o)\nabla_{a}Q_{\boldsymbol{\theta}}(o,a)|_{a=\mu_{\boldsymbol{\psi}}(o)}]$,
if
1. 1.
$\nabla_{a}Q_{\boldsymbol{\theta}}(o,a)|_{a=\mu_{\boldsymbol{\psi}}(o)}=\nabla_{\boldsymbol{\psi}}\mu_{\boldsymbol{\psi}}(o)^{\top}\boldsymbol{\theta}$,
and
2. 2.
$\boldsymbol{\theta}$ minimizes the mean-squared error,
$\mathbb{E}[e(o;\boldsymbol{\theta},\boldsymbol{\psi})^{\top}e(o;\boldsymbol{\theta},\boldsymbol{\psi})]$,
where
$e(o;\boldsymbol{\theta},\boldsymbol{\psi})\\!=\\!\nabla_{a}\big{[}Q_{\boldsymbol{\theta}}(o,a)|_{a=\mu_{\boldsymbol{\psi}}(o)}-Q_{\boldsymbol{\theta}^{\ast}}(o,a)|_{a=\mu_{\boldsymbol{\psi}}(o)}\big{]}$,
and $\boldsymbol{\theta}^{\ast}$ are the parameters that describe the true Q
function exactly.
In practice, criterion 2) of Theorem 1 is approximately satisfied via mean-
squared error loss and gradient descent, but criterion 1) may not be
satisfied. Nevertheless, DDPG works well in practice.
The DDPG loss is two-fold: the critic loss is computed as
$\displaystyle
L_{\text{DDPG}}^{\text{Critic}}(\boldsymbol{\theta}_{1})=\Big{(}r^{(t)}+\gamma Q_{\boldsymbol{\theta}_{1}^{-}}\big{(}o_{1}^{(t+1)},\mu_{\boldsymbol{\psi}^{-}}(o_{1}^{(t+1)})\big{)}-Q_{\boldsymbol{\theta}_{1}}\big{(}o_{1}^{(t)},\mu_{\boldsymbol{\psi}}(o_{1}^{(t)})\big{)}\Big{)}^{2},$
(10)
whereas the policy loss is computed as
$\displaystyle
L_{\text{DDPG}}^{\text{Policy}}(\boldsymbol{\psi})=-Q_{\boldsymbol{\theta}_{1}}\big{(}o_{1}^{(t)},\mu_{\boldsymbol{\psi}}(o_{1}^{(t)})\big{)}.$
(11)
As with the DQN case, we can also use a replay buffer and target network to
train the DDPG policy. To promote exploration, we add noise to the actions
taken as follows:
$a_{1}^{(t)}=\mu_{\boldsymbol{\psi}}(o_{1}^{(t)})+w^{(t)},$ (12)
where $w^{(t)}$ is sampled from an Ornstein-Uhlenbeck process [43] to generate temporally
correlated noise terms. The proposed solution for the AWGN and BN channels is
summarized in Algorithm 2. We find that by relaxing the modulation constraint
to $\mathbb{R}^{M}$, the learned policies of guide and scout are substantially
better than those achieved in the BPSK case. The numerical results
illustrating this conclusion will be discussed in Section VI.
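The noise term $w^{(t)}$ in Eqn. (12) can be generated by a discretized Ornstein-Uhlenbeck process; in the sketch below, the parameters $\theta$, $\sigma$, and $dt$ are illustrative defaults, not values from the paper:

```python
import random, math

class OUNoise:
    """Discretized Ornstein-Uhlenbeck process for temporally correlated
    exploration noise: w <- w + theta*(mu - w)*dt + sigma*sqrt(dt)*N(0,1)."""
    def __init__(self, mu=0.0, theta=0.15, sigma=0.2, dt=1.0, rng=random):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.dt, self.rng = dt, rng
        self.w = mu
    def sample(self):
        self.w += (self.theta * (self.mu - self.w) * self.dt
                   + self.sigma * math.sqrt(self.dt) * self.rng.gauss(0.0, 1.0))
        return self.w
```

The mean-reverting drift pulls the noise back toward `mu`, so successive samples are correlated rather than independent, which helps exploration in continuous action spaces.
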
Initialize Q networks $\boldsymbol{\theta}_{i},i=1,2$, using Gaussian
$\mathcal{N}(0,10^{-2})$ and policy network $\boldsymbol{\psi}$ if
$\mathcal{A}_{1}=\mathbb{R}^{M}$. Copy parameters to target networks
$\boldsymbol{\theta}_{i}^{-}\leftarrow\boldsymbol{\theta}_{i}$,
$\boldsymbol{\psi}^{-}\leftarrow\boldsymbol{\psi}$.
$\textit{episode}=1$
while _$\text{episode} <\text{episode-max}$_ do
$t=1$
$\epsilon=\epsilon_{\text{end}}+(\epsilon_{0}-\epsilon_{\text{end}})e^{-\frac{\text{episode}}{\lambda}}$
while _Treasure NOT found AND $t<t_{\text{max}}$_ do
Observe $o_{1}^{(t)}=(p_{s}^{(t)},p_{g})$
if _$\mathcal{A}_{1}=\\{-1,+1\\}^{M}$_ then
$m_{1}^{(t)}=a_{1}^{(t)}=\begin{cases}\operatorname*{arg\,max}_{a}Q_{\boldsymbol{\theta}_{1}}(o_{1}^{(t)},a),~{}\text{w.p.
}1-\epsilon,\\\ a\sim\text{Uniform}(\mathcal{A}_{1}),~{}\text{w.p.
}\epsilon.\end{cases}$
else if _$\mathcal{A}_{1}=\mathbb{R}^{M}$_ then
$m_{1}^{(t)}=\mu_{\boldsymbol{\psi}}(o_{1}^{(t)})+w^{(t)}$
Normalize $m_{1}^{(t)}$ via Eqn. (13)
end if
Observe $o_{2}^{(t)}=\hat{m}_{2}^{(t)}\sim P_{\text{AWGN}}(\cdot|m_{1}^{(t)})$ or
$\hat{m}_{2}^{(t)}\sim P_{\text{BN}}(\cdot|m_{1}^{(t)})$
$a_{2}^{(t)}=\begin{cases}\operatorname*{arg\,max}_{a}Q_{\boldsymbol{\theta}_{2}}(o_{2}^{(t)},a),~{}\text{w.p.
}1-\epsilon,\\\ a\sim\text{Uniform}(\mathcal{A}_{2}),~{}\text{w.p.
}\epsilon.\end{cases}$
Take action $a_{2}^{(t)}$, collect reward $r^{(t)}$
if _$t >1$_ then
Store experiences:
$(o_{1}^{(t-1)},a_{1}^{(t-1)},r^{(t-1)},o_{1}^{(t)})\in\mathcal{R}_{1}$ and
$(o_{2}^{(t-1)},a_{2}^{(t-1)},r^{(t-1)},o_{2}^{(t)})\in\mathcal{R}_{2}$
end if
$t=t+1$
end while
Compute average scout loss $L_{\text{DQN}}(\boldsymbol{\theta}_{2})$ as in
Eqn. (4) using batch $\mathcal{B}_{2}\subset\mathcal{R}_{2}$
Update $\boldsymbol{\theta}_{2}$ using
$\nabla_{\boldsymbol{\theta}_{2}}L_{\text{DQN}}(\boldsymbol{\theta}_{2})$
if _$\mathcal{A}_{1}=\\{-1,+1\\}^{M}$_ then
Compute DQN average loss $L_{\text{DQN}}(\boldsymbol{\theta}_{1})$ as in Eqn.
(4) using batch $\mathcal{B}_{1}\subset\mathcal{R}_{1}$
Update $\boldsymbol{\theta}_{1}$ using
$\nabla_{\boldsymbol{\theta}_{1}}L_{\text{DQN}}(\boldsymbol{\theta}_{1})$
Update target network $\boldsymbol{\theta}_{i}^{-},i=1,2$ via Eqn. (5)
else if _$\mathcal{A}_{1}=\mathbb{R}^{M}$_ then
Compute average DDPG Critic loss
$L_{\text{DDPG}}^{\text{Critic}}(\boldsymbol{\theta}_{1})$ as in Eqn. (10)
using batch $\mathcal{B}_{1}$
Compute average DDPG Policy loss
$L_{\text{DDPG}}^{\text{Policy}}(\boldsymbol{\psi})$ as in Eqn. (11) using
batch $\mathcal{B}_{1}$
Update $\boldsymbol{\theta}_{1}$ and $\boldsymbol{\psi}$ using
$\nabla_{\boldsymbol{\theta}_{1}}L_{\text{DDPG}}^{\text{Critic}}(\boldsymbol{\theta}_{1})$
and $\nabla_{\psi}L_{\text{DDPG}}^{\text{Policy}}(\boldsymbol{\psi})$
Update target network
$\boldsymbol{\theta}_{i}^{-},i=1,2,\boldsymbol{\psi}^{-}$ via Eqn. (5)
end if
$\text{episode}=\text{episode}+1$
end while
Algorithm 2 Proposed solution for guided robot problem for AWGN and BN
channel.
To ensure that the actions taken by the guide meet the power constraint, we
normalize the channel input to an average power of $1$ as follows:
$a_{1}^{(t)}[k]\leftarrow\sqrt{M}\frac{a_{1}^{(t)}[k]}{\sqrt{\Big{(}a_{1}^{(t)}\Big{)}^{\top}a_{1}^{(t)}}},~{}k=1,\dots,M.$
(13)
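Eqn. (13) rescales the length-$M$ channel input to squared norm $M$, i.e., average power 1 per channel use; a direct sketch (the function name is ours):

```python
import math

def normalize_power(a):
    """Eqn. (13): rescale so that sum_k a[k]^2 = M (unit average power)."""
    M = len(a)
    norm = math.sqrt(sum(x * x for x in a))
    return [math.sqrt(M) * x / norm for x in a]
```
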
The signal-to-noise ratio (SNR) of the AWGN channel is then defined as
$\text{SNR}=-10\log_{10}(\sigma_{n}^{2})~{}\text{(dB)}.$ (14)
Due to the burst noise, we define SNR of the BN channel by the expected SNR of
the two noise states:
$\text{SNR}=-10((1-p_{b})\log_{10}(\sigma_{n}^{2})+p_{b}\log_{10}(\sigma_{n}^{2}+\sigma_{b}^{2}))~{}\text{(dB)}.$
(15)
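Eqns. (14) and (15) map the noise variances to dB assuming unit signal power (after the normalization of Eqn. (13)); a direct sketch:

```python
import math

def snr_awgn_db(sigma_n):
    """Eqn. (14): SNR of the AWGN channel with unit signal power."""
    return -10.0 * math.log10(sigma_n ** 2)

def snr_bn_db(sigma_n, sigma_b, p_b):
    """Eqn. (15): expected SNR of the two-state burst-noise channel."""
    return -10.0 * ((1 - p_b) * math.log10(sigma_n ** 2)
                    + p_b * math.log10(sigma_n ** 2 + sigma_b ** 2))
```
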
In Section VI, we will study the effects of both the channel SNR and the
channel bandwidth on the performance. Naturally, the capacity of the channel
increases with both the SNR and the bandwidth. However, we would like to
emphasize that the Shannon capacity is not a relevant metric per se for the
problem at hand. Indeed, we will observe that the benefits from increasing
channel bandwidth and channel SNR saturate beyond some point. Nevertheless,
the performance achieved for the underlying single-agent MDP assuming a
perfect communication link from the guide to the scout serves as a more useful
bound on the performance with any noisy communication channel. The numerical
results for this example will be discussed in detail in Section VI.
## V Joint Channel Coding and Modulation
The formulation given in Section III can be readily extended to the
aforementioned classic “level A” communication problem of channel coding and
modulation. Channel coding is a problem where $B$ bits are communicated over
$M$ channel uses, which corresponds to a code rate of $B/M$ bits per channel
use. In the context of the Markov game introduced previously, we can consider
$2^{B}$ states corresponding to each possible message. Agent 2 has $2^{B}$
actions, each corresponding to a different reconstruction of the message
observed by agent 1. All the actions transition to the terminal state. The transmitter
observes the state and sends a message by using the channel $M$ times, and the
receiver observes a noisy version of the message at the output of the channel
and chooses an action. Herein, we consider the scenario with real channel
input and output values, and an average power constraint on the transmitted
signals at each time $t$. As such, we can define
$\mathcal{O}_{1}=\mathcal{A}_{2}=\\{0,1\\}^{B}$ and
$\mathcal{A}_{1}=\mathcal{O}_{2}=\mathcal{C}^{M}_{t}$. We note that maximizing
the average reward in this problem is equivalent to designing a channel code
with blocklength $M$ and rate $B/M$ with minimum BLER.
Figure 4: Information flow between the transmitter and the receiver.
There have been many recent studies focusing on the design of channel coding
and modulation schemes using machine learning techniques [28, 29, 30, 11, 31,
32]. Most of these works use supervised learning techniques, assuming a known
and differentiable channel model, which allows backpropagation through the
channel during training. On the other hand, here we assume that the channel
model is not known, and the agents are limited to their observations of the
noisy channel output signals, and must learn a communication strategy through
trial and error.
A similar problem is considered in [32] from a supervised learning
perspective. The authors show that by approximating the gradient of the
transmitter with the stochastic policy gradient of the vanilla REINFORCE
algorithm [44], it is possible to train both the transmitter and the receiver
without knowledge of the channel model. We wish to show here that this problem
is actually a special case of the problem formulation we constructed in
Section III and that by approaching this problem from a RL perspective, the
problem lends itself to a variety of solutions from the vast RL literature.
Initialize DNNs $\boldsymbol{\theta}_{i},i=1,2$, with Gaussian
$\mathcal{N}(0,10^{-2})$, and policy network $\boldsymbol{\psi}$ if using
DDPG.
$\textit{episode}=1$
while _$\text{episode} <\text{episode-max}$_ do
$\epsilon=\epsilon_{\text{end}}+(\epsilon_{0}-\epsilon_{\text{end}})e^{-\frac{\text{episode}}{\lambda}}$
Observe $o_{1}^{(1)}\sim\text{Uniform}(\mathcal{O}_{1})$
$m_{1}^{(1)}=\mu_{\boldsymbol{\psi}}(o_{1}^{(1)})+w^{(1)}$
Normalize $m_{1}^{(1)}$ via Eqn. (13)
Observe $o_{2}^{(1)}=\hat{m}_{2}^{(1)}\sim P_{\text{AWGN}}(\cdot|m_{1}^{(1)})$ or
$\hat{m}_{2}^{(1)}\sim P_{\text{BN}}(\cdot|m_{1}^{(1)})$
$a_{2}^{(1)}=\operatorname*{arg\,max}_{a}Q_{\boldsymbol{\theta}_{2}}(o_{2}^{(1)},a)$
Collect reward $r^{(1)}$
Store experiences:
$(o_{1}^{(1)},a_{1}^{(1)},r^{(1)})\in\mathcal{R}_{1}$ and
$(o_{2}^{(1)},a_{2}^{(1)},r^{(1)})\in\mathcal{R}_{2}$
Get batches $\mathcal{B}_{1}\subset\mathcal{R}_{1}$,
$\mathcal{B}_{2}\subset\mathcal{R}_{2}$
Compute average receiver loss
$L_{\text{CE}}(o_{2}^{(1)};\boldsymbol{\theta}_{2})$ as in Eqn. (16) using
batch $\mathcal{B}_{2}$
Update $\boldsymbol{\theta}_{2}$ using
$\nabla_{\boldsymbol{\theta}_{2}}L_{\text{CE}}(o_{2}^{(1)};\boldsymbol{\theta}_{2})$
if _use DDPG_ then
Compute average transmitter losses
$L_{\text{DDPG}}^{\text{Critic}}(\boldsymbol{\theta}_{1})$ and
$L_{\text{DDPG}}^{\text{Policy}}(\boldsymbol{\psi})$ as in Eqns. (17,18) using
$\mathcal{B}_{1}$
Update $\boldsymbol{\theta}_{1}$ and $\boldsymbol{\psi}$
$\nabla_{\boldsymbol{\theta}_{1}}L_{\text{DDPG}}^{\text{Critic}}(\boldsymbol{\theta}_{1})$
and
$\nabla_{\boldsymbol{\psi}}L_{\text{DDPG}}^{\text{Policy}}(\boldsymbol{\psi})$
else if _use REINFORCE_ then
Compute average transmitter gradient
$\nabla_{\boldsymbol{\theta}_{1}}J(\boldsymbol{\theta}_{1})$ as in Eqn. (19)
using $\mathcal{B}_{1}$
Update $\boldsymbol{\theta}_{1}$ using
$\nabla_{\boldsymbol{\theta}_{1}}J(\boldsymbol{\theta}_{1})$
else if _use Actor-Critic_ then
Compute average transmitter loss
$\nabla_{\boldsymbol{\theta}_{1}}J(\boldsymbol{\theta}_{1})$ as in Eqn. (21)
using $\mathcal{B}_{1}$
Update $\boldsymbol{\theta}_{1}$ using
$\nabla_{\boldsymbol{\theta}_{1}}J(\boldsymbol{\theta}_{1})$
Update value estimate $v_{\pi_{1}}(o_{1}^{(1)})$ via Eqn. (22)
end if
$\text{episode}=\text{episode}+1$
end while
Algorithm 3 Proposed solution for joint channel coding-modulation problem.
Here, we opt to use DDPG to learn a deterministic joint channel coding-
modulation scheme and use the DQN algorithm for the receiver, as opposed to
the vanilla REINFORCE algorithm used in [32]. We use negative cross-entropy
(CE) loss as the reward function:
$r^{(1)}=-L_{\text{CE}}(\hat{m}^{(1)}_{1})=\sum_{k=1}^{2^{B}}\log(Pr(c_{k}|\hat{m}^{(1)}_{1})),$
(16)
where $c_{k}$ is the $k$th codeword in $\mathcal{O}_{1}$. The receiver DQN is
trained simply with the CE loss, while the transmitter DDPG algorithm receives
the reward $r^{(1)}$. Similar to the guided robot problem in Section IV, we
use a replay buffer to improve the training process. We note here that in this
problem, each episode is simply a one-step MDP, as there is no state
transition. As such, the replay buffers store only
$(o_{1}^{(1)},a_{1}^{(1)},r^{(1)})$, $(o_{2}^{(1)},a_{2}^{(1)},r^{(1)})$ and a
target network is not required. Consequently, the DDPG losses can be
simplified as
$\displaystyle
L_{\text{DDPG}}^{\text{Critic}}(\boldsymbol{\theta}_{1})=\Big{(}Q_{\boldsymbol{\theta}_{1}}\big{(}o_{1}^{(1)},\mu_{\boldsymbol{\psi}}(o_{1}^{(1)})\big{)}-r^{(1)}\Big{)}^{2},$
(17) $\displaystyle
L_{\text{DDPG}}^{\text{Policy}}(\boldsymbol{\psi})=-Q_{\boldsymbol{\theta}_{1}}\big{(}o_{1}^{(1)},\mu_{\boldsymbol{\psi}}(o_{1}^{(1)})\big{)}.$
(18)
Furthermore, we improve upon the algorithm used in [32] by implementing a
critic, which estimates the advantage of a given state-action pair by
subtracting a baseline from the reward in the policy gradient. That is, in the REINFORCE
algorithm, the gradient is estimated as
$\nabla_{\boldsymbol{\theta}_{1}}J(\boldsymbol{\theta}_{1})=\nabla_{\boldsymbol{\theta}_{1}}\log\pi_{1}(a_{1}^{(1)}|o^{(1)}_{1};\boldsymbol{\theta}_{1})r^{(1)}\;.$
(19)
It is shown in [33] that by subtracting a baseline $b(o_{1}^{(1)})$, the
variance of the gradient $\nabla_{\boldsymbol{\theta}}J(\boldsymbol{\theta})$
can be greatly reduced. Herein, we use the value of the state, defined by Eqn.
(1), as the baseline; in this problem, however, all trajectories have length 1. Therefore,
the value function can be simplified to
$b(o_{1}^{(1)})=v_{\pi_{1}}(o_{1}^{(1)})=\mathbb{E}_{\pi_{1}}\big{[}r^{(1)}|o_{1}^{(1)}\big{]}.$
(20)
The gradient of the policy with respect to the expected return
$J(\boldsymbol{\theta}_{1})$ is then
$\nabla_{\boldsymbol{\theta}_{1}}J(\boldsymbol{\theta}_{1})=\nabla_{\boldsymbol{\theta}_{1}}\log\pi_{1}(a_{1}^{(1)}|o_{1}^{(1)};\boldsymbol{\theta}_{1})(r^{(1)}-v_{\pi_{1}}(o_{1}^{(1)})).$
(21)
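To make the baseline-subtracted gradient of Eqn. (21) concrete, the sketch below computes it for a toy categorical softmax policy parameterized directly by its logits (our own simplification; the paper parameterizes $\pi_{1}$ with a DNN):

```python
import numpy as np

def reinforce_grad(logits, action, reward, baseline):
    """Eqn. (21) for a softmax policy over the logits: the score function is
    grad_logits log pi(a) = onehot(a) - pi, scaled by the advantage (r - b)."""
    z = logits - logits.max()            # numerically stable softmax
    pi = np.exp(z) / np.exp(z).sum()
    score = -pi
    score[action] += 1.0                 # onehot(action) - pi
    return (reward - baseline) * score
```

Note that when the baseline equals the reward, the gradient vanishes, which is exactly the variance-reduction effect described above.
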
In practice, to estimate $v_{\pi_{1}}(o_{1}^{(1)})$, we use a weighted moving
average of the reward collected for a given state
$o_{1}^{(1)}\in\mathcal{O}_{1}$ in
$\mathcal{B}_{1}(o_{1}^{(1)})=\\{(o,a)\in\mathcal{B}_{1}|o=o_{1}^{(1)}\\}$ for
the batch of trajectories $\mathcal{B}_{1}$:
$v_{\pi_{1}}(o_{1}^{(1)})\leftarrow(1-\alpha)v_{\pi_{1}}(o_{1}^{(1)})+\frac{\alpha}{|\mathcal{B}_{1}(o_{1}^{(1)})|}\\!\\!\sum_{(o,a)\in\mathcal{B}_{1}(o_{1}^{(1)})}\\!\\!r^{(1)}(o,a),$
(22)
where $\alpha$ is the weight of the average and $v_{\pi_{1}}(o_{1}^{(1)})$ is
initialized with zeros. We use $\alpha=0.01$ in our experiments. The algorithm
for solving the joint channel coding and modulation problem is shown in
Algorithm 3. The numerical results and comparison with alternative designs are
presented in the next section.
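The update in Eqn. (22) maintains, per observed state, an exponentially weighted average of the batch-mean reward. A dictionary-backed sketch (names are ours):

```python
def update_baseline(v, batch, alpha=0.01):
    """Eqn. (22): v(o) <- (1 - alpha) * v(o) + alpha * (mean reward of the
    trajectories in the batch starting from observation o).
    `batch` is a list of (observation, reward) pairs; v is a dict that
    defaults to 0 for unseen observations."""
    by_obs = {}
    for o, r in batch:
        by_obs.setdefault(o, []).append(r)
    for o, rewards in by_obs.items():
        mean_r = sum(rewards) / len(rewards)
        v[o] = (1 - alpha) * v.get(o, 0.0) + alpha * mean_r
    return v
```
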
## VI Numerical Results
TABLE I: DNN architecture and hyperparameters used.
$Q_{\boldsymbol{\theta}_{i}}$ | $\mu_{\boldsymbol{\psi}}$ | Hyperparameters
---|---|---
Linear: 64 | Linear: 64 | $\gamma=0.99$
ReLU | ReLU | $\epsilon_{0}=0.9$
Linear: 64 | Linear: 64 | $\epsilon_{\text{end}}=0.05$
ReLU | ReLU | $\lambda=1000$
Linear: $\begin{cases}|\mathcal{A}_{i}|,~{}&\text{if DQN},\\\ 1,~{}&\text{if DDPG}\end{cases}$ | Linear: dim$(\mathcal{A}_{i})$ | $\tau=0.005$
We first define the DNN architecture used for all the experiments in this
section. For all networks, the inputs are processed by three fully connected
layers, with rectified linear unit (ReLU) activations in between. The
weights of the layers are initialized using Gaussian initialization with mean
0 and standard deviation $0.01$. We store $100K$ experience samples in the
replay buffer ($|\mathcal{R}_{i}|=100K$), and sample batches of size $128$ for
training. We train every experiment for $500K$ episodes. The function used for
$\epsilon$-greedy exploration is
$\epsilon=\epsilon_{\text{end}}+(\epsilon_{0}-\epsilon_{\text{end}})e^{\big{(}-\frac{\text{episode}}{\lambda}\big{)}}$
(23)
where $\lambda$ controls the decay rate of $\epsilon$. We use the ADAM
optimizer [45] with learning rate $0.001$ for all the experiments. The network
architectures and the hyperparameters chosen are summarized in Table I. We
consider $\text{SNR}\in[0,23]$ dB for the AWGN channel. For the BN channel, we
use the same SNR range as the AWGN channel for the low noise state and set
$\sigma_{b}=2$ for the high noise state. We consider $p_{b}\in\\{0.1,0.2\\}$
to see the effect of changing the high noise state probability.
Figure 5: Comparison of agents jointly trained to collaborate and communicate
over a BSC ($M=7$) with separate learning and communication using a (7,4)
Hamming code (HC/RC codeword assignments), for (a) $\delta=0$ and (b)
$\delta=0.05$. Both panels plot the average number of steps versus the bit-flip
probability $p_{e}$, together with the optimal actions transmitted with the
Hamming code and the optimal actions without noise.
Figure 6: Comparison of the agents jointly trained to collaborate and
communicate over an AWGN channel (BPSK and real-valued inputs, $M=7$) with
separate learning and communication using a (7,4) Hamming code (HC/RC), for
(a) $\delta=0$ and (b) $\delta=0.05$. Both panels plot the average number of
steps versus the channel SNR (dB), together with the optimal actions with the
Hamming code and the optimal actions without noise.
(a) Separate learning and communication (HC).
(b) Joint learning and communication.
Figure 7: Example visualization of the codewords used by the guide, and the
path taken by the scout for $M=7$ uses of a BSC with $p_{e}=0.2$ and
$\delta=0$. The origin is at the top left corner.
For the grid world problem, presented in Section IV, the scout and treasure
are placed uniformly at random at distinct locations upon initialization
(i.e., $p_{g}\neq p_{s}^{(0)}$). These locations are one-hot encoded to form a
$2L^{2}$-dimensional vector that constitutes the observation of the guide, $o_{1}^{(t)}$. We fix the
channel bandwidth to $M\in\\{7,10\\}$ and compare our solutions to a scheme that
separates the channel coding from the underlying MDP. That is, we first train
a RL agent that solves the grid world problem without communication
constraints. We then introduce a noisy communication channel and encode the
action chosen by the RL agent using a (7,4) Hamming code before transmission
across the channel. The received message is then decoded and the resultant
action is taken. We note that the (7,4) Hamming code is a perfect code that
encodes four data bits into seven channel bits by adding three parity bits;
thus, it can correct any single-bit error. The association between the 16
possible actions and codewords of 4 bits can be done by random permutation,
which we refer to as random codewords (RC), or hand-crafted (HC) association
by assigning adjacent codewords to similar actions, as shown in Fig. 2. By
associating adjacent codewords to similar actions, the scout will take a
similar action to the one intended even if there is a decoding error, assuming
the number of bit errors is not too high. Lastly, we compute the optimal
solution, where the steps taken form the shortest path to the treasure, and
use a Hamming (7,4) channel code to transmit those actions. This is referred
to as “Optimal actions with Hamming Code” and acts as a lower bound for the
separation-based results.
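As a concrete illustration of the separation-based baseline, the (7,4) Hamming encoder/decoder described above can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation; the bit ordering (p1 p2 d1 p3 d2 d3 d4) and parity equations follow the standard Hamming(7,4) construction, and the BSC helper is our own addition:

```python
import random

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions p1 p2 d1 p3 d2 d3 d4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct a single bit error via the syndrome, then extract the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-indexed position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

def bsc(codeword, p_e, rng):
    """Pass a codeword through a binary symmetric channel with flip probability p_e."""
    return [b ^ (rng.random() < p_e) for b in codeword]

rng = random.Random(0)
data = [1, 0, 1, 1]
received = bsc(hamming74_encode(data), p_e=0.1, rng=rng)
decoded = hamming74_decode(received)  # recovers `data` whenever at most one bit flipped
```

Under the HC association, each decoded 4-bit message is then mapped to one of the 16 actions, with adjacent codewords assigned to similar actions.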
(a) $\delta=0$, $p_{b}=0.1$. (b) $\delta=0.05$, $p_{b}=0.1$. (c) $\delta=0$,
$p_{b}=0.2$. (d) $\delta=0.05$, $p_{b}=0.2$. Each panel plots the average
number of steps against the SNR (dB) for joint learning and communication
(BPSK and Real, $M=7$), separate learning and communication (HC and RC), and
optimal actions with Hamming code (HC and RC).
Figure 8: Comparison of the agents jointly trained to collaborate and
communicate over a BN channel to separate learning and communication with a
(7,4) Hamming code.
For the joint channel coding-modulation problem, we again compare the DDPG and
actor-critic results with a (7,4) Hamming code using BPSK modulation. The
source bit sequence is uniformly randomly chosen from the set $\{0,1\}^{M}$
and one-hot encoded to form the input state $o_{1}^{(1)}$ of the transmitter.
We also compare with the algorithm derived in [32], which uses supervised
learning for the receiver and the REINFORCE policy gradient to estimate the
gradient of the transmitter.
We first present the results for the guided robot problem. Fig. 5 shows the
number of steps, averaged over 10K episodes, needed by the scout to reach the
treasure for the BSC case with $\delta\in\{0,0.05\}$. The “optimal actions
without noise” refers to the minimum number of steps required to reach the
treasure assuming a perfect communication channel and acts as the lower bound
for all the experiments. It is clear that jointly learning to communicate and
collaborate over a noisy channel outperforms the separation-based results with
both RC and HC. In Fig. 7, we provide an illustration of the actions taken by
the agent after some errors over the communication channel with the separate
learning and communication scheme (HC) and with the proposed joint learning
and communication approach. It can be seen that at step 2 the proposed scheme
takes a similar action $(-1,-1)$ to the optimal one $(-2,0)$ despite
experiencing 2 bit errors, and again in step 3 despite experiencing 3 bit
errors (Fig. 7b). On the other hand, in the separate learning and communication
scheme with a (7,4) Hamming code and HC association of actions, the scout
decodes a very different action from the optimal one in step 2 which results
in an additional step being taken. However, it was able to take a similar
action to the optimal one in step 4 despite experiencing 2 bit errors. This
shows that although hand crafting codeword assignments can lead to some
performance benefits in the separate learning and communication scheme, which
was also suggested by Fig. 5, joint learning and communication leads to more
robust codeword assignments that give much more consistent results. Indeed, we
have also observed that the codeword-to-action mapping at the scout can be
highly asymmetric for the learned scheme, unlike in the separation-based
scheme, where each message corresponds to a single action, or equivalently,
there are 8 different channel output vectors for which the same action is taken.
Moreover, neither the joint learning and communication results nor the
separation-based results achieve the performance of the optimal solution with
Hamming code. The gap between the optimal solution with Hamming code and the
results obtained by the guide/scout formulation is due to the DQN
architectures’ limited capability to learn the optimal solution and the
challenge of learning under noisy environments. Comparing Figs. 5a and 5b, the
performance degradation of the separation-based scheme is slightly greater
than that of the joint framework. This is because the joint
learning and communication approach is better at adjusting its policy and
communication strategy to mitigate the effect of the channel noise than
employing a standard channel code.
Figure 9: Convergence of each channel scenario for the grid world problem
without noise ($M=7,~{}\delta=0$). The plot shows the number of steps per
episode for BPSK over a BSC ($P_{e}=0.05$), Real over an AWGN channel ($10$
dB), and BPSK over an AWGN channel ($10$ dB).
Figure 10: Impact of the channel bandwidth $M\in\{7,10\}$ on the performance
for an AWGN channel ($\delta=0$). The plot shows the average number of steps
against the SNR (dB) for joint learning and communication with BPSK and Real
signalling at $M=7$ and $M=10$.
Similarly, in the AWGN case in Fig. 6, the results from joint learning and
communication clearly outperform those obtained via separate learning and
communication. Here, the “Real” results refer to the guide agent with
$\mathcal{A}_{1}=\mathbb{R}^{M}$, while the “BPSK” results refer to the guide
agent with $\mathcal{A}_{1}=\{-1,+1\}^{M}$. The “Real” results here clearly
outperform all other schemes considered. The relaxation of the channel
constellation to all real values within a power constraint allows the guide to
convey more information than a binary constellation can achieve. We also
observe that the gain from this relaxation is higher at lower SNR values for
both $\delta$ values. This is in contrast to the gap between the channel
capacities achieved with Gaussian and binary inputs in an AWGN channel, which
is negligible at low SNR values and increases with SNR. This shows that
channel capacity is not the right metric for this problem, and even when two
channels are similar in terms of capacity, they can give very different
performances in terms of the discounted sum reward when used in the MARL
context.
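The capacity gap between Gaussian and binary inputs mentioned above can be checked numerically. The following sketch is our own illustration (not code from the paper): it Monte Carlo estimates the mutual information of a real AWGN channel with BPSK inputs and compares it against the Gaussian-input capacity $\frac{1}{2}\log_{2}(1+\mathrm{SNR})$:

```python
import numpy as np

def gaussian_capacity(snr):
    """Capacity of a real AWGN channel with Gaussian input, in bits per use."""
    return 0.5 * np.log2(1.0 + snr)

def bpsk_mutual_info(snr, n=200_000, seed=0):
    """Monte Carlo estimate of I(X;Y) for X uniform on {-1,+1}, Y = a*X + N(0,1).

    Uses I = 1 - E[log2(1 + exp(-2*a*X*Y))] with a = sqrt(snr), which follows
    from the posterior P(X=x|y) = 1 / (1 + exp(-2*a*x*y)).
    """
    rng = np.random.default_rng(seed)
    a = np.sqrt(snr)
    x = rng.choice([-1.0, 1.0], size=n)
    y = a * x + rng.standard_normal(n)
    # logaddexp(0, z) = log(1 + e^z), computed stably; divide by ln(2) for bits
    return 1.0 - np.mean(np.logaddexp(0.0, -2.0 * a * x * y)) / np.log(2.0)

# At low SNR the two are nearly equal; at high SNR BPSK saturates at 1 bit/use
# while the Gaussian-input capacity keeps growing.
low = (bpsk_mutual_info(0.1), gaussian_capacity(0.1))
high = (bpsk_mutual_info(10.0), gaussian_capacity(10.0))
```

This reproduces the textbook behavior referred to in the text: the capacity gap is negligible at low SNR and grows with SNR, which is why the observed reward gap at low SNR cannot be explained by capacity alone.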
Figure 11: BLER performance of different modulation and coding schemes over an
AWGN channel, plotting the BLER against the SNR (dB) for Hamming, DDPG,
REINFORCE, and Actor-Critic.
Figure 12: Convergence behavior for the joint channel coding and modulation
problem in an AWGN channel, plotting the BLER per episode for DDPG, REINFORCE,
and Actor-Critic.
In the BN channel case (Fig. 8), observations similar to the AWGN case can be
made. The biggest difference is that we see a larger performance
improvement over the separation case when using our proposed framework than in
the AWGN case. This is particularly obvious when using BPSK modulation, where
the gap between the BPSK results for the joint learning and communication
scheme and those from the separate learning and communication is larger
compared to the AWGN channel case. This shows that in this more challenging
channel scenario, the proposed framework is better able to adjust jointly the
policy and the communication scheme to meet the conditions of the channel. It
also again highlights the fact that the Shannon capacity is not the most
important metric for this problem: the expected SNR is not significantly
reduced by the burst noise, yet we observe an even more pronounced improvement
using the proposed schemes over the separation schemes.
In Figs. 5, 6 and 8, it can be seen that when the grid world itself is noisy
(i.e., $\delta>0$), the agents are still able to collaborate, albeit at the
cost of higher average steps required to reach the treasure. The convergence
of the number of steps used to reach the treasure for each channel scenario is
shown in Fig. 9. The slow convergence for the BSC channel indicates the
difficulty of learning a binary code for this channel. We also study the
effect of the bandwidth $M$ on the performance. In Fig. 10, we present the
average number of steps required for channel bandwidths $M=7$ and $M=10$. As
expected, increasing the channel bandwidth reduces the average number of steps
for the scout to reach the treasure. The gain is particularly significant for
BPSK at the low SNR regime as the guide is better able to protect the
information conveyed against the channel noise thanks to the increased
bandwidth.
Next, we present the results for the joint channel coding and modulation
problem. Fig. 11 shows the BLER performance obtained by BPSK modulation and
Hamming (7,4) code, our DDPG transmitter described in Section V, the one
proposed by [32], and the proposed approach using an additional critic,
labeled as “Hamming (7,4)”, “DDPG”, “REINFORCE”, and “Actor-Critic”,
respectively. It can be seen that the learning approaches (DDPG, REINFORCE and
Actor-Critic) perform better than the Hamming (7,4) code. Additionally,
stochastic policy algorithms (REINFORCE and Actor-Critic) perform better than
DDPG. This is likely due to the limitations of DDPG, as in practice, criterion
1) of Theorem 1 is often not satisfied. Lastly, we show that we can improve
upon the algorithm proposed in [32] by adding an additional critic that
reduces the variance of the policy gradients; and therefore, learns a better
policy. The results obtained by the actor-critic algorithm are superior to
those from the REINFORCE algorithm, especially in the higher SNR regime. On
average, the learning-based results are better than the Hamming (7,4)
performance by $1.24$, $2.58$ and $3.70$ dB for DDPG, REINFORCE and Actor-
Critic, respectively.
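The variance-reduction effect of adding a critic can be illustrated on a toy problem. The sketch below is our own, unrelated to the paper's networks: for a fixed two-action softmax policy, subtracting a baseline (here a crude stand-in for a critic: the average return) from the reward leaves the expected REINFORCE gradient unchanged while shrinking its variance.

```python
import numpy as np

def reinforce_grad_samples(use_baseline, n=50_000, seed=0):
    """Per-sample REINFORCE gradients d/dtheta E[R] for a 2-action softmax policy.

    Policy: pi(a=1) = sigmoid(theta) with theta = 0, so both actions have
    probability 0.5; the score function is d log pi(a)/dtheta = a - pi(a=1).
    Rewards: mean 1.0 for action 0, mean 2.0 for action 1, plus Gaussian noise.
    """
    rng = np.random.default_rng(seed)
    p1 = 0.5                      # sigmoid(0)
    a = rng.choice(2, size=n)     # actions sampled from the policy
    r = np.where(a == 1, 2.0, 1.0) + 0.5 * rng.standard_normal(n)
    baseline = r.mean() if use_baseline else 0.0
    return (a - p1) * (r - baseline)

g_plain = reinforce_grad_samples(use_baseline=False)
g_base = reinforce_grad_samples(use_baseline=True)
# Both estimate the same gradient (0.25 here), but the baseline cuts the variance.
```

For this toy setup the analytic variances are 0.625 without the baseline and 0.0625 with it, a tenfold reduction; a learned critic plays the same role for the state-dependent baselines used in the actor-critic algorithm.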
(a) $p_{b}=0.1$. (b) $p_{b}=0.2$. Each panel plots the BLER against the SNR
(dB) for Hamming, DDPG, REINFORCE, and Actor-Critic.
Figure 13: Comparison of the agents jointly trained to collaborate and
communicate over a BN channel to separate learning and communication with a
(7,4) Hamming code.
When considering the BN channel case, as shown in Fig. 13, while the BLER
increases due to the increased noise for all the schemes, we still see
improved performance with the learning algorithms. Fig. 12 shows the
convergence behavior of different learning algorithms for 5dB channel SNR. We
can see that the actor-critic algorithm converges the quickest and achieves
the lowest BLER, while REINFORCE converges the slowest but achieves lower BLER
than DDPG at the end of training. This is in accordance with the BLER
performance observed in Fig. 11. We reiterate that the joint channel coding
and modulation problem studied from the perspective of supervised learning in
[32] is indeed a special case of the joint learning and communication
framework we presented in Section III from a MARL perspective, and can be
solved using a myriad of algorithms from the RL literature.
Lastly, we note that due to the simplicity of our network architecture, the
computational complexity of our models is not significantly higher than that
of the separation-based results we present herein. The average computation
time for encoding and decoding using our proposed DRL solution is
approximately $323~\mu$s, compared to $286~\mu$s for the separate learning and
communication case with a Hamming (7,4) code, using an Intel Core i9
processor. This corresponds to roughly a 13% increase in computation time,
which is modest considering the
performance gains observed in both the guided robot problem and the joint
channel coding and modulation problem.
###### Remark 1
We note that both the grid world problem and the channel coding and modulation
problems are POMDPs. Therefore, recurrent neural networks (RNNs), such as
long short-term memory (LSTM) [46] networks, should provide performance
improvements, as their cell states can act as a belief state. However, in our
initial simulations, we were not able to observe such improvements, although
this is likely due to the limitations of our architectures.
###### Remark 2
Even though we have only considered the channel modulation and coding problem
in this paper due to lack of space, our framework can also be reduced to the
source coding and joint source-channel coding problems by changing the reward
function. If we consider an error-free channel with binary inputs and outputs,
and let the reward depend on the average distortion between the $B$-length
source sequence observed by agent 1 and its reconstruction generated by agent
2 as its action, we recover the lossy source coding problem, where the
length-$B$ sequence is compressed into $M$ bits. If we instead consider a
noisy channel in between the two agents, we recover the joint source-channel
coding problem with an unknown channel model.
## VII Conclusion
In this paper, we have proposed a comprehensive framework that jointly
considers the learning and communication problems in collaborative MARL over
noisy channels. Specifically, we consider a MA-POMDP where agents can exchange
messages with each other over a noisy channel in order to improve the shared
total long-term average reward. By considering the noisy channel as part of
the environment dynamics and the message each agent sends as part of its
action, the agents not only learn to collaborate with each other via
communications but also learn to communicate “effectively”. This corresponds
to “level C” of Shannon and Weaver’s organization of the communication
problems in [2], which seeks to answer the question “How effectively does the
received meaning affect conduct in the desired way?”. We show that by jointly
considering learning and communications in this framework, the learned joint
policy of all the agents is superior to that obtained by treating the
communication and the underlying MARL problem separately. We emphasize that
the latter is the conventional approach when the MARL solutions obtained in
the machine learning literature assume error-free communication links are
employed in practice when autonomous vehicles or robots communicate over noisy
wireless links to achieve the desired coordination and cooperation. We
demonstrate via numerical examples that the policies learned from our joint
approach produce higher average rewards than those where separate learning and
communication is employed. We also show that the proposed framework is a
generalization of most of the communication problems that have been
traditionally studied in the literature, corresponding to “level A” as
described by Shannon and Weaver. This formulation opens the door to employing
available numerical MARL techniques, such as the actor-critic framework, for
the design of channel modulation and coding schemes for communication over
unknown channels. We believe this is a very powerful framework, which has many
real world applications, and can greatly benefit from the fast developing
algorithms in the MARL literature to design novel communication codes and
protocols, particularly with the goal of enabling collaboration and
cooperation among distributed agents.
## References
* [1] J. P. Roig and D. Gündüz, “Remote reinforcement learning over a noisy channel,” in Proc. of IEEE GLOBECOM, 2020.
* [2] C. Shannon and W. Weaver, The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press, 1949.
* [3] M. Tomasello, Origins of Human Communication. Cambridge, Mass.: The MIT Press, reprint edition ed., Aug. 2010.
* [4] D. H. Ackley and M. Littmann, “Altruism in the evolution of communication,” in Fourth Int’l Workshop on the Synthesis and Simulation of Living Systems (Artificial Life IV), pp. 40–48, Cambridge, MA: MIT Press, 1994.
* [5] K.-C. Jim and L. Giles, “How communication can improve the performance of multi-agent systems,” in Fifth Int’l Conf. on Autonomous Agents, AGENTS ’01, (New York, NY), p. 584–591, 2001.
* [6] B. Güler, A. Yener, and A. Swami, “The semantic communication game,” IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 4, pp. 787–802, 2018.
* [7] P. Popovski, O. Simeone, F. Boccardi, D. Gunduz, and O. Sahin, “Semantic-effectiveness filtering and control for post-5G wireless connectivity,” Journal of Indian Inst of Sciences, 2020.
* [8] M. Kountouris and N. Pappas, “Semantics-empowered communication for networked intelligent systems,” arXiv cs.IT:2007.11579, 2020.
* [9] H. Xie, Z. Qin, G. Y. Li, and B.-H. Juang, “Deep learning enabled semantic communication systems,” arXiv eess.SP:2006.10685, 2020.
* [10] E. C. Strinati and S. Barbarossa, “6G networks: Beyond Shannon towards semantic and goal-oriented communications,” arXiv cs.NI:2011.14844, 2020.
* [11] E. Bourtsoulatze, D. B. Kurka, and D. Gunduz, “Deep Joint Source-Channel Coding for Wireless Image Transmission,” arXiv:1809.01733 [cs, eess, math, stat], Sept. 2018. arXiv: 1809.01733.
* [12] Z. Weng, Z. Qin, and G. Y. Li, “Semantic communications for speech signals,” arXiv eess.AS:2012.05369, 2020.
* [13] S. Sreekumar and D. Gündüz, “Distributed Hypothesis Testing Over Discrete Memoryless Channels,” IEEE Transactions on Information Theory, vol. 66, pp. 2044–2066, Apr. 2020. Conference Name: IEEE Transactions on Information Theory.
* [14] M. Jankowski, D. Gündüz, and K. Mikolajczyk, “Wireless Image Retrieval at the Edge,” IEEE Journal on Selected Areas in Communications, vol. 39, no. 1, pp. 89–100, 2021.
* [15] D. Gunduz, D. B. Kurka, M. Jankowski, M. M. Amiri, E. Ozfatura, and S. Sreekumar, “Communicate to Learn at the Edge,” IEEE Communications Magazine, 2021.
* [16] M. Lanctot, V. Zambaldi, A. Gruslys, A. Lazaridou, K. Tuyls, J. Perolat, D. Silver, and T. Graepel, “A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning,” arXiv:1711.00832 [cs], Nov. 2017. arXiv: 1711.00832.
* [17] T. Balch and R. Arkin, “Communication in reactive multiagent robotic systems,” Autonomous Robots, pp. 27–52, 1994.
* [18] J. N. Foerster, Y. M. Assael, N. de Freitas, and S. Whiteson, “Learning to Communicate with Deep Multi-Agent Reinforcement Learning,” arXiv:1605.06676 [cs], May 2016. arXiv: 1605.06676.
* [19] J. Jiang and Z. Lu, “Learning attentional communication for multi-agent cooperation,” in Proceedings of the 32nd International Conference on Neural Information Processing Systems, p. 7265–7275, 2018.
* [20] N. Jaques, A. Lazaridou, E. Hughes, C. Gulcehre, P. Ortega, D. Strouse, J. Leibo, and N. deFreitas, “Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning,” arXiv:1810.08647 [cs, stat], June 2019.
* [21] A. Das, T. Gervet, J. Romoff, D. Batra, D. Parikh, M. Rabbat, and J. Pineau, “TarMAC: Targeted multi-agent communication,” in 36th Int’l Conf. on Machine Learning, (Long Beach, California), pp. 1538–1546, PMLR, Jun. 2019.
* [22] J. Wang, J. Liu, and N. Kato, “Networking and Communications in Autonomous Driving: A Survey,” IEEE Communications Surveys Tutorials, vol. 21, no. 2, pp. 1243–1274, 2019. Conference Name: IEEE Communications Surveys Tutorials.
* [23] M. Campion, P. Ranganathan, and S. Faruque, “UAV swarm communication and control architectures: a review,” Journal of Unmanned Vehicle Systems, Nov. 2018. Publisher: NRC Research Press.
* [24] A. Lazaridou, A. Peysakhovich, and M. Baroni, “Multi-Agent Cooperation and the Emergence of (Natural) Language,” arXiv:1612.07182 [cs], Mar. 2017. arXiv: 1612.07182.
* [25] A. Lazaridou, A. Potapenko, and O. Tieleman, “Multi-agent communication meets natural language: Synergies between functional and structural language learning,” 2020.
* [26] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, “Human-level control through deep reinforcement learning,” Nature, vol. 518, pp. 529–533, Feb. 2015.
* [27] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” arXiv:1509.02971 [cs, stat], July 2019. arXiv: 1509.02971.
* [28] E. Nachmani, E. Marciano, L. Lugosch, W. J. Gross, D. Burshtein, and Y. Be’ery, “Deep learning methods for improved decoding of linear codes,” IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 119–131, 2018.
* [29] S. Dorner, S. Cammerer, J. Hoydis, and S. ten Brink, “On deep learning-based communication over the air,” in 2017 51st Asilomar Conference on Signals, Systems, and Computers, pp. 1791–1795, 2017.
* [30] A. Felix, S. Cammerer, S. Dörner, J. Hoydis, and S. Ten Brink, “OFDM-autoencoder for end-to-end learning of communications systems,” in IEEE Int’l Workshop on Signal Proc. Advances in Wireless Comms., pp. 1–5, 2018.
* [31] D. B. Kurka and D. Gündüz, “Deepjscc-f: Deep joint source-channel coding of images with feedback,” IEEE Journal on Selected Areas in Information Theory, vol. 1, no. 1, pp. 178–193, 2020.
* [32] F. A. Aoudia and J. Hoydis, “Model-Free Training of End-to-End Communication Systems,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 11, pp. 2503–2516, 2019.
* [33] V. Konda and J. Tsitsiklis, “Actor-critic algorithms,” in Advances in Neural Information Processing Systems (S. Solla, T. Leen, and K. Müller, eds.), vol. 12, pp. 1008–1014, MIT Press, 2000.
* [34] K. Wagner, J. A. Reggia, J. Uriagereka, and G. S. Wilkinson, “Progress in the Simulation of Emergent Communication and Language:,” Adaptive Behavior, July 2016. Publisher: SAGE Publications.
* [35] S. Sukhbaatar, A. Szlam, and R. Fergus, “Learning multiagent communication with backpropagation,” in Proc. of 30th Int’l Conf. on Neural Information Proc. Systems, NIPS’16, (Red Hook, NY), pp. 2252–2260, Dec. 2016.
* [36] S. Havrylov and I. Titov, “Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols,” p. 11, 2017.
* [37] R. E. Wang, M. Everett, and J. P. How, “R-MADDPG for Partially Observable Environments and Limited Communication,” arXiv:2002.06684 [cs], Feb. 2020. arXiv: 2002.06684.
* [38] R. Lowe, Y. WU, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch, “Multi-agent actor-critic for mixed cooperative-competitive environments,” in Advances in Neural Information Processing Systems (I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, eds.), vol. 30, pp. 6379–6390, Curran Associates, Inc., 2017.
* [39] P. Peng, Y. Wen, Y. Yang, Q. Yuan, Z. Tang, H. Long, and J. Wang, “Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games,” arXiv:1703.10069 [cs], Sept. 2017. arXiv: 1703.10069.
* [40] A. Mostaani, O. Simeone, S. Chatzinotas, and B. Ottersten, “Learning-based Physical Layer Communications for Multiagent Collaboration,” in IEEE Int’l Symp. on Personal, Indoor and Mobile Radio Comms. (PIMRC), pp. 1–6, Sept. 2019.
* [41] T. Tung and D. Gündüz, “SparseCast: Hybrid Digital-Analog Wireless Image Transmission Exploiting Frequency Domain Sparsity,” IEEE Communications Letters, pp. 1–1, 2018.
* [42] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller, “Deterministic Policy Gradient Algorithms,” p. 9, 2014.
* [43] G. E. Uhlenbeck and L. S. Ornstein, “On the Theory of the Brownian Motion,” Physical Review, vol. 36, pp. 823–841, Sept. 1930.
* [44] R. J. Williams, “Simple statistical gradient-following algorithms for connectionist reinforcement learning,” Machine Learning, vol. 8, pp. 229–256, May 1992.
* [45] D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv:1412.6980 [cs], Jan. 2017. arXiv: 1412.6980.
* [46] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, p. 1735–1780, Nov. 1997.
# Redshift Evolution of the H2/HI Mass Ratio In Galaxies
Laura Morselli1,2, A. Renzini2, A. Enia3,4, G. Rodighiero1,2
1 Dipartimento di Fisica e Astronomia, Università di Padova, vicolo
dell’Osservatorio 3, I-35122 Padova, Italy
2 INAF $-$ Osservatorio Astrofisico di Padova, vicolo dell’Osservatorio 5,
I-35122 Padova, Italy
3 Dipartimento di Fisica e Astronomia, Università di Bologna, Via Gobetti
93/2, I-40129, Bologna, Italy
4 INAF - Osservatorio di Astrofisica e Scienza dello Spazio, Via Gobetti 93/3,
I-40129, Bologna, Italy E-mail<EMAIL_ADDRESS>
(Accepted 2021 January 18. Received 2021 January 12; in original form 2020
December 11)
###### Abstract
In this paper we present an attempt to estimate the redshift evolution of the
molecular to neutral gas mass ratio within galaxies (at fixed stellar mass).
For a sample of five nearby grand design spirals located on the Main Sequence
(MS) of star forming galaxies, we exploit maps at 500 pc resolution of stellar
mass and star formation rate ($M_{\star}$ and SFR). For the same cells, we
also have estimates of the neutral ($M_{\rm HI}$) and molecular ($M_{\rm
H_{2}}$) gas masses. To compute the redshift evolution we exploit two
relations: i) one between the molecular-to-neutral mass ratio and the total
gas mass ($M_{\rm gas}$), whose scatter shows a strong dependence on the
distance from the spatially resolved MS, and ii) that between
$\log(M_{\rm{H_{2}}}/M_{\star})$ and $\log(M_{\rm{HI}}/M_{\star})$. For both
methods, we find that $M_{\rm H_{2}}$/$M_{\rm HI}$ within the optical radius
slightly decreases with redshift, contrary to common expectations of galaxies
becoming progressively more dominated by molecular hydrogen at high redshifts.
We discuss possible implications of this trend on our understanding of the
internal working of high redshift galaxies.
###### keywords:
galaxies: evolution – galaxies: star formation – galaxies: spirals
††pubyear: 2021††pagerange: Redshift Evolution of the H2/HI Mass Ratio In
Galaxies–References
## 1 Introduction
Our understanding of galaxy formation and evolution is strictly connected to
the accretion of cold gas onto galaxies across cosmic time: this gas coming
from the cosmic web cools down to form atomic hydrogen (HI) first, and then
molecular hydrogen (H2), which can eventually collapse under gravitational
instability to form new stars. Feedback from star formation also plays a
crucial role, as it is a necessary ingredient to ensure a low efficiency of
the star formation process itself: without feedback the gas in a galaxy would
be consumed almost completely over a free-fall time, turning most baryons into
stars, as opposed to the $\sim 10$ per cent of baryons being locked into stars
as actually observed in the local Universe (e.g. Bigiel et al., 2008; Krumholz
et al., 2012; Hayward & Hopkins, 2017). Feedback from star formation includes
photo-dissociation of H2 into HI due to the radiation emitted by young stars
(e.g. Allen et al., 2004; Sternberg et al., 2014). Therefore, HI is not only
an intermediate gas phase towards star formation, but also one of its
products, and it is key in establishing the self-regulating nature of the star
formation process. Unfortunately, to date our knowledge of the HI content in
individual galaxies has been restricted to the low-redshift Universe, where HI is
detected in emission via the 21cm line. Several surveys have targeted HI in
galaxies at $z<0.05$: HIPASS (Meyer et al., 2004), ALFALFA (Giovanelli et al.,
2005), xGASS (Catinella et al., 2018), HI-MaNGA (Masters et al., 2019). At
higher redshift, the HIGHz survey (Catinella & Cortese, 2015) targeted the HI
emission of massive galaxies at $z\sim 0.2$, while the CHILES survey pushed
the limit of individual detections up to $z\sim 0.4$ (Fernández et al., 2016).
At even higher redshift our knowledge of HI content is entirely obtained by
stacking analysis: Kanekar et al. (2016) at $z\sim 1.3$ and Chowdhury et al.
(2020, C20 hereafter) at $z\sim 1$. Damped Ly$\alpha$ or MgII absorption line
systems give us the chance to estimate the HI content at $z\gtrsim$1.5, with
the caveat that they trace HI located well outside the optical disk of
galaxies, hence revealing little about what is going on inside their star-
forming body.
Recently, in Morselli et al. (2020, M20 hereafter) we analyzed the HI and H2
content of five nearby, grand-design, massive main sequence (MS) galaxies on
scales of $\sim 500$pc and linked the availability of molecular and neutral
hydrogen to the star formation rate (SFR) of each region. We found that H2/HI
increases with gas surface density, and at fixed total gas surface density it
decreases (increases) for regions with a higher (lower) specific star
formation rate (sSFR). In this paper we exploit tight correlations to estimate
the evolution with redshift of the H2/HI mass ratio within galaxies. It is
generally assumed that this ratio increases with redshift, because galaxies
are more gas rich and as the gas surface density increases, recombination is
favored. However, galaxies at high redshift are also more star forming, and
higher levels of star formation favor photo-dissociation of the H2 molecule,
hence it is not a priori obvious which trend would dominate over the other.
## 2 Data: M∗, SFR, H2 and HI at 500 pc Resolution
The methodology to retrieve estimates of the stellar mass ($M_{\star}$), SFR,
HI mass ($M_{\rm HI}$) and H2 mass ($M_{\rm{H_{2}}}$) is detailed in Enia et
al. (2020) and M20. Briefly, starting from the DustPedia archive (Davies et
al., 2017; Clark et al., 2018) we built a sample of five nearby, face-on,
grand design spiral galaxies with stellar mass in the range
$10^{10.2-10.7}M_{\odot}$, that lie on the MS relation at $z=0$. These sources
have been observed in at least 18 bands from the far ultraviolet (FUV) to the
far infrared (FIR). We used the photometric data from FUV to FIR to run SED
fitting with MAGPHYS (da Cunha et al., 2008) on cells of 500pc$\times$500pc.
We obtained the SFR as the sum of the un-obscured (SFRUV) and obscured (SFRIR)
contributions. To this aim, SFRUV and SFRIR have been computed using the
scaling relations of Bell & Kennicutt (2001) and Kennicutt (1998),
respectively, where the UV and IR luminosities ($L_{\rm UV}$ and $L_{\rm IR}$)
are evaluated from the best-fit SED (see Enia et al., 2020). Finally, as these
sources are included in the HERACLES (Leroy et al., 2009) and THINGS (Walter
et al., 2008) surveys, they have been observed in CO(2-1) and HI at 21 cm.
Hereafter, we make use of the H2 estimated using $\alpha_{\rm CO}$ =
4.3$M_{\odot}$${\rm(K\cdot km\cdot s^{-1}pc^{2})^{-1}}$ (e.g. Bolatto et al.,
2013). Details on how the HI and H2 maps at 500pc resolution were obtained can
be found in M20, where the consistency of the results using a constant or
metallicity-dependent $\alpha_{\rm CO}$ is discussed.
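As a quick numerical illustration of this conversion (our own sketch, not from M20): since the HERACLES maps are in CO(2-1), a CO(2-1)/CO(1-0) line ratio is needed before applying $\alpha_{\rm CO}$; the default $R_{21}=0.7$ below is a commonly adopted value and is an assumption not quoted in this excerpt.

```python
def h2_mass(l_co21, alpha_co=4.3, r21=0.7):
    """H2 mass in solar masses from a CO(2-1) line luminosity.

    l_co21  : CO(2-1) line luminosity in K km/s pc^2
    alpha_co: CO(1-0)-to-H2 conversion factor, Msun (K km/s pc^2)^-1
    r21     : assumed CO(2-1)/CO(1-0) line ratio (illustrative default)
    """
    l_co10 = l_co21 / r21     # convert the line luminosity to CO(1-0)
    return alpha_co * l_co10  # apply the conversion factor

# e.g. a cell with L_CO(2-1) = 7e4 K km/s pc^2 gives 4.3e5 Msun of H2
m_h2 = h2_mass(7e4)
```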
## 3 the H2/HI mass ratio at high redshift
In this paper we exploit local correlations observed at 500pc resolution to
estimate the redshift evolution of the H2/HI ratio. An important caveat of
this procedure is the validity on galactic scales of correlations observed on
sub-galactic scales or, in other words, whether integrated quantities can be
estimated from spatially-resolved relations. In recent years several studies
have indeed revealed that the "main" correlations involved in the star
formation process, the MS of star forming galaxies and the molecular gas Main
Sequence (MGMS, e.g. Lin et al., 2019) have very similar slopes when analyzed
on sub-galactic or galactic scales (e.g. Hsieh et al., 2017; Lin et al., 2019;
Cano-Díaz et al., 2019; Enia et al., 2020).
Figure 1: Left panel: ${\rm log({H_{2}}/{HI})}$ - ${\rm log}\Sigma_{\rm gas}$
plane, adapted from Figure 8 of M20. Each cell is color-coded according to the
average value of $\Delta_{\rm MS}$. The blue solid line is the best fit to the
cells having an average value of $\Delta_{\rm{MS}}$ in the range [-0.2,0.2];
the slope of this best fit is $m_{1}$. The gray shaded area includes the
values for which we compute $\Delta_{\rm MS}$ as a function of H2/HI ratio, as
shown in the right panel: the slope of the best fit (blue solid line) gives us
$m_{2}$.
### 3.1 Method 1
To estimate the redshift evolution of $M_{\rm H_{2}}$/$M_{\rm HI}$ in MS
galaxies we proceed as follows. We define the variable $Y$ as the log of
$M_{\rm H_{2}}$/$M_{\rm HI}$ and express it as a function of the total gas
mass ($M_{\rm gas}=M_{\rm H_{2}}+M_{\rm HI}$) and the SFR:
$Y={\rm log}\frac{M_{\rm H_{2}}}{M_{\rm HI}}=f(M_{\rm gas},{\rm SFR}).$ (1)
It follows that:
${dY\over d{\rm log}(1+z)}=m_{1}{d{\rm log}M_{\rm gas}\over d{\rm
log}(1+z)}+m_{2}{d{\rm log(SFR)}\over d{\rm log}(1+z)},$ (2)
where:
${\partial Y\over\partial{\rm log}M_{\rm gas}}\simeq m_{1}\quad{\rm
and}\quad{\partial Y\over\partial{\rm log(SFR)}}\simeq m_{2},$ (3)
with $m_{1}$ describing the conversion of HI into H2 and $m_{2}$ the opposite
conversion from H2 to HI due to photo-dissociation. From Tacconi et al. (2018)
we have that, at fixed stellar mass,
${d{\rm log}M_{\rm H_{2}}\over d{\rm log}(1+z)}=2.6$ (4)
which refers only to $M_{\rm{H_{2}}}$, not to $M_{\rm gas}$. For the redshift
evolution of the SFR (at fixed stellar mass) we adopt the scaling from Speagle
et al. (2014):
${d{\rm log}{\rm SFR}\over d{\rm log}(1+z)}=3.5.$ (5)
Therefore, Equation (2) becomes:
${dY\over d{\rm log}(1+z)}=m_{1}{d{\rm log}M_{\rm gas}\over d{\rm
log}(1+z)}+3.5m_{2}.$ (6)
As a next step, we need to derive ${d{\rm log}M_{\rm gas}\over d{\rm
log}(1+z)}$. Since we have:
${\rm log}M_{\rm HI}={\rm log}M_{\rm H_{2}}-Y,$ (7)
then:
$M_{\rm gas}=M_{\rm H_{2}}\times(1+10^{-Y}),$ (8)
and the derivative becomes:
$\begin{split}{d{\rm log}M_{\rm gas}\over d{\rm log}(1+z)}={d{\rm log}M_{\rm
H_{2}}\over d{\rm log}(1+z)}+{d{\rm log}(1+10^{-Y})\over d{\rm log}(1+z)}=\\\
2.6-\left(1+{M_{\rm H_{2}}\over M_{\rm HI}}\right)^{-1}{dY\over d{\rm
log}(1+z)}\end{split}$ (9)
where the first derivative is given by Equation (4). Therefore, using Equation
(9), Equation (6) becomes:
$\begin{split}{dY\over d{\rm log}(1+z)}=-m_{1}\left(1+{M_{\rm H_{2}}\over
M_{\rm HI}}\right)^{-1}{dY\over d{\rm log}(1+z)}+2.6m_{1}+3.5m_{2}\end{split}$
(10)
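To make the step from Equation (10) to Equation (11) explicit: collecting the ${dY\over d{\rm log}(1+z)}$ terms on the left-hand side and substituting $M_{\rm H_{2}}/M_{\rm HI}=10^{Y}$ gives
$\left(1+{m_{1}\over 1+10^{Y}}\right){dY\over d{\rm log}(1+z)}=2.6m_{1}+3.5m_{2},$
whose differential form is exactly the integrand of Equation (11).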
Now we integrate the left and right sides of Equation (10) between $z=0$ and
$z$:
$\begin{split}&\int_{0}^{z}\left(1+{m_{1}\over
1+10^{Y}}\right)\,{dY}=(2.6m_{1}+3.5m_{2})\int_{0}^{\log(1+z)}\,{d{\rm
log(}1+z)}\end{split}$ (11)
By solving the integrals of the left and right sides of Equation (11) we get:
$\begin{split}&(1+m_{1})(Y_{z}-Y_{0})+m_{1}(\log(1+10^{Y_{0}})-\log(1+10^{Y_{z}}))\\\
&=(2.6m_{1}+3.5m_{2})\log(1+z),\end{split}$ (12)
where the subscript 0 ($z$) refers to the values at redshift 0 ($z$). Thus,
this equation is meant to describe the redshift evolution of the H2/HI mass
ratio at fixed stellar mass. To proceed with the numerical solution of
Equation (12), we need the values of $m_{1}$ and $m_{2}$ that we obtain from
Figure 8 of M20, reported here in the left panel of Figure 1. This Figure
shows how the ratio of molecular to atomic hydrogen varies as a function of
the total gas surface density and distance from the spatially resolved MS
relation, ${\rm\Delta_{MS}}$, which is defined as the difference between
log(SFR) of a region and its MS value at the same stellar mass. Inside
galaxies, the H$_{2}$/HI mass ratio is very strongly correlated with
the total gas surface density and anticorrelated with the local SFR, as
quantified by ${\rm\Delta_{MS}}$. In M20 we interpret this anticorrelation as
evidence that the UV radiation from recently formed, massive stars has the
effect of photo-dissociating molecular hydrogen, a manifestation of the self-
regulating nature of the star formation process.
We estimate $m_{1}$ by fitting the relation between ${\rm log({H_{2}}/{HI})}$
and ${\rm log}\Sigma_{\rm gas}$ along the MS ($\Delta_{\rm{MS}}\sim 0$): the
best fit returns a slope of 1.49 (blue solid line in the left panel of Figure
1). To estimate $m_{2}$, we calculate the slope of the ${\rm
log({H_{2}}/HI)}-\Delta_{\rm{MS}}$ relation at fixed ${\rm
log}\Sigma_{\rm gas}$, considering a narrow range of ${\rm log}\Sigma_{\rm
gas}$ values where data exist over the widest range of the ${\rm{H_{2}}/{HI}}$
mass ratio (the vertical grey region in the left panel), hence offering the
best possible estimate of this derivative. The best fit returns a slope of
$-1.55$ (right panel of Figure 1). We adopt these two derivatives as proxies
for $m_{1}$ and $m_{2}$ as defined by Equations (3), based on the
aforementioned similarity between the corresponding spatially resolved and
global relations. For simplicity, in the following we assume $m_{1}$=1.5 and
$m_{2}$=$-1.5$, values which are perfectly consistent with the best fit ones.
Under this assumption, Equation (10) becomes:
${dY\over d{\rm log}(1+z)}=-{1.35\over 1+1.5\left(1+{M_{\rm H_{2}}\over M_{\rm
HI}}\right)^{-1}}\quad.$ (13)
Equation (13) implies that the redshift derivative of $Y$ is always negative,
i.e., the phase equilibrium shifts in favour of HI in high redshift galaxies.
This comes from the SFR increasing with redshift faster than the molecular gas
mass, see the above Equations (4) and (5). Let us consider three limiting
cases. If $M_{\rm H_{2}}$ largely dominates over $M_{\rm HI}$, then the
denominator in Equation (13) is $\sim 1$ and the derivative is $-1.35$. If
$M_{\rm HI}$ largely dominates, the derivative becomes -0.54. Finally, if the
two phases are nearly equal in mass the denominator is $\sim$ 1.75 and the
derivative becomes -0.77. So, the derivative will always be between -0.54 and
$-1.35$.
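These limiting cases are easy to verify numerically; the short sketch below (ours, purely illustrative) evaluates Equation (13) with the adopted $m_{1}=1.5$, $m_{2}=-1.5$:

```python
# Quick numerical check of the three limiting cases of Equation (13),
# with m1 = 1.5 and m2 = -1.5 as adopted in the text (function name ours).

def dY_dlogz(ratio, m1=1.5, m2=-1.5):
    """dY/dlog(1+z) from Equation (13); `ratio` is M_H2 / M_HI."""
    return (2.6 * m1 + 3.5 * m2) / (1.0 + m1 / (1.0 + ratio))

print(round(dY_dlogz(1e9), 2))    # H2-dominated limit  -> -1.35
print(round(dY_dlogz(1e-9), 2))   # HI-dominated limit  -> -0.54
print(round(dY_dlogz(1.0), 2))    # equal masses        -> -0.77
```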
Moreover, an analytical solution of Equation (12) is possible, and it is
shown in Figure 2 (solid lines) for $m_{1}=1.5$, $m_{2}=-1.5$, and for $M_{\rm
H_{2},0}$/$M_{\rm HI,0}$ = 1/3, 1 and 3, i.e., three typical values of the
H2/HI mass ratio within the optical radius of MS galaxies in the local
Universe (Casasola et al., 2020). Galaxies that at $z=0$ are HI dominated, or
in which the two phases are equal in mass, show just a mild evolution of
$M_{\rm H_{2}}$/$M_{\rm HI}$, implying that by $z\sim 2$ HI still holds the
majority share. However, galaxies that locally are H2 dominated will tend to
show a slightly steeper evolution, to reach $M_{\rm H_{2}}$/$M_{\rm HI}$
$\sim$ 1.2 at $z=2$. We notice that lower values than 3.5 in Equation (5) can
be found in the literature: they would imply a flatter evolution of $M_{\rm
H_{2}}$/$M_{\rm HI}$ compared to our results.
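Equation (12) can also be solved numerically. The following sketch (our illustration; the bracket and bisection routine are arbitrary choices) approximately recovers the curve for a $z=0$ ratio of 3:

```python
import math

# Numerical solution of Equation (12) by bisection (our sketch; bracket and
# iteration count are arbitrary), for m1 = 1.5, m2 = -1.5 and a z = 0 ratio
# M_H2/M_HI = 3, i.e. Y0 = log10(3).

M1, M2 = 1.5, -1.5

def lhs(Yz, Y0):
    """Left-hand side of Equation (12)."""
    return ((1 + M1) * (Yz - Y0)
            + M1 * (math.log10(1 + 10**Y0) - math.log10(1 + 10**Yz)))

def Y_at_z(z, Y0):
    target = (2.6 * M1 + 3.5 * M2) * math.log10(1 + z)  # right-hand side
    lo, hi = -5.0, 5.0        # bracket wide enough for any realistic Y
    for _ in range(200):      # bisection; lhs is increasing in Yz
        mid = 0.5 * (lo + hi)
        if lhs(mid, Y0) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Y0 = math.log10(3.0)
ratio_z2 = 10 ** Y_at_z(2.0, Y0)   # H2/HI at z = 2: roughly 1.1-1.2
```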
Figure 2: Redshift evolution at fixed stellar mass of the H2/HI mass ratio,
obtained applying Method 1 (solid lines) and Method 2 (dashed lines), for
three different values of ($M_{\rm H_{2}}/M_{\rm HI})_{z=0}$= 1/3 (turquoise),
1 (gray) and 3 (black). The values obtained from the HI detection of C20 at
z=1.04 are marked with the white-to-black colored bar, with the gradient
indicating variations of the fraction of HI inside the optical radius. The
values estimated from the correlations of Zhang et al. (2020) at z=0, 0.83 and
1.23 are indicated with the yellow-to-purple colored bar, with the gradient
indicating the variations in stellar mass.
### 3.2 Method 2
With the data for the five galaxies in the sample of M20 we analyze how
$M_{\rm HI}$ and $M_{\rm H_{2}}$ are linked on scales of 500 pc. We observe a
slightly super-linear correlation between log($M_{\rm HI}/M_{\star}$) and
log($M_{\rm H_{2}}/M_{\star}$), characterized by a slope of 1.13, a Spearman
coefficient of 0.62 and $p$-value $\sim$ 0:
${\rm log}\frac{M_{\rm HI}}{M_{\star}}\propto 1.13\ {\rm log}\frac{M_{\rm
H_{2}}}{M_{\star}}$ (14)
and the correlation is shown in Figure 3. We note that one of our five
galaxies, NGC5194 (M51), has a significantly flatter slope and smaller
Spearman coefficient, and interestingly is the only galaxy in the sample to be
experiencing an interaction (with M51b) as well as the only one to have T-type
= 4 (while the rest of the galaxies have T-type between 5.2 and 5.9). We
decided to keep NGC5194 in our sample for consistency with Method 1, but
noting that the slope for the remaining four galaxies is slightly steeper
(1.24). This correlation gives us the possibility to estimate the evolution of
$M_{\rm HI}/M_{\star}$ with $z$ just by considering the evolution of the
molecular gas (at fixed stellar mass), expressed in Equation (4). Hence,
Equation (14) becomes:
$\frac{M_{\rm HI}}{M_{\star}}\propto\left(\frac{M_{\rm
H_{2}}}{M_{\star}}\right)^{1.13}\propto(1+z)^{1.13\times 2.6}$ (15)
and thus:
$\frac{M_{\rm H_{2}}}{M_{\rm
HI}}\propto(1+z)^{2.6}\times(1+z)^{-2.94}\propto(1+z)^{-0.34}.$ (16)
The trend expressed by Equation (16) is shown in Figure 2 (dashed lines) for
the three values of $M_{\rm H_{2}}$/$M_{\rm HI}$ at $z$ = 0 used in Method 1:
1/3, 1 and 3. The two methods appear to give basically consistent results,
with only a modest evolution of $M_{\rm H_{2}}$/$M_{\rm HI}$ with redshift in
favor of HI, which is more pronounced in Method 1 (we note that a steeper
slope than the one expressed in Equation (15) would increase the consistency
between the two methods). This agreement may not be surprising, as the two
methods are in fact more similar than they appear. Indeed, in Method 1 the
effect of the SFR on $M_{\rm H_{2}}$/$M_{\rm HI}$ is treated explicitly,
whereas in Method 2 it is implicit in the $M_{\rm H_{2}}$-$M_{\rm HI}$
correlation.
## 4 Discussion
### 4.1 Comparison with other estimates of the HI content of high redshift
galaxies
We compare these trends with the recent detection of HI in emission in $z\sim
1$ galaxies, obtained by C20 via stacking analysis over 7,653 star forming
galaxies. They find that in their sample, with a mean stellar mass of
$9.4\cdot 10^{9}\,M_{\odot}$, the mean HI mass is $1.19\cdot
10^{10}\,M_{\odot}$. To compute the mean H2/HI mass ratio in the galaxies
observed by C20 we proceed as follows. We consider the mean molecular-to-
stellar mass ratio in the local Universe for galaxies with $M_{\star}\sim
10^{10}\,M_{\odot}$ to be $\sim$ 0.1 (e.g. Casasola et al., 2020; Hunt et al.,
2020). Thus, the mean molecular gas mass of galaxies having a mean stellar
mass of $9.4\cdot 10^{9}\,M_{\odot}$ is $\sim 9.4\cdot 10^{8}\,M_{\odot}$. By
applying the scaling from Tacconi et al. (2018), expressed by Equation (4),
the expected mean $M_{\rm H_{2}}$ in $z=1$ galaxies turns out to be $\sim
5.7\cdot 10^{9}\,M_{\odot}$, hence:
$\left({\frac{M_{\rm H_{2}}}{M_{\rm HI}}}\right)_{z=1}=\frac{5.7\cdot
10^{9}}{1.19\cdot 10^{10}}=0.48.$ (17)
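The arithmetic behind Equation (17) can be reproduced in a few lines (a sketch using the numbers quoted in the text):

```python
# Arithmetic behind Equation (17): the z = 0 molecular mass is scaled to
# z = 1 with the Tacconi et al. (2018) relation, Equation (4), and compared
# to the mean HI mass measured by C20.

M_star  = 9.4e9                      # mean stellar mass of the C20 sample [Msun]
M_H2_z0 = 0.1 * M_star               # local M_H2/M_star ~ 0.1 -> 9.4e8 Msun
M_H2_z1 = M_H2_z0 * (1 + 1) ** 2.6   # ~ 5.7e9 Msun at z = 1
M_HI_z1 = 1.19e10                    # mean HI mass detected by C20 [Msun]

ratio_z1 = M_H2_z1 / M_HI_z1         # ~ 0.48
```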
This value is obtained assuming that the HI detected by C20 ($M_{\rm
HI_{tot}}$) lies completely within the optical radius ($R_{25}$) of the
galaxies in the sample (i.e., $f_{\rm R25}=\frac{M_{\rm HI_{R25}}}{M_{\rm
HI_{tot}}}=1$, with $M_{\rm HI_{R25}}$ the HI mass within $R_{25}$). The
average beam of the observations described in C20 is between 30 and 60 kpc,
thus it is likely that a certain fraction of the observed HI lies outside the
optical radius, resulting in an underestimation of the H2/HI mass ratio within
the optical radius. The white-to-black bar in Figure 2 represents the estimate
of $M_{\rm H_{2}}/M_{\rm HI}$ at $z$=1, assuming that C20 have sampled a
fraction of HI inside the optical radius, varying from 100 per cent (white) to
25 per cent (black). In particular, we find that $M_{\rm HI}>M_{\rm H_{2}}$ at
$z\sim$1 for $f_{\rm R25}>0.4$. For $f_{\rm R25}<0.4$, a value consistent with
the $z=0$ estimate of Hunt et al. (2020) of galaxies having on average 30$\%$
of their total HI inside the optical radius, we get $M_{\rm H_{2}}>M_{\rm HI}$
at $z\sim 1$, but even when this fraction is only 20%, $M_{\rm H_{2}}$ is only
a factor of 2 higher than $M_{\rm HI}$.
In Figure 2 we also include, with yellow-to-purple vertical bars, the recent
estimates of $M_{\rm HI}$ obtained by Zhang et al. (2020) from the local
correlations between log${\frac{M_{\rm HI}}{M_{\star}}}$ and the $(NUV-r)$
color, which is a proxy for the specific SFR. We report their results at three
different redshifts: $z$ = 0, 0.83 and 1.23. As above, we use the evolution of
$M_{\rm H_{2}}$ with redshift as given by Equation (4) to estimate $M_{\rm
H_{2}}$ at the three redshifts, while $M_{\rm HI}$ is obtained for $M_{\star}$ varying
between $10^{9}\,\hbox{$M_{\odot}$}$ (in yellow in Figure 2) and
$10^{10.5}\,\hbox{$M_{\odot}$}$ (in purple in Figure 2). It is worth noting
that the HI estimates used in Zhang et al. (2020) do not refer to values
within the optical radius; this is clear at $z=0$, where the estimates of
$M_{\rm H_{2}}$/$M_{\rm HI}$ are significantly smaller than those of Casasola
et al. (2020) computed within the optical radius.
While our two methods and the one of Zhang et al. (2020) yield similar results
(in that they suggest a non-vanishing HI contribution at high redshift), it is
worth recapping the underlying physical motivations of each of them. Method 1
is built on the observed scaling of the $M_{\rm H_{2}}/M_{\star}$ ratio with
redshift, Eq. (4), and attempts to include the effect of photo-dissociation of
the H2 molecules by young stars. Method 2 assumes that the local correlation
between $M_{\rm H_{2}}$ and $M_{\rm HI}$ holds at all redshifts, and the
rationale of it is that if galaxies have more H2 they must have also more HI,
which is the necessary step to form H2. The method of Zhang et al. (2020)
assumes that the local correlation between $M_{\rm HI}$ and the ultraviolet-
optical colour (a SFR proxy) holds at all redshifts: as the SFR increases
with redshift, so must $M_{\rm HI}$. We notice that only our two
methods use the observed increasing trend with redshift of the H2 to stellar
mass ratio.
Figure 3: Correlation between log($M_{\rm HI}$/$M_{\star}$) and log($M_{\rm
H_{2}}$/$M_{\star}$) at 500 pc resolution, for the 5 galaxies of M20. The best
fit correlation (solid orange line) has a slope of 1.13 and a Spearman
coefficient of 0.62.
### 4.2 Implications and conclusions
All the above results rely on extrapolations from local trends that may or may
not hold when applying them to high redshift, thus at this stage we consider
the results tentative. Yet, in all methods the H2/HI mass ratio is expected to
decrease with redshift, contrary to the notion that it would increase, with H2
dominating at high redshift. Thus, these results suggest that HI cannot be
neglected at high redshift and we discuss below some implications for our
understanding of high redshift galaxies.
The first one concerns star formation, namely the gas depletion time $M_{\rm
gas}$/SFR and the star formation efficiency (SFE). For lack of direct evidence
on the HI mass, the H2 mass has been generally used as a proxy for the total
gas mass. If our projections are correct, and if some (if not all) of the HI
observed within the optical disk of galaxies comes from H2 photo-dissociation,
then the total gas depletion time should be at least a factor of $\sim 2$
longer than previously estimated (e.g. Scoville et al., 2017; Tacconi et al.,
2018). In the end, the HI to H2 to stars conversion is not a one-way process
inside galaxies, but rather a cycle in which part of the H2 is converted back
to HI. Thus, the total gas depletion time is a more informative quantity
compared to the molecular gas depletion time.
The second implication concerns the contribution of HI to the total baryonic
mass inside the stellar disk of high redshift galaxies. Even when ignoring HI,
spatially-resolved dynamical studies have shown that $z\sim 2$ galaxies are
strongly baryon dominated inside their effective radius (Genzel et al., 2017,
2020). If the mass of HI is comparable to that of H2, as it is at $z\sim 0$,
then these galaxies may turn out even more baryon dominated than estimated
thus far. Similarly, a higher gas fraction due to the addition of the HI
component would lower the Toomre parameter, making disks more prone to clump
formation instabilities.
For a direct assessment of the HI content of star forming galaxies at high
redshifts we will have to wait for the planned surveys with the Square
Kilometer Array (SKA). Indeed, ultra-deep SKA1 surveys may probe massive
galaxies (with $M_{\rm HI}\gtrsim 10^{10}\,M_{\odot}$) up to
$z\lesssim 1.7$ (Blyth et al., 2015),
or even beyond via stacking. Strawman HI surveys with SKA1 foresee two
medium/high redshift surveys (Blyth et al., 2015): a deep survey (150 deg$^{2}$)
that will detect the mentioned amount of HI up to $z\sim 0.7$ and an ultra-
deep survey (2 deg$^{2}$) that will reach $z\sim 1.7$. These observations should be
amply sufficient to check the extent to which our projections are correct.
## Acknowledgments
We are grateful to the anonymous referee for a careful consideration of our
manuscript, to Leslie Hunt for useful comments on an early version, and to
Lucia Rodríguez-Muñoz, Arianna Renzini, Bhaskar Agarwal and Hannah Übler for
fruitful discussion and valuable inputs. LM acknowledges support from the BIRD
2018 research grant from the Università degli Studi di Padova.
AE and GR acknowledge the support from grant PRIN MIUR 2017 - 20173ML3WW 001.
## Data Availability
The derived data underlying this article will be shared on reasonable request
to the corresponding author.
## References
* Allen et al. (2004) Allen R. J., Heaton H. I., Kaufman M. J., 2004, ApJ, 608, 314
* Bell & Kennicutt (2001) Bell E. F., Kennicutt Robert C. J., 2001, ApJ, 548, 681
* Bigiel et al. (2008) Bigiel F., Leroy A., Walter F., Brinks E., de Blok W. J. G., Madore B., Thornley M. D., 2008, AJ, 136, 2846
* Blyth et al. (2015) Blyth S., et al., 2015, in Advancing Astrophysics with the Square Kilometre Array (AASKA14). p. 128 (arXiv:1501.01295)
* Bolatto et al. (2013) Bolatto A. D., Wolfire M., Leroy A. K., 2013, ARA&A, 51, 207
* Cano-Díaz et al. (2019) Cano-Díaz M., Ávila-Reese V., Sánchez S. F., Hernández-Toledo H. M., Rodríguez-Puebla A., Boquien M., Ibarra-Medel H., 2019, MNRAS, 488, 3929
* Casasola et al. (2020) Casasola V., et al., 2020, A&A, 633, A100
* Catinella & Cortese (2015) Catinella B., Cortese L., 2015, MNRAS, 446, 3526
* Catinella et al. (2018) Catinella B., et al., 2018, MNRAS, 476, 875
* Chowdhury et al. (2020) Chowdhury A., Kanekar N., Chengalur J. N., Sethi S., Dwarakanath K. S., 2020, Nature, 586, 369
* Clark et al. (2018) Clark C. J. R., et al., 2018, A&A, 609, A37
* Davies et al. (2017) Davies J. I., et al., 2017, PASP, 129, 044102
* Enia et al. (2020) Enia A., et al., 2020, MNRAS, 493, 4107
* Fernández et al. (2016) Fernández X., et al., 2016, ApJ, 824, L1
* Genzel et al. (2017) Genzel R., et al., 2017, Nature, 543, 397
* Genzel et al. (2020) Genzel R., et al., 2020, ApJ, 902, 98
* Giovanelli et al. (2005) Giovanelli R., et al., 2005, AJ, 130, 2613
* Hayward & Hopkins (2017) Hayward C. C., Hopkins P. F., 2017, MNRAS, 465, 1682
* Hsieh et al. (2017) Hsieh B. C., et al., 2017, ApJ, 851, L24
* Hunt et al. (2020) Hunt L. K., Tortora C., Ginolfi M., Schneider R., 2020, A&A, 643, A180
* Kanekar et al. (2016) Kanekar N., Sethi S., Dwarakanath K. S., 2016, ApJ, 818, L28
* Kennicutt (1998) Kennicutt Robert C. J., 1998, ARA&A, 36, 189
* Krumholz et al. (2012) Krumholz M. R., Dekel A., McKee C. F., 2012, ApJ, 745, 69
* Leroy et al. (2009) Leroy A. K., et al., 2009, AJ, 137, 4670
* Lin et al. (2019) Lin L., et al., 2019, ApJ, 884, L33
* Masters et al. (2019) Masters K. L., et al., 2019, MNRAS, 488, 3396
* Meyer et al. (2004) Meyer M. J., et al., 2004, MNRAS, 350, 1195
* Morselli et al. (2020) Morselli L., et al., 2020, MNRAS, 496, 4606
* Scoville et al. (2017) Scoville N., et al., 2017, ApJ, 837, 150
* Speagle et al. (2014) Speagle J. S., Steinhardt C. L., Capak P. L., Silverman J. D., 2014, ApJS, 214, 15
* Sternberg et al. (2014) Sternberg A., Le Petit F., Roueff E., Le Bourlot J., 2014, ApJ, 790, 10
* Tacconi et al. (2018) Tacconi L. J., et al., 2018, ApJ, 853, 179
* Walter et al. (2008) Walter F., Brinks E., de Blok W. J. G., Bigiel F., Kennicutt Robert C. J., Thornley M. D., Leroy A., 2008, AJ, 136, 2563
* Zhang et al. (2020) Zhang W., Kauffmann G., Wang J., Chen Y., Fu J., Wu H., 2020, arXiv e-prints, p. arXiv:2011.04500
* da Cunha et al. (2008) da Cunha E., Charlot S., Elbaz D., 2008, MNRAS, 388, 1595
11institutetext: Department of Computer Science, Purdue University, IN, USA
11email<EMAIL_ADDRESS>
# DAHash: Distribution Aware Tuning of Password Hashing Costs
Wenjie Bai 11 Jeremiah Blocki 11
###### Abstract
An attacker who breaks into an authentication server and steals all of the
cryptographic password hashes is able to mount an offline-brute force attack
against each user’s password. Offline brute-force attacks against passwords
are increasingly commonplace and the danger is amplified by the well
documented human tendency to select low-entropy passwords and/or reuse these
passwords across multiple accounts. Moderately hard password hashing functions
are often deployed to help protect passwords against offline attacks by
increasing the attacker’s guessing cost. However, there is a limit to how
“hard” one can make the password hash function as authentication servers are
resource constrained and must avoid introducing substantial authentication
delay. Observing that there is a wide gap in the strength of passwords
selected by different users, we introduce DAHash (Distribution Aware Password
Hashing), a novel mechanism which reduces the number of passwords that an
attacker will crack. Our key insight is that a resource-constrained
authentication server can dynamically tune the hardness parameters of a
password hash function based on the (estimated) strength of the user’s
password. We introduce a Stackelberg game to model the interaction between a
defender (authentication server) and an offline attacker. Our model allows the
defender to optimize the parameters of DAHash e.g., specify how much effort is
spent to hash weak/moderate/high strength passwords. We use several large
scale password frequency datasets to empirically evaluate the effectiveness of
our differentiated cost password hashing mechanism. We find that the defender
who uses our mechanism can reduce the fraction of passwords that would be
cracked by a rational offline attacker by around $15\%$.
###### Keywords:
Password hashing DAHash Stackelberg game.
## 1 Introduction
Breaches at major organizations have exposed billions of user passwords to the
dangerous threat of offline password cracking. An attacker who has stolen the
cryptographic hash of a user’s password could run an offline attack by
comparing the stolen hash value with the cryptographic hashes of every
password in a large dictionary of popular password guesses. An offline
attacker can check as many guesses as s/he wants since each guess can be
verified without interacting with the authentication server. The attacker is
limited only by the cost of checking each password guess i.e., the cost of
evaluating the password hash function.
Offline attacks are a grave threat to the security of users’ information for
several reasons. First, the entropy of a typical user chosen password is
relatively low e.g., see [8]. Second, users often reuse passwords across
multiple accounts to reduce cognitive burden. Finally, the arrival of GPUs,
FPGAs and ASICs significantly reduces the cost of evaluating a password hash
function such as PBKDF2 [17] millions or billions of times. Blocki et al. [7]
recently argued that PBKDF2 cannot adequately protect user passwords without
introducing an intolerable authentication delay (e.g., $2$ minutes) because
the attacker could use ASICs to reduce guessing costs by many orders of
magnitude.
Memory hard functions (MHFs) [24, 4] can be used to build ASIC resistant
password hashing algorithms. The Area x Time complexity of an ideal MHF will
scale with $t^{2}$, where $t$ denotes the time to evaluate the function on a
standard CPU. Intuitively, to evaluate an MHF the attacker must dedicate $t$
blocks of memory for $t$ time steps, which ensures that the cost of computing
the function is equitable across different computer architectures i.e., RAM on
an ASIC is still expensive. Because the “full cost” [34] of computing an ideal
MHF scales quadratically with $t$ it is also possible to rapidly increase
guessing costs without introducing an untenable delay during user
authentication — by contrast the full cost of hash iteration based KDFs such
as PBKDF2 [17] and BCRYPT [25] scale linearly with $t$. Almost all of the
entrants to the recent Password Hashing Competition (PHC) [33] claimed some
form of memory-hardness.
Even if we use MHFs there remains a fundamental trade-off in the design of
good password hashing algorithms. On the one hand the password hash function
should be sufficiently expensive to compute so that it becomes economically
infeasible for the attacker to evaluate the function millions or billions of
times per user — even if the attacker develops customized hardware (ASICs) to
evaluate the function. On the other hand the password hashing algorithm cannot
be so expensive to compute that the authentication server is unable to handle
the workload when multiple users login simultaneously. Thus, even if an
organization uses memory hard functions it will not be possible to protect all
user passwords against an offline attacker e.g., if the password hashing
algorithm is not so expensive that the authentication server is overloaded
then it will almost certainly be worthwhile for an offline attacker to check
the top thousand passwords in a cracking dictionary against each user’s
password. In this sense all of the effort an authentication server expends
protecting the weakest passwords is (almost certainly) wasted.
##### Contributions
We introduce DAHash (Distribution Aware Hash), a password hashing mechanism
that minimizes the damage of an offline attack by tuning key-stretching
parameters for each user account based on password strength. In many empirical
password distributions there are often several passwords that are so popular
that it would be infeasible for a resource constrained authentication server
to dissuade an offline attacker from guessing these passwords e.g., in the
Yahoo! password frequency corpus [8, 6] the most popular password was selected
by approximately $1\%$ of users. Similarly, other users might select passwords
that are strong enough to resist offline attacks even with minimal key
stretching. The basic idea behind DAHash is to have the resource-constrained
authentication server shift more of its key-stretching effort towards savable
passwords i.e., passwords that the offline attacker could be dissuaded from checking.
Our DAHash mechanism partitions passwords into $\tau$ groups e.g., weak,
medium and strong when $\tau=3$. We then select a different cost parameter
$k_{i}$ for each group $G_{i}$, $i\leq\tau$ of passwords. If the input
password $pw$ is in group $G_{i}$ then we will run our moderately hard key-
derivation function with cost parameter $k_{i}$ to obtain the final hash
value $h$. Crucially, the hash value $h$ stored on the server will not reveal
any information about the cost parameter $k_{i}$ or, by extension, the
group $G_{i}$.
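A minimal sketch of this idea (our own illustration, not the construction evaluated in the paper: PBKDF2 stands in for the moderately hard key-derivation function, and the iteration counts and the `strength_group` heuristic are placeholder choices):

```python
import hashlib
import os

# Toy sketch of DAHash (not the authors' implementation). Passwords are
# binned into tau = 3 strength groups; the key-stretching cost parameter
# k_i depends on the group. PBKDF2 stands in for the moderately hard KDF.
COSTS = {0: 10_000, 1: 50_000, 2: 200_000}  # k_i for weak/medium/strong

def strength_group(pw: str) -> int:
    """Placeholder strength estimate (a real server would use a model)."""
    return min(2, len(set(pw)) // 5)

def dahash(pw: str, salt: bytes) -> bytes:
    k = COSTS[strength_group(pw)]
    # Only (salt, h) is stored; h reveals neither k_i nor the group G_i.
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, k)

# Registration stores (salt, h); login recomputes dahash(guess, salt).
salt = os.urandom(16)
stored = dahash("correct horse battery staple", salt)
assert dahash("correct horse battery staple", salt) == stored
```

In this plausible verification flow, a correct guess falls into the same group and therefore uses the same hidden cost parameter, so the comparison succeeds without ever storing $k_{i}$; an incorrect guess simply hashes to a different value.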
We adapt a Stackelberg game model of Blocki and Datta [5] to help the defender
(authentication server) tune the DAHash cost parameters $k_{i}$ to minimize
the fraction of cracked passwords. The model captures the interaction between
the defender and an offline attacker: the defender (leader) groups passwords
into different strength levels and selects the cost parameter $k_{i}$ for each
group of passwords (subject to maximum workload constraints for the
authentication server), and then the offline attacker selects the attack
strategy which maximizes his/her utility (expected reward minus expected
guessing costs). The attacker’s expected utility depends on the DAHash cost
parameters $k_{i}$ as well as the user password distribution, the value $v$
of a cracked password to the attacker and the attacker’s strategy i.e., an
ordered list of passwords to check before giving up. We prove that an attacker
will maximize its utility by following a simple greedy strategy. We then use
an evolutionary algorithm to help the defender compute an optimal strategy
i.e., the optimal way to tune DAHash cost parameters for different groups of
passwords. The goal of the defender is to minimize the percentage of passwords
that an offline attacker cracks when playing the utility-optimizing strategy
in response to the selected DAHash parameters $k_{1},\ldots,k_{\tau}$.
Finally, we use several large password datasets to evaluate the effectiveness
of our differentiated cost password hashing mechanism. We use the empirical
password distribution to evaluate the performance of DAHash when the value $v$
of a cracked password is small. We utilize Good-Turing frequency estimation to
help identify and highlight uncertain regions of the curve i.e., where the
empirical password distribution might diverge from the real password
distribution. To evaluate the performance of DAHash when $v$ is large we
derive a password distirbution from guessing curves obtained using the
Password Guessing Service [28]. The Password Guessing Service uses
sophisticated models such as Probabilistic Context Free Grammars [32, 18, 30],
Markov Chain Models [11, 10, 19, 28] and even neural networks [21] to generate
password guesses using Monte Carlo strength estimation [12]. We find that
DAHash reduces the fraction of passwords cracked by a rational offline
attacker by up to $15\%$ (resp. $20\%$) under the empirical distribution
(resp. derived distribution).
## 2 Related Work
Key-stretching was proposed as early as 1979 by Morris and Thomson as a way to
protect passwords against brute force attacks [22]. Traditionally key
stretching has been performed using hash iteration e.g., PBKDF2 [17] and
BCRYPT [25]. More modern hash functions such as SCRYPT and Argon2 [4], winner
of the password hashing competition in 2015 [33], additionally require a
significant amount of memory to evaluate. An economic analysis Blocki et al.
[7] suggested that hash iteration based key-derivation functions no longer
provide adequate protection for lower entropy user passwords due to the
existence of ASICs. On a positive note they found that the use of memory hard
functions can significantly reduce the fraction of passwords that a rational
adversary would crack.
The addition of “salt” is a crucial defense against rainbow table attacks [23]
i.e., instead of storing $(u,H(pw_{u}))$ an authentication server will store
$(u,s_{u},H(s_{u},pw_{u}))$ where $s_{u}$ is a random string called the salt
value. Salting defends against pre-computation attacks (e.g., [13]) and
ensures that each password hash will need to be cracked independently e.g.,
even if two users $u$ and $u^{\prime}$ select the same password we will have
$H(s_{u^{\prime}},pw_{u^{\prime}})\neq H(s_{u},pw_{u})$ with high probability
as long as $s_{u}\neq s_{u^{\prime}}$.
Manber proposed the additional inclusion of a short random string called
“pepper” which would not be stored on the server [20] e.g., instead of storing
$(u,s_{\\_}u,H(s_{\\_}u,pw_{\\_}u))$ the authentication server would store
$(u,s_{\\_}u,H(s_{\\_}u,x_{\\_}u,pw_{\\_}u))$ where the pepper $x_{\\_}u$ is a
short random string that, unlike the salt value $s_{\\_}u$, is not recorded.
When the user authenticates with password guess $pw^{\prime}$ the server would
evaluate $H(s_{\\_}u,x,pw^{\prime})$ for each possible value of $x\leq
x_{\\_}{max}$ and accept if and only if
$H(s_{\\_}u,x,pw^{\prime})=H(s_{\\_}u,x_{\\_}u,pw_{\\_}u)$ for some value of
$x$. The potential advantage of this approach is that the authentication
server can usually halt early when the legitimate user authenticates, while
the attacker will have to check every different value of
$x\in[1,x_{\\_}{max}]$ before rejecting an incorrect password. Thus, on
average the attacker will need to do more work than the honest server.
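The pepper-checking loop described above can be sketched as follows. This is a toy model, not Manber's actual construction: SHA-256 stands in for a slow password hash, and the names (`create_record`, `verify`, `X_MAX`) are our own.

```python
import hashlib
import os
import secrets

X_MAX = 256  # size of the pepper space (illustrative choice)

def _h(salt: bytes, pepper: int, pw: str) -> bytes:
    """Stand-in for a slow password hash; a real server would use an MHF."""
    return hashlib.sha256(salt + pepper.to_bytes(2, "big") + pw.encode()).digest()

def create_record(pw: str):
    salt = os.urandom(16)
    pepper = secrets.randbelow(X_MAX)  # drawn at random, then discarded
    return salt, _h(salt, pepper, pw)  # the pepper itself is NOT stored

def verify(record, guess: str) -> bool:
    salt, stored = record
    # The verifier must try pepper values one by one; for the legitimate user
    # the loop halts early at the true pepper (on average after X_MAX/2 tries),
    # while rejecting a wrong guess always costs all X_MAX evaluations.
    return any(_h(salt, x, guess) == stored for x in range(X_MAX))
```

This makes the asymmetry concrete: accepting a correct password costs roughly half the work of definitively rejecting an incorrect one.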
Blocki and Datta observed that non-uniform distributions over the secret
pepper value $x\in[1,x_{\\_}{max}]$ can sometimes further increase the
attacker’s workload relative to an honest authentication server [5]. They
showed how to optimally tune the pepper distribution by using Stackelberg game
theory [5]. However, it is not clear how pepper could be effectively
integrated with a modern memory hard function such as Argon2 or SCRYPT. One of
the reasons that MHFs are incredibly effective is that the “full cost” [34] of
evaluation can scale quadratically with the running time $t$. Suppose we have
a hard limit on the running time $t_{\\_}{max}$ of the authentication
procedure e.g., $1$ second. If we select a secret pepper value
$x\in[1,x_{\\_}{max}]$ then we would need to ensure that
$H(s_{\\_}u,x,pw^{\prime})$ can be evaluated in time at most
$t_{\\_}{max}/x_{\\_}{max}$ — otherwise the total running time to check all of
the different pepper values sequentially would exceed $t_{\\_}{max}$. In this
case the “full cost” to compute $H(s_{\\_}u,x,pw^{\prime})$ for every
$x\in[1,x_{\\_}{max}]$ would be at most
$O\left(x_{\\_}{max}\times(t_{\\_}{max}/x_{\\_}{max})^{2}\right)=O\left(t_{\\_}{max}^{2}/x_{\\_}{max}\right)$.
If instead we had not used pepper then it would have been possible to ensure
that the full cost could be as large as $\Omega(t_{\\_}{max}^{2})$ simply by
allowing the MHF to run for time $t_{\\_}{max}$ on a single input. Thus, in
most scenarios it would be preferable for the authentication server to use a
memory-hard password hashing algorithm without incorporating pepper.
Boyen’s work on “Halting Puzzles” is also closely related to our own work [9].
In a halting puzzle the (secret) running time parameter $t\leq t_{\\_}{max}$
is randomly chosen whenever a new account is created. The key idea is that an
attacker will need to run in time $t_{\\_}{max}$ to definitively reject an
incorrect password while it only takes time $t$ to accept a correct password.
In Boyen’s work the distribution over the running time parameter $t$ was the
same for all passwords. By contrast, in our work we assign a fixed hash cost
parameter to each password and this cost parameter may be different for
distinct passwords. We remark that it may be possible to combine both ideas
i.e., assign a different maximum running time parameter $t_{\\_}{max,pw}$ to
different passwords. We leave it to future work to explore whether or not the
composition of both mechanisms might yield further security gains.
## 3 DAHash
In this section, we first introduce some preliminaries about passwords, then
present DAHash and explain how the authentication process works with this
mechanism. We also discuss ways in which a (rational) offline attacker might
attempt to crack passwords protected with the differentiated cost mechanism.
### 3.1 Password Notation
We let $\mathcal{P}=\\{pw_{\\_}1,pw_{\\_}2,\ldots,\\}$ be the set of all
possible user-chosen passwords. We will assume that passwords are sorted so
that $pw_{\\_}i$ represents the $i$’th most popular password. Letting
$\Pr[pw_{\\_}i]$ denote the probability that a random user selects password
$pw_{\\_}i$, we have a distribution over $\mathcal{P}$ with
$\Pr[pw_{\\_}1]\geq\Pr[pw_{\\_}2]\geq\ldots$ and
$\sum_{\\_}i\Pr[pw_{\\_}i]=1$.
The distributions we consider in our empirical analysis have a compressed
representation. In particular, we can partition the set of passwords
$\mathcal{P}$ into $n^{\prime}$ equivalence sets
$es_{\\_}1,\ldots,es_{\\_}{n^{\prime}}$ such that for any $i$,
$pw,pw^{\prime}\in es_{\\_}i$ we have $\Pr[pw]=\Pr[pw^{\prime}]=p_{\\_}i$. In
all of the distributions we consider we will have
$n^{\prime}\ll\left|\mathcal{P}\right|$ allowing us to efficiently encode the
distribution using the $n^{\prime}$ tuples
$(|es_{\\_}1|,p_{\\_}1),\ldots,(|es_{\\_}{n^{\prime}}|,p_{\\_}{n^{\prime}})$
where $p_{\\_}i$ is the probability of any password in equivalence set
$es_{\\_}i$. We will also want to ensure that we can optimize our DAHash
parameters in time proportional to $n^{\prime}$ instead of
$\left|\mathcal{P}\right|$.
### 3.2 DAHash
Account Creation: When a new user first registers an account with user name
$u$ and password $pw_{\\_}u\in\mathcal{P}$, DAHash first assigns a hash
cost parameter $k_{\\_}u=\mathsf{GetHardness}(pw_{\\_}u)$ based on the
(estimated) strength of the user’s password. The server then randomly generates an
$L$-bit string $s_{\\_}u\leftarrow\\{0,1\\}^{L}$ (a “salt”), computes the hash
value $h_{\\_}u=H\left(pw_{\\_}u,s_{\\_}u;k_{\\_}u\right)$ and finally stores the
tuple $\left(u,s_{\\_}u,h_{\\_}u\right)$ as the record for user $u$. The salt
value $s_{\\_}u$ is used to thwart rainbow table attacks [23] and $k_{\\_}u$
controls the cost of the hash function111We remark that the hardness parameter $k$
is similar to “pepper” [20] in that it is not stored on the server. However,
the hardness parameter $k$ is distinct from pepper in that it is derived
deterministically from the input password $pw_{\\_}u$. Thus, unlike pepper,
the authentication server will not need to check the password against every
possible value of $k$.
Authentication with DAHash: Later, when user $u$ enters her/his password
$pw_{\\_}u^{\prime}$, the server first retrieves the corresponding salt value
$s_{\\_}u$ along with the hash value $h_{\\_}u$, runs
$\mathsf{GetHardness}(pw_{\\_}u^{\prime})$ to obtain $k_{\\_}u^{\prime}$ and
then checks whether the hash
$h_{\\_}u^{\prime}=H(pw_{\\_}u^{\prime},s_{\\_}u;~{}k_{\\_}u^{\prime})$ equals
the stored record $h_{\\_}u$ before granting access. If
$pw_{\\_}u^{\prime}=pw_{\\_}u$ is the correct password then we will have
$k_{\\_}u^{\prime}=k_{\\_}u$ and $h_{\\_}u^{\prime}=h_{\\_}u$ so
authentication will be successful. Due to the collision resistance of
cryptographic hash functions, a login request from someone claiming to be user
$u$ with password $pw^{\prime}_{\\_}u\neq pw_{\\_}u$ will be rejected.
The account creation and authentication processes are formally presented in
Algorithms 1 and 2 (see Appendix 0.A).
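The account creation and authentication flow can be sketched as follows. This is a simplified illustration, not the paper's Algorithms 1 and 2: `get_hardness` is a hypothetical toy rule (a real deployment derives the hardness from the password distribution), and iterated SHA-256 stands in for a memory-hard $H$.

```python
import hashlib
import os

def get_hardness(pw: str) -> int:
    """Toy GetHardness rule for illustration only: give longer
    (stronger-looking) passwords more key-stretching."""
    return 1 if len(pw) < 8 else 3

def slow_hash(pw: str, salt: bytes, k: int) -> bytes:
    """Stand-in for H(pw, salt; k): k rounds of iterated SHA-256.
    A real server would use an MHF such as Argon2 with cost k."""
    h = salt + pw.encode()
    for _ in range(10_000 * k):
        h = hashlib.sha256(h).digest()
    return h

def create_account(pw: str):
    salt = os.urandom(16)
    k = get_hardness(pw)      # k is derived from the password, never stored
    return salt, slow_hash(pw, salt, k)

def authenticate(record, guess: str) -> bool:
    salt, stored = record
    k = get_hardness(guess)   # re-derived deterministically from the guess
    return slow_hash(guess, salt, k) == stored
```

Note that the stored record $(s_{\\_}u,h_{\\_}u)$ contains neither the password nor the hardness parameter: a correct guess reproduces both $k$ and the hash.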
In the traditional (distribution oblivious) key-stretching mechanism
$\mathsf{GetHardness}(pw_{\\_}u)$ is a constant function which always returns
the same cost parameter $k$. Our objective will be to optimize
$\mathsf{GetHardness}(pw_{\\_}u)$ to minimize the percentage of passwords
cracked by an offline attacker. This must be done subject to the workload
constraints of the authentication server and (optionally) a minimum protection
constraint specifying the minimum acceptable key-stretching parameters for any
password.
The function $\mathsf{GetHardness}(pw_{\\_}u)$ maps each password to a
hardness parameter $k_{\\_}u$ which controls the cost of evaluating our
password hash function $H$. For hash iteration based key-derivation functions
such as PBKDF2 we would achieve cost $k_{\\_}u$ by iterating the underlying
hash function $t=\Omega(k_{\\_}u)$ times. By contrast, for an ideal memory hard
function the full evaluation cost scales quadratically with the running time
$t_{\\_}u$ so we have $t_{\\_}u=O\left(\sqrt{k_{\\_}u}\right)$ i.e., the
attacker will need to allocate $t_{\\_}u$ blocks of memory for $t_{\\_}u$ time
steps. In practice, most memory hard functions will take the parameter $t$ as
input directly. For simplicity, we will assume that the cost parameter $k$ is
given directly and that the running time $t$ (and memory usage) is derived
from $k$.
Remark. We stress that the hardness parameter $k$ returned by
$\mathsf{GetHardness}(pw_{\\_}u)$ should not be stored on the server.
Otherwise, an offline attacker can immediately reject an incorrect password
guess $pw^{\prime}\neq pw_{\\_}u$ as soon as he/she observes that
$k\neq\mathsf{GetHardness}(pw^{\prime})$. Furthermore, it should not be possible
to directly infer $k_{\\_}u$ from the hash value $h_{\\_}u\leftarrow
H(pw_{\\_}u,s_{\\_}u;~{}k_{\\_}u)$. Any MHF candidate such as SCRYPT [24],
Argon2 [4] or DRSample [3] will satisfy this property. While the hardness
parameter $k_{\\_}u$ is not stored on the server, we do assume that an offline
attacker who has breached the authentication server will have access to the
function $\mathsf{GetHardness}(pw_{\\_}u)$ (Kerckhoffs’ Principle) since the
code for this function would be stored on the authentication server. Thus,
given a password guess $pw^{\prime}$ the attacker can easily generate the
hardness parameter $k^{\prime}=\mathsf{GetHardness}(pw^{\prime})$ for any
particular password guess.
Defending against Side-Channel Attacks. A side-channel attacker might try to
infer the hardness parameter $k$ (which may in turn be correlated with the
strength of the user’s password) by measuring delay during a successful login
attempt. We remark that for modern memory hard password hashing algorithms
[24, 4, 3] the cost parameter $k$ is modeled as the product of two parameters:
memory and running time. Thus, it is often possible to increase (decrease) the
cost parameter without affecting the running time simply by tuning the memory
parameter222By contrast, the cost parameter for PBKDF2 and BCRYPT is directly
proportional to the running time. Thus, if we wanted to set a high cost
parameter $k$ for some groups of passwords we might have to set an intolerably
long authentication delay [7].. Thus, if such side-channel attacks are a
concern the authentication server could fix the response time during
authentication to some suitable constant and tune the memory parameter
accordingly. Additionally we might delay the authentication response for a
fixed amount of time (e.g., 250 milliseconds) to ensure that there is no
correlation between response time and the user’s password.
### 3.3 Rational Adversary Model
We consider an untargeted offline adversary whose goal is to break as many
passwords as possible. In the traditional authentication setting an offline
attacker who has breached the authentication server has access to all the data
stored on the server, including each user’s record $(u,s_{\\_}u,h)$ and the
code for hash function $H$ and for the function $\mathsf{GetHardness}()$. In
our analysis we assume that $H$ can only be used in a black-box manner (e.g.,
random oracle) to return results of queries from the adversary and that
attempts to find a collision or directly invert $H(\cdot)$ succeed with
negligible probability. However, an offline attacker who obtains
$(u,s_{\\_}u,h)$ may still check whether or not $pw_{\\_}u=pw^{\prime}$ by
setting $k^{\prime}=\mathsf{GetHardness}(pw^{\prime})$ and checking whether or
not $h=H(pw^{\prime},s_{\\_}u;~{}k^{\prime})$. The only limit on the
adversary’s success rate is the amount of resources he/she is willing to invest
in cracking users’ passwords.
We assume that the (untargeted) offline attacker has a value $v=v_{\\_}u$ for
the password of user $u$. For simplicity we will henceforth use $v$ for the
password value since the attacker is untargeted and has the same value $v_{\\_}u=v$
for every user $u$. There are a number of empirical studies of the black
market [2, 16, 27] which show that cracked passwords can have substantial
value e.g., Symantec reports that passwords generally sell for $\$4-\$30$ [14]
and [27] reports that e-mail passwords typically sell for $\$1$ on the Dark
Web. Bitcoin “brain wallets” provide another application where cracked
passwords can have substantial value to attackers [29].
We also assume that the untargeted attacker has a dictionary which s/he
will use as a source of guesses for $pw_{\\_}u$, i.e., the attacker knows
$pw_{\\_}i$ and $\Pr[pw_{\\_}i]$ for each password $i$. However, the attacker
will not know the particular password $pw_{\\_}u$ selected by each user $u$.
Therefore, to crack a particular user’s account the attacker has to enumerate
the candidate passwords and check each guess until there is a hit or the
attacker finally gives up. We assume that the attacker is rational
and would choose a strategy that would maximize his/her expected utility. The
attacker will need to repeat this process independently for each user $u$. In
our analysis we will focus on an individual user’s account that the attacker
is trying to crack.
## 4 Stackelberg Game
In this section, we use Stackelberg Game Theory [31] to model the interaction
between the authentication server and an untargeted adversary so that we can
optimize the DAHash cost parameters. In a Stackelberg Game the leader
(defender) moves first and then the follower (attacker) plays his/her best
response. In our context, the authentication server (leader) move is to
specify the function $\mathsf{GetHardness}()$. After a breach the offline
attacker (follower) can examine the code for $\mathsf{GetHardness}()$ and
observe the hardness parameters that will be selected for each different
password in $\mathcal{P}$. A rational offline attacker may use this knowledge
to optimize his/her offline attack. We first formally define the action space
of the defender (leader) and attacker (follower) and then we formally define
the utility functions for both players.
### 4.1 Action Space of Defender
The defender’s action is to implement the function $\mathsf{GetHardness}()$.
The implementation must be efficiently computable, and the function must be
chosen subject to maximum workload constraints on the authentication server.
Otherwise, the optimal solution would simply be to set the cost parameter $k$
for each password to be as large as possible. In addition, the server should
guarantee that each password receives at least some minimum level of protection
so that the mechanism does not make weak passwords weaker.
In an idealized setting where the defender knows the user password
distribution we can implement the function $\mathsf{GetHardness}(pw_{\\_}u)$
as follows: the authentication server first partitions all passwords into
$\tau$ mutually exclusive groups $G_{\\_}i$ with $i\in\\{1,\cdots,\tau\\}$
such that $\mathcal{P}=\bigcup_{\\_}{i=1}^{\tau}G_{\\_}i$ and
$\Pr[pw]>\Pr[pw^{\prime}]$ for every $pw\in G_{\\_}i$ and $pw^{\prime}\in
G_{\\_}{i+1}$. Here, $G_{\\_}1$ will correspond to the weakest group of
passwords and $G_{\\_}{\tau}$ corresponds to the group of strongest passwords.
For each of the $\lvert G_{\\_}i\rvert$ passwords $pw\in G_{\\_}i$ we assign
the same hash cost parameter $k_{\\_}i=\mathsf{GetHardness}(pw)$.
The cost of authenticating a password that is from $G_{\\_}i$ is simply
$k_{\\_}i$. Therefore, the amortized server cost for verifying a correct
password is:
$\small C_{\\_}{SRV}=\sum_{\\_}{i=1}^{\tau}k_{\\_}i\cdot\Pr[pw\in G_{\\_}i],$
(1)
where $\Pr[pw\in G_{\\_}i]=\sum_{\\_}{pw\in G_{\\_}i}\Pr[pw]$ is the total
probability mass of passwords in group $G_{\\_}i$. In general, we will assume
that the server has a maximum amortized cost $C_{\\_}{max}$ that it is
willing/able to incur for user authentication. Thus, the authentication server
must pick the hash cost vector
$\vec{k}=\\{k_{\\_}1,k_{\\_}2,\cdots,k_{\\_}{\tau}\\}$ subject to the cost
constraint $C_{\\_}{SRV}\leq C_{\\_}{max}$. Additionally, we require that
$k(pw_{\\_}i)\geq k_{\\_}{min}$ to ensure a minimum acceptable level of
protection for all accounts.
### 4.2 Action Space of Attacker
After breaching the authentication server the attacker may run an offline
dictionary attack. The attacker must fix an ordering $\pi$ over passwords
$\mathcal{P}$ and a maximum number of guesses $B$ to check i.e., the attacker
will check the first $B$ passwords in the ordering given by $\pi$. If $B=0$
then the attacker gives up immediately without checking any passwords and if
$B=\infty$ then the attacker will continue guessing until the password is
cracked. The permutation $\pi$ specifies the order in which the attacker will
guess passwords, i.e., the attacker will check password $pw_{\\_}{\pi(1)}$
first, then $pw_{\\_}{\pi(2)}$ second, and so on. Thus, the tuple $(\pi,B)$ forms a
_strategy_ of the adversary. Following this strategy, the probability that the
adversary succeeds in cracking a random user’s password is simply the sum of the
probabilities of all passwords checked:
$\small P_{\\_}{ADV}=\lambda(\pi,B)=\sum_{\\_}{i=1}^{B}p_{\\_}{\pi(i)}\ .$ (2)
Here, we use the shorthand $p_{\\_}{\pi(i)}=\Pr[pw_{\\_}{\pi(i)}]$ to
denote the probability of the $i$th password in the ordering $\pi$.
### 4.3 Attacker’s Utility
Given the estimated average value $v$ of a single cracked password, the expected
gain of the attacker is simply $v\times\lambda(\pi,B)$ i.e., the probability
that the password is cracked times the value $v$. Similarly, given a hash cost
parameter vector $\vec{k}$ the expected cost of the attacker is
$\sum^{B}_{\\_}{i=1}k(pw_{\\_}{\pi(i)})\cdot\left(1-\lambda(\pi,i-1)\right).$
We use the shorthand $k(pw)=k_{\\_}i=\mathsf{GetHardness}(pw)$ for a password
$pw\in G_{\\_}i$. Intuitively, the probability that the first $i-1$ guesses
are incorrect is $\left(1-\lambda(\pi,i-1)\right)$ and we incur cost
$k(pw_{\\_}{\pi(i)})$ for the $i$’th guess if and only if the first $i-1$
guesses are incorrect. Note that $\lambda(\pi,0)=0$ so the attacker always
pays cost $k(pw_{\\_}{\pi(1)})$ for the first guess. The adversary’s expected
utility is the difference of expected gain and expected cost:
$\displaystyle
U_{\\_}{ADV}\left(v,\vec{k},(\pi,B)\right)=v\cdot\lambda(\pi,B)-\sum^{B}_{\\_}{i=1}k(pw_{\\_}{\pi(i)})\cdot\left(1-\lambda(\pi,i-1)\right).$
(3)
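Equations (2) and (3) translate directly into code. In this sketch `p` and `k` list the probabilities and hash costs of passwords already arranged in the guessing order $\pi$; the function and variable names are ours.

```python
def lam(p, B):
    """Equation (2): lambda(pi, B), the attacker's success probability after
    checking the first B passwords in the guessing order."""
    return sum(p[:B])

def attacker_utility(v, p, k, B):
    """Equation (3): expected gain minus expected guessing cost.
    The i-th guess (0-indexed) costs k[i] and is only paid if the first
    i guesses all failed, which happens with probability 1 - lam(p, i)."""
    gain = v * lam(p, B)
    cost = sum(k[i] * (1.0 - lam(p, i)) for i in range(B))
    return gain - cost
```

For example, with $v=10$, two passwords of probability $0.5$ and $0.25$, and unit hash costs, checking both guesses gives expected gain $7.5$ and expected cost $1 + 0.5 = 1.5$, hence utility $6$.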
### 4.4 Defender’s Utility
After the defender (leader) moves the offline attacker (follower) will respond
with his/her utility optimizing strategy. We let $P_{\\_}{ADV}^{*}$ denote the
probability that the attacker cracks a random user’s password when playing
his/her optimal strategy.
$\small P_{\\_}{ADV}^{*}=\lambda(\pi^{*},B^{*})\
,~{}~{}~{}\mbox{where~{}~{}~{}}(\pi^{*},B^{*})=\arg\max_{\\_}{\pi,B}U_{\\_}{ADV}\left(v,\vec{k},(\pi,B)\right).$
(4)
$P_{\\_}{ADV}^{*}$ will depend on the attacker’s utility optimizing strategy
which will in turn depend on value $v$ for a cracked password, the chosen cost
parameters $k_{\\_}i$ for each group $G_{\\_}i$, and the user password
distribution. Thus, we can define the authentication server’s utility as
$\small U_{\\_}{SRV}(\vec{k},v)=-P_{\\_}{ADV}^{*}\ .$ (5)
The objective of the authentication server is to minimize the success rate
$P_{\\_}{ADV}^{*}(v,\vec{k})$ of the attacker by finding the optimal action
i.e., a good way of partitioning passwords into groups and selecting the
optimal hash cost vector $\vec{k}$. Since the parameter $\vec{k}$ controls the
cost of the hash function in passwords storage and authentication, we should
increase $k_{\\_}i$ for a specific group $G_{\\_}i$ of passwords only if this
is necessary to help deter the attacker from cracking passwords in this group
$G_{\\_}i$. The defender may not want to waste too many resources protecting
the weakest group $G_{\\_}1$ of passwords when the password value is high because
they will be cracked easily regardless of the hash cost $k_{\\_}1$.
### 4.5 Stackelberg Game Stages
The adversary’s utility depends on $(\pi,B)$ and $\vec{k}$, where
$(\pi,B)$ is the response to the server’s predetermined hash cost vector
$\vec{k}$. On the other hand, when the server selects a different hash cost
parameter for different groups of passwords, it has to take the reaction of
potential attackers into account. Therefore, the interaction between the
authentication server and the adversary can be modeled as a two-stage
Stackelberg game, and the problem of finding the optimal hash cost vector
reduces to computing the equilibrium of this game.
In the Stackelberg game, the authentication server (leader) moves first (stage
I); then the adversary follows (stage II). In stage I, the authentication
server commits to a hash cost vector $\vec{k}=\\{k_{\\_}1,\cdots k_{\\_}{\tau}\\}$
for all groups of passwords; in stage II, the adversary plays the optimal
strategy $(\pi,B)$ for cracking a random user’s password. Through the
interaction between the legitimate authentication server and the untargeted
adversary who runs an offline attack, there will emerge an equilibrium in
which no player in the game has the incentive to unilaterally change its
strategy. Thus, an equilibrium strategy profile
$\left\\{\vec{k}^{*},(\pi^{*},B^{*})\right\\}$ must satisfy
$\small\begin{cases}U_{\\_}{SRV}\left(\vec{k}^{*},v\right)\geq
U_{\\_}{SRV}\left(\vec{k},v\right),&\forall\vec{k}\in\mathcal{F}_{\\_}{C_{\\_}{max}},\\\
U_{\\_}{ADV}\left(v,\vec{k}^{*},(\pi^{*},B^{*})\right)\geq
U_{\\_}{ADV}\left(v,\vec{k}^{*},(\pi,B)\right),&\forall(\pi,B)\end{cases}$ (6)
Assuming that the grouping $G_{\\_}1,\ldots,G_{\\_}\tau$ of passwords is
fixed, computing the equilibrium strategy profile can be transformed into
solving the following optimization problem, where $\Pr(pw_{\\_}i)$,
$G_{\\_}1,\cdots,G_{\\_}{\tau}$, $C_{\\_}{max}$ are input parameters and
$(\pi^{*},B^{*})$ and $\vec{k}^{*}$ are variables.
$\displaystyle\min_{\\_}{\vec{k}^{*},\pi^{*},B^{*}}$
$\displaystyle\lambda(\pi^{*},B^{*})$ (7) s.t. $\displaystyle
U_{\\_}{ADV}\left(v,\vec{k},(\pi^{*},B^{*})\right)\geq
U_{\\_}{ADV}\left(v,\vec{k},(\pi,B)\right),~{}~{}\forall(\pi,B),$
$\displaystyle\sum_{\\_}{i=1}^{\tau}k_{\\_}i\cdot\Pr[pw\in G_{\\_}i]\leq
C_{\\_}{max},$ $\displaystyle k_{\\_}i\geq k_{\\_}{min},\mbox{~{}$\forall
i\leq\tau$}.$
The solution of the above optimization problem is the equilibrium of our
Stackelberg game. The first constraint implies that adversary will play
his/her utility optimizing strategy i.e., given that the defender’s action
$\vec{k}^{*}$ is fixed the utility of the strategy $(\pi^{*},B^{*})$ is at
least as large as any other strategy the attacker might follow. Thus, a
rational attacker will check the first $B^{*}$ passwords in the order
indicated by $\pi^{*}$ and then stop cracking passwords. The second constraint
is due to the resource limitations of the authentication server. The third
constraint sets a lower bound on the protection level. In order to tackle the first
constraint, we need to specify the optimal checking sequence and the optimal
number of passwords to be checked.
## 5 Attacker and Defender Strategies
In the first subsection, we give an efficient algorithm to compute the
attacker’s optimal strategy $(\pi^{*},B^{*})$ given the parameters $v$ and
$\vec{k}$. This algorithm in turn is an important subroutine in our algorithm
to find the best strategy $\vec{k}^{*}$ for the defender.
### 5.1 Adversary’s Best Response (Greedy)
In this section we show that the attacker’s optimal ordering $\pi^{*}$ can be
obtained by sorting passwords by their “bang-for-buck” ratio. In particular,
fixing an ordering $\pi$ we define the ratio
$r_{\\_}{\pi(i)}=\frac{p_{\\_}{\pi(i)}}{k(pw_{\\_}{\pi(i)})}$ which can be
viewed as the priority of checking password $pw_{\\_}{\pi(i)}$ i.e., the cost
will be $k(pw_{\\_}{\pi(i)})$ and the probability the password is correct is
$p_{\\_}{\pi(i)}$. Intuitively, the attacker’s optimal strategy is to order
passwords by their “bang-for-buck” ratio, guessing passwords with higher
checking priority first. Theorem 5.1 formalizes this intuition by proving that
the optimal checking sequence $\pi^{*}$ has no inversions.
We say a checking sequence $\pi$ has an _inversion_ with respect to $\vec{k}$
if for some pair $a>b$ we have $r_{\\_}{\pi(a)}>r_{\\_}{\pi(b)}$ i.e.,
$pw_{\\_}{\pi(b)}$ is scheduled to be checked before $pw_{\\_}{\pi(a)}$ even
though password $pw_{\\_}{\pi(a)}$ has a higher “bang-for-buck” ratio. Recall
that $pw_{\\_}{\pi(b)}$ is the $b$’th password checked in the ordering $\pi$.
The proof of Theorem 5.1 can be found in Appendix 0.B. Intuitively, we
argue that consecutive inversions can always be swapped without decreasing the
attacker’s utility.
###### Theorem 5.1
Let $(\pi^{*},B^{*})$ denote the attacker’s optimal strategy with respect to
hash cost parameters $\vec{k}$ and let $\pi$ be an ordering with no inversions
relative to $\vec{k}$ then
$U_{\\_}{ADV}\left(v,\vec{k},(\pi,B^{*})\right)\geq
U_{\\_}{ADV}\left(v,\vec{k},(\pi^{*},B^{*})\right)\ .$
Theorem 5.1 gives us an easy way to compute the attacker’s optimal ordering
$\pi^{*}$ over passwords i.e., by sorting passwords according to their “bang-
for-buck” ratio. It remains to find the attacker’s optimal guessing budget
$B^{*}$. As we previously mentioned the password distributions we consider can
be compressed by grouping passwords with equal probability into equivalence
sets. Once we have our cost vector $\vec{k}$ and have implemented
$\mathsf{GetHardness}()$ we can further partition password equivalence sets
such that passwords in each set additionally have the same bang-for-buck
ratio. Theorem 5.2 tells us that the optimal attacker strategy will either
guess all of the passwords in such an equivalence set $es_{\\_}j$ or none of
them. Thus, when we search for $B^{*}$ we only need to consider $n^{\prime}+1$
possible values of this parameter. We will use this observation to improve the
efficiency of our algorithm to compute the optimal attacker strategy.
###### Theorem 5.2
Let $(\pi^{*},B^{*})$ denote the attacker’s optimal strategy with respect to
hash cost parameters $\vec{k}$. Suppose that passwords can be partitioned into
$n^{\prime}$ equivalence sets $es_{\\_}1,\ldots,es_{\\_}{n^{\prime}}$ such that
passwords $pw_{\\_}a,pw_{\\_}b\in es_{\\_}i$ have the same probability and
hash cost i.e., $p_{\\_}a=p_{\\_}b=p^{i}$ and
$k(pw_{\\_}a)=k(pw_{\\_}b)=k^{i}$. Let $r^{i}=p^{i}/k^{i}$ denote the bang-
for-buck ratio of equivalence set $es_{\\_}i$ and assume that $r^{1}\geq
r^{2}\geq\ldots\geq r^{n^{\prime}}$ then
$B^{*}\in\left\\{0,|es_{\\_}1|,|es_{\\_}1|+|es_{\\_}2|,\cdots,\sum_{\\_}{i=1}^{n^{\prime}}|es_{\\_}i|\right\\}$.
The proof of both theorems can be found in Appendix 0.B. Theorem 5.2 implies
that when cracking users’ accounts the adversary increases the number of guesses
$B$ by the size of the next equivalence set (if there is a net profit in doing
so). Therefore, the attacker finds the optimal strategy $(\pi^{*},B^{*})$ with
Algorithm $\mathsf{BestRes}(v,\vec{k},D)$ in time $\mathcal{O}(n^{\prime}\log
n^{\prime})$ — see Algorithm 3 in Appendix 0.A. The running time is dominated
by the cost of sorting our $n^{\prime}$ equivalence sets.
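The greedy computation behind $\mathsf{BestRes}$ can be sketched as follows. For clarity this version accumulates the utility one guess at a time rather than in closed form per equivalence set, so the $\mathcal{O}(n^{\prime}\log n^{\prime})$ bound from the sorting step is not preserved; the function name and `(size, prob, cost)` encoding are our own.

```python
def best_response(v, sets):
    """Greedy best-response sketch. `sets` lists equivalence sets as
    (size, prob, cost) triples, where prob and cost apply to each password
    in the set. Returns (B*, lambda(pi*, B*)) for the attacker."""
    # Theorem 5.1: the optimal ordering sorts by bang-for-buck ratio p/k.
    sets = sorted(sets, key=lambda s: s[1] / s[2], reverse=True)
    best_u, best_B, best_lam = 0.0, 0, 0.0  # B = 0 means give up immediately
    utility, lam_so_far, B = 0.0, 0.0, 0
    for size, p, k in sets:
        for _ in range(size):
            # Marginal utility of one more guess: gain v*p, pay cost k
            # whenever all previous guesses failed (prob. 1 - lambda).
            utility += v * p - k * (1.0 - lam_so_far)
            lam_so_far += p
            B += 1
        # Theorem 5.2: only cutoffs at equivalence-set boundaries matter.
        if utility > best_u:
            best_u, best_B, best_lam = utility, B, lam_so_far
    return best_B, best_lam
```

With $v=10$ and two singleton sets of probability $0.5$ and $0.25$ at unit cost, both guesses are profitable and the attacker cracks $75\%$ of accounts; with $v=1$ and a single password of probability $0.125$, guessing loses money and the attacker gives up ($B^{*}=0$).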
### 5.2 The Optimal Strategy of Selecting Hash Cost Vector
In the previous section we showed that there is an efficient greedy algorithm
$\mathsf{BestRes}(v,\vec{k},D)$ which takes as input a cost vector $\vec{k}$,
a value $v$ and a (compressed) description of the password distribution $D$,
computes the attacker’s best response $(\pi^{*},B^{*})$ and outputs
$\lambda(\pi^{*},B^{*})$, the fraction of cracked passwords. Using this
algorithm $\mathsf{BestRes}(v,\vec{k},D)$ as a blackbox we can apply
derivative-free optimization to the optimization problem in equation (7) to
find a good hash cost vector $\vec{k}$ which minimizes the objective
$\lambda(\pi^{*},B^{*})$. There are many derivative-free optimization solvers
available in the literature [26]; generally they fall into two categories:
deterministic algorithms (such as Nelder-Mead) and evolutionary algorithms
(such as BITEOPT [1] and CMA-ES). We refer to our solver as
$\mathsf{OptHashCostVec}(v,C_{\\_}{max},k_{\\_}{min},D)$. The algorithm takes
as input the parameters of the optimization problem (i.e., password value $v$,
$C_{\\_}{max}$, $k_{\\_}{min}$, and a (compressed) description of the password
distribution $D$) and outputs an optimized hash cost vector $\vec{k}$.
During each iteration of $\mathsf{OptHashCostVec}(\cdot)$, a set of candidates
$\\{\vec{k}_{\\_}{c_{\\_}i}\\}$ is proposed; together they are referred to as the
_population_. For each candidate solution $\vec{k}_{\\_}{c_{\\_}i}$ we use our
greedy algorithm $\mathsf{BestRes}(v,\vec{k}_{\\_}{c_{\\_}i},D)$ to compute
the attacker’s best response $(\pi^{*},B^{*})$ i.e., fixing any feasible cost
vector $\vec{k}_{\\_}{c_{\\_}i}$ we can compute the corresponding value of the
objective function
$P_{\\_}{adv,\vec{k}_{\\_}{c_{\\_}i}}:=\sum_{\\_}{i=1}^{B^{*}}p_{\\_}{\pi^{*}(i)}$.
We record the corresponding success rate
$P_{\\_}{adv,\vec{k}_{\\_}{c_{\\_}i}}$ of the attacker as the candidate’s
“fitness”. At the end of each iteration, the population is updated according to
the fitness of its members; the update can be either a deterministic
transformation (Nelder-Mead) or a randomized evolution step (BITEOPT, CMA-ES).
When the iteration count reaches a pre-defined value $ite$, the best-fit member
$\vec{k}^{*}$ and its fitness $P_{\\_}{adv}^{*}$ are returned.
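A minimal stand-in for this defender-side loop is sketched below. Instead of Nelder-Mead or BITEOPT it uses plain random search over feasible cost vectors, which is enough to illustrate the structure: propose candidates, score each by the attacker's best-response crack rate, keep the best. The function names and the per-password group encoding are ours.

```python
import random

def cracked_fraction(v, k_vec, groups):
    """Fraction of passwords a rational attacker cracks (greedy best response).
    `groups[i]` is a list of password probabilities sharing cost k_vec[i]."""
    # Sort all passwords by bang-for-buck ratio p/k (Theorem 5.1).
    guesses = sorted(((p, k) for ps, k in zip(groups, k_vec) for p in ps),
                     key=lambda g: g[0] / g[1], reverse=True)
    util = lam = best_util = best_lam = 0.0
    for p, k in guesses:
        util += v * p - k * (1.0 - lam)  # marginal utility of this guess
        lam += p
        if util > best_util:
            best_util, best_lam = util, lam
    return best_lam

def opt_hash_cost_vec(v, c_max, k_min, groups, iters=2000, seed=0):
    """Random-search stand-in for a derivative-free solver. Searches cost
    vectors satisfying k_i >= k_min and sum_i k_i * Pr[G_i] <= c_max."""
    rng = random.Random(seed)
    mass = [sum(ps) for ps in groups]
    best_k, best_obj = None, float("inf")
    for _ in range(iters):
        k = [k_min + rng.random() * c_max for _ in groups]
        if sum(ki * m for ki, m in zip(k, mass)) > c_max:
            continue  # violates the server's amortized workload budget
        obj = cracked_fraction(v, k, groups)
        if obj < best_obj:
            best_k, best_obj = k, obj
    return best_k, best_obj
```

In a toy instance with one weak password ($p=0.5$) and one strong password ($p=0.1$), the search learns to raise the cost on the strong group (which it can still save) rather than on the weak password, which is cracked regardless.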
## 6 Empirical Analysis
In this section, we design experiments to analyze the effectiveness of DAHash.
At a high level we first fix (compressed) password distributions
$D_{\\_}{train}$ and $D_{\\_}{eval}$ based on empirical password datasets and
an implementation of $\mathsf{GetHardness}()$. Fixing the DAHash parameters
$v$, $C_{\\_}{max}$ and $k_{\\_}{min}$ we use our algorithm
$\mathsf{OptHashCostVec}(v,C_{\\_}{max},k_{\\_}{min},D_{\\_}{train})$ to
optimize the cost vector $\vec{k}^{*}$ and then we compute the attacker’s
optimal response $\mathsf{BestRes}(v,\vec{k}^{*},D_{\\_}{eval})$. By setting
$D_{\\_}{train}=D_{\\_}{eval}$ we can model the idealized scenario where the
defender has perfect knowledge of the password distribution. Similarly, by
setting $D_{\\_}{train}\neq D_{\\_}{eval}$ we can model the performance of
DAHash when the defender optimizes $\vec{k}^{*}$ without perfect knowledge of
the password distribution. In each experiment we fix
$k_{\\_}{min}=C_{\\_}{max}/10$ and we plot the fraction of cracked passwords
as the value to cost ratio $v/C_{\\_}{max}$ varies. We compare DAHash with
traditional password hashing fixing the hash cost to be $C_{\\_}{max}$ for
every password to ensure that the amortized server workload is equivalent.
Before presenting our results we first describe how we define the password
distributions $D_{\\_}{train}$ and $D_{\\_}{eval}$ and how we implement
$\mathsf{GetHardness}()$.
### 6.1 The Password Distribution
One of the challenges in evaluating DAHash is that the exact distribution over
user passwords is unknown. However, there are many empirical password datasets
available due to password breaches. We describe two methods for deriving
password distributions from password datasets.
#### 6.1.1 Empirical Password Datasets
We consider nine empirical password datasets (along with their size $N$):
Bfield ($0.54$ million), Brazzers ($0.93$ million), Clixsense ($2.2$ million),
CSDN ($6.4$ million), LinkedIn ($174$ million), Neopets ($68.3$ million),
RockYou ($32.6$ million), 000webhost ($153$ million) and Yahoo! ($69.3$
million). Plaintext passwords are available for all datasets except for the
differentially private LinkedIn [15] and Yahoo! [8, 6] frequency corpuses
which intentionally omit passwords. With the exception of the Yahoo! frequency
corpus all of the datasets are derived from password breaches. The
differentially private LinkedIn dataset is derived from cracked LinkedIn passwords
333The LinkedIn dataset is derived from 174 million (out of 177.5 million)
cracked password hashes which were cracked by KoreLogic [15]. Thus, the
dataset omits $2\%$ of uncracked passwords. Another caveat is that the
LinkedIn dataset only contains $164.6$ million unique e-mail addresses so
there are some e-mail addresses with multiple associated password hashes..
Formally, given $N$ user accounts $u_{\\_}1,\ldots,u_{\\_}N$ a dataset of
passwords is a list
$D=pw_{\\_}{u_{\\_}1},\ldots,pw_{\\_}{u_{\\_}N}\in\mathcal{P}$ of the
passwords each user selected. We can view each of these passwords
$pw_{\\_}{u_{\\_}i}$ as being sampled from some unknown distribution
$D_{\\_}{real}$.
#### 6.1.2 Empirical Distribution
Given a dataset of $N$ user passwords the corresponding password frequency
list is simply a list of numbers $f_{\\_}1\geq f_{\\_}2\geq\ldots$ where
$f_{\\_}i$ is the number of users who selected the $i$th most popular password
in the dataset — note that $\sum_{\\_}{i}f_{\\_}i=N$. In the empirical
password distribution we define the probability of the $i$th most likely
password to be $\hat{p}_{\\_}i=f_{\\_}i/N$. In our experiments using the
empirical password distribution we will set $D_{\\_}{train}=D_{\\_}{eval}$
i.e., we assume that the empirical password distribution is the real password
distribution and that the defender knows this distribution.
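As a concrete illustration, the empirical distribution $\hat{p}_{\\_}i=f_{\\_}i/N$ can be computed directly from a list of plaintext passwords. The function name below is our own, not from the paper:

```python
from collections import Counter

def empirical_distribution(passwords):
    """Build the empirical password distribution from a dataset.

    Returns (probs, freqs): probabilities p_i = f_i / N for the i-th most
    popular password, plus the frequency list f_1 >= f_2 >= ...
    """
    counts = Counter(passwords)
    n = len(passwords)
    freqs = sorted(counts.values(), reverse=True)  # f_1 >= f_2 >= ...
    probs = [f / n for f in freqs]                 # p_i = f_i / N
    return probs, freqs
```

For example, on a six-password toy dataset containing "123456" three times, "password" twice and "letmein" once, this yields $f=(3,2,1)$ and $\hat{p}=(1/2,1/3,1/6)$.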
In our experiments we implement $\mathsf{GetHardness}()$ by partitioning the
password dataset $D_{\\_}{train}$ into $\tau$ groups
$G_{\\_}1,\ldots,G_{\\_}\tau$ using $\tau-1$ frequency thresholds
$t_{\\_}1>\ldots>t_{\\_}{\tau-1}$ i.e., $G_{\\_}1=\\{i:f_{\\_}i\geq
t_{\\_}1\\}$, $G_{\\_}{j}=\\{i:t_{\\_}{j-1}>f_{\\_}i\geq t_{\\_}j\\}$ for
$1<j<\tau$ and $G_{\\_}\tau=\\{i:f_{\\_}i<t_{\\_}{\tau-1}\\}$. Fixing a hash
cost vector $\vec{k}=(k_{\\_}1,\ldots,k_{\\_}{\tau})$ we will assign passwords
in group $G_{\\_}j$ to have cost $k_{\\_}j$ i.e.,
$\mathsf{GetHardness}(pw)$$=k_{\\_}j$ for $pw\in G_{\\_}j$. We pick the
thresholds to ensure that the probability mass $Pr[G_{\\_}j]=\sum_{\\_}{i\in
G_{\\_}j}f_{\\_}i/N$ of each group is approximately balanced (without
separating passwords in an equivalence set). While there are certainly other
ways that $\mathsf{GetHardness}()$ could be implemented (e.g., balancing
number of passwords/equivalence sets in each group) we found that balancing
the probability mass was most effective.
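The threshold-based grouping can be sketched as a greedy pass over the sorted frequency list that closes a group once its probability mass reaches $1/\tau$, never splitting a run of equal frequencies (an equivalence set). This is a simplified stand-in for the paper's threshold selection, not its exact implementation:

```python
def balanced_partition(freqs, n, tau):
    """Split a frequency list f_1 >= f_2 >= ... into tau groups of roughly
    equal probability mass, never separating passwords with the same
    frequency (an equivalence set).  Returns tau lists of indices."""
    target = 1.0 / tau
    groups, current, mass = [], [], 0.0
    i = 0
    while i < len(freqs):
        # consume the whole run of equal frequencies at once
        j = i
        while j < len(freqs) and freqs[j] == freqs[i]:
            j += 1
        current.extend(range(i, j))
        mass += sum(freqs[i:j]) / n
        # close the group once it has enough mass (keep the last group open)
        if mass >= target and len(groups) < tau - 1:
            groups.append(current)
            current, mass = [], 0.0
        i = j
    groups.append(current)
    return groups
```

On a toy frequency list $f=(4,2,2,1,1,1,1)$ with $N=12$ and $\tau=3$ this produces three groups each of mass exactly $1/3$.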
Good-Turing Frequency Estimation. One disadvantage of using the empirical
distribution is that it can often overestimate the success rate of an
adversary. For example, let
$\hat{\lambda}_{\\_}B:=\sum_{\\_}{i=1}^{B}\hat{p}_{\\_}i$ and $N^{\prime}\leq
N$ denote the number of distinct passwords in our dataset then we will always
have $\hat{\lambda}_{\\_}{N^{\prime}}:=\sum_{\\_}{i\leq
N^{\prime}}\hat{p}_{\\_}i=1$ which is inaccurate whenever
$N\leq\left|\mathcal{P}\right|$. However, when $B\ll N$ we will have
$\hat{\lambda}_{\\_}B\approx\lambda_{\\_}B$ i.e., the empirical distribution
will closely match the real distribution. Thus, we will use the empirical
distribution to evaluate the performance of DAHash when the value to cost
ratio $v/C_{\\_}{max}$ is smaller (e.g., $v/C_{\\_}{max}\ll 10^{8}$) and we
will highlight uncertain regions of the curve using Good-Turing frequency
estimation.
Let $N_{\\_}f=|\\{i:f_{\\_}i=f\\}|$ denote the number of distinct passwords in
our dataset that occur exactly $f$ times and let
$B_{\\_}f=\sum_{\\_}{i>f}N_{\\_}i$ denote the number of distinct passwords
that occur more than $f$ times. Finally, let
$E_{\\_}f:=|\lambda_{\\_}{B_{\\_}f}-\hat{\lambda}_{\\_}{B_{\\_}f}|$
denote the error of our estimate for $\lambda_{\\_}{B_{\\_}{f}}$, the total
probability of the top $B_{\\_}{f}$ passwords in the real distribution. If our
dataset consists of $N$ independent samples from an unknown distribution then
Good-Turing frequency estimation tells us that the total probability mass of
all passwords that appear exactly $f$ times is approximately
$U_{\\_}f:=(f+1)N_{\\_}{f+1}/N$ e.g., the total probability mass of unseen
passwords is $U_{\\_}0=N_{\\_}1/N$. This would imply that
${\lambda}_{\\_}{B_{\\_}f}\geq
1-\sum_{\\_}{j=0}^{f}U_{\\_}j=1-\sum_{\\_}{j=0}^{f}\frac{(j+1)N_{\\_}{j+1}}{N}$
and $E_{\\_}f\leq U_{\\_}f$.
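The bounds $U_{\\_}f=(f+1)N_{\\_}{f+1}/N$ are straightforward to compute from the frequency list; a minimal sketch:

```python
from collections import Counter

def good_turing_bounds(freqs, n, max_f=10):
    """Good-Turing error upper bounds U_f = (f+1) * N_{f+1} / n, where
    N_f is the number of distinct passwords appearing exactly f times."""
    n_f = Counter(freqs)  # N_f: how many distinct passwords occur f times
    return [(f + 1) * n_f.get(f + 1, 0) / n for f in range(max_f + 1)]
```

For instance, on the frequency list $f=(3,2,2,1,1,1,1)$ with $N=11$ we get $U_{\\_}0=4/11$ (mass of unseen passwords), $U_{\\_}1=4/11$ and $U_{\\_}2=3/11$.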
Table 1 shows our error upper bound $U_{\\_}f$ for $0\leq f\leq 10$ for all 9
datasets. Fixing a target error threshold $\epsilon$ we define
$f_{\\_}{\epsilon}=\min\\{f:U_{\\_}f\leq\epsilon\\}$ i.e., the minimum
frequency such that the error is smaller than $\epsilon$. In our experiments
we focus on error thresholds $\epsilon\in\\{0.1,0.01\\}$. For example, for the
Yahoo! (resp. Bfield) dataset we have $f_{\\_}{0.1}=1$ (resp. $f_{\\_}{0.1}=2$)
and $f_{\\_}{0.01}=6$ (resp. $f_{\\_}{0.01}=5$). As soon as passwords with
frequency at most $f_{\\_}{0.1}$ (resp. $f_{\\_}{0.01}$) start to get cracked
we highlight the corresponding points on our plots in red (resp. yellow).
Table 1: Error Upper Bounds $U_{\\_}f$ for Different Password Datasets
 | Bfield | Brazzers | Clixsense | CSDN | LinkedIn | Neopets | RockYou | 000webhost | Yahoo!
---|---|---|---|---|---|---|---|---|---
$U_{\\_}0$ | 0.69 | 0.531 | 0.655 | 0.557 | 0.123 | 0.315 | 0.365 | 0.59 | 0.425
$U_{\\_}1$ | 0.101 | 0.126 | 0.095 | 0.092 | 0.321 | 0.093 | 0.081 | 0.124 | 0.065
$U_{\\_}2$ | 0.036 | 0.054 | 0.038 | 0.034 | 0.043 | 0.051 | 0.036 | 0.055 | 0.031
$U_{\\_}3$ | 0.02 | 0.03 | 0.023 | 0.018 | 0.055 | 0.034 | 0.022 | 0.034 | 0.021
$U_{\\_}4$ | 0.014 | 0.02 | 0.016 | 0.012 | 0.018 | 0.025 | 0.017 | 0.022 | 0.015
$U_{\\_}5$ | 0.01 | 0.014 | 0.011 | 0.008 | 0.021 | 0.02 | 0.013 | 0.016 | 0.012
$U_{\\_}6$ | 0.008 | 0.011 | 0.009 | 0.006 | 0.011 | 0.016 | 0.011 | 0.012 | 0.01
$U_{\\_}7$ | 0.007 | 0.01 | 0.007 | 0.005 | 0.011 | 0.013 | 0.01 | 0.009 | 0.009
$U_{\\_}8$ | 0.006 | 0.008 | 0.006 | 0.004 | 0.008 | 0.011 | 0.009 | 0.008 | 0.008
$U_{\\_}9$ | 0.005 | 0.007 | 0.005 | 0.004 | 0.007 | 0.01 | 0.008 | 0.006 | 0.007
$U_{\\_}{10}$ | 0.004 | 0.007 | 0.004 | 0.003 | 0.006 | 0.009 | 0.007 | 0.005 | 0.006
#### 6.1.3 Monte Carlo Distribution
As we observed previously the empirical password distribution can be highly
inaccurate when $v/C_{\\_}{max}$ is large. Thus, we use a different approach
to evaluate the performance of DAHash when $v/C_{\\_}{max}$ is large. In
particular, we subsample passwords, obtain guessing numbers for each of these
passwords and fit our distribution to the corresponding guessing curve. We use
the following procedure to derive a distribution: (1) subsample $s$ passwords
$D_{\\_}s$ from dataset $D$ with replacement; (2) for each subsampled password
$pw\in D_{\\_}s$ we use the Password Guessing Service [28] to obtain a
guessing number $\\#\mathsf{guessing}(pw)$, which uses Monte Carlo methods
[12] to estimate how many guesses an attacker would need to crack $pw$
(Footnote: The Password Guessing Service [28] gives multiple different
guessing numbers for each password based on different sophisticated cracking
models, e.g., Markov, PCFG, Neural Networks. We follow the suggestion of the
authors [28] and use the minimum guessing number over all automated approaches
as our final estimate.); (3) we fix $200$ guessing thresholds
$t_{\\_}0<t_{\\_}1<\ldots<t_{\\_}{199}$ with $t_{\\_}0:=0$,
$t_{\\_}1:=15$, $t_{\\_}i-t_{\\_}{i-1}=1.15^{i+25}$, and
$t_{\\_}{199}=\max_{\\_}{pw\in D_{\\_}s}\\{\\#\mathsf{guessing}(pw)\\}$. (4)
For each $i\leq 199$ we compute $g_{\\_}i$, the number of samples $pw\in
D_{\\_}s$ with $\\#\mathsf{guessing}(pw)\in[t_{\\_}{i-1},t_{\\_}i)$. (5) We
output a compressed distribution with $200$ equivalence sets using histogram
density i.e., the $i$th equivalence set contains $t_{\\_}{i}-t_{\\_}{i-1}$
passwords each with probability
$\frac{g_{\\_}i}{s\times(t_{\\_}i-t_{\\_}{i-1})}$.
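Steps (3)–(5) amount to bucketing guessing numbers into a histogram and spreading each bucket's mass uniformly over $t_{\\_}i-t_{\\_}{i-1}$ synthetic passwords. A sketch with the bucketing done via binary search (the threshold list in the example is a toy, not the paper's 200-threshold schedule):

```python
import bisect

def compress_distribution(guess_nums, thresholds):
    """Compress Monte Carlo guessing numbers into a histogram distribution.

    thresholds: increasing list t_0 < t_1 < ... < t_m.  Bucket i covers
    guessing numbers in [t_{i-1}, t_i) and is modelled as t_i - t_{i-1}
    equally likely passwords of total mass g_i / s.
    Returns a list of (equivalence_set_size, per_password_prob) pairs.
    """
    s = len(guess_nums)
    g = [0] * (len(thresholds) - 1)
    for gn in guess_nums:
        i = bisect.bisect_right(thresholds, gn) - 1   # bucket index
        i = min(max(i, 0), len(g) - 1)                # clamp t_m into last bucket
        g[i] += 1
    out = []
    for i in range(len(g)):
        size = thresholds[i + 1] - thresholds[i]
        out.append((size, g[i] / (s * size)))
    return out
```

On four samples with guessing numbers $\\{5,5,50,99\\}$ and toy thresholds $(0,10,100)$ this yields two equivalence sets of sizes 10 and 90, each carrying half of the total mass.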
In our experiments we repeat this process twice with $s=12,500$ subsamples to
obtain two password distributions $D_{\\_}{train}$ and $D_{\\_}{eval}$. One
advantage of this approach is that it allows us to evaluate the performance of
DAHash against a state-of-the-art password cracker when the ratio
$v/C_{\\_}{max}$ is large. The disadvantage is that the distributions
$D_{\\_}{train}$ and $D_{\\_}{eval}$ we extract are based on current
state-of-the-art password cracking models. It is possible that we would
optimize our DAHash parameters with respect to the wrong distribution if an
attacker develops an improved password cracking model in the future.
Implementing $\mathsf{GetHardness}()$ for Monte Carlo Distributions. For Monte
Carlo distributions $\mathsf{GetHardness}(pw)$ depends on the guessing number
$\\#\mathsf{guessing}(pw)$. In particular, we fix threshold points
$x_{\\_}{1}>\ldots>x_{\\_}{\tau-1}$ and (implicitly) partition passwords into
$\tau$ groups $G_{\\_}1,\ldots,G_{\\_}\tau$ using these thresholds i.e.,
$G_{\\_}i=\\{pw~{}:~{}x_{\\_}{i-1}\geq\\#\mathsf{guessing}(pw)>x_{\\_}{i}\\}$.
Thus, $\mathsf{GetHardness}(pw)$ computes $\\#\mathsf{guessing}(pw)$ and
assigns hash cost $k_{\\_}i$ if $pw\in G_{\\_}i$. As before the thresholds
$x_{\\_}1,\ldots,x_{\\_}{\tau-1}$ are selected to (approximately) balance the
probability mass in each group.
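Given precomputed thresholds, this version of $\mathsf{GetHardness}$ reduces to a scan over the decreasing thresholds $x_{\\_}1>\ldots>x_{\\_}{\tau-1}$; the sketch below (function name ours) assumes the guessing number has already been obtained from the guessing service:

```python
def get_hardness_mc(guessing_number, x_thresholds, costs):
    """Map a guessing number to a hash cost k_i.

    x_thresholds: decreasing list x_1 > ... > x_{tau-1}.  Following the
    threshold convention in the text, group G_i holds passwords with
    x_{i-1} >= #guessing(pw) > x_i (x_0 implicitly infinite).
    costs: the hash cost vector (k_1, ..., k_tau).
    """
    for i, x in enumerate(x_thresholds):
        if guessing_number > x:
            return costs[i]
    return costs[-1]  # guessing number at or below the last threshold
```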
Figure 1: Adversary Success Rate vs $v/C_{\\_}{max}$ for Empirical
Distributions. Panels: (a) Bfield, (b) Brazzers, (c) Clixsense, (d) CSDN, (e)
LinkedIn, (f) Neopets, (g) RockYou, (h) 000webhost, (i) Yahoo!. Each panel
plots the fraction of cracked passwords against $v/C_{\\_}{max}\in[10^{3},10^{8}]$
for the deterministic baseline, $\tau=3$ and $\tau=5$, with shaded uncertain
regions.
The red (resp. yellow) shaded areas denote uncertain regions where the
empirical distribution might diverge from the real distribution, i.e.,
$U_{\\_}i\geq 0.1$ (resp. $U_{\\_}i\geq 0.01$).
Figure 2: Adversary Success Rate vs $v/C_{\\_}{max}$ for Monte Carlo
Distributions. Panels: (a) Bfield, (b) Brazzers, (c) Clixsense, (d) CSDN, (e)
Neopets, (f) 000webhost. Each panel plots the fraction of cracked passwords
against $v/C_{\\_}{max}\in[10^{3},10^{12}]$ for the deterministic baseline,
$\tau=3$ and $\tau=5$.
Figure 3: Hash Costs and Cracked Fraction per Group for RockYou (Empirical
Distribution). Panel (a) plots the optimal hash cost vector
$\vec{k}^{*}=(k_{\\_}1,k_{\\_}2,k_{\\_}3)$ against $v/C_{\\_}{max}$; panel (b)
plots the fraction of cracked weak/medium/strong passwords against
$v/C_{\\_}{max}$.
### 6.2 Experiment Results
Figure 1 evaluates the performance of DAHash on the empirical distributions
derived from our datasets. To generate each point on the plot we first fix
$v/C_{\\_}{max}\in\\{i\times 10^{2+j}:1\leq i\leq 9,0\leq j\leq 5\\}$, use
$\mathsf{OptHashCostVec}()$ to tune our DAHash parameters $\vec{k}^{*}$ and
then compute the corresponding success rate for the attacker. The experiment
is repeated for the empirical distributions derived from our $9$ different
datasets. In each experiment we group password equivalence sets into $\tau$
groups ($\tau\in\\{1,3,5\\}$) $G_{\\_}1,\ldots,G_{\\_}\tau$ of (approximately)
equal probability mass. In addition, we set $k_{\\_}{min}=0.1C_{\\_}{max}$ and
the number of BITEOPT iterations to $10{,}000$. The yellow (resp. red) regions
correspond to uncertain zones where we expect that our results for the
empirical distribution might differ from reality by $1\%$ (resp. $10\%$).
Figure 2 evaluates the performance of DAHash for the Monte Carlo distributions
we extract using the Password Guessing Service. For each dataset we extract
two distributions $D_{\\_}{train}$ and $D_{\\_}{eval}$. For each
$v/C_{\\_}{max}\in\\{j\times 10^{i}:~{}3\leq i\leq 11,j\in\\{2,4,6,8\\}\\}$ we
obtain the corresponding optimal hash cost $\vec{k}^{*}$ using
$\mathsf{OptHashCostVec}()$ with the distribution $D_{\\_}{train}$ as input.
Then we compute success rate of attacker on $D_{\\_}{eval}$ with the same cost
vector $\vec{k}^{*}$. We repeated this for 6 plaintext datasets: Bfield,
Brazzers, Clixsense, CSDN, Neopets and 000webhost for which we obtained
guessing numbers from the Password Guessing Service.
Figures 1 and 2 plot $P_{\\_}{ADV}$ vs $v/C_{\\_}{max}$ for each dataset under
the empirical distribution and the Monte Carlo distribution, respectively.
Each sub-figure contains three separate lines corresponding to
$\tau\in\\{1,3,5\\}$. We first remark that $\tau=1$ corresponds
to the status quo where all passwords are assigned the same cost parameter
i.e., $\mathsf{GetHardness}(pw_{\\_}u)=C_{\\_}{max}$. When $\tau=3$ we can
interpret our mechanism as classifying all passwords into three groups (e.g.,
weak, medium and strong) based on their strength. The fine-grained case
$\tau=5$ has more strength levels into which passwords can be placed.
DAHash Advantage: For empirical distributions the improvement peaks in the
uncertain region of the plot. Even ignoring the uncertain region the
improvement is still as large as 15%. For Monte Carlo distributions we find up
to a 20% improvement, i.e., $20\%$ of user passwords could be saved with the
DAHash mechanism.
Figure 3a explores how the hash cost vector $\vec{k}$ is allocated between
weak/medium/strong passwords as $v/C_{\\_}{max}$ varies (using the RockYou
empirical distribution with $\tau=3$). Similarly, Figure 3b plots the fraction
of weak/medium/strong passwords being cracked as the adversary value
increases. We discuss each of these figures in more detail below.
#### 6.2.1 How Many Groups ($\tau$)?
We explore the impact of $\tau$ on the percentage of passwords that a rational
adversary will crack. Since the untargeted adversary attacks all user accounts
in the very same way, the percentage of passwords the adversary will crack is
the probability that the adversary succeeds in cracking a random user’s
account, namely $P_{\\_}{ADV}^{*}$. Intuitively, a partition into more groups
can grant better protection for passwords, since the authentication server can
then handle passwords with more precision and better match the protection
level to password strength. We observe in Figures 1 and 2 that most of the
time the success rate reduction for $\tau=5$ is larger than for $\tau=3$.
However, the marginal benefit plummets: changing $\tau$ from 3 to 5 does not
bring much additional improvement. A positive interpretation of this
observation is that we can glean most of the benefits of our differentiated
hash cost mechanism without making the $\mathsf{GetHardness}()$ procedure too
complicated, e.g., we only need to partition passwords into three groups:
weak, medium and strong.
Our hashing mechanism does not overprotect passwords that are too weak to
withstand an offline attack when the adversary value is sufficiently high, nor
passwords that are strong enough that a rational offline attacker loses
interest in cracking them. The effort previously spent protecting such
passwords can be reallocated to protecting “savable” passwords at a given
$v/C_{\\_}{max}$. Thus, our DAHash algorithm beats the traditional hashing
algorithm without increasing the server’s expected workload, i.e., the cost
parameters $\vec{k}$ are tuned such that the expected workload is always
$C_{\\_}{max}$ whether $\tau=1$ (no differentiated costs), $\tau=3$
(differentiated costs) or $\tau=5$ (finer-grained differentiated costs). We
find that the defender can reduce the percentage of cracked passwords
$P_{\\_}{ADV}^{*}$ without increasing the workload $C_{\\_}{max}$.
#### 6.2.2 Understanding the Optimal Allocation $\vec{k}^{*}$
We next discuss how our mechanism re-allocates the cost parameters across
$\tau=3$ different groups as $v/C_{\\_}{max}$ increases (see Figure 3a). At
the very beginning $v/C_{\\_}{max}$ is small enough that a rational attacker
gives up without cracking any password even if the authentication server
assigns equal hash costs to the different groups of passwords, e.g.,
$k_{\\_}1=k_{\\_}2=k_{\\_}3=C_{\\_}{max}$.
As the adversary value increases the Algorithm $\mathsf{OptHashCostVec}()$
starts to reallocate $\vec{k}$ so that most of the authentication server’s
effort is used to protect the weakest passwords in group $G_{\\_}1$ while
minimal key-stretching effort is used to protect the stronger passwords in
groups $G_{\\_}2$ and $G_{\\_}3$. In particular, we have $k_{\\_}1\approx
3C_{\\_}{max}$ for much of the interval $v/C_{\\_}{max}\in[4\times 10^{3},10^{5}]$
while $k_{\\_}2,k_{\\_}3$ are comparatively small in this interval, e.g.,
$k_{\\_}2,k_{\\_}3\approx 0.1\times C_{\\_}{max}$. However, as the ratio
$v/C_{\\_}{max}$ continues to increase from $10^{6}$ to $10^{7}$ Algorithm
$\mathsf{OptHashCostVec}()$ once again begins to reallocate $\vec{k}$ to place
most of the weight on $k_{\\_}2$ as it is now necessary to protect passwords
in group $G_{\\_}2$. Over the same interval the value of $k_{\\_}1$ decreases
sharply as it is no longer possible to protect all of the weakest passwords in
group $G_{\\_}1$.
As $v/C_{\\_}{max}$ continues to increase Algorithm
$\mathsf{OptHashCostVec}()$ once again reallocates $\vec{k}$ to place most of
the weight on $k_{\\_}3$ as it is now necessary to protect the strongest
passwords in group $G_{\\_}3$ (and no longer possible to protect all of the
medium-strength passwords in group $G_{\\_}2$). Finally, once $v/C_{\\_}{max}$
gets too large it is no longer possible to protect passwords in any group, so
Algorithm $\mathsf{OptHashCostVec}()$ reverts back to equal hash costs, i.e.,
$k_{\\_}1=k_{\\_}2=k_{\\_}3=C_{\\_}{max}$.
Figures 3a and 3b tell a complementary story. Weak passwords are cracked first
as $v/C_{\\_}{max}$ increases, followed by the passwords of medium strength,
while the strong passwords hold out until $v/C_{\\_}{max}$ finally becomes
sufficiently high. For example, in Figure 3b we see that initially the
mechanism is able to protect all passwords: weak, medium and strong. However,
as $v/C_{\\_}{max}$ increases from $10^{5}$ to $10^{6}$ it is no longer
possible to protect the weakest passwords in group $G_{\\_}1$. Up until
$v/C_{\\_}{max}=10^{6}$ the mechanism is able to protect all medium-strength
passwords in group $G_{\\_}2$, but as $v/C_{\\_}{max}$ crosses the
$10^{7}$ threshold it is no longer feasible to protect passwords in group
$G_{\\_}2$. The strongest passwords in group $G_{\\_}3$ are completely
protected until $v/C_{\\_}{max}$ reaches $2\times 10^{7}$, at which point it
is no longer possible to protect any passwords because the adversary value is
too high.
Viewed together with Figure 3a, we observe that it is only when weak passwords
are about to be cracked completely (when $v/C_{\\_}{max}$ is around
$7\times 10^{5}$) that the authentication server begins to shift effort to
protecting medium passwords. The shift of protection effort continues as the
adversary value increases until medium-strength passwords are about to be
cracked en masse. The same observation applies between medium and strong
passwords. While we used the plots from the RockYou dataset for this
discussion, the same trends also hold for the other datasets (though concrete
thresholds may differ).
Robustness. We remark that in Figures 1 and 2 the optimal hash cost vector
$\vec{k}$ is not highly sensitive to small changes in the adversary value $v$
(fluctuations in $\vec{k}$ only become apparent on a semilog x-axis).
Therefore, DAHash may still be useful even when it is not possible to obtain a
precise estimate of $v$ or when the attacker’s value $v$ varies slightly over
time.
Incentive Compatibility. One potential concern with assigning different hash
cost parameters to different passwords is that we might inadvertently
incentivize a user to select a weaker password. In particular, the user might
prefer a weaker password $pw_{\\_}i$ over $pw_{\\_}j$
($\Pr[pw_{\\_}i]>\Pr[pw_{\\_}j]$) if s/he believes that the attacker will
guess $pw_{\\_}j$ before $pw_{\\_}i$, e.g., because the hash cost parameter
$k(pw_{\\_}j)$ is so small that $r_{\\_}j>r_{\\_}i$. We could directly
encode incentive compatibility into our constraints on the feasible range of
defender strategies $\mathcal{F}_{\\_}{C_{\\_}{max}}$, i.e., we could
explicitly add the constraint that $r_{\\_}j\leq r_{\\_}i$ whenever
$\Pr[pw_{\\_}i]\leq\Pr[pw_{\\_}j]$. However, Figure 3b suggests that this is
not necessary. Observe that the attacker does not crack any medium/high
strength passwords until all weak passwords have been cracked. Similarly, the
attacker does not crack any high strength passwords until all medium strength
passwords have been cracked.
## 7 Conclusions
We introduced the notion of DAHash, a mechanism in which the cost parameters
assigned to distinct passwords may differ. This allows the defender
to focus key-stretching effort primarily on passwords where the effort will
influence the decisions of a rational attacker, who will quit attacking as
soon as expected costs exceed expected rewards. We presented a Stackelberg
game model to capture the essentials of the interaction between the legitimate
authentication server (leader) and an untargeted offline attacker (follower).
In the game the defender (leader) commits to the hash cost parameters
$\vec{k}$ for different passwords and the attacker responds in a
utility-optimizing manner. We presented a highly efficient algorithm to
provably compute the attacker’s best response given a password distribution.
Using this algorithm as a subroutine we used an evolutionary algorithm to find
a good strategy $\vec{k}$ for the defender. Finally, we analyzed the
performance of our differentiated cost password hashing algorithm using
empirical password datasets. Our experiments indicate that DAHash can
dramatically reduce the fraction of passwords that would be cracked in an
untargeted offline attack in comparison with the traditional approach, e.g.,
by up to $15\%$ under empirical distributions and $20\%$ under Monte Carlo
distributions. This gain comes without increasing the expected workload of the
authentication server. Our mechanism is fully compatible with modern
memory-hard password hashing algorithms such as SCRYPT [24], Argon2id [4] and
DRSample [3].
## Acknowledgment
The work was supported by the National Science Foundation under grants CNS
#1704587, CNS #1755708 and CNS #1931443. The authors wish to thank Matteo
Dell‘Amico (shepherd) and other anonymous reviewers for constructive feedback
which helped improve the paper.
## References
* [1] Biteopt algorithm. https://github.com/avaneev/biteopt
* [2] Allodi, L.: Economic factors of vulnerability trade and exploitation. In: Thuraisingham, B.M., Evans, D., Malkin, T., Xu, D. (eds.) ACM CCS 2017. pp. 1483–1499. ACM Press, Dallas, TX, USA (Oct 31 – Nov 2, 2017). https://doi.org/10.1145/3133956.3133960
* [3] Alwen, J., Blocki, J., Harsha, B.: Practical graphs for optimal side-channel resistant memory-hard functions. In: Thuraisingham, B.M., Evans, D., Malkin, T., Xu, D. (eds.) ACM CCS 2017. pp. 1001–1017. ACM Press, Dallas, TX, USA (Oct 31 – Nov 2, 2017). https://doi.org/10.1145/3133956.3134031
* [4] Biryukov, A., Dinu, D., Khovratovich, D.: Argon2: new generation of memory-hard functions for password hashing and other applications. In: Security and Privacy (EuroS&P), 2016 IEEE European Symposium on. pp. 292–302. IEEE (2016)
* [5] Blocki, J., Datta, A.: CASH: A cost asymmetric secure hash algorithm for optimal password protection. In: IEEE 29th Computer Security Foundations Symposium. pp. 371–386 (2016)
* [6] Blocki, J., Datta, A., Bonneau, J.: Differentially private password frequency lists. In: NDSS 2016. The Internet Society, San Diego, CA, USA (Feb 21–24, 2016)
* [7] Blocki, J., Harsha, B., Zhou, S.: On the economics of offline password cracking. In: 2018 IEEE Symposium on Security and Privacy. pp. 853–871. IEEE Computer Society Press, San Francisco, CA, USA (May 21–23, 2018). https://doi.org/10.1109/SP.2018.00009
* [8] Bonneau, J.: The science of guessing: Analyzing an anonymized corpus of 70 million passwords. In: 2012 IEEE Symposium on Security and Privacy. pp. 538–552. IEEE Computer Society Press, San Francisco, CA, USA (May 21–23, 2012). https://doi.org/10.1109/SP.2012.49
* [9] Boyen, X.: Halting password puzzles: Hard-to-break encryption from human-memorable keys. In: Provos, N. (ed.) USENIX Security 2007. USENIX Association, Boston, MA, USA (Aug 6–10, 2007)
* [10] Castelluccia, C., Chaabane, A., Dürmuth, M., Perito, D.: When privacy meets security: Leveraging personal information for password cracking. arXiv preprint arXiv:1304.6584 (2013)
* [11] Castelluccia, C., Dürmuth, M., Perito, D.: Adaptive password-strength meters from Markov models. In: NDSS 2012. The Internet Society, San Diego, CA, USA (Feb 5–8, 2012)
* [12] Dell’Amico, M., Filippone, M.: Monte carlo strength evaluation: Fast and reliable password checking. In: Ray, I., Li, N., Kruegel, C. (eds.) ACM CCS 2015\. pp. 158–169. ACM Press, Denver, CO, USA (Oct 12–16, 2015). https://doi.org/10.1145/2810103.2813631
* [13] Dodis, Y., Guo, S., Katz, J.: Fixing cracks in the concrete: Random oracles with auxiliary input, revisited. In: Coron, J., Nielsen, J.B. (eds.) EUROCRYPT 2017, Part II. LNCS, vol. 10211, pp. 473–495. Springer, Heidelberg, Germany, Paris, France (Apr 30 – May 4, 2017). https://doi.org/10.1007/978-3-319-56614-616
* [14] Fossi, M., Johnson, E., Turner, D., Mack, T., Blackbird, J., McKinney, D., Low, M.K., Adams, T., Laucht, M.P., Gough, J.: Symantec report on the underground economy (November 2008), retrieved 1/8/2013.
* [15] Harsha, B., Morton, R., Blocki, J., Springer, J., Dark, M.: Bicycle attacks considered harmful: Quantifying the damage of widespread password length leakage. Computers & Security 100, 102068 (2021). https://doi.org/https://doi.org/10.1016/j.cose.2020.102068, http://www.sciencedirect.com/science/article/pii/S0167404820303412
* [16] Herley, C., Florêncio, D.: Nobody sells gold for the price of silver: Dishonesty, uncertainty and the underground economy. Economics of information security and privacy pp. 33–53 (2010)
* [17] Kaliski, B.: Pkcs# 5: Password-based cryptography specification version 2.0 (2000)
* [18] Kelley, P.G., Komanduri, S., Mazurek, M.L., Shay, R., Vidas, T., Bauer, L., Christin, N., Cranor, L.F., Lopez, J.: Guess again (and again and again): Measuring password strength by simulating password-cracking algorithms. In: 2012 IEEE Symposium on Security and Privacy. pp. 523–537. IEEE Computer Society Press, San Francisco, CA, USA (May 21–23, 2012). https://doi.org/10.1109/SP.2012.38
* [19] Ma, J., Yang, W., Luo, M., Li, N.: A study of probabilistic password models. In: 2014 IEEE Symposium on Security and Privacy. pp. 689–704. IEEE Computer Society Press, Berkeley, CA, USA (May 18–21, 2014). https://doi.org/10.1109/SP.2014.50
* [20] Manber, U.: A simple scheme to make passwords based on one-way functions much harder to crack. Computers & Security 15(2), 171–176 (1996)
* [21] Melicher, W., Ur, B., Segreti, S.M., Komanduri, S., Bauer, L., Christin, N., Cranor, L.F.: Fast, lean, and accurate: Modeling password guessability using neural networks. In: Holz, T., Savage, S. (eds.) USENIX Security 2016. pp. 175–191. USENIX Association, Austin, TX, USA (Aug 10–12, 2016)
* [22] Morris, R., Thompson, K.: Password security: A case history. Communications of the ACM 22(11), 594–597 (1979), http://dl.acm.org/citation.cfm?id=359172
## Appendix 0.A Algorithms
1:Input: $u$, $pw_u$, $L$
2:$s_u\overset{\$}{\leftarrow}\{0,1\}^{L}$;
3:$k\leftarrow\mathsf{GetHardness}(pw_u)$;
4:$h\leftarrow H(pw_u,s_u;~k)$;
5:$\mathsf{StoreRecord}(u,s_u,h)$
Algorithm 1 Account creation
1:Input: $u$, $pw_u^{\prime}$
2:$(u,s_u,h)\leftarrow\mathsf{FindRecord}(u)$;
3:$k^{\prime}\leftarrow\mathsf{GetHardness}(pw_u^{\prime})$;
4:$h^{\prime}\leftarrow H(pw_u^{\prime},s_u;~k^{\prime})$;
5:Return $h==h^{\prime}$
Algorithm 2 Password authentication
Algorithm 3 The adversary's best response $\mathsf{BestRes}(v,\vec{k},D)$
1:Input: $\vec{k}$, $v$, $D$
2:Output: $(\pi^{*},B^{*})$
3:sort $\{\frac{p_i}{k_i}\}$ and reindex such that $\frac{p_1}{k_1}\geq\cdots\geq\frac{p_{n^{\prime}}}{k_{n^{\prime}}}$ to get $\pi^{*}$;
4:$B^{*}=\arg\max_{B} U_{ADV}\left(v,\vec{k},(\pi^{*},B)\right)$;
5:return $(\pi^{*},B^{*})$;
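The three procedures above can be sketched in Python. This is only an illustrative sketch: `H` is instantiated with PBKDF2 purely as a stand-in for the paper's hash function with cost parameter $k$, and `get_hardness` is a hypothetical lookup (with a toy strength estimate and illustrative bucket values) standing in for $\mathsf{GetHardness}$; `best_response` implements the greedy $\mathsf{BestRes}$ search, evaluating the adversary's utility as in definition (7).

```python
import hashlib, hmac, os

def H(pw: str, salt: bytes, k: int) -> bytes:
    # Stand-in for H(pw, s; k): PBKDF2 with k iterations plays the cost role.
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, k)

def get_hardness(pw: str, k_min: int = 10_000) -> int:
    # Hypothetical GetHardness: map a (toy) strength bucket to a cost,
    # never going below the floor k_min discussed in Appendix 0.C.
    buckets = {0: 10_000, 1: 50_000, 2: 200_000}  # illustrative values only
    strength = min(len(pw) // 6, 2)               # toy strength estimate
    return max(k_min, buckets[strength])

RECORDS = {}

def create_account(u: str, pw: str, L: int = 16) -> None:
    s = os.urandom(L)                 # s_u <-$- {0,1}^L
    k = get_hardness(pw)
    RECORDS[u] = (s, H(pw, s, k))     # StoreRecord(u, s_u, h)

def authenticate(u: str, pw: str) -> bool:
    s, h = RECORDS[u]
    k = get_hardness(pw)              # recompute k' from the typed password
    return hmac.compare_digest(h, H(pw, s, k))

def best_response(v, probs, costs):
    # BestRes(v, k, D): order passwords by bang-for-buck ratio p_i/k_i,
    # then pick the budget B maximizing U_ADV = v*lambda(pi,B)
    # - sum_i k_i * (1 - lambda(pi, i-1)), per definition (7).
    order = sorted(range(len(probs)),
                   key=lambda i: probs[i] / costs[i], reverse=True)
    best_B, best_U, lam, cost = 0, 0.0, 0.0, 0.0
    for B, i in enumerate(order, start=1):
        cost += costs[i] * (1.0 - lam)  # pay k_i on the remaining (1 - lambda) mass
        lam += probs[i]
        U = v * lam - cost
        if U > best_U:
            best_B, best_U = B, U
    return order, best_B
```

Note that `authenticate` recomputes the hardness from the typed password, exactly as Algorithm 2 recomputes $k^{\prime}$ from $pw_u^{\prime}$, so a correct password deterministically reproduces the stored hash.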
## Appendix 0.B Missing Proofs
### Proof of Theorem 5.1
Reminder of Theorem 5.1. Let $(\pi^{*},B^{*})$ denote the attacker's optimal
strategy with respect to hash cost parameters $\vec{k}$ and let $\pi$ be an
ordering with no inversions relative to $\vec{k}$. Then
$U_{ADV}\left(v,\vec{k},(\pi,B^{*})\right)\geq
U_{ADV}\left(v,\vec{k},(\pi^{*},B^{*})\right)\ .$
Proof of Theorem 5.1: Fixing $B,v,\vec{k}$, we let $\pi$ be the optimal
ordering of passwords. If there are multiple optimal orderings, we take the
ordering $\pi$ with the fewest number of inversions. Recall that an inversion
is a pair $b<a$ such that $r_{\pi(a)}>r_{\pi(b)}$, i.e.,
$pw_{\pi(b)}$ is scheduled to be checked before $pw_{\pi(a)}$, but
password $pw_{\pi(a)}$ has a higher "bang-for-buck" ratio. We say that we
have a consecutive inversion if $a=b+1$. Suppose for contradiction that $\pi$
has an inversion.
* •
If $\pi$ has an inversion, then $\pi$ also has a consecutive inversion. Let
$(a,b)$ be the closest inversion, i.e., the one minimizing $|a-b|$. The claim is that
$(a,b)$ is a consecutive inversion. If not, there is some $c$ such that
$b<c<a$. Now either $r_{\pi(c)}<r_{\pi(a)}$ (in which case the pair
$(c,a)$ forms a closer inversion) or $r_{\pi(c)}\geq
r_{\pi(a)}>r_{\pi(b)}$ (in which case the pair $(b,c)$ forms a
closer inversion). In either case we contradict our assumption.
* •
Let $b$, $b+1$ be a consecutive inversion. We now define $\pi^{\prime}$ to be
the same ordering as $\pi$ except that the order of $b$ and $b+1$ is flipped,
i.e., $\pi^{\prime}(b)=\pi(b+1)$ and $\pi^{\prime}(b+1)=\pi(b)$, so that we now
check password $pw_{\pi(b+1)}$ before password $pw_{\pi(b)}$. Note
that $\pi^{\prime}$ has one fewer inversion than $\pi$.
* •
We will prove that
$U_{ADV}\left(v,\vec{k},(\pi^{\prime},B)\right)\geq
U_{ADV}\left(v,\vec{k},(\pi,B)\right)$
contradicting the choice of $\pi$ as the optimal ordering with the fewest
number of inversions. By definition (7) we have
$\displaystyle
U_{ADV}\left(v,\vec{k},(\pi,B)\right)=v\cdot\lambda(\pi,B)-\sum^{B}_{i=1}k(pw_{\pi(i)})\cdot\left(1-\lambda(\pi,i-1)\right),$
and
$\displaystyle
U_{ADV}\left(v,\vec{k},(\pi^{\prime},B)\right)=v\cdot\lambda(\pi^{\prime},B)-\sum^{B}_{i=1}k(pw_{\pi^{\prime}(i)})\cdot\left(1-\lambda(\pi^{\prime},i-1)\right).$
Note that $\pi$ and $\pi^{\prime}$ only differ at guesses $b$ and $b+1$
and coincide on the rest of the passwords. Thus, we have
$\lambda(\pi,i)=\lambda(\pi^{\prime},i)$ when $0\leq i\leq b-1$ or when $i\geq
b+1$. For convenience, set $\lambda=\lambda(\pi,b-1)$.
Assuming that $b+1\leq B$ and taking the difference of the above two equations,
$\displaystyle
U_{ADV}\left(v,\vec{k},(\pi,B)\right)-U_{ADV}\left(v,\vec{k},(\pi^{\prime},B)\right)$
(8)
$\displaystyle=k(pw_{\pi(b)})\lambda+k(pw_{\pi(b+1)})(\lambda+p_{\pi(b)})$
$\displaystyle-k(pw_{\pi(b+1)})\lambda-k(pw_{\pi(b)})(\lambda+p_{\pi(b+1)})$
$\displaystyle=p_{\pi(b)}\cdot
k(pw_{\pi(b+1)})-p_{\pi(b+1)}\cdot k(pw_{\pi(b)})\leq 0.$
The last inequality holds since
$0>(r_{\pi(b)}-r_{\pi(b+1)})=\frac{p_{\pi(b)}}{k(pw_{\pi(b)})}-\frac{p_{\pi(b+1)}}{k(pw_{\pi(b+1)})}$
(we multiply both sides of the inequality by
$k(pw_{\pi(b+1)})\,k(pw_{\pi(b)})$ to obtain the result).
From equation (8) we see that the new swapped strategy $\pi^{\prime}$
has utility at least as large as $\pi$. Contradiction!
If $b>B$ then swapping has no impact on utility, as neither password
$pw_{\pi(b)}$ nor $pw_{\pi(b+1)}$ will be checked.
Finally, if $B=b$ then checking the last password in $\pi$ provides non-negative
utility, i.e.,
$v\cdot p_{\pi(B)}-k(pw_{\pi(B)})(1-\lambda(\pi,B-1))\geq 0,$ (9)
whereas continuing to check $pw_{\pi(B+1)}$ after executing strategy $(\pi,B)$ would
reduce utility, i.e.,
$v\cdot p_{\pi(B+1)}-k(pw_{\pi(B+1)})(1-\lambda(\pi,B))<0.$ (10)
From the above two equations, we have
$r_{\pi(B)}=\frac{p_{\pi(B)}}{k(pw_{\pi(B)})}\geq\frac{1-\lambda(\pi,B-1)}{v}>\frac{1-\lambda(\pi,B)}{v}>\frac{p_{\pi(B+1)}}{k(pw_{\pi(B+1)})}=r_{\pi(B+1)}.$
(11)
Again, we have a contradiction. Therefore, an optimal checking sequence does not
contain inversions.
$\Box$
### Proof of Theorem 5.2
Reminder of Theorem 5.2. Let $(\pi^{*},B^{*})$ denote the attacker's optimal
strategy with respect to hash cost parameters $\vec{k}$. Suppose that
passwords can be partitioned into $n^{\prime}$ equivalence sets
$es_1,\ldots,es_{n^{\prime}}$ such that passwords
$pw_a,pw_b\in es_i$ have the same probability and hash cost,
i.e., $p_a=p_b=p^{i}$ and $k(pw_a)=k(pw_b)=k^{i}$. Let
$r^{i}=p^{i}/k^{i}$ denote the bang-for-buck ratio of equivalence set
$es_i$ and assume that $r^{1}\geq r^{2}\geq\ldots\geq
r^{n^{\prime}}$. Then
$B^{*}\in\left\{0,|es_1|,|es_1|+|es_2|,\cdots,\sum_{i=1}^{n^{\prime}}|es_i|\right\}$.
Proof of Theorem 5.2: The proof of Theorem 5.2 follows from the following
lemma, which states that whenever $pw_i$ and $pw_j$ are in the same
equivalence set, the optimal attack strategy will either check both of these
passwords or neither.
###### Lemma 1
Let $(\pi^{*},B^{*})$ be the optimal strategy of the adversary and let
$pw_i$ and $pw_j$ be two passwords in the same equivalence set. Then
$\mathsf{Inv}_{\pi^{*}}(i)\leq
B^{*}\Leftrightarrow\mathsf{Inv}_{\pi^{*}}(j)\leq B^{*}\ .$ (12)
###### Proof
Suppose for contradiction that the optimal strategy checks $pw_i$ but
not $pw_j$. Then WLOG we can assume that
$\mathsf{Inv}_{\pi^{*}}(i)=B^{*}$, so $pw_i$ is the last password to be checked, and
that $\mathsf{Inv}_{\pi^{*}}(j)=B^{*}+1$, so $pw_j$ is the next password that would be
checked (otherwise, we can swap $pw_j$ with the password in the
equivalence set that will be checked next). Since $pw_i$ and $pw_j$
are in the same equivalence set, we have $\Pr[pw_i]=\Pr[pw_j]$ and
$k(pw_i)=k(pw_j)$. The marginal utility of checking $pw_i$ is
$\Delta_i=v\Pr[pw_i]-k(pw_i)(1-\lambda(\pi^{*},B^{*}-1)).$
Because checking $pw_i$ is part of the optimal strategy, it must be the
case that $\Delta_i\geq 0$. Otherwise, we would immediately derive a
contradiction since the strategy $(\pi^{*},B^{*}-1)$ would have greater
utility than $(\pi^{*},B^{*})$. Now the marginal utility
$\Delta_j=U_{ADV}\left(v,\vec{k},(\pi^{*},B^{*}+1)\right)-U_{ADV}\left(v,\vec{k},(\pi^{*},B^{*})\right)$
of checking $pw_j$ as well is
$\Delta_j=v\Pr[pw_j]-k(pw_j)(1-\lambda(\pi^{*},B^{*}-1)-\Pr[pw_j])>\Delta_i\geq
0\ .$
Since $\Delta_j>0$ we have
$U_{ADV}\left(v,\vec{k},(\pi^{*},B^{*}+1)\right)>U_{ADV}\left(v,\vec{k},(\pi^{*},B^{*})\right)$,
contradicting the optimality of $(\pi^{*},B^{*})$. $\square$
From Theorem 5.1 it follows that we will check the equivalence sets in the
order of their bang-for-buck ratios. Thus, $B^{*}$ must lie in the set
$\{0,|es_1|,|es_1|+|es_2|,\ldots,\sum_{i=1}^{n^{\prime}}|es_i|\}$.
$\Box$
## Appendix 0.C FAQ
### Could this mechanism harm users who pick weak passwords?
We understand the concern that our mechanism might provide less protection for
weak passwords since we are not using a uniform hash cost for all passwords. If our
estimate of the value $v$ of a cracked password is far too high, then it is
indeed possible that the DAHash parameters would be misconfigured in a way
that harms users with weak passwords. However, even in this case we ensure
that every password receives a minimum acceptable level of protection by
setting a minimum hash cost parameter $k_{min}$ for every password. We note
that if our estimate of $v$ is accurate and it is feasible to deter an
attacker from cracking weaker passwords, then DAHash will actually tend to
provide stronger protection for these passwords. On the other hand, if a
password is sufficiently weak that we cannot deter the attacker, then it will
always be cracked no matter what actions we take. Thus, DAHash
will reallocate effort to focus on protecting stronger passwords.
# In Situ Generation of High-Energy Spin-Polarized Electrons in a Beam-Driven
Plasma Wakefield Accelerator
Zan Nie<EMAIL_ADDRESS>Department of Electrical and Computer Engineering,
University of California Los Angeles, Los Angeles, California 90095, USA Fei
Li<EMAIL_ADDRESS>Department of Electrical and Computer Engineering,
University of California Los Angeles, Los Angeles, California 90095, USA
Felipe Morales Serguei Patchkovskii Olga Smirnova Max Born Institute, Max-
Born-Str. 2A, D-12489 Berlin, Germany Weiming An Department of Astronomy,
Beijing Normal University, Beijing 100875, China Noa Nambu Daniel Matteo
Kenneth A. Marsh Department of Electrical and Computer Engineering,
University of California Los Angeles, Los Angeles, California 90095, USA
Frank Tsung Department of Physics and Astronomy, University of California Los
Angeles, Los Angeles, California 90095, USA Warren B. Mori Department of
Electrical and Computer Engineering, University of California Los Angeles, Los
Angeles, California 90095, USA Department of Physics and Astronomy,
University of California Los Angeles, Los Angeles, California 90095, USA Chan
Joshi<EMAIL_ADDRESS>Department of Electrical and Computer Engineering,
University of California Los Angeles, Los Angeles, California 90095, USA
###### Abstract
In situ generation of a high-energy, high-current, spin-polarized electron
beam is an outstanding scientific challenge to the development of plasma-based
accelerators for high-energy colliders. In this Letter we show how such a
spin-polarized relativistic beam can be produced by ionization injection of
electrons of certain atoms with a circularly polarized laser field into a
beam-driven plasma wakefield accelerator, providing a much desired one-step
solution to this challenge. Using time-dependent Schrödinger equation (TDSE)
simulations, we show that the propensity rule of spin-dependent ionization of xenon
atoms can be reversed in the strong-field multi-photon regime compared with
the non-adiabatic tunneling regime, leading to high total spin polarization.
Furthermore, three-dimensional particle-in-cell (PIC) simulations are
combined with the TDSE simulations, providing start-to-end simulations of
spin-dependent strong-field ionization of xenon atoms and subsequent trapping,
acceleration, and preservation of electron spin-polarization in lithium
plasma. We show the generation of a high-current (0.8 kA), ultra-low-
normalized-emittance ($\sim$ 37 nm), and high-energy (2.7 GeV) electron beam
within just 11 cm distance, with up to $\sim$ 31% net spin polarization.
Higher current, energy, and net spin-polarization beams are possible by
optimizing this concept, thus solving a long-standing problem facing the
development of plasma accelerators.
In high-energy lepton colliders, collisions between spin-polarized electron
and positron beams are preferred Barish and Brau (2013). Spin-polarized
relativistic particles are chiral and therefore ideally suited for selectively
enhancing or suppressing specific reaction channels and thereby better
characterizing the quantum numbers and chiral couplings of the new particles.
To enable science at the ever-increasing energy frontier of elementary
particle physics while simultaneously shrinking the size and cost of future
colliders, development of advanced accelerator technologies is considered
essential. While plasma-based accelerator (PBA) schemes have made impressive
progress in the past three decades, a concept for in situ generation of spin-
polarized beams has thus far proven elusive. The most common spin-polarized
electron sources are based on photoemission from a Gallium Arsenide (GaAs)
cathode Pierce and Meier (1976). Spin-polarized positron beams may be obtained
from pair production by polarized bremsstrahlung photons, the latter produced
by passing a spin-polarized relativistic electron beam through a high-Z target
Abbott et al. (2016). Unfortunately, none of the above methods can generate
ultra-short (few microns long) and precisely (fs) synchronized spin-polarized
electron beams necessary for injection into PBAs.
The only previous proposal for producing spin-polarized electron beams from
PBA Vieira et al. (2011); Wen et al. (2019); Wu et al. (2019a, b) involves
injecting spin-polarized electrons into a wake excited by a moderate-intensity
laser pulse or a moderate-charge electron beam in a density down-ramp.
However, this proposal is a two-step scheme. The first step requires the
generation of spin-polarized electrons outside of the PBA set-up by employing
a complicated combination (involving multiple lasers) of molecular alignment,
photodissociation and photoionization of hydrogen halides Sofikitis et al.
(2017, 2018). Even though the spin polarization of the hydrogen atoms can be
high, the overall net spin polarization of electrons ionized from both
hydrogen and halide atoms is expected to be low Wen et al. (2019). The second
step involves the injection of these spin-polarized electrons crossing the
strong electromagnetic fields of the plasma wake. To avoid severe spin
depolarization due to these strong electromagnetic fields, the wakefield
should be moderately strong, which limits both the accelerating gradient and
charge of the injected electrons.
In the one-step solution we propose here, the generation and subsequent
acceleration of spin-polarized electrons is integrated within the wake itself.
Using a combination of TDSE Patchkovskii and Muller (2016); Manolopoulos
(2002); Morales et al. (2016) and 3D-PIC Fonseca et al. (2002, 2008); Li et
al. (2021) simulations, we show that spin-polarized electrons can be produced
in situ directly inside a beam-driven plasma wakefield accelerator and rapidly
accelerated to multi GeV energies by the wakefield without significant
depolarization. Electrons are injected and simultaneously spin-polarized via
ionization of the outermost p-orbital of a selected noble gas (no need for
pre-alignment) using a circularly polarized laser Barth and Smirnova (2013a).
The mitigation of depolarization is another benefit of laser-induced
ionization injection Oz et al. (2007); Pak et al. (2010): the electrons can be
produced inside the wake close to the wake axis, where the transverse magnetic
and electric fields of the wake are near zero Lu et al. (2006), minimizing
both the beam emittance and depolarization due to spin precession. A third
advantage of our scheme is that the wake can be in the highly nonlinear or
bubble regime, where electrons are rapidly accelerated to velocities close to $c$,
minimizing the emittance growth while accelerating the electrons at higher gradients.
The proposed experimental layout of our scheme is shown in Supplementary
Materials. A relativistic drive electron beam traverses a column of gas
containing a mixture of lithium (Li) and xenon (Xe) atoms. The ionization
potentials of the $2s$ electron of Li atoms and the outermost $5p^{6}$
electron of Xe atoms are 5.4 eV and 12.13 eV, respectively. The electron beam
fully ionizes Li atoms and produces the wake while keeping Xe atoms unionized.
If the driving electron beam is ultra-relativistic ($\gamma\gg 1$) and
sufficiently dense ($n_{b}>n_{p}$, $k_{p}\sigma_{r,z}<1$), the $2s$ electrons
of the Li atoms are ionized during the risetime of the beam current and blown
out by the transverse electric field of the beam to form a bubble-like wake
cavity Lu et al. (2006); Litos et al. (2014) that contains only the Li ions
and the neutral Xe atoms. Now an appropriately delayed circularly polarized
ultra-short laser pulse copropagating with the electron beam is focused at the
entrance of the Li plasma to strong-field ionize the $5p^{6}$ electron of the
Xe atoms, producing a spin-polarized electron beam close to the center (both
transversely and longitudinally) of the first bucket of the wake. The injected
electrons are subsequently trapped by the wake potential and accelerated to
$\sim$ 2.7 GeV energy in $\sim$ 11 cm without significant depolarization.
It is known that the strong-field ionization rate of a fixed orbital in circularly
polarized fields depends on the sense of electron rotation (i.e. the magnetic
quantum number $m_{l}$) in the initial state Popruzhenko et al. (2008); Barth
and Smirnova (2011); Barth and Smirnova (2013b). Based on this phenomenon and
spin-orbit interaction in the ionic core, spin-polarized electrons can be
produced by strong-field ionization Barth and Smirnova (2013a). Here we use Xe
atoms as an example, but there are many other possibilities. Xe has six
$p$-electrons in its outermost shell, with $m_{l}\equiv l_{z}=0,\pm 1$.
Strong-field ionization from the $p^{0}$ orbital ($m_{l}=0$) in circularly
polarized laser fields is negligible in the strong-field regime Barth and
Smirnova (2011); Barth and Smirnova (2013b). Consider first ionization from
the $p^{+}$ orbital (co-rotating with the laser field) into the two lowest
states of Xe+, ${}^{2}\text{P}_{3/2}$ and ${}^{2}\text{P}_{1/2}$, see the left
half of the ionization pathways in Fig. 1(a). Removal of a spin-up $p^{+}$
electron ($s_{z}=1/2$, $l_{z}=1$) would create a hole with $j_{z}=+3/2$ and
could only generate the ion in the state ${}^{2}\text{P}_{3/2}$. Removal of a
spin-down $p^{+}$ electron ($s_{z}=-1/2$, $l_{z}=1$) would create a hole with
$j_{z}=+1/2$ and can generate the ion both in the ${}^{2}\text{P}_{3/2}$ and
${}^{2}\text{P}_{1/2}$ states, with the Clebsch-Gordan coefficients squared
splitting the two pathways as 1/3 for ${}^{2}\text{P}_{3/2}$ and 2/3 for
${}^{2}\text{P}_{1/2}$. Repeating the same analysis for the $p^{-}$ electron
(right half of ionization pathways in Fig. 1(a)), one obtains the following
expressions for the ionization rates $W_{\uparrow}$ and $W_{\downarrow}$ of
spin-up and spin-down electrons Barth and Smirnova (2013a):
$\displaystyle
W_{\uparrow}=W_{\frac{3}{2}p^{+}}+\frac{2}{3}\,W_{\frac{1}{2}p^{-}}+\frac{1}{3}\,W_{\frac{3}{2}p^{-}}$
(1) $\displaystyle
W_{\downarrow}=W_{\frac{3}{2}p^{-}}+\frac{2}{3}\,W_{\frac{1}{2}p^{+}}+\frac{1}{3}\,W_{\frac{3}{2}p^{+}}$
(2)
where $W_{\frac{3}{2}p^{+}}$, $W_{\frac{3}{2}p^{-}}$, $W_{\frac{1}{2}p^{+}}$,
and $W_{\frac{1}{2}p^{-}}$ denote ionization rates of a $p^{+}$ electron into
the ${}^{2}\text{P}_{3/2}$ state, a $p^{-}$ electron into the
${}^{2}\text{P}_{3/2}$ state, a $p^{+}$ electron into the
${}^{2}\text{P}_{1/2}$ state, and a $p^{-}$ electron into the
${}^{2}\text{P}_{1/2}$ state, respectively. Net spin polarization arises under
two conditions: (i) either $p^{+}$ ionization dominates $p^{-}$ or vice versa
and (ii) one of the two ionic states is more likely to be populated.
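Equations (1) and (2) can be sketched numerically. This is only an illustrative check, not a simulation result: the four pathway rates passed in below are arbitrary placeholder values, and the polarization is formed with the usual definition $P=(W_{\uparrow}-W_{\downarrow})/(W_{\uparrow}+W_{\downarrow})$.

```python
def spin_rates(w32_pp, w32_pm, w12_pp, w12_pm):
    """Eqs. (1)-(2): spin-up/down ionization rates from the four pathways.

    w32_pp = W_{3/2, p+}, w32_pm = W_{3/2, p-},
    w12_pp = W_{1/2, p+}, w12_pm = W_{1/2, p-}.
    The 1/3 and 2/3 weights are the squared Clebsch-Gordan coefficients.
    """
    w_up = w32_pp + (2.0 / 3.0) * w12_pm + (1.0 / 3.0) * w32_pm
    w_down = w32_pm + (2.0 / 3.0) * w12_pp + (1.0 / 3.0) * w32_pp
    return w_up, w_down

def polarization(w_up, w_down):
    # Net spin polarization (standard definition, not given explicitly above).
    return (w_up - w_down) / (w_up + w_down)

# Placeholder rates mimicking condition (i) + (ii): p+ removal dominating and
# the 2P_1/2 channel suppressed, so spin-up wins and P > 0.
w_up, w_down = spin_rates(w32_pp=1.0, w32_pm=0.05, w12_pp=0.01, w12_pm=0.01)
```

With equal rates in all four pathways the polarization vanishes, illustrating why both conditions above are needed.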
Figure 1: (a) Schematic of spin-dependent photoionization showing possible
ionization pathways from Xe to Xe+. (b) TDSE simulation results of the multi-
photon ionization photoelectron spectra for the final ionic state,
Xe${}^{+}(^{2}\text{P}_{3/2})$ or Xe${}^{+}(^{2}\text{P}_{1/2})$, the energy
and the initial quantum number $m_{l}=\pm 1$ of the photoelectron, for 10 fs
(FWHM), $\lambda=260$ nm laser pulse with peak intensity $I=2.5\times
10^{13}\,\text{W/cm}^{2}$. (c,d) Log-log plot of the simulated ionization
rates and yields of spin-up and spin-down electrons as a function of laser
peak intensity of a 260 nm, 10 fs (FWHM), circularly polarized laser. (e) Spin
polarization as a function of peak laser intensity without and with focal-
volume averaging.
In the adiabatic tunneling regime of strong-field ionization (Keldysh
parameter Keldysh (1965) $\gamma_{\text{K}}\ll 1$), the ionization rates of
$p^{+}$ and $p^{-}$ electrons are the same and ionization is not spin-
selective. In the non-adiabatic tunneling regime ($\gamma_{\text{K}}\sim\,1$)
Ivanov et al. (2005), the $p^{-}$ electrons are more likely to be ionized
Barth and Smirnova (2011); Barth and Smirnova (2013b); Herath et al. (2012);
Eckart et al. (2018), and the population of Xe${}^{+}(^{2}\text{P}_{1/2})$ is
suppressed due to its higher ionization potential (
I${}_{\text{p}}\,(^{2}\text{P}_{1/2})=13.44$ eV compared to
I${}_{\text{p}}\,(^{2}\text{P}_{3/2})=12.13$ eV), satisfying both conditions
for generating spin-polarized electrons. Both the $m_{l}$-dependent ionization
rates and the resulting spin polarization have been experimentally verified
Hartung et al. (2016); Herath et al. (2012); Eckart et al. (2018); Trabert et
al. (2018); Liu et al. (2018). However, the observed spin polarization
generated by ionization of Xe at 800 nm and 400 nm changes sign both between
the two ionization channels and across the photoelectron spectrum Barth and
Smirnova (2013a); Hartung et al. (2016); Trabert et al. (2018); Liu et al.
(2018), reducing the net spin polarization upon integrating over all
photoelectron energies and both ionic states.
Theory and simulations show that propensity rules for ionization can be
reversed in the multi-photon regime ($\gamma_{\text{K}}\gg 1$) Bauer et al.
(2014); Zhu et al. (2016); Xu et al. (2020). From our TDSE simulations,
ionization of Xe by the third harmonic ($\lambda$=260 nm) of a Ti:Sapphire
laser is strongly dominated by the removal of a $p^{+}$ electron at all laser
intensities, until saturation, and for all photoelectron energies, with
ionization into Xe${}^{+}(^{2}\text{P}_{1/2})$ strongly suppressed (Fig.
1(b)), which leads to high total spin-polarization. We have performed
simulations for a range of intensities from $3.5\times
10^{10}\,\text{W/cm}^{2}$ to $6.3\times 10^{13}\,\text{W/cm}^{2}$, by solving
the TDSE for each intensity for four ionization pathways: $\frac{3}{2}p^{+}$,
$\frac{1}{2}p^{+}$, $\frac{3}{2}p^{-}$, and $\frac{1}{2}p^{-}$, and calculated
the corresponding spin-up and spin-down electron ionization rates and yields
(Fig. 1(c,d)) according to Eqs. (1) and (2). The net spin-polarization with
integration over temporal and spatial intensity distribution, all
photoelectron energies, and final ionic states (see Supplementary Material of
Ref. Zimmermann et al. (2017)) is shown in Fig. 1(e). For the laser intensity
we used in the following PIC simulations ($I=2.5\times
10^{13}\,\text{W/cm}^{2}$), the net spin-polarization reached 32% after focal-
volume averaging.
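As a rough cross-check that these parameters indeed sit in the multi-photon regime, the Keldysh parameter can be estimated from the quoted intensity and wavelength. The sketch below uses the standard linear-polarization convention for the ponderomotive energy and is only an order-of-magnitude estimate; for circular polarization factors of two shift, but not enough to change the regime classification.

```python
import math

def keldysh(I_wcm2: float, lam_um: float, Ip_eV: float) -> float:
    """Keldysh parameter gamma_K = sqrt(Ip / (2 Up)).

    Up[eV] ~= 9.33e-14 * I[W/cm^2] * lambda^2[um^2] is the ponderomotive
    energy in the linear-polarization convention.
    """
    Up = 9.33e-14 * I_wcm2 * lam_um**2
    return math.sqrt(Ip_eV / (2.0 * Up))

# 260 nm, 2.5e13 W/cm^2 ionization of Xe (Ip = 12.13 eV) gives gamma_K ~ 6,
# i.e. well inside the multi-photon regime gamma_K >> 1 discussed above.
g = keldysh(2.5e13, 0.26, 12.13)
```

The same formula shows why longer wavelengths at comparable intensity (e.g. 800 nm) drive $\gamma_{\text{K}}$ down toward the tunneling regimes.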
We have incorporated the spin-dependent ionization results into our wakefield
acceleration simulations. By tightly focusing a 260 nm circularly polarized
laser pulse at the appropriate position in the wake bubble where the
longitudinal and transverse electric fields are zero (Fig. 2(a)), electrons
with a net spin polarization are generated and injected into the wakefield.
The trapping condition is given by Pak et al. (2010)
$\Delta\Psi\equiv\Psi-\Psi_{\text{init}}\lesssim-1$, where
$\Psi\equiv\frac{e(\phi-A_{z})}{mc^{2}}$ is the normalized pseudo potential of
the wake, and $\Psi_{\text{init}}$ is the pseudo potential at the position
where the electron is born (injected). The pseudo potential is maximum at the
center of the bubble and minimum close to the rear. For this reason, we choose
to inject electrons where $\Psi_{\text{init}}$ is maximum so that the injected
electrons are most easily trapped by the wake (Fig. 2(a,b)).
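The trapping criterion can be illustrated with a toy on-axis pseudo-potential profile. The sinusoidal $\Psi(\xi)$ and its amplitude below are purely illustrative stand-ins, not the simulated wake; the point is only that injecting where $\Psi_{\text{init}}$ is maximal makes $\Delta\Psi\lesssim-1$ easiest to satisfy.

```python
import math

def is_trapped(psi_init: float, psi_min: float) -> bool:
    # Trapping condition Delta_Psi = Psi - Psi_init <~ -1: an electron born
    # at pseudo-potential psi_init can be trapped if it can slip back to a
    # point where Psi - psi_init <= -1, i.e. psi_init - psi_min >= 1.
    return psi_min - psi_init <= -1.0

# Toy on-axis profile: Psi peaks at the bubble center, is minimal near the rear.
xi = [i * 0.01 for i in range(629)]       # 0 .. ~2*pi
psi = [0.8 * math.cos(x) for x in xi]     # illustrative amplitude only
psi_max, psi_min = max(psi), min(psi)
# Birth at the potential maximum (bubble center) satisfies the condition;
# birth at an intermediate potential in this toy profile does not.
```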
Figure 2: (a),(b) Two snapshots show the charge density distribution of
driving electron beam (grey), beam ionized Li electrons (green), laser ionized
Xe electrons (brown), and wakefield ionized Xe electrons (blue) at (a)
$z=210\,\mu$m (end of ionization injection) and (b) $z=425\,\mu$m (after being
trapped). The dashed lines in (b) show the on-axis wake pseudo potential. The
wakefield ionized Xe electrons (blue) are only generated at the tail of the
bubble and cannot be trapped by the wake. (c),(d) The spin vector density
distribution of Xe electrons ionized by the UV laser at the same moment of (a)
and (b).
Previous studies have shown that spin dynamics due to the Stern-Gerlach force,
the Sokolov-Ternov effect (spin flip), and radiation reaction force are
negligible in our case Vieira et al. (2011); Wu et al. (2019b); Wen et al.
(2019); Wu et al. (2019a). Therefore, only spin precession needs to be
considered. We have implemented the spin precession module into the 3D-PIC
code OSIRIS Fonseca et al. (2002, 2008) following the Thomas-Bargmann-Michel-
Telegdi (T-BMT) equation Bargmann et al. (1959)
$\displaystyle d\mathbf{s}/dt=\boldsymbol{\Omega}\times\mathbf{s}$ (3)
where
$\boldsymbol{\Omega}=\frac{e}{m}\left(\frac{1}{\gamma}\mathbf{B}-\frac{1}{\gamma+1}\frac{\mathbf{v}}{c^{2}}\times\mathbf{E}\right)+a_{e}\frac{e}{m}\left[\mathbf{B}-\frac{\gamma}{\gamma+1}\frac{\mathbf{v}}{c^{2}}(\mathbf{v}\cdot\mathbf{B})-\frac{\mathbf{v}}{c^{2}}\times\mathbf{E}\right]$
. Here, $\mathbf{E},\mathbf{B}$ are the electric and magnetic fields,
$\mathbf{v}$ is the electron velocity, $\gamma=\frac{1}{\sqrt{1-v^{2}/c^{2}}}$
is the relativistic factor, and $a_{e}\approx 1.16\times 10^{-3}$ is the
anomalous magnetic moment of the electron.
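A minimal sketch of spin precession under the T-BMT equation (3): it builds the precession vector $\boldsymbol{\Omega}$ term by term from given $\mathbf{E}$, $\mathbf{B}$, $\mathbf{v}$ and advances $\mathbf{s}$ with a simple renormalized Euler step. This is only an illustration of the equation, not the OSIRIS implementation, whose integrator is not specified in the text.

```python
import math

E_CHARGE, E_MASS, C = 1.602176634e-19, 9.1093837015e-31, 2.99792458e8
A_E = 1.16e-3  # anomalous magnetic moment of the electron

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def omega(E, B, v):
    """Precession vector of Eq. (3) for fields E, B [SI] and velocity v [m/s]."""
    v2 = sum(x * x for x in v)
    g = 1.0 / math.sqrt(1.0 - v2 / C**2)
    q = E_CHARGE / E_MASS
    vxE = cross(v, E)
    vdotB = sum(a * b for a, b in zip(v, B))
    return tuple(
        q * (B[i] / g - vxE[i] / ((g + 1.0) * C**2))
        + A_E * q * (B[i] - g / (g + 1.0) * v[i] * vdotB / C**2 - vxE[i] / C**2)
        for i in range(3)
    )

def step_spin(s, E, B, v, dt):
    # Euler step of ds/dt = Omega x s, renormalized to keep |s| = 1.
    w = omega(E, B, v)
    wxs = cross(w, s)
    s_new = tuple(s[i] + dt * wxs[i] for i in range(3))
    n = math.sqrt(sum(x * x for x in s_new))
    return tuple(x / n for x in s_new)
```

The sketch makes the two limits quoted in the text explicit: a field parallel to $\mathbf{s}$ leaves the spin unchanged, while a transverse field tilts it at a rate $\sim eB/m\gamma$, which is suppressed as $\gamma$ grows.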
As shown in Fig. 2(c) and (d), the spin vector distribution is at first
concentrated around the top and bottom points of $s_{z}=\pm 1$ with a very
small spread when the Xe electrons are photoionized (Fig. 2(c)), caused by the
spread of the ionizing laser wavevectors at different ionization positions. In
our case, this initial spread of the spin vector is within $1^{\circ}$, which
is negligible compared to the spread due to spin precession induced by the
wakefield at later times (Fig. 2(d)).
Figure 3 describes start-to-end simulations incorporating both the TDSE and
PIC components. The whole simulation consists of two stages: the injection and
trapping stage (0-0.74 mm) and acceleration stage (0.74-110 mm). The injection
and trapping stage was simulated using the OSIRIS code Fonseca et al. (2002,
2008) with high temporal resolution and the acceleration stage was simulated
using the QPAD code Li et al. (2021); Sprangle et al. (1990) with lower
temporal resolution. The density profiles of Xe and Li gases are shown in Fig.
3(a). The Xe gas column, with a density of $n_{\text{Xe}}=8.7\times
10^{17}\,\text{cm}^{-3}$, is 420 $\mu$m long. The exact length of the Xe
region is not important as long as Xe is not ionized by the electron beam. The
Li gas, with a density of $n_{\text{Li}}=8.7\times 10^{16}\,\text{cm}^{-3}$,
extends across the whole interaction region and provides background plasma
electrons when ionized by the drive electron beam. The driving beam electron
energy is 10 GeV with a Gaussian profile
$n_{b}=\frac{N}{(2\pi)^{3/2}\sigma_{r}^{2}\sigma_{z}}\,\text{exp}(-\frac{r^{2}}{2\sigma_{r}^{2}}-\frac{\xi^{2}}{2\sigma_{z}^{2}})$,
where $N=4.11\times 10^{9}$ (658 pC), and $\sigma_{r}=\sigma_{z}=11.4\,\mu$m
are the transverse and longitudinal beam sizes, respectively. Such a beam has
a maximum electric field of 16 GV/m, which is far larger than that required to
fully ionize the Li atoms, but not the Xe atoms. It forms the plasma and blows
out the plasma electrons to create the wake cavity. The 260 nm ionization
laser is delayed by 148 fs (44.5 $\mu$m) from the peak current position of the
drive electron beam. The laser pulse has a Gaussian envelope with pulse duration
(FWHM) of 30 fs and focal spot size of $w_{0}=1.5\,\mu$m. The peak laser
intensity is $2.5\times 10^{13}\,\text{W/cm}^{2}$ (the same intensity as in
Fig. 1(b)) to make a tradeoff between net spin polarization and ionization
yield. At this peak laser intensity, the $5p^{6}$ (outermost) electron of Xe
is partially ionized ($\sim 32\%$ at focus) while the $5p^{5}$ (second)
electron of Xe is not ionized at all ($<10^{-6}$).
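As a back-of-the-envelope consistency check of the blowout condition $n_{b}>n_{p}$ quoted earlier, the peak of the stated Gaussian beam profile can be evaluated directly from the numbers given in the text (this is arithmetic on quoted parameters, not a new simulation result):

```python
import math

def gaussian_peak_density(N: float, sigma_r_cm: float, sigma_z_cm: float) -> float:
    # Peak of n_b = N / ((2 pi)^{3/2} sigma_r^2 sigma_z) * exp(...), at r = xi = 0.
    return N / ((2.0 * math.pi) ** 1.5 * sigma_r_cm**2 * sigma_z_cm)

# N = 4.11e9 electrons, sigma_r = sigma_z = 11.4 um = 11.4e-4 cm (from the text)
n_b = gaussian_peak_density(4.11e9, 11.4e-4, 11.4e-4)
n_Li = 8.7e16  # cm^-3, the Li density from the text
```

The result, a few times $10^{17}\,\text{cm}^{-3}$, indeed exceeds $n_{\text{Li}}$, consistent with the dense-beam requirement for forming the bubble-like wake cavity.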
Figure 3: (a) The density profiles of the Xe and Li gases used in the
simulations. (b) Evolution of beam charge (left blue axis), peak current
(right red axis) and normalized emittance $\epsilon_{n}$ (right green axis).
(c) Evolution of the Lorentz factor $\gamma$. The dashed line presents the mean energy
$\langle\gamma\rangle$. (d) Evolution of the spin vector in the $x$ direction,
$s_{x}$. The dashed line represents $\langle s_{x}\rangle$. (e) Evolution of
the spin vector in the $z$ direction, $s_{z}$. The top box plots the $s_{z}$
distribution in the range of 0.8 and 1. The central box plots $\langle
s_{z}\rangle$ (net spin polarization) in the range of 0.2 and 0.4. The bottom
box plots the $s_{z}$ distribution in the range of $-1$ and $-0.8$. The long
vertical dashed black line marks the focal position ($z=0.18$ mm) of the
ionization laser. The plots in the range of 0.74-2.5 mm and 2.5-110 mm are
shown in two temporal scales to clearly present the whole evolution dynamics
but the actual simulation was run with one temporal resolution in the whole
acceleration stage.
Evolution of injected beam parameters including charge, peak current,
normalized emittance, and spin vector distribution as a function of
propagation distance in the plasma are shown in Fig. 3(b)-(e). All
photoionized electrons with charge of 3 pC (Fig. 3(b) left axis) are injected,
trapped and accelerated to 2.7 GeV (Fig. 3(c)) within 11 cm to give a peak
current of $I=0.8$ kA (Fig. 3(b) right red axis) and normalized transverse
emittance of $\epsilon_{n}=36.6$ nm (Fig. 3(b) right green axis). This
emittance compares favorably with the brightest beams available today Schmerge
et al. (2015). The spin vector evolutions in the $x$ and $z$ directions are
shown in Fig. 3(d) and (e), respectively. The spin spread in $x$ (or $y$)
direction is symmetric so that $\langle s_{x}\rangle\approx 0$ (or $\langle
s_{y}\rangle\approx 0$) as shown in Fig. 3(d). Therefore, the net spin
polarization $P=P_{z}=\langle s_{z}\rangle$ only depends on the spin
distribution in the $z$ direction. The spin depolarization mainly occurs
during the first 500 $\mu$m distance as electrons are injected into the wake
until they become ultra-relativistic ($\gamma\sim 10$). Thereafter the spin
polarization remains constant within the statistical sampling error. The final
averaged spin polarization is $\langle s_{z}\rangle=30.7\%$ (Fig. 3(e)),
corresponding to 96% of the initial spin polarization at birth. This result is
comparable to the first-generation GaAs polarized electron sources, that are
most commonly used in conventional rf accelerators. The reason why
depolarization is small in our case is that the injected electrons are always
close to the axis of the wake so that the transverse magnetic and electric
fields they feel are close to zero. In a nonlinear wake bucket, the transverse
magnetic field $B_{\phi}$ scales linearly with distance from the center of the
wake ($B_{\phi}\propto r$) Lu et al. (2006). From Eq. (3), the spin precession
frequency $\Omega\approx-eB_{\phi}/m\gamma$ when $\gamma\sim 1$. Therefore, if
the electrons are close to the axis ($r\approx 0$), the spin precession
frequency $\Omega\approx 0$. In addition, once the electron energy is
increased to ultra-relativistic level ($\gamma\gg 1$) by the longitudinal
wakefield, the spin precession effect is negligible Vieira et al. (2011).
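The two suppression mechanisms (small radius and large $\gamma$) can be illustrated with a rough scaling estimate; this is a sketch only, and the field gradient and radii below are hypothetical illustrative values, not taken from the simulations:

```python
# Rough scaling of the spin-precession frequency, Omega ~ e*B_phi/(m*gamma),
# for a nonlinear wake whose azimuthal field grows linearly off axis,
# B_phi = g*r.  The gradient g and the radii are hypothetical values chosen
# purely for illustration.
E_CHARGE = 1.602176634e-19    # elementary charge [C]
M_E = 9.1093837015e-31        # electron rest mass [kg]

def precession_frequency(r, gamma, g=1.0e6):
    """|Omega| in rad/s at radius r [m] for Lorentz factor gamma,
    assuming B_phi = g*r with an azimuthal-field gradient g [T/m]."""
    return E_CHARGE * g * r / (M_E * gamma)

on_axis = precession_frequency(0.0, 1.0)     # r -> 0: no precession
off_axis = precession_frequency(1e-6, 1.0)   # 1 um off axis, gamma ~ 1
boosted = precession_frequency(1e-6, 100.0)  # same radius, ultra-relativistic
```

Both `on_axis == 0` and the factor-of-$\gamma$ suppression of `boosted` relative to `off_axis` mirror the qualitative argument above.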
We have investigated how the variation of injected beam charge (by either
varying the Xe density or the spot size of the ionization laser) affects the
final spin polarization of the injected electrons. The parameter scanning
results are summarized in Fig. 4(a). The spin polarization drops slowly and
linearly with increasing beam charge. This indicates that the space
charge force is the probable cause of spin depolarization in our case, which
is confirmed by analyzing the tracks of the ionized electrons (see
Supplementary Material for details, which includes Ref. Clayton et al.
(2016)). Considering practical issues in experiments, we have investigated how
the laser transverse offset relative to the drive electron beam affects the
spin polarization and normalized emittance as shown in Fig. 4(b). The spin
polarization is essentially not affected by the transverse displacement in
$\pm 3\,\mu$m range. The normalized emittance in the $x$ direction grows with the
laser offset in the $x$ direction, while the normalized emittance in the $y$ direction
remains almost the same. These emittances are within values envisioned for
future plasma-based colliders. Another possible issue in experiments might be
the synchronization between the drive electron beam and the ionizing laser
pulse. To make sure the ionized electrons are trapped by the wake (meet the
trapping condition $\Delta\Phi\lesssim-1$), the relative timing jitter should
be within $\pm 80$ fs in our simulation case. This requirement can be further
relaxed if using higher drive beam charge and lower plasma density.
Figure 4: (a) Spin polarization vs. injected beam charge by either varying
the Xe density (blue) or the spot size of the ionization laser (red). The five
data points of Xe density scanning correspond to Xe density of $8.7\times
10^{16}\,\text{cm}^{-3}$, $8.7\times 10^{17}\,\text{cm}^{-3}$, $1.7\times
10^{18}\,\text{cm}^{-3}$, $3.5\times 10^{18}\,\text{cm}^{-3}$, and $7.0\times
10^{18}\,\text{cm}^{-3}$ while keeping the spot size of 1.5 $\mu$m. The four
data points of spot size scanning correspond to ionization laser spot size of
1 $\mu$m, 1.5 $\mu$m, 2 $\mu$m, and 2.5 $\mu$m while keeping the Xe density of
$8.7\times 10^{17}\,\text{cm}^{-3}$. (b) Spin polarization (left) and
normalized emittance (right) after a propagation distance of 0.74 mm vs. laser
transverse displacement in the $x$ direction.
Here we have used a single laser pulse, propagating collinearly with the
electron beam, for ionizing the Xe atoms. To obtain even lower emittance ($<$10
nm) beams, one could instead use two colliding laser pulses, either transverse
Li et al. (2013) or longitudinal Hidding et al. (2012). We also note that the
beam charge, peak current, and the maximum spin polarization observed here are
not limited by theory. The first two can be increased by optimizing the
ionizing laser parameters, drive beam parameters, and the beam loading within
the wake. The latter may be increased by using electrons in the $d$ or $f$
orbitals instead of $p$ orbitals – for instance by using Yb III Kaushal and
Smirnova (2018a, b). A modified version of this scheme may also be useful for
generating a spin-polarized electron beam in a laser wakefield accelerator
(LWFA) Xu et al. (2014).
###### Acknowledgements.
We thank Nuno Lemos and Christopher E. Clayton for useful discussions
regarding this work. This work was supported by AFOSR grant FA9550-16-1-0139,
DOE grant DE-SC0010064, DOE SciDAC through FNAL subcontract 644405, and NSF
grants 1734315 and 1806046. The simulations were performed on Hoffman cluster
at UCLA and NERSC at LBNL.
## References
* Barish and Brau (2013) B. Barish and J. E. Brau, Int. J. Mod. Phys. A 28, 1330039 (2013).
* Pierce and Meier (1976) D. T. Pierce and F. Meier, Phys. Rev. B 13, 5484 (1976).
* Abbott et al. (2016) D. Abbott, P. Adderley, A. Adeyemi, P. Aguilera, M. Ali, H. Areti, M. Baylac, J. Benesch, G. Bosson, B. Cade, et al., Phys. Rev. Lett. 116, 214801 (2016).
* Vieira et al. (2011) J. Vieira, C.-K. Huang, W. B. Mori, and L. O. Silva, Phys. Rev. ST Accel. Beams 14, 071303 (2011).
* Wen et al. (2019) M. Wen, M. Tamburini, and C. H. Keitel, Phys. Rev. Lett. 122, 214801 (2019).
* Wu et al. (2019a) Y. Wu, L. Ji, X. Geng, Q. Yu, N. Wang, B. Feng, Z. Guo, W. Wang, C. Qin, X. Yan, et al., New J. Phys. 21, 073052 (2019a).
* Wu et al. (2019b) Y. Wu, L. Ji, X. Geng, Q. Yu, N. Wang, B. Feng, Z. Guo, W. Wang, C. Qin, X. Yan, et al., Phys. Rev. E 100, 043202 (2019b).
* Sofikitis et al. (2017) D. Sofikitis, P. Glodic, G. Koumarianou, H. Jiang, L. Bougas, P. C. Samartzis, A. Andreev, and T. P. Rakitzis, Phys. Rev. Lett. 118, 233401 (2017).
* Sofikitis et al. (2018) D. Sofikitis, C. S. Kannis, G. K. Boulogiannis, and T. P. Rakitzis, Phys. Rev. Lett. 121, 083001 (2018).
* Patchkovskii and Muller (2016) S. Patchkovskii and H. Muller, Comput. Phys. Commun. 199, 153 (2016).
* Manolopoulos (2002) D. E. Manolopoulos, J. Chem. Phys. 117, 9552 (2002).
* Morales et al. (2016) F. Morales, T. Bredtmann, and S. Patchkovskii, J. Phys. B At. Mol. Opt. Phys. 49, 245001 (2016).
* Fonseca et al. (2002) R. A. Fonseca, L. O. Silva, F. S. Tsung, V. K. Decyk, W. Lu, C. Ren, W. B. Mori, S. Deng, S. Lee, T. Katsouleas, et al., Lect. Notes Comput. Sci. 2331, 342 (2002).
* Fonseca et al. (2008) R. Fonseca, S. Martins, L. Silva, J. Tonge, F. Tsung, and W. Mori, Plasma Phys. Controlled Fusion 50, 124034 (2008).
* Li et al. (2021) F. Li, W. An, V. K. Decyk, X. Xu, M. J. Hogan, and W. B. Mori, Comput. Phys. Commun. 261, 107784 (2021).
* Barth and Smirnova (2013a) I. Barth and O. Smirnova, Phys. Rev. A 88, 013401 (2013a).
* Oz et al. (2007) E. Oz, S. Deng, T. Katsouleas, P. Muggli, C. D. Barnes, I. Blumenfeld, F. J. Decker, P. Emma, M. J. Hogan, R. Ischebeck, et al., Phys. Rev. Lett. 98, 084801 (2007).
* Pak et al. (2010) A. Pak, K. A. Marsh, S. F. Martins, W. Lu, W. B. Mori, and C. Joshi, Phys. Rev. Lett. 104, 025003 (2010).
* Lu et al. (2006) W. Lu, C. Huang, M. Zhou, W. B. Mori, and T. Katsouleas, Phys. Rev. Lett. 96, 165002 (2006).
* Litos et al. (2014) M. Litos, E. Adli, W. An, C. I. Clarke, C. E. Clayton, S. Corde, J. P. Delahaye, R. J. England, A. S. Fisher, J. Frederico, et al., Nature 515, 92 (2014).
* Popruzhenko et al. (2008) S. V. Popruzhenko, G. G. Paulus, and D. Bauer, Phys. Rev. A 77, 053409 (2008).
* Barth and Smirnova (2011) I. Barth and O. Smirnova, Phys. Rev. A 84, 063415 (2011).
* Barth and Smirnova (2013b) I. Barth and O. Smirnova, Phys. Rev. A 87, 013433 (2013b).
* Keldysh (1965) L. Keldysh, Sov. Phys. JETP 20, 1307 (1965).
* Ivanov et al. (2005) M. Y. Ivanov, M. Spanner, and O. Smirnova, J. Mod. Opt. 52, 165 (2005).
* Herath et al. (2012) T. Herath, L. Yan, S. K. Lee, and W. Li, Phys. Rev. Lett. 109, 043004 (2012).
* Eckart et al. (2018) S. Eckart, M. Kunitski, M. Richter, A. Hartung, J. Rist, F. Trinter, K. Fehre, N. Schlott, K. Henrichs, L. P. H. Schmidt, et al., Nat. Phys. 14, 701 (2018).
* Hartung et al. (2016) A. Hartung, F. Morales, M. Kunitski, K. Henrichs, A. Laucke, M. Richter, T. Jahnke, A. Kalinin, M. Schöffler, L. P. H. Schmidt, et al., Nat. Photonics 10, 526 (2016).
* Trabert et al. (2018) D. Trabert, A. Hartung, S. Eckart, F. Trinter, A. Kalinin, M. Schöffler, L. P. H. Schmidt, T. Jahnke, M. Kunitski, and R. Dörner, Phys. Rev. Lett. 120, 043202 (2018).
* Liu et al. (2018) M.-M. Liu, Y. Shao, M. Han, P. Ge, Y. Deng, C. Wu, Q. Gong, and Y. Liu, Phys. Rev. Lett. 120, 043201 (2018).
* Bauer et al. (2014) J. H. Bauer, F. Mota-Furtado, P. F. O’Mahony, B. Piraux, and K. Warda, Phys. Rev. A 90, 063402 (2014).
* Zhu et al. (2016) X. Zhu, P. Lan, K. Liu, Y. Li, X. Liu, Q. Zhang, I. Barth, and P. Lu, Opt. Express 24, 4196 (2016).
* Xu et al. (2020) S. Xu, Q. Zhang, X. Fu, X. Huang, X. Han, M. Li, W. Cao, and P. Lu, Phys. Rev. A 102, 063128 (2020).
* Zimmermann et al. (2017) H. Zimmermann, S. Patchkovskii, M. Ivanov, and U. Eichmann, Phys. Rev. Lett. 118, 013003 (2017).
* Bargmann et al. (1959) V. Bargmann, L. Michel, and V. L. Telegdi, Phys. Rev. Lett. 2, 435 (1959).
* Sprangle et al. (1990) P. Sprangle, E. Esarey, and A. Ting, Phys. Rev. Lett. 64, 2011 (1990).
* Schmerge et al. (2015) J. F. Schmerge, A. Brachmann, D. Dowell, A. Fry, R. K. Li, Z. Li, T. Raubenheimer, T. Vecchione, F. Zhou, /SLAC, et al., Tech. Rep., SLAC National Accelerator Lab., Menlo Park, CA (United States) (2015), URL https://www.osti.gov/biblio/1169455.
* Clayton et al. (2016) C. E. Clayton, E. Adli, J. Allen, W. An, C. I. Clarke, S. Corde, J. Frederico, S. Gessner, S. Z. Green, M. J. Hogan, et al., Nat. Commun. 7, 12483 (2016).
* Li et al. (2013) F. Li, J. F. Hua, X. L. Xu, C. J. Zhang, L. X. Yan, Y. C. Du, W. H. Huang, H. B. Chen, C. X. Tang, W. Lu, et al., Phys. Rev. Lett. 111, 015003 (2013).
* Hidding et al. (2012) B. Hidding, G. Pretzler, J. B. Rosenzweig, T. Königstein, D. Schiller, and D. L. Bruhwiler, Phys. Rev. Lett. 108, 035001 (2012).
* Kaushal and Smirnova (2018a) J. Kaushal and O. Smirnova, Journal of Physics B: Atomic, Molecular and Optical Physics 51, 174001 (2018a).
* Kaushal and Smirnova (2018b) J. Kaushal and O. Smirnova, Journal of Physics B: Atomic, Molecular and Optical Physics 51, 174003 (2018b).
* Xu et al. (2014) X. L. Xu, Y. P. Wu, C. J. Zhang, F. Li, Y. Wan, J. F. Hua, C.-H. Pai, W. Lu, P. Yu, C. Joshi, et al., Phys. Rev. ST Accel. Beams 17, 061301 (2014), URL https://link.aps.org/doi/10.1103/PhysRevSTAB.17.061301.
# $2$-generated axial algebras of Monster type $(2\beta,\beta)$
Clara Franchi, Mario Mainardis, Sergey Shpectorov
###### Abstract.
In this paper we prove that $2$-generated primitive axial algebras of Monster
type $(2\beta,\beta)$ over a ring $R$ in which $2$ and $\beta$ are invertible
can be generated as $R$-modules by $8$ vectors. We then completely classify
$2$-generated primitive axial algebras of Monster type $(2\beta,\beta)$ over
any field of characteristic other than $2$.
## 1\. Introduction
Axial algebras constitute a class of commutative non-associative algebras
generated by certain idempotent elements (called axes) such that their adjoint
action is semisimple and the relative eigenvectors satisfy a prescribed fusion
law. Let $R$ be a ring, $\\{\alpha,\beta\\}\subseteq R\setminus\\{0,1\\}$ and
$\alpha\neq\beta$. An axial algebra over $R$ is called of Monster type
$(\alpha,\beta)$ if it satisfies the fusion law $\mathcal{M}(\alpha,\beta)$
given in Table 1.
$\begin{array}[]{|c||c|c|c|c|}\hline\cr\star&1&0&\alpha&\beta\\\
\hline\cr\hline\cr 1&1&\emptyset&\alpha&\beta\\\ \hline\cr
0&\emptyset&0&\alpha&\beta\\\ \hline\cr\alpha&\alpha&\alpha&1,0&\beta\\\
\hline\cr\beta&\beta&\beta&\beta&1,0,\alpha\\\ \hline\cr\end{array}$ Table 1.
Fusion law $\mathcal{M}(\alpha,\beta)$
This means that the adjoint action of every axis has spectrum
$\\{1,0,\alpha,\beta\\}$ and, for any two eigenvectors $v_{\gamma}$,
$v_{\delta}$ with relative eigenvalues
$\gamma,\delta\in\\{1,0,\alpha,\beta\\}$, the product $v_{\gamma}\cdot
v_{\delta}$ is a sum of eigenvectors relative to eigenvalues contained in
$\gamma\star\delta$. This class was introduced by J. Hall, F. Rehren and S.
Shpectorov [8] in order to axiomatise some key features of many important
classes of algebras, such as the weight-2 components of OZ-type vertex
operator algebras, Jordan algebras and Matsuo algebras (see the introductions
of [8], [15] and [5]). They are also of particular interest for finite group
theorists, as most of the finite simple groups, or groups closely related to them, can be
faithfully and effectively represented as automorphism groups of these
algebras.
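As a machine-checkable aside (a sketch, not part of the paper's formal development), the fusion law of Table 1 is easily encoded, together with the $\mathbb{Z}_2$-grading $\mathcal{S}^{+}=\{1,0,\alpha\}$, $\mathcal{S}^{-}=\{\beta\}$ exploited in Section 2:

```python
# Fusion law M(alpha, beta) of Table 1, keyed by eigenvalue pairs;
# 'a' stands for alpha and 'b' for beta, and the empty set encodes
# the entry marked by the empty-set symbol in the table.
FUSION = {
    (1, 1): {1},        (1, 0): set(),    (1, 'a'): {'a'},   (1, 'b'): {'b'},
    (0, 0): {0},        (0, 'a'): {'a'},  (0, 'b'): {'b'},
    ('a', 'a'): {1, 0}, ('a', 'b'): {'b'},
    ('b', 'b'): {1, 0, 'a'},
}
for (x, y), v in list(FUSION.items()):  # the law is symmetric
    FUSION[(y, x)] = v

# Z2-grading: products within S+ or within S- land in S+,
# while mixed products land in S-.
S_PLUS, S_MINUS = {1, 0, 'a'}, {'b'}
graded = all(
    FUSION[(x, y)] <= (S_PLUS if (x in S_PLUS) == (y in S_PLUS) else S_MINUS)
    for x in S_PLUS | S_MINUS for y in S_PLUS | S_MINUS
)
```

The flag `graded` confirms the grading that makes the Miyamoto involutions of Section 2 well defined.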
In [15, 14], F. Rehren proved that every $2$-generated primitive axial
algebra of Monster type $(\alpha,\beta)$ over a ring $R$ in which $2$,
$\alpha$, $\beta$, $\alpha-\beta$, $\alpha-2\beta$, and $\alpha-4\beta$ are
invertible can be generated as an $R$-module by $8$ vectors, and computed the
structure constants with respect to these elements. This result has been re-
proved in a slightly simpler way by the authors in [3], where a bound for the
special case of $2$-generated primitive axial algebras of Monster type
$(4\beta,\beta)$ over a field of odd characteristic was also obtained under
the hypothesis that $\beta\neq 1/2$ and the algebra is symmetric, i.e. the map
swapping the generating axes induces an automorphism of the entire algebra. On
the other hand, in [4] an example of a $2$-generated symmetric primitive axial
algebra of Monster type $(2,\frac{1}{2})$ of infinite dimension is
constructed.
In this paper, we focus on $2$-generated primitive axial algebras of Monster
type $(2\beta,\beta)$ and we assume that $R$ has characteristic other than
$2$. Denote by $R_{0}$ the prime subring of $R$ and let
$R_{0}[\frac{1}{2},\beta,\frac{1}{\beta}][x,y,z,t]$ be the polynomial ring in
$4$ variables over $R_{0}[\frac{1}{2},\beta,\frac{1}{\beta}]$. We prove the
following result.
###### Theorem 1.1.
There exists a subset $T\subseteq
R_{0}[\frac{1}{2},\beta,\frac{1}{\beta}][x,y,z,t]$ of size $4$, depending only
on $R_{0}$ and $\beta$, such that every $2$-generated primitive axial algebra
of Monster type $(2\beta,\beta)$ over $R$ is completely determined, up to
homomorphic images, by a quadruple $(x_{0},y_{0},z_{0},t_{0})\in R^{4}$ which
is a common zero of all the elements of $T$. In particular, every
$2$-generated primitive axial algebra of Monster type $(2\beta,\beta)$ over
$R$ is linearly spanned by at most $8$ vectors.
In Section 4, we give explicitly some of the polynomials of the set $T$ and
show how to compute those that are too long to be written down here. In the
symmetric case, the knowledge of the set $T$ is enough to obtain all
quadruples $(x_{0},y_{0},z_{0},t_{0})$ corresponding to primitive axial
algebras of Monster type $(2\beta,\beta)$ over any field ${\mathbb{F}}$ of
characteristic other than $2$. In Section 5 we give a complete classification
of these algebras (Theorem 5.7). Note that, in this case, our results confirm
those of T. Yabe [17] but were obtained independently of them.
In Section 6, we classify the non-symmetric algebras. In [5], A. Galt, V.
Joshi, A. Mamontov, S. Shpectorov and A. Staroletov introduced the concept of
double axis in a Matsuo algebra $M_{\eta}(\Gamma)$. They proved that double
axes satisfy the fusion law $\mathcal{M}(2\eta,\eta)$ and classified the
primitive subalgebras of $M_{\eta}(\Gamma)$ generated by two double axes or by
an axis and a double axis: besides the algebras of Jordan type $1A$, $2B$,
$3C(\eta)$ and $3C(2\eta)$, they found three new algebras of dimensions $4$,
$5$ and $8$ which we refer to by $Q_{2}(\eta)$, $V_{5}(\eta)$, and
$V_{8}(\eta)$, respectively.
###### Theorem 1.2.
Let $V$ be a $2$-generated primitive axial algebra of Monster type
$(2\beta,\beta)$ over a field ${\mathbb{F}}$ of characteristic other than $2$.
Then, either $V$ is symmetric (and it is described in Theorem 5.7), or $V$ is
isomorphic to $Q_{2}(\beta)$ or to its $3$-dimensional quotient when
$\beta=-\frac{1}{2}$.
## 2\. Basics
We start by recalling the definition and basic features of axial algebras. Let
$R$ be a ring with identity where $2$ is invertible and let $\mathcal{S}$ be a
finite subset of $R$ with $1\in\mathcal{S}$. A fusion law on $\mathcal{S}$ is
a map
$\star\colon\mathcal{S}\times\mathcal{S}\to 2^{\mathcal{S}}.$
An axial algebra over $R$ with spectrum $\mathcal{S}$ and fusion law $\star$
is a commutative non-associative $R$-algebra $V$ generated by a set
$\mathcal{A}$ of nonzero idempotents (called axes) such that, for each
$a\in{\mathcal{A}}$,
1. (Ax1)
${\rm ad}_{a}\colon v\mapsto av$ is a semisimple endomorphism of $V$ with spectrum
contained in $\mathcal{S}$;
2. (Ax2)
for every $\lambda,\mu\in\mathcal{S}$, the product of a $\lambda$-eigenvector
and a $\mu$-eigenvector of ${\rm ad}_{a}$ is the sum of $\delta$-eigenvectors,
for $\delta\in\lambda\star\mu$.
Furthermore, $V$ is called primitive if
1. (Ax3)
$V_{1}^{a}=\langle a\rangle$.
An axial algebra over $R$ is said to be of Monster type $(\alpha,\beta)$ if it
satisfies the fusion law $\mathcal{M}(\alpha,\beta)$ given in Table 1, with
$\alpha,\beta\in R\setminus\\{0,1\\}$ and $\alpha\neq\beta$.
Let $V$ be an axial algebra of Monster type $(\alpha,\beta)$ and let
$a\in\mathcal{A}$. Let ${\mathcal{S}}^{+}:=\\{1,0,\alpha\\}$ and
${\mathcal{S}}^{-}:=\\{\beta\\}$. The partition
$\\{{\mathcal{S}}^{+},{\mathcal{S}^{-}}\\}$ of $\mathcal{S}$ induces a
${\mathbb{Z}}_{2}$-grading on ${\mathcal{S}}$ which, in turn, induces a
${\mathbb{Z}}_{2}$-grading $\\{V_{+}^{a},V_{-}^{a}\\}$ on $V$ where
$V_{+}^{a}:=V_{1}^{a}+V_{0}^{a}+V_{\alpha}^{a}$ and $V_{-}^{a}=V_{\beta}^{a}$.
It follows that, if $\tau_{a}$ is the map from $R\cup V$ to $R\cup V$ such
that $\tau_{a|_{V}}$ is the multiplication by $-1$ on $V_{\beta}^{a}$ and
leaves invariant the elements of $V_{+}^{a}$ and $\tau_{a|_{R}}$ is the
identity, then $\tau_{a}$ is an involutory automorphism of $V$ (see [8,
Proposition 3.4]). The map $\tau_{a}$ is called the Miyamoto involution
associated to the axis $a$. By definition of $\tau_{a}$, the element
$av-{\beta}v$ of $V$ is $\tau_{a}$-invariant and, since $a$ lies in
$V_{+}^{a}\leq C_{V}(\tau_{a})$, also $av-{\beta}(a+v)$ is
$\tau_{a}$-invariant. In particular, by symmetry,
###### Lemma 2.1.
Let $a$ and $b$ be axes of $V$. Then $ab-\beta(a+b)$ is fixed by the
2-generated group $\langle\tau_{a},\tau_{b}\rangle$.
If $V$ is generated by the set of axes $\mathcal{A}:=\\{a_{0},a_{1}\\}$, for
$i\in\\{0,1\\}$, let $\tau_{i}$ be the Miyamoto involution associated to
$a_{i}$. Set $\rho:=\tau_{0}\tau_{1}$, and for $i\in{\mathbb{Z}}$,
$a_{2i}:=a_{0}^{\rho^{i}}$ and $a_{2i+1}:=a_{1}^{\rho^{i}}$. Since $\rho$ is
an automorphism of $V$, for every $j\in{\mathbb{Z}}$, $a_{j}$ is an axis.
Denote by $\tau_{j}:=\tau_{a_{j}}$ the corresponding Miyamoto involution.
###### Lemma 2.2.
For every $n\in{\mathbb{N}}$, and $i,j\in{\mathbb{Z}}$ such that $i\equiv
j\>\bmod n$ we have
$a_{i}a_{i+n}-\beta(a_{i}+a_{i+n})=a_{j}a_{j+n}-\beta(a_{j}+a_{j+n}).$
###### Proof.
This follows immediately from Lemma 2.1. ∎
For $n\in{\mathbb{N}}$ and $r\in\\{0,\ldots,n-1\\}$ set
(1) $s_{r,n}:=a_{r}a_{r+n}-\beta(a_{r}+a_{r+n}).$
If $\\{0,1,\alpha,\beta\\}$ are pairwise distinguishable in $R$, i.e.
$\alpha$, $\beta$, $\alpha-1$, $\beta-1$, and $\alpha-\beta$ are invertible in
$R$, by [3, Proposition 2.4], for every $a\in\mathcal{A}$, there is a function
$\lambda_{a}:V\to R$, such that every $v\in V$ can be written as
$v=\lambda_{a}(v)a+u\mbox{ with }u\in\bigoplus_{\delta\neq 1}V_{\delta}^{a}.$
For $i\in{\mathbb{Z}}$, let
(2) $a_{i}=\lambda_{a_{0}}(a_{i})a_{0}+u_{i}+v_{i}+w_{i}$
be the decomposition of $a_{i}$ into $ad_{a_{0}}$-eigenvectors, where $u_{i}$
is a $0$-eigenvector, $v_{i}$ is an $\alpha$-eigenvector and $w_{i}$ is a
$\beta$-eigenvector. From now on we assume $0,1,\alpha,\beta$ are pairwise
distinguishable in $R$.
###### Lemma 2.3.
With the above notation,
1. (1)
$u_{i}=\frac{1}{\alpha}((\lambda_{a_{0}}(a_{i})-\beta-\alpha\lambda_{a_{0}}(a_{i}))a_{0}+\frac{1}{2}(\alpha-\beta)(a_{i}+a_{-i})-s_{0,i})$;
2. (2)
$v_{i}=\frac{1}{\alpha}((\beta-\lambda_{a_{0}}(a_{i}))a_{0}+\frac{\beta}{2}(a_{i}+a_{-i})+s_{0,i})$;
3. (3)
$w_{i}=\frac{1}{2}(a_{i}-a_{-i})$.
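As a quick sanity check (not part of the formal argument), the three components above must recombine with the term $\lambda_{a_{0}}(a_{i})a_{0}$ to give back $a_{i}$, as in display (2); this can be verified numerically with exact rational arithmetic, treating $a_{0}$, $a_{i}$, $a_{-i}$ and $s_{0,i}$ as formal basis vectors:

```python
# Sanity check of Lemma 2.3: with a_0, a_i, a_{-i} and s_{0,i} treated as formal
# basis vectors (dicts of coefficients), lambda*a_0 + u_i + v_i + w_i must equal a_i.
from fractions import Fraction as F

def check(al, be, lam):
    a0, ai, ami, s = {'a0': F(1)}, {'ai': F(1)}, {'ami': F(1)}, {'s': F(1)}

    def comb(*terms):  # linear combination of (coefficient, vector) pairs
        out = {}
        for c, v in terms:
            for k, x in v.items():
                out[k] = out.get(k, F(0)) + c * x
        return out

    u = comb(((lam - be - al*lam)/al, a0), ((al - be)/(2*al), ai),
             ((al - be)/(2*al), ami), (F(-1)/al, s))
    v = comb(((be - lam)/al, a0), (be/(2*al), ai), (be/(2*al), ami), (F(1)/al, s))
    w = comb((F(1, 2), ai), (F(-1, 2), ami))
    total = comb((lam, a0), (F(1), u), (F(1), v), (F(1), w))
    return all(total.get(k, F(0)) == ai.get(k, F(0)) for k in set(total) | set(ai))

# generic values: one pair with alpha = 2*beta as in Section 3, one unrelated pair
assert check(F(2, 3), F(1, 3), F(5, 7)) and check(F(3, 5), F(1, 7), F(2, 9))
```

The identity holds for any invertible $\alpha$, which is why the check succeeds for both parameter choices.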
###### Lemma 2.4.
Let $I$ be an ideal of $V$, $a$ an axis of $V$, $x\in V$ and let
$x=x_{1}+x_{0}+x_{\alpha}+x_{\beta}$
be the decomposition of $x$ as sum of ${\rm ad}_{a}$-eigenvectors. If $x\in
I$, then $x_{1},x_{0},x_{\alpha},x_{\beta}\in I$. Moreover, $I$ is
$\tau_{a}$-invariant.
###### Proof.
Suppose $x\in I$. Then $I$ contains the vectors
$\displaystyle x-ax$ $\displaystyle=$ $\displaystyle
x_{0}+(1-\alpha)x_{\alpha}+(1-\beta)x_{\beta},$ $\displaystyle a(x-ax)$
$\displaystyle=$
$\displaystyle\alpha(1-\alpha)x_{\alpha}+\beta(1-\beta)x_{\beta},$
$\displaystyle a(a(x-ax))-\beta a(x-ax)$ $\displaystyle=$
$\displaystyle\alpha(\alpha-\beta)(1-\alpha)x_{\alpha}.$
Since $0,1,\alpha,\beta$ are pairwise distinguishable in $R$, it follows that
$I$ contains $x_{1}$, $x_{0}$, $x_{\alpha}$, $x_{\beta}$. Since
$x^{\tau_{a}}=x_{1}+x_{0}+x_{\alpha}-x_{\beta}\in I$, the last assertion
follows. ∎
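The eliminations in this proof can also be checked numerically, modelling ${\rm ad}_{a}$ as multiplication by the eigenvalue on each component of $x$ (a standard-library sketch with arbitrary generic parameter values):

```python
# Numeric check of the eliminations in the proof of Lemma 2.4: represent
# x = x_1 + x_0 + x_alpha + x_beta by its tuple of component sizes, and let
# ad_a scale each component by its eigenvalue (1, 0, alpha, beta).
from fractions import Fraction as F

al, be = F(2, 3), F(1, 5)           # generic pairwise-distinguishable values
x = (F(3), F(-2), F(7), F(11))      # arbitrary eigencomponent coefficients

def ad(c):
    return (c[0], F(0), al*c[2], be*c[3])

y1 = tuple(a - b for a, b in zip(x, ad(x)))        # x - a*x
y2 = ad(y1)                                        # a*(x - a*x)
y3 = tuple(a - be*b for a, b in zip(ad(y2), y2))   # a(a(x-ax)) - beta*a(x-ax)

assert y1 == (F(0), x[1], (1 - al)*x[2], (1 - be)*x[3])
assert y2 == (F(0), F(0), al*(1 - al)*x[2], be*(1 - be)*x[3])
assert y3 == (F(0), F(0), al*(al - be)*(1 - al)*x[2], F(0))
```

The last assertion confirms that the final combination isolates the $\alpha$-component, exactly as used to conclude that $x_{\alpha}\in I$.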
## 3\. The multiplication table
From now on we assume that $\alpha=2\beta$, $\\{1,0,2\beta,\beta\\}$ is a set
of pairwise distinguishable elements in $R$, and $2$ is invertible in $R$. Let
${\overline{V}}$ be the universal $2$-generated primitive axial algebra over
the ring ${\overline{R}}$ as defined in [3] and let
$\mathcal{A}:=\\{a_{0},a_{1}\\}$ be its generating set of axes. That is,
${\overline{R}}$ and ${\overline{V}}$ are defined as follows
* -
$D$ is the polynomial ring
${{\mathbb{Z}}}[x_{i},y_{i},w_{i},t_{1}\>|\>i\in\\{1,2\\}],$
where $x_{i},y_{i},w_{i},t_{1}$ are algebraically independent
indeterminates over ${\mathbb{Z}}$, for $i\in\\{1,2\\}$;
* -
$L$ is the ideal of $D$ generated by the set
$\Sigma:=\\{x_{i}y_{i}-1,\>(1-x_{i})w_{i}-1,\>2t_{1}-1,\>x_{1}-2x_{2}\>|\>i\in\\{1,2\\}\\};$
* -
$\hat{D}:=D/L$. For $d\in D$, we denote the element $L+d$ by $\hat{d}$.
* -
$W$ is the free commutative magma generated by the elements of $\mathcal{A}$
subject to the condition that every element of $\mathcal{A}$ is idempotent;
* -
${\hat{R}}:={\hat{D}}[\Lambda]$ is the ring of polynomials with coefficients
in $\hat{D}$ and indeterminates set
$\Lambda:=\\{\lambda_{c,w}\>|\>c\in\mathcal{A},w\in W,c\neq w\\}$ where
$\lambda_{c,w}=\lambda_{c^{\prime},w^{\prime}}$ if and only if $c=c^{\prime}$
and $w=w^{\prime}$.
* -
${\hat{V}}:={\hat{R}}[W]$ is the set of all formal linear combinations
$\sum_{w\in W}\gamma_{w}w$ of the elements of $W$ with coefficients in
${\hat{R}}$, with only finitely many coefficients different from zero. Endow
${\hat{V}}$ with the usual structure of a commutative non-associative
${\hat{R}}$-algebra.
For $i\in{\mathbb{Z}}$, set
$\lambda_{i}:=\lambda_{a_{0}}(a_{i}).$
By Corollary 3.7 in [3], the permutation that swaps $a_{0}$ with $a_{1}$
induces an automorphism $f$ of ${\overline{V}}$ such that
$\lambda_{a_{1}}(a_{0})=\lambda_{1}^{f},\mbox{ and
}\>\>\lambda_{a_{1}}(a_{-1})=\lambda_{2}^{f}.$
Set $T_{0}:=\langle\tau_{0},\tau_{1}\rangle$ and
$T:=\langle\tau_{0},f\rangle$.
###### Lemma 3.1.
The groups $T_{0}$ and $T$ are dihedral groups, $T_{0}$ is a normal subgroup
of $T$ such that $|T:T_{0}|\leq 2$. For every $n\in{\mathbb{N}}$, the set
$\\{s_{0,n},\ldots,s_{n-1,n}\\}$ is invariant under the action of $T$. In
particular, if $K_{n}$ is the kernel of this action, we have
1. (1)
$K_{1}=T$;
2. (2)
$K_{2}=T_{0}$, in particular $s_{0,2}^{f}=s_{1,2}$;
3. (3)
$T/K_{3}$ induces the full permutation group on the set
$\\{s_{0,3},s_{1,3},s_{2,3}\\}$ with point stabilisers generated by
$\tau_{0}K_{3}$, $\tau_{1}K_{3}$ and $fK_{3}$, respectively. In particular
$s_{0,3}^{f}=s_{1,3}$ and $s_{0,3}^{\tau_{1}}=s_{2,3}$.
###### Proof.
This follows immediately from the definitions. ∎
For $i,j\in\\{1,2,3\\}$, with the notation fixed before Lemma 2.3, set
$P_{ij}:=u_{i}u_{j}+u_{i}v_{j}\>\>\mbox{ and
}\>\>Q_{ij}:=u_{i}v_{j}-\frac{1}{\alpha^{2}}s_{0,i}s_{0,j}.$
###### Lemma 3.2.
For $i,j\in\\{1,2,3\\}$ we have
(3) $s_{0,i}\cdot s_{0,j}=\alpha(a_{0}P_{ij}-\alpha Q_{ij}).$
###### Proof.
Since $u_{i}$ and $v_{j}$ are a $0$-eigenvector and an $\alpha$-eigenvector
for ${\rm ad}_{a_{0}}$, respectively, by the fusion rule, we have
$a_{0}P_{ij}=\alpha(u_{i}\cdot v_{j})$ and the result follows. ∎
The following polynomial will play a crucial rôle in the classification of the
non-symmetric algebras in Section 6:
$Z(x,y):=\frac{2}{\beta}x+\frac{(2\beta-1)}{\beta^{2}}y-\frac{(4\beta-1)}{\beta}.$
###### Lemma 3.3.
In ${\overline{V}}$ the following equalities hold:
$\displaystyle s_{0,2}$ $\displaystyle=$
$\displaystyle-\frac{\beta}{2}(a_{-2}+a_{2})+\beta
Z(\lambda_{1},\lambda_{1}^{f})(a_{1}+a_{-1})$ $\displaystyle-$
$\displaystyle\left[2Z(\lambda_{1},\lambda_{1}^{f})(\lambda_{1}-\beta)-(\lambda_{2}-\beta)\right]a_{0}+2Z(\lambda_{1},\lambda_{1}^{f})s_{0,1}.$
and
$\displaystyle s_{1,2}$ $\displaystyle=$
$\displaystyle-\frac{\beta}{2}(a_{-1}+a_{3})+\beta
Z(\lambda_{1}^{f},\lambda_{1})(a_{0}+a_{2})$ $\displaystyle-$
$\displaystyle\left[2Z(\lambda_{1}^{f},\lambda_{1})(\lambda_{1}^{f}-\beta)-(\lambda_{2}^{f}-\beta)\right]a_{1}+2Z(\lambda_{1}^{f},\lambda_{1})s_{0,1}.$
###### Proof.
Since $\alpha=2\beta$, from the first formula in [3, Lemma 4.7] we deduce the
expression for $s_{0,2}$. The expression for $s_{1,2}$ follows by applying
$f$. ∎
###### Lemma 3.4.
In ${\overline{V}}$ we have
$\displaystyle a_{4}=a_{-2}$ $\displaystyle-$ $\displaystyle
2Z(\lambda_{1},\lambda_{1}^{f})(a_{-1}-a_{3})$ $\displaystyle+$
$\displaystyle\frac{1}{\beta}\left[4Z(\lambda_{1},\lambda_{1}^{f})\left(\lambda_{1}-\beta\right)-\left(2\lambda_{2}-\beta\right)\right](a_{0}-a_{2})$
and
$\displaystyle a_{-3}=a_{3}$ $\displaystyle-$ $\displaystyle
2Z(\lambda_{1}^{f},\lambda_{1})(a_{2}-a_{-2})$ $\displaystyle+$
$\displaystyle\frac{1}{\beta}\left[4Z(\lambda_{1}^{f},\lambda_{1})\left(\lambda_{1}^{f}-\beta\right)-\left(2\lambda_{2}^{f}-\beta\right)\right](a_{1}-a_{-1}).$
###### Proof.
Since $s_{0,2}$ is invariant under $\tau_{1}$, we have
$s_{0,2}-s_{0,2}^{\tau_{1}}=0$. On the other hand, in the expression
$s_{0,2}-s_{0,2}^{\tau_{1}}$ obtained from the first formula of Lemma 3.3, the
coefficient of $a_{4}$ is $-\beta/2$, which is invertible in ${\overline{R}}$.
Hence we deduce the expression for $a_{4}$. By applying the map $f$ to the
expression for $a_{4}$ we get the expression for $a_{-3}$. ∎
###### Lemma 3.5.
In ${\overline{V}}$ we have
$\displaystyle s_{0,1}s_{0,1}=$
$\displaystyle+\frac{\beta^{2}}{4}Z(\lambda_{1}^{f},\lambda_{1})(a_{-2}+a_{2})$
$\displaystyle+\frac{1}{2}\left[-2(2\beta-1)(\lambda_{1}^{2}+{\lambda_{1}^{f}}^{2})-\frac{(8\beta^{2}-4\beta+1)}{\beta}\lambda_{1}\lambda_{1}^{f}+(16\beta^{2}-7\beta+1)\lambda_{1}\right.$
$\displaystyle\>\>\>\>\>\>\left.+(14\beta^{2}-8\beta+1)\lambda_{1}^{f}-\beta(14\beta^{2}-7\beta+1)\right](a_{-1}+a_{1})$
$\displaystyle+\left[\frac{2(2\beta-1)}{\beta}\lambda_{1}^{3}+\frac{(8\beta^{2}-4\beta+1)}{\beta^{2}}\lambda_{1}^{2}\lambda_{1}^{f}+\frac{2(2\beta-1)}{\beta}\lambda_{1}{\lambda_{1}^{f}}^{2}-(18\beta-6)\lambda_{1}^{2}\right.$
$\displaystyle\>\>\>\>\>\>\left.-\frac{2(10\beta^{2}-5\beta+1)}{\beta}\lambda_{1}\lambda_{1}^{f}-2(\beta-1){\lambda_{1}^{f}}^{2}-\frac{(2\beta-1)}{2}\lambda_{1}\lambda_{2}-\beta\lambda_{1}^{f}\lambda_{2}\right.$
$\displaystyle\>\>\>\>\>\>\left.+\frac{(54\beta^{2}-17\beta+1)}{2}\lambda_{1}+(9\beta^{2}-6\beta+1)\lambda_{1}^{f}+\frac{\beta(5\beta-1)}{2}\lambda_{2}-\frac{\beta^{2}}{2}\lambda_{2}^{f}\right.$
$\displaystyle\>\>\>\>\>\>\left.-\frac{\beta(24\beta^{2}-9\beta+1)}{2}\right]a_{0}$
$\displaystyle+\left[-\frac{2(2\beta-1)}{\beta}\lambda_{1}^{2}-\frac{(6\beta^{2}-3\beta+1)}{\beta^{2}}\lambda_{1}\lambda_{1}^{f}-\frac{2(\beta-1)}{\beta}{\lambda_{1}^{f}}^{2}+\frac{(16\beta^{2}-7\beta+1)}{\beta}\lambda_{1}\right.$
$\displaystyle\>\>\>\>\>\>\>\left.+\frac{(10\beta^{2}-7\beta+1)}{\beta}\lambda_{1}^{f}-\frac{\beta}{2}\lambda_{2}^{f}-\frac{(57\beta^{2}+26\beta-4)}{4}\right]s_{0,1}$
$\displaystyle+\frac{\beta^{2}}{4}s_{0,3}.$
###### Proof.
By Lemma 3.3, Lemma 3.4, and Lemma 4.3 in [3], we may compute the expression
on the right hand side of the formula in Lemma 3.2, with $i=j=1$, and the
result follows. ∎
###### Lemma 3.6.
In ${\overline{V}}$ we have
$\displaystyle s_{1,3}=s_{0,3}+\beta
Z(\lambda_{1}^{f},\lambda_{1})a_{-2}-\beta
Z(\lambda_{1},\lambda_{1}^{f})a_{3}$
$\displaystyle+\frac{1}{\beta^{3}}\left[-4\beta(2\beta-1)(\lambda_{1}^{2}+{\lambda_{1}^{f}}^{2})-2(8\beta^{2}-4\beta+1)\lambda_{1}\lambda_{1}^{f}+2\beta(15\beta^{2}-7\beta+1)\lambda_{1}\right.$
$\displaystyle\left.\>\>\>\>\>\>\>+\beta(26\beta^{2}-15\beta+2)\lambda_{1}^{f}-\beta^{2}(24\beta^{2}-13\beta+2)\right]a_{-1}$
$\displaystyle+\frac{1}{\beta^{4}}\left[8\beta(2\beta-1)\lambda_{1}^{3}+4(8\beta^{2}-4\beta+1)\lambda_{1}^{2}\lambda_{1}^{f}+8\beta(2\beta-1)\lambda_{1}{\lambda_{1}^{f}}^{2}\right.$
$\displaystyle\>\>\>\>\>\>\left.-4\beta^{2}(15\beta-5)\lambda_{1}^{2}-2\beta(32\beta^{2}-16\beta+3)\lambda_{1}\lambda_{1}^{f}+4\beta^{2}{\lambda_{1}^{f}}^{2}-2\beta^{2}(2\beta+1)\lambda_{1}\lambda_{2}\right.$
$\displaystyle\left.\>\>\>\>\>\>-4\beta^{3}\lambda_{1}^{f}\lambda_{2}+2\beta^{3}(40\beta-9)\lambda_{1}+2\beta^{2}(2\beta^{2}-5\beta+1)\lambda_{1}^{f}+2\beta^{3}(5\beta-1)\lambda_{2}\right.$
$\displaystyle\left.\>\>\>\>\>\>-2\beta^{4}\lambda_{2}^{f}-4\beta^{4}(5\beta-1)\right]a_{0}$
$\displaystyle+\frac{1}{\beta^{4}}\left[-8\beta(2\beta-1)\lambda_{1}^{2}\lambda_{1}^{f}-4(8\beta^{2}-2\beta+1)\lambda_{1}{\lambda_{1}^{f}}^{2}-8\beta(2\beta-1){\lambda_{1}^{f}}^{3}-4\beta^{2}\lambda_{1}^{2}\right.$
$\displaystyle\left.\>\>\>\>\>\>+2\beta(32\beta^{2}-16\beta+3)\lambda_{1}\lambda_{1}^{f}+4\beta^{2}(16\beta-5){\lambda_{1}^{f}}^{2}+4\beta^{3}\lambda_{1}\lambda_{2}^{f}\right.$
$\displaystyle\left.\>\>\>\>\>\>\>+2\beta^{2}(2\beta-1)\lambda_{1}^{f}\lambda_{2}^{f}-2\beta^{2}(2\beta^{2}+5\beta+1)\lambda_{1}-2\beta^{3}(40\beta-9)\lambda_{1}^{f}+2\beta^{4}\lambda_{2}\right.$
$\displaystyle\left.\>\>\>\>\>\>-2\beta^{3}(5\beta-1)\lambda_{2}^{f}+4\beta^{4}(5\beta-1)\right]a_{1}$
$\displaystyle+\frac{1}{\beta^{3}}\left[4\beta(2\beta-1)\lambda_{1}^{2}+2(8\beta^{2}-4\beta+1)\lambda_{1}\lambda_{1}^{f}+\beta(8\beta-4){\lambda_{1}^{f}}^{2}\right.$
$\displaystyle\left.\>\>\>\>\>-\beta(26\beta^{2}-15\beta+2)\lambda_{1}-2\beta(15\beta^{2}-7\beta+1)\lambda_{1}^{f}+\beta^{2}(24\beta^{2}-13\beta+2)\right]a_{2}$
$\displaystyle+\frac{1}{\beta^{2}}\left[-8(\lambda_{1}^{2}-{\lambda_{1}^{f}}^{2})+24\beta(\lambda_{1}-\lambda_{1}^{f})+2\beta(\lambda_{2}-\lambda_{2}^{f})\right]s_{0,1}.$
Similarly, $s_{2,3}$ belongs to the linear span of the elements $a_{-2}$,
$a_{-1}$, $a_{0}$, $a_{1}$, $a_{2}$, $a_{3}$, $s_{0,1}$, and $s_{0,3}$.
###### Proof.
Since, by Lemma 3.1, $s_{0,1}$ is invariant under $f$, we have
$s_{0,1}s_{0,1}-(s_{0,1}s_{0,1})^{f}=0$. Comparing the expressions for
$s_{0,1}s_{0,1}$ and $(s_{0,1}s_{0,1})^{f}$ obtained from Lemma 3.5, we deduce
the expression for $s_{1,3}$. By applying the map $\tau_{0}$ to the expression
for $s_{1,3}$ we get the expression for $s_{2,3}$ and the last assertion
follows from Lemma 3.4. ∎
As a consequence of the resurrection principle [11, Lemma 1.7], we can now
prove the following result, which, by Theorem 3.6 and Corollary 3.8 in [3],
implies the second part of Theorem 1.1. We use double angular brackets to
denote algebra generation and single angular brackets for linear span.
###### Proposition 3.7.
${\overline{V}}=\langle
a_{-2},a_{-1},a_{0},a_{1},a_{2},a_{3},s_{0,1},s_{0,3}\rangle$.
###### Proof.
Set $U:=\langle a_{-2},a_{-1},a_{0},a_{1},a_{2},a_{3},s_{0,1},s_{0,3}\rangle$.
By Lemma 3.4, $a_{4},a_{-3}\in U$, and by Lemma 3.6 also $s_{1,3}$ and
$s_{2,3}$ belong to $U$. It follows that $U$ is invariant under the maps
$\tau_{0}$, $\tau_{1}$, and $f$. Hence, $a_{i}$ belongs to $U$, for every
$i\in{\mathbb{Z}}$. Now we show that $U$ is closed under the algebra product.
Since it is invariant under the maps $\tau_{0}$, $\tau_{1}$, and $f$, it is
enough to show that it is invariant under the action of ${\rm ad}_{a_{0}}$ and
it contains $s_{0,1}s_{0,1}$, $s_{0,3}s_{0,3}$, and $s_{0,1}s_{0,3}$. The
products $a_{0}a_{i}$, for $i\in\\{-2,-1,0,1,2,3\\}$ belong to $U$ by the
definition of $U$ and by Lemma 3.3. By [3, Lemma 4.3], $U$ contains
$a_{0}s_{0,1}$ and $a_{0}s_{0,3}$. The product $s_{0,1}s_{0,1}$ belongs to $U$
by Lemma 3.5 and similarly, by Lemma 2.3 and Lemma 3.2, the products
$s_{0,3}s_{0,3}$, and $s_{0,1}s_{0,3}$ belong to $U$. Hence $U$ is a
subalgebra of ${\overline{V}}$ and, since it contains the generators $a_{0}$
and $a_{1}$, we get $U={\overline{V}}$. ∎
###### Remark 3.8.
Note that the above proof gives a constructive way to compute the structure
constants of the algebra ${\overline{V}}$ relative to the generating set $B$.
This has been done with the use of GAP [6]. The explicit expressions however
are far too long to be written here.
###### Corollary 3.9.
${\overline{R}}$ is generated as a $\hat{D}$-algebra by $\lambda_{1}$,
$\lambda_{2}$, $\lambda_{1}^{f}$, and $\lambda_{2}^{f}$.
###### Proof.
By Proposition 3.7, ${\overline{V}}$ is generated as ${\overline{R}}$-module
by the set $B:=\\{a_{-2},a_{-1},$
$a_{0},a_{1},a_{2},a_{3},s_{0,1},s_{0,3}\\}$. Since
$\lambda_{a_{1}}(v)=(\lambda_{a_{0}}(v^{f}))^{f}$, $\lambda_{a_{0}}$ is a
linear function, and ${\overline{R}}={\overline{R}}^{f}$, we just need to show
that
$\lambda_{a_{0}}(v)\in\hat{D}[\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f}]$
for every $v\in B$. By definition we have
$\lambda_{a_{0}}(a_{0})=1,\>\>\lambda_{a_{0}}(a_{1})=\lambda_{1},\>\>\lambda_{a_{0}}(a_{2})=\lambda_{2},\mbox{
and }\>\>\lambda_{a_{0}}(a_{3})=\lambda_{3}.$
Since $\tau_{0}$ fixes $a_{0}$ and is an ${\overline{R}}$-automorphism of
${\overline{V}}$, we get
$\lambda_{a_{0}}(a_{-1})=\lambda_{a_{0}}((a_{1})^{\tau_{0}})=\lambda_{1},$
$\lambda_{a_{0}}(a_{-2})=\lambda_{a_{0}}((a_{2})^{\tau_{0}})=\lambda_{2},$
and
$\lambda_{a_{0}}(s_{0,1})=\lambda_{a_{0}}(a_{0}a_{1}-\beta a_{0}-\beta
a_{1})=\lambda_{1}-\beta-\beta\lambda_{1},$
and
$\lambda_{a_{0}}(s_{0,3})=\lambda_{a_{0}}(a_{0}a_{3}-\beta a_{0}-\beta
a_{3})=\lambda_{3}-\beta-\beta\lambda_{3}.$
We conclude the proof by showing that
$\lambda_{3}\in\hat{D}[\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f}]$.
Set
$\phi:=u_{1}u_{1}-v_{1}v_{1}-\lambda_{a_{0}}(u_{1}u_{1}-v_{1}v_{1})a_{0}$
and
$z:=\phi-2(2\beta-\lambda_{1})u_{1}.$
Then, by the fusion law, $\phi$ is a $0$-eigenvector for ${\rm ad}_{a_{0}}$
and so $z$ is a $0$-eigenvector for ${\rm ad}_{a_{0}}$ as well. Since
$s_{0,1}$ is $\tau_{0}$-invariant, it lies in ${\overline{V}}^{a_{0}}_{+}$ and
the fusion law implies that the product $zs_{0,1}$ is a sum of a $0$- and an
$\alpha$-eigenvector for ${\rm ad}_{a_{0}}$. In particular,
$\lambda_{a_{0}}(zs_{0,1})=0$. By Remark 3.8 we can compute explicitly the
product $zs_{0,1}$:
$\displaystyle zs_{0,1}=-\frac{\beta^{3}}{4}a_{3}$
$\displaystyle+\frac{\beta}{4}\left[2\beta\lambda_{1}-\lambda_{1}^{f}-\beta(\beta-1)\right]a_{-2}$
$\displaystyle+\left[-\beta^{2}\lambda_{1}^{2}-\frac{(2\beta^{2}+\beta-1)}{2}\lambda_{1}\lambda_{1}^{f}-\frac{(2\beta^{2}-4\beta+1)}{2\beta}{\lambda_{1}^{f}}^{2}+\frac{(4\beta^{3}-\beta^{2}-\beta)}{2}\lambda_{1}\right.$
$\displaystyle\left.+(2\beta^{2}-4\beta+1)\lambda_{1}^{f}+\frac{\beta^{3}}{4}\lambda_{2}-\frac{\beta^{2}}{4}\lambda_{2}^{f}+\frac{\beta(4\beta-1)}{2}\right]a_{-1}$
$\displaystyle+\left[2(2\beta-1)\lambda_{1}^{3}+\frac{(2\beta-1)^{2}}{\beta}\lambda_{1}^{2}\lambda_{1}^{f}-(10\beta^{2}-8\beta+1)\lambda_{1}^{2}+(-2\beta^{2}+3\beta-1)\lambda_{1}\lambda_{1}^{f}\right.$
$\displaystyle\left.-\frac{\beta(2\beta-1)}{2}\lambda_{1}\lambda_{2}+\beta(2\beta-1)^{2}\lambda_{1}+\frac{\beta(2\beta-1)}{2}\lambda_{1}^{f}+\frac{\beta^{2}(\beta-1)}{2}\lambda_{2}-\frac{\beta^{2}(4\beta-1)}{2}\right]a_{0}$
$\displaystyle+\left[-\beta^{2}\lambda_{1}^{2}-\frac{(\beta+3)(2\beta-1)}{2}\lambda_{1}\lambda_{1}^{f}-\frac{(6\beta^{2}-4\beta+1)}{2\beta}{\lambda_{1}^{f}}^{2}+\frac{\beta(4\beta^{2}+3\beta-3)}{2}\lambda_{1}\right.$
$\displaystyle\left.+(8\beta^{2}-5\beta+1)\lambda_{1}^{f}+\frac{\beta^{3}}{4}\lambda_{2}+\frac{\beta^{2}}{4}\lambda_{2}^{f}-\frac{\beta(17\beta^{2}-12\beta+2)}{4}\right]a_{1}$
$\displaystyle+\left[\frac{\beta(3\beta-1)}{2}\lambda_{1}+\frac{\beta(4\beta-1)}{4}\lambda_{1}^{f}-\frac{3\beta^{2}(3\beta-1)}{4}\right]a_{2}$
$\displaystyle+\left[-2\beta\lambda_{1}^{2}-(2\beta-1)\lambda_{1}\lambda_{1}^{f}+\beta(4\beta+1)\lambda_{1}+(2\beta-1)\lambda_{1}^{f}+\frac{\beta^{2}}{2}\lambda_{2}-\frac{\beta(9\beta+2)}{2}\right]s_{0,1}.$
Since $\lambda_{a_{0}}(zs_{0,1})=0$, taking the image under $\lambda_{a_{0}}$
of both sides, we get
$\displaystyle\lambda_{3}$ $\displaystyle=$
$\displaystyle\frac{8(\beta-1)}{\beta^{3}}\lambda_{1}^{3}-\frac{4(2\beta^{2}+\beta-1)}{\beta^{4}}\lambda_{1}^{2}\lambda_{1}^{f}-\frac{4(2\beta-1)^{2}}{\beta^{4}}\lambda_{1}{\lambda_{1}^{f}}^{2}$
$\displaystyle-\frac{4(4\beta^{2}-7\beta+1)}{\beta^{3}}\lambda_{1}^{2}+\frac{16(2\beta-1)}{\beta^{2}}\lambda_{1}{\lambda_{1}^{f}}+\frac{6}{\beta}\lambda_{1}\lambda_{2}+\frac{2(2\beta-1)}{\beta^{2}}\lambda_{1}^{f}\lambda_{2}$
$\displaystyle+\frac{(\beta^{2}-22\beta+4)}{\beta^{2}}\lambda_{1}-\frac{2(2\beta-1)}{\beta^{2}}\lambda_{1}^{f}-\frac{2(5\beta+1)}{\beta}\lambda_{2}+\frac{2(5\beta-1)}{\beta}.$
∎
We conclude this section with some relations in ${\overline{V}}$ and
${\overline{R}}$ which will be useful in the sequel for the classification of
the algebras. Set
$d_{1}:=s_{2,3}^{f}-s_{2,3},\>\>d_{2}:={d_{1}}^{\tau_{1}},\>\>\mbox{ and, for
}i\in\\{1,2\\},\>\>D_{i}:={d_{i}}^{\tau_{0}}-d_{i};$
$e:=u_{1}^{\tau_{1}}v_{3}^{\tau_{1}}\>\>\mbox{ and }\>E:=a_{2}e-2\beta e.$
###### Lemma 3.10.
The following identities hold in ${\overline{V}}$, for $i\in\\{1,2\\}$:
1. (1)
$d_{i}=0,\>\>D_{i}=0,\>\>E=0$;
2. (2)
there exists an element
$t(\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f})\in{\overline{R}}$
such that
$t(\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f})a_{0}+\frac{2}{\beta}(\lambda_{1}-\lambda_{1}^{f})\left[\beta\lambda_{1}+(\beta-1)(\lambda_{1}^{f}-\beta)\right](a_{-1}+a_{1}+\frac{2}{\beta}s_{0,1})=0.$
###### Proof.
Identities involving the $d_{i}$’s and $D_{i}$’s follow from Lemma 3.1. By the
fusion law, the product $u_{1}u_{2}$ is a $0$-eigenvector for ${\rm
ad}_{a_{0}}$ and the product $u_{1}^{\tau_{1}}v_{3}^{\tau_{1}}$ is a
$2\beta$-eigenvector for ${\rm ad}_{a_{2}}$. The last claim follows by an
explicit computation of the product $a_{0}(u_{1}u_{2})$, which gives the left
hand side of the equation. ∎
###### Lemma 3.11.
In the ring ${\overline{R}}$ the following holds:
1. (1)
$\lambda_{a_{0}}(a_{4}a_{4}-a_{4})=0$,
2. (2)
$\lambda_{a_{0}}(d_{1})=0$,
3. (3)
$\lambda_{a_{0}}(d_{2})=0$,
4. (4)
$\lambda_{a_{1}}(d_{1})=0$.
###### Proof.
The first equation follows from the fact that $a_{4}$ is an idempotent. The
remaining follow from Lemma 3.10. ∎
## 4. Strategy for the classification
By Remark 3.8, the four expressions on the left hand side of the identities in
Lemma 3.11 can be computed explicitly and produce respectively four
polynomials $p_{i}(x,y,z,t)$ for $i\in\\{1,\ldots,4\\}$ in $\hat{D}[x,y,z,t]$
(with $x,y,z,t$ indeterminates over $\hat{D}$) that simultaneously vanish at
the quadruple $(\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f})$.
We define also, for $i\in\\{1,2,3\\}$, $q_{i}(x,z):=p_{i}(x,x,z,z)$. The
polynomials $p_{i}$ are too long to be displayed here but can be computed
using [1] or [6], while the polynomials $q_{i}$ are the following:
$\displaystyle q_{1}(x,z)=$
$\displaystyle\frac{128}{\beta^{10}}(-384\beta^{5}+608\beta^{4}-376\beta^{3}+114\beta^{2}-17\beta+1)x^{7}$
$\displaystyle+\frac{64}{\beta^{10}}(4352\beta^{6}-6080\beta^{5}+2992\beta^{4}-516\beta^{3}-40\beta^{2}+23\beta-2)x^{6}$
$\displaystyle+\frac{64}{\beta^{8}}(64\beta^{4}-96\beta^{3}+52\beta^{2}-12\beta+1)x^{5}z$
$\displaystyle+\frac{16}{\beta^{9}}(-38720\beta^{6}+42912\beta^{5}-9252\beta^{4}-5928\beta^{3}+3477\beta^{2}-660\beta+44)x^{5}$
$\displaystyle+\frac{16}{\beta^{8}}(-3168\beta^{5}+4832\beta^{4}-2782\beta^{3}+747\beta^{2}-92\beta+4)x^{4}z$
$\displaystyle+\frac{32}{\beta^{5}}(8\beta^{2}-6\beta+1)x^{3}z^{2}$
$\displaystyle+\frac{8}{\beta^{8}}(84832\beta^{6}-48224\beta^{5}-50482\beta^{4}+55573\beta^{3}-20164\beta^{2}+3262\beta-200)x^{4}$
$\displaystyle+\frac{8}{\beta^{7}}(19792\beta^{5}-30292\beta^{4}+17700\beta^{3}-4917\beta^{2}+647\beta-32)x^{3}z$
$\displaystyle+\frac{16}{\beta^{5}}(-72\beta^{3}+62\beta^{2}-15\beta+1)x^{2}z^{2}$
$\displaystyle+\frac{8}{\beta^{7}}(-45888\beta^{6}-33584\beta^{5}+119184\beta^{4}-85132\beta^{3}+27054\beta^{2}-4089\beta+240)x^{3}$
$\displaystyle+\frac{4}{\beta^{6}}(-52880\beta^{5}+81156\beta^{4}-47828\beta^{3}+13527\beta^{2}-1838\beta+96)x^{2}z$
$\displaystyle+\frac{32}{\beta^{4}}(48\beta^{3}-44\beta^{2}+12\beta-1)xz^{2}+\frac{4}{\beta^{2}}(2\beta-1)z^{3}$
$\displaystyle+\frac{4}{\beta^{6}}(19648\beta^{6}+114384\beta^{5}-204648\beta^{4}+128262\beta^{3}-38411\beta^{2}+5598\beta-320)x^{2}$
$\displaystyle+\frac{8}{\beta^{5}}(16288\beta^{5}-25096\beta^{4}+14904\beta^{3}-4272\beta^{2}+593\beta-32)xz$
$\displaystyle+\frac{2}{\beta^{3}}(-322\beta^{3}+301\beta^{2}-86\beta+8)z^{2}$
$\displaystyle+\frac{8}{\beta^{5}}(-26112\beta^{5}+40040\beta^{4}-23878\beta^{3}+6959\beta^{2}-995\beta+56)x$
$\displaystyle+\frac{2}{\beta^{4}}(-15264\beta^{5}+23658\beta^{4}-14169\beta^{3}+4110\beta^{2}-580\beta+32)z$
$\displaystyle+\frac{4}{\beta^{4}}(7632\beta^{5}-11668\beta^{4}+6932\beta^{3}-2011\beta^{2}+286\beta-16),$
$\displaystyle q_{2}(x,z)=$
$\displaystyle\frac{-8(8\beta^{2}-6\beta+1)}{\beta^{4}}x^{4}+\frac{(160\beta^{3}-56\beta^{2}-28\beta+8)}{\beta^{4}}x^{3}+\frac{(8\beta-4)}{\beta^{2}}x^{2}z$
$\displaystyle-\frac{(96\beta^{3}+96\beta^{2}-112\beta+20)}{\beta^{3}}x^{2}-\frac{(44\beta^{2}-30\beta+4)}{\beta^{2}}xz$
$\displaystyle+\frac{(140\beta^{2}-102\beta+16)}{\beta^{2}}x+\frac{(36\beta^{2}-26\beta+4)}{\beta}z$
$\displaystyle-\frac{(36\beta^{2}-26\beta+4)}{\beta},$ $\displaystyle
q_{3}(x,z)=$
$\displaystyle\frac{(-128\beta^{3}+160\beta^{2}-64\beta+8)}{\beta^{5}}x^{4}+\frac{(64\beta^{2}-48\beta+8)}{\beta^{4}}x^{3}z$
$\displaystyle+\frac{(288\beta^{4}-280\beta^{3}+20\beta^{2}+40\beta-8)}{\beta^{5}}x^{3}+\frac{(-112\beta^{3}+48\beta^{2}+12\beta-4)}{\beta^{4}}x^{2}z$
$\displaystyle-\frac{(8\beta-4)}{\beta^{2}}xz^{2}+\frac{(-160\beta^{4}+8\beta^{3}+228\beta^{2}-136\beta+20)}{\beta^{4}}x^{2}$
$\displaystyle+\frac{(12\beta^{3}+70\beta^{2}-54\beta+8)}{\beta^{3}}xz+\frac{(8\beta-4)}{\beta}z^{2}+\frac{(148\beta^{3}-246\beta^{2}+118\beta-16)}{\beta^{3}}x$
$\displaystyle+\frac{(36\beta^{3}-70\beta^{2}+34\beta-4)}{\beta^{2}}z+\frac{(-36\beta^{3}+62\beta^{2}-30\beta+4)}{\beta^{2}}.$
We can now prove the following result that implies the first part of Theorem
1.1.
###### Theorem 4.1.
Let $V$ be a $2$-generated primitive axial algebra of Monster type
$(2\beta,\beta)$ over a ring $R$ in which $2$ is invertible and the elements
$0$, $1$, $\beta$ and $2\beta$ are pairwise distinct. Then, $V$ is
completely determined, up to homomorphic images, by a quadruple
$(x_{0},y_{0},z_{0},t_{0})\in R^{4}$, which is a solution of the system
(8) $\displaystyle\left\\{\begin{array}[]{rcl}p_{1}(x,y,z,t)&=&0\\\
p_{2}(x,y,z,t)&=&0\\\ p_{3}(x,y,z,t)&=&0\\\ p_{4}(x,y,z,t)&=&0.\\\
\end{array}\right.$
###### Proof.
Let $V$ be a primitive axial algebra of Monster type $(2\beta,\beta)$ over a
ring $R$ as in the statement, generated by the two axes $\bar{a}_{0}$ and
$\bar{a}_{1}$. Then, by [3, Corollary 3.8], $V$ is a homomorphic image of
${\overline{V}}\otimes_{\hat{D}}R$ and $R$ is a homomorphic image of
${\overline{R}}\otimes_{\hat{D}}R$. We identify the elements of $\hat{D}$ with
their images in $R$ so that the polynomials $p_{i}$ and $q_{i}$ are considered
as polynomials in $R[x,y,z,t]$ and $R[x,z]$, respectively. For each
$i\in{\mathbb{Z}}$, let $\bar{a}_{i}$ be the image of the axis $a_{i}$. By
Proposition 3.7 and Corollary 3.9, the algebra $V$ is completely determined,
up to homomorphic images, by the quadruple
$(\lambda_{\bar{a}_{0}}(\bar{a}_{1}),\lambda_{\bar{a}_{1}}(\bar{a}_{0}),\lambda_{\bar{a}_{0}}(\bar{a}_{2}),\lambda_{\bar{a}_{1}}(\bar{a}_{-1})).$
This quadruple is the homomorphic image in $R^{4}$ of the quadruple
$(\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f})$ defined in Section
3 and so it is a solution of the system (8) by the definition of the
polynomials $p_{i}$’s as claimed. ∎
If $V$ satisfies the hypothesis of Theorem 4.1 and, in addition, it is
symmetric, then
$\lambda_{\bar{a}_{0}}(\bar{a}_{1})=\lambda_{\bar{a}_{1}}(\bar{a}_{0})\>\>\mbox{
and
}\>\>\lambda_{\bar{a}_{0}}(\bar{a}_{2})=\lambda_{\bar{a}_{1}}(\bar{a}_{-1})$
and the pair
$(\lambda_{\bar{a}_{0}}(\bar{a}_{1}),\lambda_{\bar{a}_{0}}(\bar{a}_{2}))$ is a
solution of the system
(12) $\displaystyle\left\\{\begin{array}[]{rcl}q_{1}(x,z)&=&0\\\
q_{2}(x,z)&=&0\\\ q_{3}(x,z)&=&0.\\\ \end{array}\right.$
###### Lemma 4.2.
For any field ${\mathbb{F}}$, the resultant of the polynomials $q_{2}(x,z)$
and $q_{3}(x,z)$ with respect to $z$ is
$\gamma
x(x-1)(2x-\beta)^{3}[(16\beta-6)x+(-18\beta^{2}+\beta+2)][(8\beta-2)x+(-9\beta^{2}+2\beta)],$
where
$\gamma:=\frac{-16(2\beta-1)^{3}(4\beta-1)}{\beta^{10}}.$
###### Proof.
The resultant can be computed in the ring ${\mathbb{Z}}[\beta,\beta^{-1}][x]$
using [1]. ∎
We set
$\displaystyle\mathcal{S}_{0}$ $\displaystyle:=$
$\displaystyle\left\\{\left(\frac{\beta}{2},\frac{\beta}{2}\right),\>\>\left(\beta,0\right),\>\>\left(\beta,\frac{\beta}{2}\right)\right\\},$
$\displaystyle\mathcal{S}_{1}$ $\displaystyle:=$
$\displaystyle\mathcal{S}_{0}\cup\left\\{(1,1),\>\>(0,1),\>\>\left(\beta,1\right)\right\\},$
$\displaystyle\mathcal{S}_{2}$ $\displaystyle:=$
$\displaystyle\mathcal{S}_{1}\cup\left\\{\left(\frac{(18\beta^{2}-\beta-2)}{2(8\beta-3)},\frac{(48\beta^{4}-28\beta^{3}+7\beta-2)(3\beta-1)}{2\beta^{2}(8\beta-3)^{2}}\right)\right\\}\mbox{
if }\beta\neq\frac{3}{8},$ $\displaystyle\mathcal{S}_{3}$ $\displaystyle:=$
$\displaystyle\mathcal{S}_{2}\cup\left\\{\left(\frac{(9\beta^{2}-2\beta)}{2(4\beta-1)},\frac{(9\beta^{2}-2\beta)}{2(4\beta-1)}\right)\right\\}\mbox{
if }\beta\neq\frac{1}{4}.$
###### Lemma 4.3.
Let ${\mathbb{F}}$ be a field of characteristic other than $2$ and
$\beta\in{\mathbb{F}}$. Then the set of solutions of the system of
equations (12) is
1. (1)
$\mathcal{S}_{0}\cup\left\\{(\mu,1)\>|\>\mu\in{\mathbb{F}}\right\\}$, if
$\beta=\frac{1}{4}$;
2. (2)
$\mathcal{S}_{2}\cup\mathcal{S}_{3}$, if either
$\beta\in\\{\frac{1}{2},\frac{1}{3},\frac{2}{7}\\}$ or
$\beta\not\in\left\\{\frac{1}{4},\frac{3}{8}\right\\}$ and
(13)
$(16\beta^{4}-48\beta^{3}-51\beta^{2}+46\beta-8)(18\beta^{2}-\beta-2)(5\beta^{2}+\beta-1)(4\beta^{2}+2\beta-1)=0;$
3. (3)
$\mathcal{S}_{3}$ in all the remaining cases.
###### Proof.
Using [1], it is straightforward to check that the possible values $x_{0}$ for
a solution $(x_{0},z_{0})$ of the system (12) are given by Lemma 4.2 when
$\beta\neq\frac{1}{4}$ and can be computed directly when $\beta=\frac{1}{4}$.
Since $q_{2}(x,z)$ is linear in $z$, for every value of $x$ there is at most
one solution of the system. The elements of $\mathcal{S}_{2}$ are indeed
solutions. When $x_{0}=\frac{(18\beta^{2}-\beta-2)}{2(8\beta-3)}$, we solve
$q_{2}(x_{0},z)=0$ obtaining the given corresponding value for $z_{0}$. On the
other hand, the value of $q_{1}(x,z)$ computed in ${\mathbb{Z}}[\beta]$ on
this pair is a non-zero polynomial in $\beta$ which vanishes exactly when
either $\beta\in\\{\frac{1}{2},\frac{1}{3},\frac{2}{7}\\}$, or
$\beta\neq\frac{3}{8}$ and Equation (13) is satisfied. ∎
In order to classify primitive axial algebras of Monster type $(2\beta,\beta)$
over ${\mathbb{F}}$ generated by two axes $\bar{a}_{0}$ and $\bar{a}_{1}$ we
can proceed, as we did in [3], in the following way. We first solve
the system (12) and classify all symmetric algebras. Then we observe that the
even subalgebra $\langle\langle\bar{a}_{0},\bar{a}_{2}\rangle\rangle$ and the
odd subalgebra $\langle\langle\bar{a}_{-1},\bar{a}_{1}\rangle\rangle$ are
symmetric, since the automorphisms $\tau_{1}$ and $\tau_{0}$, respectively,
swap the generating axes. Hence, from the classification of the symmetric
case, we know all possible configurations for the subalgebras
$\langle\langle\bar{a}_{0},\bar{a}_{2}\rangle\rangle$ and
$\langle\langle\bar{a}_{-1},\bar{a}_{1}\rangle\rangle$ and from the relations
found in Section 3, we derive the structure of the entire algebra.
## 5. The symmetric case
In this and in the following section we let $V$ be a primitive axial algebra
of Monster type $(2\beta,\beta)$ over a field ${\mathbb{F}}$ of characteristic
other than $2$, generated by the two axes $\bar{a}_{0}$ and $\bar{a}_{1}$. By
[3, Corollary 3.8], $V$ is a homomorphic image of
${\overline{V}}\otimes_{\hat{D}}{\mathbb{F}}$. For every element
$v\in{\overline{V}}$, we denote by $\bar{v}$ its image in $V$. In particular
$\bar{a}_{0}$ and $\bar{a}_{1}$ are the images of $a_{0}$ and $a_{1}$ and all
the formulas obtained from the ones in Lemmas 3.2, 3.3, 3.4, 3.10 with
$\bar{a}_{i}$ and $\bar{s}_{r,j}$ in the place of $a_{i}$ and $s_{r,j}$,
respectively, hold in $V$. With an abuse of notation we identify the elements
of ${\overline{R}}$ with their images in ${\mathbb{F}}$, so that in particular
$\lambda_{1}=\lambda_{\bar{a}_{0}}(\bar{a}_{1})$,
$\lambda_{1}^{f}=\lambda_{\bar{a}_{1}}(\bar{a}_{0})$,
$\lambda_{2}=\lambda_{\bar{a}_{0}}(\bar{a}_{2})$, and
$\lambda_{2}^{f}=\lambda_{\bar{a}_{1}}(\bar{a}_{2})$.
We begin with a quick overview of the known $2$-generated primitive symmetric
algebras of Monster type $(2\beta,\beta)$. Among these, there are
1. (1)
the algebras of Jordan type $1A$, $2B$, $3C(\beta)$ and $3C(2\beta)$, the
last of which we denote by $2A(\beta)$ (see [9]).
2. (2)
the algebra $3A(2\beta,\beta)$ defined in [15].
3. (3)
the algebras $V_{5}(\beta)$ and $V_{8}(\beta)$ defined in [5].
4. (4)
the $3$-dimensional algebra $V_{3}(\beta)$ with basis
$(\bar{a}_{-1},\bar{a}_{0},\bar{a}_{1})$ and the multiplication defined as in
Table 2. Note that it coincides with the algebra
$III_{3}(\xi,\frac{1-3\xi^{2}}{3\xi-1},0)^{\times}$ defined by Yabe in [17],
with $\xi=2\beta$.
5. (5)
the $5$-dimensional algebra $Y_{5}(\beta)$ with basis
$(\bar{a}_{3},\bar{a}_{0},\bar{a}_{1},\bar{a}_{2},\bar{s})$ and multiplication
table Table 3. Note that it coincides with the algebra $IV_{2}(\xi,\beta,\mu)$
defined by Yabe [17], when $\beta=\frac{1-\xi^{2}}{2}$ and $\xi=2\beta$.
$\begin{array}[]{|c||c|c|c|}\hline\cr&\bar{a}_{-1}&\bar{a}_{0}&\bar{a}_{1}\\\
\hline\cr\hline\cr\bar{a}_{-1}&\bar{a}_{-1}&\frac{3}{2}\beta(\bar{a}_{0}+\bar{a}_{-1})+\frac{\beta}{2}\bar{a}_{1}&\frac{3}{2}\beta(\bar{a}_{-1}+\bar{a}_{1})+\frac{\beta}{2}\bar{a}_{0}\\\
\hline\cr\bar{a}_{0}&\frac{3}{2}\beta(\bar{a}_{0}+\bar{a}_{-1})+\frac{\beta}{2}\bar{a}_{1}&\bar{a}_{0}&\frac{3}{2}\beta(\bar{a}_{0}+\bar{a}_{1})+\frac{\beta}{2}\bar{a}_{-1}\\\
\hline\cr\bar{a}_{1}&\frac{3}{2}\beta(\bar{a}_{-1}+\bar{a}_{1})+\frac{\beta}{2}\bar{a}_{0}&\frac{3}{2}\beta(\bar{a}_{0}+\bar{a}_{1})+\frac{\beta}{2}\bar{a}_{-1}&\bar{a}_{1}\\\
\hline\cr\end{array}$
Table 2. Multiplication table for the algebra $V_{3}(\beta)$
It is immediate that the values of $(\lambda_{1},\lambda_{2})$ corresponding
to the trivial algebra $1A$ and to the algebra $2B$ are $(1,1)$ and $(0,1)$,
respectively. In the following lemma we list the key features of the algebra
$V_{3}(\beta)$.
###### Lemma 5.1.
Let ${\mathbb{F}}$ be a field of characteristic other than $2$ and
$\beta\in{\mathbb{F}}$ such that $18\beta^{2}-\beta-1=0$. The algebra
$V_{3}(\beta)$ is a $2$-generated symmetric Frobenius axial algebra satisfying
the fusion law $\mathcal{M}(2\beta,\beta)$ and such that for every
$i\in\\{-1,0,1\\}$, ${\rm ad}_{\bar{a}_{i}}$ has eigenvalues $1$, $2\beta$,
and $\beta$. In particular, it is not an axial algebra of Jordan type. Moreover,
$\lambda_{1}=\lambda_{2}=\frac{9\beta+1}{4}$. Furthermore
1. (1)
if $ch\>{\mathbb{F}}\neq 3$, the algebra $V_{3}(\beta)$ is primitive and
simple;
2. (2)
if $ch\>{\mathbb{F}}=3$, then $\beta=2$ and $V_{3}(\beta)$ is neither
primitive nor simple. It has a $2$-dimensional quotient over the ideal
${\mathbb{F}}(\bar{a}_{-1}+\bar{a}_{0}+\bar{a}_{1})$ isomorphic to
$3C(-1)^{\times}$ and a quotient isomorphic to $1A$ (over the ideal
$\langle\bar{a}_{0}-\bar{a}_{1},\bar{a}_{0}-\bar{a}_{-1}\rangle$).
###### Proof.
If $ch\>{\mathbb{F}}\neq 3$, then $\bar{a}_{1}-\bar{a}_{-1}$ and
$-\frac{3\beta+1}{8}\bar{a}_{0}+\frac{\beta}{2}(\bar{a}_{1}+\bar{a}_{-1})$ are
respectively a $\beta$\- and $2\beta$-eigenvector for ${\rm ad}_{\bar{a}_{0}}$
and
$\bar{a}_{1}=\frac{3\beta+1}{8\beta}\bar{a}_{0}+\frac{1}{\beta}\left(-\frac{3\beta+1}{8}\bar{a}_{0}+\frac{\beta}{2}(\bar{a}_{1}+\bar{a}_{-1})\right)+\frac{1}{2}(\bar{a}_{1}-\bar{a}_{-1}),$
whence $\lambda_{1}=\frac{3\beta+1}{8\beta}=\frac{9\beta+1}{4}$. The Frobenius
form is defined by $(\bar{a}_{i},\bar{a}_{i})=1$ and
$(\bar{a}_{i},\bar{a}_{j})=\lambda_{1}$, for $i,j\in\\{-1,0,1\\}$ and $i\neq
j$. The projection graph (see [10] for the definition) has $\bar{a}_{0}$ and
$\bar{a}_{1}$ as vertices and an edge between them since
$(\bar{a}_{0},\bar{a}_{1})\neq 0$. Thus it is connected and so by [10,
Corollary 4.15 and Corollary 4.11] every proper ideal of $V$ is contained in
the radical of the form. Since the determinant of the Gram matrix of the
Frobenius form with respect to the basis
$(\bar{a}_{-1},\bar{a}_{0},\bar{a}_{1})$ is always non-zero, the algebra is
simple.
If $ch\>{\mathbb{F}}=3$, then the condition $18\beta^{2}-\beta-1=0$ implies
$\beta=2$. Hence $2\beta=1$ and $\bar{a}_{0}$ and $\bar{a}_{1}$ are both
$1$-eigenvectors for ${\rm ad}_{\bar{a}_{0}}$. All the other properties are
easily verified. ∎
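The eigenvector claims of Lemma 5.1 and the non-degeneracy of the Frobenius form lend themselves to a machine check. The following SymPy sketch (ours, not part of the original text) reads the action of ${\rm ad}_{\bar{a}_{0}}$ off Table 2 and verifies that $\bar{a}_{1}-\bar{a}_{-1}$ is unconditionally a $\beta$-eigenvector, that the $2\beta$-eigenvector condition reduces exactly to $18\beta^{2}-\beta-1=0$, and that the Gram determinant can vanish only in characteristics $2$ or $3$:

```python
import sympy as sp

b = sp.symbols('beta')
rel = 18*b**2 - b - 1  # the condition defining V_3(beta)

# Matrix of ad_{a_0} on the basis (a_{-1}, a_0, a_1), read off from Table 2:
# a_0*a_{-1} = (3b/2)(a_0 + a_{-1}) + (b/2)*a_1,  a_0*a_0 = a_0,  etc.
ad = sp.Matrix([
    [3*b/2, 0, b/2],
    [3*b/2, 1, 3*b/2],
    [b/2,   0, 3*b/2],
])

# a_1 - a_{-1} is a beta-eigenvector, with no condition on beta
v_beta = sp.Matrix([-1, 0, 1])
assert sp.simplify(ad*v_beta - b*v_beta) == sp.zeros(3, 1)

# the 2*beta-eigenvector condition reduces exactly to 18b^2 - b - 1 = 0
v_2beta = sp.Matrix([b/2, -(3*b + 1)/8, b/2])
defect = sp.simplify(ad*v_2beta - 2*b*v_2beta)
assert defect[0] == 0 and defect[2] == 0
assert sp.simplify(8*defect[1] - rel) == 0

# Gram matrix of the Frobenius form, with lambda_1 = (9b+1)/4
l1 = (9*b + 1)/4
gram = sp.Matrix(3, 3, lambda i, j: 1 if i == j else l1)
assert sp.simplify(gram.det() - sp.Rational(27, 32)*(1 - 3*b)**2*(3*b + 1)) == 0

# resultant of the determinant's factors with rel is 432 = 2^4 * 3^3, so the
# form is non-degenerate whenever 18b^2 - b - 1 = 0 and char F is not 2 or 3
assert abs(sp.resultant((1 - 3*b)**2*(3*b + 1), rel, b)) == 432
```

The resultant being $432=2^{4}\cdot 3^{3}$ is why the case split in Lemma 5.1 occurs precisely at characteristic $3$.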
###### Lemma 5.2.
Let ${\mathbb{F}}$ be a field of characteristic other than $2$ and
$\beta\in{\mathbb{F}}\setminus\\{0,1,\frac{1}{2}\\}$. The algebra
$3A(2\beta,\beta)$ is a $2$-generated symmetric Frobenius axial algebra of
Monster type $(2\beta,\beta)$ with
$\lambda_{1}=\lambda_{2}=\frac{\beta(9\beta-2)}{2(4\beta-1)}$. It is simple
except when $(18\beta^{2}-\beta-1)(9\beta^{2}-10\beta+2)(5\beta-1)=0$, in
which case one of the following holds
1. (1)
$\beta=\frac{1}{5}$, $ch\>{\mathbb{F}}\neq 3$, and there is a unique quotient
of maximal dimension which is isomorphic to $3C(\beta)$;
2. (2)
$18\beta^{2}-\beta-1=0$, $ch\>{\mathbb{F}}\neq 3$, and there is a unique non
trivial quotient which is isomorphic to $V_{3}(\beta)$;
3. (3)
$9\beta^{2}-10\beta+2=0$, $ch\>{\mathbb{F}}\neq 3$, and there is a unique non
trivial quotient which is isomorphic to $1A$;
4. (4)
$ch\>{\mathbb{F}}=3$, $\beta=-1$ and there are four non trivial quotients
isomorphic respectively to $3C(-1),3C(-1)^{\times},V_{3}(-1),1A$ (see [9,
(3.4)] for the definition of $3C(-1)^{\times}$).
###### Proof.
Let $V$ be the algebra $3A(2\beta,\beta)$. Then $V$ has a Frobenius form and
the projection graph (see [10] for the definition) has $\bar{a}_{0}$ and
$\bar{a}_{1}$ as vertices and an edge between them since
$(\bar{a}_{0},\bar{a}_{1})\neq 0$. Thus it is connected and so by [10,
Corollary 4.15 and Corollary 4.11] every proper ideal of $V$ is contained in
the radical of the form. The Gram matrix of the Frobenius form with respect to
the basis $(\bar{a}_{0},\bar{a}_{1},\bar{a}_{2},\bar{s}_{0,1})$ can be
computed easily and has determinant
$-\frac{\beta(9\beta^{2}-10\beta+2)^{3}(18\beta^{2}-\beta-1)(5\beta-1)}{16(4\beta-1)^{5}}.$
Suppose first $ch\>{\mathbb{F}}\neq 3$. When $\beta=\frac{1}{5}$ we see that
the radical is generated by the vector
$\frac{\beta}{2}(\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2})+\bar{s}_{0,1}$ and hence
the quotient over the radical is isomorphic to the algebra $3C(\beta)$. If
$(18\beta^{2}-\beta-1)=0$, then the radical is generated by the vector
$-\frac{\beta}{2}(\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2})+\bar{s}_{0,1}$ and it
follows that the quotient over the radical is isomorphic to the algebra
$V_{3}(\beta)$, which by Lemma 5.1 is simple. Finally, if
$(9\beta^{2}-10\beta+2)=0$, then the radical is three dimensional, with
generators
$\bar{a}_{0}-\bar{a}_{2},\>\>\bar{a}_{0}-\bar{a}_{1},\>\>(2\beta-1)\bar{a}_{0}+\bar{s}_{0,1}.$
It is immediate to see that the quotient over the radical is the trivial
algebra $1A$. Using Lemma 2.4, it is straightforward to prove that the radical
is a minimal ideal.
Now assume $ch\>{\mathbb{F}}=3$. Then the radical of the form is three
dimensional, with generators
$\bar{a}_{0}-\bar{a}_{2},\>\>\bar{a}_{0}-\bar{a}_{1},\>\>\bar{s}_{0,1}$
and it is straightforward to see that it contains properly the non-zero ideals
${\mathbb{F}}(\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}+\bar{s}_{0,1})$,
${\mathbb{F}}(\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}-\bar{s}_{0,1})$, and
${\mathbb{F}}(\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2})$. Claim $(4)$ follows. ∎
###### Lemma 5.3.
Let ${\mathbb{F}}$ be a field of characteristic other than $2$ and
$\beta\in{\mathbb{F}}\setminus\\{0,1,\frac{1}{2}\\}$.
1. (1)
The algebra $2A(\beta)$ has $(\lambda_{1},\lambda_{2})=(\beta,1)$. It is simple
except when $\beta=-\frac{1}{2}$, in which case it has a non trivial quotient
of dimension $2$, denoted by $3C(-\frac{1}{2})^{\times}$ (see [9, (3.4)]).
2. (2)
The algebra $3C(\beta)$ has $\lambda_{1}=\lambda_{2}=\frac{\beta}{2}$. It is
simple except when $\beta=-1$, in which case it has a non trivial quotient of
dimension $2$, denoted by $3C(-1)^{\times}$ (see [9, (3.4)]).
3. (3)
The algebra $V_{5}(\beta)$ has $(\lambda_{1},\lambda_{2})=(\beta,0)$. It is
simple except when $\beta=-\frac{1}{4}$, in which case it has a unique non
trivial quotient over the ideal generated by
$\bar{a}_{-1}+\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}-\frac{2}{\beta}\bar{s}_{0,1}$,
which is a simple algebra of dimension $4$.
4. (4)
The algebra $V_{8}(\beta)$ has
$(\lambda_{1},\lambda_{2})=\left(\beta,\frac{\beta}{2}\right)$. It is simple
provided $\beta\not\in\\{2,-\frac{1}{7}\\}$.
> If $\beta=-\frac{1}{7}$, the algebra has a unique non trivial quotient over
> the ideal
> ${\mathbb{F}}(\bar{a}_{-2}+\bar{a}_{-1}+\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}+\bar{a}_{3}-\frac{1}{\beta}\bar{s}_{0,3}-\frac{2}{\beta}\bar{s}_{0,1})$
> which is a simple algebra of dimension $7$.
> If $\beta=2$, then it has a unique non trivial quotient which is isomorphic
> to $2A(\beta)$.
###### Proof.
(1) and (2) are proved in [9, (3.4)]. Let
$V\in\\{V_{5}(\beta),V_{8}(\beta)\\}$. Then, $V$ is a subalgebra of a Matsuo
algebra and so it is endowed with a Frobenius form. As in the proof of Lemma
5.2, every proper ideal of $V$ is contained in the radical of the form. When
$V=V_{5}(\beta)$, the Gram matrix, with respect to the basis
$\bar{a}_{-1},\bar{a}_{0},\bar{a}_{1},\bar{a}_{2},-\frac{2}{\beta}\bar{s}_{0,1}$,
is
$2\left(\begin{array}[]{ccccc}1&\beta&0&\beta&2\beta\\\
\beta&1&\beta&0&2\beta\\\ 0&\beta&1&\beta&2\beta\\\ \beta&0&\beta&1&2\beta\\\
2\beta&2\beta&2\beta&2\beta&2\end{array}\right).$
The determinant of this matrix is $64(2\beta-1)^{2}(4\beta+1)$ and so, if
$\beta\neq-\frac{1}{4}$, the algebra is simple. If $\beta=-\frac{1}{4}$, the
radical of the form is the $1$-dimensional ideal
${\mathbb{F}}(\bar{a}_{-1}+\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}-\frac{2}{\beta}\bar{s}_{0,1})$.
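These two facts about $V_{5}(\beta)$ can be recomputed directly from the displayed matrix (including the overall scalar $2$ it carries). A SymPy sketch, ours and not part of the original text: it factors the determinant and checks that at $\beta=-\frac{1}{4}$ the all-ones coordinate vector, i.e. $\bar{a}_{-1}+\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}-\frac{2}{\beta}\bar{s}_{0,1}$, lies in the kernel of the form.

```python
import sympy as sp

b = sp.symbols('beta')
# Gram matrix of V_5(beta) w.r.t. (a_{-1}, a_0, a_1, a_2, -(2/b)*s_{0,1}),
# including the overall factor 2 shown in the text
G = 2*sp.Matrix([
    [1,   b,   0,   b,   2*b],
    [b,   1,   b,   0,   2*b],
    [0,   b,   1,   b,   2*b],
    [b,   0,   b,   1,   2*b],
    [2*b, 2*b, 2*b, 2*b, 2],
])

# det(G) = 64*(2b-1)^2*(4b+1); for b != 1/2 it vanishes only at b = -1/4
assert sp.simplify(G.det() - 64*(2*b - 1)**2*(4*b + 1)) == 0

# at b = -1/4 the radical is spanned by the sum of the five basis vectors
w = sp.ones(5, 1)
assert G.subs(b, sp.Rational(-1, 4)) * w == sp.zeros(5, 1)
```

Since the radical of the form contains every proper ideal here, the one-dimensional kernel at $\beta=-\frac{1}{4}$ is exactly the ideal named in the statement.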
When $V=V_{8}(\beta)$, the Gram matrix, with respect to the basis
$\bar{a}_{0},\bar{a}_{2},\bar{a}_{-2},\bar{a}_{1},\bar{a}_{-1},\bar{a}_{3},-\frac{1}{\beta}\bar{s}_{0,3},-\frac{2}{\beta}\bar{s}_{0,1}$
given in [5, Table 8], is
$\left(\begin{array}[]{cccccccc}2&\beta&\beta&2\beta&2\beta&2\beta&2\beta&4\beta\\\
\beta&2&\beta&2\beta&2\beta&2\beta&2\beta&4\beta\\\
\beta&\beta&2&2\beta&2\beta&2\beta&2\beta&4\beta\\\
2\beta&2\beta&2\beta&2&\beta&\beta&2\beta&4\beta\\\
2\beta&2\beta&2\beta&\beta&2&\beta&2\beta&4\beta\\\
2\beta&2\beta&2\beta&\beta&\beta&2&2\beta&4\beta\\\
2\beta&2\beta&2\beta&2\beta&2\beta&2\beta&2&2\beta\\\
4\beta&4\beta&4\beta&4\beta&4\beta&4\beta&2\beta&4+2\beta\\\
\end{array}\right).$
The determinant of this matrix is $-16(2\beta-1)^{2}(\beta-2)^{5}(7\beta+1)$
and so, if $\beta\not\in\\{2,-\frac{1}{7}\\}$, the algebra is simple. If
$\beta=-\frac{1}{7}$, then the radical of the form is the $1$-dimensional
ideal
${\mathbb{F}}(\bar{a}_{-2}+\bar{a}_{-1}+\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}+\bar{a}_{3}-\frac{1}{\beta}\bar{s}_{0,3}-\frac{2}{\beta}\bar{s}_{0,1})$
and the result follows. Finally suppose $\beta=2$. Then the radical of the
form is $5$-dimensional with basis
$\bar{a}_{0}-\bar{a}_{2},\>\bar{a}_{0}-\bar{a}_{-2},\>\bar{a}_{1}-\bar{a}_{-1},\>\bar{a}_{1}-\bar{a}_{3},\>\bar{s}_{0,1}-\bar{s}_{0,3}$
and the quotient over the radical is an algebra of type $2A(\beta)$. Using
Lemma 2.4, it is straightforward to prove that the radical is a minimal ideal.
∎
$\begin{array}[]{|c||c|c|c|c|c|}\hline\cr&\bar{a}_{3}&\bar{a}_{0}&\bar{a}_{1}&\bar{a}_{2}&\bar{s}\\\
\hline\cr\hline\cr\bar{a}_{3}&\bar{a}_{3}&\bar{s}+\beta(\bar{a}_{3}+\bar{a}_{0})&4\beta\bar{s}-\frac{2\beta-1}{2}(\bar{a}_{0}+\bar{a}_{2})&\bar{s}+\beta(\bar{a}_{3}+\bar{a}_{2})&\beta\bar{s}+\frac{\beta^{2}}{2}(\bar{a}_{0}+\bar{a}_{2})\\\
\hline\cr\bar{a}_{0}&&\bar{a}_{0}&\bar{s}+\beta(\bar{a}_{0}+\bar{a}_{1})&4\beta\bar{s}-\frac{2\beta-1}{2}(\bar{a}_{1}+\bar{a}_{3})&\beta\bar{s}+\frac{\beta^{2}}{2}(\bar{a}_{1}+\bar{a}_{3})\\\
\hline\cr\bar{a}_{1}&&&\bar{a}_{1}&\bar{s}+\beta(\bar{a}_{1}+\bar{a}_{2})&\beta\bar{s}+\frac{\beta^{2}}{2}(\bar{a}_{0}+\bar{a}_{2})\\\
\hline\cr\bar{a}_{2}&&&&\bar{a}_{2}&\beta\bar{s}+\frac{\beta^{2}}{2}(\bar{a}_{1}+\bar{a}_{3})\\\
\hline\cr\bar{s}&&&&\beta\bar{s}+\frac{\beta^{2}}{2}(\bar{a}_{1}+\bar{a}_{3})&\frac{3\beta-1}{8}(4\bar{s}-\bar{a}_{3}-\bar{a}_{0}-\bar{a}_{1}-\bar{a}_{2})\\\
\hline\cr\end{array}$ Table 3. Multiplication table for the algebra
$Y_{5}(\beta)$
###### Lemma 5.4.
Let ${\mathbb{F}}$ be a field of characteristic other than $2$ and
$\beta\in{\mathbb{F}}$ such that $4\beta^{2}+2\beta-1=0$. The algebra
$Y_{5}(\beta)$ is a $2$-generated primitive symmetric Frobenius axial algebra
of Monster type $(2\beta,\beta)$, with $\lambda_{1}=\beta+\frac{1}{4}$ and
$\lambda_{2}=\beta$. It is simple, except when $ch\>{\mathbb{F}}=11$ and
$\beta=4$, in which case it has a unique non-trivial quotient over the ideal
${\mathbb{F}}(\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}+\bar{a}_{3}+3\bar{s})$.
###### Proof.
All the properties are easily verified. Note that the Frobenius form is
defined by $(\bar{a}_{i},\bar{a}_{i})=1$,
$(\bar{a}_{i},\bar{a}_{j})=\lambda_{1}$, for $i,j\in\\{0,1,2,3\\}$ such that
$i-j\equiv_{2}1$,
$(\bar{a}_{0},\bar{a}_{2})=(\bar{a}_{1},\bar{a}_{3})=\lambda_{2}$, and
$(\bar{a}_{i},\bar{s})=\frac{1}{4}\beta$ for $i\in\\{0,1,2,3\\}$. Then, the
Frobenius form has Gram matrix, with respect to the basis
$(\bar{a}_{0},\bar{a}_{1},\bar{a}_{2},\bar{a}_{3},\bar{s})$,
$\left(\begin{array}[]{ccccc}1&\beta+\frac{1}{4}&\beta&\beta+\frac{1}{4}&\frac{1}{4}\beta\\\
\beta+\frac{1}{4}&1&\beta+\frac{1}{4}&\beta&\frac{1}{4}\beta\\\
\beta&\beta+\frac{1}{4}&1&\beta+\frac{1}{4}&\frac{1}{4}\beta\\\
\beta+\frac{1}{4}&\beta&\beta+\frac{1}{4}&1&\frac{1}{4}\beta\\\
\frac{1}{4}\beta&\frac{1}{4}\beta&\frac{1}{4}\beta&\frac{1}{4}\beta&\frac{1}{8}\beta\end{array}\right)$
with determinant $\frac{1}{32}\beta(\beta-1)^{2}(1-2\beta)(2\beta+3)$. For
$\beta=-\frac{3}{2}$, condition $4\beta^{2}+2\beta-1=0$ implies
$ch\>{\mathbb{F}}=11$ and we get that the radical of the form is
one-dimensional, generated by
$\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}+\bar{a}_{3}+3\bar{s}$. The result follows
with the argument already used to prove Lemma 5.3. ∎
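The Gram determinant above can be recomputed symbolically from the stated values of the form. The following SymPy sketch (ours, not part of the original text) builds the matrix, factors its determinant, and checks that at $\beta=4$ in characteristic $11$ the vector $\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}+\bar{a}_{3}+3\bar{s}$ is annihilated by the form, in the sense that every entry of $Gw$ has numerator divisible by $11$:

```python
import sympy as sp

b = sp.symbols('beta')
l1, l2 = b + sp.Rational(1, 4), b  # lambda_1 and lambda_2 for Y_5(beta)

# Gram matrix of the Frobenius form w.r.t. (a_0, a_1, a_2, a_3, s)
G = sp.Matrix([
    [1,   l1,  l2,  l1,  b/4],
    [l1,  1,   l1,  l2,  b/4],
    [l2,  l1,  1,   l1,  b/4],
    [l1,  l2,  l1,  1,   b/4],
    [b/4, b/4, b/4, b/4, b/8],
])

# det(G) = (1/32) * b * (b-1)^2 * (1-2b) * (2b+3)
assert sp.simplify(G.det() - b*(b - 1)**2*(1 - 2*b)*(2*b + 3)/32) == 0

# in char 11, beta = -3/2 means beta = 4; the stated radical vector
# (coordinates (1,1,1,1,3)) is killed by the form modulo 11
v = G.subs(b, 4) * sp.Matrix([1, 1, 1, 1, 3])
assert all(sp.Rational(e).p % 11 == 0 for e in v)
```

Note that the factor $2\beta+3$ of the determinant is what vanishes at $\beta=4$ in characteristic $11$, since $2\cdot 4+3=11$.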
###### Lemma 5.5.
Let $V$ be a symmetric primitive axial algebra of Monster type
$(\alpha,\beta)$ over a field ${\mathbb{F}}$, generated by two axes
$\bar{a}_{0}$ and $\bar{a}_{1}$. Suppose there exists $A\in{\mathbb{F}}$ such that
$\bar{a}_{2}=\bar{a}_{-1}+A(\bar{a}_{0}-\bar{a}_{1}).$
Then, one of the following holds
1. (1)
$A=0$ and $\bar{a}_{2}=\bar{a}_{-1}$;
2. (2)
$A=1$, $\bar{a}_{1}=\bar{a}_{-1}$ and $V$ is spanned by
$\bar{a}_{0},\bar{a}_{1},\bar{s}_{0,1}$.
###### Proof.
If $A=0$, the claim is trivial. Suppose $A\neq 0$. By the symmetries of the
algebra, we get
$\bar{a}_{-2}=\bar{a}_{1}+A(\bar{a}_{0}-\bar{a}_{-1})\>\mbox{ and
}\bar{a}_{3}=\bar{a}_{0}+A(\bar{a}_{1}-\bar{a}_{2}).$
By substituting the expression for $\bar{a}_{2}$ in the definition of
$\bar{s}_{0,2}$ we get
$0=\bar{s}_{0,2}-\bar{s}_{0,2}^{\tau_{1}}=A(1-2\beta)(\bar{a}_{0}-\bar{a}_{2}).$
Then, since $\beta\neq 1/2$, we have $\bar{a}_{2}=\bar{a}_{0}$ and, by the
symmetry, $\bar{a}_{-1}=\bar{a}_{1}$. By Lemma 4.3 in [3], (2) holds. ∎
###### Proposition 5.6.
Let $V$ be a symmetric primitive axial algebra of Monster type
$(2\beta,\beta)$ over a field ${\mathbb{F}}$ of characteristic other than $2$,
generated by two axes $\bar{a}_{0}$ and $\bar{a}_{1}$. If $V$ has dimension at
most $3$, then either $V$ is an algebra of Jordan type $\beta$ or $2\beta$, or
$18\beta^{2}-\beta-1=0$ in ${\mathbb{F}}$ and $V$ is isomorphic to the algebra
$V_{3}(\beta)$.
###### Proof.
Since $V$ is symmetric, ${\rm ad}_{\bar{a}_{0}}$ and ${\rm ad}_{\bar{a}_{1}}$
have the same eigenvalues. Since $1$ is an eigenvalue for ${\rm
ad}_{\bar{a}_{0}}$, it follows from the fusion law that if $0$ is an
eigenvalue for ${\rm ad}_{\bar{a}_{0}}$, or $V$ has dimension at most $2$,
then $V$ is of Jordan type $\beta$ or $2\beta$. Let us assume that $0$ is not
an eigenvalue for ${\rm ad}_{\bar{a}_{0}}$. Then $\bar{u}_{1}=0$ (recall the
definition of $u_{1}$ in Section 2) and we get
$\bar{s}_{0,1}=[\lambda_{1}(1-2\beta)-\beta]\bar{a}_{0}+\frac{\beta}{2}(\bar{a}_{1}+\bar{a}_{-1}).$
Since we have also $\bar{u}_{1}^{f}=0$ we deduce
$\bar{a}_{2}=\bar{a}_{-1}+\left[\frac{2}{\beta}(\lambda_{1}(1-2\beta)-\beta)-1\right](\bar{a}_{0}-\bar{a}_{1}).$
Thus we can apply Lemma 5.5. If claim (2) holds, then $V$ has dimension
at most $2$ and we are done. Suppose claim (1) holds, that is
$\frac{2}{\beta}(\lambda_{1}(1-2\beta)-\beta)-1=0$. Then
$\bar{s}_{0,1}=\frac{\beta}{2}(\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{-1})\>\mbox{
and
}\bar{a}_{0}\bar{a}_{1}=\frac{3}{2}\beta(\bar{a}_{0}+\bar{a}_{1})+\frac{\beta}{2}\bar{a}_{-1},$
whence we get that $V$ satisfies the multiplication given in Table 2 and so it
is isomorphic to a quotient of $V_{3}(\beta)$. Since by hypothesis
$\beta\not\in\\{1,\frac{1}{2}\\}$, by Lemma 5.1, $V_{3}(\beta)$ is simple and
$V\cong V_{3}(\beta)$. The vector
$v:=3\beta\bar{a}_{0}+(2\beta-1)(\bar{a}_{-1}+\bar{a}_{1})$ is a
$2\beta$-eigenvector for ${\rm ad}_{\bar{a}_{0}}$ and, in order to satisfy the
fusion law (in particular $v\cdot v$ must be a $1$-eigenvector for ${\rm
ad}_{\bar{a}_{0}}$), $\beta$ must be such that $18\beta^{2}-\beta-1=0$. ∎
###### Theorem 5.7.
Let $V$ be a primitive symmetric axial algebra of Monster type
$(2\beta,\beta)$ over a field ${\mathbb{F}}$ of characteristic other than $2$,
generated by two axes $\bar{a}_{0}$ and $\bar{a}_{1}$. Then, one of the
following holds:
1. (1)
$V$ is an algebra of Jordan type $\beta$ or $2\beta$;
2. (2)
$18\beta^{2}-\beta-1=0$ in ${\mathbb{F}}$ and $V$ is an algebra of type
$V_{3}(\beta)$;
3. (3)
$V$ is an algebra of type $3A(2\beta,\beta)$;
4. (4)
$V$ is an algebra of type $V_{5}(\beta)$;
5. (5)
$V$ is an algebra of type $V_{8}(\beta)$;
6. (6)
$4\beta^{2}+2\beta-1=0$ in ${\mathbb{F}}$ and $V$ is an algebra of type
$Y_{5}(\beta)$;
7. (7)
$\beta=-\frac{1}{4}$ and $V$ is isomorphic to the quotient of $V_{5}(\beta)$
over the one-dimensional ideal generated by
$\bar{a}_{-1}+\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}-\frac{2}{\beta}\bar{s}_{0,1}$;
8. (8)
$\beta=-\frac{1}{7}$ and $V$ is isomorphic to the quotient of $V_{8}(\beta)$
over the one-dimensional ideal generated by
$\bar{a}_{-2}+\bar{a}_{-1}+\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}+\bar{a}_{3}-\frac{2}{\beta}\bar{s}_{0,1}-\frac{1}{\beta}\bar{s}_{0,3}$;
9. (9)
$ch\>{\mathbb{F}}=11$, $\beta=4$ and $V$ is isomorphic to the quotient of
$Y_{5}(\beta)$ over the one-dimensional ideal generated by
$\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}+\bar{a}_{3}+\frac{1}{\beta}\bar{s}_{0,1}$.
###### Proof.
By the remark after Theorem 4.1, $V$ is determined, up to homomorphic images,
by the pair $(\lambda_{1},\lambda_{2})$, which must be a solution of (12). By
[3, Corollary 3.8] and Proposition 3.7, $V$ is spanned over ${\mathbb{F}}$ by the
elements $\bar{a}_{-2}$, $\bar{a}_{-1}$, $\bar{a}_{0}$,
$\bar{a}_{1}$, $\bar{a}_{2}$, $\bar{a}_{3}$, $\bar{s}_{0,1}$, and
$\bar{s}_{0,3}$.
Assume first $\lambda_{1}=\beta$. Then, by Lemma 3.6, we get
$\bar{s}_{1,3}=\bar{s}_{2,3}=\bar{s}_{0,3}$. If
$(\lambda_{1},\lambda_{2})=(\beta,\frac{\beta}{2})$, we see that the algebra
satisfies the multiplication table of the algebra $V_{8}(\beta)$. Hence $V$ is
isomorphic to a quotient of $V_{8}(\beta)$ and by Lemma 5.3 we get that either
(5) or (8) holds. Assume $\lambda_{2}\neq\frac{\beta}{2}$. We compute
$\bar{E}=\frac{(2\beta-1)(2\lambda_{2}-\beta)}{4}\left[\bar{s}_{0,3}+\beta(\bar{s}_{0,1}-\bar{a}_{-1}+\bar{a}_{3})\right],$
hence, since $(2\beta-1)(2\lambda_{2}-\beta)\neq 0$, we get
$\bar{s}_{0,3}=\beta(\bar{a}_{-1}-\bar{a}_{3}-\bar{s}_{0,1})$. Then, from the
identity $\bar{s}_{0,3}-\bar{s}_{0,3}^{\tau_{1}}=0$ we get
$\bar{a}_{3}=\bar{a}_{-1}$, and so $\bar{s}_{0,1}=\bar{s}_{0,3}$ and
$\bar{a}_{-2}=\bar{a}_{2}$. Hence the dimension is at most $5$. If
$(\lambda_{1},\lambda_{2})=(\beta,0)$ we see that $V$ satisfies the
multiplication table of $V_{5}(\beta)$ and either (4) or (7) holds. Finally,
if $(\lambda_{1},\lambda_{2})=(\beta,1)$, then $Z(\beta,\beta)=0$ and so by
Lemma 3.3 and Equation (1) we get $\bar{a}_{0}\bar{a}_{2}=\bar{a}_{0}$, that
is $\bar{a}_{0}$ is a $1$-eigenvector for ${\rm ad}_{\bar{a}_{2}}$. By
primitivity, this implies $\bar{a}_{2}=\bar{a}_{0}$. Consequently, we have
$\bar{a}_{-1}=\bar{a}_{2}^{f}=\bar{a}_{0}^{f}=\bar{a}_{1}$ and from the
multiplication table we see that $V$ is a quotient of the algebra $2A(\beta)$.
Thus $V$ is an axial algebra of Jordan type $2\beta$.
Now assume $\lambda_{1}\neq\beta$. We have
$\bar{D}_{1}=\frac{-2(2\beta-1)(\lambda_{1}-\beta)}{\beta^{2}}\left[(\beta-1)(\bar{a}_{-2}-\bar{a}_{2})+\left(\frac{2\lambda_{1}(4\beta-1)(2\lambda_{1}-3\beta)}{\beta^{2}}+10\beta-3-2\lambda_{2}\right)(\bar{a}_{-1}-\bar{a}_{1})\right].$
By Lemma 3.10, $\bar{D}_{1}=0$. Since $\lambda_{1}\neq\beta$ and
$\beta\not\in\\{1,\frac{1}{2}\\}$, the coefficient of $\bar{a}_{-2}$ in
$\bar{D}_{1}$ is nonzero and we get
(14)
$\bar{a}_{-2}=\bar{a}_{2}+\frac{1}{(\beta-1)}\left[\frac{2\lambda_{1}(4\beta-1)(2\lambda_{1}-3\beta)}{\beta^{2}}+10\beta-3-2\lambda_{2}\right](\bar{a}_{1}-\bar{a}_{-1}).$
Since $V$ is symmetric, the map $f$ swapping $\bar{a}_{0}$ and $\bar{a}_{1}$
is an algebra automorphism and so
(15)
$\bar{a}_{3}=\bar{a}_{-1}+\frac{1}{(\beta-1)}\left[\frac{2\lambda_{1}(4\beta-1)(2\lambda_{1}-3\beta)}{\beta^{2}}+10\beta-3-2\lambda_{2}\right](\bar{a}_{0}-\bar{a}_{2}).$
It follows also
$\bar{s}_{0,3}\in\langle\bar{a}_{-1},\bar{a}_{0},\bar{a}_{1},\bar{a}_{2},\bar{s}_{0,1}\rangle$
and hence
$V=\langle\bar{a}_{-1},\bar{a}_{0},\bar{a}_{1},\bar{a}_{2},\bar{s}_{0,1}\rangle$.
Moreover, equation $\bar{d}_{1}=0$ of Lemma 3.10 becomes
(16) $B(\bar{a}_{-1}-\bar{a}_{2})+C(\bar{a}_{0}-\bar{a}_{1})=0$
with $B$ and $C$ in ${\mathbb{F}}$.
Assume $\beta\neq\frac{1}{4}$ and
$(\lambda_{1},\lambda_{2})=\left(\frac{(9\beta^{2}-2\beta)}{2(4\beta-1)},\frac{(9\beta^{2}-2\beta)}{2(4\beta-1)}\right)$
(note that $\lambda_{1}\neq\beta$ since $\beta\neq 0$). Then the identities
$\bar{a}_{-2}^{2}=\bar{a}_{-2}$ and $\bar{a}_{3}^{2}=\bar{a}_{3}$ give the
equations
$\frac{2\beta(18\beta-5)}{(4\beta-1)}(\bar{a}_{2}-\bar{a}_{-1})=0\mbox{ and
}\frac{2(18\beta^{2}-9\beta+1)}{(4\beta-1)}(\bar{a}_{2}-\bar{a}_{-1})=0.$
Since, in any field ${\mathbb{F}}$, the two polynomials $(18\beta-5)$ and
$(18\beta^{2}-9\beta+1)$ have no common roots, we have
$\bar{a}_{2}=\bar{a}_{-1}$, whence $\bar{a}_{-2}=\bar{a}_{1}$, and it is
straightforward to see that $V$ satisfies the multiplication table of the
algebra $3A(2\beta,\beta)$. Hence the result follows from Lemma 5.2.
Now assume
$(\lambda_{1},\lambda_{2})=\left(\frac{\beta}{2},\frac{\beta}{2}\right)$. Then
we have
$B=\frac{(\beta-1)(4\beta-1)}{2\beta}\mbox{ and }C=0.$
Moreover, the identity $\bar{a}_{-2}^{2}=\bar{a}_{-2}$ gives the equation
$\frac{(\beta^{2}+2\beta-1)}{\beta}(\bar{a}_{2}-\bar{a}_{-1})=0.$
Suppose $B\neq 0$, or $(\beta^{2}+2\beta-1)\neq 0$: we get
$\bar{a}_{-1}=\bar{a}_{2}$ and consequently $\bar{s}_{0,2}=\bar{s}_{0,1}$.
From the identity $\bar{s}_{0,2}-\bar{s}_{0,1}=0$ we get
$\frac{(5\beta-1)}{\beta}\left[\bar{s}_{0,1}+\frac{\beta}{2}(\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{-1})\right]=0.$
If $\beta\neq\frac{1}{5}$, it follows that the dimension is at most $3$ and in
fact we get a quotient of the algebra $3C(\beta)$ which is an algebra of
Jordan type $\beta$. If $\beta=1/5$, we have
$\frac{\beta}{2}=\frac{(9\beta^{2}-2\beta)}{2(4\beta-1)}$ and $V$ is a
quotient of the algebra $3A(2\beta,\beta)$. Finally, if
$B=0=(\beta^{2}+2\beta-1)$, then we get $ch\>{\mathbb{F}}=7$ and $\beta=2$. In
this case, by Equation (14), we have
$\bar{a}_{-2}=\bar{a}_{2}+\bar{a}_{1}-\bar{a}_{-1}$. Computing the vectors
$\bar{u}_{2}$ and $\bar{v}_{2}$ with Lemma 2.3, we get $\bar{v}_{2}=0$ and
hence
$0=\bar{a}_{2}-\lambda_{2}\bar{a}_{0}-\bar{u}_{2}-\bar{w}_{2}=2(\bar{a}_{1}-\bar{a}_{-1})$.
Thus, $\bar{a}_{-1}=\bar{a}_{1}$ and $\bar{a}_{2}=\bar{a}_{0}$, $V$ has
dimension at most $3$ and the result follows from Proposition 5.6.
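Two of the numerical coincidences in this paragraph can be verified with exact arithmetic (a sanity check, not part of the proof):

```python
from fractions import Fraction

# 1. For beta = 1/5, beta/2 equals (9*beta^2 - 2*beta)/(2*(4*beta - 1)),
#    so V falls into the 3A(2*beta, beta) family.
b = Fraction(1, 5)
assert b / 2 == (9 * b**2 - 2 * b) / (2 * (4 * b - 1))

# 2. B = (beta - 1)(4*beta - 1)/(2*beta) and beta^2 + 2*beta - 1 vanish
#    together only for beta = 2 in characteristic 7 (beta in F_p \ {0, 1}).
def degenerate(p):
    return [b for b in range(2, p)
            if ((b - 1) * (4 * b - 1)) % p == 0 and (b * b + 2 * b - 1) % p == 0]

hits = {p: r for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31] if (r := degenerate(p))}
print(hits)  # {7: [2]}
```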
Assume $(\lambda_{1},\lambda_{2})=(0,1)$. Then
$B=C=\frac{-(9\beta-4)(4\beta^{2}+2\beta-1)}{\beta(\beta-1)},$
the identity $\bar{a}_{3}^{2}=\bar{a}_{3}$ gives the equation
(17)
$\frac{5(2\beta-1)}{\beta(\beta-1)^{2}}\left[-2B(\bar{a}_{1}-\bar{a}_{-1})+(18\beta^{3}+27\beta^{2}-24\beta+4)(\bar{a}_{0}-\bar{a}_{2})\right]=0,$
and the identity $\bar{D}_{2}=0$ of Lemma 3.10 gives
$(4\beta-1)B(\bar{a}_{-1}-\bar{a}_{1})=0.$
Thus, if $(4\beta-1)B\neq 0$, we get $\bar{a}_{-1}=\bar{a}_{1}$. Then,
$\bar{a}_{2}=\bar{a}_{-1}^{f}=\bar{a}_{1}^{f}=\bar{a}_{0}$ and hence
$\bar{s}_{0,2}=(1-2\beta)\bar{a}_{0}$. Further, from the first equation of
Lemma 3.3, we also get $\bar{s}_{0,1}=-\beta(\bar{a}_{0}+\bar{a}_{1})$, whence
$\bar{a}_{0}\bar{a}_{1}=0$. Thus $V$ is isomorphic to the algebra $2B$.
Suppose $B=0$. If $(18\beta^{3}+27\beta^{2}-24\beta+4)\neq 0$, from Equation
(17), we get $\bar{a}_{0}=\bar{a}_{2}$ and, by the symmetry, also
$\bar{a}_{1}=\bar{a}_{-1}$. Thus the dimension is at most $3$ and we conclude
by Proposition 5.6. If $(18\beta^{3}+27\beta^{2}-24\beta+4)=0$, then it
follows $ch\>{\mathbb{F}}=31$ and $\beta=9$. In this case
$\bar{E}-\bar{E}^{\tau_{0}}\neq 0$: a contradiction to Lemma 3.10. Now assume
$\beta=\frac{1}{4}$ and $B\neq 0$ (thus, in particular, $ch\>{\mathbb{F}}\neq
3$). From Equation (14) we get
$\bar{a}_{-2}=\bar{a}_{2}+\frac{10}{3}(\bar{a}_{1}-\bar{a}_{-1})$. Further,
$\bar{v}_{2}=0$ and we get
$0=\bar{a}_{2}-\lambda_{2}\bar{a}_{0}-\bar{u}_{2}-\bar{w}_{2}=\frac{10}{3}(\bar{a}_{1}-\bar{a}_{-1})$.
Thus, if $ch\>{\mathbb{F}}\neq 5$, we get $\bar{a}_{-1}=\bar{a}_{1}$ and
$\bar{a}_{2}=\bar{a}_{0}$; $V$ has dimension at most $3$ and the result
follows from Proposition 5.6. If $ch\>{\mathbb{F}}=5$, then $B=-1$, Equation
(16) gives $\bar{a}_{2}=\bar{a}_{-1}-(\bar{a}_{0}-\bar{a}_{1})$ and so Lemma
5.5 implies that $V$ has dimension at most $3$ and we conclude as above.
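The coincidence $ch\>{\mathbb{F}}=31$, $\beta=9$ in the case $B=0$ with $18\beta^{3}+27\beta^{2}-24\beta+4=0$ can be confirmed by a finite search (a sanity check, not part of the proof): the resultants of the two factors of $B$ with the cubic are $180$ and $-620$, so no characteristic beyond $31$ needs testing.

```python
# B = 0 means (9b - 4)(4b^2 + 2b - 1) = 0.  Search odd prime fields for
# b in F_p \ {0, 1} where this and 18b^3 + 27b^2 - 24b + 4 = 0 both hold.

def special(p):
    return [b for b in range(2, p)
            if ((9 * b - 4) * (4 * b * b + 2 * b - 1)) % p == 0
            and (18 * b**3 + 27 * b**2 - 24 * b + 4) % p == 0]

ODD_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
hits = {p: r for p in ODD_PRIMES if (r := special(p))}
print(hits)  # {31: [9]}
```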
Suppose $(\lambda_{1},\lambda_{2})=(1,1)$. Then the identity $\bar{D}_{2}=0$
becomes
$\frac{2(4\beta^{3}-14\beta^{2}+11\beta-2)(9\beta^{2}-10\beta+2)(\beta-2)(2\beta-1)}{\beta^{6}}(\bar{a}_{-1}-\bar{a}_{1})=0.$
Hence, if
$(4\beta^{3}-14\beta^{2}+11\beta-2)(9\beta^{2}-10\beta+2)(\beta-2)\neq 0$, we
get $\bar{a}_{-1}=\bar{a}_{1}$, by the symmetry, $\bar{a}_{2}=\bar{a}_{0}$ and
$V$ has dimension at most $3$ and we conclude by Proposition 5.6. So let us
assume
(18) $(4\beta^{3}-14\beta^{2}+11\beta-2)(9\beta^{2}-10\beta+2)(\beta-2)=0.$
Further, we have
$B=-\frac{1}{\beta^{3}}\left(36\beta^{4}-126\beta^{3}+127\beta^{2}-48\beta+6\right)$
and
$C=-\frac{1}{\beta^{4}}(4\beta^{2}+2\beta-1)(9\beta^{2}-10\beta+2)(\beta-2).$
If $(4\beta^{3}-14\beta^{2}+11\beta-2)=0$, then $B$ and $C$ are not zero and
by Lemma 5.5, $V$ has dimension at most $3$ and we conclude by Proposition
5.6. Moreover, if $\beta=2$ then
$(\lambda_{1},\lambda_{2})=(\frac{\beta}{2},\frac{\beta}{2})$ and we reduce to
the previous case. Hence we assume
$(9\beta^{2}-10\beta+2)=0.$
Then $C=0$. If $B\neq 0$, from Equation (16) we get $\bar{a}_{2}=\bar{a}_{-1}$
and by symmetry, $\bar{a}_{1}=\bar{a}_{-2}$. Then, Equation (14) becomes
$\bar{a}_{1}-\bar{a}_{-1}=\frac{1}{(\beta-1)}\left[\frac{2(4\beta-1)(2-3\beta)}{\beta^{2}}+10\beta-5\right](\bar{a}_{-1}-\bar{a}_{1}),$
whence, either $\bar{a}_{1}=\bar{a}_{-1}$ and again $V$ has dimension at most
$3$ and we conclude by Proposition 5.6, or
$\frac{1}{(\beta-1)}\left[\frac{2(4\beta-1)(2-3\beta)}{\beta^{2}}+10\beta-5\right]=-1$
that is
$\frac{(11\beta^{3}-30\beta^{2}+22\beta-4)}{\beta^{2}(\beta-1)}=0.$
It is now straightforward to check that the two polynomials
$(9\beta^{2}-10\beta+2)$ and $(11\beta^{3}-30\beta^{2}+22\beta-4)$ have no
common roots in any field of characteristic other than 2 and we get a
contradiction. We are now left to consider the case when $B=0$. Then the two
polynomials $(9\beta^{2}-10\beta+2)$ and $B$ have a common root if and only if
$ch\>{\mathbb{F}}\in\\{3,7\\}$ and the common root is $\beta=-1$. In both
cases we get
$\lambda_{1}=\lambda_{2}=1=\frac{9\beta^{2}-2\beta}{2(4\beta-1)}$ and
so $V$ is a quotient of the algebra $3A(2\beta,\beta)$ as already proved.
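The "straightforward check" on the pair $9\beta^{2}-10\beta+2$ and $11\beta^{3}-30\beta^{2}+22\beta-4$ amounts to a resultant computation. A minimal sketch (not part of the proof): the resultant is the determinant of their $5\times 5$ Sylvester matrix and equals $16=2^{4}$, so the two polynomials can share a root only in characteristic $2$, which is excluded by hypothesis.

```python
from fractions import Fraction

def det(m):
    """Exact determinant by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    n, d = len(m), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

# Sylvester matrix of f = 9x^2 - 10x + 2 (three shifted rows) and
# g = 11x^3 - 30x^2 + 22x - 4 (two shifted rows).
sylvester = [
    [9, -10, 2, 0, 0],
    [0, 9, -10, 2, 0],
    [0, 0, 9, -10, 2],
    [11, -30, 22, -4, 0],
    [0, 11, -30, 22, -4],
]
resultant = det(sylvester)
print(resultant)  # 16
```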
Suppose now $\beta=\frac{1}{4}$ and $(\lambda_{1},\lambda_{2})=(\mu,1)$, for
$\mu\in{\mathbb{F}}\setminus\\{0,1,\beta\\}$. In particular, note that
$ch\>{\mathbb{F}}\neq 3$, since $\beta\neq 1$. Then Equations (14) and (15)
become
$\bar{a}_{-2}=\bar{a}_{2}+\frac{10}{3}(\bar{a}_{-1}-\bar{a}_{1})\>\mbox{ and
}\bar{a}_{3}=\bar{a}_{-1}+\frac{10}{3}(\bar{a}_{2}-\bar{a}_{0})$
and the identity $\bar{D}_{2}=0$ gives the relation
$\frac{28}{3}(4\mu-1)(\bar{a}_{1}-\bar{a}_{-1})=0.$
Since we are assuming $\lambda_{1}\neq\beta$, $(4\mu-1)\neq 0$. So, if
$ch\>{\mathbb{F}}\neq 7$, we get $\bar{a}_{1}=\bar{a}_{-1}$. Hence $V$ has
dimension at most $3$ and we conclude by Proposition 5.6. If
$ch\>{\mathbb{F}}=7$, then $\beta=2$ and we conclude with the same argument
used above for the case
$(\lambda_{1},\lambda_{2})=(\frac{\beta}{2},\frac{\beta}{2})$,
$ch\>{\mathbb{F}}=7$, $\beta=2$.
Finally assume that $\beta\neq\frac{3}{8}$,
$\beta\in\\{\frac{1}{2},\frac{1}{3},\frac{2}{7}\\}$ or $\beta$ satisfies
Equation (18), and
$(\lambda_{1},\lambda_{2})=\left(\frac{(18\beta^{2}-\beta-2)}{2(8\beta-3)},\frac{(48\beta^{4}-28\beta^{3}+7\beta-2)(3\beta-1)}{2\beta^{2}(8\beta-3)^{2}}\right)$.
First of all, note that the only common solution of Equation (18) and of the
equation $2\beta^{2}+5\beta-2=0$ is $\beta=1$ when $ch\>{\mathbb{F}}=5$. Since
we are assuming $\beta\neq 1$, under our hypotheses $2\beta^{2}+5\beta-2$ is
always non-zero in ${\mathbb{F}}$: in particular this implies
$\lambda_{1}\neq\beta$. The identity $\bar{D}_{2}=0$ becomes
$\frac{(2\beta^{2}+5\beta-2)(4\beta^{2}+2\beta-1)(5\beta^{2}+\beta-1)(7\beta-2)(2\beta-1)}{\beta^{5}(8\beta-3)^{2}(\beta-1)}(\bar{a}_{1}-\bar{a}_{-1})=0.$
If $(4\beta^{2}+2\beta-1)(5\beta^{2}+\beta-1)(7\beta-2)\neq 0$ in
${\mathbb{F}}$, we deduce $\bar{a}_{1}=\bar{a}_{-1}$ and, by symmetry,
$\bar{a}_{0}=\bar{a}_{2}$. Thus $V$ has dimension at most $3$ and we conclude
by Proposition 5.6. If $(5\beta^{2}+\beta-1)=0$, then
$\lambda_{1}=\lambda_{2}=\frac{\beta}{2}$ and we are in a case considered
above. If $(4\beta^{2}+2\beta-1)=0$, we get $\lambda_{1}=\beta+\frac{1}{4}$
and $\lambda_{2}=\beta$. From the identity $\bar{E}=0$ we get
$\bar{a}_{3}=\bar{a}_{-1}$, consequently $\bar{a}_{-2}=\bar{a}_{2}$, and it
follows that $V$ satisfies the multiplication table of the algebra
$Y_{5}(\beta)$. Hence, by Lemma 5.4, either (6) or (9) holds. Finally assume
$ch\>{\mathbb{F}}\neq 7$ and $\beta=\frac{2}{7}$. Then
$\lambda_{1}=\lambda_{2}=\frac{4}{7}$ and, since $\beta\neq 1$, we have also
$ch\>{\mathbb{F}}\neq 5$. From identity $\bar{d}_{1}=0$, we get
$\bar{a}_{-2}=\bar{a}_{2}$ and consequently $\bar{a}_{3}=\bar{a}_{-1}$. If
further $ch\>{\mathbb{F}}\neq 3$, identity $\bar{E}=0$ implies
$\bar{a}_{-1}=\bar{a}_{2}$. Then $\bar{a}_{0}=\bar{a}_{1}$, but
$\lambda_{1}=\frac{4}{7}\neq 1$, a contradiction. Hence $ch\>{\mathbb{F}}=3$,
$(\lambda_{1},\lambda_{2})=(\frac{\beta}{2},\frac{\beta}{2})$ and we conclude
as in the case considered above. ∎
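As a final sanity check (not part of the proof), the value $\lambda_{1}=\lambda_{2}=\frac{4}{7}$ claimed for $\beta=\frac{2}{7}$ follows from the solution family displayed at the start of this last case, and exact rational arithmetic confirms it:

```python
from fractions import Fraction

b = Fraction(2, 7)
l1 = (18 * b**2 - b - 2) / (2 * (8 * b - 3))
l2 = ((48 * b**4 - 28 * b**3 + 7 * b - 2) * (3 * b - 1)
      / (2 * b**2 * (8 * b - 3) ** 2))
print(l1, l2)  # 4/7 4/7
```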
## 6\. The non-symmetric case
This section is devoted to the proof of Theorem 1.2. Let $V$ be generated by
the two axes $\bar{a}_{0}$ and $\bar{a}_{1}$. We set
$V_{e}:=\langle\langle\bar{a}_{0},\bar{a}_{2}\rangle\rangle$ and
$V_{o}:=\langle\langle\bar{a}_{-1},\bar{a}_{1}\rangle\rangle$. As noticed at
the end of Section 4, $V_{e}$ and $V_{o}$ are symmetric, since the
automorphisms $\tau_{1}$ and $\tau_{0}$, respectively, swap their generating
axes. Hence, from Theorem 5.7 we get the possible values for the pair
$(\lambda_{2},\lambda_{2}^{f})$ and the structure of those subalgebras. Note
that $V$ is symmetric if and only if $\lambda_{1}=\lambda_{1}^{f}$ and
$\lambda_{2}=\lambda_{2}^{f}$ in ${\mathbb{F}}$.
###### Lemma 6.1.
If $V$ has dimension $8$, then $V\cong V_{8}(\beta)$.
###### Proof.
Suppose $V$ has dimension $8$. Then the generators
$\bar{a}_{-2},\bar{a}_{-1},\bar{a}_{0},\bar{a}_{1},\bar{a}_{2},\bar{a}_{3},\bar{s}_{0,1}$,
and $\bar{s}_{0,3}$ are linearly independent. We express $\bar{d}_{1}$ defined
before Lemma 3.10 as a linear combination of the basis vectors and since
$\bar{d}_{1}=0$ in $V$, every coefficient must be zero. In particular,
considering the coefficients of $\bar{a}_{-2},\bar{a}_{3}$, and
$\bar{s}_{0,1}$ we get, respectively, the equations
$\displaystyle\frac{(6\beta-1)}{\beta}\lambda_{1}+\frac{(2\beta-2)}{\beta}\lambda_{1}^{f}-(8\beta-3)=0$
$\displaystyle\frac{(2\beta-2)}{\beta}\lambda_{1}+\frac{(6\beta-1)}{\beta}\lambda_{1}^{f}-(8\beta-3)=0$
$\displaystyle\frac{8}{\beta^{2}}(\lambda_{1}^{2}-{\lambda_{1}^{f}}^{2})-\frac{24}{\beta}(\lambda_{1}-\lambda_{1}^{f})-\frac{2}{\beta}(\lambda_{2}-\lambda_{2}^{f})=0$
whose common solutions have $\lambda_{1}=\lambda_{1}^{f}$ and
$\lambda_{2}=\lambda_{2}^{f}$. Hence $V$ is symmetric and the result follows
from Theorem 5.7. ∎
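A numerical sketch of the first two coefficient equations in the proof (assuming the generic case $(8\beta-3)(4\beta+1)\neq 0$, so that the symmetric $2\times 2$ system is invertible; the degenerate values must be handled by the remaining coefficients of $\bar{d}_{1}$):

```python
from fractions import Fraction

# The first two equations form the symmetric linear system
#   (6b-1)/b * x + (2b-2)/b * y = 8b - 3
#   (2b-2)/b * x + (6b-1)/b * y = 8b - 3
# in x = lambda_1, y = lambda_1^f.  Its determinant is
# (8b-3)(4b+1)/b^2; when nonzero, the unique solution is x = y = b,
# which in particular forces lambda_1 = lambda_1^f.

def solve(b):
    a = (6 * b - 1) / b          # diagonal entry
    c = (2 * b - 2) / b          # off-diagonal entry
    rhs = 8 * b - 3
    x = rhs * (a - c) / (a * a - c * c)   # Cramer's rule; y = x by symmetry
    return x

for b in [Fraction(2), Fraction(1, 3), Fraction(-5, 4)]:
    assert solve(b) == b
print("ok")
```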
Lemma 6.1 and Theorem 5.7 imply that if $V$ is non-symmetric, then the
dimensions of the even subalgebra and of the odd subalgebra are both at most
$5$. As a consequence, from Lemma 3.4 we derive some relations between the odd
and even subalgebras.
###### Lemma 6.2.
If $\bar{a}_{-3}=\bar{a}_{5}$, then
$Z(\lambda_{1}^{f},\lambda_{1})(\bar{a}_{0}-\bar{a}_{2}+\bar{a}_{-2}-\bar{a}_{4})=\frac{1}{\beta}\left[2Z(\lambda_{1}^{f},\lambda_{1})\left(\lambda_{1}^{f}-\beta\right)-\left(\lambda_{2}^{f}-\beta\right)\right](\bar{a}_{-1}-\bar{a}_{3}).$
If $\bar{a}_{-4}=\bar{a}_{4}$, then
$Z(\lambda_{1},\lambda_{1}^{f})(\bar{a}_{-1}-\bar{a}_{3}+\bar{a}_{-3}-\bar{a}_{1})=\frac{1}{\beta}\left[2Z(\lambda_{1},\lambda_{1}^{f})\left(\lambda_{1}-\beta\right)-\left(\lambda_{2}-\beta\right)\right](\bar{a}_{-2}-\bar{a}_{2}).$
If $\bar{a}_{-3}=\bar{a}_{3}$, then
$2Z(\lambda_{1}^{f},\lambda_{1})(\bar{a}_{2}-\bar{a}_{-2})=\frac{1}{\beta}\left[4Z(\lambda_{1}^{f},\lambda_{1})\left(\lambda_{1}^{f}-\beta\right)-\left(2\lambda_{2}^{f}-\beta\right)\right](\bar{a}_{1}-\bar{a}_{-1}).$
If $\bar{a}_{-2}=\bar{a}_{4}$, then
$2Z(\lambda_{1},\lambda_{1}^{f})(\bar{a}_{-1}-\bar{a}_{3})=\frac{1}{\beta}\left[4Z(\lambda_{1},\lambda_{1}^{f})\left(\lambda_{1}-\beta\right)-\left(2\lambda_{2}-\beta\right)\right](\bar{a}_{0}-\bar{a}_{2}).$
If $\bar{a}_{-2}=\bar{a}_{4}$ and $\bar{a}_{3}=\bar{a}_{-1}$, then
$2\left[\beta Z(\lambda_{1},\lambda_{1}^{f})-2(\lambda_{1}^{f}-\beta)\right](\bar{a}_{2}-\bar{a}_{-2})=\frac{1}{\beta^{2}}\left[4(2\beta-1)\beta Z(\lambda_{1},\lambda_{1}^{f})(\lambda_{1}^{f}-\beta)-8\beta(\lambda_{1}-\lambda_{1}^{f})\left(2\beta-\lambda_{1}-\lambda_{1}^{f}\right)+2(2\beta-1)^{2}(\lambda_{1}^{f}-\beta)-2\beta^{2}(\lambda_{2}-\lambda_{2}^{f})\right](\bar{a}_{1}-\bar{a}_{-1}).$
###### Proof.
By applying the maps $\tau_{0}$ and $\tau_{1}$ to the formulas of Lemma 3.4 we
find similar formulas for $\bar{a}_{-4}$ and $\bar{a}_{5}$. The first four
formulas of the statement follow. To prove the last one, note that if
$\bar{a}_{-2}=\bar{a}_{4}$ and $\bar{a}_{3}=\bar{a}_{-1}$, then
$\bar{s}_{0,3}=\bar{s}_{0,1}$. Thus $\bar{s}_{1,3}-\bar{s}_{2,3}=0$ and the
claim follows from Lemma 3.6. ∎
In view of the above relations, it is important to investigate some
subalgebras of the symmetric algebras.
###### Lemma 6.3.
1. (1)
If $V$ is one of the algebras $V_{5}(\beta)$, $Y_{5}(\beta)$, or their
four-dimensional quotients, then,
$V=\langle\langle\bar{a}_{-1}-\bar{a}_{1},\bar{a}_{0}-\bar{a}_{2}\rangle\rangle.$
2. (2)
If $V$ is one of the algebras $3C(\beta)$, $3C(-1)^{\times}$,
$3A(2\beta,\beta)$, or $V_{3}(\beta)$, then,
$V=\langle\langle\bar{a}_{-1}-\bar{a}_{1},\bar{a}_{0}-\bar{a}_{1}\rangle\rangle,$
unless $V=3C(2)$ and $ch\>{\mathbb{F}}=5$, or $ch\>{\mathbb{F}}=3$ and
$V=3C(-1)^{\times}$ or $V=V_{3}(-1)$.
###### Proof.
To prove (1), set
$W:=\langle\langle\bar{a}_{-1}-\bar{a}_{1},\bar{a}_{0}-\bar{a}_{2}\rangle\rangle$.
Let $V$ be equal to $V_{5}(\beta)$ or its four-dimensional quotient when
$\beta=-\frac{1}{4}$. Then $\bar{a}_{-1}\bar{a}_{1}=0=\bar{a}_{0}\bar{a}_{2}$.
Thus $(\bar{a}_{-1}-\bar{a}_{1})^{2}=\bar{a}_{-1}+\bar{a}_{1}$ and
$(\bar{a}_{0}-\bar{a}_{2})^{2}=\bar{a}_{0}+\bar{a}_{2}$, whence we get that
$\bar{a}_{0},\bar{a}_{1}$ belong to $W$ and the claim follows.
Let $V$ be equal to $Y_{5}(\beta)$. Then, $W$ contains the vectors
$\displaystyle(\bar{a}_{-1}-\bar{a}_{1})^{2}=\bar{a}_{-1}+\bar{a}_{1}+(2\beta-1)(\bar{a}_{0}+\bar{a}_{2})-8\beta\bar{s}_{0,1},$
$\displaystyle(\bar{a}_{0}-\bar{a}_{2})^{2}=\bar{a}_{0}+\bar{a}_{2}+(2\beta-1)(\bar{a}_{-1}+\bar{a}_{1})-8\beta\bar{s}_{0,1},$
$\displaystyle\bar{a}_{2}-\bar{a}_{1}=\frac{1}{4(\beta-1)}\left\\{(\bar{a}_{-1}-\bar{a}_{1})^{2}-(\bar{a}_{0}-\bar{a}_{2})^{2}-2(\beta-1)[(\bar{a}_{-1}-\bar{a}_{1})+(\bar{a}_{0}-\bar{a}_{2})]\right\\},$
and
$(\bar{a}_{2}-\bar{a}_{1})^{2}=\bar{a}_{2}+\bar{a}_{1}-2\beta(\bar{a}_{2}+\bar{a}_{1})-2\bar{s}_{0,1}.$
It is straightforward to check that the five vectors
$\bar{a}_{-1}-\bar{a}_{1}$, $\bar{a}_{0}-\bar{a}_{2}$,
$(\bar{a}_{-1}-\bar{a}_{1})^{2}$, $\bar{a}_{2}-\bar{a}_{1}$, and
$(\bar{a}_{2}-\bar{a}_{1})^{2}$ generate the entire algebra $Y_{5}(\beta)$.
Suppose now $ch\>{\mathbb{F}}=11$ and $V$ is the quotient of $Y_{5}(4)$ over
the ideal
${\mathbb{F}}(\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}+\bar{a}_{3}+3\bar{s})$. By
proceeding as above, we get that $W$ contains the vectors
$\bar{a}_{-1}-\bar{a}_{1},\>\>\bar{a}_{0}-\bar{a}_{2},\>\>\bar{a}_{0}-\bar{a}_{1},\>\mbox{
and }\>\bar{a}_{1}-\bar{a}_{2},$
which, again, are generators of the entire algebra $V$.
To prove (2), set
$W:=\langle\langle\bar{a}_{-1}-\bar{a}_{1},\bar{a}_{0}-\bar{a}_{1}\rangle\rangle$.
Let $V$ be equal to $3C(\beta)$. Then, $W$ contains also
$(\bar{a}_{0}-\bar{a}_{1})^{2}=(1-\beta)(\bar{a}_{0}+\bar{a}_{1})+\beta\bar{a}_{-1}$
and the three vectors generate $V$, unless $\beta=2$ and $ch\>{\mathbb{F}}=5$.
If $V=3A(2\beta,\beta)$, then $W$ contains also the vectors
$(\bar{a}_{0}-\bar{a}_{1})^{2}=(1-2\beta)(\bar{a}_{0}+\bar{a}_{1})-2\bar{s}_{0,1}$
and
$(\bar{a}_{0}-\bar{a}_{1})(\bar{a}_{-1}-\bar{a}_{1})=(1-2\beta)\bar{a}_{1}-\bar{s}_{0,1}$.
Thus we get four vectors that generate $V$. If $V=V_{3}(\beta)$, then $W$
contains
$(\bar{a}_{0}-\bar{a}_{1})^{2}=(1-3\beta)(\bar{a}_{0}+\bar{a}_{1})-\beta\bar{a}_{-1}$
and we get that $W$ contains three vectors that generate $V$, unless $\beta=2$
and $ch\>{\mathbb{F}}=3$. Finally, if $V=3C(-1)^{\times}$, then
$\bar{a}_{0}-\bar{a}_{1}$ and
$(\bar{a}_{0}-\bar{a}_{1})^{2}=3(\bar{a}_{0}+\bar{a}_{1})$ generate $V$,
unless $ch\>{\mathbb{F}}=3$. ∎
We start now to consider the possible configurations.
###### Lemma 6.4.
If $V$ is non-symmetric, then $V_{e}$ and $V_{o}$ are not isomorphic to
$V_{5}(\beta)$, $Y_{5}(\beta)$ or to one of their four-dimensional quotients.
###### Proof.
It is enough to show that the claim holds for $V_{e}$. Let us assume by
contradiction that $V_{e}$ is isomorphic to a quotient of $V_{5}(\beta)$ or to
a quotient of $Y_{5}(\beta)$. Then, the vectors
$\bar{a}_{-2},\bar{a}_{0},\bar{a}_{2},\bar{a}_{4}$ are linearly independent
and, from the first formula in Lemma 3.4, we get that
$Z(\lambda_{1},\lambda_{1}^{f})\neq 0$ and $\bar{a}_{3}\neq\bar{a}_{-1}$.
Moreover $\bar{a}_{-4}=\bar{a}_{4}$ and so the second formula of Lemma 6.2 applies.
Suppose
$2Z(\lambda_{1},\lambda_{1}^{f})\left(\lambda_{1}-\beta\right)-\left(\lambda_{2}-\beta\right)\neq
0$. Then
$(\bar{a}_{-2}-\bar{a}_{2})\in\langle\langle\bar{a}_{-1},\bar{a}_{1}\rangle\rangle$.
Since $V_{o}$ is invariant under $\tau_{1}$, it contains also
$\bar{a}_{0}-\bar{a}_{4}$. Thus by Lemma 6.3, $V=V_{o}$, a contradiction.
Assume now
$2Z(\lambda_{1},\lambda_{1}^{f})\left(\lambda_{1}-\beta\right)-\left(\lambda_{2}-\beta\right)=0$.
Then, by the second formula of Lemma 6.2, we have
$\bar{a}_{1}-\bar{a}_{-1}+\bar{a}_{3}-\bar{a}_{-3}=0$. Since
$\bar{a}_{3}\neq\bar{a}_{-1}$, $V_{o}$ must be isomorphic to one of the
following: $3A(2\beta,\beta)$, $3C(\beta)$, $V_{3}(\beta)$ or
$3C(-1)^{\times}$. Then $\bar{a}_{-3}=\bar{a}_{3}$. Thus we get
$\bar{a}_{-1}-\bar{a}_{1}=0$ and $\bar{a}_{3}=\bar{a}_{-1}$, a contradiction.∎
###### Lemma 6.5.
If $V$ is non-symmetric, then $V_{e}$ and $V_{o}$ are not isomorphic to $1A$.
###### Proof.
Clearly, it is enough to show that the claim holds for $V_{e}$. Let us assume
by contradiction that $V_{e}$ is isomorphic to $1A$, so that $\lambda_{2}=1$
and, for every $i\in{\mathbb{Z}}$, $\bar{a}_{0}=\bar{a}_{2i}$ and
$\bar{s}_{0,2}=(1-2\beta)\bar{a}_{0}$. Then, the Miyamoto involution
$\tau_{1}$ is the identity on $V$ and the $\beta$-eigenspace for ${\rm
ad}_{\bar{a}_{1}}$ is trivial. In particular, $\bar{a}_{3}=\bar{a}_{-1}$, and
$V_{o}$ is isomorphic either to $1A$, or to $2A(\beta)$, or to $2B$. Then,
$\bar{s}_{0,3}=\bar{s}_{0,1}$ and $V$ is generated by
$\bar{a}_{-1},\bar{a}_{0},\bar{a}_{1}$, and $\bar{s}_{0,1}$. Suppose
$V_{o}\cong 1A$. Then, $V$ is generated by $\bar{a}_{0},\bar{a}_{1}$, and
$\bar{s}_{0,1}$. It follows that $\tau_{0}$ is the identity on $V$ and the
$\beta$-eigenspace for ${\rm ad}_{\bar{a}_{0}}$ is trivial. This implies that
$V$ is an axial algebra of Jordan type, a contradiction since $V$ is non-
symmetric.
Now suppose $V_{o}\cong 2A$ or $2B$. If $Z(\lambda_{1},\lambda_{1}^{f})\neq
0$, from the formula for $\bar{s}_{0,2}$ in Lemma 3.3 we get
$\bar{s}_{0,1}=(\lambda_{1}-\beta)\bar{a}_{0}-\frac{\beta}{2}(\bar{a}_{1}+\bar{a}_{-1})$,
whence
$\bar{a}_{0}\bar{a}_{1}=\lambda_{1}\bar{a}_{0}+\frac{\beta}{2}(\bar{a}_{1}-\bar{a}_{-1})$,
a contradiction since $\bar{a}_{0}\bar{a}_{1}$ is $\tau_{0}$-invariant. Hence,
$Z(\lambda_{1},\lambda_{1}^{f})=0$ and so
$Z(\lambda_{1}^{f},\lambda_{1})=\frac{(4\beta-1)}{2\beta^{3}}(\lambda_{1}^{f}-\beta)$.
From the formula for $\bar{a}_{-3}$ in Lemma 3.4 and the fifth formula of Lemma 6.2 we get
that the quadruple $(\lambda_{1},\lambda_{1}^{f},1,\lambda_{2}^{f})$ must be a
solution of the system
(24) $\left\\{\begin{array}[]{l}Z(\lambda_{1},\lambda_{1}^{f})=0\\\
(4\beta-1)(\lambda_{1}^{f}-\beta)^{2}=\beta^{3}\lambda_{2}^{f}\\\
-\frac{8}{\beta}(\lambda_{1}-\lambda_{1}^{f})\left(2\beta-\lambda_{1}-\lambda_{1}^{f}\right)+\frac{2(2\beta-1)^{2}}{\beta^{2}}(\lambda_{1}^{f}-\beta)-2(1-\lambda_{2}^{f})=0\\\
p_{2}(\lambda_{1},\lambda_{1}^{f},1,\lambda_{2}^{f})=0\\\
p_{4}(\lambda_{1},\lambda_{1}^{f},1,\lambda_{2}^{f})=0.\end{array}\right.$
When $\lambda_{2}^{f}=0$, the second equation yields that either
$\lambda_{1}^{f}=\beta$ or $\beta=\frac{1}{4}$. In the former case, we obtain
$\lambda_{1}=\lambda_{1}^{f}=\beta$ from the first equation and the third gives
$\lambda_{2}^{f}=1$: a contradiction. In the latter case $\beta=\frac{1}{4}$
the system (24) is equivalent to
$\left\\{\begin{array}[]{l}-\frac{1}{32}\lambda_{1}+\frac{7}{512}=0\\\
16\lambda_{1}=0\\\ -564\lambda_{1}+53=0\end{array}\right.$
which does not have any solution in any field ${\mathbb{F}}$. Finally, when
$\lambda_{2}^{f}=\beta$, again we get a contradiction since the system (24)
does not have any solution in any field ${\mathbb{F}}$ and for every $\beta$.
∎
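The claim that the $\beta=\frac{1}{4}$ system has no solution in any field can be double-checked by brute force (a sanity check, not part of the proof). Clearing denominators, the three equations become $-16\lambda_{1}+7=0$, $16\lambda_{1}=0$, and $-564\lambda_{1}+53=0$; in odd characteristic the second forces $\lambda_{1}=0$, and then the first and third would require the characteristic to divide both $7$ and $53$.

```python
# Search F_p for a common solution of the three cleared equations.
def solutions(p):
    return [l for l in range(p)
            if (-16 * l + 7) % p == 0
            and (16 * l) % p == 0
            and (-564 * l + 53) % p == 0]

for p in [3, 5, 7, 11, 13, 31, 53, 101]:
    assert solutions(p) == []
print("no solutions")
```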
###### Lemma 6.6.
Let $V$ be non-symmetric. If $\bar{a}_{4}=\bar{a}_{-2}$, then
$\bar{a}_{3}\neq\bar{a}_{-3}$ and vice versa.
###### Proof.
Let us assume by contradiction that $\bar{a}_{4}=\bar{a}_{-2}$ and
$\bar{a}_{-3}=\bar{a}_{3}$. Then, the third and fourth formulas of Lemma 6.2 apply.
If $Z(\lambda_{1},\lambda_{1}^{f})=0=Z(\lambda_{1}^{f},\lambda_{1})$, then it
follows $\lambda_{1}=\lambda_{1}^{f}$ and, by the third and fourth formulas of
Lemma 6.2, $\lambda_{2}=\lambda_{2}^{f}=\frac{\beta}{2}$. Thus $V$ is symmetric: a
contradiction. Therefore, without loss of generality, we may assume
$Z(\lambda_{1}^{f},\lambda_{1})\neq 0$. Then, the third formula of Lemma 6.2 implies
$\bar{a}_{2}-\bar{a}_{-2},\>\>\bar{a}_{0}-\bar{a}_{2}\in V_{o},$
and, since $\beta\neq\frac{1}{2}$ and $V\neq V_{o}$, Lemma 6.3 yields that
$ch\>{\mathbb{F}}=5,\>\>\beta=2,\>\mbox{ and }\>\>V_{e}\cong 3C(2).$
If also $Z(\lambda_{1},\lambda_{1}^{f})\neq 0$ or
$4Z(\lambda_{1}^{f},\lambda_{1})\left(\lambda_{1}^{f}-\beta\right)-\left(2\lambda_{2}^{f}-\beta\right)\neq
0$, from the fourth and third formulas of Lemma 6.2, respectively, we get that
$\bar{a}_{-1}-\bar{a}_{3}$ and $\bar{a}_{-1}-\bar{a}_{1}$ are contained in the
even subalgebra and so, by Lemma 6.3, we get $V_{o}\cong 3C(2)$. From the
formulas in Lemma 3.3 we find
$\bar{s}_{0,1}=(\lambda_{1}-\beta)\bar{a}_{0}-\beta(\bar{a}_{-1}+\bar{a}_{1})\>\mbox{
and
}\>\bar{s}_{0,1}=(\lambda_{1}^{f}-\beta)\bar{a}_{1}-\beta(\bar{a}_{0}+\bar{a}_{2}).$
Comparing the two expressions and using the invariance of $\bar{s}_{0,1}$
under $\tau_{0}$ and $\tau_{1}$, we get
$\lambda_{1}\bar{a}_{0}-\beta\bar{a}_{-1}=\lambda_{1}^{f}\bar{a}_{1}-\beta\bar{a}_{2},$
$(\lambda_{1}-\beta)(\bar{a}_{0}-\bar{a}_{2})=\beta(\bar{a}_{-1}-\bar{a}_{3}),$
$(\lambda_{1}^{f}-\beta)(\bar{a}_{1}-\bar{a}_{-1})=\beta(\bar{a}_{2}-\bar{a}_{-2}).$
From the above identities we can express $\bar{a}_{3},\bar{a}_{-2},$ and
$\bar{a}_{-1}$ as linear combinations of
$\bar{a}_{0},\bar{a}_{1},\bar{a}_{2}$. So $V$ has dimension at most $3$ and so
$V=V_{o}=V_{e}$, a contradiction.
Suppose finally $Z(\lambda_{1},\lambda_{1}^{f})=0\mbox{ and
}\>4Z(\lambda_{1}^{f},\lambda_{1})\left(\lambda_{1}^{f}-\beta\right)-\left(2\lambda_{2}^{f}-\beta\right)=0.$
Then, we have $\lambda_{1}^{f}=2\lambda_{1}+3$,
$\lambda_{2}^{f}=\lambda_{1}(\lambda_{1}+1)$, and
$\begin{array}[]{l}p_{2}(\lambda_{1},2\lambda_{1}+3,1,\lambda_{1}(\lambda_{1}+1))=\lambda_{1}(\lambda_{1}-2)\\\
p_{4}(\lambda_{1},2\lambda_{1}+3,1,\lambda_{1}(\lambda_{1}+1))=2\lambda_{1}^{2}-\lambda_{1}-1.\end{array}$
Hence, by Theorem 4.1, it must be $\lambda_{1}=2$ and we get a contradiction to
our initial assumption, since $Z(2\lambda_{1}+3,\lambda_{1})=2-\lambda_{1}=0$.
∎
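The final step of the lemma (assuming, from the reduction above, characteristic $5$) can be verified directly: the displayed values of $p_{2}$ and $p_{4}$ vanish simultaneously in ${\mathbb{F}}_{5}$ only at $\lambda_{1}=2$.

```python
p = 5
common = [l for l in range(p)
          if (l * (l - 2)) % p == 0 and (2 * l * l - l - 1) % p == 0]
print(common)  # [2]
# Then Z(2*l + 3, l) = 2 - l vanishes at l = 2, giving the contradiction.
assert (2 - common[0]) % p == 0
```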
###### Lemma 6.7.
If $V$ is non-symmetric, then $V_{e}$ and $V_{o}$ are not isomorphic to one of
the following: $3A(2\beta,\beta)$, $3C(\beta)$, $V_{3}(\beta)$, or their
quotients.
###### Proof.
Let us assume by contradiction that $V_{e}$ is isomorphic to one of the
algebras $3A(2\beta,\beta)$, $3C(\beta)$, $V_{3}(\beta)$, or their quotients.
By the previous lemmas, we may also assume that $V_{o}$ is isomorphic to $2A$
or $2B$. Then, $\bar{a}_{4}=\bar{a}_{-2}$, $\bar{a}_{3}=\bar{a}_{-1}$, and
$\bar{a}_{5}=\bar{a}_{-3}$. From the first and fourth formulas of Lemma 6.2 we get that the
quadruple $(\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f})$
satisfies the following conditions
(25) $\left\\{\begin{array}[]{l}Z(\lambda_{1}^{f},\lambda_{1})=0\\\
4Z(\lambda_{1},\lambda_{1}^{f})\left(\lambda_{1}-\beta\right)=\left(2\lambda_{2}-\beta\right).\end{array}\right.$
Since $Z(\lambda_{1}^{f},\lambda_{1})=0$, from the second formula of Lemma
3.3, we get
$\bar{s}_{1,2}=-\beta\bar{a}_{-1}+(\lambda_{2}^{f}-\beta)\bar{a}_{1},$
whence it follows that $V_{o}\cong 2B$ and $\lambda_{2}^{f}=0$. In particular,
$(\bar{a}_{1}-\bar{a}_{-1})^{2}=\bar{a}_{1}+\bar{a}_{-1}$ and so
$V_{o}=\langle\langle\bar{a}_{1}-\bar{a}_{-1}\rangle\rangle$. Let us now
consider the fifth formula of Lemma 6.2. If the coefficient of
$\bar{a}_{1}-\bar{a}_{-1}$ there is not $0$, then the above remark implies
$V=V_{e}$, a contradiction. Thus the coefficients of
$\bar{a}_{-2}-\bar{a}_{2}$ and $\bar{a}_{1}-\bar{a}_{-1}$ in that formula are
$0$. Then, by Theorem 4.1,
$(\lambda_{1},\lambda_{1}^{f},\lambda_{2},0)$ is a solution of the system:
(26) $\left\\{\begin{array}[]{l}Z(\lambda_{1}^{f},\lambda_{1})=0\\\
4Z(\lambda_{1},\lambda_{1}^{f})\left(\lambda_{1}-\beta\right)=\left(2\lambda_{2}-\beta\right)\\\
\beta Z(\lambda_{1},\lambda_{1}^{f})-2(\lambda_{1}^{f}-\beta)=0\\\
\left[\frac{8(2\beta-1)}{\beta^{2}}(\lambda_{1}^{f}-\beta)^{2}-\frac{8}{\beta}(\lambda_{1}-\lambda_{1}^{f})\left(2\beta-\lambda_{1}-\lambda_{1}^{f}\right)+\frac{2(2\beta-1)^{2}}{\beta^{2}}(\lambda_{1}^{f}-\beta)-2\lambda_{2}\right]=0\\\
p_{2}(\lambda_{1},\lambda_{1}^{f},\lambda_{2},0)=0\\\
p_{4}(\lambda_{1},\lambda_{1}^{f},\lambda_{2},0)=0.\end{array}\right.$
Now $Z(\lambda_{1}^{f},\lambda_{1})=0$ implies that
$Z(\lambda_{1},\lambda_{1}^{f})=\frac{4\beta-1}{2\beta^{3}}(\lambda_{1}-\beta)$,
whence the system in (26) is equivalent to
(27) $\left\\{\begin{array}[]{l}Z(\lambda_{1}^{f},\lambda_{1})=0\\\
\frac{2(4\beta-1)}{\beta^{3}}\left(\lambda_{1}-\beta\right)^{2}=\left(2\lambda_{2}-\beta\right)\\\
\frac{(4\beta^{2}+2\beta-1)}{2\beta^{2}}(\lambda_{1}^{f}-\beta)=0\\\
\left[\frac{8(2\beta-1)}{\beta^{2}}(\lambda_{1}^{f}-\beta)^{2}-\frac{8}{\beta}(\lambda_{1}-\lambda_{1}^{f})\left(2\beta-\lambda_{1}-\lambda_{1}^{f}\right)+\frac{2(2\beta-1)^{2}}{\beta^{2}}(\lambda_{1}^{f}-\beta)-2\lambda_{2}\right]=0\\\
p_{2}(\lambda_{1},\lambda_{1}^{f},\lambda_{2},0)=0\\\
p_{4}(\lambda_{1},\lambda_{1}^{f},\lambda_{2},0)=0\end{array}\right.$
If $\lambda_{2}=\frac{\beta}{2}$, then the system has the unique solution
$(\beta,\beta,\frac{\beta}{2},0)$, which is not a solution of Equation (8),
since $p_{2}(\beta,\beta,\frac{\beta}{2},0)=-\frac{\beta^{2}}{2}\neq 0$. This
is a contradiction to Theorem 4.1. Suppose
$\lambda_{2}=\frac{\beta(9\beta-2)}{2(4\beta-1)}$ or $18\beta^{2}-\beta-1=0$
and $\lambda_{2}=\frac{9\beta+1}{4}$, but $\lambda_{2}\neq\frac{\beta}{2}$.
Using the first equation we express $\lambda_{1}^{f}$ as a polynomial in
$\lambda_{1}$. Then, by the second equation $\lambda_{1}\neq\beta$ and so the
third equation implies $4\beta^{2}+2\beta-1=0$. We then compute the resultants
with respect to $\lambda_{1}$ between the fourth equation and the two last
equations and between the two last equations. The three resultants are
polynomial expressions in $\beta$ which vanish provided the System (26) has a
solution. Comparing these expressions, we obtain that the system has no
solution, in any field ${\mathbb{F}}$ and for any value of $\beta$. ∎
Proof of Theorem 1.2. Let $V$ be a non-symmetric $2$-generated primitive axial
algebra of Monster type $(2\beta,\beta)$. By the previous lemmas in this
section, we may assume that the even and the odd subalgebras are isomorphic to
either $2A$ or $2B$. Then, from Lemma 3.4 we get that
$(\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f})$ satisfies the
following equations
(28) $Z(\lambda_{1},\lambda_{1}^{f})(\lambda_{1}-\beta)=\frac{\lambda_{2}}{2}$
and
(29)
$Z(\lambda_{1}^{f},\lambda_{1})(\lambda_{1}^{f}-\beta)=\frac{\lambda_{2}^{f}}{2}.$
Suppose first that $\lambda_{2}=\lambda_{2}^{f}$. Then we get
$Z(\lambda_{1},\lambda_{1}^{f})(\lambda_{1}-\beta)=Z(\lambda_{1}^{f},\lambda_{1})(\lambda_{1}^{f}-\beta)$,
which is equivalent to
$(\lambda_{1}-\lambda_{1}^{f})(\lambda_{1}+\lambda_{1}^{f}-2\beta)=0$ and so
$\lambda_{1}+\lambda_{1}^{f}-2\beta=0$, since $\lambda_{1}\neq\lambda_{1}^{f}$
as the algebra is non-symmetric. Then, Equations (28) and (29) are equivalent
to
(30)
$\frac{1}{\beta^{2}}(\lambda_{1}-\beta)^{2}=\frac{\lambda_{2}}{2}\quad\mbox{and}\quad\frac{1}{\beta^{2}}(\lambda_{1}^{f}-\beta)^{2}=\frac{\lambda_{2}^{f}}{2}.$
If $\lambda_{2}=\lambda_{2}^{f}=0$, we get the solution $(\beta,\beta,0,0)$,
which corresponds to a symmetric algebra, a contradiction. Suppose
$\lambda_{2}=\lambda_{2}^{f}=\beta$. Then it is long but straightforward to
check that there is no quadruple $(\lambda_{1},\lambda_{1}^{f},\beta,\beta)$
which is a common solution of Equations (8) and (30).
Finally assume that $\lambda_{2}=\beta$ and $\lambda_{2}^{f}=0$. Then, by
Equation (29), either $Z(\lambda_{1}^{f},\lambda_{1})=0$ or
$\lambda_{1}^{f}=\beta$. If $Z(\lambda_{1}^{f},\lambda_{1})=0$, we check that
no quadruple $(\lambda_{1},\lambda_{1}^{f},\beta,0)$ is a common solution of
Equation (28) and of the system in Equation (8). So $\lambda_{1}^{f}=\beta$.
Then Equation (28) becomes
$(\lambda_{1}-\beta)^{2}=\frac{\beta^{2}}{4}$
and we get the two quadruples $(\frac{3}{2}\beta,\beta,\beta,0)$ and
$(\frac{\beta}{2},\beta,\beta,0)$. A direct check shows that the former one is
not a solution of the system in (8). If
$(\lambda_{1},\lambda_{1}^{f},\lambda_{2},\lambda_{2}^{f})=(\frac{\beta}{2},\beta,\beta,0)$,
then from Lemma 3.3 we get $s_{0,1}=-\beta(\bar{a}_{0}+\bar{a}_{2})$ and so
$V$ has dimension at most $4$. Moreover, $V$ satisfies the same multiplication
table as the algebra $Q_{2}(\beta)$. By Theorem 8.6 in [5], for
$\beta\neq-\frac{1}{2}$ the algebra $Q_{2}(\beta)$ is simple, while it has a
$3$-dimensional quotient over the radical
${\mathbb{F}}(\bar{a}_{0}+\bar{a}_{1}+\bar{a}_{2}+\bar{a}_{-1})$ when
$\beta=-\frac{1}{2}$. The claim follows. $\square$
## References
* [1] Decker, W.; Greuel, G.-M.; Pfister, G.; Schönemann, H.: Singular 4-1-1 — A computer algebra system for polynomial computations. http://www.singular.uni-kl.de (2018).
* [2] De Medts, T., Peacock S.F., Shpectorov, S. and van Couwenberghe M., Decomposition algebras and axial algebras, J. Algebra
* [3] Franchi, C., Mainardis, M., Shpectorov, S., $2$-generated axial algebras of Monster type. https://arxiv.org/abs/2101.10315.
* [4] Franchi, C., Mainardis, M., Shpectorov, S., An infinite dimensional $2$-generated axial algebra of Monster type. https://arxiv.org/abs/2007.02430.
* [5] Galt, A., Joshi, V., Mamontov, A., Shpectorov, S., Staroletov, A., Double axes and subalgebras of Monster type in Matsuo algebras. https://arxiv.org/abs/2004.11180.
* [6] The GAP Group, GAP – Groups, Algorithms, and Programming, Version 4.10.0; 2019. (https://www.gap-system.org)
* [7] Griess, R., The friendly giant. Invent. Math. , 69, (1982), 1-102.
* [8] Hall, J., Rehren, F., Shpectorov, S.: Universal Axial Algebras and a Theorem of Sakuma, J. Algebra 421 (2015), 394-424.
* [9] Hall, J., Rehren, F., Shpectorov, S.: Primitive axial algebras of Jordan type, J. Algebra 437 (2015), 79-115.
* [10] Khasraw, S.M.S., McInroy, J., Shpectorov, S.: On the structure of axial algebras, Trans. Amer. Math. Soc., 373 (2020), 2135-2156.
* [11] Ivanov, A. A., Pasechnik, D. V., Seress, Á., Shpectorov, S.: Majorana representations of the symmetric group of degree $4$, J. Algebra 324 (2010), 2432-2463
* [12] Norton, S. P.: The uniqueness of the Fischer-Griess Monster. In: McKay, J. (ed.) Finite groups-coming of age (Montreal, Que.,1982). Contemp. Math. 45, pp. 271–285. AMS, Providence, RI (1985)
* [13] Norton, S. P.: The Monster algebra: some new formulae. In Moonshine, the Monster and related topics (South Hadley, Ma., 1994), Contemp. Math. 193, pp. 297-306. AMS, Providence, RI (1996)
* [14] Rehren, F., Axial algebras, PhD thesis, University of Birmingham, 2015.
* [15] Rehren, F., Generalized dihedral subalgebras from the Monster, Trans. Amer. Math. Soc. 369 (2017), 6953-6986.
* [16] Sakuma, S.: $6$-transposition property of $\tau$-involutions of vertex operator algebras. Int. Math. Res. Not. (2007). doi:10.1093/imrn/rmn030
* [17] Yabe, T.: On the classification of $2$-generated axial algebras of Majorana type, arXiv:2008.01871
# A nowcasting approach to generate timely estimates of Mexican economic
activity: An application to the period of COVID-19
Francisco Corona Corresponding author<EMAIL_ADDRESS>If you wish to
quote this work in progress, please request permission from the corresponding
author. The views expressed here are those of the authors and do not reflect
those of INEGI. Instituto Nacional de Estadística y Geografía Graciela
González-Farías Centro de Investigación en Matemáticas A.C. Jesús López-Pérez
Instituto Nacional de Estadística y Geografía
(This version: November 6, 2020)
###### Abstract
In this paper, we present a new approach based on dynamic factor models (DFMs)
to perform nowcasts for the percentage annual variation of the Mexican Global
Economic Activity Indicator (IGAE in Spanish). The procedure consists of the
following steps: i) build a timely and correlated database by using economic
and financial time series and real-time variables such as social mobility and
significant topics extracted by Google Trends; ii) estimate the common factors
using the two-step methodology of Doz et al., (2011); iii) use the common
factors in univariate time-series models for test data; and iv) according to
the best results obtained in the previous step, combine the statistically
equal better nowcasts (Diebold-Mariano test) to generate the current nowcasts.
We obtain timely and accurate nowcasts for the IGAE, including those for the
current phase of drastic drops in the economy related to COVID-19 sanitary
measures. Additionally, the approach allows us to disentangle the key
variables in the DFM by estimating the confidence interval for both the factor
loadings and the factor estimates. This approach can be used in official
statistics to obtain preliminary estimates for IGAE up to 50 days before the
official results.
Keywords: Dynamic Factor Models, Global Mexican Economic Activity Indicator,
Google Trends, LASSO regression, Nowcasts.
## 1 Introduction
Currently, the large amount of economic and financial time series collected
over several years by official statistical agencies allows researchers to
implement statistical and econometric methodologies to generate accurate
models to understand any macroeconomic phenomenon. One of the most important
events to anticipate is the movement of the gross domestic product (GDP)
because doing so allows policy to be carried out with more certainty,
according to the expected scenario. For instance, if an economic contraction
is foreseeable, businesses can adjust their investment or expansion plans,
governments can apply countercyclical policy, and consumers can adjust their
spending patterns.
As new economic and financial information is released, the forecasts for a
certain period are also constantly updated; thus, different GDP
estimations arise. In this sense, a new, unexpected event can drastically
affect predictions in the short term; consequently, it might be necessary to
use not only economic and financial information but also nontraditional and
high-frequency indicators, such as news, search topics extracted from the
Internet, social networks, etc. The seminal work of Varian, (2014) is an
obligatory reference for the inclusion of high-frequency information by
economists, and Buono et al., (2018) is also an important reference to
characterize the types of nontraditional data and see the econometric methods
usually employed to extract information from these data.
Thus, the term “nowcast”, or real-time estimation, is relevant because we can
use a rich variety of information to model, from a multivariate point of view,
macroeconomic and financial events, plus specific incidents that can affect
the dynamics of GDP in the short run. Econometrically and statistically, these
facts are related to the literature on large dynamic factor models (DFMs)
because a large amount of time series is useful to estimate underlying common
factors. First introduced in economics by Geweke, (1977) and Sargent and Sims,
(1977), DFMs have recently become very attractive in practice given the
current requirements of dealing with large datasets of time series using high-
dimensional DFM; see, for example, Breitung and Eickmeier, (2006), Bai and Ng,
(2008), Stock and Watson, (2011), Breitung and Choi, (2013) and Bai and Wang,
(2016) for reviews of the existing literature.
An open question in the literature on large DFMs is whether a large number of
series is adequate for a particular forecasting objective. In that sense,
preselecting variables has been shown to reduce prediction error relative
to using the complete dataset Boivin and Ng, (2006); that is, using a large
set of variables does not always yield factor estimates closer to the truth
than using fewer variables, especially in finite samples
Poncela and Ruiz, (2016). Even when the number of time series is
moderate, approximately 15, we can accurately estimate the simulated common
factors, as shown by Corona et al., (2020) in a Monte Carlo analysis. The
latter also corroborates that the Doz et al., (2011) two-step (2SM) factor
extraction method performs better than other approaches available in the
literature above all when the data are nonstationary.
DFM methodology has already been used to nowcast or predict the Mexican
economy. Corona et al., 2017a, one of the first works in this line, estimated
common trends in a large and nonstationary DFM to predict the Global Economic
Activity Indicator (IGAE in Spanish) two steps ahead and concluded that the
prediction error was reduced with respect to some benchmark univariate and
multivariate time-series models. Caruso, (2018) focuses on international
indicators, mainly for the US economy, to show that its nowcasts of quarterly
GDP outperform the predictions obtained by professional forecasters. Recently,
Gálvez-Soriano, (2020) concluded that bridge equations perform better than DFM
and static principal components (PCs) when making the nowcasts of quarterly
GDP. An important work related with timely GDP estimation is Guerrero et al.,
(2013) where, based on vector autoregression (VAR) models, they generate rapid
GDP estimates (and its three grand economic activities) with a delay of up to
15 days from the end of the reference quarter, while the official GDP takes
around 52 days after the quarter closes. This work is the main reference to
INEGI’s “Estimación Oportuna del PIB
Trimestral.”111https://www.inegi.org.mx/temas/pibo/
Although prior studies are empirically relevant for the case of Mexico, our
analysis goes further by including nontraditional information to capture more
drastic frictions that occur in the very short run, one or two months. We
identify that previous works focus on traditional information, which limits
their capacity to predict the recent historical declines attributed to
COVID-19 and the associated economic closures since March 2020. Our approach
maximizes the structural explanation of the already relevant macroeconomic and
financial time series with the timeliness of other high-frequency variables
commonly used in big data analysis.
In this tradition, this work estimates a flexible and trained DFM to verify
the assumptions that guarantee the consistency of the component estimation
from a statistical point of view. That is, we use previous knowledge and
attempt to fill in the identified gaps by focusing on the Mexican case in the
following ways: i) build a timely and correlated database by using traditional
economic and financial time series and real-time nontraditional information,
determining the latter relevant variables with least absolute shrinkage and
selection operator (LASSO) regression, a method of variable selection; ii)
estimate the common factors using the two-step methodology of Doz et al.,
(2011); iii) train univariate time series models with the DFM’s common factors
to select the best nowcasts; iv) determine the confidence intervals for both
the factor loadings and the factor itself to analyze the importance of each
variable and the uncertainty attributed to the estimation; and v) combine the
best nowcasts that are statistically equivalent to generate the current estimates.
In practice, we consider the benefits of this paper to be opportunity and
openness. First, given the timely availability of the information that our
approach uses, we can generate nowcasts of the IGAE up to 50 days before the
official data release; thus, our approach becomes an alternative to obtaining
IGAE’s preliminary estimates, which are very important in official statistics.
Second, this paper illustrates the empirical strategy to generate IGAE
nowcasts step-by-step to practitioners, so any user can replicate the results
for other time series. Third, and very importantly, the nowcasting approach
allows us to identify which variables are the most relevant to the nowcasts;
consequently, we emphasize the structural explanation of our results.
The remainder of this paper is structured as follows. The next section, 2,
summarizes the Mexican economy evolution in the era of COVID-19. Section 3
presents the methodology considered to generate the nowcasts. Section 4
describes the data and the descriptive analysis. Section 5 contains the
empirical results. Finally, Section 6 concludes the paper.
## 2 The Mexican economy amid the COVID-19 pandemic
The first six months of the COVID-19 pandemic (until September 2020) have had
severe impacts on the Mexican economy. The first case of coronavirus in Mexico
was documented on February 27, 2020. Despite government efforts to cope with
the effects of the obligatory halt of economic activity, GDP in the second
quarter plummeted with a historic 18.7% yearly contraction. Moreover, the
pandemic accelerated economic stagnation that had begun to show signs of
amelioration, following three quarters of negative growth of 0.5, 0.8 and 2.1%
since the third quarter of 2019. However, starting in 2020, the actual values
were not foreseen by national and international institutions such as private
and central banks. For example, the November 2019 Organisation for Economic
Co-operation and Development Economic Outlook estimated the real GDP variation
for 2020 at 1.29%, while the June 2020 report updated it to -8.6%, a
difference of 9.8% in absolute terms. Moreover, even when the Mexican Central
Bank expected barely zero economic growth for 2020, placing its November 2019
outlook between -0.2% and 0.2%, it did not anticipate such a contraction as
has been seen so far this year.
Between January 2019 and February 2020, before the COVID-19 outbreak started
in Mexico, the annual growth of IGAE (the IGAE is the monthly proxy variable
for Mexican GDP, which covers approximately 95% of the total economy; its
publication is available two months after the reference month,
https://www.inegi.org.mx/temas/igae/) already showed signs of slowing and
fluctuated around -1.75 and 0.76%, and since May 2019, the economy exhibited
nine consecutive months of negative growth. Broken down by sector and using
IGAE, the economy suffered devastating consequences in the secondary and
tertiary sectors. Overall, the pandemic brought about -19.7, -21.6 and -14.5%
contractions in total economic activity for April, May and June of 2020,
respectively.
The industrial sector registered the deepest contractions, reducing its
activity in April and May by -30.1 and -29.6%, respectively, in annual terms,
mainly driven by the closure of manufacturing and construction operations,
which were considered nonessential businesses, followed by a slight recovery in
June (-17.5%), when an important number of activities, including automobile
manufacturing, resumed but remained at low activity levels.
sector also suffered from lockdown measures, falling by -15.9, -19 and -13.6%
in the three months of the second quarter, respectively, especially due to
transportation, retail, lodging and food preparation, mainly due to the
decrease in tourist activity, although restaurants and airports were not
closed. The primary sector showed signs of resilience and even grew in April
and May 2020, by 1.4 and 2.7%, and only shrank in June by -1.5% on an annual
basis.
The great confinement in Mexico, which officially lasted from March 23 to May
31 (named “Jornada Nacional de Sana Distancia”), had severe consequences for
the components of aggregate demand: consumption, investment and foreign
trade all suffered. Consumption had been on a deteriorating path
since September 2019, and in May 2020, the last month for which data are
available, it exhibited a -23.5% plunge compared to the same period of 2019.
Similarly, investment, which peaked in June 2018, continued to deteriorate and
registered a drop of -38.4% in May 2020 on a year-over-year basis. Regarding
international trade, exports began to abate in August 2019, hit a record low
in May 2020, and despite a slight recovery in June, the yearly variation in
July 2020 was still -8.8% below its 2019 level. Similarly, imports registered
a maximum in November 2018, and despite improvements in May 2020, the yearly
variation as of July 2020 was still -26.3% under its 2019 level.
Prices and employment, to round out the description of the Mexican economy, also
suffered the ravages of the pandemic. Prices, unlike during other periods of
economic instability in Mexico, do not seem to be in an inflationary spiral;
in fact, the inflation rate in July 2020 compared to the previous year was
3.6%, and the central bank expects it will hover around 3% for the next 12 to
24 months. Additionally, different job-related statistics also reveal an
underutilization of the labor force. For example, IMSS-affiliated workers, who
account for approximately 90% of the formal sector, suffered 1.3 million in
job losses from the peak in November 2019 to July 2020. Similarly, the
underemployment rate, an indicator of part-time employment, increased over
twelve months from 7.6% to 20.1% in June 2020. In addition, the labor force
participation rate showed a sharp decline in the first months of the social
distancing policies, implying that 12 million people were dropped from the
economy’s active workforce due to COVID-19. Thus, the unemployment rate, the
share of people actively looking for a remunerated job, registered an annual increase
of 1.32% in June 2020 to stand at 5.5%.
The literature on the effects of the pandemic on the economy has grown
rapidly; see, for instance, Covid Economics from the Center for Economic and
Policy Research and numerous working papers from the National Bureau of
Economic Research. For the case of the Mexican economy, the works of
Campos-Vazquez et al., (2020), who analyze online job advertisements in times of
COVID-19, and Lustig et al., (2020), who conduct simulations to project poverty
figures across different population sectors using survey microdata, stand
out. Along the same lines, the journal EconomíaUNAM dedicated its number 51 of
volume 17 in its entirety to study the impacts in Mexico of the pandemic,
covering a wide range of issues related mainly to health economics (Vanegas,
2020; Kershenobich, 2020), labor economics (Samaniego, 2020), inequality
(Alberro, 2020), poverty (Fernández, 2020) and public policy (Sánchez,
2020; Moreno-Brid, 2020). None of these relates to short-term forecasting of
economic activity.
The closest paper to ours is Meza, (2020), who projects the economic impact of
COVID-19 for twelve variables, including IGAE, based on a Susceptible-
Infectious-Recovered epidemic model and a novel method to handle a sequence of
extreme observations when estimating a VAR model (Lenza and Primiceri, 2020).
To make the forecasts, Meza, (2020) first estimates the shocks that have hit
the economy since March 2020 and then produces four forecasts, depending on
whether a path for the pandemic is considered and, if so, under three
scenarios. In contrast to our work, that forecast horizon focuses on the
medium term, June 2020 to February 2023, whereas ours focuses on the short
term, one or two months ahead.
## 3 Methodology
This section describes how we employ DFM to generate the nowcasts of the IGAE.
First, we describe how LASSO regression is used as a variable selection method
to select among various Google Trends topics. Then, we report how the
stationary DFM shrinks the complete dataset in the 2SM strategy to obtain the
estimated factor loadings and common factors and in the Onatski, (2010)
procedure to detect the number of common factors. Finally, we describe the
nowcasting approach.
### 3.1 LASSO regression
LASSO regression was introduced by Tibshirani, (1996) as a new method of
estimation in linear models by minimizing the residual sum of the squares
(RSS) subject to the sum of the absolute value of the coefficients being less
than a constant. In this sense, LASSO regression is related to ridge
regression, but the former focuses on determining the tuning parameter,
$\lambda$, that controls the regularization effect; consequently, we can have
better predictions than ordinary least squares (OLS) in a variety of
scenarios, depending on its choice.
Let $W_{t}=(w_{1t},\dots,w_{Kt})^{\prime}$ be a $K\times 1$ vector of
stationary and standardized variables. Consider the following penalized RSS:
$\min_{\beta}\>RSS=(y-W\beta)^{\prime}(y-W\beta)\quad\mbox{s.t.}\quad f(\beta)\leq c,$
(1)
where $y=(y_{1},\dots,y_{T})^{\prime}$ is a $T\times 1$ vector,
$\beta=(\beta_{1},\dots\beta_{K})^{\prime}$ is a $K\times 1$ vector,
$W=(W_{1},\dots,W_{T})^{\prime}$ is a $T\times K$ matrix and $c\geq 0$ is a
tuning parameter that controls the shrinkage of the estimates.
If $f(\beta)=\sum_{j=1}^{K}\beta_{j}^{2}$, the ridge solution is
$\widehat{\beta}^{Ridge}_{\lambda}=(W^{\prime}W+\lambda
I_{K})^{-1}W^{\prime}y$. In practice, this solution never sets coefficients to
exactly zero; therefore, ridge regression cannot perform as a variable
selection method in linear models, although its prediction ability is better
than OLS.
Tibshirani, (1996) considers a penalty function as
$f(\beta)=\sum_{j=1}^{K}|\beta_{j}|\leq c$; in this case, the solution of (1)
is not closed, and it is obtained by convex optimization techniques. The LASSO
solution has the following implications: i) when $\lambda\rightarrow 0$, we
obtain solutions similar to OLS, and ii) when $\lambda\rightarrow\infty$,
$\widehat{\beta}^{LASSO}_{\lambda}\rightarrow 0.$ Therefore, LASSO regression
can perform as a variable selection method in linear models. Consequently, if
$\lambda$ is large, more coefficients tend to zero, selecting the variables
that minimize the error prediction.
In macroeconomic applications, Aprigliano and Bencivelli, (2013) use LASSO
regression to select the relevant economic and financial variables in a large
data set with the goal of estimating a new Italian coincident indicator.
### 3.2 Dynamic Factor Model
We consider a stationary DFM where the observations, $X_{t}$, are generated by
the following process:
$X_{t}=PF_{t}+\varepsilon_{t},$ (2) $\Phi(L)F_{t}=\eta_{t},$ (3)
$\Gamma(L)\varepsilon_{t}=a_{t},$ (4)
where $X_{t}=(x_{1t},\dots,x_{Nt})^{\prime}$ and
$\varepsilon_{t}=(\varepsilon_{1t},\dots,\varepsilon_{Nt})^{\prime}$ are
$N\times 1$ vectors of the variables and idiosyncratic noises observed at time
$t$. The common factors, $F_{t}=(F_{1t},\dots,F_{rt})^{\prime}$, and the
factor disturbances, $\eta_{t}=(\eta_{1t},\dots,\eta_{rt})^{\prime}$, are
$r\times 1$ vectors, with $r$ $(r<N)$ being the number of static common
factors, which is assumed to be known. The $N\times 1$ vector of idiosyncratic
disturbances, $a_{t}$, is distributed independently of the factor
disturbances, $\eta_{t}$, for all leads and lags, denoted by $L$, where
$LX_{t}=X_{t-1}$. Furthermore, $\eta_{t}$ and $a_{t}$, are assumed to be
Gaussian white noises with positive definite covariance matrices
$\Sigma_{\eta}=\text{diag}(\sigma_{\eta_{1}}^{2},\dots,\sigma_{\eta_{r}}^{2})$
and $\Sigma_{a},$ respectively. $P=(p_{1},\dots,p_{N})^{\prime}$, is the
$N\times r$ matrix of factor loadings, where,
$p_{i}=(p_{i1},\dots,p_{ir})^{\prime}$ is an $r\times 1$ vector. Finally,
$\Phi(L)=I-\sum_{i=1}^{k}\Phi_{i}L^{i}$ and $\Gamma(L)=I-\sum_{j=1}^{s}\Gamma_{j}L^{j}$,
where $\Phi_{i}$ and $\Gamma_{j}$ are $r\times r$ and $N\times N$ matrices
containing the VAR parameters of the factors and idiosyncratic components of
orders $k$ and $s$, respectively. For simplicity, we assume that the number of
dynamic factors, $r_{1}$, is equal to $r$.
Alternative representations in the stationary case are given by Doz et al.,
(2011, 2012), who assume that $r$ can be different from $r_{1}$. Additionally,
when $r=r_{1}$, Bai and Ng, (2004), Choi, (2017), and Corona et al., (2020)
also assume possible nonstationarity in the idiosyncratic noises. Barigozzi et
al., (2016, 2017) assume possible nonstationarity in $F_{t}$,
$\varepsilon_{t}$ and $r\neq r_{1}$.
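To fix ideas, the stationary DFM of equations (2) to (4) can be simulated directly. The dimensions, loadings, and VAR coefficients below are illustrative choices, not estimates from the paper:

```python
import numpy as np

# Illustrative simulation of the DFM in equations (2)-(4); all sizes and
# coefficients are assumed values for this sketch.
rng = np.random.default_rng(42)
T, N, r = 200, 20, 2

Phi = np.diag([0.7, 0.4])        # VAR(1) dynamics of the factors, eq. (3)
Gamma = 0.3 * np.eye(N)          # AR(1) dynamics of the idiosyncratic part, eq. (4)
P = rng.standard_normal((N, r))  # N x r matrix of factor loadings

F = np.zeros((T, r))             # common factors F_t
eps = np.zeros((T, N))           # idiosyncratic noises epsilon_t
for t in range(1, T):
    F[t] = Phi @ F[t - 1] + rng.standard_normal(r)              # eta_t
    eps[t] = Gamma @ eps[t - 1] + 0.5 * rng.standard_normal(N)  # a_t

X = F @ P.T + eps                # observations, eq. (2)
```

Stationarity holds because all eigenvalues of `Phi` and `Gamma` lie inside the unit circle, matching the assumptions stated for the model.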
The DFM in equations (2) to (4) is not identified. As we noted in the
Introduction, the factor extraction used in this work is the 2SM;
consequently, in the first step, we estimate the common factors by using PCs
to solve the identification problem and uniquely define the factors; we impose
the restrictions $P^{\prime}P/N=I_{r}$ and $F^{\prime}F$ being diagonal, where
$F=(F_{1},\dots,F_{T})$ is $r\times T$. For a review of restrictions in the
context of PC factor extraction, see Bai and Ng, (2013).
#### 3.2.1 Two-step method for factor extraction
Giannone et al., (2008) popularized the usage of 2SM factor extraction to
estimate the common factors by using monthly information with the goal of
generating the nowcasts of quarterly GDP. However, Doz et al., (2011) proved
the statistical consistency of the estimated common factor using 2SM. In the
first step, PC factor extraction consistently estimates the static common
factors without assuming any particular distribution, allowing weak serial and
cross-sectional correlation in the idiosyncratic noises; see, for example,
Bai, (2003). In the second step, we model the dynamics of the common factors
via the Kalman smoother, allowing idiosyncratic heteroskedasticity, a
situation that occurs frequently in practice. In a finite sample study, Corona
et al., (2020) show that with the 2SM of Doz et al., (2011) based on PC and
Kalman smoothing, we can obtain closer estimates of the common factors under
several data generating processes that can occur in empirical analysis, such
as heteroskedasticity and serial and cross-sectional correlation in
idiosyncratic noises. Additionally, following Giannone et al., (2008), this
method is useful when the objective is nowcasting, given the flexibility to
estimate common factors when not all variables are updated at the same time.
The 2SM procedure is implemented according to the following steps:
1. 1.
Set $\hat{P}$ as $\sqrt{N}$ times the eigenvectors associated with the $r$
largest eigenvalues of
$X^{\prime}X$, where $X=(X_{1},\dots,X_{T})^{\prime}$ is a $T\times N$ matrix.
By regressing $X$ on $\hat{P}$ and using the identifiability restrictions,
obtain $\hat{F}=X\hat{P}/N$ and
$\hat{\varepsilon}=X-\hat{F}\hat{P}^{\prime}.$ Compute the asymptotic
confidence intervals for both factor loadings and common factors as proposed
by Bai, (2003).
2. 2.
Set the estimated covariance matrix of the idiosyncratic errors as
$\hat{\Psi}=\text{diag}\left(\hat{\Sigma}_{\varepsilon}\right)$, where the
diagonal of $\hat{\Psi}$ contains the idiosyncratic variance of each variable
of $X$; hence, $\hat{\sigma}^{2}_{i}$ for $i=1,\dots,N.$
3. 3.
Estimate a VAR($k$) model by OLS on the estimated common factors, $\hat{F}$, and
compute the estimated autoregressive coefficient matrix of the VAR(1) case,
denoted by $\hat{\Phi}$. Assuming that $f_{0}\sim N(0,\Sigma_{f})$, the
unconditional covariance matrix of the factors can be estimated as
$\text{vec}\left(\hat{\Sigma}_{f}\right)=\left(I_{r^{2}}-\hat{\Phi}\otimes\hat{\Phi}\right)^{-1}\text{vec}\left(\hat{\Sigma}_{\eta}\right)$,
where $\hat{\Sigma}_{\eta}=\hat{\eta}^{\prime}\hat{\eta}/T$.
4. 4.
Write DFM in equations (2) to (4) in state-space form, and with the system
matrices substituted by $\hat{P}$, $\hat{\Psi}$, $\hat{\Phi}$,
$\hat{\Sigma}_{\eta}$ and $\hat{\Sigma}_{f},$ use the Kalman smoother to
obtain an updated estimation of the factors denoted by $\tilde{F}$.
In practice, $X_{t}$ are not updated for all $t$; in these cases, we apply the
Kalman smoother, $E(\hat{F}_{t}|\Omega_{T})$, where $\Omega_{T}$ is all the
available information in the sample, and we take into account the following
two cases:
$\hat{\Psi}_{i}=\begin{cases}\hat{\sigma}^{2}_{i}&\mbox{if }x_{it}\mbox{ is available,}\\ \infty&\mbox{if }x_{it}\mbox{ is not available.}\end{cases}$
Empirically, when specific data on $X_{t}$ are not available, Harvey and
Phillips, (1979) suggest using a diffuse value equal to $10^{7}$; however, we
use $10^{32}$, following the nowcast package of the R program, see de Valk
et al., (2019).
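Steps 1-3 above can be sketched in a few lines. The following is a minimal numpy illustration (the Kalman-smoothing step 4 and the large-variance treatment of missing observations are omitted, and all function and variable names are our own, not the paper's implementation):

```python
import numpy as np

def two_step_factors(X, r):
    """Sketch of steps 1-3 of the 2SM procedure for a T x N panel X."""
    T, N = X.shape
    # Step 1: loadings are sqrt(N) times the eigenvectors of X'X
    # associated with the r largest eigenvalues.
    eigval, eigvec = np.linalg.eigh(X.T @ X)
    order = np.argsort(eigval)[::-1][:r]
    P_hat = np.sqrt(N) * eigvec[:, order]            # N x r loadings
    F_hat = X @ P_hat / N                            # T x r factors
    eps_hat = X - F_hat @ P_hat.T                    # idiosyncratic errors
    # Step 2: diagonal covariance matrix of the idiosyncratic errors.
    Psi_hat = np.diag(eps_hat.var(axis=0))
    # Step 3: VAR(1) for the factors and the implied unconditional
    # covariance, vec(Sigma_f) = (I - Phi kron Phi)^{-1} vec(Sigma_eta).
    F0, F1 = F_hat[:-1], F_hat[1:]
    Phi_hat = np.linalg.lstsq(F0, F1, rcond=None)[0].T
    eta = F1 - F0 @ Phi_hat.T
    Sigma_eta = eta.T @ eta / len(eta)
    vec_Sf = np.linalg.solve(np.eye(r * r) - np.kron(Phi_hat, Phi_hat),
                             Sigma_eta.ravel())
    Sigma_f = vec_Sf.reshape(r, r)
    return P_hat, F_hat, Psi_hat, Phi_hat, Sigma_eta, Sigma_f
```

In step 4, one would feed these matrices into a standard state-space Kalman smoother, replacing $\hat{\sigma}^{2}_{i}$ with a very large value whenever $x_{it}$ is missing, as described above.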
#### 3.2.2 Determining the number of common factors
To determine the estimated number of common factors, $\widehat{r}$, Onatski,
(2010) proposes a procedure for the case in which the proportion of the
observed variance attributed to the factors is small relative to that
attributed to the idiosyncratic term. This method determines a sharp
threshold, $\delta$, which consistently separates the bounded and diverging
eigenvalues of the sample covariance matrix. The author proposes the following
algorithm to estimate $\delta$ and determine the number of factors:
1. Obtain and sort in descending order the $N$ eigenvalues of the covariance
matrix of observations, $\widehat{\Sigma}_{X}$. Set $j=r_{\max}+1$.
2. Obtain $\widehat{\gamma}$ as the OLS estimator of the slope of a simple
linear regression, with a constant, of
$\left\{\lambda_{j},\dots,\lambda_{j+4}\right\}$ on
$\left\{(j-1)^{2/3},\dots,(j+3)^{2/3}\right\}$, and set
$\delta=2|\widehat{\gamma}|$.
3. Let $r_{\max}^{(N)}$ be any slowly increasing sequence (in the sense that it
is $o(N)$). If $\widehat{\lambda}_{k}-\widehat{\lambda}_{k+1}<\delta$ for all
$k\leq r_{\max}^{(N)}$, set $\widehat{r}=0$; otherwise, set
$\widehat{r}=\max\{k\leq r_{\max}^{(N)}\mid\widehat{\lambda}_{k}-\widehat{\lambda}_{k+1}\geq\delta\}$.
4. With $j=\widehat{r}+1$, repeat steps 2 and 3 until convergence.
This algorithm is known as the edge distribution, and Onatski, (2010) proves
the consistency of $\widehat{r}$. Corona et al., 2017b show that this method
works reasonably well in small samples. Two important features of this method
are that the number of factors can be estimated without previously estimating
the common components and that the common factors may be integrated.
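The edge-distribution algorithm above can be sketched compactly in numpy, under the simplifying assumption that $r_{\max}^{(N)}$ is a fixed user-chosen bound (the function name, the `max_iter` guard, and the stopping rule implementation are our own):

```python
import numpy as np

def edge_distribution(X, r_max, max_iter=50):
    """Sketch of Onatski's (2010) edge-distribution estimator of the
    number of factors for a T x N data matrix X."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
    j = r_max + 1                        # step 1: start at r_max + 1
    r_hat = 0
    for _ in range(max_iter):
        # Step 2: OLS slope of lambda_j..lambda_{j+4} on
        # (j-1)^{2/3}..(j+3)^{2/3}; threshold is twice its magnitude.
        y = lam[j - 1:j + 4]
        x = np.arange(j - 1, j + 4) ** (2 / 3)
        delta = 2 * abs(np.polyfit(x, y, 1)[0])
        # Step 3: largest k <= r_max with eigenvalue gap >= delta.
        gaps = lam[:r_max] - lam[1:r_max + 1]
        above = np.where(gaps >= delta)[0]
        r_new = int(above.max()) + 1 if above.size else 0
        if r_new == r_hat:               # step 4: iterate to convergence
            return r_hat
        r_hat = r_new
        j = r_hat + 1
    return r_hat
```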
### 3.3 Nowcasting approach
In this subsection, we describe the nowcasting approach to estimate the annual
percentage variation of IGAE, denoted by $y^{*}=(y_{1},\dots,y_{T^{*}})$,
where $T^{*}=T-2$; hence, we focus on generating the nowcasts two steps ahead.
#### 3.3.1 Selecting relevant Google Trends topics
Currently, Google Trends topics, an up-to-date source of information that
provides an index of Internet searches or queries by category and geography,
are frequently used to predict economic phenomena. See, for instance,
Stephens-Davidowitz and Varian, (2014) for a full review of this tool and
other analytical tools from Google applied to social sciences. Other recent
examples are Ali et al., (2020), who analyze online job postings in the US
childcare market under stay-at-home orders; Goldsmith-Pinkham and Sojourner,
(2020), who nowcast the number of workers filing unemployment insurance claims
in the US based on the intensity of searches for the term “file for
unemployment”; and Caperna et al., (2020), who develop random forest models on
the queries that best predict the unemployment rate, creating a daily
indicator of unemployment-related searches for the 27 European Union
countries.
In this way, for a sample of $K$ topics on Google Trends, the relevant topics
$l=0,\dots,\zeta$, with $\zeta\geqslant 0$, are selected with LASSO regression
as follows:
1. Split the data for $t=1,\dots,T^{*}-H_{g}$.
2. For $h=1$ and for the sample of size $T^{*}-H_{g}+h$, estimate
$\widehat{\beta}^{LASSO}_{\lambda,h}$. Compute the following vector of
indicator variables:
$\widehat{\beta}_{j,h}=\begin{cases}1&\mbox{if }\widehat{\beta}^{LASSO}_{j,\lambda,h}\neq 0,\\ 0&\mbox{if }\widehat{\beta}^{LASSO}_{j,\lambda,h}=0.\end{cases}$
3. Repeat step 2 until $h=H_{g}$.
4. Define the $H_{g}\times K$ matrix
$\widehat{\beta}=(\widehat{\beta}_{1},\dots,\widehat{\beta}_{K})$, where
$\widehat{\beta}_{j}=(\widehat{\beta}_{j,1},\dots,\widehat{\beta}_{j,H_{g}})^{\prime}$
is an $H_{g}\times 1$ vector.
5. Select the $l$ significant variables that satisfy the condition
$\widehat{\beta}_{l}=\left(\widehat{\beta}_{l\in
j}\mid\textbf{1}\widehat{\beta}>\varphi\right)$, where $\varphi$ is the
$1-\alpha$ sample quantile of $\textbf{1}\widehat{\beta}$, with $\textbf{1}$
being a $1\times H_{g}$ vector of ones.
With this procedure, we select the topics that frequently reduce the in-sample
prediction error for the IGAE estimates during the last $H_{g}$ months. We
estimate the optimal $\lambda$ by using the glmnet package of the R program.
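The selection loop can be sketched as follows. Since glmnet is an R package, we substitute a plain ISTA (soft-thresholding) LASSO solver with a fixed penalty; the solver, the `lam_scale` rule, and all names are our own simplifications, not the paper's implementation:

```python
import numpy as np

def soft_threshold_lasso(X, y, lam, n_iter=1000):
    """ISTA solver for (1/2)||y - X b||^2 + lam * ||b||_1 (sketch)."""
    beta = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        b = beta - step * grad
        beta = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)
    return beta

def select_topics(G, y, H_g, alpha=0.10, lam_scale=2.5):
    """Rolling selection: refit the LASSO on expanding samples ending in
    each of the last H_g months, record which coefficients are nonzero,
    and keep topics whose selection frequency exceeds the (1 - alpha)
    quantile of the frequencies."""
    T, K = G.shape
    hits = np.zeros((H_g, K))
    for h in range(1, H_g + 1):
        n = T - H_g + h
        Xs = (G[:n] - G[:n].mean(0)) / G[:n].std(0)   # standardize
        beta = soft_threshold_lasso(Xs, y[:n] - y[:n].mean(), lam_scale * n)
        hits[h - 1] = beta != 0
    freq = hits.sum(axis=0)
    return np.where(freq > np.quantile(freq, 1 - alpha))[0]
```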
#### 3.3.2 Transformations
In our case, to predict $y^{*}$, the time series
$X_{i}=(x_{i1},\dots,x_{iT^{*}})$ are transformed such that they satisfy the
following condition:
$X^{*}_{i}=\left(f(X_{i})\mid\max_{corr}(f(X_{i}),y^{*})\right).$ (5)
Hence, we select the $f(X_{i})$ that maximizes the correlation between
$f(X_{i})$ and $y^{*}$. Consider $f(\cdot)$ as follows:
1. None (n)
2. Monthly percentage variation (m): $\left(\frac{X_{t}}{X_{t-1}}\times 100\right)-100$
3. Annual percentage variation (a): $\left(\frac{X_{t}}{X_{t-12}}\times 100\right)-100$
4. Lagged (l): $X_{t-1}$
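A minimal sketch of the four candidate transformations and the selection rule of equation (5), comparing absolute correlations on the overlapping observations (the function name and the use of the absolute correlation are our own choices):

```python
import numpy as np

def best_transform(x, y):
    """For one series x, apply each candidate transformation and keep the
    one most correlated (in absolute value) with the target y."""
    candidates = {
        "n": x,                                               # none
        "m": np.r_[[np.nan], (x[1:] / x[:-1]) * 100 - 100],   # monthly % var
        "a": np.r_[[np.nan] * 12,
                   (x[12:] / x[:-12]) * 100 - 100],           # annual % var
        "l": np.r_[[np.nan], x[:-1]],                         # one-month lag
    }
    best_name, best, best_corr = None, None, -np.inf
    for name, z in candidates.items():
        ok = ~np.isnan(z)                 # align on the overlap
        c = abs(np.corrcoef(z[ok], y[ok])[0, 1])
        if c > best_corr:
            best_name, best, best_corr = name, z, c
    return best_name, best
```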
Note that the goal of these transformations is not to achieve stationarity,
although they are intrinsically stationary transformations regardless of
whether $y^{*}$ is stationary; in fact, the transformations $m$ and $a$ tend
to produce stationary series when the original time series are $I(1)$, which
is frequent in economics; see Corona et al., 2017b . Otherwise, it is
necessary that $(f(X_{i}),y^{*})$ be cointegrated.
The implications of equations (2) to (4) are very important because the system
must be stationarized in the following sense: theoretically, although some
common factor, $F_{t}$, can be nonstationary, the estimates remain consistent
provided that the idiosyncratic errors are stationary, see Bai, (2004). In
this way, we use the PANIC test (Bai and Ng, 2004) to verify this assumption.
Additionally, an alternative to estimate nonstationary common factors by using
2SM when the time series are $I(1)$ is given by Corona et al., (2020).
#### 3.3.3 Nowcasting approach
Having estimated the common factors as described in subsection 3.2.1 by using
$X_{t}^{*}$ for $t=1,\dots,T$, we estimate a linear regression model with
autoregressive moving average (ARMA) errors to generate the nowcasts
$y^{*}_{t}=a+b\tilde{F}_{t}+u_{t}\quad t=1,\dots,T-2,$ (6)
where $u_{t}=\phi(L)u_{t}+\gamma(L)v_{t}$ with
$\phi(L)=\sum_{i=1}^{p}\phi_{i}L^{i}$ and
$\gamma(L)=1+\sum_{j=1}^{q}\gamma_{j}L^{j}$. The parameters are estimated by
maximum likelihood. Consequently, the nowcasts are obtained by the following
expression:
$\widehat{y}_{T^{*}+h}=\widehat{a}+\widehat{b}\tilde{F}_{T^{*}+h}+\widehat{u}_{T^{*}+h}\quad\text{for}\quad
h=1,2.$ (7)
Note that Giannone et al., (2008) propose using the model with $p=q=0$; hence,
the nowcasts are obtained by using expression (7). In our case, we estimate
different models for the orders $p=0,\dots,p_{\max}$ and $q=0,\dots,q_{\max}$;
thus, the case of Giannone et al., (2008) is a particular case of this
expression. Now, our interest is in selecting models with similar performance
on the training data. In this way, we carry out the following procedure:
1. Start with $p=0$ and $q=0$.
2. Estimate the nowcasts for $T^{*}+1$ and $T^{*}+2$, namely,
$\widehat{y}^{0,0}=(\widehat{y}_{T^{*}+1},\widehat{y}_{T^{*}+2})^{\prime}$.
3. Split the data for $t=1,\dots,T^{*}-H_{t}.$
4. For $h=1$ and for the sample of size $T^{*}-H_{t}+h$, estimate equation
(6), generate the nowcasts with expression (7) one step ahead, and calculate
the errors and absolute errors (AE) as follows:
$e^{0,0}_{1}=y_{T^{*}-H_{t}+1}-\widehat{y}_{T^{*}-H_{t}+1}$
$AE_{1}^{0,0}=|e^{0,0}_{1}|$
5. Repeat steps 3 and 4 until $h=H_{t}$. Hence, estimate
$e^{0,0}=(e^{0,0}_{1},\dots,e^{0,0}_{H_{t}})^{\prime}$ and
$AE^{0,0}=(AE^{0,0}_{1},\dots,AE^{0,0}_{H_{t}})$. Additionally, we define the
weighted AE (WAE) as $WAE^{0,0}=AE^{0,0}\Upsilon$, where $\Upsilon$ is an
$H_{t}\times 1$ vector of weights that penalizes the nowcasting errors and
satisfies $\textbf{1}\Upsilon=1.$
6. Repeat the previous steps for all combinations of $p$ and $q$ up to
$p_{\max}$ and $q_{\max}$. Generate the following elements:
$\widehat{y}(p,q)=(\widehat{y}^{0,0},\widehat{y}^{1,0},\dots,\widehat{y}^{p_{\max},q_{\max}}),$
$e(p,q)=(e^{0,0},e^{1,0},\dots,e^{p_{\max},q_{\max}}),$
$WAE(p,q)=(WAE^{0,0},WAE^{1,0},\dots,WAE^{p_{\max},q_{\max}})^{\prime},$
where $\widehat{y}(p,q)$ is a $2\times(p_{\max}+1)(q_{\max}+1)$ matrix of
nowcasts, $e(p,q)$ is an $H_{t}\times(p_{\max}+1)(q_{\max}+1)$ matrix that
contains the nowcast errors in the training data, and $WAE(p,q)$ is a
$(p_{\max}+1)(q_{\max}+1)\times 1$ vector of the weighted errors in the
training data.
7. We select the best nowcast as a function of $p$ and $q$, denoted by
$\widehat{y}(p^{*},q^{*})$, where $p^{*},q^{*}$ are obtained as follows:
$p^{*},q^{*}=\operatornamewithlimits{argmin}\limits_{0\leq p\leq
p_{\max},\,0\leq q\leq q_{\max}}WAE(p,q)$
8. To use models with similar performance, we combine the nowcasts of
$\widehat{y}(p^{*},q^{*})$ with those of models with statistically equal
forecast errors according to Diebold and Mariano, (1995) tests, by using
$e(p,q)$ and carrying out pairwise tests between the model with minimum
$WAE(p,q)$ and the others. Consequently, from the models with statistically
equal performance, we select the median of the nowcasts, namely,
$\widehat{y}$.
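The rolling evaluation in steps 3-7 can be sketched as below; for brevity, a plain OLS regression on the factor stands in for the regression with ARMA errors, so it corresponds to the $p=q=0$ case only, and all names are our own:

```python
import numpy as np

def wae(y, F, fit_predict, H_t, weights):
    """One entry of WAE(p,q): rolling one-step nowcast errors over the
    last H_t months of the sample, aggregated with the weights Upsilon."""
    errors = []
    for h in range(H_t):
        n = len(y) - H_t + h                  # expanding estimation sample
        y_hat = fit_predict(y[:n], F[:n + 1])
        errors.append(abs(y[n] - y_hat))
    return float(np.asarray(errors) @ weights)

def ols_nowcast(y_train, F):
    """Stand-in for equation (6) with p = q = 0: fit y_t = a + b F_t by
    OLS and nowcast with the newest factor value."""
    X = np.c_[np.ones(len(y_train)), F[:-1]]
    a, b = np.linalg.lstsq(X, y_train, rcond=None)[0]
    return a + b * F[-1]

def mean_nowcast(y_train, F):
    return float(y_train.mean())              # naive benchmark

def select_model(y, F, candidates, H_t):
    """Step 7: pick the candidate with the smallest WAE."""
    weights = np.full(H_t, 1.0 / H_t)         # equal weights -> MAE
    scores = {name: wae(y, F, fp, H_t, weights)
              for name, fp in candidates.items()}
    return min(scores, key=scores.get), scores
```

The subsequent combination step (Diebold-Mariano tests and the median of the statistically equal nowcasts) would operate on the stored error vectors in the same way.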
This nowcasting approach allows the generation of nowcasts based on a trained
process, taking advantage of the information of similar models. It is clear
that $\widehat{b}$ must be significant to exploit the relationship between the
IGAE and the information summarized by the DFM. Note that $\Upsilon$ is a
weight vector that penalizes the nowcast errors. Its most common form is
$\Upsilon=(1/H_{t},\dots,1/H_{t})^{\prime}$, an $H_{t}\times 1$ vector in
which all nowcast errors have equal weight, known in the literature as the
mean absolute error (MAE). Therefore, we do not consider the traditional MAE
by default, but rather a weighted (or equal-weight) average of the individual
AEs. For example, we could have given more weight to the most recent nowcast
errors, that is, to those in the COVID-19 period. Also, note that we can
obtain $AE(p,q)$ and estimate the median or some specific quantile for each
vector of this matrix.
Note that although root mean squared errors (RMSEs) are often used in the
forecasting literature, we prefer a weighted function of the AEs, even though
in this work we use equal weights, i.e., the MAE. The main advantages of the
MAE over the RMSE are twofold: i) it is easy to interpret, since it represents
the average deviation without considering its direction, whereas the RMSE
averages the squared errors before taking the root, which tends to inflate
larger errors; and ii) the RMSE does not necessarily increase with the
variance of the errors. In any case, the two criteria lie in the interval
$\left[0,\infty\right)$ and are indifferent to the sign of the errors.
## 4 Data and descriptive analysis
### 4.1 Data
The variables to estimate the DFM are selected by using the criteria of timely
and contemporaneous correlation with respect to $y^{*}$. In this sense, the
model differs from the traditional literature on large DFMs, which uses a
large amount of economic and financial variables; see, for example, Corona et
al., 2017a who use 211 time series to estimate the DFM for the Mexican case
with the goal of generating forecasts for the levels of IGAE. On the other
hand, Gálvez-Soriano, (2020) uses approximately 30 selected time series to
generate nowcasts of Mexican quarterly GDP. Thus, our number of variables is
intermediate between these two cases. However, as noted by Boivin and Ng,
(2006), in the context of DFM, we can reduce the forecast prediction error
with selected variables by estimating the common components. Additionally,
Poncela and Ruiz, (2016) and Corona et al., (2020) show that with a relatively
small sample size, for example, $N=12$, we can accurately estimate a rotation
of the common factors.
Consequently, given their timely and possibly contemporaneous correlation with
respect to $y^{*}$, the features of the variables considered in this work are
described in Annex 1. (All variables are seasonally adjusted in one of the
following ways: i) directly downloadable from their source or ii) by applying
X-13ARIMA-SEATS.)
Hence, we initially consider 68 time series divided into three blocks. The
first block contains timely traditional information such as the Industrial
Production Index values for Mexico and the United States, business confidence,
and exports and imports, among many others. In this block, all variables are
monthly. In the second block, we have high-frequency traditional variables
such as the Mexican stock market index, the nominal exchange rate, the
interest rate and the Standard & Poor's 500. These variables can be obtained
daily, but we average them to obtain monthly time series. Finally, among the
high-frequency nontraditional variables, we have daily variables such as the
social media mobility index obtained from Twitter and the topics extracted
from Google Trends. These topics are manually selected according to several
phenomena that occur in Mexican society, such as politicians’ names, natural
disasters, economic themes and topics related to COVID-19, such as
coronavirus, quarantine, or facemask. A Google Trends variable takes a value
of 0 when the topic is not searched in the time span and 100 when the topic
reaches its maximum search volume in the time span. Although these variables
are also expressed monthly, for the social media mobility index we average
the daily values, while for Google Trends we download the variables by month.
The social media mobility index is calculated from Twitter information. We
select around 70,000 daily tweets georeferenced to Mexican territory, each one
associated with a bounding box. Then, movement data analysis is performed by
identifying users and their sequences of daily tweets: a trip is counted for
each pair of consecutive geotagged tweets found in different bounding boxes.
The total number of trips per day is obtained and divided by the average
number of users in the month. The resulting number can be interpreted as the
average number of trips that Twitter users make per day.
To select the relevant topics, we apply the methodology described in
subsection 3.3.1 by using $H_{g}=36$ and $\alpha=0.10$; consequently, we
select the topics that are relevant in 90% of cases in the training data. In
this way, the significant topics are quarantine and facemask.
Once $X$ is defined, we apply the transformations suggested by equation (5) to
define $X^{*}$. Figure 1 shows each $X_{i}^{*}$ ordered according to its
correlation with $y^{*}$.
Figure 1: Blue indicates the specific $X_{i}^{*}$, and red indicates the
specified $y^{*}$. Numbers in parentheses indicate the linear correlation and
those between brackets the transformation.
We can see the behavior of each variable: industrial production is the time
series most correlated with the IGAE, followed by imports and industrial
production in the United States. Note that nontraditional time series, such as
facemask, quarantine and the social mobility index, are also correlated with
$y^{*}$. Finally, the variables least related to the IGAE are those related to
business confidence and the monetary aggregate M4.
To assess whether the time series capture prominent macroeconomic events such
as the 2009 financial crisis and the economic deceleration in effect since
2019, Figure 2 shows the heat map of the time series plotted in Figure 1.
Figure 2: Heat map plot of the variables. The time series inversely related to
the IGAE are converted to have a positive relationship with it. We estimate
the empirical quantiles $\varphi(\cdot)$ according to their historical values.
The first quantile $(\varphi(X_{i}^{*})<0.25)$ is in red, the second quantile
$(0.25<\varphi(X_{i}^{*})<0.50)$ is in orange, the third quantile
$(0.50<\varphi(X_{i}^{*})<0.75)$ is in yellow, and finally, the fourth
quantile $(0.75<\varphi(X_{i}^{*}))$ is green. Gray indicates that information
is not available.
We can see that during the 2009 financial crisis, the variables are mainly
red, including the Google Trends variables, which is reasonable because the
A(H1N1) pandemic also occurred during March and April of 2009. Additionally,
during 2016, some variables related to the international market were red, for
example, the US industrial production index, the exchange rate and the S&P
500. Note that since 2019, all variables are orange or red, denoting the
weakening of the economy. Consequently, it is unsurprising that the estimated
common factor summarizes these dynamics. Note that this graph has only a
descriptive objective. It cannot be employed to generate recommendations for
policy making because some variables may be nonstationary.
### 4.2 Timely estimates
The nowcasts depend on the dates on which information is released. Depending
on the day of the current month, we can obtain nowcasts with a larger or
smaller percentage of updated variables. For example, it is clear that the
high-frequency variables are available in real time, but the traditional and
monthly time series, which are timely with respect to the IGAE, are available
on different dates according to the official release calendar. Figure 3 shows
the approximate day when the information is released for $T^{*}+2$ after the
current month $T^{*}$.
Figure 3: Percentage of updated information to carry out the nowcasts
$T^{*}+2$ once the current month $T^{*}$ is closed.
We can see that the traditional and nontraditional high-frequency variables,
business confidence and fuel demand can be obtained on the day after the month
$T^{*}$ is closed. This indicates that on the first day of month $T^{*}+1$, we
can generate the nowcasts for $T^{*}+2$ with approximately 50% of the
information updated, and 81% for the current month, $T^{*}+1$. Note that on
day 12, the IMSS variable is updated, and on day 16, the IPI USA is updated.
These variables are highly correlated with $\widehat{y}$, with linear
correlations of 0.77 and 0.80, respectively. Consequently, for official
statistics, we recommend conducting the nowcasts on the first day of $T^{*}+1$
and again 16 days later, updating them once these two timely and important
traditional time series become available. (Note that the IPI represents around
34% of monthly GDP and more than 97% of the secondary sector of economic
activity. Given that the IPI is updated around 10 days after the end of the
reference month, this information is very valuable for carrying out the
$T^{*}+1$ nowcasts.)
In this work, the database was last updated on August 13, 2020; consequently,
we generate the nowcasts 13 days before the official result for June 2020 and
43 days before the official value for July 2020, having 88% and 52% of the
variables updated at $T^{*}+1$ and $T^{*}+2$, respectively.
## 5 Nowcasting results
### 5.1 Estimating the common factors and the loading weights
By applying the Onatski, (2010) procedure to the covariance matrix of $X^{*}$,
we conclude that $\hat{r}=1$ is adequate as the number of common factors.
Hence, the static common factor estimated by PCs using the set of variables
$X^{*}$, its confidence intervals at 95%, and the dynamic factor estimates
obtained by applying the 2SM procedure with $k=1$ lag are presented in
Figure 4.
Figure 4: Factor estimates. The blue line is the static common factor, the red
lines are their confidence intervals, and the green line is the smoothed or
dynamic common factor.
The common factors summarize the episodes discussed above, notably the
declines of the economy in 2009 and 2020. Note that in the last period, the
dynamic common factor shows a slight recovery of the economy because this
common factor incorporates more timely information than the static common
factor: the static common factor has information until May 2020, while the
dynamic factor has information until July 2020. Note also that the confidence
intervals are tight around the static common factor, which implies that the
uncertainty attributed to the estimation is well controlled. It is important
to analyze the contemporaneous correlation with respect to the IGAE. Thus,
Figure 5 shows the correlation coefficient of $\tilde{F}_{t}$ with $y^{*}$
since 2008.
Figure 5: Blue line is $Corr(\tilde{F}_{t},y^{*})$ from January 2008 to May
2020. Red lines represent the confidence interval at 95%.
We see that the correlation is approximately 0.86 prior to the financial
crisis of 2009, increases to 0.98 from that year, shows a slight decrease from
2011, drops to 0.95 in 2016 and reaches 0.96 in 2020. The confidence intervals
lie between 0.75 and 0.97 throughout the sample, with the smallest values
during the first years of the sample and the largest at the end of the period.
Consequently, we can exploit the contemporaneous relationship between the
dynamic factor and the IGAE to generate its nowcasts for the two following
months, for which the common factor is estimated but the IGAE is not yet
available.
Having estimated the dynamic factor by the 2SM approach, we show the results
of the loading weight estimates, which capture the specific contribution of
the common factor to each variable or, in other words, given the PC
restrictions, can be seen as $N$ times the contribution of each variable to
the common factor. We compute the confidence interval at 95%, denoted by
$CI_{\hat{P},0.05}$. Once the dynamic factor is estimated by using the Kalman
smoother, it is necessary to reestimate the factor loadings so that
$\hat{P}=f(\tilde{F})$, such that $\tilde{F}=g(\tilde{P})$. To do so, we use
Monte Carlo estimation, drawing 1,000 samples and selecting the replication
that best satisfies the following condition:
$\tilde{F}\approx X\tilde{P}/N\quad\mbox{s.t.}\quad\tilde{P}\in
CI_{\hat{P},0.05}.$
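This reestimation can be sketched as a simple accept-the-best search inside the confidence band; the uniform proposal over the band and all names are our own simplifying assumptions:

```python
import numpy as np

def refit_loadings(X, F_tilde, P_hat, se_P, n_draws=1000, seed=0):
    """Draw candidate loadings uniformly inside the 95% band around P_hat
    and keep the draw whose implied factor X P / N is closest to the
    smoothed factor F_tilde (sketch)."""
    rng = np.random.default_rng(seed)
    N = X.shape[1]
    best, best_err = None, np.inf
    for _ in range(n_draws):
        P = rng.uniform(P_hat - 1.96 * se_P, P_hat + 1.96 * se_P)
        err = np.linalg.norm(F_tilde - X @ P / N)
        if err < best_err:
            best, best_err = P, err
    return best
```

By construction the returned loadings lie inside $CI_{\hat{P},0.05}$, which is the constraint in the displayed condition above.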
The results of the estimated factor loadings are shown in Figure 6. The
loadings are ordered from the most positive contribution to the most negative.
Figure 6: Factor loadings. The blue point is each $\hat{P}_{i}$ with its
respective 95% confidence interval. Red curves are the $\tilde{P}_{i}$.
We observe several similarities with respect to Figure 1. Note that the most
important variables in the factor estimates are the industrial production of
Mexico and the U.S. and exports and imports, along with Google Trends topics
such as quarantine and facemask, which makes sense in the COVID-19 period.
Obviously, when these variables are updated, updating the nowcasts becomes
more important. In this regard, note that Google Trends data are available in
real time. Other timely variables, such as IMO, CONF MANUF, GAS, S&P 500,
MOBILITY and E, are also very relevant. Moreover, all variables are
significant: none of the confidence intervals contains zero.
construction sector and remittances. Also, note that the most relevant
variables are very timely with respect to the IGAE: the industrial production
index of Mexico and the U.S. are updated around days 10 and 16 for $T^{*}+1$
and $T^{*}+2$, respectively, once closed the current month; furthermore, the
exports and imports are updated for $T^{*}+2$ by 25th day, while IMO and IMSS
are updated since the first day and 12th day, respectively for $T^{*}+2$.
Consequently, this allows us to have more accurate and correlated estimates
since the first day of the current month for both, $T^{*}+1$ and $T^{*}+2$.
As we have previously noted, to obtain a consistent estimation of $\tilde{F}$
and $\hat{P}$ it is necessary that $\hat{\varepsilon}$ be stationary. We check
this point with the PANIC test of Bai and Ng, (2004), concluding that we
achieved stationarity in the idiosyncratic component, obtaining a statistic of
6.6 that generates a p-value of 0.00; hence, $\hat{\varepsilon}$ does not have
a unit root. Additionally, we can verify with the augmented Dickey-Fuller test
that $\tilde{F}$ is stationary with a p-value of 0.026; consequently, we also
achieved stationarity in $X^{*}$.
### 5.2 Nowcasts in data test
We apply the procedure described in subsection 3.3.3 by using
$\Upsilon=(1/H_{t},1/H_{t},\dots,1/H_{t})^{\prime}$; that is, we assume in
step 5 that each AE has equal weight over time. Additionally, we fix
$p_{\max}=q_{\max}=4$. The results indicate that the optimal $p^{*}$ and
$q^{*}$ are both equal to 4. Consequently, the best model is the following:
$\begin{split}y^{*}_{t}=\underset{(0.1553)}{1.7160}+\underset{(0.0393)}{1.2625}\tilde{F}_{t}+\underset{(0.0830)}{0.3664}\widehat{u}_{t-1}+\underset{(0.0932)}{1.0741}\widehat{u}_{t-2}+\underset{(0.0912)}{0.2794}\widehat{u}_{t-3}\underset{(0.0811)}{-0.8036}\widehat{u}_{t-4}+\\\
\underset{(0.0893)}{0.1001}\widehat{v}_{t-1}\underset{(0.098)}{-0.949}\widehat{v}_{t-2}\underset{(0.1002)}{-0.4736}\widehat{v}_{t-3}+\underset{(0.0756)}{0.5904}\widehat{v}_{t-4}+\widehat{v}_{t}\quad\hat{\sigma}^{2}=0.4763.\end{split}$
(8)
Note that all coefficients are significant and the contribution of the factor
to the IGAE is positive. Additionally, the Ljung-Box test on the residuals
indicates that they are serially uncorrelated. This model generates the
historical nowcasts one step ahead during $H_{t}=36$ months that are presented
in Figure 7.
Figure 7: Nowcasts of training model. Asterisks are the observed values, the
red line depicts the nowcasts, and the green lines are the confidence
intervals.
We can see that the nowcast model performs well given that in 92% of cases,
the observed values are within the confidence interval at 95%. The MAE (equal
weights in $\Upsilon$) is 0.65, and the mean absolute annual growth of the
IGAE is 2.55%. Regarding the median of the AEs, the estimated value is 0.36.
These statistics are very competitive with respect to the model estimated by
Statistics Netherlands, see Kuiper and Pijpers, (2020), who also estimate
common factors to generate nowcasts of the annual variation of quarterly
Netherlands GDP. According to Table 7.2 in their work, the root mean squared
forecast error is between 0.91 and 0.67 during 2011 and 2017, and their
confidence interval captures approximately 70% of the observed values.
Therefore, our nowcast approach generates good results even though it targets
a monthly variable and covers the COVID-19 period.
In addition, we compare our results to Corona et al., 2017a , who forecast
IGAE levels two steps ahead. To make the results comparable between that study
and this one, we take the median of the absolute errors obtained by the former
just for the first step ahead, which is between 0.4 and 0.5, while the current
work generates a median AE of 0.397 for the last $H_{t}=36$ months, including
the COVID-19 period. Therefore, our approach is slightly more accurate when
nowcasting the IGAE levels. Note that the number of variables is drastically
smaller: 211 there versus 68 here. Another nowcasting model to compare with is
INEGI’s “Early Monthly Estimation of Mexico’s Manufacturing Production Level”
(https://www.inegi.org.mx/investigacion/imoam/), whose target variable is
manufacturing activity and which generates one-step-ahead nowcasts by using a
timely electricity indicator. Its average MAE for the annual percentage
variation of manufacturing activity in the last 36 months, from August 2017 to
July 2020, is 1.35. Consequently, in a similar sample period, we have a
smaller average MAE than another nowcasting model whose monthly target
variable is specified as an annual percentage variation.
In order to contrast the results of our approach with those obtained by other
procedures, we consider the following two alternative models:
* Naive model: We assume that all variables have equal weights in the factor;
consequently, we standardize the variables used in the DFM, $X_{t}^{*}$, and
by averaging their rows, we obtain $F_{t}^{*}$. Then, we use this naive factor
in a linear regression to obtain the nowcasts for the last $H_{t}=36$ months.
* DFM without nontraditional information: We estimate a traditional DFM
similar to Corona et al., 2017a or Gálvez-Soriano, (2020), but using only
economic and financial time series, i.e., without considering the social
mobility index and the relevant topics extracted from Google Trends. Hence, we
carry out the last $H_{t}=36$ nowcasts.
Figure 8 shows the accumulated MAEs in the training period for the previous
two models and the one obtained with equation (8).
Figure 8: Cumulative MAEs for the models in training data. Blue is the
nowcasting approach suggested in this work, red is the naive model, and green
is the traditional DFM (without nontraditional information). The vertical line
indicates the COVID-19 onset period.
We can see that, in the training data, the naive model has the weakest
performance, followed by the traditional DFM. Specifically, the MAE is 1.02
for the naive model, 0.74 for the DFM without nontraditional information and,
as noted above, 0.65 for the incumbent model, which includes this type of
information. Note that the use of nontraditional information does not affect
the behavior of the MAEs prior to the COVID-19 pandemic and reduces the error
during this period. Consequently, the performance of the suggested approach is
highly competitive when compared with i) similar models for nowcasting GDP,
ii) models that estimate the levels of the objective variable and iii)
alternative models that can be used in practice.
### 5.3 Current nowcasts
Having verified in the previous section that our approach is highly
competitive in capturing the observed values, the final nowcasts of the IGAE
annual percentage variation for June and July 2020 are shown in Figure 9.
These are obtained after combining the models statistically equal to the best
model, following the approach previously described, with the traditional
nowcasting model of Giannone et al., (2008), weighting both nowcasts according
to their MAEs. (Note that the model of Giannone et al., (2008) uses only the
estimated dynamic factors as regressors, i.e., linear regression models, while
our approach also considers the possibility of modeling the errors with ARMA
processes. In order to consider nowcasts associated specifically with the
dynamics of the common factors, we take the Giannone et al., (2008) model into
account, although its contribution to the final nowcasts is small given that,
during the test period, its nowcast errors are frequently greater than those
of the regression models with ARMA errors.)
Figure 9: Nowcasts for June and July 2020. The blue line indicates the
observed values, the red small dotted line the fit of the model, the red large
dotted line the nowcasts and the green lines the confidence intervals.
We expect a slight recovery of the economy in June and July 2020, obtaining
nowcasts of -15.2% and -13.2%, respectively, with confidence intervals of
(-16.3, -14.1) and (-14.1, -12.4). Considering the observed value for June,
released on August 25 by INEGI, the annual percentage change of the IGAE was
-14.5%; consequently, the model is very accurate, since the deviation from the
real value was 0.7 percentage points and the observed value falls within the
confidence interval.
### 5.4 Historical behavior of the nowcasting model in real time: COVID-19
The procedure described in the previous subsection allows us to generate
nowcasts using databases with different cutoff dates. In this way, we carried
out the procedure, updating the databases twice a month during the COVID-19
period. Table 1 summarizes the nowcast results, comparing them with the
observed values.
Table 1: Nowcasts with different updates in COVID-19 times: annual percentage variation of IGAE. Columns from 04/06/2020 onward report the date on which the nowcasts were generated.

| Date | Observed | 04/06/2020 | 18/06/2020 | 07/07/2020 | 16/07/2020 | 06/08/2020 | 12/08/2020 |
|---|---|---|---|---|---|---|---|
| 2020/04 | -19.7 | -18.3 | -18.0 | | | | |
| 2020/05 | -21.6 | -20.4 | -21.0 | -21.8 | -20.4 | | |
| 2020/06 | -14.5 | | | -16.6 | -16.4 | -15.5 | -15.2 |
| 2020/07 | | | | | | -13.9 | -13.2 |
We can see that on June 4, 2020, the nowcasts were already very accurate,
capturing the drastic drops that occurred in April (the previous month was
-2.5%) and May, with absolute discrepancies of 1.4% and 1.2%, respectively.
The update of June 18, 2020 shows a slight improvement in accuracy. The
following two nowcasts also generate close estimates with respect to the
observed value of May, the most accurate being the update carried out on July
7, 2020. Note that the last updates generate nowcasts for June of around
-16.6% to -15.2%, the most accurate being the last nowcast described in this
work, with an absolute error of 0.7%. Considering these results, our approach
anticipates the drop attributed to COVID-19 and foresees a slight, albeit
weak, recovery from June onwards. According to Gálvez-Soriano, (2020),
accurate and timely estimates of the IGAE can drastically improve nowcasts of
quarterly GDP; consequently, the benefits of our approach also extend to
quarterly time series nowcasting models.
## 6 Conclusions and further research
In this paper, we contribute to the nowcasting literature by focusing on
two-step-ahead nowcasts of the annual percentage variation of the IGAE, the
equivalent of Mexican monthly GDP, during COVID-19 times. For this purpose, we
use statistical and econometric tools to obtain accurate and timely estimates,
up to around 50 days before the official data are released. The suggested
approach consists of using LASSO regression to select the relevant topics that
affect the IGAE in the short term, building a correlated and timely database
to exploit the correlation between the variables and the IGAE, estimating a
dynamic factor with the 2SM approach, training linear regressions with ARMA
errors to select the best models, and generating current nowcasts.
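As a rough sketch of the LASSO selection step, the following coordinate-descent implementation keeps only the candidate "topics" whose coefficients survive the L1 penalty (cf. Tibshirani, 1996); the data are synthetic and the penalty value is an assumption, not the one used in this paper.

```python
import numpy as np

def lasso_select(X, y, alpha=0.1, n_iter=200):
    """Coordinate-descent LASSO; returns indices of nonzero coefficients.

    Assumes the columns of X are standardized. A minimal sketch of the
    variable-selection step, not the paper's actual implementation.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j, then soft-thresholding.
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j
            beta[j] = np.sign(rho) * max(abs(rho) - n * alpha, 0.0) / col_sq[j]
    return [j for j in range(p) if abs(beta[j]) > 1e-8]

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 10))   # 10 synthetic candidate "topics"
y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(120)
print(lasso_select(X, y))  # the informative topics (0 and 3) should survive
```
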
We highlight the following key results. Our approach is highly competitive
compared with alternatives such as naive regressions or a traditional DFM: the
procedure frequently captures the observed value, both in the test data and in
real time, with absolute errors between 0.2% and 1.4% during the COVID-19
period. Another contribution of this paper is statistical: we compute
confidence intervals for the factor loadings and the factor estimates,
verifying the significance of the factor for each variable and quantifying the
uncertainty attributed to the factor estimates. Additionally, we address some
econometric issues that guarantee the consistency of the estimates, such as
stationarity of the idiosyncratic noises and uncorrelated errors in the
nowcasting models. It is also of interest, in terms of in-sample performance,
whether the nowcast error increases when using monthly rather than quarterly
data.
Several topics for future research emerged from this work. One is the
implementation of an algorithm that estimates nonstationary common factors and
makes the selection of the number of factors flexible, such as the one
developed in Corona et al., (2020), so as to minimize a measure of nowcasting
error. Another interesting research line is to incorporate machine learning
techniques to automatically select the potentially relevant topics from Google
Trends. It would also be interesting to incorporate IPI information as
restrictions on the nowcasts, exploring techniques for imposing such
restrictions when official accounting information is available. Finally, it is
worth studying in depth the effects of timely monthly variables versus
quarterly time series in nowcasting models; this can be achieved through Monte
Carlo analysis with different data-generating processes that may occur in
practice, comparing the increase in estimation error when time series of
different frequencies are used.
## Acknowledgements
The authors gratefully acknowledge the comments and suggestions made by the
INEGI authorities Julio Santaella, Sergio Carrera and Gerardo Leyva. The
seminars and meetings they organized were very helpful in improving this
research. We also thank Elio Villaseñor, who provided the Twitter social
mobility index, and Manuel Lecuanda for the discussion about the Google Trends
topics to be considered. Partial financial support from CONACYT CB-2015-25996
is gratefully acknowledged by Francisco Corona and Graciela González-Farías.
## References
* Alberro, (2020) Alberro, J. (2020). La pandemia que perjudica a casi todos, pero no por igual/The pandemic that harms almost everyone, but not equally. EconomíaUNAM, 17(51):59–73.
* Ali et al., (2020) Ali, U., Herbst, C. M., and Makridis, C. (2020). The impact of COVID-19 on the US child care market: Evidence from stay-at-home orders. Technical report, Available at SSRN 3600532.
* Aprigliano and Bencivelli, (2013) Aprigliano, V. and Bencivelli, L. (2013). Ita-coin: a new coincident indicator for the Italian economy. Banca D’Italia. Working papers, 935.
* Bai, (2003) Bai, J. (2003). Inferential theory for factor models of large dimensions. Econometrica, 71(1):135–171.
* Bai, (2004) Bai, J. (2004). Estimating cross-section common stochastic trends in nonstationary panel data. Journal of Econometrics, 122(1):137–183.
* Bai and Ng, (2004) Bai, J. and Ng, S. (2004). A PANIC attack on unit roots and cointegration. Econometrica, 72(4):1127–1177.
* Bai and Ng, (2008) Bai, J. and Ng, S. (2008). Large dimensional factor analysis. Foundations and Trends in Econometrics, 3(2):89–163.
* Bai and Ng, (2013) Bai, J. and Ng, S. (2013). Principal components estimation and identification of static factors. Journal of Econometrics, 176(1):18–29.
* Bai and Wang, (2016) Bai, J. and Wang, P. (2016). Econometric analysis of large factor models. Annual Review of Economics, 8:53–80.
* Barigozzi et al., (2016) Barigozzi, M., Lippi, M., and Luciani, M. (2016). Non-Stationary Dynamic Factor Models for Large Datasets. Finance and Economics Discussion Series Divisions of Research $\&$ Statistics and Monetary Affairs Federal Reserve Board, Washington, D.C., 024.
* Barigozzi et al., (2017) Barigozzi, M., Lippi, M., and Luciani, M. (2017). Dynamic factor models, cointegration, and error correction mechanisms. Working Paper.
* Boivin and Ng, (2006) Boivin, J. and Ng, S. (2006). Are more data always better for factor analysis? Journal of Econometrics, 132(1):169–194.
* Breitung and Choi, (2013) Breitung, J. and Choi, I. (2013). Factor models, in Hashimzade, N. and Thorthon, M.A. (eds.). Handbook of Research Methods and Applications in Empirical Macroeconomics, United Kingdom: Edward Elgar Publishing.
* Breitung and Eickmeier, (2006) Breitung, J. and Eickmeier, S. (2006). Dynamic factor models, in Hübler, O. and J. Frohn (eds.). Modern Econometric Analysis, Berlin: Springer.
* Buono et al., (2018) Buono, D., Kapetanios, G., Marcellino, M., Mazzi, G., and Papailias, F. (2018). Big data econometrics: Now casting and early estimates. Technical report, BAFFI CAREFIN, Centre for Applied Research on International Markets, Banking, Finance and Regulation, Universitá Bocconi, Milano, Italy.
* Campos-Vazquez et al., (2020) Campos-Vazquez, R. M., Esquivel, G., and Badillo, R. Y. (2020). How Has Labor Demand Been Affected by the COVID-19 Pandemic? Evidence from Job Ads in Mexico. Covid Economics, CEPR, 1(46):94–122.
* Caperna et al., (2020) Caperna, G., Colagrossi, M., Geraci, A., and Mazzarella, G. (2020). Googling unemployment during the pandemic: Inference and nowcast using search data. Technical report, Available at SSRN 3627754.
* Caruso, (2018) Caruso, A. (2018). Nowcasting with the help of foreign indicators: The case of Mexico. Economic Modelling, 69(C):160–168.
* Choi, (2017) Choi, I. (2017). Efficient estimation of nonstationary factor models. Journal of Statistical Planning and Inference, 183:18–43.
* (20) Corona, F., González-Farías, G., and Orraca, P. (2017a). A dynamic factor model for the Mexican economy: are common trends useful when predicting economic activity? Latin American Economic Review, 27(1).
* (21) Corona, F., Poncela, P., and Ruiz, E. (2017b). Determining the number of factors after stationary univariate transformations. Empirical Economics, 53(1):351–372.
* Corona et al., (2020) Corona, F., Poncela, P., and Ruiz, E. (2020). Estimating Non-stationary Common Factors: Implications for Risk Sharing. Computational Economics, 55(1):37–60.
* de Valk et al., (2019) de Valk, S., de Mattos, D., and Ferreira, P. (2019). Nowcasting: An R Package for Predicting Economic Variables Using Dynamic Factor Models. The R Journal, 11(1).
* Diebold and Mariano, (1995) Diebold, F. X. and Mariano, R. (1995). Comparing predictive accuracy. Journal of Business and Economic Statistics, 13:253–263.
* Doz et al., (2011) Doz, C., Giannone, D., and Reichlin, L. (2011). A two-step estimator for large approximate dynamic factor models based on Kalman filtering. Journal of Econometrics, 164(1):188–205.
* Doz et al., (2012) Doz, C., Giannone, D., and Reichlin, L. (2012). A quasi maximum likelihood approach for large, approximate dynamic factor models. The Review of Economics and Statistics, 94(4):1014–1024.
* Fernández, (2020) Fernández, C. L. (2020). La pandemia del Covid-19: los sistemas y la seguridad alimentaria en América Latina/Covid-19 pandemic: systems and food security in Latin America. economíaUNAM, 17(51):168–179.
* Gálvez-Soriano, (2020) Gálvez-Soriano, O. (2020). Nowcasting Mexico’s quarterly GDP using factor models and bridge equations. Estudios Económicos, 70(35):213–265.
* Geweke, (1977) Geweke, J. (1977). The dynamic factor analysis of economic time series, in Aigner, D.J. and Goldberger, A.S. (eds.). Latent Variables in Socio-Economic Models, Amsterdam: North-Holland:365–382.
* Giannone et al., (2008) Giannone, D., Reichlin, L., and Small, D. (2008). Nowcasting: The real-time informational content of macroeconomic data. Journal of Monetary Economics, 55:665–676.
* Goldsmith-Pinkham and Sojourner, (2020) Goldsmith-Pinkham, P. and Sojourner, A. (2020). Predicting Initial Unemployment Insurance Claims Using Google Trends. Technical report, Working Paper (preprint), Posted April 3. https://paulgp.github.io/GoogleTrendsUINowcast/google_trends_UI.html.
* Guerrero et al., (2013) Guerrero, V. M., García, A. C., and Sainz, E. (2013). Rapid Estimates of Mexico’s Quarterly GDP. Journal of Official Statistics, 29(3):397–423.
* Harvey and Phillips, (1979) Harvey, A. and Phillips, G. (1979). Maximum Likelihood Estimation of Regression Models With Autoregressive-Moving Averages Disturbances. Biometrika, 152:49–58.
* Kershenobich, (2020) Kershenobich, D. (2020). Fortalezas, deficiencias y respuestas del sistema nacional de salud frente a la Pandemia del Covid-19/Strengths, weaknesses and responses of the national health system to the Covid-19 Pandemic. EconomíaUNAM, 17(51):53–58.
* Kuiper and Pijpers, (2020) Kuiper, M. and Pijpers, F. (2020). Nowcasting GDP growth rate: a potential substitute for the current flash estimate. Technical report, Statistics Netherlands: Discussion Paper.
* Lenza and Primiceri, (2020) Lenza, M. and Primiceri, G. E. (2020). How to Estimate a VAR after March 2020. Technical report, National Bureau of Economic Research.
* Lustig et al., (2020) Lustig, N., Pabon, V. M., Sanz, F., Younger, S. D., et al. (2020). The Impact of COVID-19 Lockdowns and Expanded Social Assistance on Inequality, Poverty and Mobility in Argentina, Brazil, Colombia and Mexico. Covid Economics, CEPR, 1(46):32–67.
* Meza, (2020) Meza, F. (2020). Forecasting the impact of the COVID-19 shock on the Mexican economy. Covid Economics, CEPR, 1(48):210–225.
* Moreno-Brid, (2020) Moreno-Brid, J. C. (2020). Pandemia, política pública y panorama de la economía mexicana en 2020/Pandemic, public policy and the outlook for the Mexican economy in 2020. EconomíaUNAM, 17(51):335–348.
* Onatski, (2010) Onatski, A. (2010). Determining the number of factors from empirical distribution of eigenvalues. The Review of Economics and Statistics, 92(4):1004–1016.
* Poncela and Ruiz, (2016) Poncela, P. and Ruiz, E. (2016). Small versus big data factor extraction in Dynamic Factor Models: An empirical assessment in dynamic factor models, in Hillebrand, E. and Koopman, S.J. (eds.). Advances in Econometrics, 35:401–434.
* Samaniego, (2020) Samaniego, N. (2020). El Covid-19 y el desplome del empleo en México/The Covid-19 and the Collapse of Employment in Mexico. EconomíaUNAM, 17(51):306–314.
* Sánchez, (2020) Sánchez, E. C. (2020). México en la pandemia: atrapado en la disyuntiva salud vs economía/Mexico in the pandemic: caught in the disjunctive health vs economy. EconomíaUNAM, 17(51):282–295.
* Sargent and Sims, (1977) Sargent, T. J. and Sims, C. A. (1977). Business cycle modeling without pretending to have too much a priori economic theory, in Sims, C.A. (ed.). New Methods in Business Cycle Research, Minneapolis: Federal Reserve Bank of Minneapolis.
* Stephens-Davidowitz and Varian, (2014) Stephens-Davidowitz, S. and Varian, H. (2014). A hands-on guide to Google data. Technical report, Google Inc.
* Stock and Watson, (2011) Stock, J. H. and Watson, M. W. (2011). Dynamic factor models, in Clements, M.P and Hendry, D.F. (eds.). Oxford Handbook of Economic Forecasting, Oxford: Oxford University Press.
* Tibshirani, (1996) Tibshirani, R. (1996). Regression shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society. Series B (Methodological), 58(1):267–288.
* Vanegas, (2020) Vanegas, L. L. (2020). Los desafíos del sistema de salud en México/The health system challenges in Mexico. EconomíaUNAM, 17(51):16–27.
* Varian, (2014) Varian, H. R. (2014). Big data: New tricks for econometrics. Journal of Economic Perspectives, 28(2):3–28.
## Annexes
Annex 1: Database

Traditional and timely information
Short | Variable | Source | Time Span
ANTAD | Total sales of departmental stores | ANTAD | 2004/01-2020/06
AUTO | Automobiles production | INEGI | 2004/01-2020/07
CONF COM | Right time to invest (Commerce) | INEGI | 2011/06-2020/07
CONF CONS | Right time to invest (Construction) | INEGI | 2011/06-2020/07
CONF MANU | Right time to invest (Manufacturing) | INEGI | 2004/01-2020/07
CONF SERV | Right time to invest (Services) | INEGI | 2017/01-2020/07
GAS | Fuel demand | SENER | 2004/01-2020/07
HOTEL | Hotel occupancy | Tourism secretariat | 2004/01-2020/06
IMO | Index of manufacturing orders | INEGI | 2004/01-2020/07
IMSS | Permanent and eventual insureds of the Social Security | IMSS | 2004/01-2020/07
IPI | Industrial Production Index | INEGI | 2004/01-2020/06
IPI USA | Industrial Production Index (USA) | BEA | 2004/01-2020/07
IRGS | Income of retail goods and services | INEGI | 2008/01-2020/05
L MANUF | Trend of labor in manufacturing | INEGI | 2007/01-2020/05
M | Total imports | INEGI | 2004/01-2020/06
M4 | Monetary aggregate M4 | Banxico | 2004/01-2020/06
REM | Total remittances | Banxico | 2004/01-2020/06
U | Unemployment rate | INEGI | 2005/01-2020/06
X | Total exports | INEGI | 2004/01-2020/06
High frequency traditional variables
Short | Variable | Source | Time Span
E | Nominal exchange rate | Banxico | 2004/01-2020/07
IR 28 | Interest rate (28 days) | Banxico | 2004/01-2020/07
MSM | Mexican stock market index | Banxico | 2004/01-2020/07
SP 500 | Standard & Poor’s 500 | Yahoo! finance | 2004/01-2020/07
High frequency nontraditional variables
Short | Variable | Source | Time Span
AH1N1 | AH1N1 online search index | Google | 2004/01-2020/07
AMLO | AMLO online search index | Google | 2004/01-2020/07
Ayotzinapa | Ayotzinapa online search index | Google | 2004/01-2020/07
Calderón | Calderón online search index | Google | 2004/01-2020/07
Cártel | Cártel online search index | Google | 2004/01-2020/07
Casa Blanca | Casa Blanca online search index | Google | 2004/01-2020/07
Chapo | Chapo online search index | Google | 2004/01-2020/07
China | China online search index | Google | 2004/01-2020/07
Coronavirus | Coronavirus online search index | Google | 2004/01-2020/07
Corrupción | Corrupción online search index | Google | 2004/01-2020/07
Crisis económica | Crisis económica online search index | Google | 2004/01-2020/07
Crisis sanitaria | Crisis sanitaria online search index | Google | 2004/01-2020/07
Cuarentena | Cuarentena online search index | Google | 2004/01-2020/07
Cubrebocas | Cubrebocas online search index | Google | 2004/01-2020/07
Desempleo | Desempleo online search index | Google | 2004/01-2020/07
Dólar | Dólar online search index | Google | 2004/01-2020/07
Elecciones | Elecciones online search index | Google | 2004/01-2020/07
EPN | EPN online search index | Google | 2004/01-2020/07
Gasolina | Gasolina online search index | Google | 2004/01-2020/07
Homicidios | Homicidios online search index | Google | 2004/01-2020/07
Huachicol | Huachicol online search index | Google | 2004/01-2020/07
Inflación | Inflación online search index | Google | 2004/01-2020/07
Inseguridad | Inseguridad online search index | Google | 2004/01-2020/07
Mascarilla N95 | Mascarilla N95 online search index | Google | 2004/01-2020/07
Medidas económicas | Medidas económicas online search index | Google | 2004/01-2020/07
Migración | Migración online search index | Google | 2004/01-2020/07
Migrantes | Migrantes online search index | Google | 2004/01-2020/07
MOBILITY | Media mobility index | Twitter | 2004/01-2020/07
Morena | Morena online search index | Google | 2004/01-2020/07
Muertos | Muertos online search index | Google | 2004/01-2020/07
Muro | Muro online search index | Google | 2004/01-2020/07
Pacto | Pacto online search index | Google | 2004/01-2020/07
PAN | PAN online search index | Google | 2004/01-2020/07
Pandemia | Pandemia online search index | Google | 2004/01-2020/07
PEMEX | PEMEX online search index | Google | 2004/01-2020/07
Peso | Peso online search index | Google | 2004/01-2020/07
Petróleo | Petróleo online search index | Google | 2004/01-2020/07
PRI | PRI online search index | Google | 2004/01-2020/07
Recesión | Recesión online search index | Google | 2004/01-2020/07
Reformas | Reformas online search index | Google | 2004/01-2020/07
Salario | Salario online search index | Google | 2004/01-2020/07
Sismo | Sismo online search index | Google | 2004/01-2020/07
Tipo de cambio | Tipo de cambio online search index | Google | 2004/01-2020/07
Trump | Trump online search index | Google | 2004/01-2020/07
Violencia | Violencia online search index | Google | 2004/01-2020/07
# ProbLock: Probability-based Logic Locking
Michael Yue
Department of Electrical and Computer Engineering
Santa Clara University
Santa Clara, California, USA
<EMAIL_ADDRESS>
&Fatemeh Tehranipoor
Department of Electrical and Computer Engineering
Santa Clara University
Santa Clara, California, USA
<EMAIL_ADDRESS>
###### Abstract
Integrated circuit (IC) piracy and overproduction are serious issues that
threaten the security and integrity of a system. Logic locking is a type of
hardware obfuscation technique where additional key gates are inserted into
the circuit. Only the correct key can unlock the functionality of that
circuit; otherwise, the system produces the wrong output. In an effort to hinder these
threats on ICs, we have developed a probability-based logic locking technique
to protect the design of a circuit. Our proposed technique called “ProbLock”
can be applied to combinational and sequential circuits through a critical
selection process. We used a filtering process to select the best location of
key gates based on various constraints. Each step in the filtering process
generates a subset of nodes for each constraint. We also analyzed the
correlation between each constraint and adjusted the strength of the
constraints before inserting key gates. We have tested our algorithm on 40
benchmarks from the ISCAS ’85 and ISCAS ’89 suite.
Keywords: Hardware Security, Logic Locking, Obfuscation
## 1 Introduction and Background
The semiconductor industry is constantly changing from the production of ICs
to the complexity of their design. The industry has moved to a fabless model
where most of the fabrication of a chip is outsourced to less secure and less
trusted environments. These environments include the testing and fabrication
facilities that are necessary for the pipeline. While this model improves
production costs and development time, it has also led to piracy,
overproduction, and cloning. The chips are also vulnerable to various
attacks [1] that attempt to extract the design of the chip or other
information from the device. Due to these security issues, researchers have
developed techniques to counter these attacks. One technique to improve the
security of ICs is hardware obfuscation [2]. Hardware obfuscation is a
technique that modifies the structure or description of a circuit in order to
make it harder for an attacker to reverse engineer the hardware. Some
obfuscation techniques modify the gate level structure of the circuit while
other techniques add gates to protect the logic of the circuit. Logic locking
is a technique that inserts additional gates and logic components into a
circuit, locking it so that it produces an incorrect output unless the proper
key is provided. The IC is considered locked, or functionally incorrect, until
the correct key unlocks the additional gates. With XOR and XNOR components as
key gates, the proper key value makes the gate act as a buffer, having no
effect on the rest of the logic. If a wrong key value is provided, the key
gate produces a wrong value and renders the circuit nonfunctional. Figure 1
shows an example of logic locking. A key gate
is added in between logic gates with one input connected to the key bit value.
The addition of these key gates adds a small overhead to the overall circuit
while increasing the security of the device.
(a) Unlocked Circuit
(b) Locked Circuit
Figure 1: An example of logic locking circuit.
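A minimal sketch of the key-gate behavior just described: for an XOR key gate, the correct key bit (0 for XOR) makes the gate a transparent buffer, while the wrong bit turns it into an inverter. The function below is illustrative, not part of ProbLock's implementation.

```python
def xor_key_gate(signal: int, key_bit: int) -> int:
    """An XOR key gate: passes the signal through when key_bit == 0
    (acts as a buffer) and inverts it when key_bit == 1."""
    return signal ^ key_bit

# With the correct key bit (assumed here to be 0), the gate is transparent;
# with the wrong bit, every value on the wire is inverted, corrupting
# downstream logic.
for s in (0, 1):
    assert xor_key_gate(s, 0) == s        # correct key: buffer
    assert xor_key_gate(s, 1) == 1 - s    # wrong key: inverter
```

For an XNOR key gate the roles are reversed: a key bit of 1 yields the buffer behavior, which is why mixing XOR and XNOR key gates hides the correct key value from simple inspection.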
Various techniques have been proposed by other researchers to protect the
integrity and privacy of integrated circuits. Logic cone analysis was used to
develop a logic locking technique in 2015 [3]. This technique used fan-in and
fan-out metrics to insert key gates into a netlist. Logic cone analysis was
vulnerable to SAT attacks that were developed over the next few years. SAT
attacks probed the input and output patterns of the system to determine the
key to unlock a circuit. SAT attacks have proven to be very effective against
logic locking techniques. Strong logic locking (SLL) was another technique
developed that analyzed the relationship between inserted key gates in the
form of a graph [4] [5]. SLL was also vulnerable to SAT-based attacks. A new
SAT resistant technique called SARLock was later implemented with the main
purpose of thwarting SAT attacks [6]. The method made SAT attacks exponential
in complexity and therefore ineffective. Tehranipoor et al. [7] explore the
potential of employing state-of-the-art deep RNNs that allow an attacker to
derive the correct values of the key inputs (the secret key) from logic
encryption hardware obfuscation techniques.
All of the logic locking techniques discussed are effective in improving the
security of an IC; however, they remain vulnerable to sensitization exploits
and strong oracle-based attacks.
To overcome these issues, in this paper we propose a new logic locking
technique which we call "ProbLock". ProbLock is a probability-based technique
that inserts key gates into a circuit netlist, where only the correct key
value will unlock the circuit. Each step of the process narrows down a set of
best nodes at which to insert key gates. We used four constraints to
probability constraint is the main metric that we used to lock the circuits.
We tested our technique by obfuscating a set of circuit benchmarks from the
ISCAS ’85 and ISCAS ’89 suites [8] [9]. These include a variety of
combinational and sequential circuits. We analyzed the relationship and correlation between
constraints in our technique and found some relationships that support the
strength of our technique.
Specifically, we have the following contributions in this paper:
* •
We present a probability-based logic locking (ProbLock) technique to lock a
circuit with low overhead using a filtering process.
* •
We implemented a design where the strength of the filtering process can be
adjusted for different situations.
* •
We analyzed the correlation between constraints and showed how the
relationship between constraints can strengthen the security process.
* •
We obfuscated 40 benchmarks from ISCAS ’85 and ISCAS ’89 using ProbLock.
## 2 Literature Review
Many techniques of logic locking have already been proposed and tested against
certain attacks and on circuit benchmarks. One of the earliest logic locking
techniques inserted key gates randomly into the circuit. This provided some
security, but many attacks were developed to break this method. Another
obfuscation technique was developed using logic cone analysis in [3]. Sections
of a circuit can be grouped into logic cones by calculating the fan-in and
fan-out values of a gate. Inserting key gates in certain logic cone areas
will increase the security of the system. Logic cone analysis is good for
countering logic cone attacks. Certain attacks will exploit these weak logic
cones and try to discover the key to unlock the circuit. Logic cone analysis
is vulnerable to other types of attacks such as SAT and functional attacks.
Strong logic locking (SLL) is another obfuscation technique, but it is also
vulnerable to SAT attacks [4]. SLL is based on interference graphs that show
how inserted key gates interfere with each other. The interference graph shows
the relationship between an inserted key gate and its surrounding key gates
and wires. The interference graph shows if key gates are on a cascading path,
parallel path, or if they don’t interfere with each other at all. The
interference graph along with other information makes it harder for an
attacker to unlock the circuit even with a SAT attack model.
More recent techniques have been developed to counter SAT attacks and other
related schemes. The obfuscation technique needs to be strong enough to resist
certain attacks otherwise the integrity of the IC would be compromised. The
goal of an adversary during an attack is to determine the secret key to unlock
the circuit or gain other important information from the system. SARLock was
developed to make the SAT attack model inefficient [6]. SARLock employs a
small overhead strategy that exponentially increases the number of DIPs needed
to unlock the circuit. SARLock is very strong against SAT attacks since it
uses the basis of the attack model to determine where to insert key gates. The
input pattern and corresponding key values can be analyzed during the
insertion process of the obfuscation technique.
In 2017, TTLock was proposed that resisted all known attacks including SAT and
sensitization attacks [10]. TTLock would invert the response to a logic cone
to protect the input pattern. The logic cone would be restored only if the
correct key is provided. The small change to the functionality of the circuit
would maximize the efforts needed for the SAT attacks. The generalized form of
stripping away the functionality of logic cones and hiding it from attackers
is known as stripped-functionality logic locking (SFLL). However, the design
of TTLock didn’t account for the cost of tamper-proof memory which could lead
to high overhead in the re-synthesis process [11] [12]. Another group
automated the general process of TTLock to identify the parts of the design
that needed to be modified in an efficient way. They used ATPG tools to
develop a scalable and more efficient way of protecting these patterns from
attackers. Overall, a 35% improvement in overhead was achieved with the
automated process. Later, a modified version of SFLL was proposed based on the
hamming distance of the key. This was referred to as SFLL-hd [13]. The hamming
distance metric was used to determine which pattern to modify in the SFLL
scheme. Depending on the type of attack, the hamming distance can be adjusted
accordingly. In 2019, the idea of exploring high-level synthesis (HLS) with
logic locking was proposed with SFLL-HLS [14]. SFLL-HLS was proposed to
improve the system-wide security of an IC. The design resulted in faster
validation of design and higher levels of abstraction. The HLS implementation
in this technique was used to identify the functional units and logic cones to
be operated on with respect to SFLL. They observed low overhead and power
results from their analysis. Most recently, in 2020, LoPher was developed as
another SAT-resistant obfuscation technique [15]. LoPher uses a block cipher
to produce the same behavior as a logic gate. The basic component of the block
cipher is configurable and allows many logic permutations, which further
increases the security of the system.
Many forms and variations of SAT attacks have been created in order to show
the weaknesses of various hardware obfuscation techniques. Algorithms have
been developed for SAT competitions and the results can be used in a variety
of applications including hardware obfuscation [16] [17]. These tools are used
to evaluate the strength of logic locking techniques and can be used to bypass
the security of integrated circuits. As a result, an anti-SAT unit was
developed as a general solution to the SAT attack [18]. The anti-SAT block
consists of a low overhead unit that can be added to any obfuscation technique
to help counter the SAT attacks. The unit requires the key length for the
locked circuit to increase as inputs to the anti-SAT block. The number of DIPs
and input patterns that an adversary needs would grow exponentially due to
this change. This would make the complexity of the SAT attack exponential
instead of linear and therefore inefficient. The recent innovation in anti-SAT
has inspired us to develop a technique that will be resistant to various SAT
attacks. We designed constraints that should minimize the effects of a SAT
algorithm.
## 3 ProbLock
ProbLock is based on filtering out nodes in a circuit to find the best
location to insert key gates. ProbLock is a logic locking technique where the
key gates are either XOR or XNOR gates and a key is used to unlock the
circuit. We used four constraints to determine the best candidate nodes to
insert our XOR or XNOR key gate; longest path, non-critical path, low
dependent nodes, and best probability nodes. The first three constraints find
the set of nodes that lie on the longest path, non-critical path, and have low
dependent wires. The last constraint uses probability to find the set of nodes
equal to the key length where we will insert the key gates. We chose the
longest path and non-critical path constraint in order to avoid critical
timing elements and to insert key gates on parts of the circuit that was being
used the most. We chose the low dependent wires and probability constraint to
determine locations where the output would be changed the most. This would
make it harder for an attack to generate the golden circuit using an oracle
based attack. Once we determine the location of the key nodes, we can insert
key gates into the netlist and re-synthesize the circuit. In Equation 1 the
candidate nodes are determined from a function of all four constraints. $LP$
is the set of nodes on the longest paths while $NCP$ is the set of nodes on
non-critical paths. $LD$ represents the set of low dependent nodes and $P$ is
the set of probability nodes.
$selectedNodes\subset{P}\subset{LD}\subset{NCP}\subset{LP}$ (1)
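The nested filtering of Equation 1 can be sketched as successive set reductions; the predicate and score functions below are hypothetical placeholders standing in for the actual constraint analyses.

```python
def filter_nodes(nodes, on_longest_path, on_noncritical_path,
                 is_low_dependent, probability_score, key_length):
    """Apply the four ProbLock constraints in sequence (Eq. 1):
    LP -> NCP -> LD -> P, ending with key_length selected nodes.
    The predicate/score functions here are hypothetical placeholders."""
    lp = {n for n in nodes if on_longest_path(n)}          # longest paths
    ncp = {n for n in lp if on_noncritical_path(n)}        # non-critical paths
    ld = {n for n in ncp if is_low_dependent(n)}           # low dependent wires
    # Final constraint: keep the key_length best-probability nodes.
    return sorted(ld, key=probability_score, reverse=True)[:key_length]

# Toy usage with hypothetical predicates over node IDs 0..9:
selected = filter_nodes(
    range(10),
    on_longest_path=lambda n: n < 8,
    on_noncritical_path=lambda n: n % 2 == 0,
    is_low_dependent=lambda n: n != 0,
    probability_score=lambda n: n,   # stand-in "switching probability"
    key_length=2,
)
print(selected)  # → [6, 4]
```
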
For our obfuscation technique, we decided to lock a set of combinational and
sequential circuit netlists using the ISCAS ’85 and ISCAS ’89 circuit
benchmarks. We obfuscated a total of 40 benchmarks using ProbLock. For some of
the constraints, we had to use an unrolling technique described in [19] to
accurately filter out nodes. This unrolling technique was only used in
sequential circuits to simplify the handling of flip-flops and other
sequential logic. The sequential logic can be replaced by a main stage and
$k$ sub-stages, depending on the number of times the circuit is unrolled. This
results in a $k$-unrolled circuit that has the same functionality as the
regular circuit. For this process, we generated a set of unrolled ISCAS ’89
benchmarks which we used in some constraint algorithms. We unrolled these
circuits once to prevent inaccuracies in constraints such as the longest path
and non-critical path.
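The one-step unrolling can be sketched as a netlist transformation. This is an illustrative version, not the technique of [19] verbatim: it assumes a netlist given as `(output, gate, inputs)` tuples in which the gate name `'DFF'` marks sequential elements, and replaces each DFF output by a wire driven from a renamed stage-0 copy of the combinational logic.

```python
def unroll_once(netlist):
    """Replace DFFs by wires fed from a stage-0 copy of the logic,
    yielding a purely combinational 1-unrolled circuit (a sketch)."""
    combinational = [g for g in netlist if g[1] != 'DFF']
    dffs = [g for g in netlist if g[1] == 'DFF']
    rename = lambda w: w + '_s0'          # stage-0 (previous cycle) names
    unrolled = []
    # stage 0: a renamed copy of the combinational logic
    for (out, gate, ins) in combinational:
        unrolled.append((rename(out), gate, [rename(i) for i in ins]))
    # main stage: each DFF becomes a buffer from the stage-0 copy of its input
    for (out, gate, ins) in dffs:
        unrolled.append((out, 'BUF', [rename(ins[0])]))
    unrolled += combinational
    return unrolled

netlist = [('n1', 'AND', ['a', 'b']), ('q', 'DFF', ['n1']),
           ('y', 'OR', ['q', 'c'])]
unrolled = unroll_once(netlist)
```

After the transformation no `'DFF'` entries remain, so path- and probability-based constraints can be computed as for a combinational circuit.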
### 3.1 Longest Path Constraint
The longest path constraint isolates a subset of nodes that lie on the longest
paths in a circuit netlist. The subset of nodes is different for each circuit
and is a function of the key length determined for each circuit. We represent
the netlist of each benchmark as a directed acyclic graph (DAG) and perform
the longest path analysis on each DAG. Each vertex in the DAG is a gate
element from the netlist and each edge represents the wire connecting to the
next gate element. Once the DAG is constructed for each benchmark, we
calculate the longest path of the DAG using a depth-first search (DFS)
technique. We then calculate the next-longest path to generate a subset of
nodes along the longest paths. Each unique node in the longest path gets added
to a subset during each iteration until the size of the subset is bigger than
two times the key length for that circuit. The structure of this theory is
shown in Algorithm 1 which uses the DFS in Algorithm 2. Figure 2 shows the
longest path for the circuit to be 3 since there are 3 gates between input $A$
and output $Y$. The next longest path would also be 3 from input $B$ to output
$Y$. All of the nodes along both longest paths would be added to a subset of
the longest path nodes. Once this subset of longest path nodes is determined,
that subset gets used in the next filtering constraint. This subset can be
adjusted to include more or fewer nodes depending on other filtering
constraints. If more nodes are needed, this constraint is the first to be
modified.
We chose to use the longest path constraint in order to counter oracle guided
attacks. Oracle guided attacks will query the IC with various inputs and
observe the output. This gives the attacker information about how the circuit
behaves and the adversary can use this information to determine the secret
key. We want to insert key gates where most of the logic and activity occur in
the circuit. An oracle guided attack will most likely pass data through the
longest paths of a circuit so we want to protect these parts of the IC by
inserting key gates on the longest path.
input : Circuit Graph and Key Length
output : List of nodes on the longest path
G$\leftarrow$ circuit graph;
V$\leftarrow$ source vertex of G;
overallNodes $\leftarrow$ [];
allPaths $\leftarrow$ DFS(G,V);
sort allPaths by descending length;
for _p in allPaths_ do
for _n in p_ do
if _n not in overallNodes_ then
overallNodes.append(n);
end if
end for
if _len(overallNodes) $>$ keyLength * 3_ then
return overallNodes;
end if
end for
Algorithm 1 Get Longest Path
input : Circuit Graph G and Vertex Source V
output : Longest path in Circuit
mark V as visited;
for _all neighbors W of V_ do
if _W is not visited_ then
DFS(G,W);
end if
end for
Algorithm 2 Depth First Search
Figure 2: Longest Path (in red)
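The path search of Algorithms 1 and 2 can be sketched in Python on a toy DAG mirroring Figure 2 (inputs $A$ and $B$ both reach output $Y$ through three gates). The adjacency-dict representation and function names here are illustrative assumptions, not the authors' code.

```python
# Toy DAG: inputs A, B -> g1 -> g2 -> g3 -> output Y (as in Figure 2).
graph = {
    'A':  ['g1'], 'B': ['g1'],
    'g1': ['g2'], 'g2': ['g3'], 'g3': ['Y'],
    'Y':  [],
}

def longest_path_from(graph, v):
    """Longest path (list of nodes) starting at v, by recursive DFS."""
    best = [v]
    for w in graph.get(v, []):
        cand = [v] + longest_path_from(graph, w)
        if len(cand) > len(best):
            best = cand
    return best

def longest_path_nodes(graph, sources, limit):
    """Collect unique nodes from the longest paths, longest-first,
    until at least `limit` nodes are gathered (cf. Algorithm 1)."""
    nodes = []
    for s in sorted(sources, key=lambda v: -len(longest_path_from(graph, v))):
        for n in longest_path_from(graph, s):
            if n not in nodes:
                nodes.append(n)
        if len(nodes) >= limit:
            break
    return nodes
```

For a real netlist the graph would be built from the parsed Verilog, with one vertex per gate and one edge per wire.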
### 3.2 Critical Path Constraint
The critical path constraint is similar to the longest path; however, rather
than considering logic depth, we look at timing information. This constraint
is essential, as adding gates on the critical path could break the circuit
functionality or change timing specifications. The nodes selected often
overlapped with other constraints (e.g. the longest path was often the
critical path), though oftentimes the critical path would involve gates with
large fan-out. Determining the critical path is largely technology-specific;
different PDKs will have different timing information which can affect which
paths are critical paths. We removed any nodes that were on the critical path
from the set of nodes passed into this constraint. The resulting subset
results in nodes that are on the longest, non-critical path.
Figure 3: Critical Paths (in red)
### 3.3 Low Wire Dependency Constraint
The next constraint generates a subset of nodes that are connected to low
dependent wires. The output wire of a gate is considered low dependent if the
input wires to that gate have little influence on the value of output. This
idea is modified from a technique called FANCI where suspicious wires can be
detected in a Trojan infected design [20]. A functional truth table is created
for each output wire of each gate in the circuit. The inputs of the truth
table correspond to the inputs of the gate being analyzed. For each input
column, the other columns are fixed and each row is tested with a 0 or 1 to
determine the output. This results in two functions when setting the value to
either 0 or 1. The boolean difference between these two functions results in a
value between zero and one that can be further analyzed. The value for each
input gets stored as a list for each output wire. We take the average value of
the entire list to determine the dependency of an output wire. The algorithm
logic is shown in Algorithm 3. This analysis can determine if certain inputs
are low dependent or if they rarely affect the corresponding logic. Low
dependent wires are weak spots in the circuit, so this constraint isolates
those locations
in order to improve the security. We insert key gates next to low dependent
wires to fortify any weaknesses. The filtering process passes the subset of
nodes to the final constraint.
input : Circuit Graph
output : List of low dependent wires
$G$ $\leftarrow$ circuit graph;
foreach _gates in G_ do
foreach _output wire $w$_ do
$T$ $\leftarrow$ Truthtable($w$);
$L$ $\leftarrow$ empty list of control values;
foreach _column c in $T$_ do
$count$ $\leftarrow$ 0;
foreach _row $r$ in $T$_ do
$x_{0}$ $\leftarrow$ Value of $w$ when input value = 0;
$x_{1}$ $\leftarrow$ Value of $w$ when input value = 1;
if _$x_{0}$ != $x_{1}$_ then
$count$++;
end if
end foreach
$L$.append($\frac{count}{size(T)}$);
end foreach
$avg$ $\leftarrow$ average($L$);
if _$avg$ $<$ 0.5_ then
$controlNodes$.append(gate);
end if
end foreach
end foreach
Algorithm 3 Find Low Dependent Wires
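The FANCI-style control-value computation in Algorithm 3 can be sketched as follows. This is an illustrative version that assumes each gate is available as a Boolean function of its inputs; the helper names are hypothetical, while the 0.5 averaging threshold matches Algorithm 3.

```python
from itertools import product

def control_values(fn, n_inputs):
    """For each input i, the fraction of assignments to the remaining
    inputs for which flipping input i flips the gate output."""
    vals = []
    for i in range(n_inputs):
        flips = 0
        others = list(product([0, 1], repeat=n_inputs - 1))
        for rest in others:
            lo = list(rest[:i]) + [0] + list(rest[i:])
            hi = list(rest[:i]) + [1] + list(rest[i:])
            flips += fn(lo) != fn(hi)
        vals.append(flips / len(others))
    return vals

def is_low_dependent(fn, n_inputs, threshold=0.5):
    """Flag the output wire as low dependent if the average control
    value falls below the threshold, as in Algorithm 3."""
    vals = control_values(fn, n_inputs)
    return sum(vals) / len(vals) < threshold

and2 = lambda x: x[0] & x[1]          # each AND input controls half the time
or3  = lambda x: x[0] | x[1] | x[2]   # each OR input rarely controls
```

For a 3-input OR, each input flips the output only when both other inputs are 0, giving a control value of 0.25 and marking the wire as low dependent.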
### 3.4 Biased Probabilities Constraint
The probability constraint focuses on reducing the effectiveness of the SAT
attacks. In a SAT attack, a distinguishing input (DI) is chosen and the
attacker runs through various key values, eliminating any which yield an
incorrect output. Thus, to reduce the effectiveness of a SAT attack, the
number of wrong keys produced for a given DI must decrease. This can be done
by bringing the probability of any given node being $1$ closer to $0.5$, since
any node which is biased towards 0 or 1 will propagate through to the output
nodes, making it easier for SAT attacks to eliminate key values. Since a two-
input XOR/XNOR has an output probability of $0.5$, we can insert our key gates
at nodes heavily biased towards 0 or 1 and "reset" the probability to $0.5$.
The algorithm used to obtain the $N$ nodes with the most biased probabilities
is shown in Algorithm 4. It is worth noting that while generating node
probabilities for combinational circuits is trivial, sequential circuits pose
a potential problem because of the D flip-flops (DFFs). However, giving the
DFF outputs a starting probability of 0.5 and propagating for a few iterations
(three is sufficient) will asymptotically approach the correct probability for
the DFF node.
input : Circuit Graph and Circuit Input Probabilities
output : List of $N$ most biased nodes
for _$i\leftarrow 1$ to $N$_ do
DFF initial probability$\leftarrow$0.5;
while _any probability unknown_ do
foreach _node with unknown probability_ do
if _all node input probabilities known_ then
Compute node output probability;
end if
end foreach
end while
$Node\leftarrow max(abs(circuitprobabilities-0.5))$;
Add $Node$ to output list;
Insert XOR/XNOR gate at $Node$ in Circuit Graph;
end for
Algorithm 4 Find Biased Nodes
(a) Pre-Insertion Probabilities
(b) Post-Insertion Probabilities
Figure 4: Key Gate Insertion Probabilities
An example of this is illustrated in Figure 4. Figure 4(a) shows a sample
circuit with each node annotated with the probability of that node being a
logic 1. The output shown is heavily biased toward logic 0, which makes it
more susceptible to SAT attacks. Strategically adding a key gate, as shown in
Figure 4(b), brings the output probability closer to 0.5, reducing the
effectiveness of the SAT attack.
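The probability bookkeeping behind Algorithm 4 can be checked with the standard gate rules for independent signals (an assumption of this sketch): $P_{\mathrm{AND}}=p_1p_2$, $P_{\mathrm{OR}}=1-(1-p_1)(1-p_2)$, $P_{\mathrm{XOR}}=p_1+p_2-2p_1p_2$. Treating the key bit as equiprobable, an XOR key gate returns any biased node to 0.5:

```python
def p_and(p1, p2): return p1 * p2
def p_or(p1, p2):  return 1 - (1 - p1) * (1 - p2)
def p_xor(p1, p2): return p1 + p2 - 2 * p1 * p2

# A node heavily biased toward 0: an AND tree of four p=0.5 inputs.
biased = p_and(p_and(0.5, 0.5), p_and(0.5, 0.5))   # 0.0625

# Inserting an XOR key gate with an equiprobable key bit (p=0.5)
# "resets" the node probability to 0.5 regardless of the bias:
reset = p_xor(biased, 0.5)
```

Algebraically $p + 0.5 - 2p\cdot 0.5 = 0.5$ for any $p$, which is exactly why a two-input XOR/XNOR key gate neutralizes the bias exploited by a SAT attack.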
## 4 Implementations
To obfuscate the benchmarks, we created a Python script that implements this
algorithm. The script takes in a benchmark netlist in Verilog format and
returns an obfuscated netlist in the same format. The obfuscated netlist
included the key gates inserted as well as the key defined to unlock the
circuit. We created a function to parse each netlist for information. The
information was organized into lists of inputs, outputs, and gate types. We
used this information to determine the key size relative to the number of
gates in a netlist. We also created functions for each constraint in our
algorithm. A set of overall nodes was passed through each function and then
narrowed down to a set of best nodes for key gate insertion. Another function
was created to insert key gates from a data structure into a new netlist. We
specified the key inputs, key gates, and the key value in the header of the
new netlist for development purposes. Throughout the development process, we
ran tests to verify the intention of our script and to make sure each new
netlist was correct.
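The key-gate insertion step can be sketched on a tuple-based netlist representation. This is a hypothetical illustration, not the script's actual Verilog handling: the original driver of the chosen net is renamed to an intermediate wire, and a keyed XOR/XNOR re-drives the net so that only the correct key bit restores the original logic value.

```python
def insert_key_gate(netlist, net, key_bit, key_name):
    """netlist: list of (output, gate, inputs) tuples. Reroute `net`
    through an XOR (key bit 0) or XNOR (key bit 1) driven by the new
    key input `key_name`."""
    pre = net + '_pre'
    locked = [(pre if out == net else out, gate, ins)
              for (out, gate, ins) in netlist]          # rename the driver
    locked.append((net, 'XNOR' if key_bit else 'XOR', [pre, key_name]))
    return locked

netlist = [('n1', 'AND', ['a', 'b']), ('y', 'OR', ['n1', 'c'])]
locked = insert_key_gate(netlist, 'n1', 0, 'k0')
```

With key bit 0, XOR(`n1_pre`, 0) passes the signal through unchanged; with key bit 1, XNOR(`n1_pre`, 1) does the same, so the unlocked circuit is functionally identical to the original.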
We used Synopsys Design Compiler to synthesize and view the netlists before
and after obfuscation [21]. The Synopsys tool allowed us to see the gate level
representation of each benchmark during the analysis process. We were able to
see the location of the logic components as well as the inserted key gate
after the obfuscation process occurred. We also used the Design Compiler for
critical path analysis in our second constraint. The tool allowed for timing
analysis between different logic components of the netlist. We used this to
calculate the critical paths for our constraint and removed any nodes that lie
on this critical path. We integrated the results from Synopsys by exporting
them to a text file that is parsed in the main script.
## 5 Experimental Results
During the development process of ProbLock, we analyzed the correlation and
relationship between constraints. We documented this relationship to show how
each constraint impacted the overall strength of the technique. We chose two
constraints based on path elements and two constraints based on nodes and
wires. Due to this design, we were able to analyze the correlation between
constraints and adjust the strength of the filtering process based on this
analysis. Overall, we wanted the correlation between constraints to be large
enough to remove any nodes that didn’t belong in both sets. This would allow
the filtering process from each constraint to generate a subset of nodes each
time until only the best candidate nodes remain to be inserted. The strength
of the correlation varies between benchmarks because of the shape and
functionality of each circuit. Each subsequent constraint filtered out a set
of nodes based on the relationship between the constraint and the overall set
of nodes. Table 1 shows the experimental correlations for ISCAS ’85 and ’89
benchmarks. We only show some of the results in the table as all 40 benchmarks
are not included. The longest path (LP) length and critical path (CP) count
are based on the path constraints. The rest of the categories, including
non-critical path (NCP), low dependent wires (LD), and biased probabilities (Prob),
are based on nodes.
For each constraint, a smaller set of nodes is generated until a final set is
determined, which corresponds to the locations of the inserted key gates. In the
ISCAS ’85 benchmark suite, the correlation between nodes on the longest path,
and the original set of nodes was 36% on average. Between nodes on the non-
critical path and nodes on the longest path, the correlation was about 63% on
average. The correlation between low dependent nodes and nodes on the non-
critical path was about 73%, and the final correlation between biased
probabilities and low dependent nodes was about 65%. For the ISCAS ’89
benchmark suite, the correlation between the longest path and overall nodes is
27%. The correlation between the critical path and the longest path is 84%.
The correlation between low dependent nodes and non-critical path is 85% and
the final correlation between biased probabilities and low dependent nodes is
45%. The numbers that we analyzed were ideal for the filtering process. Enough
nodes were removed with each subset until the final set of best candidates
was discovered. For the final biased probabilities constraint, the final set
of nodes was equal to the size of the key. For the other constraints, we
adjusted the filtering threshold accordingly. Depending on the situation, the
strength of the constraints can be adjusted which allows flexibility in our
algorithm. After re-synthesizing the obfuscated netlists, we used Synopsys to
verify the behavior of the locked circuits [21]. The critical path of each
netlist remained the same and the timing analysis remained consistent for all
benchmarks. We also used Synopsys to verify that the overhead of each netlist
was no greater than 10%.
Table 1: ISCAS ’85 & ’89 Constraint Correlation
ISCAS 85 | Key Size | LP Length | CP Count | Total Nodes | LP Subset | NCP Subset | LD Subset | Prob Subset
---|---|---|---|---|---|---|---|---
c432 | 16 | 18 | 7 | 160 | 88 | 60 | 33 | 16
c499 | 16 | 12 | 32 | 202 | 186 | 104 | 99 | 16
c1355 | 32 | 25 | 32 | 546 | 485 | 253 | 53 | 32
c1908 | 64 | 39 | 25 | 880 | 205 | 145 | 129 | 64
c2670 | 64 | 31 | 100 | 1269 | 217 | 200 | 75 | 64
c3540 | 128 | 42 | 22 | 1669 | 260 | 173 | 151 | 128
c5315 | 128 | 47 | 100 | 2307 | 411 | 226 | 180 | 128
c7552 | 256 | 35 | 100 | 3513 | 532 | 341 | 278 | 256
ISCAS 89 | Key Size | LP Length | CP Count | Total Nodes | LP Subset | NCP Subset | LD Subset | Prob Subset
s298 | 8 | 10 | 6 | 75 | 26 | 26 | 18 | 8
s344 | 8 | 21 | 11 | 101 | 21 | 19 | 19 | 8
s382 | 8 | 10 | 6 | 99 | 29 | 29 | 21 | 8
s386 | 8 | 12 | 7 | 118 | 49 | 32 | 24 | 8
s400 | 8 | 10 | 6 | 106 | 30 | 30 | 22 | 8
s444 | 8 | 12 | 6 | 119 | 38 | 38 | 30 | 8
s526 | 8 | 10 | 6 | 141 | 26 | 26 | 18 | 8
s641 | 8 | 75 | 24 | 107 | 80 | 53 | 51 | 8
s713 | 8 | 75 | 23 | 139 | 84 | 61 | 56 | 8
s838 | 16 | 18 | 1 | 288 | 45 | 29 | 26 | 16
s1238a | 32 | 23 | 14 | 428 | 132 | 76 | 64 | 32
s1488 | 32 | 18 | 19 | 550 | 89 | 60 | 48 | 32
s5378a | 64 | 15 | 46 | 1004 | 134 | 96 | 92 | 64
s9234a | 128 | 19 | 37 | 2027 | 264 | 261 | 213 | 128
s13207a | 256 | 28 | 100 | 2573 | 573 | 521 | 482 | 256
s15850a | 256 | 22 | 100 | 3448 | 553 | 544 | 506 | 256
s38584 | 256 | 15 | 100 | 11448 | 717 | 716 | 571 | 256
## 6 Conclusions and Future Work
We propose ProbLock, a probability-based logic locking technique that uses a
filtering process to determine the location of inserted key gates. ProbLock
uses four constraints to narrow the set of nodes in a netlist to be used for
insertion. We obfuscated $40$ different sequential and combinational
benchmarks from the ISCAS ’85 and ISCAS ’89 suite. After obfuscating the
circuits, we analyzed the correlation between constraints and implemented the
capability to adjust these constraints depending on the situation. In the
future, we intend to test the obfuscated benchmarks against known attacks and
compare ProbLock to other logic locking techniques. We will implement logic locking
attacks such as SAT attacks and sensitization attacks. Each attack will be
executed against benchmarks obfuscated with ProbLock. We will then run the
same attacks on locking schemes such as SLL [4], logic cone locking [3], and
SARLock [6]. We will evaluate how well each benchmark performs by measuring
overhead of the obfuscation technique, complexity of the technique, and
execution time of the attack. After running each attack scheme, we can compare
and evaluate the true strength of ProbLock compared to other published logic
locking techniques. We also plan on strengthening ProbLock against SAT attacks
by integrating SAT resistant logic near the key gate locations. This would
increase overhead, but we would also try to optimize this in our experiment.
## References
* [1] Jake Mellor, Allen Shelton, Michael Yue, and Fatemeh Tehranipoor. Attacks on logic locking obfuscation techniques. In 2021 IEEE International Conference on Consumer Electronics (ICCE), pages 1–6, 2021.
* [2] D. Forte, S. Bhunia, and M. Tehranipoor. Hardware Protection through Obfuscation. Springer, 2017.
* [3] Y. Lee and N. A. Touba. Improving logic obfuscation via logic cone analysis. 16th Latin-American Test Symposium (LATS), pages 1–6, 2015.
* [4] M. Yasin, J. J. V. Rajendran, O. Sinanoglu, and R. Karri. On improving the security of logic locking. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 35(9):1411–1424, 2016.
* [5] F. Tehranipoor, W. Yan, and J. A. Chandy. Development and evaluation of hardware obfuscation benchmarks. J Hardware System Security 2, pages 142–161, 2018.
* [6] M. Yasin, B. Mazumdar, J. J. V. Rajendran, and O. Sinanoglu. Sarlock: Sat attack resistant logic locking. IEEE International Symposium on Hardware Oriented Security and Trust (HOST), pages 236–241, 2016.
* [7] Fatemeh Tehranipoor, Nima Karimian, Mehran Mozaffari Kermani, and Hamid Mahmoodi. Deep rnn-oriented paradigm shift through bocanet: Broken obfuscated circuit attack. In Proceedings of the 2019 on Great Lakes Symposium on VLSI, pages 335–338, 2019.
* [8] F. Brglez and H. Fujiwara. A neutral netlist of 10 combinational benchmark circuits and a target translator in Fortran. Proc. of the International Symposium on Circuits and Systems, pages 663–698, 1985.
* [9] F. Brglez, D. Bryan, and K. Kozminski. Combinational profiles of sequential benchmark circuits. Proc. of the International Symposium of Circuits and Systems, pages 1929–1934, 1989.
* [10] M. Yasin, B. Mazumdar, J. J. V. Rajendran, and O. Sinanoglu. TTLock: Tenacious and traceless logic locking. 2017 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), pages 166–166, 2017.
* [11] A. Sengupta, M. Nabeel, M. Yasin, and O. Sinanoglu. Atpg-based cost-effective, secure logic locking. 2018 IEEE 36th VLSI Test Symposium (VTS), pages 1–6, 2018.
* [12] M. Yasin, A. Sengupta, M. T. Nabeel, M. Ashraf, J. J. V. Rajendran, and O. Sinanoglu. Provably-secure logic locking: From theory to practice. CCS ’17: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 1601–1618, 2017.
* [13] F. Yang, M. Tang, and O. Sinanoglu. Stripped functionality logic locking with hamming distance-based restore unit (sfll-hd) - 2013 unlocked. IEEE Transactions on Information Forensics and Security, 14(10):2778–2786, 2019.
* [14] M. Yasin, C. Zhao, and J. J. V. Rajendran. SFLL-HLS: Stripped-functionality logic locking meets high-level synthesis. 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pages 1–4, 2019.
* [15] A. Saha, S. Saha, S. Chowdhury, D. Mukhopadhyay, and B. B. Bhattacharya. LoPher: SAT-hardened logic embedding on block ciphers. 2020 57th ACM/IEEE Design Automation Conference (DAC), pages 1–6, 2020.
* [16] P. Subramanyan, S. Ray, and S. Malik. Evaluating the security of logic encryption algorithms. In 2015 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), pages 137–143, 2015.
* [17] A. Biere. Splatz, Lingeling, Plingeling, Treengeling, YalSAT Entering the SAT Competition 2016. In Tomáš Balyo, Marijn Heule, and Matti Järvisalo, editors, Proc. of SAT Competition 2016 – Solver and Benchmark Descriptions, volume B-2016-1 of Department of Computer Science Series of Publications B, pages 44–45. University of Helsinki, 2016.
* [18] Y. Xie and A. Srivastava. Anti-SAT: Mitigating SAT attack on logic locking. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 38(2):199–207, 2019.
* [19] N. Miskov-Zivanov and D. Marculescu. Modeling and optimization for soft-error reliability of sequential circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 27(5):803–816, 2008.
* [20] A. Waksman, M. Suozzo, and S. Sethumadhavan. Fanci: Identification of stealthy malicious logic using boolean functional analysis. Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, pages 697–708, 2013.
* [21] Design compiler graphical. Synopsys, 2018.
# Globally optimal stretching foliations of dynamical systems reveal the
organizing skeleton of intensive instabilities
Sanjeeva Balasuriya School of Mathematical Sciences, University of Adelaide,
Adelaide SA 5005, Australia Erik M. Bollt Department of Electrical and
Computer Engineering and $C^{3}S^{2}$ the Clarkson Center for Complex Systems
Science, Clarkson University, Potsdam, New York 13699
###### Abstract
Understanding instabilities in dynamical systems drives to the heart of modern
chaos theory, whether forecasting or attempting to control future outcomes.
Instability in the sense of locally maximal stretching in maps is well
understood, and is connected to the concepts of Lyapunov exponents/vectors,
Oseledec spaces and the Cauchy–Green tensor. In this paper, we extend the
concept to global optimization of stretching, as this forms a skeleton
organizing the general instabilities. The ‘map’ is general but incorporates
the inevitability of finite-time as in any realistic application: it can be
defined via a finite sequence of discrete maps, or a finite-time flow
associated with a continuous dynamical system. Limiting attention to two-
dimensions, we formulate the global optimization problem as one over a
restricted class of foliations, and establish the foliations which both
maximize and minimize global stretching. A classification of nondegenerate
singularities of the foliations is obtained. Numerical issues in computing
optimal foliations are examined, in particular insights into special curves
along which foliations appear to veer and/or do not cross, and foliation
behavior near singularities. Illustrations and validations of the results to
the Hénon map, the double-gyre flow and the standard map are provided.
Keywords: Lyapunov vector , finite-time flow, punctured foliation
Mathematics Subject Classification: 37B55, 37C60, 53C12
## 1 Graphical Abstract
#### Highlights
1. 1.
Understanding the organizing skeleton of instability for orbits must be
premised on analysis of globally optimal stretching.
2. 2.
Provides the theory to obtain foliation for globally optimizing stretching for
any two-dimensional map (analytically specified, derived from a finite-time
flow or a sequence of maps, and/or given via data);
3. 3.
Classifies singularities and provides insight and solutions to spurious
artefacts emerging when attempting to numerically determine such a foliation;
4. 4.
Establishes connections with a range of well-established methods: locally
optimizing stretching, Cauchy–Green eigenvalues and singularities, Lyapunov
exponents, Lyapunov vectors, Oseledec spaces, and variational Lagrangian
coherent structures.
## 2 Introduction
A central topic of dynamical systems theory involves analysis of
instabilities, since this is the central idea behind the possibility of a
forecast time horizon, or even the ease of control of future outcomes. The
preponderance of work has involved analysis of local instability, whether via
the Hartman–Grobman and center manifold theorems [1] for periodic orbits, or
similar results for invariant sets [2]. For general orbits, local
instability is characterized by Oseledec spaces [3] which are identified via
Lyapunov exponents [4] and Lyapunov vectors [5, 6]. Via these techniques,
locally optimizing stretching due to the operation of a map from subsets of
$\mathbb{R}^{n}$ to subsets of $\mathbb{R}^{n}$ is well understood. Computing
the map’s derivative matrix at each point allows for computation of
Oseledec/Lyapunov information: its singular values and corresponding singular
vectors are respectively associated with stretching rates and relevant
directions in the domain, and its (scaled) operator norm is the classical
Lyapunov exponent of the orbit beginning at that point.
In this paper, we assert that understanding the global dynamics—how a system
organizes orbits—is related to a global view of instabilities. The related
organizing skeleton of orbits must therefore be premised on analysis of
globally optimal stretching. Here, orbits will be in relation to two-
dimensional maps which can be derived from various sources: a finite sequence
of discrete maps, or a flow occurring over a finite time period. The latter
situation is particularly relevant when seeking regions in unsteady flows
which remain ‘coherent’ over a given time period [7]. In all these cases, we
emphasize that we are not seeking to understand stretching in the infinite-
time limit—which is the focus in many classical approaches [3, 2]—but rather
stretching associated with a one-step map derived from any of these
approaches. From the applications perspective, the one-step map would be
parametrized by the discrete or continuous time over which the map operates,
and this number would of necessity be finite in any computational
implementation.
When additionally seeking global optimization, the first issue is defining
what this means with respect to a bounded open domain on which the map
operates. In Section 3, we pose this question as an optimization over
foliations, but need to restrict these foliations in a certain way because
they would generically have singularities. We are able to characterize the
restricted foliations of optimal stretching (minimal or maximal) in a
straightforward geometric way, while establishing connections to well-known
local stretching optimizing entities. We provide a complete classification of
the nondegenerate singularities using elementary arguments in Section 4,
thereby easily identifying $1$\- and $3$-pronged singularities as the primary
scenarios. We argue in Section 5 the inevitability of a ‘branch cut’
phenomenon if attempting to compute these restricted foliations using a vector
field; this will generically possess discontinuities across one-dimensional
curves which we can characterize. Other computational ramifications are
addressed in Section 6, which includes issues of curves stopping abruptly when
coming in horizontally or vertically, and veering along spurious curves. We
are able to give explicit insights into the emergence of these issues as a
result of standard numerical implementations, and we suggest an alternative
integral-curve formulation which avoids these difficulties. In Section 7, we
demonstrate computations of globally optimal restricted foliations for several
well-known examples: the Hénon map [8], the Chirikov (standard) map [9], and
the double-gyre flow [4], each implemented over a finite time. The
aforementioned numerical issues are highlighted in these examples.
## 3 Globally optimizing stretching
Let $\Omega$ be a bounded two-dimensional subset of $\mathbb{R}^{2}$
consisting of a finite union of connected open sets, each of whose closure has
at most a finite number of boundary components. So $\Omega$ may, for example,
consist of disconnected open sets and/or entities which are topologically
equivalent to the interior of an annulus. We will use $(x,y)$ to denote points
in $\Omega$. Let $F$ be a map on $\Omega$ to $\mathbb{R}^{2}$ which is given
componentwise by
$\mbox{\boldmath$F$}\left(x,y\right)=\left(\begin{array}[]{c}u(x,y)\\\
v(x,y)\end{array}\right)\,.$ (1)
###### Hypothesis 1 (Smoothness of $F$).
Let the map $\mbox{\boldmath$F$}\in{\mathrm{C}}^{2}(\Omega)$.
Physically, we note that $F$ can be generated in various ways. It can be
simply one iteration of a given map, multiple (finitely-many) iterations of a
map, or even the application of a finite sequence of maps. It can also be the
flow-map generated from a nonautonomous flow in two-dimensions over a finite
time. In this sense, $F$ encapsulates the fact that finiteness is inevitable
in any numerical, experimental or observational situation, while allowing for
both discrete and continuous time, as well as nonautonomy. The time over which
the system operates can be thought of as a parameter which is encoded within
$F$, and its effect can be investigated if needed by varying this parameter.
The relative stretching of a tiny line (of length $\delta>0$) placed at a
point $(x,y)$ in $\Omega$, with an orientation given by
$\theta\in[-\pi/2,\pi/2)$ due to the action of $F$ is
$\Lambda(x,y,\theta)=\lim_{\delta\rightarrow
0}\frac{\left\|\mbox{\boldmath$F$}\left(x+\delta\cos\theta,y+\delta\sin\theta\right)-\mbox{\boldmath$F$}(x,y)\right\|}{\delta}\,.$
This is the magnitude of $F$’s directional derivative in the $\theta$
direction. It is clear that
$\Lambda(x,y,\theta):=\left\|\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}(x,y)\left(\begin{array}[]{c}\cos\theta\\\
\sin\theta\end{array}\right)\right\|=\left\|\left(\begin{array}[]{cc}u_{x}(x,y)&u_{y}(x,y)\\\
v_{x}(x,y)&v_{y}(x,y)\end{array}\right)\,\left(\begin{array}[]{c}\cos\theta\\\
\sin\theta\end{array}\right)\right\|\,.$ (2)
We refer to $\Lambda(x,y,\theta)$ in (2) as the local stretching associated
with a point $(x,y)\in\Omega$; note that this also depends on a choice of
angle $\theta$ in which an infinitesimal line is to be positioned. If we take
the supremum over all $\theta\in[-\pi/2,\pi/2)$ of the right-hand side of (2),
we would get the operator (matrix) norm
$\left\|\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}\right\|$, computable for
example via the Cauchy–Green tensor
$C(x,y):=\left[\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}(x,y)\right]^{\top}\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}(x,y)\,.$
(3)
Thus, our development has close relationships to well-established methods
related to the Cauchy–Green tensor, finite-time Lyapunov exponents, and
methods for determining Lagrangian coherent structures, which we describe in
more detail in Appendix D. However, at this stage our local stretching definition in
(2) is $\theta$-dependent.
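As a concrete numerical check of (2) and (3), the sketch below uses the classical Hénon map (one of the paper's later examples; the parameter values $a=1.4$, $b=0.3$ are the standard choice) and verifies that $\Lambda(x,y,\theta)^{2}$ equals the quadratic form $u^{\top}C(x,y)\,u$ with $u=(\cos\theta,\sin\theta)^{\top}$:

```python
import math

A, B = 1.4, 0.3  # classical Hénon parameters

def jacobian(x, y):
    """Gradient of F(x,y) = (1 - A x^2 + y, B x)."""
    return [[-2 * A * x, 1.0],
            [B, 0.0]]

def stretch(x, y, theta):
    """Local stretching Lambda(x,y,theta) from equation (2)."""
    J = jacobian(x, y)
    u = (math.cos(theta), math.sin(theta))
    v = (J[0][0] * u[0] + J[0][1] * u[1],
         J[1][0] * u[0] + J[1][1] * u[1])
    return math.hypot(v[0], v[1])

def cauchy_green(x, y):
    """C = (grad F)^T (grad F), equation (3)."""
    J = jacobian(x, y)
    return [[J[0][0]**2 + J[1][0]**2, J[0][0]*J[0][1] + J[1][0]*J[1][1]],
            [J[0][0]*J[0][1] + J[1][0]*J[1][1], J[0][1]**2 + J[1][1]**2]]

# Lambda^2 equals the quadratic form u^T C u at any point and angle:
x, y, th = 0.5, 0.2, 0.7
C = cauchy_green(x, y)
u = (math.cos(th), math.sin(th))
quad = C[0][0]*u[0]**2 + 2*C[0][1]*u[0]*u[1] + C[1][1]*u[1]**2
```

The $\theta$-dependence is explicit here: varying `th` sweeps the local stretching between the square roots of the two Cauchy–Green eigenvalues.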
###### Definition 1 (Isotropic and remaining sets).
The isotropic set $I\subset\Omega$ is defined by
$I:=\left\\{(x,y)\in\Omega\,\,:\,\,\frac{\partial\Lambda(x,y,\theta)}{\partial\theta}=0\,\,\,\forall\,\theta\in[-\pi/2,\pi/2)\right\\}\,,$
(4)
and the remaining set is
$\Omega_{0}:=\Omega\setminus I\,.$ (5)
The isotropic set $I$ consists of points at which the local stretching does
not depend on directionality of a local line segment. Given the smoothness we
have assumed in $F$, $I$ must be a ‘nice’ closed set; it cannot, for example,
be fractal. In general, $I$ may be empty, equal to $\Omega$, or consist of a
mixture of finitely many isolated points and closed regions of $\Omega$.
We are seeking a partition of $\Omega$ into a family of nonintersecting
curves, such that global stretching is optimized in a way to be made specific.
Since the local stretching at points in $I$ is impervious to the
directionality of lines passing through them, these families of curves only
need be defined on $\Omega_{0}=\Omega\setminus I$, with the understanding that
this has nonempty interior. In more formal language, we need to think of
singular codimension-$1$ foliations on $\Omega$, whose singularities are
restricted to $I$. We codify this in terms of the required geometric
properties of the family of curves:
###### Definition 2 (Restricted foliation).
A restricted foliation, $f$, on $\Omega$ consists of a family of curves
defined in the remaining set $\Omega_{0}$ such that
* (a)
The curves of $f$ (‘the leaves of the foliation’) are disjoint;
* (b)
The union of all these curves covers $\Omega_{0}$;
* (c)
The tangent vector varies in a ${\mathrm{C}}^{1}$-smooth fashion along each
curve.
Our definition is consistent with the local properties expected from a formal
definition of foliations on manifolds [10], but bears in mind that
$\Omega_{0}$ is not a manifold because of the omission of the closed set $I$
from $\Omega$. We remark that if $I$ consists of a finite number of points,
our restricted foliation definition is equivalent to that of a ‘punctured
foliation’ [11] on $\Omega$, where the punctures are at the points in $I$.
This turns out to be a generic expectation for $I$, and we will examine this
(both theoretically and numerically) in more detail later.
The properties of Definition 2 ensure that every restricted foliation $f$ is
associated with a unique ${\mathrm{C}}^{1}$-smooth angle field on the
remaining set $\Omega_{0}$ in the following sense. Given a point
$(x,y)\in\Omega_{0}$, there exists a unique curve from $f$ which passes
through it. The tangent line drawn at this point makes an angle $\theta_{f}$
with the positive $x$-axis. This angle can always be chosen uniquely modulo
$\pi$, from the set $[-\pi/2,\pi/2)$: vertical lines have $\theta_{f}=-\pi/2$,
while horizontal lines have $\theta_{f}=0$. Thus, every foliation induces a unique
angle field $\theta_{f}:\Omega_{0}\rightarrow[-\pi/2,\pi/2)$ (modulo $\pi$).
The angle field must be ${\mathrm{C}}^{1}$-smooth to complement the continuous
variation in the tangent spaces of $f$’s leaves. Conversely, suppose a
${\mathrm{C}}^{1}$-smooth angle field
$\theta_{f}:\Omega_{0}\rightarrow[-\pi/2,\pi/2)$ (modulo $\pi$) is given.
Given an arbitrary point $(x_{\alpha},y_{\alpha})\in\Omega_{0}$, the existence
of solutions to the differential equation
$\left(\sin\theta_{f}(x,y)\right)\mathrm{d}x-\left(\cos\theta_{f}(x,y)\right)\mathrm{d}y=0\,$
passing through the point $(x_{\alpha},y_{\alpha})$ ensures that there is an
integral curve of the form $g_{\alpha}(x,y)=0$, in which $g_{\alpha}$ is
${\mathrm{C}}^{1}$-smooth in both arguments. This is possible for each and
every $(x_{\alpha},y_{\alpha})\in\Omega_{0}$, and uniqueness ensures that the
curves $g_{\alpha}(x,y)=0$ do not intersect one another. Moreover,
$\Omega_{0}$ is spanned by
$\bigcup_{\alpha}\left\\{(x,y)\,:\,g_{\alpha}(x,y)=0\right\\}$ because
$\Omega_{0}=\bigcup_{\alpha}\left\\{(x_{\alpha},y_{\alpha})\right\\}$,
ensuring that there is a curve passing through every point
$(x_{\alpha},y_{\alpha})$. Hence, this process generates a unique restricted
foliation $f$ on $\Omega_{0}$.
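The correspondence above can be sketched computationally: integrating $(\mathrm{d}x/\mathrm{d}s,\mathrm{d}y/\mathrm{d}s)=(\cos\theta_{f},\sin\theta_{f})$ satisfies the displayed differential form, with $s$ playing the role of arclength. The angle field used below is an arbitrary smooth illustration of our own, not one induced by a particular map.

```python
import numpy as np

def theta_f(x, y):
    # An arbitrary C^1-smooth angle field, for illustration only.
    return 0.3 * x

def trace_leaf(x0, y0, ds=0.01, n=500):
    """Euler-step the integral curve through (x0, y0) at unit speed,
    so the parameter s along the leaf is arclength."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n):
        t = theta_f(x, y)
        x, y = x + ds * np.cos(t), y + ds * np.sin(t)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

xs, ys = trace_leaf(0.0, 0.0)
steps = np.hypot(np.diff(xs), np.diff(ys))  # every Euler step has length ds
```

Tracing from different initial points generates the disjoint leaves of the induced foliation.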
We are now in a position to define the global stretching which we seek to
optimize.
###### Definition 3 (Global stretching).
Given any restricted foliation $f$, we define the global stretching on
$\Omega$ as the local stretching integrated over $\Omega$, i.e.,
$\Sigma_{f}:=\int\\!\\!\\!\\!\int_{\Omega_{0}}\Lambda\left(x,y,\theta_{f}(x,y)\right)\,\mathrm{d}x\,\mathrm{d}y+\int\\!\\!\\!\\!\int_{I}\Lambda\left(x,y,\centerdot\right)\,\mathrm{d}x\,\mathrm{d}y\,,$
(6)
in which $\theta_{f}$ is the angle field induced by a choice of restricted
foliation $f$.
Notice that the integral over the full domain $\Omega$ has been split into one
over $\Omega_{0}$ (on which $f$ and thus $\theta_{f}$ is well-defined) and
over $I$ (over which the directionality has no influence on $\Lambda$, and has
thus been omitted). Thus, any understanding of foliation leaves on $I$ is
irrelevant to the global stretching, motivating our definition of restricted
foliation defined only on $\Omega_{0}$.
As the central premise of this work, we seek restricted foliations $f$ which
optimize (maximize, as well as minimize) $\Sigma_{f}$. Partitions of
$\Omega_{0}$ which are extremal in this way represent the greatest instability
or most stability associated with the dynamical system, and so orbits
associated with these are distinguished for their corresponding difficulties
in forecasting, or alternatively, relative coherence. Before we state the main
theorems, some notation is needed. On $\Omega$, we define the
${\mathrm{C}}^{1}$-smooth functions
$\phi(x,y)=\frac{u_{x}(x,y)^{2}+v_{x}(x,y)^{2}-u_{y}(x,y)^{2}-v_{y}(x,y)^{2}}{2}$
(7)
and
$\psi(x,y)=u_{x}(x,y)u_{y}(x,y)+v_{x}(x,y)v_{y}(x,y)$ (8)
in terms of the partial derivatives $u_{x}$, $u_{y}$, $v_{x}$ and $v_{y}$ of
the mapping $F$. First, we show the connection between zero level sets of
$\phi$ and $\psi$ and the isotropic set $I$.
###### Lemma 1 (Isotropic set).
The isotropic set $I$ defined in (4) can be equivalently characterized by
$I:=\left\\{(x,y)\in\Omega\,:\,\phi(x,y)=0\,\,{\mathrm{and}}\,\,\psi(x,y)=0\right\\}\,.$
(9)
###### Proof.
See A. ∎
We reiterate from this recharacterization of $I$ that generically, it will
consist of finitely many points (at which the curves $\phi(x,y)=0$ intersect
the curves $\psi(x,y)=0$), but may contain curve segments (if the two curves
are tangential in a region), or areas (if both $\phi$ and $\psi$ are zero in
two-dimensional regions). Even for the generic case (finitely many isolated
points), we will see that $I$ will strongly influence the nature of the
optimal foliations in $\Omega_{0}$.
Next, we define the angle field
$\theta^{+}:\Omega_{0}\rightarrow[-\pi/2,\pi/2)$ by
$\theta^{+}(x,y):=\frac{1}{2}\,\tilde{\tan}^{-1}\left(\psi(x,y),\phi(x,y)\right)\qquad({\mathrm{mod}}\,\pi)\,,$
(10)
in terms of the four-quadrant inverse tangent function
$\tilde{\tan}^{-1}(\tilde{y},\tilde{x})$ (sometimes called atan2 in computer
science applications, which assigns the angle in $[-\pi,\pi)$ associated with
the quadrant in $\left(\tilde{x},\tilde{y}\right)$-space when computing
$\tan^{-1}(\tilde{y}/\tilde{x})$). We also define the angle field
$\theta^{-}:\Omega_{0}\rightarrow[-\pi/2,\pi/2)$ by
$\theta^{-}(x,y)=\frac{\pi}{2}+\frac{1}{2}\,\tilde{\tan}^{-1}\left(\psi(x,y),\phi(x,y)\right)\qquad({\mathrm{mod}}\,\pi)\,,$
(11)
and observe that
$\theta^{+}(x,y)-\theta^{-}(x,y)=-\frac{\pi}{2}\qquad({\mathrm{mod}}\,\pi)\,.$
(12)
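A quick numerical check of (7)-(12) follows, again assuming the hypothetical map $F(x,y)=(x+0.3\sin y,\,y+0.3\sin x)$ as an illustration; NumPy's `arctan2` plays the role of the four-quadrant inverse tangent in (10).

```python
import numpy as np

def phi_psi(x, y):
    # Partial derivatives of the hypothetical map F(x,y)=(x+0.3 sin y, y+0.3 sin x)
    ux, uy = 1.0, 0.3 * np.cos(y)
    vx, vy = 0.3 * np.cos(x), 1.0
    phi = (ux**2 + vx**2 - uy**2 - vy**2) / 2.0   # (7)
    psi = ux * uy + vx * vy                        # (8)
    return phi, psi

def theta_plus(x, y):
    p, q = phi_psi(x, y)
    return 0.5 * np.arctan2(q, p)        # (10): arctan2(psi, phi)

def theta_minus(x, y):
    return np.pi / 2 + theta_plus(x, y)  # (11)

x0, y0 = 0.4, -0.7
diff_mod_pi = (theta_plus(x0, y0) - theta_minus(x0, y0)) % np.pi
```

The difference of the two angle fields is $\pi/2$ modulo $\pi$, as stated in (12).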
###### Lemma 2 (Equivalent characterizations of angle fields,
$\theta^{\pm}$).
On $\Omega_{0}$, $\theta^{\pm}\in[-\pi/2,\pi/2)$ are representable as
$\theta^{+}(x,y):=\tan^{-1}\frac{-\phi(x,y)+\sqrt{\phi(x,y)^{2}+\psi(x,y)^{2}}}{\psi(x,y)}\qquad({\mathrm{mod}}\,\pi)$
(13)
and
$\theta^{-}(x,y):=\tan^{-1}\frac{-\phi(x,y)-\sqrt{\phi(x,y)^{2}+\psi(x,y)^{2}}}{\psi(x,y)}\qquad({\mathrm{mod}}\,\pi)\,.$
(14)
###### Proof.
See B. ∎
###### Remark 1 (Removable singularities at $\psi=0$ and $\phi\neq 0$).
While it appears that points where $\psi=0$ but $\phi\neq 0$ are not in the
domain of $\theta^{+}$ as written in (13) and (14), these turn out to be
removable singularities, and thus can be thought of in the sense of keeping
$\phi$ constant and taking the limit $\psi\rightarrow 0$. More specifically,
this implies that
$\theta^{+}(x,y)\Big{|}_{\psi=0}=\left\\{\begin{array}[]{ll}-\pi/2&~{}~{}~{}~{}{\mathrm{if}}\,\phi<0\\\
0&~{}~{}~{}~{}{\mathrm{if}}\,\phi>0\end{array}\right.\,.$ (15)
With this understanding of dealing with the removable singularities, we will
simply view (13) as being defined on $\Omega_{0}$. Similarly,
$\theta^{-}(x,y)\Big{|}_{\psi=0}=\left\\{\begin{array}[]{ll}0&~{}~{}~{}~{}{\mathrm{if}}\,\phi<0\\\
-\pi/2&~{}~{}~{}~{}{\mathrm{if}}\,\phi>0\end{array}\right.\,.$ (16)
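The removable-singularity limits (15)-(16) can be observed directly from the representation (13), as this small sketch shows (holding $\phi$ fixed and shrinking $\psi$):

```python
import numpy as np

def theta_plus_formula(phi, psi):
    # The representation (13), valid away from psi = 0.
    return np.arctan((-phi + np.hypot(phi, psi)) / psi)

# phi < 0: the angle approaches pi/2, i.e. -pi/2 modulo pi, as in (15)
near_neg = theta_plus_formula(-1.0, 1e-8)
# phi > 0: the angle approaches 0
near_pos = theta_plus_formula(1.0, 1e-8)
```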
###### Remark 2 (Smoothness of $\theta^{\pm}$ in $\Omega_{0}$).
Subject to the removable singularity understanding of Remark 1, $\theta^{+}$
and $\theta^{-}$ are ${\mathrm{C}}^{1}$-smooth in $\Omega_{0}$, and thereby
respectively induce well-defined foliations $f^{+}$ and $f^{-}$ on
$\Omega_{0}$.
While the alternative expressions in (13)-(14) for $\theta^{\pm}$ are
theoretically equivalent to the definitions in (10)-(11), in practice the
choice between them causes differences when performing numerical optimal
foliation computations. We will highlight similarities and
differences between their usage in Section 6, and demonstrate these issues
numerically in Section 7.
We can now state our first main result:
###### Theorem 1 (Stretching Optimizing Restricted Foliation - Maximum
($\mbox{SORF}_{max}$)).
The restricted foliation $f^{+}$ which maximizes the global stretching (6) is
that associated with the angle field $\theta^{+}$. The corresponding maximum
of the global stretching (6) is
$\Sigma^{+}=\int\\!\\!\\!\\!\int_{\Omega}\left[\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}+\sqrt{\phi^{2}+\psi^{2}}\right]^{1/2}\,\mathrm{d}x\,\mathrm{d}y\,.$
(17)
###### Proof.
See C. ∎
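To illustrate evaluating (17) in practice, the sketch below forms a midpoint Riemann sum of the integrand over $[0,1]^{2}$, again assuming the hypothetical map $F(x,y)=(x+0.3\sin y,\,y+0.3\sin x)$:

```python
import numpy as np

n = 200
c = np.linspace(0.0, 1.0, n, endpoint=False) + 0.5 / n  # cell midpoints
X, Y = np.meshgrid(c, c)
# Partial derivatives of the hypothetical map on the grid
ux, uy = 1.0, 0.3 * np.cos(Y)
vx, vy = 0.3 * np.cos(X), 1.0
phi = (ux**2 + vx**2 - uy**2 - vy**2) / 2.0
psi = ux * uy + vx * vy
# Integrand of (17): the field Lambda^+ of (18)
Lam_plus = np.sqrt((ux**2 + uy**2 + vx**2 + vy**2) / 2.0 + np.hypot(phi, psi))
Sigma_plus = Lam_plus.mean()  # cell area 1/n^2 times the sum over n^2 cells
```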
###### Remark 3 (Lyapunov exponent field).
The integrand of (17) is the $\Lambda$ field associated with maximizing
stretching, and is given by
$\Lambda^{+}(x,y)=\left[\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}+\sqrt{\phi^{2}+\psi^{2}}\right]^{1/2}=\left\|\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}(x,y)\right\|\,.$
(18)
This is (a scaled version of) the standard Lyapunov exponent field. We avoid a
time-scaling here since, for example, $F$ may be derived from a sequence of
applications of various maps (indeed, any sequential combination of
discrete maps and continuous flows). Neither will we take a logarithm, since
we do not necessarily want to think of the stretching field as an exponent
because the finite ‘amount of time’ associated with $F$ depends on its
discrete/continuous nature, which is flexible in our implementation.
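The identity (18) can be verified pointwise; assuming once more the hypothetical map $F(x,y)=(x+0.3\sin y,\,y+0.3\sin x)$, we compare the closed form with the largest singular value of $\nabla F$:

```python
import numpy as np

x0, y0 = 1.1, 0.2
J = np.array([[1.0, 0.3 * np.cos(y0)],
              [0.3 * np.cos(x0), 1.0]])   # grad F at (x0, y0)
ux, uy, vx, vy = J[0, 0], J[0, 1], J[1, 0], J[1, 1]
phi = (ux**2 + vx**2 - uy**2 - vy**2) / 2.0
psi = ux * uy + vx * vy
# Closed form (18) for Lambda^+
Lam_plus = np.sqrt((ux**2 + uy**2 + vx**2 + vy**2) / 2.0 + np.hypot(phi, psi))
op_norm = np.linalg.norm(J, 2)            # largest singular value of grad F
```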
###### Remark 4 (Stretching on the isotropic set $I$).
The value of the global stretching restricted to $I$ (i.e., the second
integral in (6)) is, from (17),
$\displaystyle\int\\!\\!\\!\\!\int_{I}\left[\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}\right]^{1/2}\\!\\!\mathrm{d}x\,\mathrm{d}y\\!\\!$
$\displaystyle=$
$\displaystyle\\!\frac{1}{\sqrt{2}}\int\\!\\!\\!\\!\int_{I}\left\|\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}\right\|_{\mathrm{Frob}}\,\mathrm{d}x\,\mathrm{d}y$
$\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\int\\!\\!\\!\\!\int_{I}\left\\{{\mathrm{Tr}}\left[\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}\left(\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}\right)^{\top}\right]\right\\}^{1/2}\\!\\!\mathrm{d}x\,\mathrm{d}y$
$\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\int\\!\\!\\!\\!\int_{I}\left\\{{\mathrm{Tr}}\left[C(x,y)\right]\right\\}^{1/2}\mathrm{d}x\,\mathrm{d}y\,,$
(19)
expressed in terms of the Frobenius norm
$\left\|\centerdot\right\|_{\mathrm{Frob}}$ or trace
${\mathrm{Tr}}\left[\centerdot\right]$ of the Cauchy–Green tensor (3).
Similar to the maximizing result, we also have the minimal foliation:
###### Theorem 2 (Stretching Optimizing Restricted Foliation - Minimum
($\mbox{SORF}_{min}$)).
The restricted foliation $f^{-}$ which minimizes the global stretching (6) is
that associated with the angle field $\theta^{-}$. The corresponding minimum
of the global stretching (6) is
$\Sigma^{-}=\int\\!\\!\\!\\!\int_{\Omega}\left[\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}-\sqrt{\phi^{2}+\psi^{2}}\right]^{1/2}\,\mathrm{d}x\,\mathrm{d}y\,.$
(20)
###### Proof.
See C. ∎
###### Corollary 1 ($\mbox{SORF}_{max}$ and $\mbox{SORF}_{min}$ are
orthogonal).
If any curve from $\mbox{SORF}_{max}$ intersects a curve from
$\mbox{SORF}_{min}$ in $\Omega_{0}$, then it does so orthogonally.
###### Proof.
The $\mbox{SORF}_{max}$ and $\mbox{SORF}_{min}$ curves are respectively
tangential to the angle fields $\theta^{+}$ and $\theta^{-}$, which are known
to be orthogonal by (12). ∎
There is clearly a strong interaction between local properties and quantities
related to global stretching optimization. We summarize some properties below.
We do not discuss them in detail, but provide additional explanations in D.
###### Remark 5 (Maximal or minimal local stretching).
* (a)
Given a point $(x,y)\in\Omega_{0}$, if we pose the question of determining the
orientation of an infinitesimal line positioned here in order to experience
the maximum stretching, then this is at an angle $\theta^{+}$.
* (b)
The local maximal stretching associated with choosing the angle of orientation
$\theta^{+}$ is exactly the operator norm of the gradient of the map $F$,
which is expressible in terms of the Cauchy–Green tensor (3).
* (c)
The above quantity is associated with the Lyapunov exponent field, given in
(18), which is defined on all of $\Omega$ despite having the above
interpretation only on $\Omega_{0}$.
* (d)
In $\Omega_{0}$, the $\mbox{SORF}_{max}$ leaves (curves) lie along streamlines
of the eigenvector field of the Cauchy–Green tensor corresponding to the
larger eigenvalue. This eigenvector field can also be thought of as the
Lyapunov or Oseledec vector field associated with $F$.
* (e)
If the question is instead to find the orientation of an infinitesimal line
positioned at $(x,y)$ in order to experience the minimum stretching, then the
angle of this line is $\theta^{-}$. Compare this statement to observation (a)
together with Corollary 1.
* (f)
In $\Omega_{0}$ (Eq. (5)), the $\mbox{SORF}_{min}$ leaves lie along streamlines
of the eigenvector field of the Cauchy–Green tensor corresponding to the
smaller eigenvalue.
* (g)
The set $I$ corresponds to points in $\Omega$ at which the two eigenvalues of
the Cauchy–Green tensor coincide.
## 4 Behavior near singularities
The previous section’s optimization ignored the isotropic set $I$, since the
local stretching within $I$ was independent of direction. In this section, we
analyze the topological structure of our optimal foliations near generic
points in $I$, which can be thought of as singularities with respect to
optimal foliations. By Lemma 1, these are points where both $\phi$ and $\psi$
are zero.
###### Definition 4 (Nondegenerate singularity).
If a point $\mbox{\boldmath$p$}\in I$ is such that
$\Big{[}\mbox{\boldmath$\nabla$}\phi\times\mbox{\boldmath$\nabla$}\psi\Big{]}_{\mbox{\boldmath$p$}}\neq\mbox{\boldmath$0$}\quad{\mathrm{or~{}equivalently}}\quad\mathrm{det}\,\frac{\partial(\phi,\psi)}{\partial(x,y)}\Big{|}_{\mbox{\boldmath$p$}}\neq
0\,,$ (21)
then $p$ is a nondegenerate singularity.
Figure 1: Topological classification of nondegenerate singularities with
respect to $\mbox{SORF}_{max}$ or $\mbox{SORF}_{min}$: (a) a $1$-pronged (intruding) point,
and (b) a $3$-pronged (separating) point. See Property 1. Compare to Fig. 13.
Since by Hypothesis 1 both $\phi$ and $\psi$ are ${\mathrm{C}}^{1}$-smooth in
$\Omega$, their gradients are well-defined on $\Omega$. Nondegeneracy
precludes either $\phi$ or $\psi$ possessing critical points at $p$; thus, we
cannot get self-intersections of either $\phi=0$ or $\psi=0$ contours at $p$,
have local extrema of $\phi$ or $\psi$ at $p$, or have a situation where
$\phi$ or $\psi$ is constant in an open neighborhood around $p$. Nondegeneracy
also precludes $\phi=0$ and $\psi=0$ contours intersecting tangentially at $p$
(although we will be able to make some remarks about this situation later).
Thus, at nondegenerate points $p$, the curves $\phi=0$ and $\psi=0$ intersect
transversely. We explain in E how we obtain the following complete
classification for nondegenerate singularities, as illustrated in Fig. 1:
###### Property 1 ($1$\- and $3$-pronged singularities).
Let $\mbox{\boldmath$p$}\in I$ be a nondegenerate singularity, and let
$\hat{k}$ be the unit-vector in the $+z$-direction (i.e., ‘pointing out of the
page’ for a standard right-handed Cartesian system). Then,
* •
If $p$ is right-handed, i.e., if
$\Big{[}\mbox{\boldmath$\nabla$}\phi\times\mbox{\boldmath$\nabla$}\psi\Big{]}_{\mbox{\boldmath$p$}}\cdot\mbox{\boldmath$\hat{k}$}=\mathrm{det}\,\frac{\partial(\phi,\psi)}{\partial(x,y)}\Big{|}_{\mbox{\boldmath$p$}}>0\,,$
(22)
then $p$ is a 1-pronged singularity (an ‘intruding point’), with nearby
foliation of both $f^{+}$ and $f^{-}$ topologically equivalent to Fig. 1(a);
and
* •
If $p$ is left-handed, i.e., if
$\Big{[}\mbox{\boldmath$\nabla$}\phi\times\mbox{\boldmath$\nabla$}\psi\Big{]}_{\mbox{\boldmath$p$}}\cdot\mbox{\boldmath$\hat{k}$}=\mathrm{det}\,\frac{\partial(\phi,\psi)}{\partial(x,y)}\Big{|}_{\mbox{\boldmath$p$}}<0\,,$
(23)
then $p$ is a 3-pronged singularity (a ‘separating point’), with nearby
foliation of both $f^{+}$ and $f^{-}$ topologically equivalent to Fig. 1(b).
The intrusions/separations occur in opposite directions for the two orthogonal
foliations $f^{\pm}$.
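Once the gradients of $\phi$ and $\psi$ are available at a singularity, Property 1's test reduces to a sign check of the Jacobian determinant. The gradient values in the sketch below are illustrative placeholders, not derived from a specific map:

```python
import numpy as np

def classify_singularity(grad_phi, grad_psi):
    """Classify a nondegenerate singularity p via the sign of
    det d(phi,psi)/d(x,y), the z-component of grad_phi x grad_psi."""
    det = grad_phi[0] * grad_psi[1] - grad_phi[1] * grad_psi[0]
    if det > 0:
        return "1-pronged (intruding)"
    if det < 0:
        return "3-pronged (separating)"
    return "degenerate"

# Right-handed example: det = +1
right = classify_singularity(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
# Left-handed example: det = -1
left = classify_singularity(np.array([1.0, 0.0]), np.array([0.0, -1.0]))
```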
We use the ‘$1$-pronged’ and ‘$3$-pronged’ terminology from the theory of
singularities of measured foliations [12, 13]. We also note that in the case
of all singularities being nondegenerate, the curves on $\Omega_{0}$ may be
thought of as a punctured foliation [11, e.g.] on $\Omega$. These two
singularities also correspond to the index of the foliation being $+1/2$ and
$-1/2$ respectively (see, e.g., Fig. 1 in [14]). These two topologically
distinct singularities serve as the organizing skeleton around which the rest
of the SORF smoothly vary. These topologies have been observed numerically
[15, 16] but apparently not classified before.
We have claimed in Property 1 that the topology of $f^{-}$ is similar to that
of $f^{+}$ as illustrated in Fig. 1. To see why this is so, imagine reflecting
these curves about the vertical line going through $p$. This generates an
orthogonal set of curves, which are the complementary (orthogonal) foliation.
Thus, $f^{+}$ and $f^{-}$ have the same topology near $p$.
At the next order of degeneracy, we will have $\phi=0$ and $\psi=0$ contours
continuing to be curves, but now intersecting at $p$ tangentially. In that
case, it turns out that Fig. 2 gives the possible topologies for
$\mbox{SORF}_{max}$, which are explained in detail in E. If $p$ is not an
isolated point in $I$, then many other possibilities exist. The
$\mbox{SORF}_{min}$ in the mildly degenerate situations in Fig. 2 represent
curves which are orthogonal to the pictured ones, by Corollary 1. Their
topology will be identical.
Figure 2: Some possible topologies for $\mbox{SORF}_{max}$ near $p$ when
transversality is relaxed (see E for explanations of these structures).
## 5 Discontinuity in Lyapunov vectors
We have determined slope fields $\theta^{+}$ and $\theta^{-}$ corresponding to
maximizing and minimizing the global stretching. By Remark 5, maximizing the
local stretching at a point in $\Omega_{0}$ also results in an angle
corresponding to $\theta^{+}$. Such local stretching is well-studied; it is
related to the Lyapunov exponent, and the directions are associated with
Lyapunov vectors [5] or Oseledec spaces [3]. Additionally, the direction
associated with $\theta^{+}$ can be characterized in terms of the eigenvector
associated with the larger Cauchy–Green eigenvalue. See D for a more extensive
discussion of these connections.
Here, we analyze the vector fields associated with $\theta^{\pm}$ in some
detail, using the behavior in the $(\phi,\psi)$-plane introduced in the
previous section. The main observation is that, generically, it is not
possible to construct a ${\mathrm{C}}^{0}$-vector field on the closure of
$\Omega_{0}$ from the $\theta^{\pm}$ angle fields. This has implications for
numerically computing curves in the optimal foliations, where we give insight
into spurious effects that arise.
The $\theta^{+}$ field in $\Omega_{0}$ is given by (10). To determine a curve
from the $\mbox{SORF}_{max}$, we need to pick an initial point in
$\Omega_{0}$, and evolve it according to ‘the’ vector field generated from
$\theta^{+}$. A simple possibility would be to take the (unit) vector field
$\mbox{\boldmath$w$}^{+}(x,y):=\left(\begin{array}[]{c}\cos\left[\theta^{+}(x,y)\right]\\\
\sin\left[\theta^{+}(x,y)\right]\end{array}\right)\,,$ (24)
in which $\theta^{+}$ is computed from (10). In evolving trajectories
associated with this vector field—i.e., in determining streamlines of (24)—one
can of course multiply $\mbox{\boldmath$w$}^{+}$ by a scalar function
$m(x,y)$, which simply changes the parametrization along the
trajectory/streamline. As verified in D, (24) is indeed the eigenvector
associated with the larger eigenvalue of the Cauchy–Green tensor at $(x,y)$,
with the understanding that it can be multiplied by a nonzero scalar. The fact
that the eigenvector at each point is unique, modulo a constant multiple, is
of course directly related to these observations.
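The eigenvector claim can be checked directly at a point; the sketch below does so for the hypothetical map $F(x,y)=(x+0.3\sin y,\,y+0.3\sin x)$ (an assumption for illustration):

```python
import numpy as np

x0, y0 = 0.8, -0.3
J = np.array([[1.0, 0.3 * np.cos(y0)],
              [0.3 * np.cos(x0), 1.0]])           # grad F
C = J.T @ J                                        # Cauchy-Green tensor (3)
phi = (J[0, 0]**2 + J[1, 0]**2 - J[0, 1]**2 - J[1, 1]**2) / 2.0
psi = J[0, 0] * J[0, 1] + J[1, 0] * J[1, 1]
th = 0.5 * np.arctan2(psi, phi)                    # theta^+ from (10)
w = np.array([np.cos(th), np.sin(th)])             # w^+ from (24)
lam_max = np.linalg.eigvalsh(C)[-1]                # larger CG eigenvalue
residual = np.linalg.norm(C @ w - lam_max * w)     # ~ 0 if w is eigenvector
```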
Exactly the same arguments hold when attempting to compute the
$\mbox{SORF}_{min}$: from the angle field $\theta^{-}$ we can construct the
vector field
$\mbox{\boldmath$w$}^{-}(x,y):=\left(\begin{array}[]{c}\cos\left[\theta^{-}(x,y)\right]\\\
\sin\left[\theta^{-}(x,y)\right]\end{array}\right)\,,$ (25)
where $\theta^{-}$ is defined from (11).
###### Property 2 (Generating foliation curves using vector fields).
If generating a $\mbox{SORF}_{max}$ or $\mbox{SORF}_{min}$ curve in
$\Omega_{0}$, we can in general find solutions to
$\frac{d}{ds}\left(\begin{array}[]{c}x\\\
y\end{array}\right)=\mbox{\boldmath$w$}\left(x(s),y(s)\right)\quad;\quad\left(\begin{array}[]{c}x(0)\\\
y(0)\end{array}\right)=\left(\begin{array}[]{c}x_{0}\\\
y_{0}\end{array}\right)\,,$ (26)
where $s$ is the parameter along the curve and $(x_{0},y_{0})\in\Omega_{0}$,
and we can choose a Lyapunov vector field in the form
$\mbox{\boldmath$w$}(x,y)=m(x,y)\,\mbox{\boldmath$w$}^{\pm}(x,y)$ (27)
for a suitable scalar function $m$.
If we use $m\equiv 1$ on $\Omega_{0}$, the parametrization $s$ along the
trajectory is exactly the arclength. However, more general scalar functions
$m$ can be used in (26), reflecting the fact that the vector fields which
generate the foliations are actually direction fields, and thus can be
multiplied at each point by a scalar. The only restrictions are (i) $m$ can
never be zero, because if it is, we introduce a spurious fixed point in the
system (26) which ‘stops’ the curve, and (ii) $m$ is sufficiently smooth to
ensure that the equation (26) has unique ${\mathrm{C}}^{1}$-smooth solutions.
From the perspective of a SORF curve, making a choice of the function $m$
simply adjusts the parametrization along the curve. Notice that if we flip the
sign of $m$ we would be going along the curve in the opposite direction.
Figure 3: The map from $\Omega$ to $(\phi,\psi)$-space, illustrating the sets
$I^{\prime}$ and $B^{\prime}$ to which the sets $I$ and $B$ map. In red, we
have stated the value of the field $\theta^{+}$ in (10) in each quadrant.
To understand the generation of curves from (27), it helps to think of the
mapping from $\Omega$ to $(\phi,\psi)$-space, illustrated in Fig. 3. We have
already characterized an important subset of $\Omega$ in relation to this
mapping: the isotropic set $I$ is the kernel of this mapping (by Lemma 1). Its
image is denoted by $I^{\prime}$, the origin in $(\phi,\psi)$-space.
Another important set that we require is
###### Definition 5 (Branch cut).
The branch cut $B$ is the set of points $(x,y)\in\Omega$ given by
$B:=\left\\{(x,y)\in\Omega\,:\,\phi(x,y)<0~{}~{}{\mathrm{and}}~{}~{}\psi(x,y)=0\,\right\\}\,.$
(28)
The image $B^{\prime}$ of the branch cut is also shown in Fig. 3 as the
negative $\phi$-axis. In each of the four quadrants of Fig. 3, we have
carefully stated the value of the $\theta^{+}$ field in terms of the standard
inverse tangent function. We focus here near a nondegenerate singularity $p$,
where the $\phi=0$ and $\psi=0$ contours must cross $p$ transversely, given
that the Jacobian determinant of $(\phi,\psi)$ with respect to $(x,y)$ is
nonzero. The axis-crossings in Fig. 3 will have the same topology as these
contours if the determinant is positive (the map is orientation-preserving).
The relevant set $B$ in $\Omega_{0}$, near $p$, must therefore have the
structure as seen in Fig. 4(a). Consider a small circle around $p$ as drawn in
Fig. 4(a), with the directions of the vector field
$\mbox{\boldmath$w$}^{+}$ along it indicated via arrows. The reasons for these directions stem
directly from Fig. 3; we need to take the cosine (for the $x$-component) and
the sine (for the $y$-component) of the angle field defined therein. While
$\mbox{\boldmath$w$}^{+}$ must vary smoothly along the circle, it exhibits a
discontinuity across the branch cut $B$, because the angle has rotated around
from $-\pi/2$ to $+\pi/2$. Clearly, the same behavior occurs for left-handed
$p$: in this case we need to consider Fig. 3 with the $\psi$-axis flipped
(this orientation-reversing case is indeed pictured in Fig. 13(b)). Once
again, it is the negative $\phi$-axis to which the branch cut $B\subset\Omega_{0}$ gets
mapped. The intuition of Fig. 4 leads us to a theoretical issue related to using
a vector field to find curves:
Figure 4: Vector field of (26) using $\mbox{\boldmath$w$}^{+}$, near a
nondegenerate singularity $p$, with the branch cut $B$ shown in green: (a)
right-handed $p$ and (b) left-handed $p$.
###### Theorem 3 (Impossibility of continuous Lyapunov vector field).
If there exists at least one nondegenerate singularity
$\mbox{\boldmath$p$}\in\Omega$, then no nontrivial scalar function $m$ in (26)
exists such that the right-hand side (i.e., vector field associated with the
angle field $\theta^{+}$) is a ${\mathrm{C}}^{0}$-smooth nonzero vector field
in $\Omega_{0}$. The same conclusion holds for vector fields generated from
$\theta^{-}$.
###### Proof.
See F. ∎
## 6 Computational issues of finding foliations
In the previous section, we have outlined a theoretical concern in defining a
vector field for computing optimal foliations. We show here related numerical
issues which emerge when attempting to compute foliating curves.
First, we remark that using a vector field to generate curves of streamlines
of eigenvector fields of a tensor—which as seen here are equivalent to
$\mbox{SORF}_{max}$ and $\mbox{SORF}_{min}$ curves—is standard practice.
Numerical issues in doing so have been observed previously, and ad hoc
remedies proposed:
* •
In generating trajectories following ‘smooth’ fields from grid-based data, one
suggested approach is to keep checking the direction of the vector field
within each cell a trajectory ventures into, and then flip the vector field at
the bounding gridpoints to all be in the same direction before interpolating
[16].
* •
In dealing with points at which the eigenvector field is not defined, an
approach is to mollify the field by multiplying with a sufficiently smooth
field which is zero at such points (e.g., the square of the difference in the
two eigenvalues [15]).
Our Theorem 3 gives explicit insights into the nature of both these issues.
Both ad hoc numerical methods relate to choosing the function $m$
(respectively as $\pm 1$, or a smooth scalar field which is zero at
singularities). In either case, actual behavior near the singularities gets
blurred by this process.
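As a concrete illustration of the first remedy (sign-flipping corner vectors before interpolation), here is a minimal sketch; the function name and sample data are ours, not taken from [16]:

```python
import numpy as np

def align_to_reference(vectors, ref):
    """Flip each direction vector so its dot product with ref is nonnegative,
    making a direction field locally sign-consistent before interpolation."""
    signs = np.where(vectors @ ref >= 0.0, 1.0, -1.0)
    return vectors * signs[:, None]

# Four grid-cell corner directions with inconsistent signs
corners = np.array([[1.0, 0.1], [-0.9, -0.2], [0.95, 0.0], [-1.0, 0.05]])
aligned = align_to_reference(corners, np.array([1.0, 0.0]))
```

This corresponds to the choice $m=\pm 1$ cell by cell; as Theorem 3 shows, no such choice can be globally continuous near a nondegenerate singularity.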
The branch cut near singularities also leads to more subtle—and apparently
hitherto unidentified in the literature of following streamlines of tensor
fields—issues when performing numerical computations. In G, we explain why the
following occur.
###### Property 3 (Numerical computation of optimal foliations using vector
fields).
Suppose we numerically compute a $\mbox{SORF}_{max}$ (resp.
$\mbox{SORF}_{min}$) curve by using (26) with $m=1$ and the vector field
$\mbox{\boldmath$w$}^{+}$ (resp. $\mbox{\boldmath$w$}^{-}$), by allowing the
parameter $s$ to evolve in both directions. Then
* (a)
$\mbox{SORF}_{max}$ curves will not cross a one-dimensional part of $B$
vertically, and may also veer along $B$ even though $B$ may not be a genuine
$\mbox{SORF}_{max}$ curve;
* (b)
$\mbox{SORF}_{min}$ curves will not cross a one-dimensional part of $B$
horizontally, and may also veer along $B$ even though $B$ may not be a genuine
$\mbox{SORF}_{min}$ curve.
These problems are akin to branch splitting issues arising when applying curve
continuation methods in instances such as bifurcations [17]. Is it possible to
choose a function $m$ which is not identically $1$ to remove these
difficulties? The proof of Theorem 3 tells us that the answer is no. Either
the branch cut gets moved to a different curve connected to $p$ across which
there is a similar discontinuity, or it gets converted to a curve which has
spurious fixed points (i.e., a center manifold curve) because $m=0$ on it. In
either case, the numerical evaluation will give problems.
Thus, there are several numerical issues in computing foliations using the
vector fields $\mbox{\boldmath$w$}^{\pm}$. Lemma 2 suggests a straightforward
alternative method for numerically computing such curves in generic
situations, while systematically avoiding all these issues. Let
$\displaystyle\Phi_{-}$ $\displaystyle:=$
$\displaystyle\left\\{(x,y):\phi(x,y)<0~{}~{}{\mathrm{and}}~{}~{}\psi(x,y)=0\right\\}\quad{\mathrm{and}}$
$\displaystyle\Phi_{+}$ $\displaystyle:=$
$\displaystyle\left\\{(x,y):\phi(x,y)>0~{}~{}{\mathrm{and}}~{}~{}\psi(x,y)=0\right\\}\,;$
these are points mapping to the ‘negative $\phi$-axis’ and the ‘positive
$\phi$-axis’ (see Figs. 3 and 13), and we also note that $\Phi_{-}=B$. In
seeking the maximizing foliation, we define on $\Omega_{0}\setminus\Phi_{-}$,
$h^{+}(x,y)=\left\\{\begin{array}[]{ll}\frac{-\phi(x,y)+\sqrt{\phi^{2}(x,y)+\psi^{2}(x,y)}}{\psi(x,y)}&~{}~{}{\mathrm{if}}~{}~{}\psi(x,y)\neq
0\\\
0&~{}~{}{\mathrm{if}}~{}~{}\psi(x,y)=0~{}{\mathrm{and}}~{}\phi(x,y)>0\end{array}\right.\,.$
(29)
This is essentially the function $\tan\theta^{+}$ as defined in (13), and is
${\mathrm{C}}^{1}$ in $\Omega_{0}\setminus\Phi_{-}$ because of Remark 1. The
reason for not defining $h^{+}$ on $\Phi_{-}$ is that the relevant tangent
line becomes vertical. Hence we define its reciprocal, ${\mathrm{C}}^{1}$ on
$\Omega_{0}\setminus\Phi_{+}$, by
$\mathrel{\raisebox{0.0pt}{\rotatebox[origin={c}]{180.0}{$h$}}}^{+}(x,y):=\left\\{\begin{array}[]{ll}\frac{\phi(x,y)+\sqrt{\phi^{2}(x,y)+\psi^{2}(x,y)}}{\psi(x,y)}&~{}~{}{\mathrm{if}}~{}~{}\psi(x,y)\neq
0\\\
0&~{}~{}{\mathrm{if}}~{}~{}\psi(x,y)=0~{}{\mathrm{and}}~{}\phi(x,y)<0\end{array}\right.\,.$
(30)
The minimizing foliation is associated with the angle field $\theta^{-}$. Thus
we define on $\Omega_{0}\setminus\Phi_{+}$,
$h^{-}(x,y):=\left\\{\begin{array}[]{ll}\frac{-\phi(x,y)-\sqrt{\phi^{2}(x,y)+\psi^{2}(x,y)}}{\psi(x,y)}&~{}~{}{\mathrm{if}}~{}~{}\psi(x,y)\neq
0\\\
0&~{}~{}{\mathrm{if}}~{}~{}\psi(x,y)=0~{}{\mathrm{and}}~{}\phi(x,y)<0\end{array}\right.\,,$
(31)
which gives the slope field associated with $\theta^{-}$, and on
$\Omega_{0}\setminus\Phi_{-}$ its reciprocal
$\mathrel{\raisebox{0.0pt}{\rotatebox[origin={c}]{180.0}{$h$}}}^{-}(x,y):=\left\\{\begin{array}[]{ll}\frac{\phi(x,y)-\sqrt{\phi^{2}(x,y)+\psi^{2}(x,y)}}{\psi(x,y)}&~{}~{}{\mathrm{if}}~{}~{}\psi(x,y)\neq
0\\\
0&~{}~{}{\mathrm{if}}~{}~{}\psi(x,y)=0~{}{\mathrm{and}}~{}\phi(x,y)>0\end{array}\right.\,.$
(32)
###### Property 4 (Foliations as integral curves).
Within $\Omega_{0}$, a $\mbox{SORF}_{max}$ curve can be determined by taking
an initial point $(x_{0},y_{0})$ and then numerically following
$\frac{dy}{dx}=h^{+}(x,y)~{}~{}~{}{\mathrm{if}}~{}~{}\left|h^{+}(x,y)\right|\leq
1~{}~{}~{}~{}{\mathrm{and}}~{}~{}~{}~{}\frac{dx}{dy}=\,\mathrel{\raisebox{0.0pt}{\rotatebox[origin={c}]{180.0}{$h$}}}^{+}(x,y)~{}~{}~{}{\mathrm{otherwise}}\,,$
(33)
where we keep switching between the equations depending on the size of
$\left|h^{+}\right|$. This generates a sequence $(x_{i},y_{i})$ to numerically
approximate an integral curve. Similarly, a $\mbox{SORF}_{min}$ curve can be
determined in $\Omega_{0}$ as integral curves of
$\frac{dy}{dx}=h^{-}(x,y)~{}~{}~{}{\mathrm{if}}~{}~{}\left|h^{-}(x,y)\right|\leq
1~{}~{}~{}~{}{\mathrm{and}}~{}~{}~{}~{}\frac{dx}{dy}=\,\mathrel{\raisebox{0.0pt}{\rotatebox[origin={c}]{180.0}{$h$}}}^{-}(x,y)~{}~{}~{}{\mathrm{otherwise}}\,.$
(34)
Property 4 is an attractive alternative which avoids issues related to the
branch cut and vector field discontinuities. Moreover, it is directly
expressed in terms of the functions $\phi$ and $\psi$ via the straightforward
definitions of $h^{\pm}$ and
$\mathrel{\raisebox{0.0pt}{\rotatebox[origin={c}]{180.0}{$h$}}}^{\pm}$. The
switching between the $dy/dx$ and $dx/dy$ forms avoids the infinite slopes
which may result if only one of these forms is used. Thus, we can follow a
particular curve as it meanders around $\Omega_{0}$, having vertical and
horizontal tangents, and also crossing branch cuts, with no problem.
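For concreteness, the four slope functions in (13), (14) and (30)–(32) can be coded directly from $\phi$ and $\psi$. The following Python sketch (the function name is ours; scalar inputs off the isotropic set are assumed) returns all four values, using `inf` to mark the member of each pair that is undefined:

```python
import numpy as np

def slopes(phi, psi):
    """Slope fields of the optimal foliations, following (13), (14) and
    (30)-(32).  Returns (h_plus, h_plus_recip, h_minus, h_minus_recip).
    A sketch assuming scalar phi, psi with (phi, psi) != (0, 0)."""
    r = np.sqrt(phi**2 + psi**2)
    if psi != 0.0:
        h_p = (-phi + r) / psi          # tan(theta^+), eq. (13)
        hr_p = (phi + r) / psi          # its reciprocal, eq. (30)
        h_m = (-phi - r) / psi          # tan(theta^-), eq. (14)
        hr_m = (phi - r) / psi          # its reciprocal, eq. (32)
    else:
        # psi == 0: one member of each pair is 0, the other is undefined
        # (the corresponding tangent line is horizontal or vertical).
        if phi > 0:                     # on Phi_+
            h_p, hr_p, h_m, hr_m = 0.0, np.inf, np.inf, 0.0
        else:                           # on Phi_-
            h_p, hr_p, h_m, hr_m = np.inf, 0.0, 0.0, np.inf
    return h_p, hr_p, h_m, hr_m
```

Wherever all quantities are finite, $h^{+}$ times its reciprocal is $1$, and $h^{+}h^{-}=-1$, reflecting the orthogonality of the two foliations.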
## 7 Numerical examples of optimal foliations
We will demonstrate applications of the theory to several maps $F$, generated
from several applications of discrete maps, and from sampling flows driven by
unsteady velocities. The examples include situations which are highly
disordered (e.g., maps known to be chaotic under repeated iterations, flows
known to possess chaos over infinite times). Moreover, the maps $F$ need not
be area-preserving.
In order to retain sufficient resolution to view relevant features in the many
subfigures that we present in this Section, we will dispense with axis labels
when these are self-evident: $x$ will be the horizontal axis and $y$ the
vertical as per standard convention.
### 7.1 Hénon map
Figure 5: Optimal foliation computations for
$\mbox{\boldmath$F$}=\mathfrak{H}^{4}$: (a) the logarithm of the maximum
stretching field $\Lambda_{+}$, (b) zero contours of $\phi$ and $\psi$, (c)
vector field $\mbox{\boldmath$w$}^{+}$ generated from (24), (d) vector field
$\mbox{\boldmath$w$}^{-}$ generated from (25), (e) $\mbox{SORF}_{max}$ by
implementing vector field in (c), (f) $\mbox{SORF}_{min}$ by implementing
vector field in (d), (g) $\mbox{SORF}_{max}$ with branch cut (green), (h)
$\mbox{SORF}_{min}$ with branch cut (green).
As our first example, consider the Hénon map, which is defined by [8]
$\mathfrak{H}(x,y)=\left(\begin{array}[]{c}1-ax^{2}+y\\\ bx\end{array}\right)$
on $\Omega=\mathbb{R}^{2}$, and where we make the classical parameter choices
$a=1.4$ and $b=0.3$. We choose $F$ to be four iterations of the Hénon map,
i.e., $\mbox{\boldmath$F$}=\mathfrak{H}^{4}$. Fig. 5 demonstrates the computed
foliations and related graphs. The stretching field $\Lambda^{+}$ is first
displayed in Fig. 5(a). In Fig. 5(b), we show the zero contours of $\phi$ and
$\psi$. In this case, there are no clean transversalities. Indeed, there are
several regions of near-tangencies, and the fact that several of the zero
contours almost coincide in the two outer streaks of the figure indicates that
degenerate foliations are to be expected in their vicinity. The ‘squashing
together’ that is occurring here is because we are at an intermediate stage in
which initial conditions are gradually collapsing to the Hénon attractor. The
vector fields $\mbox{\boldmath$w$}^{\pm}$, computed using (24) and (25) and
shown in Figs. 5(c,d) display discontinuities, which impact the computation of
the SORF curves in (e) and (f). These are obtained by seeding 300 initial
locations randomly in the domain, and then computing streamlines generated
from (26) with $m=1$ in forward, as well as backward, $s$. Since the $\phi$
and $\psi$ fields have large variations at small spatial scales because of the
chaotic nature of the map, finding the branch cut $B$ (where $\psi=0$
and $\phi<0$) as obtained from (28) requires care. We assess each gridpoint,
and color it (in green) if it has a different sign of $\psi$ from
any of its four nearest neighbors and the $\phi$ value at this point is
negative. The lowermost panels overlay the (green) set $B$ on the SORF curves,
indicating why some of the apparent behavior in (e) and (f) is not
representative of the true foliation; the center vertical line in (f), for
example, occurs because of Property 3(b), while the $\mbox{SORF}_{max}$ (resp.
$\mbox{SORF}_{min}$) curves stop abruptly on $B$ if crossing vertically (resp.
horizontally).
Figure 6: Zooming in to an area associated with the map
$\mbox{\boldmath$F$}=\mathfrak{H}^{4}$ (a) the zero contours of $\phi$ and
$\psi$, (b) the $\mbox{SORF}_{max}$, and (c) the $\mbox{SORF}_{min}$.
On the other hand, Fig. 5(b) indicates that the zero contours of $\phi$ and
$\psi$ almost coincide on two curves: ‘outer’ and ‘inner’ parabolic shapes.
These are also identified as part of the branch cut set $B$ because
$\psi\approx 0$ and $\phi$ is slightly negative here. These curves are
‘almost’ curves of $I$, and we see accumulation of $\mbox{SORF}_{max}$ curves
towards these, indicating—at this level of resolution—potential degeneracy of
the foliation. We zoom in to this in Fig. 6. In conjunction with the
explanations in Fig. 13, what occurs here is that the inner green line in Fig.
6(a) must have a slope field which is $-\pi/2$ (it is in $\Phi_{-}=B$ with
respect to Fig. 6), while on the inner pink line it should be $-\pi/4$
(corresponding to $\Psi_{-}$ in Fig. 13(a)). The extreme closeness of the
contours means that a very sharp change in direction must be achieved in a
tiny region, which then visually appears as a form of degeneracy.
This example highlights an important computational issue which is very
general: even though relevant foliations will exist, in order to resolve them,
one needs a spatial resolution fine enough to capture the spatial changes in the
$\phi$ and $\psi$ fields.
### 7.2 Double-gyre flow
As an example of when $F$ is generated from a finite-time flow, let us
consider the flow map from time $t=0$ to $2$ generated from the differential
equation
$\frac{d}{dt}\left(\begin{array}[]{c}x\\\
y\end{array}\right)=\left(\begin{array}[]{l}-\pi A\sin\left[\pi
g(x,t)\right]\cos\left[\pi y\right]\\\ \pi A\cos\left[\pi
g(x,t)\right]\sin\left[\pi y\right]\frac{\partial g}{\partial
x}(x,t)\end{array}\right)\,,$ (35)
in which $g(x,t):=\varepsilon\sin\left(\omega
t\right)x^{2}+\left[1-2\varepsilon\sin\left(\omega t\right)\right]x$ and
$\Omega=(0,2)\times(0,1)$. This is the well-studied double-gyre model [4], but
we exclude the boundary of the domain. We use the parameter values $A=1$,
$\omega=2\pi$ and $\varepsilon=0.1$, and the optimal reduced foliations are
demonstrated in Fig. 7.
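For readers wishing to reproduce such computations, a minimal sketch of the flow map and its finite-difference Jacobian follows (the fixed-step RK4 integrator, step count, and difference increment are our own illustrative choices, not the authors' exact scheme):

```python
import numpy as np

A, omega, eps = 1.0, 2.0 * np.pi, 0.1

def g(x, t):
    s = eps * np.sin(omega * t)
    return s * x**2 + (1.0 - 2.0 * s) * x

def velocity(p, t):
    # right-hand side of (35)
    x, y = p
    s = eps * np.sin(omega * t)
    dgdx = 2.0 * s * x + 1.0 - 2.0 * s
    return np.array([-np.pi * A * np.sin(np.pi * g(x, t)) * np.cos(np.pi * y),
                      np.pi * A * np.cos(np.pi * g(x, t)) * np.sin(np.pi * y) * dgdx])

def flow_map(p, t0=0.0, t1=2.0, steps=400):
    """Fixed-step RK4 flow map of (35) from t0 to t1; a sketch."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity(p + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity(p + h * k3, t + h)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return p

def grad_F(p, d=1e-5):
    # central finite differences for the flow-map Jacobian
    ex, ey = np.array([d, 0.0]), np.array([0.0, d])
    col_x = (flow_map(p + ex) - flow_map(p - ex)) / (2 * d)
    col_y = (flow_map(p + ey) - flow_map(p - ey)) / (2 * d)
    return np.column_stack([col_x, col_y])
```

Since the velocity field (35) is divergence-free, the flow map is area-preserving, which provides a handy numerical check: $\det\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}\approx 1$ at every point.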
Figure 7: Optimal foliation computations for the double-gyre flow: (a) The
logarithm of the field $\Lambda^{+}$, (b) zero contours of $\phi$ and $\psi$,
(c) vector field $\mbox{\boldmath$w$}^{+}$ generated from (24), (d) vector
field $\mbox{\boldmath$w$}^{-}$ generated from (25), (e) $\mbox{SORF}_{max}$
by implementing vector field in (c), (f) $\mbox{SORF}_{min}$ by implementing
vector field in (d), (g) $\mbox{SORF}_{max}$ with branch cut (green), (h)
$\mbox{SORF}_{min}$ with branch cut (green).
Fig. 7(a) is a classical figure in this context: the logarithm of the field
$\Lambda^{+}$; if divided by the time-of-flow $2$, this is the finite-time
Lyapunov exponent field. Fig. 7(b) indicates the $\phi=0$ and $\psi=0$
contours, with their intersections defining $I$. We use the ‘standard’
$\mbox{\boldmath$w$}^{\pm}$ unit versions, Eqs. (24) and (25), to generate the vector
fields in (c) and (d), and the corresponding SORFs are determined in (e) and
(f). Figs. 7(g) and (h) overlay the branch cuts (green), which are parts of
the green curves in Fig. 7(b) at which $\phi<0$. As expected, the
$\mbox{SORF}_{max}$ curves fail to cross the branch cut vertically, as do the
$\mbox{SORF}_{min}$ curves horizontally. Moreover, foliation curves which do
get pushed in towards the branch cuts tend to meander along them, giving the
impression of spurious accumulations. We zoom in towards one of these regions in
Fig. 8; the requirement that $\mbox{SORF}_{max}$ curves have slopes $-\pi/4$
(resp. $+\pi/2$) on $\Phi_{-}$ (resp. $\Phi_{+}$) results in abrupt curving.
The accumulation is not exactly to $\Psi_{-}$, but rather to a curve which is
very close, as seen in Fig. 8(b). Thus, it is not true that there is a one-
dimensional part of the isotropic set $I$ along here. The geometric insights
of the previous sections allow us to understand and interpret these issues,
while appreciating how resolution may give misleading visual cues.
Figure 8: Zooming in to near an ‘accumulating’ $\mbox{SORF}_{max}$ from Fig.
7: (a) the relevant zero contours of $\phi$ and $\psi$, and (b) the
$\mbox{SORF}_{max}$.
Figure 9: Zooming in to the $\mbox{SORF}_{max}$ (left) and $\mbox{SORF}_{min}$
(right) in the double-gyre. The top and bottom panels correspond to different
locations, respectively near two adjacent intruding ($1$-pronged) points, and
a separating ($3$-pronged) point. The branch cut is shown in green. Compare to
Fig. 1 and Property 1.
In Fig. 9, we zoom in to two different locations, chosen by zeroing in on
two different intersection points of the zero $\phi$ and $\psi$-contours. The
top panels illustrate the $\mbox{SORF}_{max}$ (left) and the
$\mbox{SORF}_{min}$ (right) curves at the same location. The theory related to
$1$-pronged intruding points is well-demonstrated, with there being two such
points adjacent to each other. The two orthogonal families ‘reverse’ the
locations of the singularities for the maximizing and minimizing foliations,
and the branch cut (green) forms vertical/horizontal barriers as appropriate.
In contrast, the bottom figures are of a $3$-pronged separating point; again,
the numerics validate the theory.
### 7.3 Chirikov map
Figure 10: Optimal foliation computations for the Chirikov map
$\mbox{\boldmath$F$}=\mathfrak{C}_{2}^{4}$: (a) the logarithm of the field
$\Lambda^{+}$, (b) zero contours of $\phi$ and $\psi$, (c) $\mbox{SORF}_{max}$
with branch cut (green), (d) $\mbox{SORF}_{min}$ with branch cut (green).
The Chirikov (also called ‘standard’) map is defined on the doubly-periodic
domain $\Omega=[0,2\pi)\times[0,2\pi)$ by [9]
$\mathfrak{C}_{k}(x,y)=\left(\begin{array}[]{c}x+y+k\sin
x~{}~{}~{}({\mathrm{mod}}\,\,2\pi)\\\ y+k\sin
x~{}~{}~{}({\mathrm{mod}}\,\,2\pi)\end{array}\right)\,.$
We choose $\mbox{\boldmath$F$}=\mathfrak{C}_{k}^{n}$, that is, $n$ iterations
of the Chirikov map for a given value of the parameter $k$. Increasing $k$
increases the disorder of the map, as does having $n$ large. (The map is a
classical example of chaos, with $\Omega$ consisting of quasiperiodic islands
in a chaotic sea, where ‘chaos/chaotic’ must be understood in the limit
$n\rightarrow\infty$.) In more disorderly situations, increasingly fine
resolution is required to reveal the structures that we have defined.
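A corresponding sketch for the Chirikov case (helper names are our own) is given below; note that the mod operation does not affect derivatives, so the chain rule accumulates the single-iterate Jacobians directly:

```python
import numpy as np

def chirikov(p, k):
    x, y = p
    return np.array([(x + y + k * np.sin(x)) % (2 * np.pi),
                     (y + k * np.sin(x)) % (2 * np.pi)])

def grad_Fn(p, k, n):
    """Jacobian of n Chirikov iterates via the chain rule; the mod in
    the map has no effect on the derivative.  A sketch."""
    DF = np.eye(2)
    for _ in range(n):
        c = k * np.cos(p[0])
        DF = np.array([[1.0 + c, 1.0], [c, 1.0]]) @ DF
        p = chirikov(p, k)
    return DF
```

Each single-iterate factor has unit determinant, so $\det\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}=1$: the map is area-preserving, and the product of the two singular values of $\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}$ is $1$ everywhere.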
Relevant computations for $k=2$ and $n=4$ are shown in Fig. 10. There are
significant regions where the behavior is quite orderly. There is ‘greater
disorder’ in the region foliated with large values of $\Lambda^{+}$ in
(a)—indeed, this region is associated with the ‘chaotic sea’ when the map is
iterated many more times—with the outer parts of low $\Lambda^{+}$ being
associated with quasiperiodic islands and hence order. All features mentioned
in previous examples are reiterated in the pictures. Moreover, the
$\mbox{SORF}_{min}$ foliation somewhat mirrors the structure expected from
classical Poincaré section numerics.
If we instead consider $k=1$ and $n=2$, an interesting degenerate singularity
(corresponding to the $\psi=0$ contour crossing exactly a saddle point of
$\phi$) is displayed in Fig. 11. The singularity in the $\mbox{SORF}_{max}$
foliation (b) appears like a degenerate form of a separating point, if
thinking in terms of curves coming from above. However, if viewed in terms of
curves coming in from below, it appears as an intruding point with a sharp
(triangular) end. The $\mbox{SORF}_{min}$ conforms to this, having elements of
a separating point, and an intruding point, as well. (The numerical issue of
$\mbox{SORF}_{min}$ not crossing $B$ horizontally is displayed in Fig. 11(c);
in reality, the $\mbox{SORF}_{min}$ curves should connect smoothly across.)
Figure 11: A degenerate singularity of the map
$\mbox{\boldmath$F$}=\mathfrak{C}_{1}^{2}$, shown zoomed-in: (a) the zero
contours of $\phi$ and $\psi$, (b) $\mbox{SORF}_{max}$, and (c)
$\mbox{SORF}_{min}$.
Next, we demonstrate in Fig. 12, using
$\mbox{\boldmath$F$}=\mathfrak{C}_{2}^{2}$, the efficacy of using the
integral-curve forms (33) and (34) of the foliations, rather than using a
vector field. The $\ln\Lambda^{+}$ field in Fig. 12(a) has several sharp
ridges; these are well captured by locations where the $\phi$ and $\psi$ zero-
contours in Fig. 12(b) coincide. The $\mbox{SORF}_{max/min}$ foliations in (c)
and (d) are computed respectively using the vector fields
$\mbox{\boldmath$w$}^{\pm}$ as in previous situations, and exhibit the usual
issues when crossing $B$. In contrast, the lower row is generated by using the
integral-curve forms (33) and (34), where we have once again started from
$300$ random initial conditions. For each initial condition $(x_{1},y_{1})$,
we define the next point $(x_{2},y_{2})$ on a $\mbox{SORF}_{max}$ curve by
$x_{2}=x_{1}+\mathrel{\raisebox{0.0pt}{\rotatebox[origin={c}]{180.0}{$h$}}}^{+}(x_{1},y_{1})\delta
y$ where $\delta y>0$ is the spatial resolution in the $y$-direction, and
$dx/dy$ is based on (33). Similarly, $y_{2}=y_{1}+h^{+}(x_{1},y_{1})\delta x$
using (33), where $\delta x>0$ is the resolution chosen in the $x$-direction.
This initializes the process. Next, we check the value of
$h^{+}(x_{2},y_{2})$, thereby deciding which of the equations in (33) to
implement. If the $dy/dx$ equation, we take
$x_{3}=x_{2}+{\mathrm{sign}}\left(x_{2}-x_{1}\right)\delta x$, and thus find
$y_{3}$ using the ODE solver. Having now obtained $(x_{3},y_{3})$, we again
use the last two points to make decisions on which of the two equations to
use, and continue in this fashion for a predetermined number of steps. Next,
we go back to $(x_{1},y_{1})$ and now set
$x_{2}=x_{1}-\mathrel{\raisebox{0.0pt}{\rotatebox[origin={c}]{180.0}{$h$}}}^{+}(x_{1},y_{1})\delta
y$ and $y_{2}=y_{1}-h^{+}(x_{1},y_{1})\delta x$, thereby going in the opposite
direction. Having initiated this process, we can then continue this curve
using the same continuation scheme. The $\mbox{SORF}_{min}$ are obtained
similarly, using the two equations in (34). There is sensitivity in the
process to locations where $\phi$ and $\psi$ change rapidly (they are each of
the order $10^{5}$ in this situation), and in particular near their zeros.
The resolution scales $\delta x$ and $\delta y$ need to be reduced
sufficiently to not capture spurious effects. Notice that there are no branch-
cut problems in the resulting foliations obtained using the integral-curve
approach, since we do not have to worry about a discontinuity in a vector
field. Nor do any curves stop abruptly.
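The stepping procedure just described can be sketched as a simplified Euler continuation (the function name and sign-tracking details are our own simplification, not the authors' exact scheme), with the slope field and its reciprocal passed in as callables:

```python
import numpy as np

def trace_sorf(h, h_recip, x0, y0, dx=1e-3, dy=1e-3, nsteps=2000):
    """Follow an integral curve of (33), switching between the dy/dx and
    dx/dy forms whenever |h| crosses 1.  h and h_recip are callables
    giving the slope field and its reciprocal.  A simplified sketch."""
    xs, ys = [x0], [y0]
    sx, sy = 1.0, 1.0                    # signs encoding direction of travel
    for _ in range(nsteps):
        x, y = xs[-1], ys[-1]
        if abs(h(x, y)) <= 1.0:          # dy/dx form: march in x
            xn = x + sx * dx
            yn = y + sx * dx * h(x, y)
        else:                            # dx/dy form: march in y
            xn = x + sy * dy * h_recip(x, y)
            yn = y + sy * dy
        if xn != x:
            sx = np.sign(xn - x)
        if yn != y:
            sy = np.sign(yn - y)
        xs.append(xn)
        ys.append(yn)
    return np.array(xs), np.array(ys)
```

As a sanity check, feeding in the slope field of concentric circles ($h=-x/y$, reciprocal $-y/x$) traces an approximate circle, passing through the vertical and horizontal tangents without difficulty.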
Figure 12: Comparison between using the integral-curve forms (33) and (34) and
the vector field forms for $\mbox{\boldmath$F$}=\mathfrak{C}_{2}^{2}$: (a)
$\ln\Lambda^{+}$ field, (b) zero contours of $\phi$ and $\psi$, (c)
$\mbox{SORF}_{max}$ using the vector field (24), (d) $\mbox{SORF}_{min}$ using
the vector field (25), (e) $\mbox{SORF}_{max}$ using the integral curve form
(33), and (f) $\mbox{SORF}_{min}$ using the form (34).
## 8 Concluding remarks
In this paper, we have examined the issue of determining foliations which
globally maximize and minimize stretching associated with a two-dimensional
map, where the map can be defined in terms of a finite sequence of discrete
maps, or a finite-time flow of a differential equation. Our formulation
establishes a connection to the well-known local optimizing issue, and
provides new insights into the resulting foliations and their singularities.
In particular, an easy criterion for classifying the nature of generic
singularities is expressed. Some numerical artefacts arising when computing
these foliations in standard ways are characterized in terms of a ‘branch cut’
phenomenon, and a methodology of avoiding these is developed. We have
expressed connections with a range of related and highly studied concepts
(Cauchy–Green tensor, Lyapunov vectors, singularities of vector fields), and
demonstrated computations in both discretely- and continuously-derived maps.
We expect these results to help researchers interpret, and improve, numerical
calculations in related situations. In particular, misinterpretations of
numerics can be mitigated via the understandings presented here. Regions of
high sensitivity towards spatial resolutions are also identifiable in terms of
the near-zero sets of the $\phi$ and $\psi$ functions.
We wish to highlight from our numerical results the role of
$\mbox{SORF}_{min}$ restricted foliations as being effective demarcators of
complicated flow regimes. These curves (observable, for example, in blue in
Figs. 5, 7, 10 and 12) indicate curves along which there is minimal stretching.
Consequently, there is maximal stretching in the orthogonal direction to these
curves. This indicates that the $\mbox{SORF}_{min}$ curves are barriers in
some sense: disks of initial conditions positioned on such a curve experience
sharp stretching orthogonal to them. That is, initial conditions on one side
of such a curve get separated quickly from those on the other side, with the
curve positioned optimally to maximize the separation. Our methodology enables
this intuitive idea to be put into a global optimizing foliation framework.
Looking at this another way, the dense regions of the $\mbox{SORF}_{min}$
(blue) foliations in Figs. 5, 7, 10 and 12 are reminiscent of separation
curves which attempt to demarcate chaotic from regular regions. We emphasize,
though, that ‘chaotic’ has no proper meaning in the finite-time context since
it must be understood in terms of infinite-time limits; in this case, the
separation one may try to obtain is between more ‘disorderly’ and ‘orderly’
regions. The ambiguity of defining these is reflected in the Figures, in which
the $\mbox{SORF}_{min}$ foliation nonetheless identifies coherence-related
topological structures in $\Omega$ which are strongly influenced by the nature
of the singularities in the foliation.
Note that the interaction of the $\phi=0$ and $\psi=0$ level sets seen in Fig.
5(b) bears a striking resemblance to figures showing zero angle between the
stable and unstable foliations of Lyapunov vectors, such as Fig. 1 for the
Hénon map in [18], which arose in a search for primary heteroclinic tangencies
when developing symbolic-dynamics generating partitions of the Hénon map
[19, 20, 21, 22]. Indeed, the present analysis likely bears a relationship to
that work, in that in the infinite-time limit the Lyapunov vectors considered
here approach those of the much earlier studies underlying the topological
dynamics of smooth dynamical systems. What is clear in the finite-time
discussion here is that when we see a coincidence between stretching and
folding, these properties repeat, over successively longer time windows, in
progressively smaller regions. As suggested by, e.g., Fig. 5(h), any point of
tangency would in turn be repeated infinitely often in the long-time limit.
The perspective of the current work may further the understanding of the
intricate question of why and how hyperbolicity is lost in nonuniformly
hyperbolic systems, wherein, seemingly paradoxically, errors can grow along
directions related to stable manifolds, as highlighted by Fig. 5 in [23].
Acknowledgements: SB acknowledges with thanks partial support from the
Australian Research Council via grant DP200101764. EB acknowledges with thanks
the Army Research Office (N68164-EG) and also DARPA.
## Appendix A Proof of Lemma 1
Given a general point $(x,y)\in\Omega_{0}$, let $\theta\in[-\pi/2,\pi/2)$. The
local stretching (2) associated with this point and direction is
$\Lambda\left(x,y,\theta\right)=\sqrt{\left(u_{x}\cos\theta+u_{y}\sin\theta\right)^{2}+\left(v_{x}\cos\theta+v_{y}\sin\theta\right)^{2}}\,,$
where the $(x,y)$-dependence on $u_{x}$, $u_{y}$, $v_{x}$ and $v_{y}$ has been
omitted from the right-hand side for brevity. Hence,
$\Lambda^{2}=\frac{u_{x}^{2}+v_{x}^{2}-u_{y}^{2}-v_{y}^{2}}{2}\cos
2\theta+\left(u_{x}u_{y}+v_{x}v_{y}\right)\sin
2\theta+\frac{u_{y}^{2}+v_{y}^{2}+u_{x}^{2}+v_{x}^{2}}{2}\,.$
Using the definitions for the functions $\phi$ and $\psi$ from (7) and (8),
$\Lambda^{2}=\phi\cos 2\theta+\psi\sin
2\theta+\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}\,.$
(36)
Given the linear independence of the sine and cosine functions, the value of
$\Lambda^{2}$ at $(x,y)$ is independent of $\theta$ if and only if $\phi$ and
$\psi$ are both zero. Thus, the isotropic set is characterized as the
intersection of the zero sets of the functions $\phi$ and $\psi$.
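A direct numerical spot-check of (36) (not a substitute for the argument above; variable names are ours) compares the two sides at random Jacobian entries and angles:

```python
import numpy as np

# Spot-check identity (36): |grad F (cos t, sin t)|^2 equals
# phi cos 2t + psi sin 2t + (|grad u|^2 + |grad v|^2)/2.
rng = np.random.default_rng(1)
for _ in range(100):
    ux, uy, vx, vy = rng.normal(size=4)
    theta = rng.uniform(-np.pi / 2, np.pi / 2)
    lam2 = (ux * np.cos(theta) + uy * np.sin(theta))**2 \
         + (vx * np.cos(theta) + vy * np.sin(theta))**2
    phi = 0.5 * (ux**2 + vx**2 - uy**2 - vy**2)   # eq. (7)
    psi = ux * uy + vx * vy                       # eq. (8)
    rhs = phi * np.cos(2 * theta) + psi * np.sin(2 * theta) \
        + 0.5 * (ux**2 + uy**2 + vx**2 + vy**2)
    assert np.isclose(lam2, rhs)
```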
## Appendix B Proof of Lemma 2
Figure 13: $\mbox{SORF}_{max}$ near a nondegenerate singularity: (a) Value of
$\theta^{+}\in[-\pi/2,\pi/2)$ in $(\phi,\psi)$-space using (10), (b) as in
(a), but shown in a left-hand system, (c) and (d) qualitative slope fields for
(a) and (b); (e) $1$-pronged ‘intruding point’ associated with the structure
(c); (f) $3$-pronged ‘separating’ point associated with the structure (d); (g)
intruding point when axes are tilted; (h) separating point when axes are
tilted. Compare to Fig. 1 and Property 1.
We begin with (10), and obtain (13). Assuming for now that both $\phi$ and
$\psi$ are not zero, we use the double-angle formula to obtain
$\frac{2\tan\theta^{+}}{1-\tan^{2}\theta^{+}}=\tan
2\theta^{+}=\frac{\psi}{\phi}\,.$
Solving the quadratic for $\tan\theta^{+}$, we see that
$\tan\theta^{+}=\frac{-1\pm\sqrt{(\psi/\phi)^{2}+1}}{\psi/\phi}=\frac{-\phi\pm\sqrt{\phi^{2}+\psi^{2}}}{\psi}$
(37)
We now need to choose the sign in this expression, bearing in mind the usage
of the four-quadrant inverse tangent as used in (10). The four quadrants here
are in the $(\phi,\psi)$-space, which is indicated in Fig. 13(a). If $\phi>0$
and $\psi>0$, this implies that $2\theta^{+}$ is in the first quadrant, and
thus so is $\theta^{+}$. This means that $\tan\theta^{+}>0$, and consequently
the positive sign must be chosen. If $\phi>0$ and $\psi<0$, $2\theta^{+}$ is
in the fourth quadrant, or $2\theta^{+}\in(-\pi/2,0)$. Thus, $\tan\theta^{+}<0$,
and so the positive sign must be chosen in (37) to ensure that the division by
$\psi<0$ leads to an eventual negative sign. Next, if $\phi<0$ and $\psi>0$,
$2\theta^{+}\in(\pi/2,\pi)$, and $\theta^{+}\in(\pi/4,\pi/2)$, leading to
$\tan\theta^{+}>0$ and the necessity of choosing the positive sign in (37).
Finally, if $\phi<0$ and $\psi<0$, $2\theta^{+}\in(-\pi,-\pi/2)$ and
$\theta^{+}\in(-\pi/2,-\pi/4)$, and thus $\tan\theta^{+}<0$ and the positive
sign in the numerator of (37) must be chosen. Thus, all cases lead to a
positive sign, and so
$\tan\theta^{+}=\frac{-\phi+\sqrt{\phi^{2}+\psi^{2}}}{\psi}\,,$
whence (13) when neither $\phi$ nor $\psi$ is zero.
Next, we rationalize the fact that (13) arises from (10) even if one or the
other of $\phi$ or $\psi$ is zero. The arguments to follow are equivalent to
considering the four emanating axes in Fig. 13(a). If $\phi=0$ and $\psi\neq
0$, (10) tells us that
$2\theta^{+}=\,(\pi/2)\,{\mathrm{sign}}\left(\psi\right)$ and thus
$\tan\theta^{+}=\tan(\pi/4)\,{\mathrm{sign}}\left(\psi\right)={\mathrm{sign}}\left(\psi\right)$.
This is consistent with what (13) gives when $\phi=0$ is inserted. If $\psi=0$
and $\phi\neq 0$, (10) tells us that $2\theta^{+}=-\pi$ if $\phi<0$, or
$2\theta^{+}=0$ if $\phi>0$. Thus if $\psi=0$, $\theta^{+}=-\pi/2$ if
$\phi<0$, and $\theta^{+}=0$ if $\phi>0$. This verifies that (13) is
equivalent to (10) in $\Omega_{0}$.
Now, $\theta^{-}$ in (11) is defined specifically to be orthogonal to
$\theta^{+}$. There is only one angle in $[-\pi/2,\pi/2)$ which obeys this
condition. It is straightforward to verify from (13) and (14) that
$\left(\tan\theta^{+}\right)\left(\tan\theta^{-}\right)=-1$
in $\Omega_{0}$. Thus, $\theta^{-}$ as defined in (14) is at right-angles to
$\theta^{+}$ as defined in (13), which has been established to be equivalent
to (10).
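As a quick numerical spot-check of this orthogonality (complementing the algebra above; values chosen off the isotropic set):

```python
import numpy as np

# check tan(theta^+) * tan(theta^-) = -1 at a few (phi, psi) values
phi = np.array([1.0, -1.0, 0.5, -0.3, 2.0, 0.0])
psi = np.array([2.0, 0.7, -1.5, 0.4, -0.1, 1.0])
r = np.hypot(phi, psi)
tan_p = (-phi + r) / psi     # eq. (13)
tan_m = (-phi - r) / psi     # eq. (14)
assert np.allclose(tan_p * tan_m, -1.0)
```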
## Appendix C Proofs of Theorems 1 and 2
First, we tackle Theorem 1, related to maximizing the global stretching. Let
$f$ be a restricted foliation on $\Omega$, and $\theta_{f}$ be the unique
angle field in $\Omega_{0}$ associated with it. From (36) from the proof of
Lemma 1, we have that the local stretching $\Lambda$ at a point
$(x,y)\in\Omega_{0}$ related to the angle $\theta_{f}$ obeys
$\displaystyle\Lambda^{2}$ $\displaystyle=$
$\displaystyle\sqrt{\phi^{2}+\psi^{2}}\left[\frac{\phi}{\sqrt{\phi^{2}+\psi^{2}}}\cos
2\theta_{f}+\frac{\psi}{\sqrt{\phi^{2}+\psi^{2}}}\sin
2\theta_{f}\right]+\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}$
(38) $\displaystyle=$ $\displaystyle\sqrt{\phi^{2}+\psi^{2}}\left[\cos
2\theta^{+}\cos 2\theta_{f}+\sin 2\theta^{+}\sin
2\theta_{f}\right]+\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}$
$\displaystyle=$
$\displaystyle\sqrt{\phi^{2}+\psi^{2}}\cos\left[2\left(\theta^{+}-\theta_{f}\right)\right]+\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}$
in which $\theta^{+}=\theta^{+}(x,y)$ satisfies
$\cos
2\theta^{+}=\frac{\phi}{\sqrt{\phi^{2}+\psi^{2}}}\quad{\mathrm{and}}\quad\sin
2\theta^{+}=\frac{\psi}{\sqrt{\phi^{2}+\psi^{2}}}\,.$ (39)
Thus, $\tan 2\theta^{+}=\psi/\phi$. If applying the inverse tangent to
determine $2\theta^{+}$ from this, we need to take the two equations (39) into
account in choosing the correct branch. This clearly depends on the signs of
$\phi$ and $\psi$, which is automatically dealt with if the four-quadrant
inverse tangent is used. Consequently, (39) implies that
$\theta^{+}(x,y)=\frac{1}{2}\,\tilde{\tan}^{-1}\left(\psi(x,y),\phi(x,y)\right)\,,$
which is chosen modulo $\pi$ because of the premultiplier of $1/2$ (the four-
quadrant inverse tangent is modulo $2\pi$). Thus, $\theta^{+}$ as defined
here is identical to that given in (10), which by Lemma 2 is equivalent to
(13).
Next, given that the cosine function is always between $-1$ and $1$, we see
that the local stretching must obey
$\left[-\sqrt{\phi^{2}+\psi^{2}}+\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}\right]^{1/2}\leq\Lambda\leq\left[\sqrt{\phi^{2}+\psi^{2}}+\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}\right]^{1/2}\,,$
and consequently the global stretching (6) satisfies
$\displaystyle\Sigma_{f}$ $\displaystyle\geq$
$\displaystyle\int\\!\\!\\!\\!\int_{\Omega}\left[-\sqrt{\phi^{2}+\psi^{2}}+\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}\right]^{1/2}\,\mathrm{d}x\,\mathrm{d}y\quad{\mathrm{and}}$
(40) $\displaystyle\Sigma_{f}$ $\displaystyle\leq$
$\displaystyle\int\\!\\!\\!\\!\int_{\Omega}\left[\sqrt{\phi^{2}+\psi^{2}}+\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}\right]^{1/2}\,\mathrm{d}x\,\mathrm{d}y\,.$
(41)
for any choice of foliation.
Let $f^{+}$ be the foliation identified with the angle field $\theta^{+}(x,y)$
at every location in $\Omega_{0}$. Inserting this into (38) renders the cosine
term $1$, and thus the right-hand side of (41) is achieved for this foliation.
There can be no foliation which gives a larger value of $\Sigma_{f}$. This
foliation is equivalent to pointwise maximizing $\Lambda$ in $\Omega_{0}$.
Can there be a different acceptable foliation, $\tilde{f}$, which also attains
this maximum value for $\Sigma_{f}$ (i.e., that
$\Sigma_{\tilde{f}}=\Sigma_{f^{+}}$)? If so, there must be a point
$\left(\tilde{x},\tilde{y}\right)\in\Omega_{0}$ where the induced slopes
$\theta_{\tilde{f}}$ and $\theta^{+}$ of the two different foliations are
different. Given that foliations must be smooth, this implies the presence of
an open neighborhood $N_{\varepsilon}$ (with positive measure) around this
point such that $\cos
2\left(\theta_{\tilde{f}}-\theta^{+}\right)<1-\varepsilon$, for some
$\varepsilon>0$. Thus the integrated local stretching in $N_{\varepsilon}$ for
$\tilde{f}$ is strictly less than that of $f^{+}$. Since it is not possible to
obtain a greater integrated stretching outside of $N_{\varepsilon}$ (because
$f^{+}$, by forcing the cosine term to take its maximum possible value, cannot
be bettered), this would imply that the integrated stretching of $\tilde{f}$
over $\Omega_{0}$ is strictly less than that of $f^{+}$. Given that the
contribution to the integral in $I$ is independent of the foliation, this
provides a contradiction. Therefore, the foliation $f^{+}$, corresponding to
the choice of angle field $\theta^{+}$ as given in (10), maximizes
$\Sigma_{f}$, and is uniquely defined in $\Omega_{0}$.
The proof of Theorem 2 related to minimizing the global stretching is similar.
We use (40), which corresponds to choosing $\theta_{f}$ such that the term
$\cos 2\left(\theta^{+}-\theta_{f}\right)$ is always $-1$. This tells us that
$\theta_{f}$ must be chosen perpendicular to $\theta^{+}$. This is exactly
the characterization used to determine $\theta^{-}$ in (11), and the
equivalence to (14) has been established in Lemma 2.
## Appendix D Local stretching connections related to Remark 5
Given a location $(x,y)$, suppose we wanted to determine the direction
(encoded by an angle $\theta$) to place an infinitesimal line segment such
that it stretches the most under $F$. From (2), we need to solve
$\sup_{\theta}\left\|\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}(x,y)\left(\begin{array}[]{c}\cos\theta\\\
\sin\theta\end{array}\right)\right\|:=\left\|\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}\right\|\,,$
where the right-hand side is the operator norm of $\nabla$$F$. This is
computable as the square root of the larger eigenvalue of
$\left[\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}\right]^{\top}\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}$,
i.e., of the Cauchy–Green tensor $C$ as defined in (3). Given the map (1),
since
$\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}=\left(\begin{array}[]{cc}u_{x}&u_{y}\\\
v_{x}&v_{y}\end{array}\right)\,,$
it is clear that the Cauchy–Green strain tensor (as defined in (3)) is
$\mbox{\boldmath$C$}:=\left[\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}\right]^{\top}\,\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}=\left(\begin{array}[]{cc}u_{x}^{2}+v_{x}^{2}&u_{x}u_{y}+v_{x}v_{y}\\\
u_{x}u_{y}+v_{x}v_{y}&u_{y}^{2}+v_{y}^{2}\end{array}\right)\,.$
Accordingly, the eigenvalues $\lambda$ of the Cauchy–Green tensor (i.e., the
squares of the singular values of $\nabla$$F$) obey
$\lambda^{2}-\left(\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}\right)\lambda+\left[(u_{x}^{2}+v_{x}^{2})(u_{y}^{2}+v_{y}^{2})-\left(u_{x}u_{y}+v_{x}v_{y}\right)^{2}\right]=0\,,$
and thus
$\displaystyle\lambda$ $\displaystyle=$
$\displaystyle\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}\pm\sqrt{\phi^{2}+\psi^{2}}$
(42)
by using the definitions for $\phi$ and $\psi$ in (7) and (8).
We assume that $\phi$ and $\psi$ are not simultaneously $0$ (in our framework,
that we are not in $I$). Clearly, the larger value of $\lambda$ is obtained by
taking the positive sign, and the square root of this is the operator norm of
$\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}$. This gives exactly the pointwise-maximized local stretching
$\Lambda^{2}$ as defined in (38), which satisfies
$\left(\Lambda^{+}\right)^{2}=\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}+\sqrt{\phi^{2}+\psi^{2}}\,.$
The quantity $\Lambda^{+}$ defined above (and also given in the main text as
(18)), is related to the finite-time Lyapunov exponent or simply the Lyapunov
exponent. We note that for defining $\theta^{+}$ (for optimizing stretching)
we required that $(x,y)\notin I$, but $\Lambda^{+}$ can be thought of as a field
on all of $\Omega$.
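As a numerical sanity check, the eigenvalues $(\Lambda^{\pm})^{2}$ and the eigenvector $\tilde{\mbox{\boldmath$w$}}^{+}$ can be computed directly from the entries of $\mbox{\boldmath$\nabla$}\mbox{\boldmath$F$}$. The sketch below (Python/NumPy, with arbitrary sample gradient values) takes $\phi=(u_{x}^{2}+v_{x}^{2}-u_{y}^{2}-v_{y}^{2})/2$ and $\psi=u_{x}u_{y}+v_{x}v_{y}$, the forms implied by (42) and the entries of $C$, and compares against a direct eigen-decomposition:

```python
import numpy as np

# Arbitrary sample values for the gradient entries of F = (u, v)
ux, uy, vx, vy = 1.3, 0.4, -0.7, 0.9
DF = np.array([[ux, uy], [vx, vy]])
C = DF.T @ DF  # Cauchy-Green tensor

# phi and psi in the form implied by (42) and the entries of C
phi = 0.5 * (ux**2 + vx**2 - uy**2 - vy**2)
psi = ux * uy + vx * vy

half_trace = 0.5 * (ux**2 + uy**2 + vx**2 + vy**2)
r = np.hypot(phi, psi)
lam_plus = half_trace + r   # (Lambda^+)^2
lam_minus = half_trace - r  # (Lambda^-)^2

# Agreement with a direct eigen-decomposition of the symmetric C
evals = np.sort(np.linalg.eigvalsh(C))
assert np.allclose(evals, [lam_minus, lam_plus])

# Eigenvector (psi, -phi + sqrt(phi^2 + psi^2)) for the larger eigenvalue
w_plus = np.array([psi, -phi + r])
assert np.allclose(C @ w_plus, lam_plus * w_plus)
```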
Obtaining the eigenvector of the Cauchy–Green tensor $C$ corresponding to the
$\lambda$ associated with (18) is somewhat unpleasant. However, our equation
for $\theta^{+}$ in (13) indicates that the eigenvector (modulo a nonzero
scaling) can be written as
$\tilde{\mbox{\boldmath$w$}}^{+}=\left(\begin{array}[]{c}\psi\\\
-\phi+\sqrt{\phi^{2}+\psi^{2}}\end{array}\right)\,,$
as long as this value is not zero (which is when $\psi=0$ and $\phi>0$, in
which case $\mbox{\boldmath$w$}^{+}=\left(1\,\,0\right)^{\top}$). Tedious
calculations reveal that
$\displaystyle C\,\tilde{\mbox{\boldmath$w$}}^{+}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{cc}u_{x}^{2}+v_{x}^{2}&\psi\\\
\psi&u_{y}^{2}+v_{y}^{2}\end{array}\right)\,\left(\begin{array}[]{c}\psi\\\
-\phi+\sqrt{\phi^{2}+\psi^{2}}\end{array}\right)$ $\displaystyle=$
$\displaystyle\ldots=\left(\Lambda^{+}\right)^{2}\,\tilde{\mbox{\boldmath$w$}}^{+}\,,$
verifying that our expression does indeed give the relevant eigenvector. The
situation of $\psi=0$ and $\phi>0$ is easy to check as well. Using
$\tilde{\mbox{\boldmath$w$}}^{+}=\left(1\,\,\,0\right)^{\top}$, we once again
get
$C\tilde{\mbox{\boldmath$w$}}^{+}=\left(\phi+\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}\right)\tilde{\mbox{\boldmath$w$}}^{+}=\left(\Lambda^{+}\right)^{2}\tilde{\mbox{\boldmath$w$}}^{+}\,.$
The eigenvector field $\tilde{\mbox{\boldmath$w$}}^{+}$ of $C$ (or a scalar
multiple of it) is only defined on $\Omega_{0}$. In the literature, this is
variously referred to as the Lyapunov [5, 6] or Oseledec [3] vector field,
related to the local direction (in the domain of $F$) in which the stretching
due to the application of $F$ is greatest. If $F$ were a flow map derived from
a flow over a finite time, then these fields would depend on both the initial
time $t_{0}$ and the final time $t$; in other words, $F$ would be the flow map
from time $t_{0}$ to $t$, and the vector field would vary with both $t_{0}$
and $t$.
The smaller eigenvalue of the Cauchy–Green tensor is obtained by taking the
negative sign in (42), which gives
$\left(\Lambda^{-}\right)^{2}=\frac{\left|\mbox{\boldmath$\nabla$}u\right|^{2}+\left|\mbox{\boldmath$\nabla$}v\right|^{2}}{2}-\sqrt{\phi^{2}+\psi^{2}}\,.$
This is clearly the local stretching minimizing choice, corresponding to
choosing $\theta=\theta^{-}$ (i.e., making the cosine term equal to $-1$). The
corresponding eigenvector $\tilde{\mbox{\boldmath$w$}}^{-}$ can be verified
(as above) to be in the direction specified by $\theta^{-}$. However, given
that $\sqrt{\phi^{2}+\psi^{2}}\neq 0$, we have distinct eigenvalues for the
symmetric matrix $C$, and thus the two eigenvectors must be orthogonal by
standard spectral theory. Hence we can easily conclude that $\theta^{-}$
corresponds to $\tilde{\mbox{\boldmath$w$}}^{-}$, the eigenvector of $C$
corresponding to the smaller eigenvalue.
The situation in which the eigenvalues of $C$ coincide corresponds to
‘singularities,’ in particular because this means that an orthogonal
eigenbasis may not exist. This can only occur when the eigenvalues are
repeated, and from (42) this occurs only when $\phi^{2}+\psi^{2}=0$. Thus,
both $\phi$ and $\psi$ must be zero. This corresponds exactly to the isotropic
set $I$ of Definition 1 and Lemma 1.
We note that Haller [24] uses streamlines of the eigenvector fields from the
Cauchy–Green tensor in his theories of variational Lagrangian coherent
structures, looking for example for curves to which there is extremal
attraction or repulsion due to a flow over a given time period. Our foliations
obtained here, corresponding to globally maximizing and minimizing stretching,
are generated from fibers of the same fields. Our insights into
singularities and branch-cut discontinuities are therefore relevant to these
approaches as well.
## Appendix E Singularity classification
This section provides explanations for the nondegenerate singularity
classification of Property 1. Given the transverse intersection of the
$\phi=0$ and $\psi=0$ contours at a singularity $p$, we examine nearby
contours not in standard $(x,y)$-space, but in $(\phi,\psi)$-space, in which
$p$ is at the origin. The angle fields $\theta^{\pm}$ are the defining
characteristics of the foliation, and thus we show in Fig. 13(a) a schematic
of the maximizing angle field $\theta^{+}$. A nonstandard labelling of the
$\phi$ and $\psi$ axes is used here because the relative orientations of the
positive axes $\phi_{+}$ and $\psi_{+}$ (the directions in which $\phi>0$ and
$\psi>0$ resp.) and negative axes $\phi_{-}$ and $\psi_{-}$ is related to
whether $p$ is right- or left-handed. Thus, Fig. 13(a) corresponds to $p$
being right-handed. The slope fields and expressions indicated are based on
the four-quadrant inverse tangent (10), expressed in terms of the regular
inverse tangent in each quadrant. We also express the values of $\theta^{+}$
on each of the axes in Fig. 13(a), along which $\theta^{+}$ is seen to be
constant.
In Fig. 13(c), just below, we indicate the angle field $\theta^{+}$ by
drawing tiny lines which have the relevant slope. What happens when we
‘connect these lines’ to form a foliation is shown underneath in Fig. 13(e).
The foliation bends around the origin (shown as the blue point $p$),
effectively rotating around it by $\pi$. However, it must be cautioned that
while Fig. 13(e) seems to indicate that the fracture ray lies along
$\phi_{+}$, this is in general not the case. The angle fields shown in Figs.
13(c) and (e) display directions in physical ($\Omega$) space, in which the
$\phi=0$ and $\psi=0$ contours intersect in some slanted way. We show one
possibility in Fig. 13(g), in which the fracture ray will be approximately
from the northwest. We identify $p$ in this case as an intruding point or a
$1$-pronged singularity. The nearby $\mbox{SORF}_{max}$ curves rotate by $\pi$
around it.
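The rotation by $\pi$ can be checked numerically for a model singularity. Assuming, purely for illustration, the local forms $\phi=\cos t$ and $\psi=\pm\sin t$ along a small loop around $p$ (the sign encoding the handedness), the total rotation of the unnormalized eigenvector direction $(\psi,-\phi+\sqrt{\phi^{2}+\psi^{2}})$ is $+\pi$ in the right-handed case and $-\pi$ in the left-handed ($3$-pronged) case:

```python
import numpy as np

def rotation_around_singularity(handedness=+1, n=4001):
    """Total rotation of the direction (psi, -phi + r) along a loop
    around a model singularity with phi = cos(t), psi = handedness*sin(t)."""
    t = np.linspace(1e-6, 2 * np.pi - 1e-6, n)
    phi = np.cos(t)
    psi = handedness * np.sin(t)
    r = np.hypot(phi, psi)
    # Unwrapped angle of the eigenvector direction along the loop
    ang = np.unwrap(np.arctan2(-phi + r, psi))
    return ang[-1] - ang[0]

rot_right = rotation_around_singularity(+1)  # close to +pi
rot_left = rotation_around_singularity(-1)   # close to -pi
```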
Figure 14: $\mbox{SORF}_{max}$ near $p$ when transversality is relaxed: (a),
(b) and (c) show different possibilities for axes to intersect, and the
corresponding $\mbox{SORF}_{max}$ topologies are illustrated in Fig. 2.
In the right-hand panels of Fig. 13 we examine the other possibility of $p$
being left-handed. This is achieved in Fig. 13(b) by simply flipping the
$\psi_{-}$ and $\psi_{+}$ axes, and retaining the information that we have
already determined in Fig. 13(a). The corresponding slope field is displayed
in Fig. 13(d). The fracture ray (also along the $\phi_{+}$-axis in this case)
now separates out curves coming from the right, rather than causing them to
turn around the origin. Fig. 13(f) demonstrates this behavior, obtained by
connecting the angle fields into curves. There are two other fracture rays
generated by this process of separation, because curves in the $\phi_{-}$
region are forced to rotate away from the origin without approaching it. Fig.
13(h) is an orientation-preserving rotation of the axes in Fig. 13(f), which
highlights that the directions of the three fracture rays are based on the
orientations of the axes in physical space. Based on the topology of the
foliation, when $p$ is left-handed, we thus have a separating point or
$3$-pronged singularity.
Suppose next that the nondegeneracy of $p$ is relaxed mildly by allowing the
$\phi=0$ and $\psi=0$ contours (both still considered to be one-dimensional)
to intersect tangentially at $p$. To achieve this, imagine bending the
$\psi$-axis in Figs. 13(a) and (c) so that it becomes tangential to the
$\phi$-axis, but the axes still cross each other. This degenerate situation is
shown in Fig. 14(a), and we note that the orientation of the axes remains
right-handed despite the tangency. Connecting the angle field lines gives the
relevant topological structure of Fig. 2(a). The topology is very close to the
nondegenerate intruding point, but there is an accumulation of curves towards
the fracture ray from one side. It is easy to verify (not shown) that there is
no change in this topology if the tangentiality shown in Fig. 14(a) goes in
the other direction, with $\psi_{+}$ becoming tangential to $\phi_{+}$ and
$\psi_{-}$ to $\phi_{-}$. Fig. 14(b) examines the impact on the degenerate
left-handed situation; Fig. 2(b) indicates that the fracture ray acquires a
similar one-sided accumulation effect, while the remainder of the portrait
remains essentially as it was; this is thus a degenerate separating point.
Finally, in Fig. 14(c) we consider the case where the tangentiality is such
that the $\phi$- and $\psi$-axes do not cross one another. In this case,
drawing connecting curves reveals that the topology is a combination of
degenerate intruding and separating points, and is illustrated in Fig. 2(c).
Testing the other possibilities (interchanging the $\psi_{-}$ and $\psi_{+}$
axes locations, and doing the same analysis with them below the $\phi$-axis)
yields no new topologies. One way to rationalize this is that the relative
(degenerate) orientation between the negative axes is in this case exactly
opposite to that between the positive axes; one pair behaves as if the
orientation were right-handed, the other as if it were left-handed.
## Appendix F Proof of Theorem 3
We have established via Fig. 4 that if there exists a nondegenerate
singularity $p$, then $\mbox{\boldmath$w$}^{+}$ is not continuous across the
branch cut $B$. This vector field is ‘the’ Lyapunov vector field, generated
from the eigenvector field corresponding to the larger eigenvalue of the
Cauchy–Green tensor field, where this is well-defined (i.e., in $\Omega_{0}$).
However, a vector field associated with the angle field $\theta^{+}$ is not
unique, as is reflected in the presence of the arbitrary function $m$ in (26).
The nonuniqueness is equivalent to the freedom of rescaling the Lyapunov
vectors nonuniformly on $\Omega\setminus I$ by a nonzero scalar. The question
is: is it possible to remove the discontinuity that
$\mbox{\boldmath$w$}^{+}$ has across $B$ by choosing a scaling function $m$?
From Fig. 4, we argue that the answer is no. Imagine going around the black
dashed curve, $C$, and attempting to have $\mbox{\boldmath$w$}^{+}$ be
continuous while doing so. Since $\mbox{\boldmath$w$}^{+}$ has a jump
discontinuity across $B$, it will therefore be necessary to choose $m$ to have
the opposite jump discontinuity for $m\mbox{\boldmath$w$}^{+}$ to be smooth.
So $m$ must jump from $+1$ to $-1$ in a certain direction of crossing.
However, since $\mbox{\boldmath$w$}^{+}$ is continuous on $C\setminus B$, to
retain this continuity $m$ must also remain continuous along $C\setminus B$.
This implies that $m$ must cross zero at some point in $C\setminus B$,
rendering the scaled vector $m\mbox{\boldmath$w$}^{+}$ zero there and hence
invalid as a Lyapunov vector. We have therefore established Theorem 3 using
elementary geometric means. We remark
that this theorem is analogous to the classical “hairy ball” theorem due to
Poincaré [25].
## Appendix G Branch cut effects on computations
If $p$ is a nondegenerate singularity, then the vector field of (26) with
$m=1$ and the choice of the positive sign ($\mbox{SORF}_{max}$) will locally
have the behavior as shown in Fig. 4. Now, in general, in finding a
$\mbox{SORF}_{max}$ which passes through $(x_{0},y_{0})$, we can implement
(26) for the choice of $m=1$, in both directions (increasing and decreasing
$s$), thereby obtaining the curve which crosses the point. An equivalent
viewpoint is that we implement (26) with $m=1$, and $s>0$, and then implement
it with $m=-1$ while retaining $s>0$.
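As an illustration of this procedure (not the paper's actual implementation), the sketch below integrates the direction field with a simple Euler scheme, assuming that (26) takes the form $d\boldsymbol{x}/ds=m\left(\cos\theta^{+},\sin\theta^{+}\right)$ and using a hypothetical $(\phi,\psi)$ field with a single nondegenerate singularity at the origin:

```python
import numpy as np

def theta_plus(phi, psi):
    # Angle of the stretching-maximizing direction, from the eigenvector
    # (psi, -phi + sqrt(phi^2 + psi^2)) of the Cauchy-Green tensor
    r = np.hypot(phi, psi)
    return np.arctan2(-phi + r, psi)

def trace_sorf(x0, y0, phi_psi, m=1.0, ds=1e-2, n_steps=2000):
    """Euler integration of dx/ds = m * (cos theta+, sin theta+)."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n_steps):
        phi, psi = phi_psi(x, y)
        th = theta_plus(phi, psi)
        x += m * ds * np.cos(th)
        y += m * ds * np.sin(th)
        pts.append((x, y))
    return np.array(pts)

# Hypothetical field with a nondegenerate singularity at the origin
phi_psi = lambda x, y: (x, y)
curve_fwd = trace_sorf(0.5, 0.5, phi_psi, m=+1.0)  # s increasing
curve_bwd = trace_sorf(0.5, 0.5, phi_psi, m=-1.0)  # equivalent to s < 0
```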
If using (26) with $m=+1$ (globally) and $\mbox{\boldmath$w$}^{+}$ to generate
a $\mbox{SORF}_{max}$ curve, the vector field in Fig. 4(a) must be followed.
However, it is clear that anything approaching the branch cut $B$ gets pushed
away in the vertical direction. Thus, $\mbox{SORF}_{max}$ curves near $B$ will
in general be difficult to find.
The solution appears to be to set $m=-1$, which reverses the vector field.
However, this is essentially the diagram in Fig. 4(b), corresponding to a
left-handed $p$. This is of course equivalent to implementing (26) with $m=+1$
but in the $s<0$ direction. Curves coming in to $B$ now get stopped abruptly,
because the vector field on the other side of $B$ directly opposes the
vertical motion. Thus, curves will not cross $B$ vertically. However, since
any incoming curve will in general have a vector field component tangential to
$B$, this will cause a veering along the curve $B$. The curve will continue
along $B$, because the vector field pushes onto $B$ vertically, preventing
departure from it. Thus, when numerically finding $\mbox{SORF}_{max}$ curves,
curves which appear to tangentially approach the branch cut $B$ will be seen.
These curves are not real $\mbox{SORF}_{max}$ curves because, as is clear from
Fig. 4, the actual vector field is not necessarily tangential to $B$. That is,
the branch cut is not necessarily a streamline of the direction field
$\theta^{+}$.
A similar analysis (not shown) indicates that if using
$\mbox{\boldmath$w$}^{-}$ (as suggested via Theorem 2) to generate
$\mbox{SORF}_{min}$ curves, then these curves will not cross $B$ horizontally,
and also have the potential for tangentially approaching $B$ in a spurious
way. Notice moreover that, while we have discussed the branch cut locally near
$p$, these objects extend through $\Omega_{0}$, potentially connecting with
several singularities.
Finally, suppose there are parts of $B$ that are two-dimensional regions. In
such regions, Fig. 13(a) indicates that the angle field $\theta^{+}$ is
vertical; alternatively, see (15). Consequently, $\theta^{-}$ is horizontal
throughout such regions. However, numerical issues as above will occur when crossing the
one-dimensional boundary $\bar{B}\setminus B$, due to the inevitable issue of
the reversal of the vector field along at least one part of this boundary.
## References
* [1] J. Guckenheimer, P. Holmes, Nonlinear oscillations, dynamical systems and bifurcation of vector fields, Springer-Verlag, 1983.
* [2] R. Sacker, G. Sell, Dichotomies and invariant splittings for linear differential equations, J. Differential Equations 15 (1974) 429–458.
* [3] V. Oseledec, A multiplicative ergodic theorem, Trans. Moscow Math. Soc. 19 (1968) 197–231.
* [4] S. Shadden, F. Lekien, J. Marsden, Definitions and properties of Lagrangian coherent structures from finite-time Lyapunov exponents in two-dimensional aperiodic flows, Physica D 212 (2005) 271–304.
* [5] C. Wolfe, R. Samelson, An efficient method for recovering Lyapunov vectors from singular vectors, Tellus 59A (2007) 355–366.
* [6] K. Ramasubramanian, M. Sriram, A comparative study of computation of Lyapunov spectra with different algorithms, Physica D 139 (2000) 72–86.
* [7] S. Balasuriya, N. Ouellette, I. Rypina, Generalized Lagrangian coherent structures, Physica D 372 (2018) 31–51.
* [8] M. Hénon, A two-dimensional mapping with a strange attractor, Commun. Math. Phys. 50 (1976) 69–77.
* [9] B. Chirikov, A universal instability of many-dimensional oscillator systems, Physics Reports 52 (1979) 263–379.
* [10] H. Lawson, Foliations, Bull Amer Math Soc 80 (1974) 369–418.
* [11] L. Mosher, Tiling the projective foliation space of a punctured surface, Trans. Amer. Math. Soc. 306 (1988) 1–70.
* [12] W. Thurston, On the geometry and dynamics of diffeomorphisms of surfaces, Bull. Amer. Math. Soc. 19 (1988) 417–431.
* [13] J. Hubbard, H. Masur, Quadratic differentials and foliations, Acta Mathematica 142 (1979) 221–274.
* [14] E. Rykken, Expanding factors for pseudo-Anosov homeomorphisms, Michigan Math. J. 46 (1999) 281–296.
* [15] K.-F. Tchon, J. Dompierre, M.-G. Vallet, F. Guibault, R. Camarero, Two-dimensional metric tensor visualization using pseudo-meshes, Engineering with Computers 22 (2006) 121–131.
* [16] M. Farazmand, D. Blazevski, G. Haller, Shearless transport barriers in unsteady two-dimensional flows and maps, Physica D 278-279 (2014) 44–57.
* [17] E. J. Doedel, T. F. Fairgrieve, B. Sandstede, A. R. Champneys, Y. A. Kuznetsov, X. Wang, Auto-07p: Continuation and bifurcation software for ordinary differential equations, Tech. rep., http://indy.cs.concordia.ca/auto/ (2007).
* [18] L. Jaeger, H. Kantz, Structure of generating partitions for two-dimensional maps, Journal of Physics A: Mathematical and General 30 (16) (1997) L567.
* [19] P. Grassberger, H. Kantz, U. Moenig, On the symbolic dynamics of the Hénon map, Journal of Physics A: Mathematical and General 22 (24) (1989) 5217.
* [20] E. M. Bollt, T. Stanford, Y.-C. Lai, K. Życzkowski, What symbolic dynamics do we get with a misplaced partition?: On the validity of threshold crossings analysis of chaotic time-series, Physica D: Nonlinear Phenomena 154 (3-4) (2001) 259–286.
* [21] E. M. Bollt, T. Stanford, Y.-C. Lai, K. Życzkowski, Validity of threshold-crossing analysis of symbolic dynamics from chaotic time series, Physical Review Letters 85 (16) (2000) 3524.
* [22] F. Christiansen, A. Politi, Guidelines for the construction of a generating partition in the standard map, Physica D: Nonlinear Phenomena 109 (1-2) (1997) 32–41.
* [23] L. Jaeger, H. Kantz, Homoclinic tangencies and non-normal Jacobians - effects of noise in nonhyperbolic chaotic systems, Physica D: Nonlinear Phenomena 105 (1-3) (1997) 79–96.
* [24] G. Haller, Lagrangian coherent structures, Annu. Rev. Fluid Mech. 47 (2015) 137–162.
* [25] H. Poincaré, Sur les courbes definies par les equations differentielles, J. Math. Pures Appl. 1 (1885) 167–244.
# A Missing Data Imputation Method for 3D Object Reconstruction using Multi-modal Variational Autoencoder
Hyeonwoo Yu and Jean Oh
Hyeonwoo Yu and Jean Oh are affiliated with the Robotics Institute of Carnegie Mellon University, Pittsburgh, PA 15213, USA<EMAIL_ADDRESS>
###### Abstract
For effective human-robot teaming, it is important for the robots to be able
to share their visual perception with the human operators. In a harsh remote
collaboration setting, however, it is especially challenging to transfer a
large amount of sensory data over a low-bandwidth network in real-time, e.g.,
for the task of 3D shape reconstruction given 2D camera images. To reduce the
burden of data transferring, data compression techniques such as autoencoder
can be utilized to obtain and transmit the data in terms of latent variables
in a compact form. However, due to the low-bandwidth limitation or
communication delay, some of the dimensions of latent variables can be lost in
transit, degenerating the reconstruction results. Moreover, in order to
achieve faster transmission, an intentional over compression can be used where
only partial elements of the latent variables are used. To handle these
incomplete data cases, we propose a method for imputation of latent variables
whose elements are partially lost or manually excluded. To perform imputation
with only some dimensions of variables, exploiting prior information of the
category- or instance-level is essential. In general, a prior distribution
used in variational autoencoders is achieved from all of the training
datapoints regardless of their labels. This type of flattened prior makes it
difficult to perform imputation from the category- or instance-level
distributions. We overcome this limitation by exploiting a category-specific
multi-modal prior distribution in the latent space. By finding a mode in the
latent space according to the remaining elements of the latent variables, the
missing elements can be sampled. We evaluate the proposed approach on the 3D
object reconstruction from a single 2D image task and show that the proposed
approach is robust against significant data losses.
## I Introduction
When a human operator is teaming with robots in a remote location,
establishing a shared visual perception of the remote location is crucial for
a successful team operation. Particularly, object recognition is a key element
for semantic scene reconstruction and object-oriented simultaneous
localization and mapping (SLAM) [1, 2, 3, 4, 5]. In this paper, we focus on
the camera-based approaches for representing 3D object information [6, 7]. An
object can be defined in terms of various characteristics such as the scale,
texture, orientation, and 3D shape. In general, these disentangled features
follow non-linear and intractable distributions. With the recent development
of 2D and 3D Convolutional Neural Network (CNN) architectures, it has become
feasible to map 2D images to such complex object features. In particular, a
number of methods have been proposed for 3D shape inference producing shapes
that humans can intuitively recognize [8, 9, 10, 11, 12, 13].
Figure 1: An overview of the proposed method. We train a VAE with a multi-modal
prior distribution. By using the intact elements of the transmitted vector and
the prior, we can find the correct mode to perform imputation. The
supplemented vectors can subsequently be converted to a 3D shape by the
decoder.
Figure 2: An overview of the proposed network. During training, the prior
network that represents the multi-modal prior distribution is also trained.
The encoder can be equipped on a remotely operating robot, while the prior
network and 3D decoder are utilized in the base server for a human operator.
Each dimension of the latent space is assumed to be independent of the others,
so that the target mode of the prior distribution can be found by exploiting
only a subset of the elements of the latent variable. We can, therefore,
perform element-wise imputation of corrupted or over-compressed vectors.
The use of the autoencoder (AE) has been particularly successful where latent
variables compressed from the 2D observation by the encoder can be converted
to the 3D shape using the decoder [14, 15, 16, 17, 18].
In the remote human-robot teaming context, it is challenging to support
real-time sharing of visual perception from a robot in a limited communication
environment, as the amount of visual sensory data is significantly larger than
that of audio, text, or other $1$D signals. In this case, the
observed 2D images of objects can be compressed to a $1$D latent vector by
using the encoder embedded on an on-board computer of a robot. With this
characteristic, the AE structure can be adopted for data compression and data
transmission to address the bottleneck issue in the communication network.
Rather than transmitting the entire 2D or 3D information, telecommunication
can be performed more efficiently in real-time by transmitting only the
compressed vectors. These vectors can easily be disentangled to the 3D shape
by the decoder on the remote human operator’s end.
In this paper, we further address a challenge of handling missing data during
transmission. In the case that the communication condition is unstable, some
elements of the compressed vector can be lost. Another case is that some
elements of the vector are intentionally excluded to perform over-compression
in order to achieve faster data transfer.
To address these missing data issues, we propose an approach that considers
not the latent space for the entire datapoints, but category-specific
distributions for the missing data imputation task. Specifically, we exploit
the idea of category-specific multi-modal prior for VAE [14, 15, 19]. After
training, the closest mode to the latent variable whose dimensions are
partially lost can be found, which identifies the label of the latent vector. By
sampling the missing elements from that mode, missing data imputation can be
performed. In other words, we can consider the characteristics of a specific
category or instance while performing imputations.
For robust imputation, some elements of the latent variable are exploited to
find the mode to which the object belongs. Each dimension is assumed to be
independent in the latent space, and each element is trained to be projected onto
a category-specific multi-modal distribution, i.e., our purpose is to train
the network for element-wise category clustering. The latent vector is
restored by the imputation process, which finds the correct mode even with
partial elements of the incomplete latent variable. These restored latent
variables can be converted to the fully reconstructed 3D shapes by the
decoder.
An overview of the proposed method is shown in Fig. 1. The proposed method
proceeds as follows: first, imputation of the missing elements is performed
using the prior specific to the object label. Second, the 3D shape of the
object is reconstructed from the retrieved latent variables by the decoder,
which has been trained jointly with the latent variables and the prior
distributions.
Our method can be applied to 3D shape estimation robustly against both the
data loss due to unstable networks and the partial omission due to arbitrary
compression. Based on our experiments on the Pascal3D dataset [20], the
proposed method is able to retrieve intact 3D shapes even when more than half
of the data has been lost.
## II Related work
For the 2D-3D alignment, diverse techniques using AE structure have been
studied [14, 15, 16, 17, 18]. In this case, the encoder is composed of 2D
convolutional layers to represent an observed 2D image into an abstract latent
space, whereas the decoder consists of 3D convolutional layers to estimate the
3D shape from the latent space. Here, each pair of 2D encoder and 3D decoder
shares an intermediate vector. In other words, these structures share the
latent variables with each other, enabling the 2D-3D projection through the
shared latent space. In this way, latent variables compressed from the 2D
observation by the encoder can be converted to the 3D shape using the decoder.
We exploit this characteristic of the AE structure to adopt it for data
compression and data transmission, specifically under harsh network
conditions.
For the benefit of faster data transfer, over-compression can be performed,
omitting part of the data. For the intentional over-compression of latent
variables, other dimensionality reduction techniques such as Principal
Component Analysis (PCA) have been applied [21, 22, 23]. In this case,
however, the decoder trained with the encoder still expects the shared latent
space, which makes it challenging to apply such a decoder to the new latent
space given by other dimensionality reduction methods. To cope with both
accidental and intentional loss, it is desirable to make the AE perform
robustly on latent variables with missing elements as well.
Generally, in the AE, the latent space is determined by the distribution of
the dataset. Intuitively, a sampling-based method in the latent space can be
used to perform imputation of the missing elements [24, 25, 26, 27]. The main
concern here is that the distribution of the latent space is hardly
representable in closed form, so actual imputation inevitably relies on
statistical approximations such as using the average of latent variables. In
the case of the variational autoencoder (VAE), a prior
distribution for a latent space can be manually defined during the training
time [28]. Since the distribution is generally assumed to be isotropic
Gaussian, imputation can be performed by sampling from the prior distribution
for the missing elements. By using this aspect that a generative model has a
tractable prior distribution, many studies of missing data imputation have
been conducted in various fields [29, 30, 31].
Even with a generative model such as a VAE, handling missing data remains
challenging. Because of the nature of object-oriented features, category- and
instance-level characteristics are highly correlated with 3D shape
reconstruction. We build our approach on this intuition.
## III Approach
In order to perform data compression for 2D-3D estimation, we can use an AE or
a generative model such as a VAE to project a 2D image of an object into a
latent space which is shared with the 3D shape decoder. The compressed latent
vector can be converted to the 3D shape of the object by the decoder.
In certain cases, the latent variable might be corrupted during transmission
by losing arbitrary elements of the vector. For instance, when transmitting
such a compressed $1$D latent vector from a remote robot (encoder) to the
server (decoder), some of the elements being transmitted can be lost due to a
communication failure. Meanwhile, there is also a case in which it is
necessary to achieve faster transmission at the cost of incomplete data, e.g.,
by further reducing the dimension of the data representing the latent
variable. In these data loss cases, the decoder should be able to perform
robust 3D reconstruction from the latent variable whose elements are partially
missing.
To accomplish a robust reconstruction, it is desirable to restore the
distorted latent variables. The prior for a latent space can be learned for a
generative model, and then missing-element imputation can be performed using
this prior.
To meet these needs, we propose a method of missing data imputation for 3D
shapes by retrieving missing elements from the prior distributions.
### III-A Prior Distribution for Missing Data Imputation
For the object representation, let $I$ and $\boldsymbol{x}$ denote the
observed 2D image and its 3D shape, respectively; let $\boldsymbol{z}$ be the
$N$-dimensional latent vector transmitted from the encoder. Assume that some
of the elements of $\boldsymbol{z}$ might have been lost during transmission,
or excluded for further compression.
In order to retrieve an accurate 3D shape from such incomplete data
dimensions, an AE or a vanilla VAE can be exploited. When the incomplete
vector is simply fed into the decoder, however, an accurate result cannot be
expected, as the decoder has been trained on the complete latent space. In order
to approximately retrieve the incomplete latent variable, missing elements can
be compensated for by sampling from the latent space. In an AE, however, there
is not enough prior information to leverage for restoring the missing data, as
the AE does not prescribe a prior distribution over the latent space.
Meanwhile, in the case of vanilla VAE, the prior is assumed to be isotropic
Gaussian. Since we assume a specific prior distribution of the latent
variables for the training data, we can approximately have the distributions
of 3D shape $\boldsymbol{x}$ as follows:
$\displaystyle p\left(\boldsymbol{x}\right)=\int p_{\theta}\left(\boldsymbol{x}|\boldsymbol{z}\right)p\left(\boldsymbol{z}\right)d\boldsymbol{z}\simeq\frac{1}{N}\sum_{i=1}^{N}p_{\theta}\left(\boldsymbol{x}|\boldsymbol{z}_{i}\right),\qquad\boldsymbol{z}_{i}\sim p\left(\boldsymbol{z}\right),$ (1)
where
$p\left(\boldsymbol{z}\right)=N\left(\boldsymbol{z};0,\boldsymbol{I}\right)$
represents the latent space of the vanilla VAE. Inspired by this, missing
elements can be retrieved by sampling from $p\left(\boldsymbol{z}\right)$ for
the incomplete latent variable. Here, the average of the sampled latent
variables is zero, as the prior distribution is a zero-mean isotropic
Gaussian. We can therefore approximately perform data imputation for the
latent variable with missing elements as follows:
$\displaystyle z^{imp}_{i}=\begin{cases}0,&\text{if }z^{miss}_{i}=\textit{None}\\ z^{miss}_{i},&\text{else}\end{cases}$ (2)
where $\boldsymbol{z}^{miss}$ is the transmitted vector with missing elements;
$\boldsymbol{z}^{imp}$, the vector retrieved by imputation; and $i$, the
element index of the vector $\boldsymbol{z}$. None denotes that the element is
missing or excluded.
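The zero-filling rule of Eq. (2) is immediate to implement; a minimal NumPy sketch, using NaN as an illustrative stand-in for None, is:

```python
import numpy as np

def impute_zero(z_miss):
    """Vanilla-VAE imputation of Eq. (2): missing entries (NaN) are
    replaced by the prior mean, which is 0 for the isotropic Gaussian."""
    z = np.asarray(z_miss, dtype=float).copy()
    z[np.isnan(z)] = 0.0
    return z

z_miss = np.array([0.7, np.nan, -1.2, np.nan])
z_imp = impute_zero(z_miss)  # [0.7, 0.0, -1.2, 0.0]
```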
In this case, the imputation result only concerns the distribution of the
entire latent space, as it is hard to know the distribution of each
datapoint. For this reason, category-level shape retrieval becomes
challenging. To obtain prior knowledge of the category or instance, we
exploit a multi-modal prior distribution according to the category label of
each object. This prior can be denoted as:
$\displaystyle
p_{\psi}\left(\boldsymbol{z}|l\right)=N\left(\boldsymbol{z};\boldsymbol{\mu}\left(\boldsymbol{l}\right),\boldsymbol{I}\right),$
(3)
where $\boldsymbol{l}$ is the category label of the object. The prior distribution is multi-modal and can be represented as a conditional distribution given the label, as in Eq. (3). Here, $\boldsymbol{\mu}\left(\boldsymbol{l}\right)$ is a function of the label $\boldsymbol{l}$. Then, the target distribution of the 3D shape $p\left(\boldsymbol{x}\right)$ can be bounded as:
$\displaystyle\log p\left(\boldsymbol{x}\right)\geq$ $\displaystyle-KL\left(q_{\phi}\left(\boldsymbol{z}|\boldsymbol{x}\right)||p_{\psi}\left(\boldsymbol{z}|\boldsymbol{l}\right)\right)$ $\displaystyle+\mathbb{E}_{\boldsymbol{z}\sim q_{\phi}}\left[\log p_{\theta}\left(\boldsymbol{x}|\boldsymbol{z}\right)\right].$ (4)
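The inequality in Eq. (4) is the standard evidence lower bound obtained via Jensen's inequality, here sketched with the label-conditioned prior $p_{\psi}\left(\boldsymbol{z}|\boldsymbol{l}\right)$ in place of the standard normal prior:

```latex
\log p(\boldsymbol{x})
  = \log \mathbb{E}_{\boldsymbol{z}\sim q_{\phi}(\boldsymbol{z}|\boldsymbol{x})}
      \left[\frac{p_{\theta}(\boldsymbol{x}|\boldsymbol{z})\,
                  p_{\psi}(\boldsymbol{z}|\boldsymbol{l})}
                 {q_{\phi}(\boldsymbol{z}|\boldsymbol{x})}\right]
  \geq \mathbb{E}_{\boldsymbol{z}\sim q_{\phi}}
      \left[\log p_{\theta}(\boldsymbol{x}|\boldsymbol{z})\right]
     - KL\left(q_{\phi}(\boldsymbol{z}|\boldsymbol{x})\,\middle\|\,
               p_{\psi}(\boldsymbol{z}|\boldsymbol{l})\right).
```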
By defining a category-specific prior distribution, we can choose the closest mode using only the partial elements of a latent variable and perform imputation as follows:
$\displaystyle z^{imp}_{i}=\begin{cases}\mu^{near}_{i},&\text{if }z^{miss}_{i}=\textit{None}\\ z^{miss}_{i},&\text{otherwise}\end{cases}$ (5)
where $\boldsymbol{\mu}^{near}$ is the mean of the mode closest to the latent variable $\boldsymbol{z}^{miss}$. In a VAE, the variational likelihood $q_{\phi}\left(\boldsymbol{z}|\boldsymbol{x}\right)$ approximates the posterior $p\left(\boldsymbol{z}|\boldsymbol{x},\boldsymbol{l}\right)$. Since the networks are trained to fit the variational likelihood to the prior distribution as in Eq. (4), the prior distribution also approximates the posterior to some extent. Consequently, when the mode $p_{\psi}\left(\boldsymbol{z}|\boldsymbol{l}\right)$ is chosen correctly, the conditional posterior $p\left(\boldsymbol{z}|\boldsymbol{x},\boldsymbol{l}\right)$ is also chosen well, which leads to correct imputation. Once the latent variable is properly retrieved using the prior, the 3D shape can be estimated using the decoder trained on the latent space.
Figure 3: The precision-recall curves for 30, 50, 70, and 90% missing ratios. For the 30 and 50% cases, the proposed method outperforms the other approaches. When more than half of the elements are missing, the naïve method of simply using the average of all training datapoints performs best.
### III-B Modal Selection
The key to retrieving the incomplete vector is finding the prior mode corresponding to the original latent variable. Under the mean-field assumption, each dimension of the latent space can be treated as independent. Therefore, for the incomplete latent variable $\boldsymbol{z}$, the optimal label $\boldsymbol{l}^{*}$ corresponding to the original $\boldsymbol{z}$ can be found by comparing the modes of the prior element-wise as follows:
$\displaystyle\boldsymbol{l}^{*}$ $\displaystyle=\operatorname*{argmax}_{\boldsymbol{l}}\prod_{z^{miss}_{i}\neq None}p\left(z_{i}=z^{miss}_{i}|\boldsymbol{l}\right)$ $\displaystyle=\operatorname*{argmin}_{\boldsymbol{l}}\sum_{z^{miss}_{i}\neq None}|z^{miss}_{i}-\mu_{i}\left(\boldsymbol{l}\right)|^{2}$ (6)
In other words, category- or instance-level classification is performed using only the non-missing elements of the latent variable together with the multi-modal prior. Since we assume each mode of the prior is Gaussian, the summed element-wise distances are calculated and compared. For this approach to hold, the modes of the prior distribution in the latent space must be separated from each other by at least a certain distance. To meet this condition, we impose an additional constraint between two different labels $\boldsymbol{l}^{j}$ and $\boldsymbol{l}^{k}$ while training the multi-modal VAE, as in [14, 15, 19]:
$\displaystyle|\mu\left(\boldsymbol{l}^{j}\right)_{i}-\mu\left(\boldsymbol{l}^{k}\right)_{i}|>\sigma,\quad\forall i,j,k,\ j\neq k$ (7)
With the constraint of Eq. (7), each dimension of the latent space follows an independent multi-modal distribution, and each mode becomes distinguishable by label. Consequently, the target mode can be found using only some non-missing elements of the latent variable, and element-wise imputation can be performed from this selected mode.
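The mode selection of Eq. (6) followed by the imputation of Eq. (5) amounts to a nearest-mean search over the non-missing elements. A minimal numpy sketch (illustrative names; the per-label means `mu` would come from the trained prior network):

```python
import numpy as np

def select_mode_and_impute(z_miss, mu):
    """z_miss: list of latent elements, None where missing.
    mu: (num_labels, dim) array of per-label prior means.
    Returns (best label, imputed z) per Eqs. (5)-(6)."""
    observed = np.array([z is not None for z in z_miss])
    z_obs = np.array([0.0 if z is None else z for z in z_miss])
    # Eq. (6): argmin over labels of the summed squared distance,
    # computed on the observed dimensions only.
    d = ((mu[:, observed] - z_obs[observed]) ** 2).sum(axis=1)
    best = int(np.argmin(d))
    # Eq. (5): fill missing elements with the chosen mode's mean.
    z_imp = np.where(observed, z_obs, mu[best])
    return best, z_imp

mu = np.array([[0.0, 0.0, 0.0],
               [2.0, 2.0, 2.0]])   # two modes, 3-D latent space
label, z_imp = select_mode_and_impute([1.9, None, 2.2], mu)
# label -> 1; the missing element is filled with mu[1][1] = 2.0
```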
### III-C Decoder and Prior Distribution
After training has fully converged, we can find the category-specific mode $p_{\psi}\left(\boldsymbol{z}|\boldsymbol{l}\right)$ of an incomplete latent variable and impute the latent variable. Robust 3D reconstruction can then be achieved by the decoder. However, since it is difficult in practice for the variational likelihood $q_{\phi}\left(\boldsymbol{z}|\boldsymbol{x}\right)$ to accurately approximate the posterior $p\left(\boldsymbol{z}|\boldsymbol{x},\boldsymbol{l}\right)$, adapting the decoder to the prior distribution as well lets it cope flexibly with latent variables produced by the imputation process. Therefore, we replace the expectation term in Eq. (4) with the following:
$\displaystyle\mathbb{E}_{\boldsymbol{z}\sim
q_{\phi}\left(\boldsymbol{z}|\boldsymbol{x}\right)}\left[\log
p_{\theta}\left(\boldsymbol{x}|\boldsymbol{z}\right)\right]+\mathbb{E}_{\boldsymbol{z}\sim
p_{\psi}\left(\boldsymbol{z}|\boldsymbol{l}\right)}\left[\log
p_{\theta}\left(\boldsymbol{x}|\boldsymbol{z}\right)\right]$ (8)
With Eq. (8), the decoder also learns to estimate the 3D shape from latent variables sampled from the label-conditioned prior. With this modification, when an incomplete latent variable is supplemented by replacing its missing elements with values from the prior, we obtain more robust 3D reconstruction results. In the actual training phase, the two expectation terms are not optimized simultaneously; one of them is randomly selected at each training iteration.
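The random alternation between the two expectation terms of Eq. (8) might be sketched as follows (a hypothetical numpy sketch; in practice the sampled $\boldsymbol{z}$ would feed the neural decoder):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_decoder_latent(mu_q, sigma_q, mu_prior):
    """At each training iteration, the decoder's input latent is drawn
    either from the variational posterior q_phi (reparameterization
    trick) or from the label-conditioned prior p_psi, chosen at random
    with equal probability, mirroring the alternation over Eq. (8)."""
    eps = rng.standard_normal(mu_q.shape)
    if rng.random() < 0.5:
        return mu_q + sigma_q * eps   # z ~ q_phi(z|x)
    return mu_prior + eps             # z ~ p_psi(z|l) = N(mu(l), I)

z = draw_decoder_latent(np.zeros(4), 0.1 * np.ones(4), np.full(4, 2.0))
```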
## IV Implementation
To implement the proposed model, we use the DarkNet-19 structure [32] as the backbone of our encoder. We construct the encoder by adding one convolutional layer on top of the backbone for the latent variables. We pretrain the backbone network on the ImageNet classification dataset [33]. We use the Adam optimizer [34] with a learning rate of $10^{-4}$. For the entire training process, a multi-resolution augmentation scheme is adopted. Similar to the ideas used in [32, 15], Gaussian blur, HSV saturation, RGB inversion, and random brightness are applied to the 2D images during training; random scaling and translation are also used. For the decoder, we adopt the structure of the 3D generator in [18]. We construct the prior network implementing $\boldsymbol{\mu}\left(\boldsymbol{l}\right)$ in Eq. (3) using 3 dense layers. Dropout is not applied, as this network is part of the generative model.
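A toy version of the prior network described above — 3 dense layers mapping a one-hot category label to the prior mean $\boldsymbol{\mu}\left(\boldsymbol{l}\right)$ of Eq. (3) — could look like this (layer widths and weight initialization are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_prior_net(num_labels, hidden, latent_dim):
    """Weights and biases for 3 dense layers: one-hot label -> mu(l)."""
    shapes = [(num_labels, hidden), (hidden, hidden), (hidden, latent_dim)]
    return [(0.1 * rng.standard_normal(s), np.zeros(s[1])) for s in shapes]

def prior_mu(params, label_onehot):
    h = label_onehot
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:   # ReLU on the two hidden layers only
            h = np.maximum(h, 0.0)
    return h                       # mu(l); no dropout (generative part)

net = make_prior_net(num_labels=12, hidden=64, latent_dim=32)
mu = prior_mu(net, np.eye(12)[3])  # prior mean for category 3
```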
## V Experiments
TABLE I: Classification results of incomplete latent variables (unit: %)

| missing rate | 30% | 50% | 70% | 90% |
|---|---|---|---|---|
| train$\rightarrow$test | 92.22 | 83.80 | 61.90 | 28.17 |
| test$\rightarrow$train | 94.24 | 85.49 | 65.65 | 30.71 |
Figure 4: Examples of 3D shape reconstruction.
To verify the proposed method, we use the Pascal3D dataset [20]. The dataset provides an official train/test split. For more thorough verification, we also evaluate the proposed method on the reversed split; that is, we train our network on the test split and validate it on the train split. While transmitting the latent variable, elements can be lost or rejected at various rates. To simulate this effect, the missing ratios (or probabilities) of elements are set to 30, 50, 70, and 90% in this experiment. Since the dataset also contains images of multi-object scenes, we crop the images using bounding boxes to obtain single-object images. The size of the train and test images is set to $224\times 224$.
The proposed method aims to achieve robust 3D shape reconstruction from a corrupted latent variable whose transmitted elements have been lost or omitted. To handle this, it is important to find the mode corresponding to the object's label by exploiting only the elements that remain from the original vector. In other words, the chance of correct 3D reconstruction increases when label classification (or mode selection) using Eq. (6) succeeds. We evaluate the label classification accuracy by finding the nearest mode with the remaining elements of the latent variable.
We also analyze the 3D reconstruction results of the decoder after performing missing-element imputation. The cases of using an AE and a vanilla VAE are evaluated for comparison. For the VAE, we follow Eq. (2) when imputing missing elements of latent variables. For the AE, since there is no assumption on the latent space, we simply assume the prior distribution is Gaussian, as for the VAE; the mean and variance of the latent variables over all training datapoints are calculated and used as the parameters of this Gaussian distribution.
### V-A Classification
Table I shows the classification results for the two split cases. Classifications are performed using Eq. (6). Since the dimensions are assumed to be independent of each other and each element follows a one-dimensional multi-modal prior, the classification task performs relatively well even when most elements of the latent variable have been lost. When half of the dimensions are lost, the accuracy reaches 83% or more. Even when classification is conducted with only 10% of the elements, the method achieves almost 30% accuracy. This indicates that even when the latent variable does not accurately follow the class-wise multi-modal distribution independently in every dimension, the correct mode for the object's label can be estimated from only a few dimensions of the latent vector. Compared to 3D reconstruction, the classification task shows a higher success rate, as it amounts to regression over a much simpler multinoulli distribution rather than multi-dimensional binary estimation over complex 3D grids.
### V-B Reconstruction
We present the quantitative results of 3D shape reconstruction in Fig. 3. As in the classification task, precision-recall curves are obtained for missing rates of 30, 50, 70, and 90%. The upper row is for the original train-test split, and the lower one is for the reversed split. For the AE and VAE, imputations are performed under the assumption that their prior follows a Gaussian distribution. The proposed method assumes a multi-modal prior but, similarly to the AE or VAE case, the prior can also be assumed unimodal as a simplified variant. In this case, the prior is assumed to be Gaussian, with mean and variance obtained from the means of all modes. We denote this case as mVAE-a; the proposed multi-modal method is denoted as mVAE-s.
For missing rates of 30 and 50%, the proposed method shows better performance than the other methods, and this trend continues up to the 70% missing rate. At 70% or higher, however, mVAE-a, which performs imputation using the average of all modes, performs better. Similar results appear at the 90% missing rate; in this extreme case, naïve methods, such as the AE or methods that simply use the average of all datapoints during imputation, perform relatively better than the proposed method. At this level of substantial damage to the data, however, none of the approaches shows usable performance.
Some qualitative examples of 3D shape estimation are shown in Fig. 4. The top four rows and the bottom four rows show the results for the original train-test split and the reversed split, respectively. For missing rates of 30 and 50%, the results indicate that the proposed method achieves robust reconstruction. We found that the reconstructions become blurred when the missing rate exceeds 70%, consistent with the precision-recall evaluation. In light of this, we manually selected showcase examples in which the proposed method almost completely reconstructs the 3D shape despite the extremely high loss rate of the latent variable.
## VI Conclusion
We propose a missing-data imputation method based on a category-specific multi-modal distribution. When transmitting observations over unstable communication networks, data can be lost or corrupted; in other cases, only partial elements of the data may be transferred to achieve over-compression and real-time transmission. Although the Autoencoder (AE) and Variational Autoencoder (VAE) are standard structures for compressing data, they are not suitable for decoding severely corrupted latent variables: due to the simplicity of their prior distributions, imputing lost elements at the category or instance level is challenging. To achieve category-level imputation and complete 3D shape reconstruction from a 2D image, we exploit a multi-modal prior distribution over the latent space. We determine the mode of a latent variable using only its transmitted elements. In contrast to the vanilla VAE, each mode in the proposed approach carries information about a specific category. By imputing lost elements with variables sampled from the chosen mode, we robustly achieve latent vector retrieval and 3D shape reconstruction.
## Acknowledgement
This work was funded in part by the AI-Assisted Detection and Threat
Recognition Program through the US ARMY ACC-APG-RTP under Contract No.
W911NF1820218, “Leveraging Advanced Algorithms, Autonomy, and Artificial
Intelligence (A4I) to enhance National Security and Defense” and the Air Force
Office of Scientific Research under Award No. FA2386-17-1-4660.
## References
* [1] R. F. Salas-Moreno, R. A. Newcombe, H. Strasdat, P. H. Kelly, and A. J. Davison, “Slam++: Simultaneous localisation and mapping at the level of objects,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2013, pp. 1352–1359.
* [2] P. Parkhiya, R. Khawad, J. K. Murthy, B. Bhowmick, and K. M. Krishna, “Constructing category-specific models for monocular object-slam,” _arXiv preprint arXiv:1802.09292_ , 2018.
* [3] S. Yang and S. Scherer, “Cubeslam: Monocular 3-d object slam,” _IEEE Transactions on Robotics_ , vol. 35, no. 4, pp. 925–938, 2019.
* [4] S. L. Bowman, N. Atanasov, K. Daniilidis, and G. J. Pappas, “Probabilistic data association for semantic slam,” in _2017 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2017, pp. 1722–1729.
* [5] K. Doherty, D. Fourie, and J. Leonard, “Multimodal semantic slam with probabilistic data association,” in _2019 international conference on robotics and automation (ICRA)_. IEEE, 2019, pp. 2419–2425.
* [6] H. Su, C. R. Qi, Y. Li, and L. J. Guibas, “Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2015, pp. 2686–2694.
* [7] Y. Xiang, W. Choi, Y. Lin, and S. Savarese, “Data-driven 3d voxel patterns for object category recognition,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2015, pp. 1903–1911.
* [8] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller, “Multi-view convolutional neural networks for 3d shape recognition,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 945–953.
* [9] S. Bai, X. Bai, Z. Zhou, Z. Zhang, Q. Tian, and L. J. Latecki, “Gift: Towards scalable 3d shape retrieval,” _IEEE Transactions on Multimedia_ , vol. 19, no. 6, pp. 1257–1271, 2017.
* [10] R. Girdhar, D. F. Fouhey, M. Rodriguez, and A. Gupta, “Learning a predictable and generative vector representation for objects,” in _European Conference on Computer Vision_. Springer, 2016, pp. 484–499.
* [11] B. Shi, S. Bai, Z. Zhou, and X. Bai, “Deeppano: Deep panoramic representation for 3-d shape recognition,” _IEEE Signal Processing Letters_ , vol. 22, no. 12, pp. 2339–2343, 2015.
* [12] E. Johns, S. Leutenegger, and A. J. Davison, “Pairwise decomposition of image sequences for active multi-view recognition,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 3813–3822.
* [13] P. Papadakis, I. Pratikakis, T. Theoharis, and S. Perantonis, “Panorama: A 3d shape descriptor based on panoramic views for unsupervised 3d object retrieval,” _International Journal of Computer Vision_ , vol. 89, no. 2, pp. 177–192, 2010.
* [14] H. Yu and B. H. Lee, “A variational feature encoding method of 3d object for probabilistic semantic slam,” in _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2018, pp. 3605–3612.
* [15] H. Yu, J. Moon, and B. Lee, “A variational observation model of 3d object for probabilistic semantic slam,” in _2019 International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 5866–5872.
* [16] J. Wu, Y. Wang, T. Xue, X. Sun, B. Freeman, and J. Tenenbaum, “Marrnet: 3d shape reconstruction via 2.5 d sketches,” in _Advances in neural information processing systems_ , 2017, pp. 540–550.
* [17] J. K. Pontes, C. Kong, S. Sridharan, S. Lucey, A. Eriksson, and C. Fookes, “Image2mesh: A learning framework for single image 3d reconstruction,” _arXiv preprint arXiv:1711.10669_ , 2017.
* [18] J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum, “Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling,” in _Advances in Neural Information Processing Systems_ , 2016, pp. 82–90.
* [19] H. Yu and B. Lee, “Zero-shot learning via simultaneous generating and learning,” in _Advances in Neural Information Processing Systems_ , 2019, pp. 46–56.
* [20] Y. Xiang, R. Mottaghi, and S. Savarese, “Beyond pascal: A benchmark for 3d object detection in the wild,” in _Applications of Computer Vision (WACV), 2014 IEEE Winter Conference on_. IEEE, 2014, pp. 75–82.
* [21] Y. Qu, G. Ostrouchov, N. Samatova, and A. Geist, “Principal component analysis for dimension reduction in massive distributed data sets,” in _Proceedings of IEEE International Conference on Data Mining (ICDM)_ , vol. 1318, no. 1784, 2002, p. 1788.
* [22] J. Ye, R. Janardan, and Q. Li, “Gpca: an efficient dimension reduction scheme for image compression and retrieval,” in _Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining_ , 2004, pp. 354–363.
* [23] A. Rooshenas, H. R. Rabiee, A. Movaghar, and M. Y. Naderi, “Reducing the data transmission in wireless sensor networks using the principal component analysis,” in _2010 Sixth International Conference on Intelligent Sensors, Sensor Networks and Information Processing_. IEEE, 2010, pp. 133–138.
* [24] Y. L. Qiu, H. Zheng, and O. Gevaert, “Genomic data imputation with variational auto-encoders,” _GigaScience_ , vol. 9, no. 8, p. giaa082, 2020.
* [25] R. D. Camino, C. A. Hammerschmidt, and R. State, “Improving missing data imputation with deep generative models,” _arXiv preprint arXiv:1902.10666_ , 2019.
* [26] M. Friedjungová, D. Vašata, M. Balatsko, and M. Jiřina, “Missing features reconstruction using a wasserstein generative adversarial imputation network,” in _International Conference on Computational Science_. Springer, 2020, pp. 225–239.
* [27] Q. Ma, W.-C. Lee, T.-Y. Fu, Y. Gu, and G. Yu, “Midia: exploring denoising autoencoders for missing data imputation,” _Data Mining and Knowledge Discovery_ , vol. 34, no. 6, pp. 1859–1897, 2020.
* [28] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” _arXiv preprint arXiv:1312.6114_ , 2013.
* [29] J. T. McCoy, S. Kroon, and L. Auret, “Variational autoencoders for missing data imputation with application to a simulated milling circuit,” _IFAC-PapersOnLine_ , vol. 51, no. 21, pp. 141–146, 2018.
* [30] B. Shen, L. Yao, and Z. Ge, “Nonlinear probabilistic latent variable regression models for soft sensor application: From shallow to deep structure,” _Control Engineering Practice_ , vol. 94, p. 104198, 2020.
* [31] R. Xie, N. M. Jan, K. Hao, L. Chen, and B. Huang, “Supervised variational autoencoders for soft sensor modeling with missing data,” _IEEE Transactions on Industrial Informatics_ , vol. 16, no. 4, pp. 2820–2828, 2019.
* [32] J. Redmon and A. Farhadi, “Yolo9000: better, faster, stronger,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 7263–7271.
* [33] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in _Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on_. IEEE, 2009, pp. 248–255.
* [34] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _arXiv preprint arXiv:1412.6980_ , 2014.
# Anchor Distance for 3D Multi-Object Distance Estimation from 2D Single Shot
Hyeonwoo Yu and Jean Oh Hyeonwoo Yu and Jean Oh are with the Robotics
Institute of Carnegie Mellon University, Pittsburgh, PA 15213, USA
<EMAIL_ADDRESS>
###### Abstract
Visual perception of the objects in a 3D environment is key to successful performance in autonomous driving and simultaneous localization and mapping (SLAM). In this paper, we present a real-time approach for estimating the distances to multiple objects in a scene using only a single-shot image. Given a 2D Bounding Box (BBox) and object parameters, a 3D distance to the object can be calculated directly using 3D reprojection; however, such methods are prone to significant errors because an error in the 2D detection can be amplified in 3D. It is also challenging to apply such methods in a real-time system due to their computational burden. Traditional multi-object detection methods have been developed for specific tasks such as object segmentation or 2D BBox regression. These methods introduce the concept of anchor BBoxes for elaborate 2D BBox estimation, and predictors are specialized and trained for specific 2D BBoxes. To estimate the distances to 3D objects from a single 2D image, we introduce the notion of anchor distance based on an object’s location and propose a method that applies the anchor distance to a multi-object detector structure. We let the predictors capture a distance prior using the anchor distance and train the network based on this distance, so that each predictor becomes specialized for objects located within a specific distance range. By propagating the distance prior to the predictors via the distance anchor, precise distance estimation and real-time execution can be achieved simultaneously. The proposed method runs at about 30 FPS and shows the lowest RMSE compared to the existing methods.
## I Introduction
Real-time visual perception and understanding of an environment is critical to the success of robotic applications including autonomous driving. Visual Simultaneous Localization and Mapping (SLAM), in particular, is essential for performing navigation and exploration tasks as well as supporting high-level missions that require reliable mobility. Recent SLAM research focuses on two major facets: semantic recognition and spatial understanding [1, 2, 3, 4, 5]. With the advancement of deep neural networks, several approaches have achieved highly accurate semantic recognition by exploiting rich, sophisticated semantic features and various disentangled features such as shape, orientation, or dimensions of 2D and 3D Bounding Boxes (BBoxes) [6, 7, 8, 9, 10, 11]. In this paper, we focus on real-time spatial understanding, including accurate estimation of object locations, specifically addressing the challenge in a monocular camera setting.
The majority of existing work on SLAM systems for autonomous driving relies
heavily on an inverse reprojection method [12, 13, 14, 15] where an optimal
distance is computed by reprojecting a 3D shape or its 3D BBox to a
corresponding 2D BBox. The performance of such a method, however, tends to be
extremely sensitive to that of image segmentation and/or 2D BBox estimation.
Figure 1: Overview of the proposed method. The network estimates the distances of multiple objects from a single image using multiple predictors. Each predictor is specialized for objects located in a specific distance range via the anchor distance. The anchor distance provides the predictors with distance priors, enabling accurate estimation with a simple and fast network structure.
Alternatively, depth estimation methods [16, 17, 18] can be utilized for object-level 3D estimation by taking the median depth of all geometric features or pixels within the detected 2D BBox [4, 13, 19]. In these methods, however, it is hard to discard outlier pixels, especially if an object is severely occluded. Adding a segmentation method such as [20] yields minor performance improvements at the cost of significant computational burden, and estimating the depth of an object’s center from depth values of a partial observation remains challenging.
To achieve robust object depth estimation, we can exploit existing single-shot, multi-object detection approaches, which are specialized for category classification, segmentation, and 2D BBox regression [20, 21, 22, 23]. Based on such methods, various approaches have been proposed for disentangled object representations [24, 25, 26, 27]. Some works on multi-object understanding use a multi-object detector to obtain object regions followed by a post-processing step [28, 9, 29]. Others train their baseline network with a Region Proposal Network (RPN) and perform Region of Interest (RoI) pooling to obtain visual features of objects [30]. These methods deploy additional structures on top of the multi-object detection baseline to estimate various object representations; the increased complexity of the network makes real-time performance challenging [20, 28]. To simplify the network structure, prior knowledge about objects can be exploited. Since the purpose of existing multi-object detectors is 2D BBox regression, anchor BBoxes are used as prior knowledge of the 2D BBox [23, 22, 31]. To utilize this prior knowledge, several methods construct their networks by deploying multiple predictors according to the anchor BBoxes. These methods, however, have limitations in learning object representations such as distance, as they mainly focus on the 2D BBox on the image plane.
To bridge the gap between existing multi-object detection and distance
estimation, we introduce the notion of anchor distance. We then propose a
multi-object distance estimation method specifically designed for both real-
time performance and accurate estimation. Given a 2D image as an input, we aim
to estimate the distance to objects in a 3D space. Our work makes the
following contributions over state-of-the-art methods:
* •
as shown in Fig. 1, we transform the 2D single shot to 3D spatial grids
represented as predictors so that the proposed method achieves real-time
performance without a 2D RoI pooling network;
* •
compared to existing 2D detectors, the proposed method achieves robust
detection results with overlapped objects since objects are detected in 3D
grid space; and
* •
we define and utilize the notion of anchor distance, so that the predictors in our proposed network are specialized and robust to objects within a specific distance range.
Compared to the state-of-the-art method, the proposed method runs about $4$ times faster at 30 FPS, shows competitive results in Absolute Relative (Abs Rel) difference, and achieves outstanding results in RMSE.
## II Related work
In the context of mobile robot navigation, 3D object detection and
localization are compulsory to perform collision avoidance, object-oriented
SLAM or safe navigation in autonomous driving. In this section, we discuss
related works on monocular SLAM specifically focusing on the 3D localization.
Various existing works on this end adopt the idea of inverse perspective
projection [13, 14, 3, 15]. The 2D projection of an object is invertible since
its extent parameters resolve depth ambiguity. That is, by mapping between 2D
BBoxes from an object detection module and 3D BBoxes, these approaches can
estimate the distance in 3D given a 2D input image. Also, in a special setting
such as autonomous driving, we can assume that the height of a camera is known
and all of the objects are placed on the ground. This allows the algorithm to
resolve the scale-ambiguity and estimate object location. These approaches are
still based on 2D BBox regression, the quality of distance estimation is also
bounded by the precision of the 2D BBoxes.
To obtain 2D perception such as 2D BBox regression, an additional object detection method is inevitable. Several convolutional neural network (CNN)-based algorithms have been proposed to perform multi-object detection in real time. These approaches are mainly customized for specific tasks such as detection via 2D BBox regression, object categorization, and segmentation. Because the performance of these approaches depends on the quality of the 2D BBoxes inferred by an object detector, 2D BBox detection results have commonly been used as an evaluation criterion for overall performance [23]. This trend has led to new techniques and network designs that improve 2D BBox regression [22, 31].
Existing multi-object perception techniques mainly focus on 2D BBox regression and category estimation, and their variants estimate disentangled representations such as the orientation, shape, or distance of objects [7, 24, 32]. Most of these methods exploit visual features to learn complex object representations [6, 11, 33, 34, 35]. An additional multi-object detector therefore provides the 2D BBox for RoI pooling in order to obtain visual features of the target region. Other methods modify an existing multi-object detector structure to directly extract features and perform object understanding [36, 24, 20, 30], yet they still deploy an RPN for RoI pooling. Moreover, object understanding tasks with large variation, such as depth estimation, require networks in addition to the baseline structure. These networks incur drawbacks such as an increased memory footprint to cover the non-linearity of object understanding and to perform elaborate estimation [30, 32]. Such methods are effective at representing specific aspects of objects, but their complexity makes them difficult to apply to mobile robot systems in real time.
To relax the network and achieve more robust results, a number of methods provide the networks with prior information about objects using anchors. Techniques for 2D detection exploit anchors over 2D BBoxes [23, 22, 31], and techniques for 3D include orientation and distance as well as the 2D BBox in the anchor [32, 27]. In these methods, however, anchors are still defined by clustering based on the object’s 2D BBox on the image plane. Since various object representations such as location, shape, or orientation are not directly proportional to the projected 2D BBox, anchors arranged by 2D BBox are not well suited to learning object distances. Therefore, we define an anchor distance for 3D object localization and introduce a method of training networks based on these anchors that achieves real-time performance.
## III Approach
In general, when a monocular camera is used to recognize an object, its location coordinates can be estimated using the mapping between the 2D object detection and the dimension and orientation of the corresponding 3D BBox. Assume that the 2D BBox of a detected object and the dimension and orientation of its 3D BBox are given or have been estimated. With the constraint that the 3D BBox fits tightly into the 2D detection box on the input image, the 3D coordinates of the detected object can be calculated. However, estimating 3D coordinates by matching the projection of the 3D BBox to the 2D BBox usually yields inaccurate results due to the sensitivity to the size of the 2D BBox: a small error in the BBox size can cause a substantial error in the estimated distance. Furthermore, this approach adds a significant computational burden, as each side of the 2D BBox can correspond to any of the eight corners of the 3D BBox. Even with the strong constraint that most objects lie on a ground plane, at least 64 configurations must be checked [10]. To address these challenges, we propose an approach that achieves both high accuracy and real-time performance by directly estimating the object distance.
### III-A 3D Coordinate and 2D Bounding Box
Given the 3D direction to the center of an object and its depth (or distance
from the origin), the 3D coordinates are uniquely determined. To estimate the
center of the 3D object, the center of the 2D BBox can be reprojected into 3D
real-world coordinates, yielding a normalized ray direction vector from the
origin toward the center of the 3D object. Hence, the 3D coordinate of an
object is obtained by multiplying this ray direction vector by the estimated
distance. The network could instead be trained to directly estimate all 3D
coordinates of an object. When using a multi-object detector, however, it is
advantageous to estimate the center of an object with the predictors
distributed in a grid form, so computing the coordinate from reprojection is
more accurate than learning the $x$, $y$, and $z$ components of location
directly. We design our network to train on the distance from the origin and
the 2D BBoxes as in [21, 31].
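The reprojection step can be sketched briefly. The snippet below is a minimal
sketch using numpy; the intrinsic matrix `K`, its values, and the function name
are illustrative assumptions, not the paper's code. It back-projects the 2D
BBox center to a unit ray and scales it by the predicted distance:

```python
import numpy as np

def bbox_center_to_3d(cx, cy, distance, K):
    """Recover the 3D object center from the 2D BBox center (cx, cy)
    and the predicted distance from the origin (camera center).

    K is a 3x3 camera intrinsic matrix (illustrative; KITTI ships
    calibration files containing such matrices)."""
    pixel = np.array([cx, cy, 1.0])
    ray = np.linalg.inv(K) @ pixel      # back-project the pixel to a ray
    ray /= np.linalg.norm(ray)          # normalized direction vector
    return ray * distance               # 3D coordinate of the object
```

For a pixel at the principal point, the ray points straight along the optical
axis, so the recovered center lies at (0, 0, distance).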
### III-B Anchor Distance
In order to achieve real-time multi-object distance estimation, our proposed
method is based on a simple multi-object detection structure, namely YOLOv2
[21]. As the existing network only estimates the center and dimensions of a 2D
BBox, we let our network additionally predict the distance of the object from
the origin. One problem, however, remains: the purpose of conventional
detectors is to estimate 2D BBoxes rather than distances to objects, and each
predictor is dedicated to learning BBoxes of a particular size. To reduce this
burden on the network, a 2D anchor BBox is applied to each predictor as prior
knowledge about 2D BBox sizes. This reduces the variation of the 2D BBoxes
that each predictor must predict, resulting in more accurate 2D BBox
estimation.
TABLE I: Variance of the distance of 2D BBox groups and distance groups for the KITTI dataset (unit: $m^{2}$)

# of predictors (groups) | order | 2D BBox grouping | distance grouping
---|---|---|---
2 | 1 | 14.84 | 25.69
  | 2 | 220.12 | 31.76
3 | 1 | 9.07 | 12.26
  | 2 | 33.26 | 8.60
  | 3 | 186.27 | 20.30
5 | 1 | 7.08 | 5.36
  | 2 | 18.68 | 4.27
  | 3 | 49.57 | 3.10
  | 4 | 91.14 | 9.28
  | 5 | 98.21 | 3.55
In order to achieve 3D coordinate estimation of multiple objects, we introduce
the concept of the anchor distance, analogous to the anchor BBox. A simple way
to obtain a distance prior is to define, for each anchor BBox, the average
distance of the objects belonging to that anchor BBox group. Unfortunately,
the size of an anchor BBox is not exactly proportional to the distance of the
object. When an anchor BBox simply carries an average distance, each predictor
can undesirably learn a mapping between 2D BBox size and distance. In other
words, the network still learns 2D BBox regression as its main task, leaving
the burden of distance estimation to each individual predictor, which must
still cover a large range of distances. To address this issue, we define the
anchor distance to train each predictor as follows. With an anchor distance,
each predictor is specialized in estimating the distance of objects in a
specific distance range, instead of being specialized in specific 2D BBoxes.
Similar to obtaining the anchor BBoxes, the distances of objects are grouped
through $k$-means clustering, and each cluster center is defined as an anchor
distance. We compare the variance of the average distance obtained from 2D
BBox clustering with that of the anchor distance from distance clustering for
$k=2,3,5$ in Table I, using the car category of the KITTI dataset [37] as an
example. The predictors are sorted by their corresponding distance in
ascending order. In the case of 2D BBox grouping, the variance of the
long-distance group is greater than that of the short-distance group, because
objects far from the origin have similar 2D BBox sizes. In the case of
distance clustering, on the other hand, the variance is much smaller than that
of 2D BBox clustering for all predictors. Therefore, predictors can produce
more precise estimates, as they infer more consistent distances with the
anchor distance. The network prediction using the anchor distance is given by:
$\displaystyle d_{i}=d^{a}_{i}\exp(t_{i}),\quad i\in\\{0,1,\dots,k-1\\}$ (1)
where $t_{i}$ is an output of the network and $d^{a}_{i}$ is the anchor
distance. Similar to [21, 31], we use an exponential function as the final
activation of the output.
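The clustering and decoding steps above can be sketched as follows. This is a
minimal numpy sketch, not the paper's code: the plain 1-D $k$-means, the
format names, and the function names are illustrative assumptions.

```python
import numpy as np

def anchor_distances(distances, k, fmt="squared", iters=50, seed=0):
    """1-D k-means over object distances to obtain k anchor distances.
    `fmt` selects the clustering space (normal / squared / log),
    mirroring the three formats compared in the paper."""
    transform = {"normal": lambda d: d,
                 "squared": lambda d: d ** 2,
                 "log": np.log}[fmt]
    inverse = {"normal": lambda c: c,
               "squared": np.sqrt,
               "log": np.exp}[fmt]
    x = transform(np.asarray(distances, dtype=float))
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return np.sort(inverse(centers))

def decode_distance(anchor, t):
    """Eq. (1): d_i = d_i^a * exp(t_i)."""
    return anchor * np.exp(t)
```

On a toy set of distances with two well-separated groups, the anchors land on
the group means, and a raw output of $t=0$ decodes to the anchor itself.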
In our method, the dimensions of the 2D BBox have no direct effect on distance
estimation, but the center of the 2D BBox is crucial, as it is used to compute
the 3D coordinate by determining the ray direction. To support 2D BBox
regression, BBox priors can also be defined for our distance grouping. We
could simply define the average BBox of each anchor-distance cluster by taking
the arithmetic mean of the BBox dimensions. However, since our clusters focus
on distance, the BBoxes in a group have large variance in size. For more
accurate BBox regression with anchor distance groups, we instead take an
approximate average BBox that minimizes not the differences of dimensions but
the differences of intersection over union (IoU):
$\displaystyle\left(h^{m}\right)^{2}=\frac{\sum w_{j}h_{j}^{2}}{\sum
w_{j}},\text{ }\text{ }\left(w^{m}\right)^{2}=\frac{\sum h_{j}w_{j}^{2}}{\sum
h_{j}}$ (2)
where $h^{m}$ and $w^{m}$ are the height and width of the average BBox,
respectively, and $h_{j}$ and $w_{j}$ are the height and width of each BBox in
the group. Using this average BBox, the network output for the BBox dimensions
is given as:
$\displaystyle h_{i}=h^{m}_{i}\exp\left(u_{i}\right),\quad
w_{i}=w^{m}_{i}\exp\left(v_{i}\right),\quad i\in\\{0,1,\dots,k-1\\}$ (3)
where $u$ and $v$ are outputs of the network. For the center of the 2D BBox,
we follow the similar settings of [21].
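Equation (2) translates directly into code. The sketch below (numpy; the
function name is a hypothetical helper, not from the paper) computes the
IoU-motivated average BBox of a cluster as a weighted root-mean-square of the
member dimensions rather than an arithmetic mean:

```python
import numpy as np

def average_bbox(heights, widths):
    """Eq. (2): the average BBox (h_m, w_m) of one distance cluster,
    chosen to minimize IoU differences rather than dimension differences."""
    h = np.asarray(heights, dtype=float)
    w = np.asarray(widths, dtype=float)
    h_m = np.sqrt((w * h ** 2).sum() / w.sum())   # (h^m)^2 = sum(w h^2) / sum(w)
    w_m = np.sqrt((h * w ** 2).sum() / h.sum())   # (w^m)^2 = sum(h w^2) / sum(h)
    return h_m, w_m
```

For a cluster containing a single box (or identical boxes) the average is the
box itself; for mixed sizes the weighted RMS favors the boxes that would
dominate the IoU.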
Figure 2: Distance estimation error along the $x$ and $z$ axes, which denote
the horizontal location and depth of the object. For the other methods, the
estimation error grows as the distance of the object increases. The proposed
method shows a consistent error independent of the distance from the origin,
as it exploits the anchor distance.
### III-C Predictors
With the anchor distance, we provide prior knowledge of distance to our
network. Each anchor distance is assigned to one predictor of the network, so
each predictor is specialized and trained for objects near a specific
distance. As the variance of the distance inferred by a predictor decreases,
so does the complexity of the network. The predictors can thus construct a 3D
environment without additional structures such as a 3D RPN or 3D convolutional
layers.
Existing multi-object detectors obtain 2D anchor BBoxes by clustering 2D
BBoxes, using either the BBox dimensions or IoU. Likewise, the anchor distance
can be obtained in various formats such as normal, squared, or log-scaled; in
our work we apply all three to obtain anchor distances and train the network.
During training, an existing multi-object detector using anchor BBoxes selects
the predictor that infers the BBox closest to the target BBox and assigns that
target to it. Similarly, the proposed method learns the distance of the target
object by selecting and training the predictor whose inference is closest to
the target distance. Note that the same distance format used for obtaining the
anchor distance is also used when comparing the target distance and the
predictor's estimate during training.
At the beginning of training, we found that each predictor's inference
fluctuates strongly while it learns distances with large variation. In this
case, objects are assigned to predictors inconsistently, regardless of the
anchor distance. Therefore, during the training phase, we use the predictor's
anchor distance rather than its estimated result for predictor selection.
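The training-phase assignment rule can be sketched as follows (numpy; the
function name is an illustrative assumption, and the anchor values in the
usage note are the squared-format anchors listed in Table III):

```python
import numpy as np

def assign_predictor(target_distance, anchor_dists, fmt="squared"):
    """Pick the predictor whose *anchor* distance is closest to the
    target, compared in the same format used for clustering.
    Anchors, not the fluctuating estimates, are used during training."""
    f = {"normal": lambda d: d,
         "squared": lambda d: d ** 2,
         "log": np.log}[fmt]
    diffs = np.abs(f(np.asarray(anchor_dists, dtype=float)) - f(target_distance))
    return int(np.argmin(diffs))
```

With the $k=5$ squared-format anchors of Table III, for example, a car at
30 m is assigned to the second predictor (index 1).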
### III-D Training Loss
In the proposed method, the distance of an object from the origin is
estimated. Since the ray vector toward the center of an object is obtained
from its 2D BBox, all three components of the 3D coordinate can be recovered
from the distance from the origin (or depth). It would therefore be possible
to train $x$, $y$, and $z$ separately. However, we assume that the 2D BBox and
the 3D location are independent and learn the 2D BBox and the distance
separately. For 2D BBox training, we adopt the CIoU loss [38, 31]. For depth
estimation, the $L_{1}$ loss is commonly used, but in our work we apply the
$L_{2}$ loss.
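One plausible reading of the distance term of this loss is sketched below.
This is an assumption for illustration, not the paper's exact code: whether
the $L_{2}$ penalty is applied to the decoded distance (as here) or to the raw
output $t$ is our guess.

```python
import math

def distance_loss(t, target_d, anchor_d):
    """L2 penalty on the decoded distance: the raw output t is first
    decoded via Eq. (1), d = d_a * exp(t), then compared to the target."""
    d = anchor_d * math.exp(t)
    return (d - target_d) ** 2
```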
## IV Implementation and Training Details
To implement the proposed observation model, we use the darknet19 structure
[21] as the encoder backbone. We construct the encoder by adding 3
convolutional layers with 1,024 filters each, followed by one convolutional
layer, on top of the backbone. The final convolutional layer has
$k\times\left(4+1\right)$ filters: $4$ for the 2D BBox dimensions and center,
and $1$ for the distance. Each predictor estimates a 2D BBox and the distance
of the object from the origin. We pretrain the backbone on the ImageNet
classification dataset and use the Adam optimizer with a learning rate of
$10^{-4}$. For all training processes, a multi-resolution augmentation scheme
is adopted. Similar to [21, 8], Gaussian blur, HSV saturation, RGB inversion,
and random brightness are applied to the 2D images during training. Random
scaling and translation are not used, in order to preserve the 3D coordinates
of objects.
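The head layout implies a simple per-cell decoding step. The sketch below
(numpy; the channel ordering (cx, cy, u, v, t) and the function name are
assumptions for illustration) decodes the $k\times(4+1)$ raw outputs of one
grid cell using Eqs. (1) and (3):

```python
import numpy as np

def decode_cell(raw, anchor_d, avg_h, avg_w):
    """Decode one grid cell's raw outputs of shape (k, 5) into
    (cx, cy, h, w, d) per predictor.

    Assumed channel order: BBox center terms, then the exponents
    u, v for dimensions (Eq. 3) and t for distance (Eq. 1)."""
    k = raw.shape[0]
    out = np.empty((k, 5))
    out[:, 0:2] = raw[:, 0:2]                             # center terms as in [21]
    out[:, 2] = np.asarray(avg_h) * np.exp(raw[:, 2])     # h_i = h_i^m * exp(u_i)
    out[:, 3] = np.asarray(avg_w) * np.exp(raw[:, 3])     # w_i = w_i^m * exp(v_i)
    out[:, 4] = np.asarray(anchor_d) * np.exp(raw[:, 4])  # d_i = d_i^a * exp(t_i)
    return out
```

With all raw outputs at zero, each predictor simply reproduces its priors: the
average BBox dimensions and the anchor distance.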
## V Experiments
We evaluate the proposed method in various aspects. As with traditional
multi-object detectors, estimation becomes more precise as the number of
predictors increases; in our experiments we train the network with various
numbers of predictors ($2$, $3$, $5$, and so on). We compare our method to
existing methods based on Faster R-CNN with an RPN. A depth estimation method
is also compared using ground-truth 2D BBoxes, taking the median depth within
the 2D BBox as the depth of the detected object; for this baseline we choose
[16], which proposes ordinal regression to account for depth intervals.
Additionally, we implement several methods, including ours, for estimating
object distance: 1) 3D proj: we implement a method that calculates distance by
projecting the 3D BBox in 3D space onto the 2D BBox on the image plane. The
network shares the same backbone as our proposed method and is trained on the
dimensions of the 2D and 3D BBoxes and the orientations of objects. 2) anchor
BBox: similar to the existing detector, we evaluate a method that uses anchor
BBoxes and the average distance of groups obtained from 2D BBox clustering;
the average distance is the arithmetic mean. We also evaluate the case without
any distance prior. 3) proposed method: the proposed method using the anchor
distance and its training scheme. For obtaining the anchor distance by
distance clustering, normal, squared, and log-scale formats are used for 3
individual models. For each model, the same format used for the anchor
distance is also used for training: when choosing the predictor for a target
object, the target distance and the predictors' estimated distances are
compared using the same distance format used for clustering.
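The median-depth baseline described above is straightforward to sketch
(numpy; the function name and the (x1, y1, x2, y2) BBox convention are
illustrative assumptions):

```python
import numpy as np

def bbox_median_depth(depth_map, bbox):
    """Baseline: take the median of the predicted depth map inside the
    ground-truth 2D BBox as the depth of the detected object."""
    x1, y1, x2, y2 = bbox
    return float(np.median(depth_map[y1:y2, x1:x2]))
```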
For all experiments, we use the car category in the KITTI 3D object detection
dataset. Since each grid in our network has multiple predictors, for
evaluation we choose the predictor which estimates the highest 3D IoU to the
ground truth.
### V-A Relation Between Object Distance and Error
We compare the results of 3D proj, anchor BBox with and without average
distance, and the proposed method in Fig. 2, plotting the distance error of an
object against its distance from the origin. For the anchor BBox method and
the proposed method, we set $k=5$; for the proposed method we use the squared
distance format. Results along the $x$ and $z$ axes are shown, which denote
the horizontal position and the depth of the object, respectively. We omit the
$y$-axis results, since the heights of cars vary little compared to their
depths, so the $y$-axis errors are significantly smaller than those of the
other axes. As shown in the graph, the 3D proj estimates become significantly
less accurate as objects are located further away: the further away objects
are, the more similar their 2D BBox sizes become, so distance inference is
highly sensitive to small errors in the 2D BBoxes of distant objects. Compared
to 3D proj, the methods using anchor BBoxes infer distance more robustly.
However, since the distances of objects far from the camera vary widely,
estimating large distances without substantial error remains challenging.
From these results we conclude that the distance and the 2D BBox of an object
are not directly proportional, though a slight correlation exists. When the
anchor distance is used, the estimation results show a uniform error
regardless of the object's distance.
TABLE II: Metric evaluation of object depth estimation on the KITTI dataset. The two $\sigma$ columns: higher is better; the four error columns: lower is better.

Method | FPS | # of predictors | $\sigma<1.25$ | $\sigma<1.25^{2}$ | Abs Rel | Sqr Rel | RMSE | RMSElog
---|---|---|---|---|---|---|---|---
SVR [39] | - | - | 0.345 | 0.595 | 1.494 | 47.748 | 18.970 | 1.494
IPM [40] | - | - | 0.701 | 0.898 | 0.497 | 35.924 | 15.415 | 0.451
Zhu et al. (ResNet50) [28] | $<15$ | - | 0.796 | 0.924 | 0.188 | 0.843 | 4.134 | 0.256
Zhu et al. (VggNet16) [28] | | - | 0.848 | 0.934 | 0.161 | 0.619 | 3.580 | 0.228
Zhang et al. (MaskRCNN[ResNet50]) [30] | $<7$ | - | 0.988 | - | 0.051 | - | 2.103 | -
Zhang et al. (MaskRCNN[ResNet50] + addons) [30] | | - | 0.992 | - | 0.049 | - | 1.931 | -
DORN (depth map estimation) [16] | - | - | 0.883 | 0.934 | 0.190 | 1.153 | 4.802 | 0.287
2D anchor BBox w/o distance prior | $<\boldsymbol{35}$ | 3 | 0.906 | 0.977 | 0.103 | 0.547 | 4.475 | 0.167
 | | 5 | 0.911 | 0.981 | 0.096 | 0.474 | 4.225 | 0.157
 | | 7 | 0.926 | 0.984 | 0.085 | 0.410 | 3.727 | 0.145
2D anchor BBox + average distance | | 3 | 0.914 | 0.982 | 0.099 | 0.491 | 4.196 | 0.159
 | | 5 | 0.923 | 0.984 | 0.092 | 0.437 | 3.911 | 0.152
Ours (normal anchor distance) | | 3 | 0.949 | 0.988 | 0.094 | 0.344 | 3.401 | 0.144
 | | 5 | 0.968 | 0.990 | 0.084 | 0.230 | 2.527 | 0.131
 | | 9 | 0.971 | 0.989 | 0.076 | 0.155 | 1.770 | 0.124
Ours (log-scale anchor distance) | | 3 | 0.931 | 0.987 | 0.098 | 0.401 | 3.903 | 0.149
 | | 5 | 0.957 | 0.989 | 0.084 | 0.281 | 3.182 | 0.133
 | | 9 | 0.972 | 0.990 | 0.073 | 0.150 | 1.915 | 0.117
Ours (squared anchor distance) | | 3 | 0.952 | 0.988 | 0.092 | 0.313 | 2.936 | 0.142
 | | 5 | 0.962 | 0.989 | 0.084 | 0.220 | 2.080 | 0.134
 | | 9 | 0.970 | 0.989 | 0.079 | 0.165 | 1.719 | 0.127
### V-B Depth Estimation and Anchor Distance
We report the metric evaluation of the $z$-coordinate (depth) estimation of
objects in Table II, following the metric definitions in [28]. Compared to the
methods based on an RPN [28, 30], our method achieves better RMSE at a
substantially higher frame rate. Using depth estimation directly for object
detection showed degraded performance compared to the other methods, as it
hardly accounts for overlapping or occluded objects.
To validate the anchor distance, we also evaluate the 2D anchor BBox methods
with and without the average distance. As the number of predictors increases,
the burden on each predictor decreases and the network achieves more robust
estimation; in other words, more predictors yield more accurate inference.
This trend is more pronounced when the anchor distance is applied, in which
case the network performs significantly better than when the predictors simply
focus on 2D BBoxes.
The proposed method performs best with the squared format and worst with the
log-scale format. We found that the squared distance format covers the largest
range, as shown in Table III. With a large range of anchor distances, the
network can efficiently handle objects distributed over various distances.
TABLE III: Anchor distance for different distance formats (unit: $m$)

order of predictors | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
anchor BBox (avg. dist.) | 7.73 | 13.59 | 23.83 | 33.81 | 52.52
anchor distance (normal) | 11.20 | 23.18 | 35.30 | 49.50 | 66.21
anchor distance (log-scale) | 7.60 | 15.17 | 24.78 | 37.59 | 57.51
anchor distance (squared) | 17.49 | 32.86 | 45.27 | 57.41 | 71.52
Meanwhile, in the case of the 2D BBox with average distance, the prior
distance is most similar to that of the log-scale format. Conversely,
log-scale distance grouping yields the average 2D BBox most similar to that of
2D BBox grouping; we display the 2D BBoxes of each grouping in Table IV.
TABLE IV: Comparison of 2D BBox dimensions for the KITTI dataset (unit: pixel)

method \ ($k$, order) | (2,1) | (2,2) | (3,1) | (3,2) | (3,3) | (5,1) | (5,2) | (5,3) | (5,4) | (5,5)
---|---|---|---|---|---|---|---|---|---|---
anchor box (IoU) | 151/268 | 51/86 | 164/296 | 83/144 | 39/64 | 175/321 | 108/183 | 57/104 | 37/61 | 24/35
anchor dist (log-scale) | 139/226 | 37/67 | 156/246 | 59/103 | 29/53 | 181/273 | 104/178 | 57/99 | 36/67 | 23/41
anchor dist (normal) | 133/219 | 30/55 | 141/229 | 42/75 | 24/42 | 156/246 | 63/111 | 38/69 | 27/49 | 19/32
anchor dist (squared) | 129/215 | 24/43 | 134/220 | 33/60 | 21/36 | 140/227 | 42/75 | 29/54 | 22/39 | 17/29
Therefore, we conclude that the log-scale distance format is the most closely
related to the 2D BBox, but using the log-scale distance or the 2D BBox is not
sufficient for the best performance, since the 2D BBox is not directly
proportional to the distance. In other words, grouping by log-scale distance
or by 2D BBox to obtain a distance prior is not effective; for 3D localization
tasks, it is better to define anchors of 3D location based on the distance
itself rather than on the 2D projection.
### V-C Estimation Error and Execution Time
The proposed method performs significantly better on RMSE than the others, but
on the Abs Rel metric it does not outperform the existing methods that use
complex structures such as ResNet or Mask R-CNN. In other words, although our
method is slightly inferior to previous methods for objects close to the
origin, it is more robust for objects far from the origin. Although the
structure of the proposed network is simpler than the others, our anchor
distance and training scheme consistently achieve accurate estimation
regardless of object distance.
We also report the frame rate (FPS) of each method. In [28], the 2D BBox
regression is assumed to be given in advance, so we approximate their FPS by
assuming that YOLOv2 is used. In [30], the FPS is under 7, as the baseline
Mask R-CNN [20] runs at 7 FPS and the method deploys an additional network
structure for distance estimation. The proposed network, based on YOLOv2 [21],
takes only about 0.03 seconds for the entire estimation process, approaching
35 FPS. This execution time is independent of the number of predictors, as
adding one predictor is equivalent to adding $\left(4+1\right)$ convolutional
filters with only $\left(3\times 3\right)$ parameters each. The method with 3D
projection takes an additional 0.04 seconds after estimating the 2D BBox, 3D
BBox, and orientation, which takes 0.03 seconds.
Figure 3: RMSE and FPS of the compared methods. Since our method is based on a
real-time detector, it achieves a high FPS. Adopting the anchor distance as
prior information compensates for the simple network structure, achieving
faster and better performance than the others.
We display the relation between RMSE error and FPS of various methods in Fig.
3.
For qualitative analysis, we show visualization examples in Fig. 4 in a
bird's-eye view. For 3D proj, we use the 2D BBox regression results from the
network with $k=5$; this network shares the structure of our proposed method
and additionally estimates the orientations and dimensions of the 2D and 3D
BBoxes, which are used for calculating distance. Here, we use the squared
format for the proposed method. For 3D proj, the distance estimation error
increases because the 2D BBox sizes become similar as objects recede. For the
proposed method, the larger the number of predictors $k$, the more accurate
the estimated results.
Figure 4: Example visualizations in a bird's-eye view. We compare the results
of 3D projection with ours for $k=2,3,5$. Since the 3D projection method
depends heavily on the 2D bounding box and orientation, it produces incorrect
results for distant or occluded objects. The proposed method performs better
with multiple predictors, as more anchor distances allow the network to
produce estimates with low non-linear complexity.
## VI Conclusion
We propose a multi-object distance estimation method using the anchor
distance. Conventional methods based on multi-object detection train their
predictors on 2D Bounding Boxes (BBoxes), while other multi-object distance
estimation techniques rely heavily on complex structures for sophisticated
distance estimation. The proposed method achieves robust estimation and
real-time performance by selecting and training the predictors of the network
based on object distance. To build a prior, we define the anchor distance by
clustering objects by their distance in various formats, such as squared or
log-scaled. The anchor distance gives predictors strong prior knowledge of
distance, and each predictor is dedicated to learning objects in a specific
distance range according to its anchor distance. Because the proposed method
trains the network based on distance, it achieves more accurate estimation.
Traditional methods that calculate distance by projecting the 3D BBox onto the
2D BBox require a large amount of computation, so an increased number of
objects tends to slow estimation at execution time. Using the anchor distance
as a prior, the proposed approach yields a computationally concise network and
performs single-shot, real-time, multi-object detection even for an
arbitrarily large number of objects.
## Acknowledgement
We would like to thank Jihoon Moon, who gave us insightful advice. This work
is supported in part by the Air Force Office of Scientific Research under
award number FA2386-17-1-4660.
## References
* [1] R. F. Salas-Moreno, R. A. Newcombe, H. Strasdat, P. H. Kelly, and A. J. Davison, “Slam++: Simultaneous localisation and mapping at the level of objects,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2013, pp. 1352–1359.
* [2] P. Parkhiya, R. Khawad, J. K. Murthy, B. Bhowmick, and K. M. Krishna, “Constructing category-specific models for monocular object-slam,” _arXiv preprint arXiv:1802.09292_ , 2018.
* [3] S. Yang and S. Scherer, “Cubeslam: Monocular 3-d object slam,” _IEEE Transactions on Robotics_ , vol. 35, no. 4, pp. 925–938, 2019.
* [4] S. L. Bowman, N. Atanasov, K. Daniilidis, and G. J. Pappas, “Probabilistic data association for semantic slam,” in _2017 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2017, pp. 1722–1729.
* [5] D. F. Kevin Doherty and J. Leonard, “Multimodal semantic slam with probabilistic data association,” in _2019 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2019.
* [6] J. Wu, Y. Wang, T. Xue, X. Sun, B. Freeman, and J. Tenenbaum, “Marrnet: 3d shape reconstruction via 2.5 d sketches,” in _Advances in neural information processing systems_ , 2017, pp. 540–550.
* [7] H. Yu and B. H. Lee, “A variational feature encoding method of 3d object for probabilistic semantic slam,” in _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2018, pp. 3605–3612.
* [8] H. Yu, J. Moon, and B. Lee, “A variational observation model of 3d object for probabilistic semantic slam,” in _2019 International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 5866–5872.
* [9] Y. Xiang, W. Choi, Y. Lin, and S. Savarese, “Data-driven 3d voxel patterns for object category recognition,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2015, pp. 1903–1911.
* [10] A. Mousavian, D. Anguelov, J. Flynn, and J. Košecká, “3d bounding box estimation using deep learning and geometry,” in _Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on_. IEEE, 2017, pp. 5632–5640.
* [11] D. Proklova, D. Kaiser, and M. V. Peelen, “Disentangling representations of object shape and object category in human visual cortex: The animate–inanimate distinction,” _Journal of cognitive neuroscience_ , vol. 28, no. 5, pp. 680–692, 2016.
* [12] A. Mousavian, D. Anguelov, J. Flynn, and J. Kosecka, “3d bounding box estimation using deep learning and geometry,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 7074–7082.
* [13] D. P. Frost, O. Kähler, and D. W. Murray, “Object-aware bundle adjustment for correcting monocular scale drift,” in _2016 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2016, pp. 4770–4776.
* [14] D. Frost, V. Prisacariu, and D. Murray, “Recovering stable scale in monocular slam using object-supplemented bundle adjustment,” _IEEE Transactions on Robotics_ , vol. 34, no. 3, pp. 736–747, 2018.
* [15] E. Sucar and J.-B. Hayet, “Bayesian scale estimation for monocular slam based on generic object detection for correcting scale drift,” in _2018 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2018, pp. 1–7.
* [16] H. Fu, M. Gong, C. Wang, K. Batmanghelich, and D. Tao, “Deep ordinal regression network for monocular depth estimation,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 2002–2011.
* [17] J. M. Facil, B. Ummenhofer, H. Zhou, L. Montesano, T. Brox, and J. Civera, “Cam-convs: camera-aware multi-scale convolutions for single-view depth,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2019, pp. 11 826–11 835.
* [18] D. Wofk, F. Ma, T.-J. Yang, S. Karaman, and V. Sze, “Fastdepth: Fast monocular depth estimation on embedded systems,” in _2019 International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 6101–6108.
* [19] F. Zhong, S. Wang, Z. Zhang, and Y. Wang, “Detect-slam: Making object detection and slam mutually beneficial,” in _2018 IEEE Winter Conference on Applications of Computer Vision (WACV)_. IEEE, 2018, pp. 1001–1010.
* [20] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 2961–2969.
* [21] J. Redmon and A. Farhadi, “Yolo9000: better, faster, stronger,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 7263–7271.
* [22] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in _European conference on computer vision_. Springer, 2016, pp. 21–37.
* [23] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in _Advances in neural information processing systems_ , 2015, pp. 91–99.
* [24] B. Tekin, S. N. Sinha, and P. Fua, “Real-time seamless single shot 6d object pose prediction,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 292–301.
* [25] Y. Zhou and O. Tuzel, “Voxelnet: End-to-end learning for point cloud based 3d object detection,” _arXiv preprint arXiv:1711.06396_ , 2017.
* [26] A. Kundu, Y. Li, and J. M. Rehg, “3d-rcnn: Instance-level 3d object reconstruction via render-and-compare,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 3559–3568.
# Endpoint $\ell^{r}$ improving estimates for Prime averages
Michael T. Lacey School of Mathematics, Georgia Institute of Technology,
Atlanta GA 30332, USA<EMAIL_ADDRESS>, Hamed Mousavi School of
Mathematics, Georgia Institute of Technology, Atlanta GA 30332, USA and
Yaghoub Rahimi School of Mathematics, Georgia Institute of Technology,
Atlanta GA 30332, USA
###### Abstract.
Let $\Lambda$ denote von Mangoldt’s function, and consider the averages
$\displaystyle A_{N}f(x)$ $\displaystyle=\frac{1}{N}\sum_{1\leq n\leq
N}f(x-n)\Lambda(n).$
We prove sharp $\ell^{p}$-improving for these averages, and sparse bounds for
the maximal function. The simplest inequality is that for sets
$F,G\subset[0,N]$ there holds
$N^{-1}\langle A_{N}\mathbf{1}_{F},\mathbf{1}_{G}\rangle\ll\frac{\lvert
F\rvert\cdot\lvert G\rvert}{N^{2}}\Bigl{(}\operatorname{Log}\frac{\lvert
F\rvert\cdot\lvert G\rvert}{N^{2}}\Bigr{)}^{t},$
where $t=2$, or assuming the Generalized Riemann Hypothesis, $t=1$. The
corresponding sparse bound is proved for the maximal function
$\sup_{N}A_{N}\mathbf{1}_{F}$. The inequalities for $t=1$ are sharp. The proof
depends upon the Circle Method, and an interpolation argument of Bourgain.
MTL: The author is a 2020 Simons Fellow. Research supported in part by grant
from the US National Science Foundation, DMS-1949206
###### Contents
1. 1 Introduction
2. 2 Notation
3. 3 Approximations of the Kernel
4. 4 Properties of the High, Low and Exceptional Terms
1. 4.1 The High Terms
2. 4.2 The Low Terms
3. 4.3 The Exceptional Term
5. 5 Proofs of the Fixed Scale and Sparse Bounds
6. 6 Proof of Corollary 1.9
## 1\. Introduction
We consider discrete averages over the prime integers. The averages are
weighted by the von Mangoldt function.
(1.1) $\displaystyle A_{N}f(x)$ $\displaystyle=\frac{1}{N}\sum_{1\leq n\leq
N}f(x-n)\Lambda(n)$ (1.2) $\displaystyle\Lambda(n)$
$\displaystyle=\begin{cases}\log(p)&n=p^{a},\textup{$p$ prime}\\\
0&\textup{Otherwise}.\end{cases}$
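As a concrete numerical illustration (not part of the paper's argument), the weight $\Lambda$ and the average $A_{N}f$ can be computed directly. The following is a minimal Python sketch; the function names are ours.

```python
from math import log

def von_mangoldt(n):
    # Lambda(n) = log p if n = p^a for a prime p, and 0 otherwise.
    if n < 2:
        return 0.0
    p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest (prime) divisor
    m = n
    while m % p == 0:
        m //= p
    return log(p) if m == 1 else 0.0

def average(f, x, N):
    # A_N f(x) = N^{-1} * sum_{1 <= n <= N} f(x - n) * Lambda(n)
    return sum(f(x - n) * von_mangoldt(n) for n in range(1, N + 1)) / N
```

By the Prime Number Theorem, applying $A_{N}$ to the constant function $1$ returns $\psi(N)/N$, which tends to $1$.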
Our interest is in _scale free_ $\ell^{r}$ improving estimates for these
averages. The question presents itself in different forms.
For an interval $I$ in the integers and function $f\;:\;I\to\mathbb{C}$, set
(1.3) $\langle f\rangle_{I,r}=\Bigl{[}\lvert I\rvert^{-1}\sum_{x\in I}\lvert
f(x)\rvert^{r}\Bigr{]}^{1/r}.$
If $r=1$, we will suppress the index in the notation. And, set
$\operatorname{Log}x=1+\lvert\log x\rvert$, for $x>0$.
The kind of estimate we are interested in takes the following form, in the
simplest instance. What is the ‘smallest’ function
$\psi\;:\;[0,1]\to[1,\infty)$ so that for all integers $N$ and indicator
functions $f,g\;:\;I\to\\{0,1\\}$, there holds
$N^{-1}\langle A_{N}f,g\rangle\leq\langle f\rangle_{I}\langle
g\rangle_{I}\psi(\langle f\rangle_{I}\langle g\rangle_{I}).$
That is, the right hand side is independent of $N$, making it scale-free. We
specified that $f,g$ be indicator functions as that is sometimes the sharp
form of the inequality. Of course it is interesting for arbitrary functions,
but the bound above is not homogeneous, so not the most natural estimate in
that case.
The points of interest in these two results arise from, on the one hand, the
distinguished role of the prime integers. On the other hand, endpoint results
are of significant interest in Harmonic Analysis, as the techniques which
apply are the sharpest possible. In this instance, the sharp methods depend
very much on the prime numbers.
For the primes, we expect the Riemann Hypothesis to be relevant. We state
unconditional results, and those that depend upon the Generalized Riemann
Hypothesis (GRH). Note that according to GRH all zeroes in the critical strip
$0<\mathrm{Re}(s)<1$ of an arbitrary $L$-function $L(f,s)$ are on the critical line
$\mathrm{Re}(s)=\frac{1}{2}$. Under GRH, the primes are equitably distributed mod $q$,
with very good error bounds. Namely,
(1.4) $\psi(x,q,a)=\sum_{\begin{subarray}{c}n<x\\\ n\equiv
a\pmod{q}\end{subarray}}\Lambda(n)=\frac{x}{\phi(q)}+O(x^{\frac{1}{2}}\log^{2}(q)).$
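The equidistribution statement can be sanity-checked numerically. The following Python sketch (our own, with a brute-force totient) compares $\psi(x,q,a)$ to the main term $x/\phi(q)$.

```python
from math import gcd, log

def von_mangoldt(n):
    # Lambda(n) = log p if n is a prime power p^a, else 0
    if n < 2:
        return 0.0
    p = next(d for d in range(2, n + 1) if n % d == 0)
    m = n
    while m % p == 0:
        m //= p
    return log(p) if m == 1 else 0.0

def psi(x, q, a):
    # psi(x, q, a) = sum of Lambda(n) over n < x with n congruent to a mod q
    return sum(von_mangoldt(n) for n in range(1, x) if n % q == a % q)

def totient(q):
    # Euler's phi function, by direct count
    return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)
```

For $x=5000$ and $q=4$, both residue classes $a=1,3$ carry weight close to $x/\phi(q)=2500$, as (1.4) predicts.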
###### Theorem 1.5.
There is a constant $C$ so that the following holds. For integers $N>30$, and an
interval $I$ of length $N$, the inequality below holds for all functions
$f=\mathbf{1}_{F}$ and $g=\mathbf{1}_{G}$ with $F,G\subset I$
(1.6) $N^{-1}\langle A_{N}f,g\rangle\leq C\langle f\rangle_{I}\langle
g\rangle_{I}\times\begin{cases}\operatorname{Log}(\langle f\rangle_{I}\langle
g\rangle_{I})&\textup{assuming GRH}\\\ (\operatorname{Log}(\langle
f\rangle_{I}\langle g\rangle_{I}))^{2}&\textup{otherwise}\end{cases}$
The inequality assuming GRH is sharp, as can be seen by taking $f$ to be the
indicator of the primes, and $g=\mathbf{1}_{0}$. It is also desirable to have
a form of the inequality above that holds for the maximal function
$A^{\ast}f=\sup_{N}\lvert A_{N}f\rvert.$
Our second main theorem is a sparse bound for $A^{\ast}$. The definition of a
sparse bound is postponed to Definition 5.3. Remarkably, the inequality takes
the same general form, although we consider a substantially larger operator.
###### Theorem 1.7.
For functions $f=\mathbf{1}_{F}$ and $g=\mathbf{1}_{G}$, for finite sets
$F,G\subset\mathbb{Z}$, there is a sparse collection of intervals
$\mathcal{S}$ so that we have
(1.8) $\langle A^{\ast}f,g\rangle\lesssim\sum_{I\in\mathcal{S}}\langle
f\rangle_{I}\langle g\rangle_{I}(\operatorname{Log}\langle f\rangle_{I}\langle
g\rangle_{I})^{t}\lvert I\rvert,$
where we can take $t=1$ under GRH, and otherwise we take $t=2$.
The sparse bound is very strong, implying weighted inequalities for the
maximal operator $A^{\ast}$. These inequalities could be further quantified,
but we do not detail those consequences, as they are essentially known. See
[MR3897012]. One way to see that the sparse bound is stronger is that the
following inequalities are a corollary.
###### Corollary 1.9.
The maximal operator $A^{\ast}$ satisfies these inequalities, where $t=1$
under GRH, and $t=2$ otherwise. First, a sparse bound with $\ell^{p}$ norms.
For all $1<p<2$, there holds
(1.10) $\langle
A^{\ast}\mathbf{1}_{F},\mathbf{1}_{G}\rangle\lesssim(p-1)^{-t}\sup_{\mathcal{S}}\sum_{I\in\mathcal{S}}\langle\mathbf{1}_{F}\rangle_{I,p}\langle\mathbf{1}_{G}\rangle_{I,p}\lvert
I\rvert.$
Second, the restricted weak-type inequalities
(1.11)
$\sup_{0<\lambda<1}\frac{\lambda}{(\operatorname{Log}\lambda)^{t}}\lvert\\{A^{\ast}\mathbf{1}_{F}>\lambda\\}\rvert\lesssim\lvert
F\rvert.$
Third, the weak-type inequality below holds for finitely supported non-
negative functions $f$ on $\mathbb{Z}$
(1.12)
$\sup_{\lambda>0}\lambda\lvert\\{A^{\ast}f>\lambda\\}\rvert\lesssim\lVert
f\rVert_{\ell(\log\ell)^{t}(\log\log\ell)}$
where the last norm is defined in §6.
This subject is an outgrowth of Bourgain’s fundamental work on arithmetic
ergodic theorems [MR937582, MR1019960]. The inequalities proved therein
focused on the diagonal case, principally $\ell^{p}$ to $\ell^{p}$ estimates
for maximal functions. Bourgain’s work has been very influential, with a very
rich and sophisticated theory devoted to the diagonal estimates. We point to
just two papers, [MR2188130], and very recently [2020arXiv200805066T]. The
subject is very rich, and the reader should consult the references in these
papers.
Shortly after Bourgain’s first results, Wierdl [MR995574] studied the primes,
and the simpler form of the Circle method in that case allowed him to prove
diagonal inequalities for all $p>1$, which was a novel result at that time.
The result was revisited by Mirek and Trojan [MR3370012]. The unconditional
version of the endpoint result (1.11) above is the main result of Trojan
[MR4029173]. The approach of this paper differs in some important aspects from
the one in [MR4029173]. (The low/high decomposition is dramatically different,
to point to the single largest difference.)
The subject of sparse bounds originated in harmonic analysis, with a detailed
set of applications in the survey [2018arXiv181200850P], with a wide set of
references therein. The paper [MR3892403] initiated the study of sparse bounds
in the discrete setting. While the result in that paper is of an ‘$\epsilon$
improvement’ nature, for averages it turns out there are very good results
available, as was first established for the discrete sphere in [MR4064582,
MR4149830]. There is a rich theory here, with a range of inequalities for the
Magyar-Stein-Wainger [MR1888798] maximal function in [MR4041278]. Nearly sharp
results for certain polynomial averages are established in
[2019arXiv190705734H, 2020arXiv200211758D], and a surprisingly good estimate
for arbitrary polynomials is in [MR4106792]. The latter result plays an
interesting role in the innovative result of Krause, Mirek and Tao
[2020arXiv200800857K].
The $\ell^{p}$ improving property for the primes was investigated in
[MR4072599], but not at the endpoint. That paper established the first
weighted estimates for the averages for the prime numbers. This paper
establishes the sharp results, under GRH. Mirek [MR3375866] addresses the
diagonal case for Piatetski-Shapiro primes. It would be interesting to obtain
$\ell^{p}$ improving estimates in this case.
Our proof uses the Circle Method to approximate the Fourier multiplier,
following Bourgain [MR937582]. In the unconditional case, we use Page’s
Theorem, which leads to the appearance of exceptional characters in the Circle
method. Under GRH, there are no exceptional characters, and one can identify,
as is well known, a very good approximation to the multiplier.
The Fourier multiplier is decomposed at the end of §3 in such a way to fit an
interpolation argument of Bourgain [MR812567], also see [MR2053347]. We call
it the High/Low Frequency method. To achieve the endpoint results, this
decomposition has to be carefully phrased. There are two additional features
of this decomposition we found necessary to add in. First, certain
difficulties associated with Ramanujan sums are addressed by making a
significant change to a Low Frequency term. The sum defining the Low Frequency
term (3.25) is over all $Q$-smooth square free denominators. Here, the integer
$Q$ can vary widely, as small as $1$ and as large as $N^{1/10}$, say. (The
largest $Q$-smooth square-free denominator will be of the order of $e^{Q}$.)
Second, in the unconditional case, the exceptional characters are grouped into
their own term. As it turns out, they can be viewed as part of the Low
Frequency term. The properties we need for the High/Low method are detailed in
§4. The following sections are applications of those properties.
## 2\. Notation
We write $A\ll B$ if there is a constant $C$ so that $A\leq CB$. In such
instances, the exact nature of the constant is not important.
Let $\mathcal{F}$ denote the Fourier transform on $\mathbb{R}$, defined by
$\mathcal{F}f(\xi)=\int_{\mathbb{R}}f(x)e^{-2\pi ix\xi}\;dx,\qquad f\in
L^{1}(\mathbb{R}).$
The Fourier transform on $\mathbb{Z}$ is denoted by $\widehat{f}$, defined by
$\widehat{f}(\xi)=\sum_{n\in\mathbb{Z}}f(n)e^{-2\pi in\xi},\qquad
f\in\ell^{1}(\mathbb{Z}).$
Let $G$ be a finite Abelian group. The characters of $G$ form a complete
orthogonal system. That is for any complex function $f:G\rightarrow\mathbb{C}$
(2.1) $f=\frac{1}{|G|}\sum_{\psi\in\hat{G}}\langle f,\psi\rangle\psi$
where
(2.2) $\langle f,\psi\rangle=\sum_{g\in G}f(g)\bar{\psi}(g).$
We consider the normalized Gauss sum associated with a character $\chi$ modulo
$q$, evaluated at $a\pmod{q}$:
(2.3) $G(\chi,a)=\frac{1}{\phi(q)}\sum_{r\in A_{q}}\chi(r)e(\frac{ra}{q}).$
Throughout, we denote
$A_{q}=\\{a\in\mathbb{Z}/q\mathbb{Z}\;:\;(a,q)=1\\}$, so that $\lvert
A_{q}\rvert=\phi(q)$, the totient function. We have
(2.4) $\frac{q}{\operatorname{Log}\operatorname{Log}q}\ll\phi(q)\leq q-1.$
It is known that $|G(\chi,a)|\ll q^{-\frac{1}{2}}$, see [MR2061214]*Chapter 3. In
particular, if $\chi$ is the principal character, then we get Ramanujan’s sum
(2.5) $\displaystyle c_{q}(a):=\phi(q)G(\mathbf{1}_{A_{q}},a)=\sum_{r\in
A_{q}}e\bigl{(}\frac{ra}{q}\bigr{)}.$
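Ramanujan's sum is easy to evaluate directly. The short Python sketch below (ours) does so by summing roots of unity; it recovers the identities $c_{q}(0)=\phi(q)$, $c_{q}(1)=\mu(q)$, and $c_{p}(n)=\phi(p)$ when $p\mid n$, which are used repeatedly in §4.

```python
from cmath import exp, pi
from math import gcd

def ramanujan(q, n):
    # c_q(n) = sum over a coprime to q of e(a*n/q); the value is an integer
    s = sum(exp(2j * pi * a * n / q) for a in range(1, q + 1) if gcd(a, q) == 1)
    return round(s.real)
```

For example, $c_{6}(1)=\mu(6)=1$ while $c_{4}(1)=\mu(4)=0$.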
It is known that all of the nontrivial zeroes of an $L$-function lie in
the critical strip. For some choices of $q$, there could be a special zero for
the $L$-function corresponding to the real character modulo $q$, where there
is exactly one real zero of that $L$-function very near to $1$. We call that
zero $1/2<\beta_{q}<1$. It is known that
(2.6) $1-\beta_{q}\ll\frac{1}{(t+3)\log(q)}.$
The exceptional $q$ are rare. There is a constant $c>1$ so that
$q_{n+1}>q_{n}^{c}$, where $q_{n}$ is the $n$th exceptional $q$.
Let $\chi_{q}$ denote the exceptional character. It is a non-trivial quadratic
Dirichlet character modulo $q$, that is $\chi_{q}$ takes values $-1,0,1$, and
takes the value $-1$ at least once. We also know that $\chi_{q}$ is primitive,
namely that its period is $q$. As a matter of convenience, if $q$ does not
have an exceptional character, we will set $\chi_{q}\equiv 0$, and
$\beta_{q}=1$. These properties are important to Lemma 4.14.
Page’s Theorem uses the exceptional characters to give an approximation to the
prime counting function. Counting primes in an arithmetic progression of
modulus $q$, we have
(2.7)
$\displaystyle\psi(N;q,r)-\frac{N}{\phi(q)}+\frac{\chi_{q}(r)}{\phi(q)}\beta^{-1}_{q}N^{\beta_{q}}\ll
Ne^{-c\sqrt{\log N}}.$
## 3\. Approximations of the Kernel
Denote the kernel of $A_{N}$ with the same symbol, so that
$A_{N}(x)=N^{-1}\sum_{n\leq N}\Lambda(n)\delta_{n}(x)$. It follows that
$\widehat{A_{N}}(\xi)=\frac{1}{N}\sum_{n\leq N}\Lambda(n)e^{-2\pi in\xi}.$
The core of the paper is the approximation to $\widehat{A_{N}}(\xi)$, and its
further properties, detailed in the next section.
Set
(3.1) $M_{N}^{\beta}=\frac{1}{N\beta}\sum_{n\leq
N}[n^{\beta}-(n-1)^{\beta}]\delta_{n},\qquad\tfrac{1}{2}<\beta\leq 1.$
We write $M_{N}=M_{N}^{1}$ when $\beta=1$, which is the standard average. For
$\beta<1$, these are not averaging operators. They are the operators
associated to the exceptional characters. The Fourier transforms are
straightforward to estimate.
###### Proposition 3.2.
We have the estimates
(3.3)
$\displaystyle\lvert\widehat{M_{N}}(\xi)\rvert\ll\min\\{1,(N\lvert\xi\rvert)^{-1}\\},$
(3.4)
$\displaystyle\lvert\widehat{M_{N}^{\beta}}(\xi)\rvert\ll(N\lvert\xi\rvert)^{-1},$
(3.5)
$\displaystyle\lvert\widehat{M_{N}^{\beta}}(\xi)-\beta^{-1}N^{\beta-1}\rvert\ll
N^{\beta}\lvert\xi\rvert.$
For integers $q$ and $a\in A_{q}$,
(3.6)
$\displaystyle\widehat{L^{a,q}_{N}}(\xi)=G(\mathbf{1}_{A_{q}},a)\widehat{M_{N}}(\xi)-G(\chi_{q},a)\widehat{M^{\beta_{q}}_{N}}(\xi)$
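The decay estimate (3.3) can be checked numerically for the standard average. In the sketch below (ours), we take the normalization $\widehat{M_{N}}(\xi)=N^{-1}\sum_{n\leq N}e(-n\xi)$; the elementary bound $\lvert\sin\pi\xi\rvert\geq 2\lvert\xi\rvert$ on $\lvert\xi\rvert\leq 1/2$ shows the implied constant may be taken to be $1$ there.

```python
from cmath import exp, pi

def m_hat(N, xi):
    # Fourier transform of the uniform average: (1/N) * sum_{n <= N} e(-n*xi)
    return sum(exp(-2j * pi * n * xi) for n in range(1, N + 1)) / N
```

The modulus is $\lvert\sin(\pi N\xi)/\sin(\pi\xi)\rvert/N$, a geometric series, which is the source of the $\min\{1,(N\lvert\xi\rvert)^{-1}\}$ decay.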
We state the approximation to the kernel at a rational point, with small
denominator.
###### Lemma 3.7.
Assume that $|\xi-\frac{a}{q}|\leq N^{-1}Q$ for some $1\leq a\leq q\leq Q$ and
$\gcd(a,q)=1$. Then
(3.10)
$\displaystyle\widehat{A_{N}}(\xi)=\widehat{L^{a,q}_{N}}(\xi-\tfrac{a}{q})+\bigg{\\{}\begin{array}[]{lr}O(QN^{-\frac{1}{2}+\epsilon}),&\text{
Assuming GRH}\\\ O(Qe^{-c\sqrt{n}}),&\text{ Otherwise}\end{array}$
###### Proof.
We proceed under GRH, and return to the unconditional case at the end of the
argument. The key point is that we have the approximation (1.4) for
$\psi(N;q,r)$. Set $\alpha:=\xi-\frac{a}{q}$. Using Abel summation, we can
write
$\displaystyle N\widehat{M_{N}}(\alpha)$ $\displaystyle=Ne(\alpha
N)-\sqrt{N}e(\alpha\sqrt{N})-2\pi
i\alpha\int_{\sqrt{N}}^{N}t\,e(\alpha t)\;dt+O(\sqrt{N}).$
Turning to the primes, we separate out the sum below according to residue
classes mod $q$. Since $\xi=\frac{a}{q}+\alpha$,
(3.11) $\displaystyle\sum_{\ell\leq N}e(\xi\ell)\Lambda(\ell)$
$\displaystyle=\sum_{\begin{subarray}{c}0\leq r\leq q\\\
\gcd(r,q)=1\end{subarray}}\sum_{\begin{subarray}{c}\ell\leq N\\\ \ell\equiv
r\mod q\end{subarray}}e(\xi\ell)\Lambda(\ell)$ (3.12)
$\displaystyle=\sum_{r\in
A_{q}}e\bigl{(}\tfrac{ra}{q}\bigr{)}\sum_{\begin{subarray}{c}\ell\leq N\\\
\ell\equiv r\mod{q}\end{subarray}}e(\alpha\ell)\Lambda(\ell).$
Examine the inner sum. Using Abel’s summation formula, and the notation $\psi$
for prime counting function, we have
$\displaystyle\sum_{\begin{subarray}{c}\ell\leq N\\\ \ell\equiv r\mod
q\end{subarray}}e(\alpha\ell)\Lambda(\ell)$ $\displaystyle=\psi(N;q,r)e(\alpha
N)-\psi(\sqrt{N};q,r)e(\alpha\sqrt{N})$ $\displaystyle\qquad-2\pi
i\alpha\int_{\sqrt{N}}^{N}\psi(t;q,r)e(\alpha t)dt+O(\sqrt{N}).$
At this point we can use the Generalized Riemann Hypothesis. From (1.4), it
follows that
$\displaystyle\sum_{\begin{subarray}{c}\ell\leq N\\\ \ell\equiv r\mod
q\end{subarray}}e(\alpha\ell)\Lambda(\ell)-\frac{N}{\phi(q)}\widehat{M_{N}}(\alpha)$
$\displaystyle=\Bigl{(}\psi(N;q,r)-\frac{N}{\phi(q)}\Bigr{)}e(\alpha N)$
$\displaystyle\qquad-2\pi
i\alpha\int_{\sqrt{N}}^{N}e(t\alpha)\Bigl{(}\psi(t;q,r)-\frac{t}{\phi(q)}\Bigr{)}\;dt+O(\sqrt{N})$
$\displaystyle\ll
N^{\frac{1}{2}+\epsilon}+\frac{Q}{N}\int_{\sqrt{N}}^{N}t^{\frac{1}{2}+\epsilon}dt+O(N^{\frac{1}{2}+\epsilon})$
$\displaystyle\ll QN^{\frac{1}{2}+\epsilon}.$
The proof without GRH uses Page’s Theorem (2.7) in place of (1.4). We omit the
details.
∎
The previous Lemma approximates $\widehat{A_{N}}(\xi)$ near a rational point.
We extend this approximation to the entire circle. This is done with these
definitions.
(3.14)
$\displaystyle\widehat{V_{s,n}}(\xi)=\sum_{\begin{subarray}{c}a/q\in\mathcal{R}_{s}\end{subarray}}G(\mathbf{1}_{A_{q}},a)\widehat{M_{N}}(\xi-a/q)\eta_{s}(\xi-a/q),$
(3.15)
$\displaystyle\widehat{W_{s,n}}(\xi)=\sum_{a/q\in\mathcal{R}_{s}}G(\chi_{q},a)\widehat{M_{N}^{\beta_{q}}}(\xi-a/q)\eta_{s}(\xi-a/q),$
(3.16) $\displaystyle\mathcal{R}_{s}=\\{a/q\;:\;a\in A_{q},\ 2^{s}\leq
q<2^{s+1}\\},$
and $\mathcal{R}_{0}=\\{0\\}$. Further
$\mathbf{1}_{[-1/4,1/4]}\leq\eta\leq\mathbf{1}_{[-1/2,1/2]}$, and
$\eta_{s}(\xi)=\eta(4^{s}\xi)$. In (3.15), recall that if $q$ is not
exceptional, we have $\chi_{q}=0$. Otherwise, $\chi_{q}$ is the associated
exceptional Dirichlet character. Given an integer $N=2^{n}$, set
(3.17) $\tilde{N}=\begin{cases}e^{c\sqrt{n}/4}&\textup{where $c$ is as in
(2.7)}\\\ N^{1/5}&\textup{under GRH}\end{cases}$
###### Lemma 3.18.
Let $N=2^{n}$. Write $A_{N}=B_{N}+\textup{Err}_{N}$, where
(3.19) $B_{N}=\sum_{s\;:\;2^{s}<(\tilde{N})^{1/400}}V_{s,n}-W_{s,n}.$
Then, we have
$\lVert\textup{Err}_{N}f\rVert_{\ell^{2}}\ll(\tilde{N})^{-1/1000}\lVert
f\rVert_{\ell^{2}}$.
###### Proof.
We estimate the $\ell^{2}$ norm by Plancherel’s Theorem. That is, we bound
$\lVert\widehat{A_{N}}-\widehat{B_{N}}\rVert_{L^{\infty}(\mathbb{T})}\ll(\tilde{N})^{-1/1000}.$
Fix $\xi\in\mathbb{T}$, where we will estimate the $L^{\infty}$ norm above. By
Dirichlet’s Theorem, there are relatively prime integers $a,q$ with $0\leq
a<q\leq(\tilde{N})^{1/5}$ with
$\lvert\xi-a/q\rvert<\frac{1}{q^{2}}.$
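Dirichlet's Theorem is conveniently illustrated with Python's `fractions` module: `limit_denominator` returns the closest rational with bounded denominator, which in the example below also meets the $1/q^{2}$ bound. The numerical values are ours, not from the paper.

```python
from fractions import Fraction

def dirichlet_approx(xi, Q):
    # Closest rational a/q to xi with denominator q <= Q.
    # Dirichlet's theorem guarantees some a/q with q <= Q and
    # |xi - a/q| < 1/(q(Q+1)); the best approximation is at least that good.
    return Fraction(xi).limit_denominator(Q)
```

For $\xi=1/\pi$ and $Q=100$, this returns $7/22$, with error about $1.3\times 10^{-4}$, well inside $1/22^{2}$.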
The argument now splits into cases, depending upon the size of $q$.
Assume that $(\tilde{N})^{1/400}<q\leq(\tilde{N})^{1/5}$. This is a situation
for which the classical Vinogradov inequality [MR0062138]*Chapter 9 was
designed. That estimate, however, is not enough for our purposes. Instead we
use [MR2061214]*Thm 13.6 for the estimate below.
$\displaystyle\lvert\widehat{A_{N}}(\xi)\rvert$
$\displaystyle\ll(q^{-1/2}+(q/N)^{1/2}+N^{-1/5})\log^{3}N\ll(\tilde{N})^{-1/1000}.$
So, in this case we should also see that $\widehat{B_{N}}(\xi)$ satisfies the
same bound. The function $\widehat{B_{N}}$ is a sum over $\widehat{V_{s,n}}$
and $\widehat{W_{s,n}}$. The argument for both is the same. Suppose that
$\widehat{V_{s,n}}(\xi)\neq 0$. The supporting intervals for
$\eta_{s}(\xi-a/q)$ for $a/q\in\mathcal{R}_{s}$ are pairwise disjoint. We must
have $\lvert\xi-a_{0}/q_{0}\rvert<2^{-2s}$ for some
$a_{0}/q_{0}\in\mathcal{R}_{s}$, where $2^{s}<(\tilde{N})^{1/400}$. Then,
$\lvert\xi-a_{0}/q_{0}\rvert\geq\lvert
a_{0}/q_{0}-a/q\rvert-\lvert\xi-a/q\rvert\geq(qq_{0})^{-1}-q^{-2}\geq
q_{0}^{-4}.$
But then by the decay estimate (3.3), we have
$\displaystyle\lvert G(\mathbf{1}_{A_{q}},a_{0})\widehat{M_{N}}(\xi-
a_{0}/q_{0})\rvert\ll(Nq_{0}^{-4})^{-1}\ll N^{-1}(\tilde{N})^{1/100}$
This estimate is summed over $s$ with $2^{s}<(\tilde{N})^{1/400}$ to conclude this case.
Proceed under the assumption that $q\leq N_{0}=(\tilde{N})^{1/400}$. From
Lemma 3.7, the inequality (3.10) holds.
$\displaystyle\widehat{A_{N}}(\xi)$
$\displaystyle=\widehat{L^{a,q}_{N}}(\xi-\tfrac{a}{q})+O(N_{0}^{-1/2})$
The Big $O$ term is as is claimed, so we verify that
$\widehat{B_{N}}(\xi)-\widehat{L^{a,q}_{N}}(\xi-\tfrac{a}{q})\ll
N_{0}^{-1/2}$.
The analysis depends upon how close $\xi$ is to $a/q$. Suppose that
$\lvert\xi-a/q\rvert<\tfrac{1}{4}N_{0}^{-2}$. Then $a/q$ is the unique
rational $b/r$ with $(b,r)=1$ and $0\leq b<r\leq N_{0}$ that meets this
criterion. That means that
$\displaystyle\widehat{B_{N}}(\xi)$
$\displaystyle=\widehat{L^{a,q}_{N}}(\xi-a/q)\eta_{s}(\xi-a/q)$
where in the last term on the right, $2^{s}\leq q<2^{s+1}$. By definition
$\eta_{s}(\xi-a/q)=\eta(4^{s}(\xi-a/q))$, which equals one by assumption on
$\xi$. That completes this case.
Continuing, suppose that there is no $a/q$ with
$\lvert\xi-a/q\rvert<N_{0}^{-2}$. The point is that we have the decay
estimates (3.3) and (3.4) which imply
$\lvert\widehat{M_{N}}(\xi-a/q)\rvert+\lvert\widehat{M_{N}^{\beta}}(\xi-a/q)\rvert\ll[N(\xi-a/q)]^{-1}\ll\frac{N_{0}^{2}}{N}\ll
N^{-3/5}.$
But then, from the definition (3.6), we have
$\lvert\widehat{L^{a,q}_{N}}(\xi-\tfrac{a}{q})\rvert\ll N^{-1/5}.$
And as well, trivially bounding Gauss sums by $1$, we have
$\lvert\widehat{B_{N}}(\xi)\rvert\ll\frac{n^{3/5}}{N}\ll N^{-1/5},$
by just summing over all $a/q\in\mathcal{R}_{s}$, with
$s<(\tilde{N})^{1/400}$. That completes the proof.
∎
The discussion to this point is of a standard nature. We state here a
decomposition of the operator $B_{N}$ defined in (3.19). It encodes our
High/Low/Exceptional decomposition, and requires some care to phrase, in order
to prove our endpoint type results for the prime averages. It depends upon a
supplementary parameter $Q$. This parameter $Q$ will play two roles,
controlling the size and smoothness of denominators. Recall that an integer
$q$ is _$Q$-smooth_ if all of its prime factors are less than $Q$. Let
$\mathbb{S}_{Q}$ be the collection of square-free $Q$-smooth integers.
(3.21)
$\displaystyle\widehat{V_{s,n}^{Q,\textup{lo}}}(\xi)=\sum_{\begin{subarray}{c}a/q\in\mathcal{R}_{s}\\\
q\in\mathbb{S}_{Q}\end{subarray}}G(\mathbf{1}_{A_{q}},a)\widehat{M_{N}}(\xi-a/q)\eta_{s}(\xi-a/q),$
(3.22)
$\displaystyle\widehat{V_{s,n}^{Q,\textup{hi}}}(\xi)=\sum_{\begin{subarray}{c}a/q\in\mathcal{R}_{s}\\\
q\not\in\mathbb{S}_{Q}\end{subarray}}G(\mathbf{1}_{A_{q}},a)\widehat{M_{N}}(\xi-a/q)\eta_{s}(\xi-a/q),$
(3.24)
$\displaystyle\widehat{W_{s,n}}(\xi)=\sum_{a/q\in\mathcal{R}_{s}}G(\chi_{q},a)\widehat{M_{N}^{\beta_{q}}}(\xi-a/q)\eta_{s}(\xi-a/q),$
Define
(3.25)
$\displaystyle{\operatorname{Lo}_{Q,N}}=\sum_{s}V_{s,n}^{Q,\textup{lo}},$
(3.26) $\displaystyle{\operatorname{Hi}_{Q,N}}=\sum_{s\;:\;Q\leq
2^{s}\leq(\tilde{N})^{1/400}}V_{s,n}^{Q,\textup{hi}}-W_{s,n}$ (3.27)
$\displaystyle\operatorname{Ex}_{Q,N}=\sum_{s\;:\;2^{s}\leq Q}W_{s,n}$
Concerning these definitions, in the Low term (3.25), there is no restriction
on $s$, but the sum only depends upon the finite number of square-free
$Q$-smooth numbers in $\mathbb{S}_{Q}$. (Due to (4.8), the non-square free
integers will not contribute to the sum.) The largest integer in
$\mathbb{S}_{Q}$ will be about $e^{Q}$, and the value of $Q$ can be as big as
$\tilde{N}$. In the High term (3.26), there are two parts associated with the
principal and exceptional characters. For the principal characters, we exclude
the square free $Q$-smooth denominators which are both larger than $Q$ and
less than $(\tilde{N})^{1/400}$. These are included in the Low term. We
include all the denominators for the exceptional characters. In the
Exceptional term (3.27), we just impose the restriction on the size of the
denominator to be not more than $Q$. This will be part of the Low term.
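The collection $\mathbb{S}_{Q}$ is easy to enumerate, since its elements are exactly the products of distinct primes below $Q$. A small Python sketch (ours):

```python
from itertools import combinations

def primes_below(Q):
    # Trial-division list of primes p < Q
    return [p for p in range(2, Q) if all(p % d for d in range(2, p))]

def smooth_squarefree(Q):
    # S_Q: products of distinct primes p < Q, i.e. square-free Q-smooth integers
    out = []
    ps = primes_below(Q)
    for r in range(len(ps) + 1):
        for c in combinations(ps, r):
            q = 1
            for p in c:
                q *= p
            out.append(q)
    return sorted(out)
```

For $Q=10$ the set has $2^{4}=16$ elements, the largest being $2\cdot 3\cdot 5\cdot 7=210$; note $\log 210\approx 5.35$ is of order $Q$, consistent with the remark that the largest element of $\mathbb{S}_{Q}$ is about $e^{Q}$.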
The sum of these three terms well approximates $B_{N}$.
###### Proposition 3.28.
Let $1\leq Q\leq\tilde{N}$. We have the estimate
$\lVert\textup{Err}^{\prime}_{N}f\rVert_{\ell^{2}}\lesssim(\tilde{N})^{-1/2}\lVert
f\rVert_{\ell^{2}}$, where
(3.29)
$\textup{Err}^{\prime}_{N}={\operatorname{Lo}_{Q,N}}+{\operatorname{Hi}_{Q,N}}+\operatorname{Ex}_{Q,N}+\textup{Err}_{N}-B_{N}.$
###### Proof.
From (3.19), we see that
$\widehat{\textup{Err}^{\prime}_{N}}(\xi)=\sum_{s\;:\;2^{s}>(\tilde{N})^{1/400}}\widehat{V_{s,n}^{Q,\textup{lo}}}(\xi)$
Recalling the definition of $V_{s,n}^{Q,\textup{lo}}$ from (3.21), it is
straightforward to estimate this last sum in $L^{\infty}(\mathbb{T})$, using
the Gauss sum estimate
$G(\mathbf{1}_{A_{q}},a)\ll\frac{\operatorname{Log}\operatorname{Log}q}{q}$. ∎
## 4\. Properties of the High, Low and Exceptional Terms
The further properties of the High, Low and Exceptional terms are given here,
in that order.
### 4.1. The High Terms
We have the $\ell^{2}$ estimates for the fixed scale, and for the supremum
over large scales, for the High term defined in (3.26). Note that the supremum
is larger by a logarithmic factor.
###### Lemma 4.1.
We have the inequalities
(4.2) $\displaystyle\lVert\operatorname{Hi}_{Q,N}\rVert_{\ell^{2}\to\ell^{2}}$
$\displaystyle\lesssim\frac{\log\log Q}{Q},$ (4.3)
$\displaystyle\lVert\sup_{N>Q^{2}}\lvert\operatorname{Hi}_{Q,N}f\rvert\rVert_{2}$
$\displaystyle\lesssim\frac{\log\log Q\cdot\log Q}{Q}\lVert
f\rVert_{\ell^{2}}.$
We comment that the insertion of the $Q$-smooth property into the definition
of $V_{s,n}^{Q,\textup{hi}}$ in (3.22) is immaterial to this argument.
###### Proof.
Below, we assume that there are no exceptional characters, as a matter of
convenience as the exceptional characters are treated in exactly the same
manner. For the inequality (4.2), we have from the definition of the High term
in (3.26), and (3.22),
$\displaystyle\lVert\operatorname{Hi}_{Q,N}\rVert_{\ell^{2}\to\ell^{2}}$
$\displaystyle=\lVert\widehat{\operatorname{Hi}_{Q,N}}\rVert_{L^{\infty}(\mathbb{T})}$
$\displaystyle=\Bigl{\lVert}\sum_{s\;:\;Q\leq
2^{s}\leq\tilde{N}}\widehat{V_{s,n}^{Q,\textup{hi}}}\Bigr{\rVert}_{L^{\infty}(\mathbb{T})}$
$\displaystyle\leq\sum_{s\;:\;Q\leq
2^{s}\leq\tilde{N}}\lVert\widehat{V_{s,n}^{Q,\textup{hi}}}\rVert_{L^{\infty}(\mathbb{T})}$
$\displaystyle\leq\sum_{s\;:\;Q\leq 2^{s}\leq\tilde{N}}\max_{2^{s}\leq
q<2^{s+1}}\max_{a\in A_{q}}\lvert G(\mathbf{1}_{A_{q}},a)\rvert$
$\displaystyle\ll\sum_{s\;:\;Q\leq 2^{s}\leq\tilde{N}}\max_{2^{s}\leq
q<2^{s+1}}\frac{1}{\phi(q)}$ $\displaystyle\ll\sum_{s\;:\;Q\leq 2^{s}}\log
s\cdot 2^{-s}\ll\frac{\log\log Q}{Q}.$
The first line is Plancherel, and the subsequent lines depend upon
definitions, and the fact that the functions below are disjointly supported.
$\\{\eta_{s}(\cdot-a/q)\;:\;2^{s}\leq q<2^{s+1},\ a\in A_{q}\\}.$
Last of all, we use a well known lower bound $\phi(q)\gg q/\log\log q$.
For the maximal inequality (4.3), we have an additional logarithmic term. This
is a direct consequence of the Bourgain multi-frequency inequality, stated in
Lemma 4.4. We then have
$\displaystyle\lVert\sup_{N>Q^{2}}\lvert\operatorname{Hi}_{Q,N}f\rvert\rVert_{\ell^{2}}$
$\displaystyle\leq\sum_{s\;:\;Q\leq
2^{s}}\bigl{\lVert}\sup_{N>Q^{2}}\lvert{V_{s,n}^{Q,\textup{hi}}}f\rvert\bigl{\rVert}_{\ell^{2}}$
$\displaystyle\ll\sum_{s\;:\;Q\leq 2^{s}}s\cdot\max_{2^{s}\leq
q<2^{s+1}}\frac{1}{\phi(q)}\cdot\lVert f\rVert_{\ell^{2}}\lesssim\frac{\log
Q\cdot\log\log Q}{Q}\lVert f\rVert_{\ell^{2}}.$
∎
###### Lemma 4.4.
Let $\theta_{1},\dotsc,\theta_{J}$ be points in $\mathbb{T}$ with $\min_{j\neq
k}\lvert\theta_{j}-\theta_{k}\rvert>2^{-2s_{0}+2}$. We have the inequality
$\Bigl{\lVert}\sup_{N>4^{s_{0}}}\Bigl{\lvert}\mathcal{F}^{-1}\Bigl{(}\widehat{f}\sum_{j=1}^{J}\widehat{M_{N}}(\cdot-\theta_{j})\eta_{s_{0}}(\cdot-\theta_{j})\Bigr{)}\Bigr{\rvert}\Bigr{\rVert}_{\ell^{2}}\ll\log
J\cdot\lVert f\rVert_{\ell^{2}}.$
This is one of the main results of [MR1019960]. It is stated therein with a
higher power of $\log J$. But it is well known that the inequality holds with
a single power of $\log J$. This is discussed in detail in [MR4072599].
### 4.2. The Low Terms
From the Low terms defined in (3.25), the property is
###### Lemma 4.5.
For functions $f,g$ supported on an interval $I$ of length $N=2^{n}$, we have
(4.6) $N^{-1}\langle\operatorname{Lo}_{Q,N}\ast f,g\rangle\ll\log
Q\cdot\langle f\rangle_{I}\langle g\rangle_{I}.$
The following Möbius Lemma is well known.
###### Lemma 4.7.
For each $q$, we have
(4.8) $\sum_{a\in
A_{q}}G(\mathbf{1}_{A_{q}},a)\mathcal{F}^{-1}(\widehat{M}_{N}\cdot\eta_{s}(\cdot-a/q))(x)=\frac{\mu(q)}{\phi(q)}c_{q}(-x).$
###### Proof.
Compute
$\displaystyle\sum_{a\in
A_{q}}G(\mathbf{1}_{A_{q}},a)\mathcal{F}^{-1}(\widehat{M}_{N}\cdot\eta_{s}(\cdot-a/q))(x)$
$\displaystyle=M_{N}\ast\mathcal{F}^{-1}\eta_{s}(x)\sum_{a\in
A_{q}}G(\mathbf{1}_{A_{q}},a)e(ax/q).$
We focus on the last sum above, namely
(4.9) $\displaystyle S_{q}(x)$ $\displaystyle=\sum_{a\in
A_{q}}G(\mathbf{1}_{q},a)e(xa/q)$ (4.10)
$\displaystyle=\frac{1}{\phi(q)}\sum_{r\in A_{q}}\sum_{a\in A_{q}}e(a(r+x)/q)$
(4.11) $\displaystyle=\frac{1}{\phi(q)}\sum_{r\in
A_{q}}c_{q}(r+x)=\frac{\mu(q)}{\phi(q)}c_{q}(-x).$
The last line uses Cohen’s identity. ∎
The two steps of inserting the $Q$-smooth property in (3.21), and of dropping
the restriction on $s$ in (3.25), were made for this proof.
###### Proof of Lemma 4.5.
By (4.8), the kernel of the operator $\operatorname{Lo}_{Q,N}$ is
(4.12) $\displaystyle\operatorname{Lo}_{Q,N}(x)$
$\displaystyle=M_{N}\ast\mathcal{F}^{-1}\eta_{s}(x)\cdot S(-x),$ (4.13)
$\displaystyle\textup{where}\quad S(x)$
$\displaystyle=\sum_{q\in\mathbb{S}_{Q}}\frac{\mu(q)}{\phi(q)}c_{q}(x).$
We establish a pointwise bound $\lVert S\rVert_{\ell^{\infty}}\ll\log Q$,
which proves the Lemma.
Assume $x\neq 0$. We exploit the multiplicative properties of the summands, as
well as the fact that if a prime $p$ divides $x$, we have
$\frac{\mu(p)}{\phi(p)}c_{p}(x)=\mu(p)$. Let $\mathcal{Q}_{1}$ be the
primes $p<Q$ such that $(p,x)=1$, and set $\mathcal{Q}_{2}$ to be the primes
less than $Q$ which are not in $\mathcal{Q}_{1}$.
The multiplicative aspect of the sums allows us to write
$\frac{\mu(q)}{\phi(q)}c_{q}(-x)=\frac{\mu(q_{1})}{\phi(q_{1})}c_{q_{1}}(-x)\cdot\mu(q_{2})$
where $q=q_{1}q_{2}$, and all prime factors of $q_{j}$ are in
$\mathcal{Q}_{j}$. If $\mathcal{Q}_{j}$ is empty, set $q_{j}=1$. Thus,
$S(x)=S_{1}(x)S_{2}(x)$, where the two terms are associated with
$\mathcal{Q}_{1}$ and $\mathcal{Q}_{2}$ respectively. We have
$\displaystyle S_{1}(x)$ $\displaystyle=\sum_{\textup{ $q$ is
$\mathcal{Q}_{1}$ smooth}}\frac{\mu(q)}{\phi(q)}c_{q}(-x)$
$\displaystyle=\prod_{p\in\mathcal{Q}_{1}}\Bigl{(}1+\frac{\mu(p)c_{p}(-x)}{\phi(p)}\Bigr{)}$
$\displaystyle=\prod_{p\in\mathcal{Q}_{1}}\Bigl{(}1+\frac{1}{p-1}\Bigr{)}=A_{x}.$
This is so, since $\mu(p)c_{p}(x)=1$. It is a straightforward consequence of
the Prime Number Theorem that $A_{x}\ll\log Q$. Here, and below, we say that
$q$ is $\mathcal{Q}$ smooth if all the prime factors of $q$ are in the set of
primes $\mathcal{Q}$.
The second term is as below, where $d=\lvert\mathcal{Q}_{2}\rvert$. Here, in
the definition (3.25), there is no restriction on $s$, hence all the smooth
square free numbers are included. If $\mathcal{Q}_{2}=\emptyset$, then
$S_{2}(x)=1$, otherwise
$\displaystyle S_{2}(x)$ $\displaystyle=\sum_{\textup{ $q$ is
$\mathcal{Q}_{2}$ smooth}}\mu(q)$
$\displaystyle=\sum_{j=1}^{d}\binom{d}{j}(-1)^{j}$
$\displaystyle=-1+\sum_{j=0}^{d}\binom{d}{j}(-1)^{j}=-1.$
If $x=0$, then $S(0)=S_{2}(0)=-1$. That completes the proof.
∎
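The evaluation $S_{2}(x)=-1$ amounts to the binomial identity $\sum_{j=1}^{d}\binom{d}{j}(-1)^{j}=(1-1)^{d}-1=-1$ for $d\geq 1$. A quick numerical check (illustrative only; the helper name is ours):

```python
from itertools import combinations

def S2(primes):
    # sum of mu(q) over squarefree q > 1 composed of the given primes:
    # each product of j distinct primes contributes (-1)^j
    return sum((-1) ** len(c)
               for j in range(1, len(primes) + 1)
               for c in combinations(primes, j))
```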
### 4.3. The Exceptional Term
The Exceptional terms are always of a smaller order than the Low terms.
###### Lemma 4.14.
Let $\chi$ be an exceptional character modulo $q$. For $x\in\mathbb{Z}$,
(4.15) $\Bigl{\lvert}\sum_{a\in
A_{q}}G(\chi,a)e(xa/q)\Bigr{\rvert}=\frac{q}{\phi(q)}$
provided $(x,q)=1$, otherwise the sum is zero.
###### Proof.
It is known that exceptional characters are primitive; see
[MR2061214, Theorem 5.27]. Since $\chi$ is primitive, the sum is zero if $(x,q)>1$. We use the common
notation
$\tau(\chi,x)=\sum_{a\in A_{q}}\chi(a)e(ax/q)$
which is $\phi(q)G(\chi,x)$. The exceptional character is real, so $\overline{\chi}=\chi$, and assuming $(x,q)=1$,
(4.16) $\tau(\chi,x)=\chi(x)\tau(\chi,1).$
This leads immediately to
(4.17) $\displaystyle\sum_{a\in A_{q}}G(\chi,a)e(\frac{ax}{q})$
$\displaystyle=\frac{\tau(\chi,1)}{\phi(q)}\sum_{a\in A_{q}}\chi(a)e(\frac{ax}{q})$ (4.18)
$\displaystyle=\frac{\tau(\chi)\tau(\chi,x)}{\phi(q)}=\frac{\chi(x)\tau(\chi)^{2}}{\phi(q)}.$
It is known that $|\tau(\chi)|^{2}=q$ for primitive characters, and the
exceptional character is quadratic, so $\lvert\chi(x)\tau(\chi)^{2}\rvert=q$. This completes the proof. ∎
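For the Legendre symbol modulo a prime $p$, the basic example of a primitive quadratic character, both the twisting identity (4.16) and $|\tau(\chi)|^{2}=q$ can be verified directly. An illustrative sketch (plain Python; function names are ours):

```python
import cmath

def legendre(a, p):
    # Legendre symbol via Euler's criterion; a real primitive character mod prime p
    if a % p == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def tau(p, x=1):
    # tau(chi, x) = sum over a of chi(a) e(a x / p)
    return sum(legendre(a, p) * cmath.exp(2j * cmath.pi * a * x / p)
               for a in range(1, p))
```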
###### Lemma 4.19.
For a function $f$ supported on interval $I$ of length $N=2^{n}$, we have
(4.20) $\langle\operatorname{Ex}_{Q,N}\ast f\rangle_{\infty}\ll(\log\log
Q)^{2}\cdot\langle f\rangle_{I}.$
The term on the left is defined in (3.27).
###### Proof.
Following the argument from Lemma 4.5, we have
(4.21) $\displaystyle\operatorname{Ex}_{Q,N}(x)$
$\displaystyle=\sum_{q<Q}\sum_{a\in A_{q}}G(\chi_{q},a)e(xa/q)\cdot
M_{N}^{\beta_{v}}\ast\mathcal{F}^{-1}\eta_{s_{q}}(x).$
Above, $2^{s_{q}}\leq q<2^{s_{q}+1}$. The interior sum above is estimated in
(4.15). Using the lower bound on the totient function in (2.4), we have
$\operatorname{Ex}_{Q,N}\ast f(x)\ll\log\log Q\cdot\langle
f\rangle_{I}\sum_{\begin{subarray}{c}q<Q\\\ \textup{$q$
exceptional}\end{subarray}}1.$
We know that the exceptional $q$ grow at a doubly exponential rate: for
$q_{v}$ the $v$th exceptional modulus, we have $q_{v}\gg
C^{C^{v}}$ for some $C>1$. It follows that the sum above is $\ll\log\log
Q$. ∎
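The doubly exponential growth immediately gives the $\log\log Q$ count: if $q_{v}\geq C^{C^{v}}$, then $q_{v}<Q$ forces $v\leq\log_{C}\log_{C}Q$. A numerical sketch of this counting, with $C=2$ (illustrative only; the helper name is ours):

```python
def count_below(Q, C=2.0):
    # number of v >= 1 with C**(C**v) < Q
    v = 1
    while C ** (C ** v) < Q:
        v += 1
    return v - 1
```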
## 5\. Proofs of the Fixed Scale and Sparse Bounds
###### Proof of Theorem 1.5.
Let $N=2^{n}$, and recall that $f=\mathbf{1}_{F}$ and $g=\mathbf{1}_{G}$ where
$F,G\subset I$, an interval of length $N$.
Let us address the case in which we do not assume GRH. We always have the
estimate
(5.1) $N^{-1}\langle A_{N}f,g\rangle\lesssim n\cdot\langle f\rangle_{I}\langle
g\rangle_{I}.$
Hence, if we have $\langle f\rangle_{I}\langle g\rangle_{I}\ll
e^{-c\sqrt{n}/100}$, the inequality with a squared log follows.
We assume that $e^{-c\sqrt{n}}\ll\langle f\rangle_{I}\langle g\rangle_{I}$,
and then prove a better estimate. We turn to the Low/High/Exceptional
decomposition in (3.25)–(3.27), for a choice of integer $Q$ that we will
specify. We have
${A_{N}}={\operatorname{Lo}_{Q,N}}+{\operatorname{Hi}_{Q,N}}-\operatorname{Ex}_{Q,N}+\textup{Err}_{N}+\textup{Err}_{N}^{\prime}$
These terms are defined in (3.25), (3.26), (3.27), (3.19) and (3.29),
respectively.
For the ‘High’ term we have by (4.2),
$\displaystyle
N^{-1}\lvert\langle\operatorname{Hi}_{Q,N}f,g\rangle\rvert\lesssim\frac{\log\log
Q}{Q}\langle f\rangle_{I,2}\langle g\rangle_{I,2}$
The same inequality holds for both $\operatorname{Err}_{Q,N}f$ and
$\operatorname{Err}^{\prime}_{Q,N}f$ by Lemma 3.18 and Proposition 3.28.
Concerning the Low term, by (4.6), we have
$N^{-1}\lvert\langle\operatorname{Lo}_{Q,N}f,g\rangle\rvert\lesssim\log
Q\langle f\rangle_{I}\langle g\rangle_{I}$
The Exceptional term satisfies the same estimate by (4.20).
Combining estimates, choose $Q$ to minimize the right-hand side, namely
(5.2) $N^{-1}\langle A_{N}f,g\rangle\lesssim\frac{\log\log
Q}{Q}\bigl{[}\langle f\rangle_{I}\langle g\rangle_{I}\bigr{]}^{1/2}+\log
Q\cdot\langle f\rangle_{I}\langle g\rangle_{I}.$
This value of $Q$ is
$Q\frac{\log Q}{\log\log Q}\simeq\bigl{[}\langle f\rangle_{I}\langle
g\rangle_{I}\bigr{]}^{-1/2}.$
Since $e^{-c\sqrt{n}}\ll\langle f\rangle_{I}\langle g\rangle_{I}$, this is an
allowed choice of $Q$, and it yields the desired inequality with only a single
power of the logarithm.
Assuming GRH, from (5.1), we see that the inequality to prove is always true
provided $\langle f\rangle_{I}\langle g\rangle_{I}<cN^{-1/4}$. Assuming this
inequality fails, we follow the same line of reasoning above that leads to
(5.2). That value of $Q$ will be at most $N^{1/4}$, so the proof completes,
giving the bound with a single power of the logarithmic term.
∎
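The balancing choice of $Q$ can be checked numerically: at the solution of $Q\log Q/\log\log Q=X^{-1/2}$, where $X$ stands for $\langle f\rangle_{I}\langle g\rangle_{I}$, the two terms on the right of (5.2) agree up to constants. A sketch (bisection solver; names and test values are ours):

```python
import math

def predicted_Q(X):
    # solve Q * log(Q) / log(log(Q)) = X**(-1/2) by geometric bisection
    target = X ** -0.5
    lo, hi = 16.0, 1e60
    for _ in range(300):
        mid = math.sqrt(lo * hi)
        if mid * math.log(mid) / math.log(math.log(mid)) < target:
            lo = mid
        else:
            hi = mid
    return lo
```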
Turning to the sparse bounds, let us begin with the definitions.
###### Definition 5.3.
A collection of intervals $\mathcal{S}$ is called _sparse_ if to each interval
$I\in\mathcal{S}$, there is a set $E_{I}\subset I$ so that $4\lvert
E_{I}\rvert\geq\lvert I\rvert$ and the collection
$\\{E_{I}\;:\;I\in\mathcal{S}\\}$ are pairwise disjoint. All intervals will be
finite sets of consecutive integers in $\mathbb{Z}$.
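A small sketch making the definition concrete: a greedy check, sufficient but not a complete decision procedure, that tries to assign each interval a set $E_{I}$ containing a quarter of its points, with all choices disjoint (plain Python; the helper name is ours):

```python
def is_sparse(intervals):
    # intervals: list of (start, end), inclusive ranges of consecutive integers.
    # Greedily assign each I a set E_I with 4|E_I| >= |I|, all E_I disjoint.
    used = set()
    for a, b in sorted(intervals, key=lambda ab: ab[1] - ab[0]):  # smallest first
        free = [x for x in range(a, b + 1) if x not in used]
        need = (b - a + 1 + 3) // 4  # ceil(|I| / 4)
        if len(free) < need:
            return False
        used.update(free[:need])
    return True
```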
The form of the sparse bound in Theorem 1.7 strongly suggests that one use a
recursive method of proof. (Which is indeed the common method.) To formalize
it, we start with the notion of a _linearized_ maximal function. Namely, to
bound the maximal function $A^{\ast}f$, it suffices to bound
$A_{\tau(x)}f(x)$, where
$\tau\;:\;\mathbb{Z}\to\\{2^{n}\;:\;n\in\mathbb{N}\\}$ is a function, taken to
realize the supremum. The supremum in the definition of $A^{\ast}f$ is always
attained if $f$ is finitely supported.
###### Definition 5.4.
Let $I_{0}$ be an interval, and let $f$ be supported on $3I_{0}$. A map
$\tau\;:\;I_{0}\to\\{1,2,4,\dotsc,\lvert I_{0}\rvert\\}$ is said to be
_admissible_ if
$\sup_{N\geq\tau(x)}M_{N}f(x)\leq 10\langle f\rangle_{3I_{0},1}.$
That is, $\tau$ is admissible if at all locations $x$, the averages of $f$
over scales larger than $\tau(x)$ are controlled by the global average of $f$.
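To make the notion concrete, here is a toy computation of the minimal admissible scale at each point, using one-sided windows on a finite signal (an illustrative simplification, not the paper's $M_{N}$; names are ours):

```python
def window_avg(f, x, N):
    # one-sided average of f over the length-N window starting at x
    return sum(f[x:x + N]) / N

def admissible_tau(f):
    # smallest dyadic tau(x): every average at scale >= tau(x) is
    # at most 10 times the global average
    n = len(f)
    global_avg = sum(f) / n
    scales = [2 ** k for k in range(n.bit_length()) if 2 ** k <= n]
    tau = {}
    for x in range(n):
        for N in scales:
            if all(window_avg(f, x, M) <= 10 * global_avg
                   for M in scales if M >= N):
                tau[x] = N
                break
    return tau
```

A point sitting on a large spike needs a larger admissible scale than a point in a flat region.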
###### Lemma 5.5.
Let $f$ and $\tau$ be as in Definition 5.4. Further assume that $f$ and $g$
are indicator functions, with $g$ supported on $I_{0}$. Then, we have
(5.6) $\lvert I_{0}\rvert^{-1}\langle A_{\tau}f,g\rangle\lesssim\langle
f\rangle_{I_{0},1}\langle g\rangle_{I_{0},1}\cdot(\operatorname{Log}\langle
f\rangle_{3I_{0},1}\langle g\rangle_{I_{0},1})^{t},$
where $t=1$ assuming GRH, and $t=2$ otherwise.
###### Proof.
We restrict $\tau$ to take dyadic values $1,2,4,\dotsc$. Let $\lvert
I_{0}\rvert=N_{0}=2^{n_{0}}$. We always have the inequalities
$\displaystyle\lvert I_{0}\rvert^{-1}\langle A_{\tau}f,g\rangle$
$\displaystyle\lesssim n_{0}\langle f\rangle_{I_{0},1}\langle
g\rangle_{I_{0},1}$ $\displaystyle\lvert
I_{0}\rvert^{-1}\langle\mathbf{1}_{\tau<T}A_{\tau}f,g\rangle$
$\displaystyle\lesssim(\log T)\langle f\rangle_{I_{0},1}\langle
g\rangle_{I_{0},1}.$
The top line follows from admissibility.
We begin by not assuming GRH. Then, the conclusion of the Lemma is immediate
if we have $(\operatorname{Log}\langle f\rangle_{I_{0},1}\langle
g\rangle_{I_{0},1})^{2}\gg{n_{0}}$. It is also immediate if
$\log\tau\ll(\operatorname{Log}\langle f\rangle_{I_{0},1}\langle
g\rangle_{I_{0},1})^{2}$. We proceed assuming
(5.7) $p_{0}^{2}=C(\operatorname{Log}\langle f\rangle_{I_{0},1}\langle
g\rangle_{I_{0},1})^{2}\leq c_{0}\min\\{n_{0},\log\tau\\},$
where $0<c_{0}<1$ is sufficiently small.
We use the definitions in (3.25)—(3.27) for a value of $Q<e^{c\sqrt{n_{0}}}$
that we will specify. We address the High, Low, Exceptional and both Error
terms. First, the Error terms. From the error estimate of Lemma 3.18 and (5.7),
we have
$\displaystyle\lVert\operatorname{Err}_{Q,\tau}f\rVert_{2}^{2}$
$\displaystyle\leq\sum_{n\;:\;p_{0}^{2}\leq n\leq
n_{0}}\lVert\operatorname{Err}_{Q,2^{n}}f\rVert_{\ell^{2}}^{2}$
$\displaystyle\lesssim\lVert f\rVert_{\ell^{2}}^{2}\sum_{n\;:\;p_{0}^{2}\leq
n\leq n_{0}}e^{-c\sqrt{n}}$ $\displaystyle\lesssim\lVert
f\rVert_{\ell^{2}}^{2}\cdot p_{0}^{2}e^{-cp_{0}}\lesssim\lVert
f\rVert_{\ell^{2}}^{2}\cdot\langle f\rangle_{3I_{0},1}\langle
g\rangle_{I_{0},1}.$
This holds provided $C$ in (5.7) is large enough. This is a much smaller estimate
than we need. The second error term in Proposition 3.28 is addressed by the
same square function argument.
For the High term, apply (4.3) to see that
(5.8)
$\lVert\sup_{N>Q^{2}}\lvert\operatorname{Hi}_{Q,N}f\rvert\rVert_{2}\lesssim\frac{\log
Q\cdot\log\log Q}{Q}\lVert f\rVert_{\ell^{2}}.$
For the Low term, the definition of admissibility and (4.6) give
$\lvert
I_{0}\rvert^{-1}\lvert\langle\operatorname{Lo}_{Q,\tau(x)}f(x),g\rangle\rvert\ll(\log
Q)\langle f\rangle_{I}\langle g\rangle_{I}.$
The Exceptional term also satisfies this bound.
We conclude that
$\displaystyle\lvert I_{0}\rvert^{-1}\langle
A_{\tau}f,g\rangle\lesssim\frac{\log Q\cdot\log\log Q}{Q}\langle
f\rangle_{I,2}\langle g\rangle_{I,2}+\log Q\cdot\langle f\rangle_{I}\langle
g\rangle_{I}.$
This is optimized by taking $Q$ so that
$\frac{Q}{\log\log Q}\simeq\bigl{[}\langle f\rangle_{I}\langle
g\rangle_{I}\bigr{]}^{-1/2}.$
And this will be an allowed value of $Q$ since (5.7) holds. Again, the
resulting estimate is better by a power of the logarithmic term than what is
claimed.
Under GRH, the proof is very similar, but a wider range of $Q$’s is allowed.
In particular, only a single power of the logarithm is needed.
∎
## 6\. Proof of Corollary 1.9
The inequality (1.10) follows from the elementary estimate that for $0<x<1$,
we have
$x(\operatorname{Log}x)^{t}\ll\min_{1<p<2}\frac{x}{(p-1)^{t}}.$
We remark that we do not know an efficient way to pass from the restricted
weak type sparse bound we have established to the similar sparse bounds for
functions. The methods to do this for _norm estimates_ are of course very well
studied.
###### Proof of (1.11).
There is a different inequality that is a natural consequence of the sparse
bound, namely
(6.1)
$\sup_{\lambda}\lambda\frac{\lvert\\{A^{\ast}\mathbf{1}_{F}>\lambda\\}\rvert}{(\operatorname{Log}\lvert\\{A^{\ast}\mathbf{1}_{F}>\lambda\\}\rvert\cdot\lvert
F\rvert^{-1})^{t}}\lesssim\lvert F\rvert.$
Indeed, if (1.11) were to fail, with a sufficiently large constant, it would
contradict the inequality above.
Let $\lvert G\rvert>\lvert F\rvert$. We show that there is a subset
$G^{\prime}\subset G$, with $4\lvert G^{\prime}\rvert\geq\lvert G\rvert$ with
(6.2) $\langle A^{\ast}f,\mathbf{1}_{G^{\prime}}\rangle\ll\lvert
F\rvert(\operatorname{Log}\lvert F\rvert/\lvert G\rvert)^{t}$
This implies (6.1) by taking $G=\\{A^{\ast}f>\lambda\\}$, for $0<\lambda<1$.
To construct $G^{\prime}$, take it to be
$G^{\prime}=G\setminus\\{Mf>K\rho\\},\qquad\rho=\lvert F\rvert\cdot\lvert
G\rvert^{-1}$
where $M$ is the ordinary maximal function. By the usual weak $\ell^{1}$
inequality for $M$, for $K$ sufficiently large, we have $4\lvert
G^{\prime}\rvert>\lvert G\rvert$. Let $g=\mathbf{1}_{G^{\prime}}$. Apply the
sparse bound for $A^{\ast}$ to see that
$\langle A^{\ast}f,g\rangle\ll\sum_{I\in\mathcal{S}}\langle
f\rangle_{I}\langle g\rangle_{I}(\operatorname{Log}\langle f\rangle_{I}\langle
g\rangle_{I})^{t}\lvert I\rvert.$
We can assume that for all intervals $I\in\mathcal{S}$ we have $\langle
g\rangle_{I}>0$. That means that $\langle f\rangle_{I}\leq K\lvert
F\rvert/\lvert G\rvert$. Turn to a pigeonhole argument. Divide the collection
$\mathcal{S}$ into subcollections $\bigcup_{j,k\geq 0}\mathcal{S}_{j,k}$ where
$\mathcal{S}_{j,k}=\\{I\in\mathcal{S}\;:\;2^{-j-1}K\rho<\langle
f\rangle_{I}\leq 2^{-j}K\rho,\ 2^{-k-1}<\langle g\rangle_{I}\leq 2^{-k}\\}.$
Then, we have
$\displaystyle\langle A^{\ast}f,g\rangle$ $\displaystyle\ll\sum_{j,k\geq
0}\sum_{I\in\mathcal{S}_{j,k}}\langle f\rangle_{I}\langle
g\rangle_{I}(\operatorname{Log}\langle f\rangle_{I}\langle
g\rangle_{I})^{t}\lvert I\rvert$ $\displaystyle\ll\lvert F\rvert\cdot\lvert
G\rvert^{-1}\sum_{j,k\geq
0}2^{-j-k}(j+k+\operatorname{Log}\rho)^{t}\sum_{I\in\mathcal{S}_{j,k}}\lvert
I\rvert$ $\displaystyle\ll\lvert F\rvert\cdot\lvert G\rvert^{-1}\sum_{j,k\geq
0}2^{-j-k}(j+k+\operatorname{Log}\rho)^{t}\min\\{\lvert G\rvert
2^{j},2^{k}\lvert G\rvert\\}$ $\displaystyle\ll\lvert F\rvert\sum_{j,k\geq
0}2^{-j-k}(j+k+\operatorname{Log}\rho)^{t}2^{(j+k)/2}\ll\lvert F\rvert(\operatorname{Log}\rho)^{t}.$
Here, we have used the standard weak-type inequality for the maximal function,
and the basic property of sparseness, namely
$\sum_{I\in\mathcal{S}}\lvert
I\rvert\lesssim\Bigl{\lvert}\bigcup_{I\in\mathcal{S}}I\Bigr{\rvert}.$
This completes the proof of (6.2). ∎
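The convergence step at the end of the display, summing $2^{-(j+k)/2}(j+k+\operatorname{Log}\rho)^{t}$ over $j,k\geq 0$, is a standard geometric-decay estimate: the sum is $O((\operatorname{Log}\rho)^{t})$. A numerical sketch (illustrative constants; the helper name is ours):

```python
def dyadic_sum(c, t, J=80):
    # sum over 0 <= j, k < J of 2^(-(j+k)/2) * (j + k + c)^t
    return sum(2.0 ** (-(j + k) / 2) * (j + k + c) ** t
               for j in range(J) for k in range(J))
```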
For the proof of (1.12), we need to recall the definition of the Orlicz norm.
Given $f$ finitely supported on $\mathbb{Z}$, let
$f^{\ast}\;:\;[0,\infty)\to[0,\infty)$ be the decreasing rearrangement of $f$.
That is,
$f^{\ast}(s)=\inf\\{\lambda>0\;:\;\lvert\\{x\in\mathbb{Z}\;:\;\lvert
f(x)\rvert>\lambda\\}\rvert\leq s\\}.$
For a slowly varying function $\varphi\;:\;[0,\infty)\to[0,\infty)$, set
(6.3) $\displaystyle\lVert f\rVert_{\ell\varphi(\ell)}$
$\displaystyle=\int_{0}^{\infty}f^{\ast}(\lambda)\varphi(\lambda)\;d\lambda$
(6.4)
$\displaystyle\simeq\sum_{j\in\mathbb{Z}}2^{j}\varphi(2^{j})f^{\ast}(2^{j}).$
For $\varphi(x)=1$, this is comparable to the usual $\ell^{1}$ norm. For
$f=\mathbf{1}_{F}$, note that
$\lVert f\rVert_{\ell\varphi(\ell)}=\int_{0}^{\lvert
F\rvert}\varphi(\lambda)\;d\lambda\simeq\lvert F\rvert\varphi(\lvert F\rvert).$
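A sketch of the dyadic sum (6.4) in code, using the decreasing rearrangement of a finitely supported sequence (a discretized illustration only; names are ours):

```python
def decreasing_rearrangement(f):
    vals = sorted((abs(v) for v in f), reverse=True)
    def fstar(s):
        i = int(s)
        return vals[i] if i < len(vals) else 0.0
    return fstar

def orlicz_norm(f, phi, jmin=-30, jmax=60):
    # the dyadic sum in (6.4): sum over j of 2^j * phi(2^j) * f*(2^j)
    fstar = decreasing_rearrangement(f)
    return sum(2.0 ** j * phi(2.0 ** j) * fstar(2.0 ** j)
               for j in range(jmin, jmax))
```

With $\varphi\equiv 1$ and $f=\mathbf{1}_{F}$ this recovers $\lvert F\rvert$ up to constants, matching the remark above.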
We are interested in
$\varphi(x)=(\operatorname{Log}x)^{t}\cdot\operatorname{Log}\operatorname{Log}x$,
for $t=1,2$. The proof of the Orlicz norm estimate (1.12) is below.
###### Proof of (1.12).
This argument goes back to at least [MR241885]. Assume that the weak-type
estimate for indicators (1.11) holds. Let
$f\in\ell(\log\ell)^{t}(\log\log\ell)$ be a non-negative function of norm one.
Set
$\displaystyle B_{j}=\\{x\;:\;2^{j}\leq f(x)<2^{j+1}\\},$
and set $b_{j}=f^{\ast}(2^{j})$. We have
$\sum_{j\leq 0}2^{j}\mathbf{1}_{B_{j}}\leq f\leq 2\sum_{j\leq
0}2^{j}\mathbf{1}_{B_{j}}.$
And, by logarithmic subadditivity for the weak-type norm, and (1.11),
$\displaystyle\lVert A^{\ast}f\rVert_{1,\infty}$ $\displaystyle\ll\sum_{j\leq
0}\log(1-j)\cdot 2^{j}\lVert A^{\ast}\mathbf{1}_{B_{j}}\rVert_{1,\infty}$
$\displaystyle\ll\sum_{j\leq 0}\log(1-j)\cdot 2^{j}\lvert
B_{j}\rvert(\log\lvert B_{j}\rvert)^{t}$ $\displaystyle\ll\sum_{j\leq
0}\log(1-j)\cdot\lvert j\rvert^{t}2^{j}\lvert B_{j}\rvert\ll\lVert
f\rVert_{\ell(\log\ell)^{t}(\log\log\ell)}=1.$
Above, we appealed to $\lvert B_{j}\rvert\leq 2^{-j}$, for otherwise the norm
of $f$ is more than one.
∎
## References
# Learning-‘N-Flying: A Learning-based, Decentralized Mission Aware UAS
Collision Avoidance Scheme
Alëna Rodionova (ORCID 0000-0001-8455-9917), University of Pennsylvania,
Department of Electrical and Systems Engineering, Philadelphia, PA 19104, USA;
Yash Vardhan Pant, University of California, Berkeley, Department of Electrical
Engineering and Computer Sciences, Berkeley, CA, USA; Connor Kurtz, Oregon
State University, School of Electrical Engineering and Computer Science,
Corvallis, OR, USA; Kuk Jang, University of Pennsylvania, Department of
Electrical and Systems Engineering, Philadelphia, PA 19104, USA; Houssam Abbas,
Oregon State University, School of Electrical Engineering and Computer Science,
Corvallis, OR, USA; and Rahul Mangharam, University of Pennsylvania, Department
of Electrical and Systems Engineering, Philadelphia, PA 19104, USA
(2021)
###### Abstract.
Urban Air Mobility, the scenario where hundreds of manned aircraft and
Unmanned Aircraft Systems (UAS) carry
out a wide variety of missions (e.g. moving humans and goods within the city),
is gaining acceptance as a transportation solution of the future. One of the
key requirements for this to happen is safely managing the air traffic in
these urban airspaces. Due to the expected density of the airspace, this
requires fast autonomous solutions that can be deployed online. We propose
Learning-‘N-Flying (LNF), a multi-UAS Collision Avoidance (CA) framework. It is
decentralized, works on-the-fly and allows autonomous UAS managed by different
operators to safely carry out complex missions, represented using Signal
Temporal Logic, in a shared airspace. We initially formulate the problem of
predictive collision avoidance for two UAS as a mixed-integer linear program,
and show that it is intractable to solve online. Instead, we first develop
Learning-to-Fly (L2F) by combining: a) learning-based decision-making, and b)
decentralized convex optimization-based control. LNF extends L2F to cases
where there are more than two UAS on a collision path. Through extensive
simulations, we show that our method can run online (computation time in the
order of milliseconds), and under certain assumptions has failure rates of
less than $1\%$ in the worst-case, improving to near $0\%$ in more relaxed
operations. We show the applicability of our scheme to a wide variety of
settings through multiple case studies.
Collision avoidance, unmanned aircraft systems, temporal logic, robustness,
neural network, Model Predictive Control
Copyright: ACM. Journal year: 2021. DOI: nn.nnnn/nnnnnnn. Journal: JACM.
Journal volume: Unassigned. Journal number: Unassigned. Publication month: 1.
CCS: Computer systems organization, Robotic control; Computing methodologies,
Neural networks.
## 1\. Introduction
With the increasing footprint and density of metropolitan cities, there is a
need for new transportation solutions that can move goods and people around
rapidly and without further stressing road networks. Urban Air Mobility (UAM)
(Hackenberg, 2018) is one such concept quickly gaining acceptance (NASA, 2018)
as a means to improve connectivity in metropolitan cities. In such a scenario,
hundreds of autonomous manned aircraft and UAS will carry goods and people
around the city, while also performing a host of other missions. A critical
step towards
making this a reality is safe traffic management of all the UAS in the
airspace. Given the high expected UAS traffic density, as well as the short
timescales of the flights, UAS Traffic Management (UTM) needs to be autonomous,
guarantee a high degree of safety, and degrade gracefully in cases of overload. The
first requirement for automated UTM is that its algorithms be able to
accommodate a wide variety of missions, since the different operators have
different goals and constraints. The second requirement is that as the number
of UAS in the airspace increases, the runtimes of the UTM algorithms do not
blow up, at least up to a point. Thirdly, it must provide guaranteed
collision avoidance in most use cases, and degrade gracefully otherwise; that
is, the determination of whether it will be able to deconflict two UAS or not
must happen sufficiently fast to alert a higher-level algorithm or a human
operator, say, who can impose additional constraints.
In this paper we introduce and demonstrate a new algorithm, LNF, for multi-UAS
planning in urban airspace. LNF starts from multi-UAS missions expressed in
Signal Temporal Logic (STL), a formal behavioral specification language that
can express a wide variety of missions and supports automated reasoning. In
general, a mission will couple various UAS together through mutual separation
constraints, and this coupling can cause an exponential blowup in computation.
To avoid this, LNF lets every UAS plan independently of others, while ignoring
the mutual separation constraints. This independent planning step is performed
using Fly-by-Logic, our previous UAS motion planner. An online collision
avoidance procedure then handles potential collisions on an as-needed basis,
i.e. when two UAS that are within communication range detect a future
collision between their pre-planned trajectories. Even online optimal
collision avoidance between two UAS requires solving a Mixed-Integer Linear
Program (MILP). LNF avoids this by using a recurrent neural network which maps
the current configuration of the two UAS to a sequence of discrete decisions.
The network’s inference step runs much faster (and its runtime is much more
stable) than running a MILP solver. The network is trained offline on
solutions generated by solving the MILP. To generalize from two UAS collision
avoidance to multi-UAS, we introduce another component to LNF: Fly-by-Logic
generates trajectories that satisfy their STL missions, and a robustness tube
around each trajectory. As long as the UAS is within its tube, it satisfies
its mission. To handle a collision between 3 or more UAS, LNF shrinks the
robustness tube for each trajectory in such a way that sequential 2-UAS
collision avoidance succeeds in deconflicting all the UAS.
We show that LNF is capable of successfully resolving collisions between UAS
even within high-density airspaces and the short timescales, which are exactly
the scenarios expected in UAM. LNF creates opportunities for safer UAS
operations and therefore safer UAM.
##### Contributions of this work
In this paper, we present an online, decentralized and mission-aware UAS
Collision Avoidance (CA) scheme that combines machine learning-based decision-making with Model
Predictive Control (MPC). The main contributions of our approach are:
1. (1)
It systematically combines machine learning-based decision-making111With the
offline training and fast online application of the learned policy, see
Sections 4.2 and 6.2. with an MPC-based CA controller. This allows us to
decouple the usually hard-to-interpret machine learning component and the
safety-critical low-level controller, and also repair potentially unsafe
decisions by the ML components. We also present a sufficient condition for our
scheme to successfully perform CA.
2. (2)
LNF collision avoidance avoids the live-lock condition where pair-wise CA
continually results in the creation of collisions between other pairs of UAS.
3. (3)
Our formulation is mission-aware, i.e. CA does not result in violation of the
UAS mission. As shown in (Rodionova et al., 2020), this also enables faster
STL-based mission planning for a certain class of STL specifications.
4. (4)
Our approach is computationally lightweight with a computation time of the
order of $10ms$ and can be used online.
5. (5)
Through extensive simulations, we show that the worst-case failure rate of our
method is less than $1\%$, which is a significant improvement over other
approaches including (Rodionova et al., 2020).
Figure 1. Two UAS communicating their planned trajectories, and cooperatively
maneuvering within their robustness tubes to avoid a potential collision in
the future.
##### Related Work.
_UTM and Automatic Collision Avoidance approaches._ Collision avoidance (CA) is a
critical component of UAS Traffic Management (UTM). The NASA/FAA Concept of
Operations (Administration, 2018) and (Li et al., 2018) present airspace
allocation schemes where UAS are allocated airspace in the form of non-
overlapping space-time polygons. Our approach is less restrictive and allows
overlaps in the polygons, but performs online collision avoidance on an as-
needed basis. A tree search-based planning approach for UAS CA is explored in
(Chakrabarty et al., 2019). The next-gen CA system for manned aircraft,
ACAS-X (Kochenderfer et al., 2012) is a learning-based approach that provides
vertical separation recommendations. ACAS-Xu (Manfredi and Jestin, 2016)
relies on a look-up table to provide high-level recommendations to two UAS. It
restricts desired maneuvers for CA to the vertical axis for cooperative
traffic, and the horizontal axis for uncooperative traffic. While we consider
only the cooperative case in this work, our method does not restrict CA
maneuvers to any single axis of motion. Finally, in its current form, ACAS-Xu
also does not take into account any higher-level mission objectives, unlike
our approach. This excludes its application to low-level flights in urban
settings. The work in (Fabra et al., 2019) presents a decentralized, mission
aware CA scheme, but requires time of the order of seconds for the UAS to
communicate and safely plan around each other, whereas our approach has
computation times in milliseconds.
_Multi-agent planning with temporal logic objectives._ Multi-agent planning for
systems with temporal logic objectives has been well studied as a way of safe
mission planning. Approaches for this usually rely on grid-based
discretization of the workspace (Saha et al., 2014; DeCastro et al., 2017), or
a simplified abstraction of the dynamics of the agents (Desai et al., 2017;
Aksaray et al., 2016). (Ma et al., 2016) combines a discrete planner with a
continuous trajectory generator. Some methods (Kloetzer and Belta, 2008;
Fainekos et al., 2005; Kloetzer and Belta, 2006) work for subsets of Linear
Temporal Logic (LTL) that do not allow for explicit timing bounds on the
mission requirements. The work in (Saha et al., 2014) allows some explicit
timing constraints. However, it restricts motion to a discrete set of motion
primitives. The predictive control method of (Raman et al., 2014a) uses the
full STL grammar; it handles a continuous workspace and linear dynamics of
robots, however its reliance on mixed-integer encoding (similar to (Saha and
Julius, 2016; Karaman and Frazzoli, 2011)) limits its practical use as seen in
(Pant et al., 2017). The approach of (Pant et al., 2018) instead relies on
optimizing a smooth (non-convex) function for generating trajectories for
fleets of multi-rotor UAS with STL specifications. While these methods can
ensure safe operation of multi-agent systems, they are all centralized
approaches, i.e. they require joint planning for all agents and do not scale well
with the number of agents. In our framework, we use the planning method of
(Pant et al., 2018), but we let each UAS plan independently of each other in
order for the planning to scale. We ensure the safe operation of all UAS in
the airspace through the use of our predictive collision avoidance scheme.
##### Organization of the paper
The rest of the paper is organized as follows. Section 2 covers preliminaries
on Signal Temporal Logic and trajectory planning. In Section 3 we formalize
the two-UAS CA problems, state our main assumptions, and develop a baseline
centralized solution via a MILP formulation. Section 4 presents a
decentralized learning-based collision avoidance framework for UAS pairs. In
Section 5 we extend this approach to support cases when CA has to be performed
for three or more UAS. We evaluate our methods through extensive simulations,
including three case studies in Section 6. Section 7 concludes the paper.
## 2\. Preliminaries: Signal Temporal Logic-based UAS planning
##### Notation.
For a vector $x=(x_{1},\ldots,x_{m})\in\mathbb{R}^{m}$,
$\|x\|_{\infty}=\max_{i}|x_{i}|$.
Figure 2. Step-wise explanation and visualization of the framework. Each UAS
generates its own trajectories to satisfy a mission expressed as a Signal
Temporal Logic (STL) specification, e.g. regions in green are regions of
interest for the UAS to visit, and the no-fly zone corresponds to
infrastructure that all the UAS must avoid. When executing these trajectories,
UAS communicate their trajectories to others in range to detect any collisions
that may happen in the near future. If a collision is detected, the two UAS
execute a conflict resolution scheme that generates a set of additional
constraints that the UAS must satisfy to avoid the collision. A co-operative
CA-MPC controls the UAS to best satisfy these constraints while ensuring each
UAS’s STL specification is still satisfied. This results in new trajectories
(in solid pink and blue) that will avoid the conflict and still stay within
the predefined robustness tubes.
### 2.1. Introduction to Signal Temporal Logic and its Robustness
Let $\mathbb{T}=\\{0,dt,2dt,3dt\ldots\\}$ be a discrete time domain with
sampling period $dt$ and let $\mathcal{X}\subset\mathbb{R}^{m}$ be the state
space. A signal is a function $\mathbf{x}:E\rightarrow\mathcal{X}$ where
$E\subseteq\mathbb{T}$; the $k^{\text{th}}$ element of $\mathbf{x}$ is written
$x_{k}$, $k\geq 0$. Let $\mathcal{X}^{\mathbb{T}}$ be the set of all signals.
Signal specifications are expressed in Signal Temporal Logic (STL) (Maler and
Nickovic, 2004), of which we give an informal description here. An STL formula
$\varphi$ is created using the following grammar:
$\varphi:=\top~{}|~{}p~{}|~{}\neg\varphi~{}|~{}\varphi_{1}\vee\varphi_{2}~{}|~{}\Diamond_{[a,b]}\varphi~{}|~{}\square_{[a,b]}\varphi~{}|~{}\varphi_{1}\mathcal{U}_{[a,b]}\varphi_{2}$
Here, $\top$ is logical True, $p$ is an atomic proposition, i.e. a basic
statement about the state of the system, $\neg,\vee$ are the usual Boolean
negation and disjunction, $\Diamond$ is Eventually, $\square$ is Always and
$\mathcal{U}$ is Until. It is possible to define the $\Diamond$ and $\square$
in terms of Until $\mathcal{U}$, but we make them base operations because we
will work extensively with them.
An STL specification $\varphi$ is interpreted over a signal, e.g. over the
trajectories of quad-rotors, and evaluates to either True or False. For
example, operator Eventually ($\Diamond$) augmented with a time interval
$\Diamond_{[a,b]}\varphi$ states that $\varphi$ is True at some point within
$[a,b]$ time units. Operator Always ($\square$) would correspond to $\varphi$
being True everywhere within time $[a,b]$. The following example demonstrates
how STL captures operational requirements for two UAS:
###### Example 1.
(A two UAS timed reach-avoid problem) Two quad-rotor UAS are tasked with a
mission with spatial and temporal requirements in the workspace schematically
shown in Figure 2:
1. (1)
Each of the two UAS has to reach its corresponding Goal set (shown in green)
within a time of $6$ seconds after starting. UAS $j$ (where $j\in\\{1,2\\}$),
with position denoted by $\mathbf{p}_{j}$, has to satisfy:
$\varphi_{\text{reach},j}=\Diamond_{[0,6]}(\mathbf{p}_{j}\in\text{Goal}_{j})$.
The Eventually operator over the time interval $[0,6]$ requires UAS $j$ to be
inside the set $\text{Goal}_{j}$ at some point within $6$ seconds.
2. (2)
The two UAS also have an Unsafe (in red) set to avoid, e.g. a no-fly zone. For
each UAS $j$, this is encoded with Always and Negation operators:
$\varphi_{\text{avoid},j}=\square_{[0,6]}\neg(\mathbf{p}_{j}\in\text{Unsafe})$
3. (3)
Finally, the two UAS should be separated by at least $\delta$ meters along
every axis of motion:
$\varphi_{\text{separation}}=\square_{[0,6]}||\mathbf{p}_{1}-\mathbf{p}_{2}||_{\infty}\geq\delta$
The 2-UAS timed reach-avoid specification is thus:
(1) $\varphi_{\text{reach-
avoid}}=\bigwedge_{j=1}^{2}(\varphi_{\text{reach},j}\wedge\varphi_{\text{avoid},j})\wedge\varphi_{\text{separation}}$
To satisfy $\varphi$, a planning method generates trajectories $\mathbf{p}_{1}$
and $\mathbf{p}_{2}$ of a duration at least $hrz(\varphi)=6$s, where
$hrz(\varphi)$ is the time horizon of $\varphi$. If the trajectories satisfy
the specification, i.e. $(\mathbf{p}_{1},\,\mathbf{p}_{2})\models\varphi$,
then the specification $\varphi$ evaluates to True, otherwise it is False. In
general, an upper bound for the time horizon can be computed as shown in
(Raman et al., 2014a). In this work, we consider specifications such that the
horizon is bounded. More details on STL can be found in (Maler and Nickovic,
2004) or (Raman et al., 2014a). In this paper, we consider discrete-time STL
semantics which are defined over discrete-time trajectories.
The Robustness value (Fainekos and Pappas, 2009) $\rho_{\varphi}(\mathbf{x})$
of an STL formula $\varphi$ with respect to the signal $\mathbf{x}$ is a real-
valued function of $\mathbf{x}$ that has the important following property:
###### Theorem 2.1.
(Fainekos and Pappas, 2009) (i) For any
$\mathbf{x}\in\mathcal{X}^{\mathbb{T}}$ and STL formula $\varphi$, if
$\rho_{\varphi}(\mathbf{x})<0$ then $\mathbf{x}$ violates $\varphi$, and if
$\rho_{\varphi}(\mathbf{x})>0$ then $\mathbf{x}$ satisfies $\varphi$. The case
$\rho_{\varphi}(\mathbf{x})=0$ is inconclusive.
(ii) Given a discrete-time trajectory $\mathbf{x}$ such that
$\mathbf{x}\models\varphi$ with robustness value
$\rho_{\varphi}(\mathbf{x})=r>0$, then any trajectory $\mathbf{x}^{\prime}$
that is within $r$ of $\mathbf{x}$ at each time step, i.e.
$||x_{k}-x^{\prime}_{k}||_{\infty}<r,\,\forall k\in\mathbb{H}$, is such that
$\mathbf{x}^{\prime}\models\varphi$ (also satisfies $\varphi$).
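For illustration, the robustness of the separation sub-formula of example 1 can be computed directly on discrete-time trajectories: the predicate margin at each step is $||p_{1,k}-p_{2,k}||_{\infty}-\delta$, and the "always" operator takes the minimum over the horizon. The following sketch (helper name and trajectory values are ours, not from the paper's implementation) makes this concrete:

```python
import numpy as np

def robustness_separation(p1, p2, delta):
    """Robustness of phi = G_[0,H] ||p1 - p2||_inf >= delta over
    discrete trajectories p1, p2 of shape (H+1, 3): the predicate
    margin per step, minimized over the horizon ("always")."""
    margins = np.max(np.abs(p1 - p2), axis=1) - delta
    return float(np.min(margins))

# Two parallel straight-line trajectories, 1 m apart along y:
H = 6
p1 = np.stack([np.linspace(0, 6, H + 1),
               np.zeros(H + 1), np.zeros(H + 1)], axis=1)
p2 = p1 + np.array([0.0, 1.0, 0.0])
r = robustness_separation(p1, p2, delta=0.5)  # 0.5 > 0: satisfied
```

By Theorem 2.1(ii), any trajectory pair within $r=0.5$ (inf-norm, per step) of these still satisfies the separation formula.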
### 2.2. UAS planning with STL specifications
Fly-by-logic (Pant et al., 2017, 2018) generates trajectories by centrally
planning for fleets of UAS with STL specifications, e.g. the specification
$\varphi_{\textit{reach-avoid}}$ of example 1. It maximizes the robustness
function by picking waypoints for all UAS through a centralized, non-convex
optimization.
While successful in planning for multiple multi-rotor UAS, performance
degrades as the number of UAS increases, in particular because for $N$ UAS,
$N\choose 2$ terms are needed for specifying the pair-wise separation
constraint $\varphi_{\textit{separation}}$. For these reasons, the method
cannot be used for real-time planning. In this work, we use the underlying
optimization of (Pant et al., 2018) to generate trajectories, but ignore the
mutual separation requirement, allowing each UAS to independently (and in
parallel) solve for its own STL specification. For the timed reach-avoid
specification (1) in example 1, this is equivalent to each UAS generating its
own trajectory to satisfy
$\varphi_{j}=\varphi_{\textit{reach},j}\wedge\varphi_{\textit{avoid},j}$,
independently of the other UAS. Ignoring the collision avoidance requirement
$\varphi_{\textit{separation}}$ in the planning stage allows for the
specification of (1) to be decoupled across UAS. Therefore, this approach
requires online UAS collision avoidance. This is covered in the following
section.
## 3\. Problem formulation: Mission-Aware UAS Collision Avoidance
We consider the case where two UAS flying pre-planned trajectories are
required to perform collision avoidance if their trajectories are on a path
to conflict.
###### Definition 1.
2-UAS Conflict: Two UAS, with discrete-time positions $\mathbf{p}_{1}$ and
$\mathbf{p}_{2}$ are said to be in conflict at time step $k$ if
$||p_{1,k}-p_{2,k}||_{\infty}<\delta$, where $\delta$ is a predefined minimum
separation distance (a more general polyhedral constraint of the form
$M(p_{1,k}-p_{2,k})<q$ can be used for defining the conflict). Here, $p_{j,k}$
represents the position of UAS $j$ at time step $k$.
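Definition 1 can be checked directly on the shared $H$-step look-aheads. A minimal sketch (function name and sample positions are ours, for illustration only):

```python
import numpy as np

def in_conflict(p1, p2, delta):
    """Return the time steps k of the H-step look-ahead at which
    the two UAS violate the minimum inf-norm separation delta.
    p1, p2: arrays of shape (H+1, 3) of planned positions."""
    dists = np.max(np.abs(p1 - p2), axis=1)  # ||p1_k - p2_k||_inf
    return np.flatnonzero(dists < delta)

p1 = np.array([[0.0, 0.0, 0], [1, 0.00, 0], [2, 0.0, 0]])
p2 = np.array([[0.0, 2.0, 0], [1, 0.05, 0], [2, 2.0, 0]])
conflicts = in_conflict(p1, p2, delta=0.1)  # conflict only at k = 1
```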
While flying their independently planned trajectories, two UAS that are within
communication range share an $H$-step look-ahead of their trajectories and
check for a potential conflict in those $H$ steps. We assume the UAS can
communicate with each other in a manner that allows for enough advance notice
for avoiding collisions, e.g. using 5G technology. While the details of this
are beyond the scope of this paper, we formalize this assumption as follows:
###### Assumption 1.
The two UAS in conflict have a communication range greater than their
$n$-step forward reachable set (Dahleh et al., 2004), $n\geq 1$ (this set can
be computed offline since we know the dynamics and actuation limits of each
UAS). That is, the two UAS will not collide within at least the next $n$ time
steps, giving them enough time to communicate with each other to avoid a
collision. Here $n$ potentially depends on the communication technology being
used.
###### Definition 2.
Robustness tube: Given an STL formula $\varphi$ and a discrete-time position
trajectory $\mathbf{p}_{j}$ that satisfies $\varphi$ (with associated
robustness $\rho$), the (discrete) robustness tube around $\mathbf{p}_{j}$ is
given by $\mathbf{P}_{j}=\mathbf{p}_{j}\oplus\mathbb{B}_{\rho}$, where
$\mathbb{B}_{\rho}$ is a 3D cube with sides $2\rho$ and $\oplus$ is the
Minkowski sum operation ($A\oplus B:=\\{a+b\;|\;a\in A,b\in B\\}$). We say the
radius of this tube is $\rho$ (in the inf-norm sense).
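In the inf-norm sense, membership in the robustness tube reduces to a per-step check $||p^{\prime}_{k}-p_{k}||_{\infty}\leq\rho$. A minimal sketch (our own helper name and sample values):

```python
import numpy as np

def in_robustness_tube(p_new, p_planned, rho):
    """Check p'_k in P_k = p_k (+) B_rho at every step, i.e.
    ||p'_k - p_k||_inf <= rho for all k (tube of radius rho
    around the planned trajectory, inf-norm sense)."""
    return bool(np.all(np.max(np.abs(p_new - p_planned), axis=1) <= rho))

planned = np.zeros((4, 3))
perturbed = planned + 0.04   # uniform 4 cm offset on every axis
ok = in_robustness_tube(perturbed, planned, rho=0.055)  # True
```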
The robustness tube defines the space around the UAS trajectory such that, as
long as the UAS stays within its robustness tube, it will satisfy the STL
specification for which the trajectory was generated. See examples of
robustness tubes in Figures 1 and 2.
The following assumption relates the minimum allowable radius $\rho$ of the
robustness tube to the minimum allowable separation $\delta$ between two UAS.
###### Assumption 2.
For each of the two UAS in conflict, the radius of the robustness tube is at
least $\delta/2$, i.e. $\min(\rho_{1},\rho_{2})\geq\delta/2$, where
$\rho_{1}$ and $\rho_{2}$ are the robustness values of UAS 1 and 2, respectively.
This assumption covers the case where the robustness tubes are wide enough to
have the two UAS placed along opposing edges of their respective
tubes and still achieve the minimum separation between them. We assume that
all the trajectories generated by the independent planning have sufficient
robustness to satisfy this assumption (see Sec. 2.2). Now we define the
problem of collision avoidance with satisfaction of STL specifications:
###### Problem 1.
Given two planned $H$-step UAS trajectories $\mathbf{p}_{1}$ and
$\mathbf{p}_{2}$ that have a conflict, the collision avoidance problem is to
find a new sequence of positions $\mathbf{p}_{1}^{\prime}$ and
$\mathbf{p}_{2}^{\prime}$ that meet the following conditions:
(2a) $||p_{1,k}^{\prime}-p_{2,k}^{\prime}||\geq\delta,\ \forall k\in\\{0,\dotsc,H\\}$
(2b) $p_{j,k}^{\prime}\in P_{j,k},\ \forall k\in\\{0,\dotsc,H\\},\,\forall j\in\\{1,2\\}.$
That is, we need a new trajectory for each UAS such that they achieve minimum
separation distance and also stay within the robustness tube around their
originally planned trajectories.
##### Convex constraints for collision avoidance
Let $z_{k}=p_{1,k}-p_{2,k}$ be the difference in UAS positions at time step
$k$. For two UAS not to be in conflict, we need
(3) $z_{k}\not\in\mathbb{B}_{\delta/2},\ \forall k\in\\{0,\ldots,H\\},$
This is a non-convex constraint. For a computationally tractable controller
formulation which solves Problem 1, we define convex constraints that when
satisfied imply Equation (3). The $3$D cube $\mathbb{B}_{\delta/2}$ can be
defined by a set of linear inequality constraints of the form
$\widetilde{M}^{i}z\leq\widetilde{q}^{i},\,\forall i\in\\{1,\ldots,6\\}$.
Equation (3) is satisfied when $\exists i\ |\ \widetilde{M}^{i}z>\widetilde{q}^{i}$.
Let $M=-\widetilde{M}$ and $q=-\widetilde{q}$; then $\forall i\in\\{1,\ldots,6\\}$,
(4)
$M^{i}(p_{1,k}-p_{2,k})<{q}^{i}\Rightarrow(p_{1,k}-p_{2,k})\not\in\mathbb{B}_{\delta/2}$
Intuitively, picking one $i$ at time step $k$ results in a configuration (in
position space) where the two UAS are separated in one of two ways along one
of the three axes of motion (two ways along each of three axes gives the $6$
options $i\in\\{1,\ldots,6\\}$). For example, if at time step $k$ we select
$i$ with corresponding $M^{i}=[0,0,1]$ and $q^{i}=-\delta$, then
UAS 2 flies over UAS 1 by $\delta$ meters, and so on.
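The six separating half-spaces can be written down explicitly. In the sketch below (our own construction; following the in-text example, we take $q^{i}=-\delta$ for every side), the rows of $M$ are the signed coordinate directions, and satisfying $M^{i}z<q^{i}$ for some $i$ certifies that $z$ lies outside the exclusion cube:

```python
import numpy as np

# Rows of M: choosing i forces separation in one of two directions
# along one of the three axes (6 options total).
M = np.vstack([np.eye(3), -np.eye(3)])   # shape (6, 3)

def side_satisfied(i, z, delta):
    """Check the convex condition M^i z < q^i of (4), with
    q^i = -delta as in the in-text example."""
    return bool(M[i] @ z < -delta)

# UAS 2 flying 2*delta above UAS 1: z = p1 - p2 = [0, 0, -2*delta]
delta = 0.5
z = np.array([0.0, 0.0, -2 * delta])
ok = side_satisfied(2, z, delta)       # i with M^i = [0,0,1] holds
not_ok = side_satisfied(0, z, delta)   # x-axis side does not hold
```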
##### A Centralized solution via a MILP formulation
Here, we formulate a Mixed-Integer Linear Program (MILP) to solve the two-UAS
collision avoidance problem of Problem 1
in a predictive, receding horizon manner. For the formulation, we consider an
$H$-step look-ahead that contains the time steps where the two UAS are in
conflict. Let the dynamics of either UAS (for simplicity we assume both UAS
have the identical dynamics of multi-rotor robots, although our approach
extends to the heterogeneous case) be of the form $x_{k+1}=Ax_{k}+Bu_{k}$. At each
time step $k$, the UAS state is defined as
$x_{k}=[p_{k},\,v_{k}]^{T}\in\mathbb{R}^{6}$, where $p$ and $v$ are the UAS
positions and velocities in the 3D space. Let $C$ be the observation matrix
such that $p_{k}=Cx_{k}$. The inputs $u_{k}\in\mathbb{R}^{3}$ are the thrust,
roll and pitch of the UAS. The matrices $A$ and $B$ are obtained through
linearization of the UAS dynamics around hover and discretization in time, see
(Luukkonen, 2011) and (Pant et al., 2015) for more details. Let
$\mathbf{x}_{j}\in\mathbb{R}^{6(H+1)}$ be the pre-planned full state
trajectories, $\mathbf{x}_{j}^{\prime}\in\mathbb{R}^{6(H+1)}$ the new full
state trajectories and $\mathbf{u}_{j}^{\prime}\in\mathbb{R}^{3H}$ the new
controls to be computed for the UAS $j=1,2$. Let
$\mathbf{b}\in\\{0,1\\}^{6(H+1)}$ be binary decision variables, and $\mu$ is a
large positive number, then the MILP problem is defined as:
(5)
$\displaystyle\min_{\mathbf{u}_{1}^{\prime},\,\mathbf{u}_{2}^{\prime},\,\mathbf{b}}J(\mathbf{x}_{1}^{\prime},\,\mathbf{u}_{1}^{\prime},\,\mathbf{x}_{2}^{\prime},\,\mathbf{u}_{2}^{\prime})$
$\displaystyle x_{j,0}^{\prime}$ $\displaystyle=x_{j,0},\,\forall
j\in\\{1,2\\}$ $\displaystyle x_{j,k+1}^{\prime}$
$\displaystyle=Ax_{j,k}^{\prime}+Bu_{j,k}^{\prime},\,\forall
k\in\\{0,\dotsc,H-1\\},\,\forall j\in\\{1,2\\}$ $\displaystyle
Cx^{\prime}_{j,k}$ $\displaystyle\in P_{j,k},\,\forall
k\in\\{0,\dotsc,H\\},\,\forall j\in\\{1,2\\}$ $\displaystyle
M^{i}C\,(x_{1,k}^{\prime}-x_{2,k}^{\prime})$
$\displaystyle\leq{q}_{i}+\mu(1-b^{i}_{k}),\,\forall
k\in\\{0,\dotsc,H\\},\forall i\in\\{1,\dotsc,6\\}$
$\displaystyle\sum_{i=1}^{6}b^{i}_{k}$ $\displaystyle\geq 1,\,\forall
k\in\\{0,\dotsc,H\\}$ $\displaystyle u_{j,k}^{\prime}$ $\displaystyle\in
U,\,\forall k\in\\{0,\dotsc,H\\},\,\forall j\in\\{1,2\\}$ $\displaystyle
x_{j,k}^{\prime}$ $\displaystyle\in X,\,\forall k\in\\{0,\dotsc,H+1\\},\forall
j\in\\{1,2\\}.$
Here $b^{i}_{k}$ encodes action $i=1,\dotsc,6$ taken for avoiding a collision
at time step $k$ which corresponds to a particular side of the cube
$\mathbb{B}_{\delta/2}$. Function $J$ could be any cost function of interest;
we use $J=0$ to turn (5) into a feasibility problem. A solution (when it
exists) to this MILP results in new trajectories
($\mathbf{p}_{1}^{\prime},\,\mathbf{p}_{2}^{\prime}$) that avoid collisions
and stay within their respective robustness tubes of the original
trajectories, and hence are a solution to problem 1.
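The big-M block of (5) admits a simple numeric check: a candidate assignment of binaries $b^{i}_{k}$ and a separation vector $z_{k}$ are consistent iff every relaxed inequality holds and at least one side is active. A sketch for one time step (constants and helper name are ours; $q^{i}=-\delta$ as in the side-picking example above):

```python
import numpy as np

M = np.vstack([np.eye(3), -np.eye(3)])   # the 6 separating half-spaces
q = -0.5 * np.ones(6)                    # q^i = -delta with delta = 0.5
mu = 1e3                                 # big-M constant

def bigM_feasible(z, b):
    """Check the big-M block of MILP (5) at one time step:
    M^i z <= q^i + mu (1 - b^i) for all i, and sum_i b^i >= 1.
    b is the 6-vector of binaries selecting the active side(s)."""
    return bool(np.all(M @ z <= q + mu * (1 - b)) and b.sum() >= 1)

z = np.array([0.0, 0.0, -1.0])           # UAS 2 one meter above UAS 1
b = np.array([0, 0, 1, 0, 0, 0])         # activate the +z side (i = 3)
feasible = bigM_feasible(z, b)
```

Deactivated sides ($b^{i}_{k}=0$) are relaxed by $\mu$, so only the selected side constrains the positions, which is exactly how the MILP encodes the disjunction in (4).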
This optimization is joint over both UAS. It is impractical, as it would
require either one UAS to solve for both, or each UAS to solve an identical
optimization that would also reveal the control sequence of the other UAS.
Solving this MILP in an online manner is also intractable, as we show in
Section 6.2.1.
## 4\. Learning-to-Fly: Decentralized Collision Avoidance for UAS Pairs
To solve Problem 1 in an online and decentralized manner, we develop our
framework, Learning-to-Fly (L2F). Given a predefined priority among the two
UAS, it combines a learning-based conflict resolution (CR) scheme (running
aboard each UAS) that gives us the discrete components of the MILP
formulation (5) with a co-operative collision avoidance MPC for each UAS,
controlling them in a decentralized manner. We assume that the two UAS can
communicate their pre-planned $H$-step trajectories
$\mathbf{p}_{1},\,\mathbf{p}_{2}$ to each other
(refer to Sec. 2.2). L2F then solves Problem 1 by following these steps
(also see Algorithm 1):
1. (1)
Conflict resolution: UAS 1 and 2 make a sequence of decisions,
$\mathbf{d}=(d_{0},\ldots,d_{H})$ to avoid collision. Each
$d_{k}\in\\{1,\ldots\,6\\}$ represents a particular choice of $M$ and $q$ at
time step $k$, see eq. (4). Section 4.2 will describe our proposed learning-
based method for picking $d_{k}$.
2. (2)
UAS 1 CA-MPC: UAS 1 takes the conflict resolution sequence $\mathbf{d}$ from
step 1 and solves a convex optimization to try to deconflict while assuming
UAS 2 maintains its original trajectory. After the optimization the new
trajectory for UAS 1 is sent to UAS 2.
3. (3)
UAS 2 CA-MPC: (If needed) UAS 2 takes the same conflict resolution sequence
$\mathbf{d}$ from step 1 and solves a convex optimization to try to avoid UAS
1’s new trajectory. Section 4.1 provides more details on CA-MPC steps 2 and 3.
The visualization of the above steps is presented in Figure 2. This
decentralized approach differs from the centralized MILP approach, in which
both the binary decision variables and the continuous control variables for
each UAS are decided concurrently.
### 4.1. Distributed and co-operative Collision Avoidance MPC (CA-MPC)
Let $\mathbf{x}_{j}$ be the pre-planned trajectory of UAS $j$,
$\mathbf{x}_{\textit{avoid}}$ be the pre-planned trajectory of the other UAS
to which $j$ must attain a minimum separation, and let
$prty_{j}\in\\{-1,+1\\}$ be the priority of UAS $j$. Assume a decision
sequence $\mathbf{d}$ is given: at each $k$ in the collision avoidance
horizon, the UAS are to avoid each other by respecting (4), namely
$M^{d_{k}}(p_{1,k}-p_{2,k})<{q}^{d_{k}}$. Then each UAS $j=1,2$ solves the
following Collision-Avoidance MPC optimization (CA-MPC): $\text{CA-
MPC}_{j}(\mathbf{x}_{j},\,\mathbf{x}_{avoid},\,\mathbf{P}_{j},\
\mathbf{d},\,prty_{j})$:
(6)
$\displaystyle\min_{\mathbf{u}_{j}^{\prime},\boldsymbol{\lambda}_{j}}\sum_{k=0}^{H}\lambda_{j,k}$
$\displaystyle x_{j,0}^{\prime}$ $\displaystyle=x_{j,0}$ $\displaystyle
x_{j,k+1}^{\prime}$
$\displaystyle=Ax_{j,k}^{\prime}+Bu_{j,k}^{\prime},\,\forall
k\in\\{0,\dotsc,H-1\\}$ $\displaystyle Cx_{j,k}^{\prime}$ $\displaystyle\in
P_{j,k},\,\forall k\in\\{0,\dotsc,H\\}$ $\displaystyle prty_{j}\cdot
M^{d_{k}}C\,(x_{avoid,k}-x_{j,k}^{\prime})$ $\displaystyle\leq
q^{d_{k}}+\lambda_{j,k},\,\forall k\in\\{0,\dotsc,H\\}$
$\displaystyle\lambda_{j,k}$ $\displaystyle\geq 0,\,\forall
k\in\\{0,\dotsc,H\\}$ $\displaystyle u_{j,k}^{\prime}$ $\displaystyle\in
U,\,\forall k\in\\{0,\dotsc,H\\}$ $\displaystyle x_{j,k}^{\prime}$
$\displaystyle\in X,\,\forall k\in\\{0,\dotsc,H+1\\}.$
This MPC optimization tries to find a new trajectory $\mathbf{x}_{j}^{\prime}$
for the UAS $j$ that minimizes the slack variables $\lambda_{j,k}$, which
correspond to violations of the minimum separation constraint
(4) w.r.t. the pre-planned trajectory
$\mathbf{x}_{\textit{avoid}}$ of the UAS in conflict. The constraints in (6)
ensure that UAS $j$ respects its dynamics, input constraints, and state
constraints to stay inside the robustness tube. An objective of $0$ implies
that UAS $j$’s new trajectory satisfies the minimum separation between the two
UAS, see Equation (4). (Enforcing the separation constraint at each time step
can lead to a restrictive formulation, especially in cases where the two UAS
are only briefly close to each other. It does, however, give us an
optimization whose structure does not change over time, and it can avoid
collisions in cases where the UAS could run across each other more than once
in quick succession (e.g. https://tinyurl.com/arc-case), something
ACAS-Xu was not designed for.)
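For a fixed pair of trajectories and a decision sequence $\mathbf{d}$, the minimum slack needed at each step follows directly from the separation constraint in (6). A numeric sketch (helper names and trajectory values are ours; as before we assume $q^{d_k}=-\delta$):

```python
import numpy as np

M = np.vstack([np.eye(3), -np.eye(3)])   # the 6 separating half-spaces

def slacks(p_j, p_avoid, d, delta, prty):
    """Minimum slack lambda_{j,k} >= 0 needed to satisfy
    prty * M^{d_k} (p_avoid_k - p_j_k) <= q^{d_k} + lambda_{j,k}
    at every step, with q^{d_k} = -delta. Total slack 0 means the
    trajectory of UAS j already creates the required separation."""
    lam = []
    for k, dk in enumerate(d):
        lhs = prty * (M[dk] @ (p_avoid[k] - p_j[k]))
        lam.append(max(0.0, lhs - (-delta)))
    return np.array(lam)

p1 = np.tile(np.array([0.0, 0.0, -1.0]), (4, 1))  # UAS 1 flies 1 m below
p2 = np.zeros((4, 3))
lam = slacks(p1, p2, d=[2, 2, 2, 2], delta=0.5, prty=-1)
total = float(lam.sum())                          # 0.0: already separated
```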
CA-MPC optimization for UAS 1: UAS 1, with lower priority, $prty_{1}=-1$,
first attempts to resolve the conflict for the given sequence of decisions
$\mathbf{d}$:
(7)
$\displaystyle(\mathbf{x_{1}^{\prime}},\mathbf{u}_{1}^{\prime},\boldsymbol{\lambda}_{1})$
$\displaystyle=\textbf{CA-
MPC}_{1}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{P}_{1},\mathbf{d},-1)$
An objective of $0$ implies that UAS 1 alone can satisfy the minimum
separation between the two UAS. Otherwise, UAS 1 alone could not create
separation and UAS 2 now needs to maneuver as well.
CA-MPC optimization for UAS 2: If UAS 1 is unsuccessful at collision
avoidance, UAS 1 communicates its current revised trajectory
$\mathbf{x}_{1}^{\prime}$ to UAS 2, with $prty_{2}=+1$. UAS 2 then creates a
new trajectory $\mathbf{x}_{2}^{\prime}$ (w.r.t the same decision sequence
$\mathbf{d}$):
(8)
$\displaystyle(\mathbf{x}_{2}^{\prime},\mathbf{u}_{2}^{\prime},\boldsymbol{\lambda}_{2})$
$\displaystyle=\textbf{CA-
MPC}_{2}(\mathbf{x}_{2},\mathbf{x}_{1}^{\prime},\mathbf{P}_{2},\mathbf{d},+1)$
Algorithm 1 is designed to be computationally lighter than the MILP approach
(5), but unlike the MILP it is not complete.
Notation :
$(\mathbf{x}^{\prime}_{1},\mathbf{x}^{\prime}_{2},\mathbf{u}_{1}^{\prime},\mathbf{u}_{2}^{\prime})=\textbf{L2F}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{P}_{1},\mathbf{P}_{2})$
Input: Pre-planned trajectories $\mathbf{x}_{1}$, $\mathbf{x}_{2}$, robustness
tubes $\mathbf{P}_{1}$, $\mathbf{P}_{2}$
Output: Sequence of control signals $\mathbf{u}_{1}^{\prime}$,
$\mathbf{u}_{2}^{\prime}$ for the two UAS, updated trajectories
$\mathbf{x}_{1}^{\prime}$, $\mathbf{x}_{2}^{\prime}$
Get $\mathbf{d}$ from conflict resolution
UAS 1 solves CA-MPC optimization (6):
$(\mathbf{x}_{1}^{\prime},\mathbf{u}_{1}^{\prime},\boldsymbol{\lambda}_{1})=\textbf{CA-
MPC}_{1}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{P}_{1},\mathbf{d},-1)$
if _$\sum_{k}\lambda_{1,k}=0$_ then
Done: UAS 1 alone has created separation; Set
$\mathbf{u}_{2}^{\prime}=\mathbf{u}_{2}$
else
UAS 1 transmits solution to UAS 2
UAS 2 solves CA-MPC optimization (6):
$(\mathbf{x}_{2}^{\prime},\mathbf{u}_{2}^{\prime},\boldsymbol{\lambda}_{2})=\textbf{CA-
MPC}_{2}(\mathbf{x}_{2},\mathbf{x}_{1}^{\prime},\mathbf{P}_{2},\mathbf{d},+1)$
if _$\sum_{k}\lambda_{2,k}=0$_ then
Done: UAS 2 has created separation
else
if _$||p_{1,k}^{\prime}-p_{2,k}^{\prime}||\geq\delta,\,\forall k=0,\dotsc,H$_
then
Done: UAS 1 and UAS 2 created separation
else
Not done: UAS still violate Equation (2a)
end if
end if
end if
Apply control signals $\mathbf{u}_{1}^{\prime}$, $\mathbf{u}_{2}^{\prime}$ if
Done; else Fail.
Algorithm 1 Learning-to-Fly: Decentralized and cooperative collision avoidance
for two UAS. Also see Figure 2.
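The control flow of Algorithm 1 can be sketched as follows. This is only a skeleton: `solve_ca_mpc` is a placeholder for the convex CA-MPC optimization (6), and the stub solver, function names, and sample inputs are ours for illustration.

```python
import numpy as np

def l2f(x1, x2, solve_ca_mpc, d, delta):
    """Control-flow sketch of Algorithm 1 (L2F). `solve_ca_mpc`
    stands in for the convex CA-MPC (6): it must return a new
    trajectory together with its slack sequence."""
    x1p, lam1 = solve_ca_mpc(x1, x2, d, prty=-1)
    if lam1.sum() == 0:
        return x1p, x2, "UAS 1 alone created separation"
    x2p, lam2 = solve_ca_mpc(x2, x1p, d, prty=+1)
    if lam2.sum() == 0:
        return x1p, x2p, "UAS 2 created separation"
    if np.all(np.max(np.abs(x1p - x2p), axis=1) >= delta):
        return x1p, x2p, "UAS 1 and 2 created separation"
    return x1, x2, "Fail"

def stub_mpc(xj, x_avoid, d, prty):
    # Toy stand-in solver: shift UAS j 1 m along z away from the
    # other UAS and report zero slack (separation achieved).
    return xj + np.array([0.0, 0.0, float(prty)]), np.zeros(len(xj))

x1, x2 = np.zeros((4, 3)), np.zeros((4, 3))
_, _, status = l2f(x1, x2, stub_mpc, d=[2] * 4, delta=0.5)
```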
The solution of CA-MPC can be defined as follows:
###### Definition 4.0 (Zero-slack solution).
The solution of the CA-MPC optimization (6) is called the zero-slack solution
if, for a given decision sequence $\mathbf{d}$, either
1) there exists an optimal solution of (6) such that $\sum_{k}\lambda_{1,k}=0$
or
2) problem (6) is feasible with $\sum_{k}\lambda_{1,k}>0$ and there exists an
optimal solution of (6) such that $\sum_{k}\lambda_{2,k}=0$.
Theorem 4.2 below gives a sufficient condition for collision avoidance, and
Theorem 4.3 makes important connections between the slack variables in the
CA-MPC formulation and the binary variables in the MILP. Both theorems are
direct consequences of the construction of the CA-MPC optimizations. We omit
the proofs for brevity.
###### Theorem 4.2 (Sufficient condition for CA).
A zero-slack solution of (6) implies that the resulting trajectories for the
two UAS are non-conflicting and within the robustness tubes of the initial
trajectories. (Theorem 4.2 formulates a conservative result, as (4) is a
convex under-approximation of the originally non-convex collision avoidance
constraint (3). Indeed, non-zero slack $\exists k\,|\,\lambda_{2,k}>0$ does not
necessarily imply violation of the mutual separation requirement (2a). The
control signals $u_{1}^{\prime},u_{2}^{\prime}$ computed by Algorithm 1 can
therefore in some instances still create separation between the UAS even when
the conditions of Theorem 4.2 are not satisfied.)
###### Theorem 4.3 (Existence of the zero-slack solution).
Feasibility of the MILP problem (5) implies the existence of the zero-slack
solution of CA-MPC optimization (6).
Theorem 4.3 states that the binary decision variables $b^{i}_{k}$ selected
by a feasible solution of the MILP problem (5), when used to select the
constraints (defined by $M,\,q$) in the CA-MPC formulations for UAS 1 and 2,
imply the existence of a zero-slack solution of (6).
### 4.2. Learning-based conflict resolution
Motivated by Theorem 4.3, we propose to learn the conflict resolution policy
offline from MILP solutions and then apply the learned policy online. To
do so, we use a Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber,
1997) recurrent neural network augmented with fully-connected layers. LSTMs
perform better than traditional recurrent neural networks on sequential
prediction tasks (Gers et al., 2002).
Figure 3. Proposed LSTM model architecture for CR-S. LSTM layers are shown
unrolled over $H$ time steps. The inputs are $z_{k}$ which are the differences
between the planned UAS positions, and the outputs are decisions $d_{k}$ for
conflict resolution at each time $k$ in the horizon.
The network is trained to map a difference trajectory
$\mathbf{z}=\mathbf{x}_{1}-\mathbf{x}_{2}$ (as in Equation (3)) to a decision
sequence $\mathbf{d}$ that deconflicts pre-planned trajectories
$\mathbf{x}_{1}$ and $\mathbf{x}_{2}$. For creating the training set,
$\mathbf{d}$ is produced by solving the MILP problem (5), i.e. obtaining a
sequence of binary decision variables $\mathbf{b}\in\\{0,1\\}^{6(H+1)}$ and
translating it into the decision sequence
$\mathbf{d}\in\\{1,\ldots,6\\}^{H+1}$.
The proposed architecture is presented in Figure 3. The input layer is
connected to the block of three stacked LSTM layers. The output layer is a
time distributed dense layer with a softmax activation function that produces
the class probability estimate
$\eta_{k}=[\eta_{k}^{1},\ldots,\eta_{k}^{6}]^{\top}$ for each
$k\in\\{0,\ldots,H\\}$, which corresponds to the decision
$d_{k}=\text{argmax}_{i=1,\ldots,6}\ \eta_{k}^{i}$.
### 4.3. Conflict Resolution Repairing
The total number of possible conflict resolution (CR) decision sequences
over a time horizon of $H$ steps is $6^{H+1}$. Learning-based conflict
resolution produces only one such CR sequence, and since it is not guaranteed
to be correct, an inadequate CR sequence might leave the CA-MPC unable to
find a feasible solution of (6), i.e. it fails to resolve a collision. To
make the CA algorithm more resilient to such failures, we propose a heuristic
that, instead of generating only one CR sequence, generates a number of
slightly modified sequences (backups), with the intention of increasing the
probability of finding an overall solution for CA. We call this the CR
repairing algorithm and propose the following scheme for it.
#### 4.3.1. Naïve repairing scheme for generating CR decision sequences
The naïve repairing algorithm is based on the initial supervised-learning CR
architecture of Section 4.2. The proposed DNN model for CR has an output
layer with a softmax activation function that produces the class probability
estimates $\eta_{k}=[\eta_{k}^{1},\ldots,\eta_{k}^{6}]^{\top}$ for each time
step $k$, see Figure 3. Discrete decisions are chosen as:
(9) $d_{k}=\underset{i=1,\ldots,6}{\text{argmax}}\ \eta_{k}^{i},$
which corresponds to the highest probability class for time step $k$. Denote
such choice of $d_{k}$ as $d_{k}^{1}$.
Analogously to the idea of top-1 and top-$S$ accuracy rates used in image
classification (Russakovsky et al., 2015), where not only the
highest-predicted class counts but also the top $S$ most likely labels, we
define higher-order decisions $d^{s}_{k}$ as follows: instead of choosing the
highest-probability class at time step $k$, one can choose the
second-highest-probability class ($s=2$), the third highest ($s=3$), and so
on up to the sixth highest ($s=6$).
Formally, the second highest probability class choice $d_{k}^{2}$ is defined
as:
(10) $d_{k}^{2}=\underset{i=1,\ldots,6,\,i\not=d_{k}^{1}}{\text{argmax}}\ \eta_{k}^{i}$
In the same manner, we define decisions up to $d_{k}^{6}$. In general, the
$s$-th-highest-probability decision $d_{k}^{s}$ is defined as
($s=1,\ldots,6$):
(11) $d_{k}^{s}=\underset{i=1,\ldots,6,\,i\not=d_{k}^{j}\ \forall
j<s}{\text{argmax}}\ \eta_{k}^{i}$
Using equation (11) to generate decisions $d_{k}$ at time step $k$, we define
the naïve scheme for generating new decision sequences $\mathbf{d}^{\prime}$
following Algorithm 2.
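Equation (11) amounts to sorting the class probabilities at step $k$ and taking the $s$-th entry. A minimal sketch (function name and the probability vector are ours for illustration):

```python
import numpy as np

def decision(eta_k, s=1):
    """d_k^s from eq. (11): the class with the s-th highest
    probability in eta_k (classes are numbered 1..6)."""
    order = np.argsort(-eta_k)   # classes sorted by descending probability
    return int(order[s - 1]) + 1

eta_k = np.array([0.40, 0.05, 0.25, 0.10, 0.15, 0.05])
d1 = decision(eta_k, s=1)   # class 1 (highest probability)
d2 = decision(eta_k, s=2)   # class 3 (second highest)
```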
Notation :
$(\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime},\mathbf{u}_{1}^{\prime},\mathbf{u}_{2}^{\prime})=\textbf{Repairing}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{P}_{1},\mathbf{P}_{2},\varUpsilon)$
Input: Pre-planned trajectories $\mathbf{x}_{1}$, $\mathbf{x}_{2}$, robustness
tubes $\mathbf{P}_{1}$, $\mathbf{P}_{2}$, original decision sequence
$\mathbf{d}$, class probability estimates $\mathbf{\eta}$, set of collision
indices: $\varUpsilon=\\{k:\ ||p_{1,k}^{\prime}-p_{2,k}^{\prime}||<\delta,\
0\leq k\leq H\\}$.
Output: Sequence of control signals $\mathbf{u}_{1}^{\prime}$,
$\mathbf{u}_{2}^{\prime}$ for the two UAS, updated trajectories
$\mathbf{x}_{1}^{\prime}$, $\mathbf{x}_{2}^{\prime}$
for _$s=2,\ldots,6$_ do
Define repaired sequence $\mathbf{d}^{\prime}$ using naïve scheme as follows:
* -
$\forall k\not\in\varUpsilon:\ d^{\prime}_{k}=d_{k}$
* -
$\forall k\in\varUpsilon:\ d^{\prime}_{k}=d_{k}^{s}=\text{argmax}_{i=1,\ldots
6,\,i\not=d_{k}^{j}\ \forall j<s}\ \eta_{k}^{i}$
$(\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime},\mathbf{u}_{1}^{\prime},\mathbf{u}_{2}^{\prime})=\textbf{CA-
MPC}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{P}_{1},\mathbf{P}_{2},\mathbf{d}^{\prime})$
if _$||p_{1,k}^{\prime}-p_{2,k}^{\prime}||\geq\delta,\,\forall k=0,\dotsc,H$_
then
Break: Repaired CR sequence $\mathbf{d}^{\prime}$ led to UAS 1 and UAS 2
creating separation
end if
end for
if _$||p_{1,k}^{\prime}-p_{2,k}^{\prime}||\geq\delta,\,\forall k=0,\dotsc,H$_
then
$\mathbf{d}^{\prime}=\mathbf{d}$: Repairing failed. Return trajectories for
the original decision sequence.
end if
Algorithm 2 Naïve scheme for CR repairing
###### Example 4.4.
Let the horizon of interest be only $H=5$ time steps and the initially
obtained decision sequence be $\mathbf{d}=(1,1,1,1,1)$. Given the collision
was detected at time steps 2 and 3, i.e. $\varUpsilon=(2,3)$, let the second-
highest probability decisions be $d_{2}^{2}=3$ and $d_{3}^{2}=5$. Then the
proposed repaired decision sequence is $\mathbf{d}^{\prime}=(1,1,3,5,1)$. If
this CR sequence $\mathbf{d}^{\prime}$ still violates the mutual separation
requirement, the naïve repairing scheme proposes another decision
sequence using the third-highest probability decisions $d_{k}^{3}$. Let
$d_{2}^{3}=2$ and $d_{3}^{3}=3$; then $\mathbf{d}^{\prime}=(1,1,2,3,1)$. If it
fails again, the next generated sequence uses the fourth-highest decisions,
and so on up to the fifth iteration of the algorithm (which requires the
$d_{k}^{6}$ estimates). If none of the sequences manages to create separation,
the original CR sequence $\mathbf{d}=(1,1,1,1,1)$ is returned.
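One pass of the naïve scheme is simple to write down. The sketch below reproduces the values of Example 4.4 for $s=2$ (the helper name is ours, and the higher-order decisions are supplied directly rather than derived from $\eta$):

```python
def repair(d, collisions, higher_order):
    """One pass of the naive repairing scheme: replace d_k at the
    collision indices with the given higher-order decisions d_k^s."""
    d_new = list(d)
    for k in collisions:
        d_new[k] = higher_order[k]
    return tuple(d_new)

# Example 4.4 with s = 2: collisions at k = 2, 3, d_2^2 = 3, d_3^2 = 5
d = (1, 1, 1, 1, 1)
d_prime = repair(d, collisions=[2, 3], higher_order={2: 3, 3: 5})
# d_prime == (1, 1, 3, 5, 1)
```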
Other variations of the naïve scheme are possible. For example, one can use
an augmented set of collision indices $\varUpsilon$, or another ordering of
the decisions $d_{k}$ across the time indices, e.g. replacing decisions
$d_{k}$ one by one rather than all $d_{k}$ for the collision indices
$\varUpsilon$ at once. Moreover, other CR repairing schemes may be efficient
and should be explored. We leave this for future work.
## 5\. Learning-‘N-Flying: Decentralized Collision Avoidance for Multi-UAS
Fleets
The L2F framework of Section 4 was tailored for CA between two UAS. When more
than two UAS are simultaneously on a collision path, applying L2F pairwise for
all UAS involved might not necessarily result in all future collisions being
resolved. Consider the following example:
###### Example 5.1.
Figure 4 depicts an experimental setup. The scenario consists of 3 UAS which
must reach desired goal states within 4 seconds while avoiding each other;
the minimum allowed separation is set to $\delta=0.1$m. The initially
pre-planned UAS trajectories have a simultaneous collision across all UAS at
$(0,0,0)$. The robustness tube radii were fixed at $\rho=0.055$ and the UAS
priorities were set in increasing order of index, i.e. a UAS with a lower
index has a lower priority: $1<2<3$. The first application of L2F resolved
the collision between UAS 1 and UAS 2, see Figure 4(a). The second
application resolved the collision between UAS 1 and UAS 3, with UAS 3
deviating vertically downwards, see Figure 4(b). The third application led
UAS 3 to deviate vertically upwards, which resolved the collision between
UAS 2 and UAS 3 but re-introduced a violation of the minimum separation
between UAS 1 and UAS 3 in the middle of their trajectories, see Figure 4(c).
(a) L2F for pair UAS 1, UAS 2. Pairwise separations: $\delta_{12}=0.11$m,
$\delta_{13}=0.06$m, $\delta_{23}=0.05$m.
(b) L2F for pair UAS 1, UAS 3. Pairwise separations: $\delta_{12}=0.11$m,
$\delta_{13}=0.11$m, $\delta_{23}=0.04$m.
(c) L2F for pair UAS 2, UAS 3 results. Pairwise separations:
$\delta_{12}=0.11$m, $\delta_{13}=0.01$m, $\delta_{23}=0.1$m.
Figure 4. Sequential L2F application for the 3 UAS scenario. Pre-planned
colliding trajectories are depicted in dashed lines. Simultaneous collision is
detected at point $(0,0,0)$. The updated trajectories generated by L2F are
depicted in solid color. Initial positions of UAS marked by “O”.
To overcome this livelock-like issue, where repeated pair-wise applications
of L2F only result in new conflicts between other pairs of UAS, we propose a
modification of L2F called Learning-‘N-Flying (LNF). The LNF framework is based
on pairwise application of L2F, but also incorporates a _Robustness Tube
Shrinking_ (RTS) process described in Section 5.1 after every L2F application.
The overall LNF framework is presented in Algorithm 3. Section 6.3 presents
extensive simulations showing the applicability of the LNF scheme to scenarios
where more than two UAS are on collision paths, including in high-density UAS
operations.
Input: Pre-planned fleet trajectories $\mathbf{x}_{i}$, initial robustness
tubes $\mathbf{P}_{i}$, UAS priorities
Output: New trajectories $\mathbf{x}_{i}^{\prime}$, new robustness tubes
$\mathbf{P}^{\prime}_{i}$, control inputs $u^{\prime}_{i,0}$
Each UAS $i$ detects the set of UAS that it is in conflict with: $S=\\{j\ |\
\exists k\ ||p_{i,k}-p_{j,k}||<\delta,\ 0\leq k\leq H\\}$
Order $S$ by the UAS priorities
for _$j\in S$_ do
$(\mathbf{x}_{i}^{\prime},\mathbf{x}_{j}^{\prime},\mathbf{u}_{i}^{\prime},\mathbf{u}_{j}^{\prime})=\textbf{L2F}(\mathbf{x}_{i},\mathbf{x}_{j},\mathbf{P}_{i},\mathbf{P}_{j})$,
see Section 4
if _$\varUpsilon=\\{k:\ ||p_{i,k}^{\prime}-p_{j,k}^{\prime}|| <\delta,\ 0\leq
k\leq H\\}\not=\emptyset$_ then
$(\mathbf{x}_{i}^{\prime},\mathbf{x}_{j}^{\prime},\mathbf{u}_{i}^{\prime},\mathbf{u}_{j}^{\prime})=\textbf{Repairing}(\mathbf{x}_{i},\mathbf{x}_{j},\mathbf{P}_{i},\mathbf{P}_{j},\varUpsilon)$
end if
$(\mathbf{P}^{\prime}_{i},\mathbf{P}^{\prime}_{j})=\textbf{RTS}\,(\mathbf{x}^{\prime}_{i},\mathbf{x}^{\prime}_{j},\mathbf{P}_{i},\mathbf{P}_{j})$
end for
Apply controls $u_{i,0}^{\prime}$ for the initial time step of the receding
horizon
Algorithm 3 Learning-‘N-Flying: Decentralized and cooperative collision
avoidance for multi-UAS fleets. Applied in a receding horizon manner by each
UAS $i$.
### 5.1. Robustness tubes shrinking (RTS)
The high-level idea of RTS is that, when two trajectories are de-collided
by L2F, we want to constrain their further modification by L2F so as not to
induce new collisions. In Example 5.1, after collision-free
$\mathbf{x}_{1}^{\prime}$ and $\mathbf{x}_{2}^{\prime}$ are produced by L2F
and before $\mathbf{x}_{2}^{\prime}$ and $\mathbf{x}_{3}$ are de-collided, we
want to constrain any modification to $\mathbf{x}_{2}^{\prime}$ such that it
does not collide again with $\mathbf{x}_{1}^{\prime}$. Since trajectories are
constrained to remain within robustness tubes, we simply shrink those tubes to
achieve this. The amount of shrinking is $\delta$, the minimum separation. RTS
is described in Algorithm 4.
Notation :
$(\mathbf{P}^{\prime}_{1},\mathbf{P}^{\prime}_{2})=\textbf{RTS}\,(\mathbf{x}^{\prime}_{1},\mathbf{x}^{\prime}_{2},\mathbf{P}_{1},\mathbf{P}_{2})$
Input: New trajectories $\mathbf{x}^{\prime}_{1}$, $\mathbf{x}^{\prime}_{2}$
generated by L2F, initial robustness tubes $\mathbf{P}_{1}$, $\mathbf{P}_{2}$
Output: New robustness tubes $\mathbf{P}^{\prime}_{1}$,
$\mathbf{P}^{\prime}_{2}$
Set $msep=\min_{0\leq k\leq H}||p_{1,k}^{\prime}-p_{2,k}^{\prime}||$
for _$k=0,\ldots,H$_ do
if _$\mathbf{dist}(P_{1,k},P_{2,k})\geq\delta$_ then
No shrinking required: $P^{\prime}_{1,k}=P_{1,k},\ P^{\prime}_{2,k}=P_{2,k}$
else
Determine the axis ($X$, $Y$ or $Z$) of maximum separation between
$p^{\prime}_{1,k}$ and $p^{\prime}_{2,k}$
Define the 3D box $\varPi_{k}$ with edges of size $\min(msep,\delta)$ along
the determined axis and infinite edges along other two axes
Center $\varPi_{k}$ at the midpoint between $p^{\prime}_{1,k}$ and
$p^{\prime}_{2,k}$
Remove $\varPi_{k}$ from both tubes:
$P^{\prime}_{1,k}=P_{1,k}\setminus\varPi_{k},\
P^{\prime}_{2,k}=P_{2,k}\setminus\varPi_{k}$
end if
end for
Algorithm 4 Robustness tubes shrinking. Also see Figure 5.
Figure 5. Visualization of the robustness tubes shrinking process.
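As a concrete illustration, the per-time-step shrinking of Algorithm 4 can be sketched in Python. The box representation (per-axis lower/upper bounds), the function name, and passing $msep$ in as a precomputed argument are assumptions of this sketch, not the paper's implementation; instead of an explicit set difference $P_{j,k}\setminus\varPi_{k}$, each tube simply keeps the piece on its own side of the removed slab:

```python
def shrink_tubes_step(p1, p2, tube1, tube2, delta, msep):
    """One-time-step sketch of RTS shrinking (cf. Algorithm 4).

    p1, p2: positions (x, y, z) of the two UAS at this time step.
    tube1, tube2: axis-aligned boxes, each a pair (lo, hi) of per-axis bounds.
    delta: minimum separation; msep: min_k ||p1_k - p2_k|| over the horizon.
    """
    sep = [abs(a - b) for a, b in zip(p1, p2)]
    ax = sep.index(max(sep))              # axis of maximum separation
    mid = 0.5 * (p1[ax] + p2[ax])         # centre of the removed slab Pi_k
    half = 0.5 * min(msep, delta)         # half-width of the slab
    out = []
    for (lo, hi), p in ((tube1, p1), (tube2, p2)):
        lo, hi = list(lo), list(hi)
        if p[ax] <= mid:                  # tube keeps the piece on its own side
            hi[ax] = min(hi[ax], mid - half)
        else:
            lo[ax] = max(lo[ax], mid + half)
        out.append((lo, hi))
    return out[0], out[1]
```

After the call, the two boxes are separated by at least $\min(msep,\delta)$ along the chosen axis, which equals $\delta$ whenever the trajectories themselves achieve the required separation.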
###### Example 5.2.
Figure 5(a) presents the initial discrete-time robustness tubes and
trajectories for UAS 1 and UAS 2. Successful application of L2F resolves the
detected collision between initially planned trajectories $\mathbf{p}_{1}$,
$\mathbf{p}_{2}$, depicted as dashed lines. The new non-colliding trajectories
$\mathbf{p}_{1}^{\prime}$ and $\mathbf{p}_{2}^{\prime}$ produced by L2F are drawn in
solid color. Figure 5(b) shows that for time step $k=0$ no shrinking is
required since the robustness tubes $P_{1,0}$, $P_{2,0}$ are already
$\delta$-separate. For time steps $k=1,2,3$, the axis of maximum separation
between trajectories is $Z$, therefore, boxes $\varPi_{k}$ are defined to be
of height $\delta$ with infinite width and length. Boxes $\varPi_{k}$ are
drawn in gray, midpoints between the trajectories are drawn in yellow. Figure
5(c) depicts the updated $\delta$-separate robustness tubes
$\mathbf{P}^{\prime}_{1}$ and $\mathbf{P}^{\prime}_{2}$.
###### Theorem 5.3 (Sufficient condition for $\delta$-separate tubes).
A zero-slack solution of (6) implies that the robustness tubes updated by the
RTS procedure are subsets of the initial robustness tubes and are
$\delta$-separate, i.e., for robustness tubes
$(\mathbf{P}^{\prime}_{1},\mathbf{P}^{\prime}_{2})=\textbf{RTS}\,(\mathbf{x}^{\prime}_{1},\mathbf{x}^{\prime}_{2},\mathbf{P}_{1},\mathbf{P}_{2})$,
the following two properties hold:
(12)
$\displaystyle\mathbf{dist}(\mathbf{P}_{1}^{\prime},\mathbf{P}_{2}^{\prime})\geq\delta$
(13) $\displaystyle\mathbf{P}_{j}^{\prime}\subseteq\mathbf{P}_{j},\ \forall
j\in\\{1,2\\}$
See the proof in Appendix A.
### 5.2. Combination of L2F with RTS
The following three lemmas establish important properties of L2F combined with
the shrinking process. Proofs can be found in Appendix A.
###### Lemma 5.4.
Let two trajectories $\mathbf{x}_{1}^{\prime}$, $\mathbf{x}_{2}^{\prime}$ be
generated by L2F and let the robustness tubes $\mathbf{P}_{1}^{\prime}$,
$\mathbf{P}_{2}^{\prime}$ be the updated tubes generated by RTS procedure from
initial tubes $\mathbf{P}_{1}$, $\mathbf{P}_{2}$ using the trajectories
$\mathbf{x}_{1}^{\prime}$, $\mathbf{x}_{2}^{\prime}$. Then
(14) $\mathbf{p}_{j}^{\prime}\in\mathbf{P}_{j}^{\prime},\ \forall
j\in\\{1,2\\}.$
The above Lemma 5.4 states that the RTS procedure preserves membership of each
trajectory in its corresponding updated robustness tube.
###### Lemma 5.5.
Let two robustness tubes $\mathbf{P}_{1}$ and $\mathbf{P}_{2}$ be
$\delta$-separate. Then any pair of trajectories within these robustness tubes
are non-conflicting, i.e.:
(15) $\forall\mathbf{p}_{1}\in\mathbf{P}_{1},\
\forall\mathbf{p}_{2}\in\mathbf{P}_{2},\
||p_{1,k}-p_{2,k}||\geq\delta,\,\forall k\in\\{0,\dotsc,H\\}.$
Using Lemma 5.5 we can now prove that every successful application of L2F
combined with the shrinking process results in new trajectories that do not
violate previously achieved minimum separations between UAS, unless the RTS
process results in an empty robustness tube. In other words, it solves the
3-UAS issue raised in Example 5.1. We formalize this result in the context of 3
UAS with the following lemma:
###### Lemma 5.6.
Let $\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3}$ be pre-planned conflicting
UAS trajectories, and let $\mathbf{P}_{1}$, $\mathbf{P}_{2}$ and
$\mathbf{P}_{3}$ be their corresponding robustness tubes. Without loss of
generality assume that the sequential pairwise application of L2F combined
with RTS has been done in the following order:
(16)
$\displaystyle(\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime})=\textbf{L2F}\,(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{P}_{1},\mathbf{P}_{2}),$
$\displaystyle\qquad(\mathbf{P}_{1}^{\prime},\mathbf{P}_{2}^{\prime})=\textbf{RTS}\,(\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime},\mathbf{P}_{1},\mathbf{P}_{2})$
(17)
$\displaystyle(\mathbf{x}_{1}^{\prime\prime},\mathbf{x}_{3}^{\prime})=\textbf{L2F}\,(\mathbf{x}_{1}^{\prime},\mathbf{x}_{3},\mathbf{P}_{1}^{\prime},\mathbf{P}_{3}),$
$\displaystyle\qquad(\mathbf{P}_{1}^{\prime\prime},\mathbf{P}_{3}^{\prime})=\textbf{RTS}\,(\mathbf{x}_{1}^{\prime\prime},\mathbf{x}_{3}^{\prime},\mathbf{P}_{1}^{\prime},\mathbf{P}_{3})$
(18)
$\displaystyle(\mathbf{x}_{2}^{\prime\prime},\mathbf{x}_{3}^{\prime\prime})=\textbf{L2F}\,(\mathbf{x}_{2}^{\prime},\mathbf{x}_{3}^{\prime},\mathbf{P}_{2}^{\prime},\mathbf{P}_{3}^{\prime}),$
$\displaystyle\qquad(\mathbf{P}_{2}^{\prime\prime},\mathbf{P}_{3}^{\prime\prime})=\textbf{RTS}\,(\mathbf{x}_{2}^{\prime\prime},\mathbf{x}_{3}^{\prime\prime},\mathbf{P}_{2}^{\prime},\mathbf{P}_{3}^{\prime})$
If all three L2F applications gave zero-slack solutions, then the position
trajectories
$\mathbf{p}_{1}^{\prime\prime},\mathbf{p}_{2}^{\prime\prime},\mathbf{p}_{3}^{\prime\prime}$
pairwise satisfy the mutual separation requirement, i.e.:
(19)
$\displaystyle||p^{\prime\prime}_{1,k}-p^{\prime\prime}_{2,k}||\geq\delta,\
\forall k\in\\{0,\ldots,H\\}$ (20)
$\displaystyle||p^{\prime\prime}_{1,k}-p^{\prime\prime}_{3,k}||\geq\delta,\
\forall k\in\\{0,\ldots,H\\}$ (21)
$\displaystyle||p^{\prime\prime}_{2,k}-p^{\prime\prime}_{3,k}||\geq\delta,\
\forall k\in\\{0,\ldots,H\\}$
and are within their corresponding robustness tubes:
(22) $\mathbf{p}^{\prime\prime}_{j}\in\mathbf{P}^{\prime\prime}_{j},\ \forall
j\in\\{1,2,3\\}.$
By induction we can extend Lemma 5.6 to any number of UAS. Therefore, we can
conclude that for any $N$ pre-planned UAS trajectories, a zero-slack solution of
LNF is a sufficient condition for CA, i.e., the resulting trajectories generated
by LNF are non-conflicting and within the robustness tubes of the initial
trajectories. Note that this approach can still fail to find a solution,
especially as repeated RTS can result in empty robustness tubes.
###### Theorem 5.7.
For the case of $N$ UAS, when applied at any time step $k$, LNF (Algorithm 3)
terminates after no more than $N\choose 2$ applications of pairwise L2F
(Algorithm 1).
This result follows directly from the inductive application of Lemma 5.6. In
experimental evaluations (Section 6.3), we see that this worst-case number of
L2F applications is not required often in practice.
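The sequential pairwise sweep of Eqs. (16)–(18), whose length Theorem 5.7 bounds by $N\choose 2$, can be sketched as follows. The helper names and the interfaces of `l2f` and `rts` are assumptions of this sketch; in the actual system they are the learned CR scheme with CA-MPC and Algorithm 4, respectively:

```python
from itertools import combinations

def min_separation(t1, t2):
    """Minimum distance between two discrete-time trajectories of 3D points."""
    return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
               for p, q in zip(t1, t2))

def lnf_sweep(trajs, tubes, l2f, rts, delta):
    """Sketch of LNF's pairwise sweep over all UAS pairs (cf. Theorem 5.7).

    For each conflicting pair, de-collide with l2f, then shrink the tubes with
    rts so later pairs cannot undo the achieved separation. Returns the number
    of pairwise L2F applications, which is at most N*(N-1)/2.
    """
    calls = 0
    for i, j in combinations(range(len(trajs)), 2):
        if min_separation(trajs[i], trajs[j]) < delta:   # conflict detected
            trajs[i], trajs[j] = l2f(trajs[i], trajs[j], tubes[i], tubes[j])
            tubes[i], tubes[j] = rts(trajs[i], trajs[j], tubes[i], tubes[j])
            calls += 1
    return trajs, tubes, calls
```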
## 6\. Experimental evaluation of L2F and LNF
In this section, we show the performance of our proposed methods via extensive
simulations, as well as an implementation for actual quad-rotor robots. We
compare L2F and L2F with repair (L2F+Rep) with the MILP formulation of Section
3 and two other baseline approaches. Through multiple case studies, we show
how LNF extends the L2F framework to scenarios with more than two UAS.
### 6.1. Experimental setup
Computation platform: All the simulations were performed on a computer with an
AMD Ryzen 7 2700 8-core processor and 16GB RAM, running Python 3.6 on Ubuntu
18.04.
Generating training data: We generated a data set of 14K trajectories with
collisions between UAS for training, using the trajectory generator in
(Mueller et al., 2015). The look-ahead horizon was set to $T=4$s and
$dt=0.1$s. Thus, each trajectory consists of $H+1=41$ time-steps. The initial
and final waypoints were sampled uniformly at random from two 3D cubes close
to the fixed collision point, initial velocities were set to zero.
Implementation details for the learning-based conflict resolution: The MILP to
generate training data for the supervised learning of the CR scheme was
implemented in MATLAB using Yalmip (Lofberg, 2004) with MOSEK v8 as the
solver. The learning-based CR scheme was trained for $\rho=0.055$ and minimum
separation $\delta=0.1$m which is close to the lower bound in Assumption 2.
This was implemented in Python 3.6 with Tensorflow 1.14 and Keras API and
Casadi with qpOASES as the solver. For training the LSTM models (with different
architectures) for CR, the number of training epochs was set to 2K with a
batch size of 2K. Each network was trained to minimize categorical cross-
entropy loss using the Adam optimizer (Kingma and Ba, 2014) with a learning
rate of $\alpha=0.001$ and moment exponential decay rates of $\beta_{1}=0.9$ and
$\beta_{2}=0.999$. The model with 3 LSTM layers with 128 neurons each, see
Figure 3, was chosen as the default learning-based CR model, and is used for
the pairwise CA approach of both L2F and LNF.
Implementation details for the CA-MPC: For the online implementation of our
scheme, we implement CA-MPC using CVXgen and report the computation times for
this implementation. We then import CA-MPC in Python, interface it with the CR
scheme and run all simulations in Python.
### 6.2. Experimental evaluation of L2F
Figure 6. Trajectories for 2 UAS from different angles. The dashed (planned)
trajectories have a collision at the halfway point. The solid ones, generated
through L2F method, avoid the collision while remaining within the robustness
tube of the original trajectories. Initial UAS positions marked as stars.
Playback of the scenario is at https://tinyurl.com/l2f-exmpl.
Figure 7. Trajectories for 2 Crazyflie quad-rotors before (dotted) and after
(solid) L2F. Videos of this are at https://tinyurl.com/exp-cf2.
We evaluate the performance of L2F with 10K test trajectories (for pairwise
CA) generated using the same distribution of start and end positions as was
used for training. Figure 6 shows an example of two UAS trajectories before
and after L2F. Successful avoidance of the collision at the midway point on
the trajectories can easily be seen on the playback of the scenario available
at https://tinyurl.com/l2f-exmpl. To demonstrate the feasibility of the
deconflicted trajectories, we also ran experiments using two Crazyflie quad-
rotor robots as shown in Figure 7. Videos of the actual flights and additional
simulations can be found at https://tinyurl.com/exp-cf2.
#### 6.2.1. Results and comparison to other methods
We analyzed four other methods alongside the proposed learning-based approach
for L2F.
1. (1)
A random decision approach which outputs a sequence sampled from the discrete
uniform distribution.
2. (2)
A greedy approach that selects the discrete decisions that correspond to the
direction of the most separation between the two UAS at each time step. For
more details see (Rodionova et al., 2020).
3. (3)
An L2F with Repairing approach following Section 4.3.
4. (4)
A centralized MILP solution that picks decisions corresponding to binary
decision variables in (5).
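For concreteness, the greedy baseline of item (2) can be sketched as follows; the per-step decision encoding (axis index and sign of the separation) is an assumption here, and the full scheme is described in (Rodionova et al., 2020):

```python
def greedy_decisions(traj1, traj2):
    """Sketch of the greedy baseline: at each time step pick the discrete
    decision corresponding to the direction of most separation between the
    two UAS. Each decision is encoded here as (axis index, sign), an
    assumption of this sketch."""
    decisions = []
    for p, q in zip(traj1, traj2):
        diffs = [a - b for a, b in zip(p, q)]          # per-axis separation
        ax = max(range(3), key=lambda i: abs(diffs[i]))  # most-separated axis
        decisions.append((ax, 1 if diffs[ax] >= 0 else -1))
    return decisions
```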
(a) Separation rate defines the fraction of initially conflicting trajectories
for which UAS managed to achieve minimum separation.
(b) Failure rate (1 $-$ separation rate) defines the fraction of initially
conflicting trajectories for which UAS could not achieve minimum separation.
Figure 8. Model sensitivity analysis with respect to variations of fraction
$\rho/\delta$ which connects the minimum allowable robustness tube radius
$\rho$ to the minimum allowable separation between two UAS $\delta$, see
Assumption 2. A higher $\rho/\delta$ implies there is more room within the
robustness tubes to maneuver for CA.
For the evaluation, we measured and compared the separation rate and the
computation time for all the methods over the same 10K test trajectories.
Separation rate defines the fraction of the conflicting trajectories for which
UAS managed to achieve minimum separation after a CA approach. Figure 8 shows
the impact of the $\rho/\delta$ ratio on separation rate. Higher $\rho/\delta$
implies wider robustness tubes for the UAS to maneuver within, which should
make the CA task easier, as seen in the figure. The centralized MILP has a
separation rate of $1$ in each case here, but it is unsuitable for an online
implementation, with a computation time of over a minute (see Table 1), so
we exclude it from the comparisons in the text that follows. In the case of
$\rho/\delta=0.5$, where the robustness tubes are just wide enough to fit two
UAS (see Assumption 2), we see that L2F with repairing (L2F+Rep) significantly
outperforms the other methods. Its worst-case separation rate is
$0.999$, significantly better than that of the other approaches, including the
original L2F. As the ratio grows, the performance of all methods improves, with
L2F+Rep still outperforming the others and quickly reaching a separation rate
of $1$. For $\rho/\delta\geq 1.15$, L2F no longer requires any repair and also
has a separation rate of $1$.
Table 1 shows the separation rates for three different $\rho/\delta$ values as
well as the computation times (mean and standard deviation) for each CA
algorithm. L2F and L2F+Rep have an average computation time of less than
$10$ms, making them suited for an online implementation even at our chosen
control sampling rate of $10$Hz. For all CA schemes excluding MILP, the
smaller the $\rho/\delta$ ratio, the more often UAS 1 alone is unable to achieve
collision avoidance via its CA-MPC (7), so UAS 2 must also solve its CA-MPC (8)
and deviate from its pre-planned trajectory. Therefore, computation time is
higher for smaller $\rho/\delta$ ratios and lower for higher $\rho/\delta$ values. A
similar trend is observed for the MILP, even though it jointly solves for both
UAS, showing that it is indeed harder to find a solution when the
$\rho/\delta$ ratio is small.
| | | Random | Greedy | L2F | L2F+Rep | MILP |
|---|---|---|---|---|---|---|
| Separation rate | $\boldsymbol{\rho}/\boldsymbol{\delta}=\textbf{0.5}$ | 0.311 | 0.528 | 0.899 | 0.999 | 1 |
| | $\boldsymbol{\rho}/\boldsymbol{\delta}=\textbf{0.95}$ | 0.605 | 0.825 | 0.999 | 1 | 1 |
| | $\boldsymbol{\rho}/\boldsymbol{\delta}=\textbf{1.15}$ | 0.659 | 0.989 | 1 | 1 | 1 |
| Comput. time (ms) (mean $\pm$ std) | $\boldsymbol{\rho}/\boldsymbol{\delta}=\textbf{0.5}$ | $7.9\pm 0.01$ | $9.7\pm 0.6$ | $9.1\pm 1.3$ | $9.7\pm 3.6$ | $(98.9\pm 44.9)\cdot 10^{3}$ |
| | $\boldsymbol{\rho}/\boldsymbol{\delta}=\textbf{0.95}$ | $7.5\pm 0.01$ | $9.3\pm 0.5$ | $8.7\pm 0.5$ | $8.7\pm 0.5$ | $(82.5\pm 36.3)\cdot 10^{3}$ |
| | $\boldsymbol{\rho}/\boldsymbol{\delta}=\textbf{1.15}$ | $6.3\pm 1.9$ | $7.1\pm 2.0$ | $8.6\pm 0.5$ | $8.7\pm 0.4$ | $(33.1\pm 34.9)\cdot 10^{3}$ |
Table 1. Separation rates and computation times (mean and standard deviation)
comparison of different CA schemes. Separation rate is the fraction of
conflicting trajectories for which separation requirement (2a) is satisfied
after CA. Computation time estimates the overall time demanded by CA scheme.
MILP reports the time spent on solving (5). Other CA schemes report time
needed for CR and CA-MPC together. L2F with repairing includes repairing time
as well.
### 6.3. Experimental evaluation of LNF
Next, we carry out simulations to evaluate the performance of LNF, especially
in terms of scalability to cases with more than two UAS, and analyze its
performance in a wide variety of settings.
#### 6.3.1. Case study 1: Four UAS position swap
We recreate the following experiment from (Alonso-Mora et al., 2015). Here,
two pairs of UAS must maneuver to swap their positions, i.e. the end point of
each UAS is the same as the starting position for another UAS. See the 3D
representation of the scenario in Figure 9(a). Each UAS goal set is assumed
to be a single point fixed at:
(23) $\textit{Goal}_{1}=(1,0,0),\ \textit{Goal}_{2}=(0,1,0),\
\textit{Goal}_{3}=(-1,0,0),\ \textit{Goal}_{4}=(0,-1,0)$
and the start states are antipodal to the goal states:
(24) $\textit{Start}_{j}=-\textit{Goal}_{j},\ \forall j\in\\{1,2,3,4\\}.$
All four UAS must reach desired goal states within 4 seconds while avoiding
each other. With a pairwise separation requirement of at least $\delta=0.1$
meters, the overall mission specification is:
(25)
$\varphi_{\textit{mission}}=\bigwedge_{j=1}^{4}\Diamond_{[0,4]}(\mathbf{p}_{j}\in\textit{Goal}_{j})\
\wedge\
\bigwedge_{j\not=j^{\prime}}\square_{[0,4]}||\mathbf{p}_{j}-\mathbf{p}_{j^{\prime}}||\geq
0.1\vspace{-3pt}$
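A discrete-time check of the pairwise-separation conjunct in (25), with trajectories sampled at $dt=0.1$s as in the experiments, can be sketched as follows; this illustrative monitor is an assumption of the sketch, not the STL machinery used by the planner:

```python
from itertools import combinations

def satisfies_separation(trajs, delta=0.1):
    """True iff every pair of trajectories stays at least delta apart at
    every sampled time step (the separation conjunct of the mission spec).

    trajs: list of trajectories, each a list of (x, y, z) points per step.
    """
    return all(
        sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 >= delta
        for t1, t2 in combinations(trajs, 2)   # every UAS pair
        for p, q in zip(t1, t2)                # every time step
    )
```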
Following Section 2.2, initial pre-planning is done by ignoring the mutual
separation requirement in (25) and generating the trajectory for each UAS
$j=\\{1,2,3,4\\}$ independently with respect to its individual STL
specification:
(26) $\varphi_{j}=\Diamond_{[0,4]}(\mathbf{p}_{j}\in\textit{Goal}_{j}).$
The obtained pre-planned trajectories contain a joint collision that happens
simultaneously (at $t=2$s, see Figure 10) across all four UAS and is located at
point $(0,0,0)$, see Figure 9(b). For the LNF experimental evaluation, the
robustness value was fixed at $\rho=0.055$ and the UAS priorities were set in
increasing order, i.e., a UAS with a lower index has lower priority:
$1<2<3<4$.
Figure 9. Four UAS position swap. (a): 3D representation of the scenario.
(b)-(c): 2D projections of the scenario onto the horizontal plane $XoY$ before
and after collision avoidance. Initial colliding trajectories are depicted in
dashed lines in (a) and (b). Collision is detected at point $(0,0,0)$, it
involves all four UAS and happens simultaneously across the agents. The
updated non-colliding trajectories generated by LNF are depicted in solid
color in (a) and (c). Initial positions of UAS marked by “O” and final
positions by “$\star$”.
Figure 10. Four UAS position swap: Relative distances before (top) and after
(bottom) the collision avoidance algorithm. Initial simultaneous collisions
across all four UAS are successfully resolved by LNF. Note that the symmetry
in the initial positions and trajectories results in multiple UAS pairs with
the same relative distances for the time horizon of interest before collision
avoidance (top).
Simulation results. The non-colliding trajectories generated by LNF are
depicted in Figure 9(c). Playback of the scenario can be found at
https://tinyurl.com/swap-pos.
It is observed that the opposite UAS pairs chose to change altitude and pass
over each other, see Figure 9(a). Within these opposite pairs, the UAS chose to
deviate horizontally to avoid collision, see Figure 9(c). The LNF algorithm
performed $4\choose 2$$=6$ pairwise applications of L2F (see Theorem 5.7).
Such a high number of applications is expected due to the complicated
simultaneous nature of the detected collision across the initially pre-planned
trajectories. No CR repairing was required for the LNF algorithm to
successfully produce non-colliding trajectories. It took LNF $37.8$ms to
perform CA. Figure 10 shows the relative distances between UAS pairs before and
after collision avoidance: none of the UAS cross the safe minimum separation
threshold of $0.1$m after LNF, i.e., the joint collision has been successfully
resolved.
#### 6.3.2. Case study 2: Four UAS reach-avoid mission
Figure 11 depicts a multi-UAS case study with a reach-avoid mission. The
scenario consists of four UAS which must reach their desired goal states within
4 seconds while avoiding a wall obstacle and each other. The specification for
each UAS $j\in\\{1,\ldots,4\\}$ can be defined as:
(27) $\varphi_{j}=\Diamond_{[0,4]}(\mathbf{p}_{j}\in\textit{Goal}_{j})\
\wedge\ \square_{[0,4]}\neg(\mathbf{p}_{j}\in\textit{Wall})\vspace{-2pt}$
A pairwise separation requirement of $\delta=0.1$ meters is enforced for all
UAS, therefore, the overall mission specification is:
(28) $\varphi_{\text{mission}}=\bigwedge_{j=1}^{4}\varphi_{j}\ \wedge\
\bigwedge_{j\not=j^{\prime}}\square_{[0,4]}||\mathbf{p}_{j}-\mathbf{p}_{j^{\prime}}||\geq
0.1\vspace{-3pt}$
(a) 3D representation of the scenario
(b) 2D projection onto $XoY$
Figure 11. Reach-avoid mission. Non-colliding trajectories for 4 UAS generated
by LNF. All UAS reach their goal sets (green boxes) within 4 seconds, do not
crash into the vertical wall (in red) and satisfy pairwise separation
requirement of $0.1$m. Initial UAS positions marked by magenta “$\star$”.
Simulations are available at https://tinyurl.com/reach-av.
First, we solved the planning problem for all four UAS in a centralized manner
following the approach from (Pant et al., 2018). Next, we solved the planning
problem for each UAS $j$ and its specification $\varphi_{j}$ independently,
calling LNF on-the-fly after planning is complete. This way, independent
planning with the online collision avoidance scheme guarantees satisfaction of
the overall mission specification (28).
Simulation results. We have simulated the scenario for 100 different initial
conditions. Computation time results are presented in Table 2. The average
computation time to generate trajectories in a centralized manner was $0.35$
seconds. The average time per UAS when planning independently (and in
parallel) was $0.1$ seconds. These results demonstrate a speed-up of
$3.5\times$ for individual UAS planning over centralized planning (Pant et al.,
2018). Scenario simulations are available at https://tinyurl.com/reach-av.
| | Centralized planning (Pant et al., 2018) | Decentralized: independent planning | Decentralized: CA with LNF |
|---|---|---|---|
| Comput. time (mean $\pm$ std) (ms) | 345.8 $\pm$ 87.2 | 138.6 $\pm$ 62.4 | 9.97 $\pm$ 0.4 |
Table 2. Reach-avoid mission. Computation times (mean and standard deviation)
comparison between centralized planning following (Pant et al., 2018) and
decentralized planning (independent planning with LNF) over 100 runs of the
scenario.
#### 6.3.3. Case study 3: UAS operations in high-density airspace
Figure 12. 3D representation of the unit cube scenario with 20 UAS. All UAS
must reach their goal sets within 4 seconds, avoid the no-fly zone and satisfy
pairwise separation requirement of $0.1$m. Initially planned trajectories
(dashed lines) had 5 violations of the mutual separation requirement. LNF
successfully resolved all detected violations and led to non-colliding
trajectories (solid lines). Simulations are available at
https://tinyurl.com/unit-cube.
To verify the scalability of LNF, we evaluate a scenario with high-density UAS
operations. The case study consists of multiple UAS flying within a restricted
area of 1m3 while avoiding a no-fly zone of $(0.2)^{3}=0.008$m3 in the center,
see Figure 12. Such a scenario represents a hypothetical constrained and highly
populated airspace with heterogeneous UAS missions such as package delivery or
aerial surveillance.
Each UAS $j$'s start position $\textit{Start}_{j}$ and goal set
$\textit{Goal}_{j}$ are chosen uniformly at random on opposite faces of the
unit cube. The goal state must be reached within a $4$-second time interval and
the no-fly zone must be avoided throughout this interval. The same
as in the previous case studies, we first solve the planning problem for each
UAS $j$ separately following trajectory generation approach from (Pant et al.,
2018). The STL specification for UAS $j$ is captured as follows:
(29) $\varphi_{j}=\Diamond_{[0,4]}(\mathbf{p}_{j}\in\textit{Goal}_{j})\
\wedge\ \square_{[0,4]}\neg(\mathbf{p}_{j}\in\textit{NoFly})$
After planning is complete and trajectories $\mathbf{p}_{j}$ are generated, we
call LNF on-the-fly to satisfy the overall mission specification
$\varphi_{\text{mission}}=\bigwedge_{j=1}^{N}\varphi_{j}\ \wedge\
\varphi_{\text{separation}}$, where $N$ is the number of UAS participating in
the scenario and $\varphi_{\text{separation}}$ is the requirement of the
minimum allowed pairwise separation of $0.1$m between the UAS:
(30) $\varphi_{\text{separation}}=\bigwedge_{j,j^{\prime}:\
j\not=j^{\prime}}\square_{[0,4]}||\mathbf{p}_{j}-\mathbf{p}_{j^{\prime}}||\geq
0.1.$
We increase the density of the scenario by increasing the number of UAS, while
keeping the space volume at 1m3.
Simulation results. We ran 100 simulations for various numbers of UAS, each
with randomized start and goal positions. Trajectory pre-planning was done
independently for all UAS, and LNF is tasked with CA. For evaluation, we
measure the overall number of minimum separation requirement violations before
and after LNF for two different settings of the fraction $\rho/\delta$: narrow
robustness tube, $\rho/\delta=0.5$ and wider tube, $\rho/\delta=1.15$, see
Figure 13. With an increasing number of UAS, the number of collisions between
initially pre-planned trajectories increases (before LNF), and the number of
collisions not resolved by LNF, while small, increases as well (Figure 13(b)).
The corresponding decay in separation rate is faster for the case of
$\rho/\delta=0.5$, which is expected due to there being less room to maneuver.
The separation rate is higher when the $\rho/\delta$ ratio is higher, see
Figure 13(a). We performed simulations for up to 70 UAS. The average
separation rate for 70 UAS is $0.915$ for $\rho/\delta=0.5$ and $0.987$ for
$\rho/\delta=1.15$. The results show that LNF can still succeed in scenarios
with a high UAS density. Videos of the simulations are available at
https://tinyurl.com/unit-cube.
(a) Separation rate defines the ratio of the number of minimum-separation
violations resolved by LNF to the number of initial violations.
(b) Number of minimum separation violations before and after LNF, averaged
over 100 simulations.
Figure 13. Unit cube scenario. Model performance analysis with respect to
variations in the number of UAS for two different settings of $\rho/\delta$. A
higher $\rho/\delta$ implies there is more room within the robustness tubes to
maneuver for CA. Performance is measured in terms of separation rate (a) and
the overall number of minimum separation requirement violations before and
after LNF (b). We plot the mean and standard deviation over 100 iterations.
#### 6.3.4. Comparison to MILP-based re-planning
| Re-planning scheme | $N=10$ | $N=20$ | $N=30$ | $N=40$ | $N=50$ |
|---|---|---|---|---|---|
| MILP-based planner | $0.6\pm 0.1$s | $8.8\pm 9.6$s | $175.5\pm 149.9$s | $1740\pm 129.3$s | Timeout |
| CA with LNF | $15.2\pm 5.1$ms | $73.1\pm 23.5$ms | $117.3\pm 45.6$ms | $198.7\pm 73.6$ms | $211.1\pm 82.3$ms |
Table 3. Computation times (mean and standard deviation) demanded by the re-
planning scheme (MILP-based re-planning or CA with LNF) averaged over $100$
random runs. Time taken by the MILP-based re-planner to encode the problem is
not included in the overall computation time. ‘Timeout’ stands for a timeout
after $35$ minutes.
LNF requires the new trajectories after CA to be within the robustness tubes of
the pre-planned trajectories in order to still satisfy other high-level
requirements (Problem 1). While this might be restrictive, we show that online
re-planning is usually not an option in these multi-UAS scenarios. A MILP-
based planner, similar in essence to (Raman et al., 2014b), was implemented
and treated as a baseline to compare against LNF through evaluations on the
scenario of Section 6.3.3. Unlike the decentralized LNF, such MILP-planner
baseline is centralized as it plans for all the UAS in a single optimization
to avoid the NoFly-zone, reach their destinations and also avoid each other.
We ran 100 simulations for various numbers of UAS, with each iteration having
randomized start and goal positions. Simulations are available at
https://tinyurl.com/re-milp. The computation times are presented in Table 3.
As the number of UAS increases, it is clear that online re-planning is
intractable. For example, the baseline takes on average $8.8$ seconds to
produce trajectories for $20$ UAS, in contrast with $73.1$ milliseconds for
LNF to perform CA. For 50 UAS and more, the MILP baseline solver could not
return a single feasible solution before timing out, while LNF could. LNF
outperforms the MILP-based re-planning baseline because it performs CA with
small computation times, even for a high number of UAS.
## 7\. Conclusions
Summary: We presented Learning-to-Fly (L2F), an online, decentralized and
mission-aware scheme for pairwise UAS Collision Avoidance. Through Learning-
'N-Flying (LNF) we extended it to cases where more than two UAS are on
collision paths, via a systematic pairwise application of L2F together with a
set-shrinking approach to avoid live-lock-like situations. These frameworks
combine learning-based decision-making and decentralized linear optimization-
based Model Predictive Control (MPC) to perform CA, and we also developed a
fast heuristic to repair the decisions made by the learning-based component
based on the feasibility of the optimizations. Through extensive simulations,
we showed that our approach has a computation time of the order of
milliseconds, and can perform CA for a wide variety of cases with a high
success rate even when the UAS density in the airspace is high.
Limitations and future work: While our approach works very well in practice,
it is not complete, i.e. does not guarantee a solution when one exists, as
seen in the simulation results for L2F. This drawback calls for a careful
analysis to obtain the sets of initial conditions over the conflicting UAS such
that our method is guaranteed to work. In future work, we aim to leverage
tools from formal methods, like falsification, to get a reasonable estimate of
the conditions in which our method is guaranteed to work. We will also explore
improved heuristics for the set-shrinking in LNF, as well as the CR-decision
repairing procedure.
## References
* Administration (2018) Federal Aviation Administration. 2018\. Concept of Operations: Unmanned Aircraft System (UAS) Traffic Management (UTM). {https://utm.arc.nasa.gov/docs/2018-UTM-ConOps-v1.0.pdf}.
* Aksaray et al. (2016) Derya Aksaray, Austin Jones, Zhaodan Kong, Mac Schwager, and Calin Belta. 2016. Q-Learning for Robust Satisfaction of Signal Temporal Logic Specifications. In _IEEE Conference on Decision and Control_.
* Alonso-Mora et al. (2015) Javier Alonso-Mora, Tobias Naegeli, Roland Siegwart, and Paul Beardsley. 2015. Collision avoidance for aerial vehicles in multi-agent scenarios. _Autonomous Robots_ 39, 1 (2015), 101–121.
* Chakrabarty et al. (2019) Anjan Chakrabarty, Corey Ippolito, Joshua Baculi, Kalmanje Krishnakumar, and Sebastian Hening. 2019\. Vehicle to Vehicle (V2V) communication for Collision avoidance for Multi-copters flying in UTM –TCL4. https://doi.org/10.2514/6.2019-0690
## Appendix A Robustness Tubes Shrinking
###### Definition 3.
The distance between two sets $A$ and $B$ is defined as:
(31) $\mathbf{dist}(A,B)=\inf\left\\{||a-b||_{\infty}\ |\ a\in A,\ b\in
B\right\\}$
###### Definition 4.
Two robustness tubes $\mathbf{P}_{1}$ and $\mathbf{P}_{2}$ are said to be
$\delta$-separate from each other if at every time step $k$ the distance
between them is at least $\delta$, i.e.
(32) $\mathbf{dist}(P_{1,k},P_{2,k})\geq\delta\ \forall k=0,\ldots,H.$
For brevity we use $\mathbf{dist}(\mathbf{P}_{1},\mathbf{P}_{2})\geq\delta$
for denoting being $\delta$-separate across all time indices $k=0,\ldots,H$.
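Definition 4 can be checked mechanically when each tube is stored as a sequence of axis-aligned boxes. Below is a minimal sketch under that assumption (boxes as `(lower, upper)` corner tuples; the names `box_dist_inf` and `delta_separate` are illustrative, not from the paper):

```python
import numpy as np

def box_dist_inf(a_lo, a_hi, b_lo, b_hi):
    """L-infinity distance between two axis-aligned boxes.

    Coordinates are independent, so the infimum of max_i |a_i - b_i|
    over the two boxes is the maximum of the per-axis interval gaps.
    """
    a_lo, a_hi = np.asarray(a_lo, float), np.asarray(a_hi, float)
    b_lo, b_hi = np.asarray(b_lo, float), np.asarray(b_hi, float)
    gap = np.maximum(0.0, np.maximum(b_lo - a_hi, a_lo - b_hi))
    return gap.max()

def delta_separate(tube1, tube2, delta):
    """Definition 4: tubes (one box per time step k) are delta-separate
    iff dist(P_{1,k}, P_{2,k}) >= delta for every k."""
    return all(box_dist_inf(*p1, *p2) >= delta
               for p1, p2 in zip(tube1, tube2))
```

For example, unit boxes separated by a gap of 2 along one axis are 2-separate but not 2.5-separate.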
###### Proof of Theorem 5.3.
The proof follows from the construction of RTS (Algorithm 4). If the initial tubes are
$\delta$-separate, then no shrinking is required and both properties
(13) and (12) are satisfied. If the initial tubes are not $\delta$-separate,
property (13) follows from the fact that for any time step $k$,
$P_{j,k}^{\prime}=P_{j,k}\setminus\varPi_{k}$ for UAS $j=1,2$. Property (12)
is a consequence of the zero-slack solution and Theorem 4.2, which states that
the resulting trajectories are non-conflicting,
$||p_{1,k}^{\prime}-p_{2,k}^{\prime}||\geq\delta$, $\forall
k\in\\{0,\ldots,H\\}$; therefore, $msep\geq\delta$. Following Algorithm 4, for
any time step $k$ the smallest edge of the box $\varPi_{k}$ is
$\min(msep,\delta)=\delta$, and since for both UAS $j=1,2$ the tube update is
defined as $P_{j,k}^{\prime}=P_{j,k}\setminus\varPi_{k}$, the shrunk tubes
$P_{j,k}^{\prime}$ are $\delta$-separate. ∎
###### Proof of Lemma 5.4.
From the CA-MPC definition (6) it follows that
$\mathbf{p}_{j}^{\prime}\in\mathbf{P}_{j}$, $\forall j\in\\{1,2\\}$. The
updated tubes are defined as
$\mathbf{P}_{j}^{\prime}=\mathbf{P}_{j}\setminus\boldsymbol{\varPi}$, see
Algorithm 4. By the definition of the 3D cube $\boldsymbol{\varPi}$, for any time
step $k$, ${p}_{j,k}^{\prime}\not\in\varPi_{k}$, therefore,
$\mathbf{p}_{j}^{\prime}\in\mathbf{P}_{j}^{\prime},\ \forall j\in\\{1,2\\}$. ∎
###### Proof of Lemma 5.5.
Following Definition 4, tubes are $\delta$-separate if
$\mathbf{dist}(P_{1,k},P_{2,k})\geq\delta,\ \forall k\in\\{0,\ldots,H\\}$.
Therefore, due to (31) the following holds:
(33) $\inf\left\\{||p_{1,k}-p_{2,k}||\mid\ p_{1,k}\in P_{1,k},\ p_{2,k}\in
P_{2,k}\right\\}\geq\delta.$
By the definition of the infimum operator, $\forall p_{1,k}\in P_{1,k},\forall
p_{2,k}\in P_{2,k}$:
(34) $||p_{1,k}-p_{2,k}||\geq\inf\left\\{||p_{1,k}-p_{2,k}||\mid\ p_{1,k}\in
P_{1,k},\ p_{2,k}\in P_{2,k}\right\\}\geq\delta,$
which completes the proof. ∎
###### Proof of Lemma 5.6.
1. Property (21) directly follows from Theorem 4.2.
2. Due to Theorem 5.3, RTS application (17) leads to tubes
$\mathbf{P}^{\prime\prime}_{1}$ and $\mathbf{P}^{\prime}_{3}$ being
$\delta$-separate. RTS (18) leads to
$\mathbf{P}^{\prime\prime}_{3}\subseteq\mathbf{P}^{\prime}_{3}$. Therefore,
$\mathbf{P}^{\prime\prime}_{1}$ and $\mathbf{P}^{\prime\prime}_{3}$ are
$\delta$-separate and, following Lemma 5.5, property (20) holds.
3. Analogously, due to Theorem 5.3, RTS application (16) leads to tubes
$\mathbf{P}^{\prime}_{1}$ and $\mathbf{P}^{\prime}_{2}$ being
$\delta$-separate. RTS (17) leads to
$\mathbf{P}^{\prime\prime}_{1}\subseteq\mathbf{P}^{\prime}_{1}$ and RTS (18)
leads to $\mathbf{P}^{\prime\prime}_{2}\subseteq\mathbf{P}^{\prime}_{2}$.
Therefore, $\mathbf{P}^{\prime\prime}_{1}$ and $\mathbf{P}^{\prime\prime}_{2}$
are $\delta$-separate and, following Lemma 5.5, property (19) holds.
4. The tube-belonging property (22) follows directly from Lemma 5.4.
∎
## Appendix B Links to the videos
Table 4 has the links for the visualizations of all simulations and
experiments performed in this work.
Scenario | Section | Platform | $\\#$ of UAS | Link
---|---|---|---|---
L2F test | Sec. 6.2 | Sim. | 2 | https://tinyurl.com/l2f-exmpl
Crazyflie validation | Sec. 6.2 | CF 2.0 | 2 | https://tinyurl.com/exp-cf2
Four UAS position swap | Sec. 6.3.1 | Sim. | 4 | https://tinyurl.com/swap-pos
Four UAS reach-avoid mission | Sec. 6.3.2 | Sim. | 4 | https://tinyurl.com/reach-av
High-density unit cube | Sec. 6.3.3 | Sim. | 10, 20, 40 | https://tinyurl.com/unit-cube
MILP re-planning | Sec. 6.3.4 | MATLAB | 20 | https://tinyurl.com/re-milp
Table 4. Links to the videos of all simulations and experiments. “Sim.” stands
for Python simulations, “CF 2.0” for experiments on the Crazyflies.
11institutetext: Institute of Astronomy, Faculty of Physics, Astronomy and
Applied Informatics, Nicolaus Copernicus University in Toruń, Gagarina 11,
87-100 Toruń, Poland, 11email<EMAIL_ADDRESS>22institutetext:
Departamento de Física Teórica, Universidad Autónoma de Madrid, Cantoblanco
28049 Madrid, Spain, 22email<EMAIL_ADDRESS>33institutetext: Centro de
Astrobiología (CAB, CSIC-INTA), ESAC Campus Camino Bajo del Castillo, s/n,
Villanueva de la Cañada, E-28692 Madrid, Spain 44institutetext: National
Center for Supercomputing Applications, University of Illinois, Urbana-
Champaign, 1205 W Clark St, MC-257, Urbana, IL 61801, USA 55institutetext:
Center for Astrophysical Surveys, National Center for Supercomputing
Applications, Urbana, IL, 61801, USA 66institutetext: Department of Astronomy
and Astrophysics, Pennsylvania State University, 525 Davey Laboratory,
University Park, PA 16802, USA 66email<EMAIL_ADDRESS>77institutetext:
Center for Exoplanets and Habitable Worlds, Pennsylvania State University, 525
Davey Laboratory, University Park, PA 16802, USA
# Tracking Advanced Planetary Systems (TAPAS) with HARPS-N. ††thanks: Based on
observations obtained with the Hobby-Eberly Telescope, which is a joint
project of the University of Texas at Austin, the Pennsylvania State
University, Stanford University, Ludwig-Maximilians-Universität München, and
Georg-August-Universität Göttingen. ††thanks: Based on observations made with
the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La
Palma by the Fundación Galileo Galilei of the INAF (Istituto Nazionale di
Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the
Instituto de Astrofísica de Canarias.
VII. Elder suns with low-mass companions
A. Niedzielski 11 E. Villaver 2233 M. Adamów 4455 K. Kowalik 44 A. Wolszczan
6677 G. Maciejewski 11
(Received; accepted)
###### Abstract
Context. We present the current status of and new results from our search for
exoplanets in a sample of solar-mass, evolved stars observed with the HARPS-N
at the 3.58-m Telescopio Nazionale Galileo (TNG) and with the High Resolution
Spectrograph (HRS) at the 9.2-m Hobby-Eberly Telescope (HET).
Aims. The aim of this project is to detect and characterise planetary-mass
companions to solar-mass stars in a sample of 122 targets at various stages of
evolution, from the main sequence (MS) to the red giant branch (RGB), mostly
subgiants and giants, selected from the Pennsylvania-Toruń Planet Search
(PTPS) sample, and to use this sample to study the relations between stellar
properties, such as metallicity and luminosity, and the planet occurrence rate.
Methods. This work is based on precise radial velocity (RV) measurements. We
have observed the program stars for up to 11 years with the HET/HRS and the
TNG/HARPS-N.
Results. We present the analysis of RV measurements with the HET/HRS and the
TNG/HARPS-N of four solar-mass stars: HD 4760, HD 96992, BD+02 3313, and TYC
0434-04538-1. We found that HD 4760 hosts a companion with a minimum mass of
$13.9\hbox{$\thinspace M_{\mathrm{J}}$}$ ($a=1.14$ au, $e=0.23$); HD 96992
hosts a $m\sin i=1.14\hbox{$\thinspace M_{\mathrm{J}}$}$ companion on an
$a=1.24$ au, $e=0.41$ orbit; and TYC 0434-04538-1 hosts an $m\sin
i=6.1\hbox{$\thinspace M_{\mathrm{J}}$}$ companion on an $a=0.66$ au,
$e=0.08$ orbit. In the case of BD+02 3313, we found a correlation between the
measured RVs and one of the stellar activity indicators, suggesting that the
observed RV variations may originate either in stellar activity or from the
presence of an unresolved companion. We also discuss the current status of the
project and a statistical analysis of the RV variations in our sample of
target stars.
Conclusions. In our sample of 122 solar-mass stars, $49\pm 5\%$ appear to be
single, and $16\pm 3\%$ are spectroscopic binaries. The three giants hosting
low-mass companions presented in this paper add to the six previously
identified in the sample.
###### Key Words.:
Stars: late-type - Planets and satellites: detection - Techniques: radial
velocities - Techniques: spectroscopic
## 1 Introduction
After the discovery of the first exoplanetary system around a pulsar (PSR
1257+12 b, c, d – Wolszczan & Frail 1992) with the pulsar timing technique,
and of the first exoplanet orbiting a solar-type star (51 Peg b – Mayor &
Queloz 1995) with the precise velocimetry, the photometric observations of
planetary transits have proved to be the most successful way of detecting
exoplanets.
Nearly 3000 out of about 4300 exoplanets were detected with the planetary
transit method, most of them by just one project, Kepler/K2 (Borucki et al.,
2010). Detailed characterisation of these systems requires both photometric
(transits) and spectroscopic (radial velocities, abundances) observations, but
not all of them are available for spectroscopic follow-up with ground-based
instruments, due to the faintness of the hosts. This emphasizes the need for
missions such as TESS (Ricker et al., 2015) and PLATO (Catala & PLATO
Consortium, 2008).
Our knowledge of exoplanets orbiting solar-type or less massive stars on the
MS is quite extensive due to the combined output of the RV and transit
searches (see Winn & Fabrycky 2015 for a review). The domain of larger orbital
separations and more evolved hosts clearly requires more exploration.
Table 1: Basic parameters of the program stars.
Star | $\thinspace T_{\mathrm{eff}}$ [K] | $\log g$ | $[$Fe/H$]$ | $\log L/\hbox{$\thinspace L_{\odot}$}$ | $M/\hbox{$\thinspace M_{\odot}$}$ | $R/\hbox{$\thinspace R_{\odot}$}$ | $v\sin i\;[\\!\hbox{$\thinspace{\mathrm{km\leavevmode\nobreak\ s^{-1}}}$}]$ | $\hbox{$P_{\mathrm{rot}}$}\;[\mathrm{days}]$
---|---|---|---|---|---|---|---|---
HD 4760 | 4076$\pm$15 | 1.62$\pm$0.08 | -0.91$\pm$0.09 | 2.93$\pm$0.11 | 1.05$\pm$0.19 | 42.4$\pm$9.2 | $1.40\pm 1.10$ | $1531\pm 1535$
HD 96992 | 4725$\pm$10 | 2.76$\pm$0.04 | -0.45$\pm$0.08 | 1.47$\pm$0.09 | 0.96$\pm$0.09 | 7.43$\pm$1.1 | $1.90\pm 0.60$ | $198\pm 92$
BD+02 3313 | 4425$\pm$13 | 2.64$\pm$0.05 | 0.10$\pm$0.07 | 1.44$\pm$0.24 | 1.03$\pm$0.03 | 8.47$\pm$1.53 | $1.80\pm 0.60$ | $238\pm 122$
TYC 0434-04538-1 | 4679$\pm$10 | 2.49$\pm$0.04 | -0.38$\pm$0.06 | 1.67$\pm$0.09 | 1.04$\pm$0.15 | 9.99$\pm$1.6 | $3.00\pm 0.40$ | $169\pm 49$
So far, the RV searches for exoplanets orbiting more evolved stars, like the
Lick K-giant Survey (Frink et al., 2002a), the Okayama Planet Search (Sato et al.,
2003), the Tautenberg Planet Search (Hatzes et al., 2005), Retired A Stars and
Their Companions (Johnson et al., 2007), the PennState-Toruń Planet Search
(Niedzielski et al., 2007; Niedzielski & Wolszczan, 2008a; Niedzielski &
Wolszczan, 2008b), or the Boyunsen Planet Search (Lee et al., 2011), have resulted
in a rather modest population of 112 substellar companions in 102 systems
(see https://www.lsw.uni-heidelberg.de/users/sreffert/giantplanets/giantplanets.php).
The Pennsylvania-Toruń Planet Search (PTPS) is one of the most extensive RV
searches for exoplanets around the evolved stars. The project was designed to
use the Hobby-Eberly Telescope (Tull, 1998) (HET) and its High Resolution
Spectrograph (Ramsey et al., 1998) (HRS). It has surveyed a sample of stars
distributed across the northern sky, with typical apparent V-magnitudes
between 7.5 and 10.5 mag, and the B-V colour indices between 0.6 and 1.3. On
the Hertzsprung-Russell (H-R) diagram, these stars occupy an area delimited by
the MS, the instability strip, and the coronal dividing line (Linsky & Haisch,
1979). In total, the program sample of 885 stars contains 515 giants, 238
subgiants, and 132 dwarfs (Deka-Szymankiewicz et al., 2018). A detailed
description of this sample, including their atmospheric and integrated
parameters (masses, luminosities, and radii), is presented in a series of the
following papers: Zieliński et al. (2012); Adamów et al. (2014); Niedzielski
et al. (2016a); Adamczyk et al. (2016); Deka-Szymankiewicz et al. (2018). The
first detection of a gas giant orbiting a red giant star by the PTPS project
has been published by Niedzielski et al. (2007).
So far, twenty-two planetary systems have been detected by the PTPS and TAPAS
projects. The most interesting ones include: a multiple planetary system
around TYC 1422-00614-1, an evolved solar-mass, K2 giant, with two planets
orbiting it (Niedzielski et al., 2015a); the most massive,
$1.9\hbox{$\thinspace M_{\odot}$}$, red giant star TYC 3667-1280-1, hosting a
warm Jupiter (Niedzielski et al., 2016c), and BD+48 740, a Li overabundant
giant star with a planet, which possibly represents a case of recent
engulfment (Adamów et al., 2012). Of specific interest is BD+14 4559 b, a
$1.5\hbox{$\thinspace M_{\mathrm{J}}$}$ gas giant orbiting a
$0.9\hbox{$\thinspace M_{\odot}$}$ dwarf in an eccentric orbit (e=0.29) at a
distance of a=0.78 au from the host star (Niedzielski et al., 2009b). On the
occasion of its 100th anniversary, the International Astronomical Union chose
this planet and its host star to be named in a Polish national poll organized
by the ExoWorlds project; they were assigned the names Pirx and Solaris,
honoring the famous Polish science fiction writer Stanisław Lem.
The PTPS sample is large enough to investigate planet occurrence as a function
of a well-defined set of stellar parameters. For instance, the sample of 15
Li-rich giants has been studied in a series of papers (Adamów et al., 2012,
2014, 2015, 2018), resulting in the discovery of three Li-rich giants with
planetary-mass companions (BD+48 740, HD 238914, and TYC 3318-01333-1) and two
planetary-mass companion candidates (TYC 3663-01966-1 and TYC 3105-00152-1).
Another interesting subsample of the PTPS contains 115 stars with masses
greater than $1.5\hbox{$\thinspace M_{\odot}$}$. So far, four giants with
planets were detected in that sample: HD 95127, HD 216536, BD+49 828
(Niedzielski et al., 2015b), and TYC 3667-1280-1 (Niedzielski et al., 2016c)
with masses as high as $1.87\hbox{$\thinspace M_{\odot}$}$. A
$2.88\hbox{$\thinspace M_{\odot}$}$ giant TYC 3663-01966-1, mentioned above,
also belongs to this subsample.
There are more PTPS stars to be investigated in the search for low-mass
companions: 74 low-metallicity ([Fe/H]$\leq$-0.5) giant stars, including BD
+20 2457 (Niedzielski et al., 2009b) and BD +03 2562 (Villaver et al., 2017),
both with [Fe/H]$\leq$-0.7; 57 high-luminosity giants with
$\hbox{$\thinspace\log L/L_{\odot}$}\geq 2$, cf. BD +20 2457 (Niedzielski et
al., 2009b) and HD 103485 (Villaver et al., 2017), both with
$\hbox{$\thinspace\log L/L_{\odot}$}\geq 2.5$; and a number of others. All
these investigations are still in progress. Here, we present the results for
four of the program stars.
## 2 Sample and observations
There are 133 stars in the PTPS sample with masses in the $1\pm
0.05\hbox{$\thinspace M_{\odot}$}$ range: 12 dwarfs, 39 subgiants, and 82
giants (Deka-Szymankiewicz et al. 2018 and references therein). Due to
insufficient RV time-series coverage (fewer than two epochs of observations),
we have removed eleven of these stars from further consideration.
Consequently, the final, complete sample of 122 solar-mass stars contains 11
dwarfs, 33 subgiants, and 78 giant stars (Fig. 1). In what follows, we will
call them elder suns, representing a range of evolutionary stages (from the MS
through the subgiant branch and along the RGB) and a range of metallicities
(between [Fe/H]=-1.44 and [Fe/H]=+0.34, with [Fe/H]=-0.17 being the average).
However, within the estimated uncertainties, their masses are all
the same. The small group of dwarfs included in the sample represents stars
similar to the Sun with different metallicities. The sample defined this way
allows us to study the planet occurrence rate as a function of stellar
metallicity for a fixed solar mass.
Figure 1: The Hertzsprung-Russell diagram for the 122 PTPS stars with masses
equal to solar within the $5\%$ uncertainty. Circles mark the stars discussed
in this work.
Here we present the results for four stars from this sample that show RV
variations appearing to be caused by low-mass companions. Their basic atmospheric
and stellar parameters are summarised in Table 1. The atmospheric parameters,
$\thinspace T_{\mathrm{eff}}$, $\log g$, and $[$Fe/H$]$, were derived using a
strictly spectroscopic method based on the LTE analysis of the equivalent
widths of FeI and FeII lines by Zieliński et al. (2012). The estimates of the
rotational velocities are given in Adamów et al. (2014). The stellar
parameters (masses, luminosities, and radii) were estimated using the Bayesian
approach of Jørgensen & Lindegren (2005), modified by da Silva et al. (2006)
and adapted for our project by Adamczyk et al. (2016), using the theoretical
stellar models from Bressan et al. (2012). In the case of BD+02 3313, we
determined the luminosity using the Gaia Collaboration et al. (2016) DR2
parallax (see Deka-Szymankiewicz et al. 2018 for details).
### 2.1 Observations
The spectroscopic observations presented in this paper were made with two
instruments: the 9.2-m Hobby-Eberly Telescope (HET, Ramsey et al. 1998) and
its High-Resolution Spectrograph (HRS, Tull 1998) in the queue scheduling mode
(Shetrone et al., 2007), and the 3.58-meter Telescopio Nazionale Galileo (TNG)
and its High Accuracy Radial velocity Planet Searcher in the Northern
hemisphere (HARPS-N, Cosentino et al. 2012). A detailed description of the
adopted observing strategies and the instrumental configurations for both
HET/HRS and TNG/HARPS-N can be found in Niedzielski et al. (2007) and
Niedzielski et al. (2015a).
All HET/HRS spectra were reduced with standard IRAF procedures. (IRAF is
distributed by the National Optical Astronomy Observatories, which are
operated by the Association of Universities for Research in Astronomy, Inc.,
under cooperative agreement with the National Science Foundation.) The
TNG/HARPS-N spectra were processed with the standard user's pipeline, the Data
Reduction Software (DRS; Pepe et al. 2002a; Lovis & Pepe 2007).
### 2.2 Radial velocities
The HET/HRS is a general-purpose spectrograph, which is neither temperature-
nor pressure-controlled. Therefore, the calibration of the RV measurements with
this instrument is best accomplished with the I2 gas cell technique (Marcy &
Butler, 1992; Butler et al., 1996). Our application of this technique to
HET/HRS data is described in detail in Nowak (2012) and Nowak et al. (2013).
The RVs from the HARPS-N were obtained with the cross-correlation method
(Queloz, 1995; Pepe et al., 2002b). The wavelength calibration was done using
the simultaneous Th-Ar mode of the spectrograph. The RVs were calculated by
cross-correlating the stellar spectra with the digital mask for a K2 type
star.
The RV data acquired with both instruments are shown in Table LABEL:RV-DATA.
There are different zero-point offsets between the data sets for every target;
they are listed in Table 3.
## 3 Keplerian analysis
To find the orbital parameters, we combined a global genetic algorithm (GA;
Charbonneau 1995) with the MPFit algorithm (Markwardt, 2009). This hybrid
approach is described in Goździewski et al. (2003); Goździewski & Migaszewski
(2006); Goździewski et al. (2007). The range of the Keplerian orbital
parameters found with the GA was searched with the RVLIN code (Wright &
Howard, 2009), which we modified to introduce the stellar jitter as a free
parameter to be fitted in order to find the optimal solution (Ford & Gregory,
2007; Johnson et al., 2011). The uncertainties were estimated with the
bootstrap method described by Marcy et al. (2005).
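The bootstrap step can be sketched generically: resample the best-fit residuals with replacement, add them back onto the model, refit, and take the spread of the refit parameters as the uncertainty. A minimal sketch (`fit_func` is a placeholder for the actual Keplerian fitting routine, which is not reproduced here):

```python
import numpy as np

def bootstrap_uncertainty(t, rv, fit_func, n_boot=200, seed=0):
    """Bootstrap parameter uncertainties for an RV fit.

    fit_func(t, rv) must return (params, model_rv).  Residuals from the
    best fit are resampled with replacement, added back to the model,
    and the synthetic data are refit; the parameter spread over the
    bootstrap samples estimates the errors.
    """
    rng = np.random.default_rng(seed)
    params, model = fit_func(t, rv)
    resid = rv - model
    samples = []
    for _ in range(n_boot):
        fake = model + rng.choice(resid, size=len(resid), replace=True)
        p, _ = fit_func(t, fake)
        samples.append(p)
    return np.asarray(params), np.std(samples, axis=0)
```

The same pattern applies to any fitter that returns best-fit parameters and a model curve.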
For a more detailed description of the Keplerian analysis presented here, we
refer the reader to the first TAPAS paper Niedzielski et al. (2015a). The
results of the analysis of our RV data are listed in Table 3.
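The Keplerian RV model being fitted has the standard form $RV(t)=K\left[\cos(\nu+\omega)+e\cos\omega\right]+V_{0}$, with the true anomaly $\nu$ obtained from Kepler's equation. A minimal numpy sketch (parameter names follow Table 3; this is an illustration, not the RVLIN/MPFit code used in the paper):

```python
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    M = np.atleast_1d(np.asarray(M, float))
    E = M.copy()
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def keplerian_rv(t, P, T0, K, e, omega, V0=0.0):
    """Single-companion radial velocity model.

    P, T0 in days; K, V0 in m/s; omega in radians.
    RV(t) = K [cos(nu + omega) + e cos(omega)] + V0.
    """
    M = 2.0 * np.pi * (np.asarray(t, float) - T0) / P   # mean anomaly
    E = solve_kepler(np.mod(M, 2.0 * np.pi), e)          # eccentric anomaly
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))  # true anomaly
    return K * (np.cos(nu + omega) + e * np.cos(omega)) + V0
```

For a circular orbit ($e=0$, $\omega=0$) this reduces to a pure cosine of semi-amplitude $K$.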
Table 3: Keplerian orbital parameters of companions to HD 4760, BD+02 3313,
TYC 0434-04538-1, and HD 96992.
Parameter | HD 4760 | BD+02 3313 | TYC 0434-04538-1 | HD 96992
---|---|---|---|---
$P$ (days) | $434^{+3}_{-3}$ | $1393^{+3}_{-3}$ | $193.2^{+0.4}_{-0.4}$ | $514^{+4}_{-4}$
$T_{0}$ (MJD) | $53955^{+23}_{-27}$ | $54982^{+4}_{-4}$ | $54829^{+15}_{-18}$ | $53620^{+30}_{-40}$
$K$ ($\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$) | $370^{+12}_{-9}$ | $690^{+4}_{-4}$ | $209^{+2}_{-2}$ | $33^{+3}_{-3}$
$e$ | $0.23^{+0.09}_{-0.06}$ | $0.47^{+0.01}_{-0.01}$ | $0.08^{+0.05}_{-0.03}$ | $0.41^{+0.24}_{-0.12}$
$\omega$ (deg) | $265^{+13}_{-16}$ | $351.3^{+0.7}_{-0.7}$ | $196^{+30}_{-34}$ | $149^{+24}_{-31}$
$m_{2}\sin i$ ($\thinspace M_{\mathrm{J}}$) | $13.9\pm 2.4$ | $34.1\pm 1.1$ | $6.1\pm 0.7$ | $1.14\pm 0.31$
$a$ ($\thinspace\mathrm{au}$) | $1.14\pm 0.08$ | $2.47\pm 0.03$ | $0.66\pm 0.04$ | $1.24\pm 0.05$
$V_{0}$ ($\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$) | $-67263^{+17}_{-14}$ | $-47210.6^{+2.1}_{-2.2}$ | $-52833.9^{+1}_{-1}$ | $-36624^{+3}_{-3}$
offset ($\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$) | $67163^{+30}_{-32}$ | $47105.6^{+6.2}_{-6.2}$ | $52897^{+11}_{-12}$ | $36630^{+8}_{-8}$
$\sigma_{\mathrm{jitter}}$($\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$) | 64 | 10.3 | 22 | 22
$\sqrt{\chi_{\nu}^{2}}$ | 1.13 | 1.33 | 1.26 | 1.23
RMS ($\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$) | 66 | 13.1 | 26.1 | 27
$N_{\textrm{obs}}$ | 35 | 29 | 29 | 76
Notes. $V_{0}$ denotes the absolute velocity of the barycenter of the system,
offset is a shift in radial velocity measurements between different telescopes,
$\sigma_{\mathrm{jitter}}$ is the stellar intrinsic jitter as defined in Johnson
et al. (2011), and RMS is the root mean square of the residuals. $T_{0}$ is
given in MJD = JD - 2400000.5.
### 3.1 HD 4760
HD 4760 (BD+05 109, TYC-0017-01084-1) is one of the least metal-abundant
giants in our sample, with [Fe/H]=-0.91$\pm$0.09.
We have measured the RVs for this star at 35 epochs over about a nine-year
period. Twenty-five epochs of HET/HRS data were obtained between Jan 12,
2006 and Jan 22, 2013 (2567 days, more than seven years). These data
exhibit an RV amplitude of $\pm
839\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$}$. We also
made ten additional observations of this star with the HARPS-N between Nov 30,
2012 and June 23, 2015 (935 days). For these observations, the measured RV
amplitude is $\pm 719\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\
s^{-1}}}$}$. These RV variations are more than two orders of magnitude larger
than the estimated RV precision of our measurements.
The measured RVs show a statistically significant periodic signal in the Lomb-
Scargle periodogram (Lomb, 1976; Scargle, 1982; Press et al., 1992), with a
false alarm probability $p<10^{-3}$ and a peak at about 430 days (Figure 2,
top panel).
These data, interpreted in terms of a Keplerian motion, show that this star
hosts a low-mass companion on an $a=1.14$ au, eccentric ($e=0.23$) orbit
(Figure 3). The calculated minimum mass of $13.9\pm 2.4\hbox{$\thinspace
M_{\mathrm{J}}$}$, makes the system similar to the one hosted by BD+20 2457.
See Table 3 and Figure 3 for the details of the Keplerian model.
After subtracting this model from the data, the remaining RV residuals show no
trace of a periodic signal (Figure 2, bottom panel).
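The quoted minimum mass follows from the binary mass function, $f(m)=K^{3}P(1-e^{2})^{3/2}/(2\pi G)=(m_{2}\sin i)^{3}/(m_{1}+m_{2})^{2}$. A minimal sketch (approximate physical constants; with the Table 3 values for HD 4760 it reproduces the quoted $\sim 13.9\thinspace M_{\mathrm{J}}$):

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
M_JUP = 1.898e27   # Jupiter mass, kg
DAY = 86400.0      # seconds per day

def msini(K, P_days, e, M_star_msun, n_iter=20):
    """Minimum companion mass from an RV orbit via the mass function.

    f(m) = K^3 P (1-e^2)^(3/2) / (2 pi G) = (m2 sin i)^3 / (m1 + m2)^2,
    solved by fixed-point iteration starting from m2 = 0.
    K in m/s, P in days, stellar mass in solar masses; returns Jupiter masses.
    """
    P = P_days * DAY
    m1 = M_star_msun * M_SUN
    f = K**3 * P * (1.0 - e**2) ** 1.5 / (2.0 * np.pi * G)
    m2 = 0.0
    for _ in range(n_iter):
        m2 = (f * (m1 + m2) ** 2) ** (1.0 / 3.0)
    return m2 / M_JUP
```

With $K=370$ m/s, $P=434$ d, $e=0.23$, and $M_{*}=1.05\thinspace M_{\odot}$, the iteration converges quickly because $m_{2}\ll m_{1}$.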
Figure 2: The Lomb-Scargle periodograms of (top to bottom) the combined HET/HRS
and TNG/HARPS-N RV data, the selected photometric data set, the FWHM of the
cross-correlation function from the TNG, the $S_{\mathrm{HK}}$ index measured
in the TNG spectra, and the post-Keplerian-fit RV residuals for HD 4760. A
periodic signal is clearly present in the RV data. Figure 3: The Keplerian best
fit to the combined HET/HRS (orange) and TNG/HARPS-N (blue) data for HD 4760.
The jitter is added to the uncertainties. Open symbols denote a repetition of
the data points for the initial orbital phase.
### 3.2 HD 96992
HD 96992 (BD+44 2063, TYC 3012-00145-1) is another low-metallicity
([Fe/H]=$-0.45\pm 0.08$) giant in our sample.
For this star, we have measured the RVs at 76 epochs over a 14-year period.
The HET/HRS data were obtained at 52 epochs between Jan 20, 2004 and Feb
06, 2013 (3305 days, or $\sim$nine years), showing a maximum amplitude of $\pm
157\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$}$. Twenty-four
more epochs of the HARPS-N data were collected between Dec 16, 2012 and Mar
14, 2018 (1914 days, over five years), resulting in a maximum RV amplitude of
$\pm 117\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$}$.
The observed maximum RV amplitude is 25-100 times larger than the estimated RV
precision. Our RV measurements show a statistically significant periodic
signal (with a false alarm probability of about $10^{-3}$) with a peak at
about 510 days (Figure 4, top panel).
Figure 4: Same as Figure 2 for HD 96992 . The $\approx$300 day signal in RV
residuals is consistent with the estimated rotation period.
As a result of fitting our Keplerian model to the data, this single periodic RV
signal suggests that HD 96992 hosts an $m\sin i=1.14\pm 0.31\hbox{$\thinspace
M_{\mathrm{J}}$}$ planet on an $a=1.24$ au, rather eccentric orbit
($e=0.41$). The parameters of this fit are listed in Table 3, and Fig. 5 shows
the fit to the RV data.
As seen in Fig. 4 (bottom panel), the RV residuals, after removing the
Keplerian model, reveal yet another long-period signal, of similar statistical
significance to the 514-day one, at a period of about 300 days.
We find this periodicity consistent with our estimate of the rotation period
for HD 96992. To test alternative scenarios for this system, we tried to model
a planetary system with two planets, but the dynamical modeling with Systemic
2.16 (Meschiari et al., 2009) shows that such a system is highly unstable and
disintegrates after about 1000 years. We also attempted to interpret the
signal at 300 days as a single, Keplerian orbital motion, but it resulted in a
highly eccentric orbit, and the quality of the fit was unsatisfactory. We
therefore rejected these alternative solutions.
In conclusion, we postulate that the signal at 514 days, evident in the RV
data for HD 96992, is due to a Keplerian motion, and that the $\sim 300$-day
signal remaining in the post-fit RV residuals reflects the rotation of a
feature on the stellar surface.
Figure 5: Same as Figure 3 for HD 96992 .
### 3.3 BD+02 3313
BD+02 3313 (TYC 0405-01114-1) has a solar-like metallicity of
[Fe/H]=0.10$\pm$0.07, but its luminosity is 27 times higher than solar.
We have measured the RVs for this star at 29 epochs over 4264 days (11.6
years). Thirteen epochs' worth of the HET/HRS RV data were gathered between Jul
11, 2006 and Jun 15, 2013 (2531 days, nearly a seven-year time span). These
RVs show a maximum amplitude of
$1381\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$}$. An
additional sixteen RV measurements were made with the HARPS-N between Jan 29,
2013 and Mar 14, 2018 (1870 days, an over 5-year time span). In this case, the maximum
RV amplitude is $\pm 1141\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\
s^{-1}}}$}$, which is three orders of magnitude larger than the estimated RV
precision. The data show a statistically significant periodic signal (with a
false alarm probability of about $10^{-3}$) with a peak at about 1400 days (Figure
6, top panel).
Figure 6: Same as Figure 2 for BD+02 3313 .
Interpreted in terms of Keplerian motion, the available RVs show that this
star hosts a low-mass companion, a brown dwarf, with a minimum mass of $m\sin
i=34.1\pm 1.1\hbox{$\thinspace M_{\mathrm{J}}$}$. The companion is located on
a relatively eccentric orbit ($e=0.47$), at $a=2.47$ au, within the brown
dwarf desert (Marcy & Butler, 2000), an orbital separation region below 3-5
au, known for a paucity of brown dwarf companions to solar-type stars. Parameters of
the Keplerian fit to these RV data are listed in Table 3, and shown in Fig. 7.
Figure 7: Same as Figure 3 for BD+02 3313 .
After removing the Keplerian model from the RV data, the residuals leave no
sign of any leftover periodic signal (Figure 6, bottom panel).
### 3.4 TYC 0434-04538-1
TYC 0434-04538-1 (GSC 00434-04538), another low metallicity
([Fe/H]=-0.38$\pm$0.06) giant, has been observed 29 times over a period of
3557 days (9.7 years).
The HET/HRS measurements were made at twelve epochs, between Jun 23, 2008 and
Jun 13, 2013 (over 1816 days, or nearly five years), showing a maximum RV
amplitude of $\pm 483\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\
s^{-1}}}$}$. Additional RV measurements for this star were made with the
HARPS-N at 17 epochs between Jun 27, 2013 and Mar 14, 2018 (1721 days, 4.7
years). These data show a maximum RV amplitude of $\pm
442\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$}$, which is
similar to that seen in the HET/HRS measurements. This is over two orders of
magnitude more than the estimated RV precision. The data show a strong,
statistically significant, periodic signal (false alarm probability
$p<10^{-3}$) with a peak at about 193 days (Figure 8, top panel).
Figure 8: Same as Figure 2 for TYC 0434-04538-1 .
Our Keplerian analysis shows that this star hosts a $6.1\pm
1.1\hbox{$\thinspace M_{\mathrm{J}}$}$ mass planet on an $a=0.66$ au, almost
circular ($e=0.08$) orbit, at the edge of the avoidance zone. The model
parameters of the best Keplerian fit to data are presented in Table 3 and in
Fig. 9.
Figure 9: Same as Figure 3 for TYC 0434-04538-1 .
After removing this model from the observed RV measurements, we do not see any
other periodic signal in the periodogram of the post-fit residuals (Figure 8,
bottom panel).
## 4 Stellar variability and activity analysis
Stars beyond the MS, especially the red giants, exhibit activity and various
types of variability that either distort their line profiles, mimicking RV
shifts, or genuinely shift the lines. Such phenomena, if periodic, may be
erroneously taken as evidence for the presence of orbiting, low-mass companions.
A significant variability of the red giants had already been noted by Payne-
Gaposchkin (1954) and Walker et al. (1989), making the nature of these
variations a topic of numerous studies.
All red giants of spectral type K5 or later have been found to be
variable photometrically with amplitudes increasing for the cooler stars
(Edmonds & Gilliland, 1996; Henry et al., 2000).
Short-period RV variations in giants were demonstrated to originate from the
$p$-mode oscillations by Hatzes & Cochran (1994). The first detection of
multimodal oscillations in a K0 giant, $\alpha$ UMa, was published by Buzasi
et al. (2000).
The solar-type, $p$-mode (radial) oscillations (Kjeldsen & Bedding, 1995;
Corsaro et al., 2013) are easily observable in the high precision, photometric
time-series measurements, and they have been intensely studied based on the
COROT (Baglin et al., 2006) and KEPLER (Gilliland et al., 2010) data, leading
to precise mass determinations of many stars (De Ridder et al., 2009; Bedding
et al., 2010; Kallinger et al., 2010; Hekker et al., 2011). Yu et al. (2020)
present an HRD with the amplitudes and frequencies of solar-like oscillations
from the MS up to the tip of the RGB.
The granulation-induced “flicker” (Corsaro et al., 2017; Tayar et al., 2019),
with characteristic time scales of $\approx$hours, is undoubtedly an
additional unresolved component to the RV scatter in the red giants.
Low amplitude, non-radial oscillations (mixed modes of Dziembowski et al.
2001) in the red giants (with frequencies of $\approx$5-60 cycles per day)
were first detected by Hekker et al. (2006). They were later unambiguously
confirmed using the COROT data by De Ridder et al. (2009), who also found that
the lifetimes of these modes are on the order of a month.
With timescales for the red giants ranging from hours to days, these
short-period variations typically remain unresolved in low-cadence
observations, focused on the long-term RV variations, and they contribute an
additional uncertainty to the RV measurements.
In the context of planet searches, long period variations of the red giant
stars are more intriguing, because they may masquerade as the low-mass
companions. Therefore, to distinguish line profile shifts due to
orbital motion from those caused by, for instance, pulsations, or from line
profile variations induced by stellar activity, it is crucial to understand
the processes that may cause the observed line shifts by studying the available
stellar activity indicators.
As the HET/HRS spectra do not cover the spectral range, where Ca II H & K
lines are present, we use the line bisector, and the Hα line index as activity
indicators. In the case of TNG HARPS-N spectra, in addition to the RVs, the
DRS provides the FWHM of the cross-correlation function (CCF) between the
stellar spectra and the digital mask and the line bisector (as defined in
Queloz et al. 2001), both being sensitive activity indicators.
### 4.1 Line bisectors
The spectral line bisector (BIS) is a measure of the asymmetry of a spectral
line, which can arise for such reasons as blending of lines, a surface feature
(dark spots, for instance), oscillations, pulsations, and granulation (see
Gray 2005 for a discussion of BIS properties). BIS has been proven to be a
powerful tool to detect starspots and background binaries (Queloz et al.,
2001; Santos et al., 2002) that can mimic a planet signal in the RV data. In
the case of surface phenomena (cool spots), the anti-correlation between BIS
and RV is expected (Queloz et al., 2001). In the case of a multiple star system
with a separation smaller than that of the fiber of the spectrograph, the
situation is more complicated: a correlation, anti-correlation, or lack of
correlation may occur, depending on the properties of the components (see
Santerne et al. 2015 and Günther et al. 2018 for a discussion). Unfortunately,
for slowly rotating giant stars, like our targets, BIS is not a sensitive
activity indicator (Santos et al., 2003, 2014).
The HET/HRS and the HARPS-N bisectors are defined differently and were
calculated from different instruments and spectral line lists. They are not
directly comparable and have to be considered separately. All the HET/HRS
spectral line bisector measurements were obtained from the spectra used for
the I2 gas-cell technique (Marcy & Butler, 1992; Butler et al., 1996). The
combined stellar and iodine spectra were first cleaned of the I2 lines by
dividing them by the corresponding iodine spectra imprinted in flat-field
exposures, and then cross-correlated with a binary K2 star mask. This
procedure is described in detail in Nowak et al. (2013). As stated
in Sect. 2.2, HET/HRS is not a stabilized spectrograph, and the lack of
correlation for BIS should be treated with caution, as it might be a result of
the noise introduced by the varying instrumental profile.
The Bisector Inverse Slopes of the cross-correlation functions from the
HARPS-N data were obtained with the Queloz et al. (2001) method, using the
standard HARPS-N user’s pipeline, which utilizes the simultaneous Th-Ar
calibration mode of the spectrograph and the cross-correlation mask with a
stellar spectrum (K2 in our case).
In all the cases presented here, the RVs do not correlate with the line
bisectors at the accepted significance level (p=0.01), see Tables 3 and 4. We
conclude, therefore, that the HET/HRS and the HARPS-N BIS RV data have not
been influenced by spots or background binaries.
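The significance test applied to these correlations can be sketched as follows. The critical value follows from the Student-t threshold, $r_c = t_c/\sqrt{t_c^2 + N - 2}$, where $t_c$ for $N-2$ degrees of freedom is supplied as an input; the example numbers ($N=16$ epochs, two-tailed $t\approx 2.977$ for p=0.01) are illustrative.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def critical_r(n, t_crit):
    """Critical |r| for significance, given the two-tailed Student-t
    value t_crit for n-2 degrees of freedom."""
    return t_crit / math.sqrt(t_crit ** 2 + n - 2)

# Illustrative check: N = 16 epochs and t ~ 2.977 (p = 0.01, 14 d.o.f.)
# give a critical correlation coefficient of about 0.62.
rc = critical_r(16, 2.977)
```

An observed |r| below this threshold, as for the BIS-RV pairs here, is consistent with no correlation at the adopted significance level.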
### 4.2 The $I_{\mathrm{H_{\alpha}}}$ activity index
The Hα line is a widely used indicator of chromospheric activity, as the
core of this line is formed in the chromosphere. Increased stellar
activity manifests as a correspondingly filled-in Hα profile. Variations in the flux in
the line core can be measured with the $I_{\mathrm{H_{\alpha}}}$ activity
index, defined as the ratio of the flux in a band centered on Hα to the flux in
the reference bands. We have measured the Hα activity index
($I_{\mathrm{H_{\alpha}}}$) in both the HET/HRS and the TNG/HARPS-N spectra
using the procedure described in Maciejewski et al. (2013) (cf. also Gomes da
Silva et al. 2012 or Robertson et al. 2013, and references therein).
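In essence, the index is a ratio of summed fluxes. A minimal sketch follows; the band centers and widths here are illustrative placeholders, not the actual definitions of Maciejewski et al. (2013).

```python
HALPHA = 6562.8  # Angstrom, Halpha line center

def band_flux(wave, flux, center, half_width):
    """Sum of the flux samples falling within center +/- half_width."""
    return sum(f for w, f in zip(wave, flux) if abs(w - center) <= half_width)

def halpha_index(wave, flux):
    """Flux in a band centered on Halpha divided by the mean flux of two
    reference bands on either side (band widths are illustrative)."""
    core = band_flux(wave, flux, HALPHA, 0.8)
    ref = 0.5 * (band_flux(wave, flux, HALPHA - 20.0, 4.0) +
                 band_flux(wave, flux, HALPHA + 20.0, 4.0))
    return core / ref

# A flat continuum gives an index set purely by the band-width ratio;
# a filled-in (brighter) line core raises the index.
wave = [6520.0 + 0.1 * i for i in range(800)]
flat = [1.0] * len(wave)
filled = [1.5 if abs(w - HALPHA) <= 0.8 else 1.0 for w in wave]
idx_flat = halpha_index(wave, flat)
idx_filled = halpha_index(wave, filled)
```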
The HET/HRS spectra were obtained with the use of the iodine cell technique,
meaning that the iodine spectrum was imprinted on the stellar one. To remove
the weak iodine lines in the Hα region, we divided the spectral order
containing Hα by the corresponding order of the GC flat spectrum, before
performing the $I_{\mathrm{H_{\alpha}}}$ index analysis.
A summary of our $I_{\mathrm{H_{\alpha}}}$ analysis in the HET/HRS data is
shown in Table 3, and a summary of the HARPS-N $I_{\mathrm{H_{\alpha}}}$ data
analysis is presented in Table 4. No statistically significant correlation
between $I_{\mathrm{H_{\alpha}}}$ and the RV data has been found for our
sample stars.
### 4.3 Calcium H & K doublet
The reversal profile in the cores of Ca H and K lines, i.e., the emission
structure at the core of the Ca absorption lines, is another commonly used
indicator of stellar activity (Eberhard & Schwarzschild, 1913). The Ca II H &
K lines are located at the blue end of the TNG/HARPS-N spectra, which is the
region with the lowest S/N for our red targets. The S/N of the spectra for the
stars discussed here varies between 2 and 10. Stacking the spectra to obtain a
better S/N is not possible here as they have been taken at least a month
apart. For every epoch’s usable spectrum for a given star, we calculated the
$S_{\mathrm{HK}}$ index following the formula of Duncan et al. (1991), and we
calibrated it against the Mount Wilson scale with the formula provided in
Lovis et al. (2011). We also searched the $S_{\mathrm{HK}}$ indices for
variability and found none (see periodograms in Figures 2, 4, 6 and 8).
Therefore, we conclude that the determined $S_{\mathrm{HK}}$ indices are not
related to the observed RV variations.
### 4.4 Photometry
Stellar activity and pulsations can also manifest themselves through changes
in the brightness of a star. All our targets have been observed by large
photometric surveys. We collected the available data for them from several
different catalogs: ASAS (Pojmanski, 1997), NSVS (Woźniak et al., 2004),
Hipparcos (Perryman & ESA, 1997) and SuperWASP (Pollacco et al., 2006). We
then selected the richest and the most precise data set from all available
ones for a detailed variability and period search. The original photometric
time series were binned into one-day intervals. We found no periodic signal in
the selected time-series photometry for any of our targets (see periodograms
in Figures 2, 4, 6 and 8). Table 5 summarizes the results for the selected
data.
### 4.5 CCF FWHM
The stellar activity and surface phenomena impact the shape of the lines in
the stellar spectrum. Properties of CCF, a mean profile of all spectral lines,
are used as activity indicators. In a recent paper, Oshagh et al. (2017) found
the CCF FWHM to be the best indicator of stellar activity available from the
HARPS-N DRS (for main sequence sun-like stars), in accordance with the
previous results of Queloz et al. (2009) and Pont et al. (2011). These authors
recommend using it to reconstruct the stellar RV jitter, as the CCF FWHM correlates
well with the activity-induced RV in the stars of various activity levels.
For all the HARPS-N observations available for our targets, we have correlated
the FWHM of the CCF against the RV measurements.
The presence of a correlation means that the observed variability may stem
from distorted spectral lines, possibly due to stellar activity. The results
of this analysis are shown in Table 4 and in Fig. 10. In the case of BD+02
3313 we found a statistically significant
($\mathrm{r}=0.73>\mathrm{r}_{c}=0.62$) correlation at the accepted confidence
level of p=0.01 between the observed RV and the CCF FWHM.
We also searched the CCF FWHM from HARPS-N for variability but found no
statistically significant signal (see periodograms in Figures 2, 4, 6 and 8).
Figure 10: Radial velocities plotted against cross-correlation function FWHM for TNG/HARPS-N data. Table 3: Summary of the activity analysis. Observation span (OS) is the total observation span covered by HET and TNG, $K_{\mathrm{osc}}$ is the amplitude of the expected solar-like oscillations (Kjeldsen & Bedding, 1995), OS$_{\mathrm{HET}}$ is the observing period for HET only, $K$ denotes the amplitude of the observed radial velocities, defined as $RV_{\mathrm{max}}-RV_{\mathrm{min}}$, and $\overline{\sigma_{\mathrm{RV}}}$ is the average RV uncertainty. All linear correlation coefficients are calculated with reference to RV. The last column provides the number of epochs. Star | | | | | | HET/HRS
---|---|---|---|---|---|---
OS | $K_{\mathrm{osc}}$ | OS$_{\mathrm{HET}}$ | $K$ | $\overline{\sigma_{\mathrm{RV}}}$ | BIS | $I_{\mathrm{H_{\alpha}}}$ | No
[days] | [$\mathrm{m\,s^{-1}}$] | [days] | [$\mathrm{m\,s^{-1}}$] | [$\mathrm{m\,s^{-1}}$] | r | p | r | p |
HD 4760 | 3449 | $189.68$ | 2567 | $839.01$ | $6.70$ | $0.18$ | $0.38$ | $0.05$ | $0.81$ | $25$
HD 96992 | 5167 | $7.19$ | 3305 | $157.24$ | $6.29$ | $0.04$ | $0.76$ | $0.13$ | $0.36$ | $52$
BD+02 3313 | 4264 | $6.26$ | 2531 | $1381.25$ | $5.32$ | $-0.20$ | $0.51$ | $0.24$ | $0.45$ | $13$
TYC 0434-04538-1 | 3551 | $10.52$ | 1816 | $483.64$ | $8.06$ | $0.26$ | $0.41$ | $0.28$ | $0.40$ | $12$
Table 4: Summary of the activity analysis. OS$_{\mathrm{TNG}}$ is the observing period for TNG only, $K$ denotes the amplitude of the observed radial velocities, defined as $RV_{\mathrm{max}}-RV_{\mathrm{min}}$, and $\overline{\sigma_{\mathrm{RV}}}$ is the average RV uncertainty. All linear correlation coefficients are calculated with reference to RV. The last column provides the number of epochs. Star | OS$_{\mathrm{TNG}}$ | $K$ | $\overline{\sigma_{\mathrm{RV}}}$ | BIS | FWHM | $I_{\mathrm{H_{\alpha}}}$ | $S_{\mathrm{HK}}$ | No
---|---|---|---|---|---|---|---|---
[days] | [$\mathrm{m\,s^{-1}}$] | [$\mathrm{m\,s^{-1}}$] | r | p | r | p | r | p | r | p |
HD 4760 | 934 | $718.30$ | $1.10$ | $0.56$ | $0.09$ | $-0.61$ | $0.06$ | $0.71$ | $0.02$ | $-0.16$ | $0.66$ | 10
HD 96992 | 1914 | $117.15$ | $1.65$ | $-0.24$ | $0.25$ | $0.10$ | $0.63$ | $0.30$ | $0.15$ | $0.09$ | $0.69$ | 24
BD+02 3313 | 1870 | $1153.10$ | $1.54$ | $-0.46$ | $0.08$ | $0.73$ | $0.00$ | $0.57$ | $0.02$ | $0.16$ | $0.55$ | 16
TYC 0434-04538-1 | 1721 | $442.23$ | $4.53$ | $0.50$ | $0.04$ | $0.27$ | $0.29$ | $0.59$ | $0.01$ | $-0.19$ | $0.47$ | 17
Table 5: A summary of the long photometric time series available for the presented stars. | HD 4760 | HD 96992 | BD+02 3313 | TYC 0434-04538-1
---|---|---|---|---
Source | ASAS | Hipparcos | ASAS | ASAS
t$_{0}$ [HJD] | 2455168.56291 | 2448960.99668 | 2455113.52337 | 2455122.52181
N points | 288 | 96 | 414 | 419
filter | V | Hp | V | V
mean mag. | 7.483 | 8.741 | 9.477 | 10.331
rms mag. | 0.023 | 0.019 | 0.018 | 0.019
## 5 Discussion
Hatzes & Cochran (1993a) suggested that the low-amplitude, long-period RV
variations in red giants are attributable to pulsations, stellar activity (a
spot rotating with the star), or low-mass companions. Such RV variations have
since been demonstrated to be due to the presence of low-mass companions
to many giants. Starting from $\iota$ Dra (Frink et al., 2002b), 112 giants
with planets have been listed in the compilation by Sabine Reffert -
https://www.lsw.uni-
heidelberg.de/users/sreffert/giantplanets/giantplanets.php. For some giants,
however, the companion hypothesis has been debatable.
The nature of the observed long-term RV variability in some giants (O’Connell,
1933; Payne-Gaposchkin, 1954; Houk, 1963) remains a riddle. Long secondary
period (LSP) photometric variations of AGB stars, and also of the luminous red
giant (K5-M) stars near the tip of the first giant branch (TFGB), brighter
than logL/L⊙$\sim$2.7, were detected in MACHO data (Wood et al., 1999; their
sequence D in the period-luminosity relation for the variable semi-regular
giants) and in OGLE data (Soszyński, 2007; Soszyński et al., 2009, 2011, 2013).
These studies associate the primary (but not always stronger) pulsations in
these stars with typically $\approx$10 times shorter periods (usually on
sequence B, first-overtone pulsations, of Wood et al. 1999). Depending on the adopted
detection limit, 30-50$\%$ of luminous red giants may display LSP (Soszynski
et al., 2007). With photometric amplitudes of the order of 1 mag, periods
ranging from 250 to 1400 days, and RV amplitudes of 2-7
$\thinspace{\mathrm{km\leavevmode\nobreak\ s^{-1}}}$ (Wood et al., 2004;
Nicholls et al., 2009), LSP in luminous giants should be easily detectable in
precise RV planet searches.
Soszynski et al. (2004a), following suggestions by Ita et al. (2002) and Kiss
& Bedding (2003), demonstrated that in the LMC, LSP are also present in stars
below the TFGB, in the first ascent giants. These stars, OGLE Small Amplitude
Red Giants (OSARGs, Wray et al. 2004), show much lower amplitudes ($<0.13$ mag
in I band).
The origin of LSP is practically unknown. Various scenarios: the eccentric
motion of an orbiting companion of mass $\approx 0.1\hbox{$\thinspace
M_{\odot}$}$, radial and nonradial pulsations, rotation of an ellipsoidal-
shaped red giant, episodic dust ejection, and starspot cycles, were discussed
in Wood et al. (2004). These authors propose a composite effect of large-
amplitude non-radial, g+ mode pulsation, and strong starspot activity as the
most feasible model. Soszyński & Udalski (2014) proposed another scenario, a
low-mass companion in a circular orbit just above the surface of the red
giant, followed by a dusty cloud that regularly obscures the giant and causes
the apparent luminosity variations. More recently, Saio et al. (2015) proposed
oscillatory convective modes as another explanation for the LSP. These models,
however, cannot simultaneously explain the effective temperatures and periods
of AGB stars ($\log L/\hbox{$\thinspace L_{\odot}$}\geq 3$,
$M/\hbox{$\thinspace M_{\odot}$}=2$).
Generally, the observational data seem to favour binary-type scenarios for LSP
in giants, as for shorter periods the sequence D coincides with the E sequence
of Wood et al. (1999), formed by close binary systems, in which one of the
components is a red giant deformed due to the tidal force (Soszynski et al.,
2004b; Nicholls & Wood, 2012). Sequence E appears, then, to be an extension of
the D sequence towards lower luminosity giants (Soszynski et al., 2004b), and
some of the LSP cases may be explained by ellipsoidal variability (ibid.). See
Nicholls & Wood (2012) for a discussion of differences of properties of
pulsating giants in sequences D and E.
Recently, Lee et al. (2014), Lee et al. (2016), and Delgado Mena et al.
(2018), invoked LSP as a potential explanation of observed RV variations in HD
216946 (M0 Iab var, logg=0.5$\pm$ 0.3, R=350R⊙, M=6.8$\pm$1.0 M⊙), $\mu$ UMa
(M0 III SB, Teff=3899 $\pm$35K, logg=1.0, M=2.2M⊙, R=74.7R⊙, L=1148L⊙); and
NGC 4349 No. 127 (L=504.36L⊙, logg=1.99$\pm$0.19, R=36.98$\pm$4.89R⊙,
M=3.81$\pm$0.23M⊙), respectively.
An interesting case of Eltanin ($\gamma$ Dra), a giant with RV variations that
disappeared after several years, was recently discussed by Hatzes et al.
(2018). This $M=2.14\pm 0.16\hbox{$\thinspace M_{\odot}$}$ star, ($R=49.07\pm
3.75\hbox{$\thinspace R_{\odot}$}$, and $L=510\pm 51\hbox{$\thinspace
L_{\odot}$}$ , op. cit. and [Fe/H] = +0.11 $\pm$ 0.09, $\hbox{$\thinspace
T_{\mathrm{eff}}$}=3990\pm 42$ K, and $\log g=1.669\pm 0.1$ Koleva & Vazdekis
2012) exhibited periodic RV variations that mimicked an $m\sin
i=10.7\hbox{$\thinspace M_{\mathrm{J}}$}$ companion in a 702-day orbit between
2003 and 2011. In the more recent data, collected between 2011 and 2017, these
variations disappeared. The nature of this type of variability is unclear. The
authors suggest a new form of stellar variability, possibly related to
oscillatory convective modes (Saio et al., 2015).
Aldebaran ($\alpha$ Tau) was studied in a series of papers (Hatzes & Cochran,
1993b, 1998) in search for the origin of observed long-term RV variations.
Hatzes et al. (2015), based on 30-year-long observations, put forward a
planetary hypothesis for this $M=1.13\pm 0.11\hbox{$\thinspace M_{\odot}$}$
giant star ($\hbox{$\thinspace T_{\mathrm{eff}}$}=4055\pm 70$ K, $\log
g=1.20\pm 0.1$, and [Fe/H]=-0.27 $\pm$ 0.05, $R=45.1\pm 0.1\hbox{$\thinspace
R_{\odot}$}$, op.cit.). They proposed a $m\sin i=6.47\pm 0.53\hbox{$\thinspace
M_{\mathrm{J}}$}$ planet in a 629-day orbit and a 520-day rotational modulation
by a stellar surface structure. Recently, Reichert et al. (2019) showed that in
2006/2007, the statistical power of the $\approx 620$ day period exhibited a
temporary but significant decrease. They also noted an apparent phase shift
between the RV variations and orbital solution at some epochs. These authors
note the resemblance of this star and $\gamma$ Dra, and also point to
oscillatory convective modes of Saio et al. (2015) as the source of observed
variations.
Due to the unknown underlying physics of the LSP, these claims are difficult
to dispute. However, a mysterious origin of the LSP certainly makes luminous
giants very intriguing objects, especially in the context of searching for
low-mass companions around them.
Another phenomenon that can mimic low-mass companions in precise RV
measurements is starspots rotating with the stellar disk. They can affect
spectral line profiles of magnetically active stars and mimic periodic RV
variations caused by orbiting companions (Vogt et al., 1987; Walker et al.,
1992; Saar & Donahue, 1997).
Slowly rotating, large G and K giants are not expected to exhibit strong
surface magnetic fields. Nevertheless, they may show activity in the form of
emission in the cores of strong chromospheric lines, photometric variability,
or X-ray emission (Korhonen, 2014). In their study of 17 377 oscillating red
giants from Kepler, Ceillier et al. (2017) identified only 2.08$\%$ of the
stars as showing a pseudo-periodic photometric variability likely originating
from surface spots (a frequency consistent with the fraction of
spectroscopically detected, rapidly rotating giants in the field).
The most extreme example of a slowly rotating giant with a relatively strong
magnetic field of 100 G (Aurière et al., 2008) is EK Eri. This star was found
to be a $14\hbox{$\thinspace L_{\odot}$}$, $1.85\hbox{$\thinspace M_{\odot}$}$
GIV-III giant with $\hbox{$\thinspace T_{\mathrm{eff}}$}=5125$ K, $\log
g=3.25$ and photometric period of $306.9\pm 0.4$ days by Strassmeier et al.
(1999). A detailed spectroscopic study by Dall et al. (2005) has shown RV
variations of about $100\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\
s^{-1}}}$}$ with the rotation period and a positive correlation between RV and
BIS. In a subsequent extensive study of this object, Dall et al. (2010)
constrain the atmospheric parameters, suggest that the rotation period is
twice the photometric period $P_{\mathrm{rot}}=2P_{\mathrm{phot}}=617.6$ days,
and present a 1979-2009 V photometry time series. The amplitude of the
periodic variations is about 0.3 mag.
Another example is Pollux ($\beta$ Gem), a slowly rotating M=2.3$\pm$0.2M⊙
(Lyubimkov et al., 2019) giant with a detected magnetic field. In a series of
papers: Hatzes & Cochran (1993a); Larson et al. (1993); Reffert et al. (2006),
it has been found to have a planetary-mass companion in a $589.7\pm 3.5$ day
orbit, and no sign of activity. Later on, that result was confirmed by Hatzes
et al. (2006), who also estimated the star’s rotation period to be 135 days.
Aurière et al. (2009) detected a weak magnetic field of -0.46$\pm$0.04 G in
Pollux, and Aurière et al. (2014) proposed a two-spot model that might explain
the observed RV variations. However, in their model, “photometric variations
of larger amplitude than those detected in the Hipparcos data were predicted”.
In their recent paper, Aurière et al. (2021) find that the longitudinal
magnetic field of Pollux varies sinusoidally with a period of 660$\pm$15 days,
similar to, but distinct from, that of the RV variations.
The presence of spots on a stellar surface may mimic low-mass companions, if
the spots show a similar, repetitive pattern for a sufficiently long period of
time. However, very little is known about the lifetime of spots on the surface
of single, inactive, slowly rotating giants. Mosser et al. (2009) estimate
that on the surfaces of F-G type MS stars, spots may persist for 0.5-2
times the rotation period. Hall et al. (1990) studied the evolution of four
spots on the surface of a long-period RS CVN binary V1817 Cygni (K2III), and
estimated their lifetimes to be two years. Also, Gray & Brown (2006)
identified a magnetically active region on the surface of Arcturus (K2 III)
that lasted for a period of 2.0$\pm$ 0.2 yr (the star was found to present a
weak magnetic field by Sennhauser & Berdyugina 2011). Brown et al. (2008) have
published observations that suggest migration of an active region on the
surface of Arcturus over a timescale of 115-253 days. A similar result,
suggesting a 0.5-1.3 year recurrence period in starspot emergence, was derived
in the case of a rapidly rotating K1IV star, KIC 11560447 (Özavcı et al.,
2018).
The lifetime of spots on the surfaces of single, low-activity giants appears to be
on the order of $\sim$2 years. A long enough series of data, covering several
lifetimes of starspots is clearly required to rule out or confirm activity-
related features as the origin of the observed RV variability.
### 5.1 HD 4760
HD 4760 is the most luminous and one of the most evolved stars in our sample
with $\hbox{$\thinspace\log L/L_{\odot}$}=2.93\pm 0.11$. Its large radius
($R/\hbox{$\thinspace R_{\odot}$}=42\pm 9$), low metallicity
([Fe/H]=-0.91$\pm$0.09), and small $\log g=1.62\pm 0.08$ make it a twin to
BD+20 2457, taking into account the estimated uncertainties.
The observed RV amplitude is about four times larger than the expected
amplitude of the $p$-mode oscillations (cf. Table 3). We find the actual RV
jitter ($\sigma_{jitter}$) in HD 4760 to be about three times smaller than that
expected ($K_{\mathrm{osc}}$) from the $p$-mode oscillations (Table 3). Such a
discrepancy cannot be explained by the estimated uncertainties, and it
suggests that the uncertainties in either luminosity or mass
(or both) may have been underestimated.
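For reference, the $K_{\mathrm{osc}}$ values in Table 3 follow from the Kjeldsen & Bedding (1995) velocity-amplitude scaling, $v_{\mathrm{osc}}\approx 23.4\,\mathrm{cm\,s^{-1}}\,(L/L_{\odot})/(M/M_{\odot})$. In the sketch below, only $\log L/L_{\odot}=2.93$ is taken from the text; the stellar mass is an assumed illustrative value.

```python
def v_osc(L_Lsun, M_Msun):
    """Kjeldsen & Bedding (1995) velocity-amplitude scaling, in m/s."""
    return 0.234 * L_Lsun / M_Msun   # 23.4 cm/s per solar (L/M)

L = 10 ** 2.93        # HD 4760 luminosity, from the text
M = 1.05              # assumed mass (illustrative; not given here)
K_osc = v_osc(L, M)   # ~190 m/s, of the order of the Table 3 value
```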
The high luminosity of HD 4760 makes it an interesting candidate for an LSP
object. However, the existing photometric data from ASAS do not indicate any
variability. Moreover, our RV data covering about nine periods of the observed
variation timescale, although not very numerous, do not show changes in
amplitude or phase, as those detected in $\gamma$ Dra or $\alpha$ Tau (Figure
11).
Figure 11: Keplerian best fit to combined HET/HRS (orange) and TNG/HARPS-N
(blue) data for HD 4760 . The jitter is added to uncertainties. The RV data
show no amplitude or phase shift over 14 years of observations.
The rotation period of HD 4760 (1531 days) is highly uncertain, and, given the
uncertainties in $v\sin i$ and $R$, its maximum value
($P_{\mathrm{rot}}/\sin i$) may range between 672 and 8707 days. The orbital
period from the Keplerian fit to the RV data is shorter than the maximum
allowed rotation period and we cannot exclude the possibility that the
periodic distortions of spectral lines by a spot rotating with the star are
the reason for the observed RV variations. However, HD 4760 does not show
increased activity (relative to the other stars in our sample), and none of
the activity indicators studied here is correlated with the observed RV
variations. Also, an estimate of the spot fraction that would cause the
observed RV amplitude, based on the scaling relation of Hatzes (2002), gives a
rather unrealistic value of $f=53\%$. Our data also show that the periodic
RV variations have been present in HD 4760 for over nine years, which is
unlikely if caused by a surface feature. Together with the apparent lack of
photometric variability, we find that the available data exclude that scenario.
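The maximum rotation period quoted above follows from $P_{\mathrm{rot}}/\sin i = 2\pi R/(v\sin i)$. In the sketch below, $R=42\,R_{\odot}$ is from the text, while the $v\sin i$ value is an assumption chosen for illustration.

```python
import math

R_SUN_KM = 6.957e5     # solar radius in km (IAU nominal value)
SEC_PER_DAY = 86400.0

def max_rotation_period_days(R_Rsun, vsini_kms):
    """P_rot/sin(i) = 2*pi*R / (v sin i), converted to days."""
    return 2.0 * math.pi * R_Rsun * R_SUN_KM / vsini_kms / SEC_PER_DAY

# HD 4760: R = 42 R_sun (from the text); v sin i = 1.4 km/s assumed here.
p_max = max_rotation_period_days(42.0, 1.4)   # on the order of 1500 days
```

The wide 672-8707 day range quoted above then reflects the propagated uncertainties in both $R$ and $v\sin i$.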
We conclude that the reflex motion due to a companion appears to be the most
likely hypothesis that explains the observed RV variations in HD 4760.
The mass of the companion and a rather tight orbit of HD 4760 b locate it deep
in the zone of engulfment (Villaver & Livio, 2009; Villaver et al., 2014;
Veras, 2016). However, predicting its future requires more detailed analysis,
as this relatively massive companion may survive the common envelope phase of
this system’s evolution (Livio & Soker, 1984).
See Table 3 for details of the Keplerian model.
### 5.2 HD 96992
Of the time series presented in this paper, this is certainly the noisiest
one. The Keplerian model for the 514-day period results in an RV semi-amplitude
of only $33\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$}$ (about
five times greater than the estimated HET/HRS precision), similar to the jitter of
$20\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$}$ (Figure 12).
The observed RV amplitude is about twenty times larger than the expected
amplitude of the p-mode oscillations. The jitter resulting from the Keplerian
fit is larger than that expected from the scaling relations of Kjeldsen &
Bedding (1995). This suggests an additional contribution to the jitter, such
as granulation “flicker”.
HD 96992 is much less luminous than HD 4760, with $\thinspace\log
L/L_{\odot}$=1.47$\pm$0.09. It is located well below the TFGB, which makes it
unlikely to be an irregular LSP giant. An apparent lack of photometric
variability supports that claim as well.
The orbital period of 514 days is much longer than the estimated rotation
period of $198\pm 92$ days ($P_{\mathrm{rot}}/\sin i=128-332$ days within
uncertainties), which, together with absence of a photometric variability of
similar period and no correlation with activity indicators, excludes a spot
rotating with the star as a cause of the observed RV variations. The $\approx
300$ day period present in the RV residuals is more likely due to rotation.
The apparent absence of any correlation of observed RV variations with
activity indicators and no trace of periodic variations in those indices makes
the Keplerian model the most consistent with the existing data. Details of our
Keplerian model are shown in Table 3.
Figure 12: Keplerian best fit to combined HET/HRS (orange) and TNG/HARPS-N
(blue) data for HD 96992 . The jitter is added to uncertainties.
The $m\sin i=1.14\pm 0.31\hbox{$\thinspace M_{\mathrm{J}}$}$ planet of this
system orbits the star deep in the engulfment zone (Villaver & Livio, 2009)
and will most certainly be destroyed by its host before the AGB phase.
### 5.3 BD+02 3313
BD+02 3313 is a very intriguing case of a solar metallicity giant. With
$\thinspace\log L/L_{\odot}$=1.44$\pm$0.24 it is located well below the TFGB,
even below the horizontal branch, which makes it very unlikely to be an LSP
pulsating red giant.
The RV signal is very apparent; the Keplerian orbit suggests an RV semi-
amplitude of $690\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$}$
and a period of 1393 days.
These RV data show an amplitude over two orders of magnitude larger than that
expected of the p-mode oscillations.
The fitted jitter of $10\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\
s^{-1}}}$}$ is close to that expected from the scaling relations of Kjeldsen &
Bedding (1995), within the uncertainties of mass and luminosity.
The estimated rotation period of $238\pm 122$ days ($P_{\mathrm{rot}}/\sin
i=146-421$ days within uncertainties) is much shorter than the Keplerian orbital
period. The extensive photometric data set from ASAS, contemporaneous with our
HET/HRS data, shows no periodic signal and no excess scatter that might be a
signature of spots on the surface of the star.
None of the activity indices studied here shows a significant periodic signal.
Line bisectors, $I_{\mathrm{H_{\alpha}}}$ and $S_{\mathrm{HK}}$ are
uncorrelated with the RV variations. The value of $S_{\mathrm{HK}}$ does not
indicate a significant activity, compared to other stars in our sample. The
persistence of the RV periodicity for over 11 years also advocates against a
possible influence of an active region rotating with the star.
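The periodicity searches referred to here use Lomb-Scargle periodograms (Lomb, 1976; Scargle, 1982). A minimal sketch with SciPy on synthetic, unevenly sampled data (the sampling, amplitude, and noise values below are illustrative, not the paper's measurements):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 4000.0, 150))      # uneven time sampling [d]
rv = 50.0*np.sin(2*np.pi*t/1393.0) + rng.normal(0.0, 10.0, t.size)  # [m/s]

periods = np.linspace(100.0, 3000.0, 5000)      # trial periods [d]
# lombscargle expects angular frequencies; mean-subtract the data first
pgram = lombscargle(t, rv - rv.mean(), 2*np.pi/periods, normalize=True)
best_period = periods[np.argmax(pgram)]         # recovers the injected period
```

The normalized power of the highest peak, compared against a false-alarm threshold, is what decides whether a periodic signal is deemed significant.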
The resulting Keplerian model (Figure 13), which suggests an $m\sin i=34.1\pm
1.1\hbox{$\thinspace M_{\mathrm{J}}$}$ companion in a 2.47 au, eccentric
($e=0.47$) orbit (i.e., a brown dwarf in the brown dwarf desert) is consistent
with the available RV data for the total time-span of the observing run.
However, FWHM of the CCF from HARPS-N data for BD+02 3313 shows an
$\mathrm{r}=0.73>\mathrm{r}_{c}=0.62$ correlation, which is statistically
significant at the accepted confidence level of p=0.01 (Figure 10, lower left
panel). Given the small number of CCF FWHM data points, we cannot exclude the
possibility that the observed correlation is spurious. This possibility seems
to be supported by the apparent lack of a periodic signal in the LS
periodogram for CCF FWHM (Figure 6).
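The critical correlation coefficient $\mathrm{r}_{c}$ used above follows from the Student t-distribution. A sketch in Python (the sample size of 16 below is an illustrative guess that reproduces $\mathrm{r}_{c}\approx 0.62$ at $p=0.01$; it is not a number taken from this paper):

```python
import numpy as np
from scipy import stats

def critical_r(n, p=0.01):
    """Two-sided critical Pearson r for n data points at significance level p."""
    t_c = stats.t.ppf(1.0 - p/2.0, df=n - 2)
    return t_c/np.sqrt(t_c**2 + n - 2)

r_c = critical_r(16)   # roughly 0.62 for 16 points at p = 0.01
```

An observed $|\mathrm{r}|>\mathrm{r}_{c}$ is then statistically significant at the chosen confidence level, as claimed for the RV vs. CCF FWHM correlation.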
Figure 13: Keplerian best fit to combined HET/HRS (orange) and TNG/HARPS-N
(blue) data for BD+02 3313. The jitter is added to the uncertainties. The RV
signal is very apparent.
An assumption that all the observed RV variations in this inactive star are
due to the rotation of a surface feature is inconsistent with the existing
photometry and our rotation period estimate. A more likely explanation would
be the presence of a spatially unresolved companion associated with BD+02
3313.
We conclude that the observed RV and CCF FWHM correlation seriously undermines
the Keplerian model of the observed RV variations in BD+02 3313. The actual
cause of the reported RV variations remains to be identified with the help of
additional observations.
### 5.4 TYC 0434-04538-1
TYC 0434-04538-1 is a low metallicity, [Fe/H]=-0.38$\pm$0.06 giant, with a
luminosity of $\hbox{$\thinspace\log L/L_{\odot}$}=1.67\pm 0.09$, which
locates it near the horizontal branch. As such, the star is unlikely to be an
irregular LSP giant.
It shows a strong, periodic RV signal, which, when modelled under the
assumption of a Keplerian motion, shows a semi amplitude of
$K=209\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$}$, and a
period of 193 days. The RV data show an amplitude about forty times larger
than that expected of the p-mode oscillations. Again, the jitter is larger than
that expected from the p-mode oscillations alone, so it likely contains an
additional component, unresolved by our observations, such as granulation
"flicker".
This period is shorter than the estimate of $P_{\mathrm{rot}}/\sin i=124-225$
days, hence the observed RV variation may originate, in principle, from a
feature on the stellar surface rotating with the star. The spot would have to
cover f=$10\%$ of the stellar surface according to the simple model of Hatzes
(2002) to explain the observed RV variations. Photometric data from
ASAS, which show no variability, do not support this scenario. We note,
however, that a comparably large spot coverage ($10\%$) was successfully used
by Oláh et al. (2018) to model spots on the surface of the overactive spotted
giant in the binary system EPIC 211759736.
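The Hatzes (2002) spot model relates the RV semi-amplitude to the spot filling factor $f$ (in per cent) and $v\sin i$; in its commonly quoted form, $A \approx (8.6\,v\sin i - 1.6)\,f^{0.9}\ \mathrm{m\,s^{-1}}$. A sketch inverting it (the $v\sin i$ value below is an assumed illustration, not this star's measured value):

```python
def spot_filling_factor(K, vsini):
    """Spot filling factor f [per cent] needed to produce RV semi-amplitude
    K [m/s] on a star with projected rotational velocity vsini [km/s],
    using the commonly quoted Hatzes (2002) relation
    A = (8.6*vsini - 1.6) * f**0.9."""
    return (K/(8.6*vsini - 1.6))**(1.0/0.9)

f = spot_filling_factor(209.0, 3.0)   # about 11% for an assumed vsini of 3 km/s
```

A result of order $10\%$ for plausible $v\sin i$ is what motivates the large spot coverage quoted above.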
Consequently, we conclude that the available data favour the low-mass
companion hypothesis.
Figure 14: Keplerian best fit to combined HET/HRS (orange) and TNG/HARPS-N
(blue) data for TYC 0434-04538-1. The jitter is added to the uncertainties. The
RV variations appear to be stable over the period of nearly 10 years.
Figure 15: Mass-orbital period relation for 228 planets hosted by solar mass
stars (within 5$\%$) in exoplanets.org, together with our four new candidates
presented here. Symbol sizes are scaled with orbital eccentricities.
### 5.5 The current status of the project
The sample contains 122 stars in total, with at least two epochs of
observations that allowed us to measure the RV variation amplitude.
Sixty stars in the sample (49$\pm 5\%$) are assumed to be single, as they show
$\Delta\mathrm{RV}<50\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\
s^{-1}}}$}$ in several epochs of data over a period of, typically, 2-3 years.
This group of stars may still contain long period and/or low-mass companions,
which means that the number of single stars may be overestimated. Due to the
limited telescope time available, further observations of these stars were
ceased.
The estimate of single star frequency in our sample, although based on a small
sample of GK stars at various stages of evolution from the MS to the RGB,
agrees with the results of a study of a sample of 454 stars by Raghavan et al.
(2010), who found that $54\pm 2\%$ of solar-type stars are single. We take this
agreement as a confirmation that our sample is not biased towards binary or
single stars.
Nineteen stars in our sample ($16\pm 3\%$) are spectroscopic binaries with
$\Delta\mathrm{RV}>2\hbox{$\thinspace{\mathrm{km\leavevmode\nobreak\
s^{-1}}}$}$. Technically not only HD 181368 b (Adamów et al., 2018) but also
BD+20 274 b (Gettel et al., 2012) belongs to this group, due to the observed
RV trend. Although we cannot exclude more low-mass companions associated with
binary stellar systems for these targets, they were rejected from further
observations after several epochs, due to a limited telescope time available.
Finally, 43 of the stars in our sample ($35\pm 4\%$) show RV amplitudes
between $50\hbox{$\thinspace{\mathrm{m\leavevmode\nobreak\ s^{-1}}}$}$ and
$2\hbox{$\thinspace{\mathrm{km\leavevmode\nobreak\ s^{-1}}}$}$ and are assumed
to be either active stars or planetary/BD companion candidates. These stars
have been searched for low-mass companions by this project.
Six low-mass companion hosts have been identified in the sample so far: HD
102272 (Niedzielski et al., 2009a), BD+20 2457 (Niedzielski et al., 2009b),
BD+20 274 (Gettel et al., 2012), HD 219415 (Gettel et al., 2012), HD 5583
(Niedzielski et al., 2016b), and HD 181368 (Adamów et al., 2018).
This paper presents low-mass companions to another three stars: HD 4760, TYC
0434-04538-1, and HD 96992. Our findings add to a population of 228 planets
orbiting solar-mass stars in exoplanets.org (Figure 15).
Seven stars from the sample (HD 102272, BD+20 2457, BD+20 274, HD 219415, HD
5583, TYC 0434-04538-1, HD 96992) show RV variations consistent with
planetary-mass companions ($m_{\mathrm{p}}\sin i<13\hbox{$\thinspace
M_{\mathrm{J}}$}$), which represents $6\pm 2\%$ of the sample.
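The percentage uncertainties quoted in this section are consistent with binomial counting errors, $\sigma_{p}=\sqrt{p(1-p)/N}$. A quick check in Python:

```python
import numpy as np

def fraction_with_error(k, n):
    """Fraction k/n and its binomial standard error, both in per cent."""
    p = k/n
    return 100.0*p, 100.0*np.sqrt(p*(1.0 - p)/n)

# 60 single stars, 19 binaries, 43 candidates, out of 122 stars:
f1, e1 = fraction_with_error(60, 122)   # 49.2 +/- 4.5  ->  "49 +/- 5 %"
f2, e2 = fraction_with_error(19, 122)   # 15.6 +/- 3.3  ->  "16 +/- 3 %"
f3, e3 = fraction_with_error(43, 122)   # 35.2 +/- 4.3  ->  "35 +/- 4 %"
```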
## 6 Conclusions
Based on precise RV measurements gathered with the HET/HRS and HARPS-N for
over 11 years, we have discussed three solar-mass giants with low mass
companions: HD 4760 hosts a $m\sin i=13.9\pm 2.4\hbox{$\thinspace
M_{\mathrm{J}}$}$ companion in an $a=1.14\pm 0.08$ au and $e=0.23\pm 0.09$
orbit; HD 96992 has a $m\sin i=1.14\pm 0.31\hbox{$\thinspace M_{\mathrm{J}}$}$
companion in an $a=1.24\pm 0.05$ au, eccentric, $e=0.41\pm 0.24$ orbit; TYC
0434-04538-1 is accompanied by a $m\sin i=6.1\pm 0.7\hbox{$\thinspace
M_{\mathrm{J}}$}$ companion in an $a=1.66\pm 0.04$ au, nearly circular orbit
with $e=0.08\pm 0.05$. In the case of BD+02 3313 we find the Keplerian model
uncertain because of statistically significant correlation between RV and CCF
FWHM in the HARPS-N data.
The analysis of RV amplitudes in our sample of 122 solar-mass stars at various
stellar evolution stages shows that single star frequency is $49\pm 5\%$,
which means that the sample is not biased against stellar binarity.
###### Acknowledgements.
We thank the HET and IAC resident astronomers and telescope operators for
their support. AN was supported by the Polish National Science Centre grant
no. 2015/19/B/ST9/02937. EV acknowledges support from the Spanish Ministerio
de Ciencia Inovación y Universidades under grant PGC2018-101950-B-100. KK was
funded in part by the Gordon and Betty Moore Foundation’s Data-Driven
Discovery Initiative through Grant GBMF4561. This research was supported in
part by PL-Grid Infrastructure. The HET is a joint project of the University
of Texas at Austin, the Pennsylvania State University, Stanford University,
Ludwig- Maximilians-Universität München, and Georg-August-Universität
Göttingen. The HET is named in honor of its principal benefactors, William P.
Hobby and Robert E. Eberly. The Center for Exoplanets and Habitable Worlds is
supported by the Pennsylvania State University, the Eberly College of Science,
and the Pennsylvania Space Grant Consortium. This research has made use of the
SIMBAD database, operated at CDS, Strasbourg, France. This research has made
use of NASA’s Astrophysics Data System. The acknowledgements were compiled
using the Astronomy Acknowledgement Generator. This research made use of SciPy
(Jones et al., 2001–). This research made use of the yt-project, a toolkit for
analyzing and visualizing quantitative data (Turk et al., 2011). This research
made use of matplotlib, a Python library for publication quality graphics
(Hunter, 2007). This research made use of Astropy, a community-developed core
Python package for Astronomy (Astropy Collaboration et al., 2013). IRAF is
distributed by the National Optical Astronomy Observatory, which is operated
by the Association of Universities for Research in Astronomy (AURA) under
cooperative agreement with the National Science Foundation (Tody, 1993). This
research made use of NumPy (Walt et al., 2011). We thank the referee for
comments that have significantly contributed to improving this paper.
## References
* Adamczyk et al. (2016) Adamczyk, M., Deka-Szymankiewicz, B., & Niedzielski, A. 2016, A&A, 587, A119
* Adamów et al. (2018) Adamów, M., Niedzielski, A., Kowalik, K., et al. 2018, A&A, 613, A47
* Adamów et al. (2012) Adamów, M., Niedzielski, A., Villaver, E., Nowak, G., & Wolszczan, A. 2012, ApJ, 754, L15
* Adamów et al. (2014) Adamów, M., Niedzielski, A., Villaver, E., Wolszczan, A., & Nowak, G. 2014, A&A, 569, A55
* Adamów et al. (2015) Adamów, M., M., Niedzielski, A., Villaver, E., et al. 2015, A&A, 581, A94
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33
* Aurière et al. (2014) Aurière, M., Konstantinova-Antova, R., Espagnet, O., et al. 2014, in IAU Symposium, Vol. 302, Magnetic Fields throughout Stellar Evolution, ed. P. Petit, M. Jardine, & H. C. Spruit, 359–362
* Aurière et al. (2008) Aurière, M., Konstantinova-Antova, R., Petit, P., et al. 2008, A&A, 491, 499
* Aurière et al. (2021) Aurière, M., Petit, P., Mathias, P., et al. 2021, arXiv e-prints, arXiv:2101.02016
* Aurière et al. (2009) Aurière, M., Wade, G. A., Konstantinova-Antova, R., et al. 2009, A&A, 504, 231
* Baglin et al. (2006) Baglin, A., Auvergne, M., Barge, P., et al. 2006, in ESA Special Publication, Vol. 1306, The CoRoT Mission Pre-Launch Status - Stellar Seismology and Planet Finding, ed. M. Fridlund, A. Baglin, J. Lochard, & L. Conroy, 33
* Bedding et al. (2010) Bedding, T. R., Huber, D., Stello, D., et al. 2010, ApJ, 713, L176
* Borucki et al. (2010) Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977
* Bressan et al. (2012) Bressan, A., Marigo, P., Girardi, L., et al. 2012, MNRAS, 427, 127
* Brown et al. (2008) Brown, K. I. T., Gray, D. F., & Baliunas, S. L. 2008, ApJ, 679, 1531
* Butler et al. (1996) Butler, R. P., Marcy, G. W., Williams, E., et al. 1996, PASP, 108, 500
* Buzasi et al. (2000) Buzasi, D., Catanzarite, J., Laher, R., et al. 2000, ApJ, 532, L133
* Catala & PLATO Consortium (2008) Catala, C. & PLATO Consortium. 2008, in Journal of Physics Conference Series, Vol. 118, Journal of Physics Conference Series, 012040
* Ceillier et al. (2017) Ceillier, T., Tayar, J., Mathur, S., et al. 2017, A&A, 605, A111
* Charbonneau (1995) Charbonneau, P. 1995, ApJS, 101, 309
* Corsaro et al. (2013) Corsaro, E., Fröhlich, H. E., Bonanno, A., et al. 2013, MNRAS, 430, 2313
* Corsaro et al. (2017) Corsaro, E., Mathur, S., García, R. A., et al. 2017, A&A, 605, A3
* Cosentino et al. (2012) Cosentino, R., Lovis, C., Pepe, F., et al. 2012, in SPIE Conference Series, Vol. 8446, SPIE Conference Series
* da Silva et al. (2006) da Silva, L., Girardi, L., Pasquini, L., et al. 2006, A&A, 458, 609
* Dall et al. (2010) Dall, T. H., Bruntt, H., Stello, D., & Strassmeier, K. G. 2010, A&A, 514, A25
* Dall et al. (2005) Dall, T. H., Bruntt, H., & Strassmeier, K. G. 2005, A&A, 444, 573
* De Ridder et al. (2009) De Ridder, J., Barban, C., Baudin, F., et al. 2009, Nature, 459, 398
* Deka-Szymankiewicz et al. (2018) Deka-Szymankiewicz, B., Niedzielski, A., Adamczyk, M., et al. 2018, A&A, 615, A31
* Delgado Mena et al. (2018) Delgado Mena, E., Lovis, C., Santos, N. C., et al. 2018, A&A, 619, A2
* Duncan et al. (1991) Duncan, D. K., Vaughan, A. H., Wilson, O. C., et al. 1991, ApJS, 76, 383
* Dziembowski et al. (2001) Dziembowski, W. A., Gough, D. O., Houdek, G., & Sienkiewicz, R. 2001, MNRAS, 328, 601
* Eberhard & Schwarzschild (1913) Eberhard, G. & Schwarzschild, K. 1913, ApJ, 38, 292
* Edmonds & Gilliland (1996) Edmonds, P. D. & Gilliland, R. L. 1996, ApJ, 464, L157
* Ford & Gregory (2007) Ford, E. B. & Gregory, P. C. 2007, in ASP Conference Series, Vol. 371, Statistical Challenges in Modern Astronomy IV, ed. G. J. Babu & E. D. Feigelson, 189
* Frink et al. (2002a) Frink, S., Mitchell, D. S., Quirrenbach, A., et al. 2002a, ApJ, 576, 478
* Frink et al. (2002b) Frink, S., Mitchell, D. S., Quirrenbach, A., et al. 2002b, ApJ, 576, 478
* Gaia Collaboration et al. (2016) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2016, A&A, 595, A2
* Gettel et al. (2012) Gettel, S., Wolszczan, A., Niedzielski, A., et al. 2012, ApJ, 756, 53
* Gilliland et al. (2010) Gilliland, R. L., Brown, T. M., Christensen-Dalsgaard, J., et al. 2010, PASP, 122, 131
* Gomes da Silva et al. (2012) Gomes da Silva, J., Santos, N. C., Bonfils, X., et al. 2012, A&A, 541, A9
* Goździewski et al. (2003) Goździewski, K., Konacki, M., & Maciejewski, A. J. 2003, ApJ, 594, 1019
* Goździewski et al. (2007) Goździewski, K., Maciejewski, A. J., & Migaszewski, C. 2007, ApJ, 657, 546
* Goździewski & Migaszewski (2006) Goździewski, K. & Migaszewski, C. 2006, A&A, 449, 1219
* Gray (2005) Gray, D. F. 2005, PASP, 117, 711
* Gray & Brown (2006) Gray, D. F. & Brown, K. I. T. 2006, PASP, 118, 1112
* Günther et al. (2018) Günther, M. N., Queloz, D., Gillen, E., et al. 2018, MNRAS, 478, 4720
* Hall et al. (1990) Hall, D. S., Gessner, S. E., Lines, H. C., & Lines, R. D. 1990, AJ, 100, 2017
* Hatzes (2002) Hatzes, A. P. 2002, Astronomische Nachrichten, 323, 392
* Hatzes & Cochran (1993a) Hatzes, A. P. & Cochran, W. D. 1993a, ApJ, 413, 339
* Hatzes & Cochran (1993b) Hatzes, A. P. & Cochran, W. D. 1993b, ApJ, 413, 339
* Hatzes & Cochran (1994) Hatzes, A. P. & Cochran, W. D. 1994, ApJ, 432, 763
* Hatzes & Cochran (1998) Hatzes, A. P. & Cochran, W. D. 1998, MNRAS, 293, 469
* Hatzes et al. (2015) Hatzes, A. P., Cochran, W. D., Endl, M., et al. 2015, A&A, 580, A31
* Hatzes et al. (2006) Hatzes, A. P., Cochran, W. D., Endl, M., et al. 2006, A&A, 457, 335
* Hatzes et al. (2018) Hatzes, A. P., Endl, M., Cochran, W. D., et al. 2018, AJ, 155, 120
* Hatzes et al. (2005) Hatzes, A. P., Guenther, E. W., Endl, M., et al. 2005, A&A, 437, 743
* Hekker et al. (2006) Hekker, S., Aerts, C., De Ridder, J., & Carrier, F. 2006, A&A, 458, 931
* Hekker et al. (2011) Hekker, S., Gilliland, R. L., Elsworth, Y., et al. 2011, MNRAS, 414, 2594
* Henry et al. (2000) Henry, G. W., Fekel, F. C., Henry, S. M., & Hall, D. S. 2000, ApJS, 130, 201
* Houk (1963) Houk, N. 1963, AJ, 68, 253
* Hunter (2007) Hunter, J. D. 2007, Computing In Science & Engineering, 9, 90
* Ita et al. (2002) Ita, Y., Tanabé, T., Matsunaga, N., et al. 2002, MNRAS, 337, L31
* Johnson et al. (2007) Johnson, J. A., Fischer, D. A., Marcy, G. W., et al. 2007, ApJ, 665, 785
* Johnson et al. (2011) Johnson, J. A., Payne, M., Howard, A. W., et al. 2011, AJ, 141, 16
* Jones et al. (2001–) Jones, E., Oliphant, T., Peterson, P., et al. 2001–, SciPy: Open source scientific tools for Python [Online]
* Jørgensen & Lindegren (2005) Jørgensen, B. R. & Lindegren, L. 2005, A&A, 436, 127
* Kallinger et al. (2010) Kallinger, T., Mosser, B., Hekker, S., et al. 2010, A&A, 522, A1
* Kiss & Bedding (2003) Kiss, L. L. & Bedding, T. R. 2003, MNRAS, 343, L79
* Kjeldsen & Bedding (1995) Kjeldsen, H. & Bedding, T. R. 1995, A&A, 293, 87
* Koleva & Vazdekis (2012) Koleva, M. & Vazdekis, A. 2012, A&A, 538, A143
* Korhonen (2014) Korhonen, H. 2014, in IAU Symposium, Vol. 302, Magnetic Fields throughout Stellar Evolution, ed. P. Petit, M. Jardine, & H. C. Spruit, 350–358
* Larson et al. (1993) Larson, A. M., Irwin, A. W., Yang, S. L. S., et al. 1993, PASP, 105, 825
* Lee et al. (2014) Lee, B. C., Han, I., Park, M. G., Hatzes, A. P., & Kim, K. M. 2014, A&A, 566, A124
* Lee et al. (2016) Lee, B.-C., Han, I., Park, M.-G., et al. 2016, AJ, 151, 106
* Lee et al. (2011) Lee, B.-C., Mkrtichian, D. E., Han, I., Kim, K.-M., & Park, M.-G. 2011, A&A, 529, A134
* Linsky & Haisch (1979) Linsky, J. L. & Haisch, B. M. 1979, ApJ, 229, L27
* Livio & Soker (1984) Livio, M. & Soker, N. 1984, MNRAS, 208, 763
* Lomb (1976) Lomb, N. R. 1976, Ap&SS, 39, 447
* Lovis et al. (2011) Lovis, C., Dumusque, X., Santos, N. C., et al. 2011, ArXiv e-prints [arXiv:1107.5325]
* Lovis & Pepe (2007) Lovis, C. & Pepe, F. 2007, A&A, 468, 1115
* Lyubimkov et al. (2019) Lyubimkov, L. S., Petrov, D. V., & Poklad, D. B. 2019, Astrophysics, 62, 338
* Maciejewski et al. (2013) Maciejewski, G., Niedzielski, A., Wolszczan, A., et al. 2013, AJ, 146, 147
* Marcy & Butler (1992) Marcy, G. W. & Butler, R. P. 1992, PASP, 104, 270
* Marcy & Butler (2000) Marcy, G. W. & Butler, R. P. 2000, PASP, 112, 137
* Marcy et al. (2005) Marcy, G. W., Butler, R. P., Vogt, S. S., et al. 2005, ApJ, 619, 570
* Markwardt (2009) Markwardt, C. B. 2009, in Astronomical Society of the Pacific Conference Series, Vol. 411, Astronomical Data Analysis Software and Systems XVIII, ed. D. A. Bohlender, D. Durand, & P. Dowler, 251
* Mayor & Queloz (1995) Mayor, M. & Queloz, D. 1995, Nature, 378, 355
* Meschiari et al. (2009) Meschiari, S., Wolf, A. S., Rivera, E., et al. 2009, PASP, 121, 1016
* Mosser et al. (2009) Mosser, B., Baudin, F., Lanza, A. F., et al. 2009, A&A, 506, 245
* Nicholls & Wood (2012) Nicholls, C. P. & Wood, P. R. 2012, MNRAS, 421, 2616
* Nicholls et al. (2009) Nicholls, C. P., Wood, P. R., Cioni, M. R. L., & Soszyński, I. 2009, MNRAS, 399, 2063
* Niedzielski et al. (2016a) Niedzielski, A., Deka-Szymankiewicz, B., Adamczyk, M., et al. 2016a, A&A, 585, A73
* Niedzielski et al. (2009a) Niedzielski, A., Goździewski, K., Wolszczan, A., et al. 2009a, ApJ, 693, 276
* Niedzielski et al. (2007) Niedzielski, A., Konacki, M., Wolszczan, A., et al. 2007, ApJ, 669, 1354
* Niedzielski et al. (2009b) Niedzielski, A., Nowak, G., Adamów, M., & Wolszczan, A. 2009b, ApJ, 707, 768
* Niedzielski et al. (2016b) Niedzielski, A., Villaver, E., Nowak, G., et al. 2016b, A&A, 588, A62
* Niedzielski et al. (2016c) Niedzielski, A., Villaver, E., Nowak, G., et al. 2016c, A&A, 589, L1
* Niedzielski et al. (2015a) Niedzielski, A., Villaver, E., Wolszczan, A., et al. 2015a, A&A, 573, A36
* Niedzielski & Wolszczan (2008a) Niedzielski, A. & Wolszczan, A. 2008a, in IAU Symposium, Vol. 249, IAU Symposium, ed. Y.-S. Sun, S. Ferraz-Mello, & J.-L. Zhou, 43–47
* Niedzielski & Wolszczan (2008b) Niedzielski, A. & Wolszczan, A. 2008b, in Astronomical Society of the Pacific Conference Series, Vol. 398, Extreme Solar Systems, ed. D. Fischer, F. A. Rasio, S. E. Thorsett, & A. Wolszczan, 71
* Niedzielski et al. (2015b) Niedzielski, A., Wolszczan, A., Nowak, G., et al. 2015b, ApJ, 803, 1
* Nowak (2012) Nowak, G. 2012, PhD thesis, Nicolaus Copernicus Univ., Toruń, Poland
* Nowak et al. (2013) Nowak, G., Niedzielski, A., Wolszczan, A., Adamów, M., & Maciejewski, G. 2013, ApJ, 770, 53
* O’Connell (1933) O’Connell, D. J. K. 1933, Harvard College Observatory Bulletin, 893, 19
* Oláh et al. (2018) Oláh, K., Rappaport, S., Borkovits, T., et al. 2018, A&A, 620, A189
* Oshagh et al. (2017) Oshagh, M., Santos, N. C., Figueira, P., et al. 2017, A&A, 606, A107
* Özavcı et al. (2018) Özavcı, I., Şenavcı, H. V., Isık, E., et al. 2018, MNRAS, 474, 5534
* Payne-Gaposchkin (1954) Payne-Gaposchkin, C. 1954, Annals of Harvard College Observatory, 113, 189
* Pepe et al. (2002a) Pepe, F., Mayor, M., Galland, F., et al. 2002a, A&A, 388, 632
* Pepe et al. (2002b) Pepe, F., Mayor, M., Galland, F., et al. 2002b, A&A, 388, 632
* Perryman & ESA (1997) Perryman, M. A. C. & ESA, eds. 1997, ESA Special Publication, Vol. 1200, The HIPPARCOS and TYCHO catalogues. Astrometric and photometric star catalogues derived from the ESA HIPPARCOS Space Astrometry Mission
* Pojmanski (1997) Pojmanski, G. 1997, Acta Astron., 47, 467
* Pollacco et al. (2006) Pollacco, D. L., Skillen, I., Collier Cameron, A., et al. 2006, PASP, 118, 1407
* Pont et al. (2011) Pont, F., Aigrain, S., & Zucker, S. 2011, MNRAS, 411, 1953
* Press et al. (1992) Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical recipes in FORTRAN. The art of scientific computing (Cambridge University Press)
* Queloz (1995) Queloz, D. 1995, in IAU Symposium, Vol. 167, New Developments in Array Technology and Applications, ed. A. G. D. Philip, K. Janes, & A. R. Upgren, 221
* Queloz et al. (2009) Queloz, D., Bouchy, F., Moutou, C., et al. 2009, A&A, 506, 303
* Queloz et al. (2001) Queloz, D., Henry, G. W., Sivan, J. P., et al. 2001, A&A, 379, 279
* Raghavan et al. (2010) Raghavan, D., McAlister, H. A., Henry, T. J., et al. 2010, ApJS, 190, 1
* Ramsey et al. (1998) Ramsey, L. W., Adams, M. T., Barnes, T. G., et al. 1998, in SPIE Conference Series, Vol. 3352, SPIE Conference Series, ed. L. M. Stepp, 34–42
* Reffert et al. (2006) Reffert, S., Quirrenbach, A., Mitchell, D. S., et al. 2006, ApJ, 652, 661
* Reichert et al. (2019) Reichert, K., Reffert, S., Stock, S., Trifonov, T., & Quirrenbach, A. 2019, A&A, 625, A22
* Ricker et al. (2015) Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003
* Robertson et al. (2013) Robertson, P., Endl, M., Cochran, W. D., & Dodson-Robinson, S. E. 2013, ApJ, 764, 3
* Saar & Donahue (1997) Saar, S. H. & Donahue, R. A. 1997, ApJ, 485, 319
* Saio et al. (2015) Saio, H., Wood, P. R., Takayama, M., & Ita, Y. 2015, MNRAS, 452, 3863
* Santerne et al. (2015) Santerne, A., Díaz, R. F., Almenara, J. M., et al. 2015, MNRAS, 451, 2337
* Santos et al. (2002) Santos, N. C., Mayor, M., Naef, D., et al. 2002, A&A, 392, 215
* Santos et al. (2014) Santos, N. C., Mortier, A., Faria, J. P., et al. 2014, A&A, 566, A35
* Santos et al. (2003) Santos, N. C., Udry, S., Mayor, M., et al. 2003, A&A, 406, 373
* Sato et al. (2003) Sato, B., Ando, H., Kambe, E., et al. 2003, ApJ, 597, L157
* Scargle (1982) Scargle, J. D. 1982, ApJ, 263, 835
* Sennhauser & Berdyugina (2011) Sennhauser, C. & Berdyugina, S. V. 2011, A&A, 529, A100
* Shetrone et al. (2007) Shetrone, M., Cornell, M. E., Fowler, J. R., et al. 2007, PASP, 119, 556
* Soszyński (2007) Soszyński, I. 2007, ApJ, 660, 1486
* Soszynski et al. (2007) Soszynski, I., Dziembowski, W. A., Udalski, A., et al. 2007, Acta Astron., 57, 201
* Soszyński & Udalski (2014) Soszyński, I. & Udalski, A. 2014, ApJ, 788, 13
* Soszynski et al. (2004a) Soszynski, I., Udalski, A., Kubiak, M., et al. 2004a, Acta Astron., 54, 129
* Soszynski et al. (2004b) Soszynski, I., Udalski, A., Kubiak, M., et al. 2004b, Acta Astron., 54, 347
* Soszyński et al. (2009) Soszyński, I., Udalski, A., Szymański, M. K., et al. 2009, Acta Astron., 59, 239
* Soszyński et al. (2011) Soszyński, I., Udalski, A., Szymański, M. K., et al. 2011, Acta Astron., 61, 217
* Soszyński et al. (2013) Soszyński, I., Udalski, A., Szymański, M. K., et al. 2013, Acta Astron., 63, 21
* Strassmeier et al. (1999) Strassmeier, K. G., Stȩpień , K., Henry, G. W., & Hall, D. S. 1999, A&A, 343, 175
* Tayar et al. (2019) Tayar, J., Stassun, K. G., & Corsaro, E. 2019, ApJ, 883, 195
* Tody (1993) Tody, D. 1993, in Astronomical Society of the Pacific Conference Series, Vol. 52, Astronomical Data Analysis Software and Systems II, ed. R. J. Hanisch, R. J. V. Brissenden, & J. Barnes, 173
* Tull (1998) Tull, R. G. 1998, in SPIE Conference Series, Vol. 3355, SPIE Conference Series, ed. S. D’Odorico, 387–398
* Turk et al. (2011) Turk, M. J., Smith, B. D., Oishi, J. S., et al. 2011, ApJS, 192, 9
* Veras (2016) Veras, D. 2016, Royal Society Open Science, 3, 150571
* Villaver & Livio (2009) Villaver, E. & Livio, M. 2009, ApJ, 705, L81
* Villaver et al. (2014) Villaver, E., Livio, M., Mustill, A. J., & Siess, L. 2014, ApJ, 794, 3
* Villaver et al. (2017) Villaver, E., Niedzielski, A., Wolszczan, A., et al. 2017, A&A, 606, A38
* Vogt et al. (1987) Vogt, S. S., Penrod, G. D., & Hatzes, A. P. 1987, ApJ, 321, 496
* Walker et al. (1992) Walker, G. A. H., Bohlender, D. A., Walker, A. R., et al. 1992, ApJ, 396, L91
* Walker et al. (1989) Walker, G. A. H., Yang, S., Campbell, B., & Irwin, A. W. 1989, ApJ, 343, L21
* Walt et al. (2011) Walt, S. v. d., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science and Engg., 13, 22
* Winn & Fabrycky (2015) Winn, J. N. & Fabrycky, D. C. 2015, ARA&A, 53, 409
* Wolszczan & Frail (1992) Wolszczan, A. & Frail, D. A. 1992, Nature, 355, 145
* Wood et al. (1999) Wood, P. R., Alcock, C., Allsman, R. A., et al. 1999, in IAU Symposium, Vol. 191, Asymptotic Giant Branch Stars, ed. T. Le Bertre, A. Lebre, & C. Waelkens, 151
* Wood et al. (2004) Wood, P. R., Olivier, E. A., & Kawaler, S. D. 2004, ApJ, 604, 800
* Woźniak et al. (2004) Woźniak, P. R., Vestrand, W. T., Akerlof, C. W., et al. 2004, AJ, 127, 2436
* Wray et al. (2004) Wray, J. J., Eyer, L., & Paczyński, B. 2004, MNRAS, 349, 1059
* Wright & Howard (2009) Wright, J. T. & Howard, A. W. 2009, ApJS, 182, 205
* Yu et al. (2020) Yu, J., Bedding, T. R., Stello, D., et al. 2020, MNRAS, 493, 1388
* Zieliński et al. (2012) Zieliński, P., Niedzielski, A., Wolszczan, A., Adamów, M., & Nowak, G. 2012, A&A, 547, A91
# Anomalous symmetry breaking in Weyl semimetal CeAlGe
H. Hodovanets,1,∗ C. J. Eckberg,1 Y. Eo, 1 D. J. Campbell,1 P. Y. Zavalij,2 P.
Piccoli,3 T. Metz,1 H. Kim,1,∗ J. S. Higgins,1 and J. Paglione1 1 Maryland
Quantum Materials Center, Department of Physics, University of Maryland,
College Park, Maryland, 20742 USA 2 X-ray Crystallographic Center, Department
of Chemistry and Biochemistry, University of Maryland, College Park, Maryland,
20742 USA 3 Department of Geology, University of Maryland, College Park,
Maryland, 20742 USA Current address: Department of Physics and Astronomy,
Texas Tech University, Lubbock, Texas, 79409 USA
###### Abstract
CeAlGe, a proposed type-II Weyl semimetal, orders antiferromagnetically below
5 K. At 2 K, spin-flop and spin-flip transitions to less than 1 $\mu_{B}$/Ce
are observed in the $M(H)$ data below 30 kOe (H$\|$a and b) and at 4.3 kOe
(H$\|\langle 110\rangle$), respectively, indicating a four-fold symmetry of
$M(H)$ along the principal directions in the tetragonal ab-plane, with the
$\langle 110\rangle$ set of easy directions. However, an anomalously
robust and complex two-fold symmetry is observed in the angular dependence of
resistivity and magnetic torque data in the magnetically ordered state once
the field is swept in the ab-plane. This two-fold symmetry is independent of
temperature- and field-hystereses and suggests a magnetic phase transition
that separates two different magnetic structures in the ab-plane. The boundary
of this magnetic phase transition can be tuned by different growth conditions.
Weyl semimetals have attracted much attention due to their intricate
properties associated with the topological manifestation of electronic band
structure and their potential application in spintronics, quantum bits,
thermoelectric and photovoltaic devices Wan _et al._ (2011); Weng _et al._
(2015); Hasan _et al._ (2017); Chang _et al._ (2017); Yan and Felser (2017);
Armitage _et al._ (2018). Magnetic semimetals that break spatial inversion
and time-reversal symmetries are relatively scarce, and it is especially hard
to confirm the breaking of the time-reversal symmetry in these materials Wan
_et al._ (2011); Witczak-Krempa and Kim (2012); Liu _et al._ (2014); Neupane
_et al._ (2014); Wang _et al._ (2016); Manna _et al._ (2018); Liu _et al._
(2018). The RAlGe and RAlSi (R = Ce and Pr) families present a new class of
magnetic Weyl semimetals where both inversion and time-reversal symmetries are
broken due to intrinsic magnetic order Chang _et al._ (2018); Hodovanets _et
al._ (2018); Puphal _et al._ (2019); Yang _et al._ (2020a); Lyu _et al._
(2020). With observations of a topological magnetic phase Puphal _et al._
(2020), anomalous Hall effect (AHE) Meng _et al._ (2019), topological Hall
effect Puphal _et al._ (2020), and singular angular magnetoresistance (AMR)
Suzuki _et al._ (2019), as well as a possible route to axial gauge fields
Destraz _et al._ (2020), RAlGe is particularly promising.
Noncentrosymmetric CeAlGe, a proposed type-II magnetic Weyl semimetal Chang
_et al._ (2018) that orders antiferromagnetically below 5 K in zero magnetic
field and ferrimagnetically in non-zero field Hodovanets _et al._ (2018),
hosts several incommensurate multi-$\vec{k}$ magnetic phases, including a
topological phase for H$\|$c Puphal _et al._ (2020). Motivated by the fact
that its magnetic moments lie in the tetragonal ab-plane, together with the
observation of a sharp singular AMR in its Si-substituted variant Suzuki _et
al._ (2019), we study pure CeAlGe using magnetization, M, angle-dependent
magnetic torque, $\tau(\varphi)$, and magnetoresistance, $R(\varphi)$
measurements. While we find the expected four-fold tetragonal symmetry in M(H)
when field is swept through the ab-plane, we also observe an anomalous two-
fold symmetry in both angle-dependent magnetic torque and AMR in the ordered
state. In contrast to conventional smoothly changing (i.e. sinusoidal) AMR in
magnetic conductors McGuire and Potter (1975), which is dependent on the
orientation of magnetization and current, the two-fold symmetric ab-plane AMR
of CeAlGe is remarkably history independent and unchanged under magnetic field
and temperature hystereses, highlighting possibilities for device
applications. We discuss the idea of two different magnetic structures in the
ordered state as a likely explanation for the observed two-fold symmetry, and
consider other possibilities.
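For reference, the conventional AMR of a magnetic conductor (McGuire and Potter, 1975) follows the smooth, two-fold-periodic form

```latex
\rho(\varphi)=\rho_{\perp}+\left(\rho_{\parallel}-\rho_{\perp}\right)\cos^{2}\varphi,
```

where $\varphi$ is the angle between the magnetization and the current; it is this smooth dependence on the instantaneous magnetization direction that the history-independent AMR of CeAlGe does not follow.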
Figure 1: (color online) Electrode configurations used in the different
resistance as a function of angle measurements. Figure 2: (color online) (a)
Field-dependent magnetization of CeAlGe for H$\|$a, H$\|$b, H$\|$[110], and
H$\|$[$\bar{1}\bar{1}$0]. $H_{1}$ denotes the lowest critical field (beginning
of the canted phase of the spin-flop transition) below which the hysteresis in
the M(H) data starts in the H$\|$a and H$\|$b data. $H_{2}$ marks a critical
field of spin-saturated ferromagnetic state. For H$\|\langle 110\rangle$
directions, a spin-flip transition occurs at the critical field marked
$H^{\prime}$. (b) Angular dependence of the resistivity (AMR) of CeAlGe single
crystal measured in the 4-probe configuration at T = 2 K with H = const and
I$\|$[010]. The field is swept in the tetragonal ab-plane. (c) AMR of CeAlGe
single crystal measured in the 4-probe configuration at H = 2.5 kOe and
selected T = const. (d), (e) AMR of CeAlGe single crystal measured in the 4-probe wire configuration at 2 K and 2.5 kOe with (d) different currents and (e) different conditions on approaching 2 K and 2.5 kOe: cool down from 3 K to 2 K in 2.5
kOe; at 2 K, changed the field from 1.5 to 2.5 kOe; at 2 K, swept through 0 Oe
(-30 to 30 kOe); warmed up to 10 K, set H = 50 kOe, cooled down to 2 K; warmed
up to 10 K, demagnetized with 50 kOe, zero-field cooled to 2 K; warmed up to
10 K at position 90∘, set H = 50 kOe, cooled down to 2 K, started off from 0∘;
warmed up to 10 K at position 90∘, set H = 50 kOe, cooled down to 2 K, started
off from 90∘.(f-h) Angle-dependent magnetic torque data at selected H = const
collected at 2 K. The data for H = 3 and 10 kOe are repeated for clarity.
Single crystals of CeAlGe were grown by the high-temperature flux method
Hodovanets _et al._ (2018); Canfield _et al._ (2016). Temperature-, field-,
and angle-dependent magnetization, resistivity, and magnetic torque
measurements were performed in a commercial cryostat. All angle-dependent data
were collected on changing the angle from 0∘ to 360∘ unless otherwise noted.
Resistivity measurements were made in a standard four-probe, van der Pauw van
der Pauw (1958), Hall bar or concentric ring (we will call it a 4-terminal
Corbino) geometry ($I$ = 0.5 or 1 mA), Fig. 1. The samples were polished and
shaped with care to not have any Al inclusions. Electrical contacts to the
samples in the four-probe, van der Pauw, and Hall bar geometry were made with
Au wires attached to the samples using EPOTEK silver epoxy and subsequently
cured at 100∘C. The 4-terminal Corbino was patterned using standard
photolithography followed by a standard metal liftoff. The patterns consist of
20-30 Å/1500 Å Ti/Au contacts made by e-beam evaporation. 25 $\mu$m Au wires
were attached to the gold electrodes by wire bonding. To calculate the
resistivity of the 4-terminal Corbino one needs a geometric factor, which is
difficult to estimate when the sample is not a 2D material or a thin film. To
estimate the geometric factor of a single crystal that has a finite thickness,
we used the effective thickness that was found numerically Eo _et al._
(2018). We note that this is a nominal resistivity since the resistivity ratio
between the ab-plane and the c-axis is not accurately known.
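The four-contact reduction referenced above maps two measured transfer resistances to a sheet resistance via the van der Pauw equation, which an (effective) thickness then converts to a resistivity. A small numerical sketch of that standard relation (illustrative only, not the authors' analysis code):

```python
import math

def vdp_sheet_resistance(r1, r2, tol=1e-12):
    """Solve the van der Pauw equation
    exp(-pi*r1/Rs) + exp(-pi*r2/Rs) = 1 for the sheet resistance Rs.
    f(Rs) is monotonically increasing, so bisection is sufficient."""
    def f(rs):
        return math.exp(-math.pi * r1 / rs) + math.exp(-math.pi * r2 / rs) - 1.0
    lo, hi = 1e-9 * (r1 + r2 + 1.0), 1e9 * (r1 + r2 + 1.0)
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def vdp_resistivity(r1, r2, thickness):
    """Resistivity from the sheet resistance and the (effective)
    sample thickness, rho = Rs * t."""
    return vdp_sheet_resistance(r1, r2) * thickness

# Symmetric check: for r1 = r2 = R, the closed form is Rs = pi * R / ln(2)
```

For the symmetric case the numerical solution reproduces the closed-form result Rs = πR/ln 2; for a thick single crystal, the thickness entering the last step would be the numerically determined effective thickness mentioned above.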
Figure 3: (color online) AMR for various electrode configurations and
measurement techniques, and magnetic torque of CeAlGe measured at T = 2 K with
(a)-(h) H = 2.5 kOe and (i)-(p) H = 5 kOe. For the sample with I$\|$[110], the
rotation started with the a-axis shifted by 45∘, thus the positions of the a\-
and b\- axes are marked in the graph. The data for the van der Pauw
configuration with I$\|$a, b, and [110] were measured on the same sample. The
sample for the 4-terminal Corbino measurement was mounted, due to the size
restrictions, with approximately -13∘ offset with respect to the a-axis. All
samples are from the same batch.
We now discuss the field-dependent magnetization $M(H)$ data at T = 2 K
measured for H$\|$a, b, [110], and [$\bar{1}\bar{1}$0] axes, shown in Fig.
2(a). For H$\|$a and b (circles), a clear, sharp spin-flop transition to a saturation moment of less than 1 $\mu_{B}$ is observed below $\sim$26 kOe, as was reported in Ref. Hodovanets _et al._ , 2018. The critical fields $H_{1}$ and
$H_{2}$ delineate the canted moment phase. On the contrary, the spin-flip
transition to a slightly higher value of saturated magnetization is observed
for H$\|$[110] and [$\bar{1}\bar{1}$0] data (squares) below $H^{\prime}$
= 4.3 kOe, indicating that the easy axes are the $\langle 110\rangle$ set of
directions. The data presented in Fig. 2(a) indicate a four-fold symmetry of
M(H) data in the tetragonal ab-plane.
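The critical fields $H_{1}$, $H_{2}$, and $H^{\prime}$ quoted here are, in practice, read off as sharp features in the susceptibility dM/dH. A hedged sketch of that extraction on synthetic spin-flop-like data; the step positions and widths below are invented for illustration and are not the measured curves:

```python
import numpy as np

def critical_fields(h, m, n=2):
    """Estimate metamagnetic critical fields as the n largest local
    maxima of the susceptibility dM/dH (simple peak picking; real
    data would need smoothing and hysteresis handling)."""
    chi = np.gradient(np.asarray(m), np.asarray(h))
    peaks = [i for i in range(1, len(chi) - 1)
             if chi[i] >= chi[i - 1] and chi[i] > chi[i + 1]]
    peaks.sort(key=lambda i: chi[i], reverse=True)
    return sorted(float(h[i]) for i in peaks[:n])

# Synthetic spin-flop-like curve with steps near H1 = 3 kOe and H2 = 26 kOe
h = np.linspace(0.0, 40.0, 801)
m = 0.3 * np.tanh((h - 3.0) / 0.5) + 0.5 * np.tanh((h - 26.0) / 1.0) + 0.8
```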
However, as shown in Fig. 2(b), a sharp two-fold symmetry change is observed
in the AMR data when the field is swept in the tetragonal ab-plane. This two-fold
symmetry sets in before the critical fields $H_{1}$ and $H^{\prime}$
(defined from the magnetization data) and is less apparent above $H_{2}$ where
the moments are in the field-saturated ferromagnetic state. Keeping the field
constant at H = 2.5 kOe, the AMR was measured at constant temperatures, Fig.
2(c). The two-fold symmetry holds only in the ordered state, below 5 K, thus
suggesting that the origin of this behavior is due to magnetic order. Neither
the magnitude of the current, nor the different conditions at which the 2 K
temperature is reached, nor at what angle the measurement is started have an
effect on the AMR features, at least for the H = 2.5 kOe data as shown in
Figs. 2(d) and (e), respectively. The AMR for CeAlGe cannot be simply scaled
based on the field-induced magnetization along either the a-axis or [110] axis
(see Fig. S1 below).
To further validate whether the two-fold symmetry in the AMR is due to the
magnetic order or due to the current direction, we measured magnetic torque in
the tetragonal ab-plane at H=const at 2 K, Figs. 2(f)-(h). Figure 2(f) shows
$\tau(\varphi)$ data with a clear two-fold symmetry and very complicated
functional dependence that cannot be fit by a series of even sine functions
below 10 kOe Okazaki _et al._ (2011); Kasahara _et al._ (2012). The magnetic
torque changes the location of the positive and negative maxima (a sign change) between 3 and 5 kOe, Fig. 2(g). This corresponds in the AMR to the appearance of plateaus at 45∘ (every 45∘) in Fig. 2(b) between H = 3.5 and 4 kOe. This
region separates the data into two different magnetic regimes and is more
evident in the AMR of the sample for which I$\|$[110], Fig. S1(a). Here, H =
3.5 kOe is further confirmed as a transition field. This value of magnetic
field is slightly above $H_{1}$(H$\|$a) and much lower than $H^{\prime}$(H$\|$[110]) in the
M(H) data, Fig. 2(a).
The two-fold anisotropy in the torque data decreases and is barely observable
at 10 kOe, Fig. 2(g), as the magnitude of the torque increases with the
magnetic field. We would like to note that the two-fold symmetry is observed
in the torque data at 5 K as well (SM below), which is above the magnetic
ordering temperature, indicating some moment fluctuation. As opposed to the
AMR, the magnetic torque data display a clear four-fold symmetry at a lower
magnetic field H = 20 kOe and above, Fig. 2(h). These observations point to
the magnetic order being the culprit behind the breaking of the four-fold symmetry
in the measurements (according to neutron studies Suzuki _et al._ (2019);
Puphal _et al._ (2020), the crystal structure of CeAlGe remains tetragonal
down to 2 K). This two-fold symmetry is more dramatic and enhanced in the
resistivity measurements.
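The crossover between the two-fold and four-fold behavior described above can be quantified by projecting an angle sweep onto its n = 2 and n = 4 harmonics. A minimal sketch with a synthetic signal; the amplitudes below are illustrative, not fitted to the CeAlGe data:

```python
import numpy as np

def symmetry_amplitudes(phi_deg, signal):
    """Amplitudes of the two-fold (180-degree period) and four-fold
    (90-degree period) harmonics of an angle sweep, obtained by
    discrete Fourier projection on a uniform full-circle grid."""
    phi = np.deg2rad(np.asarray(phi_deg))
    sig = np.asarray(signal)
    amps = {}
    for n in (2, 4):
        c = 2.0 * np.mean(sig * np.cos(n * phi))
        s = 2.0 * np.mean(sig * np.sin(n * phi))
        amps[n] = float(np.hypot(c, s))
    return amps

# Synthetic torque-like sweep: dominant two-fold plus weaker four-fold term
phi = np.arange(0.0, 360.0, 1.0)
tau = 1.5 * np.sin(2.0 * np.deg2rad(phi)) + 0.4 * np.sin(4.0 * np.deg2rad(phi))
amps = symmetry_amplitudes(phi, tau)
```

A dominant n = 2 amplitude in the ordered state and a dominant n = 4 amplitude at high field would mirror the torque behavior reported in Figs. 2(f)-(h).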
To further test the effect of the current and its direction, we measured the
AMR with different electrode configurations and techniques. The results are
shown in Fig. 3 for H = 2.5 and 5 kOe left and right panels, respectively. The
two-fold symmetry is present in all measurements. The sample with I$\|$c and
4-terminal Corbino sample display similar functional dependence in the AMR for
both H = 2.5 and 5 kOe. The van der Pauw sample, in-plane Hall sample, and
4-probe sample can be grouped together based on the similar behavior as well.
It is interesting to note that the van der Pauw sample shows similar behavior
in the resistance and Hall configurations. The AMR data of the sample with
I$\|$[110] appear to mirror the behavior of the magnetic torque data, i.e. the
45∘ shift of maxima between H = 2.5 and 5 kOe. As the field is increased to 5
kOe, the current chooses a different preferred direction, compared to that for
H = 2.5 kOe, and is a reflection of a change in the magnetic order/spin
structure between H = 2.5 and 5 kOe. This is more clearly seen in Figs. S1 and
S2. At 0.1 K a yet different functional behavior of AMR is observed for H =
2.5 kOe, Fig. S3, indicating an additional magnetic phase transition. No
functional change is observed for H = 5 kOe at 0.1 K compared to that at 2 K.
Neutron diffraction experiments have reported differing results on the
magnetic structure of CeAlGe. While Ref. Suzuki _et al._ , 2019 reported a
zero-field coplanar $Fd^{\prime}d2^{\prime}$ magnetic structure (i.e. m =
($m_{x},m_{y}$,0) on sublattice A and (-$m_{x},-m_{y}$,0) on sublattice B
related by a diamond glide-operation) and a collinear magnetic structure
(m$\|$[100]) in non-zero field with independent moments on the symmetrically
nonequivalent A and B sublattice sites, Ref. Puphal _et al._ , 2020 reported
an incommensurate multi-$\vec{k}$ magnetic ground state in zero field. This
magnetic phase changes to a single-$\vec{k}$ state at the metamagnetic
transition $H_{1}$ = 3 kOe (H$\|$a), consistent with our results. The
single-$\vec{k}$ state evolves into the field polarized ferromagnetic state at
$H_{2}\sim$9 kOe (lower than $H_{2}$ for this work) at 2 K. Thus, the two
different regimes seen in the AMR data would correspond to these two different
magnetic phases. Note that the sample studied in Ref. Puphal _et al._ , 2020
is stoichiometric and the ones in this work have $\sim 5\%$ deficiency in both
Al and Ge (see Table 2 in SM). Despite slightly different magnetic ordering
temperatures, the critical field $H_{1}$ is the same. As we discussed above,
the field close to $H_{1}$ determines the boundary of the two magnetic phases.
As is discussed in SM, a large Al deficiency, which depends on different
crystal growth conditions, changes the value of $H_{1}$ and $H^{\prime}$,
making them smaller, and perhaps changing the values of multi-$\vec{k}$
vectors (or magnetic structure altogether) since the features in the AMR in
the lower-field state become slightly different. The magnetic phase above
these two fields remains unchanged. Systematic magnetic structure studies are
needed to confirm this hypothesis.
In Ref. Suzuki _et al._ , 2019, the observed singular AMR was suggested to
arise, under particular conditions, from momentum space mismatch across real
space domains and was confined to a very narrow range of angles. These domains form a
single domain once the field is increased so that the sample is in the field
saturated ferromagnetic state. One may assume that if the magnetic field is
subsequently lowered, the sample would break into a different set of magnetic
domains and hence upon remeasuring AMR, a different functional dependence
would be observed. Instead, we still observe the same behavior no matter how many times the sample is warmed up above the ordering temperature and at which
field the sample is cooled down to 2 K, Fig. 2(e). It is plausible that
structural defects (e.g. micro-cracks in the samples after polishing since the
samples are rather brittle) or some arrangements of sub-micron Al inclusions
may “help” the formation of the domains and once domains are formed, they are
pinned and could only be changed if the defects are removed, e.g. by
controlled annealing in the former case. Such studies, together with the
visualization of magnetic domains Yang _et al._ (2020b) and defects or
pinning centers in the ordered state at constant applied magnetic field, would
be necessary. Alternatively, single crystals of CeAlGe can be grown using a
different flux (we discuss In-flux grown single crystal of CeAlGe in SM) or a
different single crystal growth technique can be utilized.
In conclusion, a clear two-fold symmetry is observed in the robust and sharp
non-sinusoidal AMR data in the ordered state when the magnetic field is swept
in the tetragonal ab-plane, revealing more detailed and complicated underlying
magnetic structures and the phase transitions between them. The current along
the b-axis enhances this two-fold symmetry compared to the current along [110]
and c-axes, although [110] axis is an easy axis. A clear separation of the AMR
data into two regimes based on the two distinct magnetic phases is observed in
the magnetic torque and AMR data at the magnetic field close to $H_{1}$ and
$H^{\prime}$. Al deficiency controls the value of these two critical fields
and changes the critical field of the phase transition between the two
different magnetic phases and most likely alters the magnetic structure of the
low-field phase in the type II Weyl semimetal CeAlGe.
Acknowledgments. The authors would like to thank J. Checkelsky and J. Lynn for
insightful discussions. H.H. would like to thank D. M. Benson and B. L.
Straughn for fruitful discussions. Materials synthesis efforts were supported
by the Gordon and Betty Moore Foundation’s EPiQS Initiative through grant no.
GBMF9071, and experimental investigations were supported by the Department of
Energy, Office of Basic Energy Sciences, under Award No. DE-SC0019154.
## References
* Wan _et al._ (2011) X. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov, Phys. Rev. B 83, 205101 (2011).
* Weng _et al._ (2015) H. Weng, C. Fang, Z. Fang, B. A. Bernevig, and X. Dai, Phys. Rev. X 5, 011029 (2015).
* Hasan _et al._ (2017) M. Z. Hasan, S.-Y. Xu, I. Belopolski, and S.-M. Huang, Annu. Rev. Condens. Matt. Phys. 8, 289 (2017).
* Chang _et al._ (2017) G. Chang, S.-Y. Xu, S.-M. Huang, D. S. Sanchez, C.-H. Hsu, G. Bian, Z.-M. Yu, I. Belopolski, N. Alidoust, H. Zheng, T.-R. Chang, H.-T. Jeng, S. A. Yang, T. Neupert, H. Lin, and M. Z. Hasan, Sci. Rep. 7, 1688 (2017).
* Yan and Felser (2017) B. Yan and C. Felser, Annu. Rev. Condens. Matt. Phys. 8, 337 (2017).
* Armitage _et al._ (2018) N. P. Armitage, E. J. Mele, and A. Vishwanath, Rev. Mod. Phys. 90, 015001 (2018).
* Witczak-Krempa and Kim (2012) W. Witczak-Krempa and Y. B. Kim, Phys. Rev. B 85, 045124 (2012).
* Liu _et al._ (2014) Z. K. Liu, B. Zhou, Y. Zhang, Z. J. Wang, H. M. Weng, D. Prabhakaran, S.-K. Mo, Z. X. Shen, Z. Fang, X. Dai, Z. Hussain, and Y. L. Chen, Science 343, 864 (2014).
* Neupane _et al._ (2014) M. Neupane, S.-Y. Xu, R. Sankar, N. Alidoust, G. Bian, C. Liu, I. Belopolski, T.-R. Chang, H.-T. Jeng, H. Lin, A. Bansil, F. Chou, and M. Z. Hasan, Nature Commun. 5, 3786 (2014).
* Wang _et al._ (2016) Z. Wang, M. G. Vergniory, S. Kushwaha, M. Hirschberger, E. V. Chulkov, A. Ernst, N. P. Ong, R. J. Cava, and B. A. Bernevig, Phys. Rev. Lett. 117, 236401 (2016).
* Manna _et al._ (2018) K. Manna, Y. Sun, L. Muechler, J. Kübler, and C. Felser, Nature Rev. Materials 3, 244 (2018).
* Liu _et al._ (2018) E. Liu, Y. Sun, N. Kumar, L. Muechler, A. Sun, L. Jiao, S.-Y. Yang, D. Liu, A. Liang, Q. Xu, J. Kroder, V. Süß, H. Borrmann, C. Shekhar, Z. Wang, C. Xi, W. Wang, W. Schnelle, S. Wirth, Y. Chen, S. T. B. Goennenwein, and C. Felser, Nature Phys. 14, 1125 (2018).
* Chang _et al._ (2018) G. Chang, B. Singh, S.-Y. Xu, G. Bian, S.-M. Huang, C.-H. Hsu, I. Belopolski, N. Alidoust, D. S. Sanchez, H. Zheng, H. Lu, X. Zhang, Y. Bian, T.-R. Chang, H.-T. Jeng, A. Bansil, H. Hsu, S. Jia, T. Neupert, H. Lin, and M. Z. Hasan, Phys. Rev. B 97, 041104 (2018).
* Hodovanets _et al._ (2018) H. Hodovanets, C. J. Eckberg, P. Y. Zavalij, H. Kim, W.-C. Lin, M. Zic, D. J. Campbell, J. S. Higgins, and J. Paglione, Phys. Rev. B 98, 245132 (2018).
* Puphal _et al._ (2019) P. Puphal, C. Mielke, N. Kumar, Y. Soh, T. Shang, M. Medarde, J. S. White, and E. Pomjakushina, Phys. Rev. Materials 3, 024204 (2019).
* Yang _et al._ (2020a) H.-Y. Yang, B. Singh, B. Lu, C.-Y. Huang, F. Bahrami, W.-C. Chiu, D. Graf, S.-M. Huang, B. Wang, H. Lin, D. Torchinsky, A. Bansil, and F. Tafti, APL Materials 8, 011111 (2020a).
* Lyu _et al._ (2020) M. Lyu, J. Xiang, Z. Mi, H. Zhao, Z. Wang, E. Liu, G. Chen, Z. Ren, G. Li, and P. Sun, arXiv:2001.05398 (2020).
* Puphal _et al._ (2020) P. Puphal, V. Pomjakushin, N. Kanazawa, V. Ukleev, D. J. Gawryluk, J. Ma, M. Naamneh, N. C. Plumb, L. Keller, R. Cubitt, E. Pomjakushina, and J. S. White, Phys. Rev. Lett. 124, 017202 (2020).
* Meng _et al._ (2019) B. Meng, H. Wu, Y. Qiu, C. Wang, Y. Liu, Z. Xia, S. Yuan, H. Chang, and Z. Tian, APL Materials 7, 051110 (2019).
* Suzuki _et al._ (2019) T. Suzuki, L. Savary, J.-P. Liu, J. W. Lynn, L. Balents, and J. G. Checkelsky, Science 365, 377 (2019).
* Destraz _et al._ (2020) D. Destraz, L. Das, S. S. Tsirkin, Y. Xu, T. Neupert, J. Chang, A. Schilling, A. G. Grushin, J. Kohlbrecher, L. Keller, P. Puphal, E. Pomjakushina, and J. S. White, npj Quantum Materials 5, 5 (2020).
* McGuire and Potter (1975) T. McGuire and R. Potter, IEEE Transactions on Magnetics 11, 1018 (1975).
* Canfield _et al._ (2016) P. C. Canfield, T. Kong, U. S. Kaluarachchi, and N. H. Jo, Phil. Mag. 96, 84 (2016).
* van der Pauw (1958) L. J. van der Pauw, Philips Research Reports 13, 1 (1958).
* Eo _et al._ (2018) Y. S. Eo, K. Sun, Ç. Kurdak, D.-J. Kim, and Z. Fisk, Phys. Rev. Applied 9, 044006 (2018).
* Okazaki _et al._ (2011) R. Okazaki, T. Shibauchi, H. J. Shi, Y. Haga, T. D. Matsuda, E. Yamamoto, Y. Onuki, H. Ikeda, and Y. Matsuda, Science 331, 439 (2011).
* Kasahara _et al._ (2012) S. Kasahara, H. J. Shi, K. Hashimoto, S. Tonegawa, Y. Mizukami, T. Shibauchi, K. Sugimoto, T. Fukuda, T. Terashima, A. H. Nevidomskyy, and Y. Matsuda, Nature 486, 382 (2012).
* Yang _et al._ (2020b) H.-Y. Yang, B. Singh, J. Gaudet, B. Lu, C.-Y. Huang, W.-C. Chiu, S.-M. Huang, B. Wang, F. Bahrami, B. Xu, J. Franklin, I. Sochnikov, D. E. Graf, G. Xu, Y. Zhao, C. M. Hoffman, H. Lin, D. H. Torchinsky, C. L. Broholm, A. Bansil, and F. Tafti, (2020b), arXiv:2006.07943 .
## Appendix A Supplementary Materials
Figure S1: (color online) (a) AMR of CeAlGe single crystal measured in the
four-probe wire configuration at 2 K, I$\|$[110]. AMR data are clearly split
into two regimes at H = 3.5 kOe - a value of the phase transition between the
two different magnetic states. Field-dependent resistivity of CeAlGe measured
in the 4-probe wire configuration at constant angles at 2 K (b) I$\|$[010] and
(c) I$\|$[110]. Insets show a zoom-in of the low-field data. Figure S2: (color
online) Angle-dependent magnetic torque of CeAlGe single crystal measured at
(a) H = 2 kOe and (b) H = 5 kOe. Figure S3: (color online) Angle-dependent
resistivity of CeAlGe single crystal measured in four-probe wire
configuration, I$\|$[010], at $T\leq$2 K for H = 2.5 and 5 kOe.
## Appendix B Angle- and field-dependent resistivity
Figure S1(a) shows the AMR with I$\|$[110] at 2 K. The shape of the AMR for I$\|$[110] is very different from that for I$\|$b discussed in the main text. The two-fold symmetry is still visible here but is not as drastic.
Interestingly, all peaks at every 90∘ that point up for H = 3.25 kOe point down at H = 3.75 kOe. In between, at H = 3.5 kOe, they alternate: the ones at 90∘ and 270∘ turn down, while those at 0∘/360∘ and 180∘ stay up. These fields are slightly above $H_{1}$(H$\|$a) and much lower than $H^{\prime}$(H$\|$[110]) in the M(H)
data, Fig. 2(a). Looking at Fig. 2(b), the corresponding feature in the AMR
data for I$\|$b is the appearance of dips (H = 3.5 kOe) that turn into
plateaus (H = 4 kOe) at 45∘ (every 45∘). Perhaps the narrow region between
$H_{1}$ and $H^{\prime}$ critical fields in the M(H) data of spin
reorientation is captured here. This phase transition at H = 3.5 kOe seems to divide the AMR into two different regimes.
The field-dependent resistivity data collected every 45∘ at 2 K are shown in
Figs. S1(b) I$\|$b and S1(c) I$\|$[110], respectively. The data fall into
three manifolds for I$\|$b and two manifolds for I$\|$[110] in the ordered
state. For I$\|$b, Fig. S1(b), as opposed to the M(H) data, $\rho(H)$ data
show a clear two-fold symmetry for H$\|$[100] and [010] directions and a clear
four-fold symmetry for H$\|\langle 110\rangle$ directions in the ordered
state. This would be consistent with the current breaking the four-fold
symmetry for 90∘ rotations. On the contrary, for I$\|$[110], Fig. S1(c), the
$\rho(H)$ data seem to follow M(H) behavior except below H = 3 kOe, inset to
Fig. S1(c), where the data for 0∘ (180∘ and 360∘) and 90∘(270∘) are not the
same, i.e. the four-fold symmetry is broken. The difference in the data is not
due to the hysteresis since the data were collected with the same approach.
Thus, the two-fold symmetry in the $\rho(H)$ data for I$\|$[110] is more
subtle. Negative magnetoresistance in the ordered state is followed by a
positive magnetoresistance at the field close to $H_{2}$ with almost no
anisotropy in the field-saturated state for I$\|$b and small anisotropy for
I$\|$[110]. The features in the ordered state are consistent with those
observed in the M(H) data shown in Fig. 2(a), except that for the I$\|$b sample at 0∘ and 180∘ a clear, sharp change in $\rho(H)$ is seen at about 16 kOe. There is no corresponding sharp feature in the M(H) data, Fig. 2(a). The hysteresis in the
data below H = 4 kOe, consistent with that seen in the M(H) data, is evident
in the data shown in the inset to Fig. S1(b). One thus should expect
hysteresis on increasing and decreasing the angle in the AMR data as well.
## Appendix C Magnetic torque
The torque magnetometry option of Quantum Design Physical Properties
Measurement System was used to collect magnetic torque data. The sample for
this measurement was secured with the help of N grease. The background of the
Si torque chip with the N grease and sample was measured at 90∘ and was
accounted for in the final result. A markedly different evolution of the angle-dependent magnetic torque with temperature, reflecting different magnetic states, at H = 2.5 and 5 kOe is shown in Figs. S2(a) and S2(b),
respectively. The absolute value of the torque decreases as the temperature
increases. At 5 K which is just above the ordering temperature, the torque
data for these two magnetic fields become similar and at 6.5 K, the torque is
almost zero, reflecting the paramagnetic state. Interestingly, the torque data at 5 K are still two-fold symmetric, perhaps indicating some moment fluctuations.
## Appendix D Lower than 2 K temperature
Upon lowering the temperature to 0.1 K, the shape of the AMR for I$\|$b
changes at H = 2.5 kOe, Fig. S3. However, the shape of the AMR at H = 5 kOe remains unchanged, except that the peaks become narrower.
magnetic phase below 2 K with the critical field less than 2.5 kOe.
## Appendix E Single crystals grown under different conditions
Two additional batches of CeAlGe single crystals were grown under the following conditions: (i) cerium ingot from Ames Laboratory with Al flux and (ii) cerium ingot from Alfa Aesar with In flux. Cerium from Alfa Aesar was also used to grow the crystals presented in the main text. A Canfield crucible with the frit Canfield _et al._
(2016) was used in these two growths to prevent Si substitution from the
quartz wool in the catch crucible. The batches of crystals were grown at
different times with the same temperature profile in the same Lindberg/Blue M
1500∘C box furnace.
Lattice parameters determined through single-crystal x-ray diffraction analysis are shown in Table 1. The sample with Ames Ce shows only a slightly larger c-axis. The In-flux-grown sample has both lattice parameters smaller than those of the Al-flux-grown samples.
Table 1: Lattice parameters data determined through single-crystal x-ray diffraction of CeAlGe single crystals grown with different conditions. Space group I41md (No. 109). All data were collected at 250 K on Bruker APEX-II CCD system equipped with a graphite monochromator and a MoK$\alpha$ sealed tube (wavelength $\lambda$ = 0.71070 $\mathrm{\AA}$). Lattice parameters | frit (main text) | Ames Cerium/frit | In flux/frit
---|---|---|---
$a$($\mathrm{\AA}$) | 4.2920(2) | 4.2930(2) | 4.2875(2)
$b$($\mathrm{\AA}$) | 4.2920(2) | 4.2930(2) | 4.2875(2)
$c$($\mathrm{\AA}$) | 14.7496(4) | 14.7631(7) | 14.7197(7)
Table 2: Results of WDS with 2 standard deviations for CeAlGe single crystals grown under different conditions. The rows represent different samples. Chemical elements | frit (main text) | Ames Ce/frit | In flux/frit
---|---|---|---
Ce | 1 | 1 | 1
Ge | 0.95(1) | 0.96(2) | 0.98(1)
Al | 0.98(2) | 0.83(2) | 0.90(1)
Ce | 1 | 1 |
Ge | 0.95(4) | 0.95(1) |
Al | 0.95(1) | 0.81(2) |
To determine stoichiometry of the samples, single crystals were analyzed using
a JEOL 8900R electron probe microanalyzer at the Advanced Imaging and
Microscopy Laboratory (AIMLab) in the Maryland Nanocenter using standard
wavelength dispersive spectroscopy (WDS) techniques. The following analytical
conditions were utilized: 15 kV accelerating voltage; 50 nA sample current; a
1 micron beam; and synthetic Al and Ge metal and CePO4 standards. Both K-alpha
(Al, Ge) and L-alpha (Ce) x-ray lines were used. Count times ranged from 20-30
seconds on peak, and 5-10 on background. Raw x-ray intensities were corrected
using a standard ZAF algorithm. The standard deviation due to counting
statistics was generally below 0.5$\%$, 0.3$\%$, and 0.25$\%$ for Ge, Al, and
Ce, respectively. Based on the total amounts recorded for Ce, Al, and Ge, no additional In doping was detected. WDS results are listed in Table 2. A small and nearly identical Ge deficiency is observed in all samples, with the In-flux-grown sample having a Ge concentration closest to 1. However, the Al deficiency varies considerably among different batches. The first batch listed in Table 2 shows about 5$\%$ Al deficiency. Crystals grown with Ames Ce show a surprisingly large Al deficiency of nearly 20$\%$. The In-flux-grown crystal is $\sim$10$\%$ Al deficient. There appears to be no correlation between lattice
parameters and either Ge or Al deficiency. Both In and Al inclusions were
observed in In-grown single crystals.
Zero-field cooled (ZFC) and field-cooled (FC) temperature- and field-dependent
magnetization for the batch with Ames Ce/Al flux and In-flux grown single
crystals of CeAlGe are shown in Fig. S4 (a) and (c), respectively. The
ordering temperature of 5 K is the same. The effective moments calculated from
the Curie-Weiss law fits of the polycrystalline average (not shown here) are
consistent with the WDS data. A large difference is seen in the ZFC and FC data below 4.5 K. The temperature-dependent magnetization at 100 Oe is also larger for the In-grown sample. This is consistent with the M(H) data shown in Fig. S4(d)
where $H_{1}$ = 1 kOe (it may even be lower; the data were collected with a 1 kOe step) as opposed to $H_{1}$ = 2 kOe for the Ames Ce sample, Fig. S4(b). $H_{1}$
critical fields for both samples and $H^{\prime}$ for Ames Ce sample are lower
than those of the sample presented in the main text. On the other hand,
$H_{2}$ critical fields are somewhat similar. It appears that Al deficiency
affects the values of critical fields of spin reorientation $H_{1}$ and
$H^{\prime}$.
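The effective moments mentioned above follow from a Curie-Weiss fit of the inverse susceptibility, $1/\chi = (T-\theta)/C$, with $\mu_{\mathrm{eff}}=\sqrt{8C}\,\mu_{B}$ in CGS molar units. A hedged sketch on synthetic data with a Ce$^{3+}$-like moment (the dataset below is invented for illustration, not the measured polycrystalline average):

```python
import numpy as np

def curie_weiss_fit(T, chi):
    """Linear fit of the inverse susceptibility, 1/chi = (T - theta)/C.
    Returns the Curie constant C (emu K/mol/Oe), the Weiss temperature
    theta (K), and mu_eff = sqrt(8 C) in Bohr magnetons (CGS units)."""
    slope, intercept = np.polyfit(np.asarray(T), 1.0 / np.asarray(chi), 1)
    C = 1.0 / slope
    theta = -intercept * C
    mu_eff = float(np.sqrt(8.0 * C))
    return float(C), float(theta), mu_eff

# Synthetic paramagnetic data with a Ce3+-like moment (mu_eff ~ 2.54 mu_B)
T = np.linspace(50.0, 300.0, 100)
chi = 0.807 / (T + 10.0)   # C = 0.807 emu K/mol/Oe, theta = -10 K
```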
M(H) data for H$\|$c show different behavior. The magnetic moment along the
c-axis for In-flux sample is larger than that of Ames Ce sample and the field
at which $M_{H\|c}>M_{H\|a}$ is lower. Two metamagnetic transitions with
hysteresis are seen below H = 20 kOe for the In-grown sample as well.
The temperature-dependent resistivity of the samples from these batches, measured in the standard 4-probe configuration, is shown in Fig. S5 together with that of the batch (measured with different techniques) described in the main text. The $\rho(T)$ data for the Al-flux grown samples show good agreement. The $\rho(T)$ data for I$\|$c are 1.6 times larger at 300 K and 1.8 times larger at 2 K than those for I$\|$b, indicating that the b-axis is more conductive.
In-flux grown samples show larger resistivity values. The feature around 7 K
is more pronounced and the feature associated with the magnetic order is less
pronounced compared to those for the Al-flux grown samples.
The AMR data measured in the 4-probe configuration with I$\|$b at 2 K at
different constant magnetic fields (the field was rotated in the tetragonal
ab-plane) for the Ames Ce/Al-flux and In-flux grown samples are shown in Fig.
S6(a) and (b), respectively. The AMR shows similar features for H$\geq$ 2 kOe.
The same behavior was observed for H$\geq$ 5 kOe in the sample discussed in
the main text, Fig. 2(b). This corresponds to the region above $H^{\prime}$
(which is different for these samples), however, the behavior of the AMR data
is the same, reflecting the same magnetic state for all three samples above
$H^{\prime}$. On the contrary, the features in the AMR are different below
$H^{\prime}$ for all three samples. The onset of the large and clear two-fold
symmetry occurs at a much lower field of 1 kOe (perhaps even at a smaller
field) as opposed to 2.5 kOe for the sample discussed in the main text, due to $H_{1}$ for these two samples being much lower than that for the sample discussed in the main text.
Al deficiency appears to affect the critical fields $H_{1}$ and $H^{\prime}$, making them smaller than those of a sample closer to Al stoichiometry. This causes the second AMR regime, common to all samples, to appear above fields as small as 2 kOe. In addition, Al deficiency seems to affect the spin orientation/structure in the first regime below $H^{\prime}$, as is evident from the different functional dependence of the AMR.
Figure S4: (color online) (a) ZFC and FC temperature-dependent magnetization
of CeAlGe single crystals grown using Ames cerium. (b) M(H) data of
CeAlGe/Ames cerium and Al flux, (c) ZFC and FC temperature-dependent
magnetization of CeAlGe single crystals grown using In flux, and (d) M(H) data
of CeAlGe/In flux. Figure S5: (color online) (a) Temperature-dependent
resistivity of CeAlGe single crystals grown under different conditions and
measured in different electrode configurations. (b), (c), and (d) Low-
temperature part of the temperature-dependent resistivity showing the features
associated with the magnetic order. Figure S6: (color online) Angle-dependent
resistivity of CeAlGe single crystal measured in four-probe wire configuration
at 2 K and H = 2.5 kOe. (a) Ce from Ames Laboratory was used, Al-flux was used
to grow CeAlGe single crystals. (b) In-flux was used to grow CeAlGe single
crystals.
# The efficacy of antiviral drug, HIV viral load and the immune response
Mesfin Asfaw Taye West Los Angeles College, Science Division
9000 Overland Ave, Culver City, CA 90230, USA
###### Abstract
Developing antiviral drugs is an exigent task since viruses mutate to overcome
the effect of antiviral drugs. As a result, the efficacy of most antiviral
drugs is short-lived. To include this effect, we modify the Neumann and Dahari
model. Considering the fact that the efficacy of the antiviral drug varies in
time, the differential equations introduced in the previous model systems are
rewritten to study the correlation between the viral load and antiviral drug.
The effect of an antiviral drug that either prevents infection or stops the production of the virus is explored. First, the efficacy of the drug is considered to decrease monotonically as time progresses. In this case, our results show that when the efficacy of the drug is low, the viral load decreases and then rebounds in time, revealing that the effect of the antiviral drug is short-lived. On the other hand, for an antiviral drug with high efficacy, the viral load, as well as the number of infected cells, decreases monotonically while the number of uninfected cells increases. The dependence of the critical drug efficacy on time is also explored. Moreover, the correlation between the viral load, the antiviral drug, and the CTL response is examined. In this case, not only is the dependence of the basic reproduction ratio on the model parameters explored, but the critical drug efficacy is also analyzed as a function of time. We show that the term related to the basic reproduction ratio increases when the CTL response steps up. A simple
analytically solvable mathematical model is also presented to analyze the
correlation between viral load and antiviral drugs.
## I Introduction
Viruses are tiny particles that are found almost everywhere and have properties between those of living and non-living things. As they are not capable of reproducing on their own, they rely on host cells to replicate. To gain access, the virus first binds to and enters the host cell. Once the virus is inside the cell, it releases its genetic material into the host cell and then starts manipulating the cell to multiply its viral genome. Once the viral proteins are produced and assembled, the new virus leaves the cell in search of other host cells. Some viruses can also stay in the host cell for a long time in a latent or chronic state. The genetic information of a virus is stored either in the form of RNA or DNA. Depending on the type of virus, the host cell type also varies. For instance, the Human Immunodeficiency Virus (HIV) (see Fig. 1 muu1 ) directly affects lymphocytes. Lymphocytes can be categorized into two main classes: the B and T cells. The B cells attack the virus directly by producing specific antibodies. The T cells, on the other hand, can be categorized as killer cells (CD8) and helper cells (CD4). Contrary to CD8 cells, CD4 cells only give a warning signal so that cells such as CD8 and B cells can directly attack the virus mes5 ; muu2 . Although most viruses contain DNA as genetic material, retroviruses such as HIV store their genetic material as RNA. During their life cycle, these viruses translate their RNA into DNA using an enzyme called reverse transcriptase. In the case of HIV, once the virus infects the patient, a high viral load follows for the first few weeks, and then replication becomes steady for several years. As a result, the number of CD4 cells (the host cells for HIV) decreases considerably. When the CD4 count falls below a certain threshold, the patient develops AIDS.
Figure 1: (Color online) Schematic diagram for HIV virion muu1 .
To tackle the spread of virulent viruses such as HIV, discovering potent antiviral drugs is vital. However, developing antiviral drugs is an exigent task since viruses mutate to overcome their effect; because of this, only a few antiviral drugs are currently available, most of which were developed to treat HIV and herpes viruses. The fact that viruses are obligate parasites of the cell makes drug discovery complicated, since the drug’s adverse effects directly affect the host cells. Many medically important viruses are also virulent and hence cannot be propagated or tested in animal models, which in turn makes drug discovery lengthy. Moreover, unlike other antimicrobial drugs, antiviral drugs have to be 100 percent potent to completely avoid drug resistance. In other words, if a drug only partially inhibits the replication of the virus, the resistant viruses will in time dominate the cell culture. All of the above factors significantly hinder drug discovery. Furthermore, even a potent antiviral drug does not guarantee a cure if an acute infection is already established.
Understanding the dynamics of the virus in vivo or in vitro is crucial since viral diseases are a major global health concern. For instance, the recent outbreak of COVID-19 not only cost trillions of dollars but also killed more than 217,721 people in the USA alone. To control such a global pandemic, developing an effective therapeutic strategy is vital. Particularly in the case of virulent viruses, mathematical modeling along with the antiviral drug helps in understanding the dynamics of the virus in vivo mes2 . The pioneering mathematical models of the Human Immunodeficiency Virus presented in the works mes1 ; mes3 ; mes4 ; mu1 ; mu2 ; mu3 ; mu4 ; mu5 ; mu6 ; mu7 shed light on the host-virus correlation. Later, these models were modified by Neumann et al. mes1 ; mu8 to study the dynamics of HCV during treatment. To study the dependence of the uninfected cells, infected cells, and viral load on the model parameters, Neumann proposed three differential equations. More recently, to explain the observed HCV RNA profiles during treatment, Dahari et al. mes1 ; mu9 extended the original Neumann model. Their findings disclose that the critical drug efficacy plays a decisive role: when the efficacy is greater than the critical value, the HCV is cleared; on the contrary, when the efficacy of the drug is below the critical threshold, the virus keeps infecting the target cells.
As discussed before, the effect of antiviral drugs is short-lived since the virus mutates during the course of treatment. To include this effect, we modify the Neumann and Dahari models. Considering that the efficacy of the antiviral drug decreases in time, we rewrite the three differential equations introduced in the previous model systems. The mathematical model presented in this work analyzes the effect of an antiviral drug that either prevents infection ($e_{k}$) or stops the production of virus ($e_{p}$). First, we consider a case where the efficacy of the drug decreases to zero as time progresses, and we then discuss the case where the efficacy of the drug decreases to a constant value as time evolves, maintaining the relations $e_{p}=e_{p}^{\prime}(1+e^{-rt})/m$ and $e_{k}=e_{k}^{\prime}(1+e^{-rt})/m$. Here $r$, $e_{k}^{\prime}$ and $e_{p}^{\prime}$ measure the ability of the antiviral drug to overcome drug resistance. When $r$ increases, the efficacy of the drug decays faster. The results obtained in this work depict that for large $r$, the viral load decreases and then increases back as the antiviral drug is administered, showing that the effect of antiviral drugs is short-lived. On the other hand, for small $r$, the viral load as well as the number of infected cells decreases monotonically while the number of host cells increases. The dependence of the critical drug efficacy on time is also explored.
The correlation between viral load, antiviral therapy, and the cytotoxic T lymphocyte (CTL) immune response is also explored. We explore not only the dependence of the basic reproduction ratio on the model parameters but also the critical drug efficacy as a function of time. The basic reproduction ratio increases when the CTL response declines; when the viral load inclines, the CTL response steps up. We also present a simple analytically solvable mathematical model to address the correlation between drug resistance and antiviral drugs.
The rest of the paper is organized as follows: in Section II, we explore the correlation between antiviral treatment and viral load. In Section III, the relation between viral load, antiviral therapy, and the CTL immune response is examined. A simple analytically solvable mathematical model that addresses the correlation between drug resistance and viral load is presented in Section IV. Section V presents a summary and conclusion.
## II The relation between antiviral drug and virus load
In the last few decades, mathematical modeling combined with antiviral drugs has helped to develop therapeutic strategies. The first model that describes the dynamics of the host cells $x$, viral load $v$, and infected cells $y$ as a function of time $t$ was introduced in the works mu1 ; mu2 ; mu3 ; mu4 ; mu5 . Accordingly, the dynamics of the host cells, infected cells, and virus is governed by
$\displaystyle{\dot{x}}$ $\displaystyle=$ $\displaystyle\lambda-dx-\beta xv,$
$\displaystyle{\dot{y}}$ $\displaystyle=$ $\displaystyle\beta xv-ay,$
$\displaystyle{\dot{v}}$ $\displaystyle=$ $\displaystyle ky-uv.$ (1)
The host cells are produced at a rate $\lambda$ and die naturally at a constant rate $d$ with a half-life of $x_{t_{1\over 2}}={\ln(2)\over d}$. The target cells become infected at a rate $\beta$, and infected cells die at a rate $a$ with a corresponding half-life of $y_{t_{1\over 2}}={\ln(2)\over a}$. On the other hand, the viruses are produced at a rate $k$ and die at a rate $u$ with a half-life of $v_{t_{1\over 2}}={\ln(2)\over u}$ mu10 ; mu11 ; mu12 ; mu14 . In this model, only the interaction between the host cells and the viruses is considered, neglecting other cellular activities. The host cells, the infected cells, and the viruses have average lifespans of $1/d$, $1/a$, and $1/u$, respectively. During its lifespan, one infected cell produces $N=k/a$ viruses on average muu2 ; mu12 ; mu14 .
The capacity for the virus to spread can be determined via the basic
reproductive ratio
$\displaystyle R_{0}={\lambda\beta k\over adu}.$ (2)
Whenever $R_{0}>1$, the virus spreads while when $R_{0}<1$ the virus will be
cleared by the host immune system muu2 .
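As a quick numerical check of Eqs. (1) and (2), the sketch below (a Python/SciPy illustration, not part of the original analysis) integrates the basic model with the parameter values used later in the figures of this paper; the initial condition of a single virion in a fully uninfected population is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter values used in the figures of this paper (rates per day)
lam, d, a, beta, k, u = 1e7, 0.1, 0.5, 2e-9, 1000.0, 5.0

# Basic reproductive ratio, Eq. (2)
R0 = lam * beta * k / (a * d * u)
print("R0 =", R0)  # R0 is 80 for these values: greater than 1, so the virus spreads

def rhs(t, s):
    """Right-hand side of Eq. (1): host cells x, infected cells y, virus v."""
    x, y, v = s
    return [lam - d * x - beta * x * v,
            beta * x * v - a * y,
            k * y - u * v]

# Start from the infection-free state x = lam/d with one virion (assumption)
sol = solve_ivp(rhs, (0, 200), [lam / d, 0.0, 1.0], rtol=1e-6)
print("viral load after 200 days:", sol.y[2, -1])
```

Since $R_{0}>1$ here, the integration settles near the chronic steady state of the model rather than clearing the virus.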
To examine the dependence of the uninfected cells, infected cells, and viral load on the system parameters during antiviral treatment, the above model (Eq. (1)) was modified by Neumann et al. mu8 and Dahari et al. mu9 . The modified mathematical model presented in those works analyzes the effect of antiviral drugs that either prevent the infection of new cells ($e_{k}$) or stop the production of virions ($e_{p}$). In this case, the above equations can be rewritten to include the effect of antiviral drugs as
$\displaystyle{\dot{x}}$ $\displaystyle=$ $\displaystyle\lambda-
dx-(1-e_{k})\beta xv$ $\displaystyle{\dot{y}}$ $\displaystyle=$
$\displaystyle(1-e_{k})\beta xv-ay$ $\displaystyle{\dot{v}}$ $\displaystyle=$
$\displaystyle(1-e_{p})ky-uv$ (3)
where the terms $e_{k}$ and $e_{p}$ account for an antiviral drug that blocks infection and virion production, respectively. For instance, $e_{p}=0.8$ indicates that the drug blocks virus production with $80$ percent efficacy. An antiviral drug such as a protease inhibitor prevents the infected cell from producing the correct Gag protein, and as a result the newly produced virions become noninfectious. A drug such as a reverse transcriptase inhibitor prevents the infection of new cells.
Moreover, the results obtained in the last few decades depict that an HIV patient usually shows a high viral load in the first few weeks of infection. The viral load then peaks and starts declining for a few weeks. The viruses then keep replicating for many years until the patient develops AIDS. Since virus replication is prone to errors, the virus often develops drug resistance; HIV mutates to become drug-resistant. Particularly when an antiviral drug is administered individually, the ability of the virus to develop drug resistance steps up. However, triple-drug therapy, which combines one protease inhibitor with two reverse transcriptase inhibitors, helps to reduce the viral load for many years muu2 .
Since the efficacy of antiviral drugs varies in time, we next modify the Neumann and Dahari model to include this effect.
Case one.—As discussed before, the effect of antiviral drugs is short-lived since the virus mutates once the drug is administered. To include this effect, we modify the above equations by assuming that $e_{p}=e_{p}^{\prime}e^{-rt}$ and $e_{k}=e_{k}^{\prime}e^{-rt}$: the efficacy of the drugs declines exponentially as time progresses, and the decay becomes faster as $r$ increases. Hence let us rewrite Eq. (3) as
$\displaystyle{\dot{x}}$ $\displaystyle=$ $\displaystyle\lambda-
dx-(1-e_{k}^{\prime}e^{-rt})\beta xv$ $\displaystyle{\dot{y}}$
$\displaystyle=$ $\displaystyle(1-e_{k}^{\prime}e^{-rt})\beta xv-ay$
$\displaystyle{\dot{v}}$ $\displaystyle=$
$\displaystyle(1-e_{p}^{\prime}e^{-rt})ky-uv.$ (4)
The term related to the reproductive ratio is given as
$\displaystyle R_{1}={\lambda\beta k\over
adu}(1-e_{k}^{\prime})(1-e_{p}^{\prime}).$ (5)
When $R_{1}<1$, the antiviral drug is capable of clearing the virus, and if $R_{1}>1$, the virus tends to spread. At steady state,
$\displaystyle\overline{x}$ $\displaystyle=$ $\displaystyle{au\over\beta k}$
$\displaystyle\overline{y}$ $\displaystyle=$ $\displaystyle{\beta\lambda
k-adu\over a\beta k}$ $\displaystyle\overline{v}$ $\displaystyle=$
$\displaystyle{\beta\lambda k-adu\over a\beta u}.$ (6)
Note that as $t\to\infty$, $e_{p}\to 0$ and $e_{k}\to 0$. At steady state, only one newly infected cell arises from each infected cell muu2 .
Figure 2: (Color online) (a) The number of host cells $x$ as a function of time (days). (b) The number of infected cells as a function of time (days). (c) The viral load as a function of time (days). In the figure, we fix $\lambda=10^{7}$, $d=0.1$, $a=0.5$, $\beta=2\times10^{-9}$, $k=1000.0$, $u=5.0$, $e_{p}^{\prime}=0.5$, $e_{k}^{\prime}=0.5$ and $r=0.06$.
Let us next explore how the number of host cells $x$, the number of infected cells $y$, and the viral load $v$ behave as a function of time by solving Eq. (4) numerically. From now on, all of the physiological parameters are given per unit time (days). Figure 2 depicts the number of host cells $x$, the number of infected cells $y$, and the viral load as a function of time (days) for the parameter choice $\lambda=10^{7}$, $d=0.1$, $a=0.5$, $\beta=2\times10^{-9}$, $k=1000.0$, $u=5.0$, $e_{p}^{\prime}=0.5$, $e_{k}^{\prime}=0.5$ and $r=0.06$. The figure depicts that in the presence of an antiviral drug, the number of $CD_{4}$ cells increases and attains a maximum value. The cell number then decreases and saturates to a constant value. The number of infected cells decreases and saturates to a constant value. On the other hand, the viral load decreases as the antiviral drug takes effect. However, this effect is short-lived, since the viral load increases back as the viruses develop drug resistance.
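The short-lived drug effect described above can be reproduced with a minimal numerical sketch of Eq. (4) (Python/SciPy assumed; starting the treatment from the drug-free chronic steady state of Eq. (6) is an illustrative choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Figure 2 parameters (per day) with exponentially decaying efficacy
lam, d, a, beta, k, u = 1e7, 0.1, 0.5, 2e-9, 1000.0, 5.0
ep, ek, r = 0.5, 0.5, 0.06   # e_p', e_k' and the decay rate r

def rhs(t, s):
    """Right-hand side of Eq. (4) with e_k(t) = e_k' exp(-rt), etc."""
    x, y, v = s
    ekt, ept = ek * np.exp(-r * t), ep * np.exp(-r * t)
    return [lam - d * x - (1 - ekt) * beta * x * v,
            (1 - ekt) * beta * x * v - a * y,
            (1 - ept) * k * y - u * v]

# Drug-free chronic steady state, Eq. (6), taken as the initial condition
x0 = a * u / (beta * k)
y0 = (beta * lam * k - a * d * u) / (a * beta * k)
v0 = (beta * lam * k - a * d * u) / (a * beta * u)

t = np.linspace(0, 300, 3001)
sol = solve_ivp(rhs, (0, 300), [x0, y0, v0], t_eval=t, rtol=1e-6)
v = sol.y[2]
# The load first drops under treatment, then rebounds as the efficacy decays
print(v0, v.min(), v[-1])
```

Because the efficacy decays away, the long-time state returns to the untreated steady state, which is exactly the rebound seen in Fig. 2c.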
Figure 3: (Color online) (a) The number of host cells $x$ as a function of time (days). (b) The number of infected cells as a function of time (days). (c) The plasma viral load as a function of time (days). In the figure, we fix $\lambda=10^{7}$, $d=0.1$, $a=0.5$, $\beta=2\times10^{-9}$, $k=1000.0$, $u=5.0$, $e_{p}^{\prime}=0.9$, $e_{k}^{\prime}=0.9$ and $r=0.0001$.
When $r$ is small, the ability of the antiviral drug to overcome drug resistance increases. As depicted in Fig. 3, for very small $r$, the number of host cells increases in time and saturates to a constant value at steady state. On the contrary, the number of infected cells as well as the plasma viral load decreases monotonically as time progresses. The figure is plotted by fixing $\lambda=10^{7}$, $d=0.1$, $a=0.5$, $\beta=2\times10^{-9}$, $k=1000$, $u=5.0$, $e_{p}^{\prime}=0.9$, $e_{k}^{\prime}=0.9$ and $r=0.0001$. All these results indicate that when combination drugs are administered, the viral load is significantly reduced, depending on the initial viral load.
Figure 4: (Color online) (a) The number of host cells $x$ as a function of time (days). (b) The number of infected cells as a function of time (days). (c) The viral load as a function of time (days). In the figure, we fix $\lambda=10^{7}$, $d=0.1$, $a=0.5$, $\beta=2\times10^{-9}$, $k=1000.0$, $u=5.0$, $e_{p}^{\prime}=0.9$, $e_{k}^{\prime}=0.9$ and $r=20.0$.
Figure 4 is plotted by fixing $\lambda=10^{7}$, $d=0.1$, $a=0.5$, $\beta=2\times10^{-9}$, $k=1000$, $u=5.0$, $e_{p}^{\prime}=0.9$, $e_{k}^{\prime}=0.9$ and $r=20.0$. The figure exhibits that for large $r$, the number of $CD_{4}$ cells decreases and exhibits a local minimum. As time progresses, the number of cells increases and saturates to a constant value. On the other hand, the number of infected cells $y$ and the viral load $v$ decrease and saturate to considerably large values as time progresses. Figure 4 also depicts that when the drug is unable to control the infection within a very short period of time, the number of drug-resistant viruses steps up.
Case two.—In the previous case, the efficacy of the drug was considered to decrease monotonically to zero as time progresses. In this section, the efficacy of the drug is assumed to decrease to a constant value as time increases, maintaining the relations $e_{p}=e_{p}^{\prime}(1+e^{-rt})/m$ and $e_{k}=e_{k}^{\prime}(1+e^{-rt})/m$. The dynamics of the host cells, infected cells, and viral load is then governed by the equation
$\displaystyle{\dot{x}}$ $\displaystyle=$ $\displaystyle\lambda-
dx-(1-e_{k}^{\prime}(1+e^{-rt})/m)\beta xv$ $\displaystyle{\dot{y}}$
$\displaystyle=$ $\displaystyle(1-e_{k}^{\prime}(1+e^{-rt})/m)\beta xv-ay$
$\displaystyle{\dot{v}}$ $\displaystyle=$
$\displaystyle(1-e_{p}^{\prime}(1+e^{-rt})/m)ky-uv.$ (7)
After some algebra, the term related to the basic reproductive ratio reduces
to
$\displaystyle R_{1}={\lambda\beta k\over
adum^{2}}\left(m-2e_{k}^{\prime}\right)\left(m-2e_{p}^{\prime}\right).$ (8)
As one can see from Eq. (8), as $m$ steps up, the drug loses its potency and as a result $R_{1}$ increases. When $R_{1}<1$, the antiviral treatment is successful, which occurs for large values of $e_{p}^{\prime}$ and $e_{k}^{\prime}$. When $R_{1}>1$, the virus overcomes the antiviral treatment.
At equilibrium, one finds
$\displaystyle\overline{x}$ $\displaystyle=$ $\displaystyle{aum^{2}\over\beta k(m-e_{k}^{\prime})(m-e_{p}^{\prime})}$ $\displaystyle=$ $\displaystyle{\lambda\over dR_{0}}{m^{2}\over(m-e_{k}^{\prime})(m-e_{p}^{\prime})}$
$\displaystyle\overline{y}$ $\displaystyle=$ $\displaystyle{\lambda\over a}-{dum^{2}\over\beta k(m-e_{k}^{\prime})(m-e_{p}^{\prime})}$ $\displaystyle=$ $\displaystyle\left(R_{0}-{m^{2}\over(m-e_{k}^{\prime})(m-e_{p}^{\prime})}\right){du\over\beta k}$
$\displaystyle\overline{v}$ $\displaystyle=$ $\displaystyle{\lambda k(m-e_{p}^{\prime})\over amu}-{dm\over\beta(m-e_{k}^{\prime})}$ (9)
$\displaystyle=$ $\displaystyle\left({R_{0}(m-e_{p}^{\prime})\over m}-{m\over(m-e_{k}^{\prime})}\right){d\over\beta}.$
The case $R_{0}\gg 1$ indicates that the equilibrium abundance of the uninfected cells is much less than the number of uninfected cells before infection. When the drug is successful (large values of $e_{p}^{\prime}$ or $e_{k}^{\prime}$), the equilibrium abundance of the uninfected cells increases. On the contrary, for a highly cytopathic virus ($R_{1}\gg 1$), the number of infected cells as well as the viral load steps up. When $e_{p}^{\prime}$ and $e_{k}^{\prime}$ increase, the equilibrium abundance of the infected cells as well as the viral load decreases.
In general for large $R_{0}$, Eq. (9) converges to
$\displaystyle\overline{y}$ $\displaystyle=$ $\displaystyle{\lambda\over a}$
$\displaystyle\overline{v}$ $\displaystyle=$ $\displaystyle{\lambda
k(m-e_{p}^{\prime})\over(amu)}.$ (10)
Clearly, $\overline{v}$ decreases as $e_{p}^{\prime}$ increases.
The overall efficacy can be written as mu9
$\displaystyle 1-e=\left(1-{e_{k}^{\prime}\left(1+e^{-rt}\right)\over
m}\right)\left(1-{e_{p}^{\prime}\left(1+e^{-rt}\right)\over m}\right)$ (11)
where $0<e_{k}^{\prime}<1$ and $0<e_{p}^{\prime}<1$. At steady state,
$1-e=\left(1-{e_{k}^{\prime}\over m}\right)\left(1-{e_{p}^{\prime}\over
m}\right)$. The transcritical bifurcation point (at steady state) can be
analyzed via Eq. (9), and after some algebra we find
$\displaystyle 1-e=\left(1-{e_{k}^{\prime}\over
m}\right)\left(1-{e_{p}^{\prime}\over m}\right)={adum\over\lambda\beta
k}={x_{1}\over x_{0}}$ (12)
where $x_{0}={\lambda\over d}$ denotes the number of uninfected host cells before infection and $x_{1}={aum\over\beta k}$ designates the number of uninfected cells in the chronic case. This implies that the critical efficacy is given as $e_{c}=1-{x_{1}\over x_{0}}=1-{adum\over\lambda\beta k}$.
To write the overall efficacy as a function of time, for simplicity, let us further assume that $e_{k}^{\prime}=e_{p}^{\prime}$. In this case, Eq. (12) can be rewritten as
$\displaystyle{e_{k}^{\prime}\over m}=1\pm\sqrt{{adum\over\lambda\beta k}}$
(13)
and hence
$\displaystyle 1-e=\left(1-\left(1\pm\sqrt{{adum\over\lambda\beta
k}}\right)\left(1+e^{-rt}\right)\right)^{2}.$ (14)
From Eq. (14), one finds
$\displaystyle e_{c}=1-\left(1-\left(1\pm\sqrt{{adum\over\lambda\beta
k}}\right)\left(1+e^{-rt}\right)\right)^{2}.$ (15)
The critical efficacy serves as an alternative way of determining whether the antiviral treatment is successful or not. When $e>e_{c}$, the antiviral drug clears the infection, and if $e<e_{c}$, the virus keeps replicating.
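For the parameter values used in the figures, and taking $m=2$ as in Fig. 5, the steady-state critical efficacy of Eq. (12) can be evaluated directly; a minimal Python sketch:

```python
# Critical drug efficacy at steady state, following Eq. (12) of this model,
# with the parameter values used in the figures and m = 2 (per day).
lam, d, a, beta, k, u, m = 1e7, 0.1, 0.5, 2e-9, 1000.0, 5.0, 2.0

x0 = lam / d                   # uninfected cells before infection
x1 = a * u * m / (beta * k)    # uninfected cells in the chronic case
e_c = 1 - x1 / x0              # treatment succeeds only if e > e_c
print(e_c)
```

With these numbers $x_{1}/x_{0}=0.025$, so the treatment must reach an overall efficacy above $e_{c}=0.975$ to clear the infection.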
## III The correlation between antiviral drug, immune response and virus
load
The basic mathematical model that specifies the relation between the immune
response, antiviral drug, and viral load is given by
$\displaystyle{\dot{x}}$ $\displaystyle=$ $\displaystyle\lambda-
dx-(1-e_{k}^{\prime}(1+e^{-rt})/m)\beta xv$ $\displaystyle{\dot{y}}$
$\displaystyle=$ $\displaystyle(1-e_{k}^{\prime}(1+e^{-rt})/m)\beta xv-ay-pyz$
$\displaystyle{\dot{v}}$ $\displaystyle=$
$\displaystyle(1-e_{p}^{\prime}(1+e^{-rt})/m)ky-uv$ $\displaystyle{\dot{z}}$
$\displaystyle=$ $\displaystyle c-bz.$ (16)
Once again, the terms $x$, $y$, and $v$ denote the uninfected cells, infected cells, and viral load. The term $z$ denotes the CTL response, where CTL stands for the cytotoxic T lymphocytes responsible for killing the infected cells; the CTLs are produced at a rate $c$ and die at a rate $b$.
The term related to the basic reproduction rate is given as
$\displaystyle R_{1}={\lambda\beta k\over(a+{cp\over
b})dum^{2}}(m-2e_{k}^{\prime})(m-2e_{p}^{\prime}).$ (17)
It is vital to note that when $R_{0}>1$, the virus succeeds in establishing an infection, which triggers the immune response $z$. As long as the coordination between the immune response and the antiviral treatment is strong enough ($R_{1}<1$), the virus will be cleared. As can be clearly seen from Eq. (17), when the CTL response steps up, $R_{1}$ declines, as expected.
The equilibrium abundance of the host cells, infected cells, viral load, and CTL response can be given as
$\displaystyle\overline{x}$ $\displaystyle=$ $\displaystyle{m^{2}u(ab+cp)\over\beta bk(m-e_{k}^{\prime})(m-e_{p}^{\prime})}$ $\displaystyle=$ $\displaystyle{\lambda\over dR_{0}}{(ab+cp)m^{2}\over ab(m-e_{k}^{\prime})(m-e_{p}^{\prime})}$
$\displaystyle\overline{y}$ $\displaystyle=$ $\displaystyle{b\lambda\over(ab+cp)}-{dum^{2}\over\beta k(m-e_{k}^{\prime})(m-e_{p}^{\prime})}$ $\displaystyle=$ $\displaystyle\left({abR_{0}\over(ab+cp)}-{m^{2}\over(m-e_{k}^{\prime})(m-e_{p}^{\prime})}\right){du\over\beta k}$
$\displaystyle\overline{v}$ $\displaystyle=$ $\displaystyle{b\lambda k(m-e_{p}^{\prime})\over mu(ab+cp)}-{dm\over\beta(m-e_{k}^{\prime})}$ $\displaystyle=$ $\displaystyle\left({abR_{0}(m-e_{p}^{\prime})\over m(ab+cp)}-{m\over(m-e_{k}^{\prime})}\right){d\over\beta}$
$\displaystyle\overline{z}$ $\displaystyle=$ $\displaystyle{c\over b}.$ (18)
Exploiting Eq. (18), one can deduce that when $R_{0}\gg 1$, the equilibrium abundance of the uninfected cells becomes much lower than the number of cells before infection. The equilibrium abundance of the uninfected cells steps up when there is a strong CTL response (when $c$ and $p$ increase) or when the antiviral treatment is successful (when $e_{p}^{\prime}$ and $e_{k}^{\prime}$ increase). On the contrary, $\overline{y}$ and $\overline{v}$ decline whenever there is a strong CTL response or when the antiviral treatment is successful.
In Fig. 5a, we plot the phase diagram for the regime $R_{1}<1$ (shaded region) in the phase space of $e_{k}$ and $e_{p}$. In Fig. 5b, the phase diagram for the regime $R_{1}<1$ (shaded region) in the phase space of $m$ and $e_{p}=e_{k}$ is plotted. In the figure, we fix $\lambda=10^{7}$, $d=0.1$, $a=0.5$, $\beta=2\times10^{-9}$, $k=1000$, $u=5$, $m=2$, $r=0.0001$, $p=1.0$, $b=0.5$ and $c=2.0$.
Furthermore, the transcritical bifurcation point can be analyzed via Eq. (18), and after some algebra (at steady state) we find
$\displaystyle
1-e=\left(1-e_{k}^{\prime}/m\right)\left(1-e_{p}^{\prime}/m\right)={(ab+cp)dum\over\lambda\beta
kb}={x_{1}\over x_{0}}$ (19)
where $x_{0}={\lambda\over d}$ denotes the number of uninfected host cells before infection and $x_{1}={(ab+cp)um\over\beta kb}$ designates the number of uninfected cells in the chronic case. This implies that the critical efficacy is given as $e_{c}=1-{x_{1}\over x_{0}}=1-{(ab+cp)umd\over\beta kb\lambda}$.
Figure 5: (Color online) (a) The phase diagram for the regime $R_{1}<1$ in the phase space of $e_{k}$ and $e_{p}$. (b) The phase diagram for the regime $R_{1}<1$ in the phase space of $m$ and $e_{p}=e_{k}$. In the figure, we fix $\lambda=10^{7}$, $d=0.1$, $a=0.5$, $\beta=2\times10^{-9}$, $k=1000$, $u=5$, $m=2$, $r=0.0001$, $p=1.0$, $b=0.5$ and $c=2.0$.
Assuming $e_{k}^{\prime}=e_{p}^{\prime}$, Eq. (19) can be rewritten as
$\displaystyle e_{k}^{\prime}/m=1\pm\sqrt{{(ab+cp)dum\over\lambda\beta kb}}$
(20)
and the overall efficacy as a function of time is given by
$\displaystyle\left(1-e\right)=\left(1-\left(1\pm\sqrt{{(ab+cp)dum\over\lambda\beta
kb}}\right)\left(1+e^{-rt}\right)\right)^{2}.$ (21)
From Eq. (21), one finds
$\displaystyle
e_{c}=1-\left(1-\left(1\pm\sqrt{{\left(ab+cp\right)dum\over\lambda\beta
kb}}\right)\left(1+e^{-rt}\right)\right)^{2}.$ (22)
Once again, the infection will be cleared when $e>e_{c}$, and the virus
replicates as long as $e<e_{c}$.
Figure 6: (Color online) (a) The number of host cells $x$ as a function of time (days). (b) The number of infected cells as a function of time (days). (c) The viral load as a function of time (days). In the figure, we fix $\lambda=10^{7}$, $d=0.1$, $a=0.5$, $\beta=2\times10^{-9}$, $k=1000$, $u=5.0$, $e_{p}^{\prime}=0.9$, $e_{k}^{\prime}=0.9$, $r=0.1$, $p=10.0$, $b=0.5$ and $c=1.0$.
In order to get a clear insight, let us explore Eq. (16) numerically. In Fig. 6, the number of host cells $x$, the number of infected cells, and the viral load are plotted as a function of time. In the figure, we fix $\lambda=10^{7}$, $d=0.1$, $a=0.5$, $\beta=2\times10^{-9}$, $k=1000$, $u=5.0$, $e_{p}^{\prime}=0.9$, $e_{k}^{\prime}=0.9$, $r=0.1$, $p=10.0$, $b=0.5$ and $c=1.0$. For this parameter choice, $R_{0}\gg 1$ while $R_{1}\ll 1$, revealing that initially the virus establishes an infection, but later the antiviral drug and the CTL response collaborate to clear it. As a result, the number of host cells increases while the viral load as well as the number of infected cells decreases. Since the virus is responsible for initiating the CTL response, as the viral load declines, the CTL response steps down.
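The clearance scenario of Fig. 6 can be reproduced with a short numerical sketch of Eq. (16) (Python/SciPy assumed; the value $m=2$ and the chronic pre-treatment initial state are illustrative assumptions, since the figure caption does not list them):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Figure 6 parameters (per day); m = 2 is an assumption
lam, d, a, beta, k, u = 1e7, 0.1, 0.5, 2e-9, 1000.0, 5.0
ep, ek, r, m = 0.9, 0.9, 0.1, 2.0
p, b, c = 10.0, 0.5, 1.0

def rhs(t, s):
    """Right-hand side of Eq. (16) with saturating efficacies e(t)."""
    x, y, v, z = s
    ekt = ek * (1 + np.exp(-r * t)) / m
    ept = ep * (1 + np.exp(-r * t)) / m
    return [lam - d * x - (1 - ekt) * beta * x * v,
            (1 - ekt) * beta * x * v - a * y - p * y * z,
            (1 - ept) * k * y - u * v,
            c - b * z]

# Chronic pre-treatment state with the CTL level at z = c/b (assumption)
z0 = c / b
atil = a + p * z0              # effective infected-cell death rate a + p*z
x0 = atil * u / (beta * k)
y0 = (lam - d * x0) / atil
v0 = k * y0 / u

sol = solve_ivp(rhs, (0, 100), [x0, y0, v0, z0], rtol=1e-6, atol=1e-6)
x, y, v, z = sol.y[:, -1]
# Drug plus CTL response: the viral load collapses and host cells recover
print(v0, v, x > x0)
```

The run confirms the text: with $R_{1}\ll 1$ the viral load collapses while the host-cell count recovers toward $\lambda/d$, and $z$ stays pinned at $c/b$.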
## IV The dynamics of mutant virus in the presence of antiviral drugs
As discussed before, the fact that viruses are obligate parasites of the cell makes drug discovery complicated, since the drug’s adverse effects directly affect the host cells. Many medically important viruses are also virulent and hence cannot be propagated or tested in animal models, which in turn makes drug discovery lengthy. Moreover, unlike other antimicrobial drugs, antiviral drugs have to be 100 percent potent to completely avoid drug resistance. In other words, if the drug only partially inhibits the replication of the virus, the resistant viruses will in time dominate the cell culture.
To discuss the dynamics of the mutant virus, let us assume that the virus mutates (when a single drug is administered) by changing one base in every $10^{4}$ viruses. If $10^{11}$ viruses are produced per day, this results in $10^{7}$ mutant viruses per day. When two antiviral drugs are administered, only $10^{3}$ mutant viruses will be produced, and in the case of triple-drug therapy, mutant virus production becomes negligible. To account for this effect, let us write the rate of mutant virus production per day as
$\displaystyle k=10^{11-4s},\quad s=1,2,3$ (23)
where $s=1,~{}2,~{}3$ corresponds to single, double, and triple drug therapy, respectively.
Here we assume that only the rate of mutant virus production $k$ determines the dynamics and that $10^{11}$ viruses are produced per day. To get an instructive analytical solution for the relation between the antiviral drug and the viral load, let us solve the differential equation
$\displaystyle{\dot{v}}$ $\displaystyle=$ $\displaystyle k-uv$ (24)
neglecting the effect of the uninfected and infected host cells. Here the mutant virus is produced at a rate $k$ and dies at a rate $u$. The solution of the above equation is given as
$\displaystyle v(t)={e^{-ut}(-k+ke^{ut}+uN)\over u}$ (25)
where $N=v(0)$ denotes the initial viral load. Whenever ${k\over u}>1$ the virus spreads, while for ${k\over u}<1$ the antiviral drug is capable of eliminating the virus.
Exploiting Eq. (25), one can comprehend that in the case of single-drug therapy, the viral load decreases during the course of treatment but, as time progresses, increases back due to the emergence of drug resistance (see Fig. 7a). In the case of double-drug therapy, as shown in Fig. 7b, the viral load decreases but relapses as time progresses. When three drugs are administered, viral replication becomes suppressed, as depicted in Fig. 7c. The reader should note that triple-drug therapy does not guarantee a cure: if the patient halts his or her therapy, viral replication will resume because of the latently and chronically infected cells. At steady state, we get
$\displaystyle\overline{v}={k\over u}.$ (26)
At equilibrium, the viral load spikes as $k$ increases and decreases as $u$ steps up.
Figure 7: (Color online) The virus load as a function of time. In the figure,
we fix $u=1.0$. (a) Single drug therapy $s=1$. (b) Double drug therapy $s=2$.
(c) Triple drug therapy $s=3$.
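Equations (23) and (25) can be combined in a few lines to compare the three therapy regimes (a Python sketch with $u=1$ per day as in Fig. 7; the initial load $N=10^{9}$ is an illustrative assumption):

```python
import numpy as np

u, N = 1.0, 1e9          # death rate from Fig. 7; N = v(0) is an assumption
t = np.linspace(0.0, 60.0, 601)   # time in days

final = {}
for s in (1, 2, 3):
    k = 10.0 ** (11 - 4 * s)      # Eq. (23): mutant production rate per day
    v = np.exp(-u * t) * (-k + k * np.exp(u * t) + u * N) / u   # Eq. (25)
    final[s] = v[-1]
print(final)   # v settles near the steady state k/u of Eq. (26)
```

The long-time loads are roughly $10^{7}$, $10^{3}$, and $0.1$ mutant viruses for single, double, and triple therapy, illustrating why only combination therapy keeps the resistant population negligible.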
## V Summary and conclusion
Developing antiviral drugs is a challenging but urgent task, since outbreaks of viral diseases not only kill many people but also cost trillions of dollars worldwide. The discovery of new antiviral drugs, together with emerging mathematical models, helps in understanding the dynamics of the virus in vivo. For instance, the pioneering mathematical models of HIV presented in the works mu1 ; mu2 ; mu3 ; mu4 ; mu5 ; mu6 ; mu7 disclose the host-virus correlation. Moreover, to study the correlation between antiviral drugs and viral load, an elegant mathematical model was presented in the works mu8 ; mu9 .
Due to the emergence of drug resistance, the efficacy of antiviral drugs is short-lived. To study this effect, we numerically study the dynamics of the host cells and the viral load in the presence of an antiviral drug that either prevents infection ($e_{k}$) or stops the production of virus ($e_{p}$). For a drug whose efficacy depends on time, we show that when the efficacy of the drug is low, the viral load decreases and then increases back in time, revealing that the effect of the antiviral drug is short-lived. On the contrary, for an antiviral drug with high efficacy, the viral load as well as the number of infected cells decreases monotonically while the number of uninfected cells increases. The dependence of the critical drug efficacy on time is also explored. Furthermore, the correlation between the viral load, the antiviral drug, and the CTL response is explored. We analyze not only the dependence of the basic reproduction ratio on the model parameters but also the critical drug efficacy as a function of time. The term related to the basic reproduction ratio declines when the CTL response steps up. A simple analytically solvable mathematical model to analyze the correlation between viral load and antiviral drugs is also presented.
In conclusion, in this work we present a simple model which not only serves as a basic tool for a better understanding of viral dynamics in vivo and in vitro, but also helps in developing effective therapeutic strategies.
## Acknowledgment
I would like to thank Mulu Zebene and Blaynesh Bezabih for the constant
encouragement.
## References
* (1) This file is made available by MechESR under the Creative Commons CC 1.0 Universal Public Domain Dedication.Images (Wikimedia Commons).
* (2) G. Doitsh and W. C. Greene, Cell Host and Microbe 19, 280 (2016).
* (3) Nowak, M. A., and May, R. (2001). Virus Dynamics: Mathematical Principles of Immunology and Virology. Oxford University Press.
* (4) S. Wang, Y. Pan, Q. Wang, H. Miao, A. N. Brown and L. Rong, Mathematical Biosciences 328, 108438 (2020).
* (5) C. Zitzmann and L. Kaderali, Frontiers in Microbiology 9, 1546 (2018).
* (6) Y. Wang, J. Liu and L. Liu, Advances in Difference Equations 1, 225 (2018).
* (7) S. S. Chen, C. Y. Cheng and Y. Takeuchi, Journal of Mathematical Analysis and Applications 442, 642 (2016).
* (8) A. S. Perelson, D. E. Kirschner and R. De Boer , Math. Biosci 114, 81 (1993).
* (9) A. S. Perelson, , A. U. Neumann, M. Markowitz, J. M Leonard and D. D Ho , Science 271, 1582 (1996).
* (10) A. S. Perelson, P. Essunger, Y. Cao, M. Vesanen, A. Hurley and K. Saksela, Nature 387, 188 (1997).
* (11) Ho, D. D., Neumann, A. U., Perelson, A. S., Chen, W., Leonard, J. M., and Markowitz,M. (1995). Rapid turnover of plasma virions and CD4 lymphocytes in HIV-1 infection. Nature 373, 123126. doi: 10.1038/373123a.
* (12) Bonhoeffer, S., May, R. M., Shaw, G. M., and Nowak, M. A. (1997). Virus dynamics and drug therapy. Proc. Natl. Acad. Sci. U.S.A. 94, 69716976. doi: 10.1073/pnas.94.13.6971.
* (13) Stafford, M. A., Corey, L., Cao, Y., Daar, E. S., Ho, D. D., and Perelson, A. S. (2000). Modeling plasma virus concentration during primary HIV infection. J. Theor. Biol. 203, 285301. doi: 10.1006/jtbi.2000.1076
* (14) Wei, X., Ghosh, S. K., Taylor, M. E., Johnson, V. A., Emini, E. A., Deutsch, P., et al. (1995).Viral dynamics in human immunodeficiency virus type 1 infection. Nature 373, 117122. doi: 10.1038/373117a0
* (15) Neumann, A. U. (1998). Hepatitis C viral dynamics in vivo and the antiviral efficacy of interferon- therapy. Science 282, 103107. doi: 10.1126/science.282.5386.103
* (16) Dahari, H., Lo, A., Ribeiro, R. M., and Perelson, A. S. (2007a). Modeling hepatitis C virus dynamics: Liver regeneration and critical drug efficacy. J. Theor. Biol. 247, 371381. doi: 10.1016/j.jtbi.2007.03.006
* (17) Nowak, M. A., and Bangham, C. R. (1996). Population dynamics of immune responses to persistent viruses. Science 272, 7479.
* (18) Nowak,M. A., Bonhoeffer, S., Hill, A.M., Boehme, R., Thomas, H. C., andMcDade, H. (1996). Viral dynamics in hepatitis B virus infection. Proc. Natl. Acad. Sci. U.S.A. 93, 43984402. doi: 10.1073/pnas.93.9.4398
* (19) Perelson, A. S. (2002). Modelling viral and immune system dynamics. Nat. Rev. Immunol. 2, 2836. doi: 10.1038/nri700
* (20) Wodarz, D., and Nowak,M. A. (2002).Mathematical models of HIV pathogenesis and treatment. BioEssays 24, 11781187. doi: 10.1002/bies.10196
Paths of unitary access to exceptional points
Miloslav Znojil
The Czech Academy of Sciences, Nuclear Physics Institute,
Hlavní 130, 250 68 Řež, Czech Republic
and
Department of Physics, Faculty of Science, University of Hradec Králové,
Rokitanského 62, 50003 Hradec Králové, Czech Republic
e-mail<EMAIL_ADDRESS>
## Abstract
With the innovative idea of the acceptability and usefulness of non-Hermitian
representations of Hamiltonians for the description of unitary quantum systems
(dating back to Dyson’s papers), the community of quantum physicists was
offered a new and powerful tool for the building of models of quantum phase
transitions. In this paper the mechanism of such transitions is discussed from
the point of view of mathematics. The emergence of the direct access to the
instant of transition (i.e., to Kato’s exceptional point) is attributed to
the underlying split of several roles played by the traditional single Hilbert
space of states ${\cal L}$ into a triplet (viz., in our notation, spaces
${\cal K}$ and ${\cal H}$ besides the conventional ${\cal L}$). Although this
explains the abrupt, quantum-catastrophic nature of the change of phase (i.e.,
the loss of observability) caused by an infinitesimal change of parameters,
the explicit description of the unitarity-preserving corridors of access to
the phenomenologically relevant exceptional points remained unclear. In the
paper some of the recent results in this direction are summarized and
critically reviewed.
## Keywords
exceptional points; quasi-Hermitian quantum theory; perturbations; quantum
catastrophes
## Acknowledgements
Work supported by the Excellence project PřF UHK 2020 of the University of
Hradec Králové.
## 1 Introduction.
In the context of quantum physics the first signs of appreciation of the
phenomenological relevance of exceptional points [1] appeared during the
studies of the so called open quantum systems [2]. In these studies the
effective Hamiltonians act in a model subspace of the full Hilbert space and
are non-Hermitian. Thus, it was not too surprising to reveal that “the
positions of the exceptional points” vary “in the same way as the transition
point of the corresponding phase transition” [3]. The possibility of a generic
connection between exceptional points and phase transitions has been born.
In our present paper we will summarize several aspects of this connection. In
order to narrow the subject we will only consider the exceptional-point-
related phenomena emerging in the theory of the closed, stable, unitary
quantum systems.
### 1.1 Mathematical concept of exceptional point.
In mathematics the exceptional point (EP) can be defined as the value of an
(in general, complex) parameter $g$ at which a linear operator (which is, say,
non-Hermitian but analytic in $g$) loses its diagonalizability. For
Hamiltonians, one of the possible consequences is schematically depicted in
Fig. 1. This picture indicates that near an EP singularity there may exist an
$N-$plet of eigenvalues of the operator (i.e., typically, bound state energies
specified by a Hamiltonian $H(g)$) which merge in the limit of $g\to
g^{(EP)}$. Simultaneously, in contrast to the non-EP dynamical scenarios, the
EP or EPN degeneracy also involves the eigenvectors [1].
Figure 1: A schematic sample of the degeneracy of an $N-$plet of energies at
an exceptional point of order $N$ (EPN) with $N=8$ (EP8).
### 1.2 Early applications of exceptional points in quantum physics of closed
systems.
The confluence of eigenvalues as studied by mathematicians and sampled by Fig.
1 did not initially find any immediate applications in quantum physics of
closed systems. Among the reasons one can find, first of all, the widespread
habit of keeping all of the realistic phenomenological bound-state
Hamiltonians self-adjoint. This also required, for pragmatic reasons, a
replacement of the general complex parameter (say, $g\in\mathbb{C}$ in $H(g)$)
by a real variable (i.e., by $\lambda\in\mathbb{R}$ in $H(\lambda)$). A
combination of the two constraints rendered the mergers impossible. Only after
the abstract mathematical operation of analytic continuation of the Hamiltonian
$H(\lambda)$ was it possible to reveal, in several models [4, 5], the
existence of the EPs. Naturally, all of them were manifestly non-real, ${\rm
Im}\,\lambda^{(EP)}\neq 0$ [1]. Only an indirect indication of their presence
near a real line of $\lambda$ could have been provided by the avoided level
crossings, a spectral feature sampled in Fig. 2.
Figure 2: Avoided crossing of four real (i.e., observable) energy levels
(arbitrary units).
In the quantum unitary-evolution setting a dramatic change of the situation
only came with Bender and Boettcher’s pioneering letter [6]. The authors
revealed that a suitable weakening of the property of the self-adjointness of
$H(\lambda)$ could make the EP singularities “visible” and real. What followed
the discovery (cf. also the later review paper [7]) was an enormous increase
of interest of the physics community in a broad variety of Hamiltonians
$H(\lambda)$ possessing the real (i.e., in principle, experimentally
accessible) EP singularities with ${\rm Im}\,\lambda^{(EP)}=0$. In 2010, the
conference organized by W. D. Heiss in Stellenbosch [8] was even exclusively
dedicated to the role of the EPs in multiple branches of physics.
In our present paper we are going to interpret the EP and EPN degeneracies as
sampled by Fig. 1 in a strict unitary-evolution sense. This means that we will
only consider the real parameters $\lambda$ lying in a small vicinity of
$\lambda^{(EPN)}$. Under this assumption we will require that the whole
spectrum of energies remains real and non-degenerate either on both sides of
$\lambda^{(EPN)}$ (i.e., at any not too remote $\lambda\neq\lambda^{(EPN)}$)
or on one side at least (i.e., for $\lambda<\lambda^{(EPN)}$ or for
$\lambda>\lambda^{(EPN)}$). We will, naturally, also admit that the value of
$\lambda$ parametrizes a smooth curve passing through a larger,
$d-$dimensional space $\mathbb{R}^{d}$ of the real parameters determining a
$d-$parametric Hamiltonian
$H(\lambda)=H[a(\lambda),b(\lambda),\ldots,z(\lambda)]$.
### 1.3 Two-parametric example.
For illustration let us recall the two-parametric real-matrix Hamiltonian of
Ref. [9],
$H(a,b)=\left(\begin{array}[]{rrrr}-3&b&0&0\\\ -b&-1&a&0\\\ 0&-a&1&b\\\
0&0&-b&3\end{array}\right)\,.$ (1)
Its eigenvalues
$E_{\pm,\pm}(a,b)=\pm\frac{1}{2}\,\sqrt{20-4\,{b}^{2}-2\,{a}^{2}\pm
2\,\sqrt{64-64\,{b}^{2}+16\,{a}^{2}+4\,{b}^{2}{a}^{2}+{a}^{4}}}\,$ (2)
remain real and non-degenerate inside a two-dimensional unitarity-supporting
domain ${\cal D}^{(physical)}$ of parameters $a=a(\lambda)$ and $b=b(\lambda)$
which is displayed in Fig. 3.
Figure 3: The boundary of domain ${\cal D}^{(physical)}$ for toy-model (1)
with $d=2$.
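The reality of the spectrum inside ${\cal D}^{(physical)}$ can be probed numerically. The following sketch (our illustration, not part of the paper) builds the matrix of Eq. (1) with numpy and checks that its eigenvalues agree with the closed formula (2) at a sample interior point:

```python
import numpy as np

def H(a, b):
    """Four-by-four real-matrix toy Hamiltonian of Eq. (1)."""
    return np.array([[-3.0,    b, 0.0, 0.0],
                     [  -b, -1.0,   a, 0.0],
                     [ 0.0,   -a, 1.0,   b],
                     [ 0.0,  0.0,  -b, 3.0]])

def energies(a, b):
    """Closed-form eigenvalues E_{+-,+-}(a,b) of Eq. (2), sorted."""
    inner = np.sqrt(64 - 64*b**2 + 16*a**2 + 4*b**2*a**2 + a**4)
    outer = 20 - 4*b**2 - 2*a**2
    levels = [s * 0.5 * np.sqrt(outer + t * 2*inner)
              for s in (+1, -1) for t in (+1, -1)]
    return np.sort(levels)

a, b = 0.5, 0.5                                     # a point inside D^(physical)
spectrum = np.linalg.eigvals(H(a, b))
assert np.max(np.abs(spectrum.imag)) < 1e-12        # spectrum is real inside D
assert np.allclose(np.sort(spectrum.real), energies(a, b))  # matches Eq. (2)
```

Outside the boundary $\partial{\cal D}$ the same comparison fails: either the inner square root or the outer one turns imaginary, reproducing the loss of reality of the levels.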
It is worth adding that once one moves to the EP-supporting models with more
parameters, $d>2$, the illustrative shape of the $d=2$ domain in Fig. 3 (viz.,
a deformed square with protruded vertices) appears to be, in some sense,
generic. For a family of solvable models with $N>4$ such an intuition-based
conjecture has been confirmed in [10, 11]. A more recent, abstract theoretical
explanation of the hypothesis may be found in [12, 13]. On this background one
can expect that the most interesting smooth curves parametrized by $\lambda$
would be those which end at one of the EPN vertices with maximal $N$ (i.e.,
with $N=4$ in Fig. 3).
### 1.4 Paradox of stability near exceptional points.
In a way inspired by the above example one can expect that the behavior of the
closed quantum system with parameters lying deeply inside ${\cal D}$ would not
be too surprising. In such a dynamical regime, small changes of the parameters
leave the spectrum real. The formulation of predictions can be based on a
conventional perturbation theory.
Close to the boundary $\partial{\cal D}$ the situation is different and much
more interesting. Indeed, in a small vicinity of this boundary, a small change
of the parameter seems to be able to cause an abrupt loss of the observability
of the system. A spontaneous collapse alias quantum phase transition caused
by a small fluctuation of the interaction seems unavoidable.
Our present paper will be fully devoted to the study of this paradox. In fact, a major part of
the paper will provide a concise explanation that in the specific context of
the closed, unitary quantum systems the latter, intuitive expectation of
instabilities is incorrect (see section 2 for an introduction). We will clarify
why such an interpretation of dynamics is incorrect (see section 3), why a
deeper clarification of the point is important (cf. section 4), and, finally,
what would be a valid conclusion (section 5).
Keeping this purpose in mind, our text will start by a sketchy presentation of
a (very non-representative) sample of the current state of applications of the
stationary version of the formalism represented, schematically, by Fig. 4.
This will be followed by a (partially critical) review of some open questions
connected, first of all, with the role of the Kato’s exceptional points in
phase transitions. We will clarify the role of parameters in the vicinity of
EPs. In this dynamical regime, a few comments will be also added on the
correct analysis of stability of the non-Hermitian but unitary quantum systems
with respect to small perturbations.
A concise summary of our present message will finally be formulated in section
6.
## 2 Unitary evolution in Schrödinger picture using non-Hermitian
Hamiltonians.
Figure 4: Triplet of Hilbert spaces representing a bound state $\psi$ and
connected by a non-unitary map $\Omega\,$ and by the innovative ad hoc
amendment $I\to\Theta=\Omega^{\dagger}\Omega\neq I$ of the inner-product
metric.
### 2.1 Theoretical background
The above-mentioned paradox of stability near EPs is reminiscent of the old
puzzle of the stability of atoms in classical physics. In fact, the resolution
of the latter puzzle belongs among the most remarkable successes which
accompanied the birth of quantum mechanics. The innovation was based on
Schrödinger equation representing bound states by ket-vector elements of a
suitable Hilbert space ${\cal K}$,
$H\,|\psi_{n}\rangle=E_{n}\,|\psi_{n}\rangle\,,\ \ \ \
|\psi_{n}\rangle\in{\cal K}\,,\ \ \ \ n=0,1,\ldots\,.$ (3)
Subsequently, the incessant growth of the number of successful
phenomenological applications of quantum theory was accompanied by the
emergence of various innovative mathematical subtleties. One of the ideas of
the latter type (and of a decisive relevance for the present paper) can be
traced back to the papers by Dyson [14] and Dieudonné [15]. Independently,
they introduced the concept of the $\Theta$-pseudo-Hermitian Hamiltonians.
These operators (with real spectra) are assumed to remain non-Hermitian in
${\cal K}$ but restricted by the quasi-Hermiticity relation
$H=\Theta^{-1}H^{\dagger}\Theta\neq H^{\dagger}\,,\ \ \ \ \
\Theta=\Theta^{\dagger}>0\,.$ (4)
For details see the text below, or the older review paper [16], or its more
recent upgrades [7, 17, 18, 19, 20, 21].
Briefly, the $\Theta$-pseudo-Hermiticity innovation can be characterized as a
reclassification of the status of the Hilbert space of states (cf. Fig. 4).
Indeed, in conventional textbooks the choice of ${\cal K}$ in Schrödinger Eq.
(3) is usually presented as unique. In the textbook cases of stable, unitarily
evolving quantum systems, in a way observing Stone theorem [22], also the
Hamiltonian itself would necessarily be required self-adjoint in ${\cal K}$.
After the reclassification, in contrast, the meaning of symbols ${\cal K}$ and
$H$ is changed. Firstly, in Dyson’s spirit one decides to admit that
$H$ can be non-Hermitian in ${\cal K}$. In the light of the Stone theorem this
means that the status of ${\cal K}$ must be changed from “physical” to
“unphysical”. Secondly, in Dieudonné’s spirit, the postulate of validity
of quasi-Hermiticity relation (4) enables us to interpret operator $\Theta$ as
a metric [16]. Thus, we may amend the inner product in order to convert the
unphysical Hilbert space ${\cal K}$ into a new, unitarily non-equivalent
physical Hilbert space ${\cal H}$,
$\langle\psi_{1}|\psi_{2}\rangle_{\cal
H}=\langle\psi_{1}|\Theta|\psi_{2}\rangle_{\cal K}\,.$ (5)
Thirdly, the factorization $\Theta=\Omega^{\dagger}\Omega\neq I$ of the metric
enables us to introduce an operator
$\mathfrak{h}=\Omega\,H\,\Omega^{-1}$ (6)
and to interpret it as a hypothetical, isospectral alternative Hamiltonian in
another, alternative physical Hilbert space ${\cal L}$ in which it is, by an
assumption dating back to Dyson [14, 16], self-adjoint but constructively as
well as technically inaccessible.
### 2.2 Notation conventions
Further details characterizing such an apparently redundant representation of
a single state $\psi$ will be recalled and summarized below. Now, let us only
point out that Dyson’s and Dieudonné’s reformulation of the postulates of
quantum theory was deeply motivated. It is only necessary to accept and
appreciate both Dyson’s activity and Dieudonné’s scepticism. Indeed,
Dyson discovered and used, constructively, several positive and truly
innovative aspects of the use of quasi-Hermiticity in physics and
phenomenology. At the same time, Dieudonné’s well-founded critical
analysis of the “hidden dangers” behind quasi-Hermiticity is still a
nontrivial and exciting subject for mathematicians [23, 24].
In the context of physics the latter “hidden dangers” were, fortunately,
cleverly circumvented (cf. review [16]). Some of the corresponding technical
and mathematical recommendations will be recollected below. Immediately, let
us only mention that the amendment of mathematics led to an ultimate compact
and explicit three-Hilbert-space (3HS) formulation of the most general non-
stationary version of quasi-Hermitian quantum mechanics as first proposed in
[25] and as subsequently reviewed in [17].
In what follows, we will employ the most compact notation as introduced in our
latter two papers. The reason is twofold. Firstly, along the lines indicated
in [17], the choice of such a notation will simplify the separation of our
present perception of physics from its alternatives which often share the
mathematical terminology while not sharing the phenomenological scope.
Secondly, the emphasis put on notation will enable us to review the field of
our present interest in a sufficiently compact and concise manner, avoiding
potential misunderstandings caused by the variability of the notation used in
the literature (cf. Table 1).
Table 1: Sample of confusing differences in notation conventions.

concept | [18] | [26] | [16] | here
---|---|---|---|---
Hilbert space metric | $\eta_{+}$ | $\rho$ | $\widetilde{T}$ | $\Theta$
Dyson’s map | $\rho$ | $\eta$ | $S$ | $\Omega$
state vector | $|\psi\rangle$ | $\Psi$ | $|\Psi\rangle$ | $|\psi\rangle$
dual state vector | $|\phi\rangle$ | $\rho\Psi$ | $\widetilde{T}|\Psi\rangle$ | $|\psi\rangle\\!\rangle$
### 2.3 The concept of hidden Hermiticity.
#### 2.3.1 Motivation.
In an incomplete sample of ambitions of the 3HS reformulation of quantum
theory let us mention, first of all, the Dyson’s description of correlations
in many-body systems [14] inspired by numerical mathematics (where one would
speak simply about a “preconditioning” of the Hamiltonian). Secondly, in
combination with the assumption of ${\cal PT}-$symmetry [21] the 3HS approach
(complemented by the mathematical Krein-space methods [27, 28]) opened new
horizons in our understanding of the first-quantized relativistic Klein-Gordon
equation [29, 30]. Thirdly, a transfer of the underlying “hidden Hermiticity”
ideas to relativistic quantum field theory [31] and/or to the studies of
supersymmetry [32] inspired a number of methodical studies of various
elementary toy models [6, 33, 34]. Last but not least it is worth mentioning
that the applications of the 3HS formalism even touched the field of canonical
quantum gravity based on the use of Wheeler-DeWitt equation [35].
#### 2.3.2 Disambiguation.
The solution $\Theta=\Theta(H)$ of Eq. (4) does not exist whenever the
spectrum of $H$ ceases to be real. This means that only certain parameters in
non-Hermitian $H(\lambda)$ remain unitarity-compatible and “admissible”,
$\lambda\in{\cal D}$. In the admissible cases, in a way explained in [17],
there exists a mapping $\Omega$ which realizes an equivalence of predictions
made in ${\cal H}$ with those made in a third, hypothetical and, by our
assumption, practically inaccessible Hilbert space ${\cal L}$. The latter
space is precisely the space of states used in conventional textbooks. In the
present 3HS context (depicted in Figure 4), its role is purely formal because
in this space, operator (6) representing the Hamiltonian and formally self-
adjoint in ${\cal L}$ is, by assumption, too complicated to be useful or
tractable (for example, it may happen to be a highly non-local pseudo-
differential operator [18]).
In the literature devoted to applications of unitary quantum theory the
authors working in the 3HS version of Schrödinger picture do not always
sufficiently clearly emphasize the Hermiticity of the physical Hamiltonian in
the physical Hilbert space ${\cal H}={\cal H}^{(Standard)}$ (say, by writing
$H=H^{\ddagger}$ [17]). Another potential source of confusion lies in the
widespread habit (or rather in the abuse of language) of using shorthand
phrases (like “non-Hermitian Hamiltonians”) or shorthand formulae (like $H\neq
H^{\dagger}$) without adding that one just temporarily dwells in an
irrelevant, auxiliary, unphysical Hilbert space ${\cal K}$. The resulting,
fairly high probability of misunderstandings is further enhanced by the
diversity of conventions as sampled in Table 1.
## 3 Constructive aspects of the triple Hilbert space formalism.
### 3.1 Metric and its ambiguity.
Two alternative model-building strategies based on the “generalized
Hermiticity” (4) have been used in applications. In the first, one chooses
$\mathfrak{h}=\mathfrak{h}^{\dagger}$ and $\Omega$ and reconstructs $H$ and
$\Theta$. In practice, the use of this strategy remained restricted to the
nuclear physics of heavy nuclei [16]. At present, almost
exclusively [18], one picks up the Hamiltonian (i.e., a “trial and error”
operator $H$ which is non-Hermitian in ${\cal K}$) and reconstructs, via Eq.
(4), the (necessarily, nontrivial) metric $\Theta>0$ (i.e., the correct
physical Hilbert space of states denoted, here, by dedicated symbol ${\cal
H}$). The approach based on the reconstruction of metric now forms the
mainstream in research. The false but friendly space ${\cal K}$ and a non-
Hermitian Hamiltonian $H$ are both assumed to be given in advance while a
suitable Hermitizing inner-product metric must be reconstructed, in principle
at least. The Hermiticity of any other observable $\Lambda$ in ${\cal H}$ must
also be guaranteed. In the auxiliary space ${\cal K}$ this requirement has the
form $\Lambda^{\dagger}\,\Theta=\Theta\,\Lambda$.
In an elementary illustration the Wheeler-DeWitt-like equation
$H=H^{(WDW)}(\tau)=\left[\begin{array}[]{cc}0&\exp 2\tau\\\
1&0\end{array}\right]\neq H^{\dagger}\,\ \ \ \ \ {\rm
in}\ \ \ {\cal K}=\mathbb{R}^{2}$ (7)
yields the two real closed-form eigenvalues $E=E_{\pm}=\pm\exp\tau$ so that it
can serve as a sample of the Dyson-Dieudonné definition of quasi-Hermiticity
(4). A decisive advantage of the use of such a highly schematic one-parametric
two-by-two real-matrix example is that one can easily solve Eq. (4) and
construct all of the eligible physical inner-product-metric operators
$\Theta=\Theta^{(WDW)}(\tau,\beta)=\left[\begin{array}[]{cc}\exp(-\tau)&\beta\\\
\beta&\exp\tau\end{array}\right]=\Theta^{\dagger}\,,\ \ \ \ \
|\beta|<1\,.$ (8)
These solutions form a complete set of candidates for the (Hermitian and
positive definite) eligible metric [16]. In this example one notices that
parameter $\beta$ is an independent variable. This observation is, indeed,
compatible with the well known fact that the assignment $\Theta=\Theta(H)$ of
the metric to a preselected Hamiltonian is not unique [16, 18, 36].
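All of these claims — the quasi-Hermitizing property of (8), its positivity for $|\beta|<1$, and the closed-form spectrum $E_{\pm}=\pm\exp\tau$ — can be checked directly. A minimal numerical sketch (ours, assuming numpy):

```python
import numpy as np

def H_wdw(tau):
    """Two-by-two Wheeler-DeWitt-like Hamiltonian of Eq. (7)."""
    return np.array([[0.0, np.exp(2 * tau)],
                     [1.0, 0.0]])

def Theta_wdw(tau, beta):
    """One of the eligible inner-product metrics, Eq. (8)."""
    return np.array([[np.exp(-tau), beta],
                     [beta, np.exp(tau)]])

tau, beta = 0.7, 0.4
H, Theta = H_wdw(tau), Theta_wdw(tau, beta)

assert not np.allclose(H, H.T)             # H is non-Hermitian in K
assert np.allclose(H.T @ Theta, Theta @ H)  # Dieudonne relation (4) holds
assert np.all(np.linalg.eigvalsh(Theta) > 0)  # metric positive for |beta| < 1
assert np.allclose(np.sort(np.linalg.eigvals(H)),
                   [-np.exp(tau), np.exp(tau)])  # E = +/- exp(tau)
```

Varying $\beta$ over $(-1,1)$ leaves all four assertions intact, reproducing the one-parametric ambiguity of the metric.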
### 3.2 False instabilities and open systems in disguise
In the literature devoted to applications the authors interested in non-
Hermiticity often do not sufficiently clearly separate the quantum and non-
quantum theories. Here, we are not going to deal with the latter branch of
physics. Nevertheless, even within the range of quantum mechanics the authors
often intermingle the results concerning the open and closed quantum systems.
Here, almost no attention will be paid to the former family of models, either.
An exception should be made in connection with the papers dealing with certain
non-Hermitian but ${\cal PT}-$symmetric quantum systems where, typically, the
authors claim that “complex eigenvalues may appear very far from the
unperturbed real ones despite the norm of the perturbation is arbitrarily
small” [37].
As long as the latter claims (of a top mathematical quality) are accompanied
by certain fairly vague quantum-theoretical considerations (which could
certainly prove misleading), we feel forced to point out that recently, the
study of the parametric domains of unitarity near EPs [38] clarified the point
(cf. also the less formal explanation in [9]). The essence of the
misunderstanding can be traced back to the fact that the loss of stability was
deduced, in [37], from the properties of the pseudospectrum [23].
Unfortunately, the construction was only performed using the trivial form of
the inner-product metric defining just the manifestly unphysical Hilbert space
${\cal K}$ where $\Theta=I$. For this reason the mathematical results about
pseudospectra in ${\cal K}$ make sense in, and only in, the open quantum
systems. In these systems the predicted instabilities really do occur because
the space ${\cal K}$ itself still keeps there the status of the physical
space.
We may summarize that in the 3HS models of closed systems the Hamiltonians are
in fact self-adjoint in ${\cal H}$. This means that the evaluation of their
pseudospectra would necessarily require the work with norms which would be
expressed in terms of the physical metric $\Theta$. Thus, once the existence
of such a metric is guaranteed (which is, naturally, a nontrivial task!), the
proofs of stability based on the pseudospectra will apply.
We should also add that the smallness of perturbations is a concept which
crucially depends on the metric $\Theta$ defining the physical Hilbert space
${\cal H}$. From this point of view it is obvious that as long as the metric
itself becomes necessarily strongly anisotropic in the vicinity of EPs [36],
also some of the perturbations which might look small in ${\cal K}$ become, in
such a regime, large in ${\cal H}$, and vice versa [39].
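The metric dependence of “smallness” can be quantified with the toy metric of Eq. (8): choosing $\Omega=\Theta^{1/2}$ (one admissible factorization, our choice for illustration), the norm of a perturbation $V$ in ${\cal H}$ is $\|\Omega V\Omega^{-1}\|$, and for a nearly singular, strongly anisotropic $\Theta$ this differs from the ${\cal K}$-norm $\|V\|$ by orders of magnitude:

```python
import numpy as np

def theta(tau, beta):
    """Metric of Eq. (8); nearly singular (anisotropic) as |beta| -> 1."""
    return np.array([[np.exp(-tau), beta], [beta, np.exp(tau)]])

def norm_in_H(V, Theta):
    """Spectral norm of V in H, using the choice Omega = Theta^{1/2}."""
    w, U = np.linalg.eigh(Theta)
    Omega = U @ np.diag(np.sqrt(w)) @ U.T     # positive square root of Theta
    return np.linalg.norm(Omega @ V @ np.linalg.inv(Omega), 2)

V = np.array([[0.0, 1.0], [0.0, 0.0]])        # perturbation with ||V|| = 1 in K
for beta in (0.0, 0.9, 0.999):                # drive the metric towards singularity
    print(beta, norm_in_H(V, theta(0.0, beta)))

# small in K, large in H near the degenerate-metric regime
assert norm_in_H(V, theta(0.0, 0.999)) > 10 * norm_in_H(V, theta(0.0, 0.0))
```

For this $V$ the ${\cal H}$-norm grows like $1/\sqrt{1-\beta^{2}}$, i.e., without bound as the metric degenerates, illustrating why ${\cal K}$-based pseudospectral estimates are inconclusive for closed systems.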
### 3.3 EP (hyper)surfaces and their geometry.
For the lovers of closed formulae the existence as well as the geometry of
access to EPs was made very explicit in paper [13]. An advertisement of the
contents of this paper can be brief: a list of transmutations is given there
between various versions of a special Bose-Hubbard (BH) system (represented by
certain complex finite matrices) and of a discrete and truncated anharmonic
oscillator (AO). It is sufficient to recall here just the ultimate message of
the paper: at an EP singularity of order $N$ it is possible to match, via a
phase transition, many entirely different quantum systems. Represented in
their respective Hilbert spaces ${\cal K}$ and sharing just their dimension
$N<\infty$.
In [13] the idea is illustrated via its several closed-form realizations.
Incidentally, all of these models happened to be unitary in a domain ${\cal
D}$ of a shape resembling a (hyper)cube with protruded vertices. In a broader
perspective one can say that by definition [1], the latter vertices are
precisely the EP extremes of our present interest. In this light, our present
paper could be briefly characterized as a study of the geometry of the generic
unitarity-supporting domains of parameters, with particular emphasis on
understanding of the sharply spiked shapes of their surfaces $\partial{\cal
D}$ in a small vicinity of their EP vertices and edges. Indeed, we found such
phenomenologically relevant features of the geometry mathematically remarkable
and worth a dedicated study.
## 4 Real-world models and predictions.
### 4.1 Mathematics: Amended inner products and exceptional points.
The main purpose of the introductory recollection of the 3HS formalism was to
prepare a turn of attention to the key role played, in the 3HS applications,
by the concept of exceptional points (EPs). Although their original rigorous
definition may be already found in the old Kato’s monograph on perturbation
theory [1], their usefulness for quantum physics of unitary systems only
started emerging after Bender with Boettcher pointed out, in their pioneering
letter [6], that the EPs (also known as Bender-Wu singularities [4, 5]) could
also acquire an immediate phenomenological interpretation of the points of
quantum phase transition. Alternatively, their properties appeared relevant in
the more speculative contexts of Calogero models and/or of supersymmetry [40].
From all of the similar 3HS-applicability points of view it is necessary to
start the model-building processes from a preselected candidate for the
Hamiltonian which is parameter-dependent, $H=H(\lambda)$. Moreover, it must be
non-Hermitian in the auxiliary Hilbert space ${\cal K}$ and, at the same
time, properly Hermitian and self-adjoint in an “amended” Hilbert space of
states ${\cal H}$. Now, the key point is that in the light of assumption (4),
the latter space can be represented via a mere amendment (5) of the inner
product in ${\cal K}$. In other words, any solution $\Theta=\Theta(H)$ of Eq.
(4) defines the necessary physical space ${\cal H}={\cal H}(H)$.
In the opposite direction, many of the eligible and Hamiltonian-dependent metrics
and spaces may and will cease to exist before the variable, path-specifying
parameter $\lambda$ reaches the ultimate EP value
$\lambda^{(EP)}\in\partial{\cal D}$. For the parameters lying inside the
physical domain ${\cal D}$, the Hamiltonian must still be assigned such a
specific metric $\Theta$ and space ${\cal H}$ which would exist up to the
required limit of $\lambda\to\lambda^{(EPN)}$. In this sense, for any
preselected quasi-Hermitian quantum system, our knowledge and specification of
the boundary $\partial{\cal D}$ near EPNs are of the utmost importance.
### 4.2 Realistic many-body systems.
In the latter setting we should return, once more, to Fig. 3 illustrating the
sharply spiked, fragile, parameter-fine-tuning nature of the shape of the
sample domain near its EPN extremes. Due to their potential phase-transition
interpretation, these extremes seem to be the best targets of a realistic
experimental search.
#### 4.2.1 Realistic systems inclined to support an approximate decomposition
into clusters.
The manifestly non-unitary mapping $\Omega$ as mentioned in Fig. 4 connects
the ket-vector elements of two non-equivalent Hilbert spaces: In the notation
of Ref. [17] we have
$|\psi\\!\\!\succ\,\,=\Omega\,|\psi\rangle\,,\ \ \ \
|\psi\\!\\!\succ\,\,\in{\cal L}\,,\ \ \ \ |\psi\rangle\in{\cal K}\,.$ (9)
Recently it has been revealed that precisely the same mapping (attributed to
Dyson [16]) also forms a mathematical background of the so called coupled
cluster method (CCM, [41]). In fact, the implementation aspects of the latter,
CCM interpretation of formula (9) were already used in calculations and
tested, say, in quantum chemistry. What was particularly successful are the
variational (or, more precisely, bi-variational) realizations of the CCM
philosophy, with emphasis put upon the construction of ground states, and with
a well-founded preference of mappings (9) in the exponential form $\Omega=\exp
S$ where $S$ is represented in a suitable operator basis.
The latter, apparently purely technical restriction seems to be responsible
for the success of the method which is currently “one of the most versatile
and most accurate of all available formulations of quantum many-body theory”
[42]. In paper [42], extensive 3HS-CCM parallels have been found. The
respective strengths and weaknesses of the two approaches look mutually
complementary. Currently [43], their further analysis is being concentrated
upon the strengths. One may expect that the consequent, mathematically
consistent 3HS quantum theory might enhance the range of applicability of the
more pragmatic but very precise CCM ground-state constructions. Along these
lines, in particular, the new theoretical predictions may be expected to
concern the EP-related many-body quantum phase transitions which could be
also, in parallel, experimentally detected.
#### 4.2.2 Bose-Hubbard model and its open- and closed-system
interpretations.
The Bose-Hubbard Hamiltonian
$H=\varepsilon\left(a_{1}^{\dagger}a_{1}-a_{2}^{\dagger}a_{2}\right)+v\left(a_{1}^{\dagger}a_{2}+a_{2}^{\dagger}a_{1}\right)+\frac{c}{2}\left(a_{1}^{\dagger}a_{1}-a_{2}^{\dagger}a_{2}\right)^{2}$
(10)
of Graefe et al [44] has been developed to describe an $(N-1)$-particle Bose-
Einstein condensate in a double well potential containing a sink and a source
of equal strengths. Besides the usual annihilation and creation operators the
definition contains the purely imaginary on-site energy difference
$2\varepsilon=2{\rm i}\gamma$. In the fixed$-N$ representation the Hamiltonian
is a matrix: At $N=6$ we have, for example,
$H^{(6)}(\gamma,c,v)=\left(\begin{array}[]{cccccc}-5{\rm i}\gamma+\frac{25}{2}c&\sqrt{5}v&0&0&0&0\\ \sqrt{5}v&-3{\rm i}\gamma+\frac{9}{2}c&2\sqrt{2}v&0&0&0\\ 0&2\sqrt{2}v&-{\rm i}\gamma+\frac{1}{2}c&3v&0&0\\ 0&0&3v&{\rm i}\gamma+\frac{1}{2}c&2\sqrt{2}v&0\\ 0&0&0&2\sqrt{2}v&3{\rm i}\gamma+\frac{9}{2}c&\sqrt{5}v\\ 0&0&0&0&\sqrt{5}v&5{\rm i}\gamma+\frac{25}{2}c\end{array}\right)\,.$ (11)
Once we fix the inessential single-particle tunneling constant $v=1$ and
localize the EPN singularity at $\gamma=1$ and at the vanishing interparticle
interaction strength $c=0$, we find, at any $N$, that after an
arbitrarily small perturbation $c\neq 0$ the spectrum abruptly ceases to be
real (see loc. cit.). This means that the metric $\Theta$ and the space ${\cal H}$
cease to exist as well. The perturbed system admits, exclusively, the non-
unitary, open-system interpretation in ${\cal K}$.
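This abrupt loss of reality is easy to see numerically (a hedged sketch, not part of the original analysis; the helper `H6` and the sample parameter values are our own illustrative choices, numpy assumed):

```python
import numpy as np

def H6(gamma, c, v):
    """Fixed-N (N = 6) matrix form (11) of the Bose-Hubbard Hamiltonian (10)."""
    n = np.array([5, 3, 1, -1, -3, -5])                    # a1^+a1 - a2^+a2 on the diagonal
    off = np.array([np.sqrt(5), 2*np.sqrt(2), 3, 2*np.sqrt(2), np.sqrt(5)])
    H = np.diag(-1j*gamma*n + 0.5*c*n**2).astype(complex)  # -i*gamma*n + (c/2)*n^2
    H += v*(np.diag(off, 1) + np.diag(off, -1))            # tunneling couplings
    return H

# Inside the unperturbed physical domain (|gamma| < 1, c = 0) the spectrum is real ...
ev_in = np.linalg.eigvals(H6(0.5, 0.0, 1.0))
print(np.max(np.abs(ev_in.imag)))     # ~ 0 (rounding errors only)

# ... while an arbitrarily small c != 0 at the EP6 point gamma = v = 1
# makes it complex, i.e., only the open-system reading survives.
ev_out = np.linalg.eigvals(H6(1.0, 0.05, 1.0))
print(np.max(np.abs(ev_out.imag)))    # manifestly nonzero
```

The first spectrum is real up to rounding errors; the second acquires manifestly nonzero imaginary parts.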
In our present framework restricted to closed systems, only the parameters
contained inside the suitable physical domain ${\cal
D}=\\{\gamma,c\,|\,\gamma\in(-1,1)\,,c\in(c_{\min}(\gamma),c_{\max}(\gamma))\\}$
(with the shape resembling, locally, Fig. 3 near its spikes) would be
compatible with the reality of the energies. Interested readers may find an
extensive study and detailed constructive description of the shape of such a
unitarity compatible domain in our rather lengthy recent paper [45].
### 4.3 Generalized Bose Hubbard models
Up to now we paid attention to the models (sampled by the Bose Hubbard
Hamiltonian (10)) with the EPN singularities possessing the trivial geometric
multiplicity $K=1$ [1]. Interested readers may find, in paper [46], an
introduction into a more general category of the EPs characterized by a
clustered, $K-$centered degeneracy of the wave functions with $K>1$. In these
cases the EP-related quantum catastrophes (i.e., the generalized ${\cal
PT}-$symmetry breakdowns) appeared to be of the form of confluence of several
independent EPs with $K=1$. The paper illustrated the advanced mathematics of
the degeneracy of degeneracies via low-dimensional matrix models. The
emergence of unusual horizons found its mathematical formulation in the
language of geometry of Riemann surfaces, accompanied by the phenomenological
predictions of certain anomalous phase transitions.
A model-independent analysis of these anomalies in the dynamical EP-unfolding
scenarios was based, in subsequent paper [47], on their parametrization by the
matrix elements of admissible (i.e., properly scaled and unitarity-compatible)
perturbations. A consistency of algebra with the EP-related deformations of
the Hilbert-space geometry has been confirmed. The new degenerate perturbation
techniques were developed and their implementation has been found feasible.
Via a class of schematic models, a constructive analysis of the vicinity of
the simplest nontrivial EPN with $K=2$ was performed.
An implementation of the schematic recipe to the Bose-Hubbard-type generalized
models may finally be found described in [45]. It was shown that there always
exists a non-empty unitarity domain ${\cal D}$ comprising a multiplet of
variable matrix elements of the admissible perturbations for which the
spectrum is all real and non-degenerate. The intuitive expectations were
confirmed: the physical parametric domains near EPs were found sharply spiked.
A richer structure was revealed to characterize the admissibility of the
perturbations. Two categories of the models were considered. In the first one
the number of bosons was assumed conserved (leading to the matrix Hamiltonians
of the form (11)). The alternative assumption of the particle-number non-
conservation led to the realistic $K>1$ scenarios in which the spectra also
remain real and non-degenerate. The quantum evolution controlled by the
Hamiltonians of larger (or even infinite) dimensions still remains unitary.
In all of these cases, in spite of a rapid increase of the complexity of the
formulae with the number of particles, the existence as well as a sharply
spiked structure of ${\cal D}$ near EPN has again been reconfirmed. The first
steps of the explicit constructive analysis of the structure of ${\cal D}$
were performed in the simplest case with $N=5$ where the access to EP5
appeared mediated by eight independently variable parameters.
### 4.4 Further phenomenological challenges.
The early abstract words of warning against the deceptive nature of the
concept of quasi-Hermiticity [15, 23] were recently reconfirmed by the authors
of paper [48]. After a detailed analysis of the popular non-Hermitian but
${\cal PT}-$symmetric imaginary cubic anharmonic oscillator these authors came
to the conclusion that such a “fons et origo” of the theory can be
characterized by the singular behavior attributed to an “intrinsic” EP. Such a
discovery contributed to the motivation of our present study since it enhanced
the importance of the knowledge of the behavior of the 3HS models at
parameters lying close to their EP limits.
Another, independent source of interest in the study and explicit description
of the domains ${\cal D}$ of the unitarity-compatible “admissible” parameters
in the close vicinity of EPs may be seen in the frequently experimentally
observed phenomenon of the avoided level crossings. In a way sampled by Figure
3 this phenomenon occurs even in the spectra of finite-dimensional Hermitian
matrices.
The related, highly desirable analytic continuation of the spectra towards
their EP degeneracies is far from an easy task. The task is intimately
connected with the 3HS-inspired turn of attention to the description of
quantum dynamics using non-Hermitian Hamiltonians. This opens multiple
technical questions. One of them is that after one perturbs a quasi-Hermitian
Hamiltonian or even only its parameter, $H(\lambda)\to H(\lambda^{\prime})$,
one immediately encounters the re-emergence of the well known ambiguity of the
Hilbert-space inner product in Eq. (5) [18, 36, 49]. As a byproduct of this
observation there appeared a need of a deep and thorough reformulation of
perturbation theory itself [39], with nontrivial consequences concerning, in
particular, the systems lying close to the boundary $\partial{\cal D}$.
Figure 5: Schematic 3HS picture of the Universe evolving through a sequence of
Eons separated by EPs. Sampled by “breathing” one-dimensional $N-$point grids
with $N_{1}=2$, $N_{2}=4$, etc.
Besides the technical open questions there also exists a number of the strong
parallel challenges emerging in the context of quantum phenomenology. Their
truly prominent samples emerged in the context of quantum cosmology and, in
particular, in the attempted descriptions of the evolution of the Universe
shortly after its initial Big Bang singularity. The key point is that the
classical-theory-supported existence of Big Bang seems to contradict the
conventional quantum-theoretical paradigm of Hermitian theory. By the latter
theory the Big-Bang-type phase transitions cannot exist, being “smeared” and
reduced to the mere avoided crossing behavior of the spatial coordinates
called Big Bounce of the Universe [50].
A disentanglement of the puzzle could be, in principle, offered by the 3HS
models in which the Big Bang would correspond to a real EP-related spectral
singularity of a suitable non-Hermitian operator (cf., e.g., [51]). Such a
hypothesis would admit even a highly speculative “evolutionary cosmology”
pattern of Fig. 5 in which a sequence of Penrosian Eons separated by the Big-
Crunch/Big-Bang singularities would render the structure of the “younger”
Universes richer and more sophisticated.
## 5 Exceptional-point-mediated quantum phase transitions.
### 5.1 EPs as quantum crossroads.
In paper [13] we emphasized that a classification of passages of closed
quantum systems through their EP singularities could be perceived as a quantum
analogue of the classical catastrophe theory [52]. In this context let us only
add that the EP-mediated phase transitions could acquire the form of a quantum
process of bifurcation,
$\begin{array}{c}\boxed{\begin{array}{c}{\rm initial\ phase,}\ t<0,\\ {\rm Hamiltonian\ }H^{(-)}(t)\end{array}}\\ \Downarrow\\ {\bf process\ of\ degeneracy}\\ \Downarrow\\ \boxed{\begin{array}{c}t>0,\ {\rm branch\ A,}\\ {\rm Hamiltonian\ }H^{(+)}_{(A)}(t)\end{array}}\stackrel{{\bf option\ A}}{\Longleftarrow}\boxed{\begin{array}{c}{\rm the\ EP\ ``crossroad",}\\ {\rm indeterminacy\ at}\ t=0\end{array}}\stackrel{{\bf option\ B}}{\Longrightarrow}\boxed{\begin{array}{c}t>0,\ {\rm branch\ B,}\\ {\rm Hamiltonian\ }H^{(+)}_{(B)}(t)\end{array}}\end{array}$
Thus, in principle, the future extensions of our present models might even
incorporate a multiverse-resembling branching of evolutions at $t=0$.
Marginally, let us add that in such a branched-evolution setting one could
find applications even for some results on non-unitary, spectral-reality-
violating evolutions. An illustration may be found in papers (sampled by [53])
in which the search for the EP degeneracies has been performed without any
effort to guarantee the reality of the spectrum.
### 5.2 Perturbation theory near EPs using nonstandard unperturbed
Hamiltonians.
At the above-mentioned “cross-road” EP instant $t=0$ the Hamiltonian ceases to
be diagonalizable. This means that such an instant can be perceived as a
genuine quantum analogue of the classical Thom’s bifurcation singularity alias
catastrophe [52]. The distinguishing feature of the phenomenon in its quantum
form is that it is “instantaneously” incompatible with the postulates of
quantum theory. Fortunately, the theory returns in full force at any,
arbitrarily small time before or after the catastrophe.
In [13], several explicit and strictly algebraic, solvable-model illustrations
of such a passage through the EPN singularity may be found described in full
detail. Alternatively, the phenomenon can be also described in a model-
independent manner. Indeed, a return to the diagonalizability can be
characterized as a perturbation of the non-diagonalizable $t=0$ Hamiltonian
$H_{(EP)}$. Thus, any multiplet of states $|\vec{\psi}(t)\rangle$ can be
constructed, before or after $t=0$, as the solution of a properly perturbed
Schrödinger equation
$(H_{(EP)}+\lambda\,W)|\vec{\psi}\rangle=\epsilon\,|\vec{\psi}\rangle\,.$ (12)
One has to keep in mind that the unperturbed Hamiltonian itself is an
anomalous operator, whose conventional diagonalization (or, more generally,
spectral representation) does not exist. Since we consider
here just its finite-dimensional matrix forms, the constructive approach to
Eq. (12) can be based on the evaluation of the so-called transition matrices
$Q_{(EP)}$, defined as solutions of the Schrödinger-like linear algebraic
equation
$H_{(EP)}Q_{(EP)}=Q_{(EP)}J_{(JB)}(E_{(EP)})\,.$
The symbol $J_{(JB)}(E_{(EP)})$ denotes here the canonical representation of
$H_{(EP)}$. Once we decide to choose it in the most common Jordan-matrix form,
the related transition matrices $Q_{(EP)}$ can be reinterpreted as an analogue
of the unperturbed basis. In this basis, the perturbed Schrödinger equation
(12) acquires the canonical form
$[J_{(JB)}(E_{(EP)})+\lambda\,V]|\vec{\phi}\rangle=\epsilon\,|\vec{\phi}\rangle\,.$
(13)
Interested readers are recommended to consult Refs. [38] and [45] for the
further details of the solution of such an equation.
For our present purposes the essence of the latter technicalities may be
explained using the elementary unperturbed real-matrix Hamiltonians of Ref.
[13],
$H^{(2)}_{(EP)}=\left[\begin{array}[]{cc}1&1\\ -1&-1\end{array}\right]\,,\ \ \ \ \ \ H^{(3)}_{(EP)}=\left[\begin{array}[]{ccc}2&\sqrt{2}&0\\ -\sqrt{2}&0&\sqrt{2}\\ 0&-\sqrt{2}&-2\end{array}\right],\ \ldots\,.$
For this series of examples all of the transition matrices are non-unitary but
known in closed form. At $N=3$ one gets
$Q^{(3)}_{(EP)}=\left[\begin{array}[]{ccc}2&2&1\\ -2\sqrt{2}&-\sqrt{2}&0\\ 2&0&0\end{array}\right]\,$
etc. Since the lack of space does not allow us to reproduce the
further details here, let us redirect the readers to paper [39] (in which some
overall conceptual features of the EP-related perturbation approximation
construction are described) and to paper [47] (in which the more complicated
EPs with geometric multiplicity greater than one are taken into
consideration).
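For the $N=3$ example above, the defining relation $H_{(EP)}Q_{(EP)}=Q_{(EP)}J_{(JB)}(E_{(EP)})$ can be verified directly (a minimal numerical sketch of our own; here the triply degenerate eigenvalue turns out to be $E_{(EP)}=0$, so that $J_{(JB)}$ is the single $3\times 3$ Jordan block with zero diagonal):

```python
import numpy as np

s2 = np.sqrt(2)
H3 = np.array([[2,     s2,  0],
               [-s2,    0,  s2],
               [0,    -s2, -2]])        # H^(3)_(EP) of Ref. [13]
Q3 = np.array([[2,      2,  1],
               [-2*s2, -s2, 0],
               [2,      0,  0]])        # transition matrix Q^(3)_(EP)
J  = np.array([[0, 1, 0],
               [0, 0, 1],
               [0, 0, 0]])              # Jordan block J_(JB)(E_(EP)) with E_(EP) = 0

# H^(3)_(EP) is non-diagonalizable (in fact nilpotent): H^3 = 0 ...
print(np.allclose(np.linalg.matrix_power(H3, 3), 0))   # True
# ... and Q intertwines H with its canonical Jordan representation: H Q = Q J.
print(np.allclose(H3 @ Q3, Q3 @ J))                    # True
```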
Out of the most essential conclusions of the latter two studies let us pick up
the single and apparently obvious fact (still not observed, say, in Refs. [23,
37]) that the class of admissible, operationally meaningful perturbations must
not violate the self-adjointness of the Hamiltonian in the correctly
reconstructed physical Hilbert space ${\cal H}$.
### 5.3 Constructions based on the differential Schrödinger equations.
In the year 1998 Bender and Boettcher discovered the existence of the real
(i.e., in principle, experimentally accessible) EPs generated by certain
local and non-Hermitian but parity-time symmetric (${\cal PT}-$symmetric)
potentials [6]. The EPs were interpreted as instants of the spontaneous
breakdown of ${\cal PT}-$symmetry. Their reality was unexpected because for
the conventional local potentials the EPs are never real [4].
Among the specific studies of the non-Hermitian but ${\cal PT}-$symmetric
differential Schrödinger equations $H\,\psi=E\,\psi$ a distinguished position
belongs to paper [54] by Dorey et al who considered the angular-momentum-
spiked oscillator Hamiltonians
$H(M,L,A)=-\,\frac{d^{2}}{dx^{2}}+\frac{L(L+1)}{x^{2}}-(ix)^{2M}-A\,(ix)^{M-1}\,,\
\ \ \ M=1,2,\ldots\,,\ \ \ \ L,A\in\mathbb{R}$ (14)
in which the “coordinate” $x$ lay on a suitable ad hoc complex contour. They
showed that inside a suitable domain ${\cal D}$ of parameters these
Hamiltonians generate strictly real bound-state-like spectra. These
authors were the first to describe the shape and role of the boundaries
$\partial{\cal D}$ formed by the EPs. Unfortunately, they did not make the
picture complete because they did not construct the corresponding physical
inner products.
### 5.4 Harmonic oscillator.
Once one restricts attention to the most elementary choice of $M=1$ yielding
the one-parametric harmonic-oscillator Hamiltonian $H(1,L,A)=H^{(HO)}(L)$, the
model becomes exactly solvable at all real $L\in\mathbb{R}$ [55]. For this
reason the HO domain of unitarity ${\cal D}^{[HO]}$ has an elementary,
multiply connected form of a “punched” interval with EPs (i.e., with elements
$L^{(EP)}$ of boundary $\partial{\cal D}^{[HO]}$) excluded,
${\cal
D}^{[HO]}=\left(-\frac{1}{2},\frac{1}{2}\right)\bigcup\left(\frac{1}{2},\frac{3}{2}\right)\bigcup\left(\frac{3}{2},\frac{5}{2}\right)\bigcup\ldots\,.$
Figure 6: Spectrum of Hamiltonian (14) at $M=1$.
This property (cf. Fig. 6) enabled us to pay more attention, in paper [56], to
one of the key challenges connected with the theory, viz., to the constructive
analysis of the practical consequences of the nontriviality and of the
ambiguity of the related angular-momentum-dependent metrics
$\Theta=\Theta(L)$. Our main result was the construction of a complete menu of
the infinite-parametric assignments $H\to\Theta(H)$ of an eligible metric to
the Hamiltonian.
The very possibility of doing so makes the HO model truly unique. For
technical as well as phenomenological reasons we restricted our attention
to the parameters $L$ lying close to the points of the boundary of the
domain of unitarity, i.e., not far from the set of EPs
$\partial{\cal D}=\left\\{-\frac{1}{2}\,,\ \frac{1}{2}\,,\ \frac{3}{2}\,,\ \
\ldots\right\\}\,.$ (15)
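For the reader's orientation, the exactly solvable spectrum of Ref. [55] may be recalled in compact form (a sketch; the sign and quantum-number conventions below are our own, fixed so as to reproduce the degeneracy pattern quoted in the text):

$E^{(\pm)}_{n}(L)=4n+2\pm(2L+1)\,,\ \ \ \ n=0,1,2,\ldots\,.$

At every half-integer $L^{(EP)}\in\partial{\cal D}$ the levels then cross pairwise; at $L^{(EP)}=-1/2$, in particular, the pairs degenerate simultaneously at $E^{(EP)}=2,6,10,\ldots$.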
The basic technical ingredient in the construction of the metrics (see its
details as well as the rather long explicit formulae in [56]) was twofold.
Firstly, the availability of the closed-form diagonalization of $H^{(HO)}(L)$
enabled us to replace the Hamiltonian, at any one of its EP limits, by an
equivalent matrix called canonical or Jordan-block representation. Thus, at
$L^{(EP)}=-1/2$, for example, such a representation has the elementary block-
diagonal form
${J}^{(-1/2)}_{(EP)}=\left(\begin{array}[]{cc|cc|cc}2&1&0&0&0&\ldots\\\
0&2&0&0&0&\ldots\\\ \hline\cr 0&0&6&1&0&\ldots\\\ 0&0&0&6&0&\ldots\\\
\hline\cr 0&0&0&0&10&\ldots\\\
\vdots&\vdots&\vdots&\vdots&\ddots&\ddots\end{array}\right)\ +\ {\rm
corrections}\,.$
Secondly, the highly nontrivial fact that all of the unavoided energy-level
crossings occurred pairwise and simultaneously led to the decomposition of the
metric-determining relation $H^{\dagger}(L)\Theta(L)=\Theta(L)\,H(L)$ (cf. Eq.
(4)) to a set of its finite-dimensional (in fact, two-by-two) matrix
components numbered by the separate degenerate energies
$E^{(EP)}=2,6,10,\ldots\ $.
In such a setup, every value $L^{(EP)}=-1/2,1/2,3/2,\ldots$ may be perceived
as an instant of a quantum phase transition which involves all levels at once.
In a way accounting, in an exhaustive manner, for the non-uniqueness, the one-
parametric ambiguity of every two-by-two submatrix of $\Theta(L)$ (once more,
recall Eq. (8) for illustration) contributes, independently, to the ultimate
infinite-parametric ambiguity of selection of the physics-determining inner
product in the infinite-dimensional physical Hilbert space ${\cal H}^{(HO)}$.
## 6 Summary.
At present the Dyson’s traditional 3HS recipe (4) based on the de-
Hermitization interpretation $\mathfrak{h}\to H$ of Eq. (6) is usually
inverted to yield the flowchart
$\begin{array}{c}\boxed{\begin{array}{c}{\rm input:}\\ {\rm non\textrm{-}Hermitian\ }H\ {\rm with\ real\ spectrum,}\\ \underline{\rm user\textrm{-}friendly}\ {\rm Hilbert\ space\ }{\cal K}\end{array}}\longrightarrow\boxed{\begin{array}{c}{\rm output:}\\ {\rm metric\ }\Theta=\Omega^{\dagger}\Omega\ \ ({\rm s.\ t.}\ H^{\dagger}\Theta=\Theta H),\\ \underline{\rm physical}\ {\rm Hilbert\ space\ }{\cal H}\end{array}}\\ \nearrow\!\!\!\swarrow\ {\bf equivalent\ predictions}\\ \boxed{\begin{array}{c}{\rm reference:}\\ {\rm Hamiltonian}\ \mathfrak{h}=\Omega H\Omega^{-1}=\mathfrak{h}^{\dagger}\\ \underline{\rm inaccessible}\ {\rm Hilbert\ space\ }{\cal L}\end{array}}\,.\end{array}$
The model-building process is initiated by the choice of a bona fide
Hamiltonian $H$ which is defined and non-Hermitian in auxiliary space ${\cal
K}$. The theory is then based on an exact or approximate re-Hermitization of
$H$ via $\Theta$, with a very rare or marginal explicit subsequent reference
to the lower-case Hamiltonian $\mathfrak{h}$ or to the map of Eq. (6).
Finally, the variability of parameters in $H=H(\lambda)$ is taken into
account, and the physical domain ${\cal D}$ of the admissible values of these
parameters is determined.
In applications the 3HS formalism is to be kept user-friendly, with reasonably
calculable predictions. Besides the expected enhancement of technical
friendliness, an equally important merit of the 3HS formalism should be seen
in an emerging access to new and unusual phenomena. By our present selection,
all of the phenomena under consideration were characterized by the proximity
of EPs, treated as forming the boundary $\partial{\cal D}$ of the domains of
“acceptable” alias “physical” (i.e., unitarity-compatible) parameters of the
model in question. We reviewed and slightly extended several recent related
results.
From the abstract methodical point of view we put emphasis upon the
suitability and amendments of the necessary (although, sometimes, less usual)
perturbation-type construction techniques. This enabled us to clarify several
counterintuitive facts characterizing the behavior of the closed quantum
systems in the small vicinities of higher-order EPs. As a main conclusion
the readers should remember that in these vicinities the technique
of perturbations offers one of the most efficient tools for the
parametrization and classification of the “admissible” (i.e., unitarity-
preserving) multiparametric Hamiltonians
$H(\lambda)=H[a(\lambda),b(\lambda),\ldots,z(\lambda)]$.
## References
* [1] Kato T 1966 Perturbation Theory for Linear Operators (Berlin: Springer-Verlag)
* [2] Moiseyev N 2011 Non-Hermitian Quantum Mechanics (Cambridge: CUP)
* [3] Heiss W D, Müller M and Rotter I 1998 Phys. Rev. E 58 2894
* [4] Bender C M and Wu T T 1969 Phys. Rev. 184 1231
* [5] Alvarez G 1995 J. Phys. A: Math. Gen. 28 4589–4598
* [6] Bender C M and Boettcher S 1998 Phys. Rev. Lett. 80 5243–5246
* [7] Bender C M 2007 Rep. Prog. Phys. 70 947–1018
* [8] http://www.nithep.ac.za/2g6.htm (accessed 2018 Jan 28)
* [9] Znojil M and Růžička F 2019 J. Phys. Conf. Ser. 1194 012120
* [10] Znojil M 2007 J. Phys. A: Math. Theor. 40 4863–4875
* [11] Znojil M 2007 J. Phys. A: Math. Theor. 40 13131–13148
* [12] Znojil M 2019 Phys. Rev. A 100 032124
* [13] Znojil M 2020 Proc. Roy. Soc. A 476 20190831
* [14] Dyson F J 1956 Phys. Rev. 102 1217
* [15] Dieudonné J 1961 Proc. Int. Symp. Lin. Spaces (Oxford: Pergamon) pp 115–122
* [16] Scholtz F G, Geyer H B and Hahne F J W 1992 Ann. Phys. (NY) 213 74–101
* [17] Znojil M 2009 Symm. Integ. Geom. Methods and Applications 5 001
* [18] Mostafazadeh A 2010 Int. J. Geom. Meth. Mod. Phys. 7 1191–1306
* [19] Bagarello F, Gazeau J P, Szafraniec F H and Znojil M 2015 Non-Selfadjoint Operators in Quantum Physics: Mathematical Aspects ed F Bagarello, J-P Gazeau et al (Hoboken, NJ: John Wiley & Sons)
* [20] Christodoulides D and Yang J K 2018 Parity-time Symmetry and Its Applications (Singapore: Springer)
* [21] Bender C M 2018 PT Symmetry in Quantum and Classical Physics (Singapore: World Scientific).
* [22] Stone M H 1932 Ann. Math. 33 643–648
* [23] Trefethen L N and Embree M 2005 Spectra and Pseudospectra (Princeton, NJ: PUP)
* [24] Antoine J P and Trapani C 2015 Non-Selfadjoint Operators in Quantum Physics: Mathematical Aspects, ed F Bagarello, J-P Gazeau et al (Hoboken, NJ: John Wiley & Sons) pp 345–402
* [25] Znojil M 2008 Phys. Rev. D 78 085003
* [26] Fring A and Moussa M H Y 2016 Phys. Rev. A 93 042114
* [27] Langer H and Tretter Ch 2004 Czech. J. Phys. 54 1113–1120
* [28] Albeverio S and Kuzhel S 2015 Non-Selfadjoint Operators in Quantum Physics: Mathematical Aspects, ed F Bagarello, J-P Gazeau et al (Hoboken, NJ: John Wiley & Sons) pp 293–344
* [29] Mostafazadeh A 2003 Class. Quantum Grav. 20 155
* [30] Znojil M 2017 Ann. Phys. (NY) 385 162–179
* [31] Bender C M and Milton K A 1997 Phys. Rev. D 55 R3255
* [32] Andrianov A A, Ioffe M V, Cannata F and Dedonder J P 1999 Int. J. Mod. Phys. A 14 2675
* [33] Buslaev V and Grecchi V 1993 J. Phys. A: Math. Gen. 26 5541
* [34] Jones H F and Mateo J 2006 Phys. Rev. D 73 085002
* [35] Mostafazadeh A 2004 Ann. Phys. (NY) 309 1
* [36] Krejčiřík D, Lotoreichik V and Znojil M 2018 Proc. Roy. Soc. A: Math. Phys. Eng. Sci. 474 20180264
* [37] Krejčiřík D, Siegl P, Tater M and Viola J 2015 J. Math. Phys. 56 103513
* [38] Znojil M 2018 Phys. Rev. A 97 032114
* [39] Znojil M 2020 Entropy 22 000080
* [40] Znojil M 2020 Symmetry 12 892
* [41] Čížek J 1966 J. Chem. Phys. 45 4256
* [42] Bishop R F and Znojil M 2020 Eur. Phys. J. Plus 135 374
* [43] Znojil M, Bishop R F and Veselý P 2021 in preparation
* [44] Graefe E M, Günther U, Korsch H J and Niederle A E 2008 J. Phys. A: Math. Theor. 41 255206
* [45] Znojil M 2020 Proc. Roy. Soc. A: Math., Phys. & Eng. Sci. A 476 20200292
* [46] Znojil M and Borisov D I 2020 Nucl. Phys. B 957 115064
* [47] Znojil M 2020 Symmetry 12 1309
* [48] Siegl P and Krejčiřík D 2012 Phys. Rev. D 86 121702(R)
* [49] Znojil M, Semorádová I, Růžička F, Moulla H and Leghrib I 2017 Phys. Rev. A 95 042122
* [50] Ashtekar A, Pawlowski T and Singh P 2006 Phys. Rev. D 74 084003
* [51] Znojil M 2016 Non-Hermitian Hamiltonians in Quantum Physics, ed F Bagarello, R Passante and C Trapani (Cham: Springer) pp 383–399
* [52] Zeeman E C 1977 Catastrophe Theory-Selected Papers 1972-1977 (Reading: Addison-Wesley)
* [53] Znojil M 2019 Ann. Phys. (NY) 405 325–339
* [54] Dorey P, Dunning C and Tateo R 2001 J. Phys. A: Math. Gen. 34 5679–5703
* [55] Znojil M 1999 Phys. Lett. A 259 220–223
* [56] Znojil M 2020 Sci. Reports 10 18523
# On sparse perfect powers
A. Moscariello Dipartimento di Matematica, Università di Pisa, Largo Bruno
Pontecorvo 5, 56127 Pisa, Italy<EMAIL_ADDRESS>
###### Abstract.
This work is devoted to proving that, given an integer $x\geq 2$, there are
infinitely many perfect powers, coprime with $x$, having exactly $k\geq 3$
non-zero digits in their base $x$ representation, except for the case
$x=2,k=4$, for which a known finiteness result by Corvaja and Zannier holds.
###### Key words and phrases:
base representation, sparse powers
###### 2020 Mathematics Subject Classification:
11D41, 11P99
## Introduction
Let $k$ and $x$ be positive integers, with $x\geq 2$. In this work, we will
study perfect powers having exactly $k$ non-zero digits in their
representation in a given basis $x$. These perfect powers are exactly (up to
dividing by a suitable factor) the set solutions of the Diophantine equation
(1) $y^{d}=c_{0}+\sum_{i=1}^{k-1}c_{i}x^{m_{i}},$
with $y,d$ positive integers greater than $1$, and
$c_{0},c_{1},\dots,c_{k-1}\in\\{1,\dots,x-1\\}$ and $m_{1}<\dots<m_{k-1}$
positive integers. We call perfect powers having a fixed number of non-zero
digits _sparse_ , borrowing the terminology used for polynomials (a _sparse_
polynomial is a polynomial having _relatively few_ non-zero terms, compared to
its degree) Special cases of this innocent problem has been widely studied in
the literature, and its appearence is quite deceiving: for instance, the
lowest case, obtained with the positions $k=2$, $c_{0}=c_{1}=1$, is the well-
known Catalan’s conjecture, first proposed in 1844, which stood open for
nearly 150 years before being proved by Mihailescu (cf. [8]) in the case
$x=2$. Furthermore, the case $k=2$, $x>2$ (i.e. perfect powers having exactly
two digits in their base $x>2$ representation) is still open (cf. [9,
§4.4.3]), and is related to the well-known ABC conjecture.
This class of problems also presents some ties to algebraic geometry. In fact,
Corvaja and Zannier showed in [3] that solutions of an equation of the form
(1) are associated with $S$-integral points on certain projective varieties.
For instance, assume for the sake of simplicity that $x=p$ is a prime number,
and that $k,d$ are fixed and $c_{0}=c_{1}=\dots=c_{k-1}=1$ in equation (1).
Consider, in the projective space $\mathbb{P}_{k}$, the variety
$\mathbb{P}_{k}\setminus D$, where $D$ denotes the divisor consisting of the
$k-1$ lines $X_{i}=0$, for $i=0,\dots,k-2$, and the hypersurface
$X_{k-1}^{d}=X_{0}^{d}+\displaystyle\sum_{i=1}^{k-2}X_{0}^{d-1}X_{i}$, and let
$S=\\{\infty,p\\}$. Then, $S$-integral points of this variety are such that
the values $y_{i}=\frac{X_{i}}{X_{0}}$, where $i=1,\dots,k-2$, and
$y_{k-1}=\left(\frac{X_{k-1}}{X_{0}}\right)^{d}-1-\displaystyle\sum_{i=1}^{k-2}\frac{X_{i}}{X_{0}}$
are all $S$-units. Also, the elements $y_{i}$ all have the form $\pm
p^{m_{i}}$ and are such that $1+y_{1}+\dots+y_{k-1}$ is a $d$th perfect power,
and are thus solutions of equation (1). Now, the study of these points, and
their distribution, can also be seen as a particular instance of a conjecture
by Lang and Vojta (see [7]); in our context, this conjecture would imply that
the set of $S$-integral points on $\mathbb{P}_{k}\setminus D$ is not Zariski
dense.
Besides Mihailescu’s Theorem, the more general case $k=2$ is still open;
however, there is some evidence suggesting that there may be only a finite
number of perfect powers having exactly two non-zero digits in any given base
$x$. The case $k=3$ has been studied recently (cf. [1], [5]); in particular,
Corvaja and Zannier developed in [5] an approach using $v$-adic convergence of
analytic series at $S$-unit points to reduce this problem to the study of
polynomial identities involving lacunary polynomial powers (i.e. polynomial
powers $P(T)^{d}$ having a fixed number $k$ of terms). This method allowed
them to provide a classification of perfect powers having exactly three non-
zero digits.
Specifically, for $x=2$ they obtained the following characterization.
###### Theorem 1 ([2]).
For $d\geq 2$ integer, the perfect $d$th powers in $\mathbb{N}$ having at most
three non-zero digits in the binary scale form the union of finitely many sets
of the shape $\\{q2^{md}\ |\ m\in\mathbb{N}\\}$ and, if $d=2$, also the set
$\\{(2^{a}+2^{b})^{2}\ |\ a,b\in\mathbb{N}\\}$.
In the same work, the authors comment that their method can be used to obtain
results equivalent to Theorem 1 for any given base $x$. In fact, Theorem 1
states that for $k=3$, $x=2$ there are only finitely many _exceptional_
solutions besides the infinite family $y=2^{a}+1$, $d=2$, corresponding to the
polynomial identity $(T+1)^{2}=T^{2}+2T+1$.
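This infinite family is easy to verify directly (a small sketch of our own in Python): the identity gives $(2^{a}+1)^{2}=2^{2a}+2^{a+1}+1$, which has exactly three non-zero binary digits whenever $2a\neq a+1$, i.e. for every $a\geq 2$.

```python
# The infinite k = 3 family behind Theorem 1: the squares
# (2^a + 1)^2 = 2^(2a) + 2^(a+1) + 1 have exactly three non-zero
# binary digits for every a >= 2 (for a = 1 two exponents collide).
for a in range(2, 20):
    n = (2**a + 1)**2
    assert bin(n).count("1") == 3

print((2**5 + 1)**2, bin((2**5 + 1)**2))   # 1089 0b10001000001
```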
Intuitively, one might expect that as the number of terms $k$ increases, the
number of polynomial powers $P(T)^{d}$ having exactly $k$ terms increases as
well. Moreover, since Corvaja and Zannier’s method can be adjusted, under
certain assumptions, to study perfect powers with $k\geq 3$ non-zero digits, we
might infer that there is an increasing number of infinite families of
solutions to equation (1).
However, this is not necessarily the case. In fact, while studying the case
$k=4$, Corvaja and Zannier obtained families of lacunary polynomial powers
having exactly $4$ terms that are not related to solutions of the Diophantine
equation $y^{d}=c_{0}+c_{1}2^{m_{1}}+c_{2}2^{m_{2}}+c_{3}2^{m_{3}}$. Actually,
they proved that this Diophantine equation has only finitely many solutions.
###### Theorem 2 ([2, Theorem 1.1]).
There are only finitely many odd perfect powers in $\mathbb{N}$ having
precisely four non-zero digits in their representation in the binary scale.
In this work, we prove that these results are _exceptional_. Namely, we show
that it is possible to obtain infinite families of perfect powers (coprime
with $x$) having exactly $k\geq 3$ non-zero digits in their base $x\geq 2$
representation (moreover, we will show that we can almost always provide
infinite families of perfect squares) for all values of $x$ and $k$, except
for the case $x=2$, $k=4$ studied by Corvaja and Zannier (Theorem 2).
## 1. Main result
Consider the equation
(1) $y^{d}=c_{0}+\sum_{i=1}^{k-1}c_{i}x^{m_{i}}.$
In this work we want to determine whether the Diophantine equation (1) admits
infinitely many solutions for given values of $x$ and $k$. Since some
solutions can be induced from polynomial identities, and since, intuitively, as
the number of terms $k$ increases there should be more and more
polynomial powers $P(T)^{d}$ having exactly $k$ non-zero terms, our
expectation is that, as $k$ increases, it becomes easier to find infinite families
of perfect powers with exactly $k$ non-zero digits; our approach will focus on
finding such families in some specific settings. Indeed, we will see that
finiteness results can only be obtained in the cases $k=2$ and $k=4$, $x=2$.
First, notice that the natural expansion of
$(1+X_{1}+\dots+X_{p-1})^{d}\in\mathbb{C}[X_{1},\dots,X_{p-1}]$ has exactly
$\binom{p-1+d}{d}$ distinct terms. Therefore, we can choose a suitable
specialization $X_{i}=x^{\alpha_{i}}$, with positive integers $\alpha_{i}$
such that different terms of the expansion yield different powers of $x$;
under the assumption that $x$ is greater than all coefficients of this
expansion, we can obtain a correspondence between the terms of this expansion
and the digits of our desired perfect power, and thus obtain perfect powers
whose base $x$ representation has exactly $\binom{p-1+d}{d}$ non-zero digits.
Similarly, under the same assumptions, we can choose a set of exponents
$\alpha_{i}$ such that there are exactly $\beta$ equalities among those terms,
for relatively small values of $\beta$, thus obtaining perfect powers having
exactly $\binom{p-1+d}{d}-\beta$ non-zero digits in their base $x$
representation (where $\beta$ hopefully takes all values between $0$ and
$\binom{p-1+d}{d}-\binom{p-2+d}{d-1}-1$).
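This counting is easy to check numerically. The following sketch (the helper
function and the chosen exponents are ours, not from the text) takes $d=2$,
$p=3$, so the expansion has $\binom{4}{2}=6$ distinct terms, and specializes
$X_{i}=10^{\alpha_{i}}$ with well-spread exponents so that all six terms land
on different powers of $10$:

```python
from math import comb

def nonzero_digits(n, base):
    """Count the non-zero digits of n written in the given base."""
    count = 0
    while n:
        if n % base:
            count += 1
        n //= base
    return count

# For d = 2, p = 3 the expansion of (1 + X_1 + X_2)^2 has C(p-1+d, d) = 6
# distinct terms; specializing X_i = 10^{alpha_i} with alphas = (2, 5)
# keeps every term on a different power of 10.
d, p = 2, 3
y = 1 + 10**2 + 10**5                              # y = 100101
assert y**d == 10020210201                         # = 1 + 2*10^2 + 10^4 + 2*10^5 + 2*10^7 + 10^10
assert nonzero_digits(y**d, 10) == comb(p - 1 + d, d)
```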
This simple idea naturally points to the most favourable case: the integers
$\binom{i}{2}$ form a sequence partitioning $\mathbb{N}$ into relatively small
intervals, and the coefficients of the expansion of
$(1+X_{1}+\dots+X_{p-1})^{2}$ are all either $1$ or $2$. For $p\geq 1$ and
$0=\alpha_{0}<\alpha_{1}<\dots<\alpha_{p-1}$ we can expand
$(x^{\alpha_{0}}+x^{\alpha_{1}}+\dots+x^{\alpha_{p-1}})^{2}$ in the following way:
(*)
$\begin{gathered}(x^{\alpha_{0}}+x^{\alpha_{1}}+\dots+x^{\alpha_{p-1}})^{2}=x^{2\alpha_{0}}+(2x^{\alpha_{0}+\alpha_{1}})+x^{2\alpha_{1}}+\left(2x^{\alpha_{2}+\alpha_{1}}+2x^{\alpha_{2}+\alpha_{0}}\right)+x^{2\alpha_{2}}\\\
+\dots+x^{2\alpha_{p-3}}+\left(\sum_{i=0}^{p-3}2x^{\alpha_{p-2}+\alpha_{i}}\right)+x^{2\alpha_{p-2}}+\left(\sum_{i=0}^{p-2}2x^{\alpha_{p-1}+\alpha_{i}}\right)+x^{2\alpha_{p-1}}.\end{gathered}$
Clearly no coefficient of this expansion exceeds $x$, and if $x>2$ every
coefficient is a valid digit, so this expression can be used as a starting
point to yield a base $x$ representation. If $x=2$, however, the expression
needs to be slightly adjusted to become a binary representation (the
coefficients equal to $2$ must be absorbed into the exponents), and for this
reason we might have to slightly alter our construction; thus we will discuss
the case $x=2$ separately from the rest.
### 1.1. Perfect powers with arbitrary number of binary digits
Clearly, the only admissible digits in the binary scale are $0$ and $1$, thus,
in base $2$, equation (1) becomes
$y^{d}=1+2^{\alpha_{1}}+\dots+2^{\alpha_{k-1}}.$
Let us summarize the known results, for $k\leq 4$:
* •
Mihailescu’s Theorem states that there is only one odd perfect power having
exactly two non-zero digits, that is, $3^{2}=1+2^{3}$.
* •
There are infinitely many odd squares having exactly three non-zero digits,
like for instance $(1+2^{\alpha_{1}})^{2}$ (see also Theorem 1).
* •
Theorem 2 states that there are only finitely many odd perfect powers having
exactly four non-zero digits.
Then, assume $k\geq 5$. Clearly, Equation (* ‣ 1) can be adjusted to obtain
the following binary representation (remember that $\alpha_{0}=0$):
($\star$)
$\begin{gathered}(2^{\alpha_{0}}+2^{\alpha_{1}}+\dots+2^{\alpha_{p-1}})^{2}=2^{2\alpha_{0}}+(2^{\alpha_{0}+\alpha_{1}+1})+2^{2\alpha_{1}}+\left(2^{\alpha_{2}+\alpha_{1}+1}+2^{\alpha_{2}+\alpha_{0}+1}\right)+2^{2\alpha_{2}}\\\
+\dots+2^{2\alpha_{p-3}}+\left(\sum_{i=0}^{p-3}2^{\alpha_{p-2}+\alpha_{i}+1}\right)+2^{2\alpha_{p-2}}+\left(\sum_{i=0}^{p-2}2^{\alpha_{p-1}+\alpha_{i}+1}\right)+2^{2\alpha_{p-1}}.\end{gathered}$
We rearranged the expression in this way since, for $i=1,\dots,p-1$, the $i$th
bracket contains pairwise distinct terms, ranging between
$2^{\alpha_{i}+\alpha_{0}+1}=2^{\alpha_{i}+1}$ and
$2^{\alpha_{i}+\alpha_{i-1}+1}$. Thus if $\alpha_{i}\geq\alpha_{i-1}+2$, every
term of the $i$th bracket is strictly lower than $2^{2\alpha_{i}}$, while if
$\alpha_{i}\geq 2\alpha_{i-1}-1$, every term of that bracket is no smaller
than $2^{2\alpha_{i-1}}$, with equality happening if and only if
$2^{\alpha_{i}+\alpha_{0}+1}=2^{2\alpha_{i-1}}$, that is, if and only if
$\alpha_{i}=2\alpha_{i-1}-1.$ Hence, if $\alpha_{i}\geq 2\alpha_{i-1}-1$ for
all $i$, equation ($\star$ ‣ 1.1) yields a perfect square having
$\binom{p+1}{2}$ terms, with at most $p-2$ coincident terms, their number
being the number of indices $i$ such that $\alpha_{i}=2\alpha_{i-1}-1.$
Therefore, we can easily prove the following.
###### Lemma 3.
Let $k$ be a positive integer greater than $4$ not of the form
$\binom{p}{2}+1$, for a positive integer $p$. Then there exist infinitely many
odd perfect squares having exactly $k$ non-zero digits in their representation
in the binary scale.
###### Proof.
Write $k$ as $k=\binom{p+1}{2}-\beta$, with $\beta\in\\{0,\dots,p-2\\}$.
Define a sequence $(\alpha_{1},\dots,\alpha_{p-1})$ of positive integers such
that
$\begin{cases}\alpha_{1}\geq 3,\\\ \alpha_{i}=2\alpha_{i-1}-1\text{ for
}i=2,\dots,\beta+1,\\\ \alpha_{i}>2\alpha_{i-1}-1\text{ for }i\geq\beta+2.\\\
\end{cases}$
Then, arguing as in the previous paragraphs, we can show that there are
exactly $\beta$ coincident terms in the expansion ($\star$ ‣ 1.1); moreover,
those coincident terms are of the form $2^{2\alpha_{i-1}}$ and
$2^{\alpha_{i}+\alpha_{0}+1}$, which then form the term
$2^{2\alpha_{i-1}}+2^{\alpha_{i}+\alpha_{0}+1}=2^{\alpha_{i}+\alpha_{0}+2}<2^{\alpha_{i}+\alpha_{1}+1}$
(since $\alpha_{1}\geq 3$): thus the positive integer
$y=(1+2^{\alpha_{1}}+\dots+2^{\alpha_{p-1}})$ is such that $y^{2}$ has exactly
$\binom{p+1}{2}-\beta=k$ non-zero digits in its representation in the binary
scale. ∎
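The construction in the proof is easy to check numerically. The following
sketch is ours (the function name and the `slack` parameter, which merely
realizes the strict inequality $\alpha_{i}>2\alpha_{i-1}-1$, are not from the
text); it builds the sequence and verifies the digit count for small $p$ and
$\beta$:

```python
from math import comb

def lemma3_y(p, beta, a1=3, slack=2):
    """Build y = 1 + 2^{a_1} + ... + 2^{a_{p-1}} as in the proof of Lemma 3:
    a_1 >= 3; a_i = 2 a_{i-1} - 1 for i = 2, ..., beta+1 (each forcing one
    coincidence in the expansion); a_i > 2 a_{i-1} - 1 for the remaining i."""
    assert p >= 3 and 0 <= beta <= p - 2 and a1 >= 3
    alphas = [a1]
    for i in range(2, p):
        alphas.append(2 * alphas[-1] - 1 + (0 if i <= beta + 1 else slack))
    return 1 + sum(2**a for a in alphas)

# y^2 should have exactly C(p+1, 2) - beta non-zero binary digits.
for p in range(3, 8):
    for beta in range(p - 1):
        y = lemma3_y(p, beta)
        assert bin(y * y).count("1") == comb(p + 1, 2) - beta
```

For instance, $p=4$, $\beta=2$ gives $y=1+2^{3}+2^{5}+2^{9}=553$ and
$553^{2}=305809$ has exactly $8=\binom{5}{2}-2$ non-zero binary digits.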
Notice that if $k=\binom{p}{2}+1$ (i.e. $\beta=p-1$) this method would not
work; thus we have to treat this case in a slightly different way.
###### Lemma 4.
Let $k$ be a positive integer greater than $4$ of the form $\binom{p}{2}+1$,
with $p$ a positive integer. Then there are infinitely many odd perfect
squares having exactly $k$ non-zero digits in their binary representation.
###### Proof.
Notice that the binary representation of
$(1+2^{\alpha_{1}}+2^{\alpha_{1}+1}+2^{\alpha_{1}+2})^{2}$ is given by
$(1+2^{\alpha_{1}}+2^{\alpha_{1}+1}+2^{\alpha_{1}+2})^{2}=1+2^{\alpha_{1}+1}+2^{\alpha_{1}+2}+2^{\alpha_{1}+3}+2^{2\alpha_{1}}+2^{2\alpha_{1}+4}+2^{2\alpha_{1}+5},$
hence it has exactly $7=\binom{4}{2}+1$ non-zero digits. If $k\geq 11$, define
as before a sequence $(\alpha_{1},\dots,\alpha_{p-1})$ of positive integers
such that
$\begin{cases}\alpha_{1}\geq 4,\\\ \alpha_{i}=\alpha_{1}+i-1\text{ for
}i=2,3,\\\ \alpha_{4}=2\alpha_{1}+4,\\\
\alpha_{i}=2\alpha_{i-1}-1\text{ for }i>4.\end{cases}$
Let $y=1+2^{\alpha_{1}}+2^{\alpha_{2}}+\dots+2^{\alpha_{p-1}}.$ Then the
expansion ($\star$ ‣ 1.1) of $y^{2}$ has $\binom{p+1}{2}$ terms; let us count
how many equalities there are between those terms:
* •
There are $3$ equalities depending on $\alpha_{1},\alpha_{2},\alpha_{3}$ only,
which we deduce from the binary representation of
$(1+2^{\alpha_{1}}+2^{\alpha_{2}}+2^{\alpha_{3}})^{2}$ (which has
$\binom{5}{2}-3=7$ non-zero digits);
* •
There are $p-4$ equalities, one for each of the $\alpha_{i}$ with $i\geq 4$:
the choice $\alpha_{4}=2\alpha_{1}+4$ makes the term $2^{\alpha_{4}+1}$
coincide with the digit $2^{2\alpha_{1}+5}$ produced by the first four terms,
while for $i>4$ the choice $\alpha_{i}=2\alpha_{i-1}-1$ makes the smallest
term of the $i$th bracket, $2^{\alpha_{i}+1}$, coincide with the term
$2^{2\alpha_{i-1}}$ preceding it in the expansion ($\star$ ‣ 1.1).
Therefore there are exactly $p-1$ equalities and, since $\alpha_{1}\geq 4$,
each of the terms obtained by adding the coincident terms is distinct from
every other term of the expansion; we deduce that $y^{2}$ has exactly
$\binom{p+1}{2}-(p-1)=\binom{p}{2}+1=k$ non-zero digits in its representation
in the binary scale. ∎
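Both branches of the proof can be spot-checked (a minimal sketch; the helper
name and the sample exponents are ours):

```python
def ones(n):
    """Number of non-zero digits in the binary representation of n."""
    return bin(n).count("1")

# k = 7: the identity (1 + 2^a + 2^{a+1} + 2^{a+2})^2 for a >= 4,
# e.g. a = 4 gives y = 113 and 113^2 = 12769 with 7 binary ones.
for a in range(4, 12):
    assert ones((1 + 2**a + 2**(a + 1) + 2**(a + 2)) ** 2) == 7

# k = 11 (p = 5): alphas = (a, a+1, a+2, 2a+4) with a = 4 gives y = 4209,
# and 4209^2 = 17715681 has exactly 11 binary ones.
y = 1 + 2**4 + 2**5 + 2**6 + 2**12
assert ones(y * y) == 11
```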
Combining the last two lemmas, we obtain the following result.
###### Theorem 5.
Let $k\geq 2$ be an integer.
1. (1)
If $k\in\\{2,4\\}$, then there are only finitely many odd perfect powers in
$\mathbb{N}$ having precisely $k$ non-zero digits in their representation in
the binary scale.
2. (2)
If $k\not\in\\{2,4\\}$, then there are infinitely many odd perfect squares in
$\mathbb{N}$ having precisely $k$ non-zero digits in their representation in
the binary scale.
### 1.2. Perfect powers with arbitrary number of base $x\geq 3$ digits
Let $x\geq 3$. Again, let us summarize the known results for $k\leq 4$.
* •
As of today, determining whether the Diophantine equation
$y^{d}=c_{1}x^{m_{1}}+c_{2}$ admits finitely or infinitely many solutions is a
very challenging open problem, studied by several authors (see for instance
[9, §4.4.3] for results concerning this class of Diophantine equations); it is
conjectured that, for given values of $c_{1},c_{2}$, this equation admits only
finitely many solutions (and therefore that there are finitely many perfect
powers having exactly two non-zero digits, for any value of $x$).
* •
Clearly, there are infinitely many perfect squares not divisible by $x$ whose
base $x$ representation has exactly three non-zero digits; for instance
$(x^{a}+1)^{2}=x^{2a}+2x^{a}+1$.
* •
Similarly, it is easy to see that the perfect cube
$(x^{a}+1)^{3}=x^{3a}+3x^{2a}+3x^{a}+1$ has exactly four non-zero digits in
its base $x$ representation; thus implying that there are infinitely many
perfect cubes having exactly four non-zero digits in their base $x$
representation.
It is worth noticing that, as $d$ grows, the coefficients involved become very
large, and thus for increasingly many values of $x$ the expansion of
$(x^{a}+1)^{d}$ would not yield a base $x$ representation.
We approach this case similarly to the case $x=2$. Consider the expansion (fix
$\alpha_{0}=0$)
(*)
$\begin{gathered}(x^{\alpha_{0}}+x^{\alpha_{1}}+\dots+x^{\alpha_{p-1}})^{2}=x^{2\alpha_{0}}+(2x^{\alpha_{0}+\alpha_{1}})+x^{2\alpha_{1}}+\left(2x^{\alpha_{2}+\alpha_{1}}+2x^{\alpha_{2}+\alpha_{0}}\right)+x^{2\alpha_{2}}\\\
+\dots+x^{2\alpha_{p-3}}+\left(\sum_{i=0}^{p-3}2x^{\alpha_{p-2}+\alpha_{i}}\right)+x^{2\alpha_{p-2}}+\left(\sum_{i=0}^{p-2}2x^{\alpha_{p-1}+\alpha_{i}}\right)+x^{2\alpha_{p-1}}.\end{gathered}$
As before, for $i=1,\dots,p-1$, the $i$th bracket contains pairwise distinct
terms, ranging between $x^{\alpha_{i}+\alpha_{0}}=x^{\alpha_{i}}$ and
$x^{\alpha_{i}+\alpha_{i-1}}$. Thus if $\alpha_{i}\geq\alpha_{i-1}+1$, all
these terms are strictly lower than $x^{2\alpha_{i}}$, while if
$\alpha_{i}\geq 2\alpha_{i-1}$ we have
$\alpha_{i}+\alpha_{i-1}>\ldots>\alpha_{i}+\alpha_{0}=\alpha_{i}\geq
2\alpha_{i-1}$, hence all the terms are no smaller than $x^{2\alpha_{i-1}}$,
with equality happening if and only if $\alpha_{i}=2\alpha_{i-1}$, in which
case $x^{\alpha_{i}+\alpha_{0}}=x^{2\alpha_{i-1}}$. Hence, if $\alpha_{i}\geq
2\alpha_{i-1}$ for all $i$, equation (* ‣ 1.2) gives a perfect square having
exactly $\binom{p+1}{2}$ terms and, just as in the case $x=2$, we can tune our
exponents in order to obtain the desired number of equalities (between $0$ and
$p-2$). Therefore, the following result is straightforward.
###### Lemma 6.
Let $k$ be a positive integer greater than four, not of the form
$\binom{p}{2}+1$ for a positive integer $p$, and let $x\geq 3$ be an integer.
Then there exist infinitely many perfect squares, not divisible by $x$, having
exactly $k$ non-zero digits in their base $x$ representation.
###### Proof.
Write $k$ as $k=\binom{p+1}{2}-\beta$, with $\beta\in\\{0,\dots,p-2\\}$.
For every choice of $\alpha_{1}$, define a sequence
$(\alpha_{1},\dots,\alpha_{p-1})$ of positive integers satisfying the
following conditions:
$\begin{cases}\alpha_{1}\geq 3,\\\ \alpha_{i}=2\alpha_{i-1}\text{ for
}i=2,\dots,\beta+1,\\\ \alpha_{i}>2\alpha_{i-1}\text{ for }i\geq\beta+2.\\\
\end{cases}$
Then it is straightforward (arguing as in Lemma 3) to prove that the integer
$y=(1+x^{\alpha_{1}}+\dots+x^{\alpha_{p-1}})$ is such that $y^{2}$ has exactly
$\binom{p+1}{2}-\beta=k$ non-zero digits in its base $x$ representation. ∎
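Again this is easy to verify numerically (a sketch with our own helper names;
`slack` realizes the strict inequality $\alpha_{i}>2\alpha_{i-1}$):

```python
from math import comb

def nonzero_digits(n, base):
    """Count the non-zero digits of n written in the given base."""
    count = 0
    while n:
        count += n % base != 0
        n //= base
    return count

def lemma6_y(x, p, beta, a1=3, slack=1):
    """y = 1 + x^{a_1} + ... + x^{a_{p-1}} as in the proof of Lemma 6."""
    alphas = [a1]
    for i in range(2, p):
        alphas.append(2 * alphas[-1] + (0 if i <= beta + 1 else slack))
    return 1 + sum(x**a for a in alphas)

# y^2 should have exactly C(p+1, 2) - beta non-zero base-x digits.
for x in (3, 7, 10):
    for p in range(3, 7):
        for beta in range(p - 1):
            y = lemma6_y(x, p, beta)
            assert nonzero_digits(y * y, x) == comb(p + 1, 2) - beta
```

A readable instance is $x=10$, $p=3$, $\beta=1$:
$(1+10^{3}+10^{6})^{2}=1002003002001$ has exactly $5=\binom{4}{2}-1$ non-zero
decimal digits.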
As in the previous section, the remaining case $k=\binom{p}{2}+1$ is not
covered by the previous construction, but requires some slight adjustments
according to the value of $x$; here, we will need to split this case into
three subcases.
###### Lemma 7.
Let $k\geq 7$ be an integer of the form $\binom{p}{2}+1$, for some positive
integer $p$. Then there are infinitely many perfect squares not divisible by
$3$ having exactly $k$ non-zero digits in their base $3$ representation.
###### Proof.
First, we consider some special cases:
* •
The perfect square $(1+3^{\alpha_{1}}+3^{\alpha_{1}+1}+3^{\alpha_{1}+2})^{2}$
has exactly $7$ non-zero digits in its base $3$ representation.
* •
The expansion
$(1+3^{\alpha_{1}}+3^{\alpha_{1}+1}+3^{2\alpha_{1}}+3^{2\alpha_{1}+1})^{2}$
yields perfect squares having exactly $11=\binom{5}{2}+1$ non-zero digits in
their base $3$ representation.
For $k>11$, consider a sequence of positive integers
$(\alpha_{1},\dots,\alpha_{p-1})$ such that
$\begin{cases}\alpha_{1}\geq 4,\\\ \alpha_{2}=\alpha_{1}+1,\\\
\alpha_{i}=2\alpha_{1}+i-3\text{ for }i=3,4,\\\ \alpha_{i}=2\alpha_{i-1}\text{
for }i\geq 5.\end{cases}.$
Then, taking the integer
$y=1+3^{\alpha_{1}}+3^{\alpha_{2}}+\dots+3^{\alpha_{p-1}}$, notice that, for
the expansion (* ‣ 1.2) of $y^{2}$, the following hold:
* •
There are exactly four equalities between terms of (* ‣ 1.2) depending on our
choice of $\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}$, which follow from the
expansion of
$(1+3^{\alpha_{1}}+3^{\alpha_{2}}+3^{\alpha_{3}}+3^{\alpha_{4}})^{2}$ (which
has exactly $11$ non-zero digits in its base $3$ representation).
* •
There are $p-5$ equalities, one for each $\alpha_{i}$, with $i=5,6,\dots,p-1$,
following from the condition $\alpha_{i}=2\alpha_{i-1}$.
As before, these equalities are such that the terms obtained are distinct from
any other term in (* ‣ 1.2) and that each term of the expansion yields a digit
in the base $3$ representation of $y^{2}$, which then contains exactly
$\binom{p+1}{2}-4-(p-5)=\binom{p}{2}+1=k$ non-zero digits. ∎
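The two special cases of the proof can be checked directly (a minimal sketch;
the digit counter and the sample exponents are ours):

```python
def nonzero_digits(n, base):
    """Count the non-zero digits of n written in the given base."""
    count = 0
    while n:
        count += n % base != 0
        n //= base
    return count

# k = 7: (1 + 3^a + 3^{a+1} + 3^{a+2})^2, e.g. a = 3 gives y = 352 and
# 352^2 = 123904 = (1 + 2*3^3 + 2*3^4 + 2*3^5 + 3^6 + 2*3^7 + 2*3^10).
for a in range(3, 9):
    y = 1 + 3**a + 3**(a + 1) + 3**(a + 2)
    assert nonzero_digits(y * y, 3) == 7

# k = 11: (1 + 3^a + 3^{a+1} + 3^{2a} + 3^{2a+1})^2, here with a = 4.
a = 4
y = 1 + 3**a + 3**(a + 1) + 3**(2 * a) + 3**(2 * a + 1)
assert nonzero_digits(y * y, 3) == 11
```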
###### Lemma 8.
Let $k\geq 4$ be an integer of the form $\binom{p}{2}+1$, for a positive
integer $p$.
1. (1)
There are infinitely many perfect squares not divisible by $4$ having exactly
$k$ non-zero digits in their base $4$ representation.
2. (2)
There are infinitely many perfect squares not divisible by $5$ having exactly
$k$ non-zero digits in their base $5$ representation.
###### Proof.
1. (1)
Fix $\alpha_{1}\geq 2$, and define a sequence
$(\alpha_{1},\dots,\alpha_{p-2})$ of positive integers such that
$\alpha_{i}>2\alpha_{i-1}$ for every $i=2,\dots,p-2$. Take now the integer
$\displaystyle y=3\cdot
4^{\alpha_{p-2}}+2\left(\sum_{i=0}^{p-3}4^{\alpha_{i}}\right),$ with
$\alpha_{0}=0$ (remember that $p\geq 3$). Then clearly
$y^{2}=9\cdot
4^{2\alpha_{p-2}}+3\left(\sum_{i=0}^{p-3}4^{\alpha_{p-2}+\alpha_{i}+1}\right)+4\left(\sum_{i=0}^{p-3}4^{\alpha_{i}}\right)^{2}.$
Now, examining the base $4$ representation associated to the right-hand side,
the first term yields exactly two non-zero digits, the second one has $p-2$
non-zero digits, while the last bracket gives exactly $\binom{p-1}{2}$ non-
zero digits (by expanding the square and remembering the conditions on
$\alpha_{i}$); further, our conditions are such that all terms appearing on
the right-hand side are pairwise distinct. Thus the base $4$ representation of
$y^{2}$ has exactly $\binom{p-1}{2}+(p-2)+2=\binom{p}{2}+1=k$ non-zero digits.
2. (2)
Similarly, for $\alpha_{1}\geq 2$, define a sequence
$(\alpha_{1},\dots,\alpha_{p-2})$ of positive integers such that
$\alpha_{i}>2\alpha_{i-1}$ for any $i=2,\dots,p-2$, and take $\displaystyle
y=2\cdot 5^{\alpha_{p-2}}+2\cdot
5^{\alpha_{p-3}}+\left(\sum_{i=0}^{p-4}5^{\alpha_{i}}\right),$ with
$\alpha_{0}=0$. Then
$y^{2}=4\cdot 5^{2\alpha_{p-2}}+8\cdot 5^{\alpha_{p-2}+\alpha_{p-3}}+4\cdot
5^{2\alpha_{p-3}}+$
$+\left(\sum_{i=0}^{p-4}5^{\alpha_{i}}\right)^{2}+4\left(\sum_{i=0}^{p-4}5^{\alpha_{p-2}+\alpha_{i}}\right)+4\left(\sum_{i=0}^{p-4}5^{\alpha_{p-3}+\alpha_{i}}\right).$
This time, examining the base $5$ representation associated to this expansion,
we easily see that the first and third term yield one non-zero digit, the
second one gives $2$ digits, the fourth has exactly $\binom{p-2}{2}$ non-zero
digits, while the last two have $p-3$ non-zero digits each; since all terms
appearing on the right-hand side have distinct exponents, the base $5$
representation of $y^{2}$ has thus exactly
$2(p-3)+\binom{p-2}{2}+4=\binom{p}{2}+1=k$ non-zero digits.
∎
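The smallest instances ($p=3$ for base $4$, $p=4$ for base $5$) can be
verified directly (a sketch; the helper and the chosen exponents are ours):

```python
def nonzero_digits(n, base):
    """Count the non-zero digits of n written in the given base."""
    count = 0
    while n:
        count += n % base != 0
        n //= base
    return count

# Base 4, p = 3 (k = 4): y = 3*4^{a} + 2 with a = 2, i.e. y = 50,
# and 50^2 = 2500 = (213010)_4 has 4 non-zero base-4 digits.
y = 3 * 4**2 + 2
assert nonzero_digits(y * y, 4) == 4

# Base 5, p = 4 (k = 7): y = 2*5^{a2} + 2*5^{a1} + 1 with (a1, a2) = (2, 5),
# i.e. y = 6301, and 6301^2 = 39702601 has 7 non-zero base-5 digits.
y = 2 * 5**5 + 2 * 5**2 + 1
assert nonzero_digits(y * y, 5) == 7
```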
###### Lemma 9.
Let $x\geq 6$ and $k\geq 4$ be integers, with $k$ having the form
$\binom{p}{2}+1$, for some positive integer $p$. Then there are infinitely
many perfect squares not divisible by $x$ having exactly $k$ non-zero digits
in their base $x$ representation.
###### Proof.
Let $\sigma=\left\lceil\sqrt{x+1}\right\rceil$. Since $x\geq 6$, clearly
$2\sigma\leq x$ and $x<\sigma^{2}<2x$; now, for $\alpha_{1}\geq 2$, define a
sequence $(\alpha_{1},\ldots,\alpha_{p-2})$ of positive integers such that
$\alpha_{i}>2\alpha_{i-1}$ for all $i=2,\ldots,p-2$, and take $y=\sigma
x^{\alpha_{p-2}}+x^{\alpha_{p-3}}+\ldots+x^{\alpha_{1}}+1$. Clearly, fixing
$\alpha_{0}=0$, we have
$y^{2}=\sigma^{2}x^{2\alpha_{p-2}}+\left(\sum_{i=0}^{p-3}2\sigma
x^{\alpha_{p-2}+\alpha_{i}}\right)+\left(\sum_{i=0}^{p-3}x^{\alpha_{i}}\right)^{2}.$
Our choice of $\sigma$ is such that the first term of the right-hand side has
exactly $2$ non-zero digits in its base $x$ representation, while the second
one has exactly $p-2$ non-zero digits, and the third one has exactly
$\binom{p-1}{2}$; since all powers of $x$ appearing in this expansion have
distinct exponents, we immediately deduce that the base $x$ representation of
$y^{2}$ has exactly $\binom{p-1}{2}+p=\binom{p}{2}+1=k$ non-zero digits. ∎
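For instance, $x=10$ gives $\sigma=\lceil\sqrt{11}\rceil=4$, and the smallest
case $p=3$, $\alpha_{1}=2$ gives $y=401$ with $y^{2}=160801$, which indeed has
four non-zero decimal digits. A quick check of this smallest case over several
bases (a sketch; helper names ours):

```python
from math import isqrt

def nonzero_digits(n, base):
    """Count the non-zero digits of n written in the given base."""
    count = 0
    while n:
        count += n % base != 0
        n //= base
    return count

for x in range(6, 20):
    t = isqrt(x + 1)
    sigma = t if t * t == x + 1 else t + 1   # sigma = ceil(sqrt(x+1))
    assert 2 * sigma <= x and x < sigma**2 < 2 * x
    y = sigma * x**2 + 1                     # p = 3, alpha_1 = 2
    # y^2 = sigma^2 x^4 + 2 sigma x^2 + 1 has exactly 4 non-zero digits.
    assert nonzero_digits(y * y, x) == 4
```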
We can combine all the results of this section to achieve the desired result:
###### Theorem 10.
Let $x\geq 2$ and $k\geq 3$ be integers.
1. (1)
If $(x,k)\neq(2,4)$, there exist infinitely many perfect powers not divisible
by $x$ having exactly $k$ non-zero digits in their base $x$ representation.
2. (2)
Further, if $(x,k)\not\in\\{(2,4),(3,4)\\}$, there exist infinitely many
perfect squares not divisible by $x$ having exactly $k$ non-zero digits in
their base $x$ representation.
The previous result shows that the known finiteness results of Mihailescu
(for $k=2$) and of Corvaja and Zannier (for $k=4$ and $x=2$) are the only
exceptions to the general rule. However, our construction does not work for
squares in the case $x=3$, $k=4$; in fact, in that case it is easy to see that
it is impossible to impose more than one equality among the exponents of
$(1+3^{\alpha_{1}}+3^{\alpha_{2}})^{2}=1+2\cdot
3^{\alpha_{1}}+3^{2\alpha_{1}}+(2\cdot 3^{\alpha_{2}}+2\cdot
3^{\alpha_{2}+\alpha_{1}})+3^{2\alpha_{2}},$
and that in the general expansion
$\begin{gathered}(3^{\alpha_{0}}+3^{\alpha_{1}}+\dots+3^{\alpha_{p-1}})^{2}=3^{2\alpha_{0}}+(2\cdot
3^{\alpha_{0}+\alpha_{1}})+3^{2\alpha_{1}}+\left(2\cdot
3^{\alpha_{2}+\alpha_{1}}+2\cdot
3^{\alpha_{2}+\alpha_{0}}\right)+3^{2\alpha_{2}}\\\
+\dots+3^{2\alpha_{p-3}}+\left(\sum_{i=0}^{p-3}2\cdot
3^{\alpha_{p-2}+\alpha_{i}}\right)+3^{2\alpha_{p-2}}+\left(\sum_{i=0}^{p-2}2\cdot
3^{\alpha_{p-1}+\alpha_{i}}\right)+3^{2\alpha_{p-1}}\end{gathered}$
at least the four terms $1=3^{2\alpha_{0}},2\cdot 3^{\alpha_{1}},2\cdot
3^{\alpha_{p-1}+\alpha_{p-2}},3^{2\alpha_{p-1}}$ have different exponents from
the others, and thus are very hard to _remove_ from the final base $3$
representation derived from this expansion. While we were not able to reach a
conclusion in this case, we think it is interesting to ask the following
question, with which we finish this work.
###### Question 11.
Determine if there are infinitely many squares not divisible by $3$ having
exactly $4$ non-zero digits in their base $3$ representation.
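A brute-force search (our own sketch, not part of the paper) readily produces
small examples, e.g. $7^{2}=49=(1211)_{3}$ and $14^{2}=196=(21021)_{3}$; the
open question is whether infinitely many exist.

```python
def nonzero_digits(n, base):
    """Count the non-zero digits of n written in the given base."""
    count = 0
    while n:
        count += n % base != 0
        n //= base
    return count

# Squares, coprime to 3, whose base-3 representation has exactly 4 non-zero digits.
hits = [n for n in range(1, 3000) if n % 3 and nonzero_digits(n * n, 3) == 4]
print(hits[:6])
```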
## Acknowledgements
This work is part of my PhD thesis. I would like to thank my advisors,
Professors Roberto Dvornicich and Umberto Zannier, for their supervision and
for helpful discussions.
## References
* [1] M. A. Bennett, Y. Bugeaud, M. Mignotte, Perfect powers with few binary digits and related Diophantine problems, _Annali Scuola Normale Superiore di Pisa_ 12, 4 (2013), p. 14.
* [2] P. Corvaja, U. Zannier, Finiteness of odd perfect powers with four nonzero binary digits, _Annales de l’Institut Fourier_ 63, 2 (2013), p. 715-731.
* [3] P. Corvaja, U. Zannier, Application of the Subspace Theorem to certain Diophantine problems, In: Diophantine Approximation, H. E. Schlickewei et al, Editors, Springer-Verlag (2008), p. 161-174.
* [4] P. Corvaja, U. Zannier, $S$-unit points on analytic hypersurfaces, _Ann. Sci. École Norm. Sup._ 38, 4 (2005) no. 1, p. 76-92.
* [5] P. Corvaja, U. Zannier, On the Diophantine equation $f(a^{m},y)=b^{n}$, _Acta Arith._ 94 (2000), p. 25-40.
* [6] P. Corvaja, U. Zannier, Diophantine equations with power sums and Universal Hilbert Sets, _Indag. Mathem. N. S._ 9 (1998) no. 3, p. 317-332.
* [7] M. Hindry, J.H. Silverman: _Diophantine Geometry_. Springer, Heidelberg (2000).
* [8] P. Mihailescu, Primary cyclotomics units and a proof of Catalan’s conjecture, _J. Reine Angew. Math._ 572 (2004), p. 167-195.
* [9] W. Narkiewicz, _Rational number theory in the 20th Century : from PNT to FLT_ , Springer Monographs in Mathematics, Springer (2012).
# Well-Posedness for the Reaction-Diffusion Equation with Temperature in a
critical Besov Space
Chun Liu Department of Applied Mathematics, Illinois Institute of Technology,
Chicago, IL 60616, United States Jan-Eric Sulzbach Department of Applied
Mathematics, Illinois Institute of Technology, Chicago, IL 60616, United
States
(2/1/2021)
###### Abstract
We derive a model for the non-isothermal reaction-diffusion equation.
Combining ideas from non-equilibrium thermodynamics with the energetic
variational approach we obtain a general system modeling the evolution of a
non-isothermal chemical reaction with general mass kinetics. From this we
recover a linearized model for a system close to equilibrium and we analyze
the global-in-time well-posedness of the system for small initial data in a
critical Besov space.
## 1 Introduction
### 1.1 Overview
Reaction-diffusion systems are a crucial part of science, from chemical
reactions and predator-prey models to the spread of diseases; these are just a
few examples of their applications. Many of these systems have been studied
over the last decades at a constant temperature, or the equivalent in the
respective field. In recent years, however, the focus has shifted towards the
analysis of non-isothermal models, that is, systems with a non-constant
temperature, leading to an additional non-linear equation governing the
temperature evolution.
For the chemical reaction-diffusion equation, the addition of a heat term not
only adds an equation to the system, but also affects the material properties:
with different local temperatures, e.g., the viscosity and the heat
conductivity can change. For the chemical reaction in particular, the heat
term also changes the reaction rate. These thermal effects in the chemical
reaction equation have been studied from a chemical and engineering point of
view in [AB71], [Chu+93] and [RB08], or more recently in [Zár+07] and
[Dem06].
In the mathematical theory of non-isothermal fluid mechanics there are two
different ways to find and prove the existence of solutions. One method is to
study the existence of weak solutions. We refer to [FN09] for dealing with a
general Navier-Stokes-Fourier system and to [BH15] and [ERS15] for some
applications of the general theory. Whereas the other method is to study the
well-posedness of global solutions of the system [Dan01], [Dan14] and [AL19].
In this paper we follow the latter approach, studying the well-posedness of the
reaction-diffusion system with temperature in a critical function space. In
addition, we present a new approach in the derivation of non-isothermal models
in fluid mechanics. This approach follows the theory of classical irreversible
thermodynamics [GM62] and [Leb89] and adds a variational structure to it, see
[LS20] and [LLT20] for the application of this approach to the ideal gas
system. Other works that follow this idea but in a different setting or with a
different variational structure are detailed in the book by [Fré02] and the
articles by [GBY17], [GBY17a] for example.
We consider the following system for the chemical reaction-diffusion system
close to equilibrium, where we denote the concentration for each chemical
species by $c_{i}$ for $i=A,B,C$ and the absolute temperature by $\theta$.
Further, we denote the equilibrium state by
$(\tilde{c}_{A},\tilde{c}_{B},\tilde{c}_{C},\tilde{\theta})$ and the system
then reads
$\displaystyle\partial_{t}c_{i}-k^{c}\Delta
c_{i}=-\sigma_{i}R_{t}+k^{c}\nabla\cdot\big{(}c_{i}\nabla\ln\theta\big{)},~{}~{}\text{
for }i=A,B,C$ (1.1)
$\displaystyle\begin{split}&\sum_{i}k^{\theta}c_{i}\bigg{[}\partial_{t}\theta-k^{\theta}\bigg{(}\frac{\nabla
c_{i}\cdot\nabla\theta}{c_{i}}+\frac{|\nabla\theta|^{2}}{\theta}\bigg{)}\bigg{]}=\kappa\Delta\theta+\sum_{i}\sigma_{i}k^{\theta}\theta
R_{t}\\\
&~{}~{}~{}~{}~{}~{}+(k^{c})^{2}\sum_{i}\bigg{[}(\eta_{i}-1)\frac{|\nabla(c_{i}\theta)|^{2}}{c_{i}\theta}+\Delta(c_{i}\theta)\bigg{]}\end{split}$
(1.2)
where
$\displaystyle
R_{t}=k^{c}\ln\bigg{(}\frac{c_{A}c_{B}}{c_{C}}\bigg{)}-k^{\theta}\ln\theta+k^{c}$
The goal of this paper is to show the well-posedness of the above system in a
critical Besov space. By a critical space we mean a function space that has
the same invariance with respect to scaling in time and space as the system
itself. The scaling we consider is
$(c_{i},\theta)\to(c_{i}^{\lambda},\theta^{\lambda})$ where
$\displaystyle c_{i}^{\lambda}(t,x)=c_{i}(\lambda^{2}t,\lambda
x)~{}~{}\text{and }\theta^{\lambda}(t,x)=\theta(\lambda^{2}t,\lambda x).$
A natural function space to consider would be the homogeneous Sobolev space
$\dot{H}^{d/2}$, but for initial data in this space we cannot state the
well-posedness result due to the lack of an algebraic structure. This can be
overcome by taking the initial data of the problem in the critical Besov
space $\dot{B}_{2,1}^{d/2}$.
This paper is structured as follows. In the next section, we present an
overview of non-equilibrium thermodynamics and the framework of our result.
This is followed by the derivation of the general model of a chemical
reaction-diffusion system using these new ideas in Chapter 2. In Chapter 3 we
state the main definitions and theorems of the theory of Besov spaces that are
used to show the well-posedness of the system. The main result, i.e. the well-
posedness of the non-isothermal chemical reaction-diffusion system close to
equilibrium, and its proof can be found in Chapter 4.
### 1.2 Non-Equilibrium Thermodynamics
The theory of non-equilibrium thermodynamics for irreversible processes was
developed almost 100 years ago, starting in the 1930s with the seminal work of
Onsager ([Ons31], [Ons31a]), who formulated his principles of irreversible
thermodynamics under some underlying assumptions. The idea is to extend the
concept of state from continuum thermostatics to a local description of
material points in the continuum, i.e. every material point that constitutes
the continuum is assumed to be close to a local thermodynamic equilibrium
state at any given instant. Therefore, we can define state variables and state
functions such as temperature and entropy beyond their definition in
equilibrium thermostatics. This theory is known as Classical Irreversible
Thermodynamics (CIT) ([GM62]). Besides the classical set of state variables,
thermodynamic fluxes are introduced to describe irreversible processes. In
particular, the rate of change of entropy within a region is the sum of an
entropy flux through the boundary of that region and the entropy production
inside the region. In CIT the entropy flux depends only on the heat flux. The
non-negativity of the entropy production rate guarantees the irreversibility
of the dissipative process and expresses the second law of thermodynamics.
The introduction of the local equilibrium hypothesis led to an impressive
production of scientific research, but it is also the breaking point of the
theory: for systems far from equilibrium, CIT no longer holds.
To extend the scope of the applications of non-equilibrium thermodynamics
beyond CIT, Truesdell, Coleman and Noll, among others, introduced Rational
Thermodynamics (RT) ([Tru84], [CG67], [JCVL96]). The main assumption of RT is
that materials have memory, i.e. at any given time, dependent variables are
determined not only by the instantaneous values of the independent variables,
but by their entire history. In this way, the concept of state as known in CIT
is modified and extended. One drawback of RT is that temperature and entropy
remain undefined objects.
In both CIT and RT, limitations of the possible form of the state and
constitutive equations are obtained as a consequence of the application of the
second law. No restrictions, however, are placed on the reversible parts,
since they do not contribute to the entropy production. Restrictions on the
reversible dynamics can instead be provided by a Hamiltonian structure.
version of a Hamiltonian framework for non-equilibrium thermodynamics was
proposed by Grmela ([Grm84]), based on a single generator. This approach
however was superseded by the work of Grmela and Öttinger ([GÖ97], [GÖ97a])
proposing the so called GENERIC formalism (General Equation for the Non-
Equilibrium Reversible-Irreversible Coupling) and further developed by
Öttinger ([Ött05]). The GENERIC formalism relies on two generators: the total
energy $E$ and the entropy $S$. This gives the theory more flexibility and
emphasizes the central role played by the thermodynamic potentials. The main
achievement of GENERIC is its compact, abstract and general framework. In this
level of abstraction lies also the main difficulty of the formalism, its
application to specific problems.
### 1.3 Framework of this work
The approach to non-equilibrium thermodynamics in this paper follows some of
the ideas of classical irreversible thermodynamics (CIT) and extends them to a
variational framework. The main assumption in this framework is that, outside
of equilibrium, there exists an extensive quantity $S$, called entropy, which
is a sole function of the state variables.
The structure of the derivation of the thermodynamic model is the following.
We introduce the free energy $\Psi$ as a basic quantity to define the
material/fluid properties. From here, we derive the thermodynamic state
function of the system. In the second step, we define the kinematics of the
state variables. Next, we derive the conservative and dissipative forces by
using the Energetic Variational Approach (EnVarA) [HL+10], [GKL17], [LWL18]
inspired by the work of Ericksen [Eri98] and combine them with Newton’s force
balance. In the last step, we apply the laws of thermodynamics to the state
functions and obtain the full model system.
We recall the following definitions from thermodynamics [McQ76], [Sal01].
Free energy:
The free energy $\Psi$ is a thermodynamic function depending on the state
variables. The change in the free energy is the maximum amount of work that a
system can perform.
Entropy:
The entropy given by $s=-\partial_{\theta}\Psi$ is an extensive state
function. By the second law of thermodynamics the entropy of an isolated
system increases and tends to the equilibrium state.
Internal energy:
The internal energy $e=\Psi+s\theta$ is an extensive state function. It is a
thermodynamic potential that can be analyzed in terms of microscopic motions
and forces.
In addition to the state functions we recall the laws of thermodynamics
[BRR00].
The first law of thermodynamics relates the change in the internal energy with
dissipation and heat
$\displaystyle\frac{d\,e}{dt}=\nabla\cdot(\Sigma\cdot u)-\nabla\cdot q,$ (1.3)
where $\Sigma$ denotes the stress tensor of the material and $u$ its velocity;
this part expresses the work done by the system; and where $q$ denotes the
heat flux. We note that every total derivative can be written as follows
$\displaystyle\frac{d\,s}{dt}=\nabla\cdot j+\Delta,$ (1.4)
where, in the case of the entropy, $j$ denotes the entropy flux and $\Delta$ is the
entropy production rate. The second law of thermodynamics states that the
entropy production rate is non-negative:
$\displaystyle\Delta\geq 0.$ (1.5)
## 2 Derivation
In this section we derive a thermodynamically consistent model for the chemical reaction-diffusion equation with temperature. For more details on chemical reactions we refer to the book [KP14], and for a general chemical reaction equation derived by the energetic variational approach we refer to [Wan+20].
We consider the chemical reaction
$\displaystyle\alpha A+\beta B\rightleftharpoons\gamma C$
and denote the concentration of each species by $c_{i}$, where $i=A,B,C$.
The kinematics of the concentration $c_{i}$ for each species is given by
$\displaystyle\partial_{t}c_{i}+\operatorname{div}(c_{i}u_{i})=-\sigma_{i}r(c,\theta)~{}~{}(x,t)\in\Omega\times(0,T)~{}~{}\text{ for }i=A,B,C$ (2.1)
where $u_{i}:\Omega\to\mathbb{R}^{n}$ is the effective microscopic velocity of the $i$-th species and $r(c,\theta)$ denotes the reaction rate. We assume that $r(c,\theta)=0$ at equilibrium, i.e. the amount of $A$ and $B$ lost in the forward reaction equals the amount gained in the backward reaction, and likewise for the concentration of $C$. In addition, we assume that each $u_{i}$ satisfies the no-flux boundary condition
$\displaystyle u_{i}\cdot n=0~{}~{}(x,t)\in\partial\Omega\times(0,T).$ (2.2)
Moreover, we assume that the temperature moves along the trajectories of the
flow map.
For the free energy we have the following equation
$\displaystyle\psi(c,\theta)=\sum_{i}\psi_{i}(c_{i},\theta)=\sum_{i}k_{i}^{c}c_{i}\theta\ln
c_{i}-k_{i}^{\theta}c_{i}\theta\ln\theta$ (2.3)
where for each species we consider the free energy of the ideal gas and we set
the stoichiometric numbers to be one.
From the free energy we obtain the following thermodynamic quantities. The
entropy is given by
$\displaystyle
s(c,\theta)=\sum_{i}s_{i}(c_{i},\theta)=-\frac{\partial\psi}{\partial\theta}=-\sum_{i}c_{i}\big{(}k_{i}^{c}\ln
c_{i}-k_{i}^{\theta}(\ln\theta+1)\big{)}.$ (2.4)
###### Remark 2.1.
We note that the free energy is strictly concave in the temperature variable $\theta$, so that the entropy $s=-\partial_{\theta}\psi$ is strictly monotone in $\theta$. This allows us to apply the implicit function theorem and solve the entropy equation (2.4) for $\theta$, i.e. $\theta=\theta(c,s)$.
Next, we can define the internal energy as follows
$\displaystyle\begin{split}e(c,\theta)=\sum_{i}e_{i}(c_{i},\theta)&:=\psi+\theta
s=\psi-\psi_{\theta}\theta\\\
&=\sum_{i}k_{i}^{\theta}c_{i}\theta=:e_{1}(c,s)\end{split}$ (2.5)
where we have used the strict concavity of the free energy $\psi$ with respect to $\theta$ to write the internal energy in terms of $c$ and $s$.
Next, we define the chemical potential as
$\displaystyle\mu_{i}:=\partial_{c_{i}}\psi_{i}(c_{i},\theta)=k_{i}^{c}\theta(\ln
c_{i}+1)-k_{i}^{\theta}\theta\ln\theta.$ (2.6)
We observe that at equilibrium we have
$\displaystyle\mu_{A}+\mu_{B}=\mu_{C}$ (2.7)
and by using the definition of the chemical potential we obtain
$\displaystyle\ln\bigg{(}\frac{c_{A}^{k_{A}^{c}}c_{B}^{k_{B}^{c}}}{c_{C}^{k_{C}^{c}}}\bigg{)}=\ln\theta\big{(}k_{A}^{\theta}+k_{B}^{\theta}-k_{C}^{\theta}\big{)}-\big{(}{k_{A}^{c}}+{k_{B}^{c}}-{k_{C}^{c}}\big{)}$
(2.8)
and
$\displaystyle\frac{c_{A}^{k_{A}^{c}}c_{B}^{k_{B}^{c}}}{c_{C}^{k_{C}^{c}}}=\frac{\theta^{k_{A}^{\theta}+k_{B}^{\theta}-k_{C}^{\theta}}}{e^{k_{A}^{c}+{k_{B}^{c}}-k_{C}^{c}}}=:K_{eq}(\theta)$
(2.9)
where $K_{eq}(\theta)$ is the equilibrium constant for a fixed temperature
$\theta$ of the reaction equation.
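The consistency of (2.7)-(2.9) can be sanity-checked numerically: choosing $c_{C}$ so that the mass-action ratio equals $K_{eq}(\theta)$ forces the affinity $\mu_{A}+\mu_{B}-\mu_{C}$ to vanish. The following sketch does this; all coefficient values are illustrative and not taken from the text.

```python
import math

# illustrative coefficients; any positive values work here
k_c = {"A": 1.2, "B": 0.8, "C": 1.5}
k_t = {"A": 0.5, "B": 0.7, "C": 0.9}
theta = 2.0

def mu(i, c):
    # chemical potential (2.6): mu_i = k_i^c*theta*(ln c_i + 1) - k_i^theta*theta*ln(theta)
    return k_c[i] * theta * (math.log(c) + 1.0) - k_t[i] * theta * math.log(theta)

# equilibrium constant (2.9)
K_eq = theta ** (k_t["A"] + k_t["B"] - k_t["C"]) / math.exp(k_c["A"] + k_c["B"] - k_c["C"])

# pick c_A, c_B and solve the mass-action relation (2.9) for c_C
cA, cB = 0.7, 1.3
cC = (cA ** k_c["A"] * cB ** k_c["B"] / K_eq) ** (1.0 / k_c["C"])

# the affinity of Remark 2.2 then vanishes, i.e. (2.7) holds
affinity = mu("A", cA) + mu("B", cB) - mu("C", cC)
assert abs(affinity) < 1e-9
```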
###### Remark 2.2.
The quantity $\mu_{A}+\mu_{B}-\mu_{C}$ is known as the affinity of a chemical reaction, introduced by De Donder as a new state variable of the system. Its sign shows the direction of the chemical reaction, and it can be considered as the driving force of the reaction.
Now, we return to the chemical reaction and write it as the following system
of ordinary differential equations.
$\displaystyle r=-\frac{1}{\sigma_{i}}\frac{d}{dt}c_{i},~{}~{}\text{ for }i=A,B,C$ (2.10)
and $\sigma=(\alpha,\beta,-\gamma)$. We observe that if we combine two of the equations we end up with two constraints
$\displaystyle\gamma\frac{dc_{A}}{dt}+\alpha\frac{dc_{C}}{dt}=0,~{}~{}\gamma\frac{dc_{B}}{dt}+\beta\frac{dc_{C}}{dt}=0$
and as a consequence we obtain
$\displaystyle\gamma c_{A}+\alpha c_{C}=Z_{0},~{}~{}\gamma c_{B}+\beta
c_{C}=Z_{1},$
where the constants $Z_{0}$ and $Z_{1}$ are determined by the initial concentrations. Thus we only have one independent free parameter left, which we will call the reaction coordinate $R(t)$, and we can write
$\displaystyle c_{i}(t)=c_{i,0}-\sigma_{i}R(t),~{}~{}\text{ for }i=A,B,C.$
(2.11)
Moreover we have that the reaction rate $r$ is given by
$r=\partial_{t}R(t)=R_{t}(t)$.
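As a quick illustration (with made-up stoichiometric coefficients and initial data), the parametrization (2.11) automatically satisfies both constraints for every value of the reaction coordinate:

```python
# illustrative stoichiometry alpha*A + beta*B <-> gamma*C and initial data
alpha, beta, gamma = 2.0, 1.0, 1.0
sigma = {"A": alpha, "B": beta, "C": -gamma}
c0 = {"A": 1.0, "B": 0.8, "C": 0.1}

def c(i, R):
    # concentrations along the reaction coordinate, equation (2.11)
    return c0[i] - sigma[i] * R

# the conserved combinations fixed by the initial concentrations
Z0 = gamma * c0["A"] + alpha * c0["C"]
Z1 = gamma * c0["B"] + beta * c0["C"]

for R in (0.0, 0.05, 0.1, 0.2):
    assert abs(gamma * c("A", R) + alpha * c("C", R) - Z0) < 1e-12
    assert abs(gamma * c("B", R) + beta * c("C", R) - Z1) < 1e-12
```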
This allows us to rewrite the free energy in terms of the reaction coordinate and the temperature, i.e.
$\displaystyle\psi(R,\theta)=\sum_{i}\psi_{i}(c_{i}(R),\theta).$ (2.12)
Next, we introduce the dissipation due to the reaction $\mathcal{D}(R,R_{t})$.
Applying the principle of virtual work we obtain that
$\displaystyle\frac{\delta F(R,\theta)}{\delta R}=-\frac{D(R,R_{t})}{R_{t}}$
(2.13)
where
$\displaystyle\frac{\delta F(R,\theta)}{\delta R}$
$\displaystyle=-\mu_{A}-\mu_{B}+\mu_{C}$
$\displaystyle=\theta\ln\theta\big{(}k_{A}^{\theta}+k_{B}^{\theta}-k_{C}^{\theta}\big{)}-\theta\ln\bigg{(}\frac{c_{A}^{k_{A}^{c}}c_{B}^{k_{B}^{c}}}{c_{C}^{k_{C}^{c}}}\bigg{)}-\theta\big{(}{k_{A}^{c}}+{k_{B}^{c}}-{k_{C}^{c}}\big{)}$
The law of mass action determines the choice of the dissipation function. The
general form of the dissipation in the reaction we consider is the following
$\displaystyle\mathcal{D}(R,R_{t})=\eta_{1}(R,\theta)R_{t}\ln(\eta_{2}(R,\theta)R_{t}+1),$
(2.14)
where $\eta_{1}$ and $\eta_{2}$ are positive functions in $R$ and $\theta$.
Details of the derivation can be found in e.g. [GM62].
In chemical reactions, a linear response function is often considered as a simplification of the general dissipation term. We obtain
$\displaystyle\mathcal{D}(R,R_{t})=\eta(R,\theta)|R_{t}|^{2}$ (2.15)
again with $\eta$ being a positive function. Using the principle of virtual work with these two dissipation terms yields the following reaction rates. For the choice $\eta_{1}(R,\theta)=\theta$ and $\eta_{2}(R,\theta)=1$ we obtain
$\displaystyle r_{1}:=R_{t}=\bigg{(}\frac{c_{A}c_{B}}{c_{C}}\bigg{)}^{k^{c}}\frac{\exp(k^{c})}{\theta^{k^{\theta}}}-1,$ (2.16)
which we can write as the usual law of mass action
$\displaystyle r_{1}:=R_{t}=k_{f}(c_{C},\theta)c_{A}c_{B}-k_{r}(c_{C},\theta)c_{C},$ (2.17)
where
$\displaystyle k_{f}\sim\frac{\theta^{-k^{\theta}/k^{c}}}{c_{C}}~{}~{}\text{and }k_{r}\sim\frac{1}{c_{C}}.$
Similarly, for the linear response theory with $\eta(R,\theta)=\theta$ we obtain
$\displaystyle r_{2}:=R_{t}=k^{c}\ln\bigg{(}\frac{c_{A}c_{B}}{c_{C}}\bigg{)}-k^{\theta}\ln\theta+k^{c},$ (2.18)
where we assume that $k_{i}^{c}=k^{c}$ and $k_{i}^{\theta}=k^{\theta}$ for $i=A,B,C$. These observations are summarized in the full model system at the end of this section, where the derivation of the temperature equation is also given.
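To see a reaction rate of this type in action, one can drop diffusion and integrate the single ODE $R_{t}=r(R)$ at fixed temperature. The sketch below uses the linear-response rate in the form $k^{c}\ln(c_{A}c_{B}/c_{C})-k^{\theta}\ln\theta+k^{c}$, the sign convention under which the rate vanishes exactly on the equilibrium manifold (2.9); all parameter values are illustrative.

```python
import math

k_c, k_t, theta = 1.0, 0.5, 2.0          # illustrative coefficients, fixed temperature
c0 = {"A": 2.0, "B": 1.5, "C": 0.3}       # initial concentrations
sigma = {"A": 1.0, "B": 1.0, "C": -1.0}   # stoichiometry of A + B <-> C

def conc(R):
    # concentrations along the reaction coordinate, equation (2.11)
    return {i: c0[i] - sigma[i] * R for i in c0}

def rate(R):
    # linear-response reaction rate at fixed temperature theta
    c = conc(R)
    return k_c * math.log(c["A"] * c["B"] / c["C"]) - k_t * math.log(theta) + k_c

# forward-Euler relaxation of dR/dt = rate(R) towards equilibrium
R, dt = 0.0, 0.01
for _ in range(4000):
    R += dt * rate(R)

assert abs(rate(R)) < 1e-8  # the rate has relaxed to zero
# at equilibrium the mass-action ratio equals K_eq(theta) from (2.9)
c_eq = conc(R)
K_eq = theta ** k_t / math.exp(k_c)
assert abs(c_eq["A"] * c_eq["B"] / c_eq["C"] - K_eq) < 1e-6
```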
In addition to the reaction part we also consider a diffusion part in the concentration. To this end we introduce the dissipation due to diffusion
$\displaystyle\mathcal{D}^{D}=\sum_{i}\big{(}\eta_{i}(c_{i},\theta)|u_{i}|^{2}+\nu|\nabla u_{i}|^{2}\big{)}.$
###### Remark 2.3.
Note that the dissipation depends on both the velocity of the flow map $u$ and
its gradient $\nabla u$. Thus the parameters in front of the two terms can be
seen as an interpolation between a Darcy-type and a Stokes-type of
dissipation.
Applying the principle of virtual work for the concentration part we obtain
$\displaystyle\nabla
P_{i}=\nabla\big{(}c_{i}\psi_{c_{i}}-\psi_{i}\big{)}=c_{i}\nabla\mu_{i},$
(2.19)
where $P_{i}$ denotes the pressure and has the form
$\displaystyle P_{i}=c_{i}\psi_{c_{i}}-\psi_{i}=k^{c}c_{i}\theta.$ (2.20)
###### Lemma 2.4.
The pressure satisfies
$\displaystyle\nabla P_{i}=c_{i}\nabla\psi_{c_{i}}+s_{i}\nabla\theta.$
###### Proof.
From the definition of the pressure we have $P_{i}(c_{i},\theta)=\psi_{c_{i}}c_{i}-\psi_{i}$ and thus we compute
$\displaystyle\nabla P_{i}(c_{i},\theta)$ $\displaystyle=\nabla(\psi_{c_{i}}c_{i}-\psi_{i})=c_{i}\nabla\psi_{c_{i}}+\psi_{c_{i}}\nabla c_{i}-\nabla\psi_{i}$ $\displaystyle=c_{i}\nabla\psi_{c_{i}}+\psi_{c_{i}}\nabla c_{i}-\psi_{c_{i}}\nabla c_{i}-\psi_{i,\theta}\nabla\theta=c_{i}\nabla\psi_{c_{i}}+s_{i}\nabla\theta.$
∎
Next, we apply the MDL and compute the variation of half the total dissipation with respect to the microscopic velocity $u_{i}$. This yields
$\displaystyle\delta_{u}\frac{1}{2}\mathcal{D}^{tot}$ $\displaystyle=\int_{\Omega}\eta_{i}(c_{i},\theta)u_{i}\cdot\tilde{u}+\nu\nabla u_{i}\cdot\nabla\tilde{u}\,dx$ $\displaystyle=\int_{\Omega}\eta_{i}(c_{i},\theta)u_{i}\cdot\tilde{u}-\nu\Delta u_{i}\cdot\tilde{u}\,dx$
and hence the dissipative forces are
$\displaystyle f_{diss}=-\big{(}\eta_{i}(c_{i},\theta)u_{i}-\nu\Delta u_{i}\big{)}.$
From the classical Newton’s force law for the concentration we deduce that the
sum of the conservative and dissipative forces equals the change in the
momentum, i.e.
$\displaystyle f_{cons}+f_{diss}=\frac{d}{dt}(c_{i}u_{i})$
Thus we obtain
$\displaystyle\nu\Delta u_{i}-\eta_{i}(c_{i},\theta)u_{i}-\nabla
P_{i}=\frac{d}{dt}(c_{i}u_{i})=\partial_{t}(c_{i}u_{i})+\operatorname{div}(c_{i}u_{i}\otimes
u_{i})$ (2.21)
###### Remark 2.5.
This is the momentum equation for the compressible Navier-Stokes equation with
the addition of a Brinkman-type contribution in the dissipation.
Before taking a closer look at the laws of thermodynamics we provide two useful lemmas.
###### Lemma 2.6.
$e_{1,s}(c,s)=\partial_{s}e_{1}(c,s)=\theta(c,s)$.
###### Proof.
Applying the chain rule to the left-hand side of the equation yields
$\displaystyle\partial_{s}e_{1}(c,s)$
$\displaystyle=\partial_{s}\big{[}\psi(c,\theta(c,s))+\theta(c,s)s\big{]}$
$\displaystyle=\psi_{\theta}\theta_{s}+\theta_{s}s+\theta(c,s)=\theta(c,s),$
where we used that $s=-\psi_{\theta}$. ∎
###### Lemma 2.7.
$\psi_{i,c_{i}}(c_{i},\theta(c,s))=e_{1_{i},c_{i}}(c_{i},s)$.
###### Proof.
By the chain rule applied to $e_{1_{i}}(c_{i},s)$ we have
$\displaystyle\partial_{c_{i}}e_{1_{i}}(c_{i},s)$ $\displaystyle=\partial_{c_{i}}\big{[}\psi_{i}(c_{i},\theta(c,s))+\theta(c,s)s\big{]}$ $\displaystyle=\psi_{c_{i}}+\psi_{\theta}\theta_{c_{i}}+\theta_{c_{i}}s=\psi_{c_{i}},$
where we used that $s=-\psi_{\theta}$.
∎
We note that we have a weak duality of the time evolution of the temperature
and the total derivative of the entropy in the following way.
###### Remark 2.8.
If $\theta$ evolves as $\frac{d}{dt}\theta=\theta_{t}+u\cdot\nabla\theta$, then testing this equation with the entropy $s$ in the weak form yields
$\displaystyle\int_{\Omega}\theta_{t}s+u\cdot\nabla\theta
s\,dx=-\int_{\Omega}s_{t}\theta+\operatorname{div}(su)\theta\,dx.$ (2.22)
Thus $s$ satisfies $\frac{d}{dt}s=s_{t}+\operatorname{div}(su)$.
In the computations of the laws of thermodynamics we use the following
constitutive relations and assumptions
* •
the Duhem equation $q=j\theta$;
* •
Fourier’s law $q=-\kappa\nabla\theta$;
* •
the positivity of $\eta_{i}$, i.e $\eta_{i}(c_{i},\theta)\geq 0$.
The general form of the first law of thermodynamics reads
$\displaystyle\frac{d}{dt}\int_{\Omega}\big{(}K+e_{1}\big{)}=\text{work}+\text{heat},$
where in our case the kinetic energy is $K=\frac{1}{2}\sum_{i}c_{i}|u_{i}|^{2}$. Then we compute
$\displaystyle\frac{d}{dt}\int_{\Omega}e_{1}(c,s)dx$
$\displaystyle=\int_{\Omega}\big{[}\sum_{i}e_{1,c_{i}}c_{i,t}+e_{1,s}s_{t}\big{]}dx$
(2.23) Using the kinematics for the concentration $c_{i}$ from equation (2.1) we obtain
$\displaystyle=\int_{\Omega}\big{[}\sum_{i}e_{1,c_{i}}\big{(}-\nabla\cdot(c_{i}u_{i})-\sigma_{i}R_{t}\big{)}+e_{1,s}s_{t}\big{]}dx$
Applying Lemma 2.7 yields
$\displaystyle=\int_{\Omega}\big{[}-\sum_{i}\nabla\cdot\big{(}e_{1,c_{i}}c_{i}u_{i}\big{)}+\sum_{i}\nabla\psi_{c_{i}}c_{i}\cdot
u_{i}-\sum_{i}\psi_{c_{i}}\sigma_{i}R_{t}+e_{1,s}s_{t}\big{]}dx$ In order to
have the full expression for the gradient of the pressure we have to
incorporate the term $s\nabla\theta$ which can only occur if the kinematics
for the entropy are as in Remark 2.8 and equation (2.22). Moreover by equation
(1.4) we have
$\displaystyle=\int_{\Omega}\big{[}-\nabla\cdot\big{(}\sum_{i}e_{1,c_{i}}c_{i}u_{i}+e_{1,s}su\big{)}+\nabla\sum_{i}\psi_{c_{i}}c_{i}\cdot
u_{i}+s\nabla e_{1,s}\cdot u$
$\displaystyle~{}~{}~{}~{}~{}-\sum_{i}\psi_{c_{i}}\sigma_{i}R_{t}+e_{1,s}\big{(}\nabla\cdot
j+\Delta\big{)}\big{]}dx$ By Lemma 2.6 and the Duhem equation we have
$\displaystyle=\int_{\Omega}\big{[}-\nabla\cdot\big{(}\sum_{i}e_{1,c_{i}}c_{i}u_{i}+e_{1,s}su\big{)}+\nabla\sum_{i}\psi_{c_{i}}c_{i}\cdot
u_{i}+s\nabla e_{1,s}\cdot u$
$\displaystyle~{}~{}~{}~{}~{}-\sum_{i}\psi_{c_{i}}\sigma_{i}R_{t}+\nabla\cdot
q-\frac{q\cdot\nabla\theta}{\theta}+\theta\Delta\big{]}dx$ Now, we can apply
Lemma 2.4 to obtain
$\displaystyle=\int_{\Omega}\big{[}-\nabla\cdot\big{(}\sum_{i}e_{1,c_{i}}c_{i}u_{i}+e_{1,s}su\big{)}+\nabla\cdot q+\sum_{i}\nabla(\psi_{c_{i}}c_{i}-\psi_{i})\cdot u_{i}$
$\displaystyle~{}~{}~{}~{}~{}-\sum_{i}\psi_{c_{i}}\sigma_{i}R_{t}-\frac{q\cdot\nabla\theta}{\theta}+\theta\Delta\big{]}dx$
From the definition of the pressure and the absence of external forces and
heat sources we have that $\displaystyle=\int_{\Omega}\big{[}\sum_{i}\nabla
P_{i}\cdot
u_{i}-\sum_{i}\psi_{c_{i}}\sigma_{i}R_{t}-\frac{q\cdot\nabla\theta}{\theta}+\theta\Delta\big{]}dx$
where we used that the divergence terms vanish under the boundary conditions $u\cdot n=0$ and $\nabla\theta\cdot n=0$. Thus we have
$\displaystyle=\int_{\Omega}\sum_{i}\big{(}\nu\Delta
u_{i}-\eta(c_{i},\theta)u_{i}+\partial_{t}(c_{i}u_{i})+\operatorname{div}(c_{i}u_{i}\otimes
u_{i})\big{)}\cdot u_{i}dx$
$\displaystyle~{}~{}~{}-\int_{\Omega}\sum_{i}\mu_{i}\sigma_{i}R_{t}-\frac{q\cdot\nabla\theta}{\theta}+\theta\Delta
dx$ and integration by parts yields
$\displaystyle\begin{split}&=\int_{\Omega}\big{[}\sum_{i}\big{(}-\nu|\nabla
u_{i}|^{2}-\eta(c_{i},\theta)u_{i}^{2}-\sigma_{i}R_{t}|u_{i}|^{2}-\mu_{i}\sigma_{i}R_{t}\big{)}\big{]}dx\\\
&~{}~{}~{}-\int_{\Omega}\big{[}\frac{q\cdot\nabla\theta}{\theta}+\theta\Delta\big{]}dx\\\
&~{}~{}~{}-\frac{1}{2}\sum_{i}\bigg{(}\frac{d}{dt}\int_{\Omega}c_{i}|u_{i}|^{2}dx+\int_{\Omega}\operatorname{div}(c_{i}|u_{i}|^{2}u_{i})dx\bigg{)}\end{split}$
(2.24)
where we used the reaction equation and the momentum equation to express the pressure term. Since there are no external forces or heat sources in our system the total energy must be conserved and we obtain that
$\displaystyle\Delta=\frac{1}{\theta}\bigg{(}\sum_{i}\nu|\nabla
u_{i}|^{2}+\sum_{i}\big{(}\sigma_{i}R_{t}+\eta(c_{i},\theta)\big{)}|u_{i}|^{2}+\sum_{i}\mu_{i}\sigma_{i}R_{t}+\frac{\kappa|\nabla\theta|^{2}}{\theta}\bigg{)}.$
(2.25)
We observe that we have to restrict the function $\eta_{i}(c_{i},\theta)$ and
the reaction rate $R_{t}$ such that
$\displaystyle\sum_{i}\big{(}\sigma_{i}R_{t}+\eta_{i}(c_{i},\theta)\big{)}|u_{i}|^{2}\geq
0.$
Thus, we note that the second law of thermodynamics $\Delta\geq 0$ is
satisfied as long as $\theta>0$.
In addition, we have shown that the total energy, i.e. the sum of the kinetic energy and the internal energy, is conserved
$\displaystyle\frac{d}{dt}\int_{\Omega}K(c,u)+e_{1}(c,s)dx=\int_{\Omega}\text{
work }+\text{ heat }dx=0$ (2.26)
since we assume that there are no external forces and no heat flux through the
boundary.
Moreover, we have that the total entropy is increasing in time, i.e.
$\displaystyle\frac{d}{dt}\int_{\Omega}s(c,\theta)dx=\int_{\Omega}s_{t}+\operatorname{div}(su)dx=\int_{\Omega}\operatorname{div}j+\Delta\geq
0,$ (2.27)
where we assume that there is no entropy flux through the boundary.
The above derivation can be summarized in the following general model for the
chemical reaction with temperature
$\displaystyle\partial_{t}c_{i}+\operatorname{div}(c_{i}u_{i})=-\sigma_{i}R_{t},~{}~{}\text{ for }i=A,B,C$ (2.28)
$\displaystyle\partial_{t}(c_{i}u_{i})+\operatorname{div}(c_{i}u_{i}\otimes
u_{i})-\nu\Delta u_{i}+\eta_{i}(c_{i},\theta)u_{i}=k^{c}\nabla c_{i}\theta$
(2.29)
$\displaystyle\partial_{t}s+\operatorname{div}(\sum_{i}s_{i}u_{i})=\operatorname{div}\bigg{(}\frac{\kappa\nabla\theta}{\theta}\bigg{)}+\Delta$
(2.30)
where
$\displaystyle R_{t}=r_{1}=k_{f}(c_{C},\theta)c_{A}c_{B}-k_{r}(c_{C},\theta)c_{C}$ (2.31)
for the general law of mass action, or
$\displaystyle R_{t}=r_{2}=k^{c}\ln\bigg{(}\frac{c_{A}c_{B}}{c_{C}}\bigg{)}-k^{\theta}\ln\theta+k^{c}$ (2.32)
for the linear response theory. In addition, we have that the entropy
production rate is given by
$\displaystyle\Delta=\frac{1}{\theta}\bigg{(}\sum_{i}\nu|\nabla
u_{i}|^{2}+\sum_{i}\big{(}\sigma_{i}R_{t}+\eta(c_{i},\theta)\big{)}|u_{i}|^{2}+\sum_{i}\mu_{i}\sigma_{i}R_{t}+\frac{\kappa|\nabla\theta|^{2}}{\theta}\bigg{)}$
(2.33) where the chemical potential is defined as
$\displaystyle\mu_{i}=k^{c}\theta(\ln c_{i}+1)-k^{\theta}\theta\ln\theta$
(2.34) and the entropy is defined by $\displaystyle
s=\sum_{i}s_{i}=-\sum_{i}c_{i}\big{(}k^{c}\ln
c_{i}-k^{\theta}(\ln\theta+1)\big{)}$ (2.35)
After deriving the general model for the reaction-diffusion equation with
temperature we consider a simplified version. To this end, we make several
assumptions.
* •
First, we assume that the dissipation $\mathcal{D}$ depends only on the
velocity $u$ and not on its derivative, i.e. the dissipation we consider is of
Darcy type.
* •
Second, we assume that Newton's force law reduces to a force balance between conservative and dissipative forces, i.e. $f_{cons}+f_{diss}=0$. This yields a Darcy-type law for the velocity, $\eta(c_{i},\theta)u_{i}=-\nabla P_{i}$.
* •
Finally, as a consequence of the above, we assume that we can neglect the influence of the kinetic energy and set it equal to zero. Thus the conservation of internal energy holds, $\frac{d}{dt}\int_{\Omega}e_{1}(c,s)dx=0$.
Hence, we obtain the reaction-diffusion equation with temperature for a Darcy
type velocity.
$\displaystyle\partial_{t}c_{i}+\operatorname{div}(c_{i}u_{i})=-\sigma_{i}R_{t},~{}~{}\text{ for }i=A,B,C$ (2.36)
$\displaystyle\eta_{i}(c_{i},\theta)u_{i}=-k^{c}\nabla c_{i}\theta$ (2.37)
$\displaystyle\partial_{t}s+\operatorname{div}(\sum_{i}s_{i}u_{i})=\operatorname{div}\bigg{(}\frac{\kappa\nabla\theta}{\theta}\bigg{)}+\Delta$
(2.38)
where
$\displaystyle R_{t}=r_{1}=k_{f}(c_{C},\theta)c_{A}c_{B}-k_{r}(c_{C},\theta)c_{C}$
for the general law of mass action, or
$\displaystyle R_{t}=r_{2}=k^{c}\ln\bigg{(}\frac{c_{A}c_{B}}{c_{C}}\bigg{)}-k^{\theta}\ln\theta+k^{c}$
for the linear response theory. In addition, we have
$\displaystyle\Delta=\frac{1}{\theta}\bigg{(}\sum_{i}\nu|\nabla
u_{i}|^{2}+\sum_{i}\big{(}\sigma_{i}R_{t}+\eta(c_{i},\theta)\big{)}|u_{i}|^{2}+\sum_{i}\mu_{i}\sigma_{i}R_{t}+\frac{\kappa|\nabla\theta|^{2}}{\theta}\bigg{)}$
$\displaystyle\mu_{i}=k^{c}\theta(\ln c_{i}+1)-k^{\theta}\theta\ln\theta$
$\displaystyle s=\sum_{i}s_{i}=-\sum_{i}c_{i}\big{(}k^{c}\ln
c_{i}-k^{\theta}(\ln\theta+1)\big{)}$
This system of equations can be written in a condensed form by eliminating the
velocity in the reaction and entropy equation. Moreover we take a closer look
at the temperature. To this end, we explicitly compute the left-hand side of
equation (2.38).
$\displaystyle\partial_{t}s=-\sum_{i}\big{(}k^{c}\partial_{t}c_{i}\ln c_{i}+k^{c}\partial_{t}c_{i}-k^{\theta}\partial_{t}c_{i}(\ln\theta+1)-k^{\theta}c_{i}\frac{\partial_{t}\theta}{\theta}\big{)}$ (2.39)
and
$\displaystyle\operatorname{div}(\sum_{i}s_{i}u_{i})$
$\displaystyle=-\operatorname{div}\bigg{(}\sum_{i}c_{i}\big{(}k^{c}\ln
c_{i}-k^{\theta}(\ln\theta+1)\big{)}u_{i}\bigg{)}$
$\displaystyle=-\sum_{i}\bigg{[}k^{c}\ln c_{i}\operatorname{div}\big{(}c_{i}u_{i}\big{)}+k^{c}u_{i}\cdot\nabla c_{i}-k^{\theta}\big{(}\ln\theta+1\big{)}\operatorname{div}\big{(}c_{i}u_{i}\big{)}-k^{\theta}c_{i}u_{i}\cdot\frac{\nabla\theta}{\theta}\bigg{]}$
(2.40)
Adding these two equations and using the reaction equation for the
concentration we obtain
$\displaystyle\partial_{t}s+\operatorname{div}(\sum_{i}s_{i}u_{i})=$
$\displaystyle\sum_{i}\big{(}k^{c}\ln
c_{i}+k^{c}-k^{\theta}(\ln\theta+1)\big{)}\sigma_{i}R_{t}+\sum_{i}k^{c}c_{i}\operatorname{div}u_{i}$
$\displaystyle+\sum_{i}k^{\theta}\frac{c_{i}}{\theta}\big{(}\partial_{t}\theta+u_{i}\nabla\theta\big{)}$
Thus multiplying the entropy equation by $\theta$ yields
$\displaystyle\theta\big{(}\partial_{t}s+\operatorname{div}(\sum_{i}s_{i}u_{i})\big{)}=$
$\displaystyle\sum_{i}k^{\theta}c_{i}(\partial_{t}\theta+u_{i}\nabla\theta)+\sum_{i}k^{c}c_{i}\theta\operatorname{div}u_{i}$
$\displaystyle+\sum_{i}(\mu_{i}+k^{\theta}\theta)\sigma_{i}R_{t}$
and the temperature equation reads
$\displaystyle\sum_{i}k^{\theta}c_{i}\big{(}\partial_{t}\theta+\operatorname{div}(\theta
u_{i})\big{)}=\kappa\Delta\theta+\sum_{i}\sigma_{i}k^{\theta}\theta
R_{t}+\sum_{i}\big{(}\nu|\nabla
u_{i}|^{2}+\eta_{i}(c_{i},\theta)|u_{i}|^{2}\big{)}.$ (2.41)
This yields the following condensed system of equations for the reaction-diffusion model
$\displaystyle\partial_{t}c_{i}-k^{c}\Delta
c_{i}=-\sigma_{i}R_{t}+k^{c}\nabla\cdot\big{(}c_{i}\nabla\ln\theta\big{)},~{}~{}\text{
for }i=A,B,C$ (2.42)
$\displaystyle\begin{split}&\sum_{i}k^{\theta}c_{i}\bigg{[}\partial_{t}\theta-k^{\theta}\bigg{(}\frac{\nabla
c_{i}\cdot\nabla\theta}{c_{i}}+\frac{|\nabla\theta|^{2}}{\theta}\bigg{)}\bigg{]}=\kappa\Delta\theta+\sum_{i}\sigma_{i}k^{\theta}\theta
R_{t}\\\
&~{}~{}~{}~{}~{}~{}+(k^{c})^{2}\sum_{i}\bigg{[}(\eta_{i}-1)\frac{|\nabla(c_{i}\theta)|^{2}}{c_{i}\theta}+\Delta(c_{i}\theta)\bigg{]}\end{split}$
(2.43)
where we have the two different reaction rates derived from the general law of
mass action and the linear response theory
$\displaystyle R_{t}=r_{1}=k_{f}(c_{C},\theta)c_{A}c_{B}-k_{r}(c_{C},\theta)c_{C},$
$\displaystyle R_{t}=r_{2}=k^{c}\ln\bigg{(}\frac{c_{A}c_{B}}{c_{C}}\bigg{)}-k^{\theta}\ln\theta+k^{c}.$
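A minimal numerical sketch of the condensed system can be given under strong simplifications: one space dimension, periodic boundary conditions, and the temperature frozen at a constant value, so that the cross term $k^{c}\nabla\cdot(c_{i}\nabla\ln\theta)$ and the temperature equation drop out; the grid and parameter values below are illustrative. The discrete analogues of the constraints behind (2.11), here $\int(c_{A}+c_{C})$ and $\int(c_{B}+c_{C})$, are preserved by both reaction and diffusion.

```python
import math

N, L = 100, 1.0
dx, dt = L / N, 1e-4                 # dt < dx^2/(2 k_c): explicit scheme is stable
k_c, k_t, theta = 0.1, 0.5, 2.0      # illustrative coefficients, frozen temperature

# smooth, strictly positive initial data on a periodic grid
cA = [1.0 + 0.2 * math.sin(2 * math.pi * j * dx) for j in range(N)]
cB = [1.2] * N
cC = [0.5] * N

def lap(c, j):
    # periodic second-difference Laplacian
    return (c[(j - 1) % N] - 2.0 * c[j] + c[(j + 1) % N]) / dx ** 2

for _ in range(2000):
    # linear-response reaction rate at constant temperature
    Rt = [k_c * math.log(cA[j] * cB[j] / cC[j]) - k_t * math.log(theta) + k_c
          for j in range(N)]
    cA, cB, cC = ([cA[j] + dt * (k_c * lap(cA, j) - Rt[j]) for j in range(N)],
                  [cB[j] + dt * (k_c * lap(cB, j) - Rt[j]) for j in range(N)],
                  [cC[j] + dt * (k_c * lap(cC, j) + Rt[j]) for j in range(N)])

# total "atom" counts are conserved (initial values 1.5 and 1.7)
massAC = sum(a + c for a, c in zip(cA, cC)) * dx
massBC = sum(b + c for b, c in zip(cB, cC)) * dx
assert abs(massAC - 1.5) < 1e-8
assert abs(massBC - 1.7) < 1e-8
assert min(cA) > 0 and min(cB) > 0 and min(cC) > 0
```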
## 3 Besov Spaces
In this section we present the theory behind the well-posedness problem for the reaction-diffusion system with temperature. In order to do so, we introduce the Besov spaces by using the Littlewood-Paley decomposition. For details on the theorems and definitions presented in this section, we refer to [BCD11] and [Saw18].
We first define the building blocks of the theory of Besov spaces, the dyadic
partition of unity. Let $\mathcal{C}$ be the annulus
$\\{\xi\in\mathbb{R}^{d}~{}:~{}3/4\leq|\xi|\leq 8/3\\}$, and let $\phi$ be a
radial function with values in the interval $[0,1]$ belonging to the space
$\mathcal{D}(\mathcal{C})$ with the following partition of unity
$\displaystyle\forall\xi\in\mathbb{R}^{d}\setminus\\{0\\},~{}\sum_{j\in\mathbb{Z}}\phi\big{(}2^{-j}\xi\big{)}=1.$
We observe that for $|j-i|\geq 2$ we have
$\operatorname{supp}\phi\big{(}2^{-j}\cdot\big{)}\cap\operatorname{supp}\phi\big{(}2^{-i}\cdot\big{)}=\emptyset$.
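The telescoping construction behind such a partition can be checked numerically: starting from a smooth radial cut-off $\chi$ that equals $1$ for $|\xi|\leq 1$ and $0$ for $|\xi|\geq 2$, set $\phi(\xi)=\chi(\xi/2)-\chi(\xi)$. This $\phi$ is supported in a slightly larger annulus than the $\mathcal{C}$ of the text ($1\leq|\xi|\leq 4$ instead of $3/4\leq|\xi|\leq 8/3$), but it exhibits the same two properties: the dyadic sum telescopes to $1$ away from the origin, and blocks with $|j-i|\geq 2$ have disjoint supports.

```python
import math

def smoothstep(s):
    # C-infinity transition: 0 for s <= 0, 1 for s >= 1
    def h(t):
        return math.exp(-1.0 / t) if t > 0 else 0.0
    return h(s) / (h(s) + h(1.0 - s))

def chi(xi):
    # smooth radial cut-off: 1 on |xi| <= 1, 0 on |xi| >= 2
    return 1.0 - smoothstep(abs(xi) - 1.0)

def phi(xi):
    # dyadic bump supported in the annulus 1 <= |xi| <= 4
    return chi(xi / 2.0) - chi(xi)

# partition of unity: the sum telescopes to chi(2^-(J+1) xi) - chi(2^J xi) = 1
for xi in (0.02, 0.7, 1.0, 3.3, 17.0, 250.0):
    total = sum(phi(xi / 2 ** j) for j in range(-25, 26))
    assert abs(total - 1.0) < 1e-12

# supports of phi(2^-j .) and phi(2^-i .) are disjoint for |j - i| >= 2
for xi in (0.5, 1.7, 6.0, 40.0):
    for j in range(-10, 10):
        assert phi(xi / 2 ** j) * phi(xi / 2 ** (j + 2)) == 0.0
```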
In addition, we denote by $\mathcal{F}$ the Fourier transform on the whole space $\mathbb{R}^{d}$. Then we can define the homogeneous dyadic block $\dot{\Delta}_{j}$ and the homogeneous low-frequency cut-off operator $\dot{S}_{j}$ for all $j$ by
$\displaystyle\dot{\Delta}_{j}u$
$\displaystyle=\mathcal{F}^{-1}\big{(}\phi(2^{-j}\xi)\mathcal{F}u\big{)}$
$\displaystyle\dot{S}_{j}u$ $\displaystyle=\sum_{i\leq j-1}\dot{\Delta}_{i}u.$
Hence, we can write the formal Littlewood-Paley decomposition
$\displaystyle\text{Id}=\sum_{j}\dot{\Delta}_{j}.$
This allows us to define the homogeneous Besov spaces.
###### Definition 3.1.
The homogeneous Besov spaces $\dot{B}^{s}_{p,r}$ with $s\in\mathbb{R}$, $(p,r)\in[1,\infty]^{2}$ and
$\displaystyle s<\frac{d}{p}\text{ if }r>1,\quad s\leq\frac{d}{p}\quad\text{ if }\quad r=1$
consist of all homogeneous tempered distributions $u$ such that
$\displaystyle\|u\|_{\dot{B}^{s}_{p,r}}:=\bigg{(}\sum_{j\in\mathbb{Z}}2^{rjs}\|\dot{\Delta}_{j}u\|_{L^{p}}^{r}\bigg{)}^{1/r}<\infty.$
We remark that the (semi-)norms $\|\cdot\|_{\dot{H}^{s}}$ and $\|\cdot\|_{\dot{B}^{s}_{2,2}}$ are equivalent. Furthermore, we observe that $\dot{H}^{s}\subset\dot{B}^{s}_{2,2}$ and equality holds if $s<d/2$.
We have the following remark.
###### Remark 3.2.
Let $(s_{1},s_{2})\in\mathbb{R}^{2}$ and $1\leq p_{1},p_{2},r_{1},r_{2}\leq\infty$ with $s_{k}<d/p_{k}$, or $s_{k}=d/p_{k}$ if $r_{k}=1$, for $k=1,2$. Then the space $\dot{B}_{p_{1},r_{1}}^{s_{1}}\cap\dot{B}_{p_{2},r_{2}}^{s_{2}}$, endowed with the norm $\|\cdot\|_{\dot{B}_{p_{1},r_{1}}^{s_{1}}}+\|\cdot\|_{\dot{B}_{p_{2},r_{2}}^{s_{2}}}$, is a complete normed space.
One special feature of homogeneous Besov spaces is their scaling property: $\|u(\lambda\,\cdot)\|_{\dot{B}^{s}_{p,r}}\simeq\lambda^{s-d/p}\|u\|_{\dot{B}^{s}_{p,r}}$ for $\lambda>0$. Next, we have some useful embeddings.
###### Proposition 3.3.
For $p\in[1,\infty)$ the space $\dot{B}_{p,1}^{d/p}$ is continuously embedded
in the space $C^{0}$, i.e. the space of continuous functions vanishing at
infinity.
###### Proposition 3.4.
Let $1\leq p_{1}\leq p_{2}\leq\infty$ and let $1\leq r_{1}\leq
r_{2}\leq\infty$. Then for any $s\in\mathbb{R}$ the space
$\dot{B}_{p_{1},r_{1}}^{s}$ is continuously embedded in
$\dot{B}_{p_{2},r_{2}}^{s-d(1/p_{1}-1/p_{2})}$.
###### Remark 3.5.
From this point on we work with the Besov spaces $\dot{B}_{2,1}^{s}$ and by
the above Proposition we have that it is continuously embedded into
$\dot{H}^{s}$.
The following product rule is the key in the well-posedness result for the
reaction-diffusion system.
###### Proposition 3.6.
Let $u\in\dot{B}_{2,1}^{s_{1}}$ and let $v\in\dot{B}_{2,1}^{s_{2}}$ with
$s_{1},\,s_{2}\leq d/2$. If $s_{1}+s_{2}>0$ then the product $uv$ belongs to
$\dot{B}_{2,1}^{s_{1}+s_{2}-d/2}$ and the following inequality holds
$\displaystyle\|uv\|_{\dot{B}_{2,1}^{s_{1}+s_{2}-d/2}}\leq
C\|u\|_{\dot{B}_{2,1}^{s_{1}}}\|v\|_{\dot{B}_{2,1}^{s_{2}}},$
where the constant $C$ depends on $s_{1},\,s_{2}$ and the dimension $d$.
We observe that for $s=d/2$ fixed we obtain an algebra structure for the space
$\dot{B}_{2,1}^{d/2}$, i.e.
$\displaystyle\dot{B}_{2,1}^{d/2}\times\dot{B}_{2,1}^{d/2}\to\dot{B}_{2,1}^{d/2}.$
Next, we define the time-space Besov spaces, where the idea is to bound each dyadic block in $L^{q}\big{(}[0,T];L^{p}\big{)}$ rather than to estimate the solution of the whole partial differential equation directly in $L^{q}\big{(}[0,T];\dot{B}_{p,r}^{s}\big{)}$.
###### Definition 3.7.
For $T>0$ and $s\in\mathbb{R}$ let $1\leq r,p\leq\infty$ and let the
assumptions of Definition 3.1 hold. Then we set
$\displaystyle\|u\|_{\mathcal{L}^{q}_{T}\big{(}\dot{B}_{p,r}^{s}\big{)}}=\bigg{(}\sum_{j\in\mathbb{Z}}2^{rjs}\|\dot{\Delta}_{j}u\|_{L_{T}^{q}\big{(}L^{p}\big{)}}^{r}\bigg{)}^{1/r}.$
The spaces $\mathcal{L}^{q}_{T}\big{(}\dot{B}_{p,r}^{s}\big{)}$ can be linked
with the more classical spaces $L^{q}\big{(}[0,T];\dot{B}_{p,r}^{s}\big{)}$
via the Minkowski inequality and we obtain
$\displaystyle\|u\|_{\mathcal{L}^{q}_{T}\big{(}\dot{B}_{p,r}^{s}\big{)}}\leq\|u\|_{L^{q}\big{(}[0,T];\dot{B}_{p,r}^{s}\big{)}}~{}~{}\text{if }r\geq q,$ and
$\displaystyle\|u\|_{\mathcal{L}^{q}_{T}\big{(}\dot{B}_{p,r}^{s}\big{)}}\geq\|u\|_{L^{q}\big{(}[0,T];\dot{B}_{p,r}^{s}\big{)}}~{}~{}\text{if }r\leq q.$
###### Remark 3.8.
The general principle is that all properties of continuity of the product, composition, etc. remain true in these time-space Besov spaces as well. The exponent $q$ just has to behave according to Hölder's inequality for the time variable.
The following result is the key in the existence proof later on.
###### Theorem 3.9.
Let $u_{0}\in\dot{B}_{2,1}^{s}$ be the initial data with regularity $s\leq
d/2$. In addition, let
$f\in\mathcal{L}^{1}_{T}\big{(}\dot{B}_{2,1}^{s}\big{)}$ be the driving force,
and we denote by $u$ the unique solution to the following linear parabolic PDE
$\displaystyle\partial_{t}u-\Lambda u=f~{}~{}~{}\text{in
}\mathbb{R}_{+}\times\mathbb{R}^{d},$ (3.1) $\displaystyle
u\big{|}_{t=0}=u_{0}~{}~{}~{}\text{in }\mathbb{R}^{d},$ (3.2)
where $\Lambda$ is a linear second order strongly elliptic operator. Then the
solution $u$ belongs to the space
$\mathcal{L}_{T}^{\infty}\big{(}\dot{B}_{2,1}^{s}\big{)}$ and the pair
$\big{(}\partial_{t}u,\Delta u\big{)}$ to
$\mathcal{L}_{T}^{1}\big{(}\dot{B}_{2,1}^{s}\big{)}$. Furthermore the
following inequality holds
$\displaystyle\|u\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{s})}+\|\partial_{t}u\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{s})}+\|\Delta
u\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{s})}\leq
C\big{[}\|u_{0}\|_{\dot{B}_{2,1}^{s}}+\|f\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{s})}\big{]}.$
In addition, the following Corollary is used frequently in the later part.
###### Corollary 3.10.
Let $1\leq q,r\leq\infty$, $2\leq p<\infty$ and $s\in\mathbb{R}$ and let $I=[0,T)$ for any $T>0$. Suppose $u$ is a solution to the system (3.1)-(3.2). Then there exists a constant $C>0$ depending on $q,p,r,n$ such that
$\displaystyle\|u\|_{\mathcal{L}^{q}_{T}\big{(}\dot{B}_{p,r}^{s+2/q}\big{)}}\leq C\big{(}\|u_{0}\|_{\dot{B}_{p,r}^{s}}+\|f\|_{\mathcal{L}^{1}_{T}\big{(}\dot{B}_{p,r}^{s}\big{)}}\big{)}$
for $0<T\leq\infty$.
The following result considers the action of smooth functions on the Besov
space $\dot{B}_{2,1}^{d/2}$.
###### Lemma 3.11.
Let $f$ be a smooth function on $\mathbb{R}$ which vanishes at $0$. Then for any function $u\in\dot{B}_{2,1}^{d/2}$ the function $f(u)$ is still an element of $\dot{B}_{2,1}^{d/2}$ and the following inequality holds
$\displaystyle\|f(u)\|_{\dot{B}_{2,1}^{d/2}}\leq
Q\big{(}f,\|u\|_{L^{\infty}}\big{)}\|u\|_{\dot{B}_{2,1}^{d/2}},$
where $Q$ is a smooth function depending on the value of $f$ and its
derivative.
The above Lemma can also be applied to a product of two functions in the
following way.
###### Corollary 3.12.
Let $u\in\dot{B}_{2,1}^{d/2}$ and $v\in\dot{B}_{2,1}^{s}$ such that the
product is continuous in
$\dot{B}_{2,1}^{d/2}\times\dot{B}_{2,1}^{s}\to\dot{B}_{2,1}^{s}$. Let $f$ be a smooth function on $\mathbb{R}$; then $f(u)v\in\dot{B}_{2,1}^{s}$ and the
following inequality holds
$\displaystyle\|f(u)v\|_{\dot{B}_{2,1}^{s}}\lesssim
Q\big{(}f,\|u\|_{L^{\infty}}\big{)}\|u\|_{\dot{B}_{2,1}^{d/2}}\|v\|_{\dot{B}_{2,1}^{s}}.$
## 4 Well-Posedness Result
Now, we have all the necessary tools together to show the existence of
solutions. We recall the Darcy-type model for which we introduce perturbations
close to equilibrium, where we set $c_{i}(t,x)$ for $i=A,B,C$ to be the
concentration of the i-th species and $\theta(t,x)$ the temperature of the
system for $(t,x)\in[0,T]\times\mathbb{R}^{d}$ for $d=2,3$. The system then
reads
$\displaystyle\partial_{t}c_{i}-k^{c}\Delta
c_{i}=-\sigma_{i}R_{t}+k^{c}\nabla\cdot\big{(}c_{i}\nabla\ln\theta\big{)},~{}~{}\text{
for }i=A,B,C$ (4.1)
$\displaystyle\begin{split}&\sum_{i}k^{\theta}c_{i}\bigg{[}\partial_{t}\theta-k^{\theta}\bigg{(}\frac{\nabla
c_{i}\cdot\nabla\theta}{c_{i}}+\frac{|\nabla\theta|^{2}}{\theta}\bigg{)}\bigg{]}=\kappa\Delta\theta+\sum_{i}\sigma_{i}k^{\theta}\theta
R_{t}\\\
&~{}~{}~{}~{}+(k^{c})^{2}\sum_{i}\bigg{[}(\eta_{i}-1)\frac{|\nabla(c_{i}\theta)|^{2}}{c_{i}\theta}+\Delta(c_{i}\theta)\bigg{]}\end{split}$
(4.2)
with
$\displaystyle
R_{t}=k^{c}\ln\bigg{(}\frac{c_{A}c_{B}}{c_{C}}\bigg{)}-k^{\theta}\ln\theta+k^{c}$
###### Remark 4.1.
The equilibrium state is defined such that
$R_{t}(\tilde{c}_{A},\tilde{c}_{B},\tilde{c}_{C},\tilde{\theta})=0$, where we
observe that if $(\tilde{c}_{A},\tilde{c}_{B},\tilde{c}_{C},\tilde{\theta})$
is at equilibrium then
$(\lambda\tilde{c}_{A},\lambda\tilde{c}_{B},\lambda\tilde{c}_{C},\lambda^{k^{c}/k^{\theta}}\tilde{\theta})$
is also an equilibrium state. Thus, we can assume that without loss of
generality $\tilde{c}_{i}\geq 1/h^{2}$ for $i=A,B,C$ and $\tilde{\theta}\geq
1/h^{2}$ for any $0<h<1$.
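The scaling invariance claimed in the remark is easy to verify directly for the rate $R_{t}$ above; the coefficient values and the particular equilibrium point below are illustrative.

```python
import math

k_c, k_t = 1.0, 0.5   # illustrative coefficients

def Rt(cA, cB, cC, theta):
    # reaction rate of the system (4.1)-(4.2)
    return k_c * math.log(cA * cB / cC) - k_t * math.log(theta) + k_c

# construct an equilibrium state: choose cA, cB, theta and solve Rt = 0 for cC
cA = cB = 1.0
theta = 2.0
cC = math.exp((k_c - k_t * math.log(theta)) / k_c)
assert abs(Rt(cA, cB, cC, theta)) < 1e-12

# rescaling (c, theta) -> (lam*c, lam^(k_c/k_t)*theta) yields another equilibrium
for lam in (0.5, 3.0):
    assert abs(Rt(lam * cA, lam * cB, lam * cC, lam ** (k_c / k_t) * theta)) < 1e-12
```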
Next, we rewrite the system as perturbation to the equilibrium state
$(\tilde{c}_{A},\tilde{c}_{B},\tilde{c}_{C},\tilde{\theta})$ by setting
$\displaystyle c_{i}=\tilde{c}_{i}+z_{i}~{}~{}\text{for
}i=A,B,C~{}~{}\text{and }\theta=\tilde{\theta}+\omega.$
In the next step we linearize the reaction rate $R_{t}$ by a first-order Taylor expansion around the equilibrium state, where $R_{t}=0$, and obtain
$\displaystyle
R_{t}=r=k^{c}\sum_{j}\sigma_{j}\frac{z_{j}}{\tilde{c}_{j}}-k^{\theta}\frac{\omega}{\tilde{\theta}}$
The perturbed system now reads
$\displaystyle\begin{split}&\partial_{t}z_{i}-k^{c}\Delta
z_{i}=-\sigma_{i}\bigg{[}k^{c}\sum_{j}\sigma_{j}\frac{z_{j}}{\tilde{c}_{j}}-k^{\theta}\frac{\omega}{\tilde{\theta}}\bigg{]}\\\
&~{}~{}~{}~{}~{}~{}+k^{c}\bigg{[}\nabla z_{i}\cdot\frac{\nabla\omega}{\omega+\tilde{\theta}}+z_{i}\frac{\Delta\omega}{\omega+\tilde{\theta}}-(z_{i}+\tilde{c}_{i})\frac{|\nabla\omega|^{2}}{(\omega+\tilde{\theta})^{2}}\bigg{]},\end{split}$
(4.3) for $i=A,B,C$
$\displaystyle\begin{split}&\sum_{i}k^{\theta}(z_{i}+\tilde{c}_{i})\bigg{[}\partial_{t}\omega-k^{\theta}\bigg{(}\frac{\nabla
z_{i}\cdot\nabla\omega}{z_{i}+\tilde{c}_{i}}+\frac{|\nabla\omega|^{2}}{\omega+\tilde{\theta}}\bigg{)}\bigg{]}=\kappa\Delta\omega\\\
&~{}~{}~{}+\sum_{i}\sigma_{i}k^{\theta}(\omega+\tilde{\theta})\big{(}k^{c}\sum_{j}\sigma_{j}\frac{z_{j}}{\tilde{c}_{j}}-k^{\theta}\frac{\omega}{\tilde{\theta}}\big{)}\\\
&~{}~{}~{}+(k^{c})^{2}\sum_{i}\bigg{[}(\eta_{i}-1)\frac{|(z_{i}+\tilde{c}_{i})\nabla\omega+(\omega+\tilde{\theta})\nabla
z_{i}|^{2}}{(z_{i}+\tilde{c}_{i})(\omega+\tilde{\theta})}\bigg{]}\\\
&~{}~{}~{}+(k^{c})^{2}\sum_{i}\bigg{[}(z_{i}+\tilde{c}_{i})\Delta\omega+\nabla
z_{i}\cdot\nabla\omega+\omega\Delta z_{i}\bigg{]}\end{split}$ (4.4)
###### Remark 4.2.
We note that we modified the concentration and temperature equations
slightly by subtracting the terms $\tilde{c}_{i}\Delta\omega$ and
$\tilde{\theta}\Delta z_{i}$, respectively. This regularization of the
equations ensures that for constant concentration or constant temperature,
i.e. the perturbation of the concentration $z_{i}=0$ and perturbation of the
temperature $\omega=0$, we obtain that the perturbation in the state variables
goes to zero, and thus the system returns to equilibrium.
We now state the well-posedness result for the reaction-diffusion system with
temperature.
###### Theorem 4.3 (Well-Posedness for the R-D System with Temperature).
Let $h>0$ be a small positive number and let the initial data satisfy
the following condition
$c_{i,0}-\tilde{c}_{i}=z_{i,0}\in\dot{B}_{2,1}^{d/2}~{}\text{ for
}~{}~{}i=A,B,C~{}~{}\text{ and
}~{}~{}\theta_{0}-\tilde{\theta}=\omega_{0}\in\dot{B}_{2,1}^{d/2}$
and let the initial data fulfill the smallness condition
$\displaystyle\sum_{i}\|z_{i,0}\|_{\dot{B}_{2,1}^{d/2}}+\|\omega_{0}\|_{\dot{B}_{2,1}^{d/2}}\leq
h^{4}.$ (4.5)
Then the reaction-diffusion system with temperature close to equilibrium
admits a unique global-in-time strong solution belonging to the following
function spaces
$\displaystyle c_{i}-\tilde{c}_{i}$
$\displaystyle=:z_{i}\in\mathcal{L}_{T}^{\infty}\big{(}\dot{B}_{2,1}^{d/2}\big{)}~{}~{}\text{and
}~{}~{}\partial_{t}c_{i},\Delta
c_{i}\in\mathcal{L}_{T}^{1}\big{(}\dot{B}_{2,1}^{d/2}\big{)}~{}~{}\text{for
}i=A,B,C$ (4.6) $\displaystyle\theta-\tilde{\theta}$
$\displaystyle=:\omega\in\mathcal{L}_{T}^{\infty}\big{(}\dot{B}_{2,1}^{d/2}\big{)}~{}~{}\text{and
}~{}~{}\partial_{t}\theta,\Delta\theta\in\mathcal{L}_{T}^{1}\big{(}\dot{B}_{2,1}^{d/2}\big{)}.$
(4.7)
In addition, the solution satisfies the following inequality
$\displaystyle\sum_{i}\|c_{i}-\tilde{c}_{i}\|_{\mathcal{B}}+\|\theta-\tilde{\theta}\|_{\mathcal{B}}\leq
h^{2},$ (4.8)
where the space $\mathcal{B}$ is defined as follows
$\displaystyle\|u\|_{\mathcal{B}}:=\|u\|_{\mathcal{L}_{T}^{\infty}\big{(}\dot{B}_{2,1}^{d/2}\big{)}}+\|\partial_{t}u\|_{\mathcal{L}_{T}^{1}\big{(}\dot{B}_{2,1}^{d/2}\big{)}}+\|\Delta
u\|_{\mathcal{L}_{T}^{1}\big{(}\dot{B}_{2,1}^{d/2}\big{)}}.$ (4.9)
The idea of the proof is to construct an iterative scheme of the following
form
$\displaystyle\partial_{t}f^{k+1}+\Lambda f^{k+1}=F^{k}$
where we show that this yields a bounded sequence in a suitable Besov space and
that the differences between consecutive iterates form a null sequence. From
this we conclude that the iterative sequence converges.
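As a purely numerical illustration of this construction (not the functional-analytic argument itself), the following sketch iterates a linear solve of the form $\Lambda f^{k+1}=F^{k}$ for a discretized elliptic operator with a weak nonlinear forcing; the grid size and the parameters nu and eps are illustrative choices, not parameters of the system above.

```python
import numpy as np

# Sketch of the iteration  Lambda f^{k+1} = F^k:  each step solves a linear
# problem whose forcing depends weakly on the previous iterate, so successive
# differences shrink geometrically and form a null sequence.
n, nu, eps = 50, 1.0, 0.5           # illustrative parameters
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
# Lambda = I - nu * Laplacian with homogeneous Dirichlet boundary conditions
L = np.eye(n) * (1.0 + 2.0 * nu / h**2)
L += np.diag([-nu / h**2] * (n - 1), 1) + np.diag([-nu / h**2] * (n - 1), -1)
g = np.sin(np.pi * x)               # fixed source term

u = np.zeros(n)                     # first iterate is zero, as in the scheme
diffs = []
for k in range(6):
    u_next = np.linalg.solve(L, g + eps * u**2)   # one linear solve per step
    diffs.append(np.max(np.abs(u_next - u)))
    u = u_next

print(diffs[0], diffs[-1])          # successive differences decay rapidly
```

The geometric decay of `diffs` mirrors the estimate $\|\delta z_{i}^{k}\|_{\mathcal{B}}\lesssim h^{k+1}$ proved below: summability of the differences makes the iterates a Cauchy sequence.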
### 4.1 Proof of Theorem 4.3
As mentioned before, the idea of the proof of the theorem is to use an
approximate scheme to construct the solution to the perturbed system of
equations (4.3)-(4.4). The first term in the sequence
$(z_{i}^{0}(t,x),\omega^{0}(t,x))$ is set to zero everywhere in
$\mathbb{R}_{+}\times\mathbb{R}^{d}$. Then, given the $k$-th iterate, we define
$(z_{i}^{k+1}(t,x),\omega^{k+1}(t,x))$ as the solution of the following linear
approximate system.
$\displaystyle\partial_{t}z_{i}^{k+1}-k^{c}\Delta z_{i}^{k+1}$
$\displaystyle=F_{i}^{k}~{}~{}\text{for }i=A,B,C$ (4.10)
$\displaystyle\partial_{t}\omega^{k+1}-\bigg{(}\frac{(k^{c})^{2}}{k^{\theta}}+\frac{\kappa}{k^{\theta}\sum_{i}\tilde{c}_{i}}\bigg{)}\Delta\omega^{k+1}$
$\displaystyle=G^{k}$ (4.11)
where
$\displaystyle\begin{split}F_{i}^{k}&=-\sigma_{i}\bigg{[}k^{c}\sum_{j}\sigma_{j}\frac{z^{k}_{j}}{\tilde{c}_{j}}-k^{\theta}\frac{\omega^{k}}{\tilde{\theta}}\bigg{]}+k^{c}\bigg{(}\frac{1}{\tilde{\theta}}+f(\omega^{k})\bigg{)}\nabla
z_{i}^{k}\cdot\nabla\omega^{k}\\\
&~{}~{}~{}+k^{c}\bigg{[}z_{i}^{k}(\frac{1}{\tilde{\theta}}+f(\omega^{k}))\Delta\omega^{k}-(z_{i}^{k}+\tilde{c}_{i})(\frac{1}{\tilde{\theta}^{2}}+g(\omega^{k}))|\nabla\omega^{k}|^{2}\bigg{]}\end{split}$
(4.12) $\displaystyle\begin{split}G^{k}&=k^{\theta}\sum_{i}\nabla
z_{i}^{k}\cdot\nabla\omega^{k}\bigg{(}\frac{1}{\sum_{i}\tilde{c}_{i}}+f(\sum_{i}z_{i}^{k})\bigg{)}+k^{\theta}|\nabla\omega^{k}|^{2}\bigg{(}\frac{1}{\tilde{\theta}}+f(\omega^{k})\bigg{)}+\kappa
f(c)\Delta\omega^{k}\\\
&~{}~{}~{}+\bigg{(}\frac{1}{\sum_{i}k^{\theta}\tilde{c}_{i}}+f(\sum_{i}z_{i}^{k})\bigg{)}\sum_{i}\sigma_{i}k^{\theta}(\omega^{k}+\tilde{\theta})\big{(}k^{c}\sum_{j}\sigma_{j}\frac{z_{j}^{k}}{\tilde{c}_{j}}-k^{\theta}\frac{\omega^{k}}{\tilde{\theta}}\big{)}\\\
&~{}~{}~{}+\bigg{(}\frac{1}{\sum_{i}k^{\theta}\tilde{c}_{i}}+f(\sum_{i}z_{i}^{k})\bigg{)}(k^{c})^{2}\sum_{i}\bigg{[}(\eta_{i}-1)\frac{|(z_{i}^{k}+\tilde{c}_{i})\nabla\omega^{k}+(\omega^{k}+\tilde{\theta})\nabla
z_{i}^{k}|^{2}}{(z_{i}^{k}+\tilde{c}_{i})(\omega^{k}+\tilde{\theta})}\bigg{]}\\\
&~{}~{}~{}+\bigg{(}\frac{1}{\sum_{i}k^{\theta}\tilde{c}_{i}}+f(\sum_{i}z_{i}^{k})\bigg{)}(k^{c})^{2}\sum_{i}\bigg{[}\nabla
z_{i}^{k}\cdot\nabla\omega^{k}+\omega^{k}\Delta z_{i}^{k}\bigg{]}\end{split}$
(4.13)
and where we define
$\displaystyle f(x):=\frac{1}{\tilde{x}+x}-\frac{1}{\tilde{x}}~{}~{}\text{and
}g(x):=\frac{1}{(x+\tilde{x})^{2}}-\frac{1}{\tilde{x}^{2}}$
We note that $f$ and $g$ are smooth functions for $x>-\tilde{x}$ and that, for
$|x|/\tilde{x}\ll 1$, both $f$ and $g$ are of order $\mathcal{O}(x)$.
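For reference, both differences can be brought over a common denominator, which makes the small-$x$ behaviour explicit:
$\displaystyle f(x)=\frac{1}{\tilde{x}+x}-\frac{1}{\tilde{x}}=-\frac{x}{\tilde{x}(\tilde{x}+x)},\qquad g(x)=\frac{1}{(x+\tilde{x})^{2}}-\frac{1}{\tilde{x}^{2}}=-\frac{x(2\tilde{x}+x)}{\tilde{x}^{2}(x+\tilde{x})^{2}}.$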
###### Proposition 4.4 (Iterative scheme).
Let $(z_{A}^{k},z_{B}^{k},z_{C}^{k},\omega^{k})$ be the unique global-in-time
classical solution to the approximate system (4.10)-(4.11). Then the solution
belongs to the space
$\mathcal{L}_{T}^{\infty}\big{(}\dot{B}_{2,1}^{d/2}\big{)}$ fulfilling the
following inequalities
$\displaystyle\|z_{i}^{k}\|_{\mathcal{B}}\leq h^{2}~{}~{}\text{for
}i=A,B,C~{}~{}\|\omega^{k}\|_{\mathcal{B}}\leq h^{2}.$ (4.14)
Furthermore, the differences between two consecutive solutions satisfy
$\displaystyle\|\delta z_{i}^{k}\|_{\mathcal{B}}\leq h^{k+1}~{}~{}\text{for
}i=A,B,C~{}~{}\|\delta\omega^{k}\|_{\mathcal{B}}\leq h^{k+1}.$ (4.15)
Given this proposition, Theorem 4.3 can be proven as follows. Let
$(z_{A}^{k},z_{B}^{k},z_{C}^{k},\omega^{k})$ be an approximate solution
satisfying the estimate of Proposition 4.4. Then the following series
converges
$\displaystyle\sum_{k=1}^{\infty}\sum_{i}\|\delta
z_{i}^{k}\|_{\mathcal{B}}+\|\delta\omega^{k}\|_{\mathcal{B}}<\infty.$
Thus we conclude that the sequence
$\big{(}z_{A}^{k},z_{B}^{k},z_{C}^{k},\omega^{k}\big{)}_{k\in\mathbb{N}}$
forms a Cauchy sequence in the space $\mathcal{B}$ and the limit
$(z_{A},z_{B},z_{C},\omega)$ is a strong solution of the perturbed system
(4.3)-(4.4).
The proof of this proposition is split into several steps. The first one is
to show that the approximate solutions are bounded in the Besov space
$\mathcal{B}$.
Concentration equation: We consider an approximate solution $z_{i}^{k}$ and
aim to show that the next iterate is bounded by
$\|z_{i}^{k+1}\|_{\mathcal{B}}\leq h^{2}$. By Theorem 3.9 the norm
$\|z_{i}^{k+1}\|_{\mathcal{B}}$ is bounded by
$\displaystyle\|z_{i}^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\|\partial_{t}z_{i}^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+k^{c}\|\Delta
z_{i}^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$ $\displaystyle\leq
C\big{[}\|z_{i,0}\|_{\dot{B}_{2,1}^{d/2}}+\|F_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\big{]}$
By the smallness assumption on the initial data we obtain
$\displaystyle\begin{split}\|z_{i}^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}&+\|\partial_{t}z_{i}^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+k^{c}\|\Delta
z_{i}^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\\\ &\leq
C\big{[}h^{4}+\|F_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\big{]}\end{split}$
(4.16)
We claim that the forcing term is bounded by
$\displaystyle\|F_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle\lesssim
k^{c}\sum_{j}\frac{\|z_{j}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}}{\tilde{c}_{j}}+k^{\theta}\frac{\|\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}}{\tilde{\theta}}$
$\displaystyle+k^{c}\big{(}\frac{1}{\tilde{\theta}}+\|f(\omega^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\nabla
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+k^{c}\|z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{(}\frac{1}{\tilde{\theta}}+\|f(\omega^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\Delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+k^{c}(\tilde{c}_{i}+\|z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})})\big{(}\frac{1}{\tilde{\theta}^{2}}+\|g(\omega^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}^{2}$
Then, by the assumption on the equilibrium state, we estimate
$\displaystyle\|F_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle\lesssim
k^{c}\sum_{j}h^{2}\|z_{j}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+k^{\theta}h^{2}\|\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+k^{c}\big{(}h^{2}+\|f(\omega^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\nabla
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+k^{c}\|z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{(}h^{2}+\|f(\omega^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\Delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+(h^{-2}+\|z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})})\big{(}h^{4}+\|g(\omega^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}^{2}$
and using Lemma 3.11 yields
$\displaystyle\|F_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle\lesssim
k^{c}\sum_{j}h^{2}\|z_{j}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+h^{2}\|\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+k^{c}\big{(}h^{2}+Q(f,\omega^{k})\|\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\nabla
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+k^{c}\|z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{(}h^{2}+Q(f,\omega^{k})\|\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\Delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+(h^{-2}+\|z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})})\big{(}h^{4}+Q(g,\omega^{k})\|\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}^{2}$
We observe that by the assumption on $z_{i}^{k}$ and $\omega^{k}$ for any
fixed $k$ we have
$\displaystyle\|F_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle\lesssim
k^{c}h^{2}h^{2}+k^{c}\big{(}h^{2}+Q(f,\omega^{k})h^{2}\big{)}h^{2}h^{2}+h^{2}h^{2}$
$\displaystyle+k^{c}h^{2}\big{(}h^{2}+Q(f,\omega^{k})h^{2}\big{)}h^{2}+(h^{-2}+h^{2})\big{(}h^{4}+Q(g,\omega^{k})h^{2}\big{)}h^{4}$
Thus we obtain that
$\displaystyle\|F_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\lesssim
h^{4}$ (4.17)
Combining the estimate from equation (4.16) with the estimate in equation
(4.17) yields
$\displaystyle\|z_{i}^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\|\partial_{t}z_{i}^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+k^{c}\|\Delta
z_{i}^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\leq Ch^{4}$ (4.18)
and thus $\|z_{i}^{k+1}\|_{\mathcal{B}}\leq h^{2}$ which concludes the proof
of the first estimate in (4.14).
Now, we consider the difference between two solutions $\delta
z_{i}^{k+1}=z_{i}^{k+2}-z_{i}^{k+1}$. Then $\delta z_{i}^{k+1}$ is a solution
to
$\displaystyle\partial_{t}\delta z_{i}^{k+1}-k^{c}\Delta\delta
z_{i}^{k+1}=\delta F_{i}^{k}$
where
$\displaystyle\delta F_{i}^{k}$
$\displaystyle=-\sigma_{i}\bigg{[}k^{c}\sum_{j}\sigma_{j}\frac{\delta
z^{k}_{j}}{\tilde{c}_{j}}-k^{\theta}\frac{\delta\omega^{k}}{\tilde{\theta}}\bigg{]}-k^{c}\delta
z_{i}^{k}\frac{|\nabla\omega^{k+1}|^{2}}{(\omega^{k+1}+\tilde{\theta})^{2}}$
$\displaystyle+k^{c}\bigg{[}\nabla\delta
z_{i}^{k}\cdot\frac{\nabla\omega^{k+1}}{\omega^{k+1}+\tilde{\theta}}-\nabla
z_{i}^{k}\cdot\bigg{(}\frac{\nabla\delta\omega^{k}}{\omega^{k+1}+\tilde{\theta}}+\frac{\nabla\omega^{k}\delta\omega^{k}}{(\omega^{k+1}+\tilde{\theta})(\omega^{k}+\tilde{\theta})}\bigg{)}\bigg{]}$
$\displaystyle+k^{c}\bigg{[}\delta
z_{i}^{k}\frac{\Delta\omega^{k+1}}{\omega^{k+1}+\tilde{\theta}}+z_{i}^{k}\bigg{(}\frac{\Delta\delta\omega^{k}}{\omega^{k+1}+\tilde{\theta}}+\frac{\Delta\omega^{k}\delta\omega^{k}}{(\omega^{k+1}+\tilde{\theta})(\omega^{k}+\tilde{\theta})}\bigg{)}\bigg{]}$
$\displaystyle-k^{c}(z_{i}^{k}+\tilde{c}_{i})\bigg{(}\frac{\nabla\delta\omega^{k}\cdot(\nabla\omega^{k+1}+\nabla\omega^{k})}{(\omega^{k+1}+\tilde{\theta})^{2}}-\frac{|\nabla\omega^{k}|^{2}(2\tilde{\theta}+\omega^{k+1}+\omega^{k})}{(\omega^{k+1}+\tilde{\theta})^{2}(\omega^{k}+\tilde{\theta})^{2}}\delta\omega^{k}\bigg{)}$
This can be rewritten as follows
$\displaystyle\delta F_{i}^{k}=$
$\displaystyle-\sigma_{i}\bigg{[}k^{c}\sum_{j}\sigma_{j}\frac{\delta
z^{k}_{j}}{\tilde{c}_{j}}-k^{\theta}\frac{\delta\omega^{k}}{\tilde{\theta}}\bigg{]}+k^{c}\nabla\delta
z_{i}^{k}\cdot\nabla\omega^{k+1}(\frac{1}{\tilde{\theta}}+f(\omega^{k+1}))$
$\displaystyle-k^{c}\nabla
z_{i}^{k}\cdot\bigg{(}\nabla\delta\omega^{k}(\frac{1}{\tilde{\theta}}+f(\omega^{k+1}))+\nabla\omega^{k}\delta\omega^{k}(\frac{1}{\tilde{\theta}}+g(\omega^{k+1},\omega^{k}))\bigg{)}$
$\displaystyle+k^{c}\delta
z_{i}^{k}\Delta\omega^{k+1}(\frac{1}{\tilde{\theta}}+f(\omega^{k+1}))$
$\displaystyle+k^{c}z_{i}^{k}\bigg{(}\Delta\delta\omega^{k}(\frac{1}{\tilde{\theta}}+f(\omega^{k+1}))+\Delta\omega^{k}\delta\omega^{k}(\frac{1}{\tilde{\theta}^{2}}+g(\omega^{k+1},\omega^{k}))\bigg{)}$
$\displaystyle-k^{c}\delta
z_{i}^{k}|\nabla\omega^{k+1}|^{2}\big{(}\frac{1}{\tilde{\theta}^{2}}+g(\omega^{k+1})\big{)}$
$\displaystyle-k^{c}(z_{i}^{k}+\tilde{c}_{i})\bigg{(}\nabla\delta\omega^{k}\cdot(\nabla\omega^{k+1}+\nabla\omega^{k})\big{(}\frac{1}{\tilde{\theta}^{2}}+g(\omega^{k+1})\big{)}$
$\displaystyle-k^{c}(z_{i}^{k}+\tilde{c}_{i})|\nabla\omega^{k}|^{2}(2\tilde{\theta}+\omega^{k+1}+\omega^{k})\big{(}\frac{1}{\tilde{\theta}^{2}}+g(\omega^{k+1})\big{)}\big{(}\frac{1}{\tilde{\theta}^{2}}+g(\omega^{k})\big{)}\delta\omega^{k}$
Again applying Theorem 3.9 yields
$\displaystyle\|\delta
z_{i}^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\|\partial_{t}\delta
z_{i}^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+k^{c}\|\Delta\delta
z_{i}^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\leq C\|\delta
F_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$ (4.19)
where we can estimate further
$\displaystyle\|\delta
F_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\lesssim$
$\displaystyle\sum_{j}\sigma_{j}\frac{\|\delta
z^{k}_{j}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}}{\tilde{c}_{j}}+\frac{\|\delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}}{\tilde{\theta}}$
$\displaystyle+\|\nabla\delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\nabla\omega^{k+1}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}(\frac{1}{\tilde{\theta}}+\|f(\omega^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})})$
$\displaystyle+\|\nabla
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\nabla\delta\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}(\frac{1}{\tilde{\theta}}+\|f(\omega^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})})$
$\displaystyle+\|\nabla
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\delta\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle~{}~{}~{}~{}\times(\frac{1}{\tilde{\theta}}+\|g(\omega^{k+1},\omega^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})})$
$\displaystyle+\|\delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\|\Delta\omega^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}(\frac{1}{\tilde{\theta}}+\|f(\omega^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})})$
$\displaystyle+\|z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\|\Delta\delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}(\frac{1}{\tilde{\theta}}+\|f(\omega^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})})$
$\displaystyle+(\|z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\tilde{c}_{i})\|\Delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\|\delta\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle~{}~{}~{}~{}\times(\frac{1}{\tilde{\theta}^{2}}+\|g(\omega^{k+1},\omega^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})})$
$\displaystyle+\|\delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\|\nabla\omega^{k+1}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}^{2}\big{(}\frac{1}{\tilde{\theta}^{2}}+\|g(\omega^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}$
$\displaystyle+(\|z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\tilde{c}_{i})(\|\nabla\omega^{k+1}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}+\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})})$
$\displaystyle~{}~{}~{}~{}\times\|\nabla\delta\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\big{(}\frac{1}{\tilde{\theta}^{2}}+\|g(\omega^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\bigg{)}$
$\displaystyle+(\|z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\tilde{c}_{i}(2\tilde{\theta}+\|\omega^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\|\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})})$
$\displaystyle~{}~{}~{}~{}\times\big{(}\frac{1}{\tilde{\theta}^{2}}+\|g(\omega^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)})\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}^{2}$
$\displaystyle~{}~{}~{}~{}\times\big{(}\frac{1}{\tilde{\theta}^{2}}+\|g(\omega^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\delta\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}$
By using the assumptions on the equilibrium state and by applying the previous
estimates we obtain
$\displaystyle\|\delta F_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle\lesssim(h^{2}+h^{5}+h^{7})\sum_{j}\|\delta
z^{k}_{j}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+h^{5}\|\nabla\delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle~{}~{}+\|\nabla\delta\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}(h^{3}+h^{4}+h^{5}+h^{7})+\|\Delta\delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}h^{4}$
$\displaystyle~{}~{}+(h^{2}+h^{3}+h^{5}+h^{6})\|\delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
Now, taking into account the induction assumption yields the following
$\displaystyle\|\delta
F_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\lesssim h^{k+2}$ (4.20)
Combining the above estimates yields
$\displaystyle\|\delta
z_{i}^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\|\partial_{t}\delta
z_{i}^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+k^{c}\|\Delta\delta
z_{i}^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\leq h^{k+1}$ (4.21)
which concludes the proof of the induction.
Temperature equation: We proceed in a similar fashion as for the concentration
equation. Let $\omega^{k}$ be the approximate solution to the previous step.
Then by Theorem 3.9 we have that the solution to the next step $\omega^{k+1}$
in the approximate temperature equation exists and that the norm of
$\|\omega^{k+1}\|_{\mathcal{B}}$ is bounded by
$\displaystyle\|\omega^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\|\partial_{t}\omega^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+\|\Delta\omega^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle\leq
C\big{[}\|\omega_{0}\|_{\dot{B}_{2,1}^{d/2}}+\|G^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\big{]}.$
By the assumption on the initial perturbation in the temperature we obtain
$\displaystyle\begin{split}\|\omega^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}&+\|\partial_{t}\omega^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+\|\Delta\omega^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\\\
&\leq
C\big{[}h^{4}+\|G^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\big{]}.\end{split}$
(4.22)
Next, we claim that the forcing term can be bounded as follows
$\displaystyle\|G^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\leq$
$\displaystyle\sum_{i}\|\nabla
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\bigg{(}\tilde{c}^{-1}+\|f(c^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\bigg{)}$
$\displaystyle+\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}^{2}\bigg{(}\tilde{\theta}^{-1}+\|f(\omega^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\bigg{)}$
$\displaystyle+\|f(c^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\|\Delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\bigg{(}\frac{1}{\tilde{c}}+\|f(c^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\bigg{)}(\|\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\tilde{\theta})$
$\displaystyle~{}~{}~{}\times\bigg{(}\sum_{j}\frac{\|z_{j}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}}{\tilde{c}_{j}}+\frac{\|\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}}{\tilde{\theta}}\bigg{)}$
$\displaystyle+\bigg{(}\frac{1}{\tilde{c}}+\|f(c^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\bigg{)}\sum_{i}\|\nabla
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\bigg{(}\frac{1}{\tilde{c}}+\|f(c^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\bigg{)}\sum_{i}\|\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\|\Delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})},$
where we assume that $\eta_{i}=1$ and thus the additional term can be dropped.
Using the assumptions on the equilibrium state and applying Lemma 3.11 then yields
$\displaystyle\|G^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\lesssim$
$\displaystyle\sum_{i}\|\nabla
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\bigg{(}h^{2}+Q(f,c^{k})\|c^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\bigg{)}$
$\displaystyle+\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}^{2}\bigg{(}h^{2}+Q(f,\omega^{k})\|\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\bigg{)}$
$\displaystyle+Q(f,c^{k})\|c^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\|\Delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\bigg{(}h^{2}+Q(f,c^{k})\|c^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\bigg{)}(\|\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+h^{-2})$
$\displaystyle~{}~{}~{}\times
h^{2}\bigg{(}\sum_{j}\|z_{j}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+\|\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\bigg{)}$
$\displaystyle+\bigg{(}h^{2}+Q(f,c^{k})\|c^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\bigg{)}\sum_{i}\|\nabla
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\bigg{(}h^{2}+Q(f,c^{k})\|c^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\bigg{)}\sum_{i}\|\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\|\Delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
Using the control of $z_{i}^{k}$ and $\omega^{k}$ we have
$\displaystyle\|G^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle\leq h^{4}$ (4.23)
Hence we obtain
$\displaystyle\|\omega^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\|\partial_{t}\omega^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+\|\Delta\omega^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\leq
Ch^{4}.$ (4.24)
and thus
$\displaystyle\|\omega^{k+1}\|_{\mathcal{B}}\leq h^{2}$ (4.25)
which completes the proof of the second estimate in (4.14).
Finally, we consider the difference between two approximate solutions and set
$\delta\omega^{k+1}=\omega^{k+2}-\omega^{k+1}$. Then $\delta\omega^{k+1}$ is a
solution to
$\displaystyle\partial_{t}\delta\omega^{k+1}-\tilde{\kappa}\Delta\delta\omega^{k+1}=\delta
G^{k},$
where $\tilde{\kappa}:=\frac{(k^{c})^{2}}{k^{\theta}}+\frac{\kappa}{k^{\theta}\sum_{i}\tilde{c}_{i}}$ denotes the diffusion coefficient from (4.11) and
$\displaystyle\delta G^{k}=$ $\displaystyle
k^{\theta}\big{(}\tilde{c}^{-1}+f(c^{k+1})\big{)}\sum_{i}\bigg{(}\nabla\delta
z_{i}^{k}\cdot\nabla\omega^{k+1}+\nabla
z_{i}^{k}\cdot\nabla\delta\omega^{k}\bigg{)}$
$\displaystyle+k^{\theta}\sum_{i}\nabla
z_{i}^{k}\cdot\nabla\omega^{k}\delta\omega^{k}\big{(}\tilde{c}^{-2}+g(c^{k+1},c^{k})\big{)}$
$\displaystyle+k^{\theta}\nabla\delta\omega^{k}\cdot\big{(}\nabla\omega^{k+1}+\nabla\omega^{k}\big{)}\big{(}\tilde{\theta}^{-1}+f(\omega^{k+1})\big{)}$
$\displaystyle+|\nabla\omega^{k}|^{2}\delta\omega^{k}\big{(}\tilde{\theta}^{-1}+g(\omega^{k+1},\omega^{k})\big{)}$
$\displaystyle+\kappa
f(c^{k+1})\Delta\delta\omega^{k}+\kappa\Delta\omega^{k}\sum_{i}\delta
z_{i}^{k}g(c^{k+1},c^{k})$
$\displaystyle+(\omega^{k+1}+\tilde{\theta})\big{(}\tilde{c}^{-1}+f(c^{k+1})\big{)}\bigg{(}\sum_{j}\sigma_{j}\frac{\delta
z_{j}^{k}}{\tilde{c}_{j}}-\frac{\delta\omega^{k}}{\tilde{\theta}}\bigg{)}$
$\displaystyle+\bigg{(}\sum_{j}\sigma_{j}\frac{z_{j}^{k}}{\tilde{c}_{j}}-\frac{\omega^{k}}{\tilde{\theta}}\bigg{)}\bigg{(}\delta\omega^{k}\big{(}\tilde{c}^{-1}+f(c^{k+1})\big{)}$
$\displaystyle~{}~{}~{}+(\omega^{k}+\tilde{\theta})\sum_{i}\delta
z_{i}^{k}\big{(}\tilde{c}^{-2}+g(c^{k+1},c^{k})\big{)}\bigg{)}$
$\displaystyle+\sum_{i}\big{(}\omega^{k+1}\Delta\delta
z_{i}^{k}+\delta\omega^{k}\Delta
z_{i}^{k}\big{)}\big{(}\tilde{c}^{-1}+f(c^{k+1})\big{)}$
$\displaystyle+\omega^{k}\sum_{i}\Delta z_{i}^{k}\sum_{i}\delta
z_{i}^{k}\big{(}\tilde{c}^{-2}+g(c^{k+1},c^{k})\big{)}$
Then by applying Theorem 3.9 we have the following estimate
$\displaystyle\|\delta\omega^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\|\partial_{t}\delta\omega^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+\|\tilde{\kappa}\Delta\delta\omega^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\leq
C\|\delta G^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})},$ (4.26)
where we estimate the last term as follows
$\displaystyle\|\delta G^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle\lesssim\big{(}\tilde{c}^{-1}+\|f(c^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\sum_{i}\|\nabla\delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\nabla\omega^{k+1}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\big{(}\tilde{c}^{-1}+\|f(c^{k+1})\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\big{)}\sum_{i}\|\nabla
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\nabla\delta\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\big{(}\tilde{c}^{-2}+\|g(c^{k+1},c^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\sum_{i}\|\nabla
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle~{}~{}~{}\times\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\|\delta\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\big{(}\tilde{\theta}^{-1}+\|f(\omega^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\nabla\delta\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle~{}~{}~{}\times\big{(}\|\nabla\omega^{k+1}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}+\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}\big{)}$
$\displaystyle+\big{(}\tilde{\theta}^{-1}+\|g(\omega^{k+1},\omega^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\nabla\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}^{2}\|\delta\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\kappa\|f(c^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\|\Delta\delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\kappa\|\Delta\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\sum_{i}\|\delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\|g(c^{k+1},c^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+(\|\omega^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\tilde{\theta})\big{(}\tilde{c}^{-1}+\|f(c^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}$
$\displaystyle~{}~{}~{}\times\bigg{(}\sum_{j}\frac{\|\delta
z_{j}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}}{\tilde{c}_{j}}+\frac{\|\delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}}{\tilde{\theta}}\bigg{)}$
$\displaystyle+\bigg{(}\sum_{j}\frac{\|z_{j}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}}{\tilde{c}_{j}}+\frac{\|\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}}{\tilde{\theta}}\bigg{)}$
$\displaystyle~{}~{}~{}\times\bigg{[}\|\delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\big{(}\tilde{c}^{-1}+\|f(c^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}+(\|\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\tilde{\theta})$
$\displaystyle~{}~{}~{}~{}~{}~{}\times\sum_{i}\|\delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\big{(}\tilde{c}^{-2}+\|g(c^{k+1},c^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\bigg{]}$
$\displaystyle+\big{(}\tilde{c}^{-1}+\|f(c^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\sum_{i}\|\omega^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\|\Delta\delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\big{(}\tilde{c}^{-1}+\|f(c^{k+1})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\sum_{i}\|\delta\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\|\Delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle+\big{(}\tilde{c}^{-2}+\|g(c^{k+1},c^{k})\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\big{)}\|\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle~{}~{}~{}\times\sum_{i}\|\Delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\sum_{i}\|\delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}$
Using the assumptions on the equilibrium state and the previous estimates
yields
$\displaystyle\|\delta G^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle\lesssim(h^{2}+h^{4})\sum_{i}\bigg{(}\|\Delta\delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+\|\nabla\delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}+\|\delta
z_{i}^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\bigg{)}$
$\displaystyle+(h^{2}+h^{4})\bigg{(}\|\Delta\delta\omega^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+\|\nabla\delta\omega^{k}\|_{\mathcal{L}_{T}^{2}(\dot{B}_{2,1}^{d/2})}+\|\delta\omega^{k}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}\bigg{)}$
By combining this inequality with the induction assumption we obtain
$\displaystyle\|\delta G^{k}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}$
$\displaystyle\lesssim h^{k+2}$ (4.27)
and therefore this results in the final estimate
$\displaystyle\|\delta\omega^{k+1}\|_{\mathcal{L}_{T}^{\infty}(\dot{B}_{2,1}^{d/2})}+\|\partial_{t}\delta\omega^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}+\|\tilde{\kappa}\Delta\delta\omega^{k+1}\|_{\mathcal{L}_{T}^{1}(\dot{B}_{2,1}^{d/2})}\leq
h^{k+1}$ (4.28)
which concludes the proof of the proposition.
The next step in the proof of Theorem 4.3 is to pass to the limit. From the
uniform estimates obtained in Proposition 4.4 we can take the limit as $k$
goes to $\infty$. Since
$\big{(}z_{A}^{k},z_{B}^{k},z_{C}^{k},\omega^{k}\big{)}_{k\in\mathbb{N}}$ is a
Cauchy sequence, the following convergence result holds:
$\displaystyle z_{i}^{k}\to z_{i}~{}~{}\text{in
}\mathcal{L}_{T}^{\infty}\big{(}\dot{B}_{2,1}^{d/2}\big{)},~{}~{}(\partial_{t}z_{i}^{k},\Delta
z_{i}^{k})\to(\partial_{t}z_{i},\Delta
z_{i})~{}~{}\in\mathcal{L}_{T}^{1}\big{(}\dot{B}_{2,1}^{d/2}\big{)}~{}~{}\text{for
}i=A,B,C$ (4.29) $\displaystyle\omega^{k}\to\omega~{}~{}\text{in
}\mathcal{L}_{T}^{\infty}\big{(}\dot{B}_{2,1}^{d/2}\big{)},~{}~{}(\partial_{t}\omega^{k},\Delta\omega^{k})\to(\partial_{t}\omega,\Delta\omega)~{}~{}\in\mathcal{L}_{T}^{1}\big{(}\dot{B}_{2,1}^{d/2}\big{)}$
(4.30)
Therefore, by passing to the limit as $k\to\infty$ we obtain that
$\displaystyle\big{(}z_{A},z_{B},z_{C},\omega\big{)}=\big{(}c_{A}-\tilde{c}_{A},c_{B}-\tilde{c}_{B},c_{C}-\tilde{c}_{C},\theta-\tilde{\theta}\big{)}$
is a classical solution to the reaction-diffusion system with temperature
close to equilibrium (4.1)-(4.2).
The final step in the proof of Theorem 4.3 is to show the uniqueness of
solutions.
###### Proposition 4.5.
Let the initial data $\big{(}z_{A,0},z_{B,0},z_{C,0},\omega_{0}\big{)}$
satisfy the assumptions of Theorem 4.3 and let
$\big{(}z_{A}^{j},z_{B}^{j},z_{C}^{j},\omega^{j}\big{)}$ for $j=1,2$ be two
classical solutions to the same initial data belonging to the space
$\mathcal{B}$ defined in (4.9). Denoting by $\delta z_{i}=z_{i}^{1}-z_{i}^{2}$ for
$i=A,B,C$ and $\delta\omega=\omega^{1}-\omega^{2}$ the differences between the
two solutions, it follows that
$\displaystyle\sum_{i}\|\delta
z_{i}\|_{\mathcal{B}}+\|\delta\omega\|_{\mathcal{B}}\lesssim
h^{2}\bigg{(}\sum_{i}\|\delta
z_{i}\|_{\mathcal{B}}+\|\delta\omega\|_{\mathcal{B}}\bigg{)}.$ (4.31)
This implies that for $h>0$ small enough we have
$\displaystyle\sum_{i}\|\delta
z_{i}\|_{\mathcal{B}}+\|\delta\omega\|_{\mathcal{B}}=0$
and therefore the two solutions coincide.
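Spelling out the last step: if $C$ denotes the implicit constant in (4.31), the inequality rearranges to
$\displaystyle\big{(}1-Ch^{2}\big{)}\bigg{(}\sum_{i}\|\delta z_{i}\|_{\mathcal{B}}+\|\delta\omega\|_{\mathcal{B}}\bigg{)}\leq 0,$
so for $0<h<C^{-1/2}$ the prefactor is positive and the norm must vanish.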
The proof of the proposition follows by repeating the arguments used to bound
the differences of two approximate solutions in equations (4.1) and (4.2).
This concludes the proof of the well-posedness result for the chemical
reaction-diffusion system with temperature.
### 4.2 Conclusion and Remarks
From the general model of the non-isothermal reaction-diffusion system we can
obtain the ideal gas model by considering only one species with density $\rho$
and by setting the reaction rate to zero; see [LS20] for more details on the
derivation. The system then takes the form
$\displaystyle\partial_{t}\rho-k^{\rho}\Delta\rho$
$\displaystyle=k^{\rho}\nabla\cdot\big{(}\rho\nabla\ln\theta\big{)}$ $\displaystyle
k^{\theta}\rho\bigg{(}\partial_{t}\theta-k^{\theta}\frac{\nabla\rho\cdot\nabla\theta}{\rho}-k^{\theta}\frac{|\nabla\theta|^{2}}{\theta}\bigg{)}$
$\displaystyle=\kappa\Delta\theta+(k^{\rho})^{2}(\eta-1)\frac{\nabla(\rho\theta)}{\rho\theta}+(k^{\rho})^{2}\Delta(\rho\theta).$
Similarly, by using a different constitutive relation for the dissipation, we
obtain the ideal gas system discussed in [LLT20]:
$\displaystyle\partial_{t}\rho$ $\displaystyle=\Delta(\rho\theta)$
$\displaystyle
k^{\theta}\partial_{t}(\rho\theta)-k^{\rho}(k^{\rho}+k^{\theta})\nabla\cdot\bigg{(}\theta\nabla(\rho\theta)\bigg{)}$
$\displaystyle=\nabla\cdot(\kappa\nabla\theta).$
We observe that the well-posedness result for reaction-diffusion systems
(Theorem 4.3) can be applied to both systems, yielding the existence of
solutions for small perturbations of the equilibrium.
For a different approach to these systems we refer to [LS20], where the
existence of weak solutions to the Brinkman-Fourier system on a bounded domain
is proven by using energy estimates rather than scaling arguments.
In future work we want to extend the derivation of non-isothermal fluid
mechanics to non-local systems, with the porous media equation and the Poisson-
Nernst-Planck equation as examples; see [DL17] for the case without temperature.
## Acknowledgments
The authors would like to express their thanks to Dr. Yiwei Wang and Professor
Tengfei Zhang for the inspiring discussions and new insights. The work was
partially supported by DMS-1950868 and the United States – Israel Binational
Science Foundation (BSF) #2024246.
## References
* [AB71] M.L. Anderson and R.K. Boyd “Nonequilibrium thermodynamics in chemical kinetics” In _Canadian Journal of Chemistry_ 49.7, 1971, pp. 1001–1007
* [AL19] F. De Anna and C. Liu “Non-isothermal General Ericksen–Leslie System: Derivation, Analysis and Thermodynamic Consistency” In _Archive for Rational Mechanics and Analysis_ 231, 2019, pp. 637–717
* [BCD11] H. Bahouri, J.-Y. Chemin and R. Danchin “Fourier analysis and nonlinear partial differential equations” Springer, Berlin-Heidelberg, 2011
* [BRR00] R.S. Berry, S.A. Rice and J. Ross “Physical Chemistry” Oxford University Press, Oxford, 2000
* [BH15] M. Bulíček and J. Havrda “On existence of weak solution to a model describing incompressible mixtures with thermal diffusion cross effects” In _Z. Angew. Math. Mech._ 95, 2015, pp. 589–619
* [Chu+93] X. Chu, J. Ross, P.M. Hunt and K.L.C. Hunt “Thermodynamic and stochastic theory of reaction-diffusion systems with multiple stationary states” In _The Journal of chemical physics_ 99.5 American Institute of Physics, 1993, pp. 3444–3454
* [CG67] B. Coleman and M. Gurtin “Thermodynamics with internal state variables” In _The Journal of Chemical Physics_ 47.2 American Institute of Physics, 1967, pp. 597–613
* [Dan01] R. Danchin “Global Existence in Critical Spaces for Flows of Compressible Viscous and Heat-Conductive Gases” In _Arch. Rational Mech. Anal._ 160, 2001, pp. 1–39
* [Dan14] R. Danchin “A Lagrangian approach for the compressible Navier-Stokes equations” In _Annales de l’Institut Fourier_ 64, 2014, pp. 753–791
* [Dem06] Y. Demirel “Non-isothermal reaction-diffusion systems with thermodynamically coupled heat and mass transfer” In _Chemical engineering science_ 61.10, 2006, pp. 3379–3385
* [DL17] C. Deng and C. Liu “Largest well-posed spaces for the general diffusion system with nonlocal interactions” In _Journal of Functional Analysis_ 272.10, 2017, pp. 4030 –4062
* [ERS15] M. Eleuteri, E. Rocca and G. Schimperna “On a non-isothermal diffuse interface model for two-phase flows of incompressible fluids” In _Discrete & Continuous Dynamical Systems - A_ 35, 2015, pp. 2497–2522
* [Eri98] J.L Ericksen “Introduction to the Thermodynamics of Solids” Springer, New York, 1998
* [FN09] E. Feireisl and A. Novotný “Singular Limits in Thermodynamics of Viscous Fluids” Birkhäuser-Verlag, Basel, 2009
* [Fré02] M. Frémond “Non-Smooth Thermomechanics” Springer, Berlin, 2002
* [GBY17] F. Gay-Balmaz and H. Yoshimura “A Lagrangian variational formulation for nonequilibrium thermodynamics. Part I: Discrete systems” In _Journal of Geometry and Physics_ 111, 2017, pp. 169 –193
* [GBY17a] F. Gay-Balmaz and H. Yoshimura “A Lagrangian variational formulation for nonequilibrium thermodynamics. Part II: Continuum systems” In _Journal of Geometry and Physics_ 111, 2017, pp. 194 –212
* [GKL17] M.-H. Giga, A. Kirshtein and C. Liu “Variational modeling and complex fluids” In _Handbook of mathematical analysis in mechanics of viscous fluids_ Springer International Publishing, Cham, 2017, pp. 1–41
* [Grm84] M. Grmela “Bracket formulation of dissipative fluid mechanics equations” In _Physics Letters A_ 102.8, 1984, pp. 355 –358
* [GÖ97] M. Grmela and H.C. Öttinger “Dynamics and thermodynamics of complex fluids. I. Development of a general formalism” In _Physical Review E_ 56.6, 1997, pp. 6620
* [GÖ97a] M. Grmela and H.C. Öttinger “Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism” In _Physical Review E_ 56.6, 1997, pp. 6633
* [GM62] S.R. De Groot and P. Mazur “Non-equilibrium thermodynamics” North-Holland Publ Co., Amsterdam, 1962
* [HL+10] Y. Hyon and C. Liu “Energetic variational approach in complex fluids: maximum dissipation principle” In _Discrete & Continuous Dynamical Systems-A_ 26.4, 2010, pp. 1291
* [JCVL96] D. Jou, J. Casas-Vázquez and G. Lebon “Extended irreversible thermodynamics” Springer, Berlin, 1996
* [KP14] D. Kondepudi and I. Prigogine “Modern thermodynamics: from heat engines to dissipative structures” John Wiley & Sons, Chichester, 2014
* [LLT20] N.A. Lai, C. Liu and A. Tarfulea “Positivity of temperature for some non-isothermal fluid models” In _arXiv preprint arXiv:2011.07192_ , 2020
* [Leb89] G. Lebon “From classical irreversible thermodynamics to extended thermodynamics” In _Acta Physica Hungarica_ 66, 1989, pp. 241–249
* [LS20] C. Liu and J.-E. Sulzbach “The Brinkman-Fourier System with Ideal Gas Equilibrium” In _arXiv preprint arXiv:2007.07304_ , 2020
* [LWL18] P. Liu, S. Wu and C. Liu “Non-isothermal electrokinetics: energetic variational approach” In _Communications in Mathematical Sciences_ 16, 2018, pp. 1451–1463
* [McQ76] D.A. McQuarrie “Statistical Mechanics” Harper & Row, New York, 1976
* [Ons31] L. Onsager “Reciprocal Relations in Irreversible Processes. I.” In _Phys. Rev._ 37 American Physical Society, 1931, pp. 405–426
* [Ons31a] L. Onsager “Reciprocal Relations in Irreversible Processes. II.” In _Phys. Rev._ 38 American Physical Society, 1931, pp. 2265–2279
* [Ött05] H.C. Öttinger “Beyond equilibrium thermodynamics” John Wiley & Sons, Hoboken, New Jersey, 2005
* [RB08] J. Ross and S. Berry “Thermodynamics and Fluctuations far from Equilibrium” Springer, New York, 2008
* [Sal01] S. Salinas “Introduction to Statistical Physics” Springer, New York, 2001
* [Saw18] Y. Sawano “Theory of Besov spaces” Springer, Singapore, 2018
* [Tru84] C. Truesdell “Rational Thermodynamics” Springer, New York, 1984
* [Wan+20] Y. Wang, C. Liu, P. Liu and B. Eisenberg “Field theory of reaction-diffusion: Law of mass action with an energetic variational approach” In _Phys. Rev. E_ 102, 2020, pp. 062147
* [Zár+07] J.M. Ortiz Zárate, J. Sengers, D. Bedeaux and S. Kjelstrup “Concentration fluctuations in nonisothermal reaction-diffusion systems” In _The Journal of Chemical Physics_ 127.3, 2007, pp. 34501
Institute of Cyber-Systems and Control, College of Control Science and
Engineering, Zhejiang University, Hangzhou, 310027, China
# Spectrum Attention Mechanism for Time Series Classification
Shibo Zhou, Yu Pan
###### Abstract
Time series classification (TSC) has always been an important and challenging
research task. With the wide application of deep learning, more and more
researchers use deep learning models to solve TSC problems. Since time series
usually contain a lot of noise, which has a negative impact on network
training, the original data are typically filtered before the network is
trained. Existing schemes treat filtering and training as two separate stages,
and the design of the filter requires expert experience, which increases the
design difficulty of the algorithm and is not universal. We note that the
essence of filtering is to suppress insignificant frequency components and
highlight important ones, which closely resembles an attention mechanism.
In this paper, we propose an attention mechanism that acts on the spectrum (SAM).
The network can assign appropriate weights to each frequency component to
achieve adaptive filtering. We use L1 regularization to further enhance the
frequency screening capability of SAM. We also propose a segmented SAM (SSAM)
to avoid the loss of time-domain information caused by using the spectrum of
the whole sequence: a tumbling window segments the original data, and SAM is
applied to each segment to generate new features. We further propose a
heuristic strategy to search for the appropriate number of segments.
Experimental results show that SSAM produces better feature representations,
makes the network converge faster, and improves the robustness and
classification accuracy.
###### keywords:
time series classification, spectrum attention mechanism, deep learning,
adaptive filtering
00footnotetext: This work was supported by the National Key R&D Program of
China under Grant 2018YFB1700100.
## 1 Introduction
Time series are real-valued ordered data with the characteristics of large
data volume, high dimensionality, and high noise. Traditional TSC algorithms
require manual feature extraction, which is complicated and not universal[1].
With the wide application of deep learning in computer vision, natural language
processing, recommendation systems, etc.[2, 3, 4], more and more researchers
use deep learning to solve TSC problems. Deep learning based TSC models do not
require manual feature design; the network can
TSC models include FCN[5], MCNN[6], InceptionTime[7], etc. However, time
series usually contain a lot of noise, which has a negative impact on the
training of the model. Before applying the algorithm, the original data is
generally filtered to improve the feature representation. Existing schemes
treat filtering and classification as two stages. For example, the effective
frequency spectrum of the EMG signal ranges from 0 to 4 Hz, so digital filters
are generally applied to filter out high-frequency noise in the data
preprocessing stage[8]. For EEG and MEG time series, a high-pass filter is
generally used to remove slow drifts[9]. In the field of remote sensing images,
a Fourier smoothing algorithm is usually applied to the original normalized
difference vegetation index time series (NDVI-TS) to improve classification
accuracy[10]. In all the above cases, the filter design requires expert
experience, which undoubtedly increases the difficulty of algorithm design.
In this paper, we propose a spectrum attention mechanism (SAM) that is
compatible with deep learning models. It is embedded as the first layer of the
network, and its mask vector is updated through training, so as to achieve
adaptive filtering of the original data and generate features that are more
conducive to network training. We validate the effectiveness of the scheme
through experiments on a synthetic dataset and real datasets.
The paper is organized as follows: In Section 2, the problem formulation and
our methods are presented. Section 3 presents the experiments and results.
Finally, Section 4 provides the main conclusions of the paper.
## 2 Methodology
### 2.1 Frequency Domain Filtering
Our goal is to design an adaptive filtering module that is compatible with deep
learning models and can generate a feature representation that is more
conducive to network training. We notice that the common point of existing
schemes is the use of frequency-domain filtering. That is, the original data is
transformed into the frequency domain to obtain its spectrum, and then specific
frequency components are filtered out according to the application scenario.
This process can be described by formula (1).
$x_{{filtered}}^{n}=\mathcal{F}^{-1}\left(f\left(\mathcal{F}\left(x^{n}\right),{mask}\right)\right)$
(1)
where $\mathcal{F}$ denotes the frequency-domain transformation, $mask$ is a
vector of the same dimension as the input data, and $f$ represents the
operation on the spectrum, which is generally an element-wise product.
We should preserve the important frequency components in the spectrum and
remove the unimportant components, so the key to the problem is how to
determine a suitable mask.
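To make formula (1) concrete, here is a hedged NumPy sketch that applies a frequency-domain mask using the real FFT. The paper itself uses the DCT (Section 2.2); the FFT here is only a stand-in, and the signal frequencies and mask bins below are made up purely for the demo:

```python
import numpy as np

def filter_with_mask(x, mask):
    """Formula (1): x_filtered = F^{-1}(f(F(x), mask)), with f a per-bin product."""
    sp = np.fft.rfft(x)                      # frequency-domain transformation F
    return np.fft.irfft(sp * mask, n=len(x)) # inverse transform F^{-1}

# Toy example: keep the 5-cycle tone, suppress the 40-cycle tone.
t = np.arange(256)
x = np.cos(2 * np.pi * 5 * t / 256) + np.cos(2 * np.pi * 40 * t / 256)
mask = np.ones(129)   # the rfft of a length-256 signal has 129 bins
mask[40] = 0.0        # zero out the unwanted frequency component
x_filtered = filter_with_mask(x, mask)
```

With an all-ones mask the signal passes through unchanged; choosing which bins to attenuate is exactly the "suitable mask" question posed above.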
### 2.2 Discrete Cosine Transformation
We use the Discrete Cosine Transform (DCT) to transform the raw data into the
frequency domain. DCT is a Fourier-related transform similar to the discrete
Fourier transform (DFT), but uses only a cosine basis. The DCT and inverse DCT
for a time series $X$ of length $N$ are defined by formulas (2) and (3).
$\displaystyle X[k]=\sum_{n=0}^{N-1}a(k)x[n]\cos\left(\frac{(2n+1)\pi
k}{2N}\right)$ (2)
$\displaystyle x[n]=\sum_{k=0}^{N-1}a(k)X[k]\cos\left(\frac{(2n+1)\pi
k}{2N}\right)$ (3)
where $a(u)=\begin{cases}\sqrt{\frac{1}{N}},&u=0\\ \sqrt{\frac{2}{N}},&u=1,2,\cdots,N-1\end{cases}$
We choose to use DCT because it has the following advantages compared with
DFT:
* •
DCT coefficients are real numbers, as opposed to the complex DFT coefficients,
which makes them better suited to gradient descent.
* •
DCT can handle signals with trends well, while DFT suffers from the problem of
"frequency leakage" when representing simple trends.
* •
When successive values are highly correlated, DCT achieves better energy
concentration than DFT.
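To make equations (2) and (3) concrete, here is a direct (unoptimized) NumPy transcription of the orthonormal DCT-II and its inverse; in practice one would use a library routine such as `scipy.fft.dct`:

```python
import numpy as np

def dct(x):
    # Forward transform, matching equation (2): rows indexed by k, columns by n.
    N = len(x)
    n = np.arange(N)
    k = n[:, None]
    a = np.where(n == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
    return a * (x * np.cos((2 * n + 1) * np.pi * k / (2 * N))).sum(axis=1)

def idct(X):
    # Inverse transform, matching equation (3): rows indexed by n, columns by k.
    N = len(X)
    k = np.arange(N)
    n = k[:, None]
    a = np.where(k == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
    return (a * X * np.cos((2 * n + 1) * np.pi * k / (2 * N))).sum(axis=1)
```

Because the normalization $a(u)$ makes the basis orthonormal, the transform is its own length-preserving inverse pair: `idct(dct(x))` recovers `x` and the energy of the coefficients equals the energy of the signal.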
### 2.3 Spectrum Attention mechanism (SAM)
In cognitive science, due to the bottleneck of information processing
capabilities, humans selectively focus on part of the information while
ignoring the rest of the visible information. This is usually called the
attention mechanism[11]. Since frequency-domain filtering removes unimportant
frequency components and retains or strengthens important ones, it follows the
same idea as the attention mechanism.
Figure 1: Spectral Attention Mechanism
We design an attention mechanism that acts on the spectrum (see Fig. 1). It
contains a trainable parameter vector $mask$ of the same dimension as the input
signal, representing the weight of each frequency component; all weights are
initialized to 1. The weights are updated through training, so as to realize
adaptive filtering and generate better features. The forward propagation of
this layer is summarized in Algorithm 1.
Algorithm 1 Spectral Attention Mechanism (SAM)
Input: univariate time series $x^{n}$
Output: $x_{filtered}^{n}$
Initialization: all-ones learnable array $mask^{n}$
1:# Transform the input series into the spectral domain: $sp^{n}\leftarrow DCT(x^{n})$
2:# Element-wise multiply the spectrum by the mask: $masked\_sp^{n}\leftarrow sp^{n}\cdot mask^{n}$
3:# Transform the spectrum back into the time domain: $x_{filtered}^{n}\leftarrow IDCT(masked\_sp^{n})$
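The three steps of Algorithm 1 can be sketched as follows, assuming SciPy's `scipy.fft.dct`/`idct` with `norm='ortho'` as the transform pair. In a real model the mask would be a trainable parameter updated by backpropagation; here it is just a plain array:

```python
import numpy as np
from scipy.fft import dct, idct

def sam_forward(x, mask):
    """Algorithm 1: filter a univariate series through a spectral mask."""
    sp = dct(x, type=2, norm='ortho')             # step 1: to the spectral domain
    masked_sp = sp * mask                         # step 2: element-wise weighting
    return idct(masked_sp, type=2, norm='ortho')  # step 3: back to the time domain

# An all-ones mask (the initialization in Algorithm 1) leaves the input unchanged.
x = np.sin(np.linspace(0, 4 * np.pi, 64))
filtered = sam_forward(x, np.ones(64))
```

A mask that keeps only bin 0 reduces the output to the mean of the signal, which is the extreme low-pass case of the adaptive filtering described above.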
Figure 2: Three time series (left) and their corresponding spectrum (right).
SAM uses the spectrum of the entire sequence, which cannot reflect the phase
information of the original signal. As shown in Fig. 2, although the three
time series have great differences in the time domain, their spectrum is very
similar. This is because they contain the same frequency components, only the
phase of each frequency component is different. Therefore, SAM has inherent
defects in processing non-stationary signals. However, almost all signals are
non-stationary in real world. In order to retain part of the valuable phase
information, we use a tumbling window to divide the original sequence into $K$
segments of equal length, and apply SAM to each segment. The SAM output of
each segment is concatenated on the channel dimension as output features. The
main algorithm is summarized in Algorithm 2.
Algorithm 2 Segmented SAM (SSAM)
Input: $x^{n}$, number of segments $K$
Output: generated features $x_{new}^{T\times K}$
1:$T\leftarrow n//K$ # initialize the length of each segment
2:$x_{new}\leftarrow zeros(T,K)$ # initialize $x_{new}$
3:for $i=1$ to $K$ do
4: # Get the $i^{th}$ segment: $cur\leftarrow x[(i-1)\cdot T:i\cdot T]$
5: # Apply SAM to the $i^{th}$ segment: $cur\_output\leftarrow SAM(cur)$
6: # Update the output: $x_{new}[:,i]\leftarrow cur\_output$
7:end for
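A NumPy sketch of Algorithm 2, again with `scipy.fft` as the DCT pair and fixed arrays standing in for the trainable per-segment masks:

```python
import numpy as np
from scipy.fft import dct, idct

def ssam_forward(x, masks):
    """Algorithm 2: split x into K equal segments and apply SAM to each.

    masks has shape (K, T) with T = len(x) // K; returns features of shape
    (T, K), i.e. per-segment SAM outputs stacked on the channel dimension.
    """
    K, T = masks.shape
    x_new = np.zeros((T, K))
    for i in range(K):
        cur = x[i * T:(i + 1) * T]                               # i-th segment
        sp = dct(cur, type=2, norm='ortho')                      # SAM step 1
        x_new[:, i] = idct(sp * masks[i], type=2, norm='ortho')  # SAM steps 2-3
    return x_new
```

With all-ones masks each column simply reproduces its segment, so the output is just the input reshaped to (T, K); training then reweights each segment's spectrum independently, preserving coarse phase information across segments.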
Algorithm 3 Searching for the best number of segments
Input: training dataset, validation dataset
Output: $K_{best}$
1:$min\_loss\leftarrow\infty$
2:$K_{best}\leftarrow None$
3:for $K=1$ to $10$ do
4: Train the network for 5 epochs
5: Calculate the validation loss
6: if validation loss $<min\_loss$ then
7:  $K_{best}\leftarrow K$
8:  $min\_loss\leftarrow$ validation loss
9: end if
10:end for
Since $K$ is a hyperparameter that is difficult to determine directly, we
design a heuristic search method, as shown in Algorithm 3. The candidate range
of $K$ is set to 1 to 10, that is, the original data is divided into at most 10
segments. For each candidate $K$, we train the model for 5 epochs and apply the
$K$ corresponding to the minimum validation loss to the final model.
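Algorithm 3 reduces to a simple arg-min loop. In this sketch the callable `val_loss_for(K)` is a hypothetical stand-in for "train the model for 5 epochs with K segments and return its validation loss":

```python
def search_best_k(val_loss_for, k_candidates=range(1, 11)):
    """Algorithm 3: pick the segment count with the lowest validation loss.

    val_loss_for(K) is a user-supplied stand-in for short training runs.
    """
    best_k, min_loss = None, float('inf')
    for k in k_candidates:
        loss = val_loss_for(k)   # train 5 epochs, evaluate on validation set
        if loss < min_loss:
            best_k, min_loss = k, loss
    return best_k
```

The strict `<` comparison means ties are resolved in favor of the smaller (cheaper) K, matching the pseudocode above.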
### 2.4 Architecture
Figure 3: The model architecture of SSAM-CNN.
In order to prevent the strong fitting ability of complex models from covering
up the contribution of SSAM, we present a relatively simple model. We first
define the convolution block, which consists of a one-dimensional convolutional
layer, a batch normalization layer[12], and an activation layer. The basic
convolution block is:
$y=\omega\otimes x+b,\qquad s=BN(y),\qquad h=relu(s)$
(4)
$\otimes$ is the convolution operator. Our model is shown in Figure 3. First,
the raw data is input to SSAM for filtering to generate features more suitable
for network training. The output is then fed into two convolution blocks to
extract features. The kernel sizes are {8,5}, and the channel dimensions are {32,8}. After
the convolution blocks, the features are fed into a global average pooling
layer[13] instead of a fully connected layer, which largely reduces the number
of weights. The final label is produced by a softmax layer.
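A minimal single-channel sketch of the convolution block in equation (4). Real implementations use framework layers (e.g. Conv1d, BatchNorm1d, ReLU); here the batch norm is simplified to a per-feature-map normalization, and `np.convolve` is used for the $\omega\otimes x$ term:

```python
import numpy as np

def conv_block(x, w, b=0.0, eps=1e-5):
    """Equation (4): y = w ⊗ x + b, s = BN(y), h = relu(s), single channel."""
    y = np.convolve(x, w, mode='valid') + b   # 1-D convolution plus bias
    s = (y - y.mean()) / (y.std() + eps)      # simplified batch normalization
    return np.maximum(s, 0.0)                 # ReLU activation
```

The SSAM-CNN stacks two such blocks on the SSAM output before global average pooling; the normalization step is what keeps the activations well-scaled regardless of the mask's learned magnitudes.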
## 3 Experiment
In this section, experiments are conducted on a synthetic dataset and four
widely used real datasets from the UCR archive[14]. It should be noted that our
goal is to verify the effectiveness and universality of the design, so we did
not perform an overly detailed search on the hyperparameters of the model.
### 3.1 Data
Synthetic: To better understand the relationship between model performance and
the characteristics of the data, we define three classes $C_{1}$, $C_{2}$, and
$C_{3}$ and generate 2000 series from each as follows:
$x_{t}=\cos\left(\frac{2\pi t}{100}\right)+\cos\left(\frac{2\pi\cdot 5t}{100}\right)+\omega_{t}\text{ for }x_{t}\in C_{1}$ (5)
$x_{t}=\cos\left(\frac{2\pi t}{100}\right)+\cos\left(\frac{2\pi\cdot 20t}{100}\right)+\omega_{t}\text{ for }x_{t}\in C_{2}$ (6)
$x_{t}=\cos\left(\frac{2\pi t}{100}\right)+\cos\left(\frac{2\pi\cdot 80t}{100}\right)+\omega_{t}\text{ for }x_{t}\in C_{3}$ (7)
where $t=1,\ldots,100$, and $\omega_{t}$ is Gaussian noise with standard
deviation $\sigma=2$.
The dominant frequencies are 1 and 5 for $C_{1}$, 1 and 20 for $C_{2}$, and 1
and 80 for $C_{3}$. Due to the influence of noise, series from the three
classes look similar in the time domain.
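The generators in equations (5)-(7) can be reproduced directly. One caveat worth noting: at integer sampling $t=1,\ldots,100$ an 80-cycle cosine is numerically indistinguishable from a 20-cycle one (aliasing), so the idealized "dominant frequency 80" for $C_{3}$ should be read with the sampling rate in mind:

```python
import numpy as np

def make_series(f2, sigma=2.0, rng=None):
    """Equations (5)-(7): shared 1-cycle tone, class tone f2 ∈ {5, 20, 80}, noise."""
    t = np.arange(1, 101)
    x = np.cos(2 * np.pi * t / 100) + np.cos(2 * np.pi * f2 * t / 100)
    if rng is not None:
        x = x + rng.normal(0.0, sigma, size=100)   # ω_t, Gaussian with σ = sigma
    return x

# 2000 noisy series per class, as in the paper's setup.
rng = np.random.default_rng(0)
data = {f2: np.stack([make_series(f2, rng=rng) for _ in range(2000)])
        for f2 in (5, 20, 80)}
```

Without noise, the magnitude spectrum of a $C_{1}$ or $C_{2}$ series peaks exactly at bins 1 and $f2$, which is what the learned mask in Figure 5 is expected to recover.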
CBF: This is a shape classification dataset that contains three classes:
Cylinder, Bell, and Funnel.
Control Charts (CC): This dataset is derived from control charts and contains
six different control modes: normal, cyclic, increasing trend, decreasing
trend, upward shift, and downward shift.
Face: This dataset originates from a face recognition problem. It consists of
four different individuals, making different facial expressions. The task is
to identify the person based on the head profile, which is represented as
“pseudo time series”.
Trace: This dataset records instrument data from a nuclear power plant for
fault detection.
The details of each dataset are summarized in Table 1.
Table 1: The characteristics of each dataset

Datasets | Classes | Instances | Time Series length
---|---|---|---
Synthetic | 3 | 6000 | 100
CBF | 3 | 310 | 128
CC | 6 | 600 | 60
Face | 4 | 1120 | 350
Trace | 4 | 2000 | 275
### 3.2 Experimental settings
We first normalize the data and then divide it into training, validation, and
test sets at a ratio of 6:2:2. Algorithm 3 is then applied to search for the
optimal number of segments. In the training stage, we record the validation
loss in each epoch and select the model with the minimum validation loss as the
final model. Some of the hyperparameter configurations are shown in Table 2.
Table 2: Hyperparameters of the SSAM-CNN model.

learning rate | learning algorithm | regularization coefficient | epochs | batch size
---|---|---|---|---
0.01 | SGD | 0.01 | 500 | 128
We select the widely used traditional TSC algorithm DTW-1NN[15] and the
representative deep learning based TSC models FCN[5] and MCNN[6] for
comparison. To validate the benefit of SSAM, we also test the base CNN model
without the SSAM layer.
We visualize the loss curves to discuss how SSAM accelerates network
convergence. By visualizing the learned mask and the SSAM output, we discuss
the effectiveness of SSAM. We also validate that L1 regularization makes the
model generate a sparser mask, which plays a role in frequency component
selection, and that SSAM improves the robustness of the model to noise.
### 3.3 Results
Figure 4: The validation accuracy curve and loss curve on the Synthetic dataset.
Compared to the base model, introducing SSAM makes the network converge faster
and yields a smoother loss curve (see Figure 4). This indicates that SSAM can
map the original data into a feature representation that is more conducive to
network training.
Figure 5: Learned mask and filter output of the Synthetic dataset.
The learned mask is sparse and has larger weights at three frequencies (see
Figure 5), which exactly correspond to the frequencies of the three target
classes. Therefore, SSAM can indeed assign appropriate weights to each
frequency component, highlighting important components and attenuating
unimportant ones. Due to the influence of noise, the original time series are
very similar; the more discriminative features generated by SSAM make the
network easier to train.
Figure 6: Classification accuracy under different intensities of white noise.
We add white noise to the original data to test the noise immunity of the
model. The accuracy of the base model decreases rapidly as the noise intensity
increases, while the accuracy of SSAM-CNN hardly decreases (see Figure 6).
This is because our model is only sensitive to a few frequency components, so
it can shield most bands of white noise. Therefore, our model has good
robustness.
Table 3 shows the test accuracy of all algorithms on each dataset. The
hyperparameter $K$ obtained by Algorithm 3 is marked in parentheses. From
Table 3, the accuracy of SSAM-CNN is higher than that of all other algorithms
on four datasets, and only slightly lower than FCN on the CBF dataset. The
performance of SSAM-CNN exceeds the base model on all datasets, indicating that
introducing SSAM improves the classification accuracy of the model. The number
of segments $K$ obtained by Algorithm 3 is distributed between 1 and 3: $K$ is
1 on the CBF, Face, and Synthetic datasets, 2 on the Trace dataset, and 3 on
the CC dataset.
Table 3: Comparison results of the algorithms. The best result on each dataset is bolded. $K$ is marked in parentheses.

Datasets | DTW-1NN | FCN | MCNN | base | SSAM-CNN (K)
---|---|---|---|---|---
CBF | 91.30 | **94.60** | 80.43 | 91.30 | 93.48 (1)
CC | 90.33 | 87.78 | **93.33** | 92.22 | **93.33** (3)
Face | 82.14 | 91.07 | 89.88 | 83.57 | **94.64** (1)
Trace | 68.33 | 93.00 | 92.33 | 86.31 | **93.73** (2)
Synthetic | 63.20 | 88.30 | 93.40 | 93.09 | **99.94** (1)
Figure 7: Learned mask and filter output of the Face dataset.
Figure 8: Learned mask and filter output of the CBF dataset.
Figures 7 and 8 show the learned masks and filtering results on the real
datasets Face and CBF. As can be seen from the figures, SSAM assigns
appropriate weights to each frequency component according to the
characteristics of the data to generate discriminative features. The learned
masks show that the model pays more attention to the low-frequency part of the
original data, which is in line with common sense.
Figure 9: The test accuracy of our model in different segmentation.
For the CC and Trace datasets, the $K$ obtained by Algorithm 3 is not 1. This
is because the frequency content of the original data at different phases may
be associated with the label; if the spectrum of the whole sequence is used
directly, this phase information is lost. Algorithm 3 therefore gives a $K$
that is not 1, and finally achieves a higher accuracy. Algorithm 3 uses a
heuristic strategy to search for $K$: for each candidate value, we train the
model for only 5 epochs and then decide whether to pick the current candidate
according to the validation loss. To verify the effectiveness of this
strategy, we apply each $K$ to the model and obtain the test accuracy. As
shown in Fig. 9, the model achieves the highest accuracy on the Trace dataset
when $K$ is 2, and on the CC dataset when $K$ is 3. This matches the result
obtained by Algorithm 3, indicating that the heuristic algorithm can accurately
find an appropriate segmentation.
## 4 Conclusion
The main contribution of this paper is to propose an attention mechanism that
acts on the spectrum. In order to avoid the complete loss of time-domain
information, we also propose a segmented spectral attention mechanism, which
uses a tumbling window to segment the original sequence and applies SAM to
each segment to preserve the time-domain information. We further propose a
heuristic algorithm to search for the best number of segments. The
experimental results show that the proposed SSAM is able to assign appropriate
weights to each frequency component to realize adaptive filtering, which makes
the model converge faster and more smoothly and makes it more robust to noise.
The proposed heuristic search algorithm can indeed find the most suitable
segmentation and improve the classification accuracy.
## References
* [1] Bagnall A, Lines J, Bostrom A, Large J, Keogh E, The great time series classification bake off: A review and experimental evaluation of recent algorithmic advances, Data Mining and Knowledge Discovery, 31(3): 606-660, 2017.
* [2] Simonyan K, Zisserman A, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556, 2014.
* [3] Sennrich R, Haddow B, Birch A, Neural machine translation of rare words with subword units, in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016: 1715-1725.
* [4] Guo H, Tang R, Ye Y, Li Z, He X, Deepfm: A factorization-machine based neural network for ctr prediction, in Proceedings of the 26th International Joint Conference on Artificial Intelligence, 2017: 1725-1731.
* [5] Wang Z, Yan W, Oates T, Time series classification from scratch with deep neural networks: A strong baseline, in 2017 International joint conference on neural networks (IJCNN), 2017: 1578-1585.
* [6] Cui Z, Chen W, Chen Y, Multi-scale convolutional neural networks for time series classification, arXiv preprint arXiv:1603.06995, 2016.
* [7] Fawaz HI, Lucas B, Forestier G, Pelletier C, Schmidt DF, Weber J, Webb GI, Idoumghar L, Muller P-A, Petitjean F, Inceptiontime: Finding alexnet for time series classification, Data Mining and Knowledge Discovery, 34(6): 1936-1962, 2020.
* [8] Diab A, Falou O, Hassan M, Karlsson B, Marque C, Effect of filtering on the classification rate of nonlinear analysis methods applied to uterine emg signals, in 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2015: 4182-4185.
* [9] van Driel J, Olivers CN, Fahrenfort JJ, High-pass filtering artifacts in multivariate classification of neural time series data, BioRxiv: 530220, 2020.
* [10] Shao Y, Lunetta RS, Wheeler B, Iiames JS, Campbell JB, An evaluation of time-series smoothing algorithms for land-cover classifications using modis-ndvi multi-temporal data, Remote Sensing of Environment, 174: 258-265, 2016.
* [11] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I, Attention is all you need, in Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 6000-6010.
* [12] Ioffe S, Szegedy C, Batch normalization: Accelerating deep network training by reducing internal covariate shift, in International conference on machine learning, 2015: 448-456.
* [13] Lin M, Chen Q, Yan S, Network in network, arXiv preprint arXiv:1312.4400, 2013.
* [14] Dau HA, Bagnall A, Kamgar K, Yeh C-CM, Zhu Y, Gharghabi S, Ratanamahatana CA, Keogh E, The ucr time series archive, IEEE/CAA Journal of Automatica Sinica, 6(6): 1293-1305, 2019.
* [15] Lines J, Bagnall A, Time series classification with ensembles of elastic distance measures, Data Mining and Knowledge Discovery, 29(3): 565-592, 2014.
# Online Continual Learning in Image Classification:
An Empirical Survey
Zheda Mai<EMAIL_ADDRESS>Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim, Scott Sanner
Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, ON M5S3G8, Canada (zheda.mai, ruiwen.li<EMAIL_ADDRESS>(jhjeong,<EMAIL_ADDRESS>
LG AI Research, 128 Yeoui-daero, Yeongdeungpo-gu, Seoul, South Korea<EMAIL_ADDRESS>
###### Abstract
Online continual learning for image classification studies the problem of
learning to classify images from an online stream of data and tasks, where
tasks may include different classes (class incremental) or data
nonstationarity (domain incremental). One of the key challenges of continual
learning is to avoid catastrophic forgetting (CF), i.e., forgetting old tasks
in the presence of more recent tasks. Over the past few years, a large range
of methods and tricks have been introduced to address the continual learning
problem, but many have not been fairly and systematically compared under a
variety of realistic and practical settings.
To better understand the relative advantages of various approaches and the
settings where they work best, this survey aims to (1) compare state-of-the-
art methods such as Maximally Interfered Retrieval (MIR), iCaRL, and GDumb (a
very strong baseline) and determine which works best at different memory and
data settings as well as better understand the key source of CF; (2) determine
if the best online class incremental methods are also competitive in domain
incremental setting; and (3) evaluate the performance of 7 simple but
effective tricks, such as the ”review” trick and the nearest class mean (NCM)
classifier, to assess their relative impacts. Regarding (1), we observe that
the earlier proposed iCaRL remains competitive when the memory buffer is
small; GDumb outperforms many recently proposed methods on medium-sized
datasets, and MIR performs best on a larger-scale dataset. For (2), we note that GDumb performs quite
poorly while MIR — already competitive for (1) — is also strongly competitive
in this very different (but important) incremental learning setting. Overall,
this allows us to conclude that MIR is overall a strong and versatile online
continual learning method across a wide variety of settings. Finally for (3),
we find that all tricks are beneficial, and when augmented with the ”review”
trick and the NCM classifier, MIR produces performance levels that bring online
continual learning much closer to its ultimate goal of matching offline
training.
###### keywords:
Incremental Learning , Continual Learning , Lifelong Learning , Catastrophic
Forgetting , Online Learning
††journal: Neurocomputing
## 1 Introduction
With the ubiquity of personal smart devices and image-related applications, a
massive amount of image data is generated daily. While image-based deep neural
networks have demonstrated exceptional advances in recent years [1],
incrementally updating a neural network with a nonstationary data stream
results in _catastrophic forgetting_ (CF) [2, 3], the inability of a network
to perform well on previously seen data after updating with recent data. For
this reason, conventional deep learning tends to focus on offline training,
where each mini-batch is sampled i.i.d from a static dataset with multiple
epochs over the training data. However, to accommodate changes in the data
distribution, such a training scheme requires entirely retraining the network
on the new dataset, which is inefficient and sometimes infeasible when
previous data are not available due to storage limits or privacy issues.
_Continual Learning_ (CL) studies the problem of learning from a non-i.i.d
stream of data, with the goal of preserving and extending the acquired
knowledge. A more complex and general viewpoint of CL is the stability-
plasticity dilemma [4, 5] where stability refers to the ability to preserve
past knowledge and plasticity denotes the fast adaptation of new knowledge.
Following this viewpoint, CL seeks to strike a balance between learning
stability and plasticity. Since CL is often used interchangeably with lifelong
learning [6, 7] and incremental learning [8, 9], for simplicity, we will use
CL to refer to all concepts mentioned above.
Most early CL approaches consider the task incremental setting [10]. In this
setting, new data arrives one task at a time, and the model can utilize task-
IDs during both training and inference time [11, 12, 13]. Hence, a common
practice in this context is to assign a separate output layer (head) for each
task; then the model just needs to classify labels within a task, which is
known as the multi-head evaluation [9]. However, the multi-head evaluation
requires additional supervisory signals at inference — namely the task-ID — to
select the corresponding head, which obviates its use when the task label is
unavailable. Moreover, this setting requires storing all the current task data
in memory and is thus ill-suited to edge devices with limited resources.
In this work, we focus on two realistic but challenging settings, known as
_Online Class Incremental_ (OCI) and _Online Domain Incremental_ (ODI). In
these settings, a model needs to learn from an online stream of data, with
each sample being seen only once. The incoming data either include new classes
(class incremental) or data nonstationarity (domain incremental). In contrast
to the task incremental setting, OCI and ODI have two main differences. (1)
Single-head evaluation is adopted: the model needs to classify all labels
without task-IDs [9]. (2) The model is required to process data online, which
reduces the adaptation time and operational memory usage. These settings are
based on the practical CL desiderata proposed recently [14, 15, 16] and have
received much attention in the past year [17, 18, 19].
To keep this paper focused, we only consider the supervised classification
problem in computer vision. Although CL is also studied in reinforcement
learning [20, 21, 22] and more recently in unsupervised learning [23], the
image classification problem is still the main focus for many CL researchers.
Over the past few years, a broad range of methods and tricks have been
introduced to address CL problems, but many have not been fairly and
systematically compared under a variety of settings. To better understand the
relative advantages of different approaches and the settings where they work
best, this survey aims to do the following:
* 1.
We fairly compare state-of-the-art methods in OCI and determine which works
best at different memory and data settings. We observe earlier proposed iCaRL
[8] remains competitive when the memory buffer is small; GDumb [24] is a
strong baseline that outperforms many recently proposed methods in medium-size
datasets, while MIR [17] performs the best in a larger-scale dataset. Also, we
experimentally and theoretically confirm that a key cause of CF is due to the
recency learning bias towards new classes in the last fully connected layer
owing to the imbalance between previous data and new data.
* 2.
We determine if the best OCI methods are also competitive in the ODI setting.
We note that GDumb performs quite poorly in ODI, whereas MIR — already
competitive in OCI — is still strongly competitive in ODI. Overall, these
results allow us to conclude that MIR is a strong and versatile online CL
method across a wide variety of settings.
* 3.
We evaluate the performance of 7 simple but effective tricks to assess their
relative impacts. We find that all tricks are beneficial and when augmented
with the ”review” trick [25] and a nearest class mean (NCM) classifier [8],
MIR produces performance levels that bring online CL much closer to its
ultimate goal of matching offline training.
The remainder of this paper is organized as follows. Section 2 discusses the
existing surveys in the CL community, and Section 3 formally defines the
problem, settings and evaluation metrics. In Section 4, we explain the online
continual hyperparameter tuning method we use. Then, Section 5 provides an
overview of state-of-the-art CL techniques, while Section 6 gives a detailed
description of methods that we compared in experiments. We discuss how class
imbalance results in catastrophic forgetting and introduce CL tricks that can
effectively alleviate the forgetting in Section 7. We outline our experimental
setup, comparative evaluation and key findings in Section 8. Finally, Section
9 discusses recent trends and emerging directions in CL, and we conclude in
Section 10.
## 2 Related Work
With the surge in the popularity of CL, there are multiple reviews and surveys
covering the advances of CL. The first group of surveys are not empirical.
[26] discusses the biological perspective of CL and summarizes how various
approaches alleviate catastrophic forgetting. [14] formalizes the CL problem
and outlines the existing benchmarks, metrics, approaches and evaluation
methods with the emphasis on robotics applications. They also recommend some
desiderata and guidelines for future CL research. [27] emphasizes the
importance of online CL and discusses recent advances in this setting.
Although these three surveys descriptively review the recent development of CL
and provide practical guidelines, they do not perform any empirical comparison
between methods.
In contrast, the second group of surveys on CL are empirical. For example,
[28, 10] evaluate multiple CL methods on three CL scenarios: task incremental,
class incremental and domain incremental. [16] empirically analyzes and
criticizes some common experimental settings, including the multi-head
evaluation [9] with an exclusive output layer for each task and the use of
permuted-type datasets (e.g., permuted MNIST). However, the analysis in these
three works is limited to small datasets such as MNIST and Fashion-MNIST.
Another two empirical studies on the performance of CL include [29, 30], but
only a small number of CL methods are compared. The first extensive
comparative CL survey with empirical analysis is presented in [15], which
focuses on the task incremental setting with the multi-head evaluation. In
contrast, our work addresses more practical and realistic settings, namely
Online Class Incremental (OCI) and Online Domain Incremental (ODI), which
require the model to learn online without access to task-ID at training and
inference time.
## 3 Online Class/Domain Incremental Learning
### 3.1 Problem definition and evaluation settings
We consider the supervised image classification problem with an online
(potentially infinite) non-i.i.d stream of data, following the recent CL
literature [17, 18, 27, 14]. Formally, we define a data stream of unknown
distributions $\mathcal{D}=\\{D_{1},\ldots,D_{N}\\}$ over $X\times Y$, where
$X$ and $Y$ are input and output random variables respectively, and a neural
network classifier parameterized by $\theta$, $f\colon X\mapsto\mathbb{R}^{C}$, where $C$ is the number of classes observed
so far as in [14]. At time $t$, a CL algorithm $A^{CL}$ receives a mini-batch
of samples ($\mathit{x_{t}^{i}}$, $\mathit{y_{t}^{i}}$) from the current
distribution $D_{i}$, and the algorithm only sees this mini-batch once.
An algorithm $A^{CL}$ is defined with the following signature:
$\displaystyle A_{t}^{CL}\colon\ \langle f_{t-1},(\mathit{x_{t}},\mathit{y_{t}}),M_{t-1}\rangle\ \rightarrow\ \langle f_{t},M_{t}\rangle$ (1)
where:
* 1.
$f_{t}$ is the classifier at time step $t$.
* 2.
($\mathit{x_{t}^{i}}$, $\mathit{y_{t}^{i}}$) is a mini-batch received at time
$t$ from $D_{i}$ which contains
$\\{(\mathit{x_{tj}^{i}},\mathit{y_{tj}^{i}})\mid j\in[1,\ldots,b]\\}$ where $b$ is the mini-batch size.
* 3.
$M_{t}$ is an external memory that can be used to store a subset of the
training samples or other useful data (e.g., the classifier from the previous
time step as in LwF [12]). Note that the online setting does not limit the
usage of the samples in $M$, and therefore, the classifier $f_{t}$ can use
them as many times as it wants.
Note that we assume, for simplicity, a locally i.i.d stream of data where each
task distribution $D_{i}$ is stationary as in [13, 31]; however, this
framework can also accommodate the setting in which samples are drawn
non-i.i.d from $D_{i}$ as in [32, 33], where concept drift may occur within
$D_{i}$.
The goal of $A^{CL}$ is to train the classifier $f$ to continually learn new
samples from the data stream without interfering with the performance of
previously observed samples. Note that unless the current samples are stored
in $M_{t}$, $A^{CL}$ will not have access to these samples in the future.
Formally, at time step $\tau$, $A^{CL}$ tries to minimize the loss incurred by
all the previously seen samples with only access to the current mini-batch and
data from $M_{\tau-1}$:
$\displaystyle\min_{\theta}\sum_{t=1}^{\tau}\mathbb{E}_{\left(\mathit{x}_{t},\mathit{y}_{t}\right)}\left[\ell\left(f_{\tau}\left(\mathit{x}_{t};\theta\right),\mathit{y}_{t}\right)\right]$
(2)
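To make the signature in Eq. (1) and the objective in Eq. (2) concrete, the following minimal Python sketch shows a single online step of a CL algorithm. All names here (`cl_step`, the counter "classifier", the naive append-only memory rule) are our illustrative assumptions, not the implementation used by any surveyed method:

```python
from typing import Callable, List, Tuple

# A sample is (x, y); a mini-batch is a list of samples (illustrative types).
Sample = Tuple[List[float], int]
Batch = List[Sample]

def cl_step(f, batch: Batch, memory: Batch, update_rule: Callable):
    """One application of A^CL (Eq. 1): consume the incoming mini-batch
    exactly once and return the updated classifier f_t and memory M_t."""
    f_new = update_rule(f, batch, memory)  # may reuse memory samples freely
    memory_new = memory + batch            # naive MemoryUpdate, for illustration
    return f_new, memory_new

# Toy usage: the "classifier" is just a counter of samples seen so far.
f0, m0 = 0, []
f1, m1 = cl_step(f0, [([0.1], 0), ([0.2], 1)], m0,
                 update_rule=lambda f, b, m: f + len(b))
```

Note that the online constraint applies only to the incoming mini-batch; the samples stored in memory may be revisited, which is why `update_rule` also receives `memory`.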
Scenario | $P(X_{i-1})\neq P(X_{i})$ | $P(Y_{i-1})\neq P(Y_{i})$ | $\\{Y_{i-1}\\}\neq\\{Y_{i}\\}$ | Task-ID | Online
---|---|---|---|---|---
Task Incremental | ✓ | ✓ | ✓ | Train & Test | No
Class Incremental | ✓ | ✓ | | No | Optional
Domain Incremental | ✓ | | | No | Optional
Table 1: Three continual learning scenarios based on the difference between $D_{i-1}$ and $D_{i}$, following [28]. $P(X)$ is the input data distribution; $P(Y)$ is the target label distribution; $\\{Y_{i-1}\\}\neq\\{Y_{i}\\}$ denotes that the output spaces are disjoint and separated by task-ID.
Task | Task Incremental | Class Incremental | Domain Incremental
---|---|---|---
$D_{i-1}$: y | Bird, Dog | Bird, Dog | Bird, Dog
$D_{i-1}$: task-ID (test) | $i-1$ | Unknown | Unknown
$D_{i}$: y | Ship, Guitar | Ship, Guitar | Bird, Dog
$D_{i}$: task-ID (test) | $i$ | Unknown | Unknown
Table 2: Examples of the three CL scenarios. (x, y, task-ID) represents (input
image, target label, task identity). The main distinction between task
incremental and class incremental is the availability of the task-ID. The main
difference between class incremental and domain incremental is that, in class
incremental, a new task contains completely new classes, whereas in domain
incremental, a new task consists of new instances, with nonstationarity (e.g.,
noise), of all the seen classes.
Recently, [28, 10] have categorized the CL problem into three scenarios based
on the difference between $D_{i-1}$ and $D_{i}$. Table 1 summarizes the
differences between the three scenarios, i.e., task incremental, class
incremental and domain incremental. For task incremental, the output spaces
are separated by task-IDs and are disjoint between $D_{i-1}$ and $D_{i}$. We
denote this setting as $\\{Y_{i-1}\\}\neq\\{Y_{i}\\}$, which in turn leads to
$P(Y_{i-1})\neq P(Y_{i})$. In this setting, task-IDs are available during both
train and test times. For class incremental, mutually exclusive sets of
classes comprise each data distribution $D_{i}$, meaning that there is no
duplicated class among different task distributions. Thus $P(Y_{i-1})\neq
P(Y_{i})$, but the output space is the same for all distributions since this
setting adopts the single-head evaluation where the model needs to classify
all labels without a task-ID. Domain incremental represents the setting where
input distributions are different, while the output spaces and distribution
are the same. Note that task-IDs are not available in either class or domain
incremental. Table 2 shows examples of these three scenarios. Following this
categorization, the settings we focus on in this work are known as Online Class
Incremental (OCI) and Online Domain Incremental (ODI).
### 3.2 Metrics
Besides measuring the final accuracy across tasks, it is also critical to
assess how fast a model learns, how much the model forgets and how well the
model transfers knowledge from one task to another. To this end, we use five
standard metrics in the CL literature to measure performance: (1) the average
accuracy for overall performance [31]; (2) the average forgetting to measure
how much of the acquired knowledge the model has forgotten [9]; (3) the
forward transfer and (4) the backward transfer to assess the ability for
knowledge transfer [14, 13]; (5) the total running time, including training
and testing times.
Formally, we define $a_{i,j}$ as the accuracy evaluated on the held-out test
set of task $j$ after training the network from task 1 through to $i$, and we
assume there are $T$ tasks in total.
Average Accuracy can be defined as Eq. (3). When $i=T$, $A_{T}$ represents the
average accuracy by the end of training with the whole data sequence (see
example in Table 3).
$\displaystyle\text{Average Accuracy}(A_{i})=\frac{1}{i}\sum_{j=1}^{i}a_{i,j}$
(3)
Average Forgetting at task $i$ is defined as Eq. (4). $f_{i,j}$ represents how
much the model has forgotten about task $j$ after being trained on task $i$.
Specifically, $\max\limits_{l\in\\{1,\cdots,k-1\\}}(a_{l,j})$ denotes the best
test accuracy the model has ever achieved on task $j$ before learning task $k$,
and $a_{k,j}$ is the test accuracy on task $j$ after learning task $k$.
$\displaystyle\text{Average
Forgetting}(F_{i})=\frac{1}{i-1}\sum_{j=1}^{i-1}f_{i,j}$
$\displaystyle\text{where~{}}f_{k,j}=\max_{l\in\\{1,\cdots,k-1\\}}(a_{l,j})-a_{k,j},\forall
j<k$ (4)
a | $te_{1}$ | $te_{2}$ | $\dots$ | $te_{T-1}$ | $te_{T}$
---|---|---|---|---|---
$tr_{1}$ | $a_{1,1}$ | $a_{1,2}$ | $\dots$ | $a_{1,T-1}$ | $a_{1,T}$
$tr_{2}$ | $a_{2,1}$ | $a_{2,2}$ | $\dots$ | $a_{2,T-1}$ | $a_{2,T}$
$\dots$ | $\dots$ | $\dots$ | $\dots$ | $\dots$ | $\dots$
$tr_{T-1}$ | $a_{T-1,1}$ | $a_{T-1,2}$ | $\dots$ | $a_{T-1,T-1}$ | $a_{T-1,T}$
$tr_{T}$ | $a_{T,1}$ | $a_{T,2}$ | $\dots$ | $a_{T,T-1}$ | $a_{T,T}$
Table 3: Accuracy matrix example following the notations in [14]. $tr_{i}$ and
$te_{i}$ denote training and test set of task $i$. $A_{T}$ is the average of
accuracies in the box. $BWT^{+}$ is the average of accuracies in purple and
$FWT$ is the average of accuracies in green.
Positive Backward Transfer ($BWT^{+}$) measures the positive influence of
learning a new task on preceding tasks’ performance (see example in Table 3).
$\displaystyle BWT^{+}=\max\left(\frac{\sum_{i=2}^{T}\sum_{j=1}^{i-1}\left(a_{i,j}-a_{j,j}\right)}{\frac{T(T-1)}{2}},\,0\right)$
(5)
Forward Transfer ($FWT$) measures the positive influence of learning a task on
future tasks’ performance (see example in Table 3).
$\displaystyle FWT=\frac{\sum_{j=2}^{T}\sum_{i=1}^{j-1}a_{i,j}}{\frac{T(T-1)}{2}}$ (6)
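All four metrics can be computed directly from the accuracy matrix of Table 3. Below is a small sketch (our code, not the authors') implementing Eqs. (3)-(6) with NumPy on a toy $2\times 2$ matrix:

```python
import numpy as np

def cl_metrics(a):
    """a[i, j] = accuracy on task j's test set after training through task i
    (0-indexed, shape T x T). Returns (A_T, F_T, BWT+, FWT) per Eqs. (3)-(6)."""
    T = a.shape[0]
    avg_acc = a[T - 1].mean()                                  # Eq. (3), i = T
    forgetting = np.mean([a[:T - 1, j].max() - a[T - 1, j]     # Eq. (4)
                          for j in range(T - 1)])
    pairs = T * (T - 1) / 2
    bwt = sum(a[i, j] - a[j, j] for i in range(1, T) for j in range(i))
    bwt_plus = max(bwt / pairs, 0.0)                           # Eq. (5)
    fwt = sum(a[i, j] for j in range(1, T) for i in range(j)) / pairs  # Eq. (6)
    return avg_acc, float(forgetting), bwt_plus, fwt

a = np.array([[0.9, 0.1],
              [0.7, 0.8]])
acc, forget, bwt, fwt = cl_metrics(a)
```

On this toy matrix the model ends at 75% average accuracy, has forgotten 0.2 of task 1's peak accuracy, gains no positive backward transfer, and shows a small forward transfer of 0.1.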
## 4 Online Continual Hyperparameter Tuning
In practice, most CL methods have to rely on well-selected hyperparameters to
effectively balance the trade-off between stability and plasticity.
Hyperparameter tuning, however, is already a challenging task in learning
conventional deep neural network models, which becomes even more complicated
in the online CL setting. Meanwhile, a large volume of CL works still tune
hyperparameters in an offline manner by sweeping over the whole data sequence
and selecting the best hyperparameter set with grid-search on a validation
set. After that, metrics are reported on the test set with the selected set of
hyperparameters. This tuning protocol violates the online CL setting where a
classifier can only make a single pass over the data, which implies that the
reported results in the CL literature may be too ideal and cannot be
reproduced in real online CL applications.
Recently, several hyperparameter tuning protocols that are useful for CL
settings have been proposed. [29] introduces a tuning protocol for two tasks
($D_{1}$ and $D_{2}$). Firstly, they determine the best combination of model
hyperparameters using $D_{1}$. Then, they tune the learning rate to be used
for learning $D_{2}$ such that the test accuracy on $D_{2}$ is maximized. [15]
proposes another protocol that dynamically determines the stability-plasticity
trade-off. When the model receives a new task, the hyperparameters are set to
ensure minimal forgetting of previous tasks. If a predefined threshold for the
current task’s performance is not met, the hyperparameters are adjusted until
achieving the threshold. While the aforementioned protocols assume the data of
a new task is available all at once, [31] presents a protocol targeting the
online CL. Specifically, a data stream is divided into two sub-streams —
$D^{CV}$, the stream for cross-validation and $D^{EV}$, the stream for final
training and evaluation. Multiple passes over $D^{CV}$ are allowed for tuning,
but a CL algorithm can only perform a single pass over $D^{EV}$ for training.
The metrics are reported on the test sets of $D^{EV}$. Since the setting of
our interest in this work is the online CL, we adopt the protocol from [31]
for our experiments. We summarize the protocol in Algorithm 1.
Input: Hyperparameter set $\mathcal{P}$
Require: $D^{CV}$ data stream for tuning, $D^{EV}$ data stream for learning & testing
Require: $f$ classifier, $A^{CL}$ CL algorithm
for $p\in\mathcal{P}$ do    $\triangleright$ Multiple passes over $D^{CV}$ for tuning
    for $i\in\\{1,\dots,T^{CV}\\}$ do    $\triangleright$ Single pass over $D^{CV}$ with $p$
        for $B_{n}\sim D^{CV}_{i}$ do
            $A^{CL}(f,B_{n},p)$
        $\text{Evaluate}(f,D^{CV}_{test})$    $\triangleright$ Store performance on test set of $D^{CV}$
Best hyperparameters $\boldsymbol{p^{*}}\leftarrow$ based on Average Accuracy on $D^{CV}_{test}$, see Eq. (3)
for $i\in\\{1,\dots,T^{EV}\\}$ do    $\triangleright$ Learning over $D^{EV}$
    for $B_{n}\sim D^{EV}_{i}$ do    $\triangleright$ Single pass over $D^{EV}$ with $\boldsymbol{p^{*}}$
        $A^{CL}(f,B_{n},\boldsymbol{p^{*}})$
$\text{Evaluate}(f,D^{EV}_{test})$
Report performance on $D^{EV}_{test}$
Algorithm 1 Hyperparameter Tuning Protocol
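The protocol of [31] can be sketched in a few lines of Python. The function names, and the simplification of evaluating each candidate once at the end of its pass rather than after every task, are our assumptions:

```python
def tune_then_train(hyper_grid, cv_stream, ev_stream, train_step, evaluate):
    """Sketch of the protocol from [31]: grid-search over the cross-validation
    sub-stream D^CV (one single pass per candidate), then one single pass over
    D^EV with the best hyperparameters p*."""
    best_p, best_acc = None, float("-inf")
    for p in hyper_grid:                    # multiple passes over D^CV overall
        state = None
        for batch in cv_stream:             # single pass for this candidate p
            state = train_step(state, batch, p)
        acc = evaluate(state)               # performance on D^CV test set
        if acc > best_acc:
            best_p, best_acc = p, acc
    state = None
    for batch in ev_stream:                 # single pass over D^EV with p*
        state = train_step(state, batch, best_p)
    return best_p, evaluate(state)          # report performance on D^EV test

# Toy usage: "training" accumulates p * batch; the best p drives the sum to 3.
best_p, final_acc = tune_then_train(
    [0.1, 1.0], [1, 2], [1, 2],
    train_step=lambda s, b, p: (s or 0) + p * b,
    evaluate=lambda s: -abs(s - 3))
```

The key property is that each candidate makes exactly one pass over $D^{CV}$ and the final model makes exactly one pass over $D^{EV}$, so the reported metrics respect the online constraint.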
## 5 Overview of Continual Learning Techniques
Although a broad range of methods have been introduced in the past few years
to address the CL problem, the assumptions that each method makes are not
consistent due to the plethora of settings in CL. In particular, some methods
have a better ability to generalize to different CL settings because they
require fewer supervisory signals during both training and inference times. For
example, ER [34] was designed for the task incremental setting, but the method
can easily be used in all the other CL settings since it does not need any
additional supervisory signals. In this sense, a systematic summary of
supervisory signals that different methods demand will help understand the
generalization capacity and limitations of the methods. Furthermore, the
summary will facilitate fair comparison in the literature. On the other hand,
CL methods have typically been taxonomized into three major categories based
on the techniques they use: regularization-based, memory-based and parameter-
isolation-based [15, 26]. A clear trend in recent works, however, is to
simultaneously apply multiple techniques in order to tackle the CL problem. In
this section, we comprehensively summarize recently proposed methods based on
techniques they use and supervisory signals required at training and inference
times (see Table 4).
#### Supervisory Signals
The most important supervisory signal is the task-ID. When a task-ID is
available, a training or testing sample is given as $(x,y,t)$ instead of
$(x,y)$ where $t$ is the task-ID. For the task incremental setting, task-IDs
are available at both training and inference times. In regards to the class
incremental setting, a task-ID is not available at the inference time but can
be inferred during training as each task has disjoint class labels. On the
other hand, in the domain incremental setting, a task-ID is not available at
all times. Other supervisory signals include a natural language description of
a task or a matrix specifying the attribute values of the objects to be
recognized in the task [31].
Moreover, a method is online-able if it does not need to revisit samples it
has processed before. Hence, to be online-able, the model needs to learn
efficiently from one pass of the data. For example, the herding-based memory
update strategy proposed in iCaRL [8] needs all the samples from a class to
select a representative set; therefore, methods using this strategy are not
online-able.
#### Techniques
Regularization techniques impose constraints on the update of network
parameters to mitigate catastrophic forgetting. This is done by either
incorporating additional penalty terms into the loss function [35, 36, 37, 38]
or modifying the gradient of parameters during optimization [13, 31, 39].
Knowledge Distillation (KD) [40] is an effective way for knowledge transfer
between networks. KD has been widely adopted in CL methods [41, 12, 42, 43],
and it is often considered as one of the regularization techniques. Due to the
prevalence of KD in CL methods, we list it as a separate technique in Table 4.
One shortcoming of regularization-based techniques including KD is that it is
difficult to strike a balance between the regularization and the current
learning when learning from a long stream of data.
Memory-based techniques store a subset of samples from previous tasks for
either replay while training on new task [34, 17, 18] or regularization
purpose [44, 13, 45]. These methods become infeasible when storing raw samples
is not possible due to privacy or storage concerns.
Instead of saving raw samples, an alternative is Generative Replay which
trains a deep generative model such as GAN [46] to generate pseudo-data that
mimic past data for replay [47, 48, 49]. The main disadvantages of generative
replay are that it takes a long time to train such generative models and that it
is not a viable option for more complex datasets given the current state of
deep generative models [50, 17].
Parameter-isolation (PI)-based techniques bypass interference by allocating
different parameters to each task. PI can be subdivided into Fixed
Architecture (FA) that only activates relevant parameters for each task
without modifying the architecture [51, 52, 53], and Dynamic Architecture (DA)
that adds new parameters for new tasks while keeping old parameters unchanged
[54, 55, 56]. Most previous works require task-IDs at inference time, and a
few recent methods have been introduced to predict without a task-ID [19, 57].
For a more detailed discussion of these techniques, we refer readers to the
recent CL surveys [15, 26, 14].
| Settings | Techniques
---|---|---
Methods | t-ID free(test) | t-ID free(train) | Online-able | Reg | Mem | KD | PI(FA) | PI(DA) | Generative
MIR[17], GSS[18], ER[34], CBO[58] | ✓ | ✓ | ✓ | | ✓ | | | |
GDumb[24], DER[59], MER[60]
CBRS[61], GMED[62], PRS[63]
La-MAML[64], MEFA[65]
MERLIN[66] | | ✓ | | ✓ | |
CN-DPM[19], TreeCNN[67] | | | | | ✓ |
A-GEM[31], GEM[13], VCL [44] | ✓ | ✓ | | | |
WA[68], BiC[42], LUCIR[69] | | ✓ | ✓ | | |
IL2M[70], ILO[71] | | | |
LwF-MC[8], LwM[72], DMC[73] | | | ✓ | | |
SRM[74], AQM[75] | | ✓ | | | | ✓
EWC++[9] | ✓ | | | | |
AR1[76] | ✓ | | | ✓ | |
EEIL[41], iCaRL[8], MCIL[77] | ✓ | ✓ | ✗ | | ✓ | ✓ | | |
SDC[78] | ✓ | | | | |
DGR[47] | | | | | | ✓
DGM[79] | | | | ✓ | ✓ | ✓
ICGAN[80], RtF[49] | | | | | | ✓ | | | ✓
iTAML[81],CCG[82] | ✓ | ✗ | ✓ | | ✓ | | ✓ | |
| | | | | | | | |
Table 4: Summary of recently proposed CL methods based on supervisory signals
required and techniques they use. t-ID free(test/train) means task-ID is not
required at test/train time.
## 6 Compared Methods
### 6.1 Regularization-based methods
#### Elastic Weight Consolidation (EWC)
EWC [11] incorporates a quadratic penalty to regularize the update of model
parameters that were important to past tasks. The importance of parameters is
approximated by the diagonal of the Fisher Information Matrix $F$. Assuming a
model sees two tasks A and B in sequence, the loss function of EWC is:
$\displaystyle\mathcal{L}(\theta)=\mathcal{L}_{B}(\theta)+\sum_{j}\frac{\lambda}{2}F_{j}\left(\theta_{j}-\theta_{A,j}^{*}\right)^{2}$
(7)
where $\mathcal{L}_{B}(\theta)$ is the loss for task B, $\theta^{*}_{A,j}$ is
the optimal value of $j^{th}$ parameter after learning task A and $\lambda$
controls the regularization strength. There are three major limitations of
EWC: (1) It requires storing the Fisher Information Matrix for each task,
which makes it impractical for a long sequence of tasks or models with
millions of parameters. (2) It needs an extra pass over each task at the end
of training, leading to its infeasibility for the online CL setting. (3)
Assuming the Fisher to be diagonal may not be accurate enough in practice.
Several variants of EWC are proposed lately to address these limitations [83,
9, 84]. As we use the online CL setting, we compare EWC++, an efficient and
online version of EWC that keeps a single Fisher Information Matrix calculated
by moving average. Specifically, given $F^{t-1}$ at $t-1$, the Fisher
Information Matrix at $t$ is updated as:
$\displaystyle F^{t}=\alpha F_{tmp}^{t}+(1-\alpha)F^{t-1}$ (8)
where $F_{tmp}^{t}$ is the Fisher Information Matrix calculated with the
current batch of data and $\alpha\in[0,1]$ is a hyperparameter controlling the
strength of favouring the current estimate $F_{tmp}^{t}$.
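The EWC penalty of Eq. (7) and the EWC++ running Fisher of Eq. (8) can be sketched in a few lines of NumPy. Approximating $F_{tmp}^{t}$ by the squared per-parameter gradients is a common diagonal estimate and an assumption on our part, as are the function names:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam):
    """Quadratic penalty of Eq. (7): (lam/2) * sum_j F_j (theta_j - theta*_j)^2."""
    return 0.5 * lam * float(np.sum(fisher * (theta - theta_star) ** 2))

def ewcpp_fisher(fisher_prev, grads, alpha):
    """EWC++ moving-average Fisher of Eq. (8), with F_tmp approximated by
    the squared per-parameter gradients of the current mini-batch."""
    f_tmp = grads ** 2
    return alpha * f_tmp + (1 - alpha) * fisher_prev

# Toy usage on a two-parameter model.
pen = ewc_penalty(np.array([1.0, 1.0]), np.zeros(2), np.ones(2), lam=2.0)
fisher = ewcpp_fisher(np.zeros(2), np.array([1.0, 2.0]), alpha=0.5)
```

Because only a single running Fisher (one array the size of $\theta$) is kept, the memory cost stays constant regardless of how many tasks have been seen.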
#### Learning without Forgetting (LwF)
LwF [12] utilizes knowledge distillation [40] to preserve knowledge from past
tasks in the multi-head setting [9]. In LwF, the teacher model is the model
after learning the last task, and the student model is the model trained with
the current task. Concretely, when the model receives a new task ($X_{n}$,
$Y_{n}$), LwF computes $Y_{o}$, the output of old tasks for the new data
$X_{n}$. During training, LwF optimizes the following loss:
$\mathcal{L}(\theta)=\lambda_{o}\mathcal{L}_{\text{KD}}\left(Y_{o},\hat{Y}_{o}\right)+\mathcal{L}_{\text{CE}}\left(Y_{n},\hat{Y}_{n}\right)+\mathcal{R}\left(\theta\right)$
(9)
where $\hat{Y}_{o}$ and $\hat{Y}_{n}$ are the predicted values of the old task
and new task using the same $X_{n}$. $\mathcal{L}_{\text{KD}}$ is the
knowledge distillation loss incorporated to impose output stability of old
tasks with new data and $\mathcal{L}_{\text{CE}}$ is the cross-entropy loss
for the new task. $\mathcal{R}$ is a regularization term, and $\lambda_{o}$ is
a hyperparameter controlling the strength of favouring the old tasks over the
new task. A known shortcoming of LwF is its heavy reliance on the relatedness
between the new and old tasks. Thus, LwF may not perform well when the
distributions of the new and old tasks are different [12, 8, 56]. To apply LwF
in the single-head setting where all tasks share the same output head, a
variant of LwF (LwF.MC) is proposed in [8], and we evaluate LwF.MC in this
work.
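The distillation term $\mathcal{L}_{\text{KD}}$ in Eq. (9) is typically a cross-entropy between temperature-softened outputs [40]. A minimal NumPy sketch, where the function names and the default temperature are our assumptions:

```python
import numpy as np

def softened_softmax(logits, temp):
    z = logits / temp
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def kd_loss(old_logits, new_logits, temp=2.0):
    """L_KD of Eq. (9): cross-entropy between the old model's soft targets
    Y_o and the current model's predictions on the same input X_n."""
    p = softened_softmax(old_logits, temp)   # teacher (model after last task)
    q = softened_softmax(new_logits, temp)   # student (current model)
    return float(-np.sum(p * np.log(q + 1e-12)))

z = np.array([1.0, 2.0])
loss_same = kd_loss(z, z)            # student matches the teacher exactly
loss_diff = kd_loss(z, z[::-1])      # student flips the teacher's preference
```

The loss is minimized when the student reproduces the teacher's soft distribution, which is how the old tasks' behaviour is anchored while the cross-entropy term fits the new task.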
### 6.2 Memory-based methods
A generic online memory-based method is presented in Algorithm 2. For every
incoming mini-batch, the algorithm retrieves another mini-batch from a memory
buffer, updates the model using both the incoming and memory mini-batches and
then updates the memory buffer with the incoming mini-batch. What
differentiates the various memory-based methods is the memory retrieval strategy
(line 3) [17, 85], the model update (line 4) [31, 13] and the memory update strategy
(line 5) [61, 63, 18].
Input: Batch size $b$, Learning rate $\alpha$
Initialize: Memory $\mathcal{M}\leftarrow\{\}$; Parameters $\theta$; Counter $n\leftarrow 0$
1 for $t\in\{1,\dots,T\}$ do
2  for $B_{n}\sim D_{t}$ do
3   $B_{\mathcal{M}}\leftarrow$ MemoryRetrieval($B_{n},\mathcal{M}$)
4   $\theta\leftarrow\text{ModelUpdate}(B_{n}\cup B_{\mathcal{M}},\theta,\alpha)$
5   $\mathcal{M}\leftarrow$ MemoryUpdate$(B_{n},\mathcal{M})$
6   $n\leftarrow n+b$
7 return $\theta$
Algorithm 2 Generic online memory-based method
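Algorithm 2 can also be sketched as a short Python skeleton with the three strategies passed in as plain functions; the hook names follow the algorithm, the rest is our illustration:

```python
def train_stream(tasks, theta, retrieve, model_update, mem_update, b=10):
    """Minimal sketch of Algorithm 2: MemoryRetrieval, ModelUpdate and
    MemoryUpdate are supplied as callables by each concrete method."""
    memory, n = [], 0
    for data in tasks:                       # t in {1, ..., T}
        for i in range(0, len(data), b):     # B_n ~ D_t
            batch = data[i:i + b]
            mem_batch = retrieve(batch, memory)             # line 3
            theta = model_update(batch + mem_batch, theta)  # line 4
            memory = mem_update(batch, memory)              # line 5
            n += b
    return theta
```

With trivial hooks (no retrieval, a sample counter as the "model", append-only memory), the skeleton runs end to end over a two-task stream.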
#### Averaged GEM (A-GEM)
A-GEM [31] is a more efficient version of GEM [13]. Both methods prevent
forgetting by constraining the parameter update with the samples in the memory
buffer. At every training step, GEM ensures that the loss of the memory
samples for each individual preceding task does not increase, while A-GEM
ensures that the average loss for all past tasks does not increase.
Specifically, let $g$ be the gradient computed with the incoming mini-batch
and $g_{ref}$ be the gradient computed with the same size mini-batch randomly
selected from the memory buffer. In A-GEM, if $g^{T}g_{ref}\geq 0$, $g$ is
used for gradient update but when $g^{T}g_{ref}<0$, $g$ is projected such that
$g^{T}g_{ref}=0$. The gradient after projection is:
$\displaystyle\tilde{g}=g-\frac{g^{\top}g_{ref}}{g_{ref}^{\top}g_{ref}}g_{ref}$
(10)
As we can see, A-GEM focuses on ModelUpdate in Algorithm 2, and we apply
reservoir sampling [86] in MemoryUpdate and random sampling in MemoryRetrieval
for A-GEM.
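The projection in Eq. (10) is a one-liner; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def a_gem_grad(g, g_ref):
    """A-GEM update direction: keep g when g^T g_ref >= 0, otherwise
    project g onto the plane g^T g_ref = 0 (Eq. 10)."""
    dot = g @ g_ref
    if dot >= 0:
        return g
    return g - (dot / (g_ref @ g_ref)) * g_ref

# a conflicting gradient gets its component along g_ref removed,
# so the projected gradient no longer increases the memory loss
g_tilde = a_gem_grad(np.array([1.0, -2.0]), np.array([0.0, 1.0]))
```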
#### Incremental Classifier and Representation Learning (iCaRL)
iCaRL [8] is a replay-based method that decouples the representation learning
and classification. For representation learning, the training set is
constructed by mixing all the samples in the memory buffer and the current
task samples. The loss function includes a classification loss to encourage
the model to predict the correct labels for new classes and a KD loss to
prompt the model to reproduce the outputs from the previous model for old
classes. Note that the training set is imbalanced since the number of new-
class samples in the current task is larger than that of the old-class samples
in the memory buffer. iCaRL applies the binary cross-entropy (BCE) loss for each
class to handle the imbalance, but BCE may not be effective at capturing the
relationships between classes. For the classifier, iCaRL uses a nearest-class-mean
classifier [87] with the memory buffer to predict labels for test images.
Moreover, it proposes a MemoryUpdate method, inspired by [88], based on
distances in the latent feature space. For each class, it looks
for a subset of samples whose mean latent feature is closest (in
Euclidean distance) to the mean over all samples of that class. However,
this method requires all samples from every class, and therefore it cannot be
applied in the online setting. As such, we modify iCaRL to use reservoir
sampling [86], which has been shown effective for MemoryUpdate [34].
#### Experience Replay (ER)
ER refers to a simple but effective replay-based method that has been
discussed in [34, 89]. It applies reservoir sampling [86] in MemoryUpdate and
random sampling in MemoryRetrieval. Reservoir sampling ensures every streaming
data point has the same probability, ${mem_{sz}}/{n}$, to be stored in the
memory buffer, where $mem_{sz}$ is the size of the buffer and $n$ is the
number of data points observed up to now. We summarize the details in Algorithm
3 in A. For ModelUpdate, ER simply trains the model with the incoming and
memory mini-batches together using the cross-entropy loss. Despite its
simplicity, recent research has shown that ER outperforms many specifically
designed CL approaches with and without a memory buffer [34].
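Reservoir sampling itself is short; a minimal sketch of the MemoryUpdate step (the function name is ours):

```python
import random

def reservoir_update(memory, mem_sz, batch, n):
    """Reservoir sampling: after n observed points, each one remains in the
    buffer with probability mem_sz / n."""
    for x in batch:
        n += 1
        if len(memory) < mem_sz:
            memory.append(x)                 # buffer not full yet
        else:
            j = random.randrange(n)          # uniform index in [0, n)
            if j < mem_sz:
                memory[j] = x                # overwrite a random slot
    return memory, n
```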
#### Maximally Interfered Retrieval (MIR)
MIR [17] is a recently proposed replay-based method aiming to improve the
MemoryRetrieval strategy. MIR chooses replay samples according to the loss
increases given the estimated parameter update based on the incoming mini-
batch. Concretely, when receiving a mini-batch $B_{n}$, MIR performs a virtual
parameter update $\theta^{v}\leftarrow\text{SGD}(B_{n},\theta)$. Then it
retrieves the top-k samples from the memory buffer with the criterion
$s(x)=l\left(f_{\theta^{v}}(x),y\right)-l\left(f_{\theta}(x),y\right)$, where
$x\in M$ and $M$ is the memory buffer. Intuitively, MIR selects memory samples
that are maximally interfered (the largest loss increases) by the parameter
update with the incoming mini-batch. MIR applies reservoir sampling in
MemoryUpdate and replays the selected memory samples with new samples in
ModelUpdate.
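The retrieval criterion can be illustrated on a toy problem; the sketch below assumes a scalar linear model with squared loss (the model, loss and names are ours; the scoring rule $s(x)$ is MIR's):

```python
import numpy as np

def mir_retrieve(theta, incoming, memory, lr=0.1, k=2):
    """MIR retrieval on a toy model f(x) = theta * x with squared loss:
    score memory samples by their loss increase under a virtual SGD step."""
    loss = lambda th, x, y: (th * x - y) ** 2
    grad = lambda th, x, y: 2 * (th * x - y) * x
    # virtual parameter update with the incoming mini-batch
    g = np.mean([grad(theta, x, y) for x, y in incoming])
    theta_v = theta - lr * g
    # s(x) = l(theta_v, x, y) - l(theta, x, y), then take the top-k
    scores = [loss(theta_v, x, y) - loss(theta, x, y) for x, y in memory]
    top = np.argsort(scores)[::-1][:k]
    return [memory[i] for i in top]
```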
#### Gradient based Sample Selection (GSS)
GSS [18] is another replay-based method that focuses on the MemoryUpdate
strategy ([18] proposes two gradient-based methods; we select the more
efficient one with better performance, dubbed GSS-Greedy). Specifically, it
tries to diversify the gradient directions of the samples in the memory
buffer. To this end, GSS maintains a score for each sample in the buffer, and
the score is calculated by the maximal cosine similarity in the gradient space
between the sample and a random subset from the buffer. When a new sample
arrives and the memory buffer is full, a randomly selected subset is used as
the candidate set for replacement. The score of a sample in the candidate set
is compared to the score of the new sample, and the sample with a lower score
is more likely to be stored in the memory buffer. Algorithm 4 in A shows the
main steps of this update method. Same as ER, GSS uses random sampling in
MemoryRetrieval.
#### Greedy Sampler and Dumb Learner (GDumb)
GDumb [24] is not specifically designed for CL problems but shows very
competitive performance. Specifically, it greedily updates the memory buffer
from the data stream with the constraint to keep a balanced class distribution
(Algorithm 5 in A). At inference, it trains a model from scratch using the
balanced memory buffer only.
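A sketch of the greedy balanced MemoryUpdate, assuming the classes seen so far are those present in the buffer or in the incoming sample (a simplification of [24]; the names are ours):

```python
from collections import defaultdict

def gdumb_update(memory, mem_sz, x, y):
    """Greedy balanced buffer update: keep per-class counts near mem_sz / k;
    when full, admit a sample of an under-represented class by evicting
    from the currently largest class."""
    by_class = defaultdict(list)
    for xs, ys in memory:
        by_class[ys].append(xs)
    k = max(len(by_class) + (0 if y in by_class else 1), 1)
    if sum(len(v) for v in by_class.values()) < mem_sz:
        by_class[y].append(x)                    # buffer not full: always admit
    elif len(by_class[y]) < mem_sz // k:         # room under the balanced quota
        biggest = max(by_class, key=lambda c: len(by_class[c]))
        by_class[biggest].pop()                  # evict from the largest class
        by_class[y].append(x)
    return [(xs, ys) for ys in by_class for xs in by_class[ys]]
```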
### 6.3 Parameter-isolation-based methods
#### Continual Neural Dirichlet Process Mixture (CN-DPM)
CN-DPM [19] is one of the first dynamic architecture methods that does not
require a task-ID. The intuition behind this method is that if we train a new
model for a new task and leave the existing models intact, we can retain the
knowledge of the past tasks. Specifically, CN-DPM is comprised of a group of
experts, where each expert is responsible for a subset of the data and the
group is expanded based on the Dirichlet Process Mixture [90] with Sequential
Variational Approximation [91]. Each expert consists of a discriminative model
(classifier) and a generative model (VAE [92] is used in this work). The goal
of CN-DPM is to model the overall conditional distribution as a mixture of
task-wise conditional distributions as the following, where $K$ is the number
of experts in the current model:
$\displaystyle p(y\mid x)=\sum_{k=1}^{K}\underbrace{p(y\mid
x,z=k)}_{\text{discriminative }}\frac{\overbrace{p(x\mid
z=k)}^{\text{generative }}{p(z=k)}}{\sum_{k^{\prime}=1}^{K}p\left(x\mid
z=k^{\prime}\right)p\left(z=k^{\prime}\right)}$ (11)
## 7 Tricks for Memory-Based Methods in the OCI Setting
Method | e(n, o) | e(n, n) | e(o, o) | e(o, n) | er(n, o) | er(o, n)
---|---|---|---|---|---|---
A-GEM | 0 | 177 | 0 | 9500 | 0% | 100%
ER | 37 | 148 | 2269 | 5852 | 20% | 72%
MIR | 54 | 113 | 2770 | 5330 | 32% | 66%
Table 5: Error analysis of CIFAR-100 by the end of training with M=5k. e(n, o)
& e(n, n) represent the number of test samples from new classes that are
misclassified as old classes and new classes, respectively. The same notation
rule applies to e(o, o) & e(o, n).
(a) er(n, o) is the ratio of new class test samples misclassified as old
classes to the total number of new class test samples.
(b) The mean of logits for new and old classes
(c) The mean of the bias terms in the last FC layer for new and old classes
(d) The mean of the weights in the last FC layer for new and old classes
Figure 1: Error analysis for three memory-based methods (A-GEM, ER and MIR)
with a 5k memory buffer on Split CIFAR-100, described in Section 8.1.
In class incremental learning, old class samples are generally not available
while training on new class samples. Although keeping a portion of old class
samples in a memory buffer has been proven effective [34, 8], the class
imbalance is still a serious problem given a limited buffer size. Moreover,
multiple recent works have revealed that class imbalance is one of the most
crucial causes of catastrophic forgetting [41, 42, 69]. To alleviate the class
imbalance, many simple but effective tricks have been proposed as the building
blocks for CL methods by modifying the loss function, post-processing, or
using different types of classifiers.
In Section 7.1 and 7.2, we perform quantitative error analysis and disclose
how class imbalance results in catastrophic forgetting. Section 7.3 explains
the compared tricks in detail.
### 7.1 Error analysis of memory-based methods
We perform quantitative error analysis for three memory-based methods with 5k
memory buffer (A-GEM, ER and MIR) on Split CIFAR-100 described in Section 8.1.
We define $e(n,o)$ and $e(n,n)$ as the number of test samples from new classes
that are misclassified as old classes and new classes, respectively. The same
notation rule is applied to $e(o,o)$ and $e(o,n)$. Also, $er(n,o)$ denotes the
ratio of new class test samples misclassified as old classes to the total
number of new class test samples. $er(o,n)$ is similarly defined.
As shown in Table 5, all methods have a strong bias towards new classes by the
end of training: A-GEM classifies all old class samples as new classes; ER
and MIR misclassify 72% and 66% of old class samples as new classes,
respectively. Moreover, as we can see in Fig. 1(a), $er(o,n)$ is higher than
$er(n,o)$ most of the time along the training process for all three methods.
These phenomena are not specific to the OCI setting, and [93, 68] also found
similar results in offline class incremental learning. Additionally, ER and
MIR are consistently better than A-GEM in terms of
$er(o,n)$. This is because ER and MIR use the memory samples more directly,
namely replaying them with the new class samples. The indirect use of memory
samples in A-GEM is less effective in the class incremental setting.
### 7.2 Biased FC layer
To better understand how class imbalance affects the learning performance, we
define the following notations. The convolutional neural network (CNN)
classification model can be split into a feature extractor
$\phi(\cdot):\mathbf{x}\mapsto\mathbb{R}^{d}$, where $d$ is the dimension of the
feature vector of image $\mathbf{x}$, and a fully-connected (FC) layer with
Softmax output. The output of the FC layer is obtained with:
$\displaystyle logits(\mathbf{x})=\mathbf{W}^{T}\phi(\mathbf{x})$ (12)
$\displaystyle\mathbf{o}(\mathbf{x})=Softmax(logits(\mathbf{x}))$ (13)
where $\mathbf{W}\in\mathbb{R}^{d\times(|C_{old}|+|C_{new}|)}$, and
$C_{old}$ and $C_{new}$ are the sets of old and new classes respectively, with
$|\cdot|$ denoting the number of classes in each set. $\mathbf{W_{i}}$
represents the weight vector for class $i$. For notational brevity, $\mathbf{W}$
also contains the bias terms.
We start by analyzing the mean of logits for new and old classes. As shown in
Fig. 1(b), the mean of logits for new classes is always much higher than that
for old classes, which explains the high $er(o,n)$ for all three methods. As
we can see from Eq. (12), both feature extractor $\phi(\mathbf{x})$ and FC
layer $\mathbf{W}$ may potentially contribute to the logit bias. However,
previous works have found that even a small memory buffer (implying high class
imbalance) can greatly alleviate catastrophic forgetting in the multi-head
setting where the model can utilize the task-id to select the corresponding FC
layer for each task [34, 31]. This suggests that the feature extractor is not
heavily affected by the class imbalance, and therefore we hypothesize that the
FC layer is biased.
To validate the hypothesis, we plot the means of bias terms and weights in the
FC layer for new and old classes in Fig. 1(c) and 1(d). As we can see, the
means of weights for the new classes are much higher than those for the old
classes, and the means of bias terms for the new classes are also higher than
those for the old classes most of the time. Since the biased weights and bias
terms in $\mathbf{W}$ have direct impacts on the output logits, the model has
a higher chance of predicting a sample as new classes. [93, 42, 68] have also
verified the same hypothesis but with different validation methods.
A recent work reveals how class imbalance results in a biased $\mathbf{W}$
[93]. We denote $s_{i}=\mathbf{W_{i}}^{T}\phi(\mathbf{x})$ as the logit of
sample $\mathbf{x}$ for class $i$. The gradient of the cross-entropy loss
$L_{CE}$ with respect to $\mathbf{W_{i}}$ for the Softmax output is:
$\displaystyle\frac{\partial\mathcal{L}_{\mathrm{CE}}}{\partial\mathbf{W_{i}}}=\bigg{(}\frac{e^{s_{i}}}{\sum_{j\in
C_{old}+C_{new}}e^{s_{j}}}-\mathbbm{1}_{\\{i=y\\}}\bigg{)}\phi(\mathbf{x})$
(14)
where $y$ is the ground-truth class and $\mathbbm{1}_{\\{i=y\\}}$ is the
indicator for $i=y$. Since ReLU is often used as the activation function for
the embedding networks [94], $\phi(\mathbf{x})$ is always positive. Therefore,
the gradient is always positive for $i\neq y$. If $i$ belongs to old classes,
$i\neq y$ will hold most of the time as the new class samples significantly
outnumber the old class samples during training. Thus, the logit for class $i$
will keep being penalized during the gradient descent. As a result, the logits
for the old classes are prone to be smaller than those for the new classes,
and the model is consequently biased towards new classes.
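Eq. (14) and the sign argument above are easy to check numerically; a small sketch with a finite-difference comparison (the function name and the toy weights are ours):

```python
import numpy as np

def ce_grad_wrt_Wi(W, feat, y, i):
    """Gradient of softmax cross-entropy w.r.t. the class-i column of W
    (Eq. 14): (softmax_i(s) - 1{i=y}) * phi(x)."""
    s = W.T @ feat
    p = np.exp(s - s.max())
    p /= p.sum()
    return (p[i] - (1.0 if i == y else 0.0)) * feat

W = np.array([[0.2, -0.1], [0.3, 0.4], [0.1, 0.0]])   # d=3, two classes
feat = np.array([1.0, 2.0, 0.5])                      # ReLU-like: non-negative
g = ce_grad_wrt_Wi(W, feat, y=0, i=1)
# for i != y and non-negative features, every component of g is >= 0,
# so gradient descent keeps pushing the logit of class i down
```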
Other than the bias related to the Softmax classifier mentioned above, another
problem induced by class imbalance is under-representation of the minority
[95, 96], where minority classes do not show a discernible pattern in the
latent feature space [63]. The under-representation introduces additional
difficulty for other classifiers apart from the Softmax classifier, such as
nearest-class-mean [87] and cosine-similarity-based classifier [97].
### 7.3 Compared tricks
To mitigate the strong bias towards new classes due to class imbalance,
multiple methods have been proposed lately [98, 99, 68, 70, 100]. For example,
LUCIR [69] proposes three tricks: cosine normalization to balance class
magnitudes, a margin ranking loss for inter-class separation, and a less-
forgetting constraint to preserve the orientation of features. BiC [42] trains
an additional linear layer to remove bias with a separate validation set. We
compare seven simple (effortless integration without additional resources) but
effective (decent improvement) tricks in this work.
#### Labels Trick (LB)
[101] proposes to consider only the outputs for the classes in the current
mini-batch when calculating the cross-entropy loss, in contrast to the common
practice of considering outputs for all the classes. To achieve this, the
outputs that do not correspond to the classes of the current mini-batch are
masked out when calculating the loss.
Although the authors did not explain the motivation for this trick, its
rationale follows from the analysis in Section 7.2. Masking out
all the outputs that do not match the classes in the current mini-batch is
equivalent to changing the loss function to:
$\displaystyle\mathcal{L}_{\mathrm{CE}}(x_{i},y_{i})=-\log\left(\frac{e^{s_{y_{i}}}}{\sum_{j\in{C}_{cur}}e^{s_{j}}}\right)$
(15)
where $C_{cur}$ denotes the classes in the current mini-batch. We can see that
$\frac{\partial\mathcal{L}_{\mathrm{CE}}}{\partial s_{j}}=0$ for $j\notin
C_{cur}$, and therefore training with the current mini-batch will not overly
penalize the logits for classes that are not in the mini-batch.
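A minimal numpy sketch of Eq. (15) for a single sample (the function name is ours):

```python
import numpy as np

def labels_trick_ce(logits, y, cur_classes):
    """Cross-entropy normalized over the current mini-batch's classes only
    (Eq. 15); logits of absent classes receive zero gradient."""
    idx = sorted(cur_classes)
    z = logits[idx] - logits[idx].max()
    return -np.log(np.exp(z[idx.index(y)]) / np.exp(z).sum())
```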
#### Knowledge Distillation with Classification Loss (KDC)
KD [40] is an effective way for knowledge transfer between networks. Multiple
recent works have proposed different ways to combine the KD loss with the
classification loss [8, 41, 102]. In this part, we compare the methods from
[42]. Specifically, the loss function is given as:
$\displaystyle\mathcal{L}(\mathbf{x},y)=\lambda\mathcal{L}_{CE}(\mathbf{x},y)+(1-\lambda)\mathcal{L}_{KD}(\mathbf{x})$
(16)
where $\lambda$ is set to $\frac{|C_{new}|}{|C_{old}|+|C_{new}|}$. Note
that $(\mathbf{x},y)$ is from both new class data from the current task and
old class data from the memory buffer.
As shown in Table 9, however, this method does not perform well in our
experiment setting, especially with a large memory buffer. We identify
$\lambda$ as the key issue. We find that the accuracy for new class samples
becomes almost zero around the end of training because $\lambda$ is very
small. In other words, $\mathcal{L}_{KD}$ dominates the loss, and the model
cannot learn any new knowledge. Hence, we suggest setting $\lambda$ to
$\sqrt{\frac{|C_{new}|}{|C_{old}|+|C_{new}|}}$. We
denote the trick with this modification as KDC*.
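The difference between the two weightings is easy to see numerically; a small sketch (the function name is ours):

```python
import math

def kdc_lambda(n_new, n_old, modified=False):
    """Weight on the classification loss in Eq. (16). The KDC* variant takes
    the square root, so lambda decays much more slowly as old classes grow."""
    lam = n_new / (n_old + n_new)
    return math.sqrt(lam) if modified else lam

# late in training: 5 new classes vs. 95 old classes
# kdc_lambda(5, 95)       -> 0.05  (the CE term nearly vanishes)
# kdc_lambda(5, 95, True) -> ~0.22 (KDC*: new classes can still be learned)
```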
#### Multiple Iterations (MI)
Most of the previous works only perform a single gradient update on the
incoming mini-batch in the online setup. [17] suggests performing multiple
gradient updates to maximally utilize the current mini-batch. Particularly for
replay methods, additional updates with different replay samples can improve
performance. We run 5 iterations per incoming mini-batch and retrieve
different memory samples for each iteration in this work.
#### Nearest Class Mean (NCM) Classifier
To tackle the biased FC layer, one can replace the FC layer and Softmax
classifier with another type of classifier. Nearest Class Mean classifier
(NCM) [87] is a popular option in CL [8, 78]. To make prediction for a sample
$\mathbf{x}$, NCM computes a prototype vector for each class and assigns the
class label with the most similar prototype:
$\displaystyle\mu_{y}=\frac{1}{\left|M_{y}\right|}\sum_{\mathbf{x}_{m}\in
M_{y}}\phi(\mathbf{x}_{m})$ (17) $\displaystyle
y^{*}=\underset{y=1,\ldots,t}{\operatorname{argmin}}\left\|\phi(\mathbf{x})-\mu_{y}\right\|$
(18)
In the class incremental setting, the true prototype vector for each class
cannot be computed due to the unavailability of the training data for previous
tasks. Instead, the prototype vectors can be approximated using the data in
the memory buffer. In Eq. (17), $M_{y}$ denotes the memory samples of class
$y$.
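Eqs. (17)-(18) in a minimal numpy sketch, with prototypes estimated from the memory buffer (the function name is ours):

```python
import numpy as np

def ncm_predict(feat, memory_feats, memory_labels):
    """Nearest-class-mean prediction (Eqs. 17-18): class prototypes are the
    means of the memory samples' features; predict the closest prototype."""
    classes = sorted(set(memory_labels))
    protos = np.stack([
        np.mean([f for f, l in zip(memory_feats, memory_labels) if l == c], axis=0)
        for c in classes])
    d = np.linalg.norm(protos - feat, axis=1)
    return classes[int(np.argmin(d))]
```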
#### Separated Softmax (SS)
Since training the whole FC layer with one Softmax output layer results in
bias as explained in Section 7.2, SS [93] employs two Softmax output layers:
one for new classes and another one for old classes. The loss function can be
calculated as below:
$\displaystyle\mathcal{L}(\mathbf{x}_{i},y_{i})=-\log\left(\frac{e^{s_{y_{i}}}}{\sum_{j\in{C}_{old}}e^{s_{j}}}\right)\cdot\mathbbm{1}\left\{y_{i}\in{C}_{old}\right\}-\log\left(\frac{e^{s_{y_{i}}}}{\sum_{j\in{C}_{new}}e^{s_{j}}}\right)\cdot\mathbbm{1}\left\{y_{i}\in{C}_{new}\right\}$
(19)
Depending on whether $y_{i}\in C_{new}$ or $C_{old}$, the corresponding
Softmax is used to compute the cross-entropy loss. We can find that
$\frac{\partial\mathcal{L}}{\partial s_{j}}=0$ for $j\in C_{old}$ and
$y_{i}\in C_{new}$. Thus, training with new class samples will not overly
penalize the logits for the old classes.
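A minimal numpy sketch of Eq. (19) for a single sample (the function name is ours):

```python
import numpy as np

def separated_softmax_ce(logits, y, old_classes, new_classes):
    """Eq. (19): normalize the softmax only over the group (old or new) that
    the label belongs to, so new samples never penalize old-class logits."""
    group = sorted(old_classes if y in old_classes else new_classes)
    z = logits[group] - logits[group].max()
    return -np.log(np.exp(z[group.index(y)]) / np.exp(z).sum())
```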
#### Review Trick (RV)
To alleviate the class imbalance, [41] proposes an additional fine-tuning step
with a small learning rate, which uses a balanced subset from the memory
buffer and the training set of the current task. A temporary distillation loss
for new classes is applied to avoid forgetting the new classes during the
fine-tuning phase mentioned above. A similar yet simplified version, dubbed
Review Trick, is applied in the winning solution in the continual learning
challenge at CVPR2020 [25]. At the end of learning the current task, the
review trick fine-tunes the model with all the samples in the memory buffer
using only the cross-entropy loss. In this work, we compare the review trick
from [25] with a learning rate 10 times smaller than the training learning
rate.
Dataset | #Task | #Train/task | #Test/task | #Class | Image Size | Setting
---|---|---|---|---|---|---
Split MiniImageNet | 20 | 2500 | 500 | 100 | 3x84x84 | OCI
Split CIFAR-100 | 20 | 2500 | 500 | 100 | 3x32x32 | OCI
CORe50-NC | 9 | 12000$\sim$24000 | 4500$\sim$9000 | 50 | 3x128x128 | OCI
NS-MiniImageNet | 10 | 5000 | 1000 | 100 | 3x84x84 | ODI
CORe50-NI (one test set of 44,972 images for all tasks) | 8 | 15000 | 44972 | 50 | 3x128x128 | ODI
Table 6: Summary of dataset statistics
## 8 Experiments
Section 8.1 explains the general setting for all experiments. Then, we focus
on the OCI setting in Section 8.2 and 8.3: we evaluate all the compared
methods and baselines in Section 8.2 and investigate the effectiveness of the
seven tricks in Section 8.3. In Section 8.4, we assess the compared methods in
the ODI setting to investigate their abilities to generalize, and Section 8.5
provides general comments on the surveyed methods and tricks.
### 8.1 Experimental setup
#### Datasets
We evaluate the nine methods summarized in Section 6 and two additional
baselines on three class incremental datasets. We also propose a new domain
incremental dataset based on Mini-ImageNet and examine the compared methods in
the ODI setting to see how well the methods can generalize to this setting.
The summary of dataset statistics is provided in Table 6.
#### Class Incremental Datasets
* 1.
Split CIFAR-100 is constructed by splitting the CIFAR-100 dataset [103] into
20 tasks with disjoint classes, and each task has 5 classes. There are 2,500
3×32×32 images for training and 500 images for testing in each task.
* 2.
Split MiniImageNet splits MiniImageNet dataset [104], a subset of ImageNet
[105] with 100 classes, into 20 disjoint tasks as in [34]. Each task contains
5 classes, and every class consists of 500 3×84×84 images for training and 100
images for testing.
* 3.
CORe50-NC [106] is a benchmark designed for class incremental learning with 9
tasks and 50 classes: 10 classes in the first task and 5 classes in the
subsequent 8 tasks. Each class has around 2,398 3×128×128 training images and
900 testing images.
#### Domain Incremental Datasets
* 1.
NonStationary-MiniImageNet (NS-MiniImageNet) The most popular domain
incremental datasets are still based on MNIST [107], such as Rotation MNIST
[13] and Permutation MNIST [11]. To evaluate the domain incremental setting in
a more practical scenario, we propose NS-MiniImageNet with three nonstationary
types: noise, blur and occlusion. The number of tasks and the strength of each
nonstationary type are adjustable in this dataset. In this survey, we use 10
tasks for each type, and each task comprises 5,000 3×84×84 training images and
1,000 testing images. As shown in Table 7, the nonstationary strength
increases over time, and to ensure a smooth distribution shift, the strength
always increases by the same constant. More details about the nonstationary
strengths used in the experiments can be found in B.
* 2.
CORe50-NI [106] is a practical benchmark designed for assessing the domain
incremental learning with 8 tasks, where each task has around 15,000 training
images of 50 classes with different types of nonstationarity including
illumination, background, occlusion, pose and scale. There is one single test
set for all tasks, which contains 44,972 images.
Table 7: Example images of the different nonstationary types (noise, blur and
occlusion) in NS-MiniImageNet across Task 1 through Task 10.
The nonstationary strength increases over time, and to ensure a smooth
distribution shift, the strength always increases by the same constant.
#### Task Order and Task Composition
Since the task order and task composition may impact the performance [106], we
take the average over multiple runs for each experiment with different task
orders and compositions to reliably assess the robustness of the methods. For
CORe50-NC and CORe50-NI, we follow the number of runs (i.e., 10), task order
and composition provided by the authors. For Split CIFAR-100 and Split
MiniImagenet, we average over 15 runs, and the class composition in each task
is randomly selected for each run.
#### Models
Similar to [34, 13], we use the reduced ResNet18 [108] as the base model for
all datasets and methods. The network is trained via the cross-entropy loss
with a stochastic gradient descent optimizer and a mini-batch size of 10. The
size of the mini-batch retrieved from the memory buffer is also set to 10,
irrespective of the size of the memory buffer as in [34]. Note that with
techniques such as transfer learning (e.g., using a pre-trained model from
ImageNet), data augmentation and deeper network architectures, it is possible
to achieve much higher performance in this setting [109]. However, since those
techniques are orthogonal to our investigation and deviate from the simpler
experimental settings of other papers we cite and compare, we do not use them
in our experiments.
#### Baselines
We compare the methods we discussed in Section 6 with two baselines:
* 1.
Finetune greedily updates the model with the incoming mini-batch without
considering the previous task performance. The model suffers from catastrophic
forgetting and is regarded as the lower-bound.
* 2.
Offline trains a model using all the samples in a dataset in an offline
manner. The baseline is trained for multiple epochs; within each epoch,
mini-batches are sampled i.i.d. from the reshuffled dataset. We train the
model for 70 epochs with a mini-batch size of 128.
#### Other Details
We evaluate the performance with 5 metrics we described in Section 3.2:
Average Accuracy, Average Forgetting, Forward Transfer, Backward Transfer and
Run Time. We use the hyperparameter tuning protocol described in Section 4 and
give each method similar tuning budget. The details of the implementation
including hyperparameter selection can be found in B.2.
### 8.2 Performance comparison in Online Class Incremental (OCI) setting
Method | Split CIFAR-100 | Split Mini-ImageNet | CORe50-NC
---|---|---|---
Finetune | $3.7\pm 0.3$ | $3.4\pm 0.2$ | $7.7\pm 1.0$
Offline | $49.7\pm 2.6$ | $51.9\pm 0.5$ | $51.7\pm 1.8$
EWC++ | $3.7\pm 0.4$ | $3.5\pm 0.4$ | $8.3\pm 0.3$
LwF | $7.2\pm 0.4$ | $7.6\pm 0.7$ | $7.1\pm 1.9$
Buffer Size | M=1k | M=5k | M=10k | M=1k | M=5k | M=10k | M=1k | M=5k | M=10k
ER | $7.6\pm 0.5$ | $17.0\pm 1.9$ | $18.4\pm 1.4$ | $6.4\pm 0.9$ | $14.5\pm 2.1$ | $15.9\pm 2.0$ | $23.5\pm 2.4$ | $27.5\pm 3.5$ | $28.2\pm 3.3$
MIR | $7.6\pm 0.5$ | $18.2\pm 0.8$ | $19.3\pm 0.7$ | $6.4\pm 0.9$ | $16.5\pm 2.1$ | $21.0\pm 1.1$ | $\mathbf{27.0\pm 1.6}$ | $\mathbf{32.9\pm 1.7}$ | $\mathbf{34.5\pm 1.5}$
GSS | $7.7\pm 0.5$ | $11.3\pm 0.9$ | $13.4\pm 0.6$ | $5.9\pm 0.7$ | $11.2\pm 0.9$ | $13.5\pm 0.8$ | $19.6\pm 3.0$ | $22.2\pm 4.4$ | $21.1\pm 3.5$
iCaRL | $\mathbf{16.7\pm 0.8}$ | $19.2\pm 1.1$ | $18.8\pm 0.9$ | $\mathbf{14.7\pm 0.4}$ | $17.5\pm 0.6$ | $17.4\pm 1.5$ | $22.1\pm 1.4$ | $25.1\pm 1.6$ | $22.9\pm 3.1$
A-GEM | $3.7\pm 0.4$ | $3.6\pm 0.2$ | $3.8\pm 0.2$ | $3.4\pm 0.2$ | $3.7\pm 0.3$ | $3.3\pm 0.3$ | $8.7\pm 0.6$ | $9.0\pm 0.5$ | $8.9\pm 0.6$
CN-DPM | $14.0\pm 1.7$ | - | - | $9.4\pm 1.2$ | - | - | $7.6\pm 0.4$ | - | -
GDumb | $10.4\pm 1.1$ | $\mathbf{22.1\pm 0.9}$ | $\mathbf{28.8\pm 0.9}$ | $8.8\pm 0.4$ | $\mathbf{21.1\pm 1.7}$ | $\mathbf{31.0\pm 1.4}$ | $15.1\pm 1.2$ | $28.1\pm 1.4$ | $32.6\pm 1.7$
Table 8: Average accuracy (end of training) for the OCI setting of Split
CIFAR-100, Split Mini-ImageNet and CORe50-NC. Replay-based methods and a
strong baseline GDumb show competitive performance across three datasets.
#### Regularization-based Methods
As shown in Table 8, EWC++ has almost the same performance as Finetune in the
OCI setting. We find that the gradient explosion of the regularization terms
is the root cause. Specifically, $\lambda$ in EWC++ controls the
regularization strength, and we need a larger $\lambda$ to avoid forgetting.
However, when $\lambda$ increases to a certain value, the gradient explosion
occurs. If we take $\theta_{j}$ in Eq. (7) as an example, the regularization
term for $\theta_{j}$ has the gradient
${\lambda}F_{j}(\theta_{j}-\theta^{*})$. Some model weights change
significantly when it receives data with new classes, and therefore the
gradients for those weights are prone to explode with a large $\lambda$. The
Huber regularization proposed lately could be a possible remedy [110].
Surprisingly, we also observe that LwF, a method relying on KD, has similar
performance to replay-based methods with a small memory buffer (1k), such as
ER, MIR and GSS, in Split CIFAR-100 and even outperforms them in Split Mini-
ImageNet. In the larger and more realistic CORe50-NC, however, both EWC++ and
LwF fail. This also confirms the results of three recent studies, where [111]
shows the shortcomings of regularization-based approaches in the class
incremental setting, [112] theoretically explains why regularization-based
methods underperform memory-based methods and [113] empirically demonstrates
that KD is more useful in small-scale datasets.
(a) CIFAR-100
(b) Mini-ImageNet
(c) CORe50-NC
Figure 2: The average accuracy measured by the end of each task for the OCI
setting with a 5k memory buffer. More detailed results for different memory
buffer sizes are shown in C.1.
#### Memory-based Methods
Firstly, A-GEM does not work in this setting as it has almost the same
performance as Finetune, implying that the indirect use of the memory samples
is less efficient than direct replay in the OCI setting. Secondly, given a
small memory buffer in Split CIFAR-100 and Mini-ImageNet, iCaRL—proposed in
2017—shows the best performance. On the other hand, other replay-based methods
such as ER, MIR and GSS do not work well because simply replaying with a small
memory buffer yields severe class imbalance. When equipped with a larger
memory buffer (5k and 10k), GDumb—a simple baseline that trains with the
memory buffer only—outperforms other methods by a large margin. Additionally,
as shown in Fig. 2(a) and Fig. 2(b), GDumb dominates the average accuracy not
only at the end of training but also at any other evaluation points along the
data stream. This raises concerns about the progress in the OCI setting in the
literature. Next, in the larger CORe50-NC dataset, GDumb is less effective
since it only relies on the memory and the memory is smaller in a larger
dataset in proportion. MIR is a robust and strong method as it exhibits
remarkable performance across different memory sizes. Also, even though GSS is
claimed to be an enhanced version of ER, ER consistently surpasses GSS across
different memory sizes and datasets, which is also confirmed by other studies
[19, 59].
#### Parameter-isolation-based methods
As one of the first dynamic-architecture methods without using a task-ID, CN-
DPM shows competitive results in Split CIFAR-100 and Mini-ImageNet but fails
in CORe50-NC. The key reason is that CN-DPM is very sensitive to
hyperparameters, and when applying CN-DPM in a new dataset, a good performance
cannot be guaranteed given a limited tuning budget.
Figure 3: Average Accuracy, Forgetting, Running Time, Forward Transfer and
Backward Transfer for the OCI setting with a 5k memory buffer. Each column
represents a metric and each row represents a dataset. In this setting, none
of the methods show any forward or backward transfer.
#### Other Metrics
We show the performance of all five metrics in Fig. 3. Generally speaking, a
high average accuracy comes with low forgetting; however, KD-based methods
such as iCaRL and LwF, and dynamic-architecture methods such as CN-DPM, show
even lower forgetting. The reason for their low forgetting is
intransigence, the model’s inability to learn new knowledge [9]. For iCaRL and
LwF, KD imposes a strong regularization, which may lead to a lower accuracy on
new tasks. For CN-DPM, the inaccurate expert selector is the cause of the
intransigence [19]. Furthermore, most methods have similar running time except
for CN-DPM, GDumb and GSS. CN-DPM requires a significantly longer training
time as it needs to train multiple experts, and each expert contains a
generative model (VAE [92]) and a classifier (10-layer ResNet [108]). GDumb has
the second longest running time as it requires training the model from scratch
with the memory at every evaluation point. Lastly, we notice that none of the
methods show any forward or backward transfer, which is expected since the
model tends to assign current-task labels to all test samples due to the
strong bias in the last FC layer.
### 8.3 Effectiveness of tricks in the OCI setting
Finetune | $3.7\pm 0.3$
---|---
Offline | $49.7\pm 2.6$
Trick | A-GEM | ER | MIR
Buffer Size | M=1k | M=5k | M=10k | M=1k | M=5k | M=10k | M=1k | M=5k | M=10k
N/A | $3.7\pm 0.4$ | $3.6\pm 0.2$ | $3.8\pm 0.2$ | $7.6\pm 0.5$ | $17.0\pm 1.9$ | $18.4\pm 1.4$ | $7.6\pm 0.5$ | $18.2\pm 0.8$ | $19.3\pm 0.7$
LB | $5.0\pm 0.5$ | $4.6\pm 0.8$ | $4.9\pm 0.7$ | $14.0\pm 2.0$ | $19.0\pm 2.6$ | $20.4\pm 1.2$ | $\mathbf{15.1\pm 0.6}$ | $21.1\pm 0.8$ | $22.5\pm 0.9$
KDC | $8.3\pm 0.7$ | $8.8\pm 0.7$ | $7.7\pm 1.1$ | $11.7\pm 0.8$ | $10.9\pm 1.9$ | $11.9\pm 2.2$ | $12.0\pm 0.5$ | $12.3\pm 0.7$ | $11.8\pm 0.6$
KDC* | $5.6\pm 0.5$ | $5.8\pm 0.5$ | $5.8\pm 0.5$ | $12.6\pm 0.5$ | $21.2\pm 1.1$ | $24.2\pm 1.9$ | $12.4\pm 0.5$ | $20.7\pm 0.8$ | $23.2\pm 1.2$
MI | $4.0\pm 0.3$ | $4.0\pm 0.3$ | $4.0\pm 0.2$ | $8.6\pm 0.5$ | $19.7\pm 0.9$ | $26.4\pm 1.2$ | $8.5\pm 0.4$ | $17.7\pm 1.0$ | $25.9\pm 1.2$
SS | $5.0\pm 0.8$ | $5.2\pm 0.6$ | $5.1\pm 0.5$ | $12.3\pm 2.1$ | $20.9\pm 1.0$ | $23.1\pm 1.2$ | $14.0\pm 0.5$ | $21.6\pm 0.7$ | $24.5\pm 0.7$
NCM | $\mathbf{9.5\pm 0.9}$ | $11.7\pm 0.6$ | $11.5\pm 0.7$ | $\mathbf{14.6\pm 0.7}$ | $\mathbf{27.6\pm 1.0}$ | $31.0\pm 1.0$ | $13.7\pm 0.5$ | $27.0\pm 0.5$ | $30.0\pm 0.6$
RV | $4.5\pm 0.4$ | $\mathbf{22.5\pm 1.3}$ | $\mathbf{30.7\pm 1.2}$ | $12.0\pm 0.8$ | $26.9\pm 2.8$ | $\mathbf{32.0\pm 5.3}$ | $9.7\pm 0.5$ | $\mathbf{28.1\pm 0.6}$ | $\mathbf{35.2\pm 0.5}$
Best OCI | $16.7\pm 0.8$ | ${22.1\pm 0.9}$ | ${28.8\pm 0.9}$ | $16.7\pm 0.8$ | ${22.1\pm 0.9}$ | ${28.8\pm 0.9}$ | $16.7\pm 0.8$ | ${22.1\pm 0.9}$ | ${28.8\pm 0.9}$
Table 9: Performance of compared tricks for the OCI setting on Split CIFAR-100. We report average accuracy (end of training) for memory buffers of size 1k, 5k and 10k. Best OCI refers to the best performance achieved by the compared methods in Table 8.
Figure 4: Comparison of various tricks for the OCI setting on Split CIFAR-100. We report average accuracy (end of training) for memory buffers of size 1k, 5k and 10k. N/A denotes the performance of the base methods (A-GEM, ER, MIR) without any trick.
Trick | Running Time(s)
---|---
| M=1k | M=5k | M=10k
N/A | 83 | 82 | 84
LB | 87 | 88 | 89
KDC | 105 | 106 | 106
KDC* | 105 | 105 | 107
MI | 328 | 324 | 325
SS | 89 | 90 | 90
NCM | 126 | 282 | 450
RV | 98 | 159 | 230
Table 10: Running time of different tricks applying to ER with different
memory sizes on Split CIFAR-100.
We evaluate the Label trick (LB), Knowledge Distillation and Classification
(KDC), Multiple Iterations (MI), Separated Softmax (SS), Nearest Class Mean
(NCM) classifier and Review trick (RV) on three memory-based methods: A-GEM,
ER and MIR. The results are shown in Table 9 and Fig. 4.
Firstly, although all tricks enhance the basic A-GEM, only RV can bring A-GEM
closer to ER and MIR, which reiterates that the direct replay of the memory
samples is more effective than the gradient projection approach in A-GEM.
For replay-based ER and MIR, LB and NCM are the most effective when M=1k and
can improve the accuracy by around 100% (7.6% $\rightarrow$ 14.5% on average).
KDC, KDC* and SS have similar performance improvement effect and can boost the
accuracy by around 64% (7.6% $\rightarrow$ 12.4% on average). With a larger
memory size, NCM remains very effective, and RV becomes much more helpful.
When M=10k, RV boosts ER’s performance by 74% (18.4% $\rightarrow$ 32.0%) and
improves MIR’s by 82% (19.3% $\rightarrow$ 35.2%). Also, KDC
fails with a larger memory due to over-regularization of the KD term, and the
modified KDC* has much better performance. Compared with other tricks, MI and
RV are more sensitive to the memory size since these tricks highly depend on
the memory. Note that when equipped with NCM or RV, both ER and MIR can
outperform the best performance achieved by the compared methods (Table 8)
when M=5k or 10k.
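As an illustration of the Label trick's core idea, the sketch below restricts the cross-entropy to the classes present in the current mini-batch, so logits of absent old classes receive no gradient (`label_trick_loss` is an illustrative name, not the trick's reference implementation):

```python
import numpy as np

def label_trick_loss(logits, labels):
    """Label trick (LB) sketch: compute the softmax/cross-entropy only over
    the classes that actually appear in the current mini-batch."""
    present = np.unique(labels)                    # classes in this batch
    sub = logits[:, present]                       # keep only their logits
    remap = {c: i for i, c in enumerate(present)}  # class id -> sub-index
    y = np.array([remap[c] for c in labels])
    sub = sub - sub.max(axis=1, keepdims=True)     # stable log-softmax
    log_p = sub - np.log(np.exp(sub).sum(axis=1, keepdims=True))
    return float(-log_p[np.arange(len(y)), y].mean())
```

Because absent classes are excluded from the softmax, their logits are never pushed down, which mitigates the bias against old classes in the last FC layer.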
As shown in Table 10, the running times of LB, KDC, KDC*, MI and SS do not
depend on the memory size and have a limited increase compared to the
baseline. Since NCM needs to calculate the means of all the classes in the
memory and RV requires additional training of the whole memory before
evaluation, their running times increase as the memory size grows.
To sum up, all of the tricks are beneficial to the base methods (A-GEM, ER,
MIR); NCM is a useful and robust trick across all memory sizes, while LB and
RV are more advantageous in smaller and larger memory, respectively. The
running times of NCM and RV go up with the increase in the memory size, and
other tricks only add a fixed overhead to the running time. We also observe
similar results in Split Mini-ImageNet, as shown in Appendix C.
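To make the NCM trick concrete, the sketch below replaces the (biased) FC classifier at inference time with a nearest-class-mean rule over features of the memory samples; the names are illustrative and the deep feature extractor is assumed given:

```python
import numpy as np

def ncm_predict(features, mem_feats, mem_labels):
    """Nearest Class Mean (NCM) sketch: classify each feature vector by the
    closest class mean, where means are computed from memory-sample features."""
    classes = np.unique(mem_labels)
    means = np.stack([mem_feats[mem_labels == c].mean(axis=0) for c in classes])
    # squared Euclidean distance from every feature to every class mean
    d = ((features[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]
```

Since the class means must be recomputed from the whole memory, inference cost grows with memory size, consistent with the running times in Table 10.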
### 8.4 Performance comparison in Online Domain Incremental (ODI) setting
Method | Mini-ImageNet-Noise | Mini-ImageNet-Occlusion | Mini-ImageNet-Blur | CORe50-NI
---|---|---|---|---
Finetune | $11.1\pm 1.0$ | $13.8\pm 1.6$ | $2.4\pm 0.2$ | $14.0\pm 2.8$
Offline | $37.3\pm 0.8$ | $38.6\pm 4.7$ | $11.9\pm 1.0$ | $51.7\pm 1.8$
EWC | $12.5\pm 0.8$ | $14.8\pm 1.1$ | $2.6\pm 0.2$ | $11.6\pm 1.5$
LwF | $9.2\pm 0.9$ | $12.8\pm 0.8$ | $3.4\pm 0.4$ | $11.1\pm 1.1$
Buffer Size | M=1k | M=5k | M=10k | M=1k | M=5k | M=10k | M=1k | M=5k | M=10k | M=1k | M=5k | M=10k
ER | $\mathbf{19.4\pm 1.3}$ | $21.6\pm 1.1$ | $24.3\pm 1.2$ | $\mathbf{19.2\pm 1.5}$ | $\mathbf{23.4\pm 1.4}$ | $23.7\pm 1.1$ | $5.3\pm 0.6$ | $\mathbf{8.6\pm 0.8}$ | $9.4\pm 0.7$ | $24.1\pm 4.2$ | $28.3\pm 3.5$ | $30.0\pm 2.8$
MIR | $18.1\pm 1.1$ | $\mathbf{22.5\pm 1.4}$ | $\mathbf{24.4\pm 0.9}$ | $17.6\pm 0.7$ | $22.0\pm 1.1$ | $\mathbf{23.8\pm 1.2}$ | $\mathbf{5.5\pm 0.5}$ | $8.1\pm 0.6$ | $9.6\pm 1.0$ | $\mathbf{26.5\pm 1.0}$ | $\mathbf{34.0\pm 1.0}$ | $\mathbf{33.3\pm 1.7}$
GSS | $18.9\pm 0.8$ | $21.4\pm 0.9$ | $23.2\pm 1.1$ | $17.7\pm 0.8$ | $21.0\pm 2.2$ | $23.2\pm 1.4$ | $5.2\pm 0.5$ | $7.6\pm 0.6$ | $8.0\pm 0.6$ | $25.5\pm 2.1$ | $27.2\pm 2.0$ | $25.3\pm 2.1$
A-GEM | $14.0\pm 1.3$ | $14.6\pm 0.7$ | $14.2\pm 1.4$ | $16.4\pm 0.7$ | $13.9\pm 2.6$ | $14.4\pm 2.0$ | $4.4\pm 0.4$ | $4.4\pm 0.4$ | $4.3\pm 0.5$ | $12.4\pm 1.1$ | $13.8\pm 1.2$ | $15.0\pm 2.2$
CN-DPM | $4.6\pm 0.5$ | - | - | $3.9\pm 0.8$ | - | - | $2.2\pm 0.2$ | - | - | $9.6\pm 3.9$ | - | -
GDumb | $5.4\pm 1.0$ | $12.5\pm 0.7$ | $15.2\pm 0.5$ | $5.4\pm 0.4$ | $14.2\pm 0.6$ | $20.2\pm 0.4$ | $3.3\pm 0.2$ | $7.5\pm 0.2$ | $\mathbf{10.0\pm 0.2}$ | $9.6\pm 1.5$ | $11.2\pm 2.0$ | $11.5\pm 1.7$
Table 11: The Average Accuracy (end of training) for the ODI setting of Mini-
ImageNet with three nonstationary types (Noise, Occlusion, Blur) and
CORe50-NI.
Figure 5: Average Accuracy, Forgetting, Running Time, Forward
Transfer and Backward Transfer for the ODI setting with a 5k memory buffer.
Each column represents a metric and each row represents a dataset. Forward and
backward transfer are not applicable in CORe50 since it uses one test set for
all tasks.
Since most of the surveyed methods are only evaluated in the class incremental
setting in the original papers, we evaluate them in the ODI setting to
investigate their robustness and ability to generalize to other CL settings.
We assess the methods with CORe50-NI—a dataset designed for ODI—and the
proposed NS-MiniImageNet dataset consisting of three nonstationary types:
noise, occlusion, and blur (see Table 7). The average accuracy at the end of
training is summarized in Table 11.
Generally speaking, all replay-based methods (ER, MIR, GSS) show comparable
performance across three memory sizes and outperform all other methods. GDumb,
the strong baseline that dominates the OCI setting in most cases, is no longer
as competitive as the replay-based methods and fails completely in CORe50-NI.
One of the reasons is that class imbalance, the key cause of forgetting in the
OCI setting, does not exist in ODI since the class labels are the same for all
tasks. Moreover, in the ODI setting, samples in the data stream change
gradually and smoothly with different nonstationary strengths (NS-
MiniImageNet) or nonstationary types (CORe50-NI). Learning new samples
sequentially with the replay samples (replay-based) may be more effective for
the model to adapt to the gradually changing nonstationarity than learning
only the samples in the buffer (GDumb). Additionally, the greedy memory update
strategy (see Algorithm 5 in Appendix) in GDumb is not suitable for the ODI
setting, as the buffer will consist mostly of samples from the latest tasks
due to the greedy update, leaving GDumb with very limited access to samples
from earlier tasks. Using reservoir sampling as the update strategy may
alleviate this shortcoming, since reservoir sampling ensures that every data
point has the same probability of being stored in the memory.
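The two update strategies can be contrasted in a few lines; here a simple FIFO buffer stands in for GDumb's class-balanced greedy sampler, used only to illustrate the recency bias described above:

```python
import random

def greedy_update(buffer, stream, capacity):
    """FIFO-style stand-in for a greedy fill: once full, the newest samples
    overwrite the oldest, so the buffer ends up dominated by recent tasks."""
    for x in stream:
        if len(buffer) >= capacity:
            buffer.pop(0)
        buffer.append(x)
    return buffer

def reservoir_update(buffer, stream, capacity, seed=0):
    """Reservoir sampling: every stream element is retained with equal
    probability capacity/n, regardless of when it arrived."""
    rng = random.Random(seed)
    n = 0  # number of elements seen so far
    for x in stream:
        if len(buffer) < capacity:
            buffer.append(x)
        else:
            j = rng.randint(0, n)  # uniform over 0..n inclusive
            if j < capacity:
                buffer[j] = x
        n += 1
    return buffer
```

After a 1000-element stream, the greedy buffer holds only the last samples, while the reservoir buffer contains samples spread across the whole stream.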
CN-DPM performs poorly in this setting because it is very sensitive to
hyperparameters, and no hyperparameter set that works in this setting could be
found within the same tuning budget as for the other methods.
Regarding methods without a memory buffer, the KD-based LwF underperforms EWC
and Finetune, implying KD may not be useful in the ODI setting.
Another interesting observation is that all methods, including Offline, show
unsatisfactory results in the blur scenario. The pivotal cause may be due to
the backbone model we use (ResNet18 [108]) since a recent study points out
that Gaussian blur can easily degrade the performance of ResNet [114].
In terms of other metrics, as shown in Fig. 5, we find that all methods show
positive forward transfer, and replay-based methods show much better backward
transfer than others. The first reason is that tasks in the ODI setting share
more cross-task resemblances than the OCI setting; secondly, the bias towards
the current task due to class imbalance does not happen since new tasks
contain the same class labels as old tasks. Thus, the model is able to perform
zero-shot learning (forward transfer) and improve the preceding tasks
(backward transfer).
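For reference, the forward and backward transfer numbers in Fig. 5 can be computed from the task-accuracy matrix roughly as follows (GEM-style definitions [13]; the random-guess baseline term of FWT is omitted here for brevity, and the survey's exact formulas may differ):

```python
import numpy as np

def transfer_metrics(acc):
    """Compute Backward Transfer (BWT) and Forward Transfer (FWT) from an
    accuracy matrix acc[i, j] = accuracy on task j after training on task i
    (tasks indexed 0..T-1)."""
    T = acc.shape[0]
    # BWT: how much final accuracy on earlier tasks changed after all training
    bwt = np.mean([acc[T - 1, j] - acc[j, j] for j in range(T - 1)])
    # FWT: zero-shot accuracy on each task before it is trained on
    fwt = np.mean([acc[j - 1, j] for j in range(1, T)])
    return float(bwt), float(fwt)
```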
In summary, replay-based methods exhibit more robust and surpassing
performance in the ODI setting. Considering the running time and the
performance in the larger scale dataset, MIR stands out as a versatile and
competitive method in this setting.
### 8.5 Overall comments for methods and tricks
We summarize the key findings and give comments on each method and trick based
on our findings in Table 12 and Table 13.
Method | Comments
---|---
Regularization-based
EWC++ | 1. Ineffective in both OCI and ODI settings 2. Suffers from gradient explosion
LwF | 1. Effective on small scale datasets in OCI (achieves similar performance as replay-based methods with small memory) 2. Ineffective in ODI setting
Memory-based
ER | 1. Efficient training time over other memory-based methods 2. Better than GSS in most cases but worse than MIR, especially with a large memory buffer
MIR | 1. A versatile and competitive method in both OCI and ODI settings 2. Works better on a large scale dataset and a large memory buffer
GSS | 1. Inefficient training time 2. Worse than other memory-based methods in most cases
iCaRL | 1. Best performance (with large margins) with a small memory buffer on small scale datasets
A-GEM | 1. Ineffective in both OCI and ODI settings
GDumb | 1. Best performance with a large memory buffer on small scale datasets in OCI setting 2. Ineffective in ODI mostly due to its memory update strategy 3. Inefficient training time due to training from scratch at every inference point
Parameter-isolation-based
CN-DPM | 1. Effective when memory size is small 2. Sensitive to hyperparameters and when testing on a new dataset, it may not find a working hyperparameter set given the same tuning budget as others 3. Longest training time among compared methods
Table 12: Overall comments for compared methods Trick | Comments
---|---
LB | 1. Effective when memory buffer is small 2. Fixed and limited training time overhead
KDC | 1. Fails because of over-regularization of knowledge distillation loss
KDC* | 1. Provides moderate improvement with fixed and acceptable training time overhead
MI | 1. Better improvement with a larger memory buffer 2. Training time increases with more iterations
SS | 1. Similar improvement as KDC* but with less training time overhead
NCM | 1. Provides very strong improvement across different memory sizes 2. Baselines equipped with it outperform state-of-the-art methods when the memory buffer is large 3. Inference time increases with the growth of the memory size
RV | 1. Presents very competitive improvement, especially with a larger memory buffer 2. Baselines equipped with it outperform state-of-the-art methods 3. Training time increases with the growth of the memory size but it is more efficient than NCM
Table 13: Overall comments for compared tricks
## 9 Trendy Directions in Online CL
In this section, we discuss some emerging directions in online CL that have
attracted interest and are expected to gain more attention in the future.
#### Raw-Data-Free Methods
In some applications, storing raw images is not feasible due to privacy and
security concerns, and this calls for CL methods that maintain reasonable
performance without storing raw data. Regularization [12, 11] is one of the
directions but [111] shows that this approach has theoretical limitations in
the class incremental setting and cannot be used alone to reach decent
performance. We also have empirically confirmed their claims in this work.
Generative replay [115, 47] is another direction but it is not viable for more
complex datasets as the current deep generative models still cannot generate
satisfactory images for such datasets [17, 50].
Feature replay is a promising direction where latent features of the old
samples at a given layer (the feature extraction layer) are replayed instead of raw
data [65, 116, 117, 118]. Since the model changes along the training process,
to keep the latent features valid, [118] proposes to slow-down—in the limit
case, freeze—the learning of all the layers before the feature extraction
layer, while [65] proposes a feature adaptation method to map previous
features to their correct values as the model is updated. Another way is to
generate latent features with a deep generative model [117].
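A minimal sketch of the feature-replay loop described above, assuming a frozen feature extractor and an abstract `head_update` training step (both names are illustrative, not any cited method's API):

```python
import numpy as np

class FeatureReplay:
    """Feature-replay sketch: store latent features of old samples (computed
    by a frozen feature extractor) instead of raw data, and mix them into
    every update of the classifier head."""
    def __init__(self, extractor, seed=0):
        self.extractor = extractor
        self.feats, self.labels = [], []
        self.rng = np.random.default_rng(seed)

    def step(self, head_update, xs, ys, replay_k=32):
        new_f = [self.extractor(x) for x in xs]        # frozen layers
        k = min(replay_k, len(self.feats))
        idx = self.rng.choice(len(self.feats), size=k, replace=False) if k else []
        # train the head on new features plus replayed old features
        head_update(new_f + [self.feats[i] for i in idx],
                    list(ys) + [self.labels[i] for i in idx])
        self.feats += new_f
        self.labels += list(ys)
```

Freezing (or slowing) the extractor is what keeps the stored features valid as training proceeds, which is the constraint [118] and [65] address in different ways.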
Other recently proposed approaches also avoid storing raw data. For example,
SDC [78] leverages embedding networks [119] and the
nearest class mean classifier [87]. The approach proposes a method to estimate
the drift of features during learning the current task and compensate for the
drift in the absence of previous samples. DMC [73] trains a separate model for
new tasks and combines the new and old models using publicly available
unlabeled data via a double distillation training objective. DSLDA [120]
freezes the feature extractor and uses deep Streaming Linear Discriminant
Analysis [121] to train the output layer incrementally.
With the increasing data privacy and security concerns, the raw-data-free
methods are expected to attract more research endeavour in the coming years.
#### Meta Learning
Meta-learning is an emerging learning paradigm where a neural network evolves
from multiple related learning episodes and generalizes the learned knowledge
to unseen tasks [122]. Since meta-learning builds up a potential framework to
advance CL, a lot of meta-learning based CL methods have been proposed
recently, and most of them support the online setting. MER [60] combines
experience replay with optimization based meta-learning to maximize transfer
and minimize interference based on future gradients. OML [123] is a meta-
objective that uses interference as a training signal to learn a
representation that accelerates future learning and avoids catastrophic
interference. More recently, iTAML [81] proposes to learn a task-agnostic
model that automatically predicts the task and quickly adapts to the predicted
task with meta-update. La-MAML [64] proposes an efficient gradient-based meta-
learning that incorporates per-parameter learning rates for online CL. MERLIN
[66] proposes an online CL method based on consolidation in a meta-space,
namely, the latent space that generates model weights for solving downstream
tasks. In [124], the authors propose Continual-MAML, an online extension of
MAML [125] that can cope with the new CL scenario they introduce. We believe meta-
learning based online CL methods will continue to be popular with recent
advances in meta-learning.
#### CL in Other Areas
Although image classification and reinforcement learning are the main focuses
for most CL works, CL has drawn more and more attention in other areas. Object
detection has been another emerging topic in CL, and multiple works have been
proposed lately to tackle this problem. Most methods leverage KD [40] to
alleviate CF, and the main differences between the methods are the base object
detector and distillation parts in the network [126, 127, 128, 129]. More
recently, a meta-learning based approach has been proposed to reshape model
gradients for better information sharing across incremental tasks [130]. A
replay-based method is introduced to address streaming object detection by
replaying compressed representation in a fixed memory buffer [131].
Beyond computer vision, CL with sequential data and recurrent neural networks
(RNNs) has gained attention over the past few years. Recent works have
confirmed that RNNs, including LSTMs, are also immensely affected by CF [132,
133, 134]. In [132], the authors unify GEM [13] and Net2Net [135] to tackle
forgetting in RNNs. More recently, [136] shows that weight-importance-based CL
in RNNs is limited and that hypernetwork-based approaches are more
effective in alleviating forgetting. Meanwhile, [137] proposes a learning rule
to preserve network dynamics within subspaces for previous tasks and encourage
interfering dynamics to explore orthogonal subspaces when learning new tasks.
Moreover, multiple works are proposed to address general CL language learning
[138, 139] and specific language tasks, such as dialogue systems [140, 141,
142], image captioning [143], sentiment classification [144] and sentence
representation learning [145].
Recommender systems have also started to adopt CL [146, 147, 148, 149]. ADER
[148] is proposed to handle CF in session-based recommendation using the
adaptive distillation loss and the replay-with-herding [88] technique. GraphSAIL
[149] is introduced for Graph Neural Networks based recommender systems to
preserve a user’s long-term preference during incremental model updates using
local structure distillation, global structure distillation and self-embedding
distillation.
Several works also address the deployment of CL in practice. [150] introduces
on-the-job learning, which requires a deployed model to discover new tasks,
collect training data continuously and incrementally learn new tasks without
interrupting the application. The author also uses chat-bots and self-driving
cars as the examples to highlight the necessity of on-the-job learning. [151]
presents a reference architecture for self-maintaining intelligent systems
that can adapt to shifting data distributions, cope with outliers, retrain
when necessary, and learn new tasks incrementally. [152] discusses the
clinical application of CL from three perspectives: diagnosis, prediction and
treatment decisions. [153] addresses a practical scenario where a high-
capacity server interacts with a large group of resource-limited edge devices
and proposes a Dual User-Adaptation framework which disentangles user-
adaptation into model personalization on the server and local data
regularization on the user device.
## 10 Conclusion
To better understand the relative advantages of recently proposed online CL
approaches and the settings where they work best, we performed extensive
experiments with nine methods and seven tricks in the online class incremental
(OCI) and online domain incremental (ODI) settings.
Regarding the performance in the OCI setting (see Table 8, Fig. 2 and 3), we
conclude:
* 1.
For memory-free methods, LwF is effective in CIFAR100 and Mini-ImageNet,
showing similar performance as replay-based methods with a small memory
buffer. However, all memory-free methods fail in the larger CORe50-NC.
* 2.
When the memory buffer is small, iCaRL shows the best performance (by large
margins) in CIFAR100 and Mini-ImageNet, followed by CN-DPM.
* 3.
With a larger memory buffer, GDumb—a simple baseline—outperforms methods
designed specifically for the CL problem in CIFAR100 and Mini-ImageNet at the
expense of much longer training times.
* 4.
In the larger and more realistic CORe50-NC dataset, MIR consistently surpasses
all the other methods across different memory sizes.
* 5.
We experimentally and theoretically confirm that a key cause of CF is the bias
towards new classes in the last fully connected layer due to the imbalance
between previous data and new data [42, 68, 93].
* 6.
None of the methods show any positive forward and backward transfer due to the
bias mentioned above.
The conclusions from our experiments for the OCI tricks (see Table 9, Fig. 4)
are as follows:
* 1.
When the memory size is small, LB and NCM are the most effective, showing
around 64% relative improvement.
* 2.
With a larger memory buffer, NCM remains effective, and RV becomes more
helpful, showing around 80% relative improvement.
* 3.
When equipped with NCM or RV, both ER and MIR can outperform the best
performance of the compared methods without tricks.
* 4.
The running times of NCM and RV increase with the growth in memory size, but
other tricks only add a fixed overhead to the running time.
For the ODI setting (see Table 11, Fig. 5), we conclude:
* 1.
Generally speaking, all replay-based methods (ER, MIR, GSS) show comparable
performance across three memory sizes and outperform all other methods.
* 2.
GDumb, the strong baseline that dominates the OCI setting in most cases, is no
longer effective, possibly due to its memory update strategy.
* 3.
Other OCI methods cannot generalize to the ODI setting.
Detailed comments for compared methods and tricks can be found in Table 12 and
Table 13.
In conclusion, by leveraging the best methods and tricks identified in this
comparative survey, online CL (with a very small mini-batch) is now
approaching offline performance, bringing CL much closer to its ultimate goal
of matching offline training and opening it up for effective deployment on
edge and other RAM-limited devices.
## 11 Acknowledgement
This research was supported by LG AI Research.
## References
* [1] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., Imagenet large scale visual recognition challenge, International journal of computer vision 115 (3) (2015) 211–252.
* [2] M. McCloskey, N. J. Cohen, Catastrophic interference in connectionist networks: The sequential learning problem, in: Psychology of learning and motivation, Vol. 24, Elsevier, 1989, pp. 109–165.
* [3] I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, Y. Bengio, An empirical investigation of catastrophic forgetting in gradient-based neural networks, arXiv preprint arXiv:1312.6211 (2013).
* [4] G. A. Carpenter, S. Grossberg, A massively parallel architecture for a self-organizing neural pattern recognition machine, Computer vision, graphics, and image processing 37 (1) (1987) 54–115.
* [5] M. Mermillod, A. Bugaiska, P. Bonin, The stability-plasticity dilemma: Investigating the continuum from catastrophic forgetting to age-limited learning effects, Frontiers in psychology 4 (2013) 504.
* [6] Z. Chen, B. Liu, Lifelong machine learning, Synthesis Lectures on Artificial Intelligence and Machine Learning 12 (3) (2018) 1–207.
* [7] J. Yoon, E. Yang, J. Lee, S. J. Hwang, Lifelong learning with dynamically expandable networks, in: International Conference on Learning Representations, 2018.
URL https://openreview.net/forum?id=Sk7KsfW0-
* [8] S.-A. Rebuffi, A. Kolesnikov, G. Sperl, C. H. Lampert, icarl: Incremental classifier and representation learning, in: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2017, pp. 2001–2010.
* [9] A. Chaudhry, P. K. Dokania, T. Ajanthan, P. H. Torr, Riemannian walk for incremental learning: Understanding forgetting and intransigence, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 532–547.
* [10] G. M. van de Ven, A. Tolias, Three scenarios for continual learning, ArXiv abs/1904.07734 (2019).
* [11] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al., Overcoming catastrophic forgetting in neural networks, Proceedings of the National Academy of Sciences of the United States of America 114 13 (2017) 3521–3526.
* [12] Z. Li, D. Hoiem, Learning without forgetting, in: ECCV, Springer, 2016, pp. 614–629.
* [13] D. Lopez-Paz, M. A. Ranzato, Gradient episodic memory for continual learning, in: Advances in Neural Information Processing Systems 30, 2017, pp. 6467–6476.
* [14] T. Lesort, V. Lomonaco, A. Stoian, D. Maltoni, D. Filliat, N. Díaz-Rodríguez, Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges, Information Fusion 58 (2020) 52–68.
* [15] M. De Lange, R. Aljundi, M. Masana, S. Parisot, X. Jia, A. Leonardis, G. Slabaugh, T. Tuytelaars, A continual learning survey: Defying forgetting in classification tasks, arXiv preprint arXiv:1909.08383 (2019).
* [16] S. Farquhar, Y. Gal, Towards robust evaluations of continual learning (2018). arXiv:1805.09733.
* [17] R. Aljundi, E. Belilovsky, T. Tuytelaars, L. Charlin, M. Caccia, M. Lin, L. Page-Caccia, Online continual learning with maximal interfered retrieval, in: Advances in Neural Information Processing Systems 32, 2019, pp. 11849–11860.
* [18] R. Aljundi, M. Lin, B. Goujaud, Y. Bengio, Gradient based sample selection for online continual learning, in: Advances in Neural Information Processing Systems 32, 2019, pp. 11816–11825.
* [19] S. Lee, J. Ha, D. Zhang, G. Kim, A neural dirichlet process mixture model for task-free continual learning, in: International Conference on Learning Representations, 2020.
* [20] D. Rolnick, A. Ahuja, J. Schwarz, T. Lillicrap, G. Wayne, Experience replay for continual learning, in: Advances in Neural Information Processing Systems 32, 2019, pp. 350–360.
* [21] A. Nagabandi, C. Finn, S. Levine, Deep online learning via meta-learning: Continual adaptation for model-based RL, in: International Conference on Learning Representations, 2019.
* [22] C. Kaplanis, M. Shanahan, C. Clopath, Continual reinforcement learning with complex synapses, Vol. 80 of Proceedings of Machine Learning Research, PMLR, Stockholmsmässan, Stockholm Sweden, 2018, pp. 2497–2506.
* [23] D. Rao, F. Visin, A. Rusu, R. Pascanu, Y. W. Teh, R. Hadsell, Continual unsupervised representation learning, in: Advances in Neural Information Processing Systems, 2019, pp. 7647–7657.
* [24] A. Prabhu, P. H. Torr, P. K. Dokania, Gdumb: A simple approach that questions our progress in continual learning, in: European Conference on Computer Vision, Springer, 2020, pp. 524–540.
* [25] Z. Mai, H. Kim, J. Jeong, S. Sanner, Batch-level experience replay with review for continual learning, arXiv preprint arXiv:2007.05683 (2020).
* [26] G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, S. Wermter, Continual lifelong learning with neural networks: A review, Neural Networks 113 (2019) 54 – 71.
* [27] G. I. Parisi, V. Lomonaco, Online continual learning on sequences, in: Recent Trends in Learning From Data, Springer, 2020, pp. 197–221.
* [28] Y.-C. Hsu, Y.-C. Liu, A. Ramasamy, Z. Kira, Re-evaluating continual learning scenarios: A categorization and case for strong baselines, arXiv preprint arXiv:1810.12488 (2018).
* [29] B. Pfülb, A. Gepperth, A comprehensive, application-oriented study of catastrophic forgetting in DNNs, in: International Conference on Learning Representations, 2019.
* [30] R. Kemker, A. Abitino, M. McClure, C. Kanan, Measuring catastrophic forgetting in neural networks, in: AAAI, 2018.
* [31] A. Chaudhry, M. Ranzato, M. Rohrbach, M. Elhoseiny, Efficient lifelong learning with a-GEM, in: International Conference on Learning Representations, 2019.
* [32] A. Gepperth, B. Hammer, Incremental learning algorithms and applications, 2016.
* [33] T. L. Hayes, R. Kemker, N. D. Cahill, C. Kanan, New metrics and experimental paradigms for continual learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 2031–2034.
* [34] A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania, P. H. S. Torr, M. Ranzato, On tiny episodic memories in continual learning (2019). arXiv:1902.10486.
* [35] S.-W. Lee, J.-H. Kim, J. Jun, J.-W. Ha, B.-T. Zhang, Overcoming catastrophic forgetting by incremental moment matching, in: Advances in neural information processing systems, 2017, pp. 4652–4662.
* [36] R. Aljundi, F. Babiloni, M. Elhoseiny, M. Rohrbach, T. Tuytelaars, Memory aware synapses: Learning what (not) to forget, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 139–154.
* [37] F. Zenke, B. Poole, S. Ganguli, Continual learning through synaptic intelligence, Proceedings of machine learning research 70 (2017) 3987.
* [38] H. Ritter, A. Botev, D. Barber, Online structured laplace approximations for overcoming catastrophic forgetting, in: Advances in Neural Information Processing Systems, 2018, pp. 3738–3748.
* [39] X. He, H. Jaeger, Overcoming catastrophic interference using conceptor-aided backpropagation, in: International Conference on Learning Representations, 2018.
* [40] G. Hinton, O. Vinyals, J. Dean, Distilling the knowledge in a neural network, arXiv preprint arXiv:1503.02531 (2015).
* [41] F. M. Castro, M. J. Marin-Jimenez, N. Guil, C. Schmid, K. Alahari, End-to-end incremental learning, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018.
* [42] Y. Wu, Y. Chen, L. Wang, Y. Ye, Z. Liu, Y. Guo, Y. Fu, Large scale incremental learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 374–382.
* [43] A. Rannen, R. Aljundi, M. B. Blaschko, T. Tuytelaars, Encoder based lifelong learning, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 1320–1328.
* [44] C. V. Nguyen, Y. Li, T. D. Bui, R. E. Turner, Variational continual learning, in: International Conference on Learning Representations, 2018.
* [45] X. Tao, X. Chang, X. Hong, X. Wei, Y. Gong, Topology-preserving class-incremental learning, in: European Conference on Computer Vision, Springer, 2020, pp. 254–270.
* [46] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: Advances in neural information processing systems, 2014, pp. 2672–2680.
* [47] H. Shin, J. K. Lee, J. Kim, J. Kim, Continual learning with deep generative replay, in: Advances in Neural Information Processing Systems, 2017, pp. 2990–2999.
* [48] Y. Wu, Y. Chen, L. Wang, Y. Ye, Z. Liu, Y. Guo, Z. Zhang, Y. Fu, Incremental classifier learning with generative adversarial networks, arXiv preprint arXiv:1802.00853 (2018).
* [49] G. M. van de Ven, A. S. Tolias, Generative replay with feedback connections as a general strategy for continual learning, arXiv preprint arXiv:1809.10635 (2018).
* [50] T. Lesort, H. Caselles-Dupré, M. Garcia-Ortiz, A. Stoian, D. Filliat, Generative models from the perspective of continual learning, in: 2019 International Joint Conference on Neural Networks (IJCNN), IEEE, 2019, pp. 1–8.
* [51] A. Mallya, S. Lazebnik, Packnet: Adding multiple tasks to a single network by iterative pruning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7765–7773.
* [52] C. Fernando, D. Banarse, C. Blundell, Y. Zwols, D. Ha, A. A. Rusu, A. Pritzel, D. Wierstra, Pathnet: Evolution channels gradient descent in super neural networks, arXiv preprint arXiv:1701.08734 (2017).
* [53] J. Serra, D. Suris, M. Miron, A. Karatzoglou, Overcoming catastrophic forgetting with hard attention to the task, in: International Conference on Machine Learning, 2018, pp. 4548–4557.
* [54] A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, R. Hadsell, Progressive neural networks, arXiv preprint arXiv:1606.04671 (2016).
* [55] J. Yoon, E. Yang, J. Lee, S. J. Hwang, Lifelong learning with dynamically expandable networks, in: International Conference on Learning Representations, 2018.
* [56] R. Aljundi, P. Chakravarty, T. Tuytelaars, Expert gate: Lifelong learning with a network of experts, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3366–3375.
* [57] J. Rajasegaran, M. Hayat, S. H. Khan, F. S. Khan, L. Shao, Random path selection for continual learning, in: Advances in Neural Information Processing Systems, Vol. 32, Curran Associates, Inc., 2019, pp. 12669–12679.
* [58] Z. Borsos, M. Mutnỳ, A. Krause, Coresets via bilevel optimization for continual learning and streaming, arXiv preprint arXiv:2006.03875 (2020).
* [59] P. Buzzega, M. Boschini, A. Porrello, D. Abati, S. Calderara, Dark experience for general continual learning: a strong, simple baseline, arXiv preprint arXiv:2004.07211 (2020).
* [60] M. Reimer, I. Cases, R. Ajemian, M. Liu, I. Rish, Y. Tu, G. Tesauro, Learning to learn without forgetting by maximizing transfer and minimizing interference, in: ICLR, 2019.
* [61] A. Chrysakis, M.-F. Moens, Online continual learning from imbalanced data, Proceedings of Machine Learning Research (2020).
* [62] X. Jin, J. Du, X. Ren, Gradient based memory editing for task-free continual learning, arXiv preprint arXiv:2006.15294 (2020).
* [63] C. D. Kim, J. Jeong, G. Kim, Imbalanced continual learning with partitioning reservoir sampling, arXiv preprint arXiv:2009.03632 (2020).
* [64] G. Gupta, K. Yadav, L. Paull, La-maml: Look-ahead meta learning for continual learning, arXiv preprint arXiv:2007.13904 (2020).
* [65] A. Iscen, J. Zhang, S. Lazebnik, C. Schmid, Memory-efficient incremental learning through feature adaptation, arXiv preprint arXiv:2004.00713 (2020).
* [66] K. Joseph, V. N. Balasubramanian, Meta-consolidation for continual learning, arXiv preprint arXiv:2010.00352 (2020).
* [67] D. Roy, P. Panda, K. Roy, Tree-cnn: a hierarchical deep convolutional neural network for incremental learning, Neural Networks 121 (2020) 148–160.
* [68] B. Zhao, X. Xiao, G. Gan, B. Zhang, S.-T. Xia, Maintaining discrimination and fairness in class incremental learning, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 13208–13217.
* [69] S. Hou, X. Pan, C. C. Loy, Z. Wang, D. Lin, Learning a unified classifier incrementally via rebalancing, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 831–839.
* [70] E. Belouadah, A. Popescu, Il2m: Class incremental learning with dual memory, in: Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 583–592.
* [71] J. He, R. Mao, Z. Shao, F. Zhu, Incremental learning in online scenario, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 13926–13935.
* [72] P. Dhar, R. V. Singh, K.-C. Peng, Z. Wu, R. Chellappa, Learning without memorizing, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 5138–5146.
* [73] J. Zhang, J. Zhang, S. Ghosh, D. Li, S. Tasci, L. Heck, H. Zhang, C.-C. J. Kuo, Class-incremental learning via deep model consolidation, in: The IEEE Winter Conference on Applications of Computer Vision, 2020, pp. 1131–1140.
* [74] M. Riemer, T. Klinger, D. Bouneffouf, M. Franceschini, Scalable recollections for continual lifelong learning, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, 2019, pp. 1352–1359.
* [75] L. Caccia, E. Belilovsky, M. Caccia, J. Pineau, Online learned continual compression with adaptive quantization modules, arXiv preprint (2019).
* [76] D. Maltoni, V. Lomonaco, Continuous learning in single-incremental-task scenarios, Neural Networks 116 (2019) 56–73.
* [77] Y. Liu, Y. Su, A.-A. Liu, B. Schiele, Q. Sun, Mnemonics training: Multi-class incremental learning without forgetting, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 12245–12254.
* [78] L. Yu, B. Twardowski, X. Liu, L. Herranz, K. Wang, Y. Cheng, S. Jui, J. v. d. Weijer, Semantic drift compensation for class-incremental learning, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 6982–6991.
* [79] O. Ostapenko, M. Puscas, T. Klein, P. Jahnichen, M. Nabi, Learning to remember: A synaptic plasticity driven framework for continual learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 11321–11329.
* [80] Y. Wu, Y. Chen, L. Wang, Y. Ye, Z. Liu, Y. Guo, Z. Zhang, Y. Fu, Incremental classifier learning with generative adversarial networks, arXiv preprint arXiv:1802.00853 (2018).
* [81] J. Rajasegaran, S. Khan, M. Hayat, F. S. Khan, M. Shah, itaml: An incremental task-agnostic meta-learning approach, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 13588–13597.
* [82] D. Abati, J. Tomczak, T. Blankevoort, S. Calderara, R. Cucchiara, B. E. Bejnordi, Conditional channel gated networks for task-aware continual learning, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 3931–3940.
* [83] J. Schwarz, W. Czarnecki, J. Luketina, A. Grabska-Barwinska, Y. W. Teh, R. Pascanu, R. Hadsell, Progress & compress: A scalable framework for continual learning, in: ICML, 2018.
* [84] X. Liu, M. Masana, L. Herranz, J. Van de Weijer, A. M. Lopez, A. D. Bagdanov, Rotate your networks: Better weight consolidation and less catastrophic forgetting, in: 2018 24th International Conference on Pattern Recognition (ICPR), IEEE, 2018, pp. 2262–2268.
* [85] Z. Mai, D. Shim, J. Jeong, S. Sanner, H. Kim, J. Jang, Adversarial shapley value experience replay for task-free continual learning, arXiv preprint arXiv:2009.00093 (2020).
* [86] J. S. Vitter, Random sampling with a reservoir, ACM Transactions on Mathematical Software (TOMS) 11 (1) (1985) 37–57.
* [87] T. Mensink, J. Verbeek, F. Perronnin, G. Csurka, Distance-based image classification: Generalizing to new classes at near-zero cost, IEEE transactions on pattern analysis and machine intelligence 35 (11) (2013) 2624–2637.
* [88] M. Welling, Herding dynamical weights to learn, in: Proceedings of the 26th Annual International Conference on Machine Learning, 2009, pp. 1121–1128.
* [89] T. L. Hayes, N. D. Cahill, C. Kanan, Memory efficient experience replay for streaming learning, in: 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 9769–9776.
* [90] T. S. Ferguson, Bayesian density estimation by mixtures of normal distributions, in: Recent advances in statistics, Elsevier, 1983, pp. 287–302.
* [91] D. Lin, Online learning of nonparametric mixture models via sequential variational approximation, in: Advances in Neural Information Processing Systems, 2013, pp. 395–403.
* [92] D. P. Kingma, M. Welling, Auto-encoding variational bayes, arXiv preprint arXiv:1312.6114 (2013).
* [93] H. Ahn, T. Moon, A simple class decision balancing for incremental learning (2020). arXiv:2003.13947.
* [94] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
* [95] Q. Dong, S. Gong, X. Zhu, Imbalanced deep learning by minority class incremental rectification, IEEE transactions on pattern analysis and machine intelligence 41 (6) (2018) 1367–1381.
* [96] X. Yin, X. Yu, K. Sohn, X. Liu, M. Chandraker, Feature transfer learning for deep face recognition with under-represented data, arXiv preprint arXiv:1803.09014 (2018).
* [97] S. Gidaris, N. Komodakis, Dynamic few-shot visual learning without forgetting, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4367–4375.
* [98] K. Lee, K. Lee, J. Shin, H. Lee, Overcoming catastrophic forgetting with unlabeled data in the wild, in: Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 312–321.
* [99] H. Jung, J. Ju, J. Kim, Less-forgetful learning for domain expansion in deep neural networks, in: AAAI-18: Thirty-Second AAAI Conference on Artificial Intelligence, the Association for the Advancement of Artificial Intelligence, 2018.
* [100] A. Douillard, M. Cord, C. Ollion, T. Robert, Podnet: Pooled outputs distillation for small-tasks incremental learning, in: European Conference on Computer Vision, Springer, 2020.
* [101] C. Zeno, I. Golan, E. Hoffer, D. Soudry, Task agnostic continual learning using online variational bayes (2018). arXiv:1803.10123.
* [102] K. Javed, F. Shafait, Revisiting distillation and incremental classifier learning, in: Asian Conference on Computer Vision, Springer, 2018, pp. 3–17.
* [103] A. Krizhevsky, Learning multiple layers of features from tiny images, Tech. rep., University of Toronto (April 2009).
* [104] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al., Matching networks for one shot learning, in: Advances in neural information processing systems, 2016, pp. 3630–3638.
* [105] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, Imagenet: A large-scale hierarchical image database, in: 2009 IEEE conference on computer vision and pattern recognition, Ieee, 2009, pp. 248–255.
* [106] V. Lomonaco, D. Maltoni, Core50: a new dataset and benchmark for continuous object recognition, in: Conference on Robot Learning, 2017, pp. 17–26.
* [107] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE 86 (11) (1998) 2278–2324.
* [108] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
* [109] V. Lomonaco, L. Pellegrini, P. Rodriguez, M. Caccia, Q. She, Y. Chen, Q. Jodelet, R. Wang, Z. Mai, D. Vazquez, et al., Cvpr 2020 continual learning in computer vision competition: Approaches, results, current challenges and future directions, arXiv preprint arXiv:2009.09929 (2020).
* [110] L. Liu, Z. Kuang, Y. Chen, J. H. Xue, W. Yang, W. Zhang, Incdet: In defense of elastic weight consolidation for incremental object detection, IEEE Transactions on Neural Networks and Learning Systems (2020) 1–14. doi:10.1109/TNNLS.2020.3002583.
* [111] T. Lesort, A. Stoian, D. Filliat, Regularization shortcomings for continual learning, arXiv preprint arXiv:1912.03049 (2019).
* [112] J. Knoblauch, H. Husain, T. Diethe, Optimal continual learning has perfect memory and is np-hard, arXiv preprint arXiv:2006.05188 (2020).
* [113] E. Belouadah, A. Popescu, Scail: Classifier weights scaling for class incremental learning, in: The IEEE Winter Conference on Applications of Computer Vision, 2020, pp. 1266–1275.
* [114] P. Roy, S. Ghosh, S. Bhattacharya, U. Pal, Effects of degradations on deep neural network architectures, arXiv preprint arXiv:1807.10108 (2018).
* [115] C. Wu, L. Herranz, X. Liu, J. van de Weijer, B. Raducanu, et al., Memory replay gans: Learning to generate new categories without forgetting, in: Advances in Neural Information Processing Systems, 2018, pp. 5962–5972.
* [116] T. L. Hayes, K. Kafle, R. Shrestha, M. Acharya, C. Kanan, Remind your neural network to prevent catastrophic forgetting, in: European Conference on Computer Vision, Springer, 2020, pp. 466–483.
* [117] X. Liu, C. Wu, M. Menta, L. Herranz, B. Raducanu, A. D. Bagdanov, S. Jui, J. van de Weijer, Generative feature replay for class-incremental learning, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020, pp. 226–227.
* [118] L. Pellegrini, G. Graffieti, V. Lomonaco, D. Maltoni, Latent replay for real-time continual learning, arXiv preprint arXiv:1912.01100 (2019).
* [119] S. Chopra, R. Hadsell, Y. LeCun, Learning a similarity metric discriminatively, with application to face verification, in: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), Vol. 1, IEEE, 2005, pp. 539–546.
* [120] T. L. Hayes, C. Kanan, Lifelong machine learning with deep streaming linear discriminant analysis, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020, pp. 220–221.
* [121] S. Pang, S. Ozawa, N. Kasabov, Incremental linear discriminant analysis for classification of data streams, IEEE transactions on Systems, Man, and Cybernetics, part B (Cybernetics) 35 (5) (2005) 905–914.
* [122] T. Hospedales, A. Antoniou, P. Micaelli, A. Storkey, Meta-learning in neural networks: A survey, arXiv preprint arXiv:2004.05439 (2020).
* [123] K. Javed, M. White, Meta-learning representations for continual learning, in: Advances in Neural Information Processing Systems, 2019, pp. 1820–1830.
* [124] M. Caccia, P. Rodriguez, O. Ostapenko, F. Normandin, M. Lin, L. Caccia, I. Laradji, I. Rish, A. Lacoste, D. Vazquez, et al., Online fast adaptation and knowledge accumulation: a new approach to continual learning, arXiv preprint arXiv:2003.05856 (2020).
* [125] C. Finn, P. Abbeel, S. Levine, Model-agnostic meta-learning for fast adaptation of deep networks, in: ICML, 2017.
* [126] K. Shmelkov, C. Schmid, K. Alahari, Incremental learning of object detectors without catastrophic forgetting, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 3400–3409.
* [127] D. Li, S. Tasci, S. Ghosh, J. Zhu, J. Zhang, L. Heck, Rilod: near real-time incremental learning for object detection at the edge, in: Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, 2019, pp. 113–126.
* [128] L. Chen, C. Yu, L. Chen, A new knowledge distillation for incremental object detection, in: 2019 International Joint Conference on Neural Networks (IJCNN), IEEE, 2019, pp. 1–7.
* [129] X. Liu, H. Yang, A. Ravichandran, R. Bhotika, S. Soatto, Multi-task incremental learning for object detection.
* [130] K. Joseph, J. Rajasegaran, S. Khan, F. S. Khan, V. Balasubramanian, L. Shao, Incremental object detection via meta-learning, arXiv preprint arXiv:2003.08798 (2020).
* [131] M. Acharya, T. L. Hayes, C. Kanan, Rodeo: Replay for online object detection, arXiv preprint arXiv:2008.06439 (2020).
* [132] S. Sodhani, S. Chandar, Y. Bengio, Toward training recurrent neural networks for lifelong learning, Neural computation 32 (1) (2020) 1–35.
* [133] M. Schak, A. Gepperth, A study on catastrophic forgetting in deep lstm networks, in: International Conference on Artificial Neural Networks, Springer, 2019, pp. 714–728.
* [134] G. Arora, A. Rahimi, T. Baldwin, Does an lstm forget more than a cnn? an empirical study of catastrophic forgetting in nlp, in: Proceedings of the The 17th Annual Workshop of the Australasian Language Technology Association, 2019, pp. 77–86.
* [135] T. Chen, I. Goodfellow, J. Shlens, Net2net: Accelerating learning via knowledge transfer, arXiv preprint arXiv:1511.05641 (2015).
* [136] B. Ehret, C. Henning, M. R. Cervera, A. Meulemans, J. von Oswald, B. F. Grewe, Continual learning in recurrent neural networks with hypernetworks, arXiv preprint arXiv:2006.12109 (2020).
* [137] L. Duncker, L. Driscoll, K. V. Shenoy, M. Sahani, D. Sussillo, Organizing recurrent network dynamics by task-computation to enable continual learning, Advances in Neural Information Processing Systems 33 (2020).
* [138] F.-K. Sun, C.-H. Ho, H.-Y. Lee, Lamol: Language modeling for lifelong language learning, in: International Conference on Learning Representations, 2019.
* [139] Y. Li, L. Zhao, K. Church, M. Elhoseiny, Compositional language continual learning, in: International Conference on Learning Representations, 2019.
* [140] F. Mi, L. Chen, M. Zhao, M. Huang, B. Faltings, Continual learning for natural language generation in task-oriented dialog systems, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, 2020, pp. 3461–3474.
* [141] S. Mazumder, B. Liu, S. Wang, N. Ma, Lifelong and interactive learning of factual knowledge in dialogues, arXiv preprint arXiv:1907.13295 (2019).
* [142] B. Liu, S. Mazumder, Lifelong learning dialogue systems: Chatbots that self-learn on the job, arXiv preprint arXiv:2009.10750 (2020).
* [143] R. Del Chiaro, B. Twardowski, A. Bagdanov, J. van de Weijer, Ratt: Recurrent attention to transient tasks for continual image captioning, Advances in Neural Information Processing Systems 33 (2020).
* [144] Z. Ke, B. Liu, H. Wang, L. Shu, Continual learning with knowledge transfer for sentiment classification, in: Proceedings of European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2020.
* [145] T. Liu, L. Ungar, J. Sedoc, Continual learning for sentence representations using conceptors, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019, pp. 3274–3279.
* [146] F. Mi, B. Faltings, Memory augmented neural model for incremental session-based recommendation, arXiv preprint arXiv:2005.01573 (2020).
* [147] F. Yuan, G. Zhang, A. Karatzoglou, X. He, J. Jose, B. Kong, Y. Li, One person, one model, one world: Learning continual user representation without forgetting, arXiv preprint arXiv:2009.13724 (2020).
* [148] F. Mi, X. Lin, B. Faltings, Ader: Adaptively distilled exemplar replay towards continual learning for session-based recommendation, in: Fourteenth ACM Conference on Recommender Systems, 2020, pp. 408–413.
* [149] Y. Xu, Y. Zhang, W. Guo, H. Guo, R. Tang, M. Coates, Graphsail: Graph structure aware incremental learning for recommender systems, in: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020, pp. 2861–2868.
* [150] B. Liu, Learning on the job: Online lifelong and continual learning., in: AAAI, 2020, pp. 13544–13549.
* [151] T. Diethe, T. Borchert, E. Thereska, B. Balle, N. Lawrence, Continual learning in practice, arXiv preprint arXiv:1903.05202 (2019).
* [152] C. S. Lee, A. Y. Lee, Clinical applications of continual learning machine learning, The Lancet Digital Health 2 (6) (2020) e279–e281.
* [153] M. D. Lange, X. Jia, S. Parisot, A. Leonardis, G. Slabaugh, T. Tuytelaars, Unsupervised model personalization while preserving privacy and scalability: An open problem, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 14463–14472.
## Appendix A Algorithms
In this section, we provide the algorithms for different memory update
strategies described in Section 6.
procedure MemoryUpdate$(\mathrm{mem\_sz},t,n,B)$
  $j\leftarrow 0$
  for _$(\mathbf{x},y)$ in $B$_ do
    $M\leftarrow|\mathcal{M}|$ $\triangleright$ Number of samples currently stored in the memory
    if _$M<\mathrm{mem\_sz}$_ then
      $\mathcal{M}$.append$(\mathbf{x},y,t)$
    else
      $i\leftarrow\operatorname{randint}(0,n+j)$
      if _$i<\mathrm{mem\_sz}$_ then
        $\mathcal{M}[i]\leftarrow(\mathbf{x},y,t)$ $\triangleright$ Overwrite memory slot
      end if
    end if
    $j\leftarrow j+1$
  end for
  return $\mathcal{M}$
end procedure
Algorithm 3 Reservoir sampling
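The reservoir update above can be sketched in plain Python (a minimal illustration, not the authors' implementation; samples are kept as `(x, y, t)` tuples in a list, and `n` is the number of stream samples seen before the current batch):

```python
import random

def memory_update(memory, mem_sz, n, batch, t):
    """Reservoir-sampling memory update over one incoming batch.

    memory : list of (x, y, t) tuples already stored
    mem_sz : maximum number of slots in the buffer
    n      : number of stream samples seen before this batch
    batch  : iterable of (x, y) pairs from the current task t
    """
    j = 0
    for x, y in batch:
        if len(memory) < mem_sz:
            memory.append((x, y, t))      # buffer not full yet: always store
        else:
            i = random.randint(0, n + j)  # uniform index over all samples seen
            if i < mem_sz:
                memory[i] = (x, y, t)     # overwrite a random slot
        j += 1
    return memory
```

This gives every stream sample the same probability of residing in the buffer, regardless of when it arrived.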
Input: $n,M$
Initialize: $\mathcal{M},\mathcal{C}$
Receive: $(x,y)$
Update: $(x,y,\mathcal{M})$
  $X,Y\leftarrow\operatorname{RandomSubset}(\mathcal{M},n)$
  $g\leftarrow\nabla_{\theta}\ell(x,y)$; $G\leftarrow\nabla_{\theta}\ell(X,Y)$
  $c=\max_{i}\left(\frac{\langle g,G_{i}\rangle}{\|g\|\|G_{i}\|}\right)+1$ $\triangleright$ make the score positive
  if _$\operatorname{len}(\mathcal{M})\geq M$_ then
    if _$c<1$_ then $\triangleright$ cosine similarity $<0$
      $i\sim P(i)=\mathcal{C}_{i}/\sum_{j}\mathcal{C}_{j}$
      $r\sim\operatorname{uniform}(0,1)$
      if _$r<\mathcal{C}_{i}/(\mathcal{C}_{i}+c)$_ then
        $\mathcal{M}_{i}\leftarrow(x,y)$; $\mathcal{C}_{i}\leftarrow c$
      end if
    end if
  else
    $\mathcal{M}\leftarrow\mathcal{M}\cup\{(x,y)\}$; $\mathcal{C}\leftarrow\mathcal{C}\cup\{c\}$
  end if
Algorithm 4 GSS-Greedy
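The score and the probabilistic replacement step of GSS-Greedy can be sketched as follows, assuming gradients are given as flat vectors (the helper names `gss_score` and `gss_maybe_replace` are ours, not from the cited paper):

```python
import math
import random

def cosine(u, v):
    """Cosine similarity of two flat gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def gss_score(g, G):
    """Maximal cosine similarity of the new-sample gradient g to the
    gradients G of a random memory subset, shifted by +1 into [0, 2]."""
    return max(cosine(g, Gi) for Gi in G) + 1.0

def gss_maybe_replace(memory, scores, x, y, c, capacity, rng=random):
    """One GSS-Greedy update; memory and scores are parallel lists."""
    if len(memory) >= capacity:
        if c < 1.0:  # max cosine similarity < 0: gradient-diverse sample
            total = sum(scores)
            i = rng.choices(range(len(scores)),
                            weights=[s / total for s in scores])[0]
            if rng.random() < scores[i] / (scores[i] + c):
                memory[i], scores[i] = (x, y), c
    else:
        memory.append((x, y))
        scores.append(c)
    return memory, scores
```

Low-score (diverse) candidates thus preferentially displace high-score (redundant) entries, keeping the buffer's gradients spread out.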
Init: counter $C_{0}=\{\}$, $\mathcal{D}_{0}=\{\}$ with capacity $k$. Online samples arrive from $t=1$
function SAMPLE$(x_{t},y_{t},\mathcal{D}_{t-1},\mathcal{Y}_{t-1})$ $\triangleright$ Input: new sample and past state
  $k_{c}=k/|\mathcal{Y}_{t-1}|$
  if _$y_{t}\notin\mathcal{Y}_{t-1}$ or $C_{t-1}[y_{t}]<k_{c}$_ then
    if _$\sum_{i}C_{i}\geq k$_ then $\triangleright$ If memory is full, replace
      $y_{r}=\operatorname{argmax}(C_{t-1})$ $\triangleright$ Select largest class, break ties randomly
      $(x_{i},y_{i})=\mathcal{D}_{t-1}.\operatorname{random}(y_{r})$ $\triangleright$ Select random sample from class $y_{r}$
      $\mathcal{D}_{t}=(\mathcal{D}_{t-1}-(x_{i},y_{i}))\cup(x_{t},y_{t})$
      $C_{t}[y_{r}]=C_{t-1}[y_{r}]-1$
    else $\triangleright$ If memory has space, add
      $\mathcal{D}_{t}=\mathcal{D}_{t-1}\cup(x_{t},y_{t})$
    end if
    $\mathcal{Y}_{t}=\mathcal{Y}_{t-1}\cup y_{t}$
    $C_{t}[y_{t}]=C_{t-1}[y_{t}]+1$
  end if
  return $\mathcal{D}_{t}$
end function
Algorithm 5 Greedy Balancing Sampler
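The greedy balancing step can be sketched in Python, keeping per-class sample lists and counts in plain dictionaries (an illustrative reimplementation, not the authors' code):

```python
import random

def balanced_sample(x_t, y_t, D, counts, k, rng=random):
    """One step of the greedy class-balancing sampler.

    D      : dict mapping class label -> list of stored samples
    counts : dict mapping class label -> number of stored samples
    k      : total memory capacity
    """
    seen_classes = set(counts)
    k_c = k / max(len(seen_classes), 1)  # per-class quota
    if y_t not in seen_classes or counts[y_t] < k_c:
        if sum(counts.values()) >= k:
            # Memory full: evict a random sample from the largest class,
            # breaking ties randomly via the secondary sort key.
            y_r = max(counts, key=lambda c: (counts[c], rng.random()))
            D[y_r].pop(rng.randrange(len(D[y_r])))
            counts[y_r] -= 1
        D.setdefault(y_t, []).append(x_t)
        counts[y_t] = counts.get(y_t, 0) + 1
    return D, counts
```

Samples of over-represented classes are simply skipped once their quota is reached, so the buffer drifts toward a uniform class distribution.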
## Appendix B Experiment Details
### B.1 Dataset Detail
The summary of dataset statistics is provided in Table 1.
The strength of each nonstationary type used in the experiments are summarized
below.
* 1.
Noise: [0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2, 3.6]
* 2.
Occlusion: [0.0, 0.07, 0.13, 0.2, 0.27, 0.33, 0.4, 0.47, 0.53, 0.6]
* 3.
Blur: [0.0, 0.28, 0.56, 0.83, 1.11, 1.39, 1.67, 1.94, 2.22, 2.5]
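Each of the three lists is simply ten evenly spaced levels from 0 up to a maximum strength (3.6 for noise, 0.6 for occlusion, 2.5 for blur), rounded to two decimals. A small helper that reproduces them (our reconstruction of the pattern, not code from the paper):

```python
def strength_schedule(max_strength, n_tasks=10):
    """Evenly spaced nonstationarity levels from 0 to max_strength,
    one level per task, rounded to two decimals."""
    step = max_strength / (n_tasks - 1)
    return [round(i * step, 2) for i in range(n_tasks)]

# strength_schedule(3.6) -> the noise levels listed above
# strength_schedule(0.6) -> the occlusion levels
# strength_schedule(2.5) -> the blur levels
```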
Dataset | #Task | #Train/task | #Test/task | #Class | Image Size | Setting
---|---|---|---|---|---|---
Split MiniImageNet | 20 | 2500 | 500 | 100 | 3x84x84 | OCI
Split CIFAR-100 | 20 | 2500 | 500 | 100 | 3x32x32 | OCI
CORe50-NC | 9 | 12000$\sim$24000 | 4500$\sim$9000 | 50 | 3x128x128 | OCI
NS-MiniImageNet | 10 | 5000 | 1000 | 100 | 3x84x84 | ODI
CORe50-NI | 8 | 15000 | 44972 | 50 | 3x128x128 | ODI
Table 1: Summary of dataset statistics
### B.2 Implementation Details
This section describes the implementation details of each method, including
the hyperparameter grid considered for each dataset (see Table 2). As we
described in Section 4 of the main paper, the first $D^{CV}$ tasks are used
for hyperparameter tuning to satisfy the requirement that the model does not
see the data of a task more than once, and $D^{CV}$ is set to 2 in this work.
* 1.
EWC++: We set the $\alpha$ in Eq. (8) to 0.9 as suggested in the original
paper. We tune three hyperparameters in EWC++: learning rate (LR), weight
decay (WD) and $\lambda$ in Eq. (7).
* 2.
LwF: We set the temperature factor $T=2$ as in the original paper and other CL
papers. The coefficient $\lambda$ for $\mathcal{L}_{KD}$ is set to
$\lambda = \frac{|C_{new}|}{|C_{old}|+|C_{new}|}$
following the idea from [42], and the coefficient for $\mathcal{L}_{CE}$ is set
to $1-\lambda$.
* 3.
ER: The reservoir sampling used in MemoryUpdate follows Algorithm 3 in A. For
MemoryRetrieval, we randomly select samples with mini-batch size of 10
irrespective of the size of the memory buffer.
* 4.
MIR: To reduce the computational cost, MIR selects $C$ random samples from the
memory buffer as the candidate set to perform the criterion search. We tune
LR, WD as well as $C$.
* 5.
GSS: For every incoming sample, GSS computes the cosine similarity of the new
sample gradient to $n$ gradient vectors of samples randomly drawn from the
memory buffer (see Algorithm 4 in A). Other than LR and WD, we also tune $n$.
* 6.
iCaRL: We replace the herding-based [88] memory update method with reservoir
sampling to accommodate the online setting. We use random sampling for
MemoryRetrieval and tune LR and WD.
* 7.
A-GEM: We use reservoir sampling for MemoryUpdate and random sampling for
MemoryRetrieval and tune LR and WD.
* 8.
CN-DPM: CN-DPM is much more sensitive to hyperparameters than the other
methods; we need different hyperparameter grids for different scenarios and
datasets. Other than LR, we tune $\alpha$, the concentration parameter
controlling how sensitive the model is to new data, and classifier_chill $cc$,
the parameter used to scale the VAE loss to a magnitude similar to the
classifier loss.
* 9.
GDumb: We use a batch size of 16 and 30 epochs for all memory sizes. We clip
the gradient norm with a max norm of 10.0 and tune LR and WD.
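The gradient clipping used for GDumb can be illustrated on a flat gradient vector: if the global L2 norm exceeds the threshold, all components are rescaled so the norm equals the threshold (a generic clip-by-global-norm sketch, not tied to any particular framework):

```python
import math

def clip_grad_norm(grads, max_norm=10.0):
    """Rescale a flat list of gradient values so the global L2 norm
    does not exceed max_norm (a no-op if already within the bound)."""
    total = math.sqrt(sum(g * g for g in grads))
    if total <= max_norm or total == 0.0:
        return grads
    scale = max_norm / total
    return [g * scale for g in grads]
```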
Method | CIFAR-100 | Mini-ImageNet | CORe50-NC | NS-MiniImageNet | CORe50-NI
---|---|---|---|---|---
EWC++ | LR: [0.0001, 0.001, 0.01, 0.1]
WD: [0.0001, 0.001], $\lambda$: [0, 100, 1000]
LwF | LR: [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1]
WD: [0.0001, 0.001, 0.01, 0.1]
ER | LR: [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1]
WD: [0.0001, 0.001, 0.01, 0.1]
MIR | LR:[0.0001, 0.001, 0.01, 0.1]
WD: [0.0001, 0.001], $C$: [25, 50, 100]
GSS | LR:[0.0001, 0.001, 0.01, 0.1]
WD: [0.0001, 0.001], $n$: [10, 20, 50]
iCaRL | LR: [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1]
WD: [0.0001, 0.001, 0.01, 0.1]
A-GEM | LR: [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1]
WD: [0.0001, 0.001, 0.01, 0.1]
CN-DPM | LR: [0.0001, 0.001, 0.01, 0.1] | [0.001, 0.005, 0.01] | [0.001, 0.01] | [0.001, 0.005, 0.01] | [0.001, 0.01]
$cc$: [0.001, 0.01, 0.1] | [0.001, 0.0015, 0.002] | [0.0005, 0.001, 0.002] | [0.0005, 0.001, 0.002] | [0.0005, 0.001, 0.002]
$\alpha$: [-100, -300, -500] | [-1200, -1000, -800] | [-1200, -1000, -800, -300] | [-15000, -5000, -500] | [-1200, -1000, -800, -300]
GDumb | LR: [0.001, 0.01, 0.1], WD: [0.0001, 0.000001]
Table 2: Hyperparameter grid for the compared methods.
## Appendix C Additional Experiments and Results
### C.1 More Results for OCI Setting
Figs. 1, 2 and 3 show the average accuracy measured at the end of each task on
Split CIFAR-100, Mini-ImageNet and CORe50-NC with three different memory
buffer sizes (1k, 5k and 10k).
Figure 1: The average accuracy measured by the end of each task for the OCI setting on Split CIFAR-100 with three memory sizes.

Figure 2: The average accuracy measured by the end of each task for the OCI setting on Split Mini-ImageNet with three memory sizes.

Figure 3: The average accuracy measured by the end of each task for the OCI setting on CORe50-NC with three memory sizes.
### C.2 OCI Tricks on Split Mini-ImageNet
We evaluate the tricks described in Section 7.3 on Split Mini-ImageNet. As
shown in Table 1 and Fig. 4, we find results similar to those on Split
CIFAR-100: all tricks are beneficial. LB and KDC* are most useful when the
memory buffer is small, while NCM and RV are more effective when the memory
buffer is large. One main difference is that NCM is not as effective as on
CIFAR-100 with a 10k memory buffer, since base methods with NCM cannot
outperform the best OCI performance.
Finetune | $3.4\pm 0.2$
---|---
Offline | $51.9\pm 0.5$
Method | A-GEM | ER | MIR
Buffer Size | M=1k | M=5k | M=10k | M=1k | M=5k | M=10k | M=1k | M=5k | M=10k
NA | $3.4\pm 0.2$ | $3.7\pm 0.3$ | $3.3\pm 0.3$ | $6.4\pm 0.9$ | $14.5\pm 2.1$ | $15.9\pm 2.0$ | $6.4\pm 0.9$ | $16.5\pm 2.1$ | $21.0\pm 1.1$
LB | $5.8\pm 0.8$ | $5.8\pm 0.5$ | $5.4\pm 0.9$ | $14.4\pm 2.1$ | $19.3\pm 2.3$ | $22.1\pm 1.1$ | $\mathbf{17.1\pm 0.9}$ | $21.7\pm 0.7$ | $23.0\pm 0.8$
KDC | $8.0\pm 1.1$ | $7.5\pm 1.5$ | $8.2\pm 1.7$ | $12.3\pm 2.5$ | $15.4\pm 0.4$ | $14.6\pm 2.1$ | $14.3\pm 0.5$ | $15.8\pm 0.4$ | $15.5\pm 0.5$
KDC* | $5.6\pm 0.4$ | $5.5\pm 0.5$ | $5.4\pm 0.4$ | $\mathbf{16.4\pm 0.8}$ | $20.3\pm 2.5$ | $23.0\pm 3.1$ | $16.4\pm 0.6$ | $25.1\pm 0.8$ | $26.1\pm 0.9$
MI | $3.5\pm 0.2$ | $3.7\pm 0.2$ | $3.6\pm 0.3$ | $6.4\pm 0.6$ | $16.3\pm 1.3$ | $24.1\pm 1.3$ | $6.6\pm 0.6$ | $15.2\pm 1.1$ | $22.0\pm 1.9$
SS | $5.7\pm 0.9$ | $6.2\pm 0.8$ | $5.7\pm 0.8$ | $12.5\pm 1.9$ | $20.5\pm 2.1$ | $24.1\pm 1.1$ | $14.2\pm 1.0$ | $21.9\pm 0.8$ | $24.7\pm 0.9$
RV | $4.1\pm 0.2$ | $\mathbf{19.9\pm 3.7}$ | $\mathbf{25.5\pm 4.7}$ | $11.4\pm 0.6$ | $\mathbf{32.1\pm 0.8}$ | $\mathbf{36.3\pm 1.5}$ | $9.1\pm 0.5$ | $\mathbf{29.9\pm 0.7}$ | $\mathbf{37.3\pm 0.5}$
NCM | $\mathbf{10.2\pm 0.4}$ | $11.7\pm 1.5$ | $13.0\pm 0.5$ | $14.2\pm 0.7$ | $26.7\pm 0.7$ | $28.2\pm 0.6$ | $13.6\pm 0.6$ | $26.4\pm 0.7$ | $28.6\pm 0.4$
Best OCI | $14.7\pm 0.4$ | ${21.1\pm 1.7}$ | ${31.0\pm 1.4}$ | $14.7\pm 0.4$ | ${21.1\pm 1.7}$ | ${31.0\pm 1.4}$ | $14.7\pm 0.4$ | ${21.1\pm 1.7}$ | ${31.0\pm 1.4}$
Table 1: Performance of compared tricks for the OCI setting on Split Mini-
ImageNet. We report average accuracy (end of training) for memory buffer with
size 1k, 5k and 10k. Best OCI refers to the best performance from the compared
methods in Table 8. Figure 4: Comparison of various tricks for the OCI
setting on Split Mini-ImageNet. We report average accuracy (end of training)
for memory buffer with size 1k, 5k and 10k.
### C.3 More Results for ODI Setting
Fig. 5, 6 and 7 show the average accuracy measured by the end of each task on
Mini-ImageNet-Noise, Mini-ImageNet-Occlusion and CORe50-NI with three
different memory buffer sizes (1k, 5k, 10k).
Figure 5: The average accuracy measured by the end of each task for the ODI setting on Mini-ImageNet-Noise with three memory sizes.

Figure 6: The average accuracy measured by the end of each task for the ODI setting on Mini-ImageNet-Occlusion with three memory sizes.

Figure 7: The average accuracy measured by the end of each task for the ODI setting on CORe50-NI with three memory sizes.
# Finding hidden-feature-dependent laws inside a data set and classifying it
using neural networks
Thilo Moshagen Nihal Acharya Adde Ajay Navilarekal Rajgopal
###### Abstract
The logcosh loss function for neural networks has been developed to combine
the advantage of the absolute error loss function of not overweighting
outliers with the advantage of the mean square error of continuous derivative
near the mean, which makes the last phase of learning easier. It is clear, and
one experiences it soon, that in the case of clustered data, an artificial
neural network with logcosh loss learns the bigger cluster rather than the
mean of the two. Even more so, the ANN, when used for regression of a set-
valued function, will learn a value close to one of the choices, in other
words, one branch of the set-valued function, while a mean-square-error NN
will learn the value in between. This work suggests a method that uses
artificial neural networks with logcosh loss to find the branches of set-
valued mappings in parameter-outcome sample sets and classifies the samples
according to those branches.
Keywords— Neural Networks, Clustering, Classification, Model selection, Loss
function, Objective Function, ANOVA, Hypothesis testing
## 1 Introduction
Given a set of data tuples, _clustering_ algorithms [AR] decide which elements of the set belong together, i.e. form a subset in the sense that their mutual distances are smaller than their distances to the rest of the set. Further, there are many well-established and also newer methods for deciding whether two or more sets of samples belong to the same population. This is mainly the field of statistical hypothesis testing [SOA99], in which a testable hypothesis is evaluated on observed data modelled as the realised values of a collection of random variables. In a different setting, a data model can be defined as a set of mathematical laws that might be valid inside a data set and that describe how the data elements relate to one another. Given measurements and the parameters that caused them, model selection tells which model is most likely valid for the observed measurements. Variants of ANOVA (analysis of variance) combine the two and are used to analyse the differences among group means in a sample [KS14].
We consider a method that answers the question whether a set of vector-valued samples, where some components can be seen as cause and at least one as an effect, obeys some possibly unknown rule, or whether it rather splits into groups that fulfil different rules. In other words, assuming that any input data point consists of components $\boldsymbol{x}$ that are presumably independent and component(s) $\boldsymbol{y}$ that depend on them by some generally unknown rules, the suggested method finds the rules in the shape of an artificial neural network's weights _and_ clusters the data into groups obeying each of the found rules.
The suggested method has three key features. The first is the use of neural networks to extract one of the unknown rules that are valid in parts of the data; this extraction is done by supervised learning, which in mathematical terms is a regression. The second key feature is that the supervised learning uses a loss function that is approximately linear in the distance for large errors and thus puts less weight on far-off data than the square error loss function. The $L_{1}$-norm fulfils this, for example, but here the logcosh loss function was used, as it facilitates learning while still having the desired property. When used for regression, such a loss function learns a function that approximates the strongest cluster of output data well, while hardly taking clusters with fewer members into account. Data lying away from the found regression graph is thus probably obeying another law; the distance to the regression function found by the artificial neural network is then used as a classification criterion. This is the third key feature. The data that is approximated well by the found regression function is considered to be governed by that function. With the badly approximated data, a new network is trained, and all points where its forecast matches are considered to be governed by that second regression. This procedure is continued until no relevant data remains unclassified. This is, in brief, how the suggested method works.
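As an illustration, the iterative procedure above can be sketched in a few lines of Python. A grid search over constant predictors stands in as a minimal substitute for training a neural network, and the cluster locations, tolerance and stopping fraction are illustrative choices, not values from our experiments:

```python
import math

def logcosh_loss(c, ys):
    # log(cosh(r)) via the overflow-safe identity |r| + log(1 + e^{-2|r|}) - log 2
    total = 0.0
    for y in ys:
        r = abs(y - c)
        total += r + math.log1p(math.exp(-2.0 * r)) - math.log(2.0)
    return total / len(ys)

def fit_constant(ys, lo=-10.0, hi=10.0, steps=2001):
    # stand-in for training an ANN: grid-search the best constant predictor
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min(grid, key=lambda c: logcosh_loss(c, ys))

def find_branches(ys, tol=1.0, min_frac=0.05):
    # iterative procedure from the text: fit, peel off well-approximated points, repeat
    branches, remaining = [], list(ys)
    while len(remaining) >= min_frac * len(ys):
        c = fit_constant(remaining)
        branches.append(c)
        kept = [y for y in remaining if abs(y - c) >= tol]
        if len(kept) == len(remaining):  # nothing classified: stop
            break
        remaining = kept
    return branches

# two "branches": 70 samples near y = 0, 30 samples near y = 5
data = [0.0 + 0.01 * i for i in range(70)] + [5.0 + 0.01 * i for i in range(30)]
print(find_branches(data))
```

Each pass fits the dominant remaining branch and peels off the points it explains, mirroring the repeat-until-unclassified loop described above.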
## 2 Problem Setting
### 2.1 Mathematical Description
Let
$\left\\{(\boldsymbol{x},\boldsymbol{y})\in\mathbb{R}^{d}\times\mathbb{R}^{d}\right\\}$
(1)
be a set of data points, where it can be assumed that $\boldsymbol{y}$ depends on $\boldsymbol{x}$. To simplify the setting, and also because artificial neural networks do not encourage vector-valued output, we restrict ourselves to
$\left\\{(\boldsymbol{x},y)\in\Omega\subset\mathbb{R}^{d}\times\mathbb{R}\right\\}$
(2)
where $\Omega$ is the domain in which observations are defined.
The presence of clusters $\left\\{(\boldsymbol{x},y)\right\\}_{i},\ i=1,\dots,M$, where inside each cluster the $(\boldsymbol{x},y)$-tuples obey a different law, is now described mathematically as follows: each cluster's independent variable points are subsumed in the set $\hat{X}_{i}\subset X$, $X$ being all $\boldsymbol{x}$ in $\Omega$, and the law valid inside it is (first defined on the samples only, the hat denoting this):
$\displaystyle\hat{\Phi}_{i}:\hat{X}_{i}$
$\displaystyle\longrightarrow\mathbb{R}$ (3)
$\displaystyle\boldsymbol{x}$
$\displaystyle\mapsto\hat{\Phi}_{i}(\boldsymbol{x})=y(\boldsymbol{x})$
(4)
where $\hat{\Phi}_{i}$ is a single-valued function, mapping each point in $\hat{X}_{i}$ to a unique value in the range. The existence of multiple $\hat{\Phi}_{i}$ is due to hidden features, for which nearby $\boldsymbol{x}$ can have very distant $y$. Each $\hat{\Phi}_{i}$ induces, by regression, a continuous function $\Phi_{i}$ on some super-set $X_{i}$ of $\hat{X}_{i}$, the continuous counterpart of the measurements. These $\Phi_{i}$ then have reasonably bounded derivatives, which a single mapping $\Phi$ fitting all $\boldsymbol{x}$ would not have. There may exist a certain subset of $X$ on which all $\Phi_{i}$ give the same output, while on the $X_{i}$ the $\Phi_{i}$ give different values. Thus, $\Phi_{i}$ may be seen as defined only on $X_{i}$, or alternatively on $X$, in which case the $\Phi_{i}$ coincide in parts of $\Omega$. This can be seen as a multi-valued or set-valued function
$\displaystyle\Phi(\boldsymbol{x})=\left\\{\begin{array}[]{cc}\Phi_{1}\\\
\vdots\\\ \Phi_{M}\end{array}\right\\}.$ (8)
The set-valuedness in this nomenclature is expressed by this vector-
valuedness. It captures the property that the data input-output pairs indeed
belong to different situations or populations. The task to solve in this
nomenclature is: Given the set $X$, find the rules $\Phi_{i}$ and the subsets
$X_{i}$ where they are valid.
### 2.2 Outline of Strategy
One seeks to learn each $X_{i}$'s rule $\Phi_{i}$ by regression, which for general $\Phi_{i}$ is best done by an artificial neural network using the logcosh loss function: it weights outliers less, similar to the MAE loss, while exhibiting good performance during gradient descent, like the MSE. A network trained with logcosh loss will thus learn the biggest cluster $\Phi_{1}$ efficiently, because it weights smaller clusters away from the biggest one only linearly with distance, unlike the squared error loss, and the data can then be classified as belonging to the biggest cluster or not. In our research, we train the network with the logcosh loss function with the aim of classifying the clustered data. This approach is demonstrated on a simple 1-dimensional and a 2-dimensional problem.
## 3 Artificial Neural Network Regression Quality as a Classification
Criterion
Supervised learning of an Artificial Neural Network [GBC16] has the task of learning a function that maps an input to an output based on example input-output pairs: the set of input variables $\boldsymbol{x}$ and the output variables $\boldsymbol{y}$ are available, and an algorithm is used to learn the mapping function from the input to the output. The goal is to approximate the mapping function so well that for new, unseen input data $\boldsymbol{x}$ the output variables $\boldsymbol{y}$ can be predicted. An ANN is based on a collection of connected nodes called neurons, loosely modelled on the neurons in a biological brain. Each connection transmits signals from one neuron to another. The signal at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. These connections are called edges. Neurons and edges typically have weights that are adjusted as learning proceeds. Through backpropagation, the network tries to find optimal weights and biases to represent the model. In other words, training an artificial neural network can be represented as an optimization problem, which is ultimately equivalent to minimising the loss function on the data. Therefore the choice of the loss function becomes vital for modelling an efficient network. Our task is to find an appropriate model that fits one of the rules by regression and classifies the clustered data by it. In the following section, we therefore discuss the different available loss functions and choose an appropriate one with the aim of classifying the clustered data.
### 3.1 Loss Functions Properties
One key feature of the suggested method is the choice of the loss function. We
will point out in the following that for a regression problem, the minimizer
of loss functions that rise linearly with the distance lies inside a cluster,
while for the quadratic loss functions, it lies between clusters. The choice
of loss function depends on a number of factors including the presence of
outliers, choice of the machine learning algorithm, time efficiency of
gradient descent, ease of finding the derivatives and confidence of
predictions. [NZL18] investigated some representative loss functions and analysed their latent properties. The main goal of that investigation was to find out why bilateral loss functions are more suitable for regression tasks, while unilateral loss functions are more suitable for classification tasks. This section covers in detail the different loss functions which can be used for our regression problem, as discussed by [Gro18].
#### 3.1.1 Mean Square Error (MSE) or L2 loss
This function originates from the theory of regression, the least-squares method. Mean Square Error (MSE) is the most commonly used regression loss function. MSE is the mean of the squared distances between our target variable $y$ and the predicted values $y_{p}$:
$\mathbf{M.S.E.}=\frac{\sum_{i=1}^{n}(y^{i}-y_{p}^{i})^{2}}{n}$ (9)
It is well known that here a few distant points outweigh the closer points. The MSE loss makes our trained model take outliers seriously, as the contribution of an outlier to the loss is magnified by squaring, so the learning results are biased in favor of the outliers. This can be an advantage: predictions in zones with outliers do not produce huge errors at the outliers, since the MSE took them into account. MSE is thus good to use if the target data conditioned on the input is normally distributed around a mean value and outliers are absent. It has a continuous derivative, so minimisation with gradient methods works well. The described property is a disadvantage for our setting, however: one cluster consists of outliers seen from the other clusters' perspective, so the MSE minimiser would lie right between the clusters.
Figure 1 shows the plots of mean square error loss vs. predictions, where the
target value is 0, and the predicted values range between -100 to 100. The
loss (Y-axis) reaches its minimum value at the prediction (X-axis) = 0. The
range of the loss is 0 to $\infty$.
Figure 1: Plot of Mean Square Error (MSE) Loss
#### 3.1.2 Mean Absolute Error (MAE) or L1 loss
Mean Absolute Error (MAE) is just the mean of absolute errors between the
actual value $y$ and the value predicted $y_{p}$. So it measures the average
magnitude of errors in a set of predictions, without considering their
directions.
$\mathbf{M.A.E.}=\frac{\sum_{i=1}^{n}|(y^{i}-y_{p}^{i})|}{n}$ (10)
As one can see, for this loss function both big and small distances contribute on the same linear scale. The advantage of MAE covers the disadvantage of MSE: since we consider the absolute value, MAE does not put too much weight on the outliers. However, it does not have a continuous derivative and thus frequently oscillates around a minimum during gradient descent. The MSE does a better job there, as it has a continuous derivative and provides a stable solution. Figure 2 shows the plot of the mean absolute error loss with respect to the prediction while the target value is 0, similar to the previous case.
Figure 2: Plot of Mean Absolute Error (MAE) Loss
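The claimed difference between the two minimizers can be checked numerically. In this small pure-Python sketch a grid search over constant predictors replaces gradient descent, and the data set with a single outlier is illustrative:

```python
# The constant minimizing MSE is the mean (pulled toward the outlier),
# while the constant minimizing MAE is a median (robust to it).
data = [0.9, 1.0, 1.0, 1.1, 1.2, 10.0]  # one outlier at 10

def mse(c):
    return sum((y - c) ** 2 for y in data) / len(data)

def mae(c):
    return sum(abs(y - c) for y in data) / len(data)

grid = [i / 100 for i in range(-500, 1500)]  # candidate constants -5.00 .. 14.99
best_mse = min(grid, key=mse)
best_mae = min(grid, key=mae)

print("MSE minimizer:", best_mse)  # near the mean, ~2.53
print("MAE minimizer:", best_mae)  # inside the main cluster around 1
```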
#### 3.1.3 Huber loss
Huber loss is just the absolute error, but it transforms into the squared error for small errors. It is an attempt to overcome MAE's disadvantage of a non-smooth derivative: Huber loss is less sensitive to outliers in the data than the squared error loss, yet it is differentiable at 0. How small the error has to be to make it quadratic depends on a hyperparameter $\delta$, which can be tuned. Huber loss approaches MAE when $\delta\rightarrow 0$ and MSE when $\delta\rightarrow\infty$. It is defined as
$L_{\delta}(y,y_{p})=\left\\{\begin{array}[]{ll}\frac{1}{2}(y-y_{p})^{2}&\mbox{if
}|y-y_{p}|\leq\delta\\\ \delta|y-y_{p}|-\frac{1}{2}\delta^{2}&\mbox{otherwise
}\end{array}\right\\}$ (11)
The choice of $\delta$ becomes increasingly important depending on what one considers an outlier: residuals larger than $\delta$ are minimized as with L1, while residuals smaller than $\delta$ are minimized as with L2. Huber loss thus combines the advantages of both loss functions. It can be really helpful in some cases, as it curves around the minimum, which decreases the gradient. However, the problem with Huber loss is that the hyperparameter $\delta$ might need to be tuned, which is an iterative process. Figure 3 shows the plot of the Huber loss vs. predictions for different values of $\delta$.
Figure 3: Plot of Huber Loss
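A direct implementation of definition (11) makes the limiting behaviour easy to verify; the sample residuals and $\delta$ values below are arbitrary:

```python
def huber(r, delta):
    # eq. (11): quadratic for |r| <= delta, linear with slope delta beyond
    if abs(r) <= delta:
        return 0.5 * r * r
    return delta * abs(r) - 0.5 * delta * delta

# continuity at the switch point |r| = delta: both branches give delta^2 / 2
assert abs(huber(2.0, 2.0) - 0.5 * 2.0 ** 2) < 1e-12
# delta large: the quadratic (MSE-like) branch applies on the whole range
assert huber(3.0, 10.0) == 0.5 * 3.0 ** 2
# delta small: essentially delta * |r|, i.e. MAE-like up to scaling
assert abs(huber(3.0, 1e-3) - (1e-3 * 3.0 - 0.5 * 1e-6)) < 1e-15
```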
#### 3.1.4 Log-Cosh loss
Log-cosh is the logarithm of the hyperbolic cosine of the prediction error. Given the actual value $y$ and the predicted value $y_{p}$, the log-cosh loss is defined as
$L(y,y_{p})=\sum_{i=1}^{n}\operatorname{log}(\cosh(y^{i}-y_{p}^{i}))$ (12)
$\operatorname{log}(\cosh(x))$ is approximately equal to $\frac{x^{2}}{2}$ for small values of $x$ and to $|x|-\operatorname{log}(2)$ for larger values. It is twice differentiable everywhere, unlike the Huber loss. The log-cosh loss function is therefore similar to the mean absolute error with respect to its moderate weighting of outliers, while it behaves stably during gradient descent. Figure 4 shows the plot of the logcosh loss vs. predictions, where the target value is 0 and the predicted values range between -10 and 10.
Figure 4: Plot of Log-cosh Loss
Therefore, in our research the log-cosh loss function was used, and it indeed showed good results in classifying the data based on the hidden features.
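The two asymptotic regimes of $\operatorname{log}(\cosh(x))$ quoted above can be confirmed numerically; the overflow-safe reformulation used below is a standard identity, not part of the original definition:

```python
import math

def logcosh(x):
    # numerically safe log(cosh(x)) = |x| + log(1 + e^{-2|x|}) - log(2)
    a = abs(x)
    return a + math.log1p(math.exp(-2.0 * a)) - math.log(2.0)

# small errors: approximately x^2 / 2 (smooth, MSE-like near the minimum)
assert abs(logcosh(0.01) - 0.5 * 0.01 ** 2) < 1e-8
# large errors: approximately |x| - log(2) (linear, MAE-like in the tails)
assert abs(logcosh(25.0) - (25.0 - math.log(2.0))) < 1e-12
```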
Figure 5 compares the three different loss functions.
Figure 5: Plot of different losses: MSE, MAE and Log-cosh
When clustered data is present, an artificial neural network with the logcosh loss function learns the bigger cluster rather than the mean of the two and can hence be used to classify the clustered data. In the case of MSE, due to the squaring of the error, a few faraway points are weighted more heavily than the nearby points. When learning clustered data, a network with the MSE loss function is affected by these outlying clusters, tries to find the minimum between them, and thereby fails to learn the bigger cluster. For linearly growing loss functions like logcosh and MAE, just the sum of distances counts, so a few far-away points do not count more than several nearby points, and therefore a regression value near or through the heavier cluster is learnt. Though the MAE loss function shares this bigger-cluster property, it is non-smooth and has a discontinuous derivative, resulting in oscillating behaviour. As mentioned above, since the logcosh loss function combines MAE-like behaviour for larger errors with MSE-like behaviour for smaller ones, it successfully learns the bigger cluster and gives a stable solution. These features of the logcosh loss function are exploited in our research.
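The behaviour described in this paragraph can be reproduced with a small numeric experiment, again using a grid search over constant predictors in place of network training; the cluster sizes and positions are illustrative:

```python
import math

def logcosh(r):
    # overflow-safe log(cosh(r))
    a = abs(r)
    return a + math.log1p(math.exp(-2.0 * a)) - math.log(2.0)

# 70 samples in one cluster (y = 0), 30 in another (y = 5)
ys = [0.0] * 70 + [5.0] * 30
grid = [i / 100 for i in range(-200, 800)]  # candidate constants -2.00 .. 7.99

best_logcosh = min(grid, key=lambda c: sum(logcosh(y - c) for y in ys))
best_mse = min(grid, key=lambda c: sum((y - c) ** 2 for y in ys))

print(best_logcosh)  # pulled toward the bigger cluster, well below the midpoint 2.5
print(best_mse)      # the mean 1.5, between the clusters
```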
### 3.2 One-Dimensional Test Case
#### 3.2.1 Test Problem
We now consider a simple 1D example based on the concept discussed in Section 2. Two simple single-valued polynomial functions were selected and combined in different fractions to obtain a multi-valued data set. This section discusses the problem setting of the 1-dimensional case and thereafter the network behaviour on the chosen data set.
To create a multi-valued data set, two simple functions were selected:
$\Phi_{1}(x)={((x-4)(x+4))}^{2},\quad x\in[-6,6]$ (13)
$\displaystyle\Phi_{2}(x)=\left\\{\begin{array}[]{cc}{((x-4)(x+4))}^{2}&\quad x\in[-6,-4)\\\ 0&\quad x\in[-4,4]\\\
{((x-4)(x+4))}^{2}&\quad x\in(4,6]\\\ \end{array}\right\\}.$ (17)
where $\Phi_{1}$ and $\Phi_{2}$ are two single-valued functions defined on the interval $[-6,6]$.
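A minimal sketch of how such a mixed data set can be generated from eqs. (13) and (17); the sample count, mixing fraction, noise level and random seed are illustrative assumptions, not values from our experiments:

```python
import random

def Phi1(x):
    # eq. (13)
    return ((x - 4) * (x + 4)) ** 2

def Phi2(x):
    # eq. (17): equals Phi1 outside [-4, 4], zero inside
    return 0.0 if -4 <= x <= 4 else Phi1(x)

def make_mixed_dataset(n=1000, frac1=0.6, noise=5.0, seed=0):
    # mix the two branches in the requested fractions and add Gaussian noise
    rng = random.Random(seed)
    samples = []
    for i in range(n):
        x = rng.uniform(-6.0, 6.0)
        f = Phi1 if i < frac1 * n else Phi2
        samples.append((x, f(x) + rng.gauss(0.0, noise)))
    return samples

data = make_mixed_dataset()
```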
#### 3.2.2 Training Strategy
The data set was split such that $80\%$ of the data was used for training and the remaining $20\%$ as test data. Initially, each of the two functions was learned individually with a basic regression neural network, which was then evaluated on the test data to validate the network.
(a) $60\%$ or more of function $\Phi_{1}$
(b) $60\%$ or more of function $\Phi_{2}$
Figure 6: Plot of test and predicted data for the functions $\Phi_{1}$ and $\Phi_{2}$ for the 1-dimensional test case.
As seen in Figure 6, the neural network was able to approximate the given functions by reducing the loss function to its minimum. As discussed in Section 2, to set up a multi-valued data set we combine fractions of both sets $\Phi_{1}$ and $\Phi_{2}$ to form a new data set $\Phi$ as per our requirement. The two data sets were combined in different fractions, the network was trained on the combination, and it was then tested on the test data, which is $20\%$ of the new combined data. Noise was added to the data set to replicate real-world data. The network was trained using the logcosh loss function to examine the network behaviour. To compare the functionality of different loss functions, the network was also trained with the MSE and MAE loss functions in a similar setting.
The combined function can be written as follows:
$\displaystyle\Phi(x)=\left\\{\begin{array}[]{cc}\Phi_{1}\\\
\Phi_{2}\end{array}\right\\}.$ (20)
where $\Phi(x)$ combines a multi-valued region, in which each $x$ has two possible outputs $y$ as shown in Figure 7, with a single-valued region (defined on $[-6,-4)\cup(4,6]$).
Figure 7: Plot of data set with noise for 1-dimensional case
#### 3.2.3 Network behaviour
In this section, the behaviour of our network with the chosen architecture is discussed. As discussed earlier, the network was trained with different fractions of the two chosen functions and then tested on the test data. The network was trained entirely with the log-cosh loss function. When using the log-cosh loss, the network predicted one of the two chosen functions with high accuracy, and not the mean of the two. The network predicted the function $\Phi_{1}$ when $60\%$ or more of the combined data set was generated by $\Phi_{1}$, and it predicted the function $\Phi_{2}$ otherwise, as shown in Figure 8.
(a) $60\%$ or more of function $\Phi_{1}$
(b) $60\%$ or more of function $\Phi_{2}$
Figure 8: Plot of Test and Predicted Data for the mixed data set in the
1-dimensional case. The plots illustrate the behaviour of the network when
tested on the test data. The network accurately predicts one of the 2
functions depending on the fraction of functions considered when trained using
log-cosh loss function.
As could be expected (see Section 3.1.4), the logcosh loss function learned the bigger cluster of data, unlike the mean square error loss, which learned the mean of the two functions, or the absolute error, which oscillated between the two chosen functions, as shown in Figure 9. As mentioned in Section 3.1.4, the MSE loss function is affected by the minor cluster due to squaring and thus finds the weighted mean between the two, depending on the composition of the clusters. Unlike MSE, MAE behaves similarly to logcosh and tries to find one of the two clusters. However, since it is non-smooth and has a discontinuous derivative, the prediction oscillates between the clusters when their compositions are nearly equal.
(a) Network behaviour when the model was trained with MSE loss function. The
network predicts the weighted mean of the 2 functions depending on the
fractional composition
(b) Network behaviour when the model was trained with MAE loss function.
Though the network predicts one of the 2 clusters often, the prediction
oscillates between the two when the fractions of the clusters are nearly
equal.
Figure 9: Plot of Test and Predicted Data for the mixed data set in the
1-dimensional case when the network is trained using (a) MSE and (b) MAE loss
function.
### 3.3 Two-Dimensional Test Case
#### 3.3.1 Test Problem
We now choose a 2-dimensional case based on the concept discussed in Section 2. Similar to the 1D case, two 2-dimensional single-valued functions were combined in different fractions to form the multi-valued data set $\Phi$, which was learned by the neural network; finally, the behaviour of our network on these data sets was analysed.
The two functions
$f_{1}(x,y)=xy(2x+2y)$ (21) $f_{2}(x,y)=xy(x^{2}+y^{2})$ (22)
were used as arguments to the sigmoid function. The main reason to use the sigmoid function was to keep the range within $(0,1)$.
$\Phi_{1}(x,y)=\text{sigmoid}(f_{1}(x,y))=\frac{1}{1+e^{-f_{1}(x,y)}}$ (23)
$\Phi_{2}(x,y)=\text{sigmoid}(f_{2}(x,y))=\frac{1}{1+e^{-f_{2}(x,y)}}$ (24)
To set up a multi-valued data set we combined both the data sets $\Phi_{1}$
and $\Phi_{2}$ of the above functions in different fractions to form a
combined data set $\Phi$ as per our requirement. Noise was added to the data
set to replicate the real-world scenario.
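The 2-dimensional data set of eqs. (21)-(24) can be generated analogously; the sampling domain $[-2,2]^{2}$, noise level, counts and seed are illustrative assumptions, since the text does not fix them:

```python
import math
import random

def f1(x, y):
    return x * y * (2 * x + 2 * y)    # eq. (21)

def f2(x, y):
    return x * y * (x ** 2 + y ** 2)  # eq. (22)

def sigmoid(t):
    # overflow-safe logistic function; keeps outputs in (0, 1)
    return 1.0 / (1.0 + math.exp(-t)) if t >= 0 else math.exp(t) / (1.0 + math.exp(t))

def Phi1(x, y):
    return sigmoid(f1(x, y))          # eq. (23)

def Phi2(x, y):
    return sigmoid(f2(x, y))          # eq. (24)

def make_mixed_dataset(n=1000, frac1=0.6, noise=0.05, seed=0):
    # mix the two surfaces in the requested fractions and add Gaussian noise
    rng = random.Random(seed)
    samples = []
    for i in range(n):
        x, y = rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0)
        z = (Phi1 if i < frac1 * n else Phi2)(x, y)
        samples.append((x, y, z + rng.gauss(0.0, noise)))
    return samples

data = make_mixed_dataset()
```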
#### 3.3.2 Training Strategy
The neural network was trained with this data set and then evaluated on the test data, which is $20\%$ of the total combined data.
Figure 10: Plot of data set without noise for a 2-dimensional data set.
Figure 10 shows the plot of the combined data set without noise, where red and orange represent the functions $\Phi_{1}$ and $\Phi_{2}$ respectively. As discussed earlier, in this case nearby points $(x,y)_{1}$ and $(x,y)_{2}$ can have two distant values $z_{1}$ and $z_{2}$. The network with the logcosh loss function is trained with different fractions of the sets $\Phi_{1}$ and $\Phi_{2}$ joined into $\Phi$, with the aim of classifying the two.
#### 3.3.3 Network behaviour
A very noisy data set was used to train the network: the two populations cannot easily be distinguished by visual inspection. After training the network with a combination $\Phi$ of different fractions of $\Phi_{1}$ and $\Phi_{2}$, a clear rule was visible, similar to the 1D case, when the logcosh loss function was used.
(a) $60\%$ or more of function $\Phi_{1}$ values in data
(b) $60\%$ or more of function $\Phi_{2}$ values in data
Figure 11: Plot of Test and Predicted Data for the mixed data set in the
2-dimensional case. The network accurately predicts one of the two functions
depending on the fractional composition of the data set.
The network predicted the function $\Phi_{1}$ when $60\%$ or more of $\Phi$ consisted of values of $\Phi_{1}$, and $\Phi_{2}$ when $60\%$ or more of $\Phi$ consisted of values of $\Phi_{2}$, as shown in Figure 11. In the plots, the red scatter points represent the function $\Phi_{1}$ with noise, and the red surface plot represents the $\Phi_{1}$-values of the function $\Phi$ without noise. Similarly, for the function $\Phi_{2}$, the orange scatter points and the orange surface plot represent the function with and without noise respectively. Finally, the blue scatter points represent the predicted values. The functions were plotted without noise for better visualisation. From Figure 11 it is clear that the network learnt one of the two functions accurately without being influenced by the noise. It can therefore be confirmed that the neural network predicts the bigger of the two clusters when the logcosh loss function is used.
## 4 Conclusion
Based on the network behaviour, we claim that a network with the logcosh loss function can be used to classify data when clusters exist. In the case of clustered data, an artificial neural network with logcosh loss learns the bigger cluster rather than the mean of the two. Even more so, when used for regression of a set-valued function, the ANN will learn a value close to one of the choices, in other words one branch of the set-valued function, while a mean-square-error NN will learn the value in between. Based on this result we have a neural network that not only classifies the data based on hidden features but also predicts the majority cluster with high accuracy. In real-world scenarios, the unavailability of enough parameters to build a regression model is a common problem, and it then becomes difficult to represent the model with the limited available data. Using this approach, we can classify clusters of data based on a hidden feature that is not available to us beforehand. It can also be used to check whether there are enough features to represent the model, in other words, to confirm whether a feature is essential for representing the model.
## References
* [AR] Charu C. Aggarwal and Chandan K. Reddy “Data Clustering” Chapman and Hall/CRC URL: https://www.oreilly.com/library/view/data-clustering/9781466558229/
* [GBC16] Ian Goodfellow, Yoshua Bengio and Aaron Courville “Deep Learning” http://www.deeplearningbook.org MIT Press, 2016
* [Gro18] Prince Grover “5 Regression Loss Functions All Machine Learners Should Know”, 2018 URL: https://heartbeat.fritz.ai/5-regression-loss-functions-all-machine-learners-should-know-4fb140e9d4b0
* [KS14] Jörg Kaufmann and AG Schering “Analysis of Variance ANOVA” In _Wiley StatsRef: Statistics Reference Online_ American Cancer Society, 2014 DOI: https://doi.org/10.1002/9781118445112.stat06938
* [NZL18] Feiping Nie, Hu Zhanxuan and Xuelong Li “An investigation for loss functions widely used in machine learning” In _Communications in Information and Systems_ 18, 2018, pp. 37–52 DOI: 10.4310/CIS.2018.v18.n1.a2
* [SOA99] Alan Stuart, J. Ord and Steven Arnold “Kendall’s advanced theory of statistics. Vol.2A: Classical inference and the linear model”, 1999
|
# A Linear Division-Based Recursion with Number Theoretic Applications
Jonathan L. Merzel
###### Abstract
A simple remark on infinite series is presented. This applies to a particular
recursion scenario, which in turn has applications related to a classical
theorem on Euler’s phi-function and to recent work by Ron Brown on natural
density of square-free numbers.
## 1 A Basic Fact about Infinite Series
In a recent paper [1], Ron Brown has computed the natural density of the set of square-free numbers divisible by $a$ but relatively prime to $b$, where $a$ and $b$ are relatively prime square-free integers. Here we note a simple remark on infinite series, one of whose consequences generalizes a key argument in that work. We then derive a consequence of a well-known result on the Euler $\varphi$-function. The "$m=p$" case of that consequence follows from En-Naoui [2], who anticipates some of our arguments.
###### Remark 1
Let $\underset{i=1}{\overset{\infty}{\sum}}a_{i}$ be an absolutely convergent
series of complex numbers, and for $i\geq 1$,
$f_{i}:\mathbb{N}\cup\\{0\\}\rightarrow\mathbb{C}$ with
$\underset{N\rightarrow\infty}{\lim}f_{i}(N)=D$ (independent of $i$) and the
$f_{i}$ uniformly bounded. Then
$\underset{N\rightarrow\infty}{\lim}\underset{i=1}{\overset{\infty}{\sum}}a_{i}f_{i}(N)=D\underset{i=1}{\overset{\infty}{\mathop{\displaystyle\sum}}}a_{i}.$
Proof. This is a special case of the Lebesgue Dominated Convergence Theorem (using the counting measure, applied to the sequence $\\{a_{i}f_{i}(N)\\}_{N=1}^{\infty}$). To preserve the elementary character of the arguments here, we give an "Introductory Analysis" proof.
Let $\varepsilon>0$ be given. By uniform boundedness, there is a constant $B$ for which $\left|f_{i}(N)-D\right|<B$ for all $i$ and $N$. Choose $k\in\mathbb{N}$ with $\underset{i=k+1}{\overset{\infty}{\sum}}\left|a_{i}\right|<\frac{\varepsilon}{2B}$, and choose $M$ such that for all $N\geq M$ and $1\leq i\leq k$,
$\left|f_{i}(N)-D\right|<\varepsilon/(1+2\underset{j=1}{\overset{k}{\sum}}\left|a_{j}\right|).$
Then we have for $N\geq M$
$\displaystyle\left|\underset{i=1}{\overset{\infty}{\sum}}a_{i}f_{i}(N)-D\underset{i=1}{\overset{\infty}{\mathop{\displaystyle\sum}}}a_{i}\right|$
$\displaystyle=$
$\displaystyle\left|\underset{i=1}{\overset{\infty}{\sum}}a_{i}(f_{i}(N)-D)\right|$
$\displaystyle\leq$
$\displaystyle\underset{i=1}{\overset{k}{~{}\sum}}\left|a_{i}\right|\left|(f_{i}(N)-D)\right|+\underset{i=k+1}{\overset{\infty}{~{}\sum}}\left|a_{i}\right|\left|(f_{i}(N)-D)\right|$
$\displaystyle<$
$\displaystyle\underset{i=1}{\overset{k}{~{}\sum}}\left|a_{i}\right|\cdot\varepsilon/(1+2\underset{i=1}{\overset{k}{\sum}}\left|a_{i}\right|)+\frac{\varepsilon}{2B}\cdot
B<\varepsilon$
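As a quick numerical sanity check of the remark (illustrative only, not part of the proof), take $a_{i}=2^{-i}$, $D=3$, and $f_{i}(N)=D+\sin(i)/N$, so that the $f_{i}$ converge to $D$ uniformly boundedly; the weighted sums should then approach $D\sum a_{i}$:

```python
import math

# Illustrative check of the remark: a_i absolutely summable,
# f_i(N) -> D uniformly bounded, so sum(a_i * f_i(N)) -> D * sum(a_i).
D = 3.0
K = 60  # truncation; terms a_i = 2^-i beyond this are negligible

def a(i):
    return 2.0 ** (-i)          # sum_{i>=1} a_i = 1, absolutely convergent

def f(i, N):
    return D + math.sin(i) / N  # |f_i(N) - D| <= 1/N for all i: uniform bound

def partial(N):
    return sum(a(i) * f(i, N) for i in range(1, K + 1))

limit = D * sum(a(i) for i in range(1, K + 1))  # = D * (1 - 2^-K), essentially D
print(partial(10), partial(10**6), limit)
```

For $N=10^{6}$ the partial sum agrees with the limit to within about $10^{-6}$, as the uniform bound predicts.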
## 2 A Consequence and Some Applications
For all applications of the remark above, we first derive the following
consequence involving a "linear division-based" recursion.
###### Lemma 2
Let $F,G:\mathbb{N}\cup\\{0\\}\rightarrow\mathbb{C}$,
$1<m\in\mathbb{N},~{}\alpha,\beta,D\in\mathbb{C}$ satisfy the conditions (1)
$\underset{N\rightarrow\infty}{\lim}F(N)/N=D$, (2) $\left|\beta\right|<m$, (3)
$G(N)=\alpha F(\left\lfloor N/m\right\rfloor)+\beta G(\left\lfloor
N/m\right\rfloor)$, and (4) $F(0)=G(0)=0$. Then
$\underset{N\rightarrow\infty}{\lim}G(N)/N=\frac{D\alpha}{m-\beta}.$
Proof. Recursively expanding (using condition (3) and $\left\lfloor\left\lfloor
a/b\right\rfloor/c\right\rfloor=\left\lfloor a/(bc)\right\rfloor$ for positive
integers $a,b,c$), we have for $N>0$
$G(N)/N=\frac{\alpha}{m}\cdot\frac{F(\left\lfloor
N/m\right\rfloor)}{N/m}+\frac{\alpha\beta}{m^{2}}\frac{F(\left\lfloor
N/m^{2}\right\rfloor)}{N/m^{2}}+\cdots+\frac{\alpha\beta^{j-1}}{m^{j}}\frac{F(\left\lfloor
N/m^{j}\right\rfloor)}{N/m^{j}}+\frac{\beta^{j}}{m^{j}}\frac{G(\left\lfloor
N/m^{j}\right\rfloor)}{N/m^{j}}$ (*)
By condition (4), for any fixed $N$ the final term in display (*) vanishes once
$m^{j}>N$, so we may write
$G(N)/N=\mathop{\displaystyle\sum}\limits_{i=1}^{\infty}\frac{\alpha\beta^{i-1}}{m^{i}}\frac{F(\left\lfloor
N/m^{i}\right\rfloor)}{N/m^{i}},$
where the series is in fact a finite sum. Now by Remark 1 (conditions (1) and
(2) supply its hypotheses), taking
$a_{i}=\frac{\alpha\beta^{i-1}}{m^{i}}$ and $f_{i}(N)=\frac{F(\left\lfloor
N/m^{i}\right\rfloor)}{N/m^{i}}$, it follows that
$\underset{N\rightarrow\infty}{\lim}G(N)/N=D\mathop{\displaystyle\sum}\limits_{i=1}^{\infty}\frac{\alpha\beta^{i-1}}{m^{i}}=\frac{D\alpha}{m-\beta}$.
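The lemma is easy to check numerically (an illustrative sketch, not part of the proof): take $F(N)=N$, so $D=1$, pick $\alpha,\beta,m$ with $\left|\beta\right|<m$, and iterate the recursion directly:

```python
from functools import lru_cache

# Numerical sketch of the lemma: F(N) = N (so D = 1), and alpha, beta, m
# chosen with |beta| < m; the recursion G(N) = alpha*F(N//m) + beta*G(N//m)
# should give G(N)/N -> D*alpha / (m - beta).
alpha, beta, m = 2.0, -1.0, 3

@lru_cache(maxsize=None)
def G(N):
    if N == 0:
        return 0.0                            # condition (4)
    return alpha * (N // m) + beta * G(N // m)  # F(N) = N here

N = 10**9
ratio = G(N) / N
predicted = alpha / (m - beta)  # D*alpha/(m - beta) = 2/(3+1) = 0.5
print(ratio, predicted)
```

The floors introduce an error of order $(\log N)/N$, so at $N=10^{9}$ the ratio matches the predicted limit to many digits.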
We can derive some simple applications.
Application 1. Let $m$ be an integer greater than 1. Call an integer $n$ oddly
divisible by $m$ if the largest nonnegative integer $t$ with $m^{t}|n$ is odd.
Similarly define evenly divisible. (Note that by this definition, a number not
divisible by $m$ is evenly divisible by $m$.) Set $F(n)=n$ and $G(n)=$
$\left|\left\\{i\in\mathbb{N}:1\leq i\leq n\text{, }i\text{ oddly divisible by
}m\right\\}\right|$. Since there is a 1-1 correspondence between
$\left\\{i\in\mathbb{N}:1\leq i\leq n\text{, }i\text{ oddly divisible by
}m\right\\}$ and $\left\\{i\in\mathbb{N}:1\leq i\leq\left\lfloor
n/m\right\rfloor\text{ and }i\text{ is evenly divisible by }m\right\\}$, we
quickly see that $G(n)=F(\left\lfloor n/m\right\rfloor)-G(\left\lfloor
n/m\right\rfloor)$. Now apply Lemma 2 with $D=\alpha=-\beta=1$ to get
$\underset{N\rightarrow\infty}{\lim}G(N)/N=\frac{1}{m+1}.$ So the natural
density of numbers oddly divisible by $m$ is $\frac{1}{m+1}$. (This is also
easily arrived at by an inclusion-exclusion argument.)
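A direct count confirms the density (illustrative check only; the function names are ours):

```python
# Direct count of integers in [1, N] "oddly divisible" by m (largest t with
# m^t | n is odd), compared with the predicted natural density 1/(m+1).
def valuation(n, m):
    t = 0
    while n % m == 0:
        n //= m
        t += 1
    return t

def oddly_divisible_count(N, m):
    return sum(1 for i in range(1, N + 1) if valuation(i, m) % 2 == 1)

N, m = 100_000, 3
density = oddly_divisible_count(N, m) / N
print(density, 1 / (m + 1))  # both close to 0.25
```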
Application 2. In Brown[1] the natural density of the set of square-free
numbers divisible by primes $p_{1},\cdots,p_{k}$ is shown to be
$6/\pi^{2}\mathop{\textstyle\prod}\limits_{i=1}^{k}\frac{1}{p_{i}+1}$. (In
fact, he more generally computes the density of the set of such numbers also
not divisible by a further set of primes and reduces that problem to this
one.) Using that the natural density of the set of square-free numbers is
$6/\pi^{2}$, the cited result follows directly from Lemma 3 of [1], which states
that, for a square-free integer $t$ and a prime $p$ not dividing $t$, if the
natural density of the set of square-free numbers divisible by $t$ is $D$,
then the natural density of the set of square-free numbers divisible by $tp$
is $D/(p+1)$. To do this (converting to our notation), letting $C$ be the set
of square-free numbers, $F(x)$ $=$ $\left|\left\\{r\in C:t|r,r\leq
x\right\\}\right|$ and $G(x)=\left|\left\\{r\in C:pt|r,r\leq
x\right\\}\right|$, Brown quickly establishes that $F(x/p)=G(x/p)+G(x)$. Noting
that we can replace arguments here with their greatest integers, and that all
hypotheses are in place, we can apply Lemma 2 with
$\alpha=1,~{}\beta=-1,~{}m=p$ to arrive at
$\underset{N\rightarrow\infty}{\lim}G(N)/N=\frac{D}{p+1}.$
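A sieve makes this easy to verify numerically (an illustrative check with $t=1$, $p=2$: among square-free numbers, divisibility by $2$ should cut the density by a factor of $1/(2+1)$):

```python
import math

# Sieve-based check: the density of square-free numbers is 6/pi^2, and the
# density of *even* square-free numbers should be (6/pi^2)/(2+1).
def squarefree_sieve(N):
    sf = [True] * (N + 1)
    for d in range(2, int(N**0.5) + 1):
        for k in range(d * d, N + 1, d * d):
            sf[k] = False  # k has a square factor d^2
    return sf

N = 200_000
sf = squarefree_sieve(N)
all_sf = sum(sf[1:]) / N
sf_div2 = sum(1 for n in range(2, N + 1, 2) if sf[n]) / N
print(all_sf, 6 / math.pi**2)          # ~0.6079
print(sf_div2, (6 / math.pi**2) / 3)   # ~0.2026
```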
## 3 Application to a Classical Theorem on Euler’s $\varphi$-function
It is well-known that
$\underset{N\rightarrow\infty}{\lim}\left(\mathop{\displaystyle\sum}\limits_{n=1}^{N}\frac{\varphi(n)}{n}\right)/N=$
$6/\pi^{2}$. (See for example [3].)
From this we can derive the following proposition, where we sum only over
multiples of an integer $m$:
###### Proposition 3
Let $m$ be a positive integer, and let $p_{1},\cdots,p_{k}$ be the distinct prime
divisors of $m$. Then
$\underset{N\rightarrow\infty}{\lim}\left(\mathop{\displaystyle\sum}\limits_{m|n\leq
N}\frac{\varphi(n)}{n}\right)/N=\frac{6}{\pi^{2}m}\prod\limits_{j=1}^{k}\frac{p_{j}}{1+p_{j}}$
Some numerical evidence:
$N=1000,~{}m=5.$ Here
$\frac{\mathop{\textstyle\sum}\limits_{5|n\leq
1000}\frac{\varphi(n)}{n}}{1000}\approx.1016$ while
$\frac{6}{5\pi^{2}}\cdot\frac{5}{6}\approx.1013$.
$N=100000,~{}m=200.$ Here
$\frac{\mathop{\textstyle\sum}\limits_{200|n\leq
100000}\frac{\varphi(n)}{n}}{100000}\approx.001691$, while
$\frac{6}{200\pi^{2}}\cdot\frac{2}{3}\cdot\frac{5}{6}\approx.001689$.
$N=1000000,~{}m=12348.$ Here
$\frac{\mathop{\textstyle\sum}\limits_{12348|n\leq
1000000}\frac{\varphi(n)}{n}}{1000000}\approx.00002153$, while
$\frac{6}{12348\pi^{2}}\cdot\frac{2}{3}\cdot\frac{3}{4}\cdot\frac{7}{8}\approx.00002154$.
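The first of these checks can be reproduced with a standard totient sieve (an illustrative sketch; the helper name is ours):

```python
import math

# Reproduce the m = 5, N = 1000 check: compute (sum_{m|n<=N} phi(n)/n)/N
# and compare with (6/(pi^2 m)) * prod p/(1+p) over prime divisors p of m.
def phi_sieve(N):
    phi = list(range(N + 1))
    for p in range(2, N + 1):
        if phi[p] == p:  # p is prime, untouched so far
            for k in range(p, N + 1, p):
                phi[k] -= phi[k] // p  # multiply by (1 - 1/p)
    return phi

N, m = 1000, 5
phi = phi_sieve(N)
lhs = sum(phi[n] / n for n in range(m, N + 1, m)) / N
rhs = (6 / (math.pi**2 * m)) * (5 / 6)  # prime divisors of 5: just 5
print(round(lhs, 4), round(rhs, 4))
```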
Proof. The result will follow inductively from the following
Claim: Let $p$ be a prime, $j$ a positive integer and $t$ a positive integer
not divisible by $p$. Then if
$\underset{N\rightarrow\infty}{\lim}\left(\mathop{\displaystyle\sum}\limits_{t|n\leq
N}\frac{\varphi(n)}{n}\right)/N=L$, it follows that
$\underset{N\rightarrow\infty}{\lim}\left(\mathop{\displaystyle\sum}\limits_{tp^{j}|n\leq
N}\frac{\varphi(n)}{n}\right)/N=\frac{L}{p^{j-1}(p+1)}.$
To establish the claim, we first handle the case $j=1$. We set
$F(N)=\mathop{\displaystyle\sum}\limits_{t|n\leq
N}\frac{\varphi(n)}{n},~{}G(N)=\mathop{\displaystyle\sum}\limits_{pt|n\leq
N}\frac{\varphi(n)}{n}$. We can bijectively correspond the set $A$ of integers
divisible by $t$ and less than or equal to $N/p$ with the set $B$ of multiples
of $pt$ less than or equal to $N$ by multiplication by $p$. We write
$A=A_{1}\cup A_{2\text{,}}$, with multiples of $p$ in $A_{1}$ and nonmultiples
of $p$ in $A_{2}$, and note that (from the usual computation of $\varphi$ in
terms of prime factorization) for $n\in A_{1},\varphi(n)/n=$
$\varphi(pn)/(pn)$, while for $n\in A_{2},\varphi(n)/n=\frac{p}{p-1}$
$\varphi(pn)/(pn)$. So
$\displaystyle G(N)$ $\displaystyle=$
$\displaystyle\mathop{\displaystyle\sum}\limits_{pt|n\leq
N}\frac{\varphi(n)}{n}=\mathop{\displaystyle\sum}\limits_{n\in
A_{1}}\frac{\varphi(np)}{np}+\mathop{\displaystyle\sum}\limits_{n\in
A_{2}}\frac{\varphi(np)}{np}=\mathop{\displaystyle\sum}\limits_{n\in
A_{1}}\frac{\varphi(n)}{n}+\frac{p-1}{p}\mathop{\displaystyle\sum}\limits_{n\in
A_{2}}\frac{\varphi(n)}{n}$ $\displaystyle=$
$\displaystyle\frac{p-1}{p}F(\left\lfloor
N/p\right\rfloor)+\frac{1}{p}G(\left\lfloor N/p\right\rfloor)$
Applying our lemma with $m=p$, $\alpha=\frac{p-1}{p}$, $\beta=\frac{1}{p}$,
$D=L$ we get
$\underset{N\rightarrow\infty}{\lim}G(N)/N=\frac{D\alpha}{m-\beta}=\frac{L}{p+1}.$
Now we can proceed to the general case of the claim. We bijectively
correspond the set $A$ of integers divisible by $t$ and less than or equal to
$N/p^{j}$ with the set $B$ of multiples of $p^{j}t$ less than or equal to $N$
by multiplication by $p^{j}$, and, as in the $j=1$ case, write $A=A_{1}\cup
A_{2}$, with multiples of $p$ in $A_{1}$ and nonmultiples of $p$ in
$A_{2}$. Then, using $\varphi(p^{j}i)=p^{j}\varphi(i)$ for $i\in A_{1}$ and
$\varphi(p^{j}i)=p^{j-1}(p-1)\varphi(i)$ for $i\in A_{2}$,
$\displaystyle\mathop{\displaystyle\sum}\limits_{p^{j}t|n\leq
N}\frac{\varphi(n)}{n}$ $\displaystyle=$
$\displaystyle\mathop{\displaystyle\sum}\limits_{t|i\leq
N/p^{j}}\frac{\varphi(p^{j}i)}{p^{j}i}=\mathop{\displaystyle\sum}\limits_{i\in
A_{1}}\frac{\varphi(p^{j}i)}{p^{j}i}+\mathop{\displaystyle\sum}\limits_{i\in
A_{2}}\frac{\varphi(p^{j}i)}{p^{j}i}$ $\displaystyle=$
$\displaystyle\mathop{\displaystyle\sum}\limits_{i\in
A_{1}}\frac{\varphi(i)}{i}+\mathop{\displaystyle\sum}\limits_{i\in
A_{2}}\frac{p^{j-1}(p-1)\varphi(i)}{p^{j}i}$ $\displaystyle=$
$\displaystyle\mathop{\displaystyle\sum}\limits_{i\in
A_{1}}\frac{\varphi(i)}{i}+\frac{p-1}{p}\mathop{\displaystyle\sum}\limits_{i\in
A_{2}}\frac{\varphi(i)}{i}$ $\displaystyle=$
$\displaystyle\frac{p-1}{p}\mathop{\displaystyle\sum}\limits_{t|i\leq
N/p^{j}}\frac{\varphi(i)}{i}+\frac{1}{p}\mathop{\displaystyle\sum}\limits_{pt|i\leq
N/p^{j}}\frac{\varphi(i)}{i}$
Dividing through by $N$, we get
$\displaystyle\left(\mathop{\displaystyle\sum}\limits_{p^{j}t|n\leq
N}\frac{\varphi(n)}{n}\right)/N$ $\displaystyle=$
$\displaystyle\frac{p-1}{p^{j+1}}\frac{\mathop{\displaystyle\sum}\limits_{t|i\leq
N/p^{j}}\frac{\varphi(i)}{i}}{N/p^{j}}+\frac{1}{p^{j+1}}\frac{\mathop{\displaystyle\sum}\limits_{pt|i\leq
N/p^{j}}\frac{\varphi(i)}{i}}{N/p^{j}}$ $\displaystyle\rightarrow$
$\displaystyle\frac{L(p-1)}{p^{j+1}}+\frac{L}{p^{j+1}(p+1)}=\frac{L}{p^{j-1}(p+1)}\text{
as }N\rightarrow\infty,$
where the limit of the first term is given by the hypothesis
$\underset{N\rightarrow\infty}{\lim}\left(\mathop{\displaystyle\sum}\limits_{t|n\leq
N}\frac{\varphi(n)}{n}\right)/N=L$ and the limit of the second term follows
from the $j=1$ case above. This establishes the claim. The proposition now
follows by starting from $L=6/\pi^{2}$ (the case $t=1$) and applying the claim
once for each prime power $p_{i}^{e_{i}}$ exactly dividing $m$: writing
$m=\prod_{i=1}^{k}p_{i}^{e_{i}}$, the resulting limit is
$\frac{6}{\pi^{2}}\prod_{i=1}^{k}\frac{1}{p_{i}^{e_{i}-1}(p_{i}+1)}=\frac{6}{\pi^{2}m}\prod_{i=1}^{k}\frac{p_{i}}{1+p_{i}}.$
## References
* [1] Brown R., What Proportion of Square-Free Numbers are Divisible by 2? Or by 30, but not by 7?, Private Communication 1/2021
* [2] En-Naoui E., Some Remarks on Sum of Euler’s Totient Function, arXiv:2101.02040v1
* [3] P. Erdos and H. N. Shapiro, Canad. J. Math. 3 (1951), 375-385.
# Test and Evaluation Framework for Multi-Agent Systems of Autonomous
Intelligent Agents
_Acknowledgment:_ This paper includes funded research conducted through the
Systems Engineering Research Center.
Erin Lanus1, Ivan Hernandez1, Adam Dachowicz2, Laura Freeman1, Melanie
Grande2, Andrew Lang2, Jitesh H. Panchal2, Anthony Patrick3, Scott Welch1
1Virginia Tech, Arlington, VA 22309, USA 2Purdue University, West Lafayette,
IN 47907, USA 3George Mason University, Fairfax, VA 22030, USA
###### Abstract
Test and evaluation is a necessary process for ensuring that engineered
systems perform as intended under a variety of conditions, both expected and
unexpected. In this work, we consider the unique challenges of developing a
unifying test and evaluation framework for complex ensembles of cyber-physical
systems with embedded artificial intelligence. We propose a framework that
incorporates test and evaluation throughout not only the development life
cycle, but continues into operation as the system learns and adapts in a
noisy, changing, and contended environment. The framework accounts for the
challenges of testing the integration of diverse systems at various
hierarchical scales of composition while respecting that testing time and
resources are limited. A generic use case is provided for illustrative
purposes and research directions emerging as a result of exploring the use
case via the framework are suggested.
###### Index Terms:
systems engineering, statistical models, software engineering, artificial
intelligence, design of experiments, combinatorial interaction testing
## I Introduction
The United States engages in numerous strategic initiatives to increase the
use of Artificial Intelligence (AI) to support strategic priorities. Achieving
complex mission needs requires AI to be integrated and deployed in multi-agent
systems of autonomous intelligent agents (AIAs). These systems, if proven to
be reliable, trustworthy, and safe, have the potential to be used in high-
stakes contexts, often without human intervention and under changing
mission and environmental needs. This research is motivated by the challenge
of testing AIAs as compared to static, deterministic systems.
Test and evaluation (T&E) of multi-agent systems of AIAs presents unique
challenges due to the dynamic environments of the agents, adaptive learning
behaviors of individual agents, the complex interactions among the agents, the
complex interactions between agents and the operational environment, the
difficulty in testing black-box machine learning models, and rapidly evolving
AI algorithms. Currently, no unifying framework exists for T&E of multi-agent
systems of AIAs. Existing frameworks for T&E of complex engineered systems [1]
fail to account for these unique challenges.
T&E is a difficult topic to study as different fields of engineering have
evolved their test strategies to meet the specific needs of that field. For
example, the reliability community has a robust literature on reliability
testing [2], the software community has methods for software testing [3], and
manufacturing has methods for testing the consistency of their processes [4].
However, complex systems require the integration of many methods to fully
characterize system capabilities and understand how they will perform in the
actual operational environment. The United States Department of Defense (DoD)
has a mature T&E process due to the nature of the technologies they must
ensure perform adequately and are safe before fielding. These test processes
are documented in the Defense Acquisition Guidebook [1] and provide a
comprehensive overview of these processes, but notably missing is any guidance
on how processes should account for AIA challenges.
The development of multi-agent systems of AIAs involves taking an
interdisciplinary approach, with each discipline providing its own methods,
tools, techniques, priorities, and expertise. Consequently, the collection of
accompanying T&E strategies across a multi-agent system is heterogeneous, and
it is not clear how individual T&E strategies for components or subsystems
should be combined to provide a comprehensive T&E framework for a multi-agent
system of AIAs. Furthermore, new system capabilities, applications,
properties, and behaviors emerge at the intersection of multi-agent,
autonomous, and intelligent systems.
New constructs within T&E are necessary to facilitate addressing these new
challenges in a manageable framework used for systematic analysis. For
example, a multi-agent system-of-systems requires additional testing across a
hierarchical scale as component subsystems are integrated. Testing must be
conducted on “local” factor levels specific to an individual agent as well as
“global” factor levels representing the combined interactions of the different
agents along with environmental conditions. The dynamic nature of multi-agent
interactions can have effects apparent at the lower hierarchical scale of a
given agent and also produce emergent phenomena at the higher hierarchical scale.
Additionally, an increase in the number of agents, each with its own
parameters, exponentially increases the number of tests that might be
conducted to support a comprehensive evaluation. Finally, these AIAs will have
the ability to learn over time, so test strategies that continuously evaluate
both the local and global scale over time are needed. The increase in agents
and parameters requires more time and resources for conducting T&E. We
hypothesize that improving efficiency and coverage in a distributed, dynamic
learning environment is essential to a T&E framework for multi-agent systems
of AIAs.
In this work, we propose a unifying framework for T&E of systems of multiple
AIAs to guide the systematic development of test plans. The framework is
informed by three major concepts that address the unique needs of this
context: 1) field of study, 2) hierarchy of test, and 3) test plan efficiency.
Collectively and along with an expanded systems engineering verification and
validation model, these concepts describe how to define a _slice_ of the
process during a phase of the system design, development, and deployment life
cycle in order to identify goals of the test and inform creation of a
comprehensive test plan. The rest of the paper is organized as follows. An
illustrative use case of a satellite system is presented in § II. The VTP
model on which the framework is built is presented in § III. Testing
procedures drawn from fields of study utilized in building these systems of
systems and how they are addressed in the framework are discussed in § IV. How
to conduct integration testing as components are merged into subsystems and
subsystems into systems as a hierarchical approach to testing is considered in
§ V. Maximizing knowledge gained with limited testing resources as the goal of
test plan efficiency is discussed in § VI. The complete framework is given in
§ VII. Finally, conclusions and research directions emerging as a result of
exploring the use case via the framework are suggested in § VIII.
## II Illustrative Use Case
To provide an illustrative practical backdrop, we employ a generic use case of
a satellite network composed of a heterogeneous set of AIAs reporting to a
central controller and acting autonomously to conduct broad area search and
point detection (see Fig. 1). At the local hierarchical scale, each satellite
is composed of component subsystems such as sensors, actuators, and software
including deterministic control software as well as AI software that is
expected to change after deployment as a result of adapting to changing
environments and knowledge acquisition. Each of these subsystems could be
further decomposed into smaller components, such as a piece of hardware or a
function within a program. At the global hierarchical scale, the system can be
described by the types, number, and positions of each satellite, additional
state information such as if the satellite has been damaged or its software
has been compromised, and connectivity of each satellite with each other and
the ground station. Last, operational environmental conditions can be
specified such as lack of visibility, presence of adversarial powers and their
capabilities, and presence and location of observational targets.
Figure 1: Illustrative use case of a multi-agent satellite system
## III The VTP Model
Figure 2: The VTP framework extends the “Vee” model to include testing
throughout system deployment and a feedback loop
T&E must be integrated throughout not only the system development process, but
also the system life cycle. The Systems Engineering “Vee” model [5] is a
mature model of systems engineering that serves as a sound starting point for
development of a unifying framework (see the left third of Fig. 2). In the
“Vee” model, each system-level in the hierarchy is paired with a corresponding
level of verification and validation. Requirements and test plans are created
from the beginning of the development process, rather than waiting until the
entire system is developed. For example, at the beginning of the project when
requirements for the entire system are identified, tests that verify system
performance are designed though they are not executed until near the end. The
system is thus designed top down, walking down the left side of the “Vee” and
tests in the corresponding slice are defined simultaneously; however, test
execution is conducted “bottom up” as the subsystems are built and the system
is integrated, walking up the right side of the “Vee.”
Since the “Vee” model is based on the principle of hierarchical decomposition,
certain assumptions of the “Vee” model do not hold for testing multi-agent
system of AIAs. Specifically, most systems development life cycles assume
well-defined phases, such as concept studies, technology development,
preliminary design, final design, fabrication, assembly, test, launch,
operations & sustainment, and closeout [6]. The phases in the development
process are akin to the phases in the Software Engineering Waterfall Model
that include requirement identification, design, implementation, testing, and
maintenance. These models assume that all requirements against which the
system will be tested are able to be listed during a requirements phase, and
that requirements can be decomposed and traced to individual components or
subsystems. Last, maintenance allows for “fixing” a product during deployment,
but does not consider that behavior of the agents could change after
deployment in the field.
These assumptions do not hold for multi-agent systems of AIAs. Specifically,
it may not be possible to define how the system should respond in all
environmental conditions as the environment is constantly changing. For
example, it may not be possible to define in advance the threat capabilities
of advanced adversaries. Requirements may be achieved by multiple combinations
of subsystems. A given task may be achieved by different ensembles of
satellites given their heterogeneous capabilities and positions. After
deployment, the satellite software may be modified by code pushes from the
ground station, but it may also change through learning behaviors as the
embedded AI acquires knowledge from interacting with its environment and
through collaborative decision making with other satellites in the
constellation.
Despite the challenges of sequential design or “big design up front” models,
incremental or iterative models have limited applicability for systems that
cannot be easily recalled once deployed. That is, development of subsystems
such as components on an individual satellite or software may be incrementally
designed, developed, and tested while on the ground, but a clear demarcation
for deployment of the system occurs when the satellite is launched. Thus, our
framework assumes the Systems Engineering “Vee” model up to deployment, though
several iterations of the “Vee” could occur before the deployment cutoff. We
then extend this model with a “T” phase to include testing throughout
operation to detect or respond to events. Such events could include a change
to the mission objective requiring a code push from the ground station to
particular satellites and executing all pertinent tests from the previous
phase as well as new tests to address the code changes. Embedded AI software
may also adapt as a result of learning, and tests must be conducted to ensure
the system is learning the “right” actions. As the system encounters debris or
exhibits hardware degradation over time, tests must be run periodically to
identify faults and enact mitigation strategies. Last, communication systems
are inherently susceptible to adversarial attack, and software may include
intrusion detection algorithms that may need to be updated with new
signatures. Embedded AI should be tested for resilience to newly discovered
vulnerabilities or maturity issues such as data drift.
Last, the model should process feedback. The “P” phase includes a loop back to
the deployment phase due to changes in the system from learning for systems
that are currently deployed. This loop can also extend further to inform the
next phase of system development. The full VTP model on which the framework is
built is presented in Fig. 2. In the figure, the dashed lines delineate the
sub-phases within design, development, and deployment, and these sub-phases
provide timeline context for a slice of the process.
## IV Field of study
Each AIA is a cyber-physical system with embedded AI. T&E of the composed
system should draw from scientifically-based testing techniques for each
subsystem and thus consider the peculiarities of testing for AI, deterministic
software, electronic hardware, and mechanical systems [7, 8].
For embedded AI, T&E methods should measure inherent weaknesses of the
algorithms in use. For example, the use of neural networks in learning
requires an evaluation strategy that measures the performance sensitivity to
transformations or noise added to the input. This is needed to detect attacks
such as data poisoning and measure the impact on mission success in contended
environments.
The AI functionality of the software is also supported by other deterministic
software, such as functions to receive input from sensors and control
actuators in order to interact with the agent’s environment, as well as to
communicate with other agents within the multi-agent system. For code under
development, white box methods that emphasize structural code coverage (e.g.,
statement, decision, condition, branch, and path coverage) can be employed.
For commercial off-the-shelf (COTS) or vendor-supplied software,
black-box techniques are required and include equivalence partitioning,
boundary value analysis, decision tables, state transition testing, use case
testing, and combinatorial techniques.
In testing physical components, statistical analysis of response variables
ascribes variance to different independent variables, or _factors_, and
estimates the effect of different factor _levels_ on system performance. Design
of experiments (DOE) is a systematic approach to choosing a set of test cases
to ensure adequate coverage of the operational space, determine how much
testing is enough, and provide an analytical basis for assessing test
adequacy. DOE has also proven useful in testing complex systems with embedded
software [9]. Alternatively, optimal learning [10] is an approach that begins
with an initial set of tests to establish some information about the system. A
Bayesian surrogate of the objective function is trained and the next tests are
chosen based on a heuristic that combines exploration of the test space
exhibiting the most uncertainty with exploitation of areas of the test space
that maximize the objective function.
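The exploration-exploitation policy described above can be sketched with a toy adaptive test loop. This is an illustrative stand-in for a full Bayesian surrogate (we use a simple UCB-style score; the performance values and setting names are invented):

```python
import math, random

# Toy optimal-learning sketch: adaptively choose the next test setting,
# balancing exploitation of high observed means against exploration of
# rarely tried settings (UCB-style exploration bonus).
random.seed(0)
true_perf = [0.3, 0.5, 0.8, 0.6]  # hidden mean performance per test setting

def run_test(i):                   # noisy observation of setting i
    return true_perf[i] + random.gauss(0, 0.05)

counts = [0] * len(true_perf)
means = [0.0] * len(true_perf)
for t in range(1, 201):
    # Score each setting: observed mean plus an exploration bonus that
    # shrinks as a setting accumulates tests (untried settings go first).
    scores = [means[i] + math.sqrt(2 * math.log(t) / counts[i])
              if counts[i] else float("inf")
              for i in range(len(true_perf))]
    i = scores.index(max(scores))
    y = run_test(i)
    counts[i] += 1
    means[i] += (y - means[i]) / counts[i]  # incremental mean update

print(counts)  # tests concentrate on the best setting (index 2)
```

The point of the sketch is the budget allocation: with 200 tests, most are spent on the most promising setting while every setting still receives enough tests to bound its uncertainty.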
Additionally, test strategies must evaluate the fully integrated system-of-
systems and its ability to execute tasks autonomously. While each subsystem
may reside within one field of study (e.g., software) and thus testing for the
subsystem may follow known testing strategies for that field, the composed
system spans multiple disciplines and T&E must consider all together. Some
tests may need to be designed that account for the interaction of disparate
systems. For example, a satellite may need to learn that visibility issues
affecting a sensor can be overcome by changes to its position and thus tests
requiring orchestrated interaction of sensors, embedded AI, control software,
and actuators are all required to evaluate this behavior.
Last, the above list is not exhaustive. Depending on the use case in which the
AIAs are employed, additional fields may be considered. Testing AIAs teamed
with humans or with significant human-in-the-loop components should consult
the psychology testing literature for designing tests that address the variety
of unique challenges such as attention issues along the human-computer
interface and how humans and computers can express and understand
collaborative goals. Further, even without human involvement, psychologist SME
consultation can be beneficial to establish benchmarks for learning behaviors
of AIAs and in testing for collaborative behaviors of ensembled AIAs.
## V Hierarchy of test
Rather than waiting until the complete system is built to test, testing is
conducted throughout the development process in order to detect and correct
flaws as early as possible. Unit testing is conducted on the smallest testable
components, using simulated inputs or digital environments when necessary. As
components are integrated into subsystems, the expectation is that components
work as intended, but there may be interactions among them causing failures or
interaction effects on performance. As an example, suppose some COTS control
software is employed that does not expect the range of inputs produced by a
given sensor on the satellite. When used in conjunction, the code may crash or
unexpected behavior may occur.
Combinatorial interaction testing (CIT) creates test suites to systematically
detect failures caused by combinations of interacting components up to a given
size of interaction [11]. To use available tools such as the Automated
Combinatorial Testing for Software (ACTS) tool to generate test suites [3],
testers must
the components should be tested, the maximum interaction size called the
_strength_ , and any _constraints_ , combinations of component levels that
must not be tested together. This process requires that components or factors
of interest are known to the tester, and continuous levels must be discretized
in order to use CIT.
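The mechanics can be illustrated with a naive greedy generator for a strength-2 (pairwise) covering array. This is a hypothetical sketch, not the ACTS tool; the factor names and levels are invented for the satellite use case:

```python
from itertools import combinations, product

# Invented factors/levels for the satellite example (strength t = 2).
factors = {
    "sensor": ["optical", "infrared", "radar"],
    "link": ["up", "degraded", "down"],
    "ai_mode": ["learning", "frozen"],
    "weather": ["clear", "storm"],
}
names = list(factors)

def pairs_covered(test):
    # All (factor, level) pairs exercised jointly by one test.
    return {((names[i], test[i]), (names[j], test[j]))
            for i, j in combinations(range(len(names)), 2)}

uncovered = set()
for i, j in combinations(range(len(names)), 2):
    for a, b in product(factors[names[i]], factors[names[j]]):
        uncovered.add(((names[i], a), (names[j], b)))

suite = []
while uncovered:
    # Greedily pick the candidate test covering the most uncovered pairs.
    best = max(product(*factors.values()),
               key=lambda t: len(pairs_covered(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_covered(best)

print(len(suite), "tests vs", 3 * 3 * 2 * 2, "exhaustive")
```

Every pair of factor levels appears in some test, yet the suite is far smaller than the 36-run exhaustive product, which is the efficiency argument made in § VI.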
As a complex system of systems, a system that undergoes integration testing in
one phase of testing becomes a component in the next phase. For example, at
the local scale of the hierarchy, payload sensors, actuators, AI algorithm,
and control software are component subsystems integrated into an AIA
satellite, and integration verification and validation testing is performed to
ensure the system performs as expected. Once deployed into a constellation,
testing moves up the hierarchy, and each satellite becomes a subsystem within
the global system. Interaction testing considers failures at the top system or
mission scale, such as whether communication relays between satellites and the
ground station are operational or whether the combined sensor footprint of the
constellation is sufficient for a given tactical operation. Factors at the
global scale of the hierarchy may also include environmental factors that
cannot be controlled but can be observed during a test to evaluate their
impact on mission performance, such as storms affecting visibility. Other
factors may be simulated, such as adversarial attacks.
After deployment, failures occurring at the global scale of the hierarchy may
be caused by interactions of subsystems immediately lower in the hierarchy
along with global scale factors, such as storms affecting sensor visibility of
a particular subset of satellites or interfering with communication with a
ground station. CIT methods include fault localization techniques that can be
used to identify interactions causing the fault [11]. In some cases, a problem
with a component system further down the hierarchy may be causing an error to
propagate up to higher systems. Techniques may need to be utilized to step
down through the involved interactions at each scale of the hierarchy to
locate the component causing the fault.
## VI Test plan efficiency
Rigorous testing under a variety of conditions provides a degree of assurance
that the system will perform as expected. The test input space is defined by
identification of system and environmental factors of interest and choosing a
range of levels for each factor. The points chosen for a test plan thus result
in some coverage of the multi-dimensional test input space. Each test incurs
some cost and, in most scenarios, both time and resources for testing are
limited. Different techniques prioritize efficiency versus coverage.
A full factorial design from DOE is a test suite including all combinations of
factors and levels, providing exhaustive testing and the ability to conduct an
analysis of variance and characterize the effect of factor levels on system
performance. In most complex systems with many factors with multiple levels,
exhaustive testing is infeasible. Fractional factorial designs select a
fraction of the factorial design with the result that some effects are aliased
with others and variance cannot be fully attributed. The choice of fraction
can lead to aliasing between main effects and higher order interaction effects
that are not expected to be significant and thus provide sufficient system
knowledge with fewer test points.
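A minimal sketch of the fraction-selection idea (an illustrative example, not drawn from the paper): a $2^{4-1}$ half-fraction with defining relation $D=ABC$, in which the main effect of $D$ is deliberately aliased with the three-way interaction $ABC$:

```python
from itertools import product

# 2^(4-1) fractional factorial: keep only runs where the D column equals
# the ABC interaction column (levels coded as +1/-1). Half the runs of the
# full factorial; D is aliased with ABC.
full = list(product([-1, 1], repeat=4))            # 2^4 = 16 runs
half = [run for run in full if run[3] == run[0] * run[1] * run[2]]

print(len(half), "of", len(full), "runs")
# Aliasing check: D equals the ABC interaction in every retained run.
assert all(d == a * b * c for a, b, c, d in half)
```

Because three-way interactions are rarely significant, the eight-run fraction still estimates all four main effects while halving the test cost.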
Optimal learning is a technique that also has the goal of reducing uncertainty
and determining the best factor levels for improving system performance. By
choosing tests adaptively via the exploration-exploitation heuristic policy,
optimal learning can discover the ideal factor settings with fewer tests.
However, it relies on other knowledge such as physical laws and prior
experience to ensure that the results from fewer tests can be used to make
claims about performance in the rest of the operating space.
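A minimal exploration-exploitation sketch of this idea (an epsilon-greedy policy over simulated test outcomes; the factor settings and payoffs are invented for illustration):

```python
import random

def epsilon_greedy_tests(arms, true_means, budget=2000, eps=0.1, seed=1):
    """Toy optimal-learning sketch: adaptively allocate a fixed test
    budget across candidate factor settings (arms) to find the
    best-performing one.  Arm payoffs are simulated with noise."""
    rng = random.Random(seed)
    counts = {a: 0 for a in arms}
    means = {a: 0.0 for a in arms}
    for _ in range(budget):
        if rng.random() < eps:
            a = rng.choice(arms)                    # explore
        else:
            a = max(arms, key=lambda x: means[x])   # exploit current best
        reward = true_means[a] + rng.gauss(0, 0.1)  # simulated test outcome
        counts[a] += 1
        means[a] += (reward - means[a]) / counts[a]  # running average
    return max(arms, key=lambda x: means[x])

arms = ["low", "medium", "high"]
best = epsilon_greedy_tests(arms, {"low": 0.2, "medium": 0.5, "high": 0.8})
print(best)  # "high"
```

The adaptive policy concentrates test effort on promising settings, which is the sense in which optimal learning can discover ideal factor levels with fewer tests than a fixed design.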
An often employed technique in CIT is to design a test suite from a covering
array, a combinatorial array where the columns represent factors, the rows
represent tests, and the values in each cell represent the level set for that
factor in that test. Every combination of values for up to a given strength of
interacting features appears in some test in the array; thus, a covering array
guarantees to detect failures due to interactions of up to the strength
specified. One row covers many interactions when the strength is smaller than
the total number of factors and so produces a test suite with a fraction of
the number of tests in the full factorial. The number of tests required grows
logarithmically in the number of factors. While using a covering array as a
test suite provides coverage of the test space in terms of interactions, it
does not guarantee that the cause of the fault detected can be identified.
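The covering-array property can be made concrete with a small sketch; the array below is a standard pairwise example for three binary factors, and the factor names are hypothetical:

```python
from itertools import combinations, product

def covers_strength_t(suite, levels_per_factor, t=2):
    """Check that every t-way combination of factor levels appears in
    some test of `suite` -- the defining property of a covering array."""
    names = sorted(levels_per_factor)
    for factor_subset in combinations(names, t):
        needed = set(product(*(levels_per_factor[f] for f in factor_subset)))
        seen = {tuple(test[f] for f in factor_subset) for test in suite}
        if needed - seen:
            return False
    return True

# A hand-built pairwise (strength-2) covering array for three binary
# factors: 4 tests instead of the 8 in the full factorial.
levels = {"a": [0, 1], "b": [0, 1], "c": [0, 1]}
suite = [
    {"a": 0, "b": 0, "c": 0},
    {"a": 0, "b": 1, "c": 1},
    {"a": 1, "b": 0, "c": 1},
    {"a": 1, "b": 1, "c": 0},
]
print(covers_strength_t(suite, levels, t=2))  # True
```

Each row covers three of the twelve required pairs at once, which is why the suite needs only half the tests of the full factorial here, and why the savings grow (logarithmically in the number of factors) at larger scales.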
Humans frequently interact with complex systems as part of the environment or
system. Between-subject and within-subject experimental design strategies
allow the human contribution to the test outcome to be characterized and
separated in order to reduce aliasing with factors and levels in the
environment [12]. Between-subject designs randomly assign humans to unique
combinations of factors and levels, while within-subject designs provide a
basis for comparison by matching individuals to combinations of factors and
levels.
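The two assignment strategies can be sketched as follows, with hypothetical ground crews and scenarios:

```python
import random

def between_subject(subjects, conditions, seed=0):
    """Between-subject sketch: each subject is randomly assigned to
    exactly one condition (combination of factors and levels)."""
    rng = random.Random(seed)
    pool = subjects[:]
    rng.shuffle(pool)
    return {s: conditions[i % len(conditions)] for i, s in enumerate(pool)}

def within_subject(subjects, conditions, seed=0):
    """Within-subject sketch: every subject sees every condition, each
    in an independently shuffled order to mitigate carry-over effects."""
    rng = random.Random(seed)
    plans = {}
    for s in subjects:
        order = conditions[:]
        rng.shuffle(order)
        plans[s] = order
    return plans

crews = ["crew1", "crew2", "crew3", "crew4"]
scenarios = ["normal-traffic", "anomaly", "anomaly+weather"]
print(between_subject(crews, scenarios))
print(within_subject(crews, scenarios))
```

In the within-subject plan each crew serves as its own control across scenarios, which is the basis-for-comparison property described above.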
Each of the above strategies has unique strengths that are leveraged by our
framework to systematically cover the input space at both local and global
scale, integrate information across test slices, and determine the right test
size at various stages of development and deployment. Specifically, finding
flaws early in development can prevent extensive testing later that must step
down to components lower in the hierarchy. Additionally, at the lower
hierarchical scale, such as within a single function of code of a sensor, the
test input space is smaller. Tests may be faster and more automatable, such as
by running a script or using simulated inputs, and test oracles may be
computable. As subsystems are composed, testing emphasis shifts to identifying
faulty interactions among components. At the global scale, the fully composed
system is tested in terms of achieving mission objective, and it may be
necessary to focus on only the most critical factor levels.
These facts support conducting exhaustive testing at the component scale but
sparser testing at the global scale. DOE full factorial or fractional
factorial designs are likely best used for components lower in the hierarchy.
CIT techniques can prevent exponential growth in test points as interactions
among components are considered during integration. At the global scale,
correct system behavior may not even be definable, particularly after
deployment in changing conditions and after acquisition of knowledge by
embedded AI components. In this case, optimal learning may be employed to
estimate system boundaries and adaptively select tests as needed and as
resources allow.
As shown in Fig. 3, the framework makes a trade-off between the number of test
points used and the fidelity of measuring the system. Moving up the hierarchy
towards a completely realized system, testing achieves better fidelity, but
becomes more difficult, and fewer test points are possible.
Figure 3: Number of test points exchanged for increased fidelity as testing
moves up the system composition hierarchy.
## VII Test Design Framework
The framework does not specify a series of tests to run. Instead, the
framework helps inform comprehensive test plan design by outlining the
considerations to address. These considerations will change at each slice in
the VTP model leading to different test designs. Additionally, the “P” phase
of the VTP model requires that information learned during operational test
slices is incorporated into future test plans. For example, a system could
employ a combinatorial interaction testing strategy with automated tests of
each agent in its fielded state in an attempt to monitor if any had been
compromised by an adversary. The outcomes of these tests will be incorporated
into future algorithm updates and result in a new round of independent tests
to verify the successful implementation of updates to the agents.
The process by which the framework guides considerations to result in a test
plan is graphically represented in Fig. 4. It begins with identification of
the current phase of the life cycle for the system under test (SUT) and guides
identification of the field of study and hierarchy of test. Hierarchy of test
informs how to identify the components inside the SUT and which component
levels should be tested. Identification of components, levels, and their
interactions defines the test input space. Test plan efficiency guides the
focus of the test given the cost of each test run at the current scale and
assumes systems lower in the hierarchy function as expected. The framework
process also includes identification of goals of the test and reasonable test
methods. The field of study for the SUT and hierarchy of test inform the goal
of the test. Goals are combined with test plan efficiency to determine
appropriate test methods. Finally, the collected knowledge through all prior
steps guides creation of the test plan.
Figure 4: Process by which framework guides considerations towards a
comprehensive test plan
In Fig. 5, three example scenarios within the life cycle of the multi-agent
system of AIAs are provided. A non-exhaustive list of fields of study involved in the
SUT and from which testing strategies should be drawn is given.
Figure 5: High-level overview of how each framework concept contributes to
guiding test plan development
Recall our use case, a satellite network tasked with conducting broad area
search and point detection. Fig. 1 depicts the satellite network designed to
observe shipping traffic. The network conducts broad area search to understand
normal traffic patterns and detect activities that could indicate potential
illegal shipping activities. Once an anomaly is detected, the network has
multiple objectives: continue broad area search and maintain a track on the
anomalous vessel. Using this context, we can walk through Fig. 5 and propose
reasonable test design structures.
The first row identifies an early phase of testing focused on the AI
algorithm. This algorithm could be black-box or fully specified. In our use
case, consider the central controller, whose objective is to task the
satellites in the network to both provide wide area coverage and maintain
tracks. Our test goal is to assess the reliability, robustness, performance,
and biases of the central controller. Because testing will be conducted via
simulations, tests are relatively inexpensive, and we can afford a strategy
that allows both comprehensive coverage via CIT and investigation of any areas
of high uncertainty via optimal learning (tests can be sequentially added at
low cost). The CIT design could leverage historical shipping data crossed with
CIT at a high strength (covering many interactions) to embed anomalous traffic
into the historical data. The factors in the CIT test set may include vessel
country of origin, vessel type, vessel size, and geospatial location. Optimal
learning is used to augment the CIT in areas of high uncertainty or that show
large performance changes in the AI algorithm.
The second row of Fig. 5 shows how the considerations change when moving up to
the satellite scale of testing. Testing must now consider not only algorithm
performance, but also how that algorithm integrates with the system’s
additional software, electronic hardware, and mechanical systems. This
necessitates identification of new factors and test design strategy. The
knowledge gained in testing the AI via CIT/Optimal learning is leveraged to
identify a subset of scenarios for input into the system-centric test design.
Here we may use a hardware-in-the-loop test facility where satellite sensors
are given simulated inputs, but all additional aspects are real (e.g.,
simulating the space deployment). Factors include the AI performance scenario
(potentially binned into low/medium/high based on the outcomes of the previous
AI testing), active adversary (yes/no), and weather impact on inputs (e.g.,
clouds, rain), etc. Experimental designs are used that focus on understanding
how system performance changes as a function of the simulated inputs combined
with various executions of the AI algorithm.
Finally, once the system is deployed, we may need to understand mission
accomplishment of the fully connected system. In Fig. 5, we focus on the human
integration at a ground control station as important to mission
accomplishment. Here our test size is limited by the number of humans that are
qualified mission controllers for the constellation. We use a within-subject
design to assign different ground teams to various scenarios on the deployed
system. The tasks are controlled by focusing the constellation on certain
parts of the ocean for an operating period.
The three scenarios provide a hypothetical example of how to use the elements
of the test design framework, coupled with the slice in the VTP model, to
develop a test design strategy for a complex system-of-systems at multiple
scales: a strategy that pairs the goal of the test, derived from the field of
study and the hierarchy of test, with the desired test efficiencies.
## VIII Conclusions and Future directions
The framework we propose in this work specifies the three concepts of field of
study, hierarchy of test, and test plan efficiency to be considered in each of
the design, development, and deployment phases of the VTP model in order to
guide the creation of a comprehensive test plan.
This unifying framework synthesizes T&E methodology and is generalizable to
many contexts involving multi-agent systems of AIAs. We have provided examples
throughout of how the framework would be applied to the use case of a
constellation of satellites conducting both broad area search and point
detection in a series of tests. Evaluation of the framework against additional
use cases is needed to assess its usefulness for this category of complex
systems and to identify any features needed to assist in guiding the creation
of test plans.
Context-specific challenges in this work led to the formulation of two new
research questions. Testing the integration of a system-of-systems composed of
subsystems of the same type may lead to tests that appear “symmetrical” such
that multiple tests effectively represent the same configuration. For example,
consider testing the integration of a constellation of satellites having two
satellites, $S_{1}$ and $S_{2}$, with identical settings for all components,
but appearing in different locations in orbit, say $p_{x}$ and $p_{y}$. That
is, they have the same payload, algorithms, control software, and actuators.
If the satellites are listed as factors in a CIT test suite, then without
constraints, tests would be generated that include both combinations
$\\{(S_{1},p_{x}),(S_{2},p_{y})\\}$ and $\\{(S_{1},p_{y}),(S_{2},p_{x})\\}$ in
larger-strength interactions. Yet at early phases of testing, before learning
or damage has occurred, the two satellites are interchangeable, so one of the
two observations is redundant. When testing is expensive, it may be
desired to observe and analyze only the test cases needed. Thus, a test suite
generation tool that avoids producing these kinds of symmetrical tests is
needed. Using constraints to solve this challenge and generate a test suite
can be computationally expensive in the presence of a large number of
constraints such as for a large constellation. Taking inspiration from
sequence covering arrays, we hypothesize that a partial ordering on specified
factors could be used to build covering arrays without the need to remove
symmetrical tests.
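One way to avoid such symmetric duplicates, sketched under the assumption that interchangeable satellites can simply have their level assignments sorted into a canonical form (the partial-ordering idea above would generalize this):

```python
def canonical(test, interchangeable):
    """Collapse symmetric tests to one representative by sorting the
    level assignments of interchangeable factors (e.g. identical
    satellites S1 and S2).  `interchangeable` lists those factor names."""
    fixed = tuple(sorted((f, v) for f, v in test.items()
                         if f not in interchangeable))
    swappable = tuple(sorted(test[f] for f in interchangeable))
    return fixed + (swappable,)

tests = [
    {"S1": "px", "S2": "py", "weather": "clear"},
    {"S1": "py", "S2": "px", "weather": "clear"},  # symmetric duplicate
    {"S1": "px", "S2": "py", "weather": "cloudy"},
]
unique = {canonical(t, {"S1", "S2"}) for t in tests}
print(len(unique))  # 2: the first two tests are the same configuration
```

Deduplicating after generation is cheap for small suites; the research question above concerns building the covering array so that symmetric rows never arise in the first place.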
By evaluating the framework in the context of the satellite use case, we
propose to examine how much of the variance in success and failure is captured
by this framework and whether mission success at the top of the hierarchy can
be predicted by the results of testing at the low, task-scale of the
hierarchy. As tests at the global scale may be costlier and, in some cases,
only possible after the constellation has been launched into space, preventing
most changes to the system, early warnings predicting mission-scale success or
failure provide immense value.
## References
* [1] _Defense Acquisition Guidebook_ , Defense Acquisition University, Fort Belvoir, VA, 2020.
* [2] E. A. Elsayed, “Overview of reliability testing,” _IEEE Transactions on Reliability_ , vol. 61, no. 2, pp. 282–291, 2012.
* [3] “Automated combinatorial testing for software,” https://csrc.nist.gov/projects/automated-combinatorial-testing-for-software.
* [4] C. A. Lowry and D. C. Montgomery, “A review of multivariate control charts,” _IIE transactions_ , vol. 27, no. 6, pp. 800–810, 1995.
* [5] D. Buede, _The engineering design of systems : models and methods_. Hoboken, N.J: John Wiley & Sons, 2009.
* [6] NASA, “NASA systems engineering handbook revision 2,” 2017. [Online]. Available: https://www.nasa.gov/connect/ebooks/nasa-systems-engineering-handbook
* [7] R. Lanotte, M. Merro, A. Munteanu, and L. Viganò, “A formal approach to physics-based attacks in cyber-physical systems (extended version),” _arXiv preprint arXiv:1902.04572_ , 2019.
* [8] S. Guo and D. Zeng, _Cyber-Physical Systems: Architecture, Security and Application_. Springer, 2019.
* [9] L. J. Freeman and C. Warner, “Informing the warfighter—why statistical methods matter in defense testing,” _Chance_ , vol. 31, no. 2, pp. 4–11, 2018.
* [10] W. B. Powell and I. O. Ryzhov, _Optimal Learning_. Wiley, 2012.
* [11] D. R. Kuhn, R. Bryce, F. Duan, L. S. Ghandehari, Y. Lei, and R. N. Kacker, “Chapter one - combinatorial testing: Theory and practice,” ser. Advances in Computers, A. Memon, Ed. Elsevier, 2015, vol. 99, pp. 1 – 66.
* [12] G. Charness, U. Gneezy, and M. A. Kuhn, “Experimental methods: Between-subject and within-subject design,” _Journal of Economic Behavior & Organization_, vol. 81, no. 1, pp. 1–8, 2012.
# Optimal Disclosure of Information to a Privately Informed Receiver
Ozan Candogan Philipp Strack
###### Abstract
We study information design settings where the designer controls information
about a state and the receiver is privately informed about his preferences.
The receiver’s action set is general and his preferences depend linearly on
the state. We show that to optimally screen the receiver, the designer can use
a menu of “laminar partitional” signals. These signals partition the state
space and send the same non-random message in each partition element. The
convex hulls of any two partition elements are such that either one contains
the other or they have an empty intersection. Furthermore, each state is
either perfectly revealed or lies in an interval in which at most $n+2$
different messages are sent, where $n$ is the number of receiver types.
In the finite action case an optimal menu can be obtained by solving a finite-
dimensional convex program. Along the way we shed light on the solutions of
optimization problems over distributions subject to a mean-preserving
contraction constraint and additional side constraints, which might be of
independent interest. Finally, we establish that public signals in general
achieve only a $1/n$ share of the optimal value for the designer.
## 1 Introduction
We study how a designer can use her private information about a real-valued
state to influence the mean belief and actions of a receiver who possesses
private information himself. For example, the designer could be a seller who
releases information about features of the product (e.g., through limited
product tests), while the receiver could be a buyer who has private
information about his preferences and decides how many units of the product
(if any) to purchase. To influence the receiver’s belief the designer offers a
menu of signals. The receiver chooses which of these signals to observe based
on his own private information. For example, if each signal reveals
information about a specific feature of the product the buyer (receiver) could
optimally decide to observe the signal which reveals information about the
feature he cares about the most.
We show there always exists a menu of “simple” signals that is optimal for the
designer. First, each signal in the optimal menu is partitional, which means
that it reveals to the receiver a non-random set in which the state lies. In
other words there exists a (deterministic) function mapping each state to a
message revealed to the receiver, and such an optimal signal does not
introduce any additional noise. (This condition rules out many common signal
structures: for example, a Gaussian signal which equals the state plus an
independent normal shock is not partitional, as its realization is random
conditional on the state.) Second, we show that the
partition of the optimal signal is laminar, which means that the convex hulls
of the sets of states in which different messages are sent are either nested
or do not overlap. This implies that an optimal signal can be completely
described by the collection of (smallest) intervals in which each message is
sent. This characterization is invaluable for tractability as it reduces the
optimization problem from an uncountably infinite one to an optimization
problem over the end points of the aforementioned intervals. Finally, we show
that the laminar partition structure has “depth” at most $n+2$, where $n$ is
the number of possible different realizations of the private information of
the receiver. That is, the interval associated with a signal realization
overlaps with at most $n+1$ other intervals (associated with different signal
realizations). This final property further reduces the complexity of the
problem as it implies that a state is (i) either perfectly revealed, or (ii)
it lies in an interval in which the distribution of the posterior mean of the
receiver has at most $n+2$ mass points. We also investigate the special case
where the receiver has finitely many possible actions. In this case, we show
that the designer’s problem can be formulated as a tractable finite-
dimensional convex program (despite the space of signals being uncountably
infinite).
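The laminar property can be checked mechanically on the convex hulls (intervals) of the partition elements; a minimal sketch with an invented interval family (endpoint-sharing is treated as disjoint here for simplicity):

```python
def is_laminar(intervals):
    """Check the laminar property for a family of intervals (the convex
    hulls of partition elements): any two are either nested or disjoint."""
    for i, (a1, b1) in enumerate(intervals):
        for a2, b2 in intervals[i + 1:]:
            nested = (a1 <= a2 and b2 <= b1) or (a2 <= a1 and b1 <= b2)
            disjoint = b1 <= a2 or b2 <= a1
            if not (nested or disjoint):
                return False
    return True

print(is_laminar([(0.0, 1.0), (0.1, 0.4), (0.6, 0.9), (0.15, 0.3)]))  # True
print(is_laminar([(0.0, 0.5), (0.3, 0.8)]))  # False: overlap, neither nested
```

A laminar family of intervals forms a forest under containment, which is why an optimal signal can be described just by the endpoints of the (smallest) intervals in which each message is sent.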
Furthermore, through examples we highlight interesting properties of the
optimal solution. Specifically, we show that restricting attention to public
signals (which reveal the same information to all types) can be strictly
suboptimal and in general achieves only a $1/n$ share of the optimal value for
the designer. In addition, unlike classical mechanism design settings, “non-
local” incentive compatibility constraints might bind in the optimal mechanism
(even if the receiver’s utility is supermodular in his actions and type).
Finally, under the optimal mechanism the actions of different types need not
be ordered for all states. For instance, there are states where the low and
the high types take a higher action than the intermediate types. This
precludes the optimality of “nested” information structures observed for
related information design settings where there are _only_ 2 actions (e.g.,
Guo and Shmaya, 2019).
To obtain our results, we study optimization problems over distributions,
where the objective is linear in the chosen distribution, and a distribution
is feasible if it satisfies (i) a majorization constraint (relative to an
underlying state distribution) as well as (ii) some side constraints. We
characterize properties of optimal solutions to such problems. In particular,
we show that one can find optimal distributions that redistribute the mass in
each interval where the majorization constraint does not bind, to at most
$n+2$ mass points in the interval, where $n$ is the number of side
constraints. Moreover, there exists a laminar partition of the underlying
state space such that the signal based on this laminar partition “generates”
the optimal distribution. Given the generality of the optimization
formulations described above, we suspect that these results may have
applications beyond the private persuasion problem studied in the paper. We
discuss some immediate applications in Section 5.
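The majorization (mean-preserving contraction) constraint can be sketched on a discrete grid; the grid and distributions below are illustrative, not from the paper:

```python
def integrated_cdf(pmf, step):
    """Cumulative integral of the CDF implied by a pmf on an evenly
    spaced grid (left Riemann sum with spacing `step`)."""
    cdf, acc, out = 0.0, 0.0, []
    for p in pmf:
        cdf += p
        acc += cdf * step
        out.append(acc)
    return out

def is_mpc(post_pmf, prior_pmf, step, tol=1e-9):
    """Discrete sketch of the majorization constraint: the posterior-mean
    distribution G is feasible iff its integrated CDF never exceeds the
    prior's and the two agree at the right endpoint (equal means)."""
    ig = integrated_cdf(post_pmf, step)
    if_ = integrated_cdf(prior_pmf, step)
    below = all(g <= f + tol for g, f in zip(ig, if_))
    return below and abs(ig[-1] - if_[-1]) < tol

# Uniform prior on {0, 0.25, 0.5, 0.75, 1}: pooling all mass at the mean
# (no disclosure) is feasible; a mean-preserving *spread* is not.
prior = [0.2] * 5
print(is_mpc([0.0, 0.0, 1.0, 0.0, 0.0], prior, 0.25))  # True
print(is_mpc([0.5, 0.0, 0.0, 0.0, 0.5], prior, 0.25))  # False
```

The feasible set is thus a polytope of distributions sandwiched between no disclosure and full disclosure, over which the linear objective and side constraints are imposed.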
#### Literature Review.
Following the seminal work by Kamenica and Gentzkow (2011), the literature on
Bayesian persuasion studies how a designer can use information to influence
the action taken by a receiver. This framework has proven useful to analyze a
variety of economic applications, such as the design of grading
systems (Ostrovsky and Schwarz, 2010; Boleslavsky and Cotton, 2015; Onuchic
and Ray, 2020), medical testing (Schweizer and Szech, 2018), stress tests
and banking regulation (Inostroza and Pavan, 2018; Goldstein and Leitner,
2018; Orlov et al., 2018), voter mobilization and gerrymandering (Alonso
and Câmara, 2016; Kolotilin and Wolitzky, 2020), as well as various
applications in social networks (Candogan and Drakopoulos, 2017; Candogan,
2019b). For an excellent survey of the literature see Kamenica (2019) and
Bergemann and Morris (2019).
Initial papers focused on the case where either the receiver possesses no
private information or the designer uses public signals (Brocas and Carrillo,
2007; Rayo and Segal, 2010; Kamenica and Gentzkow, 2011; Gentzkow and
Kamenica, 2016b). Kolotilin et al. (2017) extend this model by considering the
case where the receiver possesses private information about his preferences
and the designer maximizes the probability with which the receiver takes one
of _two_ actions. Assuming that the receiver’s payoff is linear and additive
in the state and his type they show that it is without loss of generality to
restrict attention to public signals in the sense that every outcome that can
be implemented with private signals can also be implemented with public
signals. Guo and Shmaya (2019) consider the case where the receiver possesses
private information about the state. They consider a general monotone utility
of the designer and receiver, but maintain the assumption of binary actions.
They show that even though not every outcome that can be implemented with
private signals can also be implemented with public signals, it is
nevertheless true that the _designer optimal_ outcome can always be
implemented with public signals.
We complement this line of the literature by studying the case where the
receiver can potentially choose among _more than two actions_ and find that –
contrasting with the binary action case – public signals are in general not
optimal. We maintain the assumption that the receiver’s payoff is linear in
the state, an assumption commonly made in the literature (Footnote 7: such
settings are for example considered in Ostrovsky and Schwarz (2010); Ivanov
(2015); Gentzkow and Kamenica (2016b); Kolotilin et al. (2017); Kolotilin
(2018); for a more detailed discussion of this setting and its economic
applications see Section 3.2 in Kamenica (2019)). In this setting, we
characterize the structure of
menus that optimally screen the receiver for his private information.
The Bayesian persuasion literature is closely related to the notion of “Bayes
correlated equilibria” (Bergemann and Morris, 2013, 2016). Bayes correlated
equilibria characterize the set of all outcomes that can be induced in a given
game by revealing a signal. Thus, a Bayesian persuasion problem can be solved
by maximizing over the set of Bayes correlated equilibria. While the basic
concept does not allow for private information of the receiver it can be
extended to account for this case (c.f. Section 6.1 in Bergemann and Morris,
2019).
In the present paper, the state belongs to a continuum and the designer’s
payoff depends on the induced posterior mean. Without private information, the
approaches in Bergemann and Morris (2016); Kolotilin (2018); Dworczak and
Martini (2019), can be used to characterize the optimal information structure.
These approaches lead to infinite-dimensional optimization problems even if
the receiver has finitely many actions. Alternatively, as established in
Gentzkow and Kamenica (2016b), it is possible to associate a convex function
with each information structure, and cast the information design problem as an
optimization problem over all convex functions that are sandwiched in between
two convex functions (associated with the full disclosure and no-disclosure
information structures), which also yields an infinite-dimensional
optimization problem. This constraint is equivalent to a majorization
constraint restricting the set of feasible posterior distributions. Arieli et
al. (2020) and Kleiner et al. (2020) characterize the extreme points of this
set. This characterization implies that in the case without private
information one can restrict attention to signals where each state lies in an
interval such that for all states in that interval at most $2$ messages are
sent – an insight also observed in Candogan (2019a, b) for settings with
finitely many actions.
Our results generalize this insight to the case where the receiver has private
information and show that each state lies in an interval in which at most
$n+2$ messages are used, where $n$ is the number of types of the receiver.
Furthermore, we show that in the case where the receiver has finitely many
actions, even if the state space is continuous and the receiver has private
information, the optimal menu can be obtained by solving a simple and
tractable _finite-dimensional_ convex optimization problem.
## 2 Model
#### States and Types
We consider an information design setting in which a designer (she) tries to
influence the action taken by a privately informed receiver (he). We call the
information controlled by the designer the state $\omega\in\Omega$ and the
private information of the receiver the type $\theta\in\Theta$. The state
$\omega$ lies in an interval $\Omega=[0,1]$ and is distributed according to
the (cumulative) distribution $F:\Omega\rightarrow[0,1]$, with density $f\geq
0$. (The assumption that the state lies in $[0,1]$ is a normalization that is
without loss of generality for distributions with bounded support, as we impose
no structure on the utility functions. Our arguments can be easily extended to
unbounded distributions with finite mean. Our approach and results can also be
extended to settings where the state distribution has mass points, at the
expense of notational complexity, as in this case the optimal mechanism may
also need appropriate randomization at the mass points.) The receiver’s type
$\theta$ lies in a finite set $\Theta=\\{1,\ldots,n\\}$ and we denote by
$g(\theta)>0$ the probability that the type equals $\theta\in\Theta$.
Throughout, we assume that the state $\omega$ and the type $\theta$ are
independent.
#### Signals and Mechanisms
The designer commits to a menu $M$ of signals (we follow the convention of
the Bayesian persuasion literature and call a Blackwell experiment a signal),
each revealing information about the state. We also refer to $M$ as a
_mechanism_. A signal $\mu$ assigns to each state $\omega$ a conditional
distribution $\mu(\cdot|\omega)\in\Delta(S)$ over the set of signal
realizations $S$, i.e.,
$\mu(\cdot|\omega)={\mathbb{P}\left[{s\in\cdot}\middle|{\omega}\right]}\,.$
We restrict attention to signals for which Bayes rule is well
defined (formally, that requires that
${\mathbb{P}_{\mu}\left[{\cdot}\middle|{s}\right]}$ is a regular conditional
probability), and denote by
${\mathbb{P}_{\mu}\left[{\cdot}\middle|{s}\right]}\in\Delta(\Omega)$ the
posterior distribution induced over states by observing the signal realization
$s$ of the signal $\mu$, and by
${\mathbb{E}_{\mu}\left[{\cdot}\middle|{s}\right]}$ the corresponding
expectation. For finitely many signal realizations
${\mathbb{P}_{\mu}\left[{\omega\leq
x}\middle|{s}\right]}=\frac{\int_{0}^{x}\mu(\\{s\\}|\omega)dF(\omega)}{\int_{0}^{1}\mu(\\{s\\}|\omega)dF(\omega)}\,.$
(Bayes Rule)
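A Monte Carlo sketch of (Bayes Rule) for a partitional signal under a uniform prior (the two-message partition is an invented example):

```python
import random

def posterior_means(partition, n_draws=100_000, seed=0):
    """Monte Carlo sketch of (Bayes Rule) for a partitional signal: the
    state is uniform on [0, 1), the message is the index of the partition
    cell containing the state, and we estimate E[omega | s] per message."""
    rng = random.Random(seed)
    totals, counts = {}, {}
    for _ in range(n_draws):
        w = rng.random()
        s = next(i for i, (a, b) in enumerate(partition) if a <= w < b)
        totals[s] = totals.get(s, 0.0) + w
        counts[s] = counts.get(s, 0) + 1
    return {s: totals[s] / counts[s] for s in totals}

# Two messages: "low" on [0, 0.5) and "high" on [0.5, 1).
means = posterior_means([(0.0, 0.5), (0.5, 1.0)])
print(means)  # roughly {0: 0.25, 1: 0.75}
```

Because the signal is deterministic given the state, the posterior is simply the prior restricted to the announced cell; the simulation recovers the cell midpoints as posterior means.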
#### Actions and Utilities
After observing his type, the receiver chooses a signal $\mu$ from $M$, and
observes its realization $s$ which we will call a _message_. Then the receiver
chooses an action $a$ in a compact set $A$ to maximize his expected utility
$\max_{a\in A}{\mathbb{E}\left[{u(a,\omega,\theta)}\middle|{s}\right]}\,.$
We make no additional assumptions on the set of actions $A$, and allow it to
be finite or infinite.
For a given mechanism $M$, a strategy of the receiver specifies the signal
$\mu^{\theta}$ chosen by him if he is of type $\theta$, as well as the action
$a^{\theta}(s)$ he takes upon observing the message $s$. A strategy $(\mu,a)$
is optimal for the receiver in the mechanism $M$ if $(i)$ the receiver’s
actions are optimal for all types $\theta\in\Theta$ and almost all messages
$s$ in the support of $\mu^{\theta}$
$\displaystyle a^{\theta}(s)\in\operatorname*{argmax}_{b\in
A}{\mathbb{E}_{\mu^{\theta}}\left[{u(b,\omega,\theta)}\middle|{s}\right]}\,,$
(Opt-Act)
and, $(ii)$ each type $\theta\in\Theta$ chooses the expected utility
maximizing signal (given the subsequently chosen actions)
$\displaystyle\mu^{\theta}\in\operatorname*{argmax}_{\nu\in
M}\int_{\Omega}\int_{S}\max_{b\in
A}{\mathbb{E}_{\nu}\left[{u(b,\omega,\theta)}\middle|{s}\right]}d\nu(s|\omega)dF(\omega)\,.$
(Opt-Signal)
One challenge in this environment is that the receiver can deviate by
simultaneously misreporting his type and taking an action different from the
one that would be optimal had he told the truth.
We denote by $v:A\times\Omega\times\Theta\to\mathbb{R}$ the designer’s
utility. For a given mechanism $M$ and optimal strategy of the receiver
$(\mu,a)$, the designer’s expected utility equals
$\sum_{\theta\in\Theta}g(\theta)\int_{\Omega}\int_{S}{\mathbb{E}_{\mu^{\theta}}\left[{v(a^{\theta}(s),\omega,\theta)}\middle|{s}\right]}d\mu^{\theta}(s|\omega)dF(\omega)\,.$
(1)
The designer’s information design problem is to pick a mechanism $M$ and an
optimal receiver strategy $(\mu,a)$ to maximize (1).
For tractability, we focus on preferences which are quasi-linear in the state.
###### Assumption 1 (Quasi-Linearity).
The receiver’s and designer’s utilities $u,v$ are quasi-linear in the state,
i.e., there exist functions
$u_{1},u_{2},v_{1},v_{2}:A\times\Theta\to\mathbb{R}$ continuous in $a\in A$
such that
$\displaystyle u(a,\omega,\theta)$
$\displaystyle=u_{1}(a,\theta)\omega+u_{2}(a,\theta)$ $\displaystyle
v(a,\omega,\theta)$ $\displaystyle=v_{1}(a,\theta)\omega+v_{2}(a,\theta)\,.$
Assumption 1 is natural in many economic situations and is commonly made in
the literature (c.f. Footnote 7). For example Kolotilin et al. (2017) assume
that there are two actions $A=\\{0,1\\}$ and that the receiver’s utility for
one action is zero, and for the other action it is the sum of the type and
state, which implies that $u(a,\omega,\theta)=a\times(\omega+\theta)$.
Our results generalize to the case where the preferences of the receiver and
the designer depend on the mean of some (potentially) non-linear
transformation of the state $h(\omega)$ in an arbitrary way. (To see this,
note that for every function $h:\Omega\to\mathbb{R}$ we can redefine the state
to be $\tilde{\omega}=h(\omega)$, and that we do not use the linearity in the
proofs of our results for general action sets.) What is crucial for our
results is that the receiver’s belief about the state influences the
preferences of the designer and the receiver only through the same real-valued
statistic. (For persuasion problems with a _multidimensional_ state see
Dworczak and Martini (2019) and Malamud et al. (2021).)
## 3 Analysis
Our analysis proceeds in two steps. First, we will restate the persuasion
problem with a privately informed receiver as a problem without private
information, which is subject to side-constraints on the receiver’s beliefs.
These side-constraints capture the restrictions placed on the mechanism due to
possible deviations of different types, both in choosing a signal from the
mechanism and in taking an action after observing the message. In the second
step we analyze the structure of optimal signals in persuasion problems with
side-constraints which then implies our results for the persuasion problem
with private information.
To simplify notation we define the receiver’s indirect utility
$\bar{u}:\Omega\times\Theta\to\mathbb{R}$ as his utility from taking an
optimal action given the posterior mean belief $m$ and type $\theta$
$\bar{u}(m,\theta)=\max_{a\in A}u(a,m,\theta)\,.$
We also define the designer’s indirect utility
$\bar{v}:\Omega\times\Theta\to\mathbb{R}$ as the maximal payoff she can obtain
from a type $\theta$ receiver with a posterior mean belief $m$ who takes an
optimal action
$\displaystyle\bar{v}(m,\theta)=\max_{a\in A(m,\theta)}v(a,m,\theta)\,,$
where $A(m,\theta)=\operatorname*{argmax}_{b\in A}u(b,m,\theta)$. We note that
$\bar{u}(\cdot,\theta)$ is continuous as it is the maximum over continuous
functions, and it is bounded as $u$ is bounded. Furthermore,
$\bar{v}(\cdot,\theta)$ is upper semicontinuous for every $\theta$.$^{13}$ As
$u$ is continuous in the action and $A$ is compact, it follows from Berge’s
Maximum Theorem that for every $\theta$ the correspondence $m\mapsto
A(m,\theta)$ is non-empty, upper hemicontinuous, and compact valued. This
implies that $m\mapsto\bar{v}(m,\theta)$ is upper semicontinuous for every
$\theta$. See, for example, Theorem 17.30 and Theorem 17.31 in Aliprantis and
Border (2013).
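As a concrete illustration of these definitions, the indirect utilities $\bar{u}$ and $\bar{v}$ can be computed directly when the action set is finite. The following Python sketch is ours, not part of the model (the name `indirect_utilities` is illustrative); it breaks ties within $A(m,\theta)$ in the designer’s favor, as the definition of $\bar{v}$ prescribes.

```python
def indirect_utilities(u, v, actions, m, theta, tol=1e-9):
    """Compute the receiver's indirect utility \bar{u}(m, theta) and the
    designer's indirect utility \bar{v}(m, theta) for a finite action set.

    The receiver best-responds to the posterior mean m; the designer then
    gets the best value among the receiver-optimal actions A(m, theta)."""
    best_u = max(u(a, m, theta) for a in actions)
    optimal = [a for a in actions if u(a, m, theta) >= best_u - tol]
    return best_u, max(v(a, m, theta) for a in optimal)


# Illustration with the two-action example of Kolotilin et al. (2017):
# u(a, m, theta) = a * (m + theta); the designer only values action 1.
u = lambda a, m, t: a * (m + t)
v = lambda a, m, t: a
```

When the receiver is exactly indifferent ($m+\theta=0$ in the example), the set $A(m,\theta)$ contains both actions and $\bar{v}$ picks the designer-preferred one.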
Denote by $G_{\theta}:\Omega\to[0,1]$ the cumulative distribution function
(CDF) associated with the mean of the receiver’s posterior belief after
observing the signal $\mu^{\theta}$
$G_{\theta}(x)={\mathbb{P}\left[{{\mathbb{E}_{\mu^{\theta}}\left[{\omega}\middle|{s}\right]}\leq
x}\right]}\,.$ (2)
We say that the distribution $G_{\theta}$ over posterior means is induced by
the signal $\mu^{\theta}$ if the equation above holds.
#### Incentive Compatibility
To solve the information design problem we first focus on direct mechanisms
where it is optimal for the receiver to truthfully reveal his type to the
designer.
###### Definition 1.
A mechanism $M=(\mu^{1},\ldots,\mu^{n})$ is a direct incentive compatible
mechanism if for all $\theta,\theta^{\prime}\in\Theta$
$\int_{\Omega}\bar{u}(s,\theta)dG_{\theta}(s)\geq\int_{\Omega}\bar{u}(s,\theta)dG_{\theta^{\prime}}(s)\,.$
(IC)
The incentive compatibility constraint (IC) requires each type $\theta$ of the
receiver to achieve a weakly higher expected payoff by observing the signal
$\mu^{\theta}$ designated for that type instead of observing any other signal
$\mu^{\theta^{\prime}}$ offered by the mechanism. Since the designer can
always remove signals that are not picked by _any_ receiver type without
affecting the outcome (as this relaxes the incentive constraints), it is
without loss of generality to restrict attention to incentive compatible
direct mechanisms.
###### Lemma 1 (Revelation Principle).
For every mechanism $M$ and associated optimal strategy of the receiver there
exists a direct incentive compatible mechanism that leaves each type of the
receiver and the designer with the same utility.
Motivated by this, in the remainder of the paper we focus only on direct
incentive compatible mechanisms, and refer to them simply as mechanisms.
#### Feasible Posterior Mean Distributions
As the payoffs depend only on the mean of the receiver’s posterior belief, but
not the complete distribution, a natural question is which distributions over
posterior means the designer can induce using a Blackwell signal. An important
notion to address this question is the notion of mean preserving contractions
(MPC). A distribution over states $H:\Omega\to[0,1]$ is a MPC of a
distribution $\tilde{H}:\Omega\to[0,1]$, expressed as $\tilde{H}\preceq H$, if
and only if for all $\omega$
$\int_{\omega}^{1}H(z)dz\geq\int_{\omega}^{1}\tilde{H}(z)dz,$ (MPC)
and the inequality holds with equality for $\omega=0$.
To see that $F\preceq G$ is necessary for $G$ to be the distribution of the
posterior mean induced by some signal note that for every convex function
$\phi:[0,1]\to\mathbb{R}$ we have that
$\int_{0}^{1}\phi(z)dF(z)={\mathbb{E}\left[{\phi(\omega)}\right]}={\mathbb{E}\left[{{\mathbb{E}_{\mu}\left[{\phi(\omega)}\middle|{s}\right]}}\right]}\geq{\mathbb{E}\left[{\phi({\mathbb{E}_{\mu}\left[{\omega}\middle|{s}\right]})}\right]}=\int_{0}^{1}\phi(z)dG(z)\,.$
Here, the second equality is implied by the law of iterated expectations and
the inequality follows from Jensen’s inequality. Taking
$\phi(z)=\max\\{0,z-\omega\\}$ then yields that $F\preceq G$. This
condition is not only necessary, but also sufficient, see, e.g., Blackwell
(1950); Blackwell and Girshick (1954); Rothschild and Stiglitz (1970) and
Gentzkow and Kamenica (2016b) for an application to persuasion problems.
###### Lemma 2.
There exists a signal $\mu$ that induces the distribution $G$ over posterior
means if and only if $F\preceq G$.
Note that this result readily implies that a mechanism inducing the
distributions $(G_{\theta})_{\theta\in\Theta}$ via (2) exists if and only if
$F\preceq G_{\theta}$ for all $\theta\in\Theta$. Thus, instead of considering
the optimization problem over signals, we can equivalently optimize over
feasible distributions of posterior means.
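For finite distributions, the mean preserving contraction order can be checked numerically via the convex-function characterization used above, testing $\phi_{t}(z)=\max\\{0,z-t\\}$ on a grid of thresholds. A small Python sketch (ours, with illustrative names):

```python
def is_mpc(g_support, g_probs, f_support, f_probs, grid=None):
    """Numerically test whether G is a mean preserving contraction of F
    (i.e. F \preceq G): the means must agree, and for every threshold t
    the 'excess' E[max(0, X - t)] under G must not exceed that under F.
    Distributions are finite, given as (support, probabilities)."""
    def mean(sup, pr):
        return sum(s * p for s, p in zip(sup, pr))

    def excess(sup, pr, t):
        return sum(p * max(0.0, s - t) for s, p in zip(sup, pr))

    if abs(mean(g_support, g_probs) - mean(f_support, f_probs)) > 1e-9:
        return False
    grid = grid or [i / 100 for i in range(101)]
    return all(excess(g_support, g_probs, t)
               <= excess(f_support, f_probs, t) + 1e-9 for t in grid)
```

Full pooling (a point mass at the prior mean) is always a contraction of the prior, and the prior itself is the most dispersed feasible distribution; anything more dispersed than the prior fails the test.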
#### The Optimal Mechanism
Combining the characterization of incentive compatibility from Lemma 1 and the
characterization of feasibility from Lemma 2 we obtain a characterization of
optimal direct mechanisms.
###### Proposition 1 (Optimal Mechanisms).
The direct mechanism $(\mu^{1},\mu^{2},\ldots,\mu^{n})$ is incentive
compatible and maximizes the designer’s payoff if and only if the associated
$(G_{1},\ldots,G_{n})$ solve
$\displaystyle\max_{G_{1},\ldots,G_{n}}\quad\sum_{\theta\in\Theta}g(\theta)\int_{\Omega}\bar{v}(s,\theta)dG_{\theta}(s)$
$\displaystyle\text{s.t.}\quad\int_{0}^{1}\bar{u}(s,\theta)dG_{\theta}(s)\geq\int_{0}^{1}\bar{u}(s,\theta)dG_{\theta^{\prime}}(s)\qquad\forall\,\theta,\theta^{\prime}\in\Theta\,,$ (3)
$\displaystyle\phantom{\text{s.t.}\quad}G_{\theta}\succeq F\qquad\forall\,\theta\in\Theta\,.$ (4)
The above problem is a simplification of the original information design
problem in two dimensions: First, there are no actions of the receiver in the
above problem. Second, instead of optimizing over signals, which specify the
distribution over messages conditional on each state, the above formulation
involves only the unconditional distributions over posterior means. The main
challenge in the above optimization problem is that it involves maximization
over a vector of distributions $(G_{1},\ldots,G_{n})$ where the set of
feasible components is strongly interdependent due to (3). This
interdependence is naturally caused by the incentive compatibility constraint
as the designer cannot pick the signal she provides to one type $\theta$
without taking into account the fact that this might give another type
$\theta^{\prime}$ incentives to misreport.
Our next result decouples the above optimization problem into $n$ independent
problems, one for each type $\theta$ of the receiver. Let
$(G_{1}^{\ast},\ldots,G_{n}^{\ast})$ be an optimal solution to the problem
given in Proposition 1. We define $e_{\theta}$ as the value the type $\theta$
receiver could achieve by deviating optimally from reporting his type
truthfully:
$e_{\theta}=\max_{\theta^{\prime}\neq\theta}\int_{0}^{1}\bar{u}(s,\theta)dG^{\ast}_{\theta^{\prime}}(s)\,.$
(5)
We also define $d_{\theta}$ to be the value the receiver gets when reporting
his type truthfully
$d_{\theta}=\int_{0}^{1}\bar{u}(s,\theta)dG^{\ast}_{\theta}(s)\,.$ (6)
We note that $e_{\theta}$ and $(d_{\eta})_{\eta\neq\theta}$ are independent of the signal observed
by type $\theta$ in the optimal mechanism. We can thus characterize
$G_{\theta}^{\ast}$ by optimizing over $G_{\theta}$ while taking
$G_{-\theta}^{\ast}$ as given. This leads to the characterization given in our
next lemma.
###### Lemma 3.
If a mechanism maximizes the payoff of the designer, then for any type
$\theta\in\Theta$ the posterior mean distribution $G_{\theta}^{\ast}$ solves
$\displaystyle\max_{H\succeq F}\quad\int_{\Omega}\bar{v}(s,\theta)dH(s)$ (7)
$\displaystyle\text{s.t.}\quad\int_{0}^{1}\bar{u}(s,\theta)dH(s)\geq e_{\theta}\,,$ (8)
$\displaystyle\phantom{\text{s.t.}\quad}\int_{0}^{1}\bar{u}(s,\eta)dH(s)\leq d_{\eta}\qquad\forall\,\eta\neq\theta\,.$ (9)
In this decomposition we maximize the payoff the designer receives from each
type $\theta$ of the receiver separately under the constraint (8). This
constraint ensures that type $\theta$ does not want to deviate and report to
be another type. Similarly, the constraint (9) ensures that no other type
wants to report his type as $\theta$. We note that (8) and (9) encode the
incentive constraints given in (3) that restrict the signal of type $\theta$.
By considering the optimal deviation in (5) we reduced the number of incentive
constraints associated with each type from $2(n-1)$ to $n$.
#### Laminar Partitional Signals
We next describe a small class of signals, laminar partitional signals, and
show that there always exists an optimal signal within this class. We first
define partitional signals:
###### Definition 2 (Partitional Signal).
A signal is partitional if for each message $s\in S$ there exists a set
$P_{s}\subseteq\Omega$ such that $\mu(\\{s\\}|\omega)=\mathbf{1}_{\omega\in
P_{s}}$.
A partitional signal partitions the state space into sets $(P_{s})_{s}$ and
reveals to the receiver the set in which the state $\omega$ lies. Partitional
signals are thus noiseless in the sense that the mapping from the state to the
signal is deterministic. A simple example of signals which are not partitional
are normal signals where the signal equals the state $\omega$ plus normal
noise and thus is random conditional on the state.
Denote by $cx$ the convex hull of a set. The next definition imposes further
restrictions on the partition structure.
###### Definition 3 (Laminar Partitional Signal).
A partition $(P_{s})_{s}$ is laminar if $cxP_{s}\cap
cxP_{s^{\prime}}\in\\{\emptyset,cxP_{s},cxP_{s^{\prime}}\\}$ for any
$s,s^{\prime}$. A partitional signal is laminar if its associated partition is
laminar.
The restrictions imposed by laminar partitional signals are illustrated in
Figure 1.
Figure 1: The partition of the state space $\Omega=[0,1]$ on the left is not
laminar, while the partition on the right is laminar, as the convex hulls of
any two of the sets $P_{1},P_{2},P_{3}$ are either nested or have an empty
intersection.
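Laminarity can be verified from the convex hulls of the partition elements alone. A short Python sketch (ours, for illustration), with each hull represented as an interval $(\mathrm{lo},\mathrm{hi})$:

```python
def is_laminar(hulls):
    """Check the laminarity condition of Definition 3 on the convex hulls
    of the partition elements, each given as an interval (lo, hi): any two
    hulls must be nested or (up to shared endpoints) disjoint."""
    for i, (a, b) in enumerate(hulls):
        for c, d in hulls[i + 1:]:
            nested = (a <= c and d <= b) or (c <= a and b <= d)
            disjoint = b <= c or d <= a
            if not (nested or disjoint):
                return False
    return True
```

Qualitatively, two properly overlapping hulls (as in the left panel of Figure 1) fail the test, while nested or disjoint hulls (right panel) pass.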
Our next result establishes that there always exists an optimal policy such
that the signal of each type is laminar partitional. To simplify notation we
denote by $P^{\theta}(\omega)$ the partition element containing $\omega$,
i.e., the unique set $P_{s}^{\theta}$ with $\omega\in P_{s}^{\theta}$; this is
the set of states in which the same message is sent as in state $\omega$, for
a partitional signal with partition $P^{\theta}=(P^{\theta}_{s})_{s}$.
###### Theorem 1.
There exists an optimal mechanism such that the signal observed by each type
$\theta$ is laminar partitional with partition $P^{\theta}$. Furthermore, for
each type $\theta$ there exists a countable collection of intervals
$I^{\theta}_{1},I^{\theta}_{2},\ldots$ such that
1. (i)
$\omega\notin\cup_{k}I^{\theta}_{k}$ implies
$P^{\theta}(\omega)=\\{\omega\\}$;
2. (ii)
$\omega\in I^{\theta}_{k}$ implies that $P^{\theta}(\omega)\subseteq
I^{\theta}_{k}$ and in $I^{\theta}_{k}$ at most $n+2$ distinct messages are
sent, i.e.
$|\\{s\colon P^{\theta}_{s}\cap I^{\theta}_{k}\neq\emptyset\\}|\leq n+2\,.$
The proof of Theorem 1 is based on a result that characterizes the solutions
to optimization problems over mean preserving contractions under side
constraints. As this result might be of independent interest we explain it in
Section 3.1.
Theorem 1 drastically simplifies the search for an optimal signal. First, it
implies that the designer needs to consider only partitional signals. This
means that there always exists an optimal signal that only reveals to each
type $\theta$ a subset $P^{\theta}(\omega)\subseteq\Omega$ of the state space
in which the state lies. Furthermore, this subset is a deterministic function
of the state. Theorem 1 thus implies that the designer does not need to rely
on random signals whose distribution conditional on the state could be
arbitrarily complex.
The fact that the partition can be chosen to be laminar is a further
simplification. It implies a partial order or a tree structure on the signals
such that a message $s$ is larger in this partial order than $s^{\prime}$
whenever the convex hull of $P^{\theta}_{s}$ contains the convex hull of
$P^{\theta}_{s^{\prime}}$. The laminar partition $P^{\theta}_{s}$ can be
generated by taking the convex hull of the set where a message is sent and
subtracting the convex hull of all messages that are lower in this order,
i.e.,
$P^{\theta}_{s}=cxP^{\theta}_{s}\,\,\Big{\backslash}\,\,\,\bigcup_{s^{\prime}\colon
cxP^{\theta}_{s^{\prime}}\subset
cxP^{\theta}_{s}}cxP^{\theta}_{s^{\prime}}\,.$
Thus, the partition $P^{\theta}$ can be recovered from the intervals
$cxP^{\theta}$.
To see why Theorem 1 provides a drastic reduction in complexity consider the
case where the receiver chooses among $|A|<\infty$ actions. As it is never
optimal to reveal to the receiver more information than the optimal action for
each type of receiver, the optimal signal uses at most $|A|$ messages. Since
the optimal signal is partitional these messages correspond to $|A|$ subsets
of the state space. Recall that the optimal partition is laminar, and hence
each subset can be identified with its convex hull, which is an interval. As
each interval is completely described by its endpoints it follows that each
laminar partitional signal can be identified with a point in
$\mathbb{R}^{2|A|}$. Thus, the problem of finding the optimal laminar
partitional signal can be written as an optimization problem over
$\mathbb{R}^{2|A|\times|\Theta|}$, and hence be tackled with standard finite
dimensional optimization methods. This contrasts with the space of signals
which is uncountably infinite even if one restricts to finitely many messages.
As we illustrate in Section 3.2, there is an alternative “reduced form”
approach which has the additional advantage that it leads to a convex program.
This approach involves restating the optimization problem of Proposition 1 as
a finite dimensional convex problem, solving for the optimal posterior mean
distributions, and subsequently constructing the end points of intervals that
yield laminar partitions consistent with the optimal distributions (through a
solution of a simple system of equations).
### 3.1 Maximizing over MPCs Under Constraints
This section discusses an abstract mathematical result about optimization
under constraints that implies Theorem 1. We discuss this result separately as
similar mathematical problems emerge in economic problems other than the
Bayesian persuasion application. For example, Kleiner et al. (2020) discuss how
optimization problems under mean preserving contraction constraints naturally
arise in delegation problems. We leave the exploration of other applications
of this mathematical result for future work to keep the exposition focused on
the persuasion problem.
Consider the problem of maximizing the expectation of an arbitrary upper
semicontinuous function $v:[0,1]\to\mathbb{R}$ over all distributions $G$ that are
mean-preserving contractions of a given distribution $F:[0,1]\to[0,1]$ subject
to $n\geq 0$ constraints
$\displaystyle\max_{G\succeq F}\quad\int_{0}^{1}v(s)dG(s)$ (10)
$\displaystyle\text{s.t.}\quad\int_{0}^{1}u_{i}(s)dG(s)\geq 0\text{ for }i\in\\{1,\ldots,n\\}\,.$
Throughout, we assume that the functions $u_{i}:[0,1]\to\mathbb{R}$ are
continuous. The next result establishes conditions that need to be satisfied
by any solution of the problem (10).
###### Proposition 2.
The problem (10) admits a solution. For any solution $G$ there exists a
countable collection of disjoint intervals $I_{1},I_{2},\ldots$ such that $G$
equals distribution $F$ outside the intervals, i.e.,
$G(x)=F(x)\text{ for }x\notin\cup_{j}I_{j}$
and each interval $I_{j}=(a_{j},b_{j})$ redistributes the mass of $F$ among at
most $n+2$ mass points $m_{1,j},m_{2,j},\ldots,m_{n+2,j}\in I_{j}$
$G(x)=G(a_{j})+\sum_{r=1}^{n+2}p_{r,j}\mathbf{1}_{m_{r,j}\leq x}\text{ for
}x\in I_{j}$
with $\sum_{r=1}^{n+2}p_{r,j}=F(b_{j})-F(a_{j})$ and the same expectation
$\int_{I_{j}}xdG(x)=\int_{I_{j}}xdF(x)$.
We prove Proposition 2 in the Appendix. The existence of an optimal solution
follows from standard arguments exploiting the compactness of the feasible set
of (10). To establish the remaining claims of Proposition 2, we first fix an
optimal solution, and consider intervals where the mean preserving contraction
constraint (MPC) does not bind at this solution. As both the constraints and
the objective function in (10) are linear functionals in the CDF, we can
optimize over (any subinterval of) such an interval while fixing the solution
on its complement, to obtain another optimal solution. In this auxiliary
optimization problem the mean-preserving contraction constraint is relaxed to
a constraint fixing the conditional mean of the distribution on this interval.
This problem is now a maximization problem over distributions subject to the
$n$ original constraints and an additional identical-mean constraint. It was
shown in Winkler (1988) that each extreme point of the set of distributions
satisfying a given number $k$ of constraints consists of at most $k+1$ mass
points. For our auxiliary optimization problem, this ensures the existence of
an optimal solution with at most $n+2$ mass points. A
challenge is to establish that the solution to the auxiliary problem is
feasible and satisfies the mean preserving contraction constraint. The main
idea behind this step is to show that if it is not feasible, then one can
construct an optimal solution where the MPC constraint binds on a larger set.
However, this can never be the case when we start with an optimal solution
where the set on which the MPC constraint binds is maximal (which exists by
Zorn’s lemma).
Combining the initial optimal solution, with the optimal solution for the
auxiliary optimization problem, we obtain a new solution that satisfies the
conditions of the proposition over this interval. Repeating this argument for
all intervals inductively, it follows that the claim holds for the entire
support.
#### Laminar Structure
Let $\omega$ be a random variable distributed according to $F$. Our next
result shows that each interval $I_{j}$ in Proposition 2 admits a laminar
partition such that when the realization of $\omega$ belongs to some $I_{j}$,
revealing the partition element that contains it and simply revealing $\omega$
when it does not belong to any $I_{j}$ induces a posterior mean distribution,
given by $G$. Proposition 2 together with this result yields the optimality of
partitional signals as stated in Theorem 1.
###### Proposition 3.
Consider the setting of Proposition 2 and let $\omega$ be distributed
according to $F$. For each interval $I_{j}$ there exists a laminar partition
$\Pi_{j}=(\Pi_{j,k})_{k}$ such that for all $k\in\\{1,\ldots,n+2\\}$
${\mathbb{P}\left[{\omega\in\Pi_{j,k}}\right]}=p_{k,j}\,\,\,\,\,\,\text{ and
}\,\,\,\,\,\,{\mathbb{E}\left[{\omega}\middle|{\omega\in\Pi_{j,k}}\right]}=m_{k,j}\,.$
(11)
The proof of this claim relies on a partition lemma (stated in the appendix),
which strengthens this result by shedding light on how the partition $\Pi_{j}$
can be constructed. The proof of the latter lemma is inductive over the number
of mass points. When $G$ given in Proposition 2 has two mass points in
$I_{j}$, the partition element that corresponds to one of these mass points is
an interval and the other one is the complement of this interval relative to
$I_{j}$. Moreover, it can be obtained by solving a system of equations,
expressed in terms of the end points of this interval, that satisfy condition
(11) of Proposition 3. As this partition is laminar this yields the result for
the case where there are only 2 mass points in $I_{j}$.
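For a uniform prior the two-mass-point construction just described reduces to solving a simple system for the endpoints of a subinterval centered at the mass point. A Python sketch under the assumption $F=\mathrm{Uniform}[0,1]$ (the function name is ours):

```python
def two_point_partition(a, b, p1, m1):
    """Two-mass-point case of the partition lemma for F = Uniform[0,1]:
    inside the pooling interval (a, b), carve out a subinterval carrying
    probability mass p1 with conditional mean m1; the complement relative
    to (a, b) carries the remaining mass and mean. Returns the subinterval
    and the (mass, mean) of the complement."""
    c, d = m1 - p1 / 2.0, m1 + p1 / 2.0   # uniform: mean is the midpoint
    assert a <= c and d <= b, "the mass point must fit inside (a, b)"
    p2 = (b - a) - p1
    # the total mean mass over (a, b) is (b^2 - a^2)/2; it is preserved
    m2 = ((b * b - a * a) / 2.0 - p1 * m1) / p2
    return (c, d), p2, m2
```

The resulting two elements, an interval and its complement within $(a,b)$, have nested convex hulls and hence form a laminar partition.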
When $G$ consists of $k>2$ mass points in $I_{j}$ one can find a subinterval
such that the expected value of $\omega\sim F$ conditional on $\omega$ being
inside this subinterval equals the value of the largest mass point and the
probability assigned to the interval equals the probability $G$ assigns to the
largest mass point. Conditional on $\omega$ being outside this interval, the
distribution thus only admits $k-1$ mass points and is a mean preserving
contraction of the distribution $F$. This allows us to invoke the induction
hypothesis to generate a laminar partition such that revealing in which
partition element $\omega$ lies generates the desired conditional distribution
of the posterior mean. Finally, as this laminar partition combined with the
subinterval associated with the largest mass point of $G$ in $I_{j}$ is again
a laminar partition, we obtain the result for distributions consisting of
$k>2$ mass points.
The proof of Proposition 3 (and Lemma 10 of the Appendix) details these
arguments, and also offers an algorithm for constructing a laminar partition
satisfying (11).
### 3.2 Finitely Many Actions
Next, we return to the persuasion problem and focus on the special case where
the set of the receiver’s actions is finite
$A=\\{1,\ldots,|A|\\}\,,$
and the designer’s value $v(a,\theta)$ depends only on the action and the
receiver’s type. As a consequence of Assumption 1 there exists a partition of
$\Omega$ into intervals $(B_{a,\theta})_{a\in A}$ such that action $a$ is
optimal for the receiver of type $\theta$ if and only if his mean belief is in
the interval $B_{a,\theta}$. By relabeling the actions for each type we can
without loss assume that the intervals
$B_{a,\theta}=[b_{a-1,\theta},b_{a,\theta}]$ are ordered$^{14}$
$0=b_{0,\theta}\leq b_{1,\theta}\leq\ldots\leq b_{|A|,\theta}=1\,.$
$^{14}$If an action $a$ is never optimal for a type $\theta$, set
$b_{a-1,\theta}=b_{a,\theta}=b_{|A|,\theta}=1$. This is without loss as no
signal induces a posterior belief of $1$ with strictly positive probability,
and the action thus plays no role in the resulting optimization problem.
To abbreviate notation we define coefficients $c_{a,\theta}=u_{1}(a,\theta)$
and $h_{a,\theta}=u_{2}(a,\theta)+u_{1}(a,\theta)b_{a,\theta}$ and get that
the indirect utility of the receiver equals
$\displaystyle\bar{u}(m,\theta)=h_{a,\theta}+c_{a,\theta}(m-b_{a,\theta})\qquad\text{for all }m\in B_{a,\theta}\,.$
As it is never optimal to reveal unnecessary information to the receiver, a
“recommendation mechanism” in which the message equals the action the receiver
is supposed to take is optimal:
###### Lemma 4.
There exists an optimal mechanism where the signal realization equals the
action taken by the receiver, i.e., $S_{\theta}=A$ and
${\mathbb{E}_{\mu^{\theta}}\left[{\omega}\middle|{s=a}\right]}\in
B_{a,\theta}$ for all $a\in A,\theta\in\Theta$.
In what follows we restrict attention to such recommendation mechanisms and
denote by $p_{a,\theta}$ the probability that action $a$ is recommended to
type $\theta$ and by
$m_{a,\theta}={\mathbb{E}_{\mu_{\theta}}\left[{\omega}\middle|{s=a}\right]}\in
B_{a,\theta}$ the posterior mean induced by this recommendation.
Given a recommendation mechanism, the expected payoff of a type $\theta$
receiver from reporting his type as $\theta^{\prime}$ equals
$\displaystyle\int_{\Omega}\bar{u}(s,\theta)dG_{\theta^{\prime}}(s)$
$\displaystyle=\sum_{a^{\prime}\in
A}p_{a^{\prime},\theta^{\prime}}\,\bar{u}(m_{a^{\prime},\theta^{\prime}},\theta).$
(12)
Defining
$z_{a,\theta}=m_{a,\theta}p_{a,\theta}$
to be the product of the posterior mean $m_{a,\theta}$ induced by the signal
$a$ and the probability $p_{a,\theta}$ of that signal, the incentive
compatibility constraint for type $\theta$ can be more explicitly expressed as
follows:
$\sum_{a\in A}\left[h_{a,\theta}p_{a,\theta}+c_{a,\theta}(z_{a,\theta}-b_{a,\theta}p_{a,\theta})\right]\geq\sum_{a^{\prime}\in A}\left[\max_{a\in A}h_{a,\theta}p_{a^{\prime},\theta^{\prime}}+c_{a,\theta}(z_{a^{\prime},\theta^{\prime}}-b_{a,\theta}p_{a^{\prime},\theta^{\prime}})\right]\quad\forall\,\theta^{\prime}\in\Theta\,.$
(13)
Here, the left hand side is the payoff of this type from reporting his type
truthfully and subsequently following the recommendation of the mechanism,
whereas the right hand side is the payoff from reporting type as
$\theta^{\prime}$ and taking the best possible action (possibly different than
the recommendation of the mechanism) given the signal realization. Recall that
the distribution $G_{\theta}$ is an MPC of $F$ for all $\theta$ (Lemma 2). Our
next lemma establishes that the MPC constraints also admit an equivalent
restatement in terms of $(p,z)$.
###### Lemma 5.
$G_{\theta}\succeq F$ if and only if, for every $\ell\in\\{1,\ldots,|A|\\}$,
$\sum_{a\geq\ell}z_{a,\theta}\leq\int_{1-\sum_{a\geq\ell}p_{a,\theta}}^{1}F^{-1}(x)dx\,,$
where the inequality holds with equality for $\ell=1$.
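For the special case $F=\mathrm{Uniform}[0,1]$, where $F^{-1}(x)=x$ and hence $\int_{1-q}^{1}F^{-1}(x)dx=q-q^{2}/2$, the constraints of Lemma 5 can be checked directly. A Python sketch (names ours; action index $0$ plays the role of $\ell=1$):

```python
def mpc_bounds_ok(p, z, tol=1e-9):
    """Check the restated MPC constraints of Lemma 5 for the special case
    F = Uniform[0,1]: for every suffix of actions a >= l, the mean mass
    sum(z) must not exceed q - q^2/2 where q = sum(p), with equality when
    the suffix covers all actions (l = 1)."""
    for l in range(len(p)):
        q = sum(p[l:])                 # probability of actions a >= l
        zz = sum(z[l:])                # mean mass on those actions
        bound = q - q * q / 2.0
        if l == 0 and abs(zz - bound) > tol:
            return False               # equality required for l = 1
        if zz > bound + tol:
            return False
    return True
```

For instance, full revelation around the threshold $0.5$ corresponds to $p=(0.5,0.5)$ and $z=(\int_{0}^{0.5}x\,dx,\int_{0.5}^{1}x\,dx)=(0.125,0.375)$, which satisfies every bound with equality; shifting mean mass to the top action beyond the top-quantile bound is infeasible.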
Our observations so far establish that the incentive compatibility and MPC
constraints can both be expressed in terms of the $(p,z)$ tuple. As a
consequence of these observations we can reformulate the problem of obtaining
optimal mechanisms, given in Proposition 1, in terms of $(p,z)$ as follows:
$\displaystyle\max_{\begin{subarray}{c}p\in(\Delta^{|A|})^{n}\\\ z\in\mathbb{R}^{|A|\times|\Theta|}\\\ y\in\mathbb{R}^{|A|\times|\Theta|^{2}}\end{subarray}}\quad\sum_{\theta\in\Theta}g(\theta)\,\sum_{a\in A}p_{a,\theta}v(a,\theta)$ (OPT)
$\displaystyle\text{s.t.}\quad\sum_{a\geq\ell}z_{a,\theta}\leq\int_{1-\sum_{a\geq\ell}p_{a,\theta}}^{1}F^{-1}(x)dx\qquad\forall\,\theta\in\Theta,\ \ell>1\,,$
$\displaystyle\phantom{\text{s.t.}\quad}\sum_{a\in A}z_{a,\theta}=\int_{0}^{1}F^{-1}(x)dx\qquad\forall\,\theta\in\Theta\,,$
$\displaystyle\phantom{\text{s.t.}\quad}h_{a,\theta}p_{a^{\prime},\theta^{\prime}}+c_{a,\theta}\left(z_{a^{\prime},\theta^{\prime}}-b_{a,\theta}p_{a^{\prime},\theta^{\prime}}\right)\leq y_{a^{\prime},\theta,\theta^{\prime}}\qquad\forall\,\theta,\theta^{\prime}\in\Theta,\ a,a^{\prime}\in A\,,$
$\displaystyle\phantom{\text{s.t.}\quad}\sum_{a^{\prime}\in A}y_{a^{\prime},\theta,\theta^{\prime}}\leq\sum_{a\in A}h_{a,\theta}p_{a,\theta}+c_{a,\theta}\left(z_{a,\theta}-b_{a,\theta}p_{a,\theta}\right)\qquad\forall\,\theta,\theta^{\prime}\in\Theta\,,$
$\displaystyle\phantom{\text{s.t.}\quad}p_{a,\theta}b_{a-1,\theta}\leq z_{a,\theta}\leq p_{a,\theta}b_{a,\theta}\qquad\forall\,\theta\in\Theta,\ a\in A\,.$
In this formulation, the first two constraints are the restatement of the mean
preserving contraction constraints (see Lemma 5). The value
$y_{a^{\prime},\theta,\theta^{\prime}}$ corresponds to the utility the agent
of type $\theta$ gets from observing the signal associated with type
$\theta^{\prime}$ and taking the optimal action when the recommended action is
$a^{\prime}$. It can be easily checked that
$y_{a^{\prime},\theta,\theta^{\prime}}=\max_{a\in
A}h_{a,\theta}p_{a^{\prime},\theta^{\prime}}+c_{a,\theta}\left({z_{a^{\prime},\theta^{\prime}}}-b_{a,\theta}{p_{a^{\prime},\theta^{\prime}}}\right)$
at an optimal solution.$^{15}$ This is because when
$y_{a^{\prime},\theta,\theta^{\prime}}$ is strictly larger than the right hand
side, it can be decreased to construct another feasible solution with the same
objective. Thus, it follows that the third and fourth constraints restate the
incentive compatibility constraint (13), by using
$y_{a^{\prime},\theta,\theta^{\prime}}$ to capture the summands in the right
hand side of the aforementioned constraint. Finally, the last constraint
captures that the posterior mean $z_{a,\theta}/p_{a,\theta}$ must lie in
$B_{a,\theta}$ for the action $a$ to be optimal.
It is worth pointing out that (OPT) is a finite dimensional _convex_
optimization problem. This is unlike the infinite dimensional optimization
formulation given in Proposition 1. Note that (OPT) is restating the
designer’s problem in terms of the $(p,z)$ tuple. Two points about this
reformulation are important to highlight. First, an alternative approach would
involve optimizing directly over distributions that satisfy the conditions of
Lemma 4. This could be formulated as a finite dimensional problem as well,
e.g., searching over the location $m_{a,\theta}$ and weight $p_{a,\theta}$ of
each mass point. However, this approach does not yield a convex optimization
formulation as the $(p,m)$ tuples that satisfy the conditions of Lemma 4 do
_not_ constitute a convex set. The formulation in (OPT) amounts to a change of
variables that yields a convex program.
Second, given an optimal solution to (OPT), the optimal distributions
$G_{1},\ldots,G_{n}$ can be obtained straightforwardly by placing a mass point
with weight $p_{a,\theta}$ at $z_{a,\theta}/p_{a,\theta}$ for each action $a$.
Moreover, as discussed in Section 3.1, an optimal mechanism that induces these
distributions can be obtained by constructing a laminar partition of the state
space (by following the approach in Proposition 3 and Lemma 10 of the
Appendix). These observations imply our next proposition.
###### Proposition 4.
For every optimal solution $(p,z,y)$ of (OPT) the recommendation mechanism
which recommends the action $a$ for type $\theta$ with probability
$p_{a,\theta}$ and induces a posterior mean of $z_{a,\theta}/p_{a,\theta}$ is
an optimal mechanism. Moreover, there exists such an optimal mechanism with
laminar partitional signals.
## 4 Examples
### 4.1 Structure of Optimal Mechanisms
We next illustrate our results through a simple example. This example
generalizes the buyer-seller setting from Kolotilin et al. (2017), who assume
single unit demand, to the case where the buyer can demand more than one unit
and has a decreasing marginal utility in the number of units. As our example
coincides with their setup for the case of a single unit this example allows
us to highlight the effects of the receiver having more than two actions.
In this example, the receiver is a buyer who decides how many units of an
indivisible good to purchase. He is privately informed about his type which
captures his taste for the good. The designer is a seller who controls
information about the quality of the good, captured by the state. We assume
that prices are linear in consumption and we set the price of one unit of the
good to $\nicefrac{{10}}{{3}}$. The utility the buyer derives from the $k$-th
good is given by$^{16}$
$(\theta+\omega)\max\\{5-k,0\\}\,.$
$^{16}$Assuming that the receiver’s preferences equal the sum of $\omega$ and
$\theta$ is without loss of generality in the two-action case if his
preferences are linear in $\omega$ and the designer prefers the receiver to
take the action optimal for high states; cf. the discussion in Section 3.3 of
Kolotilin et al. (2017).
His marginal utility of consumption decreases linearly in the number of goods,
increases in the good’s quality $\omega$, and in his taste parameter $\theta$.
The quality of the good is distributed uniformly in $[0,1]$ and the buyer
either derives a low $\theta=0.3$, intermediate $\theta=0.45$, or high value
$\theta=0.6$ from the good with equal probability. The seller commits to a
menu of signals, one for each type $\theta$ of the buyer, to maximize the
(expected) number of units sold.
The indirect utility $\bar{u}(m,\theta)$ of the buyer is displayed in Figure
2. When the expected quality $m$ of the good is low, all types find it optimal
to purchase zero units, yielding a payoff of zero. As the expected quality
improves, the purchase quantity increases. In Figure 2, the curve for each
type is piecewise linear, and the kink-points of each curve correspond to the
posterior mean levels where the receiver increases his purchase quantity.
Since the state and hence the posterior mean belongs to $[0,1]$, the purchase
quantity of each type is at most $2$ units, and each curve in the figure has
at most two kink-points.171717This is easily seen as the utility any buyer
type derives from consuming the third unit of the good is bounded by
$(\theta+\omega)\max\\{5-3,0\\}\leq(0.6+1)\cdot 2=3.2$ which is less than the
price of $10/3$.
Figure 2: The indirect utility of the receiver
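The purchase behavior behind Figure 2 can be reproduced numerically. A minimal sketch (the helper names `quantity`, `indirect_utility`, and `kink` are ours; the price and utility specification are taken from the example): the buyer of type $\theta$ adds the $k$-th unit as long as its marginal utility $(\theta+\omega)\max\\{5-k,0\\}$, evaluated at the posterior mean $m$, covers the price $\nicefrac{{10}}{{3}}$.

```python
from fractions import Fraction

PRICE = Fraction(10, 3)  # price per unit, as set in the example
TYPES = [Fraction(3, 10), Fraction(45, 100), Fraction(6, 10)]  # low, medium, high

def quantity(m, theta):
    """Optimal purchase quantity at posterior mean m for taste theta:
    buy the (k+1)-th unit while its marginal utility covers the price."""
    k = 0
    while (theta + m) * max(5 - (k + 1), 0) >= PRICE:
        k += 1
    return k

def indirect_utility(m, theta):
    """Consumer surplus at the optimal quantity (one curve in Figure 2)."""
    return sum((theta + m) * (5 - j) - PRICE
               for j in range(1, quantity(m, theta) + 1))

def kink(k, theta):
    """The k-th kink of a type's curve sits where the k-th unit breaks even."""
    return PRICE / (5 - k) - theta

# Footnote 17: even at the best state the third unit is never worthwhile,
# so every type purchases at most 2 units.
assert (TYPES[-1] + 1) * max(5 - 3, 0) < PRICE
assert all(quantity(1, t) <= 2 for t in TYPES)
```

For instance, the high type's second kink is at $m=\nicefrac{{10}}{{9}}-0.6\approx 0.51$, consistent with each curve in the figure having at most two kink-points.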
These observations imply that in this problem, the receiver effectively
considers finitely many actions (namely the quantities $0$, $1$, and $2$), and
hence the designer’s problem can be formulated and solved using the finite-
dimensional convex program of Section 3.2. We solve this optimization
formulation, and construct the laminar partitional signal as discussed at the
end of Section 3.1. The resulting optimal menu is given in Figure 3.
Figure 3: The optimal menu.
In this figure, the bars represent the state space and its differently colored
regions the optimal partition. For each type, the designer reveals whether the
state belongs to the region(s) marked with $0,1,2$; and the buyer finds it
optimal to purchase the corresponding number of units. Under the optimal menu,
the _expected_ purchase quantity increases with the type. This can be seen as
the high type purchases two units in the states where the medium type
purchases only one unit, which in turn leads to higher expected purchase.
Similarly, when the low type purchases zero units, the medium type purchases
zero or one unit; and the set of states where the medium type purchases two
units is larger than that for the low type.
While the expected quantities are ordered, the quantities purchased by
different types for a given state are _not ordered_. For instance, for states
between $0.79$ and $0.83$ the low and the high types purchase two
units, and the medium type purchases one unit. Note that this implies that the
purchase regions of buyers are not “nested” in the sense of Guo and Shmaya
(2019), who establish such a nested structure for the case of two actions
$|A|=2$. Moreover, low and medium types may end up purchasing lower quantities
in some high states than they do for lower states. In fact, under the optimal
mechanism, for the best and the worst states, the low type purchases zero
units. Thus, in the optimal mechanism, the low and medium type of the buyer
sometimes consume a _smaller_ quantity of the good if it is of higher quality.
This (perhaps counterintuitive) feature of the optimal mechanism is a
consequence of the incentive constraints: By pooling some high states with low
states, one makes it less appealing for the high type to deviate and observe
the signal meant for a lower type.
### 4.2 Relation to Public Revelation Results for Binary Actions
In case of binary actions and under some assumptions on the payoff
structure,181818Both papers normalize the payoff of the action $0$ to zero.
The assumption in Kolotilin et al. (2017) is equivalent to the assumption that
for $\theta^{\prime}\leq\theta$ if
${\mathbb{E}\left[{u(1,\omega,\theta^{\prime})}\right]}\geq 0$ then
${\mathbb{E}\left[{u(1,\omega,\theta)}\right]}\geq 0$ under any probability
measure. Guo and Shmaya (2019) establish this result under assumptions that in
our setting are equivalent to requiring that
$\omega\mapsto\frac{u(1,\omega,\theta)}{v(1,\omega,\theta)}$ and
$\omega\mapsto\frac{u(1,\omega,\theta)}{u(1,\omega,\theta^{\prime})}$ are
increasing for all $\theta^{\prime}\leq\theta$. Kolotilin et al. (2017) and
Guo and Shmaya (2019) establish that the optimal mechanism admits a “public”
implementation. For each type the corresponding laminar partitional signal
induces one action in a subinterval of the state space, and the other action
in the complement of this interval. It can be shown that these intervals are
nested, which implies that the mechanism that reveals the messages associated with
different types to all receiver types is still optimal. Thus, as opposed to
first eliciting types and then sharing with each type the realization of the
signal associated with this type, the designer can achieve the optimal outcome
by sharing a signal (which encodes the information of the signals of all
types) publicly with all receiver types.
By contrast, the mechanism illustrated in Figure 3 does not admit a public
implementation. For instance, under this mechanism the high type purchases two
units whenever the state realization is higher than $0.06$.191919In the
figure, the cutoffs are reported after rounding, e.g., the cutoff for the high
type is approximately at $0.06$. For the sake of exposition, in our discussion we
stick to the rounded values. Suppose that this type of receiver had access to
the signals of, for instance, the low type as well. Then, he could infer
whether the state is in $[0.06,0.16]\cup[0.94,1]$. Conditional on the state
being in this set, his expectation of the state would be approximately $0.43$.
This implies that the expected payoff of the high type from purchasing the
second unit is $(0.43+0.6)\times 3-{10}/{3}<0$. Thus, for state realizations
that belong to the aforementioned set, the high type finds it optimal to
strictly reduce his consumption (relative to the one in Figure 3). In other
words, observing the additional signal reduces the expected purchase of the
high type (and the other types). Hence, such a public implementation is
strictly suboptimal. As a side note, the optimal public implementation can be
obtained by replacing different types with a single “representative type” and
using the framework of Section 3. This amounts to replacing the designer’s
payoff with $\sum_{\theta}g(\theta)\max_{a\in A(m,\theta)}v(a,m,\theta)$ and
removing the incentive compatibility constraints in the optimization
formulation of Section 3. We conducted this exercise and also verified that
restricting attention to public mechanisms yields a strictly lower expected
payoff to the designer.
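The deviation computation above can be checked directly. A small sketch using the rounded cutoffs of footnote 19 (so the numbers are approximate; the variable names are ours):

```python
from fractions import Fraction

PRICE = Fraction(10, 3)
THETA_HIGH = Fraction(6, 10)

# Rounded cutoffs from Figure 3 / footnote 19: observing the low type's
# signal lets the high type infer whether the state lies in
# [0.06, 0.16] u [0.94, 1].  With a uniform prior the conditional mean is:
intervals = [(Fraction(6, 100), Fraction(16, 100)), (Fraction(94, 100), Fraction(1))]
mass = sum(b - a for a, b in intervals)
cond_mean = sum((b - a) * (a + b) / 2 for a, b in intervals) / mass
assert cond_mean == Fraction(173, 400)  # = 0.4325, i.e. roughly 0.43

# At this conditional mean the second unit no longer pays for the high type:
# (theta + m) * (5 - 2) < 10/3, so he strictly reduces his consumption.
assert (THETA_HIGH + cond_mean) * (5 - 2) < PRICE
```
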
### 4.3 Which Incentive Constraints Bind?
Given the mechanisms of Figure 3, it is straightforward to check which
incentive compatibility constraints are binding. Both the medium and the high
type are indifferent among reporting their types as low, medium, or high.
Similarly, the low type is indifferent between reporting his type as low or
medium, but achieves strictly lower payoff from reporting his type as high.
Interestingly, these observations imply that, unlike in classical mechanism
design settings, “non-local” incentive constraints might bind under the optimal
mechanism.202020This is despite the fact that the receiver’s utility is
supermodular in his actions and type. This is also in contrast to Theorem 6.1
in Guo and Shmaya (2019) who establish in their binary action setting that the
designer can ignore all upward incentive constraints when designing the
optimal mechanism.
The effect of the incentive compatibility constraints on the optimal mechanism
is easily seen from the figure. For instance, the high type’s payoff from a
truthful type report is strictly positive. If this were the only relevant
type, the designer could choose a strictly smaller threshold than $0.06$ and
still ensure the purchase of two units whenever the state realization is above
this threshold, thereby increasing the expected purchase amount of the high
type. However, when the other types are also present, such a change in the
signal of the high type incentivizes this type to deviate and misreport his
type as low or medium. Changing the signals of the remaining types to recover
incentive compatibility reduces the payoff the designer derives from them. The
mechanism in Figure 3 maximizes the designer’s payoff while carefully
satisfying such incentive compatibility constraints.
### 4.4 Private vs. Public Mechanisms
As discussed earlier, the optimal mechanism of Section 3 reveals different
signals to different types. What if we restricted attention to public signals
where all types observe the same signal? Suppose that the designer’s payoff is
non-negative. For any mechanism $(\mu^{1},\ldots,\mu^{n})$ where different
types observe different signals the designer can always construct a public
mechanism $(\mu^{\theta},\ldots,\mu^{\theta})$ where each type observes the
signal $\mu^{\theta}$ associated with type $\theta$ in the original mechanism.
As the designer’s payoffs are non-negative, doing so and choosing $\theta$
optimally guarantees her a payoff of
$\max_{\theta\in\Theta}g(\theta)\int\bar{v}(s,\theta)dG_{\theta}(s)\,.$
This is at least a $1/n$ fraction of the payoff achieved by the original
mechanism
$\sum_{\theta\in\Theta}g(\theta)\int\bar{v}(s,\theta)dG_{\theta}(s)\,.$
Thus, a public mechanism guarantees a $1/n$ fraction of the payoff achieved by
the optimal private mechanism to the designer.
We next construct an example which shows that this bound is tight. The idea
behind the example is to give all types of the receiver identical preferences
and to choose the payoff of the designer such that she wants different types of
the agent to choose different actions. In a public mechanism all agents have to
choose the same action, which leads to at most $1$ out of $n$ types choosing
the action preferred by the designer. The example is constructed such that in
a mechanism with private signals the designer can induce _all types_ to choose
her most preferred action. If the payoff from inducing the correct action
equals $1$ and the payoff from any other action to the designer equals $0$,
this achieves the $1/n$ bound. The main challenge in the construction is to
ensure that all types of the receiver are indifferent between all signals, so
that no type has incentives to misreport.
###### Example 1.
All types are equally likely, i.e., $g(\theta)\equiv 1/n$ for all
$\theta\in\Theta$, and the state is uniformly distributed in $[0,1]$. For
$k\in\\{-2n,\ldots,2n\\}$ we define
$B_{L,k}=[b_{L,k-1},b_{L,k}],\;B_{R,k}=[b_{R,k-1},b_{R,k}]$ and
$b_{L,k}=\frac{1}{4}+\frac{1}{8}\text{sgn}(k)\sqrt{\frac{|k|}{2n}}\,,\qquad
b_{R,k}=\frac{3}{4}+\frac{1}{8}\text{sgn}(k)\sqrt{\frac{|k|}{2n}}\,.$
All types of the agent share the same indirect utility function $\bar{u}$,
such that $\bar{u}(m,\theta)=m^{2}$ for all $m\in\\{b_{L,k},b_{R,k}\\}$, and
linearly interpolated otherwise. The indirect utility of the designer is given
by
$\bar{v}(m,\theta)=\begin{cases}1&\text{ if }m\in B_{L,2\theta}\cup
B_{L,-2\theta+1}\cup B_{R,2n+2-2\theta}\cup B_{R,2\theta-2n-1}\\\ 0&\text{
otherwise.}\end{cases}$ (14)
These indirect utility functions can be generated by taking the set of actions
to be $\\{a_{L,k},a_{R,k}\\}$ for $k\in\\{-2n,\ldots,+2n\\}$ and the utilities
as a function of the action to be
${u}(a_{\cdot,k},\omega,\theta)=b_{\cdot,k}^{2}+\frac{\omega-
b_{\cdot,k}}{b_{\cdot,k+1}-b_{\cdot,k}}(b_{\cdot,k+1}^{2}-b_{\cdot,k}^{2})$
and $v(a,\omega,\theta)$ equals $1$ for the actions
$a_{L,2\theta},a_{L,-2\theta},a_{R,2n+2-2\theta},a_{R,-2n-2+2\theta}$ and zero
otherwise.
We begin by establishing that in the setting of Example 1 no public mechanism
achieves more than $1/n$. Note that by our construction in (14), for any
posterior mean $m$ the indirect utility of the designer equals $1$ for at most
a single type, i.e., $\sum_{\theta\in\Theta}\bar{v}(m,\theta)\leq 1$. As
$g(\theta)\equiv 1/n$ this immediately implies that for any _type-independent_
distribution of the posterior mean $G$ the designer can achieve a payoff of at
most $1/n$:
$\sum_{\theta\in\Theta}g(\theta)\int\bar{v}(m,\theta)dG(m)\leq\frac{1}{n}\,.$
Next consider the following private mechanism: The distribution $G_{\theta}$
for type $\theta\in\Theta$ consists of 4 equally likely mass points at
$b_{L,2\theta},\quad b_{L,-2\theta},\quad b_{R,2n+2-2\theta},\quad b_{R,-2n-2+2\theta}\,.$
It can be readily verified from Lemma 2 that such a posterior mean
distribution can be induced with a signal.212121In fact, it is straightforward
to see that the signal based on the partition $(\Pi_{k})_{k=1}^{4}$ with
$\Pi_{1}=(b_{L,2\theta}-1/8,b_{L,2\theta}+1/8)$,
$\Pi_{2}=[0,1/2]\setminus\Pi_{1}$,
$\Pi_{3}=[b_{R,2n+2-2\theta}-1/8,b_{R,2n+2-2\theta}+1/8]$,
$\Pi_{4}=(1/2,1]\setminus\Pi_{3}$ induces the desired posterior mean
distribution. At each of these beliefs the receiver’s indirect utility is given
by $\bar{u}(m,\theta)=m^{2}$. Thus, the benefit a receiver of type
$\theta^{\prime}$ derives from observing the signal meant for type $\theta$
(relative to observing no signal) equals the variance of $G_{\theta}$. In the
construction of the example we have carefully chosen the points $b_{L,\cdot}$
and $b_{R,\cdot}$ such that the variance associated with each signal is the
same, i.e. $\int(m-\nicefrac{{1}}{{2}})^{2}dG_{\theta}(m)=\frac{9n+1}{128\cdot
n}$.222222To see this note that the variance conditional on the posterior
being less than $1/2$ equals
$\nicefrac{{1}}{{2}}(b_{L,2\theta}-\nicefrac{{1}}{{4}})^{2}+\nicefrac{{1}}{{2}}(b_{L,-2\theta}-\nicefrac{{1}}{{4}})^{2}=\frac{\theta}{64\cdot
n}$ and the variance conditional on the posterior being greater than
$\nicefrac{{1}}{{2}}$ equals
$\nicefrac{{1}}{{2}}(b_{R,2n+2-2\theta}-\nicefrac{{3}}{{4}})^{2}+\nicefrac{{1}}{{2}}(b_{R,-2n-2+2\theta}-\nicefrac{{3}}{{4}})^{2}=\frac{n+1-\theta}{64\cdot
n}\,.$ By the law of total variance the variance of $G_{\theta}$ thus
equals $\frac{1}{2}\frac{\theta}{64\cdot
n}+\frac{1}{2}\frac{n+1-\theta}{64\cdot
n}+\frac{1}{2}\left(\frac{1}{4}\right)^{2}+\frac{1}{2}\left(\frac{1}{4}\right)^{2}=\frac{9n+1}{128\cdot
n}$. Thus, each type derives equal utility from any signal and the mechanism
is incentive compatible.
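The variance computation in footnote 22 can be verified numerically. A minimal sketch (the helper names are ours) that constructs the four mass points of $G_{\theta}$ and checks that the mean is $1/2$ and the variance equals $(9n+1)/(128n)$ for every type $\theta\in\\{1,\ldots,n\\}$:

```python
from math import sqrt

def mass_points(theta, n):
    """The four equally likely posterior means in the support of G_theta."""
    def b(base, k):
        sgn = (k > 0) - (k < 0)  # sgn(k)
        return base + sgn * sqrt(abs(k) / (2 * n)) / 8
    return [b(0.25, 2 * theta), b(0.25, -2 * theta),
            b(0.75, 2 * n + 2 - 2 * theta), b(0.75, -(2 * n + 2 - 2 * theta))]

def variance(theta, n):
    pts = mass_points(theta, n)
    mu = sum(pts) / 4
    return sum((p - mu) ** 2 for p in pts) / 4

for n in (2, 5, 17):
    target = (9 * n + 1) / (128 * n)
    for theta in range(1, n + 1):
        # mean is 1/2 for every type; variance is type-independent
        assert abs(sum(mass_points(theta, n)) / 4 - 0.5) < 1e-12
        assert abs(variance(theta, n) - target) < 1e-12
```

Since $\bar{u}(m,\theta)=m^{2}$, the value of observing a signal is exactly this variance, so every type is indeed indifferent between all signals in the menu.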
Notably, each belief in the support of $G_{\theta}$ yields a
payoff of $1$ to the designer, and hence this mechanism with private signals
yields a payoff of $1$:
$\sum_{\theta\in\Theta}g(\theta)\int\bar{v}(m,\theta)dG_{\theta}(m)=1\,.$
Thus, we have established the following proposition:
###### Proposition 5.
Assume that the designer’s utility $v$ is non-negative.
1. (i)
In any problem there exists a public persuasion mechanism which achieves a
$1/n$ fraction of the optimal value achievable by a private persuasion
mechanism.
2. (ii)
In some problems no public persuasion mechanism yields more than a $1/n$
fraction of the optimal value achievable by a private persuasion mechanism.
It is worth noting that this worst-case bound is achieved even when attention
is restricted to a simple subclass of problem instances. For instance, in the
example presented in this section, it sufficed to focus on settings where the
designer has a payoff of either $0$ or $1$ for different actions of the
receiver, and the receiver has finitely many actions and type-independent
utility functions.
Furthermore, by relabeling the actions one can easily modify the example such
that the designer’s utility is independent of the receiver’s type and the
receiver’s utility depends on his type. Proposition 5 thus holds unchanged
even if one restricts attention to problems where the designer’s utility
depends only on the receiver’s action, but not on his type or belief.
## 5 Discussion and Conclusion
Our results can be easily extended along various dimensions. Persuasion
problems where the designer’s payoff depends on the induced posterior mean,
but the admissible posterior mean distributions need to satisfy additional
side-constraints are naturally subsumed. Below we discuss other economically
relevant extensions and applications of our results.
#### Type-Dependent Participation Constraints
In our analysis we can allow each type of receiver to face a participation
constraint which means that the mechanism must provide that type of the
receiver with at least some given expected utility. Our analysis and results
carry over to this case unchanged as (8) already encodes such an endogenous
constraint capturing the value of deviating by observing the signal meant for
another type. To adjust the result for this case one just needs to replace
$e_{\theta}$ by the lower bound on the receiver’s utility whenever this lower
bound is larger than $e_{\theta}$.
#### Competition among Multiple Designers
Another application of our approach is to competition among multiple
designers. Suppose that each designer offers a menu of signals and the
receiver can choose _one_ of them to observe.232323Another plausible model of
competition is that the receiver can observe all the signals sent by different
designers. For an analysis of this situation see Gentzkow and Kamenica
(2016a). Each designer receives a higher payoff if the receiver chooses a
signal from her menu and might have different preferences over the receiver’s
actions. Again, the designer has to ensure that the signal she provides each
type with yields a sufficiently high utility such that this type does not
prefer to observe either another signal of the same designer or a signal
provided by a different designer. This situation corresponds to an endogenous
type-dependent participation constraint which is determined in equilibrium. As
our analysis works for any participation constraint it carries over to this
case.
#### Persuading Voters in Different Districts
In an interesting recent paper, Alonso and Câmara (2016) shed light on how a
politician could use public signals to persuade voters to support a policy. In
their setting, the politician faces a group of uninformed voters who must
choose whether to keep the status quo policy or implement a new policy. A
politician can design a policy experiment to influence the voters’ decisions,
and subsequently, the implemented outcome.
While the framework of Alonso and Câmara (2016) does not involve side
constraints, such constraints could be relevant in some voting settings, and
they can be handled using our approach in a straightforward way. To see this,
consider a politician (designer) trying to persuade voters (receivers) in
different voting districts $i\in\\{1,\ldots,n\\}$. The politician commits to a
public signal whose realization depends on the underlying state of the world.
The payoffs of the constituents depend on the expectation of the state, and
given the realization of the signal the constituents update their posterior
and take payoff-maximizing binary actions (vote in favor or not). The
constituents in different districts have different priorities. Specifically,
for district $i$ with population $z_{i}$, the share of the electorate voting
for the politician when the posterior mean is $m$, is given by
$f_{i}(m)\in[0,1]$. The objective of the politician is to design a public
signal that maximizes the total vote she receives
$v(s)=\sum_{i=1}^{n}f_{i}(s)z_{i}$, while ensuring that in expectation she
receives the majority vote in every district $u_{i}(s)=f_{i}(s)-1/2\geq 0$ (or
more generally the expected vote is above a threshold in each district).
The designer’s problem thus admits a formulation akin to (10).242424A variant
of this problem could be relevant for US politics. Suppose that the designer
is the presidential candidate. Her messages impact constituents’ votes for her
as well as the other representatives of her party (e.g., in the house or the
senate). The presidential candidate receives a payoff of $1$ if she wins the
electoral college and $0$ otherwise. However, while maximizing the probability
of winning the electoral college, she may want to ensure retaining/achieving
majority in the senate/house. This problem can also be formulated as in (10),
by appropriately defining the “majority constraints”. This is possible even if
the voters in the same state have different preferences for the presidential
vote vs. the vote for the remaining representatives (e.g., when there are
distinct $f_{i}$ functions associated with the vote for the presidential
candidate and the other representatives).
Another related setting in political economy is the gerrymandering problem
studied in Kolotilin and Wolitzky (2020). In this paper, the authors explore
how the voting regions can be chosen so as to maximize the likelihood of
getting as many votes as possible. An interesting future direction is to
incorporate additional constraints (e.g., geographical contiguity of
districts) to this problem using a framework similar to the one introduced in
the present paper.
#### Robustness to Partial Distributional Information on Types
In classical Bayesian persuasion problems, it is assumed that the designer has
beliefs over the receiver’s prior information, i.e., the distribution of types
$\theta$. Dworczak and Pavan (2020) argue that the designer may have concerns
that her beliefs may be wrong. In such cases, the designer may want to choose
a robust information structure that maximizes her “worst-case”
payoff.252525More precisely, Dworczak and Pavan (2020) introduce a third
player “nature” who provides additional signals to the receiver, which can
even be conditioned on the realization of the designer’s signal. The
designer’s worst-case payoff from a signal is found by evaluating her payoff
when the nature chooses the response that minimizes the designer’s payoff.
Moreover, when there are ties in the receiver’s actions, Dworczak and Pavan
(2020) break them in a way that yields the lowest payoff to the designer. Here
we still break ties in favor of the designer, but discuss how the worst-case
receiver type can, to some extent, be accounted for within our framework.
Consider the problem of information disclosure to a privately informed
receiver studied in the present paper, but assume that the designer only has
access to coarse distributional information about the receiver’s type.
Specifically, let $\\{\Theta_{i}\\}_{i=1}^{n}$ be a partition of the type
space $\Theta$, where we do not require $\Theta\subset\mathbb{R}$ to be a
finite set. Suppose that the designer knows the probability with which the
type belongs to each $\Theta_{i}$, but has no further information on how the
type is distributed within $\Theta_{i}$. Suppose further that the receiver’s
utility is affine in $\theta$. The designer picks a menu of signals, and the
receiver chooses one of them and subsequently takes an action. The designer’s
objective is to maximize her expected payoff with respect to the _worst-case
type distribution_ within each $\Theta_{i}$. Without loss of generality, it
suffices for the designer to include one signal in her menu for each
$i\in\\{1,\ldots,n\\}$, and elicit the set $\Theta_{i}$ to which the
receiver’s type belongs. Because the utility functions are affine in $\theta$,
to ensure incentive compatibility it suffices to check that for each $i$ the
types $\inf\Theta_{i}$ and $\sup\Theta_{i}$ have no incentive to deviate. Thus, a
formulation similar to the one in Proposition 1 can be used to characterize
the optimal mechanism, by having a pair of constraints ensuring incentive
compatibility for types in each $\Theta_{i}$. When there are finitely many
actions the formulation of Section 3.2 can be used to obtain the optimal
mechanism – once again after adjusting the incentive compatibility to account
for the deviations of extreme types within each $\Theta_{i}$.262626The model
outlined here also admits an alternative interpretation: Each receiver type
observes an additional signal from nature (in a similar fashion to
Dworczak and Pavan (2020)), which affects his posterior. The induced posterior
mean for type $i$ lies in $\Theta_{i}$. In this interpretation we do not
require $\Theta_{i}$ to be non-overlapping sets. Our approach still allows for
identifying information structures that yield the largest payoff to the
designer under the worst-case choice of nature’s signals.
#### Beyond Persuasion Problems
An immediate extension is to allow the designer to influence the receiver’s
utility by also designing transfers. For instance, in the context of the
example of Section 4.1, the seller might not only control the information she
provides to the buyer, but also might charge different buyers different
prices. Such settings are considered, e.g., in Wei and Green (2020); Guo et
al. (2020); Yang (2020); Yamashita and Zhu (2021). As our results apply for
any utility function, it is still without loss to restrict attention to
laminar partitional signals. Suppose that (i) the receiver has finitely many
actions, and (ii) his preferences are quasi-linear in the transfers. The
designer’s optimal mechanism (which now determines the information structure
as well as the transfers) can be formulated following an approach similar to
the one in Section 3.2. Additional variables which capture transfers need to
be added to the optimization formulation of that section. Due to (i) these
transfers can be represented by finite dimensional vectors, and due to (ii)
the resulting problem remains convex. Thus, similar to Section 3.2 an optimal
mechanism can be obtained tractably by solving a finite-dimensional convex
program.
Finally, while this paper focused on persuasion problems, the mathematical
result we obtain on maximization problems over mean preserving contractions
under side-constraints can be applied in other economic settings which lead to
similar mathematical formulations. For example, as first observed in Kolotilin
and Zapechelnyuk (2019), the persuasion problem is closely related to
delegation problems where the agent privately observes the state and the
designer commits to an action as a function of a message sent by the agent.
Kleiner et al. (2020) show that this problem can be reformulated as a
maximization problem under majorization constraints which is a special case of
the problem we discuss in Section 3.1. Our results thus allow one to analyze
delegation problems where there is a constraint on the actions taken by the
designer.272727While mathematically closely related, the delegation problem is
economically fundamentally different from the persuasion problem. For example,
the majorization constraint in the delegation problem corresponds to an
incentive compatibility constraint, while it corresponds to a feasibility
constraint in the persuasion problem. The side constraints correspond to a
feasibility constraint in the delegation problem, while they correspond to an
incentive compatibility constraint in the persuasion problem. For example, if
the agent is the manager of a subdivision of a firm and the designer is the
CEO who allocates money to that subdivision depending on the manager’s report,
our results allow one to analyze the case where the CEO faces a budget
constraint and on average cannot allocate more than a given amount to that
subdivision.
## Appendix
###### Lemma 6.
Suppose $u_{i}:[0,1]\rightarrow\mathbb{R}$ is a continuous function for
$i\in\\{1,\ldots,n\\}$. The set of distributions $G:[0,1]\to[0,1]$ that
satisfy $G\succeq F$ and
$\displaystyle\int_{0}^{1}u_{i}(s)dG(s)\geq 0\text{ for }i\in\\{1,\ldots,n\\}$
(15)
is compact in the weak topology.
* Proof.
First, note that as $u_{i}$ is continuous it is bounded on $[0,1]$. Consider a
sequence of distributions $G^{i}$, $i\in\\{1,2,\ldots\\}$ that satisfies the
above constraints. By Helly’s selection theorem there exists a subsequence
that converges pointwise. From now on assume that $(G^{i})$ is such a
subsequence and denote by $G^{\infty}$ the right-continuous representation of
its point-wise limit. Thus, any sequence of random variables $m^{i}$ such that
$m^{i}\sim G^{i}$ converges in distribution to a random variable distributed
according to $G^{\infty}$. As $(u_{k})$ are continuous and bounded this
implies that for all $k$ and all $\theta\in\Theta$
$\lim_{i\to\infty}\int_{0}^{1}u_{k}(s)dG^{i}(s)=\int_{0}^{1}u_{k}(s)dG^{\infty}(s)\,.$
Furthermore, for all $x\in[0,1]$
$\lim_{i\to\infty}\int_{x}^{1}G^{i}(s)ds=\int_{x}^{1}G^{\infty}(s)ds,$
and hence $G^{\infty}$ also satisfies $G^{\infty}\succeq F$. Thus, the set of
distributions given in the statement of the lemma is compact with respect to
the weak topology. ∎
###### Lemma 7.
Let $F,G:[0,1]\to[0,1]$ be CDFs and let $F$ be continuous. Suppose that $G$ is
a mean-preserving contraction of $F$ and for some $x\in[0,1]$
$\int_{x}^{1}F(s)ds=\int_{x}^{1}G(s)ds.$
Then $F(x)=G(x)$. Furthermore, $G$ is continuous at $x$.
* Proof.
Define the function $L:[0,1]\to\mathbb{R}$ as
$L(z)=\int_{z}^{1}F(s)-G(s)ds\,.$
As $G$ is a mean-preserving contraction of $F$ we have that $L(z)\leq 0$ for
all $z\in[0,1]$. By the assumption of the lemma $L(x)=0$. By definition $L$ is
absolutely continuous and has a weak derivative, which we denote by
$L^{\prime}$. As $F$ is continuous $L^{\prime}(z)=G(z)-F(z)$ almost everywhere
and $L^{\prime}$ has only upward jumps. For $L$ to have a maximum at $x$ we
need that $\lim_{z\nearrow x}L^{\prime}(z)\geq 0$ and $\lim_{z\searrow
x}L^{\prime}(z)\leq 0$. This implies that
$\lim_{z\searrow x}G(z)-F(z)\leq 0\leq\lim_{z\nearrow x}G(z)-F(z).$
In turn, this implies that $\lim_{z\searrow x}G(z)\leq\lim_{z\nearrow x}G(z)$.
As $G$ is a CDF it is non-decreasing and thus $G$ is continuous at $x$.
Consequently, $L$ is continuously differentiable at $x$ and as $L$ admits a
maximum at $x$, we have that $0=L^{\prime}(x)=G(x)-F(x)$. ∎
###### Lemma 8.
Fix an interval $[a,b]\subseteq[0,1]$, $c\in\mathbb{R}$, upper-semicontinuous
$v:[0,1]\to[0,1]$ and continuous
$\tilde{u}_{1},\ldots,\tilde{u}_{n}:[0,1]\to\mathbb{R}$ and consider the
problem
$\displaystyle\max_{\tilde{G}}$ $\displaystyle\int_{0}^{1}v(s)d\tilde{G}(s)$
(16) subject to $\displaystyle\int_{0}^{1}\tilde{u}_{i}(s)d\tilde{G}(s)\geq
0\text{ for }i\in\\{1,\ldots,n\\}$ (17)
$\displaystyle\int_{{a}}^{{b}}\tilde{G}(s)ds=c$ (18)
$\displaystyle\int_{[a,b]}d\tilde{G}(s)=1\,.$ (19)
If the set of distributions that satisfy (17)-(19) is non-empty then there
exists a solution to the above optimization problem that is supported on at
most $n+2$ points.
* Proof.
Consider the set of distributions that assign probability $1$ to the set
$[a,b]$. The extreme points of this set are the Dirac measures in $[a,b]$. Let
$\mathcal{D}$ be the set of distributions which satisfy (17)-(18) and are
supported on $[a,b]$. By Theorem 2.1 in Winkler (1988) each extreme point of
the set $\mathcal{D}$ is the sum of at most $n+2$ mass points, as (17) and (18)
specify $n+1$ constraints. Note that the set of distributions
satisfying (17)-(19) is compact. As $v$ is upper-semicontinuous the function
$\tilde{G}\to\int_{0}^{1}v(s)d\tilde{G}(s)$ is upper-semicontinuous and
linear. Thus, by Bauer’s maximum principle (see for example Result 7.69 in
Aliprantis and Border 2013) there exists a maximizer at an extreme point of
$\mathcal{D}$, which establishes the result. ∎
###### Lemma 9.
Suppose that $H,G$ are distributions that assign probability 1 to $[a,b]$. Let
$M$ be an absolutely continuous function such that $\int_{x}^{b}G(s)ds>M(x)$
for all $x\in[a,b]$, and $\int_{\hat{x}}^{b}H(y)dy<M(\hat{x})$ for some
$\hat{x}\in[a,b]$. Then, there exists $\lambda\in(0,1)$ such that for all
$x\in[a,b]$
$\int_{x}^{b}(1-\lambda)G(s)+\lambda H(s)ds\geq M(x)$
with equality for some $x\in[a,b]$.
* Proof.
Define
$L_{\lambda}(x)=\int_{x}^{b}(1-\lambda)G(y)+\lambda H(y)dy-M(x)\,,$
and $\phi(\lambda)=\min_{z\in[a,b]}L_{\lambda}(z)$. As $M$ is continuous, by
the assumptions of the lemma we have that
$\phi(0)=\min_{x\in[a,b]}L_{0}(x)=\min_{x\in[a,b]}\left[\int_{x}^{b}G(s)ds-M(x)\right]>0$
and
$\phi(1)=\min_{x\in[a,b]}L_{1}(x)=\min_{x\in[a,b]}\left[\int_{x}^{b}H(s)ds-M(x)\right]\leq\int_{\hat{x}}^{b}H(s)ds-M(\hat{x})<0\,.$
Furthermore,
$\left|\frac{\partial
L_{\lambda}(z)}{\partial\lambda}\right|=\left|\int_{z}^{b}H(s)-G(s)ds\right|\leq
b-a\,.$
Hence, $\lambda\mapsto L_{\lambda}(z)$ is uniformly Lipschitz continuous and
the envelope theorem thus implies that $\phi$ is Lipschitz continuous. As
$\phi(0)>0$, and $\phi(1)<0$ there exists some $\lambda^{*}\in(0,1)$ such that
$\phi(\lambda^{*})=0$, i.e.,
$\int_{x}^{b}(1-\lambda^{*})G(s)+\lambda^{*}H(s)ds\geq M(x)$
with equality for some $x\in[a,b]$. This completes the proof. ∎
* Proof of Proposition 2.
As the set of feasible distributions is compact with respect to the weak
topology by Lemma 6 and the function $G\mapsto\int_{0}^{1}v^{*}(s)dG(s)$ is
upper semicontinuous in the weak topology the optimization problem (10) admits
a solution.
Let $G$ be a solution to the optimization problem (10) and denote by $B_{G}$
the set of points where the mean preserving contraction (MPC) constraint is
binding, i.e.,
$B_{G}=\left\\{z\in[0,1]\colon\int_{z}^{1}F(s)ds=\int_{z}^{1}G(s)ds\right\\}.$
(20)
Suppose that this solution is maximal in the sense that there does not exist
another solution $G^{\prime}$ for which the set of points where the MPC
constraint binds is strictly larger, i.e., $B_{G}\subsetneq B_{G^{\prime}}$ (where
$B_{G^{\prime}}$ is defined as in (20) after replacing $G$ with $G^{\prime}$).
The existence of such a maximal optimal solution follows from Zorn’s Lemma
(see for example Section 1.12 in Aliprantis and Border 2013).
Fix a point $x\notin B_{G}$. We define $(a,b)$ to be the largest interval such
that the mean-preserving contraction constraint does not bind on that interval
for the solution $G$, i.e.
$\displaystyle a$ $\displaystyle=\max\Big{\\{}z\leq x\colon z\in
B_{G}\Big{\\}},$ $\displaystyle b$ $\displaystyle=\min\Big{\\{}z\geq x\colon
z\in B_{G}\Big{\\}}.$
Fix any $a<\hat{a}$ and $\hat{b}<b$, and consider the interval
$[\hat{a},\hat{b}]\subset[a,b]$. We define
$G_{[\hat{a},\hat{b}]}:[0,1]\to[0,1]$ to be the CDF of a random variable that
is distributed according to $G$ conditional on the realization being in the
interval $[\hat{a},\hat{b}]$
$G_{[\hat{a},\hat{b}]}(z)=\frac{G(z)-G(\hat{a}_{-})}{G(\hat{b})-G(\hat{a}_{-})}\,,$
where $G(\hat{a}_{-})=\lim_{s\nearrow\hat{a}}G(s)$. We note that
$G_{[\hat{a},\hat{b}]}$ is non-decreasing, right-continuous, and satisfies
$G_{[\hat{a},\hat{b}]}(\hat{b})=1$. Thus, it is a well-defined cumulative
distribution function supported on $[\hat{a},\hat{b}]$. As $G$ is feasible we get that
$\int_{\hat{a}}^{\hat{b}}u_{k}(s)dG_{[\hat{a},\hat{b}]}(s)+\frac{1}{G(\hat{b})-G(\hat{a}_{-})}\int_{[0,1]\setminus[\hat{a},\hat{b}]}u_{k}(s)dG(s)\geq
0\qquad\text{ for }k\in\\{1,\ldots,n\\}\,.$ (21)
To simplify notation we define the functions
$\tilde{u}_{1},\ldots,\tilde{u}_{n}$, where for all $k$
$\tilde{u}_{k}(z)=u_{k}(z)+\frac{1}{G(\hat{b})-G(\hat{a}_{-})}\int_{[0,1]\setminus[\hat{a},\hat{b}]}u_{k}(y)dG(y)\,.$
(22)
Note that using this notation (21) can be restated as:
$\int_{0}^{1}\tilde{u}_{k}(s)dG_{[\hat{a},\hat{b}]}(s)\geq 0\qquad\text{ for
}k\in\\{1,\ldots,n\\}.$ (23)
As $G$ satisfies the mean-preserving contraction constraint relative to $F$,
using the fact that $a<\hat{a}$ and $\hat{b}<b$, for $z\in[\hat{a},\hat{b}]$
we obtain:
$\int_{z}^{\hat{b}}G_{[\hat{a},\hat{b}]}(s)ds>\frac{1}{G(\hat{b})-G(\hat{a}_{-})}\left[\int_{z}^{1}F(s)ds-\int_{\hat{b}}^{1}G(s)ds-(\hat{b}-z)G(\hat{a}_{-})\right]=:M(z)\,.$
(24)
Consider now the maximization problem over distributions supported on
$[\hat{a},\hat{b}]$ that satisfy the constraints derived above (after
replacing the strict inequality in (24) with a weak inequality) and maximize
the original objective:
$\displaystyle\max_{H}$ $\displaystyle\int_{0}^{1}v(s)dH(s)$ (25) subject to
$\displaystyle\int_{0}^{1}\tilde{u}_{i}(s)dH(s)\geq 0$ $\displaystyle\text{
for }i\in\\{1,\ldots,n\\}$ $\displaystyle\int_{z}^{\hat{b}}H(s)ds\geq M(z)$
$\displaystyle\text{ for }z\in[\hat{a},\hat{b}]$
$\displaystyle\int_{[\hat{a},\hat{b}]}dH(s)=1\,.$
By (23) and (24) the conditional CDF $G_{[\hat{a},\hat{b}]}$ is feasible in
the problem above. We claim that it is also optimal. Suppose, towards a
contradiction, that there exists a CDF $H$ that is feasible in (25) and
achieves a strictly higher value than $G_{[\hat{a},\hat{b}]}$. Consider the
CDF
$K(z)=\begin{cases}G(z)&\text{ if }z\in[0,1]\setminus[\hat{a},\hat{b}]\\\
G(\hat{a}_{-})+H(z)(G(\hat{b})-G(\hat{a}_{-}))&\text{ if
}z\in[\hat{a},\hat{b}],\end{cases}$
which equals $G$ outside the interval $[\hat{a},\hat{b}]$ and $H$ conditional
on being in $[\hat{a},\hat{b}]$. Using (22), the definition of $M(z)$, and the
feasibility of $H$ in (25), it can be readily verified that this CDF is
feasible in the original problem (10). Moreover, it achieves a higher value
than $G$, since $H$ achieves strictly higher value than
$G_{[\hat{a},\hat{b}]}$ in (25). However, this leads to a contradiction to the
optimality of $G$ in (10), thereby implying that $G_{[\hat{a},\hat{b}]}$ is
optimal in (25).
Next, we establish that there cannot exist an optimal solution $H$ to the
problem (25) where for some $z\in(\hat{a},\hat{b})$
$\int_{z}^{\hat{b}}H(s)ds=M(z).$ (26)
Suppose such an optimal solution exists. Then, $K$ would be an optimal
solution to the original problem satisfying $z\in B_{K}\supset B_{G}$, where
$B_{K}$ is defined as in (20) (after replacing $G$ with $K$) and is the set of
points where the mean preserving contraction constraint binds. However, this
contradicts that $G$ is a solution to the original problem that is maximal (in
terms of the set where the MPC constraints bind).
We next consider a relaxed version of the optimization problem (25) where we
replace the second constraint of (25) with a constraint that ensures that $H$
has the same mean as $G_{[\hat{a},\hat{b}]}$:
$\displaystyle\max_{H}$ $\displaystyle\int_{0}^{1}v(s)dH(s)$ subject to
$\displaystyle\int_{0}^{1}\tilde{u}_{i}(s)dH(s)\geq 0$ $\displaystyle\text{
for }i\in\\{1,\ldots,n\\}$
$\displaystyle\int_{\hat{a}}^{\hat{b}}H(s)ds=\int_{\hat{a}}^{\hat{b}}G_{[\hat{a},\hat{b}]}(s)ds$
$\displaystyle\int_{[\hat{a},\hat{b}]}dH(s)=1\,.$
By Lemma 8 there exists a solution $J$ to this relaxed problem that is the sum
of $n+2$ mass points. Since $G_{[\hat{a},\hat{b}]}$ is feasible in this
problem, it readily follows that
$\int_{0}^{1}v(s)dJ(s)\geq\int_{0}^{1}v(s)dG_{[\hat{a},\hat{b}]}(s).$ (27)
Suppose, towards a contradiction, that there exists $z\in[\hat{a},\hat{b}]$
such that
$\int_{z}^{\hat{b}}J(s)ds<M(z)\,.$ (28)
Then, by Lemma 9, there exists some $\lambda\in(0,1)$ such that
$(1-\lambda)G_{[\hat{a},\hat{b}]}+\lambda J$ satisfies
$\int_{r}^{\hat{b}}(1-\lambda)G_{[\hat{a},\hat{b}]}(s)+\lambda J(s)ds\geq
M(r)\,,$ (29)
for all $r\in[\hat{a},\hat{b}]$, and the inequality holds with equality for
some $r\in[\hat{a},\hat{b}]$. This implies that
$(1-\lambda)G_{[\hat{a},\hat{b}]}+\lambda J$ is feasible for the problem (25).
Furthermore, by the linearity of the objective, (27), and the optimality of
$G_{[\hat{a},\hat{b}]}$ in (25), it follows that
$(1-\lambda)G_{[\hat{a},\hat{b}]}+\lambda J$ is also optimal in (25). However,
this leads to a contradiction to the fact that (25) does not admit an optimal
solution where the equality in (26) holds for some
$z\in[\hat{a},\hat{b}]\subset[a,b]$.
Consequently, the inequality (28) cannot hold, and $J$ must be feasible in
problem (25). Together with (27) this implies that $J$ is an optimal solution
to (25) that assigns mass to at most $n+2$ points in the interval
$[\hat{a},\hat{b}]$. This implies that the CDF
$\begin{cases}G(z)&\text{ if }z\in[0,1]\setminus[\hat{a},\hat{b}]\\\
G(\hat{a}_{-})+J(z)(G(\hat{b})-G(\hat{a}_{-}))&\text{ if
}z\in[\hat{a},\hat{b}]\end{cases}$ (30)
is a solution of the original problem that assigns mass to at most $n+2$ points
in the interval $[\hat{a},\hat{b}]$. By setting $\hat{a}=a+\frac{1}{r}$ and
$\hat{b}=b-\frac{1}{r}$ we can thus find a sequence of solutions $(H^{r})$ to
(10) that each have at most $n+2$ mass points in the interval
$[a+\frac{1}{r},b-\frac{1}{r}]$. As the set of feasible distributions is
closed and the objective function is upper-semicontinuous this sequence admits
a limit point $H^{\infty}$ which itself is optimal in (10). This limit
distribution consists of at most $n+2$ mass points in the interval $(a,b)$.
Furthermore, by definition of $a,b$ and our construction in (30) each solution
$H^{r}$ and hence $H^{\infty}$ satisfies the MPC constraint with equality at
$\\{a,b\\}$. Thus, Lemma 7 implies that $H^{\infty}$ is continuous at these
points, and $H^{\infty}(a)=F(a)$ and $H^{\infty}(b)=F(b)$.
Hence, we have established that for every solution $G$ for which $B_{G}$ is
maximal, either $x\in B_{G}$, in which case Lemma 7 implies that $G(x)=F(x)$; or
$x\notin B_{G}$, in which case one can find a new solution $\tilde{G}$ such that (i)
$\tilde{G}$ has at most $n+2$ mass points in the interval $(a,b)$ with
$a=\max\\{z\leq x\colon z\in B_{G}\\}$ and $b=\min\\{z\geq x\colon z\in
B_{G}\\}$, (ii) $\tilde{G}(a)=F(a)$ and $\tilde{G}(b)=F(b)$ which implies that
the mass inside the interval $[a,b]$ is preserved, and (iii) $\tilde{G}$
matches $G$ outside $(a,b)$. Since every interval contains a rational number
there can be at most countably many such intervals. Proceeding inductively,
the claim follows. ∎
To establish Proposition 3, we make use of the partition lemma, stated next:
###### Lemma 10 (Partition Lemma).
Suppose that distributions $F,G$ are such that
$\int_{x}^{1}G(t)dt\geq\int_{x}^{1}F(t)dt$ for $x\in I=[a,b]$, where the
inequality holds with equality only for the end points of $I$. Suppose further
that $G(a)=F(a)$, $G(x)=G(a)+\sum_{r=1}^{K}p_{r}\mathbf{1}_{x\leq m_{r}}$ for
$x\in I$ where $\sum_{r=1}^{K}p_{r}=F(b)-F(a)$, $\\{m_{r}\\}$ is a strictly
increasing collection in $r$, and $m_{r}\in I$ for $r\in[K]$.
There exists a collection of intervals $\\{J_{r}\\}_{r\in[K]}$ such that
$\\{P_{k}\\}=\\{J_{k}\setminus\cup_{\ell\in[K]|\ell>k}J_{\ell}\\}$ is
a laminar partition, which satisfies:
* (a)
$J_{1}=I$, and if $K>1$, then $F(\inf J_{1})<F(\inf J_{K})<F(\sup
J_{K})<F(\sup J_{1})$;
* (b)
$\int_{P_{k}}dF(x)=p_{k}$ for all ${k\in[K]}$;
* (c)
$\int_{P_{k}}xdF(x)=p_{k}m_{k}$ for all ${k\in[K]}$.
* Proof of Lemma 10.
We prove the claim by induction on $K$. Note that when $K=1$ we have
$J_{1}={P}_{1}=I$, which readily implies properties (a) and (b). In addition,
the definition of $p_{1},m_{1}$ implies that
$\displaystyle G(b)b-G(a)a-p_{1}m_{1}$
$\displaystyle=G(a)(b-a)+p_{1}(b-m_{1})$ (31)
$\displaystyle=\int_{a}^{b}G(t)dt=\int_{a}^{b}F(t)dt$
$\displaystyle=F(b)b-F(a)a-\int_{I}tdF(t)$
$\displaystyle=G(b)b-G(a)a-\int_{{P}_{1}}tdF(t).$
Hence, property (c) also follows.
We proceed by considering two cases: $K=2$, $K>2$.
$K=2$: Let $t_{1},t_{2}\in I$ be such that
$F(t_{1})-F(a)=F(b)-F(t_{2})=p_{1}$. Observe that since
$\int_{x}^{1}G(t)dt\geq\int_{x}^{1}F(t)dt$ for $x\in I$ and this inequality holds
with equality only at the end points of $I$, we have (i)
$\int_{a}^{t_{1}}F(x)dx>\int_{a}^{t_{1}}G(x)dx$ and (ii)
$\int_{t_{2}}^{b}F(x)dx<\int_{t_{2}}^{b}G(x)dx$. Using the first inequality
and the definition of $G$ we obtain:
$\displaystyle p_{1}(t_{1}-m_{1})^{+}+G(a)(t_{1}-a)$
$\displaystyle\leq\int_{a}^{t_{1}}G(x)dx<\int_{a}^{t_{1}}F(x)dx$ (32)
$\displaystyle=F(t_{1})t_{1}-F(a)a-\int_{a}^{t_{1}}xdF(x)$
$\displaystyle=(G(a)+p_{1})t_{1}-G(a)a-\int_{a}^{t_{1}}xdF(x).$
Rearranging the terms, this yields
$p_{1}m_{1}\geq p_{1}t_{1}-p_{1}(t_{1}-m_{1})^{+}>\int_{a}^{t_{1}}xdF(x).$
(33)
Similarly, using (ii) and the definition of $G$ we obtain:
$\displaystyle G(b)(b-t_{2})-p_{1}(m_{1}-t_{2})^{+}$
$\displaystyle\geq\int_{t_{2}}^{b}G(x)dx>\int_{t_{2}}^{b}F(x)dx$ (34)
$\displaystyle=F(b)b-F(t_{2})t_{2}-\int_{t_{2}}^{b}xdF(x)$
$\displaystyle=G(b)b-(G(b)-p_{1})t_{2}-\int_{t_{2}}^{b}xdF(x).$
Rearranging the terms, this yields
$p_{1}m_{1}\leq p_{1}t_{2}+p_{1}(m_{1}-t_{2})^{+}<\int^{b}_{t_{2}}xdF(x).$
(35)
Combining (33) and (35) with the fact that $F(t_{1})-F(a)=F(b)-F(t_{2})=p_{1}$,
we conclude that there exist $\hat{t}_{1},\hat{t}_{2}\in\mathrm{int}(I)$
satisfying $\hat{t}_{1}<\hat{t}_{2}$ such that
$F(\hat{t}_{1})-F(a)+F(b)-F(\hat{t}_{2})=p_{1}$ and
$\int_{a}^{\hat{t}_{1}}xdF(x)+\int_{\hat{t}_{2}}^{b}xdF(x)=p_{1}m_{1}.$ (36)
Note that
$\displaystyle(b-a)G(a)+(b-m_{1})p_{1}$
$\displaystyle+(b-m_{2})p_{2}=\int_{a}^{b}G(x)dx=\int_{a}^{b}F(x)dx$
$\displaystyle=bF(b)-aF(a)-\int_{a}^{b}xdF(x)=bG(b)-aG(a)-\int_{a}^{b}xdF(x).$
Since $p_{1}+p_{2}=G(b)-G(a)$, this in turn implies that
$\displaystyle\int_{a}^{b}xdF(x)=p_{1}m_{1}+p_{2}m_{2}.$
Combining this observation with (36), we conclude that
$\displaystyle\int_{\hat{t}_{1}}^{\hat{t}_{2}}xdF(x)=p_{2}m_{2}.$ (37)
Let $J_{2}=[\hat{t}_{1},\hat{t}_{2}]$, and $J_{1}=I$, and define $P_{1},P_{2}$
as in the statement of the lemma. Observe that this construction immediately
satisfies (a) and (b). Moreover, (c) also follows from (36) and (37). Thus,
the claim holds when $K=2$.
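The two conditions pinning down $\hat{t}_{1},\hat{t}_{2}$ in the $K=2$ case can be solved numerically. In the Python sketch below (ours, purely illustrative) we take $I=[0,1]$, $F$ uniform, and $G$ placing mass $p_{1}=p_{2}=1/2$ at $m_{1}=0.3$ and $m_{2}=0.7$; the mass condition $F(\hat{t}_{1})-F(a)+F(b)-F(\hat{t}_{2})=p_{1}$ then forces $\hat{t}_{2}=\hat{t}_{1}+1-p_{1}$, and (36) is solved by bisection.

```python
p1, m1 = 0.5, 0.3               # illustrative two-point G; p2, m2 = 0.5, 0.7

def residual(t1):
    # int_a^{t1} x dF + int_{t2}^{b} x dF - p1*m1, with F uniform on [0,1]
    t2 = t1 + (1 - p1)          # enforces F(t1) - F(a) + F(b) - F(t2) = p1
    return t1**2 / 2 + (1 - t2**2) / 2 - p1 * m1

lo, hi = 0.0, p1                # residual(lo) > 0 > residual(hi) here
for _ in range(60):             # bisection on the (monotone) residual
    mid = (lo + hi) / 2
    if residual(mid) > 0:
        lo = mid
    else:
        hi = mid
t1 = (lo + hi) / 2
t2 = t1 + (1 - p1)
inner_mass_mean = (t2**2 - t1**2) / 2   # int_{J_2} x dF, should equal p2*m2
print(round(t1, 6), round(t2, 6), round(inner_mass_mean, 6))  # ≈ 0.45 0.95 0.35
```

For these choices $J_{2}=[\hat{t}_{1},\hat{t}_{2}]=[0.45,0.95]$, and $\int_{J_{2}}x\,dF(x)=0.35=p_{2}m_{2}$, as required by (37).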
$K>2$: Suppose that $K>2$, and that the induction hypothesis holds for any
$K^{\prime}\leq K-1$. Let $\hat{p}_{2}=p_{K}$, $\hat{m}_{2}=m_{K}$; and
$\hat{p}_{1}=\sum_{k\in[K-1]}p_{k}$,
$\hat{m}_{1}=\frac{1}{\hat{p}_{1}}\sum_{k\in[K-1]}p_{k}m_{k}$. Define a
distribution $\hat{G}$ such that $\hat{G}(x)=G(x)$ for $x\notin I$,
$\hat{G}(a)=F(a)$, and
$\hat{G}(x)=\hat{G}(a)+\sum_{r=1}^{2}\hat{p}_{r}\mathbf{1}_{x\leq\hat{m}_{r}}$.
This construction ensures that $\hat{p}_{1}+\hat{p}_{2}=F(b)-F(a)$ and
$\hat{m}_{2}>\hat{m}_{1}$. Moreover, $G$ is a mean preserving spread of
$\hat{G}$, and hence $\int_{x}^{1}\hat{G}(t)dt\geq\int_{x}^{1}{G}(t)dt$. Since
$\hat{G}(x)=G(x)$ for $x\notin I$, this in turn implies that
$\int_{x}^{1}\hat{G}(t)dt\geq\int_{x}^{1}F(t)dt$ for $x\in I$ where the
inequality holds with equality only for the end points of $I$. Thus, the
assumptions of the lemma hold for $\hat{G}$ and $F$, and using the induction
hypothesis for $K^{\prime}=2$, we conclude that there exist intervals
$\hat{J}_{1}$, $\hat{J}_{2}$ and sets $P_{2}=\hat{J}_{2}$,
$P_{1}=\hat{J}_{1}\setminus\hat{J}_{2}$, such that
* ($\hat{a}$)
$I=\hat{J}_{1}\supset\hat{J}_{2}$, and
$F(\inf\hat{J}_{1})<F(\inf\hat{J}_{2})<F(\sup\hat{J}_{2})<F(\sup\hat{J}_{1})$;
* ($\hat{b}$)
$\int_{P_{k}}dF(x)=\hat{p}_{k}$ for ${k\in\\{1,2\\}}$;
* ($\hat{c}$)
$\int_{P_{k}}xdF(x)=\hat{p}_{k}\hat{m}_{k}$ for all ${k\in\\{1,2\\}}$.
Note that $(\hat{b})$ and $(\hat{c})$ imply that $\hat{m}_{2}\in\hat{J}_{2}$.
Denote by ${x}_{0},{x}_{1}$ the end points of $\hat{J}_{2}$ and let
$q_{0}=F(x_{0})>F(a)$, $q_{1}=F(x_{1})<F(b)$. Define a cumulative distribution
function $F^{\prime}(\cdot)$, such that
${F}^{\prime}(x)=\begin{cases}F(x)/(1-\hat{p}_{2})&\mbox{for $x\leq
x_{0}$},\\\ F(x_{0})/(1-\hat{p}_{2})&\mbox{for $x_{0}<x<x_{1}$},\\\
(F(x)-\hat{p}_{2})/(1-\hat{p}_{2})&\mbox{for $x_{1}\leq x$}.\\\ \end{cases}$
(38)
Set $p^{\prime}_{k}=p_{k}/(1-\hat{p}_{2})$ and ${m}^{\prime}_{k}=m_{k}$ for
$k\in[K-1]$. Let distribution $G^{\prime}$ be such that
$G^{\prime}(x)=G(x)/(1-\hat{p}_{2})$ for $x\notin I$, and
$G^{\prime}(x)={G}^{\prime}(a)+\sum_{r\in[K-1]}p^{\prime}_{r}\mathbf{1}_{x\leq
m^{\prime}_{r}}$. Observe that by construction
${G}^{\prime}(a)={F}^{\prime}(a)$,
$\sum_{r\in[K-1]}p^{\prime}_{r}={F}^{\prime}(b)-{F}^{\prime}(a)$, and
$\\{m_{r}^{\prime}\\}$ is a strictly increasing collection in $r$, where
$m_{r}^{\prime}\in I$, $m_{r}^{\prime}<\hat{m}_{2}$ for $r\in[K-1]$. The
following lemma implies that $G^{\prime}$ and $F^{\prime}$ also satisfy the
MPC constraints over $I$:
###### Lemma 11.
$\int_{x}^{1}G^{\prime}(t)dt\geq\int_{x}^{1}F^{\prime}(t)dt$ for $x\in I$,
where the inequality holds with equality only for the end points of $I$.
* Proof.
The definition of $G^{\prime}$ implies that it can alternatively be expressed
as follows:
${G}^{\prime}(x)=\begin{cases}G(x)/(1-\hat{p}_{2})&\mbox{for
$x<\hat{m}_{2}$},\\\ (G(x)-\hat{p}_{2})/(1-\hat{p}_{2})&\mbox{for
$x\geq\hat{m}_{2}$}.\\\ \end{cases}$ (39)
Since $\int_{b}^{1}G(t)dt=\int_{b}^{1}F(t)dt$, (38) and (39) readily imply
that $\int_{b}^{1}G^{\prime}(t)dt=\int_{b}^{1}F^{\prime}(t)dt$. Similarly,
using these observations and (38) we have
$\displaystyle(1-\hat{p}_{2})\int_{a}^{1}F^{\prime}(t)dt$
$\displaystyle=\int_{a}^{1}F(t)dt-\int_{x_{0}}^{x_{1}}F(t)dt+F(x_{0})(x_{1}-x_{0})-\hat{p}_{2}(1-x_{1})$
(40)
$\displaystyle=\int_{a}^{1}F(t)dt-F(x_{1})x_{1}+F(x_{0})x_{0}+\hat{p}_{2}\hat{m}_{2}+F(x_{0})(x_{1}-x_{0})-\hat{p}_{2}(1-x_{1})$
$\displaystyle=\int_{a}^{1}G(t)dt-\hat{p}_{2}(1-\hat{m}_{2})$
Here, the second line rewrites $\int_{x_{0}}^{x_{1}}F(t)dt$ using integration
by parts, and leverages ($\hat{c}$). The third line uses the fact that
$\hat{p}_{2}=F(x_{1})-F(x_{0})$ and $\int_{a}^{1}G(t)dt=\int_{a}^{1}F(t)dt$.
On the other hand, (39) readily implies that:
$\displaystyle(1-\hat{p}_{2})\int_{a}^{1}G^{\prime}(t)dt$
$\displaystyle=\int_{a}^{1}G(t)dt-\hat{p}_{2}(1-\hat{m}_{2})$ (41)
Together with (40), this equation implies that
$\int_{a}^{1}G^{\prime}(t)dt=\int_{a}^{1}F^{\prime}(t)dt$. Thus, the
inequality in the claim holds with equality for the end points of $I$.
Recall that $\hat{m}_{2}\in\hat{J}_{2}$ and hence
$a<x_{0}\leq\hat{m}_{2}=m_{K}\leq x_{1}<b$. We complete the proof by focusing
on the value $x$ takes in the following cases: (i) $a<x\leq x_{0}$, (ii)
$x_{0}\leq x\leq\hat{m}_{2}$, (iii) $\hat{m}_{2}\leq x\leq x_{1}$, (iv)
$x_{1}\leq x<b$.
#### Case (i):
Using the observations $\int_{x}^{1}G(t)dt>\int_{x}^{1}F(t)dt$ and
$\int_{a}^{1}G(t)dt=\int_{a}^{1}F(t)dt$ together with (38) and (39) yields
$\int_{a}^{x}G^{\prime}(t)dt=\frac{1}{1-\hat{p}_{2}}\int_{a}^{x}G(t)dt<\frac{1}{1-\hat{p}_{2}}\int_{a}^{x}F(t)dt=\int_{a}^{x}F^{\prime}(t)dt.$
(42)
Together with the fact that
$\int_{a}^{1}G^{\prime}(t)dt=\int_{a}^{1}F^{\prime}(t)dt$ this implies that
$\int_{x}^{1}G^{\prime}(t)dt>\int_{x}^{1}F^{\prime}(t)dt$ in case (i).
#### Case (ii):
Using the observations above together with (38) and (39) we obtain:
$\displaystyle({1-\hat{p}_{2}})\int_{x}^{1}G^{\prime}(t)-F^{\prime}(t)dt$
$\displaystyle=\int_{x}^{1}G(t)dt-(1-\hat{m}_{2})\hat{p}_{2}-\int_{x_{1}}^{1}F(t)dt-\int_{x}^{x_{1}}F(x_{0})dt+(1-x_{1})\hat{p}_{2}$
Since $G$ is an increasing function, it can be seen that the right hand side
is a concave function of $x$. Thus, for $x\in[x_{0},\hat{m}_{2}]$ this
expression is minimized for $x=x_{0}$ or $x=\hat{m}_{2}$. For $x=x_{0}$, case
(i) implies that the expression is non-negative. We next argue that for
$x=\hat{m}_{2}$ the expression remains non-negative. This in turn implies that
$\int_{x}^{1}G^{\prime}(t)-F^{\prime}(t)dt\geq 0$ for
$x\in[x_{0},\hat{m}_{2}]$, as claimed.
Setting $x=\hat{m}_{2}$, recalling that
$\int_{b}^{1}G(t)dt=\int_{b}^{1}F(t)dt$, and observing that $G(t)=G(b)=F(b)$
for $t\geq\hat{m}_{2}$ the right hand side of the previous equation reduces
to:
$\displaystyle R:$
$\displaystyle=(b-\hat{m}_{2})F(b)-(1-\hat{m}_{2})\hat{p}_{2}-\int_{x_{1}}^{b}F(t)dt-(x_{1}-\hat{m}_{2})F(x_{0})+(1-x_{1})\hat{p}_{2}$
(43)
$\displaystyle=(b-\hat{m}_{2})F(b)-\int_{x_{1}}^{b}F(t)dt-(x_{1}-\hat{m}_{2})F(x_{0})-(x_{1}-\hat{m}_{2})\hat{p}_{2}$
$\displaystyle=(b-x_{1})F(b)-\int_{x_{1}}^{b}F(t)dt+(x_{1}-\hat{m}_{2})(F(b)-F(x_{0})-\hat{p}_{2}).$
Since $F(b)\geq F(x_{1})=\hat{p}_{2}+F(x_{0})$, we conclude:
$\displaystyle R$ $\displaystyle\geq(b-x_{1})F(b)-\int_{x_{1}}^{b}F(t)dt\geq
0,$ (44)
where the last inequality applies since $F$ is weakly increasing. Thus, we
conclude that $\int_{\hat{m}_{2}}^{1}G^{\prime}(t)-F^{\prime}(t)dt\geq 0$, and
the claim follows.
#### Case (iii):
First observe that (38) and (39) imply that
$\displaystyle({1-\hat{p}_{2}})\int_{x}^{1}G^{\prime}(t)-F^{\prime}(t)dt$
$\displaystyle=\int_{x}^{1}G(t)dt-(1-x)\hat{p}_{2}-\int_{x_{1}}^{1}F(t)dt-\int_{x}^{x_{1}}F(x_{0})dt+(1-x_{1})\hat{p}_{2}$
Similar to case (ii), the right hand side is a concave function of $x$. Thus,
for $x\in[\hat{m}_{2},x_{1}]$ this expression is
minimized for $x=\hat{m}_{2}$ or $x={x}_{1}$. When $x=\hat{m}_{2}$, case (ii)
implies that $\int_{x}^{1}G^{\prime}(t)-F^{\prime}(t)dt\geq 0$. Similarly,
when $x=x_{1}$, case (iv) implies that
$\int_{x}^{1}G^{\prime}(t)-F^{\prime}(t)dt\geq 0$. Thus, it follows that
$\int_{x}^{1}G^{\prime}(t)-F^{\prime}(t)dt\geq 0$ for all
$x\in[\hat{m}_{2},x_{1}]$.
#### Case (iv):
In this case, (38) and (39) readily imply that
$\displaystyle({1-\hat{p}_{2}})\int_{x}^{1}G^{\prime}(t)-F^{\prime}(t)dt$
$\displaystyle=\int_{x}^{1}G(t)-F(t)dt>0,$
where the inequality follows from our assumptions on $F$ and $G$. ∎
Summarizing, we have established that the distribution $G^{\prime}$ and
$F^{\prime}$ satisfy the conditions of the lemma. By the induction hypothesis,
we have that there exist intervals $\\{J_{k}^{\prime}\\}_{k\in[K-1]}$ and
sets
$P_{k}^{\prime}=J_{k}^{\prime}\setminus\cup_{\ell\in[K-1]|\ell>k}J_{\ell}^{\prime}$
for all $k\in[K-1]$ such that:
* (a’)
$J_{1}^{\prime}=I$, and $F(\inf J_{1}^{\prime})<F(\inf
J_{K-1}^{\prime})<F(\sup J_{K-1}^{\prime})<F(\sup J_{1}^{\prime})$;
* (b’)
$\int_{P_{k}^{\prime}}dF^{\prime}(x)=p_{k}^{\prime}$ for all ${k\in[K-1]}$;
* (c’)
$\int_{P_{k}^{\prime}}xdF^{\prime}(x)=p_{k}^{\prime}m_{k}^{\prime}$ for all
${k\in[K-1]}$.
Let $J_{k}=J^{\prime}_{k}\setminus\hat{J}_{2}$ for $k\in[K-1]$ such that
$\hat{J}_{2}\not\subset J_{k}^{\prime}$, and $J_{k}=J^{\prime}_{k}$ for the
remaining $k\in[K-1]$. Define $J_{K}=\hat{J}_{2}=[x_{0},x_{1}]$. For
$k\in[K]$, let $P_{k}=J_{k}\setminus\cup_{\ell\in[K]|\ell>k}J_{\ell}$. Note
that the definition of the collection $\\{P_{k}\\}_{k\in[K]}$ implies that it
constitutes a laminar partition of $I$. Observe that the construction of
$\\{J_{k}\\}_{k\in{[K]}}$ and ($\hat{a}$), ($a^{\prime}$) imply that these
intervals also satisfy condition (a) of the lemma.
Note that by construction we have
$P_{k}\subset P_{k}^{\prime}\subset P_{k}\cup J_{K}\quad\mbox{and}\quad
P_{k}\cap J_{K}=\emptyset\quad\mbox{ for $k\in[K-1]$.}$ (45)
Since $\int_{J_{K}}dF^{\prime}(t)=0$ by (38) this observation implies that
$\int_{P_{k}^{\prime}}dF^{\prime}(t)=\int_{P_{k}}dF^{\prime}(t)$ for
$k\in[K-1]$.
Using (38), ($b^{\prime}$), and (45), this observation implies that
$\int_{P_{k}}dF(t)=\int_{P_{k}}dF^{\prime}(t)(1-\hat{p}_{2})=\int_{P_{k}^{\prime}}dF^{\prime}(t)(1-\hat{p}_{2})=p_{k}^{\prime}(1-\hat{p}_{2})=p_{k},$
for $k\in[K-1]$. Similarly, by ($\hat{b}$) we have
$\int_{P_{K}}dF(t)=\int_{\hat{J}_{2}}dF(t)=\hat{p}_{2}=p_{K}$.
Finally, observe that by ($\hat{c}$) we have
$\int_{P_{K}}tdF(t)=\int_{\hat{J}_{2}}tdF(t)=\hat{p}_{2}\hat{m}_{2}=p_{K}m_{K}$.
Similarly, (38) and (45) imply that for $k\in[K-1]$, we have
$\int_{P_{k}}tdF(t)=(1-\hat{p}_{2})\int_{P_{k}}tdF^{\prime}(t)=(1-\hat{p}_{2})\int_{P_{k}^{\prime}}tdF^{\prime}(t)=(1-\hat{p}_{2})p_{k}^{\prime}m_{k}^{\prime}=p_{k}m_{k}.$
These observations imply that the constructed $\\{J_{k}\\}_{k\in[K]}$ and
$\\{P_{k}\\}_{k\in[K]}$ satisfy the induction hypotheses (a)–(c) for $K$.
Thus, the claim follows by induction. ∎
* Proof of Proposition 3.
By definition, the interval $I_{j}$ in the statement of Proposition 3
satisfies the conditions of Lemma 10, (after setting $a=a_{j}$, $b=b_{j}$).
The lemma defines auxiliary intervals $\\{J_{r}\\}$ and explicitly constructs
a laminar partition that satisfies conditions (a)-(c). Here, conditions (b)
and (c) readily imply that the constructed laminar partition satisfies the
claim in Proposition 3, concluding the proof. ∎
* Proof of Theorem 1.
The set of vectors of distributions $G=(G_{1},\ldots,G_{n})$ satisfying (3)
and (4) is compact by the same argument given in Lemma 6. As
$G\mapsto\sum_{\theta\in\Theta}g(\theta)\int_{\Omega}\bar{v}(s,\theta)dG_{\theta}(s)$
is upper semicontinuous it follows that an optimal solution exists. For this
optimal solution $G^{*}$ we can define $e_{\theta},d_{\theta}$ as in (5) and
(6). By Lemma 3 given these constants the optimal policy needs to solve
(7)-(9). The result then follows directly from Propositions 2 and 3 which
analyze the solution to optimization problems of this form. ∎
* Proof of Lemma 5.
The condition $G_{\theta}\succeq F$ can equivalently be stated as:
$\int_{0}^{x}(1-G_{\theta}(t))dt\geq\int_{0}^{x}(1-F(t))dt,$ (46)
for all $x$, where the inequality holds with equality for $x=1$. This
inequality can be expressed in the quantile space as
$\int_{0}^{x}G_{\theta}^{-1}(t)dt\geq\int_{0}^{x}F^{-1}(t)dt,$ (47)
for all $x\in[0,1]$, with equality at $x=1$. Note that since $G_{\theta}$ is a
discrete distribution, this condition holds if and only if it holds for
$x=\sum_{a\leq\ell}p_{a,\theta}$ and $\ell\in A$. For such $x$, we have
$\int_{0}^{x}G_{\theta}^{-1}(t)dt=\sum_{a\leq\ell}p_{a,\theta}m_{a,\theta}=\sum_{a\leq\ell}z_{a,\theta},$
(48)
and (47) becomes
$\sum_{a\leq\ell}z_{a,\theta}\geq\int_{0}^{\sum_{a\leq\ell}p_{a,\theta}}F^{-1}(t)dt.$
(49)
Since $\int_{0}^{1}F^{-1}(t)dt=\int_{0}^{1}G_{\theta}^{-1}(t)dt=\sum_{a\in
A}z_{a,\theta}$, the claim follows from (49) after rearranging terms. ∎
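The passage from (46) to the quantile-space condition (47) is easy to sanity-check numerically for discrete distributions. The Python sketch below is ours (the helper names and the example pair are illustrative): $F$ places mass $1/2$ at $0$ and at $1$, and $G_{\theta}$ is a point mass at $1/2$, so $G_{\theta}\succeq F$ should hold in both formulations.

```python
def cdf_integral(masses, x):
    # int_0^x D(t) dt for the CDF D of a discrete distribution;
    # masses is a list of (point, probability) pairs
    return sum(p * max(0.0, x - pt) for pt, p in masses)

def quantile_integral(masses, x):
    # int_0^x D^{-1}(t) dt: fill quantile mass from the smallest point upward
    total, acc = 0.0, 0.0
    for pt, p in sorted(masses):
        take = min(p, x - acc)
        if take <= 0:
            break
        total += take * pt
        acc += take
    return total

F = [(0.0, 0.5), (1.0, 0.5)]     # mass 1/2 at 0 and at 1
G = [(0.5, 1.0)]                 # point mass at 1/2
grid = [i / 100 for i in range(101)]
# (46) rewritten: int_0^x G <= int_0^x F for all x, with equality at x = 1
cdf_ok = all(cdf_integral(G, x) <= cdf_integral(F, x) + 1e-12 for x in grid)
# (47): int_0^x G^{-1} >= int_0^x F^{-1} for all x in [0,1]
qtl_ok = all(quantile_integral(G, x) >= quantile_integral(F, x) - 1e-12 for x in grid)
print(cdf_ok, qtl_ok)  # both conditions hold for this pair
```

Both checks pass, and the equality at the right end point ($x=1$) holds as well, matching the statement of the condition.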
## References
* Aliprantis and Border (2013) Aliprantis, Charalambos and Kim Border, Infinite Dimensional Analysis: A Hitchhiker’s Guide, Springer-Verlag Berlin and Heidelberg GmbH & Company KG, 2013.
* Alonso and Câmara (2016) Alonso, Ricardo and Odilon Câmara, “Persuading voters,” American Economic Review, 2016, 106 (11), 3590–3605.
* Arieli et al. (2020) Arieli, Itai, Yakov Babichenko, Rann Smorodinsky, and Takuro Yamashita, “Optimal Persuasion via Bi-Pooling,” Available at SSRN, 2020.
* Bergemann and Morris (2013) Bergemann, Dirk and Stephen Morris, “Robust predictions in games with incomplete information,” Econometrica, 2013, 81 (4), 1251–1308.
* Bergemann and Morris (2016) and , “Bayes correlated equilibrium and the comparison of information structures in games,” Theoretical Economics, 2016, 11 (2), 487–522.
* Bergemann and Morris (2019) and , “Information design: A unified perspective,” Journal of Economic Literature, 2019, 57 (1), 44–95.
* Blackwell (1950) Blackwell, David, “Comparison of experiments,” Technical Report, Howard University, Washington, United States, 1950.
* Blackwell and Girshick (1954) Blackwell, David A and Meyer A Girshick, “Theory of Games and Statistical Decisions,” John Wiley and Sons, New York, 1954.
* Boleslavsky and Cotton (2015) Boleslavsky, Raphael and Christopher Cotton, “Grading standards and education quality,” American Economic Journal: Microeconomics, 2015, 7 (2), 248–79.
* Brocas and Carrillo (2007) Brocas, Isabelle and Juan D. Carrillo, “Influence through ignorance,” RAND Journal of Economics, 2007, 38 (4), 931–947.
* Candogan (2019a) Candogan, Ozan, “Optimality of double intervals in persuasion: A convex programming framework,” Available at SSRN 3452145, 2019.
* Candogan (2019b) , “Persuasion in Networks: Public Signals and k-Cores,” in “Proceedings of the 2019 ACM Conference on Economics and Computation” ACM 2019, pp. 133–134.
* Candogan and Drakopoulos (2017) and Kimon Drakopoulos, “Optimal Signaling of Content Accuracy: Engagement vs. Misinformation,” 2017.
* Dworczak and Pavan (2020) Dworczak, Piotr and Alessandro Pavan, “Preparing for the worst but hoping for the best: Robust (bayesian) persuasion,” 2020.
* Dworczak and Martini (2019) and Giorgio Martini, “The simple economics of optimal persuasion,” Journal of Political Economy, 2019, 127 (5), 1993–2048.
* Gentzkow and Kamenica (2016a) Gentzkow, Matthew and Emir Kamenica, “Competition in persuasion,” The Review of Economic Studies, 2016, 84 (1), 300–322.
* Gentzkow and Kamenica (2016b) and , “A Rothschild–Stiglitz approach to Bayesian persuasion,” American Economic Review, 2016, 106 (5), 597–601.
* Goldstein and Leitner (2018) Goldstein, Itay and Yaron Leitner, “Stress tests and information disclosure,” Journal of Economic Theory, 2018, 177, 34–69.
* Guo and Shmaya (2019) Guo, Yingni and Eran Shmaya, “The interval structure of optimal disclosure,” Econometrica, 2019, 87 (2), 653–675.
* Guo et al. (2020) , Hao Li, and Xianwen Shi, “Optimal discriminatory disclosure,” Technical Report, Working paper 2020.
* Inostroza and Pavan (2018) Inostroza, Nicolas and Alessandro Pavan, “Persuasion in global games with application to stress testing,” 2018.
* Ivanov (2015) Ivanov, Maxim, “Optimal signals in Bayesian persuasion mechanisms with ranking,” Work. Pap., McMaster Univ., Hamilton, Can, 2015.
* Kamenica (2019) Kamenica, Emir, “Bayesian persuasion and information design,” Annual Review of Economics, 2019, 11, 249–272.
* Kamenica and Gentzkow (2011) and Matthew Gentzkow, “Bayesian Persuasion,” American Economic Review, 2011, 101 (6), 2590–2615.
* Kleiner et al. (2020) Kleiner, Andreas, Benny Moldovanu, and Philipp Strack, “Extreme Points and Majorization: Economic Applications,” Available at SSRN, 2020.
* Kolotilin (2018) Kolotilin, Anton, “Optimal information disclosure: A linear programming approach,” Theoretical Economics, 2018, 13 (2), 607–635.
* Kolotilin and Wolitzky (2020) and Alexander Wolitzky, “The Economics of Partisan Gerrymandering,” 2020.
* Kolotilin and Zapechelnyuk (2019) and Andriy Zapechelnyuk, “Persuasion meets delegation,” arXiv preprint arXiv:1902.02628, 2019.
* Kolotilin et al. (2017) , Tymofiy Mylovanov, Andriy Zapechelnyuk, and Ming Li, “Persuasion of a privately informed receiver,” Econometrica, 2017, 85 (6), 1949–1964.
* Malamud et al. (2021) Malamud, Semyon, Anna Cieslak, and Andreas Schrimpf, “Optimal Transport of Information,” Swiss Finance Institute Research Paper, 2021, (21-15).
* Onuchic and Ray (2020) Onuchic, Paula and Debraj Ray, “Conveying Value Via Categories,” working paper, 2020.
* Orlov et al. (2018) Orlov, Dmitry, Pavel Zryumov, and Andrzej Skrzypacz, “Design of macro-prudential stress tests,” 2018.
* Ostrovsky and Schwarz (2010) Ostrovsky, Michael and Michael Schwarz, “Information disclosure and unraveling in matching markets,” American Economic Journal: Microeconomics, 2010, 2 (2), 34–63.
* Rayo and Segal (2010) Rayo, Luis and Ilya Segal, “Optimal information disclosure,” Journal of political Economy, 2010, 118 (5), 949–987.
* Rothschild and Stiglitz (1970) Rothschild, Michael and Joseph E Stiglitz, “Increasing risk: I. A definition,” Journal of Economic theory, 1970, 2 (3), 225–243.
* Schweizer and Szech (2018) Schweizer, Nikolaus and Nora Szech, “Optimal revelation of life-changing information,” Management Science, 2018, 64 (11), 5250–5262.
* Wei and Green (2020) Wei, Dong and Brett Green, “(Reverse) Price Discrimination with Information Design,” Available at SSRN 3263898, 2020.
* Winkler (1988) Winkler, Gerhard, “Extreme points of moment sets,” Mathematics of Operations Research, 1988, 13 (4), 581–587.
* Yamashita and Zhu (2021) Yamashita, Takuro and Shuguang Zhu, “Optimal public information disclosure by mechanism designer,” 2021.
* Yang (2020) Yang, Kai Hao, “Selling Consumer Data for Profit: Optimal Market-Segmentation Design and its Consequences,” 2020.
# Computing $L$-Functions of Quadratic Characters at Negative Integers
Henri Cohen
Université de Bordeaux,
Institut de Mathématiques, U.M.R. 5251 du C.N.R.S,
351 Cours de la Libération,
33405 TALENCE Cedex, FRANCE
###### Abstract
We survey a number of different methods for computing $L(\chi,1-k)$ for a
Dirichlet character $\chi$, with particular emphasis on quadratic characters.
The main conclusion is that when $k$ is not too large (for instance $k\leq
100$) the best method comes from the use of Eisenstein series of half-integral
weight, while when $k$ is large the best method is the use of the complete
functional equation, unless the conductor of $\chi$ is really large, in which
case the previous method again prevails.
## 1 Introduction
This paper can be considered as a complement of two of my old papers [2] and
[3], updated to include new formulas, and surveying existing methods.
The general goal of this paper is to give efficient methods for computing the
values at negative integers of $L$-functions of Dirichlet characters $\chi$.
Since these values are algebraic numbers, more precisely belong to the
cyclotomic field ${\mathbb{Q}}(\chi)$, we want to know their _exact_ value.
When $\chi(-1)=(-1)^{k-1}$ we have $L(\chi,1-k)=0$, so we always assume
implicitly that $\chi(-1)=(-1)^{k}$. In addition, if $\chi$ is a non-primitive
character modulo $F$ and $\chi_{f}$ is the primitive character associated to
$\chi$, we have
$L(\chi,1-k)=L(\chi_{f},1-k)\prod_{p\mid F,\ p\nmid
f}(1-\chi_{f}(p)p^{k-1})\;,$
so we may assume that $\chi$ is primitive.
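For concreteness, these exact rational values can be obtained from the classical formula $L(\chi,1-k)=-B_{k,\chi}/k$, where $B_{k,\chi}=F^{k-1}\sum_{a=1}^{F}\chi(a)B_{k}(a/F)$ is the $\chi$-Bernoulli number (this is the Bernoulli method discussed below). A minimal Python sketch for a quadratic character, here the Legendre symbol modulo $5$ (the helper names are ours):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    # B_0..B_n via the recurrence sum_{j=0}^{m} C(m+1,j) B_j = 0 (B_1 = -1/2)
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B

def bernoulli_poly(k, x, B):
    # Bernoulli polynomial B_k(x) = sum_j C(k,j) B_j x^{k-j}
    return sum(Fraction(comb(k, j)) * B[j] * x ** (k - j) for j in range(k + 1))

def L_chi_neg(chi, F, k):
    # L(chi, 1-k) = -B_{k,chi}/k, B_{k,chi} = F^{k-1} sum_{a=1}^{F} chi(a) B_k(a/F)
    B = bernoulli_numbers(k)
    Bk_chi = Fraction(F) ** (k - 1) * sum(
        chi(a) * bernoulli_poly(k, Fraction(a, F), B) for a in range(1, F + 1))
    return -Bk_chi / k

# quadratic character of conductor 5 (Legendre symbol mod 5)
chi5 = lambda a: [0, 1, -1, -1, 1][a % 5]
print(L_chi_neg(chi5, 5, 2))  # -2/5
```

For instance $L(\chi_{5},-1)=-2/5$, consistent with $\zeta_{{\mathbb{Q}}(\sqrt{5})}(-1)=\zeta(-1)L(\chi_{5},-1)=1/30$; this direct summation takes $O(F)$ character evaluations, matching the $\tilde{O}(F)$ cost quoted for the Bernoulli methods below.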
Note that we will not consider the slightly different problem of computing
_tables_ of $L(\chi,1-k)$, either for fixed $k$ and varying $\chi$ (such as
$\chi=\chi_{D}$ the quadratic character of discriminant $D$), or for fixed
$\chi$ and varying $k$, although several of the methods considered here can be
used for this purpose.
In addition to their intrinsic interest, these computations have several
applications, for instance:
1. (1)
Computing $\lambda$-invariants of quadratic fields (I am indebted to J.
Ellenberg and S. Jain for this, see [7]).
2. (2)
Computing Sato–Tate distributions for modular forms of half-integral weight,
see [8] and [9].
3. (3)
Computing Hardy–Littlewood constants of polynomials, see [1].
There exist at least five different methods for computing these quantities,
some having several variants. We denote by $F$ the conductor of $\chi$.
1. (1)
Bernoulli methods: one can express $L(\chi,1-k)$ as a finite sum involving
$O(F)$ terms and Bernoulli numbers, so that the time required is
$\tilde{O}(F)$ (we use the “soft-O” notation $\tilde{O}(X)$ to mean
$O(X^{1+\varepsilon})$ for any $\varepsilon>0$). This method has two variants:
one which uses directly the definition of $\chi$-Bernoulli numbers, the second
which uses _recursions_.
2. (2)
Use of the _complete functional equation_. Using it, it is sufficient first to
compute numerically $L(\overline{\vphantom{T}\chi},k)$ to sufficient accuracy
(given by the functional equation), which is done using the Euler product, and
second to know an upper bound on the denominator of $L(\chi,1-k)$, which is
easy (and usually equal to $1$). The required time is also $\tilde{O}(F)$, but
with a much smaller implicit $O()$ constant.
3. (3)
Use of the _approximate functional equation_ , which involves in particular
computing the incomplete gamma function or similar higher transcendental
functions. The required time is $\tilde{O}(F^{1/2})$, but with a large
implicit $O()$ constant.
4. (4)
Use of Hecke-Eisenstein series (Hilbert modular forms) on the full modular
group, which expresses $L(\chi,1-k)$ as a finite sum involving $O(F^{1/2})$
terms and (twisted) sum of divisors function. The required time is
$\tilde{O}(F^{1/2})$ with a very small implicit $O()$ constant. A variant
which is useful only for very small $k$ such as $k\leq 10$ uses Hecke-
Eisenstein series on congruence subgroups of small level.
5. (5)
Use of Eisenstein series of half-integral weight over $\Gamma_{0}(4)$, which
again expresses $L(\chi,1-k)$ as a finite sum involving $O(F^{1/2})$ terms and
(twisted) sum of divisors function, but different from the previous ones. The
required time is again $\tilde{O}(F^{1/2})$, but with an even smaller implicit
$O()$ constant. An important variant, valid for all $k$, is to use modular
forms of half-integral weight on subgroups of $\Gamma_{0}(4)$.
The first three methods are completely general, but the last two are really
efficient only if $\chi$ is equal to a quadratic character or possibly a
quadratic character times a character of small conductor. We will therefore
present all five methods and their variants, but consider the last two methods
only in the case of quadratic characters, and therefore compare them only in
this case.
After implementing these methods and comparing their running times for various
values of $F$, we have arrived at the following conclusions: first, the two
fastest methods are always either the fifth (Eisenstein series of half-
integral weight) or the second (complete functional equation). Second, one
should choose the second method only if $k$ is large, for instance $k\geq
100$, except if $F$ is large. Note also that the case $F=1$ corresponds to the
computation of Bernoulli numbers, and that indeed the fastest method for this
is the use of the complete functional equation of the Riemann zeta function.
Because of these conclusions, we will give explicitly the formulas for the
first, third, and fourth method, but only describe the precise implementations
and timings for the second and fifth, which are the really useful ones.
## 2 Bernoulli Methods
### 2.1 Direct Formulas
###### Proposition 2.1
Define the $\chi$-Bernoulli numbers $B_{k}(\chi)$ by the generating function
$\dfrac{T}{e^{FT}-1}\sum_{0\leq r<F}\chi(r)e^{rT}=\sum_{k\geq
0}\dfrac{B_{k}(\chi)}{k!}T^{k}\;.$
Then
$L(\chi,1-k)=-\dfrac{B_{k}(\chi)}{k}-\chi(0)\delta_{k,1}\;.$
Note that since we assume $\chi$ primitive, the term $\chi(0)\delta_{k,1}$
vanishes unless $F=1$ and $k=1$, in which case $L(\chi,1-k)=\zeta(0)=-1/2$.
Also, recall that for $k\geq 2$ we have $B_{k}(\chi)=0$ if
$\chi(-1)\neq(-1)^{k}$.
###### Proposition 2.2
Set $S_{n}(\chi)=\sum_{0\leq r<F}\chi(r)r^{n}$. We have
$\displaystyle B_{k}(\chi)$
$\displaystyle=\dfrac{1}{F}\left(S_{k}(\chi)-\dfrac{kF}{2}S_{k-1}(\chi)+\sum_{1\leq
j\leq k/2}\binom{k}{2j}B_{2j}F^{2j}S_{k-2j}(\chi)\right)$
$\displaystyle=\dfrac{1}{F}\sum_{0\leq
r<F}\chi(r)\left(r^{k}-\dfrac{kF}{2}r^{k-1}+\sum_{1\leq j\leq
k/2}\binom{k}{2j}B_{2j}r^{k-2j}F^{2j}\right)$
$\displaystyle=\dfrac{1}{F}\sum_{1\leq j\leq
k+1}\dfrac{(-1)^{j-1}}{j}\binom{k+1}{j}\sum_{0\leq r<Fj}\chi(r)r^{k}\;.$
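For illustration, the first formula of Proposition 2.2 translates directly into exact rational arithmetic. The following Python sketch (not one of the timed implementations; it uses $\chi_{5}$, the quadratic character of conductor $5$, as a test case) computes $L(\chi,1-k)=-B_{k}(\chi)/k$ as in Proposition 2.1:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """B_0,...,B_n via the standard recurrence sum_{j=0}^{m} C(m+1,j) B_j = 0."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def L_chi_neg(chi, F, k):
    """L(chi,1-k) = -B_k(chi)/k, with B_k(chi) given by the first formula of
    Proposition 2.2 (chi nontrivial, so the chi(0) term of Prop. 2.1 vanishes)."""
    S = [sum(chi(r) * Fraction(r)**n for r in range(F)) for n in range(k + 1)]
    B = bernoulli_numbers(k)
    Bk = S[k] - Fraction(k * F, 2) * S[k - 1]
    for j in range(1, k // 2 + 1):
        Bk += comb(k, 2 * j) * B[2 * j] * Fraction(F)**(2 * j) * S[k - 2 * j]
    return -(Bk / F) / k

chi5 = lambda r: [0, 1, -1, -1, 1][r % 5]   # the quadratic character mod 5
print(L_chi_neg(chi5, 5, 2), L_chi_neg(chi5, 5, 4))  # -> -2/5 2
```

The value $L(\chi_{5},-1)=-2/5$ is consistent with the classical $\zeta_{{\mathbb{Q}}(\sqrt{5})}(-1)=\zeta(-1)L(\chi_{5},-1)=1/30$. The cost is clearly $\tilde{O}(F)$ per value, dominated by the sums $S_{n}(\chi)$.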
### 2.2 Recursions
There are a large number of recursions for $B_{k}(\chi)$. The following three
propositions give some of the most important ones:
###### Proposition 2.3
We have the recursion
$\sum_{0\leq j<k}F^{k-j}\binom{k}{j}B_{j}(\chi)=kS_{k-1}(\chi)\;,$
where $S_{n}(\chi)$ is as above.
###### Proposition 2.4
Let $\chi$ be a nontrivial primitive character of conductor $F$, set
$\varepsilon=\overline{\vphantom{T}\chi}(2)$ and
$Q_{k}(\chi)=\sum_{1\leq r<F/2}\chi(r)r^{k}\;.$
We have the recursion
$(2^{k}-\varepsilon)B_{k}(\chi)=-\Biggl{(}k2^{k-1}Q_{k-1}(\chi)+\sum_{1\leq
j<k/2}\binom{k}{2j}(2^{k-1-2j}-\varepsilon)F^{2j}B_{k-2j}(\chi)\Biggr{)}\;.$
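As a sketch of how this recursion is used, here is a minimal Python implementation restricted to real (quadratic) even characters, for which $\varepsilon=\overline{\vphantom{T}\chi}(2)=\chi(2)$; only the even-index $B_{k}(\chi)$ are computed, the odd-index ones being zero:

```python
from fractions import Fraction
from math import comb

def chi_bernoulli_rec(chi, F, K):
    """B_k(chi) for even k <= K via the recursion of Proposition 2.4,
    for a real nontrivial even character chi (so eps = chi(2))."""
    eps = chi(2)
    def Q(n):  # Q_n(chi) = sum over 1 <= r < F/2 of chi(r) r^n
        return sum(chi(r) * Fraction(r)**n for r in range(1, (F + 1) // 2))
    B = {0: Fraction(0)}
    for k in range(2, K + 1, 2):
        rhs = k * 2**(k - 1) * Q(k - 1)
        rhs += sum(comb(k, 2 * j) * (2**(k - 1 - 2 * j) - eps)
                   * Fraction(F)**(2 * j) * B[k - 2 * j]
                   for j in range(1, (k - 1) // 2 + 1))  # 1 <= j < k/2
        B[k] = -rhs / (2**k - eps)
    return B

chi5 = lambda r: [0, 1, -1, -1, 1][r % 5]
B = chi_bernoulli_rec(chi5, 5, 4)
print(-B[2] / 2, -B[4] / 4)  # -> -2/5 2   (L(chi_5,-1) and L(chi_5,-3))
```

The gain over the direct formulas is that each step only sums over $r<F/2$, at the price of keeping all lower $B_{k-2j}(\chi)$.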
###### Proposition 2.5
Let $\chi$ be a nontrivial primitive character of conductor $F$.
1. (1)
If $\chi$ is even we have
$\sum_{0\leq
j\leq(k-1)/2}\binom{k}{2j+1}F^{2j}\dfrac{B_{2k-2j}(\chi)}{2k-2j}=\dfrac{(-1)^{k}}{F}\sum_{0\leq
r<F/2}\chi(r)r^{k}(F-r)^{k}\;.$
2. (2)
If $\chi$ is odd we have
$\sum_{0\leq
j\leq(k-1)/2}\binom{k}{2j+1}F^{2j}B_{2k-1-2j}(\chi)=\dfrac{(-1)^{k}k}{F}\sum_{0\leq
r<F/2}\chi(r)r^{k-1}(F-r)^{k-1}(F-2r)\;.$
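Identities of this kind are easy to sanity-check numerically. The sketch below verifies part (1) for $\chi=\chi_{5}$ and $k=1$, $2$, using the values $B_{2}(\chi_{5})=4/5$ and $B_{4}(\chi_{5})=-8$ obtained from Proposition 2.2:

```python
from fractions import Fraction
from math import comb

F = 5
chi5 = lambda r: [0, 1, -1, -1, 1][r % 5]
# chi-Bernoulli numbers of chi_5, computed from Proposition 2.2:
B = {2: Fraction(4, 5), 4: Fraction(-8)}

def lhs(k):   # left-hand side of Proposition 2.5 (1)
    return sum(comb(k, 2 * j + 1) * Fraction(F)**(2 * j)
               * B[2 * k - 2 * j] / (2 * k - 2 * j)
               for j in range((k - 1) // 2 + 1))

def rhs(k):   # right-hand side of Proposition 2.5 (1)
    return Fraction((-1)**k, F) * sum(chi5(r) * Fraction(r * (F - r))**k
                                      for r in range(F // 2 + 1))

print(lhs(1) == rhs(1), lhs(2) == rhs(2))  # -> True True
```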
In practice, it seems that the fastest way to compute $L(\chi,1-k)$ using
$\chi$-Bernoulli numbers is to use Proposition 2.4, but it is not competitive
with the other methods that we are going to give.
## 3 Using the Complete Functional Equation
In this section and the next, we use approximate methods to compute
$L(\chi,1-k)$, which for simplicity we call _transcendental_ methods, since
they use transcendental functions.
Since our goal is to compute these values as exact algebraic numbers, and
since we know that $L(\chi,1-k)\in{\mathbb{Q}}(\zeta_{u})$, where $u$ is the
order of $\chi$, we simply need to know an upper bound for the denominator of
$L(\chi,1-k)$ as an algebraic number, and we need to compute simultaneously
$L(\chi^{j},1-k)$ for $j$ modulo $u$ and coprime to $u$, so that the
individual values can then be obtained by simple linear algebra. A priori this
involves $\phi(u)$ computations, but since $L(\chi^{-1},1-k)$ is simply the
complex conjugate of $L(\chi,1-k)$, only $\lceil\phi(u)/2\rceil$ computations
are needed. In particular, if $u=1$, $2$, $3$, $4$, or $6$, a single
computation suffices.
Thus, we need two types of results: one giving the approximate size of
$L(\chi,1-k)$, so as to determine the relative accuracy with which to do the
computations, and second an upper bound for its denominator. The first result
is standard, and the second can be found in Section 11.4 of [5]:
###### Proposition 3.1
We have
$L(\chi,1-k)=\dfrac{2\cdot(k-1)!F^{k}}{(-2i\pi)^{k}{\mathfrak{g}}(\overline{\vphantom{T}\chi})}L(\overline{\vphantom{T}\chi},k)\;,$
where ${\mathfrak{g}}(\overline{\vphantom{T}\chi})$ is the standard Gauss sum
of modulus $|F|^{1/2}$ associated to $\overline{\vphantom{T}\chi}$.
###### Corollary 3.2
As $k\to\infty$ we have
$|L(\chi,1-k)|\sim 2\cdot e^{-1/2}\left(\dfrac{kF}{2\pi e}\right)^{k-1/2}\;.$
Proof. Clear from Stirling’s formula and the fact that
$L(\overline{\vphantom{T}\chi},k)$ tends to $1$ when $k\to\infty$.
$\sqcap$$\sqcup$
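Even for small $k$ this estimate is quite sharp. A quick check against the exact value $L(\chi_{5},-3)=2$ (obtained from the Bernoulli methods of Section 2):

```python
from math import e, pi

# Corollary 3.2 estimate for |L(chi,1-k)|, tried on chi_5 (F = 5), k = 4,
# where the exact value is L(chi_5,-3) = 2.
k, F = 4, 5
estimate = 2 * e**-0.5 * (k * F / (2 * pi * e))**(k - 0.5)
print(estimate)  # about 2.1, already close to the true value 2
```

In practice this estimate (plus the denominator bound below) fixes the working accuracy $B$.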
###### Theorem 3.3
Denote by $u$ the order of $\chi$, so that $u\mid\phi(F)$ and $L(\chi,1-k)\in
K={\mathbb{Q}}(\zeta_{u})$. We have
$D(\chi,k)L(\chi,1-k)\in{\mathbb{Z}}[\zeta_{u}]$, where the “denominator”
$D(\chi,k)$ can be chosen as follows:
1. (1)
If $F$ is not a prime power then $D(\chi,k)=1$.
2. (2)
Assume that $F=p^{v}$ for some odd prime $p$ and $v\geq 1$.
1. (a)
If $u\neq p^{v-1}(p-1)/\gcd(p-1,k)$ then $D(\chi,k)=1$.
2. (b)
If $u=p^{v-1}(p-1)/\gcd(p-1,k)$ then $D(\chi,k)=pk/((p-1)/u)$ if $v=1$ or
$D(\chi,k)=\chi(1+p)-1$ if $v\geq 2$.
3. (3)
If $F=2^{v}$ for some $v\geq 2$ then $D(\chi,k)=1$ if $v\geq 3$, while
$D(\chi,k)=2$ if $v=2$.
4. (4)
If $F=1$ then $D(\chi,k)=k\prod_{(p-1)\mid k}p$.
Stronger statements are easy to obtain, see [5], but these bounds are
sufficient.
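As a sketch, case (4) (which for $F=1$ is the von Staudt–Clausen bound applied to $\zeta(1-k)=-B_{k}/k$) can be checked in a few lines of Python:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """B_0,...,B_n via the standard recurrence sum_{j=0}^{m} C(m+1,j) B_j = 0."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def is_prime(p):
    return p > 1 and all(p % q for q in range(2, int(p**0.5) + 1))

def D_bound(k):
    """Case (4) of Theorem 3.3: F = 1, D(chi,k) = k * prod over primes p
    with (p-1) | k."""
    d = k
    for p in range(2, k + 2):   # (p-1) | k forces p <= k+1
        if is_prime(p) and k % (p - 1) == 0:
            d *= p
    return d

k = 12
zeta_1mk = -bernoulli_numbers(k)[k] / k        # zeta(1-k) = -B_k/k
print(D_bound(k) * zeta_1mk)                   # an integer (691), as promised
```

Here $\zeta(-11)=691/32760$ and $D_{\text{bound}}(12)=12\cdot 2\cdot 3\cdot 5\cdot 7\cdot 13=32760$, so the product is exactly the numerator $691$.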
To compute $L(\chi,1-k)$ using these results, we proceed as follows. Let $B$
be chosen so that
$B>(k-1/2)\log(kF/(2\pi e))+\log(|D(\chi,k)|)+10\;,$
where $10$ is simply a safety margin. Thanks to the above two results,
computing $L(\chi,1-k)$ to _relative_ accuracy $e^{-B}$ will guarantee that
the coefficients of the algebraic integer $D(\chi,k)L(\chi,1-k)$ on the
integral basis $(\zeta_{u}^{j})_{0\leq j<\phi(u)}$ will be correct to accuracy
$e^{-5}$, say, and since they are in ${\mathbb{Z}}$, they can thus be
recovered exactly.
Thanks to the functional equation, it is thus sufficient to compute
$L(\overline{\vphantom{T}\chi},k)$ to relative accuracy $e^{-B}$, but since
$L(\overline{\vphantom{T}\chi},k)$ is close to $1$, $k$ being large, this is
the same as absolute accuracy. Note from the above formula that $B$ will be
(considerably) larger than $k$.
To compute $L(\overline{\vphantom{T}\chi},k)$, we first compute $\prod_{p\leq
L(B,k)}(1-\chi(p)/p^{k})$, using an internal accuracy of $e^{-kB/(k-1)}$ and
limit $L(B,k)=(e^{B}/(k-1))^{1/(k-1)}$. More precisely, we initially set
$P=1$, and for primes $p$ going from $2$ to $L(B,k)$, we compute $1/p^{k}$ to
relative accuracy $p^{k}e^{-kB/(k-1)}$ (this is crucial), and then set
$P\leftarrow P-P(1/p^{k})$. It is clear that this will compute $1/L(\chi,k)$
to the desired precision, from which we immediately obtain
$L(\overline{\vphantom{T}\chi},k)$. Important implementation remark: to
compute the accuracy needed in the intermediate computations, one does _not_
compute $\log(p^{k})=k\log(p)$, but only some rough approximation, for
instance by counting the number of bytes that the multi-precision integer
$p^{k}$ occupies in memory, or any other fast method.
Even though this method is designed to be fast for relatively large $k$, we
find that it is considerably faster than any of the Bernoulli methods, even
for very small $k$, the ratio increasing with increasing $k$ and decreasing
$F$.
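The following Python sketch illustrates the method for a real even character and even $k$, where everything is real: the Gauss sum is $\sqrt{F}$, $(-2i\pi)^{k}=(-1)^{k/2}(2\pi)^{k}$, and the denominator is $1$, so the exact value is recovered by rounding. Plain floating point and a fixed prime limit replace the careful accuracy management described above; this is an illustration, not the timed implementation.

```python
from math import pi, factorial, isqrt

def primes_upto(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [p for p in range(2, n + 1) if sieve[p]]

def L_neg_quadratic(chi, F, k, plim=10**4):
    """L(chi,1-k) for a real even primitive chi of conductor F and even k,
    via Proposition 3.1 (Gauss sum sqrt(F), everything real)."""
    P = 1.0                                  # Euler product, P -> 1/L(chi,k)
    for p in primes_upto(plim):
        P -= P * chi(p) / p**k               # P <- P - P * chi(p)/p^k
    return (2 * factorial(k - 1) * F**k * (1 / P)
            / ((-1)**(k // 2) * (2 * pi)**k * F**0.5))

chi5 = lambda n: [0, 1, -1, -1, 1][n % 5]
val = L_neg_quadratic(chi5, 5, 4)
print(round(val))  # -> 2, the exact value L(chi_5,-3) = 2
```

Note how the truncated Euler product converges extremely fast once $k\geq 4$; the small-$k$ cases are exactly where the tables below show the method struggling.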
Here are the times obtained using this method. The reader will notice that the
times for very small $k$ are larger than for moderate $k$ due to the very
large number of Euler factors to be computed, the smallest being impossibly
long. We use $*$ to indicate very long times (usually more than $100$
seconds), and on the contrary – to indicate a negligible time, less than $50$
milliseconds.
| $D\diagdown k$ | $2$ | $4$ | $6$ | $8$ | $10$ | $12$ | $14$ | $16$ | $18$ |
|---|---|---|---|---|---|---|---|---|---|
| $10^{6}+1$ | $*$ | $1.03$ | $0.20$ | $0.11$ | $0.09$ | $0.08$ | $0.08$ | $0.07$ | $0.08$ |
| $10^{7}+1$ | $*$ | $14.4$ | $2.36$ | $1.19$ | $0.93$ | $0.81$ | $0.79$ | $0.73$ | $0.77$ |
| $10^{8}+1$ | $*$ | $*$ | $27.9$ | $13.1$ | $9.75$ | $8.32$ | $7.97$ | $7.25$ | $7.54$ |
| $10^{9}+1$ | $*$ | $*$ | $*$ | $*$ | $105.$ | $87.2$ | $81.7$ | $73.5$ | $75.5$ |
| $D\diagdown k$ | $20$ | $40$ | $60$ | $80$ | $100$ | $150$ | $200$ | $250$ | $300$ |
|---|---|---|---|---|---|---|---|---|---|
| $10^{5}+1$ | – | – | – | – | – | $0.06$ | $0.08$ | $0.11$ | $0.15$ |
| $10^{6}+1$ | $0.08$ | $0.12$ | $0.17$ | $0.22$ | $0.29$ | $0.48$ | $0.68$ | $1.01$ | $1.32$ |
| $10^{7}+1$ | $0.77$ | $1.09$ | $1.62$ | $2.01$ | $2.66$ | $4.48$ | $6.29$ | $9.23$ | $12.2$ |
| $10^{8}+1$ | $7.52$ | $10.3$ | $15.1$ | $18.8$ | $24.7$ | $41.3$ | $58.5$ | $85.5$ | $114.$ |
| $10^{9}+1$ | $75.2$ | $99.8$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ |
| $D\diagdown k$ | $400$ | $800$ | $1600$ | $3200$ | $6400$ | $12800$ | $25600$ | $51200$ |
|---|---|---|---|---|---|---|---|---|
| $10^{2}+5$ | – | – | – | – | $0.24$ | $1.25$ | $6.36$ | $31.3$ |
| $10^{3}+1$ | – | – | $0.07$ | $0.34$ | $1.85$ | $9.84$ | $51.2$ | $*$ |
| $10^{4}+1$ | – | $0.11$ | $0.56$ | $2.94$ | $15.9$ | $85.6$ | $*$ | $*$ |
| $10^{5}+1$ | $0.24$ | $0.98$ | $4.96$ | $26.1$ | $*$ | $*$ | $*$ | $*$ |
| $10^{6}+1$ | $2.16$ | $9.49$ | $44.4$ | $*$ | $*$ | $*$ | $*$ | $*$ |
| $10^{7}+1$ | $20.0$ | $82.3$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ |
## 4 Using the Approximate Functional Equation
All the methods that we have seen up to now take time proportional to the
conductor $F$, the main difference between them being the dependence in $k$
and the size of the implicit constant in the time estimates.
We are now going to study a number of methods which take time proportional to
$F^{1/2+\varepsilon}$ for any $\varepsilon>0$. The simplest version of the
_approximate functional equation_ that we will use is as follows:
###### Theorem 4.1
Let $e=0$ or $1$ be such that $\chi(-1)=(-1)^{k}=(-1)^{e}$. For any complex
$s$ we have the following formula, valid for any $A>0$:
$\Gamma\left(\dfrac{s+e}{2}\right)L(\chi,s)=\sum_{n\geq
1}\dfrac{\chi(n)}{n^{s}}\Gamma\left(\dfrac{s+e}{2},\dfrac{A\pi
n^{2}}{F}\right)+\varepsilon(\chi)\sum_{n\geq
1}\dfrac{\overline{\vphantom{T}\chi}(n)}{n^{1-s}}\Gamma\left(\dfrac{1-s+e}{2},\dfrac{\pi
n^{2}}{AF}\right)\;,$
where the _root number_ $\varepsilon(\chi)$ is given by the formula
$\varepsilon(\chi)={\mathfrak{g}}(\chi)/(i^{e}\sqrt{F})$, where
${\mathfrak{g}}(\chi)$ is the Gauss sum attached to $\chi$, and $\Gamma(s,x)$
is the incomplete gamma function
$\Gamma(s,x)=\int_{x}^{\infty}t^{s-1}e^{-t}\,dt$.
Since $\Gamma(s,x)\sim x^{s-1}e^{-x}$, and hence tends to $0$ exponentially
fast as $x\to\infty$, the above formula does lead to a $\tilde{O}(F^{1/2})$ algorithm
for computing $L(\chi,s)$, not necessarily for a negative integer $s$. Note
that this type of formula is available for any type of $L$-function with
functional equation, not only those attached to a Dirichlet character.
The constant $A$ is included as a check on the implementation, since the left-
hand side is independent of $A$, but once checked the optimal choice is $A=1$.
This constant can also be used to compute $\varepsilon(\chi)$ if it is not
known (note that $\varepsilon(\chi)=1$ if $\chi$ is quadratic), but there are
better methods to do this.
Even though this method is in $\tilde{O}(F^{1/2})$, so asymptotically much
faster than the first two methods that we have seen, its main drawback is the
computation time of $\Gamma(s,x)$. Even though quite efficient methods are
known for computing it, our timings have shown that in all ranges of the
conductor $F$ and value of $k$, either the use of the full functional equation
or the use of Eisenstein series of half-integral weight (methods (2) and (5))
are considerably faster, so we will not discuss this method further.
## 5 Using Hecke–Eisenstein Series
### 5.1 The Main Theorem
The main theorem comes from the computation of the Fourier expansion of
Hecke–Eisenstein series in the theory of Hilbert modular forms, and is easily
proved using the methods of [3]:
###### Theorem 5.1
Let $K$ be a real quadratic field of discriminant $D>0$, let $\psi$ be a
primitive character modulo $F$ such that $\psi(-1)=(-1)^{k}$, let $N$ be a
squarefree integer, and assume that $\gcd(F,ND)=1$. If we set
$a_{k,\psi,N}(0)=\prod_{p\mid
N}(1-\psi\chi_{D}(p)p^{k-1})\dfrac{L(\psi,1-k)L(\psi\chi_{D},1-k)}{4}\;,$
and for $n\geq 1$:
$a_{k,\psi,N}(n)=\sum_{\begin{subarray}{c}d\mid n\\\
\gcd(d,N)=1\end{subarray}}\psi\chi_{D}(d)d^{k-1}\sum_{s\in{\mathbb{Z}}}\sigma_{k-1,\psi}\left(\dfrac{(n/d)^{2}D-s^{2}}{4N}\right)\;,$
where $\sigma_{k-1,\psi}(m)=\sum_{d\mid m}\psi(d)d^{k-1}$, then
$\sum_{n\geq 0}a_{k,\psi,N}(n)q^{n}\in M_{2k}(\Gamma_{0}(FN),\psi^{2})\;.$
Note that in the above we set implicitly $\sigma_{k-1,\psi}(m)=0$ if
$m\notin{\mathbb{Z}}_{\geq 1}$.
The restriction $\gcd(F,N)=1$ is not important, since letting $N$ have factors
in common with $F$ would not give more general results. Similarly for the
restriction on $N$ being squarefree. On the other hand, the restriction
$\gcd(F,D)=1$ is more important: similar results exist when $\gcd(F,D)>1$, but
they are considerably more complicated. Since we need them, we will give one
such result below in the case $\gcd(F,D)=4$.
We use this theorem in the following way. First, we must assume for practical
reasons that $k$, $F$, and $N$ are not too large. In that case it is very easy
to compute explicitly a basis for $M_{2k}(\Gamma_{0}(FN),\psi^{2})$. Given
this basis, it is then easy to express any constant term of an element of the
space as a linear combination of $u$ coefficients (not necessarily the first
ones), where $u$ is the dimension of the space. In particular, this gives
$a_{k,\psi,N}(0)$, and hence $L(\psi\chi_{D},1-k)$, as a finite linear
combination of some $a_{k,\psi,N}(n)$ for $n\geq 1$.
Second, since the conductor of $\psi$ must be small, the method is thus
applicable only to compute $L(\chi,1-k)$ for Dirichlet characters $\chi$ which
are “close” to quadratic characters, in other words of the form $\psi\chi_{D}$
with conductor of $\psi$ small. Of course the quantities $L(\psi,1-k)$ are
computed once and for all (using any method, since $F$ and $k$ are small).
Note that the auxiliary integer $N$ is used only to improve the speed of the
formulas, as we will see below, but of course one can always choose $N=1$ if
desired.
### 5.2 The Case $k$ Even
For future use, define
$S_{k}(m,N)=\sum_{s\in{\mathbb{Z}}}\sigma_{k}\left(\dfrac{m-s^{2}}{N}\right)\;,$
where for any arithmetic function $f$ such as $\sigma_{k}$ we set $f(x)=0$ if
$x\not\in{\mathbb{Z}}_{\geq 1}$, i.e., if $x$ is either not integral or non-
positive. Using the theorem with $F=N=1$ we immediately obtain formulas such
as
$L(\chi_{D},-1)=-\dfrac{1}{5}S_{1}(D,4)\;,\qquad L(\chi_{D},-3)=S_{3}(D,4)\;,$
$L(\chi_{D},-5)=-\dfrac{1}{195}\left(\left(24+2^{5}\mbox{$\left(\dfrac{D}{2}\right)$}\right)S_{5}(D,4)+S_{5}(D,1)\right)\;.$
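These identities are easy to check with naive divisor sums. The sketch below verifies the first formula for $D=5$ and $D=8$, and the other two for $D=5$ (for which $\left(\frac{D}{2}\right)=-1$), against the exact values $L(\chi_{5},-1)=-2/5$, $L(\chi_{8},-1)=-1$, $L(\chi_{5},-3)=2$ and $L(\chi_{5},-5)=-134/5$ computed independently via the Bernoulli methods of Section 2:

```python
from fractions import Fraction
from math import isqrt

def sigma(k, m):
    """sigma_k(m) for m >= 1, and 0 otherwise (the convention in the text)."""
    if m < 1:
        return 0
    s, r = 0, isqrt(m)
    for d in range(1, r + 1):
        if m % d == 0:
            s += d**k + (m // d)**k
    if r * r == m:       # do not count the square root divisor twice
        s -= r**k
    return s

def S(k, m, N):
    """S_k(m,N) = sum over s in Z of sigma_k((m - s^2)/N)."""
    return sum(sigma(k, (m - s * s) // N)
               for s in range(-isqrt(m), isqrt(m) + 1) if (m - s * s) % N == 0)

print(Fraction(-S(1, 5, 4), 5),                                  # L(chi_5,-1)
      Fraction(-S(1, 8, 4), 5),                                  # L(chi_8,-1)
      S(3, 5, 4),                                                # L(chi_5,-3)
      Fraction(-((24 - 2**5) * S(5, 5, 4) + S(5, 5, 1)), 195))   # L(chi_5,-5)
```

Note that the sums run over $|s|\leq\sqrt{D}$ only, which is where the $\tilde{O}(F^{1/2})$ complexity comes from.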
To obtain a general formula we recall the following theorem of Siegel:
###### Theorem 5.2
Let $r=\dim(M_{k}(\Gamma))$ and define coefficients $c_{i}^{k}$ by
$\Delta^{-r}E_{12r-k+2}=\sum_{i\geq-r}c_{i}^{k}q^{i}\;,$
where by convention $E_{0}=1$. Then for any $f=\sum_{n\geq 0}a(n)q^{n}\in
M_{k}(\Gamma)$ we have
$\sum_{0\leq n\leq r}a(n)c_{-n}^{k}=0\;,$
and $c_{0}^{k}\neq 0$.
Combined with the main theorem (with $F=N=1$), we obtain the following
corollary:
###### Corollary 5.3
Keep the above notation, let $k\geq 2$ be an even integer, and set
$r=\dim(M_{2k}(\Gamma))=\lfloor k/6\rfloor+1$. If $D>0$ is a fundamental
discriminant we have
$L(\chi_{D},1-k)=\dfrac{4k}{c_{0}^{2k}B_{k}}\sum_{1\leq m\leq
r}S_{k-1}(m^{2}D,4)\sum_{1\leq d\leq
r/m}d^{k-1}\mbox{$\left(\dfrac{D}{d}\right)$}c_{-dm}^{2k}\;.$
For very small values of $k$ it is possible to improve on the speed of the
above general formula by choosing $F=1$ but larger values of $N$ in the
theorem. Without entering into details, on average we can gain a factor of
$3.95$ for $k=2$, of $1.6$ for $k=6$, and of $1.1$ for $k=8$; I have found
essentially no improvement for other values of $k$, including $k=4$.
The advantages of this method are threefold. First, it is by far the fastest
method seen up to now for computing $L(\chi_{D},1-k)$. Second, the universal
coefficients $c_{-n}^{k}$ that we need are easily computed thanks to Siegel’s
theorem. And third, the flexibility of choosing the auxiliary Dirichlet
character $\psi$ in the theorem allows us to compute $L(\chi,1-k)$ for more
general characters $\chi$ than quadratic ones.
The two disadvantages are first that the quantities $S_{k-1}(m^{2}D,4)$ need
to be computed for each $m$ (although some duplication can be avoided), and
second that $m^{2}D$ becomes large when $m$ increases. These two disadvantages
will disappear in the method using Eisenstein series of half-integral weight
(at the expense of losing some of the advantages mentioned above), so we will
not give the timings for this method.
### 5.3 The Case $k$ Odd
Although Hilbert modular forms in two variables exist only for _real_
quadratic fields, thus with discriminant $D>0$, the main theorem can also be
used to compute $L(\chi_{D},1-k)$ for $D<0$, hence $k$ odd, provided we choose
an odd character $\psi$ such as $\psi=\chi_{-3}$ or $\chi_{-4}$. I have not
been able to find useful formulas with $\psi=\chi_{-3}$, so from now on we
assume that $\psi=\chi_{-4}$, so $F=4$. We first introduce some notation.
###### Definition 5.4
We set
$\sigma_{k}^{(1)}(m)=\sum_{d\mid
m}\mbox{$\left(\dfrac{-4}{d}\right)$}d^{k}\;,\quad\sigma_{k}^{(2)}(m)=\sum_{d\mid
m}\mbox{$\left(\dfrac{-4}{m/d}\right)$}d^{k}\;,\text{\quad and}$
$S_{k}^{(j)}(m,N)=\sum_{s\in{\mathbb{Z}}}\sigma_{k}^{(j)}\left(\dfrac{m-s^{2}}{N}\right)\;,$
with the usual understanding that $\sigma_{k}^{(j)}(m)=0$ if
$m\notin{\mathbb{Z}}_{\geq 1}$.
Note that, as for $\sigma_{k}$ itself when $k$ is odd, for $k$ even these
arithmetic functions occur naturally as Fourier coefficients of Eisenstein
series of weight $k+1$ and character $\left(\frac{-4}{.}\right)$. More
precisely, for $k\geq 3$ odd the series $E_{k}(\chi_{-4},1)$ and
$E_{k}(1,\chi_{-4})$ form a basis of the Eisenstein subspace of
$M_{k}(\Gamma_{0}(4),\chi_{-4})$, where
$\displaystyle E_{k}(\chi_{-4},1)(\tau)$
$\displaystyle=\dfrac{L(\chi_{-4},1-k)}{2}+\sum_{n\geq
1}\sigma_{k-1}^{(1)}(n)q^{n}\text{\quad and}$ $\displaystyle
E_{k}(1,\chi_{-4})(\tau)$ $\displaystyle=\sum_{n\geq
1}\sigma_{k-1}^{(2)}(n)q^{n}\;.$
To be able to use the theorem in general, it is necessary to assume the
following:
###### Conjecture 5.5
If $D>1$ is squarefree (not necessarily a discriminant), $F=4$, and $N=1$, the
statement of Theorem 5.1 is still valid verbatim.
This is probably easy to prove, and I have checked it on thousands of
examples. Assuming this conjecture, applying the theorem to $\psi=\chi_{-4}$
and the Hecke operator $T(2)$ it is immediate to prove the following:
###### Corollary 5.6
Let $D<-4$ be any fundamental discriminant. Set
$a_{k,D}(0)=\left(1-2^{k-1}\mbox{$\left(\dfrac{D}{2}\right)$}\right)\dfrac{L(\chi_{-4},1-k)L(\chi_{D},1-k)}{4}\;,\text{\quad
and}$ $a_{k,D}(n)=\sum_{d\mid
n}\mbox{$\left(\dfrac{4D/\delta}{d}\right)$}d^{k-1}S_{k-1}^{(1)}((n/d)^{2}|D/\delta|,1)\;,$
where $\delta=1$ if $D\equiv 1\allowbreak\ ({\rm{mod}}\,\,4)$ and $\delta=4$
if $D\equiv 0\allowbreak\ ({\rm{mod}}\,\,4)$. Then $\sum_{n\geq
0}a_{k,D}(n)q^{n}\in M_{2k}(\Gamma_{0}(2))$.
To use this result, we need an analogue of Siegel’s Theorem 5.2 for
$\Gamma_{0}(2)$, and for this we need to introduce a number of modular forms.
###### Definition 5.7
We set $F_{2}(\tau)=2E_{2}(2\tau)-E_{2}(\tau)$,
$F_{4}(\tau)=(16E_{4}(2\tau)-E_{4}(\tau))/15$, and
$\Delta_{4}(\tau)=(E_{4}(\tau)-E_{4}(2\tau))/240$, where $E_{2}$ and $E_{4}$
are the standard Eisenstein series of weight $2$ and $4$ on the full modular
group.
Note that $F_{2}\in M_{2}(\Gamma_{0}(2))$ and $F_{4}$ and $\Delta_{4}$ are in
$M_{4}(\Gamma_{0}(2))$.
###### Theorem 5.8
Let $k\in 2{\mathbb{Z}}$ be a positive even integer, set $r=\lfloor
k/4\rfloor+2$, $E=F_{2}F_{4}$ if $k\equiv 0\allowbreak\ ({\rm{mod}}\,\,4)$,
$E=F_{4}$ if $k\equiv 2\allowbreak\ ({\rm{mod}}\,\,4)$, and write
$E/\Delta_{4}^{r}=\sum_{i\geq-r}c_{i}^{k}q^{i}$. Then for any $F=\sum_{n\geq
0}a(n)q^{n}\in M_{k}(\Gamma_{0}(2))$ we have
$\sum_{0\leq n\leq r}a(n)c_{-n}^{k}=0\;,$
and in addition $c_{0}^{k}\neq 0$.
Note that since we will use this theorem for $M_{2k}(\Gamma_{0}(2))$ with $k$
odd, we have $2k\equiv 2\allowbreak\ ({\rm{mod}}\,\,4)$, so we will always use
$E=F_{4}$. The analogue of Corollary 5.3 is then as follows:
###### Corollary 5.9
Keep the above notation, let $k\geq 3$ be an odd integer, and set $r=(k+3)/2$.
If $D<-4$ is a fundamental discriminant we have
$L(\chi_{D},1-k)=\dfrac{8}{A}\sum_{1\leq m\leq
r}S^{(1)}_{k-1}(m^{2}|D|/\delta,1)\sum_{1\leq d\leq
r/m}d^{k-1}\mbox{$\left(\dfrac{4D/\delta}{d}\right)$}c_{-dm}^{2k}\;,$
with $A=c_{0}^{2k}(2^{k-1}\mbox{$\left(\frac{D}{2}\right)$}-1)E_{k-1}$, and
where the $E_{k}$ are the _Euler numbers_ ($E_{0}=1$, $E_{2}=-1$, $E_{4}=5$,
$E_{6}=-61$,…).
The advantages/disadvantages mentioned in the case $k$ even are the same here.
## 6 Using Eisenstein Series of Half-Integral Weight
We now come to the most powerful method known to the author for computing
$L(\chi_{D},1-k)$: the use of Eisenstein series of half-integral weight. Once
again, we will see a sharp distinction between $k$ even and $k$ odd. We first
begin by recalling some basic results on $M_{w}(\Gamma_{0}(4))$ (we use the
index $w$ for the weight since it will be used with $w=k+1/2$). Later, we will
see that it is more efficient to use modular forms on subgroups of
$\Gamma_{0}(4)$.
### 6.1 Basic Results on $M_{w}(\Gamma_{0}(4))$
Recall that the basic theta function
$\theta(\tau)=\sum_{s\in{\mathbb{Z}}}q^{s^{2}}=1+2\sum_{s\geq 1}q^{s^{2}}$
satisfies for any $\gamma=\left(\begin{smallmatrix}{a}&{b}\\\
{c}&{d}\end{smallmatrix}\right)\in\Gamma_{0}(4)$ the modularity condition
$\theta(\gamma(\tau))=v_{\theta}(\gamma)(c\tau+d)^{1/2}\theta(\tau)$, where
the _theta-multiplier system_ $v_{\theta}(\gamma)$ is given by
$v_{\theta}(\gamma)=\mbox{$\left(\dfrac{-4}{d}\right)$}^{-1/2}\mbox{$\left(\dfrac{c}{d}\right)$}\;,$
and all square roots are taken with the principal determination. The space
$M_{w}(\Gamma_{0}(4),v_{\theta}^{2w})$ of holomorphic functions behaving
modularly like $\theta^{2w}$ under $\Gamma_{0}(4)$ and holomorphic at the
cusps will be simply denoted $M_{w}(\Gamma_{0}(4))$ since there is no risk of
confusion. Note, however, that if $w$ is an odd integer and in the context of
modular forms of integral weight, $M_{w}(\Gamma_{0}(4))$ is denoted
$M_{w}(\Gamma_{0}(4),\chi_{-4})$.
We recall the following easy and well-known results (note that $F_{2}$ and
$\Delta_{4}$ are not the same functions as those used above):
###### Proposition 6.1
Define
$\displaystyle F_{2}(\tau)$
$\displaystyle=\dfrac{\eta(4\tau)^{8}}{\eta(2\tau)^{4}}=-\dfrac{1}{24}(E_{2}(\tau)-3E_{2}(2\tau)+2E_{2}(4\tau))\;,$
$\displaystyle\Delta_{4}(\tau)$
$\displaystyle=\dfrac{\eta(\tau)^{8}\eta(4\tau)^{8}}{\eta(2\tau)^{8}}=\dfrac{1}{240}(E_{4}(\tau)-17E_{4}(2\tau)+16E_{4}(4\tau))\;.$
1. (1)
We have
$\bigoplus_{w}M_{w}(\Gamma_{0}(4))={\mathbb{C}}[\theta,F_{2}]\text{\quad
and\quad}\bigoplus_{w}S_{w}(\Gamma_{0}(4))=\theta\Delta_{4}{\mathbb{C}}[\theta,F_{2}]\;.$
2. (2)
In particular we have the dimension formulas
$\dim(M_{w}(\Gamma_{0}(4)))=\begin{cases}0&\text{\quad for $w<0$}\\\ 1+\lfloor
w/2\rfloor&\text{\quad for $w\geq 0$\;.}\end{cases}$
$\dim(S_{w}(\Gamma_{0}(4)))=\begin{cases}0&\text{\quad for $w\leq 4$}\\\
\lfloor w/2\rfloor-1&\text{\quad for $w>2$, $w\notin 2{\mathbb{Z}}$}\\\
\lfloor w/2\rfloor-2&\text{\quad for $w>2$, $w\in
2{\mathbb{Z}}$\;.}\end{cases}$
We also recall that when $w\in 1/2+{\mathbb{Z}}$, the Kohnen $+$-space of
$M_{w}(\Gamma_{0}(4))$, denoted simply by $M_{w}^{+}$, is defined to be the
space of forms $F$ having a Fourier expansion $F(\tau)=\sum_{n\geq
0}a(n)q^{n}$ with $a(n)=0$ if $(-1)^{w-1/2}n\not\equiv 0,1\allowbreak\
({\rm{mod}}\,\,4)$. Note that we include Eisenstein series. It is clear that
$M_{1/2}^{+}=M_{1/2}(\Gamma_{0}(4))$ and $M_{3/2}^{+}=\\{0\\}$, so we will
always assume that $w\geq 5/2$. In that case there is a single Eisenstein series
in $M_{w}^{+}$, due to the author, that we will denote by ${\mathcal{H}}_{k}$:
its importance is due to the fact that if we write
${\mathcal{H}}_{k}(\tau)=\sum_{n\geq 0}a_{k}(n)q^{n}$, then if
$D=(-1)^{w-1/2}n$ is a fundamental discriminant we have
$a_{k}(n)=L(\chi_{D},1-(w-1/2))$, so being able to compute efficiently the
Fourier coefficients of ${\mathcal{H}}_{k}$ automatically gives us a fast
method for computing our desired quantities $L(\chi_{D},1-k)$ with $k=w-1/2$.
The remaining part of $M_{w}^{+}$, which we of course denote by $S_{w}^{+}$,
is formed by the cusp forms belonging to $M_{w}^{+}$. One of Kohnen’s main
theorems is that $S_{w}^{+}$ is Hecke-isomorphic to the space of modular forms
of even weight $S_{2w-1}(\Gamma)$. In particular, note the following:
###### Corollary 6.2
For $w\geq 5/2$ half-integral we have
$\dim(M_{w}^{+})=\begin{cases}1+\lfloor w/6\rfloor&\text{\quad if
$6\nmid(w-3/2)$\;,}\\\ \lfloor w/6\rfloor&\text{\quad if
$6\mid(w-3/2)$\;.}\end{cases}$
Notation:
1. (1)
Recall that if $a(n)$ is any arithmetic function (typically $a=\sigma_{k-1}$
or twisted variants), we define $a(x)=0$ if $x\notin{\mathbb{Z}}_{\geq 1}$.
2. (2)
If $F$ is a modular form and $d\in{\mathbb{Z}}_{\geq 1}$, we denote by $F[d]$
the function $F(d\tau)$.
3. (3)
We will denote by $D$ the differential operator $q\,d/dq=(1/(2\pi
i))\,d/d\tau$, so that $D(F)$ denotes its action on a modular form $F$.
### 6.2 The Case $k$ Even using $\Gamma_{0}(4)$
The main idea is to use Rankin–Cohen brackets of known series with $\theta$ to
generate $M_{w}^{+}$: indeed, $\theta$ and its derivatives are _lacunary_ , so
multiplication by them is much faster than ordinary multiplication, at least
in reasonable ranges (otherwise Karatsuba or FFT type methods are faster to
construct _tables_).
First note the following immediate result:
###### Proposition 6.3
The form $\theta E_{2}[4]-6D(\theta)$ is a basis of $M_{5/2}^{+}$ and the form
$\theta E_{4}[4]$ is a basis of $M_{9/2}^{+}$.
In particular, we recover the formulas $L(\chi_{D},-1)=(-1/5)S_{1}(D,4)$ and
$L(\chi_{D},-3)=S_{3}(D,4)$ already obtained using Hecke–Eisenstein series.
As in the case of Hecke–Eisenstein series, we will need to distinguish two
completely different cases: the case $w-1/2$ _even_ , which is considerably
simpler, and the case $w-1/2$ odd, which is more complicated and less
efficient. The reason for this sharp distinction is the following theorem:
###### Theorem 6.4
Assume that $w\geq 9/2$ is such that $k=w-1/2\in 2{\mathbb{Z}}$. The modular
forms
$([\theta,E_{k-2j}[4]]_{j})_{0\leq j\leq\lfloor k/6\rfloor}$
form a basis of $M_{w}^{+}$, where we recall that $[f,g]_{n}$ denotes the
$n$th Rankin–Cohen bracket.
We can now easily achieve our goal. First, we compute the Fourier expansions
of the basis given by the theorem up to the Sturm bound. Then to compute
$L(\chi_{D},1-k)$ with $k=w-1/2$, we do as follows: we compute the Fourier
expansion of ${\mathcal{H}}_{k}$ up to the Sturm bound, and using the basis
coefficients we deduce a linear combination of the form
${\mathcal{H}}_{k}=\sum_{0\leq j\leq\lfloor
k/6\rfloor}c_{j}^{k}[\theta,E_{k-2j}[4]]_{j}\;.$
We can easily compute the coefficients of these brackets:
###### Proposition 6.5
Let $F_{r}=-B_{r}E_{r}/(2r)$ be the Eisenstein series of level $1$ and weight
$r$, normalized so that the coefficient of $q$ is equal to $1$. We
have $[\theta,F_{r}[4]]_{n}=\sum_{m\geq 0}b_{n,r}(m)q^{m}\;,$ with
$b_{n,r}(m)=m^{n}\sum_{s\in{\mathbb{Z}}}P_{n,r}(s^{2}/m)\sigma_{r-1}\left(\dfrac{m-s^{2}}{4}\right)\;,\text{\quad
where}$
$P_{n,r}(X)=\sum_{\ell=0}^{n}(-1)^{\ell}\binom{n-1/2}{\ell}\binom{2n+r-\ell-3/2}{n-\ell}X^{n-\ell}$
are _Gegenbauer polynomials_.
In particular, if we generalize a previous notation and set for any polynomial
$P_{n}$ of degree $n$
$S_{k}(m,N,P_{n})=m^{n}\sum_{s\in{\mathbb{Z}}}P_{n}(s^{2}/m)\sigma_{k}\left(\dfrac{m-s^{2}}{N}\right)\;,$
we have
$L(\chi_{D},1-k)=\sum_{0\leq j\leq\lfloor
k/6\rfloor}c_{j}^{k}S_{k-2j-1}(D,4,P_{j,k-2j})\;.$
The biggest advantage of this formula compared to the one coming from
Hecke–Eisenstein series is that the different $S_{k-2j-1}$ can be computed
simultaneously since they involve factoring the same integers $(D-s^{2})/4$,
and in addition these integers stay small, contrary to the former method where
the integers were of the form $(n^{2}D-s^{2})/4$.
The two disadvantages are first that it is not easy (although possible) to
generalize to general characters $\chi$, and second, more importantly, that
for large $k$ the computation of $c_{j}^{k}$ involves solving a linear system
of size proportional to $k$, so when $k$ is in the thousands, this becomes
prohibitive. It is possible that there is a much faster method to compute them
analogous to Siegel’s theorem which expresses the constant term of a modular
form as a universal (for a given weight) linear combination of higher degree
terms, but I do not know of such a method.
As already mentioned, this gives the fastest method known to the author for
computing $L(\chi_{D},1-k)$, at least when $k$ is not unreasonably large.
### 6.3 The Case $k$ Even using $\Gamma_{0}(4N)$
We can, however, do better by using subgroups of $\Gamma_{0}(4)$, i.e.,
brackets with $E_{k-2j}[4N]$ for $N>1$. Recall that in the case of
Hecke–Eisenstein series this allowed us to give faster formulas only for very
small values of $k$ ($k=2$, $6$ and $8$). On the contrary, we are going to see
that here we can obtain faster formulas for all $k$, only depending on
congruence and divisibility properties of the discriminant $D$.
After considerable experimentation, I have arrived at the following conjecture,
which I have tested on tens of thousands of cases and _proved_ in small
weights. All of these identities can in principle be proved.
###### Conjecture 6.6
For $N=4$, $8$, $12$ and $16$ and any even integer $k\geq 2$ there exist
universal coefficients $c_{j,N}^{k}$ such that for all positive fundamental
discriminants $D$ (which in addition must be congruent to $1$ modulo $8$ when
$N=16$) we have
$\left(1+\mbox{$\left(\dfrac{D}{N/4}\right)$}\right)L(\chi_{D},1-k)=\sum_{0\leq
j\leq\lfloor k/m_{N}\rfloor}c_{j,N}^{k}S_{k-2j-1}(D,N,P_{j,k-2j})\;,$
with $m_{N}=6$, $4$, $3$, and $4$ respectively and the same polynomials $P$ as
above.
By what we said above this conjecture is proved for $N=4$ (with
$c_{j,4}^{k}=2c_{j}^{k}$), and should be easy to prove using the finite-
dimensionality of the corresponding modular form spaces together with the
Sturm bounds, but I have not done these proofs. It is also easy to prove for
small values of $k$.
It is clear that if we can choose a larger value of $N$ than $N=4$ (i.e., when
$1+\mbox{$\left(\frac{D}{N/4}\right)$}\neq 0$) the number of terms involved in
$S_{k-2j-1}$ will be smaller. Computing that number leads to the following
algorithm:
If $3\mid D$ use $N=12$, otherwise if $D\equiv 1\allowbreak\
({\rm{mod}}\,\,8)$ use $N=16$, otherwise if $4\mid D$ use $N=8$, otherwise if
$D\equiv 1\allowbreak\ ({\rm{mod}}\,\,3)$ use $N=12$, otherwise use $N=4$.
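This selection rule transcribes directly into code; a minimal Python sketch (the function name is mine):

```python
def choose_N(D):
    # N-selection for a positive fundamental discriminant D (k even),
    # following the algorithm stated in the text.
    if D % 3 == 0:
        return 12
    if D % 8 == 1:
        return 16
    if D % 4 == 0:
        return 8
    if D % 3 == 1:
        return 12
    return 4
```

For example, `choose_N(17)` returns `16` since $17\equiv 1\pmod 8$, while `choose_N(5)` falls through to the default `4`.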
Note, however, that the size of the linear system which needs to be solved to
find the coefficients $c_{j,N}^{k}$ is larger when $N>4$, so one must balance
the time to compute these coefficients with the size of $D$: for very large
$D$ it may be worthwhile, but for moderately large $D$ it may be better to
always choose $N=4$ (see the second table below).
As before, we give tables of timings using these improvements. Note that they
are only an indication, since congruences modulo $16$ and $3$ may improve the
times:
$D\diagdown k$ | $2$ | $4$ | $6$ | $8$ | $10$ | $12$ | $14$ | $16$ | $18$
$10^{10}+1$ | $0.07$ | $0.07$ | $0.07$ | $0.08$ | $0.08$ | $0.09$ | $0.09$ | $0.11$ | $0.11$
$10^{11}+9$ | $0.30$ | $0.32$ | $0.33$ | $0.35$ | $0.36$ | $0.39$ | $0.40$ | $0.44$ | $0.44$
$10^{12}+1$ | $2.25$ | $2.31$ | $2.32$ | $2.41$ | $2.42$ | $2.53$ | $2.55$ | $2.67$ | $2.69$
$10^{13}+1$ | $10.3$ | $10.5$ | $10.5$ | $10.8$ | $10.9$ | $11.2$ | $11.3$ | $11.7$ | $11.8$
$10^{14}+1$ | $54.0$ | $54.7$ | $55.0$ | $55.8$ | $56.2$ | $57.3$ | $57.6$ | $59.0$ | $59.3$
In the next table, we use the improvements for larger $N$ only when $D$ is
sufficiently large, and the corresponding timings have a ∗; all the others are
obtained only with $N=4$:
$D\diagdown k$ | $20$ | $40$ | $60$ | $80$ | $100$ | $150$ | $200$ | $250$ | $300$ | $350$
$10^{6}+1$ | – | – | – | – | – | – | $0.07$ | $0.14$ | $0.29$ | $0.51$
$10^{7}+1$ | – | – | – | – | – | $0.08$ | $0.16$ | $0.30$ | $0.55$ | $0.88$
$10^{8}+1$ | – | – | – | $0.06^{*}$ | $0.10^{*}$ | $0.25$ | $0.50$ | $0.89$ | $1.50$ | $2.28$
$10^{9}+1$ | – | $0.06^{*}$ | $0.12^{*}$ | $0.19^{*}$ | $0.30^{*}$ | $0.74^{*}$ | $1.59^{*}$ | $2.85^{*}$ | $4.96^{*}$ | $7.56$
$10^{10}+1$ | $0.12^{*}$ | $0.23^{*}$ | $0.39^{*}$ | $0.64^{*}$ | $1.00^{*}$ | $2.48^{*}$ | $5.20^{*}$ | $9.08^{*}$ | $15.0^{*}$ | $22.4^{*}$
$10^{11}+9$ | $0.49^{*}$ | $0.86^{*}$ | $1.47^{*}$ | $2.37^{*}$ | $3.67^{*}$ | $9.09^{*}$ | $18.9^{*}$ | $32.8^{*}$ | $52.8^{*}$ | $77.2^{*}$
$10^{12}+1$ | $2.84^{*}$ | $4.04^{*}$ | $6.01^{*}$ | $9.04^{*}$ | $13.4^{*}$ | $31.8^{*}$ | $64.6^{*}$ | $*$ | $*$ | $*$
$10^{13}+1$ | $12.3^{*}$ | $16.5^{*}$ | $23.4^{*}$ | $34.2^{*}$ | $49.9^{*}$ | $*$ | $*$ | $*$ | $*$ | $*$
$10^{14}+1$ | $60.8^{*}$ | $74.8^{*}$ | $98.8^{*}$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$
For larger values of $k$ the time to compute the coefficients dominates, so we
first give a table of these timings:
$N\diagdown k$ | $100$ | $200$ | $300$ | $400$ | $500$ | $600$ | $700$ | $800$ | $900$ | $1000$
$4$ | – | $0.04$ | $0.20$ | $0.69$ | $1.95$ | $4.04$ | $7.54$ | $13.3$ | $22.4$ | $34.4$
$8$ | – | $0.17$ | $0.87$ | $2.77$ | $6.95$ | $14.7$ | $28.4$ | $49.2$ | $83.5$ | $*$
$12$ | – | $0.32$ | $1.90$ | $5.77$ | $14.5$ | $32.0$ | $61.6$ | $*$ | $*$ | $*$
$16$ | – | $0.20$ | $1.13$ | $3.64$ | $9.59$ | $20.5$ | $31.4$ | $53.4$ | $89.8$ | $*$
As already mentioned, these timings would become much smaller if we had a
method analogous to Siegel’s theorem to compute them.
$D\diagdown k$ | $400$ | $500$ | $600$ | $700$ | $800$ | $900$ | $1000$
$10^{5}+1$ | $0.73$ | $2.03$ | $4.16$ | $7.72$ | $13.6$ | $22.7$ | $34.8$
$10^{6}+1$ | $0.87$ | $2.26$ | $4.53$ | $8.27$ | $14.3$ | $23.8$ | $36.2$
$10^{7}+1$ | $1.39$ | $3.21$ | $6.91$ | $10.4$ | $17.4$ | $27.8$ | $41.6$
$10^{8}+1$ | $3.33$ | $6.61$ | $11.4$ | $18.3$ | $28.4$ | $42.7$ | $61.7$
$10^{9}+1$ | $10.7$ | $19.6$ | $32.0$ | $48.1$ | $70.1$ | $99.3$ | $*$
$10^{10}+1$ | $31.9^{*}$ | $58.9^{*}$ | $98.9^{*}$ | $*$ | $*$ | $*$ | $*$
$10^{11}+9$ | $108.^{*}$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$
### 6.4 The Case $k$ Odd using $\Gamma_{0}(4N)$
In this case, the Kohnen $+$-space, to which ${\mathcal{H}}_{k}$ belongs, is
the space of modular forms $\sum_{n\geq 0}a(n)q^{n}$ such that $a(n)=0$ if
$n\equiv 1$ or $2$ modulo $4$. Thus, we cannot hope to _directly_ obtain
elements in this space using brackets with $\theta$. What we can do is the
following: as above, for $\ell\geq 1$ odd consider the two Eisenstein series
$\displaystyle E_{\ell}^{(1)}:=E_{\ell}(\chi_{-4},1)(\tau)$
$\displaystyle=\dfrac{L(\chi_{-4},1-\ell)}{2}+\sum_{n\geq
1}\sigma_{\ell-1}^{(1)}(n)q^{n}\text{\quad and}$ $\displaystyle
E_{\ell}^{(2)}:=E_{\ell}(1,\chi_{-4})(\tau)$ $\displaystyle=\sum_{n\geq
1}\sigma_{\ell-1}^{(2)}(n)q^{n}\;,$
which belong to $M_{\ell}(\Gamma_{0}(4))$ (using our notation, otherwise we
should write $M_{\ell}(\Gamma_{0}(4),\chi_{-4})$). It is clear that for $u=1$
and $2$ the $j$-th brackets $[\theta,E_{k-2j}^{(u)}]_{j}$ belong to
$M_{k+1/2}(\Gamma_{0}(4))$, and it should be easy to prove that they generate
this space (I have extensively tested this, and if it were not the case the
implementation would detect it). We can therefore express any modular form, in
particular ${\mathcal{H}}_{k}$, as a linear combination of these brackets, and
therefore again obtain explicit formulas for $L(\chi_{D},1-k)$.
However, we can immediately do considerably better in two different ways.
First, by Shimura theory we know that $T(4){\mathcal{H}}_{k}$ still belongs to
$M_{k+1/2}(\Gamma_{0}(4))$, and by definition it is equal to $\sum_{n\geq
0}H_{k}(4n)q^{n}$. Expressing it as a linear combination of the above brackets
again gives formulas for $L(\chi_{D},1-k)$, but where the sums involve
$|D|/4$ instead of $|D|$, and are hence much faster (and of course applicable only for
$D\equiv 0\allowbreak\ ({\rm{mod}}\,\,4)$). Note that this trick is _not_
applicable in the case of even $k$ because $T(4){\mathcal{H}}_{k}$ is not
anymore in the Kohnen $+$-space, so we would lose all the advantages of having
a space of small dimension.
The second way in which we can do better is to consider brackets of $\theta$
with $E_{\ell}^{(u)}[N]$ (where we replace $q^{n}$ by $q^{Nn}$) for suitable
values of $N$: note that these modular forms belong to
$M_{k+1/2}(\Gamma_{0}(4N))$. It is then necessary to apply a Hecke-type
operator to reduce the dimension of the spaces that we consider. More
precisely, if we only look at coefficients $a(|D|)$ with given
$\left(\frac{D}{2}\right)$, we see experimentally that there is a linear
relation between ${\mathcal{H}}_{k}$ and the above brackets. This leads to the
following analogue for $k$ odd of Conjecture 6.6, where generalizing the
notation $S_{k}^{(j)}(m,N)$ used above for $j=1$ and $2$ we also use
$S_{k}^{(j)}(m,N,P_{n})=m^{n}\sum_{s\in{\mathbb{Z}}}P_{n}(s^{2}/m)\sigma_{k}^{(j)}\left(\dfrac{m-s^{2}}{N}\right)\;,$
where $P_{n}$ is a polynomial of degree $n$.
###### Conjecture 6.7
For $N=1$, $2$, $3$, $5$, $6$, and $7$, any odd integer $k\geq 3$, and
$e\in\\{-1,0,1\\}$, there exist universal coefficients $c_{j,N,e}^{k}$ such
that for all negative fundamental discriminants $D$ such that
$\mbox{$\left(\frac{D}{2}\right)$}=e$ we have
$\left(1+\mbox{$\left(\dfrac{|D|}{N_{2}}\right)$}\right)L(\chi_{D},1-k)=\sum_{0\leq
j\leq
m(k,N,e)}c_{j,N,e}^{k}S_{k-j_{1}-1}^{(1+j_{0})}(|D|/\delta,N,P_{j_{1},k-j_{1}})\;,$
where $N_{2}=N/2$ if $N$ is even and $N_{2}=N$ if $N$ is odd, $\delta=4$ if
$4\mid D$ and $\delta=1$ otherwise, we write $j=2j_{1}+j_{0}$ with
$j_{0}\in\\{0,1\\}$, upper bounds for $m(k,N,e)$ will be given below, and
where we must assume $e\neq-1$ if $N=6$ and on the contrary $e=-1$ if $N=7$.
Upper bounds for $m(k,N,e)$ are given for $e=-1$, $0$, and $1$ as follows:
$((k-1)/4,(k-1)/3,(k-3)/4)$ for $N=1$, $((k-1)/4,(k-1)/2,(k-3)/4)$ for $N=2$,
$((k-1)/2,(2k-1)/3,(k-1)/2)$ for $N=3$, $((3k-2)/4,k-1,(3k-5)/4)$ for $N=5$,
$(*,k-1,k-1)$ for $N=6$, and $(k-1,*,*)$ for $N=7$, where $*$ denotes
impossible cases.
For concreteness, we give the special case $k=3$, $e=1$: if $D\equiv
1\allowbreak\ ({\rm{mod}}\,\,8)$ is a negative fundamental discriminant, we
have
$\displaystyle L(\chi_{D},-2)$
$\displaystyle=\dfrac{1}{35}S_{2}^{(1)}(|D|,1)=\dfrac{1}{7}S_{2}^{(1)}(|D|,2)\;,$
$\displaystyle(1-\mbox{$\left(\frac{D}{3}\right)$})L(\chi_{D},-2)$
$\displaystyle=-\dfrac{2}{63}(S_{2}^{(1)}(|D|,3)+14S_{2}^{(2)}(|D|,3))\;,$
$\displaystyle(1+\mbox{$\left(\frac{D}{5}\right)$})L(\chi_{D},-2)$
$\displaystyle=-\dfrac{2}{3}(S_{2}^{(1)}(|D|,5)+4S_{2}^{(2)}(|D|,5))\;,$
$\displaystyle(1-\mbox{$\left(\frac{D}{3}\right)$})L(\chi_{D},-2)$
$\displaystyle=\dfrac{1}{14}(-52S_{2}^{(1)}(|D|,6)+5S_{0}^{(1)}(|D|,6,1-3x))\;.$
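These identities are easy to check numerically. The Python sketch below (names mine) uses the convention $\sigma_{k}^{(1)}(n)=\sum_{d\mid n}\chi_{-4}(d)\,d^{k}$ for the twisted divisor sum coming from $E_{k+1}^{(1)}$; this is the convention that reproduces the displayed identities, and the reference values $L(\chi_{-7},-2)=-16/7$ and $L(\chi_{-15},-2)=-16$ were computed separately from the generalized Bernoulli numbers $B_{3,\chi_{D}}$.

```python
from fractions import Fraction
from math import isqrt

def chi_m4(n):
    """Kronecker character chi_{-4}(n)."""
    return 0 if n % 2 == 0 else (1 if n % 4 == 1 else -1)

def sigma_1(k, n):
    """sigma_k^{(1)}(n) = sum_{d|n} chi_{-4}(d) d^k (assumed convention)."""
    return sum(chi_m4(d) * d**k for d in range(1, n + 1) if n % d == 0)

def S_1(k, m, N):
    """S_k^{(1)}(m, N) with the trivial polynomial P_0 = 1."""
    return sum(sigma_1(k, (m - s*s) // N)
               for s in range(-isqrt(m - 1), isqrt(m - 1) + 1)
               if (m - s*s) % N == 0)

# D = -7 (= 1 mod 8): both expressions give L(chi_{-7}, -2) = -16/7.
print(Fraction(S_1(2, 7, 1), 35), Fraction(S_1(2, 7, 2), 7))
# -> -16/7 -16/7
```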
Similarly to the case of even $k$, computing the number of terms involved in
the sums leads to the following algorithm:
1. (1)
When $D\equiv 0\allowbreak\ ({\rm{mod}}\,\,4)$: if $3\mid D$ use $N=6$,
otherwise if $5\mid D$ use $N=5$, otherwise if $D\equiv 2\allowbreak\
({\rm{mod}}\,\,3)$ use $N=6$, otherwise if $D\equiv\pm 1\allowbreak\
({\rm{mod}}\,\,5)$ use $N=5$, otherwise use $N=2$.
2. (2)
When $D\equiv 1\allowbreak\ ({\rm{mod}}\,\,4)$: if $7\mid D$ and $D\equiv
5\allowbreak\ ({\rm{mod}}\,\,8)$ use $N=7$, otherwise if $3\mid D$ and
$D\equiv 1\allowbreak\ ({\rm{mod}}\,\,8)$ use $N=6$, otherwise if $5\mid D$
use $N=5$, otherwise if $D\equiv 5\allowbreak\ ({\rm{mod}}\,\,8)$ and $D\equiv
3,4,6\allowbreak\ ({\rm{mod}}\,\,7)$ use $N=7$, otherwise if $D\equiv
2\allowbreak\ ({\rm{mod}}\,\,3)$ and $D\equiv 1\allowbreak\ ({\rm{mod}}\,\,8)$
use $N=6$, otherwise if $3\mid D$ (hence $D\equiv 5\allowbreak\
({\rm{mod}}\,\,8)$) use $N=3$, otherwise if $D\equiv\pm 1\allowbreak\
({\rm{mod}}\,\,5)$ use $N=5$, otherwise use $N=2$.
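The two-case rule above transcribes directly into code; a minimal Python sketch (the function name is mine), relying on the fact that Python's `%` operator already returns the nonnegative residue for negative $D$:

```python
def choose_N_odd(D):
    # N-selection for a negative fundamental discriminant D (k odd),
    # transcribing the two-case algorithm stated in the text.
    if D % 4 == 0:
        if D % 3 == 0: return 6
        if D % 5 == 0: return 5
        if D % 3 == 2: return 6
        if D % 5 in (1, 4): return 5   # D = +-1 (mod 5)
        return 2
    else:  # D = 1 (mod 4)
        if D % 7 == 0 and D % 8 == 5: return 7
        if D % 3 == 0 and D % 8 == 1: return 6
        if D % 5 == 0: return 5
        if D % 8 == 5 and D % 7 in (3, 4, 6): return 7
        if D % 3 == 2 and D % 8 == 1: return 6
        if D % 3 == 0: return 3   # here necessarily D = 5 (mod 8)
        if D % 5 in (1, 4): return 5
        return 2
```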
As in the case of $k$ even, care must be taken in choosing $N>1$ since the
size of the linear system to be solved in order to compute the universal
coefficients $c_{j,N,e}^{k}$ is larger, so the above algorithm is valid only
if this time is negligible.
We thus give a table of timings using this algorithm. Note that $-10^{j}-3$ is
usually (but not always) slower than $-10^{j}-4$ since in the latter case the
sums involve $|D|/4$ instead of $|D|$, and that a lot depends on
divisibilities by $3$, $5$, and $7$, so the tables are only an indication:
$D\diagdown k$ | $1$ | $3$ | $5$ | $7$ | $9$ | $11$ | $13$ | $15$ | $17$ | $19$
$-10^{10}-4$ | $0.05$ | $0.06$ | $0.06$ | $0.07$ | $0.08$ | $0.09$ | $0.10$ | $0.10$ | $0.12$ | $0.14$
$-10^{10}-3$ | $0.05$ | $0.05$ | $0.06$ | $0.06$ | $0.07$ | $0.07$ | $0.08$ | $0.09$ | $0.10$ | $0.11$
$-10^{11}-4$ | $0.25$ | $0.27$ | $0.28$ | $0.31$ | $0.33$ | $0.36$ | $0.40$ | $0.44$ | $0.47$ | $0.52$
$-10^{11}-3$ | $0.50$ | $0.53$ | $0.56$ | $0.60$ | $0.64$ | $0.68$ | $0.73$ | $0.79$ | $0.85$ | $0.93$
$-10^{12}-4$ | $1.36$ | $1.41$ | $1.47$ | $1.55$ | $1.64$ | $1.74$ | $1.86$ | $1.99$ | $2.12$ | $2.28$
$-10^{12}-3$ | $2.21$ | $2.30$ | $2.40$ | $2.53$ | $2.67$ | $2.82$ | $3.00$ | $3.20$ | $3.40$ | $3.64$
$-10^{13}-4$ | $6.86$ | $7.06$ | $7.28$ | $7.54$ | $7.84$ | $8.18$ | $8.55$ | $8.98$ | $9.44$ | $9.89$
$-10^{13}-3$ | $35.4$ | $35.7$ | $35.9$ | $36.0$ | $36.6$ | $36.8$ | $37.0$ | $37.1$ | $37.8$ | $38.0$
$-10^{14}-4$ | $34.0$ | $34.8$ | $35.5$ | $36.3$ | $37.3$ | $38.4$ | $39.7$ | $41.1$ | $42.6$ | $44.3$
Note that we have included the case $k=1$ which will be discussed below.
As in the case of even $k$, for larger values of $k$ the time to compute the
coefficients dominates, so we first give a table of these timings:
$(\mbox{$\left(\frac{D}{2}\right)$},N)\diagdown k$ | $81$ | $101$ | $151$ | $201$ | $251$ | $301$ | $351$ | $401$ | $451$ | $501$
$(1,1)$ | – | – | $0.07$ | $0.20$ | $0.53$ | $1.22$ | $2.28$ | $3.76$ | $6.15$ | $9.37$
$(1,2)$ | – | – | $0.16$ | $0.48$ | $1.23$ | $3.00$ | $5.50$ | $8.83$ | $14.3$ | $21.3$
$(1,3)$ | – | $0.10$ | $0.58$ | $1.73$ | $4.39$ | $9.31$ | $17.5$ | $30.1$ | $50.4$ | $80.6$
$(1,5)$ | $0.21$ | $0.57$ | $2.57$ | $7.43$ | $18.4$ | $37.3$ | $70.8$ | $*$ | $*$ | $*$
$(1,6)$ | – | $0.07$ | $0.38$ | $1.44$ | $3.90$ | $10.2$ | $18.7$ | $39.4$ | $71.7$ | $*$
$(-1,1)$ | – | – | $0.07$ | $0.22$ | $0.54$ | $1.27$ | $2.30$ | $3.90$ | $6.33$ | $9.70$
$(-1,2)$ | – | – | $0.16$ | $0.51$ | $1.23$ | $3.11$ | $5.51$ | $9.15$ | $14.4$ | $22.0$
$(-1,3)$ | – | $0.11$ | $0.62$ | $1.82$ | $4.59$ | $9.69$ | $18.1$ | $31.4$ | $52.5$ | $64.1$
$(-1,5)$ | $0.13$ | $0.34$ | $1.53$ | $4.95$ | $10.1$ | $20.4$ | $38.5$ | $67.2$ | $110.$ | $*$
$(-1,7)$ | $0.28$ | $0.61$ | $2.73$ | $8.15$ | $20.4$ | $41.6$ | $77.4$ | $*$ | $*$ | $*$
$(0,1)$ | – | – | $0.12$ | $0.39$ | $1.05$ | $1.91$ | $3.34$ | $5.80$ | $9.33$ | $14.1$
$(0,2)$ | – | $0.07$ | $0.40$ | $1.15$ | $2.67$ | $5.76$ | $10.6$ | $19.6$ | $29.7$ | $52.8$
$(0,3)$ | $0.08$ | $0.18$ | $0.95$ | $2.82$ | $6.80$ | $14.6$ | $27.2$ | $50.6$ | $77.1$ | $*$
$(0,5)$ | $0.25$ | $0.53$ | $2.28$ | $6.78$ | $16.0$ | $33.2$ | $60.7$ | $105.$ | $*$ | $*$
$(0,6)$ | $0.29$ | $0.58$ | $2.64$ | $7.66$ | $19.0$ | $41.3$ | $75.6$ | $110.$ | $*$ | $*$
For future reference, we observe that the times are very roughly
$10^{-10}k^{4}\cdot(1.5,3.6,12,46.8,12.5)$ for $e=1$,
$10^{-10}k^{4}\cdot(1.5,3.6,12,25.4,51)$ for $e=-1$, and
$10^{-10}k^{4}\cdot(2.2,7,18,40,50)$ for $e=0$,
where as usual $e=\mbox{$\left(\frac{D}{2}\right)$}$.
In the next table we use $N=2$ only when $|D|$ is sufficiently large, and the
corresponding timings have a ∗; all the other timings are obtained with $N=1$.
$D\diagdown k$ | $21$ | $41$ | $61$ | $81$ | $101$ | $151$ | $201$ | $251$ | $301$ | $351$ | $401$ | $451$ | $501$
$-10^{6}-20$ | – | – | – | – | – | $0.14$ | $0.43$ | $1.11$ | $1.99$ | $3.46$ | $5.96$ | $9.57$ | $14.4$
$-10^{6}-3$ | – | – | – | – | – | $0.10$ | $0.27$ | $0.63$ | $1.41$ | $2.50$ | $4.19$ | $6.72$ | $10.2$
$-10^{7}-4$ | – | – | – | – | $0.05$ | $0.19$ | $0.51$ | $1.26$ | $2.23$ | $3.82$ | $6.45$ | $10.2$ | $15.2$
$-10^{7}-3$ | – | – | – | – | $0.06$ | $0.17$ | $0.41$ | $0.87$ | $1.78$ | $3.04$ | $4.95$ | $7.69$ | $11.5$
$-10^{8}-20$ | – | – | $0.05$ | $0.08$ | $0.13$ | $0.36$ | $0.85$ | $1.85$ | $3.17$ | $5.18$ | $8.32$ | $12.7$ | $18.5$
$-10^{8}-3$ | – | $0.05$ | $0.08$ | $0.13$ | $0.18$ | $0.44$ | $0.99$ | $1.84$ | $3.29$ | $5.24$ | $8.07$ | $11.9$ | $16.9$
$-10^{9}-20$ | $0.03^{*}$ | $0.07^{*}$ | $0.13^{*}$ | $0.23^{*}$ | $0.37^{*}$ | $0.98$ | $2.07$ | $3.91$ | $6.49$ | $10.0$ | $15.0$ | $21.8$ | $30.1$
$-10^{9}-19$ | $0.08^{*}$ | $0.15^{*}$ | $0.26^{*}$ | $0.43^{*}$ | $0.62$ | $1.44$ | $3.00$ | $5.26$ | $8.60$ | $13.0$ | $19.0$ | $26.3$ | $35.5$
$-10^{10}-4$ | $0.12^{*}$ | $0.23^{*}$ | $0.42^{*}$ | $0.70^{*}$ | $1.10^{*}$ | $2.88^{*}$ | $6.13^{*}$ | $11.2^{*}$ | $18.5$ | $27.5$ | $39.2$ | $54.4$ | $72.2$
$-10^{10}-3$ | $0.37^{*}$ | $0.60^{*}$ | $0.97^{*}$ | $1.52^{*}$ | $2.34^{*}$ | $5.35$ | $10.7$ | $18.2$ | $28.6$ | $41.7$ | $59.5$ | $80.2$ | $105.$
$-10^{11}-4$ | $0.50^{*}$ | $0.88^{*}$ | $1.52^{*}$ | $2.45^{*}$ | $3.80^{*}$ | $9.40^{*}$ | $19.4^{*}$ | $34.1^{*}$ | $55.6^{*}$ | $82.7^{*}$ | $*$ | $*$ | $*$
$-10^{11}-3$ | $1.62^{*}$ | $2.41^{*}$ | $3.72^{*}$ | $5.66^{*}$ | $8.54^{*}$ | $20.6^{*}$ | $42.0^{*}$ | $72.5^{*}$ | $*$ | $*$ | $*$ | $*$ | $*$
$-10^{12}-4$ | $2.30^{*}$ | $3.61^{*}$ | $5.78^{*}$ | $9.00^{*}$ | $13.7^{*}$ | $33.1^{*}$ | $67.2^{*}$ | $117.^{*}$ | $*$ | $*$ | $*$ | $*$ | $*$
$-10^{12}-3$ | $6.65^{*}$ | $9.46^{*}$ | $14.1^{*}$ | $21.1^{*}$ | $31.6^{*}$ | $75.5^{*}$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$
$-10^{13}-4$ | $10.5^{*}$ | $15.1^{*}$ | $22.7^{*}$ | $34.1^{*}$ | $50.7^{*}$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$
$-10^{13}-3$ | $40.3^{*}$ | $49.3^{*}$ | $64.8^{*}$ | $88.3^{*}$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$
$-10^{14}-4$ | $48.7^{*}$ | $64.2^{*}$ | $90.4^{*}$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $*$
## 7 The Case $k=1$
In this brief section, we consider the case $k=1$, i.e., the problem of
computing $L(\chi,0)$ for an odd character $\chi$. Of course the Bernoulli
method as well as the approximate functional equation are still applicable in
general. But in the case $\chi=\chi_{D}$ with $D<0$ there are still methods
coming from modular forms. Note that in that case for $D<-4$ we have
$L(\chi_{D},0)=h(D)$ which can therefore be computed using subexponential
algorithms, but it is still interesting to look at modular-type formulas. Note
that ${\mathcal{H}}_{1}$ is not quite but almost a modular form of weight
$3/2$, so it is not surprising that the method given above also works for
$k=1$.
For instance, we have the following result, where we refer to Definition 5.4
for the definition of $S_{0}^{(1)}$ (note that $S_{0}^{(2)}=S_{0}^{(1)}$):
###### Proposition 7.1
Let $D$ be a negative fundamental discriminant.
1. (1)
Set $e=\mbox{$\left(\frac{D}{2}\right)$}$. We have
$\dfrac{S_{0}^{(1)}(|D|,N)}{L(\chi_{D},0)}=\begin{cases}3(1-e)&\text{\quad
when $N=1$ and $N=2$\;,}\\\
(1-\mbox{$\left(\frac{D}{3}\right)$})(5-e)/2&\text{\quad when $N=3$\;,}\\\
(1+\mbox{$\left(\frac{D}{5}\right)$})(1-e)/2&\text{\quad when $N=5$\;,}\\\
(1-\mbox{$\left(\frac{D}{3}\right)$})(1+e)/2&\text{\quad when $N=6$\;,}\\\
(1-\mbox{$\left(\frac{D}{7}\right)$})&\text{\quad when $N=7$ and
$e=-1$\;.}\end{cases}$
2. (2)
If $4\mid D$, we also have
$\dfrac{S_{0}^{(1)}(|D|/4,N)}{L(\chi_{D},0)}=\begin{cases}3&\text{\quad when
$N=1$\;,}\\\ 1&\text{\quad when $N=2$\;,}\\\
(1-\mbox{$\left(\frac{D}{3}\right)$})/2&\text{\quad when $N=3$ and
$N=6$\;,}\\\ (1+\mbox{$\left(\frac{D}{5}\right)$})/2&\text{\quad when
$N=5$\;.}\end{cases}$
In particular, Conjecture 6.7 is valid for $k=1$ with $m(1,N,e)=0$,
$c_{0,N,e}^{1}=2/(3(1-e))$, $2/(3(1-e))$, $2/(5-e)$, $2/(1-e)$, $2/(1+e)$, and
$1$ when $\delta=1$ for $N=1$, $2$, $3$, $5$, $6$, and $7$ respectively, and
$c_{0,N,0}^{1}=2/3$, $2$, $2$, $2$, and $2$ when $\delta=4$ and $N=1$, $2$,
$3$, $5$, and $6$ respectively.
Since we can efficiently compute $L(\chi_{D},0)$ by using class numbers this
result has no computational advantage, but is simply given to show that the
formulas that we obtained above for $k\geq 3$ odd have analogs for $k=1$.
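The $N=1$ case of Proposition 7.1(1) is nonetheless easy to check numerically, using $\sigma_{0}^{(1)}(n)=\sum_{d\mid n}\chi_{-4}(d)$, the class number values $h(-3)=h(-7)=h(-8)=1$ and $h(-20)=2$, and $L(\chi_{D},0)=2h(D)/w(D)$. A Python sketch (names mine):

```python
from math import isqrt

def chi_m4(n):
    """Kronecker character chi_{-4}(n)."""
    return 0 if n % 2 == 0 else (1 if n % 4 == 1 else -1)

def sigma0_1(n):
    """sigma_0^{(1)}(n) = sum_{d|n} chi_{-4}(d)."""
    return sum(chi_m4(d) for d in range(1, n + 1) if n % d == 0)

def S0_1(m):
    """S_0^{(1)}(m, 1) = sum over s^2 < m of sigma_0^{(1)}(m - s^2)."""
    return sum(sigma0_1(m - s*s) for s in range(-isqrt(m - 1), isqrt(m - 1) + 1))

# Checks of S_0^{(1)}(|D|, 1) = 3(1 - e) L(chi_D, 0), e = (D/2):
print(S0_1(3))   # e = -1, L(chi_{-3},0) = 2h/w = 1/3, so 6 * 1/3 = 2
print(S0_1(7))   # e = +1, factor 3(1 - e) = 0, so 0
print(S0_1(8))   # e = 0,  L = h(-8) = 1, so 3
print(S0_1(20))  # e = 0,  L = h(-20) = 2, so 6
```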
## References
* [1] K. Belabas and H. Cohen, Numerical Algorithms for Number Theory using Pari/GP, Surveys in Math., American Math. Soc., to appear.
* [2] H. Cohen, Sums involving the values at negative integers of $L$-functions of quadratic characters, Math. Ann. 217 (1975), 271–285.
* [3] H. Cohen, A lifting of modular forms in one variable to Hilbert modular forms in two variables, Modular forms of one variable VI, Lecture Notes in Math. 627, Springer, Berlin, 1977, 175–196.
* [4] H. Cohen, Number Theory vol I: Tools and Diophantine Equations, Graduate Texts in Math. 239, Springer-Verlag (2007).
* [5] H. Cohen, Number Theory vol II: Analytic and Modern Tools, Graduate Texts in Math. 240, Springer-Verlag (2007).
* [6] H. Cohen and D. Zagier, Vanishing and Nonvanishing Theta Values, Ann. Math. Quebec 37 (2013), 45–61.
* [7] J. Ellenberg, S. Jain, and A. Venkatesh, Modeling $\lambda$-invariants by $p$-adic random matrices, Comm. pure appl. math. 64 (2011), 1243–1262.
* [8] I. Inam and G. Wiese, Fast computation of half-integral weight modular forms, arXiv:2010.11239.
* [9] I. Inam, Z. Özkaya, E. Tercan, and G. Wiese, On the distribution of coefficients of half-integral weight modular forms and the Ramanujan–Petersson conjecture, arXiv:2010.11240.
* [10] R. Odoni, On Gauss sums $\allowbreak\ ({\rm{mod}}\,\,p^{n})$, $n\geq 2$, Bull. London Math. Soc. 5 (1973), 325–327.
# Space-Time Quantum Metasurfaces
Wilton J. M. Kort-Kamp Los Alamos National Laboratory, Los Alamos, NM 87545,
USA Abul K. Azad Los Alamos National Laboratory, Los Alamos, NM 87545, USA
Diego A. R. Dalvit∗ Los Alamos National Laboratory, Los Alamos, NM 87545, USA
###### Abstract
Metasurfaces are a key photonic platform to manipulate classical light using
sub-wavelength structures with designer optical response. Static metasurfaces
have recently entered the realm of quantum photonics, showing their ability to
tailor nonclassical states of light. We introduce the concept of space-time
quantum metasurfaces for dynamical control of quantum light. We provide
illustrative examples of the impact of spatio-temporally modulated
metasurfaces in quantum photonics, including the creation of frequency-spin-
path hyperentanglement on a single photon and the realization of space-time
asymmetry at the deepest level of the quantum vacuum. Photonic platforms based
on the space-time quantum metasurface concept have the potential to enable
novel functionalities, such as on-demand entanglement generation for quantum
communications, nonreciprocal photon propagation for free-space quantum
isolation, and reconfigurable quantum imaging and sensing.
The generation, manipulation, and detection of nonclassical states of light is
at the heart of quantum photonics. As quantum information can be encoded into
the different degrees of freedom of a single photon, it is highly desirable to
develop photonic platforms that allow one to control them while maintaining
quantum coherence. Metasurfaces Kildishev2013 ; Chen2016 have recently
transitioned from the classical to the quantum domain Solntsev2020 and
enabled enhanced light-matter interactions facilitated by the ultrathin
subwavelength nature of their constituent scatterers. Spin-orbital angular
momentum entanglement of a single photon has been demonstrated Stav2018 using
geometrically tailored metasurfaces that induce spin-orbit coupling of light
via the Pancharatnam-Berry phase Bomzon2002 , and multiphoton interferences
and polarization-state quantum reconstruction has also been achieved in a
single geometric phase metasurface Wang2018 . A metasurface-based
interferometer has been demonstrated for generating and probing entangled
photon states Georgi2019 , opening opportunities for implementing quantum
sensing and metrology protocols using metasurface platforms. Embedding quantum
building blocks, including quantum dots, semiconductor emitters, and
nitrogen-vacancy centers, into arrays of meta-atoms results in enhanced
Purcell factors Vaskin2019 , directional lasing Xie2020 , and circularly-
polarized single-photon emission Kan2020 . Recently, quantum metasurfaces
based on atomic arrays have been proposed Bekenstein2020 .
Most demonstrations of metasurfaces in quantum photonics are based on static
meta-atoms whose optical properties are determined by their material
composition and geometrical design that cannot be changed on demand. A few
realizations of active quantum metasurfaces have been reported, for example,
for tuning spontaneous emission from a Mie-resonant dielectric metasurface
using liquid crystals Bohn2018 . However, a fully tailored response requires
quantum metasurfaces that can continuously alter their scattering properties
simultaneously in space and time. At the classical level, spatio-temporally
modulated metasurfaces Shaltout2019 have been shown to provide this higher
degree of control, both by reconfigurable and fully-dynamic tailoring of the
optical response of meta-atoms using analog and digital modulation schemes
Cardin2020 ; Zhang2018 .
Here, we put forward the concept of space-time quantum metasurfaces (STQMs)
for spatio-temporal control of quantum light. In order to highlight the broad
implications of this concept in different areas of quantum science and
technology, we discuss two instances of how STQMs operate both at the single-
photon and virtual-photon levels. We describe STQM-enabled hyperentanglement
Barreiro2005 manipulation of nonclassical states of light and STQM-induced
photon pair generation (Fig. 1) in a process analogous to the dynamical Casimir
effect Moore1970 .
Results:
We introduce the STQM concept by considering the transmission of a single
photon through a metasurface whose meta-atoms are modulated in space and time.
The metasurface is composed of identical anisotropic scatterers (Fig. 2a)
suitably rotated with respect to each other. The combination of anisotropy and
rotation results in circular cross-polarization conversion and a spin-
dependent geometric phase distribution $\Psi({\bf r})$ akin to spin-orbit
coupling. To minimize photon absorption the metasurface is assumed to be
comprised of low-loss dielectric meta-atoms. The spatio-temporal modulation is
modeled as a perturbation of the electric permittivity, $\epsilon({\bf
r},t)=\epsilon_{um}+\Delta\epsilon\cos(\Omega t-\Phi({\bf r}))$, where
$\epsilon_{um}$ is the unmodulated permittivity, $\Delta\epsilon$ the
modulation amplitude, $\Omega$ the modulation frequency, and $\Phi({\bf r})$
is a “synthetic” phase. Such type of modulation has been recently demonstrated
using a heterodyne laser-induced dynamical grating on an amorphous Si
metasurface via the nonlinear Kerr effect Guo2019 , setting a traveling-wave
permittivity perturbation with $\Phi({\bf r})=\bm{\beta}\cdot{\bf r}$
($\bm{\beta}$ is an in-plane momentum “kick”). Note that the geometric phase
is fixed by the design of the metasurface while the synthetic phase is
reconfigurable on-demand.
Figure 1: Conceptual representation of a space-time quantum metasurface. A
spatio-temporal spinning modulation of graphene nanostructures generates
entangled vortex photon pairs out of the quantum vacuum.
STQMs for on-demand entanglement manipulation: The geometry of the dielectric
nanoresonator can be tailored so that it has maximal cross-polarized
transmission (Fig. 2b) and at the same time so that its Mie electric and
magnetic dipolar resonances dominate the optical response of the metasurface.
One can then describe the interaction of each Mie resonator with light using
the effective Hamiltonian $H_{int}=-{\bf p}\cdot{\bf E}-{\bf m}\cdot{\bf B}$
Kuznetsov2016 ; Novotny2007 , where ${\bf p}$ and ${\bf m}$ are the electric
and magnetic dipole operators and ${\bf E}$ and ${\bf B}$ are the local
quantized electromagnetic fields. Higher-order Mie resonances can be neglected
because the transmissivity and reflectivity of the metasurface are well-
described by those of an array of electric and magnetic dipoles corresponding
to the two lowest Mie multipoles. It is convenient to trace over the
nanostructure’s degrees of freedom to express the Hamiltonian only in terms of
photonic modes by relating dipoles and fields via effective electric
$\bm{\alpha}_{E}$ and magnetic $\bm{\alpha}_{M}$ polarizability tensors (see
Supplementary Information for the derivation of the polarizabilities). We show
in Fig. 2c the relevant polarizability components for describing the coupling
with the normally-incident photon and in Fig. 2d the electric field
distributions at the Mie resonance frequencies. The unmodulated Hamiltonian
describing cross-polarized transmission has an effective coupling strength
$\alpha^{(cr)}_{um}(\omega)$ that is a simple combination of the electric and
magnetic polarizabilities (see Methods).
Upon spatio-temporal modulation, the effective polarizabilities adiabatically
follow the harmonic driving because the response times of semiconductors
($<100$ fs for the nonlinear Kerr response in amorphous Si) are much shorter
than the period of THz modulations achievable with all-optical schemes. Hence,
$\alpha^{(cr)}(\omega;{\bf
r},t)=\alpha^{(cr)}_{um}(\omega)+\Delta\alpha^{(cr)}(\omega)\cos(\Omega
t-\Phi({\bf r})).$ (1)
We calculate the polarizability modulation amplitude
$\Delta\alpha^{(cr)}(\omega)$ from the dependency of transmissivity on
permittivity modulation (Fig. 2e). For a $1\%$ permittivity modulation depth
the resulting polarizability change is approximately $20\%$, the increase
originating from the strong dispersion of the unmodulated polarizability close
to the input frequency. The STQM Hamiltonian $H_{1}(t)$ is the sum of the
unmodulated part plus a modulation contribution that annihilates the input
photon and creates a new one with Doppler-shifted frequency and synthetic
phase, in addition to flipping its spin components and adding geometric phases
in the same way as the unmodulated part (see Methods). In this work we
restrict ourselves to unitary evolution, as photons do not suffer from severe decoherence
problems and absorption is negligible in high-index dielectrics Stav2018 .
Figure 2: Effective polarizabilities of all-dielectric space-time quantum
metasurfaces. (a) Anisotropic amorphous Si Mie nanocross meta-atom with
optimized geometrical parameters for maximal cross-polarization transmission
for a normally-incident $\lambda_{in}=1550$ nm input photon. Parameters are
$L_{1}=950$ nm, $L_{2}=435$ nm, $h=300$ nm, $w=200$ nm, and square unit cell
with period $P=1200$ nm. (b) Co- and cross-polarized reflectivity and
transmissivity for the full metasurface (solid) and electric/magnetic dipole
array (dashed). (c) Real parts of the electric and magnetic polarizabilities
normalized by the meta-atom volume. Solid line is the effective unmodulated
coupling strength for cross-polarized transmission: $\alpha^{(cr)}_{um}\approx
0.6\mu{\rm m}^{3}$ at the input frequency. (d) Electric field distribution for
the two electric and the two magnetic Mie resonances. (e) Polarizability
modulation amplitudes for permittivity modulation depth
$\Delta\epsilon/\epsilon_{um}=1\%$. Solid line is the polarizability
modulation amplitude for cross-polarized transmission:
$\Delta\alpha^{(cr)}/\alpha^{(cr)}_{um}\approx 0.2$ at the input frequency.
When the geometric phase is a linear function of the meta-atoms’ positions it
generates spin-momentum correlations, while a linear synthetic phase creates
momentum-frequency correlations. The two correlations are intertwined through
momentum and the photon evolves into a state that is hyperentangled in spin,
path, and frequency
$\\!\\!|\psi(t)\rangle\\!=\\!\sum_{p,q}[c^{(R)}_{p,q}(t)|\omega_{p};{\bf
k}_{p,q};\\!R\rangle+c^{(L)}_{p,q}(t)|\omega_{p};{\bf k}_{p,-q};\\!L\rangle],$
(2)
where $p$ are integers, $q=0,1$, $R(L)$ denotes right (left) circular
polarization, $\omega_{p}=\omega_{in}+p\Omega$ are harmonics of the input
frequency $\omega_{in}$, ${\bf k}_{p,q}={\bf
k}_{in}+p\bm{\beta}+q\bm{\beta}_{g}$ are in-plane momentum harmonics of the
in-plane input wave-vector ${\bf k}_{in}$, and $\bm{\beta}_{g}$ is the
momentum kick induced by the linear geometric phase. We will denote states in
the first term as $(p,q,R)$ and in the second term as $(p,-q,L)$, highlighting
that the geometric-phase-induced momentum kicks for right- and left-polarized
photons have opposite directions. To calculate the probability amplitudes we
consider a normally-incident single-photon pulse and assume modulation
frequencies and in-plane momentum kicks much smaller than the input frequency
and input wave-vector. Since the dielectric metasurface enables large
polarizability modulation amplitudes for modest permittivity variations, it is
possible for the input photon to transition to multiple frequency/momentum
harmonics during its transit within the metasurface. For input linear
polarization, the transition probabilities to states $(p,q,R)$ and $(p,-q,L)$
are identical and are given by
$|c^{(R/L)}_{p,q}(t)|^{2}=\frac{1}{2}\cos^{2}\Big{(}\frac{\omega_{in}t\alpha^{(cr)}_{um}}{2hP^{2}}\Big{)}J^{2}_{p}\Big{(}\frac{\omega_{in}t\Delta\alpha^{(cr)}}{2hP^{2}}\Big{)}$
(3)
when $p$ and $q$ have the same parity; for opposite parity the cosine is
replaced by a sine. $J_{p}(x)$ is the Bessel function and probabilities for
$\pm p$ are the same (see Supplementary Information for details on the state
evolution).
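Equation (3) conserves probability: for each $p$, the $q=0$ and $q=1$ channels carry complementary $\cos^{2}$ and $\sin^{2}$ factors, and $\sum_{p}J_{p}^{2}(x)=1$. A short Python sketch of this unitarity check; the series-based Bessel routine and the example values of the two dimensionless arguments are illustrative choices, not taken from the paper:

```python
from math import cos, sin, factorial

def bessel_j(p, x, terms=40):
    """Integer-order Bessel J_p via its power series; J_{-p} = (-1)^p J_p."""
    n = abs(p)
    s = sum((-1)**m / (factorial(m) * factorial(m + n)) * (x / 2)**(2*m + n)
            for m in range(terms))
    return s if p >= 0 else (-1)**n * s

def prob(p, q, A, B):
    """|c_{p,q}|^2 of Eq. (3); A and B stand for the dimensionless arguments
    omega_in t alpha_um/(2hP^2) and omega_in t Dalpha/(2hP^2).
    Cosine applies when p and q have the same parity, sine otherwise."""
    trig = cos(A) if (p - q) % 2 == 0 else sin(A)
    return 0.5 * trig**2 * bessel_j(p, B)**2

# Summing over harmonics p, kick index q, and the two circular
# polarizations (a factor 2) must give total probability 1 for any A, B.
A, B = 0.8, 2.0   # arbitrary example values
total = 2 * sum(prob(p, q, A, B) for p in range(-25, 26) for q in (0, 1))
print(round(total, 12))   # -> 1.0
```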
Figure 3: Entanglement manipulation with space-time quantum metasurfaces. (a)
Conversion probability into an output photon in frequency harmonic
$\omega_{in}+p\Omega$ and momentum harmonic $p\bm{\beta}+q\bm{\beta}_{g}$
versus polarizability modulation depth for the STQM of Fig. 2. (b) Density
matrices of input (i) and output photon featuring (ii) spin-path entanglement,
(iii-iv) frequency-path entanglement, and (v-vi) frequency-spin-path
hyperentanglement. Larger modulation depths result in more harmonics involved
in the entangled states (iv) and (vi). (c) Population dynamics of geometric-
phase kicked $q=1$ and unkicked $q=0$ states for in-transit photon at $0.1$
modulation depth, showing Rabi oscillations for the fundamental frequency
harmonic. Inset: Rabi dynamics in higher harmonics. Envelopes (dashed black)
are the populations of $p$-harmonics in the absence of geometric phase.
The probability that the output photon is in a given frequency/momentum
harmonic as a function of the modulation depth is shown in Fig. 3a. At zero
modulation, the output has the same frequency as the input and is
approximately an equal superposition of right- and left-polarized geometric-
phase-kicked states, with a small overlap with unkicked states due to residual
co-polarized transmission. As the modulation increases, transitions first occur
to only the lowest few frequency/momentum harmonics, and a progressively larger
portion of the available Hilbert space is explored at large modulation depths. Figure 3b
depicts the density matrices of the input (panel (i)) and output photons for
different configurations of the STQM, resulting in distinct kinds of quantum
correlations: (ii) Geometric phase with spatio-temporal modulation off, giving
a spin-path entangled output of same frequency as input; (iii-iv) No geometric
phase and spatio-temporal modulation on, resulting in frequency-path entangled
cross-polarized output; (v-vi) Geometric phase with spatio-temporal modulation
on, delivering a frequency-spin-path hyperentangled output. It is possible to
tailor the modulation depth to completely suppress the contribution of a given
harmonic to the output state, as shown in (iv, vi) for the fundamental
frequency. Under temporal modulation only, i.e., null synthetic phase (not
shown), the output photon is unentangled (hyperentangled) in the absence
(presence) of geometric phase. Figure 3c shows the population dynamics of
different harmonics while the photon is in-transit inside the STQM.
Interestingly, the evolution of populations with and without geometric phase
are fundamentally different. Due to spin-orbit coupling the photon undergoes
Rabi-flopping between state pairs $(p,0,R)\leftrightarrow(p,-1,L)$ and
$(p,0,L)\leftrightarrow(p,1,R)$, and this population exchange cannot take
place at zero geometric phase. Unmodulated and modulated polarizabilities
control the time-scales of Rabi and synthetic-phase dynamics, respectively.
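The Rabi-flopping between a coupled pair such as $(p,0,R)\leftrightarrow(p,-1,L)$ can be illustrated with a minimal two-level sketch; the coupling strength $g$ below is a hypothetical placeholder for the scale set by the unmodulated polarizability, not a value from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Minimal two-level model of the geometric-phase-coupled pair
# (p,0,R) <-> (p,-1,L); g is a placeholder coupling strength.
g = 0.4
H = g * np.array([[0.0, 1.0],
                  [1.0, 0.0]])          # coupling in the two-state basis
psi0 = np.array([1.0, 0.0])             # photon starts in the unkicked state
t = 2.0
psi_t = expm(-1j * H * t) @ psi0
pops = np.abs(psi_t)**2                 # Rabi oscillation: [cos^2(gt), sin^2(gt)]
```

At zero geometric phase the off-diagonal coupling vanishes and the populations freeze, consistent with the statement that this population exchange cannot take place without spin-orbit coupling.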
When both phase distributions are azimuthally varying, i.e., $\Psi({\bf
r})=\ell_{g}\varphi$, $\Phi({\bf r})=\ell\varphi$ ($\ell_{g}$ and $\ell$
integers), the input photon becomes hyperentangled in frequency, spin, and
orbital angular momentum (OAM) Calvo2006 . The state of the photon can be
written as in Eq. (2) replacing linear momentum harmonics ${\bf k}_{p,q}$ by
OAM harmonics $\ell_{p,q}=p\ell+q\ell_{g}$. Such a rotating synthetic phase
could be implemented, e.g., via a heterodyne laser-induced dynamical grating
with Laguerre-Gauss petal modes Eichler1986 ; Naidoo2012 to generate an all-
optical spinning perturbation of the meta-atoms’ refractive index. As STQMs
offer the possibility to reconfigure the synthetic phase on-demand, the
question naturally arises as to what happens when the synthetic and geometric
phase distributions have utterly different symmetry, for instance one is
linear and the other spinning. It is then necessary to expand one phase in
terms of a mode basis with symmetries appropriate for the other phase, e.g.,
plane waves into cylindrical waves (see Supplementary Information for details
of mixed-phase STQMs). As the synthetic phase creates frequency-path
correlations and the geometric phase spin-OAM correlations, the two
correlations are not intertwined and the STQM does not produce
hyperentanglement but bipartite entanglement between pairs of degrees of
freedom of the single photon. Finally, we mention that all the analysis
presented in this section can be extended to other nonclassical inputs, such
as two-photon Fock states.
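The expansion of one phase in a basis suited to the other, e.g., plane waves into cylindrical waves, is the standard Jacobi-Anger identity; the short numerical check below is illustrative and independent of any metasurface parameters.

```python
import numpy as np
from scipy.special import jv

# Jacobi-Anger expansion: a plane-wave phase decomposed into cylindrical
# (OAM-like) modes, exp(i z cos(phi)) = sum_m i^m J_m(z) exp(i m phi).
z, phi = 2.0, 0.9                       # illustrative values of beta*r and azimuth
plane_wave = np.exp(1j * z * np.cos(phi))
cylindrical = sum((1j)**m * jv(m, z) * np.exp(1j * m * phi)
                  for m in range(-40, 41))
print(abs(plane_wave - cylindrical) < 1e-10)   # → True
```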
STQMs for tailored photon-generation out of quantum vacuum: Space-time quantum
metasurfaces can produce other nonclassical states of light and even induce
nonreciprocity Sounas2017 on quantum vacuum fluctuations. In addition to the
photon-number-conserving Hamiltonian $H_{1}(t)$ discussed above, STQMs couple
to the quantum electromagnetic field via a photon-number-non-conserving
Hamiltonian $H_{2}(t)$ that creates photon pairs out of the quantum vacuum
(see Methods). Their frequencies add up to the modulation frequency,
$\omega+\omega^{\prime}=\Omega$, thereby conserving energy, and this process
is essentially an analogue of the dynamical Casimir effect (DCE) in which an
oscillating boundary parametrically excites virtual photons into real ones
Dalvit2006 ; Dodonov2010 . Although the mechanical DCE has not been
detected because it requires unfeasibly large mechanical oscillation
frequencies, various analogue DCE systems have been demonstrated Wilson2011 ;
Jaskula2012 ; Lahteenmaki2013 ; Vezzoli2019 . STQMs allow for a novel degree
of dynamical control over the quantum vacuum through the synthetic phase: The
scattering matrix for the DCE process Maghrebi2013 becomes asymmetric via the
spatio-temporal modulation, reflecting that Lorentz reciprocity is broken at
the level of quantum vacuum fluctuations.
Figure 4: Steered quantum vacuum. (a) A linear synthetic phase is imprinted on
a metasurface through a traveling-wave modulation and is tuned on demand
(colored arrows) to steer the emitted dynamical Casimir photons. (b) Emission
lobes of one photon for varying momentum kick and fixed (vertical) emission
direction of its twin. (c) Density polar plots of angular emission spectrum
for various $\beta=(0,0.2,0.3,0.38,0.5)\Omega/c$ from left to right. The areas
to the right (left) of the vertical solid line correspond to the angular
emission spectrum of the high- (low-) frequency photon in a pair. Frequencies
are $\omega/\Omega=0.7$ and $\omega^{\prime}/\Omega=0.3$. Shaded zones
correspond to forbidden photon emission directions. Between the two rightmost
panels two special events simultaneously happen: the merge of the emission
“island” of the high-frequency photon with the grazing line and the birth of
forbidden regions for the low-frequency photon. (d) Spherical polar plots for
the same panels as in (c). Figure 5: Photo-emission rates for various
synthetic phases. (a) Spectral weight function for linear synthetic phase.
Sharp edges of each plateau correspond to the special events of Fig. 4(c).
Inset: Crossings responsible for non-monotonicity in (f). (b) Spectral weight
function for rotating synthetic phase. Solid lines correspond to a finite
radius metasurface ($\Omega R/c=30$) and dashed line is the $\ell=0$ case for
an infinite metasurface. (c) Angular-momentum spectra for finite radius
metasurface for the high-frequency photon. (d) Same as (c) for the low-
frequency photon. (e) Spectral photo-production rate for null synthetic phase
for a graphene-disk STQM. Inset: unmodulated electric polarizability
$\alpha_{um}(\omega)$ (solid) and modulation amplitude $\Delta\alpha(\omega)$
(dashed). (f) Emission rate for linear synthetic phase. The profile on the
left shows the rate at null synthetic phase. The black thick curve joins peaks
of maximal emission and the thin black curve
$c\beta=2\omega_{res}(E_{F})-\Omega$ is its projection on the $\beta-E_{F}$
plane. The rate decreases non-monotonically to zero at $\beta_{max}$. In (e-f)
parameters are: $\Omega/2\pi=10$ THz, $\Delta E_{F}/E_{F}=1\%$,
$n_{MS}=10^{3}\;{\rm mm}^{-2}$, $D=5\;\mu$m, and graphene mobility
$\mu=10^{4}\;{\rm cm}^{2}\;{\rm V}^{-1}\;{\rm s}^{-1}$.
We first consider the case of the linear synthetic phase (Fig. 4a) and set the
geometric phase to zero. Momentum conservation dictates that the emitted
photons must have in-plane momenta that add up to the imprinted kick,
$\bf{k}+\bf{k}^{\prime}=\bm{\beta}$, and the emitted photons are frequency-
path entangled. In Fig. 4b we show the one-photon angular emission
distribution for a fixed propagation direction of its twin, indicating how the
externally imprinted momentum controls the directivity of the emission
process. Figure 4c contains polar plots of the emission distributions for a
given circularly-polarized photon pair (see Methods). In the absence of kick,
the high-frequency photon can be emitted in any azimuthal direction but it has
a maximum polar angle of emission, while no such constraint exists for the
low-frequency photon. As the magnitude of the momentum kick $\beta$ increases,
the distributions undergo intricate changes. The region of allowed emission
for the first photon gets deformed when the kick is non-zero and at a critical
value of the kick an “island” of emission appears surrounded by a sea of
forbidden emission directions (shaded areas). The island drifts to higher
polar angles until it touches the grazing emission line, starts to shrink in
size, and finally at $\beta_{max}=\Omega/c$ it collapses to a point and the
photon is only emitted parallel to the kick. Far-field emission above that
value of the kick is not possible. Regarding the second photon, its emission
distribution remains mostly unperturbed until two areas of forbidden emission
appear at large polar angles and opposite to the kick direction. The forbidden
region grows until it engulfs its allowed emission region and a second island
forms (not shown). Finally, it ends up being emitted at a grazing angle but in
a direction anti-parallel to the kick. The corresponding spherical plots are
shown in Fig. 4d, with emission profiles resembling cone- (dome-) like shapes
for the high- (low-) frequency photon that become increasingly distorted as the
kick grows. When both photons are emitted with the same frequency, i.e., twin
photons, the emission distribution is disk-shaped and gets elongated in a
direction parallel to the kick as the kick increases in magnitude (not shown). The
modulation also excites hybrid entangled pairs composed of one photon and one
evanescent surface wave (shaded areas in Fig. 4c), and when
$\beta>\beta_{max}$ only evanescent modes are created and subsequently decay
via non-radiative loss mechanisms.
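The far-field constraints above follow from simple kinematics: each photon of a pair propagates only if its in-plane momentum does not exceed $\omega/c$, with the partner momentum fixed by ${\bf k}+{\bf k}^{\prime}=\bm{\beta}$. A brute-force scan (illustrative, in units with $c=1$) reproduces the cutoff $\beta_{max}=\Omega/c$.

```python
import numpy as np

def both_propagating(k, beta, omega, Omega, c=1.0):
    """Far-field emission needs in-plane momentum below omega/c for each
    photon; the partner momentum follows from k + k' = beta."""
    k_partner = beta - k
    return (np.linalg.norm(k) <= omega / c and
            np.linalg.norm(k_partner) <= (Omega - omega) / c)

Omega, omega = 1.0, 0.7                  # frequencies as in Fig. 4c (c = 1)
grid = np.linspace(-omega, omega, 121)   # coarse scan over k of one photon

def any_allowed(beta_mag):
    beta = np.array([beta_mag, 0.0])
    return any(both_propagating(np.array([x, y]), beta, omega, Omega)
               for x in grid for y in grid)

print(any_allowed(0.9))    # below beta_max = Omega/c: far-field pair exists → True
print(any_allowed(1.05))   # beyond beta_max: only evanescent pairs → False
```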
The two-photon emission rate from an STQM of area $A$ with arbitrary synthetic
phase $\Phi({\bf r})$ is
$\Gamma_{\Phi}=\frac{An_{MS}^{2}\Omega^{4}}{512\pi^{3}c^{4}}\int_{0}^{\Omega}d\omega\,|\Delta\alpha(\omega)+\Delta\alpha(\Omega-\omega)|^{2}f_{\Phi}(\omega).$
(4)
The rate scales as the square of the meta-atoms number surface density
$n_{MS}$ indicating coherent emission of photon pairs. Electro-optical
properties of the meta-atoms are contained in the modulated electric
polarizability amplitude $\Delta\alpha(\omega)$. The spectral weight function
$f_{\Phi}$ results from the angular integration of all emission events and is
plotted in Fig. 5a for the case of the linear synthetic phase.
$f_{\beta}(\omega)$ has a central plateau-like form with sharp edges that at
zero kick coalesce into a single logarithmically integrable divergence at the
center of the spectrum, corresponding to the emission of twin photons
MaiaNeto1996 . As the kick grows, the plateau becomes lower, and at the maximum
allowed kick the spectral weight function vanishes.
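The symmetrized combination $\Delta\alpha(\omega)+\Delta\alpha(\Omega-\omega)$ in Eq. (4) pairs the two photons at complementary frequencies. A toy evaluation makes the $\omega\leftrightarrow\Omega-\omega$ symmetry of the integrand explicit; the Lorentzian modulation amplitude and flat spectral weight below are placeholders, not the graphene model of the Methods.

```python
import numpy as np

Omega = 1.0

def dalpha(w):
    # placeholder Lorentzian modulation amplitude peaked at 0.7*Omega
    return 1.0 / ((w - 0.7 * Omega)**2 + (0.05 * Omega)**2)

def f_phi(w):
    return 1.0        # placeholder flat spectral weight function

w = np.linspace(0.0, Omega, 2001)
integrand = np.abs(dalpha(w) + dalpha(Omega - w))**2 * f_phi(w)
spectral_integral = integrand.sum() * (w[1] - w[0])   # omega-integral of Eq. (4)
# pairing complementary frequencies makes the integrand symmetric about Omega/2,
# producing the twin peaks seen in Fig. 5e for the graphene STQM
```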
STQMs can affect the quantum vacuum in more exotic ways, e.g., a modulation
with a spinning synthetic phase “stirs” the vacuum (Fig. 1) and induces
angular momentum nonreciprocity Sounas2013 at the level of quantum
fluctuations. The rotating modulation generates vortex photon pairs that carry
angular momenta satisfying $m+m^{\prime}=\ell$. For $\ell\neq 0$ the average
of the Poynting vector over all possible emission events results in a single
vortex line along the synthetic spinning axis. Photon pairs are frequency-
angular momentum entangled and their quantum correlations could be probed
using photo-coincidence detection and techniques based on angular momentum
sorting of light Berkhout2010 ; Mirhosseini2013 . The spectral weight function
$f_{\ell}(\omega)$ is reported in Fig. 5b for a finite-radius metasurface,
showing plateau-like structures as in Fig. 5a but without the sharp features
on the edges, and with decreasing height as the spinning grows. There is a
drastic but subtle difference between the two spectral weight functions
$f_{\beta}(\omega)$ and $f_{\ell}(\omega)$ that is not apparent in the plots:
The former vanishes beyond the finite kick threshold $\beta_{max}$, while
there is no finite spinning threshold for the latter. Figures 5c-d show the
angular momentum spectra of high- and low-frequency photons in an emitted pair
(see Methods). When the STQM does not imprint any spinning, the spectra are
symmetric around the peak at $m=0$, with oppositely twisted photons in each
emitted pair. When spinning is imprinted, the two spectra are related as
$f_{\ell}(m,\omega)=f_{\ell}(\ell-m,\Omega-\omega)$ and the spectrum for the
high- (low-) frequency photon is centered around $m=\ell$ ($m=0$). This is the
angular-momentum equivalent of asymmetric linear momentum emission in Fig. 4d.
Photo-emission rates can be boosted with suitable functional meta-atoms, such
as atomically-thin nanostructures made of plasmonic materials that can support
highly localized plasmons Yu2017 ; Abajo2015 ; Muniz2020 and enable large
electric polarizabilities conducive to enhanced coupling of the STQM with the
quantum vacuum. As an example, we consider a STQM based on graphene disks
whose Fermi energy $E_{F}$ is spatio-temporally modulated (Fig. 1). By changing
the Fermi energy it is possible to tune the plasmonic resonances into the DCE
spectral range and to modify the electric polarizability modulation amplitude
(inset of Fig. 5e). Furthermore, the use of ultra-high mobility graphene
samples minimizes photon absorption and substantially enhances photo-
production rates. Figure 5e depicts the spectral rate for a STQM for null
synthetic phase at selected Fermi energies, featuring Lorentzian peaks at
complementary frequencies. For high-Q resonances the emission rate for
arbitrary synthetic phase can be approximated as
$\Gamma_{\Phi}\approx g\Omega\;(An_{MS}^{2}D^{6}\omega^{4}_{res}/c^{4})\;f_{\Phi;res}\Big(\frac{\Delta E_{F}}{E_{F}}\Big)^{2}\Big(\frac{\Omega}{\gamma}\Big)^{3}.$ (5)
Here, $g$ is a numerical factor determined by the plasmon eigenmode, $D$ the
disk diameter, $\omega_{res}$ is the plasmonic resonance frequency,
$f_{\Phi;res}$ the spectral weight on resonance, $\Delta E_{F}/E_{F}$ the
Fermi energy modulation depth, and $\Omega/\gamma\gg 1$ with $\gamma$ the
scattering rate of graphene (see Supplementary Information for the derivation
of the polarizability and emission rate). Figure 5f shows the emission rate
for the linear synthetic phase as a function of momentum kick and Fermi
energy. Giant photon-pair production rates on the order of $10^{12}$
photons$/{\rm cm}^{2}{\rm s}$ are obtained at low-THz driving frequencies and
modest modulation depths. Conventional electrical doping may not allow reaching
the large Fermi energies where the rate is maximized, but it suffices
for exploring the lower Fermi energy region where photon-pairs are already
produced in giant numbers. A heterodyne dynamical grating based on ultrafast
all-optical THz modulation of graphene conductivity Tasolamprou2019 could
enable giant and steered photo-pair emission out of the quantum vacuum.
Finally, we note that electro-optical ultrafast on-chip graphene modulators
Li2014 ; Phare2015 ; Kovacevic2018 could potentially be employed to
independently bias different STQM graphene pixels with designer temporal
delays to implement complex synthetic phases.
Discussion: Metasurfaces are crossing the classical-quantum divide to offer
novel possibilities for flat quantum optics and photonics. On the quantum
side, they can become an enabler platform for generating and manipulating
nonclassical states of light in real time. We uncovered a key property of
space-time quantum metasurfaces relevant for potential applications: On-demand
reconfiguration of the synthetic phase allows dynamically tunable quantum
correlations, making it possible to tailor the nature of entanglement depending on the
symmetry properties of both geometric and synthetic phases. We also
illustrated a second key property of space-time quantum metasurfaces with
fundamental relevance: Lorentz nonreciprocity at the deepest level of vacuum
fluctuations is attained through joint space and time modulations of optical
properties and can be interpreted as an asymmetric quantum vacuum.
Spatio-temporally modulated quantum metasurfaces have the potential to become
a flexible photonic platform for generating nonclassical light with designer
spatial and spectral shapes, for on-demand manipulation of entanglement for
free-space communications, and for reconfigurable sensing and imaging systems.
Conversion efficiencies into specific frequency, linear momentum, or orbital
angular momentum harmonics for selective quantum information encoding could be
enhanced through advanced modulation protocols. Incorporation of quantum
matter building blocks into space-time metasurfaces may further expand the
possibilities afforded by the proposed platform. As such, space-time quantum
metasurfaces can provide breakthrough advances in quantum photonics.
Methods: The effective polarizability tensors $\bm{\alpha}_{E}(\omega)$ and
$\bm{\alpha}_{M}(\omega)$ of the dielectric nanostructure are obtained using a
Cartesian multipole expansion Evlyukhin2013 of the full-wave simulated
electromagnetic field under a plane wave excitation, and computing ratios of
the resulting Mie electric and magnetic dipoles to the incident field at the
nanostructure’s center. The Hamiltonian for the all-dielectric STQM in cross-
polarized transmission is
$H_{1}(t)=-\sum_{j,\gamma,\gamma^{\prime}}\;[\alpha^{(cr)}_{um}(\omega)+\Delta\alpha^{(cr)}(\omega)\cos(\Omega t-\Phi_{j})]\times A^{*}_{\gamma;j}A_{\gamma^{\prime};j}e^{i(\omega-\omega^{\prime})t}[e^{i\Psi_{j}}a^{\dagger}_{\gamma,R}a_{\gamma^{\prime},L}+e^{-i\Psi_{j}}a^{\dagger}_{\gamma,L}a_{\gamma^{\prime},R}]+{\rm h.c.}$ (6)
The sums are over all meta-atoms and field modes, the geometric $\Psi({\bf
r})$ and synthetic $\Phi({\bf r})$ phase distributions are evaluated at the
position of the meta-atoms, $\Omega$ is the modulation frequency,
$A_{\gamma}$, $A_{\gamma^{\prime}}$ are spatial modes, and
$a_{\gamma^{\prime},L/R}$ and $a^{\dagger}_{\gamma,R/L}$ are annihilation and
creation operators of circularly polarized photons. The unmodulated coupling
strength is
$\alpha^{(cr)}_{um}(\omega)={\rm Re}[\alpha_{E,xx}(\omega)+\alpha_{M,yy}(\omega)-\alpha_{E,yy}(\omega)-\alpha_{M,xx}(\omega)]$
and $\Delta\alpha^{(cr)}(\omega)$ is the modulation coupling strength obtained
by replacing in the above equation each effective polarizability by its
respective modulation amplitude.
The Hamiltonian for the all-plasmonic STQM for two-photon emission is
$H_{2}(t)=\frac{1}{8}\sum_{j,\gamma,\gamma^{\prime}}\sum_{\lambda,\lambda^{\prime}}\;[\Delta\alpha(\omega-\Omega)+\Delta\alpha(\omega^{\prime}-\Omega)]\times A^{*}_{\gamma;j}A^{*}_{\gamma^{\prime};j}\;e^{i\Phi_{j}}e^{i(\omega+\omega^{\prime}-\Omega)t}\;a^{\dagger}_{\gamma,\lambda}a^{\dagger}_{\gamma^{\prime},\lambda^{\prime}}+{\rm h.c.}$ (7)
where we neglected multiscattering between meta-atoms Holloway2005 .
$\lambda,\lambda^{\prime}$ are polarization states of the two photons and
$\Delta\alpha(\omega)$ is the modulated electric polarizability amplitude of
the meta-atom computed with the plasmon wavefunction formalism Yu2017 . For a
graphene disk of diameter $D$ with a high-Q localized bright-mode plasmonic
resonance
$\Delta\alpha(\omega)\approx\frac{\pi^{2}a_{1}^{2}\alpha_{fs}cD^{2}\Delta E_{F}}{512\hbar\Omega^{2}}\frac{(\gamma/2\Omega)^{2}}{[((\omega-\omega_{res})/\Omega)^{2}+(\gamma/2\Omega)^{2}]^{2}}.$
$\omega_{res}(E_{F})=(\alpha_{fs}cE_{F}/\pi|\xi_{1}|\hbar D)^{1/2}$ is the
resonance frequency of the lowest bright-mode, $E_{F}$ and $\Delta E_{F}$ are
the Fermi energy and its modulation amplitude, $\alpha_{fs}$ is the fine
structure constant, $\gamma=ev^{2}_{F}/E_{F}\mu$ the scattering rate of
graphene, $v_{F}$ the Fermi velocity, and $\mu$ the mobility. The eigenmode
coefficients $a_{1}=6.1$ and $\xi_{1}=-0.072$ determine
$g=5\pi^{4}a_{1}^{4}\xi_{1}^{2}/2(512)^{3}$ in Eq. (5). Emission rates are
computed with time-dependent perturbation theory. Non-paraxial quantization of
the electromagnetic field with angular momentum is employed for the STQM with
rotating synthetic phase Enk1994 . The spectral weight functions for the
linear and spinning synthetic phases are respectively decomposed into in-plane
linear momentum $f_{\bm{\beta}}({\bf k},\omega)$ and angular momentum
${f}_{\ell}(m,\omega)$ spectra
$f_{\bm{\beta}}(\omega)=\int d{\bf k}\;(c/\Omega)^{2}\;(1-|c{\bf k}/\omega|^{2})^{-1/2}f_{\bm{\beta}}({\bf k},\omega),\qquad f_{\ell}(\omega)=\sum_{m}{f}_{\ell}(m,\omega).$ (8)
Explicit expressions for these spectra can be found in the Supplementary
Information.
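The Methods formulas can be evaluated directly. The sketch below plugs SI constants and an illustrative Fermi energy $E_{F}=0.2$ eV (a hypothetical choice, not stated in the text) into the expressions for $\omega_{res}(E_{F})$ and $\gamma$, together with the quoted parameters $\Omega/2\pi=10$ THz, $D=5\;\mu$m, $\mu=10^{4}\;{\rm cm}^{2}\,{\rm V}^{-1}{\rm s}^{-1}$, to confirm that the resonance lands in the low-THz DCE range and that the high-Q condition $\Omega/\gamma\gg 1$ can be met.

```python
import numpy as np

# Physical constants (SI)
alpha_fs = 1 / 137.036           # fine structure constant
c = 2.998e8                      # speed of light, m/s
hbar = 1.0546e-34                # reduced Planck constant, J*s
e = 1.602e-19                    # elementary charge, C
v_F = 1.0e6                      # graphene Fermi velocity, m/s
xi1 = 0.072                      # |xi_1| eigenmode coefficient (Methods)

# Paper parameters plus an illustrative Fermi energy (assumption)
Omega = 2 * np.pi * 10e12        # modulation frequency, rad/s
D = 5e-6                         # disk diameter, m
mu = 1.0                         # mobility: 1e4 cm^2/(V s) = 1 m^2/(V s)
E_F = 0.2 * e                    # Fermi energy, J (hypothetical value)

omega_res = np.sqrt(alpha_fs * c * E_F / (np.pi * xi1 * hbar * D))
gamma = e * v_F**2 / (E_F * mu)  # graphene scattering rate

print(omega_res / (2 * np.pi) / 1e12)  # resonance frequency in THz (low-THz range)
print(Omega / gamma)                   # high-Q figure of merit, > 1
```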
Acknowledgements: This work was supported by the DARPA QUEST program. We are
grateful to A. Efimov, M. Julian, C. Lewis, M. Lucero, and A. Manjavacas for
discussions.
Author Contributions: D.A.R.D. and W. K.-K. conducted the theory work and A.
K. A. analyzed experimental feasibility. All authors discussed the findings
and contributed to writing the paper.
Competing Interests: The authors declare no competing interests.
∗Correspondence<EMAIL_ADDRESS>
## References
* (1) A. V. Kildishev, A. Boltasseva, and V. M. Shalaev, Planar photonics with metasurfaces, Science 339, 1232009 (2013).
* (2) H.-T. Chen, A. J. Taylor, and N. Yu, A review of metasurfaces: physics and applications, Rep. Prog. Phys. 79, 076401 (2016).
* (3) A. S. Solntsev, G. S. Agarwal, and Y. S. Kivshar, Metasurfaces for quantum photonics, arXiv:2007.14722.
* (4) T. Stav, A. Faerman, E. Maguid, D. Oren, V. Kleiner, E. Hasman, and M. Segev, Quantum entanglement of the spin and orbital angular momentum of photons using metamaterials, Science 361, 1101-1104 (2018).
* (5) Z. Bomzon, G. Biener, V. Kleiner, and E. Hasman, Space-variant Pancharatnam–Berry phase optical elements with computer-generated subwavelength gratings, Opt. Lett. 27, 1141-1143 (2002).
* (6) K. Wang, J. G. Titchener, S. S. Kruk, L. Xu, H.-P. Chung, M. Parry, I. I. Kravchenko, Y.-H. Chen, A. S. Solntsev, Y. S. Kivshar, D. N. Neshev, and A. A. Sukhorukov, Quantum metasurface for multiphoton interference and state reconstruction, Science 361, 1104–1108 (2018).
* (7) P. Georgi, M. Massaro, K.-H. Luo, B. Sain, N. Montaut, H. Herrmann, T. Weiss, G. Li, C. Silberhorn, and T. Zentgraf, Metasurface interferometry toward quantum sensors, Light: Science & Applications 8, 70 (2019).
* (8) A. Vaskin, R. Kolkowskia, A. F. Koenderink, and I. Staude, Light-emitting metasurfaces, Nanophotonics 8, 1151–1198 (2019).
* (9) Y.-Y. Xie, P.-N. Ni, Q.-H. Wang, Q. Kan, G. Briere, P.-P. Chen, Z.-Z. Zhao, A. Delga, H.-R. Ren, H.-D. Chen, C. Xu, and P. Genevet, Metasurface-integrated vertical cavity surface-emitting lasers for programmable directional lasing emissions, Nat. Nanotech. 15, 125–130 (2020).
* (10) Y. Kan, S. K. H. Andersen, F. Ding, S. Kumar, C. Zhao, and S. I. Bozhevolnyi, Metasurface-enabled generation of circularly polarized single photons, Adv. Mater. 32, 1907832 (2020).
* (11) R. Bekenstein, I. Pikovski, H. Pichler, E. Shahmoon, S. F. Yelin, and M. D. Lukin, Quantum metasurfaces with atom arrays, Nat. Phys. 16, 676–681 (2020).
* (12) J. Bohn, T. Bucher, K. E. Chong, A. Komar, D.-Y. Choi, D. N. Neshev, Y. S. Kivshar, T. Pertsch, and I. Staude, Active tuning of spontaneous emission by Mie-resonant dielectric metasurfaces, Nano Lett. 18, 3461–3465 (2018).
* (13) A. M. Shaltout, V. M. Shalaev, and M. L. Brongersma, Spatiotemporal light control with active metasurfaces, Science 364, 648 (2019).
* (14) A. E. Cardin, S. R. Silva, S. R. Vardeny, W. J. Padilla, A. Saxena, A. J. Taylor, W. J. M. Kort-Kamp, H.-T. Chen, D. A. R. Dalvit, and A. K. Azad, Surface-wave-assisted nonreciprocity in spatio-temporally modulated metasurfaces, Nat. Commun. 11, 1469 (2020).
* (15) L. Zhang, X. Q. Chen, S. Liu, Q. Zhang, J. Zhao, J. Y. Dai, G. D. Bai, X. Wan, Q. Cheng, G. Castaldi, V. Galdi, and T. J. Cui, Space-time-coding digital metasurfaces, Nat. Commun. 9, 4334 (2018).
* (16) J. T. Barreiro, N. K. Langford, N. A. Peters, and P. G. Kwiat, Generation of Hyperentangled Photons Pairs, Phys. Rev. Lett. 95, 260501 (2005).
* (17) G. T. Moore, Quantum theory of the electromagnetic field in a variable‐length one‐dimensional cavity, Journal of Mathematical Physics 11, 2679 (1970).
* (18) X. Guo, Y. Ding, Y. Duan, and X. Ni, Nonreciprocal metasurface with space–time phase modulation, Light: Science & Applications 8, 123 (2019).
* (19) A. I. Kuznetsov, A. E. Miroshnichenko, M. L. Brongersma, Y. S Kivshar, and B. Luk’yanchuk, Optically resonant dielectric nanostructures, Science 354, aag2472 (2016).
* (20) L. Novotny and B. Hecht, Principles of Nano-Optics (Cambridge University Press, New York, 2007).
* (21) G. F. Calvo, A. Picón, and E. Bagan, Quantum field theory of photons with orbital angular momentum, Phys. Rev. A 73, 013805 (2006).
* (22) H. J. Eichler, P. Günter, and D. H. Pohl, Laser induced dynamical gratings (Springer-Verlag, Heidelberg, 1986).
* (23) D. Naidoo, K. Ait-Ameur, M. Brunel, and A. Forbes, Intra-cavity generation of superpositions of Laguerre–Gaussian beams, Appl. Phys. B, 106, 683–690 (2012).
* (24) D. L. Sounas and A. Alù, Non-reciprocal photonics based on time modulation, Nat. Photonics 1, 774–783 (2017).
* (25) D. A. R. Dalvit, P. A. Maia Neto, and F. D. Mazzitelli, in Casimir Physics, Lecture Notes in Physics 834 (eds. D. Dalvit, P. Milonni, D. Roberts, and F. Da Rosa), (Springer, Heidelberg, 2006).
* (26) V. V. Dodonov, Current status of the dynamical Casimir effect, Physica Scripta 82, 038105 (2010).
* (27) C. M. Wilson, G. Johansson, A. Pourkabirian, M. Simoen, J. R. Johansson, T. Duty, F. Nori, and P. Delsing, Observation of the dynamical Casimir effect in a superconducting circuit, Nature 479, 376–379 (2011).
* (28) J.-C. Jaskula, G. B. Partridge, M. Bonneau, R. Lopes, J. Ruaudel, D. Boiron, and C. I. Westbrook, Acoustic Analog to the Dynamical Casimir Effect in a Bose-Einstein Condensate, Phys. Rev. Lett. 109, 220401 (2012).
* (29) P. Lahteenmaki, G. S. Paraoanu, J. Hassel, and P. J. Hakonen, Dynamical Casimir effect in a Josephson metamaterial, Proc. Nat. Acad. Sci. USA 110, 4234-4238 (2013).
* (30) S. Vezzoli, A. Mussot, N. Westerberg, A. Kudlinski, H. D. Saleh, A. Prain, F. Biancalana, E. Lantz, and D. Faccio, Optical analogue of the dynamical Casimir effect in a dispersion-oscillating fibre, Commun. Phys. 2, 84 (2019).
* (31) M. F. Maghrebi, R. Golestanian, and M. Kardar, Scattering approach to the dynamical Casimir effect, Phys. Rev. D 87, 025016 (2013).
* (32) P. A. Maia Neto and L. A. S. Machado, Quantum radiation generated by a moving mirror in free space, Phys. Rev. A 54, 3420 (1996).
* (33) D. L. Sounas, C. Caloz, and A. Alù, Giant non-reciprocity at the subwavelength scale using angular momentum-biased metamaterials, Nat. Commun. 4, 2407 (2013).
* (34) G. C. Berkhout, M. P. Lavery, J. Courtial, M. W. Beijersbergen, and M. J. Padgett, Efficient Sorting of Orbital Angular Momentum States of Light, Phys. Rev. Lett. 105, 153601 (2010).
* (35) M. Mirhosseini, M. Malik, Z. Shi, and R. W. Boyd, Efficient separation of the orbital angular momentum eigenstates of light, Nat. Commun. 4, 2783 (2013).
* (36) R. Yu, J. D. Cox, J. R. M. Saavedra, and F. J. García de Abajo, Analytical modeling of graphene plasmons, ACS Photonics 4, 3106 (2017).
* (37) F. J. García de Abajo and A. Manjavacas, Plasmonics in atomically thin materials, Faraday Discuss. 178, 87-107 (2015).
* (38) Y. Muniz, A. Manjavacas, C. Farina, D. A. R. Dalvit, and W. J. M. Kort-Kamp, Two-Photon Spontaneous Emission in Atomically Thin Plasmonic Nanostructures, Phys. Rev. Lett. 125, 033601 (2020).
* (39) A. C. Tasolamprou, A. D. Koulouklidis, C. Daskalaki, C. P. Mavidis, G. Kenanakis, G. Deligeorgis, Z. Viskadourakis, P. Kuzhir, S. Tzortzakis, M. Kafesaki, E. N. Economou, and C. M. Soukoulis, Experimental demonstration of ultrafast THz modulation in a graphene-based thin film absorber through negative photoinduced conductivity, ACS Photonics 6, 720–727 (2019).
* (40) W. Li, B. Chen, C. Meng, W. Fang, Y. Xiao, X. Li, Z. Hu, Y. Xu, L. Tong, H. Wang, W. Liu, J. Bao, and Y. R. Shen, Ultrafast all-optical graphene modulator, Nano Lett. 14, 955–959 (2014).
* (41) C. T. Phare, Y.-H. D. Lee, J. Cardenas, and M. Lipson, Graphene electro-optic modulator with 30 GHz bandwidth, Nat. Photonics 9, 511–514 (2015).
* (42) G. Kovacevic, C. Phare, S. Y. Set, M. Lipson, and S. Yamashita, Ultra-high-speed graphene optical modulator design based on tight field confinement in a slot waveguide, Appl. Phys. Express 11, 065102 (2018).
* (43) A. B. Evlyukhin, C. Reinhardt, E. Evlyukhin, and B. N. Chichkov, Multipole analysis of light scattering by arbitrary-shaped nanoparticles on a plane surface, J. Opt. Soc. Am. B 30, 2589 (2013).
* (44) S. J. Van Enk and G. Nienhuis, Commutation rules and eigenvalues of spin and orbital angular momentum of radiation fields, J. Mod. Opt. 41, 963-977 (1994).
* (45) C. L. Holloway, M. A. Mohamed, E. F. Kuester, and A. Dienstfrey, Reflection and transmission properties of a metafilm: with an application to a controllable surface composed of resonant particles, IEEE Transactions on Electromagnetic Compatibility 47, 853-865 (2005).
# Deep Learning-Based Autoencoder for Data-Driven Modeling of an RF
Photoinjector
J. Zhu, Y. Chen, F. Brinker, W. Decking, S. Tomin, H. Schlarb Deutsches
Elektronen-Synchrotron DESY, Notkestrasse 85, 22607 Hamburg, Germany
###### Abstract
Modeling of large-scale research facilities is extremely challenging due to
complex physical processes and engineering problems. Here, we adopt a data-
driven approach to model the photoinjector of the European XFEL with a deep
learning-based autoencoder. A deep convolutional neural network (decoder) is
used to build images measured on the screen from a small feature map generated
by another neural network (encoder). We demonstrate that the autoencoder
trained only with experimental data can make high-fidelity predictions of
megapixel images for the longitudinal phase-space measurement. The prediction
significantly outperforms existing methods. We also show the scalability and
explicability of the autoencoder by sharing the same decoder with more than
one encoder used for different setups of the photoinjector, and propose a
pragmatic way to model a photoinjector with various diagnostics and working
points. This opens the door to a new way of accurately modeling a
photoinjector using neural networks. The approach can possibly be extended to
the whole accelerator and even other types of scientific facilities.
## I INTRODUCTION
Operations of large-scale scientific user facilities like the European XFEL [1]
are very challenging, as the machine must meet the specifications of various
user experiments [2] and be capable of switching its status rapidly.
Recently, machine learning (ML) has been providing powerful new tools for
accelerator physicists to build fast-prediction surrogate models [3, 4, 5] or
extract essential information [6, 7, 8] from large amounts of data. These ML-
based models can be extremely useful for building virtual accelerators which
are capable of making fast predictions of the behavior of beams [9], assisting
accelerator tuning by virtually bringing destructive diagnostics online [4],
providing an initial guess of input parameters for a model-independent
adaptive feedback control algorithm [10] or driving a model-based feedback
control algorithm [11]. One way of training a ML-based model is to make use of
simulated data. However, beam dynamics simulations are typically carried out
under different theoretical assumptions on collective effects such as space
charge forces, wakefields and coherent synchrotron radiation. In addition,
electron emission from a photocathode is governed by multiple physical
processes and is even more difficult to simulate [12]. Moreover, aging of
accelerator components affects the long-term operation of a facility, but is
generally not included in simulation. As a result, it is extremely challenging
to achieve a good agreement between simulation and measurement for a large
range of machine operation parameters, even when exploiting complicated physical
models [13]. Furthermore, it can be prohibitively expensive to collect a large
amount of high-resolution simulation data [14].
Previous work has demonstrated prediction of the longitudinal phase-space at
the exit of the LCLS accelerator using the L1S phase and a shallow multi-layer
perceptron [4]. The images were cropped to 100 x 100 pixels and the phase-space
distribution had to be centered in order to produce reasonable results.
Nonetheless, the predicted longitudinal phase-space was blurry and had
significant artifacts in the background. Moreover, the current profile was
predicted by another multi-layer perceptron instead of being extracted
directly from the predicted longitudinal phase-space. Indeed, a multi-layer
perceptron consisting purely of fully connected layers has intrinsic
limitations in image-related tasks, since it attempts to learn a connection
between every pair of nodes in adjacent layers. First, this unnecessarily
complicates the training of the neural network, as the pixels representing the
phase-space distribution have little connection with the majority of the
background pixels. Second, the number of parameters scales at least
proportionally to the number of pixels in the image, which makes it
impractical to apply to megapixel images due to the huge memory requirement.
In this paper, we propose a deep learning-based autoencoder to make high-
fidelity predictions of megapixel images measured on a screen and demonstrate
it experimentally at the longitudinal phase-space diagnostic beamline at the
injector of European XFEL. Besides the performance, another major advantage of
this approach over the existing ones [4, 11] is that the output of our model
is the full image from the camera, so that the same neural network structure
can be applied on measurements of distributions with different footprints and
the preprocessing step is much simpler. The concerned physical properties can
then be extracted by using the well-established routines. More importantly, we
demonstrate the scalability and explicability of the autoencoder by sharing
the same decoder with encoders used for different setups of the photoinjector,
and propose a pragmatic way to model a photoinjector with various diagnostics
and working points. It must be pointed out that our method is essentially
different from the variational autoencoder [15] and the generative adversarial
network [16], both of which learn a joint probability distribution from the
training dataset, allowing them to synthesize images from random noise. In this
study, however, we aim to find an explicit mapping between the input
parameters and the output image.
## II Deep Learning Model
### II.1 Neural network
The general architecture of the autoencoder is illustrated in Fig. 1(a).
Formally, given an input $\mathbf{x}\in\mathbb{R}^{m}$ and the measurement
$\mathbf{y}\in\mathbb{R}^{n}$, the model is asked to learn two neural networks
$g_{\varphi}:\mathbb{R}^{m}\to\mathbb{R}^{c}$ and
$f_{\theta}:\mathbb{R}^{c}\to\mathbb{R}^{n}$, where $\mathbb{R}^{c}$ is the
latent space and $\mathbf{z}\in\mathbb{R}^{c}$ is called the latent features.
Both $m$ and $n$ can be very large as modern area detectors typically have
millions of pixels. The learning process is described as minimizing a loss
function $\mathcal{L}(\mathbf{y},f_{\theta}(g_{\varphi}(\mathbf{x})))$ using a
gradient descent algorithm. In practice, only a subset of the input parameters
is varied during data collection, while the rest are kept fixed. Therefore, the
model effectively learns only from the non-fixed input data
$\tilde{\mathbf{x}}$ and the encoder can be simplified to
$g_{\varphi}(\mathbf{x})=g_{\varphi}(\tilde{\mathbf{x}}|\bar{\mathbf{x}})=g_{\varphi}(\tilde{\mathbf{x}})$,
where $\bar{\mathbf{x}}$ is the fixed input data and
$\bar{\mathbf{x}}\oplus\tilde{\mathbf{x}}=\mathbf{x}$. Here we have assumed
that the influence of the jitter of $\bar{\mathbf{x}}$ is negligible. Although
it can be challenging for neural networks to learn a universal approximator
for the whole input parameter space of an accelerator, this approach can be
well-suited for user facilities as they are typically operated on a finite
number of working points.
The detailed structure of the autoencoder is shown in Fig. 1(b). We use a
multi-layer perceptron to learn latent features and then map them to the image
on the screen using a concatenation of transposed convolutional layers [17].
The transposed convolutional layer performs the transformation in the opposite
direction of a normal convolution, which projects localized feature maps to a
higher-dimensional space. Despite the depth of the neural network, a single
prediction takes only about 20 ms on a mid-range graphics card, which is
orders of magnitude faster than a standard beam dynamics simulation.
Figure 1: (a) General architecture of the autoencoder. (b) Diagram of the
neural network. The leftmost blue box represents the input layer. It is
followed by three fully-connected layers (encoder) in purple with each layer
activated by the Leaky ReLU (Rectified Linear Unit) function. The latent space
is depicted in grey. The ten yellow boxes represent the transposed
convolutional layers (decoder). Each transposed convolutional layer is
followed by a batch normalization layer [18] and activated by the leaky ReLU
function except the last one, which is activated by the sigmoid function
depicted in green. The kernel sizes of the first and second transposed
convolutional layers are 3 x 4 and 3 x 3, respectively, and the kernel sizes
of the other eight transposed convolutional layers are all 5 x 5. The total
number of trainable parameters is 1,898,161. (c) Example of the longitudinal
phase-spaces cropped from the measured image and the corresponding prediction.
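As a concrete illustration, the architecture of Fig. 1(b) can be sketched in Keras. The layer widths, the latent dimension and the stride pattern (two stride-1 layers followed by eight stride-2 layers, so that a 3 x 4 feature map grows to 768 x 1024) are assumptions for illustration only; this sketch does not reproduce the reported count of 1,898,161 trainable parameters.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_autoencoder(n_phases=3, latent_dim=64, base_ch=64):
    """Sketch of the encoder-decoder of Fig. 1(b); widths, latent
    dimension and strides are assumptions, not the published values."""
    inp = layers.Input(shape=(n_phases,))
    x = inp
    for units in (128, 128, latent_dim):          # encoder: 3 dense layers
        x = layers.Dense(units)(x)
        x = layers.LeakyReLU()(x)
    # Map the latent features onto a 3 x 4 grid, then upsample with ten
    # transposed convolutions: 3 * 2**8 = 768, 4 * 2**8 = 1024.
    x = layers.Dense(3 * 4 * base_ch)(x)
    x = layers.Reshape((3, 4, base_ch))(x)
    kernels = [(3, 4), (3, 3)] + [(5, 5)] * 8     # kernel sizes from Fig. 1(b)
    strides = [1, 1] + [2] * 8                    # stride pattern assumed
    for i, (k, s) in enumerate(zip(kernels, strides)):
        last = i == 9
        x = layers.Conv2DTranspose(1 if last else base_ch, k,
                                   strides=s, padding="same")(x)
        if last:
            x = layers.Activation("sigmoid")(x)   # final sigmoid output
        else:
            x = layers.BatchNormalization()(x)
            x = layers.LeakyReLU()(x)
    return tf.keras.Model(inp, x)
```

Building the model confirms that the decoder maps three RF phases to a single-channel 768 x 1024 image, matching the downsampled screen resolution used later in the paper.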
### II.2 Loss function
Neural networks are trained using the mini-batch stochastic gradient descent
optimization algorithm [18] driven by a loss function. For most of the
regression problems, the choice of the loss function defaults to the mean
squared error (MSE) [5, 4, 6]. However, an MSE loss function treats pixels as
uncorrelated features and was found to result in overly smoothed images as
well as loss of high-frequency features in high-resolution image generation
applications [19]. In our model, the loss function takes into account the
correlations between adjacent pixels and is given by
$L_{batch}=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}(1-h(\mathbf{y}_{i},\mathbf{\hat{y}_{i}})),$
(1)
where $N_{b}$ is the batch size for training, $\mathbf{\hat{y}}$ the prediction
and $h$ is the SSIM (structural similarity index measure) [20] in multiple
scales written as
$h(\mathbf{y},\mathbf{\hat{y}})=\left[\frac{1}{N_{p}^{(M)}}\sum_{\begin{subarray}{c}\forall\mathbf{p}\in\mathbf{y}^{(M)}\\ \forall\mathbf{\hat{p}}\in\mathbf{\hat{y}}^{(M)}\end{subarray}}l(\mathbf{p},\hat{\mathbf{p}})\,c(\mathbf{p},\hat{\mathbf{p}})\,s(\mathbf{p},\hat{\mathbf{p}})\right]^{\alpha_{M}}\prod_{j=0}^{M-1}\left[\frac{1}{N_{p}^{(j)}}\sum_{\begin{subarray}{c}\forall\mathbf{p}\in\mathbf{y}^{(j)}\\ \forall\mathbf{\hat{p}}\in\mathbf{\hat{y}}^{(j)}\end{subarray}}c(\mathbf{p},\hat{\mathbf{p}})\,s(\mathbf{p},\hat{\mathbf{p}})\right]^{\alpha_{j}}.$
(2)
Here, $l(\mathbf{p},\hat{\mathbf{p}})$, $c(\mathbf{p},\hat{\mathbf{p}})$ and
$s(\mathbf{p},\hat{\mathbf{p}})$ measure the distortions in luminance,
contrast and structure [20], respectively, between a uniform sliding window
$\mathbf{p}$ of size 8 x 8 pixels on the measured image $\mathbf{y}^{(j)}$ and
its counterpart $\hat{\mathbf{p}}$ on the predicted one
$\hat{\mathbf{y}}^{(j)}$. The number of pixels in $\mathbf{y}^{(j)}$ is
denoted as $N_{p}^{(j)}$. The superscript $j\in\{0,...,M\}$ indicates
that the image is downsampled by a factor of $2^{j}$ using average pooling.
Because $l(\mathbf{p},\hat{\mathbf{p}})$, $c(\mathbf{p},\hat{\mathbf{p}})$ and
$s(\mathbf{p},\hat{\mathbf{p}})$ all range between 0 and 1 for non-negative
image data, having $\alpha_{j}<1$ prevents the model from overfitting on fine
local features which could be induced by machine jitter. Comparisons between
images at different scales enable the model to learn the
correlations between pixels in a wider area. We empirically chose $M=2$ with
$\alpha_{0}$ = 0.05, $\alpha_{1}$ = 0.30 and $\alpha_{2}$ = 0.65 for this
study.
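As a sketch, the loss of Eqs. (1) and (2) can be implemented with NumPy and SciPy as below. The uniform 8 x 8 sliding window and the coefficients $\alpha_j$ follow the text; the SSIM stabilization constants `c1` and `c2` and the clipping of the scale averages are assumed numerical safeguards not specified in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _ssim_maps(y, yhat, c1=1e-4, c2=9e-4):
    """Luminance (l) and contrast-structure (cs) maps over a uniform
    8 x 8 sliding window, following Wang et al. [20]."""
    mu_y, mu_h = uniform_filter(y, 8), uniform_filter(yhat, 8)
    var_y = np.clip(uniform_filter(y * y, 8) - mu_y ** 2, 0.0, None)
    var_h = np.clip(uniform_filter(yhat * yhat, 8) - mu_h ** 2, 0.0, None)
    cov = uniform_filter(y * yhat, 8) - mu_y * mu_h
    l = (2 * mu_y * mu_h + c1) / (mu_y ** 2 + mu_h ** 2 + c1)
    cs = (2 * cov + c2) / (var_y + var_h + c2)  # c(p,p_hat)*s(p,p_hat) with c3=c2/2
    return l, cs

def avg_pool2(img):
    """Downsample by a factor of 2 with average pooling."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multiscale_ssim_loss(y, yhat, alphas=(0.05, 0.30, 0.65)):
    """1 - h(y, yhat) of Eq. (2): contrast-structure terms at every
    scale j, the luminance term only at the coarsest scale M."""
    h, M = 1.0, len(alphas) - 1
    for j, a in enumerate(alphas):
        l, cs = _ssim_maps(y, yhat)
        term = l * cs if j == M else cs
        h *= np.clip(term.mean(), 1e-8, None) ** a
        if j < M:
            y, yhat = avg_pool2(y), avg_pool2(yhat)
    return 1.0 - h
```

The loss vanishes for a perfect prediction and approaches 1 for an uncorrelated one, so it plugs directly into a gradient-based optimizer once rewritten with differentiable tensor operations.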
## III Experimental results
### III.1 Experiment setup
The experiment was carried out at the injector of European XFEL [21] and the
layout of the beamline is shown in Fig. 2. The nominal beam energy is
$\sim$130 MeV, which was measured at the maximum mean momentum gain (MMMG)
phases of the gun and A1 as well as the zero-crossing [22] phase of AH1. We
refer to this working point as the reference working point and the
corresponding phases as the reference phases. The bunch charge is around 250
pC. The transverse deflecting structure (TDS) and the dipole magnet were used
to measure the longitudinal phase-space at a resolution of about 0.047
ps/pixel and 0.0031 MeV/pixel. We collected data for two different working
points (WPs). For WP1, the phases of the gun, A1 and AH1 were uniformly
sampled within $\pm$ 3 degrees, $\pm$ 6 degrees and $\pm$ 6 degrees relative
to the respective reference phases. It is worth mentioning that the actual
MMMG phase of A1 and the zero-crossing phase of AH1 shift as the gun phase
varies due to the time of flight change. For WP2, AH1 was switched off and the
gradient of A1 was reduced accordingly to keep the nominal beam energy at
$\sim$130 MeV. The sample ranges of the gun and A1 phases remain the same.
Figure 2: Schematic of the European XFEL photoinjector and its diagnostic
beamline. The phases of the gun, the 1.3 GHz cryomodule (A1) and the 3.9 GHz
cryomodule (AH1) are used as input to predict the image on the screen. The
laser heater was switched off during the experiment. Figure 3: Statistics of
the data for WP1 (the first column) and WP2 (the second column): (a-b)
Histograms of the x and y coordinates of the centers of mass for the
preprocessed images. (c) Histogram of the minimum Euclidean distances between
the input phase vectors of each data point and the remaining ones.
### III.2 Data analysis
The original image size is 1750 x 2330 pixels. After background subtraction
and normalization, all the pixel values below 0.01 were set to 0. In order to
have a reasonable training time during our study with limited computational
resources, all the images were slightly cropped at the same locations and then
downsampled to 768 x 1024 pixels. The autoencoder was implemented and trained
using the ML framework TensorFlow [23] version 2.3.1. For training, we adopted
the weight initialization in [24] and Adam optimizer [25] with a fixed
learning rate of 1e-3 and the training was terminated after 600 epochs. In
total, 3,000 shots were collected for each working point. 80% of the data were
used for training and the rest were used for testing. As mentioned previously,
the proposed autoencoder does not require the phase-space distribution to be
centered. Fig. 3(a) and (b) show the distributions of the x and y coordinates
of the centers of mass, respectively, for the preprocessed images. Evidently,
the centers of mass are distributed over a wide area of 160 x 46 pixels for WP1 and
122 x 52 pixels for WP2.
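The preprocessing chain described above can be sketched as follows. The exact crop and the downsampling method are not specified in the paper, so cropping to a multiple of the pooling factor and average pooling are assumptions.

```python
import numpy as np

def preprocess(image, background, threshold=0.01, factor=2):
    """Background subtraction, peak normalization, a 0.01 noise floor,
    then downsampling by average pooling (assumed method)."""
    img = np.clip(image.astype(float) - background, 0.0, None)
    img /= max(img.max(), 1e-12)          # normalize to [0, 1]
    img[img < threshold] = 0.0            # suppress residual noise
    h = (img.shape[0] // factor) * factor  # crop to a pooling multiple
    w = (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))
```

Applied to the 1750 x 2330 camera frames after a slight crop, this pipeline yields the 768 x 1024 images used for training.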
In machine learning, it is crucial that the information of the test dataset
should not be leaked into the training dataset in order to avoid overfitting
of the model. Specifically, the input phase vectors in the test dataset should
not appear again in the training dataset in this study. Fig. 3(c) shows that
there is no duplicated phase vector in the data for both WP1 and WP2.
Therefore, randomly split training and test datasets will not contain the
same phase vector.
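The duplicate check of Fig. 3(c) amounts to computing, for every phase vector, the Euclidean distance to its nearest other vector; a zero entry would flag a duplicated setting that could leak between the training and test splits. A brute-force sketch (adequate for a few thousand shots; larger datasets would call for a k-d tree):

```python
import numpy as np

def min_neighbor_distance(phases):
    """Minimum Euclidean distance from each input phase vector to any
    other one, as in Fig. 3(c)."""
    # Pairwise distance matrix over all phase vectors (N x N).
    d = np.linalg.norm(phases[:, None, :] - phases[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore each vector's distance to itself
    return d.min(axis=1)
```

A histogram of the returned distances with no mass at zero confirms that a random train/test split cannot share an input phase vector.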
Figure 4: (a) Example of an entire predicted image. The relative phases of
gun, A1 and AH1 are -1.17 degree, -1.38 degree and 0.04 degree, respectively.
(b-d) Longitudinal phase-spaces cropped from the measured image, the predicted
image and the image predicted by the model using MSE as the loss function,
respectively. (e-g) Comparisons of the current profiles, the energy spectra
and the RMS slice energy spreads $\sigma_{E}$ between the longitudinal phase-
spaces shown in (b-d). Figure 5: Comparisons of the measured and the
predicted longitudinal phase-spaces, current profiles, energy spectra and the
RMS slice energy spreads $\sigma_{E}$ for two shots with high peaks in the
energy spectra. (a) The relative phases of the gun, A1 and AH1 are -0.59
degree, -0.33 degree and -2.76 degree, respectively. (b) The relative phases
of the gun and A1 are -2.60 degree and 0.20 degree, respectively. AH1 was
switched off.
An example image predicted by the model trained on WP1 data is shown in Fig.
4(a). The average multi-scale SSIM given by Eq. (2) over the whole test
dataset is calculated to be as high as $\sim$0.997. The model successfully
predicts the electron distribution recorded on the screen with a clean
background. The predicted longitudinal phase-space shown in Fig. 4(c) and the
measured one shown in Fig. 4(b) agree very well at different longitudinal
positions of the bunch, which have experienced different non-linear processes
while traveling through the beamline. We also trained another model to
demonstrate the influence of the loss function. The second model has the same
structure as the first one but uses MSE as the loss function. The phase-space
shown in Fig. 4(d) is apparently blurrier than the one shown in Fig. 4(c).
Fig. 4(e-g) further compare the current profiles, the energy spectra and the
RMS slice energy spreads of the longitudinal phase-spaces shown in Fig.
4(b-d). The predictions all agree excellently with the measurements except the
slice energy spread along the first half of the bunch. Indeed, this discrepancy
is visible in the sharpness of the image in the corresponding region.
This is understandable because the input does not cover the complete state of
the photoinjector. For example, the arrival time jitter of the photocathode
laser [26] has a non-negligible impact on these regions which possess only a
few pixels.
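The current profile and energy spectrum compared in Fig. 4(e-f) are simple projections of the phase-space image. A sketch using the calibrations quoted in Sec. III.1 (0.047 ps/pixel and 0.0031 MeV/pixel); the axis convention (energy along axis 0, time along axis 1) is an assumption:

```python
import numpy as np

def extract_profiles(phase_space, ps_per_px, mev_per_px, charge_pc):
    """Project a longitudinal phase-space image onto its axes.  With the
    bunch charge in pC and the time calibration in ps/pixel, the current
    comes out directly in amperes (pC/ps = A)."""
    density = phase_space / phase_space.sum()        # normalized charge density
    current = charge_pc * density.sum(axis=0) / ps_per_px   # A
    spectrum = density.sum(axis=1) / mev_per_px             # 1/MeV
    return current, spectrum
```

By construction the current integrates back to the bunch charge and the spectrum to unity, which makes the projections easy to validate.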
The ability to measure high peak currents is of critical importance for a
free-electron laser facility. Although all the current profiles in this study
resemble each other, the energy spectra vary dramatically during the phase scan. Fig.
5(a-b) show two typical results with high peaks in the energy spectra. Another
autoencoder was trained on WP2 data and the training was terminated after 300
epochs. In Fig. 5(a), the height of the peak is underestimated by about 20%
while the slice energy spread is overestimated by less than 20%. In Fig. 5(b),
the height of the peak is underestimated by about 14%, and the slice energy
spread is only slightly overestimated at the centre of the bunch. It should be
noted that the peak shown in Fig. 5(a) is twice as high as that shown in Fig.
5(b) due to the effect of AH1. As explained above, the precision of the model
will decrease as the number of pixels which represent the distribution
decreases. Nevertheless, the prediction and the measurement agree well even in
these extreme cases. Consequently, it can be inferred that the model is able
to predict longitudinal phase-spaces with high peak currents in the scenario
where the parameter change results in a dramatic change of the current
profile while the energy spectrum is stable.
### III.3 More on the loss function
As discussed previously, the coefficient $\alpha_{j}$ in Eq. (2) is critical
to the performance of the model. We deliberately chose $\alpha_{j}<1$ to avoid
overfitting on a single scale of the image. In other words, the model is not
expected to generate an exact prediction, because shot-to-shot jitters of
machine parameters, such as the arrival time of the photocathode laser, are not
available as input. To illustrate the outcome of overfitting, we trained a
model using the single-scale SSIM ($M=0$ and $\alpha_{0}$ = 1.0) as the loss
function. Namely, we ask the model to learn an exact mapping between the phase
vector and the image. A typical result is shown in Fig. 6. The predicted
longitudinal phase-space is indeed close to the measured one except that the
distribution is twisted along the longitudinal axis. Moreover, the agreement
between the predicted and measured current profiles is also not as good as the
result shown in Fig. 5(a).
The characteristics of the sliding window $\mathbf{p}$ also affect the
performance of the model. The standard SSIM uses a Gaussian sliding window of
size 11 x 11 pixels. It is found that the performance of the model trained
with the uniform sliding window is slightly better than the one trained with
the Gaussian sliding window in terms of the current profile and the energy
spectra, although the latter generates a smoother image.
Figure 6: (a) Prediction of the shot shown in Fig. 5(a) by the model using
SSIM as the loss function. (b) Comparison of the current profiles between the
predicted longitudinal phase-space in (a) and the measured one shown in Fig.
5(a).
## IV Scalability and explicability
The design of the autoencoder aims at clearly separating the functions of the
encoder and the decoder. Ideally, the encoder takes the input and generates
the latent features which contain information about the phase-space of the
electron bunch. The decoder translates the latent features into the
corresponding diagnostic signal, which is the image on the screen in this
study. This design leads to a scalable and explicable model for a complex
system because of parameter sharing. On the one hand, it is desirable to use
the same latent features as the input for more decoders which model various
beam diagnostics. On the other hand, different encoders can share a common
decoder, as illustrated in Fig. 7(a), allowing for integrating multiple
distinct working points into a single model. Separating the encoders for
different working points is also practically necessary, because the time
interval between the data collections of two working points can be
significantly long so that machine parameters not used as input may have
changed due to long-term phenomena such as drift.
Figure 7: (a) General architecture of sharing a decoder with two encoders.
(b) Prediction of the shot shown in Fig. 5(b) using a model of which only the
encoder was trained. (c-d) Comparisons of the current profiles and the energy
spectra between the predicted longitudinal phase-space in (b) and the measured
one shown in Fig. 5(b).
To prove the concept of the design, we utilized the trained decoder for WP1 to
train a model for WP2 from scratch. The weights in the decoder were frozen
during training. Namely, only the encoder was trained. The results are shown
in Fig. 7(b-d). Although the decoder had not seen any data taken with AH1
switched off before, it can still translate the latent features to the screen image
reasonably well. The performance of the model becomes significantly better
when the decoder is fine-tuned as well. In the long run, the decoder is
expected to become representative enough after being trained on sufficient
data. As a result, when a new working point is introduced, only a new encoder
needs to be trained, instead of retraining the whole model on all the existing data.
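The frozen-decoder training can be illustrated with a deliberately tiny NumPy model: gradient steps are applied to the encoder weights only, while the decoder weights, standing in for the pretrained WP1 decoder, are never touched. The shapes, the linear decoder and the tanh encoder are toy assumptions, not the actual network.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy shapes: 3 phases -> 8 latent features -> a 16-pixel "image".
W_enc = rng.normal(scale=0.5, size=(3, 8))     # new WP2 encoder, trainable
W_dec = rng.normal(scale=0.5, size=(8, 16))    # "WP1" decoder, kept frozen
W_dec_frozen = W_dec.copy()

x = rng.normal(size=(64, 3))                   # WP2 phase vectors
y = rng.normal(size=(64, 16))                  # WP2 target images, flattened

def loss():
    return np.mean((np.tanh(x @ W_enc) @ W_dec - y) ** 2)

loss_before = loss()
lr = 0.02
for _ in range(500):
    z = np.tanh(x @ W_enc)                     # encoder forward pass
    g_out = 2.0 * (z @ W_dec - y) / len(x)     # d(MSE)/d(prediction)
    g_z = (g_out @ W_dec.T) * (1.0 - z ** 2)   # backprop through tanh
    W_enc -= lr * x.T @ g_z                    # gradient step, encoder only
    # W_dec is deliberately never updated: the shared decoder stays frozen
```

In a framework like Keras the same effect is obtained by setting the decoder layers non-trainable before compiling the combined model.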
## V Conclusion
In summary, we have demonstrated modeling of the longitudinal diagnostic
beamline at the injector of European XFEL using a deep learning-based
autoencoder. After being trained only on experimental data, the autoencoder is
capable of making high-fidelity predictions of megapixel images used for
longitudinal phase-space measurement with RF phases as input. The prediction
significantly outperforms existing methods and is orders of magnitude faster
than standard beam dynamics simulation. The longitudinal phase-space extracted
from the predicted image agrees very well with the measurement not only
visually, but also on important physical properties such as the current
profile, the energy spectrum and the RMS slice energy spread. Due to the
constraint of computational resources, the original images were downsampled by
a factor of two. This downsampling can be avoided by utilizing a
state-of-the-art graphics card or a distributed training strategy. Thus, the
full-sized camera images can be used to train the model without losing any
information. In addition, a pragmatic way has been proposed to model a
photoinjector with various diagnostics and working points using deep neural
networks. We have shown that the autoencoder is scalable and explicable by
sharing the same decoder with encoders used for different setups of the
photoinjector. Moreover, the influences of the loss function which drives the
training of the autoencoder have been discussed in depth. We conclude that the
impact of the machine jitter can be mitigated by choosing proper values of the
hyperparameters in the loss function. Conversely, the values of the
hyperparameters should be adapted to improve the accuracy of the prediction if
the machine jitter is negligible.
Because neither the autoencoder nor the loss function depends on any
characteristics of an RF photoinjector or the longitudinal phase-space of an
electron bunch, we expect this architecture to be generalized to many other
image-based diagnostics, not only for accelerators but also for other types of
scientific facilities. Looking forward, the autoencoder can be further
extended to include more diagnostics (decoders) for the longitudinal phase-
space as well as the transverse phase-space, with the ultimate goal of
building a complete virtual photoinjector using experimental data.
## References
* Decking _et al._ [2020] W. Decking et al., A MHz-repetition-rate hard X-ray free-electron laser driven by a superconducting linear accelerator, Nat. Photonics 14, 391 (2020).
* Pascarelli _et al._ [2020] S. Pascarelli, S. Molodtsov, and T. Tschentscher, Creating a diverse international user facility., Nat. Rev. Phys. 2, 337 (2020).
* Sanchez-Gonzalez _et al._ [2017] A. Sanchez-Gonzalez et al., Accurate prediction of x-ray pulse properties from a free-electron laser using machine learning, Nat. Commun. 8, 15461 (2017).
* Emma _et al._ [2018] C. Emma, A. Edelen, M. J. Hogan, B. O’Shea, G. White, and V. Yakimenko, Machine learning-based longitudinal phase space prediction of particle accelerators, Phys. Rev. Accel. Beams 21, 112802 (2018).
* Edelen _et al._ [2020] A. Edelen, N. Neveu, M. Frey, Y. Huber, C. Mayes, and A. Adelmann, Machine learning for orders of magnitude speedup in multiobjective optimization of particle accelerator systems, Phys. Rev. Accel. Beams 23, 044601 (2020).
* Ren _et al._ [2020] X. Ren, A. Edelen, A. Lutman, G. Marcus, T. Maxwell, and D. Ratner, Temporal power reconstruction for an x-ray free-electron laser using convolutional neural networks, Phys. Rev. Accel. Beams 23, 040701 (2020).
* Xu _et al._ [2020] X. Xu, Y. Zhou, and Y. Leng, Machine learning based image processing technology application in bunch longitudinal phase information extraction, Phys. Rev. Accel. Beams 23, 032805 (2020).
* Tennant _et al._ [2020] C. Tennant, A. Carpenter, T. Powers, A. Shabalina Solopova, L. Vidyaratne, and K. Iftekharuddin, Superconducting radio-frequency cavity fault classification using machine learning at jefferson laboratory, Phys. Rev. Accel. Beams 23, 114601 (2020).
* [9] S. Nagaitsev et al., Accelerator and beam physics research goals and opportunities, arXiv:2101.04107.
* Scheinker _et al._ [2018] A. Scheinker, A. Edelen, D. Bohler, C. Emma, and A. Lutman, Demonstration of model-independent control of the longitudinal phase space of electron beams in the linac-coherent light source with femtosecond resolution, Phys. Rev. Lett. 121, 044801 (2018).
* Emma _et al._ [2021] C. Emma, A. Edelen, A. Hanuk, B. O’Shea, and A. Scheinker, Virtual diagnostic suite for electron beam prediction and control at facet-ii, Information 12, 61 (2021).
* Moody _et al._ [2018] N. A. Moody, K. L. Jensen, A. Shabaev, S. G. Lambrakos, J. Smedley, D. Finkenstadt, et al., Perspectives on designer photocathodes for x-ray free-electron lasers: influencing emission properties with heterostructures and nanoengineered electronic states, Phys. Rev. Applied 10, 047002 (2018).
* Chen _et al._ [2020] Y. Chen, I. Zagorodnov, and M. Dohlus, Beam dynamics of realistic bunches at the injector section of the european x-ray free-electron laser, Phys. Rev. Accel. Beams 23, 044201 (2020).
* Qiang _et al._ [2017] J. Qiang, Y. Ding, P. Emma, Z. Huang, D. Ratner, T. O. Raubenheimer, M. Venturini, and F. Zhou, Start-to-end simulation of the shot-noise driven microbunching instability experiment at the linac coherent light source, Phys. Rev. Accel. Beams 20, 054402 (2017).
* Kingma and Welling [2019] D. P. Kingma and M. Welling, An introduction to variational autoencoders, Foundations and Trends® in Machine Learning 12, 307 (2019).
* [16] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, Generative adversarial networks, arXiv:1406.2661.
* [17] V. Dumoulin and F. Visin, A guide to convolution arithmetic for deep learning, arXiv:1603.07285v2.
* Goodfellow _et al._ [2016] I. Goodfellow, Y. Bengio, and A. Courville, _Deep Learning_ (MIT Press, 2016) http://www.deeplearningbook.org.
* Ledig _et al._ [2017] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, Photo-realistic single image super-resolution using a generative adversarial network, in _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ (2017) pp. 105–114.
* Wang _et al._ [2004] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, Image quality assessment: From error visibility to structural similarity, IEEE Trans. Image Process. 13, 600 (2004).
* Brinker [2016] F. Brinker, Commissioning of the European XFEL Injector, in _7th International Particle Accelerator Conference_ (2016) p. TUOCA03.
* Akre _et al._ [2001] R. Akre, L. Bentson, P. Emma, and P. Krejcik, A transverse rf deflecting structure for bunch length and phase space diagnostics, in _PACS2001. Proceedings of the 2001 Particle Accelerator Conference (Cat. No.01CH37268)_, Vol. 3 (2001) pp. 2353–2355 vol.3.
* [23] M. Abadi et al., Tensorflow: Large-scale machine learning on heterogeneous distributed systems, arXiv:1603.04467v2, software available from tensorflow.org.
* He _et al._ [2015] K. He, X. Zhang, S. Ren, and J. Sun, Delving deep into rectifiers: Surpassing human-level performance on imagenet classification, in _2015 IEEE International Conference on Computer Vision (ICCV)_ (2015) pp. 1026–1034.
* [25] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, arXiv:1412.6980.
* Winkelmann _et al._ [2019] L. Winkelmann, A. Choudhuri, H. Chu, I.Hartl, C. Li, C. Mohr, J. Mueller, F. Peters, S. Pfeiffer, S. Salman, and U. Grosse-Wortmann, The European XFEL Photocathode Laser, in _39th Free Electron Laser Conference_ (2019) p. WEP046.
Computational Fluid Dynamics (CFD) simulations are performed to investigate the impact of adding a grid to a two-inlet dry powder inhaler (DPI). The purpose of the paper is to show the importance of the correct choice of closure model and modeling approach, as well as to perform validation against particle dispersion data obtained from experimental studies and flow velocity data obtained from particle image velocimetry (PIV) experiments.
CFD simulations are performed using the Ansys Fluent 2020R1 software package. Two RANS turbulence models (realisable $k$-$\epsilon$ and $k$-$\omega$ SST) and the Stress Blended Eddy Simulation (SBES) models are considered. Lagrangian particle tracking for both carrier and fine particles is also performed.
Excellent comparison with the PIV data is found for the SBES approach and the particle tracking data are consistent with the dispersion results, given the simplicity of the assumptions made.
This work shows the importance of selecting the correct turbulence modelling approach and boundary conditions to obtain good agreement with PIV data for the flow-field exiting the device. With this validated, the model can be used with much higher confidence to explore the fluid and particle dynamics within the device.
Keywords: dry powder inhaler, CFD, turbulence models, SBES, particle tracking
§ ABBREVIATIONS
API - Active Pharmaceutical Ingredients
CC - Curvature Correction
CFD - Computational Fluid Dynamics
DPI - Dry Powder Inhaler
DPM - Discrete Phase Model
FPF - Fine Particle Fraction
LDV - Laser Doppler Velocimetry
LES - Large Eddy Simulation
LRN - Low Reynolds Number
NSE - Navier-Stokes Equations
PIV - Particle Image Velocimetry
RANS - Reynolds-Averaged Navier-Stokes
SBES - Stress Blended Eddy Simulation
SRS - Scale-Resolving Simulation
SST - Shear Stress Transport
URANS - Unsteady Reynolds-Averaged Navier-Stokes
WALE - Wall-Adapting Local Eddy-viscosity
§ INTRODUCTION
Pharmaceutical aerosol generated through a dry powder inhaler (DPI) is a multi-phase flow comprising a continuous phase (air) and a disperse phase (particles), which contains the active pharmaceutical ingredients (API). During aerosolization, there is an interaction between the two phases - the air flow contributes to the dispersion and deposition of the particles, and the presence and motion of particles modulate the air flow-field. The transition of the local flow from laminar to turbulent and the high volume fraction of particles near the release point, relative to the fluid volume in a DPI, lead to complex particle-flow interactions. In addition, these particle-flow interactions are coupled with the device design, the inhalation flow, and the formulation and properties of the drug, which further increases the complexity. Experimental investigations of these phenomena face significant practical challenges, thus computational modelling of the fluid flow and particle dynamics has been performed to study these processes and optimize device delivery [1, 2, 3].
The modelling of the continuous phase of a DPI has been performed using Computational Fluid Dynamics (CFD), which has traditionally involved solving the Reynolds-Averaged Navier-Stokes (RANS) equations numerically, with suitable turbulence closure models. These equations are time-averaged forms of the governing continuity and momentum equations (Navier-Stokes equations (NSE)), and the turbulence model serves to close this system of mean-flow equations. However, time-averaging leads to a loss of information and some turbulence models have limitations in accurately modelling the turbulent swirling flows that are inherent in a DPI [4]. These issues can be mitigated by using Large Eddy Simulation (LES), which solves the filtered NSE and can resolve large-scale turbulence eddies and detailed flow structures, depending on the applied local filter width. LES has been shown to provide higher-fidelity information about the flow field compared with RANS, but it has not been widely used for DPI modelling because of the higher computational requirements, especially if it is applied in boundary layers [5].
One of the earliest CFD studies on DPIs was conducted by Coates et al. [6] in which they studied the flow-field and particle trajectories in the Aerolizer® DPI for different design parameters of the inhaler mouthpiece and grid. The flow-field was simulated using the RANS approach with the $k$-$\omega$ Shear Stress Transport (SST) turbulence model [7] and with particles tracked using a Lagrangian approach. Flow field validation was carried out by comparing the simulation results with laser Doppler velocimetry (LDV) data at the exit of the device. An increase in the size of the grid openings reduced the flow straightening effect, and also the turbulence intensity, just downstream of the grid. Consequently, particle collisions with the grid also decreased, but led to an increase in particle-wall collisions in the mouthpiece. This balancing effect, of lower turbulence intensity and particle-grid collisions with higher particle-wall collisions in the mouthpiece, was found to result in similar values of fine particle fraction (FPF) for these design changes.
In a follow-up study on the effect of flow rates on DPI performance [8], they reported the expected increase of turbulence intensity, integral scale strain rates and particle-wall collisions with an increase in air flow rates. This led to an improvement in powder de-agglomeration and thus its dispersion in the flow, but only up to a flow rate of 65 l/min. A later study by Coates et al. [9] on the effect of tangential inlet size on the inhaler flow-field showed that a reduction of inlet area size resulted in higher turbulence intensities and velocity of particle-wall collisions in the region just downstream of the inlets.
A RANS approach using the $k$-$\omega$ SST turbulence model was used by Donovan et al. [10] to study the flow-field and particle trajectories in the Aerolizer® and Handihaler® DPI geometries. The particles were modelled using a Stokesian drag law with non-spherical corrections to account for particle shape effects. The swirling flow in the Aerolizer® intensified particle-wall collisions, which led to an improvement in drug detachment, whereas the absence of swirling flow in the Handihaler® led to fewer particle collisions with the inhaler wall, and thus lower aerosol performance. It was also shown that increasing the mean particle diameter increased the number of particle-wall collisions due to the increased Stokes number, leading to more ballistic trajectories.
RANS with various turbulence models (standard $k$-$\epsilon$, RNG $k$-$\epsilon$ and $k$-$\omega$ SST) was used by Milenkovic et al. [5] to model the flow in a Turbuhaler® DPI geometry. They also used LES, but for only a single parametric case, which was then compared with the RANS solutions. The LES-generated radial and tangential flows within the device showed an enhanced presence of eddies and secondary flow structures, and were most similar to those obtained with the $k$-$\omega$ SST model.
In a later study, Milenkovic et al. [11] modelled the dynamic flow in the same DPI geometry instead of a steady flow. This dynamic flow comprised an initial rapid increase of flow rate that gradually plateaued to a steady flow rate, and was simulated by imposing dynamic outlet pressures. They showed that the normalised dynamic flow-field velocities were similar for peak inspiratory flow rates (PIFR) of 30, 50 and 70 l/min.
A Lagrangian approach with one-way coupling was used by Sommerfeld and Schmalfuß [12] to determine the fluid stresses experienced by the carrier particles along their path through a DPI. The RANS equations with the $k$-$\omega$ SST turbulence model were solved for steady flow through the inhaler. Their results indicated that wall collisions largely prevailed in particle motion, wherein de-agglomeration of drug powder mainly occurred due to wall impacts in the swirl chamber and with the grid placed just after it. The wall-collision frequency of the particles was found to increase with particle size due to their increased inertia, but this reduced their wall-impact velocities.
Longest et al. [13] performed CFD simulations using the low Reynolds number (LRN) $k$-$\omega$ turbulence model and employed a Lagrangian particle tracking algorithm to predict individual particle trajectories and determine particle interaction with the mean turbulent flow-field. Six different inhaler designs were studied and they explored both turbulence and impaction as potential particle break-up mechanisms. It was found that turbulence was the primary de-aggregation mechanism for carrier-free particles, with de-aggregation favoured by high turbulence kinetic energy, long exposure times, and small characteristic eddy length scales. However, in a later study by Longest and Farkas [14], on powder dispersion in a dose aerosolization and containment unit, they found an undesirable increase in aerodynamic diameter when flow turbulence was increased.
It is important to keep in mind that CFD simulations can only be used with confidence once they have been validated. It is for this reason that we are employing three complementary methods in our current investigation of the impact of inhaler design on performance. CFD can provide information on the flow field and particle behaviour both inside and outside of the inhaler; however, there are many uncertainties pertaining to turbulence modelling and the dynamics and break-up of particle agglomerates. Particle image velocimetry (PIV) studies provide high-quality data on the flow field outside of the device. Finally, in-vitro studies provide a means of studying device performance for a powder formulation and the interaction of the inhaled particle cloud with the respiratory tract. Ultimately,
models should reliably determine particle deposition inside the device as this in turn affects the determination of emitted FPF from simulations. The size, distribution and velocity of aerosol particles upon exiting the DPI mouthpiece govern their motion and deposition in the respiratory tract, which is of utmost importance in assessing the performance of the DPI.
In a previous study [15] we presented both PIV data and in-vitro studies for four different inhalers having two tangential inlets, six tangential inlets, two inlets with an inlet grid and two inlets with an exit grid. Given that the two and six inlet cases showed very similar results, in this paper we present a CFD study of the two inlet cases and compare our results with both the in-vitro and PIV data. The inhaler geometries studied here are shown in Figure <ref>.
DPI device models examined in this study
§ MATERIALS AND METHODS
§.§ PIV Setup
The PIV experimental setup, which the CFD model geometry replicates, is shown in Fig. <ref>. The DPI device models used in the PIV experiments were geometrically scaled up to three times the size of the original models shown in Fig. <ref>. Each model was placed in a tank with a closed-loop water flow system, wherein a steady water flow rate was maintained through the model to attain a Reynolds number of $\approx$ 8400. The Reynolds number is defined based on the average flow velocity at the DPI mouthpiece exit and the mouthpiece exit inner diameter. Two-component, two-dimensional (2C-2D) PIV measurements were performed in a longitudinal plane outside the DPI mouthpiece exit, within a downstream distance of four jet diameters. A detailed description of the PIV apparatus, methodology and associated measurement uncertainties is provided in Gomes dos Reis et al. [15].
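The Reynolds-number matching described above can be illustrated with a short calculation. This is a sketch only: the mouthpiece diameter used below is a placeholder, not the paper's value.

```python
# Sketch: mean jet-exit velocity required for a target Reynolds number,
# Re = U * D / nu. The diameter D is a hypothetical placeholder value.

def exit_velocity(Re, D, nu):
    """Mean exit velocity implied by Re = U * D / nu."""
    return Re * nu / D

NU_WATER = 1.0e-6  # kinematic viscosity of water at ~20 C, m^2/s
D = 0.03           # assumed scaled-up mouthpiece exit diameter, m

U = exit_velocity(8400, D, NU_WATER)
print(f"required mean exit velocity: {U:.3f} m/s")
```

Because the model is scaled up three times, matching Re in water requires only a modest flow velocity, which is convenient for PIV.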
PIV experimental setup
§.§ CFD Modelling Approach
In all cases time-dependent simulations were performed as, not unexpectedly, convergence to a steady flow could not be achieved. Therefore, simulations were started in steady mode to establish an initial flow field and then time-dependent simulations were performed. Once these had established realistic flow fields, transient statistics were evaluated to enable the mean flow velocities and the Reynolds stresses to be obtained for comparison with the PIV experimental data. All simulations were performed using Ansys® Fluent 2020R1 [16] and were run in double precision to eliminate rounding error.
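The transient statistics mentioned here amount to accumulating means and second moments of the velocity signal. A minimal sketch, using a synthetic signal rather than CFD data:

```python
import numpy as np

# Sketch: time-averaging an unsteady velocity signal to obtain the mean
# velocities and Reynolds stresses compared against PIV. The signal here is
# synthetic; in practice u and v would be sampled from the CFD solution.

rng = np.random.default_rng(0)
u = 1.0 + 0.10 * rng.standard_normal(10_000)  # axial component samples
v = 0.05 * rng.standard_normal(10_000)        # radial component samples

u_mean, v_mean = u.mean(), v.mean()
uu = np.mean((u - u_mean) ** 2)               # <u'u'> normal stress
vv = np.mean((v - v_mean) ** 2)               # <v'v'> normal stress
uv = np.mean((u - u_mean) * (v - v_mean))     # <u'v'> shear stress
```

In a solver these statistics are accumulated on the fly over many time steps once the initial transient has been flushed out.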
§.§.§ Turbulence Modelling
Based on the above literature review it was decided to investigate three different turbulence modelling approaches. The realisable $k$-$\epsilon$ [17] and the $k$-$\omega$ SST [7] models were chosen as being representative of the unsteady-RANS (URANS) modelling approaches. It is clear that the $k$-$\omega$ SST model is the most widely used; however, a $k$-$\epsilon$ model was also included as this approach is widely used in internal flow simulations. It is well known that these two-equation models do not capture swirling flow correctly, so both were solved with a Curvature Correction (CC) term included [18], as it has been shown to correctly capture the swirl profile in cyclones [19]. Whilst Reynolds stress models can in theory provide good solutions for swirling flows, they are renowned for being numerically stiff and hard to solve, so they were not investigated in this study.
In order to investigate the impact of using a Scale-Resolving Simulation (SRS) approach, simulations were made using the Stress Blended Eddy Simulation (SBES) approach [20] as this takes advantage of the best aspects of the RANS and LES approaches. In the near wall region, where the flow is attached and LES simulations are prohibitively expensive, the $k$-$\omega$ SST model provides the eddy viscosity. Away from the wall, in regions where the mesh is sufficiently fine, the model blends the eddy viscosity with that from an LES modelling approach. The subgrid-scale closure of the Wall-Adapting Local Eddy-viscosity (WALE) model [21] was used.
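Conceptually, SBES blends the two eddy viscosities through a shielding function. The actual shielding function is internal to the solver, so in this sketch it is simply an input parameter:

```python
# Sketch of the SBES blending idea: a shielding function f_s (1 in attached
# near-wall regions, 0 in LES regions) blends the RANS (SST) and LES (WALE)
# eddy viscosities. The real shielding function is internal to the solver.

def blended_nu_t(f_s, nu_t_rans, nu_t_les):
    return f_s * nu_t_rans + (1.0 - f_s) * nu_t_les
```

Near the wall (f_s = 1) the SST value is recovered; far from the wall, on a sufficiently fine mesh (f_s = 0), the WALE value is used.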
In all cases the computational mesh was constructed with sufficient inflation layers adjacent to the inhaler walls that the $y^+$ values were low enough for the flow to be resolved up to the wall in the $k$-$\omega$ models. Care was taken to ensure that the transition to SRS occurred where expected and that, in this case, the unresolved turbulence led to an eddy viscosity consistent with the LES approach. A recent study that highlights the best practices and checks to be performed can be consulted for more detail [22].
§.§.§ Particle Modelling
Once the flow was established, the Discrete Phase Model (DPM) was used to perform time-dependent particle tracking in the time-dependent flow for the SBES simulations, assuming a drag model appropriate for smooth spheres. The simulations were performed for a low particle loading using one-way coupling as the current work compares the flow field with PIV data in which the drug particles are absent. As the large scale turbulence structures are captured in these simulations, no additional turbulent dispersion was added. At the walls, particles were assumed to reflect with coefficients of restitution of 0.9 in the tangential direction and 0.7 in the normal direction, based on values determined for typical drug formulations [23]. User-defined functions were used to capture the number of impacts and the impact kinetic energy of the particles.
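The wall-reflection condition with the stated restitution coefficients can be sketched as follows; the vector decomposition is standard, and the coefficients are those given above:

```python
import numpy as np

# Sketch: particle rebound at a wall with direction-dependent restitution,
# as in the DPM wall boundary condition described above (0.7 normal,
# 0.9 tangential).

E_NORMAL, E_TANGENTIAL = 0.7, 0.9

def rebound(velocity, wall_normal):
    """Post-impact velocity for an incoming particle velocity at a wall."""
    v = np.asarray(velocity, dtype=float)
    n = np.asarray(wall_normal, dtype=float)
    v_n = np.dot(v, n) * n   # wall-normal component
    v_t = v - v_n            # tangential component
    return -E_NORMAL * v_n + E_TANGENTIAL * v_t
```

For example, a particle approaching a horizontal wall at [1, -2, 0] m/s rebounds at [0.9, 1.4, 0] m/s: the normal component is reversed and damped, the tangential one slightly reduced.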
Two different sets of particle tracking were performed. Firstly, 280 diameter particles were released from the spherical end cap of the inhaler (dosing cup) to represent the carrier particles, and their impact behaviour with the wall and grid (if present) was studied. Particle de-agglomeration occurs when carrier particles impact the wall or each other, knocking active drug particles off the carrier particle. Here we investigated the importance of wall impact by recording both the average number of wall impacts and the average impact kinetic energy of the particles. Secondly, 1.24 diameter particles were released from an annulus one nozzle diameter upstream of the mouthpiece exit, occupying the outer 20% of the device mouthpiece, to represent the fine particles. This simulation was made to investigate the subsequent dispersion of these particles assuming they had been released by wall impaction and had subsequently travelled along the wall region. In both cases a particle density of 1540 kg/m$^3$ was used, based on that for lactose [24].
§.§.§ Model Setup
The model geometry was created to mirror that of the PIV experiment, briefly described in Section <ref>, but with incompressible air at ambient conditions as the working fluid. The Reynolds number based on the jet diameter $D_a$ was 8400, as used experimentally. The geometry used, showing the external surface mesh, is given in Figure <ref>(a). A spherical region of ambient air is modelled around the inlet region, as it was found that applying boundary conditions at the inlets of the inhaler led to an over-constrained flow in that region. The air exiting the device enters a box, just as was used in the PIV experiments, in order to provide the same downstream flow domain and allow direct comparison of the jet behaviour with the experimental data. Figure <ref>(b) shows a section through the computational mesh for the case with a grid at the exit, showing the poly-hexcore structure used, with hexahedral mesh in the important central regions connected to the inflation mesh at the walls by a layer of polyhedra. Local mesh controls were applied to ensure good resolution where needed. Based on mesh studies, the final mesh comprised $\sim$1 million cells and $\sim$2 million nodes.
The adequacy of the inflation mesh was checked by examining the wall $y^+$ values. For the SST model $y^+ < 8$ over all walls, with most of the domain having $y^+ < 3$, meaning that the model was resolving the flow to the wall. For the realisable $k$-$\epsilon$, $ 11 < y^+ < 200$, which was consistent with the use of scalable wall functions.
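The $y^+$ requirement can also be estimated before meshing from a flat-plate skin-friction correlation; the inputs below are illustrative placeholders, not the mesh settings actually used:

```python
import math

# Sketch: a priori estimate of the first-cell y+ using the flat-plate
# skin-friction correlation cf = 0.026 * Re^(-1/7). Inputs are placeholders.

def y_plus(y_first, U, nu, Re):
    cf = 0.026 * Re ** (-1.0 / 7.0)   # skin-friction coefficient estimate
    u_tau = U * math.sqrt(cf / 2.0)   # friction velocity, m/s
    return y_first * u_tau / nu

yp = y_plus(1.0e-5, 4.2, 1.5e-5, 8400)  # assumed first cell height of 10 um
```

An estimate like this is only a starting point; the actual wall $y^+$ must still be checked on the converged solution, as done above.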
CFD model geometry: (a) Geometry ; (b) Mesh
At the inlet a total (gauge) pressure of 0 Pa was applied with a 1% turbulence intensity. At the exit the mass flow rate was specified to achieve the required Reynolds number. All walls were treated as smooth with no slip.
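Since $Re = \rho U D/\mu$ and $\dot{m} = \rho U (\pi D^2/4)$, the density cancels and the required mass flow rate follows directly from the target Reynolds number. A sketch, with a placeholder exit diameter:

```python
import math

# Sketch: exit mass flow rate giving a target jet Reynolds number.
# mdot = pi * Re * mu * D / 4 (density cancels). D is a placeholder value.

MU_AIR = 1.8e-5   # dynamic viscosity of air at ambient conditions, Pa*s
D = 0.01          # assumed mouthpiece exit diameter, m

def mass_flow(Re, mu, D):
    return math.pi * Re * mu * D / 4.0

mdot = mass_flow(8400, MU_AIR, D)
print(f"mass flow rate: {mdot:.3e} kg/s")
```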
To solve the equations the coupled solver was applied in time-dependent mode, as the very strong swirl made a segregated approach very hard to converge. Gradients were calculated using the Green-Gauss node-based method to achieve high accuracy. A second order differencing scheme was used for the pressure, a bounded central differencing scheme for momentum, a second order upwind scheme for the turbulence quantities and a bounded second order implicit scheme for the transient terms. The solution required the use of small time steps ($\sim$ 5) and typically 5–8 iterations per timestep.
§ RESULTS
§.§ Effect of Turbulence Model
Initially we investigated the effect of the choice of the turbulence modelling approach. Figure <ref> shows a comparison of the time-averaged axial $U$ and radial $V$ velocity components predicted by the CFD modelling with the PIV data. The axial and radial coordinates are represented by $x$ and $y$, respectively. The velocity components have been normalised by the jet-exit mean velocity $U_a$, and the spatial coordinates by the jet-exit diameter $D_a$. Comparisons are presented at two representative downstream lines, located just after the exit from the device and two diameters further downstream. It is evident that in all cases the SBES predictions are closer to the experimental data. In particular, the realisable $k$-$\epsilon$ URANS model tends to over-predict the back-flow at the device outlet, and the radial velocity distributions are much closer to the measured data for the SBES model. Given the importance of predicting the jet spreading rate, the use of the URANS models was discontinued.
§.§ Effect of the Grid
The impact of the grid on the flow field is shown in Figure <ref>, which presents the axial and swirl velocity components on a centre-plane. From Figure <ref>(a) it is apparent that the case with shows a large vortex breakdown region at the exit of the device which leads to back-flow in the central region and as a consequence the wide dispersion of the axial flow. The case shows much reduced jet spreading and the case shows focusing of the high velocity jet generated by the grid towards the central axis. Both flow fields for devices with grids are potentially beneficial in that they are likely to focus particles along the centre of the jet.
Impact of the turbulence model on the comparison with the PIV data. Mean velocities for the model : mean axial velocity at (a) $x/{D_a}$ = 0; (b) $x/{D_a}$ = 2; mean radial velocity at (c) $x/{D_a}$ = 0; (d) $x/{D_a}$ = 2; RKE; SST; SBES; PIV.
The swirl velocities, given in Figure <ref>(b), show the strong swirling flow exiting the device in the absence of a grid and that it is significantly reduced by the presence of the grid. In the case the region of strong swirl is small and this may have an effect on particle de-agglomeration, whereas the model shows strong swirl within the device being suppressed at the exit.
Flow-field contour plots: (a) mean axial velocity ; (b) mean swirl velocity
Validation of the above flow fields was performed via comparison with detailed PIV data. Figure <ref> shows a comparison of the mean axial and radial velocity components with the PIV data. It is evident that in all cases there is good agreement between simulations and experiment. Mean axial velocities are well predicted, with the worst agreement being a slight under-prediction of the central values at $x/D_a = 3$ for the case. There are also some differences in the radial velocity in this case, but these velocities are small and much less important in determining the flow field.
Mean axial and radial velocities for: (a) and (d) ; (b) and (e) ; (c) and (f) models; SBES: $x/{D_a}$ = 0, $x/{D_a}$ = 1, $x/{D_a}$ = 3; PIV: $x/{D_a}$ = 0, $x/{D_a}$ = 1, $x/{D_a}$ = 3.
Figures <ref> and <ref> show the axial and radial velocity fluctuations and Reynolds stress comparisons with the PIV data. The best agreement is observed for the case. However, whilst there are some deviations in the cases where grids are present, these are relatively small and are most pronounced close to the device in the case. In this case, small deviations of measuring locations and fabrication tolerances would have the most pronounced effect. What is clear is that the CFD results correctly capture the magnitude and trends of these quantities in all cases, providing confidence for it to be used to investigate the entire flow field.
RMS axial and radial fluctuating velocities for: (a) and (d) ; (b) and (e) ; (c) and (f) models; SBES: $x/{D_a}$ = 0, $x/{D_a}$ = 1, $x/{D_a}$ = 3; PIV: $x/{D_a}$ = 0, $x/{D_a}$ = 1, $x/{D_a}$ = 3.
§.§ Impact on the Pressure Drop
The measured pressure drop data are compared with the mean values obtained from the simulation in Figure <ref> for an air flow rate of 60 l/min. In the absence of a grid the values are very close; for the two cases with a grid the trend is correctly predicted but the value is under-predicted by about 35%. The reason for this is unclear but is most likely related to small differences between the CAD geometry used to construct the CFD model and the 3D-printed physical device model, and to the surface roughness of the physical model.
Reynolds shear-stress for: (a) ; (b) ; (c) models; SBES: $x/{D_a}$ = 0, $x/{D_a}$ = 1, $x/{D_a}$ = 3; PIV: $x/{D_a}$ = 0, $x/{D_a}$ = 1, $x/{D_a}$ = 3.
Pressure drop across the device models: Measured; CFD.
§.§ Influence on the Carrier Particles
As discussed in Section <ref>, carrier particles were released in the dosing cup of the device and their paths were tracked to collect data on spreading and wall impacts. Figure <ref>(a) shows the radial distribution of particles across the device at the exit and one jet-exit diameter downstream. For all three cases the exit distribution is very similar with particles clustered around the device wall. Even in the case with an there is sufficient swirl to keep the particles at the wall. However, once they exit the device there is a very clear difference in behaviour. The particles in the case have all moved in the radial direction by one jet-exit diameter and continue to move along that trajectory (data not shown). In the case there is a small amount of outward spreading and in the case there is spreading both inwards and outwards. The Stokes number for the particles is in the intermediate range ($\sim$0.3), so this behaviour is readily explained by the flow fields shown in Figure <ref>, as once a particle leaves the inhaler it will tend to follow its initial trajectory while slowly responding to the influence of the flow.
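The Stokes number referred to here is the ratio of the particle relaxation time to a convective flow timescale. A generic sketch, with illustrative inputs that are not the paper's values:

```python
# Sketch: Stokes number St = tau_p / tau_f, with the Stokes relaxation time
# tau_p = rho_p * d^2 / (18 * mu) and flow timescale tau_f = L / U.
# All numbers below are illustrative placeholders.

def stokes_number(rho_p, d, mu, U, L):
    tau_p = rho_p * d ** 2 / (18.0 * mu)  # particle relaxation time, s
    tau_f = L / U                          # convective flow timescale, s
    return tau_p / tau_f

st = stokes_number(1540.0, 20e-6, 1.8e-5, 4.2, 0.03)
```

An intermediate value (of order 0.1 to 1) means the particle neither follows the eddies faithfully nor ignores the flow entirely, consistent with the ballistic-but-responsive behaviour described above.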
Figure <ref>(b) shows the average number of wall impacts per particle for the three cases. In terms of particle impacts, the best-performing system is the case, followed by the and the worst is the case with , with the median number of impacts in these cases being 16, 11 and 8, respectively. Clearly, the presence of a grid promotes particle-wall impacts, but it is interesting that the case has the best performance in this sense. The same trend is present in the data for the mean particle impact energy in Figure <ref>(c), with the median value for the case being about twice that of the other two cases.
Cumulative distributions of particle variables: (a) Particle radial location, , , at $x/{D_a}$ = 0, , , at $x/{D_a}$ = 1 ; (b) Number of particle-wall impacts ; (c) Average particle-impact kinetic energy; , , .
Based on the above data, it is clear that a grid is important to reduce the particle spread and that the presence of a grid increases both the number of particle wall impacts and their energy. According to these predictions the device should perform best.
Cumulative distribution of radial location for the fine particles: , , at $x/{D_a}$ = 1, , , at $x/{D_a}$ = 5.
§.§ Influence on the Drug Particles
Figure <ref> shows the spreading of the fine particles once they exit the device. At one jet-exit diameter downstream, the particles in the case have already spread out around 1.5 jet diameters from the axis, whereas for the there is almost no spreading and there is a slight focusing effect in the case. At 5 jet-exit diameters downstream, the fine particles are spread over 5 jet diameters in the case, whereas the spreading is only 1.5 and 2 diameters for the and , respectively. Thus, if reduced dispersion, and consequently less mouth-cavity deposition of the active ingredient, is the aim, the device is to be preferred based on these results.
§ DISCUSSION
The objective of this paper was to perform CFD studies for a number of inhaler designs and to confirm the results with experimental data in order to determine the utility of appropriate CFD simulations. Of course, this can only be done if the simulation results are of high quality and the models used are correctly applied. It takes experience and significant knowledge to do this correctly, so we have tried to outline the important questions to ask when setting up models and checking the results. For example, it was found that the common practice of applying boundary conditions at the device inlets leads to a non-physical influence on the flow in a very important part of the device. Similarly, the impact of using the correct turbulence modelling approach is highlighted.
Whilst it is no surprise that these strongly swirling flows are time-dependent, it is clear that simply switching on transient flow, to change a RANS simulation to a URANS simulation, is not the correct approach. Doing this does indeed allow a transient simulation to be made and the high residuals associated with an unconverged steady state to be reduced, but the URANS approach does not provide a physical description of the turbulence structure [20]. What is now evident is that the earliest studies used relatively coarse computational meshes, lower order numerics and simpler turbulence models (without, for example, curvature correction terms), so that the flows often appeared much steadier than they do now because the swirl was artificially dissipated. The approach advocated here gives much more realistic turbulent flow fields, meaning that both the swirl behaviour and the impact of the flow on particle transport are captured much more accurately.
Validation against detailed PIV data has allowed the models to be assessed and it is clear that the SBES is a good approach, especially given the nature of the flow where there are significant regions of the flow domain occupied by attached boundary layers, which are known to be captured well using the SST model. The comparisons with PIV data presented herein provide very good validation of the modelling approach. This is important as CFD can then be used with confidence to explore the flow behaviour within the device itself, a region very difficult to access experimentally, and to screen ideas for new device designs.
The impact of the grid on mouth-cavity deposition is well captured in the simulations, as the results conform with the in-vitro results showing that there was a significant difference between the devices, with most deposition in the case and least in the case. The in-vitro studies showed that more drug remained in the device for the case, a parameter that was not assessed in this model. Moreover, the fine particle fraction (FPF) in the in-vitro study was similar amongst the devices, with values of 52.83% ± 3.45, 53.05% ± 7.17 and 56.25% ± 4.54 for the , and , respectively. From the CFD results presented here, the presence of the grid led to a higher mean number of impacts and increased impact kinetic energy of the particles, which is expected to translate into greater drug detachment from the carrier particles. Although there was a numerical increase in FPF, the increased number of particle-wall impacts observed in the CFD did not lead to a significant increase in FPF, as shown in a previous study [15]. During aerosolization, drug detachment from the carrier is thought to derive from both particle-wall and particle-particle collisions. From the CFD results, the case was predicted to have a better performance due to its greater de-agglomeration potential resulting from the higher number of particle-wall impacts. However, particle-particle collisions were not modelled in this study, which is the likely explanation for the differences observed between the CFD and in-vitro results.
§ CONCLUSIONS
This paper has shown that provided the correct modelling choices are made and the simulations are executed with the appropriate care and knowledge, CFD can provide significant insights into DPI performance. Simulations using the Stress Blended Eddy Simulation (SBES) approach are well suited for this task, which is supported by the very good agreement with the PIV data. This turbulence modelling choice is important as it allows the transient nature of the flow and the significant turbulence generation by highly swirling flows to be captured. This has a follow-on effect on the dispersion of the fine particles that have low Stokes numbers and follow the turbulent eddies. This work shows that it is possible to improve upon the use of RANS or URANS significantly without going to a full LES simulation. In particular, the proposed approach uses the optimal turbulence modelling approach in each zone: RANS in attached boundary layers at the walls and LES in the regions of separated flow and wakes. Use of pure LES is not practical as it requires locally refined meshes in all three dimensions at the wall if the boundary layer is to be captured correctly.
The simulations capture important experimental observations of the reduction in radial spreading of the flow and fine particles due to the presence of a grid, with the geometry performing best, in line with the reduced mouth-cavity deposition observed in the experiments.
Keeping in mind that the experiments did not use a throat geometry and the simulations did not model all aspects of the particle behaviour, specifically particle-particle interactions and particle detachment, the adopted CFD approach captured the dispersion data quite well.
§ ACKNOWLEDGMENTS
The research was supported by the Australian Research Council. The authors acknowledge the University of Sydney for providing High Performance Computing resources that have greatly contributed to the research results reported here (http://sydney.edu.au/researchsupport). The research also benefited from computational resources provided through the NCMAS, supported by the Australian Government. The computational facilities supporting this project included the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) at Monash.
[1]
Wong W, Fletcher DF, Traini D, Chan HK, Young PM.
The use of computational approaches in inhaler development.
Advanced Drug Delivery Reviews. 2012;64(4):312–322.
[2]
Islam N, Cleary MJ.
Developing an efficient and reliable dry powder inhaler for
pulmonary drug delivery - A review for multidisciplinary researchers.
Medical Engineering and Physics. 2012;34(4):409–427.
[3]
Sommerfeld M, Cui Y, Schmalfuß S.
Potential and constraints for the application of CFD combined with
Lagrangian particle tracking to dry powder inhalers.
European Journal of Pharmaceutical Sciences. 2019;128:299–324.
[4]
Yang Y, Knudsen Kaer S.
Comparison of Reynolds averaged Navier-Stokes based simulation and
large-eddy simulation for one isothermal swirling flow.
Journal of Thermal Science. 2012;21(2):154–161.
[5]
Milenkovic J, Alexopoulos AH, Kiparissides C.
Flow and particle deposition in the Turbuhaler: A CFD simulation.
International Journal of Pharmaceutics. 2013;448(1):205–213.
[6]
Coates MS, Fletcher DF, Chan HK, Raper JA.
Effect of design on the performance of a dry powder inhaler using
computational fluid dynamics. Part 1: Grid structure and mouthpiece length.
Journal of Pharmaceutical Sciences. 2004;93(11):2863–2876.
[7]
Menter FR.
Two-equation eddy-viscosity turbulence models for engineering applications.
AIAA Journal. 1994;32(8):1598–1605.
[8]
Coates MS, Chan HK, Fletcher DF, Raper JA.
Influence of air flow on the performance of a dry powder inhaler
using computational and experimental analyses.
Pharmaceutical Research. 2005;22(9):1445–1453.
[9]
Coates MS, Chan HK, Fletcher DF, Raper JA.
Effect of design on the performance of a dry powder inhaler using
computational fluid dynamics. Part 2: Air inlet size.
Journal of Pharmaceutical Sciences. 2006;95(6):1382–1392.
[10]
Donovan MJ, Kim SH, Raman V, Smyth HD.
Dry powder inhaler device influence on carrier particle performance.
Journal of Pharmaceutical Sciences. 2012;101(3):1097–1107.
[11]
Milenkovic J, Alexopoulos AH, Kiparissides C.
Deposition and fine particle production during dynamic flow in a dry
powder inhaler: A CFD approach.
International Journal of Pharmaceutics. 2014;461(1-2):129–136.
[12]
Sommerfeld M, Schmalfuß S.
Numerical analysis of carrier particle motion in a dry powder inhaler.
Journal of Fluids Engineering, Transactions of the ASME. 2016;138(4).
[13]
Longest PW, Son YJ, Holbrook L, Hindle M.
Aerodynamic factors responsible for the deaggregation of
carrier-free drug powders to form micrometer and submicrometer aerosols.
Pharmaceutical Research. 2013;30(6):1608–1627.
[14]
Longest W, Farkas D.
Development of a new inhaler for high-efficiency dispersion of
spray-dried powders using computational fluid dynamics (CFD) modeling.
AAPS Journal. 2019;21(2):1–15.
[15]
Gomes dos Reis L, Chaugule V, Fletcher DF, Young PM, Traini D, Soria J.
In-vitro and particle image velocimetry studies of dry powder inhalers.
International Journal of Pharmaceutics. 2021, in press.
[16]
Ansys® Fluent 2020R1;.
Available from:
[17]
Shih TH, Liou WW, Shabbir A, Yang Z, Zhu J.
A new k-$\epsilon$ eddy viscosity model for high Reynolds number
turbulent flows.
Computers and Fluids. 1995;24(3):227–238.
[18]
Smirnov PE, Menter FR.
Sensitization of the SST turbulence model to rotation and curvature
by applying the Spalart-Shur correction term.
Journal of Turbomachinery. 2009;131(4):1–8.
[19]
Alahmadi YH, Nowakowski AF.
Modified shear stress transport model with curvature correction for
the prediction of swirling flow in a cyclone separator.
Chemical Engineering Science. 2016;147:150–165.
[20]
Menter F.
Stress-blended eddy simulation (SBES)—A new paradigm in hybrid
RANS-LES modeling.
Notes on Numerical Fluid Mechanics and Multidisciplinary Design.
[21]
Nicoud F, Ducros F.
Subgrid-scale stress modelling based on the square of the velocity
gradient tensor.
Flow, Turbulence and Combustion. 1999;62(3):183–200.
[22]
Brown GJ, Fletcher DF, Leggoe JW, Whyte DS.
Application of hybrid RANS-LES models to the prediction of flow
behaviour in an industrial crystalliser.
Applied Mathematical Modelling. 2020;77:1797–1819.
[23]
Bharadwaj R, Smith C, Hancock BC.
The coefficient of restitution of some pharmaceutical
International Journal of Pharmaceutics. 2010;402(1-2):50–56.
[24]
Zuurman K, Riepma KA, Bolhuis GK, Vromans H, Lerk CF.
The relationship between bulk density and compactibility of lactose
International Journal of Pharmaceutics. 1994;102(1-3):1–9.
[1]
Wong W, Fletcher DF, Traini D, Chan HK, Young PM.
The use of computational approaches in inhaler development.
Advanced Drug Delivery Reviews. 2012;64(4):312–322.
[2]
Islam N, Cleary MJ.
Developing an efficient and reliable dry powder inhaler for
pulmonary drug delivery - A review for multidisciplinary researchers.
Medical Engineering and Physics. 2012;34(4):409–427.
[3]
Sommerfeld M, Cui Y, Schmalfuß S.
Potential and constraints for the application of CFD combined with
Lagrangian particle tracking to dry powder inhalers.
European Journal of Pharmaceutical Sciences. 2019;128:299–324.
[4]
Yang Y, Knudsen Kaer S.
Comparison of Reynolds averaged Navier-Stokes based simulation and
large-eddy simulation for one isothermal swirling flow.
Journal of Thermal Science. 2012;21(2):154–161.
[5]
Milenkovic J, Alexopoulos AH, Kiparissides C.
Flow and particle deposition in the Turbuhaler: A CFD simulation.
International Journal of Pharmaceutics. 2013;448(1):205–213.
[6]
Coates MS, Fletcher DF, Chan HK, Raper JA.
Effect of design on the performance of a dry powder inhaler using
computational fluid dynamics. Part 1: Grid structure and mouthpiece length.
Journal of Pharmaceutical Sciences. 2004;93(11):2863–2876.
[7]
Menter FR.
Two-equation eddy-viscosity turbulence models for engineering
AIAA Journal. 1994;32(8):1598–1605.
[8]
Coates MS, Chan HK, Fletcher DF, Raper JA.
Influence of air flow on the performance of a dry powder inhaler
using computational and experimental analyses.
Pharmaceutical Research. 2005;22(9):1445–1453.
[9]
Coates MS, Chan HK, Fletcher DF, Raper JA.
Effect of design on the performance of a dry powder inhaler using
computational fluid dynamics. Part 2: Air inlet size.
Journal of Pharmaceutical Sciences. 2006;95(6):1382–1392.
[10]
Donovan MJ, Kim SH, Raman V, Smyth HD.
Dry powder inhaler device influence on carrier particle
Journal of Pharmaceutical Sciences. 2012;101(3):1097–1107.
[11]
Milenkovic J, Alexopoulos AH, Kiparissides C.
Deposition and fine particle production during dynamic flow in a dry
powder inhaler: A CFD approach.
International Journal of Pharmaceutics. 2014;461(1-2):129–136.
[12]
Sommerfeld M, Schmalfuß S.
Numerical analysis of carrier particle motion in a dry powder
Journal of Fluids Engineering, Transactions of the ASME. 2016;138(4).
[13]
Longest PW, Son YJ, Holbrook L, Hindle M.
Aerodynamic factors responsible for the deaggregation of
carrier-free drug powders to form micrometer and submicrometer aerosols.
Pharmaceutical Research. 2013;30(6):1608–1627.
[14]
Longest W, Farkas D.
Development of a new inhaler for high-efficiency dispersion of
spray-dried powders using computational fluid dynamics (CFD) modeling.
AAPS Journal. 2019;21(2):1–15.
[15]
Gomes dos Reis L, Chaugule V, Fletcher DF, Young PM, Traini D, Soria J.
In-vitro and particle image velocimetry studies of dry powder
International Journal of Pharmaceutics. 2021, in-press;.
[16]
Ansys® Fluent 2020R1;.
Available from:
[17]
Shih TH, Liou WW, Shabbir A, Yang Z, Zhu J.
A new k-$\epsilon$ eddy viscosity model for high reynolds number
turbulent flows.
Computers and Fluids. 1995;24(3):227–238.
[18]
Smirnov PE, Menter FR.
Sensitization of the SST turbulence model to rotation and curvature
by applying the Spalart-Shur correction term.
Journal of Turbomachinery. 2009;131(4):1–8.
[19]
Alahmadi YH, Nowakowski AF.
Modified shear stress transport model with curvature correction for
the prediction of swirling flow in a cyclone separator.
Chemical Engineering Science. 2016;147:150–165.
[20]
Menter F.
Stress-blended eddy simulation (SBES)—A new paradigm in hybrid
RANS-LES modeling.
Notes on Numerical Fluid Mechanics and Multidisciplinary Design.
[21]
Nicoud F, Ducros F.
Subgrid-scale stress modelling based on the square of the velocity
gradient tensor.
Flow, Turbulence and Combustion. 1999;62(3):183–200.
[22]
Brown GJ, Fletcher DF, Leggoe JW, Whyte DS.
Application of hybrid RANS-LES models to the prediction of flow
behaviour in an industrial crystalliser.
Applied Mathematical Modelling. 2020;77:1797–1819.
[23]
Bharadwaj R, Smith C, Hancock BC.
The coefficient of restitution of some pharmaceutical
International Journal of Pharmaceutics. 2010;402(1-2):50–56.
[24]
Zuurman K, Riepma KA, Bolhuis GK, Vromans H, Lerk CF.
The relationship between bulk density and compactibility of lactose
International Journal of Pharmaceutics. 1994;102(1-3):1–9.
|
1 LifeEye LLC, Tunis, Tunisia, <EMAIL_ADDRESS>, https://www.lifeye.io/
2 Faculty of Economics and Management of Sfax, University of Sfax, 3018 Sfax, Tunisia, <EMAIL_ADDRESS>
3 Department of Computer Science & Engineering, Qatar University, Doha, Qatar, <EMAIL_ADDRESS>
# Dairy Cow rumination detection: A deep learning approach
Safa Ayadi 1,2, Ahmed Ben Said 1,3, Rateb Jabbar 1,3, Chafik Aloulou 2, Achraf Chabbouh 1, Ahmed Ben Achballah 1
###### Abstract
Cattle activity is an essential index for monitoring the health and welfare of ruminants, and changes in livestock behavior are a critical indicator for the early detection and prevention of several diseases. Rumination behavior is a significant variable for tracking the development and yield of animal husbandry. Therefore, various monitoring methods and measurement equipment have been used to assess cattle behavior. However, these modern attached devices are invasive, stressful, and uncomfortable for the cattle and can negatively influence the welfare and diurnal behavior of the animal. Multiple research efforts have addressed the problem of rumination detection by adopting new methods relying on visual features. However, they only use a few postures of the dairy cow to recognize rumination or feeding behavior. In this study, we introduce an innovative monitoring method using Convolutional Neural Network (CNN)-based deep learning models. The classification process is conducted under two main labels, ruminating and other, using all cow postures captured by the monitoring camera. Our proposed system is simple and easy to use, and is able to capture long-term dynamics using a compact representation of a video in a single 2D image. The method proved efficient in recognizing rumination behavior, achieving 95%, 98%, and 98% average accuracy, recall, and precision, respectively.
###### Keywords:
Rumination behavior · Dairy cows · Deep learning · Action recognition · Machine learning · Computer vision.
## 1 Introduction
Cattle products are among the most consumed products worldwide (i.e., meat and milk) [1], which puts dairy farmers under pressure from intensive commercial farming demands to optimize the operational efficiency of the yield system. Therefore, maintaining the cattle's physiological status is essential for optimal milk production. It is widely known that rumination behavior is a key indicator for monitoring the health and welfare of ruminants [2, 3]. When the cow is stressed [4], anxious [5], suffering from severe disease, or influenced by other factors, including the nutrition diet program [6, 7], the rumination time decreases accordingly. Early detection of any abnormality will prevent severe outcomes for the lactation program. Furthermore, the saliva produced while masticating helps improve the rumen state [8]. Rumination time also helps farmers predict the estrus [9] and calving [10, 11] periods of dairy cows. It has been shown that rumination time decreases on the $14^{th}$ and $7^{th}$ day before calving [10] and decreases slowly three days before estrus [9]. By predicting the calving moment, the farmer or veterinarian can maintain the health condition of the cow and prevent the risk of diseases (e.g., calf pneumonia) that can be fatal when the cow has a difficult calving [11].
In previous decades, farmers monitored rumination by direct observation [12]. However, this method has many limitations: it is time-consuming and requires labor wages, especially on large farms. In modern farms, many sensor-based devices have been used to automatically monitor animal behavior, such as sound sensors [13], noseband pressure sensors [14], and motion sensors [15, 16]. However, many of these sensors are designed to extract only a few behavioral patterns (e.g., sound waves), which created the need for an automated system as a means to assess the health and welfare of animals and reduce operational costs for farmers. Machine learning is able to extract and learn automatically from large-scale data using, for example, sophisticated neural networks (NNs). NNs are the core of deep learning algorithms, which have become state-of-the-art across a range of difficult problem domains [17, 18, 19, 20]. Thus, the use of these technologies can improve the monitoring process and achieve efficient performance in recognizing animal behavior. One of the most commonly used types of deep neural network for visual motion recognition is the convolutional neural network (CNN). CNNs can automatically recognize deep features from images and accurately perform computer vision tasks [21, 22]. Despite these continuous achievements, such technologies still require further improvement due to their lack of precision.
This work proposes a monitoring method to recognize cow rumination behavior. We show that a CNN can achieve excellent classification performance using an easy-to-use extension of state-of-the-art base architectures. Our contributions are as follows:
* We propose a simple and easy-to-use method that captures long-term dynamics through a standard 2D image using the dynamic images method [23].
* With a standard deep learning CNN-based model, we accurately perform the classification task using all postures of the cows.
* We conduct a comprehensive comparative study to validate the effectiveness of the proposed methodology for cow rumination detection.
The remainder of this paper is organized as follows. Related works are reviewed in Section 2. The developed method and the equipment used are described in detail in Section 3. The implementation of the model, the evaluation of the results, and a comparison with the state-of-the-art are discussed in Section 4. Finally, conclusions and directions for future research are presented in Section 5.
## 2 Related works
In this section, we review existing research works, equipment, and methods that address the challenging problem of rumination detection. The existing intelligent monitoring equipment can be split into four categories.
### 2.1 Sound sensor for rumination detection
Sound-sensor monitoring identifies rumination behavior by planting a microphone around the throat, forehead, or another part of the ruminant to record chewing, swallowing, or regurgitating. Acoustic methods exhibit excellent performance in recognizing ingestive events. Milone et al. [24] created an automated system to identify ingestive events based on hidden Markov models; the classification of chews and bites had accuracies of 89% and 58%, respectively. Chelotti et al. [25] proposed a Chew-Bite Intelligent Algorithm (CBIA) using a sound sensor and six machine learning algorithms to identify three jaw movement events; the classification achieved 90% recognition performance using a Multi-Layer Perceptron. Clapham et al. [26] used manual identification and sound metrics to identify jaw movements, detecting 95% of behaviors; however, this system requires periodic manual calibration, which is not desirable for automated learning systems. Furthermore, some systems use sound sensors to recognize rumination and grazing behavior after analysing jaw movements [27, 28]. Sound-sensor monitoring methods thus give good performance. However, owing to their high cost and to trends in the recorded signals that negatively affect event detection, these devices are primarily used for research purposes.
### 2.2 Noseband pressure sensor for rumination detection
Monitoring with a noseband pressure sensor generally recognizes rumination and feeding behavior using a halter and a data logger that records mastication through pressure peaks. Shen et al. [14] used a noseband pressure sensor as the core device to monitor the number of ruminations, the duration of rumination, and the number of regurgitated boluses, achieving recognition rates of 100%, 94.2%, and 94.45%, respectively. Zehner et al. [29] created two software tools to classify and identify rumination and eating behavior using two versions of the RumiWatch (https://www.rumiwatch.ch/) noseband pressure sensor. They achieved detection accuracies of 96% and 91% for rumination time and 86% and 96% for feeding time on 1-h resolution data provided by the two noseband devices. The results obtained with these technologies are important; however, the approach is only suitable for short-term monitoring and requires improvements to efficiently track the health and welfare of animals.
### 2.3 Triaxial acceleration sensor for rumination detection
Monitoring with a triaxial acceleration sensor can recognize broader sets of movements at various scales of rotation. Accelerometer sensors are commonly used for their low cost. Shen et al. [16] used triaxial acceleration to collect jaw movements and classify them into three categories (feeding, ruminating, and other) using three machine learning algorithms; among them, the K-Nearest Neighbour (KNN) algorithm scored the best performance with a precision of 93.7%. Another work focused on identifying different activities of the cow using a multi-class SVM; the overall model achieved 78% precision and a kappa of 69% [30]. Rayas-Amor et al. [31] used the HOBO Pendant G three-axis data recorder to monitor grazing and rumination behavior; the system recognized 96% and 94.5%, respectively, of 20 variances in visual observation per cow/day. A motion-sensitive bolus sensor was applied by Andrew et al. [32] to measure jaw motion through bolus movement using an SVM algorithm, which recognized 86% of motions. According to these findings, accelerometers perform well in recognizing behavior; however, they still confuse activities that share the same postures.
### 2.4 Video equipment sensor for rumination detection
Monitoring with video equipment recognizes ruminant behavior by recording cow movements and extracting visual motion to identify and classify the animal's behavior. In the state-of-the-art, many initiatives focus on detecting mouth movement using the optical flow technique, which can detect motion from two consecutive frames. Mao et al. [15] used this technique to automatically track the rumination behavior of dairy cows, reaching an accuracy of 87.80%. Another work by Li et al. [33] on tracking multiple cow targets to detect their mouth areas with optical flow achieved a tracking rate of 89.12%. The mean shift [34] and STC [35] algorithms were used by Chen et al. [36, 37] to monitor rumination time, using optical flow to track the mouth movement of the cow; the monitoring process achieved accuracies of 92.03% and 85.45%, respectively. However, the learning process of these two methods is based only on the prone position, so it is not possible to monitor diurnal rumination behavior across different postures. Moreover, the mouth-tracking method is easily influenced by cow movement, which creates interference in the training stage. CNNs are another technique to extract features from images without manual feature engineering and are generally used for object detection [38] and visual action recognition [37, 39]. D. Li et al. [39] used KELM [40] to identify the mounting behavior of pigs; the network achieved approximately 94.5% accuracy. Yang et al. [41] applied Faster R-CNN [42] to recognize the feeding behavior of group-housed pigs; the algorithm achieved 99.6% precision and 86.93% recall. Another recent CNN-based work was proposed by Belkadi et al. [38] and developed on commercial dairy cows to recognize feeding behavior, feeding place, and food type. They implemented four CNN-based models to detect the dairy cow, check the availability of food in the feeder and identify its category, recognize feeding behavior, and identify each cow. This system detects 92%, 100%, and 97% of the feeding state, food type, and cow identity, respectively. Although the achieved performance is significant, this method is not suitable for detecting other behaviors since the images used focus only on the feeder area, which boosts performance. Overall, many of these methods work with only a few postures of dairy cows to recognize rumination or feeding behaviors. Moreover, video analysis methods are easily influenced by weather conditions and other external factors, which introduces noise into the learning process. Such methods are more applicable to monitoring cows housed indoors or for commercial purposes [43].
### 2.5 Evaluation
All four categories perform well at recognizing animal behavior. However, many of the wearable devices are invasive and stressful and, accordingly, can influence the diurnal behavior of animals [44]. Video equipment is thus more reliable and less invasive. In this work, we propose a method that relies on a non-stressful device and uses a deep learning CNN-based approach to automatically recognize the rumination behavior of indoor-housed cows.
Figure 1: The proposed system for cow rumination behavior recognition.
## 3 Method and materials
Our proposed system is constructed from four main stages, as depicted in Fig. 1. We use video equipment as the core device to collect cattle activities. The recorded videos are continuously stored in the database and automatically segmented into frames. We collect the data and carefully annotate it under two main labels (Section 3.1). Subsequently, these frames are cleaned of noisy effects and used to generate a compact representation of each video (Section 3.2). We apply the dynamic images approach, which uses a standard 2D image as input to recognize dairy cow behavior (Section 3.3); this method can use a standard CNN-based architecture. All these processes were conducted offline. To implement and test our model, we chose several key architectures (Section 3.4) that gave relevant results in the classification stage. To avoid overfitting, we implemented a few regularization methods that can boost the performance of the network (Section 3.5).
### 3.1 Data acquisition
The research and experiments were conducted at the Lifeye LLC company (https://www.crunchbase.com/organization/lifeye) for its project entitled Moome (https://www.moome.io), based in Tunis, Tunisia. The experimental subjects are Holstein dairy cows farmed indoors, originating from different farms in rural areas of southern Tunisia. The cattle were monitored by cameras planted in a corner offering a complete view of the cow's body. The recorded videos were stored on an SD card, then manually fed into the database storage, automatically divided into frames, and displayed on the platform. To ensure real-time monitoring, the cameras are directly connected to the developed system. The collected data comprise 25,400 frames captured during daytime and night-time in July 2019 and February 2020; they were then accurately distributed into two main labels according to the content of each data folder. Each data folder contains approximately 243 or 233 frames per 1-min video at a resolution of 640 × 480 pixels. Fig. 3 shows examples of the frames used. All captured cow postures were used for the training and testing sets, including eating, standing, sitting, drinking, ruminating, lifting the head, and other movements. The definition of the dairy cow rumination labels is presented in Table 1.
Table 1: Definition of dairy cow rumination labels. Behavior | Definition
---|---
Ruminating | The cow is masticating or swallowing ingesta while sitting or standing.
Other | The cow is standing, eating, drinking, sitting or doing any other activity.
Figure 2: Example of resultant frames from the pre-processing stage.
### 3.2 Data pre-processing
The aim of data pre-processing is to improve the quality of the frames by enhancing important image features or suppressing unwanted distortions. In this study, the image pre-processing methods (Fig. 1c) include cropping, resizing, adding noise, data augmentation, and applying the dynamic image summarization method. Cropping delimits the cow area by eliminating noisy pixels coming from sunlight or other noise sources. The cropped images were then resized to 224 × 224 pixels (Fig. 2a) for the network training process. To ensure good performance of the CNN model and test its stability, we added noise by randomly changing the brightness of the images. In addition, to avoid overfitting, we applied data augmentation by lightening the edges of the frames using a negative effect (Fig. 2b) and a gamma correction effect with a 0.5 adjustment parameter (Fig. 2c). These corrections can be applied even to low-quality images, brightening the object threshold and facilitating the learning process.
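The two augmentation filters described above are simple pixel-wise operations. As a minimal numpy sketch (the paper does not give its implementation; the function names and the test patch are illustrative):

```python
import numpy as np

def negative(img):
    # Negative filter used for augmentation: invert the 8-bit intensity range.
    return (255 - img.astype(np.int16)).astype(np.uint8)

def gamma_correct(img, gamma=0.5):
    # Gamma correction: normalize to [0, 1], raise to `gamma`
    # (gamma < 1 brightens dark regions), then rescale to [0, 255].
    norm = img.astype(np.float64) / 255.0
    return np.clip(np.power(norm, gamma) * 255.0, 0, 255).astype(np.uint8)

# A mid-gray 2x2 patch: gamma = 0.5 brightens it, the negative inverts it.
patch = np.full((2, 2), 128, dtype=np.uint8)
brightened = gamma_correct(patch, gamma=0.5)   # all pixels become 180
inverted = negative(patch)                     # all pixels become 127
```

With gamma = 0.5, a mid-gray value of 128 maps to 180, visibly brightening dark regions while leaving the extremes 0 and 255 fixed.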
The obtained frames are then combined using the dynamic image method. This method summarizes the video content in a single RGB image, using the rank pooling method [23] to construct a vector $d^{*}$ that contains enough information to rank all $T$ frames $\mathit{I}_{1},\dots,\mathit{I}_{T}$ of the video and produce a standard RGB image (Fig. 2d) using the $RankSVM$ [45] formulation:
$\begin{split}d^{*}=p(\mathit{I}_{1},\dots,\mathit{I}_{T};\psi)=\operatorname*{arg\,min}_{d}{E(d)},\\ E(d)=\frac{\lambda}{2}\|d\|^{2}+\frac{2}{T(T-1)}\sum_{q>t}\max\{0,1-S(q|d)+S(t|d)\},\end{split}$
(1)
where $d\in\mathbb{R}^{d}$ and $\psi(\mathit{I}_{t})\in\mathbb{R}^{d}$ are the vectors of parameters and image features, respectively, and $\lambda$ is a regularization parameter. Up to time $t$, the time average of the features is $\mathit{V}_{t}=\frac{1}{t}\sum_{\tau=1}^{t}\psi(\mathit{I}_{\tau})$. The ranking function associates with each time $t$ the score $S(t|d)=\langle d,\mathit{V}_{t}\rangle$. The second term of $E(d)$ counts incorrectly ranked pairs: a pair is considered correctly ranked only if the scores are separated by at least a unit margin, i.e., $S(q|d)>S(t|d)+1$ for $q>t$.
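Solving the $RankSVM$ objective in Eq. (1) for every video is costly. Bilen et al. [23] also derive an approximate rank pooling in which the dynamic image reduces to a weighted sum of the frames; the sketch below uses the simplified linear-coefficient variant $a_{t}=2t-T-1$ (an illustrative assumption, not the exact solver used here; in practice the result is min–max rescaled to an 8-bit RGB image before being fed to the CNN):

```python
import numpy as np

def approx_dynamic_image(frames):
    # Approximate rank pooling: instead of solving the RankSVM objective,
    # the dynamic image is a weighted sum of the T frames with linear
    # coefficients a_t = 2t - T - 1, so later frames receive larger weights.
    T = len(frames)
    coeffs = np.array([2 * t - T - 1 for t in range(1, T + 1)], dtype=np.float64)
    return np.tensordot(coeffs, np.stack(frames).astype(np.float64), axes=1)

# Three toy 2x2 "frames" with rising brightness: the dynamic image is
# positive everywhere, encoding the upward temporal trend.
moving = [np.full((2, 2), v, dtype=np.uint8) for v in (10, 20, 30)]
d_moving = approx_dynamic_image(moving)   # -2*10 + 0*20 + 2*30 = 40 everywhere

# A static video yields a zero dynamic image (the coefficients sum to zero).
static = [np.full((2, 2), 5, dtype=np.uint8)] * 3
d_static = approx_dynamic_image(static)
```

Because the coefficients sum to zero, a static video produces an all-zero dynamic image, which is exactly the desired behavior: only temporal change survives the pooling.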
Figure 3: Sample frames from the collected dataset.
### 3.3 Dynamic image approach
The dynamic image approach (Fig. 1d) is a CNN-based approach that effectively recognizes motion and temporal features from a standard RGB image. It uses a compact representation of the video that summarizes the motion of the moving actors in a single frame. Interestingly, the approach can use a standard CNN architecture pre-trained on still-image benchmarks. It proved its efficiency in learning long-term dynamics [23], reaching 89.1% accuracy with a CaffeNet model trained on ImageNet and fine-tuned on the UCF101 dataset [46].
### 3.4 Key architectures
To recognize the rumination behavior of dairy cows, we used an end-to-end architecture that can efficiently capture long-term dynamics and temporal features with a standard CNN, as presented in Section 3.3. To ensure good performance of our system, we chose two well-known key architectures, VGG [47] and ResNet [48, 49], which are adopted and tested in Section 4. These two models are powerful and widely used for image classification. They achieved remarkable performance on the ImageNet benchmark [50], which makes them the core of multiple novel CNN-based approaches [51, 52]. The VGG model has two main versions: VGG16 with 16 layers and VGG19 with 19 layers. The ResNet model has several versions that can handle a large number of layers with strong performance thanks to the so-called “identity shortcut connection”, which enables the network to skip one or more layers.
### 3.5 Overfitting prevention method
Overfitting occurs when the model learns noise from the dataset during training, making performance much better on the training set than on the test set. To prevent this, we adopted a few regularization methods. The first is the dropout method [53], which reduces interdependency among neurons by randomly dropping units and their connections during the training phase, thus forcing the nodes within a layer to be more active and better adapted to correcting mistakes from prior layers. The second is data augmentation, which prevents the model from overfitting the samples by increasing the diversity of images available for training using different filters, such as those presented in Section 3.2. The third is early stopping [54], which tracks and optimizes the performance of the model by planting a trigger that stops the training process when the test error starts to increase while the training error keeps decreasing.
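The early-stopping trigger can be sketched in a few lines (a generic tracker illustrating the idea; the class name and patience value are illustrative, not the exact callback used in our experiments):

```python
class EarlyStopping:
    # Minimal early-stopping tracker: training stops once the validation
    # loss has failed to improve for `patience` consecutive epochs.

    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
        return self.wait >= self.patience  # True means: stop training

stopper = EarlyStopping(patience=2)
# Loss improves for two epochs, then stalls for two -> stop at epoch 4.
decisions = [stopper.step(v) for v in [0.9, 0.7, 0.8, 0.85]]
```

The tracker fires only after `patience` consecutive non-improving epochs, so a single noisy validation result does not abort training.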
Figure 4: Cow rumination behavior recognition procedures.
## 4 Experiments
In this section, we present the implementation of the proposed model (Section 4.1) and the adopted evaluation metrics (Section 4.2). Subsequently, we evaluate the obtained results of rumination behavior recognition (Section 4.3). Finally, we compare the proposed model with other architectures (Section 4.4).
### 4.1 Implementation
We empirically evaluated cow rumination behavior recognition using the generated dataset detailed in Section 3.2. For the classification task, we implemented three pretrained CNN-based models, VGG16, VGG19, and ResNet152V2, and evaluated each model's performance on the generated dataset. In the fine-tuning stage, we froze the parameters of the upper layers and replaced the remaining layers with new ones, as depicted in Fig. 4b. The dropout ratio was set to 0.5 to prevent overfitting. We used the Adam optimizer [55] with an initial learning rate $\mathit{lr}_{0}=0.001$ and gradually changed its value during training using the exponential decay formula:
$lr={\mathit{lr}_{0}}\times{e^{-kt}}$ (2)
where $t$ and $k$ are the iteration number and the decay rate, respectively. The models were trained on GPUs with a batch size of 12.
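As a concrete sketch of this schedule (the decay rate $k=0.1$ is an illustrative value, not the one used in training):

```python
import math

def decayed_lr(lr0, k, t):
    # Exponentially decayed learning rate: lr = lr0 * exp(-k * t),
    # where t is the iteration number and k > 0 the decay rate.
    return lr0 * math.exp(-k * t)

# The paper's initial rate lr0 = 0.001; k = 0.1 is illustrative only.
schedule = [decayed_lr(0.001, 0.1, t) for t in range(4)]
```

Each step multiplies the rate by the constant factor $e^{-k}$, so the schedule decays smoothly from the initial value of 0.001.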
Let $T=\{25,50,100\}$ be the number of frames used to generate a dynamic image (Fig. 4a). The aim is to evaluate the performance of the network with short video sequences. In the rest of this study, we refer to the datasets containing dynamic images generated from 25, 50, and 100 frames per image as T25, T50, and T100, respectively.
### 4.2 Evaluation approach
As summarized in Table 2, the evaluation stage comprises two trials: in trial 1, we tested the model only on the generated data without data augmentation; in trial 2, we added more generated frames using the data augmentation technique. The whole generated dataset was divided into training and testing sets. In each trial, we evaluated the model based on accuracy, validation accuracy (val_acc), loss, and validation loss (val_loss). We then evaluated the precision, sensitivity, and AUC metrics. Accuracy, one of the most commonly used metrics, is the percentage of correct classifications on the test data and is calculated using Eq. (3). The loss value measures the error of the model during the optimization process. Precision, obtained by Eq. (4), is the fraction of positive predictions that are correct. Sensitivity is the percentage of the relevant results that are correctly classified and is expressed by Eq. (5). The Area Under the Curve (AUC) reflects how well the model distinguishes between classes: the higher the AUC, the better the network predicts the classes. These metrics are computed from the confusion matrix, which contains four main quantities: True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN).
$Accuracy=\frac{TP+TN}{TP+FP+TN+FN}$ (3)
$Precision=\frac{TP}{TP+FP}$ (4)
$Sensitivity=\frac{TP}{TP+FN}$ (5)
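Eqs. (3)–(5) can be computed directly from the four confusion-matrix counts; the counts below are illustrative, not taken from our experiments:

```python
def confusion_metrics(tp, fp, tn, fn):
    # Accuracy, precision and sensitivity (recall) computed from the four
    # confusion-matrix counts, exactly as in Eqs. (3)-(5).
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return accuracy, precision, sensitivity

# Illustrative counts: 90 TP, 5 FP, 95 TN, 10 FN.
acc, prec, sens = confusion_metrics(tp=90, fp=5, tn=95, fn=10)
# acc = 185/200 = 0.925, prec = 90/95, sens = 90/100 = 0.9
```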
### 4.3 Evaluation results
Table 2: Results of cow rumination behavior model. Trial | N° frames | Key architecture | Dataset size | loss | Val_loss | Accuracy | Val_acc
---|---|---|---|---|---|---|---
1 | T=25 | ResNet152V2 | N=1015 Test=213 | 0.0359 | 0.7463 | 98.73% | 84.04%
| VGG-16 | 0.2081 | 0.2922 | 91.34% | 90.61%
| VGG-19 | 0.2874 | 0.3241 | 88.17% | 88.73%
| T=50 | ResNet152V2 | N=508 Test=107 | 0.0207 | 0.7929 | 99.86% | 82.24%
| VGG-16 | 0.1697 | 0.3679 | 92.39% | 85.98%
| VGG-19 | 0.2453 | 0.3600 | 88.45% | 86.92%
| T=100 | ResNet152V2 | N=254 Test=53 | 0.0050 | 0.7851 | 100% | 84.91%
| VGG-16 | 0.1200 | 0.4572 | 95.76% | 86.79%
| VGG-19 | 0.1363 | 0.3805 | 95.20% | 84.91%
2 | T=25 | ResNet152V2 | N=2030 Test=426 | 0.1153 | 0.8742 | 95.46% | 81.46%
| VGG-16 | 0.1370 | 0.3706 | 94.19% | 90.85%
| VGG-19 | 0.2186 | 0.3045 | 90.78% | 88.97%
| T=50 | ResNet152V2 | N=2032 Test=427 | 0.0449 | 0.3687 | 98.12% | 88.13%
| VGG-16 | 0.0794 | 0.1944 | 96.91% | 93.91%
| VGG-19 | 0.1375 | 0.2108 | 94.44% | 92.97%
| T=100 | ResNet152V2 | N=1016 Test=213 | 0.0277 | 0.3246 | 98.95% | 93.90%
| VGG-16 | 0.0648 | 0.0707 | 97.54% | 98.12%
| VGG-19 | 0.1049 | 0.0821 | 95.01% | 97.65%
Figure 5: Results of (a) train AUC and test AUC, and (b) train loss and test loss during the training phase, using the VGG16 key architecture fine-tuned on the T100 dataset with data augmentation.
In the first experiment, the performance of the proposed model was lower in the evaluation phase than in the training phase. VGG16 gave good results, with 91% accuracy on T25; however, the network's scores did not improve as the data size grew. On the other hand, performance was higher with both the T50 and T100 datasets: with T50 there are boosts of 5.89%, 7.93%, and 6.05% in accuracy for the ResNet152V2, VGG16, and VGG19 models, respectively. In the second experiment, there are remarkable improvements, with the highest accuracy obtained by VGG16 on the T100 dataset. With the AUC and loss results presented in Fig. 5 and an accuracy of 98.12%, the network has proven its potential in predicting rumination behavior. To assess the reliability and efficiency of the model, Table 3 presents the sensitivity and precision results on the T100 dataset.
Table 3: Recall and precision of three models using the T100. | Recall | Precision | Number of frames
---|---|---|---
VGG16 | Rumination | 99% | 97% | 110
| Other | 97% | 99% | 103
VGG19 | Rumination | 98% | 97% | 110
| Other | 97% | 98% | 103
ResNet152V2 | Rumination | 98% | 91% | 110
| Other | 89% | 98% | 103
Figure 6: Average and STD of the accuracy and AUC metrics using 10-fold cross-
validation with VGG16 as the base network.
Both VGG16 and VGG19 achieved higher than 97% in both the precision and recall
metrics, which proves the robustness of the network. We notice that VGG16
achieved the best performance by accurately predicting 99% of rumination
behavior. To ensure that the model performs well with different test sets, we
conduct 10-fold cross-validation and present the average and Standard
Deviation (STD) values of the accuracy and AUC metrics. The results of this
procedure are detailed in Fig. 6, where K is the number of folds. With these
results, the model has proved its potential in predicting and recognizing cow
rumination behavior, with lowest and highest average accuracies of 93% and
97%, respectively. The STD of the accuracy varies between 2.7% and 6.9%. In
addition, most of the average AUC results are close to 1.00, while the AUC STD
values are less than 1.2%, which demonstrates the efficiency and reliability
of our method in recognizing the behavior.
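The 10-fold procedure above can be sketched as follows. This is a minimal stand-alone version in which training the CNN on nine folds and scoring it on the held-out fold is replaced by a placeholder `score_fn`; only the splitting and mean/STD aggregation follow the text:

```python
import random
import statistics

def kfold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold_size, rem = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        end = start + fold_size + (1 if i < rem else 0)
        folds.append(idx[start:end])
        start = end
    return folds

def cross_validate(n_samples, score_fn, k=10):
    """Return (mean, std) of per-fold scores; score_fn stands in for
    training on k-1 folds and evaluating on the held-out one."""
    folds = kfold_indices(n_samples, k)
    scores = [score_fn(test_idx) for test_idx in folds]
    return statistics.mean(scores), statistics.stdev(scores)

# Example with the T100 data size and a constant placeholder score.
mean_acc, std_acc = cross_validate(1016, lambda test_idx: 0.95, k=10)
```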
### 4.4 Comparison
To make the comparison more meaningful, we compare our proposed method with
the ResNet50, ResNet152, InceptionV3 [56] and DenseNet121 [57] models using
the generated T100 dataset. The efficiency of the model is assessed using the
accuracy, mean precision and mean recall metrics. The mean precision and mean
recall were calculated from the results obtained during the training stage.
The results of the classification are detailed in Table 4.
Table 4: Comparison of the DenseNet121, InceptionV3, ResNet50 and ResNet152 models with the VGG16 architecture using the T100 dataset. Key architecture | Accuracy | Mean precision | Mean recall
---|---|---|---
DenseNet121 | 93% | 93.5% | 93.5%
InceptionV3 | 92% | 92% | 92%
ResNet50 | 78% | 82% | 79%
ResNet152 | 75% | 74.5% | 74.5%
VGG16 | 98% | 98% | 98%
Overall, VGG16 performs favourably against the other architectures. Compared
with the presented results, most models scored below 98%. The DenseNet121
network achieved 93.5% in both mean precision and mean recall. InceptionV3
achieved 92% in accuracy, recall and precision. However, both ResNet50 and
ResNet152 scored below 82%.
## 5 Conclusion
In this paper, we proposed an effective video-based recognition method to
monitor and classify cow behavior using deep learning approaches. These
technologies proved their potential in complex environments such as farms,
enabling a monitoring method that does not rely on attached, invasive devices.
Despite the surrounding interferences (e.g., sunlight and poor lighting) that
produced undesirable effects on cow movements such as chewing or swallowing
behaviors, we were able to accurately recognize the deep features of
rumination behavior using all postures of the dairy cow. Our network is simple
and easy to use, being based on standard CNN-based deep learning models. From
a single RGB image, the network can recognize long-term dynamics using a
compact representation of a video. The proposed method achieved competitive
prediction performance with an accuracy of 98.12%. Future work includes the
extension of our monitoring method to track rumination time and the cows'
physical activity such as walking and resting.
## Acknowledgment
This research work is supported by LifeEye LLC. The statements made herein are
solely the responsibility of the authors.
## References
* [1] A. Bouwman, K. Van der Hoek, B. Eickhout, and I. Soenario, “Exploring changes in world ruminant production systems,” _Agricultural Systems_ , vol. 84, no. 2, pp. 121–153, 2005.
* [2] D. K. Thomsen, M. Y. Mehlsen, M. Hokland, A. Viidik, F. Olesen, K. Avlund, K. Munk, and R. Zachariae, “Negative thoughts and health: Associations among rumination, immunity, and health care utilization in a young and elderly sample,” _Psychosomatic Medicine_ , vol. 66, no. 3, pp. 363–371, 2004.
* [3] M. Stangaferro, R. Wijma, L. Caixeta, M. Al-Abri, and J. Giordano, “Use of rumination and activity monitoring for the identification of dairy cows with health disorders: Part iii. metritis,” _Journal of Dairy Science_ , vol. 99, no. 9, pp. 7422–7433, 2016.
* [4] T. Vandevala, L. Pavey, O. Chelidoni, N.-F. Chang, B. Creagh-Brown, and A. Cox, “Psychological rumination and recovery from work in intensive care professionals: associations with stress, burnout, depression and health,” _Journal of intensive care_ , vol. 5, no. 1, p. 16, 2017.
* [5] S. Nolen-Hoeksema, “The role of rumination in depressive disorders and mixed anxiety/depressive symptoms.” _Journal of abnormal psychology_ , vol. 109, no. 3, p. 504, 2000.
* [6] L. Grinter, M. Campler, and J. Costa, “Validation of a behavior-monitoring collar’s precision and accuracy to measure rumination, feeding, and resting time of lactating dairy cows,” _Journal of dairy science_ , vol. 102, no. 4, pp. 3487–3494, 2019.
* [7] T. Suzuki, Y. Kamiya, M. Tanaka, I. Hattori, T. Sakaigaichi, T. Terauchi, I. Nonaka, and F. Terada, “Effect of fiber content of roughage on energy cost of eating and rumination in holstein cows,” _Animal Feed Science and Technology_ , vol. 196, pp. 42–49, 2014.
* [8] K. A. Beauchemin, “Ingestion and mastication of feed by dairy cattle,” _Veterinary Clinics of North America: Food Animal Practice_ , vol. 7, no. 2, pp. 439–463, 1991.
* [9] S. Reith, H. Brandt, and S. Hoy, “Simultaneous analysis of activity and rumination time, based on collar-mounted sensor technology, of dairy cows over the peri-estrus period,” _Livestock Science_ , vol. 170, pp. 219–227, 2014.
* [10] S. Paudyal, F. Maunsell, J. Richeson, C. Risco, A. Donovan, and P. Pinedo, “Peripartal rumination dynamics and health status in cows calving in hot and cool seasons,” _Journal of dairy science_ , vol. 99, no. 11, pp. 9057–9068, 2016.
* [11] L. Calamari, N. Soriani, G. Panella, F. Petrera, A. Minuti, and E. Trevisi, “Rumination time around calving: An early signal to detect cows at greater risk of disease,” _Journal of Dairy Science_ , vol. 97, no. 6, pp. 3635–3647, 2014.
* [12] M. Krause, K. Beauchemin, L. Rode, B. Farr, and P. Nørgaard, “Fibrolytic enzyme treatment of barley grain and source of forage in high-grain diets fed to growing cattle,” _Journal of Animal Science_ , vol. 76, no. 11, pp. 2912–2920, 1998.
* [13] V. Lopreiato, M. Vailati-Riboni, V. Morittu, D. Britti, F. Piccioli-Cappelli, E. Trevisi, and A. Minuti, “Post-weaning rumen fermentation of simmental calves in response to weaning age and relationship with rumination time measured by the hr-tag rumination-monitoring system,” _Livestock Science_ , vol. 232, p. 103918, 2020.
* [14] W. Shen, A. Zhang, Y. Zhang, X. Wei, and J. Sun, “Rumination recognition method of dairy cows based on the change of noseband pressure,” _Information Processing in Agriculture_ , 2020.
* [15] Y. Mao, D. He, and H. Song, “Automatic detection of ruminant cows’ mouth area during rumination based on machine vision and video analysis technology,” _International Journal of Agricultural and Biological Engineering_ , vol. 12, no. 1, pp. 186–191, 2019.
* [16] W. Shen, F. Cheng, Y. Zhang, X. Wei, Q. Fu, and Y. Zhang, “Automatic recognition of ingestive-related behaviors of dairy cows based on triaxial acceleration,” _Information Processing in Agriculture_ , 2019.
* [17] R. Jabbar, M. Shinoy, M. Kharbeche, K. Al-Khalifa, M. Krichen, and K. Barkaoui, “Driver drowsiness detection model using convolutional neural networks techniques for android application,” in _2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT)_. IEEE, 2020, pp. 237–242.
* [18] S. Alhazbi, A. B. Said, and A. Al-Maadid, “Using deep learning to predict stock movements direction in emerging markets: The case of qatar stock exchange,” in _2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT)_. IEEE, 2020, pp. 440–444.
* [19] A. B. Said, A. Mohamed, T. Elfouly, K. Abualsaud, and K. Harras, “Deep learning and low rank dictionary model for mhealth data classification,” in _2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC)_. IEEE, 2018, pp. 358–363.
* [20] M. Abdelhedi, R. Jabbar, T. Mnif, and C. Abbes, “Prediction of uniaxial compressive strength of carbonate rocks and cement mortar using artificial neural network and multiple linear regressions.”
* [21] Y. Chen, W. Li, C. Sakaridis, D. Dai, and L. Van Gool, “Domain adaptive faster r-cnn for object detection in the wild,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 3339–3348.
* [22] H. Zhang, D. Liu, and Z. Xiong, “Two-stream action recognition-oriented video super-resolution,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019, pp. 8799–8808.
* [23] H. Bilen, B. Fernando, E. Gavves, A. Vedaldi, and S. Gould, “Dynamic image networks for action recognition,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 3034–3042.
* [24] D. H. Milone, J. R. Galli, C. A. Cangiano, H. L. Rufiner, and E. A. Laca, “Automatic recognition of ingestive sounds of cattle based on hidden markov models,” _Computers and electronics in agriculture_ , vol. 87, pp. 51–55, 2012.
* [25] J. O. Chelotti, S. R. Vanrell, J. R. Galli, L. L. Giovanini, and H. L. Rufiner, “A pattern recognition approach for detecting and classifying jaw movements in grazing cattle,” _Computers and Electronics in Agriculture_ , vol. 145, pp. 83–91, 2018.
* [26] W. M. Clapham, J. M. Fedders, K. Beeman, and J. P. Neel, “Acoustic monitoring system to quantify ingestive behavior of free-grazing cattle,” _Computers and Electronics in Agriculture_ , vol. 76, no. 1, pp. 96–104, 2011.
* [27] J. O. Chelotti, S. R. Vanrell, L. S. M. Rau, J. R. Galli, A. M. Planisich, S. A. Utsumi, D. H. Milone, L. L. Giovanini, and H. L. Rufiner, “An online method for estimating grazing and rumination bouts using acoustic signals in grazing cattle,” _Computers and Electronics in Agriculture_ , vol. 173, p. 105443, 2020.
* [28] L. M. Rau, J. O. Chelotti, S. R. Vanrell, and L. L. Giovanini, “Developments on real-time monitoring of grazing cattle feeding behavior using sound,” in _2020 IEEE International Conference on Industrial Technology (ICIT)_. IEEE, 2020, pp. 771–776.
* [29] N. Zehner, C. Umstätter, J. J. Niederhauser, and M. Schick, “System specification and validation of a noseband pressure sensor for measurement of ruminating and eating behavior in stable-fed cows,” _Computers and Electronics in Agriculture_ , vol. 136, pp. 31–41, 2017.
* [30] P. Martiskainen, M. Järvinen, J.-P. Skön, J. Tiirikainen, M. Kolehmainen, and J. Mononen, “Cow behaviour pattern recognition using a three-dimensional accelerometer and support vector machines,” _Applied animal behaviour science_ , vol. 119, no. 1-2, pp. 32–38, 2009.
* [31] A. A. Rayas-Amor, E. Morales-Almaráz, G. Licona-Velázquez, R. Vieyra-Alberto, A. García-Martínez, C. G. Martínez-García, R. G. Cruz-Monterrosa, and G. C. Miranda-de la Lama, “Triaxial accelerometers for recording grazing and ruminating time in dairy cows: An alternative to visual observations,” _Journal of Veterinary Behavior_ , vol. 20, pp. 102–108, 2017.
* [32] A. W. Hamilton, C. Davison, C. Tachtatzis, I. Andonovic, C. Michie, H. J. Ferguson, L. Somerville, and N. N. Jonsson, “Identification of the rumination in cattle using support vector machines with motion-sensitive bolus sensors,” _Sensors_ , vol. 19, no. 5, p. 1165, 2019.
* [33] T. Li, B. Jiang, D. Wu, X. Yin, and H. Song, “Tracking multiple target cows’ ruminant mouth areas using optical flow and inter-frame difference methods,” _IEEE Access_ , vol. 7, pp. 185 520–185 531, 2019.
* [34] Y. Cheng, “Mean shift, mode seeking, and clustering,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 17, no. 8, pp. 790–799, 1995.
* [35] K. Zhang, L. Zhang, Q. Liu, D. Zhang, and M.-H. Yang, “Fast visual tracking via dense spatio-temporal context learning,” in _European conference on computer vision_. Springer, 2014, pp. 127–141.
* [36] C. Yujuan, H. Dongjian, F. Yinxi, and S. Huaibo, “Intelligent monitoring method of cow ruminant behavior based on video analysis technology,” _International Journal of Agricultural and Biological Engineering_ , vol. 10, no. 5, pp. 194–202, 2017.
* [37] Y. Chen, D. He, and H. Song, “Automatic monitoring method of cow ruminant behavior based on spatio-temporal context learning,” _International Journal of Agricultural and Biological Engineering_ , vol. 11, no. 4, pp. 179–185, 2018.
* [38] B. Achour, M. Belkadi, I. Filali, M. Laghrouche, and M. Lahdir, “Image analysis for individual identification and feeding behaviour monitoring of dairy cows based on convolutional neural networks (cnn),” _Biosystems Engineering_ , vol. 198, pp. 31–49, 2020.
* [39] D. Li, Y. Chen, K. Zhang, and Z. Li, “Mounting behaviour recognition for pigs based on deep learning,” _Sensors_ , vol. 19, no. 22, p. 4924, 2019.
* [40] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, “Extreme learning machine: a new learning scheme of feedforward neural networks,” in _2004 IEEE international joint conference on neural networks (IEEE Cat. No. 04CH37541)_ , vol. 2. IEEE, 2004, pp. 985–990.
* [41] Q. Yang, D. Xiao, and S. Lin, “Feeding behavior recognition for group-housed pigs with the faster r-cnn,” _Computers and Electronics in Agriculture_ , vol. 155, pp. 453–460, 2018.
* [42] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in _Advances in neural information processing systems_ , 2015, pp. 91–99.
* [43] V. Ambriz-Vilchis, N. Jessop, R. Fawcett, D. Shaw, and A. Macrae, “Comparison of rumination activity measured using rumination collars against direct visual observations and analysis of video recordings of dairy cows in commercial farm environments,” _Journal of Dairy Science_ , vol. 98, no. 3, pp. 1750–1758, 2015.
* [44] K. Fenner, S. Yoon, P. White, M. Starling, and P. McGreevy, “The effect of noseband tightening on horses’ behavior, eye temperature, and cardiac responses,” _PLoS One_ , vol. 11, no. 5, p. e0154179, 2016.
* [45] A. J. Smola and B. Schölkopf, “A tutorial on support vector regression,” _Statistics and computing_ , vol. 14, no. 3, pp. 199–222, 2004.
* [46] K. Soomro, A. R. Zamir, and M. Shah, “Ucf101: A dataset of 101 human actions classes from videos in the wild,” _arXiv preprint arXiv:1212.0402_ , 2012.
* [47] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” _arXiv preprint arXiv:1409.1556_ , 2014.
* [48] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778.
* [49] ——, “Identity mappings in deep residual networks,” in _European conference on computer vision_. Springer, 2016, pp. 630–645.
* [50] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in _Advances in neural information processing systems_ , 2012, pp. 1097–1105.
* [51] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent convolutional networks for visual recognition and description,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2015, pp. 2625–2634.
* [52] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and S. Y. Philip, “A comprehensive survey on graph neural networks,” _IEEE Transactions on Neural Networks and Learning Systems_ , 2020.
* [53] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” _The journal of machine learning research_ , vol. 15, no. 1, pp. 1929–1958, 2014.
* [54] L. Prechelt, “Early stopping-but when?” in _Neural Networks: Tricks of the trade_. Springer, 1998, pp. 55–69.
* [55] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _arXiv preprint arXiv:1412.6980_ , 2014.
* [56] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 2818–2826.
* [57] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 4700–4708.
# Environment-Adaptive Multiple Access for Distributed V2X Network:
A Reinforcement Learning Framework
Seungmo Kim, Member, IEEE, Byung-Jun Kim, and B. Brian Park, Senior Member,
IEEE S. Kim is with the Department of Electrical and Computer Engineering,
Georgia Southern University in Statesboro, GA. B. J. Kim is with the
Department of Mathematical Sciences, Michigan Technological University in
Houghton, MI. B. B. Park is with the Link Lab & Department of Engineering
Systems and Environment, University of Virginia, Charlottesville, VA. The
corresponding author is S. Kim who can be reached at
<EMAIL_ADDRESS>This work was supported in part by the Georgia
Department of Transportation (GDOT) via grant RP 20-03.
###### Abstract
Cellular vehicle-to-everything (C-V2X) communications have attracted growing
research interest in recent days because of their ability to schedule multiple
access more efficiently than the predecessor technology, i.e., dedicated
short-range communications (DSRC). However, the foremost issue remains: a
vehicle needs to maintain V2X performance in a highly dynamic environment.
This paper proposes a way to exploit that dynamicity. That is, we propose a
resource allocation mechanism adaptive to the environment, which can be an
efficient solution for the air interface congestion that a V2X network often
suffers from. Specifically, the proposed mechanism aims at granting a higher
chance of transmission to a vehicle with a higher crash risk. As such, channel
access is prioritized for those with urgent needs. The adaptation is
implemented based on reinforcement learning (RL). We model the RL framework as
a contextual multi-armed bandit (MAB), which provides efficiency as well as
accuracy. We highlight the key aspect of the proposed mechanism: it is
designed to operate at a vehicle autonomously without the need for any
assistance from a central entity. Hence, the proposed framework is expected to
be a particular fit for distributed V2X networks such as C-V2X mode 4.
###### Index Terms:
Reinforcement learning, Multi-armed bandit, Intelligent transportation system,
Connected vehicles, C-V2X, NR-V2X mode 4, Sidelink
## I Introduction
#### I-1 Background
It is no secret anymore that vehicle-to-everything (V2X) communications hold
massive potential for realizing intelligent transportation systems (ITS).
Nonetheless, at the same time, we encounter various technical challenges in
deploying V2X communications in practice.
Especially in the United States (U.S.), the long debate on the 5.9 GHz band
(i.e., 5.850-5.925 GHz) was settled: the lower 45 MHz will be taken by Wi-Fi
(with outdoor operations allowed [1]), while ITS operations will be kept in
the upper 30 MHz. Furthermore, the U.S. Federal Communications Commission
(FCC) decided to oust dedicated short-range communications (DSRC) [2], the
long-time primary system of the band, while cellular V2X (C-V2X) will act as
the technology with an exclusive right to operate ITS applications in the band
[3].
As such, the ruling has now cleared the debates on coexistence among the
dissimilar systems [4] and has led to an urgent need for a thorough study of
C-V2X. The technology has started to adopt smart methods in its multiple
access across the physical (PHY) and medium access control (MAC) layers. For
instance, in Long Term Evolution V2X (LTE-V2X) the demodulation reference
signal (DMRS) density was increased as an effort to enable a vehicle to
efficiently perform channel estimation and synchronization tracking even in
high-Doppler cases with a very high speed [5]. Moreover, LTE-V2X used turbo
codes, hybrid automatic repeat request (HARQ), and single-carrier frequency-
division multiple access (SC-FDMA) as means to achieve higher reliability.
With the synchronous scheme and frequency division multiplexing (FDM) in the
resource allocation scheme of LTE-V2X, the spectral efficiency and the system
capacity can be improved.
Lately, the pace of evolution has become even more rapid with the introduction
of 5G [6]. While being complementary to its predecessor LTE-V2X, the 5G
version of V2X, namely New Radio V2X (NR-V2X), further evolved the PHY-layer
structure of sidelink signals, channels, bandwidth parts, and resource pools
in such a way as to support a wider variety of transmission types (i.e.,
unicast and groupcast) with available feedback, besides broadcast.
However, there still remain issues to solve. In particular, the high mobility
and dynamicity [7] make a compelling case for lightening the communications
load in C-V2X so as to minimize latency and maximize reliability. While some
methods of lightening the networking load for DSRC (such as [8]) have been
introduced, C-V2X still leaves much to explore, possibly due to the higher
complexity of its resource management and scheduling mechanisms as compared to
DSRC.
Interestingly, a vehicular network features a unique characteristic: each
vehicle experiences an ever-changing environment due to the nature of its
mobility. We propose to exploit this environment as the main driver to
coordinate multiple access in a V2X network. This motivates a reinforcement
learning (RL)-based approach in which a vehicle autonomously enriches its
knowledge about the environment over time and updates its V2X networking
parameters on the fly.
To this end, this paper is positioned as the first proposal of an RL framework
aiming at lightening the load of a C-V2X network. Specifically, we propose an
RL mechanism that optimizes the transport block size (TBS) according to the
environment that a vehicle experiences. The proposed mechanism features the
ability to be executed at each vehicle autonomously without any support from a
central entity. As a result, the proposed scheme can be particularly useful in
the distributed mode (i.e., mode 4) of a C-V2X network, which has been
regarded as a more challenging type of system in which to manage multiple
access as compared to mode 3.
#### I-2 Related Work
In the literature, several learning-based resource allocation methods for V2X
networks have been proposed. One main body of the prior work is RL. Compared
to other methods (i.e., supervised and unsupervised learning [9]), RL has
received increasing attention in solving difficult adaptation problems
[10][11], thanks to its ability to treat environment dynamics in a sequential
manner [12]. However, feature representation and online learning ability are
two major challenges to be solved for learning control of uncertain dynamic
systems [13]. As an effort to keep a V2X network’s performance stable in such
a dynamic environment, a recent work [14] proposed a MAB-based approach, which
turned out to be effective in achieving convergence of learning in a
sufficiently short time to deal with the dynamicity. Meanwhile, advanced
methods such as federated learning have recently been proposed as a solution
to achieve self-adaptation of a wireless system [15]; however, their
“localized” validity does not suit our goal of achieving universal
applicability. Moreover, as a method dealing with the time variance of the
input in an RL framework, online learning enables adaptation with data
becoming available in a “streaming” manner, as opposed to offline learning,
which is trained on an entire training data set at once [16][17]. As such, the
technique is known to be particularly efficient in areas where it is
computationally infeasible to train over the entire dataset.
Distinguished from the above-reviewed prior work, this paper aims to improve
the current setting in such a way that C-V2X can differentiate the priority of
access according to the level of danger. Specifically, this paper finds the
quantification of the environmental state of a vehicle particularly
challenging due to its spatiotemporal dynamicity. In that regard, prior to
this paper, the authors had been building a similar framework [18]-[20]. This
work is a significant extension of it in the sense that it designs an RL
framework with driver behaviors as the input, while the prior work focused on
external factors. (Considering the level of recent onboard sensor
technologies [21], it is plausible to posit that driver behaviors can be
detected at an acceptable accuracy.) Another key improvement is that this work
addresses C-V2X, while the previous work discussed DSRC.
TABLE I: Frequently used symbols and acronyms Label | Definition
---|---
$\left(\alpha,\beta\right)$ | Beta distribution parameters indicating a (success, failure)
BLER | Block error rate
HARQ | Hybrid automatic repeat request
MAB | Multi-armed bandit
NPRB | Number of physical resource blocks
NR-V2X | New Radio vehicle-to-everything
PSFCH | Physical sidelink feedback channel
$r$ | Reward
RL | Reinforcement learning
SL-SCH | Sidelink shared channel
TBS | Transport block size
$\mathbf{x}_{i}$ | Vector containing driver behavior types (w/ size $N\times 1$)
$x_{j}$ | $\mathbf{x}_{i}$’s $j$th element, denoting a behavior type $j$
$\mathbf{y}$ | Action vector (w/ size $M\times 1$)
$y$ | A value of the action, denoting a TBS value
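The $(\alpha,\beta)$ Beta parameters of Table I suggest Thompson sampling as the bandit policy. The following is a hedged sketch of how such success/failure counts could drive the choice of a TBS index; the class and method names are illustrative, and using a PSFCH ACK as the binary reward is an assumption, not necessarily the paper's exact design:

```python
import random

class BetaTSBandit:
    """Thompson-sampling MAB over M candidate TBS indices. Each arm keeps
    Beta(alpha, beta) counts of (success, failure), matching the
    (alpha, beta) notation of Table I."""
    def __init__(self, n_arms):
        self.alpha = [1.0] * n_arms   # prior successes + 1
        self.beta = [1.0] * n_arms    # prior failures + 1

    def select(self):
        # Sample a plausible success rate per arm and play the best one.
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, reward):
        # reward in {0, 1}: e.g., 1 if the transport block was ACKed.
        if reward:
            self.alpha[arm] += 1
        else:
            self.beta[arm] += 1
```

Over many rounds, arms whose transmissions keep succeeding accumulate large $\alpha$ and are sampled more often, which is the exploration/exploitation trade-off the MAB formulation provides.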
#### I-3 Contributions of This Paper
Being uniquely positioned to extend the current literature as aforementioned,
this paper makes several technical contributions:
* •
It provides a framework of quantifying the crash risk around a vehicle;
* •
It presents an RL algorithm that optimizes the resource allocation for
sidelink communications in NR-V2X mode 4, adaptive to the quantified crash
risk;
* •
The RL algorithm itself features autonomous operation at a vehicle without the
need for any support from a centralized entity (e.g., a server or the network
core).
Figure 1: Overview of the problem formulation and solving method
($\hat{\textbf{y}}^{(t)}$: A set of action values at time $t$; $\pi$: The
policy of selecting an action given a state of the vehicle)
## II System Model: 3GPP NR-V2X Mode 4
This paper postulates a completely distributed connection type of the network.
As such, the model naturally applies to C-V2X mode 4, where the nodes are
connected directly in a distributed manner without going through the network
core. In what follows, we spell out the key technical details defining the PHY
and MAC layers of the 3GPP NR-V2X.
#### II-1 Sidelink
The 3GPP introduced the sidelink in Release 12 as the third option after
downlink and uplink, mainly to support device-to-device communications. When
the standardization organization introduced LTE-V2X in Release 14, the
sidelink started to form a vital technical basis for supporting both basic
safety and advanced use cases for ITS.
While being backward-compatible with LTE-V2X, NR-V2X features some key
technical enhancements. One example is the waveform type. Enhanced from
LTE-V2X, which uses single-carrier frequency-division multiple access
(SC-FDMA), the NR-V2X sidelink uses the cyclic-prefix orthogonal frequency
division multiplexing (CP-OFDM) waveform, supporting multiple options for
subcarrier spacings (i.e., 15, 30, 60 and 120 kHz) and modulation schemes
(i.e., quadrature phase shift keying (QPSK), 16-quadrature amplitude
modulation (QAM), 64-QAM, and 256-QAM).
#### II-2 PSFCH
It is significant to note that starting from Release 16, the NR-V2X adopted
feedback functions via the physical sidelink feedback channel (PSFCH) for
unicast and groupcast [22].
The PSFCH carries HARQ feedback over the sidelink from a recipient (Rx)
vehicle of a message sent over the physical sidelink shared channel (PSSCH).
Sidelink HARQ feedback may take two particular forms: (i) conventional
acknowledgement (ACK)/negative acknowledgement (NACK); or (ii) NACK-only,
i.e., nothing is transmitted in case of successful decoding. (See Section
6.2.4 of [22].)
We reiterate the significance of the existence of such a feedback
functionality, since this paper proposes an RL framework, which essentially
necessitates feedback (i.e., a reward) as the result of an action.
#### II-3 SPS
C-V2X mode 4 communication relies on a distributed resource allocation scheme,
namely sensing-based semi-persistent scheduling (SPS) [23], which schedules
radio resources in a standalone fashion at a vehicle. Owing to the usually
periodic characteristic of the traffic, it has been found effective to sense
congestion on a resource and estimate the future congestion on that resource
[24]. Specifically, this estimation forms the basis of how the resource is
booked.
In that way, the SPS minimizes the chance of “double booking” between
transmitters that are using overlapping resources. To elaborate on the
technical details, a vehicle reserves certain resource blocks (RBs) for a
random number of consecutive packets. This number depends on the number of
packets transmitted per second, or inversely, on the packet transmission
interval. As such, via a sidelink control information (SCI) message, each
vehicle sends its packet transmission interval and its reselection counter.
Neighboring vehicles use this information to estimate which RBs are free when
making their own reservation, so as to reduce packet collisions.
Consequently, vehicles autonomously select their resources without assistance
from the cellular infrastructure.
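As a rough illustration of this reservation logic, the sketch below performs one simplified SPS step. It is deliberately not the full 3GPP procedure: the keep-probability and the counter range are assumed values, and sensing is reduced to a set of resource indices reported as reserved by neighbors:

```python
import random

def sps_select(n_resources, sensed_reserved, current=None, counter=0,
               keep_prob=0.8):
    """One simplified SPS step (illustrative only): while the reselection
    counter is positive, keep the current resource; on expiry, either keep
    it with probability keep_prob or pick uniformly among resources that
    were not sensed as reserved by neighboring vehicles."""
    if counter > 0 and current is not None:
        return current, counter - 1
    if current is not None and random.random() < keep_prob:
        return current, random.randint(5, 15)   # fresh reselection counter
    free = [r for r in range(n_resources) if r not in sensed_reserved]
    return random.choice(free or list(range(n_resources))), random.randint(5, 15)

# A vehicle with no reservation avoids the RBs its neighbors announced.
resource, counter = sps_select(10, sensed_reserved={0, 1, 2})
```

Calling the function once per transmission opportunity mimics how the counter-driven persistence reduces the chance of two vehicles "double booking" the same RBs.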
#### II-4 Receiver
After receiving a signal, the first step that the Rx performs is
synchronization. Then, the synchronized signals are passed to CP-OFDM
demodulation, followed by extraction of the DMRSs for channel estimation.
(Notice that we do not assume perfect channel estimation, for the sake of
realistic modeling.) Next, the data on the desired transport blocks (TBs) are
extracted. The resource allocation information is obtained from the
corresponding SCI messages, which always use SCI format 1 in V2X [22]. Then
equalization follows, where we use a minimum mean square error (MMSE)
equalizer. We do not formulate the channel and the MMSE equalizer explicitly,
since the typical notations (i.e., $X$ for a transmitted signal, $H$ for a
channel, $N$ for the complex white Gaussian noise with zero mean, and $Y$ for
the received signal) conflict with other notations used in this paper (i.e.,
$x$ and $y$ for the input and output of the proposed RL loop).
#### II-5 Performance Evaluation Metrics
Note that this paper relies on the block error rate (BLER) and the normalized
throughput as the metrics measuring the performance of a sidelink in NR-V2X.
First, the full definition of the BLER can be found in one of the latest 3GPP
technical specifications: the ratio of the number of erroneous blocks received
to the total number of blocks sent. An erroneous block is defined as a TB
whose cyclic redundancy check (CRC) is wrong. (See Section F.6.1.1 of TS
34.121 [25].)
Meanwhile, the normalized throughput is defined as
$\displaystyle R=\frac{\mathsf{N}_{\text{bits, tx}}}{\mathsf{N}_{\text{bits, sf}}\times\lfloor\mathsf{N}_{\text{sfs}}/\mathsf{N}_{\text{sfs, harq}}\rfloor}$ (1)
where $\mathsf{N}_{\text{bits, tx}}$ gives the number of transmitted bits;
$\mathsf{N}_{\text{bits, sf}}$ is the maximum number of bits that can be
contained in a subframe; $\mathsf{N}_{\text{sfs}}$ gives the number of
subframes that have been observed in a simulation; $\mathsf{N}_{\text{sfs,
harq}}$ indicates the number of subframes between consecutive HARQ processes.
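Eq. (1) can be evaluated directly; the numbers below are arbitrary illustrative values, not results from the paper's simulations:

```python
import math

def normalized_throughput(n_bits_tx, n_bits_sf, n_sfs, n_sfs_harq):
    """R = N_bits,tx / (N_bits,sf * floor(N_sfs / N_sfs,harq)), as in Eq. (1)."""
    return n_bits_tx / (n_bits_sf * math.floor(n_sfs / n_sfs_harq))

# Example: 8000 bits delivered over 40 observed subframes, a HARQ process
# every 4 subframes, and at most 1000 bits per subframe.
R = normalized_throughput(8000, 1000, 40, 4)  # 0.8
```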
## III Proposed Learning Mechanism for Optimal Sidelink Resource Allocation
in C-V2X Mode 4
We remind the reader that the ultimate goal of our proposition is to design a
resource allocation mechanism for NR-V2X mode 4 in which each vehicle
optimizes its operation according to its environmental state. We also remind
the reader that this paper proposes to define the state of a vehicle as the
level of danger measured at the vehicle, as an effort to design a mechanism
optimizing the operation of a vehicle adaptively to the danger that the
vehicle poses. This section presents details on how we quantify the danger of
a vehicle, which will form the basis for the learning mechanism performed
thereafter.
TABLE II: An example of driver-related crash-causing factors [26] for constructing the relationship between $\mathbf{x}$ and $\mathbf{y}$

$\mathbf{x}$ = Input value | Driver distraction type | $\mathbf{y}$ = TBS index
---|---|---
$x_{1}$ | Driving too fast for conditions or in excess of posted limit | 1
$x_{2}$ | Under the influence of alcohol, drugs, or medication | 2
$x_{3}$ | Failure to keep in proper lane | 2
$x_{4}$ | Failure to yield right of way | 2
$x_{5}$ | Distracted (e.g., phone, talking, eating, etc.) | 2
$x_{6}$ | Overcorrecting / oversteering | 2
$x_{7}$ | Failure to obey traffic signs, signals, or officers | 3
$x_{8}$ | Erratic, reckless, careless, or negligent operation of vehicle | 3
$x_{9}$ | Swerving due to wind, slippery surface, object, etc. | 3
$x_{10}$ | Vision obscured due to rain, snow, glare, lights, etc. | 4
$x_{11}$ | Driving on wrong way / side of road | 4
$x_{12}$ | Drowsy, asleep, fatigued, ill, or blackout | 4
$x_{13}$ | Improper turn | 4
Figure 2: Regression of the driver's behavior type to TBS using a 12th-order
polynomial
$\mathbb{P}\left[\text{Crash}\right]_{x\in\mathbf{x}}=b_{1}x^{12}+b_{2}x^{11}+\cdots+b_{13}$
as an example of mapping $\mathbf{x}$ to $\mathbf{y}$ for the proposed RL
mechanism
### III-A Input Dimension Reduction
Let the environment around a vehicle at time $t$ be denoted by
$\mathbf{\Omega}\in\mathbb{R}^{2}$, which is composed of features defining the
risk of a vehicle such as weather, vehicle speed, etc. As an important means
to circumvent the curse of dimensionality, we map the large-volume space
$\mathbf{\Omega}$ to a smaller space of a selected representative feature,
i.e., an $N$-by-1 vector $\mathbf{x}\mathrel{\mathop{\mathchar
58\relax}}=\left[x_{1}\hskip 5.05942ptx_{2}\hskip 5.05942pt\cdots\hskip
5.05942ptx_{N}\right]$ where each $x_{i}$ gives a value for the feature.
Notice that the significance lies in what feature to extract as a
representative of the crash risk. In what follows, we elaborate the technical
details on quantification of the environment, focusing on seeking answers to
two key questions: Q1: From what dataset do we use to map
$\mathbf{\Omega}\rightarrow\mathbf{x}$?; and Q2: Based on what rationale can
we identify the feature $\mathbf{x}$?
Regarding Q1, we propose to draw from a nationwide dataset provided by the
U.S. National Highway Traffic Safety Administration (NHTSA) on fatal injuries
suffered in motor vehicle traffic crashes, known as the Fatality Analysis
Reporting System (FARS) [26]. Let the entire FARS dataset be regarded as
$\mathbf{\Omega}$. From $\mathbf{\Omega}$, we extract the most dominant
feature $\mathbf{x}$, which serves as an estimated input with a reduced
dimension.
Proceeding to Q2, we identify the types of the driver's dangerous behavior as
the key factor in defining the crash risk of a vehicle. More specifically,
referring to the FARS dataset, we identify the key crash-causing driver
behavior types in order to calculate
$\mathbb{P}[\text{crash}\,|\,\mathbf{x}]$. As shown in Fig. 2, this
probability provides the criteria on which a C-V2X resource allocation
mechanism is predicated. To elaborate, a smaller TBS index is assigned to a
highly crash-causing driver behavior type, which yields a higher probability
of successful message delivery and thus a higher chance of propagating the
message to more vehicles in the network. This way, the air interface can be
filled with more urgent messages with a higher chance. It is also important to
notice that the distribution shown in Fig. 2 will be used as an initial
factory setting for a vehicle; it is then updated over numerous drives so that
the distribution is customized to the driver's behavioral characteristics
while driving.
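The mapping just described can be sketched as follows. The crash probabilities and TBS thresholds below are illustrative assumptions (the actual values come from the FARS statistics behind Fig. 2); only the structure, a 12th-order polynomial over 13 behavior types followed by a threshold-based TBS index assignment, mirrors the text.

```python
import numpy as np

# Assumed crash probabilities for behavior types x_1..x_13 (decreasing
# statistical gravity, as ordered in Table II; values are illustrative)
x = np.arange(1, 14)
p_crash = np.array([0.22, 0.18, 0.15, 0.13, 0.11, 0.09, 0.08,
                    0.07, 0.06, 0.05, 0.04, 0.03, 0.02])

# 12th-order polynomial through the 13 points, as in Fig. 2; the
# scaled-domain fit keeps the Vandermonde system well conditioned
poly = np.polynomial.Polynomial.fit(x, p_crash, deg=12)

def tbs_index(behavior_type, thresholds=(0.15, 0.10, 0.05)):
    """Map a behavior type to a TBS index: riskier behavior -> smaller
    index (smaller payload, higher delivery chance). The thresholds are
    hypothetical, not taken from the paper."""
    p = float(poly(behavior_type))
    for idx, th in enumerate(thresholds, start=1):
        if p >= th:
            return idx
    return len(thresholds) + 1
```

Under these assumed values, the riskiest behavior type maps to TBS index 1 and the least risky to index 4, consistent with the spirit of Table II.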
### III-B Problem Formulation
Now, we formulate a contextual MAB between the context vector $\mathbf{x}$ and
a vehicle's action $\mathbf{y}\,|\,\mathbf{x}$. That is, we pose the problem
of finding an optimal policy, i.e.,
$\hat{\mathbf{y}}^{(t)}=\pi\left(\mathbf{x}\right)$, where
$\hat{\mathbf{y}}^{(t)}$ denotes the action selected by the policy $\pi$ at
time $t$.
Given the relationship shown in Fig. 1, consider a function $f$ mapping the
original environmental space $\mathbf{\Omega}$ to the action space
$\mathbf{y}$. The policy $\pi$ is then an estimate of $f$, owing to the
dimension reduction $\mathbf{\Omega}\rightarrow\mathbf{x}$. The key challenge
is that the selected feature $\mathbf{x}$ is continually updated over time
$t$. To address this, we narrow our focus to establishing an RL mechanism that
autonomously updates the policy $\pi(\mathbf{x})$ based on the time-varying
$\mathbf{x}$.
Henceforth, we translate the proposed environment-adaptive C-V2X resource
allocation problem into finding an optimal policy that selects an optimal
action given a context $\mathbf{x}$ at time $t$. We formulate this problem as
a variant of the 0-1 knapsack problem (KP) [27], which aims to maximize the
reward while keeping the cost under a certain level. Let the context at time
$t$ be $\mathbf{x}^{(t)}=[x_{1}^{(t)}\;\cdots\;x_{N}^{(t)}]\in\mathbb{R}^{1\times N}$,
where $x_{i}^{(t)}$ gives the $i$th value of the feature $\mathbf{x}$. As
illustrated in Fig. 1, we denote by $\mathbf{y}\in\mathbb{R}^{1\times M}$ the
vector of possible action values. We keep the problem a finite-horizon
decision problem, which means the optimal $\pi$ can be found within a finite
number of time epochs. Modifying the KP accordingly, we formulate the process
of predicting the optimal $\pi^{\ast}$ as
$\displaystyle\left(\mathbf{y}^{(t)}\right)^{\ast}:=\pi^{\ast}\left(\mathbf{x}^{(t)}\right)=\operatorname*{argmax}_{y^{(t)}\in\mathbf{y}^{(t)}}\;\sum_{k=1}^{K}r\left(y^{(t)}\,|\,x^{(t)}\right)$
$\displaystyle\text{s.t.}\quad\sum_{y^{(t)}\in\mathbf{y}^{(t)}}c\left(y^{(t)}\,|\,x^{(t)}\right)\leq C$ (2)
where $K$ indicates the number of arms, i.e., the number of TBS options;
$c(\cdot)$ denotes the cost; and $C$ gives the maximum acceptable cost for
taking action $y^{(t)}$ in context $x^{(t)}$.
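Under a simplified per-action reading of the constraint in Eq. (2), the selection reduces to a budgeted argmax. The `reward` and `cost` callables stand in for $r(\cdot\,|\,x)$ and $c(\cdot\,|\,x)$; the toy reward below is an assumption for illustration only.

```python
def select_action(actions, reward, cost, budget):
    """Pick the feasible action maximizing reward subject to cost <= budget
    (a simplified per-action sketch of the KP-style constraint in Eq. (2))."""
    feasible = [y for y in actions if cost(y) <= budget]
    if not feasible:
        raise ValueError("no action satisfies the cost budget")
    return max(feasible, key=reward)

# Toy example: TBS values as actions, cost = payload size, and a reward
# that (for illustration) favors the largest payload within the budget
tbs_options = [152, 328, 712, 1032]
best = select_action(tbs_options, reward=lambda y: y,
                     cost=lambda y: y, budget=800)
```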
As an important reminder, TBS represents the action space $\mathbf{y}$ in this
paper, which is rationalized as follows. Numerous factors determine the
performance of a C-V2X system, including TBS, modulation and coding scheme
(MCS), DMRS density, waveform, OFDM numerology, etc. (For instance, it is
critical for OFDM to operate with an adequate set of parameters such as
subcarrier spacing, number of slots per subframe, and slot length [28].) We
choose TBS because it provides the most plausible lever: the payload size is
controlled according to the context related to the crash risk. In other words,
a vehicle at a higher crash risk due to a dangerously behaving driver
transmits a smaller message, so it can be delivered with a higher chance of
success. See Section IV-A for further details on our selection of TBS as
$\mathbf{y}$ for the proposed learning mechanism.
1 %— Initial factory setting —%
2 $\mathbf{x}^{(0)}\leftarrow\mathbf{x}_{\text{ini},N\times 1}$;
3 $\mathbf{y}^{(0)}\leftarrow\mathbf{0}_{M\times 1}$;
4 $r^{(0)}\leftarrow 0$;
5 for _t = 1, $\cdots$, $\infty$_ do
6   %— Input vector update —%
7   if _Dangerous driver behavior detected_ then
8     $\mathbf{x}^{(t)}\leftarrow\mathbf{x}_{N\times 1}^{(t)}$;
9   end if
10   %— V2X for unicast or groupcast —%
11   if _Received a msg to send from upper layer_ then
12     %— Thompson sampling —%
13     Sample $\hat{\theta}_{k}^{(t)}\sim\text{Beta}\left(\alpha_{k}^{(t)},\beta_{k}^{(t)}\right)$ for $k=1,\cdots,M$;
14     Select arm $\hat{k}^{(t)}\leftarrow\operatorname*{argmax}_{k}\hat{\theta}_{k}^{(t)}$;
15     Take action $\hat{y}^{(t)}\leftarrow y|_{\hat{k}^{(t)}}$;
16     % Observe reward
17     if _Correct TBS selection_ then $r^{(t)}\leftarrow 1$;
18     else
19       $r^{(t)}\leftarrow 0$;
20     end if
21     % Update Beta distribution
22     $\left(\alpha_{k}^{(t)},\beta_{k}^{(t)}\right)\leftarrow\left(\alpha_{k}^{(t-1)}+r^{(t)},\;\beta_{k}^{(t-1)}+\left(1-r^{(t)}\right)\right)$;
23   end if
24 end for
Algorithm 1 Proposed RL-based data size optimization algorithm at a vehicle
for 5G NR-V2X mode 4 sidelink unicast and groupcast
### III-C Problem Solving Algorithm
Algorithm 1 presents pseudocode for the proposed mechanism. Recall that the
algorithm aims to learn an optimal TBS for a sidelink transmission (i.e.,
unicast or groupcast) in an NR-V2X mode 4 network.
Lines 1-4 indicate the initial setting of key variables. While the
initialization of $\mathbf{y}$ and $r$ is straightforward, that of
$\mathbf{x}$ warrants further discussion. Let $\mathbf{x}_{\text{ini}}$ denote
the initial distribution of $\mathbf{x}_{i}$, given to every vehicle as a
factory setting. We recall that such a factory setting does not come out of
the blue: it can be founded on a nationwide consensus compiled by a U.S.
federal agency [26], as discussed for Fig. 2. By $w_{j}$, we denote the weight
of the $j$th level of the driver's dangerous behavior, which forms the Y-axis
of Fig. 2. As such, $\mathbf{x}_{\text{ini}}$ provides an initial mapping
between $x_{j}$ and its weight $w_{j}$.
Through Lines 6-9, a vehicle updates this distribution to reflect its driver's
behavior over time, so the weights $w_{j}$ come to be distributed differently
according to (i) the time instant $t$ and (ii) the vehicle index $i$.
Specifically, the input vector $x_{j}$ is updated when the driver behaves
differently from the initial setting $\mathbf{x}_{\text{ini}}$.
Lines 10-23 handle the event where the vehicle receives a message to send from
the upper layer. We remind the reader that this paper postulates a unicast or
groupcast, since these are the transmission types that provide feedback per
the latest 3GPP NR-V2X standard. (See Section 6.2.4 of [22].)
To break this down: through Lines 12-14, the algorithm runs TS, wherein the
vehicle (i) samples from the current Beta distribution and (ii) selects a TBS
value according to the samples. The algorithm proceeds to Lines 15-20, in
which the vehicle observes the reward of the action. As written in Lines
21-23, the vehicle then updates the Beta distribution based on the success or
failure of the latest action. It is important to note that the reward is
defined by whether the agent has selected the correct arm, i.e., the correct
TBS.
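The Thompson-sampling core of Algorithm 1 (sample the Beta posteriors, pick the largest sample, update with the binary reward) can be sketched as a Beta-Bernoulli bandit. The reward oracle below, which rewards only one "correct" arm, is an assumption mirroring the simulation setup of Section IV-B.

```python
import random

class ThompsonTBS:
    """Beta-Bernoulli Thompson sampling over M TBS arms (Algorithm 1 sketch)."""

    def __init__(self, n_arms):
        self.alpha = [1.0] * n_arms   # Beta successes + 1 (uniform prior)
        self.beta = [1.0] * n_arms    # Beta failures + 1

    def select_arm(self):
        # Sample theta_k ~ Beta(alpha_k, beta_k) and take the argmax
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, reward):
        # Binary reward updates the selected arm's Beta posterior
        self.alpha[arm] += reward
        self.beta[arm] += 1 - reward

random.seed(0)
agent = ThompsonTBS(4)        # 4 arms = 4 TBS indices
correct_arm = 0               # assume TBS index 1 (arm 0) is optimal
for t in range(200):
    k = agent.select_arm()
    agent.update(k, 1 if k == correct_arm else 0)
```

After a short exploration phase, the posterior concentrates on the correct arm: only that arm ever receives reward 1, so only its $\alpha$ grows while the other arms accumulate failures.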
Figure 3: Performance of SL-SCH in NR-V2X mode 4 in terms of BLER and
normalized throughput. (a) BLER with NPRB = 6; (b) BLER with NPRB = 20; (c)
normalized throughput with NPRB = 6; (d) normalized throughput with NPRB = 20.
## IV Results and Discussions
The baseline configuration is taken from the “Reference measurement channel
for transmitter characteristics” defined by Table A.8.3-1 of [29]. Table III
summarizes the parameters. Notice that, to simulate realistic V2X
transmissions, multiple hybrid automatic repeat request (HARQ) processes and
retransmissions have been introduced in this simulation.
### IV-A Sidelink Performance according to TBS
We start by corroborating that the TBS is a plausible factor for
differentiating the performance of an NR-V2X network. Figs. 3(a) through 3(d)
show the BLER and the normalized throughput versus SNR. The figures also
demonstrate the performance being differentiated according to NPRB. Notice
that we postulate four different options for the TBS (see Section IV-C).
However, we stress that the framework is extensible: any other TBS value
defined in Table 7.1.7.2.1-1 of TS 36.213 [23] is eligible in the output space
$\mathbf{y}$.
Figure 4: Example run of the proposed RL mechanism under the assumption
$y^{\ast}=\pi\left(x_{1}\right)=1$. (a) Selection of TBS index: A/B testing;
(b) selection of TBS index: TS (proposed); (c) regret.

TABLE III: Parameters [23][29]

Parameter | Value
---|---
System | 3GPP Release 16
Bandwidth | 10 MHz
Duplex mode | FDD
CP mode | Normal
Modulation | QPSK
# Rx antennas | 2
Delay profile | Extended Vehicular A model (EVA) [30]
Doppler frequency | 500 Hz
Fading | Rayleigh
Equalization | MMSE
### IV-B Convergence and Accuracy of the Proposed RL Mechanism
Fig. 4 displays the average length of time taken to select the optimal TBS
index for NR-V2X, as a means to evaluate the time complexity of the proposed
RL scheme. Since we model the MAB problem as a Bernoulli bandit, we evaluate
the convergence performance based on TS. Among algorithms for solving a MAB
problem, TS has been shown to outperform alternatives such as
$\epsilon$-greedy and the upper confidence bound (UCB) [31]. As an example, we
set TBS index 1 as the successful selection among the four TBS values
presented in Table II. Comparing Figs. 4(a) and 4(b) substantiates the
convergence of the proposed TS-based mechanism. This result is from 30 rounds
of simulation in which a vehicle learns on 4 arms representing the 4 TBS
indices. One can observe from Fig. 4(b) that the proposed algorithm spends the
first 7 runs “exploring” the four arms as a means of training. The convergence
gap between A/B testing and the proposed mechanism, shown in Figs. 4(a) and
4(b), leads to the difference in regret shown in Fig. 4(c). The regret
measured at time $t$ with arm $k$ selected is denoted by $\rho$ and formally
written as
$\rho^{(t)}=\left|\left(y_{k}^{(t)}\right)^{\ast}-\hat{y}_{k}^{(t)}\right|$.
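The regret metric can be computed directly; comparing it against uniform A/B testing, which keeps picking arms at random, reproduces the qualitative gap of Fig. 4(c). Treating the arm values as the TBS indices with optimal index 1 is an assumption consistent with $y^{\ast}=\pi(x_{1})=1$.

```python
import random

def regret(y_opt, y_hat):
    """rho^(t) = |y* - y_hat| for the arm selected at time t."""
    return abs(y_opt - y_hat)

# Uniform A/B testing over TBS indices {1..4} with optimal index 1:
# its per-step regret never shrinks, unlike a converging TS policy
random.seed(1)
ab_cumulative = sum(regret(1, random.choice([1, 2, 3, 4]))
                    for _ in range(100))
```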
Figure 5: NR-V2X performance with the proposed mechanism (NPRB = {6, 20}, SNR
= -2 dB). (a) TBS mapping versus $x_{i}$; (b) resulting BLER versus $x_{i}$;
(c) resulting throughput versus $x_{i}$.
### IV-C NR-V2X Performance with the Proposed Mechanism
Now, we evaluate the performance of an NR-V2X network with the proposed
mechanism applied. We recall the two metrics for measuring performance,
namely, BLER and normalized throughput. We also recall from Table III that our
focus is the SL-SCH for a groupcast or a unicast in NR-V2X mode 4.
Fig. 5(a) displays the four possible TBS options for each of NPRB = {6, 20}.
As an example, we selected {152, 328, 712, 1032} for NPRB = 6 and {536, 1416,
2472, 3426} for NPRB = 20 from Table 7.1.7.2.1-1 of [23]. However, we
reiterate that any other TBS value defined in the reference [23] is eligible
in the output space $\mathbf{y}$.
Figs. 5(b) and 5(c) show the resulting performance for each $x_{i}$ in terms
of BLER and normalized throughput, respectively. The results jointly suggest
that the proposed mechanism works as intended: a higher crash-causing factor
yields a lower BLER. Recall from Table II that a smaller index $i$ indicates a
higher statistical gravity in causing a crash. Figs. 5(b) and 5(c) show that
our mechanism leads an NR-V2X network to a state where $x$ with a smaller
index $i$ achieves a lower BLER and a higher throughput. This way, the network
can be managed such that a vehicle driven by a dangerously behaving driver
takes the sidelink resource with a higher chance, which, in turn, elevates the
chance of the air interface being filled with more urgent messages.
## V Conclusions
Can we adapt multiple access for C-V2X according to the dynamically changing
environment around a vehicle? Can a vehicle measure the crash risk around
itself without support from infrastructure? This paper laid out answers to
these questions. Technically, this paper presented a comprehensive algorithmic
framework featuring: (i) quantification of the driver's dangerous behaviors as
the crash-risk indicator of a vehicle; (ii) a contextual MAB algorithm for
selecting an optimal TBS for SL-SCH in NR-V2X mode 4, adaptive to the driver's
behavior; and (iii) the algorithm's ability to operate at a vehicle
autonomously, without any support from a centralized entity. Indeed, our
simulations found that the proposed mechanism was able to find an optimal TBS.
This resulted in more reliable performance (in terms of BLER and normalized
throughput) for a more dangerously driven vehicle.
We identify as future work the “relaxation” of the regression between the
driver's behavior and its crash-causing statistics. While this paper
characterized it as a 12th-order polynomial kernel, it can be relaxed to a
multi-kernel framework to accommodate a wider variety of driver behaviors more
precisely.
## References
* [1] S. Kim and C. Dietrich, “Coexistence of outdoor Wi-Fi and radar at 3.5 GHz,” IEEE Wireless Commun. Lett., vol. 6, iss. 4, Aug. 2017.
* [2] S. Kim and M. Bennis, “Spatiotemporal analysis on broadcast performance of DSRC with external interference in 5.9 GHz band,” arXiv:1912.02537, Dec. 2019.
* [3] U.S. FCC, “In the matter of use of the 5.850-5.925 GHz,” FCC 20-164A1, ET Docket No. 19-138, Nov. 2020.
* [4] S. Kim and C. Dietrich, “A novel method for evaluation of coexistence between DSRC and Wi-Fi at 5.9 GHz,” in Proc. IEEE Globecom 2018.
* [5] S. Chen, J. Hu, Y. Shi, L. Zhao, and W. Li, “A vision of C-V2X: Technologies, field testing and challenges with Chinese development,” arXiv:2002.08736, Feb. 2020.
* [6] S. Kim, E. Visotsky, P. Moorut, K. Bechta, A. Ghosh, and C. Dietrich, “Coexistence of 5G with the incumbents in the 28 and 70 GHz bands,” IEEE J. Sel. Areas Commun., vol. 35, iss. 8, Aug. 2017.
* [7] S. Kim, “Impacts of mobility on performance of blockchain in VANET,” IEEE Access, vol. 7, May 2019.
* [8] S. Kim and T. Dessalgn, “Mitigation of civilian-to-military interference in DSRC for urban operations,” in Proc. IEEE MILCOM 2019.
* [9] L. Liang, H. Ye, and G. Y. Li, “Towards intelligent vehicular networks: A machine learning framework,” arXiv:1804.00338v1, Apr. 2018.
* [10] H. Ye, G. Y. Li, and B.-H. Juang, “Deep reinforcement learning based resource allocation for V2V communications,” IEEE Trans. Veh. Technol., vol. 68, no. 4, Apr. 2019.
* [11] B Ko, S Ryu, BB Park, SH Son, “Speed harmonisation and merge control using connected automated vehicles on a highway lane closure: a reinforcement learning approach,” IET Intell. Transp. Syst., vol. 14, iss. 8, May 2020.
* [12] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, MIT Press, Oct. 1998.
* [13] J. Liu, Z. Huang, X. Xu, X. Zhang, S. Sun, and D. Li, “Multi-kernel online reinforcement learning for path tracking control of intelligent vehicles,” IEEE Trans. Syst., Man, Cybern., Syst., Early access, Feb. 2020.
* [14] S. Kim and A. S. Ibrahim, “Byzantine-fault-tolerant consensus via reinforcement learning for permissioned blockchain implemented in a V2X network,” arXiv:2007.13957, Jul. 2020.
* [15] M. N. Nguyen, S. R. Pandey, T. N. Dang, E. N. Huh, C. S. Hong, N. H. Tran, and W. Saad, “Self-organizing democratized learning: towards large-scale distributed learning systems,” arXiv:2007.03278, Jul. 2020.
* [16] A. Slivkins, “Introduction to multi-armed bandits,” arXiv:1904.07272v5, Sep. 2019.
* [17] X. Zhang, M. Peng, S. Yan, and Y. Sun, “Deep-reinforcement-learning-based mode selection and resource allocation for cellular V2X communications,” IEEE Internet Things J., vol. 7, no. 7, Jul. 2020.
* [18] T. Dessalgn and S. Kim, “Danger aware vehicular networking,” in Proc. IEEE SoutheastCon 2019.
* [19] S. Kim and B. J. Kim, “Reinforcement learning for accident risk-adaptive V2X networking,” arXiv:2004.02379, Apr. 2020.
* [20] S. Kim and B. J. Kim, “Prioritization of basic safety message in DSRC based on distance to danger,” arXiv:2003.09724, Mar. 2020.
* [21] Z. He, J. Hu, B. B. Park, and M. W. Levin, “Vehicle sensor data-based transportation research: Modeling, analysis, and management,” J. Intell. Transp. Syst., vol. 23, no. 2, Mar. 2019.
* [22] 3GPP, “LTE; 5G; Overall description of radio access network (RAN) aspects for vehicle-to-everything (V2X) based on LTE and NR,” 3GPP TR 37.985 V16.0.0 Release 16, Jul. 2020.
* [23] 3GPP, “LTE; Evolved Universal Terrestrial Radio Access (E-UTRA); Physical layer procedures,” 3GPP TS 36.213 V16.3.0 Release 16, Nov. 2020.
* [24] J. Lee, T. Kim, S. Han, S. Kim, Y. Han, “An analysis of sensing scheme using energy detector for cognitive radio networks,” in Proc. IEEE PIMRC 2008.
* [25] 3GPP, “Universal mobile telecommunications system (UMTS); User equipment (UE) conformance specification; Radio transmission and reception (FDD); Part 1: conformance specification,” 3GPP TS 34.121-1 V16.2.0 Release 16, Nov. 2020.
* [26] Website of the Fatality Analysis Reporting System (FARS) by the United States National Highway Traffic Safety Administration, [Online]. Available: https://www-fars.nhtsa.dot.gov/People/PeopleDrivers.aspx
* [27] K. W. Ross and D. H. K. Tsang, “The stochastic knapsack problem,” IEEE Trans. Commun., vol. 37, no. 7, Jul. 1989.
* [28] S. Kim, J. Choi, and C. Dietrich, “PSUN: An OFDM - pulsed radar coexistence technique with application to 3.5 GHz LTE,” Hindawi Mobile Inform. Syst. J., Sep. 2016.
* [29] 3GPP, “LTE; Evolved Universal Terrestrial Radio Access (E-UTRA); User Equipment (UE) radio transmission and reception,” 3GPP TS 36.101 V16.7.0 Release 16, Dec. 2020.
* [30] 3GPP, “LTE; Evolved Universal Terrestrial Radio Access (E-UTRA); Base Station (BS) radio transmission and reception,” 3GPP TS 36.104 V16.7.0 Release 16, Nov. 2020.
* [31] O. Chapelle and L. Li, “An empirical evaluation of thompson sampling,” Advances in Neural Inform. Process. Syst., 2011.